Background:
I just found that the programmer had messed up our canonical URLs and there were two for each page. We decided to use one, but I'm not sure if it's going to be an issue.

So we have a Magento site that has many categories with 100+ products, which end up spanning multiple pages within one category. So we end up getting www.domain.com/category.html?p=2, www.domain.com/category.html?p=3, and so on. What we decided to do was set the canonical URL for all categories to www.domain.com/category.html?limit=all. This parameter shows all the products from that category. It seems like it would be better for spidering.

My main concern is whether Google looks at this as a different URL. We have a lot of links to our categories, but people link to them like www.domain.com/category.html, so would setting the canonical URL to www.domain.com/category.html?limit=all hurt us positioning-wise?

It can be a pretty complex issue, and I can't really give you a definitive answer, but Adam Audette, who tests this kind of thing a lot, recommended the following in October of last year:

Create a View All page (it isn’t necessary to make this the default view)

Link to the View All from category- and product-level URLs. Messaging can be simple, something such as “view all products.”

Here’s where we differ from Google: add a “meta noindex, follow” to the View All and all the pagination URLs. This effectively pulls them from the indices. (Note: We may revisit this strategy and modify it based on the success of rel next/prev and Google’s desire to feature View All URLs in their indices.) Additionally, add rel=”canonical” annotations to these URLs.

Ensure paginated URLs are made unique: URL, page title and meta description. Why? Because this helps differentiate them and send quality signals. Google should then give more weight to the pages (and their links) if not only the content (e.g. the products listed) is unique, but also the structure of the pages.

Add the View All and the paginated URLs to your XML sitemaps to ensure crawling. These can be removed after a period of time.
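To make the last step concrete, here's a sketch of what those sitemap entries could look like, using the example category URL from the question (the domain is a placeholder, and a real sitemap would list your actual categories):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- The View All page -->
  <url>
    <loc>http://www.domain.com/category.html?limit=all</loc>
  </url>
  <!-- The paginated URLs -->
  <url>
    <loc>http://www.domain.com/category.html?p=2</loc>
  </url>
  <url>
    <loc>http://www.domain.com/category.html?p=3</loc>
  </url>
</urlset>
```

One gotcha: if a URL ever combines parameters (e.g. ?p=2&limit=30), the ampersand has to be escaped as &amp;amp; inside the XML, or the sitemap won't validate.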

I think the idea is to put noindex, follow on the view all page and on page 2 onward, and to have them all point to page 1 as the canonical. But yeah, some of it is a little counterintuitive. If you look at the comments, I had a similar question, asking why you'd want to noindex a page after making sure it's completely unique content, but I guess page 2 and on aren't really unique if there's a view all page.
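Under that reading, the head of page 2 would look something like this (the title and description are hypothetical, just to show the uniqueness advice from the article; the canonical target follows my interpretation, so double-check it against your own strategy):

```html
<head>
  <!-- Unique title and meta description for each paginated URL -->
  <title>Category Name - Page 2</title>
  <meta name="description" content="Products 31-60 in Category Name.">
  <!-- Keep the page out of the index, but let its links pass weight -->
  <meta name="robots" content="noindex, follow">
  <!-- Point search engines at page 1 as the canonical -->
  <link rel="canonical" href="http://www.domain.com/category.html">
</head>
```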

It would be nice if Adam would publish an update, since he mentions in the article that rel-next and rel-previous could change all this.
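For anyone unfamiliar with them, rel-next and rel-previous are just link tags in the head that tell Google the pages form a sequence. On page 2 of the example category they would look like this (URLs are placeholders from the question, not a recommendation):

```html
<!-- In the <head> of www.domain.com/category.html?p=2 -->
<link rel="prev" href="http://www.domain.com/category.html">
<link rel="next" href="http://www.domain.com/category.html?p=3">
```

Page 1 would carry only a rel="next", and the last page only a rel="prev".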

If the ?limit=all pages are the ones being returned by search engines, and consequently the pages that searchers are landing on, the question is whether that's a good user experience. If your bounce rates are low and your conversions are high, then this solution is working for you. But I think the article I linked to is suggesting that users would rather land on the first of the paginated pages, with the option of going from there to the view all page or the next page in the sequence. All the meta robots tags and rel=canonical links he recommends adding seem intended to make sure it's page 1 that shows up in the search engine results.

As always, your mileage may vary. It depends on who your audience is and what they're looking for.