This is a restriction we place on code to enforce reasonable resource allocation, and to avoid the need for otherwise unnecessary timeout checking in argument copying loops in the VM. This isn't something that's likely to change any time soon.
The maximum number of arguments you can pass to a function is always going to be physically limited by the size of the stack. We do artificially cap the argument count below this right now, and we could reasonably raise the hard limit to around 2^31 or 2^32, but (1) this would still be an arbitrary limit and (2) the stack size limit would never let you get there anyway.
Stack size is finite, and 0xFFFF seems as good an arbitrary limit as any other would be. :-)
Any patch to change this will have to be careful not to introduce integer overflow. I think we may steal a couple of bits from the argument count in either CodeBlock or Executable, and I believe we mix uint32_t and int32_t in our handling of argument counts.
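To make the cap concrete, here is a small probe (a sketch; the exact cutoff is engine-specific, and engines without a hard cap still eventually fail on stack exhaustion instead):

```javascript
// Probe whether .apply accepts a given argument count.
// JavaScriptCore rejects counts above 0xFFFF with a RangeError;
// other engines allow more but are still bounded by stack size.
function applyAccepts(count) {
  try {
    Math.max.apply(null, new Array(count).fill(0));
    return true;
  } catch (e) {
    if (e instanceof RangeError) return false; // over the limit
    throw e;
  }
}
```

Small counts succeed everywhere; a count in the millions fails in every mainstream engine, just for different reasons (hard cap vs. stack overflow).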
Do you have a specific web compatibility concern here?

(In reply to comment #4)
> The original code sample works in Chrome Version 32.0.1700.107 on Ubuntu. I have 10GB memory if it makes any difference.
This is the WebKit bug tracker, not Chromium’s. The snippet you posted still throws an error in WebKit/JavaScriptCore.

Just noting another use-case (specifically around performance) where this limitation is... unfortunate.
If I have two arrays and I want to merge them together, there are these options:
A = A.concat(B)
vs
A.push.apply(A,B)
The former is obviously more idiomatic, but it also has the unfortunate side effect of creating a new merged array rather than adding onto the existing one. So, if you have a "big" array A, you end up duplicating A and then relying on GC to throw away the previous one.
In those cases, `A.push.apply(A,B)` would be more ideal since it modifies A in place, which avoids both the memory duplication and the GC work.
But now, obviously, the size of B is limited to ~65k items.
That's still sorta OK if A is "big" but B is, relatively speaking, "small". But it is still highly unfortunate that code has to know implementation-dependent limits on such things.
I wonder if it would be possible for an implementation to detect such a push.apply(..) case and handle it more gracefully, to work around the limitation on how many params can be passed. It could see "wow, B is really big, we can't pass it in all at once, but we can rewrite it internally to the rough equivalent of..."
A.push.apply(A,B) -->
for (var len = B.length, s = 0, m; s < len; ) {
    m = Math.min(s + 65000, len);
    A.push.apply(A, B.slice(s, m));
    s = m;
}
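For reference, a self-contained version of that chunking workaround (the 65000 chunk size is just a guess conservatively below JavaScriptCore's 0xFFFF cap; nothing here depends on engine internals):

```javascript
// Append every element of B to A in place, without ever passing
// more than CHUNK arguments to a single .apply call.
var CHUNK = 65000; // conservatively below the 0xFFFF argument cap

function pushAll(A, B) {
  for (var s = 0; s < B.length; s += CHUNK) {
    A.push.apply(A, B.slice(s, s + CHUNK));
  }
  return A;
}
```

This keeps A's identity (no new array is allocated for the result), at the cost of a temporary slice per chunk.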
------
I see this as similar to the restriction on call-stack size when using recursion. If I write a recursive algorithm that should be TCO, but it runs in a browser that doesn't have that capability, it could fail. It's unfortunate that I have to know and guard against such things.
That's why seeing ES6 mandate TCO (they are still doing that, right!?) was so nice, because it signals a time in the future when there's a very valid programming technique which will no longer be susceptible to arbitrary, implementation-dependent limitations.

You can't expect arbitrarily large argument lists in any implementation; generally that's not the purpose of .apply (you could use spread as well, with the same limitation). We also can't detect the call to push in advance: we don't know we're in push until after we've already called it, and by that point we have already copied your argument array onto the stack.