The interpreter engine for the core JavaScript language, independent of the browser's object model. File ONLY core JavaScript language bugs in this category. For bugs involving browser objects such as "window" and "document", use the "DOM" component. For bugs involving calls between JavaScript and C++, use the "XPConnect" component.

See bug 1341902 comment 2. This is especially bad because nowadays we have to look up a separate unique ID for everything we put in these sets.
We should fix both of them to use a Vector or an InlineSet. Using a GCVector for the JSON one wins a few ms on Kraken stringify-tinderbox. What's nice about a Vector is that the CycleDetector destructor can just pop the last value.
Using an InlineSet instead of a Vector could be faster in some pathological cases (say, more than 20 levels deep), but it would also make the common case a bit slower and the code more complicated, because there's no GCInlineSet at the moment. Since we'll probably run out of native stack space first, I'm not sure it's worth worrying about. JSC just uses a vector for the JSON stringify case, FWIW.

Using GCVector is also a nice simplification.
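To make the shape of the change concrete, here's a minimal standalone sketch of the vector-based approach. It uses std::vector and a plain pointer type in place of the real GCVector and JS::Value, and the class name is illustrative, not the actual SpiderMonkey API:

```cpp
#include <vector>

// Stand-in for the object pointers the real detector tracks.
using Obj = const void*;

// RAII cycle detector sketch: linear scan on entry, pop-last on exit.
// (The real AutoCycleDetector would use a GCVector so entries stay rooted.)
class CycleDetector {
    std::vector<Obj>& stack_;
    bool pushed_ = false;

public:
    CycleDetector(std::vector<Obj>& stack, Obj obj) : stack_(stack) {
        // Linear search: fast for the shallow depths seen in practice.
        for (Obj o : stack_) {
            if (o == obj)
                return;  // cycle detected; leave pushed_ false
        }
        stack_.push_back(obj);
        pushed_ = true;
    }

    bool foundCycle() const { return !pushed_; }

    // The destructor just pops the last value -- the simplification
    // mentioned above; no hash lookup or rehashing needed.
    ~CycleDetector() {
        if (pushed_)
            stack_.pop_back();
    }
};
```

The linear scan is O(depth) per entry, which is why this only pays off when real-world nesting depths stay small.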
The join microbenchmark below improves from 274 ms to 112 ms, a much bigger win than I expected. I think we should be really careful about where we use MovableCellHasher: it's convenient, but it has significant performance overhead.
function f() {
    var t = new Date;
    for (var i = 0; i < 1000000; i++) {
        var arr = [1, 2, 3];
        arr.join(",");
    }
    print(new Date - t);
}
f();

Heh. I'm kind of curious what the scaling behavior would be if you did something like
js::AutoCycleDetector::Vector& cycleDetectorVector(void* ptr) {
    return cycleDetectorVectors_[(uintptr_t(ptr) >> 5) & 3].ref();
}
(Make 4 vectors, and choose one based on 2 bits grabbed from the pointer. It's sort of a mini hash in front of a vector.)
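For illustration, a self-contained version of that bucket-selection idea might look like this; the struct and method names are hypothetical, and std::vector stands in for the engine's rooted vector type:

```cpp
#include <cstdint>
#include <vector>

// Sketch of the "mini hash in front of a vector" idea: four vectors,
// with two pointer bits choosing which one a given object lands in.
struct CycleDetectorBuckets {
    std::vector<const void*> buckets[4];

    std::vector<const void*>& bucketFor(const void* ptr) {
        // Skip the low (alignment) bits, then take 2 bits as the index.
        return buckets[(reinterpret_cast<uintptr_t>(ptr) >> 5) & 3];
    }
};
```

Each object deterministically maps to one bucket, so the linear scan for a cycle only has to walk roughly a quarter of the entries.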

Using a set data structure here was deliberate, from the time I added spec-mandated cycle detection back in the day in bug 578273, to avoid pathological behavior in the cases already discussed in comments here. Why are we knowingly adding non-linear behavior, rather than taking the time to implement an InlineSet (GCInlineSet) that will likely have value in the future for other uses?

(In reply to Jeff Walden [:Waldo] (remove +bmo to email) from comment #9)
> Why
> are we knowingly adding non-linear behavior, rather than taking the time to
> implement an InlineSet (GCInlineSet) that will likely have value in the
> future for other uses?
(1) As the numbers here show, this GCInlineSet would need an inline capacity of 200 or so, because even at that depth the vector is *still* faster. That means the InlineSet we currently have can't be used without wasting stack space; we'd need another data structure that combines a Vector and a HashSet.
(2) The pathological cases here are limited by our native stack size (the depth can't get *too* bad before we run out of stack space). I think that's the key point: IMO a 2x slowdown in the worst case is worth it to speed up the 99.9% of sane calls that actually show up in the wild.
(3) An InlineSet would still add some amount of overhead to 99.9% of calls.
A lot of my time is spent fixing performance issues, and we still have some bad performance cliffs, so I have to be smart about where I spend it. At this point I don't think making the depth > 200 cases 1-2x faster is a good use of my/our time.
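For what it's worth, the Vector + HashSet combination from point (1) is straightforward to sketch: scan linearly while the stack is shallow, and lazily build a hash set once it crosses a threshold. The class name, the threshold, and the use of std::unordered_set in place of a GC-aware set are all illustrative assumptions:

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// Hypothetical hybrid: O(depth) linear scan below kThreshold, O(1)
// hash lookups above it. The set is built once, on first overflow.
class HybridCycleSet {
    static constexpr std::size_t kThreshold = 200;
    std::vector<const void*> vec_;
    std::unordered_set<const void*> set_;  // populated lazily

public:
    // Returns false if obj is already present (a cycle), true otherwise.
    bool add(const void* obj) {
        if (set_.empty() && vec_.size() < kThreshold) {
            for (const void* o : vec_) {
                if (o == obj)
                    return false;
            }
            vec_.push_back(obj);
            return true;
        }
        if (set_.empty())
            set_.insert(vec_.begin(), vec_.end());  // one-time build
        if (!set_.insert(obj).second)
            return false;
        vec_.push_back(obj);
        return true;
    }

    // Mirrors the vector's cheap "pop the last value" unwind, keeping
    // the set consistent once it's in use.
    void removeLast() {
        if (!set_.empty())
            set_.erase(vec_.back());
        vec_.pop_back();
    }
};
```

This keeps the common shallow case on the fast vector path while capping the pathological deep case at hash-set cost, at the price of extra code and the one-time set build at the threshold.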