In my web application, I have two models: Domain and URI. A domain has many URIs and a URI belongs to a domain.
A domain can have more than a thousand URIs. That’s why I can’t just put all the URI ids in the domain object and declare the URIs property of a domain as async. In addition, I can’t return all the URIs of a domain in one API call; I must have pagination for this resource.
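To make the requirement concrete, here is a minimal sketch of page-at-a-time fetching instead of loading the whole relationship; the `/domains/:id/uris?page=N` URL shape, the `meta.total_pages` field, and the injected `fetchJson` function are all hypothetical, not part of any real API:

```javascript
// Sketch: fetch one page of a domain's URIs at a time instead of
// materializing the whole hasMany relationship. The URL shape and
// the injected `fetchJson` helper are assumptions for illustration.
async function fetchUriPage(fetchJson, domainId, page, perPage = 100) {
  const url = `/domains/${domainId}/uris?page=${page}&per_page=${perPage}`;
  const body = await fetchJson(url);
  return {
    uris: body.uris,                       // the records on this page
    hasMore: page < body.meta.total_pages, // pagination metadata from the server
  };
}
```

The caller can then load further pages only when the user actually scrolls or pages forward, so a domain with thousands of URIs never forces thousands of records into memory at once.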

I’m pretty surprised to reach the REST adapter limitations with such a trivial REST api.

Moreover, what about garbage collection? If I do implement an adapter that deals with my first problem, will ember-data keep all the loaded records even if they are not used anymore?

This very scenario is why I never use hasMany() relationships in any of my Ember Data models (even if they are async).

hasMany() works great for relationships that contain a small number of records, but once the number of records grows, the hasMany() relationship will most likely cause performance issues with both the database query and the Ember app.

The way I deal with it is to only define the belongsTo() side of the relationship, and then manually query for the hasMany records when needed. That way I can easily add paging when the number of records grows.
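A rough sketch of what I mean: the Uri records carry a `domainId` (the belongsTo side) and the Domain keeps no hasMany array at all; you query for a domain's URIs on demand, with paging. In a real app the filtering happens server-side; the in-memory filter and all names here are just illustrative:

```javascript
// Sketch of the belongsTo-only approach. Each URI record holds a
// domainId (the belongsTo side); the domain has no URI array.
// In practice this query runs on the server; the in-memory filter
// below only illustrates the shape of the call and its result.
function queryUrisForDomain(allUris, domainId, page = 1, perPage = 100) {
  const matches = allUris.filter((uri) => uri.domainId === domainId);
  const start = (page - 1) * perPage;
  return {
    uris: matches.slice(start, start + perPage),
    totalPages: Math.ceil(matches.length / perPage),
  };
}
```

Because paging is an explicit parameter of the query rather than a property of the relationship, you can introduce it the moment a domain's URI count becomes a problem, without touching the model definitions.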

Whilst I’ve used Ember Data (and Ember Model before that) and even submitted PRs to improve both of them, including a recent PR that identified a huge performance issue (it doubled object fetch performance), I am of the view that such data-access libraries all uniformly disappoint.

They introduce huge overheads and limitations and ultimately don’t allow sane relationship modelling. It’s a fight all the way.

I’ve been experimenting with the Flux unidirectional data flow reactor pattern, ajax, and immutable data types, and I don’t need complicated things like Ember Data or models. It’s very fast, does much more with less, and I get proper management of state. Undo/redo is also free. This is all very non-Ember, but it’s a very nice solution nonetheless.
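To show why undo/redo comes for free in this style, here is a minimal sketch of a Flux-style store that keeps a history of immutable state snapshots; all the names (`createStore`, `dispatch`, `undo`) are illustrative, not any particular library's API:

```javascript
// Minimal Flux-style reactor sketch: one store, a pure reducer, and
// a history of immutable state snapshots. Because every state is a
// new value rather than a mutation, undo/redo is just moving a
// cursor through the history. Names here are illustrative only.
function createStore(reducer, initialState) {
  let history = [initialState]; // every past state, newest last
  let cursor = 0;               // index of the current state
  return {
    getState: () => history[cursor],
    dispatch(action) {
      // Dispatching after an undo discards the redo branch,
      // then appends the next snapshot.
      history = history.slice(0, cursor + 1);
      history.push(reducer(history[cursor], action));
      cursor += 1;
    },
    undo() { if (cursor > 0) cursor -= 1; },
    redo() { if (cursor < history.length - 1) cursor += 1; },
  };
}
```

Since the reducer is a pure function and states are never mutated in place, there is no cache invalidation or identity-map bookkeeping of the kind a data library has to do: old states are simply old values.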

It might not be the exact same scenario, but we have hit bottlenecks as well with Ember Data trying to load thousands of models into the store at once. The way we deal with this (it’s hack-ish and works for us since only one client “owns” its records) is that we “manually” (i.e. outside of Ember Data) load the data we need as JSON from the server, but we don’t push it to the store. We have a “customFind” mixin that first checks whether the searched record is in the JSON dataset and loads it from there if possible. If not, it queries the back-end.

So it means we do one big load from the server (which is fine with us) and we load the records into the store lazily. It could be that we misunderstood parts of ED, but it was the only way we could find to mitigate the performance issues.
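For anyone curious, the shape of the idea is roughly this; the dataset format and the `fetchRecord` fallback are hypothetical stand-ins, not our actual mixin:

```javascript
// Sketch of the "customFind" idea: one bulk JSON load up front, then
// lazy per-record lookup that only falls back to the back-end on a
// miss. The dataset shape and `fetchRecord` helper are assumptions.
function makeCustomFind(preloadedDataset, fetchRecord) {
  const byId = new Map(preloadedDataset.map((rec) => [rec.id, rec]));
  return async function customFind(id) {
    if (byId.has(id)) return byId.get(id); // hit: serve from the bulk load
    const record = await fetchRecord(id);  // miss: query the back-end
    byId.set(id, record);                  // cache so we only fetch once
    return record;
  };
}
```

The point is that the thousands of records from the bulk load stay as plain JSON until something actually asks for them, so the store (or whatever sits behind `customFind`) never has to materialize them all at once.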

@paulyoder We only do this for a specific kind of model that tends to trigger a lot of loads at once in our app (several hundred, actually). For all the other models, we rely on ED. So it is still useful for us.

@paulyoder I meant to add: to be totally honest though, we still have issues with ED in a (fairly) big app. My impression is ED works well in fairly simple cases, but when you need to deal with complex relationship graphs, lots of objects, and not-so-consistent back-ends, there are problems (for instance: Am I lost or is Ember-data still problematic?). Or, I should say: WE struggle to make it work (it could well be that it’s just us).
So if we had to do it again, I am not totally sure we would use ED.