High Memory Usage

We have started working with Orchard, and we have some very small websites which still take up a lot of memory. We had to change the App Pool settings so it doesn't time out and recycle, otherwise the server was completely busy compiling and loading the websites.

I have a site with a single image and a single page; it takes 60 MB of RAM for the app pool, and that is the smallest RAM usage we have seen. Another one uses 185 MB.

I was not able to host the sites in a shared hosting environment because of the memory and CPU usage.

- 3 custom themes (two of them are very thin inherited themes for Facebook and mobile versions)

- And of course all Orchard core modules

Now I realise this is a large number of modules - but 700 MB seems like far too much. We've just had the server upgraded to 4 GB so it's coping, but ... is there any kind of profiling I can run to find out if any specific modules are doing anything silly here?

No, there are other profilers on the market; it's just that it's a really good one. A cheaper but more tedious approach is to disable all modules and bring them back one by one, observing the memory footprint at each step.
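That disable-and-observe loop can be driven from the Orchard 1.x command line (`bin\Orchard.exe`) rather than clicking through the admin. The feature name below is just an example, and the `feature enable`/`feature disable` command names are from memory, so run `help commands` on your own install to confirm them:

```
REM From the Orchard web root, open the Orchard command line.
bin\Orchard.exe

REM List available features and their enabled/disabled state.
feature list

REM Disable a suspect feature, recycle the app pool, then watch the
REM w3wp.exe working set in Task Manager or perfmon.
feature disable Orchard.Blogs

REM Bring it back once you've recorded the footprint.
feature enable Orchard.Blogs
```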

Interestingly, after running a while it drops down to around 400 MB of RAM. Unfortunately I can't fork out the £200 *just* for JetBrains' memory profiler, at least not in the near future. So I'm going to get Glimpse running and see if that tells me anything useful ... and then try switching some suspect modules off and see what happens. I did try Microsoft's CLRProfiler, which I'd heard was good enough for memory profiling; unfortunately it crashed the first time I ran it, and the second time it locked up after shutting down the IIS service, rendering my server completely offline :S ... Locally, Orchard doesn't use anything like that amount of memory, so I have to do the profiling on my production server!

Be aware that .NET will take as much memory as is available, and only runs the GC when there is some memory pressure.
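Which GC flavour IIS uses matters here: the server GC favours throughput and collects less eagerly, which is one reason memory can sit high when there's no pressure. On IIS it's typically controlled in Aspnet.config (under `%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\`); a sketch of the relevant part, for reference rather than copy-pasting:

```xml
<!-- Aspnet.config (sketch): server GC trades memory for throughput,
     so high working sets without collection are expected behaviour. -->
<configuration>
  <runtime>
    <gcServer enabled="true" />
    <gcConcurrent enabled="true" />
  </runtime>
</configuration>
```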

I had 600 MB allocated to the app pool, and it was recycling constantly not long after it fired up, because the memory limit was being hit (rather than garbage collecting to conserve memory).

I upped the limit to 1.5 GB. The process stabilises at not much above 600 MB (until I hit a second or third tenant). So it's not simply taking as much as is available: it clearly *needs* 600 MB, and it's not using the full 1.5 GB available.
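For anyone following along, the private-memory recycling limit lives on the app pool in applicationHost.config and is specified in kilobytes, so 1.5 GB is 1572864 KB. Roughly like this ("OrchardPool" is a placeholder name, and the fragment is a sketch, not a full config):

```xml
<!-- applicationHost.config fragment (sketch): recycling limits are in KB;
     1572864 KB = 1.5 GB. Pool name is a placeholder. -->
<applicationPools>
  <add name="OrchardPool">
    <recycling>
      <periodicRestart privateMemory="1572864" />
    </recycling>
  </add>
</applicationPools>
```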

After a while, something is being garbage collected and it drops to 400 MB, even though there was no memory pressure.

So, for some reason my server is running differently to how you think .NET will run?

I installed the 10-day trial of JetBrains dotTrace Memory ... not hugely impressed so far: when I selected "open web page in browser", it exited with a "file not found" error. When I tried without that option, it just sat there saying "connecting" and never actually did anything ...

Well, it works with the "attach to process" option, but data collection is very limited in that mode.

If I try a full ASP.NET profiling, everything locks up again. Clearly something in my server/IIS configuration is making all these profilers fail. The problem is, I can't keep doing this; it's a production server, and regularly stopping/starting/locking
up the whole of IIS will not keep my clients happy!

The "attach to process" profile only shows 65 MB of live data. So why is a single process taking up over 600 MB, and not running the GC even when memory is full? Or is it just that this profiling mode can't see all the data?

Let me just clarify something: before the RAM upgrade, the server was maxing out its 2 GB of RAM (typically around 1.95 GB). If anything was able to garbage collect, it was clearly not happening. All that's running on this server are a few small websites and two Orchard instances, and those instances are responsible for most of the memory usage. If it's true, as Sebastien says, that garbage collection will tend to run when memory is under pressure, then why wasn't it running when 2 GB was nearly full? The server was running terribly; since upgrading to 4 GB it runs very smoothly, and typical RAM usage is between 2 and 3 GB.

We have removed all the unused modules. We have also changed the web.config file, setting the Trust Level to Full and debug mode to false, and changed the App Pool recycling to use 10 MB for virtual RAM and 10 MB for physical RAM. This looks to have resolved the issue. I will have to check this with our team after a day or two. We are also looking into the multi-tenancy option.
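For anyone trying the same thing, the web.config side of those changes looks roughly like this (a sketch of the settings mentioned above, not our exact file; the targetFramework value assumes .NET 4, so adjust it to your install):

```xml
<!-- web.config (sketch): full trust and release compilation,
     as described above. -->
<system.web>
  <trust level="Full" />
  <compilation debug="false" targetFramework="4.0" />
</system.web>
```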

For now the sites we have changed still use more memory than we have allocated, but they drop back down again to a low memory footprint of about 5 MB. That is down from 120 MB plus to 5.5 MB. Not bad. Can live with that.

Well, after the whole day, with two guys here working on it ... no difference. Orchard needs a minimum of 120 MB of dedicated RAM for a very simple site with no content. At first it looked promising, but that was just because the app pool was recycling; if that happens, then the load on the server is too high.

We have considered multi-tenancy, but it sounds like one website with all the other websites being pages within that site. Maybe we are missing something, but that seems like a hack, and we are not interested in that configuration.

Tomorrow we will test a memory profiler and see what we come up with. But for now ... get another hamster!

> We have considered multi tenancy, but it sounds like 1 website with all the other websites being pages within that site. Maybe we are missing something, but that is a hack, we are not interested in that configuration.

Multi-tenancy isn't like that: the tenants are completely segregated shells with their own databases; effectively they *are* separate websites, they just don't consume as much memory as separate Orchard instances do.

The model you describe is actually something I'd like for certain applications, but it doesn't exist yet.

Just enable the "Multi Tenancy" feature and you can start adding new tenants from the admin. Each tenant lives on a separate domain and has its own admin area. Tenants can even have different modules and features enabled.
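Once the feature is on, each tenant gets its own folder under `App_Data/Sites/<TenantName>` containing a `Settings.txt` that records how requests map to it. From memory it looks something like the sketch below - the field names are approximate and the values are placeholders, so check a real tenant folder on your install:

```
Name: Tenant2
State: Running
RequestUrlHost: tenant2.example.com
DataProvider: SqlCe
DataConnectionString: null
```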