HugePages – Overhead of Not Using

I’ve read a fair bit about HugePages and their importance especially for Oracle databases with large SGAs and a large number of dedicated connections.

Admittedly, some of what I read is over my head – I’m not a production DBA and quite often I don’t even have access to the box unfortunately – but I know who the experts are and I trust them (doesn’t mean that an expert’s opinion shouldn’t be tested of course):

And I think a lot of the time, it is. Not because of the topic but because of the structure of modern IT departments.

When I tried to tackle the subject before at a client, it was awkward – it wasn’t a subject I could talk about with any great authority although I could refer them to the expert material above and … well, to cut a short story shorter and keep me out of trouble… we didn’t. I was overruled. No-one else at the client was using HugePages on 11g. So no HugePages and no chance to test it.

Well, in the end, we didn’t go with AMM (which wouldn’t have used HugePages anyway), because for our applications, according to our tests, we were better suited to ASMM (damn these acronyms).

Statements like this from Kevin’s articles above are easy to understand:

Reasons for Using Hugepages

Use hugepages if OLTP or ERP. Full stop.

Use hugepages if DW/BI with large numbers of dedicated connections or a large SGA. Full stop.

Use hugepages if you don’t like the amount of memory page tables are costing you (/proc/meminfo). Full stop.

and

“Large number of dedicated connections” and “large SGA” in the context of hugepages can only be quantified by the amount of memory wasted in page tables and whether the administrator is satisfied with that cost.

Those are pretty clear cut.

So, if I understand everything correctly, I should not like this, right?:

$ grep "PageTables" /proc/meminfo
PageTables: 41622228 kB
$

That’s 40 gig of overhead for an SGA of 60 gig and a whole bunch of connections.
So, don’t need HugePages, eh?
Bit late now.
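As a sanity check on that 40 gig figure, a quick back-of-envelope calculation (my own, not from any of the articles above) lines up, assuming 4 KiB pages, 8-byte page-table entries, and each dedicated server process mapping the full 60 gig SGA:

```shell
# Assumptions (mine): 4 KiB pages, 8-byte PTEs, every dedicated
# connection maps the whole 60 GiB SGA.
# Pages needed to map the SGA:
echo $((60 * 1024 * 1024 / 4))                # 15728640 pages
# Page-table cost per process, in KiB:
echo $((60 * 1024 * 1024 / 4 * 8 / 1024))     # 122880 KiB, i.e. ~120 MiB
# So the 41622228 KiB of PageTables above corresponds to roughly
# this many such processes (integer division):
echo $((41622228 / 122880))                   # 338
```

In other words, ~340 dedicated connections each paying ~120 MiB in page tables gets you to 40 gig, which is exactly the overhead HugePages (2 MiB pages, shared page tables) is there to avoid.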


4 Responses to HugePages – Overhead of Not Using

Actually, changing to hugepages doesn’t require a major re-architecture.
A dbshut, followed by the required set of OS commands, a change to the spfile and then a dbstart, and it’s all done. It’s never too late to use them…
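For what that sequence might look like in practice, here is a hedged sketch (the page counts, memlock values and file paths are illustrative examples, not from the post; check your platform’s documentation before running anything like this):

```shell
# 1. Stop the instance
dbshut $ORACLE_HOME

# 2. Reserve enough 2 MiB huge pages to cover the SGA.
#    A 60 GiB SGA needs roughly 60*1024/2 = 30720 pages; a little
#    slack on top of that is common.
echo "vm.nr_hugepages = 30800" >> /etc/sysctl.conf
sysctl -p

# 3. Allow the oracle user to lock that much memory (values in KiB),
#    in /etc/security/limits.conf:
#      oracle soft memlock 64000000
#      oracle hard memlock 64000000

# 4. If the version supports it (11.2 onwards), the spfile change can
#    force the SGA into huge pages rather than silently falling back:
#    SQL> ALTER SYSTEM SET use_large_pages=only SCOPE=spfile;

# 5. Restart and confirm the pages were actually taken
dbstart $ORACLE_HOME
grep -i huge /proc/meminfo
```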

Thanks.
Not too late, but obviously such a change is harder once you’re already live, and it requires a whole bunch of testing etc.
It’s something that will be done eventually, perhaps.
It’s not causing a specific problem, because there’s plenty of memory on the box, but if there’s a failure and multiple dbs per node, it may well end up causing an issue.

But it is definitely annoying because there was a right time to deal with this.

It’s a huge {sic} problem – things are not changed for the better due to both anxiety about the impact on working systems and the difficulty of scheduling the change. That amplification of the difficulty of organising changes is often overlooked when people decide to go virtual on a larger server. So, as you say, things do not get implemented.

I see this spilling down to development too. Management are in a hurry to get the development done and go live, and so are not willing to be the first in the organisation to use a “new” feature – even one that is 3 versions old… I’ve seen a situation in the last couple of years where a team came up with standard configurations, but even though they had validated them and they were now the “company standard”, no one would swap to them because they would be first.