It’s no surprise to most Oracle users that the company is encouraging customers to ‘cloudify’ wherever possible across their entire IT estate. Indeed, most of the major cloud players are doing the same, with Amazon and Microsoft leading the charge in terms of the features and functions available.

As part of that encouragement, the recently altered cloud policy document effectively doubles the license cost of an Oracle installation on AWS / Azure compared to a standard on-premises or Oracle Cloud setup, specifically due to this section:

• Amazon EC2 and RDS – count two vCPUs as equivalent to one Oracle Processor license if hyper-threading is enabled, and one vCPU as equivalent to one Oracle Processor license if hyper-threading is not enabled.

• Microsoft Azure – count two vCPUs as equivalent to one Oracle Processor license if hyperthreading is enabled, and one vCPU as equivalent to one Oracle Processor license if hyperthreading is not enabled.

In a sense there’s some logic to this: in a two-thread (vCPU) scenario, each thread can be running on a separate physical core, so it’s reasonable that both cores are included in license calculations in some fashion. This is also in line with other policy statements on so-called ‘hard partitioning’ or sub-capacity licensing – for example, the ‘whole-core’ position for Solaris LDoms.

Last month Amazon sought to lessen this effect by introducing the ability to alter the vCPU setting on a group of EC2 instance shapes. At instance creation you can now specify a different number of cores and / or hyperthread count from the default associated with that instance type. So if you have a memory-intensive, CPU-light application you can now arguably have a customised EC2 instance that fits both needs – for example:
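As a hedged sketch (the AMI ID is a placeholder, and the numbers are illustrative), launching an m5.12xlarge – normally 24 cores x 2 threads = 48 vCPUs – cut down to 6 cores with hyperthreading on gives a 12 vCPU instance with the full m5.12xlarge memory allocation:

```shell
# Placeholder AMI ID; --cpu-options is the relevant new parameter.
# 6 cores x 2 threads per core = 12 vCPUs via hyperthreading.
aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type m5.12xlarge \
    --cpu-options "CoreCount=6,ThreadsPerCore=2"
```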

You can now change the vCPU setting for that instance to anywhere from 2 to 48 vCPUs, with hyperthreading on or off (26-48 vCPUs are only available with hyperthreading on) – so the following command, run through the AWS command line interface (CLI):
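A hedged sketch along these lines (placeholder AMI ID again, rather than the exact original command):

```shell
# 12 cores x 1 thread per core = 12 vCPUs, hyperthreading off.
aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type m5.12xlarge \
    --cpu-options "CoreCount=12,ThreadsPerCore=1"
```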

will give you the same processor effect – 12 vCPUs – but this time across 12 physical cores rather than via hyperthreading, which may benefit your application at run time over the first example due to the greater physical processing power.

Running the describe-instances CLI command will show you the new options:
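For instance (the instance IDs below are placeholders, and the output is an illustrative sketch), querying the `CpuOptions` attribute:

```shell
# CpuOptions holds the customised core / thread settings.
aws ec2 describe-instances \
    --instance-ids i-aaaaaaaaaaaaaaaaa i-bbbbbbbbbbbbbbbbb \
    --query "Reservations[].Instances[].CpuOptions"
# [
#     { "CoreCount": 6,  "ThreadsPerCore": 2 },
#     { "CoreCount": 12, "ThreadsPerCore": 1 }
# ]
```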

These are the hyperthreaded (ThreadsPerCore: 2) and non-hyperthreaded (ThreadsPerCore: 1) variants.

You can of course get similar output by connecting to your instance and running the lscpu command or similar.
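On the 12-core, non-hyperthreaded variant, for example, you’d expect lscpu to report one thread per core (output abridged and illustrative):

```shell
lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core)'
# CPU(s):               12
# Thread(s) per core:   1
```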

Note the following:

• The cost of the instance DOESN’T alter if you reduce the CPU usage through this new functionality – you’ll still pay for an m5.12xlarge instance in AWS terms – it’s just that your true CPU consumption (and therefore arguably your Oracle license / support bill) will go down.

• You can currently only specify the options via the AWS CLI, an AWS SDK or the AWS EC2 API.

• You currently can’t see the allocation from the AWS console – you need to query the CLI or the instance itself (using something like lscpu).

• You can only do this at instance launch time – you can’t modify after launch – you’ll need to terminate the instance and start again if you want to make use of this function (or create a new customised instance and move your data).

• You don’t get more than the instance default – so you couldn’t specify CoreCount=32 in the m5.12xlarge example above, for instance.

• Changing the instance type (e.g. from an m5 to an m4) after customising the CPU options will reset them – i.e. you’ll go back to the default CPU options for the new instance type.

Amazon have now extended the above capability by introducing similar functionality for RDS – the database Platform as a Service offering – but only for Oracle DB.

Capability and commands are similar to the EC2 scenarios but there are a couple of differences:

• The cost of the instance again DOESN’T alter if you reduce the CPU usage through this new functionality – you’ll still pay for a db.m4.10xlarge instance in AWS terms, as an example – it’s just that your true CPU consumption (and therefore arguably your Oracle bill) will go down. (This applies to BYOL licensing, of course. You aren’t going to achieve anything with the license-included option – though conceivably that might be a future enhancement.)

• You CAN modify the CPU settings after instance launch or during a restoration.

• Again, you can’t achieve more CPU than the instance default.

• You can see the current CPU customised allocation on the AWS console (though as of yet we don’t seem to have that capability in the UK), or by using the relevant CLI commands.
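As a hedged sketch of the RDS equivalent (the instance identifier, credentials and storage values are placeholders; `--processor-features` is the relevant parameter, and this assumes a BYOL Oracle Enterprise Edition instance):

```shell
# Launch a db.m4.10xlarge (default: 20 cores x 2 threads = 40 vCPUs)
# cut down to 10 cores with hyperthreading on = 20 vCPUs.
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.m4.10xlarge \
    --engine oracle-ee \
    --license-model bring-your-own-license \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password 'change-me' \
    --processor-features "Name=coreCount,Value=10" "Name=threadsPerCore,Value=2"

# Unlike EC2, the settings can also be changed after launch:
aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --processor-features "Name=coreCount,Value=10" "Name=threadsPerCore,Value=1" \
    --apply-immediately
```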

AWS CPU Optimisation: what might this look like on the license cost front? Consider the following four possible scenarios:

Scenario 4 – vCPU = 12 again, BUT this is now classed by Amazon as a non-hyperthreaded instance – we’ve switched hyperthreading off by applying ThreadsPerCore=1. Applying the policy this time – which says 1 vCPU = 1 Processor license on non-hyperthreaded instances – that means 12 Processor licenses.

You would more obviously use scenario 3 here – to retain something akin to the on-site setup – and it’s obvious that something isn’t quite right in scenario 4: it sort of looks the same, but isn’t really – you’re actually running across 12 cores, not 6.

However, based on the available documentation (from Amazon), it’s 12 vCPUs with no hyperthreading – which needs twice as many licenses as scenario 3, which also has 12 vCPUs. What’s happened here is that we’ve taken what would normally be classed by default as a ‘hyperthreadable’ instance and made hyperthreading a choice – with potentially significant license implications.
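To make the scenario arithmetic concrete, here is a minimal shell sketch of the policy calculation as quoted at the top of this article (an illustrative helper of our own, not an AWS or Oracle tool):

```shell
# Oracle cloud policy arithmetic:
#   hyperthreading on  (2 threads/core) -> 2 vCPUs = 1 Processor license
#   hyperthreading off (1 thread/core)  -> 1 vCPU  = 1 Processor license
licenses() {
    vcpus=$1
    threads_per_core=$2
    if [ "$threads_per_core" -eq 2 ]; then
        echo $(( vcpus / 2 ))
    else
        echo "$vcpus"
    fi
}

echo "Scenario 3 (12 vCPU, HT on):  $(licenses 12 2) licenses"   # 6
echo "Scenario 4 (12 vCPU, HT off): $(licenses 12 1) licenses"   # 12
```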

At this point there’s no indication of any alteration to the policy by Oracle – though we may see some strengthening of the policy text to require that the calculations be done on the default (rather than the CPU-customised) instance attributes. For now, it’s based on the vCPU allocation.

Note that this article considers Database Enterprise Edition and products with typical Processor definitions, and therefore shouldn’t be applied to Standard Edition products.

The take-away?

Amazon have given us yet another set of flexibility options, which in part helps mitigate Oracle’s stance on licensing in non-Oracle clouds. These options offer significant potential to lower the license costs of using AWS EC2 and RDS.

However, you need to ensure that what you assume is the case (a license cost calculation based on the instance shape) is in fact the reality (has it been CPU customised?) – or that you’ve correctly altered the CPU options to your advantage. And finally, that you can extract the relevant information to back that up – from the console, CLI or raw instance – in the event of an audit or as part of your BAU SAM processes.


Explore SAM at Version 1

Helping enterprise organisations take control of their software assets.

Talk to our Software Asset Management and Licensing experts to learn more about taking control, quantifying risk, and identifying and optimising opportunities.