Oracle Licensing: The Data, The Report and The Reality
As with almost any complex area of life or business, Software Asset Management is seldom black and white.
Like most of us, SAM practitioners need to deal with various shades of grey. This is the first of two articles where I’ll talk about two recent cases in point – where customer decisions (based on reality) went against seemingly black and white recommendations.
In scenario one (Oracle related), despite having the data and a report which showed how to reduce operational costs by almost $2,500,000, reality (for reasons which will be discussed) dictated that only $800,000 of those savings could be realized, leaving $1,700,000 in excess cost in the business.
This article will discuss the data, the reports and the reality, which demonstrate that this outcome was unavoidable.
In scenario two (Microsoft related), despite having the data and a report showing that licenses for a given product had been over-purchased in the past, leaving the company with €850,000 in shelfware, reality dictated that the company purchase another €1,000,000 of licenses for that software when renewal time came around. In my next article, I will discuss why this was actually a good decision.
Essentially, the job of a Software Asset Manager is to help their organization ensure compliance with license requirements and guide the effective management of their company’s software deployment lifecycle, typically with a particular emphasis on cost management (and reduction). As with any “Knowledge Intensive” role, decisions made can only be as good as the data available.
Scenario One – Oracle Licensing
The data:
- Customer network has over 8,000 servers: approximately 50% Windows, 25% Linux, 20% Solaris, and the remaining 5% a scattering of AIX and HP-UX.
- Almost 80% of the Windows servers, 50% of the Linux and 30% of the Solaris are virtual: VMware underpins the Windows and Linux VMs and Solaris Zones underpin the Solaris VMs.
- Microsoft and Veritas cluster services are in wide use for active/passive failover across circa 20% of the estate.

We scanned the network and mapped all of these devices (physical and virtual) and understand the deployment of software across the servers: Oracle database, Oracle middleware, MS SQL Server, etc. All of this data was available.
The report:
Oracle database license renewal time was approaching. The customer performed an analysis of the data which showed that Oracle was deployed poorly across VMware clusters. The customer had almost 900 Oracle Processor Licenses' worth of database, slightly more than 20% of which (212 licenses) were on VMware-based hardware. The report clearly showed extremely poor utilization of VMware. Isolation of Oracle wasn't great either: some clusters mixed servers running Oracle databases with servers running other technologies. One cluster of 4 physical servers, each with 4 six-core processors (96 cores in total, 38 Oracle Processor Licenses), had only 8 cores assigned to Oracle databases. Looking overall at the cores allocated to Oracle database servers compared with the total number of cores on the VMware physical servers, the allocation was just 41%.
Some servers had 133% of their cores allocated: one machine had 12 physical cores and hosted 4 VMs, each assigned 4 vCPUs (16 vCPUs on 12 cores). For those not familiar, this over-allocation is quite typical in VMware; it's a good way to achieve economies of scale in processor utilization. Several other servers had just 6% allocation (32 cores, only 2 assigned to Oracle database servers).
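For readers who like to see the arithmetic, here is a minimal sketch of how those allocation percentages fall out of the raw core and vCPU counts. The figures are the ones quoted above; the function name is purely illustrative.

```python
# Sketch of the vCPU-to-physical-core allocation arithmetic described above.
# Core and vCPU counts are taken from the examples in the text; the helper
# name is illustrative only.

def allocation_percentage(physical_cores: int, oracle_vcpus: int) -> float:
    """Percentage of a host's physical cores consumed by Oracle VM vCPUs."""
    return 100.0 * oracle_vcpus / physical_cores

# Over-allocated host: 12 physical cores, 4 Oracle VMs x 4 vCPUs = 16 vCPUs.
print(round(allocation_percentage(12, 4 * 4)))  # 133

# Under-allocated host: 32 physical cores, only 2 vCPUs assigned to Oracle.
print(round(allocation_percentage(32, 2)))      # 6
```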
After reviewing the report, the correct action was clear: shuffling the physical boxes on which Oracle was deployed and creating dedicated, Oracle-focused clusters could allow the customer to reduce the license cost of Oracle on VMware by 60% (a list price value of $6m+) without reducing the number of instances or the number of CPUs allocated. A total win/win, right?
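To make the consolidation logic concrete, here is a minimal sketch. It assumes the commonly applied (if contractually contested) position that every physical core in a VMware cluster hosting Oracle must be licensed, and uses the standard x86 core factor of 0.5. The cluster sizes and the resulting reduction are illustrative numbers of my own, not the customer's actual figures.

```python
# Illustrative only: licensable footprint before and after consolidating Oracle
# VMs onto a dedicated cluster. Assumes every physical core in a cluster that
# hosts Oracle must be licensed, and an x86 core factor of 0.5. Cluster sizes
# are made-up examples, not the customer's real estate.

CORE_FACTOR = 0.5

def processor_licenses(cluster_cores: int) -> float:
    """Oracle Processor Licenses required to cover a cluster hosting Oracle."""
    return cluster_cores * CORE_FACTOR

# Before: Oracle VMs scattered across three mixed-use 96-core clusters.
before = sum(processor_licenses(cores) for cores in [96, 96, 96])

# After: the same Oracle workload consolidated onto one dedicated 96-core cluster.
after = processor_licenses(96)

print(before, after)       # 144.0 48.0
print(1 - after / before)  # ~0.67 reduction in licensable cores
```

The exact percentage obviously depends on how fragmented the starting point is, which is why the customer's own data put the figure at 60% rather than the two thirds in this toy example.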
The reality:
You can imagine how delighted the SAM and vendor management people were: they calculated that they could reduce their costs by more than $2,500,000 over a three-year period, without reducing deployment, capacity or speed. But (there's always a but, right?) reality bites. Several of the clusters had been purchased by specific business units or specific projects. They were allocated for the use of those business units or projects, and the project owners and (sometimes) decision makers in the business units were not easily persuaded.
Whether down to wariness of change, jealous guarding of their own assets (paid for out of their own budgets) or simple unwillingness to consider change even with significant cost benefits on the table, the reactions from the business units meant that not all of the theoretical shuffling was, in reality, possible. I can almost hear the arguments that came back: the criticality of the projects in question, the relatively insignificant cost reduction (two or three hundred thousand dollars in any single case) when set against the value of the project as a whole, or the impact of downtime.
As mentioned previously, the Realpolitik of the situation meant that close to $1,700,000 in potential savings went unrealized.
The changes which were made will deliver a reduction of $800,000 over the next three years. Not a bad result by any measure, but only a third of what was possible.
Conclusion
Reality is changing, and I'd suggest you get involved. Organizations are slowly moving away from physical servers and hardware procured for, and dedicated to, a specific project, and towards enterprise clouds. New servers (which can be dedicated to a project and charged for appropriately) can be deployed from these clouds quickly and easily. The cost reduction opportunities are immense.
Learn more about how cloud-based approaches can be used to reduce enterprise software license costs: any software tied to physical hardware attributes can potentially have its cost slashed over time. Of course, in this new reality decisions still have to be made, and they had better be made on the basis of accurate information.
Moving forward, the data and reports become more and more important. Data mapping physical to virtual hosts, and an understanding of the deployment of software across these environments, will be critical to ensuring that deployment decisions are informed (which cluster should application X be deployed to?) and that chargeback (cost allocation based on who is using what, where and how much of it) is accurate.
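As a purely illustrative example of what that chargeback might look like once the physical-to-virtual mapping data exists, here is a small sketch that apportions a cluster's annual Oracle license cost to business units pro rata by the vCPUs their VMs consume. The field names, cost figure and allocation rule are my own assumptions, not a description of any particular tool or of the customer's model.

```python
# Purely illustrative chargeback sketch: apportion a cluster's annual Oracle
# license cost to business units pro rata by the vCPUs their VMs consume.
# Field names, the cost figure and the pro-rata rule are assumptions.

from collections import defaultdict

vms = [
    {"vm": "db-finance-01", "cluster": "oracle-cl1", "owner": "Finance", "vcpus": 8},
    {"vm": "db-hr-01",      "cluster": "oracle-cl1", "owner": "HR",      "vcpus": 4},
    {"vm": "db-sales-01",   "cluster": "oracle-cl1", "owner": "Sales",   "vcpus": 4},
]

cluster_annual_cost = {"oracle-cl1": 300_000}  # example figure only

def chargeback(vms, cluster_costs):
    # Sum vCPU usage per business unit within each cluster.
    usage = defaultdict(lambda: defaultdict(int))
    for vm in vms:
        usage[vm["cluster"]][vm["owner"]] += vm["vcpus"]
    # Split each cluster's cost in proportion to that usage.
    bills = defaultdict(float)
    for cluster, owners in usage.items():
        total_vcpus = sum(owners.values())
        for owner, vcpus in owners.items():
            bills[owner] += cluster_costs[cluster] * vcpus / total_vcpus
    return dict(bills)

print(chargeback(vms, cluster_annual_cost))
# {'Finance': 150000.0, 'HR': 75000.0, 'Sales': 75000.0}
```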
I’ve recently joined the ISO/IEC working group behind the 19770 (SAM related) standards. My key focus in the years to come will be ensuring that the best practices behind SAM reflect the changing challenges and opportunities cloud-based computing brings to enterprise SAM. It's an exciting time in the technology world, and I hope you'll join in the debates to come.
About Jason Keogh
Jason is the CTO and founder of iQuate. Jason is responsible for iQuate’s product vision and is well regarded as an expert in the spheres of IT discovery, inventory and SAM. Jason speaks at several industry conferences each year. In 2012 Jason joined the ISO/IEC 19770 working group with a goal of ensuring its work reflects, and is relevant to, enterprise cloud based computing.
As a brief follow up: what the article above really highlights is the benefit a TRUE shared services layer can offer. IF the hardware and software were all owned by a centralized IT function and controlled entirely by them, where IT provides a service to the projects rather than the projects having to pay up front for the capital expenditure, THEN the full $2.5m could have been realized. AND the business units would have been happy, as their operating costs would drop in line with the IT services organization's costs dropping.