Let’s recall exactly how server virtualization started and what it’s all about. In the late 1990s, a company named VMware developed hypervisor technology; although it had immediate appeal, not many people really understood it, and only a select few believed the technology carried real business value.
I remember a time when I was a QA engineering manager at a large Internet security company, and my team embraced the idea of using virtual machines as test support systems, specifically traffic generators and servers. They made us more agile in building up and tearing down test networks. Realistically, CPU power and memory were costly at the time, and running multiple VMs on a single server wasn’t practical. However, powered by Moore’s Law and Intel’s business ambitions, servers became adequately powerful and hypervisor technology efficient enough to make consolidating multiple servers onto fewer machines practical. This, in outline, is the supply side of server consolidation.
Now let’s look at this from the demand angle. Because of the way applications are added to organizational IT arsenals, the principles of assigned asset ownership, separation of duties, and fault isolation have guided the procurement process, and application sprawl, all along. This effectively created a reality in which different organizational departments owned different compute assets and typically allocated distinct hardware to specific applications.
Business applications have variable usage patterns: HR apps are used more toward the end of the month, sales apps toward the end of the quarter, planning apps at the beginning of project cycles, and so on. As a result, data center compute assets are largely underutilized: a Yankee Group research study indicates that average data center compute resources run at only 17% of their capacity (this is obviously not true for storage).
In the wake of a series of global economic downturns, improving efficiency and asset utilization has become a high-priority project for many organizations. Consequently, increasing the utilization of data center compute resources by consolidating multiple servers onto fewer machines suddenly becomes very attractive. Most organizations today already use server virtualization to consolidate applications and improve the utilization of their compute resources. Furthermore, consolidating workloads opens up the opportunity to free up resources and shut servers down when they are not needed.
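To see why the numbers are so compelling, here is a back-of-the-envelope sketch of the consolidation math. The function name and the 70% target utilization are illustrative assumptions, not figures from the study; only the 17% average utilization comes from the text above.

```python
import math

def hosts_after_consolidation(num_servers, avg_utilization, target_utilization):
    """Estimate how many physical hosts are needed after consolidation.

    Total work is num_servers * avg_utilization; we divide that by the
    utilization level we are willing to run each consolidated host at.
    """
    total_load = num_servers * avg_utilization
    return math.ceil(total_load / target_utilization)

# Illustrative numbers: 100 servers at the 17% average utilization cited
# above, consolidated onto hosts run at an assumed 70% target utilization.
print(hosts_after_consolidation(100, 0.17, 0.70))  # → 25
```

Even with this rough model, roughly three quarters of the physical servers could in principle be powered off, which is the efficiency gain driving the trend.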
So how does this relate to being green?
Server virtualization has become known for improving utilization while reducing several kinds of consumption, offering businesses a way to ride the “green” trend while saving money. As multiple applications are consolidated onto a small number of servers, overall power consumption, rack space consumption, and heat dissipation are all reduced. In turn, the air conditioning requirements drop, which contributes to a “greener,” more energy-efficient data center.