We live in a world where people expect technology and miniaturization to go hand in hand. This is why hyperscale operations in the data center are said to be reshaping the entire concept of IT.
There was a time when a data center was little more than a warehouse of information. Today it is about computing power and connectivity as much as the data itself. A hyperscale data center is more a distribution hub than a warehouse: infrastructure run at massive scale, typically by enterprise-level companies. Yet it is defined not by its size but by its scalability.
“The hyperscale data centre concept has still not reached wide acceptance in its full strength. We can define a hyperscale data center as a full-fledged warehouse of IT infrastructure rather than a ‘data center’. In this warehouse you get everything required to store your data: servers, memory, power management, disaster recovery solutions and so on,” says Shailesh Mani, CIO, Flemingo International, Dubai.
Ashraf Yehia, Managing Director, Eaton Middle East, states that the emergence of hyperscalers with their fast-paced construction programs has accelerated the evolution of data centres to hyper speed. “Where previously the largest colocation providers could deliver 5 to 10 data halls in a year, hyperscalers are now delivering this in a month. Plus, each data hall design improves on its predecessor. The question is: how can this drive for continuous improvement in performance be sustained using a traditional design approach? Evolving customers require transformational thinking,” he added.
Scalability, not size!
One question that constantly pops up is: how big, actually, is big? Size alone is not what makes a facility hyperscale; it is how the facility enables tenants to make optimum use of the resources within that footprint. Companies like Google, Amazon, Microsoft and Facebook are known to deploy the largest hyperscale data centers. “In this competitive world of technology, every organization is searching for scalable IT infrastructure, one that can be upgraded or updated with minimal time and cost. In that scenario, hyperscale data centers meet such requirements in a very adaptive way,” says Mani.
Yehia adds: “The factors contributing to the growth of the hyperscale data centre market are the increasing requirements for high-performance applications, the rising need to reduce capital and operational expenditure, and the high spending on hyperscale data centre technologies. The increasing number of users opting for these technological solutions is also expected to improve the data infrastructure, which in turn is expected to boost the hyperscale data centre market.”
Best of both worlds
Hyperscale data centers are aptly described as the best of both worlds. Data storage remains important in modern data centers, but, as mentioned, what sets a hyperscale data center apart is its ability to handle heavy computing workloads and high-volume traffic. This is especially important for organizations that run their operations in the cloud or perform power-intensive tasks. This is where vertical and horizontal scaling of data centers comes into its own.
“The loosely coupled, distributed infrastructure helps users scale the infrastructure on demand. Horizontal scaling is achieved by adding machines to the network without any disturbance to the existing setup. In the same way, you can increase the power of each machine by adding more powerful components to it. The hyperscale data center is designed to scale up the capacity and power of its systems at any time without disturbing the running environment. This option of horizontal and vertical expansion will surely win the concept wider acceptance in the future,” says Mani.
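To make the distinction concrete, here is a minimal sketch in Python, not drawn from the article and using illustrative class and method names, of the two expansion moves Mani describes: horizontal scaling adds machines alongside the existing ones, while vertical scaling upgrades the machines already in place.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    cpu_cores: int
    ram_gb: int

@dataclass
class Cluster:
    nodes: List[Node] = field(default_factory=list)

    def scale_horizontally(self, count: int, template: Node) -> None:
        # Scale out: add more machines of the same spec, leaving existing nodes untouched.
        self.nodes.extend(Node(template.cpu_cores, template.ram_gb) for _ in range(count))

    def scale_vertically(self, extra_cores: int, extra_ram_gb: int) -> None:
        # Scale up: give each existing machine more powerful components.
        for node in self.nodes:
            node.cpu_cores += extra_cores
            node.ram_gb += extra_ram_gb

    def capacity(self) -> Tuple[int, int]:
        # Total compute and memory available across the whole cluster.
        return (sum(n.cpu_cores for n in self.nodes),
                sum(n.ram_gb for n in self.nodes))

# Start with four identical machines, then expand both ways.
cluster = Cluster([Node(cpu_cores=16, ram_gb=64) for _ in range(4)])
cluster.scale_horizontally(count=4, template=Node(cpu_cores=16, ram_gb=64))
cluster.scale_vertically(extra_cores=16, extra_ram_gb=64)
print(cluster.capacity())  # -> (256, 1024)
```

In a real hyperscale facility the same two moves happen at the level of racks and data halls rather than Python objects, but the point Mani makes holds: either form of expansion can proceed without disturbing what is already running.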
Even though it offers advantages to organizations that deal with huge amounts of data and need the computing power to match, the model does pose a challenge for transparency. The IT topology is so massive that traditional methods of traffic-flow monitoring are insufficient. Edge computing can relieve some of the IoT-induced pressure, yet a more effective and efficient way to track data and keep it transparent is still needed.
Eaton notes that current data center design methods involve a mix-and-match approach to component selection, often using products from multiple manufacturers to complete the power chain. This approach has served the industry well for the last 30 years but cannot meet the expectations of the next generation of customers and the data centers they require.
“Accelerated construction programs and leaner builds require a more integrated, system-level approach that can shave cost and improve uptime. Manufacturers are ideally positioned to combine complete component understanding with deep-dive design experience to produce a more finely tuned electrical system. This results in not just performance gains for the customer but also significant cost savings across the complete life cycle of the data center, from initial build to end of life. A systems-level approach to design will define the future of data centers.”
Hyperscale data centers are sure to play a major role in the future of IT operations. The world’s largest cloud providers are already expanding their hyperscale infrastructure to meet demand, and the growing uptake of IoT and cloud computing is putting further pressure on operators to run efficient data centers.