
Taking data centers to hyperscale performance levels

23 Dec 2015

Between massive data growth and the arrival of digital industrialization, the data center has become the modern equivalent of the industrial factory. It can no longer be regarded as overhead required to run the business; it must be embraced as the digital machinery that enables the business to compete on massive amounts of data in the 21st century.

As connected machines begin to outnumber the devices people carry, new use cases such as data collection and control signaling emerge, and traffic will increasingly flow upstream rather than mainly downstream. Meanwhile, all enterprises are essentially becoming information-enabled software companies with more data to process, store and use. By some estimates, companies will soon depend on ten times the IT capacity they need now, but they won’t have ten times the budget to deliver it.

So while no one knows for sure what the specific use cases will be, it’s safe to assume that the design assumptions behind the compute, storage and network infrastructure in today’s data centers won’t hold up, says Qawa Darabi, Head of Cloud Business at Ericsson South East Asia and Oceania.

“As the massive growth of information technology services places increasing demand on the data center, it is important to re-architect the underlying infrastructure, allowing companies and end-users to benefit from an increasingly services-oriented world,” Darabi explains. “Data centers need to deliver on a new era of rapid service delivery, as well as granular monitoring and management of resources and services. Across network, storage and compute there is a need for a new approach to deliver the scale and efficiency required to compete in a future where ‘hyperscale’ is a prerequisite.”

To understand just what “hyperscale” means, think of the difference between standard computing and virtual computing. Instead of housing all of a computer’s components in the same box, those components can now sit in different boxes, or even different data centers, and still function as a single computer, linked together by fiber optics.

“This means that computers can be realized on the fly, at very small scale, at massive scale, or anywhere in between,” says Darabi. “With this increased flexibility of software-defined compute, storage and network comes the ability to completely disrupt utilization models at data-center scale, enabling 100 times more for about the same. This is tremendously important, especially for more expensive components such as high-speed memory.”
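To make the composition idea concrete, here is a toy sketch in Python. It is purely illustrative: the pool sizes, the compose() helper and the LogicalServer type are hypothetical stand-ins, not the interfaces of HDS 8000 or Intel’s Rack Scale Architecture.

```python
# Illustrative toy model of "composing" a logical server out of disaggregated
# resource pools. All names, sizes and units here are hypothetical.
from dataclasses import dataclass


@dataclass
class ResourcePool:
    name: str
    total: int          # total units in the pool (cores, GB, TB, ...)
    allocated: int = 0  # units currently handed out to logical servers

    def allocate(self, units: int) -> None:
        if self.allocated + units > self.total:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.allocated += units


@dataclass
class LogicalServer:
    """A compute node assembled on the fly from pooled components."""
    cores: int
    memory_gb: int
    storage_tb: int


def compose(pools: dict, cores: int, memory_gb: int, storage_tb: int) -> LogicalServer:
    """Carve a logical server out of shared compute, memory and storage pools."""
    pools["compute"].allocate(cores)
    pools["memory"].allocate(memory_gb)
    pools["storage"].allocate(storage_tb)
    return LogicalServer(cores, memory_gb, storage_tb)


pools = {
    "compute": ResourcePool("compute", total=1024),  # cores in the rack
    "memory":  ResourcePool("memory", total=8192),   # GB of pooled RAM
    "storage": ResourcePool("storage", total=500),   # TB of pooled storage
}

small = compose(pools, cores=4, memory_gb=32, storage_tb=1)        # modest node
large = compose(pools, cores=256, memory_gb=2048, storage_tb=100)  # hyperscale node
print(small, large, sep="\n")
```

The same pools serve both requests, which is the point: capacity is sized for the aggregate rather than for each individual box.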

Current data center infrastructures can’t address this because they simply aren’t designed for it. Data centers are mainly conceived as collections of unit-based servers and are not built to handle massive amounts of data cost-effectively. That’s why Ericsson is championing the idea of a “disaggregated architecture” – in the form of its Hyperscale Datacenter System 8000 (HDS 8000), billed as the first implementation of Intel’s Rack Scale Architecture – that shifts life cycle management from servers to the components themselves.
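The shift from server-level to component-level life cycle management can be sketched in the same toy style. The inventory class, component kinds and firmware fields below are assumptions made for illustration, not the actual HDS 8000 management model.

```python
# Toy component-level inventory: each sled has its own life cycle, so one part
# can be retired or upgraded without rebuilding the servers composed from it.
# Component names and fields are hypothetical.
from dataclasses import dataclass


@dataclass
class Component:
    component_id: str
    kind: str        # e.g. "cpu-sled", "memory-sled", "storage-sled"
    firmware: str
    in_service: bool = True


class ComponentInventory:
    def __init__(self) -> None:
        self._components: dict[str, Component] = {}

    def register(self, component: Component) -> None:
        self._components[component.component_id] = component

    def upgrade_firmware(self, component_id: str, version: str) -> None:
        # Touches one component only; composed servers keep running elsewhere.
        self._components[component_id].firmware = version

    def retire(self, component_id: str) -> None:
        # Take a single component out of service on its own schedule.
        self._components[component_id].in_service = False


inventory = ComponentInventory()
inventory.register(Component("mem-0042", "memory-sled", firmware="1.0"))
inventory.register(Component("cpu-0007", "cpu-sled", firmware="2.3"))
inventory.upgrade_firmware("mem-0042", "1.1")  # upgrade memory, no server rebuild
inventory.retire("cpu-0007")                   # retire one CPU sled independently
```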

“To compete in the era of digital industrialization, you need hardware that can be both configured into hyperscale data centers and managed at the component level,” says Darabi. “You can scale up or scale down quickly to adapt to changing workloads while dramatically reducing waste and operation costs.”
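A minimal sketch of that scale-up/scale-down decision is shown below; the thresholds and step size are illustrative values, not recommendations from Ericsson.

```python
# Grow or shrink a logical server's core count in response to measured CPU
# utilization. Thresholds and step size are made-up illustrative values.
def rescale_cores(current_cores: int, utilization: float,
                  step: int = 8, low: float = 0.30, high: float = 0.80) -> int:
    """Return the new core count for a logical server given its CPU utilization."""
    if utilization > high:
        return current_cores + step   # scale up: the workload is running hot
    if utilization < low and current_cores > step:
        return current_cores - step   # scale down: release idle cores to the pool
    return current_cores              # within band: leave it alone


print(rescale_cores(32, 0.92))  # -> 40
print(rescale_cores(32, 0.12))  # -> 24
```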

Combined with optical interconnect, a disaggregated architecture enables more efficient pooling of compute, storage, and network resources, he adds. “The optical interconnect removes the traditional distance and capacity limitations of electrical connections.”
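The utilization benefit of pooling can be shown with back-of-the-envelope arithmetic. The workload demands, fixed server size and headroom factor below are made-up numbers, chosen only to show the shape of the comparison.

```python
# Purely illustrative arithmetic: memory stranded in fixed 256 GB servers
# versus memory drawn from one shared pool. All figures are hypothetical.
demands_gb = [40, 130, 70, 250, 20]  # per-workload memory needs

# Fixed servers: each workload occupies a whole 256 GB box; unused RAM is stranded.
fixed_capacity = 256 * len(demands_gb)
fixed_idle = fixed_capacity - sum(demands_gb)

# Pooled memory: capacity is provisioned against aggregate demand plus 20% headroom.
pooled_capacity = int(sum(demands_gb) * 1.2)
pooled_idle = pooled_capacity - sum(demands_gb)

print(f"fixed:  {fixed_capacity} GB installed, {fixed_idle} GB idle")
print(f"pooled: {pooled_capacity} GB installed, {pooled_idle} GB idle")
```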
