
The Servers Behind The Cloud – From Basic To Hyperscale: What The Tech Giants And 5G Are Demanding

Real-time in every technology, artificial intelligence (AI), billions of objects connected, instant search and compute, total security for each and every one of the hundreds of thousands of companies connected at the same time in the cloud: we take it for granted, but behind the extreme functionality of the industries of cloud computing, social media, software platforms and content delivery there are servers, very real physical and powerful machines.

It was only a question of time before the need emerged for hyperscale data centers, that is, data centers capable of meeting the ever-increasing capacity, speed and reliability that companies now require.

Who is behind all the likes, swipes, searches and streams of the big and the smaller platforms? Chances are it is Quanta, which, with a turnover of over $34 billion, ranks in the Fortune 500 list. Quanta is traditionally a so-called original design manufacturer (ODM): a manufacturer of electronics, large and small, whose products other companies then sell under their own brand. One in four notebooks is produced by Quanta.

“Towards the end of the 2000s, with the acquisition of Facebook as a client for the open rack infrastructure segment, the company began to design open architecture servers for cloud infrastructures. The company created a specific division—Quanta Cloud Technology (QCT)—that was to sell directly to end customers. Our evolution led us towards open compute platforms and to become one of the global leaders for cloud infrastructure. We cannot speak publicly about the many customers of the division, but I can mention Facebook,” Maurizio Riva, QCT Vice President EMEA Region, said.

Mr. Riva, who recently moved to Düsseldorf after 15 years at Intel where he covered positions like Director for EMEA OEM and General Manager for Italy, elaborates in this interview on QCT’s strategy for the market for cloud infrastructure in general and on a first for the mobile industry.

Which developments led QCT to design and implement the first fully virtualized and cloud-native mobile network?

Telco operators have long been looking for this type of infrastructure and platform, optimized for performance and built on open technologies, to enable 5G applications and services. Thus, together with Intel, Rakuten and Red Hat, we went on to develop a fully virtualized 5G mobile network, the first of its type. We took advantage of our experience in cloud computing to create a completely cloud-based and fully automated network, as far as both the network itself and its services are concerned.

What is the rationale for big customers, like carriers getting ready for 5G, to invest in this kind of infrastructure?

For network operators moving towards optimizing and making services more efficient, hyperscale data centers*, open compute platforms and infrastructure standardization are not one option among many: they are the only option. If for nothing else, for cost reduction. Bear in mind that in today’s data centers, energy can account for up to half of all expenditures. This is a reality, and we must therefore be absolutely rigorous on this point. Carriers are one market that will contribute greatly to making [cloud computing] infrastructures more efficient. For this reason, we consider this market very important and strategic. In the cloud QCT was a disruptor, and that is what we want to be in the telco market as well.

So, energy efficiency is critical for telcos, and not only for them…

Yes. This is a trend in the server industry: as it moves towards ever greater density, it also seeks lower energy consumption and faster, simpler maintenance, among other things. That also applies to the design of the servers, with new solutions that make them more energy efficient.

Think about this: some of our customers have 50,000 server nodes. If you save even just one watt per node across 50,000 server nodes, the saving adds up to 50 kW of continuous power. We have a very strict efficiency certification: we use platinum- and titanium-class power supplies that make the servers’ use of energy more efficient.
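The savings cited here can be checked with simple arithmetic. In the sketch below, the node count and the one-watt-per-node figure come from the interview; the yearly energy extrapolation is our own illustration.

```python
# Fleet-level power savings: 1 W saved per node across 50,000 nodes.
# Node count and per-node saving are taken from the interview; the
# annual-energy figure is an illustrative extrapolation.

NODES = 50_000
WATTS_SAVED_PER_NODE = 1.0

total_kw = NODES * WATTS_SAVED_PER_NODE / 1_000   # continuous power saved
kwh_per_year = total_kw * 24 * 365                # energy saved over a year

print(f"Power saved: {total_kw:.0f} kW")              # Power saved: 50 kW
print(f"Energy saved per year: {kwh_per_year:,.0f} kWh")  # 438,000 kWh
```

At typical industrial electricity prices, even a one-watt-per-node improvement is therefore worth hundreds of megawatt-hours a year at this scale, which is why the interview treats per-watt efficiency as strategic.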

Fan cooling is very important. Our certification guarantees that each fan does not exceed an established percentage of the server’s total consumption.

This all reflects directly on the design of the servers. I was discussing this an hour ago with a client. Too many cables, for example, hinder the air flow and force the fans to spin faster. Fewer cables, conversely, allow the fans to run at a lower speed and use less power. At Quanta we think about this very carefully, because efficiency in power consumption is critical. Many cloud service providers for the telecommunications industry are already benefiting from a design that carefully plans, with energy-efficiency criteria, where to place the cables: you could route them all at the back of the cabinets, or all at the front, just to avoid complex architectures, because cables crossing each other in the cabinet interfere with the flow of air.

The other aspect we consider an important trend is liquid cooling. Here too, we are constantly working on solutions for CPUs and GPUs. These are the types of servers that cloud service providers are increasingly using, and not just for artificial intelligence workloads. With AI workloads, since the amount of heat the servers must dissipate is high, precisely because of the high concentration of performance inside the server, it is sometimes necessary to cool them with liquid.

I recently saw the case of Metro de Madrid, which is using AI to optimize the operation of its cooling fans with regard to power saving. Is this a trend too?

Absolutely, this is the path and what we see as a trend in general, and in our case specifically in the cloud service provider industry. Today’s cloud service providers are offering artificial intelligence instances. These are nothing more than algorithms that optimize functionality: sensors, for example, analyze the temperature and humidity of the air, and from those readings the system calculates the speed at which the fans need to turn at a given moment. It could be done with deep learning.
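The control loop described above can be sketched very simply. The function below is a minimal, hypothetical illustration of mapping sensor readings to fan speed; a production system would use a trained model, as the interview suggests, and every threshold and coefficient here is invented for the example.

```python
# Hypothetical sketch of the sensor-driven fan control described in the
# interview. Real deployments would use a trained (e.g. deep learning)
# model; this hand-tuned linear rule and all its constants are invented.

def fan_speed_pct(temp_c: float, humidity_pct: float) -> float:
    """Map inlet temperature and relative humidity to a fan speed (0-100%)."""
    # Baseline speed, plus a linear ramp above a 22 C inlet temperature;
    # more humid air carries heat slightly better, so trim a little.
    speed = 30.0 + max(0.0, temp_c - 22.0) * 5.0 - (humidity_pct - 50.0) * 0.1
    return min(100.0, max(20.0, speed))  # clamp to a safe operating range

print(fan_speed_pct(22.0, 50.0))  # baseline conditions -> 30.0
print(fan_speed_pct(35.0, 30.0))  # hot, dry aisle -> fans spin up to 97.0
```

The point of such a controller, learned or hand-tuned, is the one made above: fans that turn only as fast as conditions require consume a smaller share of the server’s total power budget.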

Do you see an evolution in which new operators with operations fully based on the cloud have an advantage over legacy ones because they operate with much higher efficiency?

When we talk about 5G, a first hurdle is the license, and we see that several auctions are taking place in Europe and elsewhere. Then there is the hurdle of installing the antennas. Here, legacy operators have the advantage of having already made the investments for the license and for the antenna infrastructure. A few of them are even signing mutual agreements to share their antennas.

Today’s big discussion in the telecommunications sector is how to preserve investments and, at the same time, how to plan for cost savings in the future. Considering the latter aspect, I am sure that a company using solutions like ours grants itself great opportunities.

How do you see telcos managing globally the evolution of the 5G infrastructure?

First, we see that on the world market China and the US are going faster than Europe, perhaps due to the fragmentation of nations and the fragmentation of carriers.

It is inevitable that among traditional carriers there will be some resistance, because we are talking about completely overhauling the infrastructure to reduce costs. In Europe, on top of this, they are also bound by laws and labor regulations. It is not easy.

5G substantially increases the need for processing data near the source…

Think of autonomous driving, or the fact that more and more devices will be distributed and will therefore need local computing capacity, to avoid relaying large quantities of data to the computing core and risking bottlenecks in the network. And you need low latency. For this reason, you work more and more at the edge. This is a very important aspect of innovative 5G platforms. The world’s first infrastructure based entirely on the cloud for 5G** networks, the NGCO (Next Generation Central Office), which also won a Computex 2019 award in May in Taipei, Taiwan, provides a fiber-rich edge designed to deliver agile mobile networks and the associated infrastructure and services. Telco service providers can now transform their edge networks for faster service, efficiency and flexibility, thinking ahead to the scale of devices the Internet of Things (IoT) will connect.

* Conventionally, data centers with more than 5,000 servers and over 10,000 square feet.

** Next Generation Central Office (NGCO) Solution based on Intel® technology.

© Guiomar Parada, Nova, Il Sole 24 Ore.