SAN JOSE: Dell believes that latency is the biggest challenge to system performance and said it was working on improving the bandwidth of interconnects inside its servers.
Sam Greenblatt, chief architect of Dell's Enterprise Solutions Group, said Moore's Law is no longer applicable and that performance is no longer doubling every two years, and labelled latency as one of the biggest challenges facing Dell. Greenblatt told Dell Enterprise Forum conference attendees that latency is hard to solve because making changes in one part of the system would "break something else".
Greenblatt said: "One of the things we are finding and we are talking about all of the time is latency. We are very concerned about latency and we're working on all kinds of fabrics and technologies inside the box [server] and inside storage to reduce latency."
Greenblatt's reference to latency is in the context of the time taken to move data from one part of the memory hierarchy to another. "The reason why this is important is because if you try and fix one latency [problem] you're going to break something else," he said.
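To put that hierarchy in perspective, the sketch below prints rough, widely cited order-of-magnitude latencies for each level. The figures are approximate and for illustration only, not Dell measurements; the gulf between a cache reference and a disk seek is the gap Greenblatt is describing.

```python
# Approximate, widely cited latency figures for the memory hierarchy,
# in nanoseconds. Illustrative orders of magnitude only, not measurements.
LATENCY_NS = {
    "L1 cache reference": 1,
    "main memory reference": 100,
    "SSD random read": 100_000,                   # ~100 microseconds
    "round trip within a data centre": 500_000,   # ~0.5 milliseconds
    "disk seek": 10_000_000,                      # ~10 milliseconds
}

base = LATENCY_NS["L1 cache reference"]
for name, ns in LATENCY_NS.items():
    print(f"{name:<32} {ns:>12,} ns  ({ns // base:>10,}x an L1 reference)")
```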
Latency within the system has always been a problem; however, the combination of exponential growth in data and growing numbers of users accessing services over high-bandwidth network connections has made it far more visible.
"Looking at the models today, if you take it to a Von Neumann model, if you tinker and speed up a storage or a processor or anything else, that's like putting a Ferrari on a dirt road. If the road isn't tuned for a Ferrari, you might as well be driving a Chevvy."
By highlighting the state of interconnect technology as one of the major challenges facing system builders such as Dell, Greenblatt will put more pressure on chipset manufacturers to increase bandwidth and lower latency. He said that one method of overcoming latency issues is to move processors closer to the data, effectively cutting the number of hops and the physical distance between the raw data and the processor.
Dell's investment in fabrics is not purely to move storage data from one location to another; it will also allow the firm to improve performance in blade systems, where servers within a chassis have to communicate with one another. Separately, Greenblatt told V3 that while Dell's recently announced PowerEdge VRTX server is not designed to expand beyond its current four-blade chassis, he expects to see PowerEdge VRTX units "talk to each other in a full mesh network".
Although Dell's PowerEdge VRTX server is not intended for data centre deployment – the firm went to great lengths to position it as something to be deployed in branch offices, under tables rather than in racks – it shows that the firm is physically positioning key IT infrastructure components close to one another, not only to save space but also to improve performance by using internal data buses rather than Ethernet.
What Greenblatt is alluding to is the need for processing and storage systems to transfer rapidly growing volumes of data between each other at a faster rate. He added that work needs to be done to reduce the amount of time spent moving data through the memory hierarchy.
In the past five years, solid-state drives (SSDs) have forced firms to deploy storage tiered by performance metrics such as IOPS or sustained throughput in order to manage cost effectiveness within the data centre. However, particular workloads favour particular storage characteristics, and data typically needs to move through the hierarchy in order to best serve the workload.
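As a rough illustration of that tiering decision, here is a minimal, hypothetical sketch that places a workload on the cheapest tier able to meet its demands. The tier names, IOPS and throughput figures are invented for the example and are not Dell's actual policy.

```python
# Hypothetical storage tiering sketch: pick the cheapest tier that meets
# a workload's IOPS and throughput demands. All figures are illustrative.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_iops: int             # sustained random IOPS the tier can serve
    max_throughput_mbps: int  # sustained sequential throughput in MB/s

# Listed fastest (and most expensive) first.
TIERS = [
    Tier("ssd",      max_iops=75_000, max_throughput_mbps=500),
    Tier("sas_15k",  max_iops=180,    max_throughput_mbps=200),
    Tier("nearline", max_iops=80,     max_throughput_mbps=120),
]

def place(workload_iops: int, workload_mbps: int) -> str:
    """Return the cheapest tier that satisfies the workload's demands."""
    for tier in reversed(TIERS):  # try slow/cheap tiers first
        if tier.max_iops >= workload_iops and tier.max_throughput_mbps >= workload_mbps:
            return tier.name
    return TIERS[0].name  # nothing cheaper fits, fall back to the fastest tier

print(place(workload_iops=50, workload_mbps=100))      # nearline
print(place(workload_iops=20_000, workload_mbps=300))  # ssd
```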
Greenblatt later said "the data centre is becoming flatter", meaning that the days of near-line servers and storage are numbered. Hardware vendors such as Dell will have to sell one box that can do it all, with software, not hardware, deciding on the hierarchy for a particular workload.
Dell and its rivals face considerable challenges in overcoming latency. While Greenblatt's view of putting processors close to data sources is certainly one way of reducing the time taken to shift data from one part of the data centre to another, it does not solve the fundamental problem that interconnect technology has simply not increased in bandwidth at the same rate as processing capability or storage capacity.
Greenblatt told V3 that Dell was working with Intel on photonics where bandwidth "would start at 100Gbps", which is 10 times the 10Gbps Ethernet commonly deployed and 2.5 times the more exotic 40Gbps Ethernet used in data centres today. Even Mellanox, which leads development of the high-performance, low-latency InfiniBand interconnect, only last month upgraded its bandwidth to 56Gbps, with the vast majority of high-performance computing clusters still running at 40Gbps.
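Some back-of-the-envelope arithmetic shows what those figures mean in practice. The sketch below computes how long a one-terabyte data set would take to cross the wire at each of the speeds mentioned in the article, ignoring protocol overhead and per-hop latency.

```python
# Transfer time for a 1 TB data set at the link speeds cited in the article,
# ignoring protocol overhead and per-hop latency.
DATASET_BITS = 1 * 10**12 * 8  # 1 TB expressed in bits

for gbps in (10, 40, 56, 100):
    seconds = DATASET_BITS / (gbps * 10**9)
    print(f"{gbps:>3} Gbps: {seconds / 60:5.1f} minutes")

# Roughly: 10 Gbps ~13.3 min, 40 Gbps ~3.3 min, 56 Gbps ~2.4 min, 100 Gbps ~1.3 min
```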
What Dell, HP and IBM will have to do is work within the existing network infrastructure, as firms are unlikely to rip and replace existing 10Gbps or 40Gbps infrastructure despite its bandwidth limitations. This is why Greenblatt and other technologists are looking towards architectural changes within systems rather than new data centre plumbing.
Although Dell wants to mitigate infrastructure upgrade costs by keeping changes inside the server, judging by Greenblatt's statements it is not a trivial problem that can be solved in a single generation of servers. All of this means that firms will have to pay greater attention not only to where they store data but also to how it is delivered to processors.