The “In-Memory Computing Summit 2017 – Silicon Valley” is around the corner. Slated for Oct. 24-25 at the South San Francisco Conference Center, it is the only industry-wide North American event focused on the full range of in-memory computing-related technologies and solutions.
The conference committee will be publishing a sneak peek of the agenda in the next few days. I'm very much looking forward to that. But in the meantime, I wanted to share a conversation I had back in June at the In-Memory Computing Summit Europe – held in Amsterdam June 20-21 – with Dr. Ferhat Hatay, director of Strategy and Innovation at Fujitsu.
Ferhat and his team drive new solution development in the areas of Cloud, Big Data and Internet of Things (IoT) at Fujitsu.
His June talk focused on "debunking the myths of scale-up architectures." Over the course of our conversation on that topic, he explained that his experience includes key roles at Sun Microsystems, Oracle, and HAL, driving innovative, open, large-scale infrastructure solutions for high-performance and enterprise computing.
Ferhat started his career at the NASA Ames Research Center building infrastructures for large-scale computer simulations and Big Data analysis. He forever remains a rocket scientist. Follow him on Twitter at @FerhatSF.
Here’s an article I wrote based on what we discussed back in June…
Scale-up vs. scale-out architectures: Debunking the myths
The scale-up approach to database architecture design often gets a bad rap. It’s viewed as just “buying a bigger box” in order to increase database capacity.
Dr. Ferhat Hatay wants to change your mind.
“When growing capacity and power in the data center, the architectural trade-offs between server scale-up vs. scale-out continue to be debated,” he said. “Both approaches are valid: scale-out adds multiple, smaller servers running in a distributed computing model, while scale-up adds fewer, more powerful servers that are capable of running larger workloads.”
In a scale-out system, new hardware can be added and configured as the need arises. When a scale-out system reaches its storage limit, another array can be added to expand the system capacity.
The scale-up approach is an example of vertical scalability. Vertical scalability is the ability to increase the capacity of existing hardware or software by adding resources to a physical system -- for example, adding processing power to a server to make it faster. In the case of storage systems, it means adding more devices, such as disk drives, to an existing system when more capacity is required.
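To make that distinction concrete, here's a quick back-of-the-envelope sketch in Python. The node counts, per-node memory sizes, and costs are hypothetical placeholders of my own, not figures from Fujitsu or from our conversation; the only point is that the same total in-memory capacity can be reached with many small nodes or a few large ones.

```python
# Back-of-envelope comparison of scale-out vs. scale-up capacity planning.
# All numbers below are hypothetical placeholders, for illustration only.

def total_capacity(nodes: int, ram_per_node_gb: int) -> int:
    """Total in-memory capacity of a cluster, in GB."""
    return nodes * ram_per_node_gb

# Scale-out: many smaller commodity servers.
scale_out_nodes = 100
scale_out_ram_gb = 256          # 256 GB of RAM per commodity node (assumed)

# Scale-up: a few large-memory servers.
scale_up_nodes = 2
scale_up_ram_gb = 12 * 1024     # 12 TB of RAM per large node (assumed)

print("Scale-out capacity:", total_capacity(scale_out_nodes, scale_out_ram_gb), "GB")
print("Scale-up capacity: ", total_capacity(scale_up_nodes, scale_up_ram_gb), "GB")

# Roughly comparable totals (25,600 GB vs. 24,576 GB here), but very different
# operational profiles: 100 nodes to network, patch, and balance data across,
# versus 2 nodes where a large dataset can sit in a single address space.
```

Neither column of that comparison "wins" on its own; as Ferhat puts it, the right answer depends on the workload, the budget, and who has to operate the result.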
“I can scale one thing on 100 computers, or I can scale it all on one computer. Which one is better?” Ferhat asked. “That’s a choice -- with different answers at different times for different requirements. An institution, for example, might prefer just one server because it’s easier to manage and does the job in an economically sound way.”
Evolving computer architectures & systems
“We are seeing a shift in the economics. It used to be that having a single, large computer meant a big gap in acquisition cost and operating complexity. But that gap has narrowed rapidly. Large-memory, large-capacity systems now use the same memory and RAM as everything else, and that is where innovation has been concentrated.”
There are additional, unique advantages that scale-up architectures offer, Ferhat added.
“One big advantage is large memory and compute capacity which makes In-Memory Computing possible,” he said. “This means that large databases can now reside entirely in memory, boosting the analytics performance, as well as speeding up transaction processing. By virtually eliminating disk accesses, database query times can be shortened by many orders of magnitude, leading to real-time analytics for greater business productivity, converting wait time to work time.”
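A rough latency calculation shows why eliminating disk access can shorten query times by orders of magnitude. The figures below are common ballpark numbers (roughly 100 nanoseconds for a DRAM access, about 10 milliseconds for a random seek on a spinning disk), not measurements from any specific system, and real databases cache and batch I/O, so treat this as an upper bound on the gap rather than a benchmark.

```python
# Rough illustration of why keeping data in memory changes query times by
# orders of magnitude. Latencies are ballpark figures, not measurements.

DRAM_ACCESS_S = 100e-9      # ~100 nanoseconds per random DRAM access
DISK_SEEK_S   = 10e-3       # ~10 milliseconds per random seek on spinning disk

random_reads_per_query = 1_000_000   # hypothetical query touching 1M scattered rows

in_memory_time = random_reads_per_query * DRAM_ACCESS_S
on_disk_time   = random_reads_per_query * DISK_SEEK_S

print(f"In-memory: {in_memory_time:.2f} s")                  # ~0.1 s
print(f"On disk:   {on_disk_time:.0f} s")                    # ~10,000 s (hours)
print(f"Speedup:   ~{on_disk_time / in_memory_time:,.0f}x")  # ~100,000x
```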
Ferhat noted that scale-up servers that use a high-speed internal interconnect, rather than an external network, accelerate processing through reduced software overhead and lower latency when moving data between processors and memory across the entire system.
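The same kind of rough arithmetic applies to data movement. The bandwidth figures below are generic assumptions of mine (a 10 Gb/s Ethernet link versus on the order of 100 GB/s of aggregate local memory bandwidth), not numbers Ferhat cited, but they suggest why shuffling a large working set between nodes costs far more than moving it across a single server's internal interconnect.

```python
# Rough comparison: moving a 1 TB working set over a cluster network vs.
# across a single server's memory system. Bandwidths are generic assumptions.

DATA_BYTES = 1 * 1024**4     # 1 TB working set (hypothetical)

NETWORK_BW = 10e9 / 8        # 10 Gb/s Ethernet ≈ 1.25 GB/s (assumed)
MEMORY_BW  = 100e9           # ~100 GB/s aggregate memory bandwidth (assumed)

network_seconds = DATA_BYTES / NETWORK_BW
memory_seconds  = DATA_BYTES / MEMORY_BW

print(f"Over 10 GbE network: {network_seconds / 60:.1f} minutes")  # ~14.7 minutes
print(f"Within one server:   {memory_seconds:.1f} seconds")        # ~11 seconds
```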
“Is it feasible and economical to support both scale-out and scale-up workloads on the same system or class of systems?” he asked. “At the end of the day, it’s a question of how many nodes (scale-out) and the size of each node (scale-up).”
For newer workloads like Big Data or Deep Analytics, the scale-up model is a compelling option that should be considered, according to Ferhat.
“Given the significant innovations in server design over the past few years, concerns about cost and scalability in the scale-up model have been rendered invalid,” he said. “With the unique advantages that newer scale-up systems offer, businesses today are realizing that a single scale-up server can process Big Data and other large workloads as well or better than a collection of small scale-out servers in terms of performance, cost, power, and server density.”
“At Fujitsu, we continue to invest in processor design for scale-up architectures that are ideal for dealing with large amounts of memory and large data flows with multiple data streams being executed at the same time,” Ferhat said. “We’re intent on providing customers with the ‘tools’ and technologies that will help them capitalize on digital opportunities and accelerate their competitive advantage.”
Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions and services. Its server portfolio covers a broad spectrum of products, ranging from mission-critical IA and industry-standard servers to UNIX servers, mainframes and supercomputer systems.