Berkeley Researchers Highlight Emergence of In-Memory Processing

Excellent paper released by researchers at the University of California, Berkeley. They analyzed data from the Hadoop installation at Facebook (one of the largest such installations in the world), looking at various metrics for Hadoop jobs running in a Facebook datacenter that has over 3,000 computers dedicated to Hadoop-based processing.

They came up with some very interesting insights. I advise everyone to read it firsthand, but I will list some of the interesting bits here.

The traditional quest for disk locality (i.e. affinity between a Hadoop task and the disk that contains its input data) was based on two key assumptions:

  1. Local disk access is significantly faster than network access to a remote disk.

  2. Hadoop tasks spend a significant amount of their processing time in disk IO reading input data.


Through careful analysis of the Hadoop system at Facebook (their prime testbed), the authors claim that both of these assumptions are rapidly losing hold:

  1. With new full-bisection-bandwidth topologies in modern datacenters, local disk access is almost identical in performance to network access, even across racks (the performance difference between the two is less than 10% today).

  2. Greater parallelization and data compression lead to lower disk IO demand on individual tasks; in fact, Hadoop jobs at Facebook deal mostly with text-based data that can be compressed dramatically, as the sketch after this list illustrates.
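
To give a feel for what "compressed dramatically" means for text data, here is a minimal, illustrative Java sketch (not from the paper) using the standard java.util.zip GZIP classes; the sample log line and the resulting ratio are purely illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public class TextCompressionDemo {
    public static void main(String[] args) throws Exception {
        // Repetitive, text-based input, similar in spirit to log data.
        String text = "2011-08-01 INFO mapreduce.Job: map 100% reduce 100%\n".repeat(10_000);
        byte[] raw = text.getBytes("UTF-8");

        // Compress with GZIP, a codec commonly used for Hadoop input data.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(raw);
        }

        System.out.printf("raw: %d bytes, gzipped: %d bytes, ratio: %.1fx%n",
                raw.length, buf.size(), (double) raw.length / buf.size());
    }
}
```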


The authors then argue that memory locality (i.e. keeping input data in memory and maintaining affinity between a Hadoop task and its in-memory input data) produces much greater performance advantages because:

  • RAM access is up to three orders of magnitude faster than local disk access (some ballpark numbers follow this list)

  • Even though memory size is significantly smaller than disk capacity, it is large enough for most cases (see below)
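
To put ballpark numbers on both points (my figures, not the paper's): DRAM access latency is on the order of 100 ns versus roughly 10 ms for a disk seek, and memory bandwidth runs to tens of GB/s versus roughly 100 MB/s of sequential throughput for a commodity disk of that era. As for capacity, a 3,000-node cluster with, say, 32 GB of RAM per node offers close to 100 TB of aggregate memory, which is ample for the input of a typical job.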


Consider this fact: even though 75% of all HDFS blocks are accessed only once, 64% of Hadoop jobs at Facebook achieve full memory locality for all their tasks (!). In the case of Hadoop, full locality means that there is no outlier task that has to access disk and delay the entire job. And all of this is achieved using a rather primitive LFU caching policy and basic pre-fetching of input data (a minimal LFU sketch follows below).
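
For readers unfamiliar with LFU: the cache simply evicts the least-frequently-used entry when full. The paper does not publish its cache code, so the following is a minimal, illustrative Java sketch (O(n) eviction scan, no concurrency), not the authors' implementation:

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal LFU cache: evicts the entry with the lowest access count. */
public class LfuCache<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Integer> counts = new HashMap<>();

    public LfuCache(int capacity) {
        this.capacity = capacity;
    }

    public V get(K key) {
        V value = values.get(key);
        if (value != null) {
            counts.merge(key, 1, Integer::sum); // bump access frequency
        }
        return value;
    }

    public void put(K key, V value) {
        if (!values.containsKey(key) && values.size() >= capacity) {
            evictLeastFrequent();
        }
        values.put(key, value);
        counts.merge(key, 1, Integer::sum);
    }

    private void evictLeastFrequent() {
        K victim = null;
        int min = Integer.MAX_VALUE;
        for (Map.Entry<K, Integer> e : counts.entrySet()) {
            if (e.getValue() < min) {
                min = e.getValue();
                victim = e.getKey();
            }
        }
        values.remove(victim);
        counts.remove(victim);
    }
}
```

In an HDFS-style cache the keys would be block IDs and the values block buffers; the point is that even this naive policy captures most re-accesses once the working set fits in memory.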

Based on these facts, the authors conclude that disk locality is no longer worthwhile to vie for, and that in-memory co-location is the way forward for high-performance big data processing, as it yields far greater returns.
Facebook's case is solid proof of this technology, and GridGain's In-Memory Data Platform is a solid platform for the rest of us.