March 28 NYC In-Memory Computing Meetup recap

The March 28 gathering of the NYC In-Memory Computing Meetup in upper Manhattan was well attended, and both presenters (experts from GridGain and Oracle) fielded several questions from the audience.


Making his NYC In-Memory Computing (IMC) Meetup debut was Glenn Wiebe, a GridGain solutions architect. He was joined by NYC IMC Meetup veteran Doug Hood, Oracle TimesTen Scaleout evangelist, who has spoken at nearly every IMC meetup here since the series began in late 2017.


Glenn’s talk was titled “IT Modernization in Practice: How Apache Ignite adds speed, scale & agility to databases, Hadoop & analytics” (download his slides).

Abstract: The growth in data volume and velocity, the aging of IT infrastructure, and the continual drive to do "more with less" are putting added stress on existing data sources, application development teams, and the applications, tools, and clients that consume this data.


IT modernization increasingly targets legacy databases and the roadblocks they have become to delivering faster analytics while holding greater volumes and varieties of data. Modernization projects are being asked to deliver effective architectures and development patterns for new applications, such as modern web apps and new tools for analytics, machine learning, and deep learning.


In-memory computing can deliver the real-time performance that modern apps and tools expect, at the massive scale of existing and expanding data sources, driving the digital transformation founded on the modern analytics and tools your clients expect.


Topics covered:

* Add speed and scale to existing applications that use relational or NoSQL databases, with no rip-and-replace

* Perform real-time and streaming analytics at scale, with details of how companies use Apache Ignite and Apache Spark

* Address performance issues with Hadoop and "SQL" for real-time reporting

* Adopt machine and deep learning, covering both model training and execution, including TensorFlow
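The first topic above, adding an in-memory layer in front of an existing database without rip-and-replace, is often realized as a cache-aside (read-through) pattern. The sketch below is a conceptual illustration only, not Apache Ignite's actual API; the dict-based cache and backing store are hypothetical stand-ins for an in-memory grid and an existing RDBMS.

```python
# Conceptual cache-aside sketch: an in-memory layer in front of an
# existing database. The dict `cache` stands in for an in-memory data
# grid; `backing_store` stands in for the existing database. Both are
# illustrative assumptions, not a real Ignite deployment.
backing_store = {"user:1": "Alice", "user:2": "Bob"}
cache = {}

def read_through(key):
    """Serve from memory when possible; fall back to the database on a miss."""
    if key in cache:
        return cache[key]               # fast path: in-memory hit
    value = backing_store.get(key)      # slow path: query the existing database
    if value is not None:
        cache[key] = value              # populate the in-memory layer
    return value
```

Because the existing database remains the system of record, applications gain read speed without schema migration, which is the "no rip-and-replace" point of the talk.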


Doug’s talk was titled, "How to Scale Non-Trivial Applications."


Abstract: Scaling trivial workloads is easy (e.g., 38 million YCSB B ops/sec or 1.4 billion SQL queries per second), but what if your workload is not trivial? This talk covered a customer use case showing how to scale with:

* Five-table joins where the data is distributed (hashed) across many machines

* ACID transactions where the unit of work has seven updates and seven queries

* A database that must be highly available and "just work"

* A performance comparison with other well-known distributed databases
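The "distributed (hashed) across many machines" point can be made concrete with a small sketch. This is a generic illustration of hash-based data placement, not TimesTen Scaleout's actual implementation; the node count and hash function are arbitrary assumptions.

```python
# Conceptual sketch of hash-based data distribution: each row's key is
# hashed to pick the machine that stores it. NUM_NODES and sha256 are
# illustrative choices, not how any particular product does it.
import hashlib

NUM_NODES = 4  # hypothetical cluster size

def node_for_key(key, num_nodes=NUM_NODES):
    """Map a primary-key value to a node index by hashing it."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

# Rows with related keys may land on different nodes, which is why
# multi-table joins over hashed data need cross-node coordination.
rows = ["customer:1", "order:17", "order:42", "invoice:9"]
placement = {row: node_for_key(row) for row in rows}
```

Once related rows can land on different machines, a five-table join must either ship rows between nodes or colocate related data up front, which is exactly the kind of non-trivial scaling problem the talk addressed.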
