NYC In-Memory Computing Meetup
Join us for a special Monday evening meetup in Manhattan!
You'll enjoy an evening of insightful talks, tasty food, and refreshing beverages. And of course the evening wouldn't be complete without our raffle! This meetup, as usual, is free thanks to the sponsorship of GridGain Systems.
> Bill Bejeck, software engineer at Confluent
> Doug Hood, Oracle TimesTen Scaleout evangelist
> Akmal Chaudhri, Apache Ignite technology evangelist
* 6 p.m. – Food and drinks
* 6:10 p.m. – Talk 1 (Bill): "Robust Operations of Kafka Streams"
* 6:45 p.m. – Talk 2 (Doug): "A Column Store for Analytics and a Row Store for Transactions, Please!"
* 7:25 p.m. – Talk 3 (Akmal): "Relational DBMSs: Faster Transactions and Analytics with In-Memory Computing"
* 8 p.m. – Raffle drawings (register here: http://bit.ly/Jan14raffle)
* 8:05 p.m. – Finis!
>> Akmal's talk: Over the last decade, 10x growth in transaction volumes and 50x growth in data volumes – along with the drive for real-time visibility and responsiveness – have pushed traditional technologies, including databases, beyond their limits. Your choices are either to buy expensive hardware to accelerate the wrong architecture, or to do what other companies have started to do and invest in technologies built for modern hybrid transactional/analytical processing (HTAP).
Combining Apache Ignite with a Relational DBMS can offer enterprises the best of both open-source worlds: a highly scalable, high-velocity, grid-based in-memory SQL database, paired with a robust, fully featured SQL persistent datastore for advanced analytics and data-warehouse capabilities.
Topics to be covered:
* How to complement a Relational DBMS for Hybrid Transactional/Analytical Processing (HTAP) by leveraging the massively parallel processing and SQL capabilities of Apache Ignite.
* How to use Apache Ignite as an In-Memory Data Grid that stores data in memory and boosts application performance by offloading reads from a Relational DBMS.
* The strategic benefits of using Apache Ignite instead of Memcached, Redis, GigaSpaces, or Oracle Coherence.
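To give a flavor of the second topic above, here is a minimal sketch of a Spring-style Ignite configuration that fronts a relational table with a read-through/write-through cache. The cache name and data-source bean name are placeholders, and a complete setup would also need JDBC type mappings for the cached objects:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
      <property name="name" value="personCache"/>
      <!-- On a cache miss, load the entry from the backing RDBMS -->
      <property name="readThrough" value="true"/>
      <!-- Propagate cache updates back to the RDBMS -->
      <property name="writeThrough" value="true"/>
      <property name="cacheStoreFactory">
        <!-- Maps cache entries to table rows; "myDataSource" must name a
             JDBC DataSource bean defined elsewhere in the configuration -->
        <bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
          <property name="dataSourceBean" value="myDataSource"/>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```

With read-through enabled, application reads hit memory first and fall back to the database only on a miss, which is the offloading pattern the bullet describes.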
>> Bill's talk: In mid-2016, Apache Kafka added Kafka Streams, a powerful stream processing library that runs on top of Kafka. The community has embraced Kafka Streams with many early adopters, and the adoption rate continues to grow. Mid-size to large organizations have come to rely on Kafka Streams in their production environments. Kafka Streams has many advanced features that make applications more robust.
The point of this presentation is to show users of Kafka Streams the latest features, including some advanced ones, that can make streams applications more resilient. The target audience is users who are already comfortable writing Kafka Streams applications and want to go from writing their first proof-of-concept applications to writing robust applications that can withstand the rigors of running in a production environment.
The talk will be a technical deep dive covering topics like:
* Best practices on configuring a Kafka Streams application
* How to meet production SLAs by minimizing failover and recovery times: configuring standby tasks and the pros and cons of having standby replicas for local state
* How to improve resiliency and 24×7 operability: the use of configurable error handlers and callbacks, and how they can be used to see what's going on inside the application
* How to achieve efficient scalability: a thorough review of the relationship between the number of instances, threads and state stores and how they relate to each other
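As a small taste of the configuration topics above, a hypothetical Kafka Streams properties file might enable standby replicas and a lenient deserialization error handler like this (the application id and broker address are placeholders; the property names come from the Kafka Streams configuration reference):

```properties
# Identifies the application; also prefixes its internal topic names
application.id=orders-stream-app
bootstrap.servers=broker1:9092

# Keep a warm copy of local state on another instance to shrink failover time
num.standby.replicas=1

# Process partitions in parallel within one instance
num.stream.threads=4

# Log and skip records that fail to deserialize instead of crashing the app
default.deserialization.exception.handler=org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
```

Standby replicas trade extra memory and network traffic for faster recovery, one of the pros-and-cons discussions the talk promises.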
>> Doug's talk: Learn how to benefit from a hybrid database that can use both column and row stores for concurrent transactions and streaming analytics.
See how these techniques work for both single-instance and multi-instance databases with HA, both on premises and in the cloud.