Learn About In-Memory Computing
Learn how Apache Ignite™ simplifies development and improves performance for Apache Spark™. This session will explain how Apache Spark and Ignite are integrated, and how they are used together for analytics, stream processing and machine learning. By the end of this session you will understand:
In-Memory Computing: A New Engine for Accelerating the Data Behind Digital Business and the Customer Experience
Matt Aslett, Rob Meyer
The need to engage more intelligently in real time during each transaction or interaction, whether to personalize and recommend products or to improve the overall customer experience across multiple channels, is driving demand for new infrastructure with much lower latency and much higher scalability. The solution many companies have adopted is to move all transactional and analytical data into memory and collocate computing with it, using in-memory computing technologies.
The 10x growth in transaction volumes, the 50x growth in data volumes, and the drive for real-time visibility and responsiveness over the last decade have pushed traditional technologies, including databases, beyond their limits. Your choices are to either buy expensive hardware to accelerate the wrong architecture, or do what other companies have started to do and invest in technologies built for modern hybrid transactional/analytical processing (HTAP).
The need for real-time computing has resulted in the growth of many different in-memory computing technologies including caches, in-memory data grids, in-memory databases, streaming technologies and broader in-memory computing platforms. But what are the best technologies for each type of project? Learn about your options from one of the leading in-memory computing veterans. This webinar will explain the evolution of in-memory computing, the different types of technologies available today, and when to use them, including:
It used to be that the only way to improve application performance was to add a cache. But caches like Redis don't understand SQL. They require you to modify your applications with non-SQL code and data models, and to copy and sync data across two different models. They don't support ACID transactions very well. And they have their limits when it comes to scalability.
Apache Ignite native persistence is a distributed, ACID- and SQL-compliant store that turns Apache Ignite into a full-fledged distributed SQL database. It allows you to keep anywhere from 0 to 100% of your data in RAM with guaranteed durability across a broad range of storage technologies, get immediate availability on restart, and achieve high-volume read and write scalability with low latency using SQL and ACID transactions.
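As a rough sketch of how native persistence is switched on (assuming the Apache Ignite 2.x Java API; the class name `PersistenceExample` is illustrative), durability is enabled per data region and the cluster is then activated:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistenceExample {
    public static void main(String[] args) {
        // Enable disk-backed durability for the default data region.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration()
                  .setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration()
                .setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // With persistence enabled, a new cluster starts inactive
            // and must be activated before caches can be used.
            ignite.cluster().active(true);
        }
    }
}
```

Because the data region is persistence-enabled, cache entries survive a restart and the node can serve reads immediately, pulling cold data from disk while RAM warms up.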
Akmal B. Chaudhri
Learn some of the best practices companies have used to increase performance of existing or new SQL-based applications up to 1,000x, scale to millions of transactions per second and handle petabytes of data by adding Apache® Ignite™.
Distributed platforms like Apache® Ignite™ rely on a horizontal “scale-out” architecture where you dynamically add more machines to achieve near-linear, elastic scalability. But how does it really work? What are its limits? And how can you optimize performance and scalability? In this webinar, we will cover the challenges engineers face when designing distributed systems, and the tips and tricks for optimizing Apache Ignite including:
In this webinar, Denis Magda, GridGain Director of Product Management and Apache Ignite PMC Chairman, will introduce the fundamental capabilities and components of a distributed, in-memory computing platform. With increasingly advanced coding examples, you'll learn about:

- Collocated processing
- Collocated processing for distributed computations
- Collocated processing for SQL (distributed joins and more)
- Distributed persistence usage

This is Part 2 of a 2-part webinar series designed for software developers and architects.
Akmal B. Chaudhri
The healthcare industry presents many different challenges for the storage and analysis of massive amounts of data in real time. This is due to varying requirements across the industry, such as the increasing use of Electronic Health Records (EHRs), personalized medicine, new patient and provider expectations for real-time insurance systems, and drug discovery.
In this 1-hour webinar, GridGain Systems Chief Product Officer Dmitriy Setrakyan will present how distributed memory-centric architectures can be applied to various financial systems.
In this webinar, Denis Magda, GridGain Director of Product Management and Apache Ignite PMC Chairman, will introduce the fundamental capabilities and components of an in-memory computing platform, and demonstrate how to apply the theory in practice. With increasingly advanced coding examples, you’ll learn about:
During this 1-hour webinar, GridGain Product Manager and Apache® Ignite™ PMC Chair Denis Magda will discuss a Fast Data solution that can receive endless streams from the Internet.
Akmal B. Chaudhri
Learn how to boost performance 1,000x and scale to over 1 billion transactions per second with in-memory storage of hundreds of terabytes of data for your SQL-based applications. Apache Ignite is a unique data management platform built on top of a distributed key-value store that provides full-fledged SQL support. Attendees will learn how Apache Ignite handles auto-loading of a SQL schema and data from a relational DBMS, supports SQL indexes (including compound indexes), and supports various forms of SQL queries, including distributed SQL joins. Examples will show:
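To give a flavor of the SQL capabilities described above, here is a small sketch of Ignite-style DDL and a distributed join (table, column, and index names are illustrative; these statements can be issued over JDBC/ODBC or the Java API):

```sql
-- Each table is backed by a distributed key-value cache;
-- the WITH clause passes Ignite-specific storage parameters.
CREATE TABLE city (
    id   BIGINT PRIMARY KEY,
    name VARCHAR
) WITH "template=replicated";

CREATE TABLE person (
    id      BIGINT PRIMARY KEY,
    name    VARCHAR,
    city_id BIGINT
) WITH "backups=1";

-- Secondary and compound indexes to speed up lookups.
CREATE INDEX idx_person_name ON person (name);
CREATE INDEX idx_person_city_name ON person (city_id, name);

-- A join executed as a distributed SQL query across the cluster.
SELECT p.name, c.name AS city
FROM person p
JOIN city c ON c.id = p.city_id;
```

Collocating joined data on the same nodes (for example, by partitioning `person` by `city_id`) avoids cross-node data shuffling and is one of the main levers for join performance at scale.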
PostgreSQL is one of the most popular open source RDBMSs. Apache® Ignite™ is the leading open source in-memory computing platform. The Apache Ignite distributed computing platform is inserted between the application and data layers and works with all common RDBMS, NoSQL and Hadoop® databases to provide speed, scale and high availability. When Postgres comes up short, Ignite may be able to help you bridge the gap. Join Fotios Filacouris, GridGain Solution Architect, as he discusses how you can supplement PostgreSQL with Apache Ignite. You'll learn:
Akmal B. Chaudhri
Machine learning is a method of data analysis that automates the building of analytical models. By using algorithms that iteratively learn from data, computers are able to find hidden insights without being explicitly programmed. These insights bring tremendous benefits to many different domains. For business users in particular, they help organizations improve customer experience, become more competitive, and respond much faster to opportunities or threats.