Many machine learning (ML) and deep learning (DL) platforms are slow to update models in production, sometimes taking hours or days. This delay stems from running ML processing on a system separate from the operational transaction system in order to avoid degrading transaction performance, which means data must be moved between the two systems before each model update.
Join us for this webinar to learn how to overcome these challenges by leveraging the Apache Ignite ML framework to implement a continuous machine learning (CML) platform. A CML platform runs the ML compute code on the same cluster that holds the transactional data, without impacting the performance of the transaction system. As a result, ML models can be updated in real time using the latest available data.
Topics covered will include:
- An overview of massively distributed ML/DL architectures, including design, implementation, usage patterns, and cost/benefit analysis
- Detailed coverage of the Apache Ignite ML/DL pipeline steps, from preprocessing to real-time prediction
- Discussion of out-of-the-box algorithms and of adapters that can leverage third-party libraries such as Apache Spark, XGBoost, and TensorFlow, as well as custom code
- Detailed code examples and a demo showing how to use the Apache Ignite 2.8 ML framework for continuous learning tasks