The current implementation of ML algorithms in Spark has several drawbacks: the costly conversion from standard Spark SQL types to ML-specific types, limited adaptation of the algorithms to distributed computing, and a relatively slow pace of adding new algorithms to the library.

Also, Spark ML does not natively support online learning for all algorithms, nor stacking, boosting, or a number of approximate ML algorithms that give a significant speedup in many cases.

If you choose Ignite ML instead, you can avoid most of the problems mentioned above.

Apache Ignite currently ships with an Ignite ML module that includes many distributed ML algorithms, a set of approximate ML algorithms, and simple integration with TensorFlow via the TensorFlow IgniteDataset (currently part of the tf.contrib package). In addition, every algorithm supports model updating, which enables online learning not only for KMeans and LinReg.
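To make the "model updating" idea concrete: Ignite ML trainers can both fit a model on an initial dataset and later refine that same model with new data, instead of retraining from scratch. The sketch below is a generic, self-contained illustration of that fit/update pattern in plain Python (a tiny SGD linear regression); it is not Ignite's actual API or implementation, and the class and parameter names are hypothetical.

```python
# Generic sketch of the fit/update (online-learning) pattern that
# Ignite ML trainers expose: train on an initial batch, then update
# the existing model incrementally as fresh data arrives.
# NOTE: illustrative only -- not Ignite ML's actual implementation.

class OnlineLinearRegression:
    def __init__(self, lr=0.05):
        self.lr = lr   # learning rate (hypothetical parameter name)
        self.w = 0.0   # slope
        self.b = 0.0   # intercept

    def fit(self, xs, ys, epochs=200):
        # Initial training: repeated SGD passes over the first batch.
        for _ in range(epochs):
            self.update(xs, ys)
        return self

    def update(self, xs, ys):
        # One SGD pass over a new mini-batch: the existing weights
        # are refined in place, not discarded.
        for x, y in zip(xs, ys):
            err = (self.w * x + self.b) - y
            self.w -= self.lr * err * x
            self.b -= self.lr * err
        return self

# Initial training on a first batch drawn from y = 2x + 1 ...
model = OnlineLinearRegression().fit([0, 1, 2, 3], [1, 3, 5, 7])
# ... then online updates when fresh points from the same line arrive.
for _ in range(200):
    model.update([4, 5], [9, 11])
```

The key point is that `update` reuses the already-trained state, which is what lets Ignite ML offer online learning for algorithms beyond KMeans and LinReg.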

We suggest using the Apache Ignite ML module to speed up your ML training and using Ignite as a backend for distributed TensorFlow calculations.

You will see live demos of building ML pipelines with the Apache Ignite ML module, Apache Spark, TensorFlow and more.
