TensorFlow is one of the most widely used deep learning frameworks.
Of course, all deep learning initiatives require data. The classical data-feeding approach is based on files in various formats: data is extracted from a production system and written into files, and data scientists then train deep learning models using these files as a datasource.
But if the amount of data is large -- or if it's not always possible to extract data from production databases -- then this approach doesn't work very well. Until now.
Apache® Ignite™ users are now able to use TensorFlow -- connecting directly to their Apache Ignite databases and using them as a datasource. This gives them access to potentially unlimited amounts of data with extremely high throughput.
Additionally, Apache Ignite can be used as a checkpoint storage for TensorFlow models as well as a cluster management system for distributed model training.
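To make the datasource idea concrete, here is a minimal sketch of what reading training data straight from an Ignite cache looks like with the Ignite integration that shipped in TensorFlow's contrib modules (TF 1.12+). The cache name `KITTEN_CACHE`, host, and port are illustrative assumptions, not values from this post, and the helper function is hypothetical:

```python
def make_ignite_dataset(cache_name, host="localhost", port=10800):
    """Sketch: build a tf.data pipeline backed by an Apache Ignite cache.

    Assumes a running Ignite node reachable at host:port and the
    tf.contrib.ignite module available in TensorFlow 1.12+.
    """
    # Imported inside the function so the sketch can be read (and the
    # helper defined) without TensorFlow installed.
    from tensorflow.contrib.ignite import IgniteDataset

    # IgniteDataset streams cache entries directly from the cluster,
    # so no intermediate files are needed.
    dataset = IgniteDataset(cache_name=cache_name, host=host, port=port)

    # From here it is an ordinary tf.data pipeline: shuffle and batch
    # before feeding the model (parameters are illustrative).
    return dataset.shuffle(buffer_size=1000).batch(32)
```

The resulting object plugs into a standard training loop like any other `tf.data.Dataset`, which is what lets Ignite replace file-based feeding without changing the model code.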
I wrote about the topics above, and a few more, as a guest contributor for the TensorFlow blog. Please check that out here. Feel free to ask questions or share observations and suggestions in the comments section below.