Apache Ignite™ In-Memory Data Fabric is a high-performance, integrated, and distributed in-memory platform for computing and transacting on large-scale data sets in real time, orders of magnitude faster than is possible with traditional disk-based or flash-based technologies.
The Apache Ignite Integrations documentation provides comprehensive information on all the integrations that exist between Apache Ignite and other technologies and products.
The integrations are intended to simplify coupling Apache Ignite with the other technologies used in your applications or services, either to make a transition to Apache Ignite smoother or to boost an existing solution by plugging Ignite into it.
The existing integrations are divided into a number of areas covered below. To learn more about the Apache Ignite In-Memory Data Fabric in general, go to the main documentation site.
Apache Ignite can be deployed on-premise or in cloud environments. An Ignite cluster can be deployed in virtually any well-known cloud environment thanks to its integrations with Amazon AWS, Google Compute Engine, and Apache JClouds.
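As a sketch of what this looks like in practice, an AWS-hosted cluster can use S3-based node discovery so that nodes find each other through a shared bucket. The following Spring XML fragment is a minimal sketch, assuming the ignite-aws module is on the classpath; the bucket name and the credentials bean name are placeholders, not values from this documentation:

```xml
<!-- Sketch: S3-based node discovery for an AWS deployment.
     Assumes the ignite-aws module is available; the bucket name and
     the "aws.creds" credentials bean are placeholders. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
          <property name="awsCredentials" ref="aws.creds"/>
          <property name="bucketName" value="my-ignite-discovery-bucket"/>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```

Analogous IP finders back the Google Compute Engine and Apache JClouds integrations, so the same discovery pattern carries over across cloud providers.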
Apache Ignite Hadoop Accelerator provides a set of components that enable in-memory Hadoop job execution and file system operations. As for Spark, Ignite enriches it with an implementation of the Spark RDD abstraction that makes it easy to share state in memory across Spark jobs.
The integrations in this section let you leverage Apache Ignite purely for caching purposes. In such setups, Ignite is usually enabled at the configuration level, so you avoid code-level modifications.
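For instance, plugging Ignite in as a Hibernate second-level cache can be a configuration-only change. The fragment below is a hedged sketch of a hibernate.cfg.xml excerpt, assuming the ignite-hibernate module; the grid-name property and its value are assumptions for illustration, and no application code is touched:

```xml
<!-- Sketch: enabling Ignite as Hibernate's second-level cache.
     Assumes the ignite-hibernate module; the grid name is a placeholder. -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.use_query_cache">true</property>
<property name="hibernate.cache.region.factory_class">
  org.apache.ignite.cache.hibernate.HibernateRegionFactory
</property>
<property name="org.apache.ignite.hibernate.grid_name">my-grid</property>
```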
To facilitate the deployment of the different Ignite modules, along with their dependencies, Apache Ignite offers a set of Karaf features packaged in a feature repository. This makes it possible to quickly provision Ignite in an OSGi environment with a single command in the Karaf shell.
This integration makes it possible to use Apache Ignite as a distributed in-memory layer on top of Apache Cassandra as a persistent store. Once the data is preloaded from Cassandra into Ignite, you can execute ANSI-99 SQL queries and ACID transactions over it, with Ignite keeping the in-memory and on-disk data sets in sync.
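The in-sync behavior comes from configuring the cache in read-through/write-through mode with a Cassandra-backed cache store. The cache configuration below is a minimal sketch, assuming the ignite-cassandra module; the cache name and the referenced data-source and persistence-settings bean names are placeholders:

```xml
<!-- Sketch: an Ignite cache persisted to Cassandra.
     Assumes the ignite-cassandra module; "cassandraDataSource" and
     "personPersistenceSettings" are placeholder bean names defined elsewhere. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="personCache"/>
  <!-- Read-through/write-through keeps the in-memory and Cassandra data in sync. -->
  <property name="readThrough" value="true"/>
  <property name="writeThrough" value="true"/>
  <property name="cacheStoreFactory">
    <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
      <property name="dataSourceBean" value="cassandraDataSource"/>
      <property name="persistenceSettingsBean" value="personPersistenceSettings"/>
    </bean>
  </property>
</bean>
```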
Apache Ignite has a variety of integrations with well-known streaming products and technologies such as Kafka, Camel, and JMS, which make it possible to inject streams of data into Ignite easily and efficiently.
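As one hedged illustration of the Camel side, a Spring XML route using Camel's ignite-cache component could feed messages from a JMS queue straight into an Ignite cache. The queue name, cache name, and endpoint options below are assumptions for illustration, and the component expects the cache key to be supplied in a message header:

```xml
<!-- Sketch: a Camel route streaming JMS messages into an Ignite cache.
     Queue and cache names are placeholders; the entry key is assumed
     to arrive in a Camel message header. -->
<route>
  <from uri="jms:queue:marketData"/>
  <to uri="ignite-cache:marketDataCache?operation=PUT"/>
</route>
```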