In-Memory Computing on .NET
The GridGain® in-memory computing platform is a polyglot platform that supports multiple languages and offers first-class support for .NET in-memory computing.
Its main languages, Java, .NET, and C++, interoperate transparently with one another. However, you don't have to use the platform in a cross-platform fashion: you can run a purely .NET, C++, or Java data grid. Using more than one language is supported, never required.
If you look at other projects that provide .NET integration, you will often see .NET offered only as a thin API that exposes perhaps 10 or 20 percent of the platform's functionality and lacks features.
One goal of the Apache® Ignite™ project, the foundation of the GridGain in-memory computing platform, is to ensure that it doesn't matter which language a user chooses to deploy: you can deploy an in-memory computing cluster in .NET, C++, or Java. Although GridGain is written in Java, about 99 percent of the functionality provided in Java is also natively available in .NET. This includes APIs, configuration, and runtime. You can even run .NET closures and .NET code on Java servers.
.NET In-Memory Computing with Server Side Execution
You need .NET on the server side only if you plan to execute .NET closures or other .NET logic there, for example a computation or persistence logic written in C# or another .NET language. Executing that logic requires deploying the .NET CLR alongside the JVM on the server. All the scripts needed to start .NET and Java together are included in GridGain. If you do not run .NET logic on the server side, you can run .NET clients against pure Java servers.
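As a sketch of what such server-side .NET logic can look like, the following example defines a compute closure and broadcasts it to every cluster node using the public Apache Ignite .NET API (from the Apache.Ignite NuGet package). The class name and the message are illustrative; the example assumes the nodes were started with the bundled scripts that launch Java and .NET together.

```csharp
using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Compute;

// Illustrative closure; [Serializable] lets GridGain marshal it to remote nodes.
[Serializable]
class NodeGreeting : IComputeFunc<string>
{
    public string Invoke() => $"Hello from {Environment.MachineName}";
}

class Program
{
    static void Main()
    {
        using (IIgnite ignite = Ignition.Start())
        {
            // Runs NodeGreeting.Invoke() on every node in the cluster,
            // including servers that host the .NET CLR next to the JVM.
            foreach (var msg in ignite.GetCompute().Broadcast(new NodeGreeting()))
                Console.WriteLine(msg);
        }
    }
}
```

`Broadcast` returns one result per node; `Call` would run the closure on a single node instead.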
You can also write .NET persistence logic and plug it into GridGain, so you can integrate natively with any kind of persistent store using ADO.NET.
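A minimal sketch of such a store, assuming a SQL Server backend: the table name, columns, and connection string below are hypothetical, and the adapter base class comes from the Apache Ignite .NET cache store API.

```csharp
using System.Data.SqlClient;
using Apache.Ignite.Core.Cache.Store;

// Sketch of a read/write-through store backed by SQL Server via ADO.NET.
// The "Persons" table and the connection string are hypothetical.
class PersonStore : CacheStoreAdapter<int, string>
{
    const string ConnStr = "Server=localhost;Database=Demo;Integrated Security=true";

    public override string Load(int key)
    {
        using (var con = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT Name FROM Persons WHERE Id = @id", con))
        {
            cmd.Parameters.AddWithValue("@id", key);
            con.Open();
            return (string)cmd.ExecuteScalar(); // null when the key is absent
        }
    }

    public override void Write(int key, string val)
    {
        using (var con = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "UPDATE Persons SET Name = @name WHERE Id = @id " +
            "IF @@ROWCOUNT = 0 INSERT INTO Persons (Id, Name) VALUES (@id, @name)", con))
        {
            cmd.Parameters.AddWithValue("@id", key);
            cmd.Parameters.AddWithValue("@name", val);
            con.Open();
            cmd.ExecuteNonQuery();
        }
    }

    public override void Delete(int key)
    {
        using (var con = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("DELETE FROM Persons WHERE Id = @id", con))
        {
            cmd.Parameters.AddWithValue("@id", key);
            con.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

The store is registered through the cache configuration's store factory, after which cache reads and writes flow through it automatically.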
JVM and CLR for .NET In-Memory Computing
GridGain runs both the JVM and the CLR in one process. If you run only Java, GridGain starts only the JVM; if you run .NET with Java, GridGain also starts the CLR. Most of the execution logic happens in Java on the server side. However, GridGain has a cross-platform binary marshalling protocol: you can serialize an object or a message in Java and deserialize it in .NET, and vice versa. It doesn't matter on which platform you use the protocol, and it has an API on every platform GridGain supports, including Java, .NET, and C++.
If you are running a .NET client, you use the native GridGain .NET API. Suppose you create a computation or an object in .NET and want to cache that object on the server side. Because a network hop is involved, GridGain marshals the object before sending it between cluster nodes. It transfers the marshaled object over JNI to the JVM, and Java handles all the routines for caching it. The reverse is also true: if you need to get an object, the same process happens in reverse order.
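From the application's point of view this whole pipeline is invisible: you put and get plain .NET objects. A hedged sketch, where the `Address` class and cache name are illustrative:

```csharp
using System;
using Apache.Ignite.Core;

// Ordinary .NET class; names and fields are illustrative.
class Address
{
    public string City { get; set; }
    public int Zip { get; set; }
}

class Program
{
    static void Main()
    {
        // Client mode: this node attaches to existing (possibly pure Java) servers.
        using (var ignite = Ignition.Start(new IgniteConfiguration { ClientMode = true }))
        {
            var cache = ignite.GetOrCreateCache<int, Address>("addresses");

            // Put marshals the object into the cross-platform binary format
            // and ships it over JNI and the network to the servers.
            cache.Put(1, new Address { City = "Chicago", Zip = 60601 });

            // Get runs the same path in reverse and deserializes on the client.
            Console.WriteLine(cache.Get(1).City);
        }
    }
}
```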
GridGain keeps data in a binary format that is understood on every platform, so there is no need to deserialize data as it moves between Java, .NET, and C++. GridGain keeps data in serialized form as much as possible and never has to deserialize it, even for index lookups, because it knows how to read fields inside an object directly from the binary data. Deserialization only becomes necessary when you access an object on the user side; at that point the binary form is converted into your native Java, .NET, or C++ object. The user side is also probably the only place where a call goes through the JNI layer.
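This binary mode is also exposed to user code. The following sketch reads a single field without ever materializing the stored .NET object; the cache name and field are illustrative and match the hypothetical `Address` example:

```csharp
using Apache.Ignite.Core;
using Apache.Ignite.Core.Binary;

class BinaryRead
{
    // Reads one field straight out of the serialized (binary) form.
    // Cache name and field name are illustrative.
    static string CityOf(IIgnite ignite, int key)
    {
        var binCache = ignite.GetCache<int, object>("addresses")
            .WithKeepBinary<int, IBinaryObject>();

        IBinaryObject bin = binCache.Get(key);   // no deserialization happens here
        return bin.GetField<string>("City");     // field access on the binary object
    }
}
```

`WithKeepBinary` returns a view of the same cache that hands back `IBinaryObject` instances instead of deserialized classes, which is useful when the class is not even present on the reading node.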
The only extra overhead is JNI, which costs about 10 to 15 percent in raw performance. In a distributed in-memory computing cluster, however, that overhead becomes negligible because the main bottleneck is the network: a JNI call generates far less overhead than a network trip. In all GridGain benchmarks there is virtually no performance difference between running a Java data grid and a .NET in-memory data grid. The same principle applies on the server side: GridGain starts the CLR with the JVM, and whenever a .NET computation is received on a server it is passed to the CLR for execution. You can run .NET logic easily and transparently on the server side. Overall, GridGain looks and feels like a full-blown .NET in-memory data grid, and many .NET users take advantage of it on a daily basis.
Another advantage is integration with LINQ. You can use native LINQ queries to query data residing in the distributed data grid. GridGain is also available on NuGet, so you can use all the NuGet features to easily code and deploy with it.
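A hedged sketch of a LINQ query against the grid, using the Apache.Ignite.Linq NuGet package; the `Person` class, its fields, and the cache name are illustrative, and the queried fields must be declared query-enabled in the cache configuration:

```csharp
using System.Linq;
using Apache.Ignite.Core;
using Apache.Ignite.Linq; // from the Apache.Ignite.Linq NuGet package

// Illustrative class; fields must be query-enabled in the cache configuration.
class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

class Query
{
    static void Run(IIgnite ignite)
    {
        var persons = ignite.GetCache<int, Person>("persons");

        // AsCacheQueryable compiles the LINQ expression into a distributed SQL
        // query executed on the server nodes, rather than pulling all entries
        // to the client and filtering locally.
        var adults = persons.AsCacheQueryable()
            .Where(e => e.Value.Age >= 18)
            .Select(e => e.Value.Name)
            .ToList();
    }
}
```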
GridGain supports the concept of dynamic caches. You can create or destroy a cache at any point in time, on the fly. You can work with native .NET classes directly using the Apache Ignite API in GridGain.
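A short sketch of the dynamic-cache API; the cache name and types are illustrative:

```csharp
using Apache.Ignite.Core;

class DynamicCaches
{
    static void Demo(IIgnite ignite)
    {
        // Creates the cache cluster-wide if it does not exist yet; otherwise
        // returns a handle to the existing cache.
        var quotes = ignite.GetOrCreateCache<string, double>("quotes");
        quotes.Put("AAPL", 180.5);

        // Destroys the cache on every node, also at runtime.
        ignite.DestroyCache("quotes");
    }
}
```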
Other GridGain Features for .NET In-Memory Computing
The GridGain Service Grid is another feature natively supported in both .NET and Java. It allows you to deploy services in a distributed fashion across multiple cluster nodes: you can deploy as many service instances as you like, and you can specify highly custom topologies. The most common use case, however, is deploying singleton services, such as cluster singletons or node singletons, on Apache Ignite.
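A hedged sketch of a .NET service deployed as a cluster singleton via the Ignite services API; the service name and behavior are assumptions:

```csharp
using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Services;

// Illustrative service; the name and counter behavior are assumptions.
[Serializable]
class CounterService : IService
{
    private int _count;

    public void Init(IServiceContext context) { }    // called before deployment
    public void Execute(IServiceContext context) { } // main service loop, if any
    public void Cancel(IServiceContext context) { }  // called on undeploy

    public int Increment() => ++_count;
}

class Program
{
    static void Main()
    {
        using (var ignite = Ignition.Start())
        {
            // Exactly one instance of this service runs in the whole cluster;
            // GridGain redeploys it elsewhere if its node fails.
            ignite.GetServices().DeployClusterSingleton("counter", new CounterService());
        }
    }
}
```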
If you use Hadoop with .NET, the Hadoop-enabled version of GridGain that supports .NET provides an in-memory file system and in-memory MapReduce. The Apache Ignite In-Memory File System (IGFS) in GridGain can be installed on top of HDFS, and any MapReduce task you execute will then run in the in-memory file system layered over HDFS.