Capital Market Use Cases for In-Memory Computing

In-Memory Computing in Capital Markets

Every industry is experiencing massive increases in the volume of data, the number of queries, and the complexity of requests. At the same time, requirements for low latency are also increasing to keep up with the speed of business. This trend has been apparent in capital markets perhaps longer than in other industries due to intense competition and a willingness to be early adopters of technology to get that extra edge. To meet the needs of capital market applications, businesses must deliver systems that provide high speed, low latency, and the ability to massively scale.

This is where in-memory technology like GridGain, the enterprise-grade version of Apache Ignite, excels.

This blog explores a range of capital market use cases for in-memory computing, including risk consolidation, portfolio management, pricing, and post-trade analysis. It also describes how the GridGain Unified Real-Time Data Platform, deployed as a digital integration hub (DIH), provides the real-time computing layer that enables those use cases.


Risk Consolidation

In-memory platforms can consolidate risk calculations from disparate systems, enabling users to maintain a holistic, real-time view of the organization’s risk exposure. Latency requirements often drive decisions regarding front office and risk systems. However, additional factors might prompt users to consider an in-memory architecture, including:

  • The high cost of accessing information
  • Heavily loaded source systems that cannot handle further queries
  • A limited source API, requiring additional processing to interpret the data

Typically, systems in use today have a combination of these factors.

These architectural limitations can be seen in a number of verticals. 
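To make the consolidation idea concrete, here is a minimal sketch in Java against the open-source Apache Ignite API that GridGain is built on. It assumes risk numbers from the source systems have already been ingested into a hypothetical cache named RiskCache whose value class, Risk, exposes desk and exposure as SQL-queryable fields; the names and schema are illustrative only.

```java
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class RiskConsolidationSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Hypothetical cache, assumed already created and loaded by the
        // ingestion pipeline with one Risk row per source-system position.
        IgniteCache<String, Object> risk = ignite.cache("RiskCache");

        // One SQL query consolidates exposure across every source system,
        // replacing N point-to-point calls to loaded or limited source APIs.
        List<List<?>> totals = risk.query(new SqlFieldsQuery(
            "SELECT desk, SUM(exposure) FROM Risk GROUP BY desk")).getAll();

        totals.forEach(row ->
            System.out.printf("desk=%s exposure=%s%n", row.get(0), row.get(1)));
    }
}
```

The point is not the query itself but where it runs: against an in-memory copy, so the heavily loaded source systems see no additional traffic.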

Digital Integration Hub 

With an in-memory data grid, companies can establish a digital integration hub (DIH) – a term coined by Gartner – to ingest data from backend enterprise systems, data lakes, NoSQL stores, and relational database management systems (RDBMSs). The hub becomes the single source of truth, keeping data up to date, available, and in memory for low latency, high performance, and more scalable compute and storage.

The DIH provides a decoupled API layer for modern online applications that simplifies cloud migrations and hybrid deployments. It can serve data from on-premises operational systems when sensitive data can’t be moved to the cloud, and it supports real-time operational reporting.

Using a DIH in a capital markets environment, IT can offer a unified API for accessing aggregated risk data while, at the backend, data is ingested from a variety of sources and normalized. With this configuration, business applications do not need to make direct API calls to each data store. Risk managers and traders can access the cached data for real-time processing without overloading existing systems that are already running near full capacity, and without paying for unnecessary third-party access.
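In practice, the decoupled API layer can be as simple as a read-through cache that faults data in from the system of record on first access. The sketch below, again using the Apache Ignite API, shows the shape of it; PositionStore, its stubbed backend lookup, and the cache name are hypothetical placeholders.

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class DihSketch {
    /** Hypothetical store: loads a position from a backend system on a cache miss. */
    public static class PositionStore extends CacheStoreAdapter<String, Double> {
        @Override public Double load(String positionId) {
            // In a real hub this would be a JDBC or API call to the operational system.
            return fetchFromBackend(positionId);
        }
        @Override public void write(Cache.Entry<? extends String, ? extends Double> e) {
            // Optional write-through back to the system of record.
        }
        @Override public void delete(Object key) { /* no-op in this sketch */ }

        private Double fetchFromBackend(String id) { return 0.0; /* placeholder */ }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<String, Double> cfg = new CacheConfiguration<>("positions");
        cfg.setReadThrough(true); // a miss goes to the backend exactly once
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PositionStore.class));

        IgniteCache<String, Double> positions = ignite.getOrCreateCache(cfg);

        // Applications call the hub's API; once loaded, the value is cached,
        // so repeat reads never touch the heavily loaded source system.
        Double exposure = positions.get("POS-1");
    }
}
```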

The most common use cases for in-memory computing are in the front and middle offices, where significant volumes of market data must be combined with real or hypothetical trades. There are many examples, including portfolio management, market risk calculations, and pricing engines.

These applications tend to have slightly different priorities in terms of latency and throughput, but all can be addressed with in-memory computing.

Portfolio Management

To optimize portfolios for risk, return, and other factors, financial analysts commonly run dozens of simulations to view the impact of different scenarios on portfolio performance. Using in-memory computing for portfolio management, users can view their portfolio through various lenses, checking how new trades might affect current positions. 

Systems like this have existed for decades, but the difference that in-memory technology makes is much greater speed. One client saw a hundred-fold improvement in response times when they moved from their legacy system to GridGain’s In-Memory Computing Platform, built on Apache Ignite. The same client saw significant benefits from the transactional persistence feature. Transactional persistence, also known as native persistence, is a set of features that provides durable, disk-based storage. When it is enabled, Ignite always stores all the data on disk and loads as much of it as it can into RAM for processing. For example, if there are 100 entries and RAM has the capacity to store only 20, then all 100 are stored on disk and only 20 are cached in RAM for better performance.
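For illustration, here is roughly what enabling native persistence looks like with the Apache Ignite API (GridGain’s enterprise configuration may differ in its details):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistenceSketch {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Turn on native persistence for the default data region: all data is
        // kept on disk, and as much as fits is also cached in RAM. Reads and
        // SQL queries transparently span both tiers.
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);

        // A persistent cluster starts inactive; activate it before first use.
        ignite.cluster().state(ClusterState.ACTIVE);
    }
}
```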

With in-memory computing enabling faster execution of simulations, the analyst team can run hundreds of simulations per hour instead of dozens per week, creating opportunities to improve portfolio composition. The company was also able to import many years of historical data into the system, enabling “What if?” calculations on historical scenarios while simultaneously allowing calculations on the most recent data at in-memory speeds. Because GridGain has a unified API for both tiers of storage, this dramatically simplified the overall architecture.

Pricing Systems

The capital markets industry has had pricing systems – applications that validate or augment incoming pricing feeds for internal consumption – for many years. What modern in-memory technology brings to pricing is scale. Most existing applications are limited to a single machine, or they require operations staff to configure a mapping between subsets of data and specific servers. On a well-architected in-memory system, this distribution happens seamlessly and automatically.
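The sketch below shows why no manual mapping is needed: declaring an Ignite cache as partitioned is enough for the platform to shard the data across however many servers are in the cluster. The cache name and backup count are illustrative.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class PricingCacheSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<String, Double> cfg = new CacheConfiguration<>("prices");
        cfg.setCacheMode(CacheMode.PARTITIONED); // data is sharded across nodes automatically
        cfg.setBackups(1);                       // one redundant copy survives a node failure

        IgniteCache<String, Double> prices = ignite.getOrCreateCache(cfg);

        // Each write lands on whichever node owns the key's partition; adding
        // a server triggers rebalancing with no operator-defined mapping.
        prices.put("EURUSD", 1.0835);
    }
}
```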

Some organizations extend the pricing use case by aggregating multiple sources into a centralized market data service: a data store plus a real-time feed of validated information, annotated with useful metadata, across asset classes and trading venues. This use case leverages an in-memory platform’s ability both to work on data in near real time and to store substantial volumes of data within a single system.
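A feed like this is typically loaded through a streaming API rather than one put() at a time. Here is a minimal sketch using Ignite’s IgniteDataStreamer; the ticks cache and the synthetic values are stand-ins for a real validated, annotated feed.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class FeedIngestSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        ignite.getOrCreateCache("ticks");

        // Batching, partition-aware loader for high-throughput feed ingestion.
        try (IgniteDataStreamer<String, Double> streamer = ignite.dataStreamer("ticks")) {
            streamer.allowOverwrite(true); // later ticks replace earlier ones per symbol

            for (int i = 0; i < 1_000_000; i++) {
                // A real feed handler would validate and annotate each tick here.
                streamer.addData("SYM-" + (i % 500), Math.random());
            }
        } // close() flushes any remaining batches
    }
}
```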

In addition to use cases in the front office, there are also many use cases for in-memory computing in the middle office.

As with the front office use cases, any time large amounts of data meet significant demands for compute, in-memory technology is a good fit. In the middle office, the volume of data and compute tends to be higher than in the front office, but the latency requirements are typically less stringent. Typical middle office applications are various kinds of risk calculations, ranging from traditional historical and Monte Carlo-based value at risk (VaR) calculations and stress tests to computations for compliance reporting driven by regulations such as the Fundamental Review of the Trading Book (FRTB) and X-Value Adjustment (XVA).

In-memory technology helps here by making the calculations run much faster. Final reports can be ready earlier and more complex calculations can be performed in the same time window. 
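As a toy illustration of the pattern (not a production VaR model), the sketch below fans 100 batches of simulated P&L trials out across an Ignite cluster and gathers the results; simulatePnl() is a stub standing in for a real pricing model.

```java
import java.util.ArrayList;
import java.util.Collection;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

public class MonteCarloSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Split 1,000,000 Monte Carlo trials into 100 independent jobs.
        Collection<IgniteCallable<Double>> jobs = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            jobs.add(() -> {
                double worst = 0;
                for (int t = 0; t < 10_000; t++) {
                    double pnl = simulatePnl(); // placeholder P&L simulation
                    worst = Math.min(worst, pnl);
                }
                return worst;
            });
        }

        // Jobs run in parallel across the cluster; results return to the caller.
        Collection<Double> results = ignite.compute().call(jobs);

        System.out.println("Worst simulated loss: " +
            results.stream().mapToDouble(Double::doubleValue).min().orElse(0));
    }

    private static double simulatePnl() { return Math.random() - 0.5; /* stub */ }
}
```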

How does it work? The easy answer is that the data is in memory, which, being faster than disk, results in dramatically quicker calculations. But that’s only half the story. It’s no good having all your data in memory if it’s not on the same machine. While it’s true that network access is faster than reading the same data from disk (or, worse, from a SAN), it would be much better if all the data you needed to work with was guaranteed to be local. Technologies like GridGain include an affinity feature that delivers exactly that. Much as you include indexes or foreign keys when designing a data model for a legacy database, with in-memory software you provide hints on how your data is interrelated.
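With Ignite, that hint is an affinity key. In the hypothetical sketch below, every trade is colocated with the rest of its portfolio, and the calculation is shipped to the node that owns the data rather than the data being shipped to the calculation; the key class, cache name, and portfolio IDs are illustrative.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class AffinitySketch {
    /** Hypothetical cache key: trades keyed this way are stored on the same
        node as every other trade in the same portfolio. */
    public static class TradeKey {
        private final String tradeId;

        @AffinityKeyMapped
        private final String portfolioId; // the colocation "hint"

        public TradeKey(String tradeId, String portfolioId) {
            this.tradeId = tradeId;
            this.portfolioId = portfolioId;
        }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        ignite.getOrCreateCache("trades");

        // Run the job on the node that owns portfolio P1's partition, so the
        // calculation sees purely local reads with no network hops.
        Double exposure = ignite.compute().affinityCall("trades", "P1", () -> {
            // All of P1's trades live on this node; aggregate them locally.
            return 0.0; // placeholder for a real aggregation
        });
    }
}
```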

Post-trade Analysis

Another good example of the use of in-memory computing is post-trade analysis. There are many subsets of activity here. Maybe you want to monitor your algos to make sure you don’t participate in the next flash crash. Or, for compliance purposes, you want to watch trading for suspicious, illegal, or fraudulent activity.

A great thing about in-memory computing is that it supports both batch-based and event-driven algorithms. It’s possible to run traditional MapReduce or fork-join workloads at in-memory speeds or switch to a real-time, event-driven approach. 
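The event-driven side can be expressed as a continuous query: a listener that fires on every new or updated entry. This sketch flags hypothetical “suspiciously large” trades as they arrive; the cache name, value type, and threshold are illustrative.

```java
import javax.cache.event.CacheEntryEvent;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class SurveillanceSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<String, Double> trades = ignite.getOrCreateCache("trades");

        ContinuousQuery<String, Double> qry = new ContinuousQuery<>();

        // Event-driven path: react to every new or updated trade as it arrives.
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends String, ? extends Double> e : events) {
                if (e.getValue() > 1_000_000) // hypothetical surveillance rule
                    System.out.println("Flagged trade " + e.getKey());
            }
        });

        // The subscription stays live while the cursor remains open.
        QueryCursor<?> cur = trades.query(qry);

        trades.put("T-1", 2_500_000d); // triggers the listener
    }
}
```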

In-memory technology can easily support both batch and event-driven workloads, but it’s also a natural fit for machine learning (ML). GridGain’s ML module, included in the product as a standard feature, implements many common algorithms, and you can add deep learning via the built-in TensorFlow integration. Systems might burst to the cloud so that extra compute capacity can be brought to bear on the most resource-intensive part of the process. The ML module, like all code execution in GridGain, is data-aware and in-memory, so no ETL is required, and scaling out (adding new machines to the cluster) is entirely seamless.

Banks were among the first adopters of in-memory technology, and it has since been embraced by credit card and insurance companies. Use cases include portfolio management, Customer 360s, merchant payments, claims processing, and many more. The combination of speed and scale that in-memory computing delivers brings tremendous value from the front office through the middle office to the back office, helping organizations move from batch processing to real-time dashboards.


Visit GridGain for Financial Services for more use cases to consider, and to explore customer success stories across banking, capital markets, insurance, and exchanges.