How 24 Hour Fitness uses in-memory computing to assist with a SaaS integration

24 Hour Fitness, the world's largest privately owned and operated fitness center chain, is using distributed in-memory computing solutions to speed things up as well as decouple data from its database.

Craig Gresbrink, a solutions architect at 24 Hour Fitness, will be a featured presenter at the third-annual In-Memory Computing Summit North America -- Oct. 24-25 at the South San Francisco Conference Center. His in-depth talk will detail how 24 Hour Fitness has used two different in-memory solutions to solve integration needs, along with the benefits these solutions are providing. 

I had the opportunity to speak with him about how in-memory computing is assisting 24 Hour Fitness with its SaaS integrations. 

Tom: When did 24 Hour Fitness implement a distributed in-memory computing solution?

Craig: Five years ago. We had two caching implementations:

1. Hibernate caching of “hot” data on each JVM.
     a. Customer and Contract data.

2. A homegrown caching solution which cached our relatively static data.
     a. Club, Employee, and Pricing data.

Since neither was distributed, it meant two things:  

1. It didn’t horizontally scale well. Specifically,
     a. For hot data, more JVMs meant more cache misses, resulting in suboptimal average response times.
     b. For static data, each node used a lot of memory to store the entire cache.

2. For pricing data in particular, since we validate that the purchased price matches the current price, periodic cache inconsistencies across the JVMs resulted in false negatives in our sales flows.

We wanted to solve the problems listed above. Our journey began when we had a use case to present the correct balance due after an online payment was made to our batch payment processing system.

Tom: Which in-memory solutions did you end up implementing?

Craig: To show the correct balance due, we implemented Hazelcast. For Club, Employee and additional caches, we implemented Hazelcast first and then moved them over to Apache® Ignite™ (GridGain).

Tom: What have these solutions enabled you to do that would have been impossible without in-memory solutions?

Craig: In short, provide performant services with less code.

Tom: Your talk at the In-Memory Computing Summit later this month is titled: “How In-Memory Solutions can assist with SaaS Integrations.” What will attendees walk away with following the presentation?

Craig: An understanding of:

1. 24 Hour Fitness’ historical application architecture
2. Why we moved to our current application architecture
3. Several real-world use cases where in-memory solutions proved superior to traditional database-centric approaches, with an emphasis on SaaS integration challenges.

Tom: In the talk's description you promised to explain how an IMDG can provide high availability to your data, even when your SaaS provider’s APIs are not 24/7. Can you give us a preview of how that works?

Craig: We detect changes via our vendors’ APIs and persist these changes. The grid caches the data. All service oriented application code reads the data from the grid.
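The read path Craig describes can be sketched roughly as follows. This is a minimal illustration, not 24 Hour Fitness' actual code: the class and method names are invented for the example, and a `ConcurrentHashMap` stands in for the distributed map an IMDG such as Hazelcast or Apache Ignite would provide.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: vendor changes refresh the grid, and all
// service code reads from the grid rather than the vendor's API.
public class ClubCache {
    // In a real deployment this would be a distributed map,
    // e.g. hazelcastInstance.getMap("clubs") or ignite.cache("clubs").
    private final Map<String, String> grid = new ConcurrentHashMap<>();

    // Called when a change is detected via the vendor's API:
    // persist the change (omitted here) and refresh the grid entry.
    public void onVendorChange(String clubId, String clubJson) {
        grid.put(clubId, clubJson);
    }

    // Service-oriented code reads only from the grid, so reads keep
    // working even while the vendor's API is unavailable.
    public String getClub(String clubId) {
        return grid.get(clubId);
    }
}
```

Because reads never touch the vendor directly, the SaaS provider's API only needs to be reachable when changes are being detected, not on every request.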

Tom: Is that similar to when an on-premise database is not 24/7?

Craig: Yes, except that when your on-prem database is not 24/7 and it is used for writes, you must have ALL applications and code hitting the grid, so that you can still transact if the database is down.
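The write side of that pattern is commonly called write-behind: the write lands in the grid immediately and is queued for persistence, so transactions survive database downtime. A minimal sketch, with invented names and a `ConcurrentHashMap` in place of a real distributed map (IMDGs typically offer write-behind as a built-in feature):

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical write-behind sketch: writes go to the grid first and
// are queued for the database, so the app can transact while the
// database is down.
public class WriteBehindStore {
    private final Map<String, String> grid = new ConcurrentHashMap<>();
    private final Queue<String> pendingDbWrites = new ArrayDeque<>();

    public synchronized void write(String key, String value) {
        grid.put(key, value);     // readers see the new value at once
        pendingDbWrites.add(key); // flushed when the database is up
    }

    public String read(String key) {
        return grid.get(key);
    }

    // A background flusher would drain this queue to the database
    // once it becomes reachable again.
    public synchronized int pendingCount() {
        return pendingDbWrites.size();
    }
}
```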

Tom: You’ll also discuss how a distributed cache can be used to provide the correct balance due after a payment/receipt is made against a legacy batch system. Can you also provide a sneak peek as to how it can do that?

Craig: First the use case: an online customer makes a payment to a batch-oriented ERP and expects to see the correct balance due if they check back hours later. The ERP processes payments nightly.

The problem: Putting a note on the page saying, “Payments can take up to 24 hours to process” was not enough. We kept getting duplicate payments from customers who assumed their first payment did not go through.

The solution: The online payment of an invoice was stored in the cache and then deducted from the balance due retrieved from the database. This allowed us to show the customer the correct account balance.
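That balance calculation can be sketched as follows. Again, this is an illustration under assumptions, not the actual implementation: names are invented, and a `ConcurrentHashMap` stands in for the distributed cache of pending payments.

```java
import java.math.BigDecimal;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: pending online payments are cached per invoice
// and deducted from the (batch-updated) ERP balance before display.
public class BalanceService {
    private final Map<String, BigDecimal> pendingPayments = new ConcurrentHashMap<>();

    // Record an online payment the nightly batch has not yet processed.
    public void recordOnlinePayment(String invoiceId, BigDecimal amount) {
        pendingPayments.merge(invoiceId, amount, BigDecimal::add);
    }

    // Balance shown to the customer: the stale ERP balance minus any
    // payments the ERP has not yet applied.
    public BigDecimal balanceDue(String invoiceId, BigDecimal erpBalance) {
        return erpBalance.subtract(
            pendingPayments.getOrDefault(invoiceId, BigDecimal.ZERO));
    }

    // Called once the nightly batch run has applied the payment.
    public void evict(String invoiceId) {
        pendingPayments.remove(invoiceId);
    }
}
```

Once the nightly batch applies the payment, the ERP balance itself reflects it, so the cached entry is evicted to avoid double-counting.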

Tom: How have current in-memory solutions set the stage for future use cases based on your experience?

Craig: They will allow us to transact even when vendor-supplied services, or on-premise databases, are not sufficient to support our 24/7 business.

* * * 

I invite you to attend this month's In-Memory Computing Summit North America and connect with Craig and the dozens of other experts who will be presenting in more than 40 breakout sessions. In addition to 24 Hour Fitness, these talks will feature users, innovators and vendors across the in-memory computing ecosystem. Speakers include experts from HomeAway, Intel, RingCentral, Fujitsu, GridGain Systems, Percona – and many more.

The IMC Summit is the only industry-wide event focusing on the full range of in-memory computing-related technologies and solutions held in North America.