
I am new to GemFire.

Currently we are using a MySQL DB and would like to move to GemFire.

How do we move the existing data stored in MySQL over to GemFire? I.e., is there any way to import existing MySQL data into GemFire?


1 Answer


There are many different options available for migrating data from one data store (e.g. an RDBMS like MySQL) to an IMDG (e.g. Pivotal GemFire). Pivotal GemFire does not provide any tools for this purpose out of the box (OOTB).

However, you could...

A) Write a Spring Batch application to migrate all your data from MySQL to Pivotal GemFire in one fell swoop. This is typical for most large-scale conversion processes, converting from one data store to another, either as part of an upgrade or a migration.

The advantage of using Pivotal GemFire as your target data store is that it stores Java Objects. So, if you are, say, using an ORM tool (e.g. Hibernate) to map the data stored in your MySQL database tables back to your application domain objects, you can immediately turn around and store those same Objects directly in a corresponding Region in Pivotal GemFire. No additional mapping is required to store an Object in GemFire.
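For illustration, here is a minimal sketch of such a Spring Batch step, assuming a JPA-mapped Customer entity and a pre-configured "Customers" Region. All class, bean, and Region names here are hypothetical, and the builder APIs shown are from Spring Batch 4:

```java
import javax.persistence.EntityManagerFactory;

import org.apache.geode.cache.Region;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.database.JpaPagingItemReader;
import org.springframework.batch.item.database.builder.JpaPagingItemReaderBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class MigrationBatchConfig {

    // Reads Customer entities out of MySQL in pages, using the same
    // Hibernate/JPA mapping the application already has.
    @Bean
    public JpaPagingItemReader<Customer> customerReader(EntityManagerFactory entityManagerFactory) {
        return new JpaPagingItemReaderBuilder<Customer>()
            .name("customerReader")
            .entityManagerFactory(entityManagerFactory)
            .queryString("SELECT c FROM Customer c")
            .pageSize(500)
            .build();
    }

    // Writes the very same domain objects into the GemFire Region,
    // keyed by primary key; no additional mapping layer is needed.
    @Bean
    public ItemWriter<Customer> customerWriter(Region<Long, Customer> customersRegion) {
        return customers -> customers.forEach(customer ->
            customersRegion.put(customer.getId(), customer));
    }

    @Bean
    public Step migrateCustomers(StepBuilderFactory stepBuilderFactory,
                                 JpaPagingItemReader<Customer> customerReader,
                                 ItemWriter<Customer> customerWriter) {
        return stepBuilderFactory.get("migrateCustomers")
            .<Customer, Customer>chunk(500)
            .reader(customerReader)
            .writer(customerWriter)
            .build();
    }
}
```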

Alternatively, if you need something less immediate, you can also...

B) Take advantage of Pivotal GemFire's CacheLoader, and maybe even the CacheWriter, mechanisms. The CacheLoader and CacheWriter are implementations of the "Read-Through" and "Write-Through" design patterns, respectively.

More details of this approach can be found here.

In a nutshell, you implement a CacheLoader to load data from some external data source on a Cache miss. You attach, or register, the CacheLoader with a GemFire Region when the Region is created. When a Key (which can correspond to your MySQL table's Primary Key) is requested (Region.get(key)) and an entry does not exist, GemFire will consult the CacheLoader to resolve the value, provided you actually registered a CacheLoader with the Region.

In this way, you gradually populate Pivotal GemFire from the MySQL RDBMS based on need.
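For example, here is a hedged sketch of a JDBC-backed CacheLoader, using the Apache Geode package names current in Pivotal GemFire 9+; the Customer class and the table/column names are assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.apache.geode.cache.CacheLoader;
import org.apache.geode.cache.CacheLoaderException;
import org.apache.geode.cache.LoaderHelper;

public class MySqlCustomerLoader implements CacheLoader<Long, Customer> {

    private final DataSource dataSource; // JDBC DataSource pointing at MySQL

    public MySqlCustomerLoader(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Called by GemFire on a cache miss, i.e. when Region.get(key) finds no entry.
    @Override
    public Customer load(LoaderHelper<Long, Customer> helper) throws CacheLoaderException {
        Long id = helper.getKey(); // the key corresponds to the MySQL Primary Key

        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(
                 "SELECT id, name FROM customers WHERE id = ?")) {

            statement.setLong(1, id);

            try (ResultSet resultSet = statement.executeQuery()) {
                // Returning null tells GemFire there is no value for this key.
                return resultSet.next()
                    ? new Customer(resultSet.getLong("id"), resultSet.getString("name"))
                    : null;
            }
        }
        catch (SQLException cause) {
            throw new CacheLoaderException("Failed to load Customer with id " + id, cause);
        }
    }

    @Override
    public void close() {
        // no resources to release in this sketch
    }
}
```

You would then register the loader when creating the Region, e.g. regionFactory.setCacheLoader(new MySqlCustomerLoader(dataSource)).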

Clearly, it is quite likely Pivotal GemFire will not be able to store all the data from your RDBMS in memory. So, you can enable both Persistence and Overflow [to Disk] capabilities. By enabling Persistence, GemFire will reload the data from its own DiskStores the next time the nodes come online, assuming they were brought down beforehand.
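As a minimal sketch, a persistent, overflow-enabled Region can be created programmatically like so (RegionShortcut.PARTITION_PERSISTENT_OVERFLOW is a standard GemFire/Geode Region shortcut; the Region name is an assumption):

```java
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class CustomerRegionConfig {

    public static Region<Long, Customer> createCustomersRegion() {
        Cache cache = new CacheFactory().create();

        // PARTITION_PERSISTENT_OVERFLOW yields a partitioned Region that
        // persists entries to a DiskStore and overflows entries to disk
        // when the eviction threshold is reached.
        return cache.<Long, Customer>createRegionFactory(RegionShortcut.PARTITION_PERSISTENT_OVERFLOW)
            .create("Customers");
    }
}
```

The gfsh equivalent would be: create region --name=Customers --type=PARTITION_PERSISTENT_OVERFLOW.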

The CacheWriter mechanism is nice if you want to run both Pivotal GemFire and MySQL in parallel for a while, until you can shift enough of the responsibilities from MySQL over to GemFire, for instance. The CacheWriter will write back to your underlying MySQL DB each time an entry is created or updated in the GemFire Region. You can even do this asynchronously (i.e. "Write-Behind") using GemFire's AsyncEventQueues and Listeners; see here.
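Here is a hedged sketch of such a write-through CacheWriter, again with illustrative class, table, and column names. It extends GemFire's CacheWriterAdapter so only the relevant callbacks need to be overridden:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.apache.geode.cache.CacheWriterException;
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheWriterAdapter;

public class MySqlCustomerWriter extends CacheWriterAdapter<Long, Customer> {

    private final DataSource dataSource;

    public MySqlCustomerWriter(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Called before a new entry is created in the Region.
    @Override
    public void beforeCreate(EntryEvent<Long, Customer> event) throws CacheWriterException {
        upsert(event.getNewValue());
    }

    // Called before an existing entry is updated in the Region.
    @Override
    public void beforeUpdate(EntryEvent<Long, Customer> event) throws CacheWriterException {
        upsert(event.getNewValue());
    }

    private void upsert(Customer customer) throws CacheWriterException {
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(
                 "INSERT INTO customers (id, name) VALUES (?, ?) " +
                 "ON DUPLICATE KEY UPDATE name = VALUES(name)")) {

            statement.setLong(1, customer.getId());
            statement.setString(2, customer.getName());
            statement.executeUpdate();
        }
        catch (SQLException cause) {
            // Throwing here aborts the Region operation, which keeps
            // GemFire and MySQL consistent with each other.
            throw new CacheWriterException("Failed to write Customer " + customer.getId(), cause);
        }
    }
}
```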

Obviously, you have many options at your disposal. You need to carefully weigh them and choose an approach that best meets your application's requirements and needs.

If you have additional questions, let me know.

  • Thanks for the explanation. In the MySQL data model, we have mappings between the tables (i.e., foreign keys). How can we handle this in GemFire? I didn't see any tutorial regarding this. – Krish Jan 31 '18 at 08:19
  • Typically, users will store the entire object graph from the root object down to the leaf objects, e.g. Order -> Line Item -> Product. There is no concept of normalization, or ORM, in the Key/Value space. Users must handle that level of "mapping" in their application DAO tier. In general, there are advantages and disadvantages to doing so. You must decide what is appropriate for your application use cases and requirements, access patterns, etc. – John Blum Jan 31 '18 at 19:17
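To illustrate the comment above, a minimal sketch of such a denormalized object graph, stored as a single Region value; all class names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// The whole Order aggregate is stored as one Region value; what MySQL
// modeled with foreign keys becomes plain object references here.
class Order {
    private Long id;
    private List<LineItem> lineItems = new ArrayList<>();
    // getters/setters omitted for brevity
}

class LineItem {
    private Product product;
    private int quantity;
}

class Product {
    private Long id;
    private String name;
}

// A single put stores the entire graph:
// orderRegion.put(order.getId(), order);
```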