Tips for Data Migration Jobs

Data migrations were an issue 20 years ago and they still are. There is clearly something wrong here.

I have recently experienced one hell of a migration, and I believe that is where I got the motivation to write this.

As Mike Coutermarsh points out very well in his article, a high-traffic app can't take downtime.

Data migrations generally follow this approach:

  1. Create a new table
  2. Write to both tables
  3. Background job to copy data
  4. Read from new table
  5. Delete old table

Each of these steps being a separate release...
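To make the dual-write step concrete, here is a minimal sketch of what step 2 could look like in Python. It is illustrative only: the users → accounts migration and the psycopg2-style connections are assumptions, not names from the steps above.

```python
# A minimal sketch of step 2 (dual writes), assuming a hypothetical
# users -> accounts migration and psycopg2-style connections.
def create_user(old_db, new_db, user):
    old = old_db.cursor()
    # The old table stays the source of truth while the migration runs.
    old.execute(
        "INSERT INTO users (id, name, email) VALUES (%s, %s, %s)",
        (user["id"], user["name"], user["email"]),
    )
    old_db.commit()

    # Best-effort write to the new table: a failure here must not break the
    # user-facing request, the background job will backfill the row later.
    try:
        new = new_db.cursor()
        new.execute(
            "INSERT INTO accounts (id, full_name, email) VALUES (%s, %s, %s)",
            (user["id"], user["name"], user["email"]),
        )
        new_db.commit()
    except Exception:
        pass  # log it and move on; the backfill job reconciles the miss
```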

Well, this seems like a very good approach, but I can tell you now that its success will depend on how well your background job manages the data migration.

So I am going to explain what I consider a good, production-ready data-migration strategy, and I hope you adopt some of these tips for your next migration.

Let's start by saying that if you plan to implement this by running some sort of SQL query, import/export tool, or bash script that dumps the data into the new database/table, your migration has most likely already failed.
Taking this approach is going to lead you to a situation where:

  • you might run out of resources: disk, memory, etc.
  • you might have tested it before running it in production; however, how sure are you that the data you used for testing matches your production data? Think of the corner cases your test database doesn't have.
  • you might be converting data incorrectly and not realize it until the end
  • your database might experience service degradation while it runs the SQL query that migrates the data.

Any of these can cause your whole system to suffer from slowness and even partial or total downtime. If that happens, your users will not be able to interact with your system. So why risk it with this approach?

What I consider a much better approach for any migration job consists of:

  • knowing how your data needs to be migrated. This is key. It seems obvious, but it is where migrations fail quite often.
  • knowing how much data you are actually going to move around. This also seems obvious, but most of the time it is uncertain.
  • defining a timeframe for the full migration. Migrations take time; don't rush it.
  • defining a rate/pace for the migration. Follow a linear or an exponential ramp-up if you like.
  • running the migration process as a serverless job. Don't waste dedicated resources on a single task; any k8s Job/CronJob, Lambda function, or whatever serverless flavour you like can do the trick.
  • when coding this job, thinking about the following (a sketch combining these points comes after this list):
    • Configurability: design the job so that the number of items migrated at once is configurable. Use the top 100 records, 1%/5%/10% of all records, records from dateA to dateB, etc. Anything that segments the data works.
    • Traceability: track the migrated records from source to destination. Add a key/value datamigrated: true once a record is migrated, or store the ids in a file or temp table so you don't have to go back to those records.
    • Self-healing: recover from failure. If the process can recover from a failure and start working again by itself, your on-call colleagues will appreciate it.
  • planning and performing a test drive of the migration: the first time you run it, do it on a subset of the data. If something goes wrong, at least it will not affect all the data.
  • taking some metrics, like start_timestamp and finish_timestamp; even if you don't use them, someone else can.
  • evaluating whether the test drive went as expected and whether the data meets the migration criteria.
  • repeating, at the same pace or increasing the chunk of data to migrate on each iteration, until you finish.
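Putting the configurability, traceability, and self-healing points together, here is a minimal sketch of such a batch job in Python. Everything in it is illustrative: the users/accounts tables, the datamigrated column, and the psycopg2-style connections are assumptions, not part of any particular stack.

```python
import time

# A minimal sketch of a configurable, traceable, self-healing migration job,
# assuming hypothetical `source_db` / `target_db` psycopg2-style connections.
BATCH_SIZE = 100  # configurability: tune this between iterations

def migrate_batch(source_db, target_db, batch_size=BATCH_SIZE):
    start_timestamp = time.time()  # metric

    src = source_db.cursor()
    dst = target_db.cursor()

    # Configurability: only pick `batch_size` unmigrated rows per run.
    src.execute(
        "SELECT id, name, email FROM users"
        " WHERE datamigrated IS NOT TRUE ORDER BY id LIMIT %s",
        (batch_size,),
    )
    rows = src.fetchall()

    for row_id, name, email in rows:
        # Idempotent insert, so the job is self-healing: re-running after a
        # crash never duplicates rows (PostgreSQL syntax assumed here).
        dst.execute(
            "INSERT INTO accounts (id, full_name, email)"
            " VALUES (%s, %s, %s) ON CONFLICT (id) DO NOTHING",
            (row_id, name, email),
        )
        # Traceability: mark the source row so the next run skips it.
        src.execute("UPDATE users SET datamigrated = TRUE WHERE id = %s", (row_id,))

    target_db.commit()
    source_db.commit()

    finish_timestamp = time.time()  # metric
    print(f"migrated {len(rows)} rows in {finish_timestamp - start_timestamp:.1f}s")
    return len(rows)
```

Wrapped in a k8s CronJob or a Lambda, a job like this runs at whatever pace you define, and you can increase the batch size once the test drive looks good.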

Good luck and happy migrations.

Let me know in the comments what you think, as this is my first post on dev.to.
