How to develop a production-ready migration system



In this blog, we walk through building a custom migration system using Node.js and any database Object-Relational Mapping (ORM) library. All migration files live in a directory named "migrations", and each file contains both up and down queries. We also cover the Electron-specific adjustments needed to run the migration system reliably in a production environment.

Before writing the migration system code itself, create a folder named migrations in the project root to store all migration files, and add migration files inside it like this:


Migration Folder
Name each migration file using the following format:

(migration id).(migration name).sql

For example: 1.create-users.sql, 2.add-last-name.sql

Inside each migration file (e.g. sample.sql), add the migration queries in this format:


-- Up
ALTER TABLE users
ADD COLUMN last_name VARCHAR(255);

-- Down
ALTER TABLE users DROP COLUMN last_name;

Each migration contains up and down queries:

  1. Up: contains the set of instructions to perform changes in your database
  2. Down: contains the set of instructions to revert the migration

Now let's write our custom migration system. We are using better-sqlite3 to manage the database in our Electron app, and TypeScript for all the code.


// Create migration type
interface IMigration {
  id: number
  name: string
  filename: string
  data: string
  up: string
  down: string
}

Create a function dbMigrate and add the following code


import fs from 'fs'
import path from 'path'

// migration table name
const table = 'migrations'

// Dbtype: the better-sqlite3 Database instance type
export const dbMigrate = (db: Dbtype) => {
  let migrations: IMigration[]

  // Migration folder location
  const location = path.join(process.cwd(), './migrations')

  // Read all migration files ending with .sql and sort them by migration id
  const migrationFiles = fs
    .readdirSync(location)
    .map((x) => x.match(/^(\d+)\.(.*?)\.sql$/))
    .filter((x): x is RegExpMatchArray => x !== null)
    .map((x) => ({ id: Number(x[1]), name: x[2], filename: x[0] }))
    .sort((a, b) => a.id - b.id)

  // Early return if no migration file is found
  if (!migrationFiles.length) {
    return
  }

  // ...other code
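To see why the numeric sort matters (a plain string sort would place 10 before 2), here is a self-contained sketch of the same parse-and-sort step, run on hypothetical filenames:

```typescript
// Hypothetical filenames; non-matching entries like notes.txt are filtered out
const names = ['2.add-last-name.sql', '10.add-index.sql', '1.create-users.sql', 'notes.txt']

const parsed = names
  .map((x) => x.match(/^(\d+)\.(.*?)\.sql$/))
  .filter((x): x is RegExpMatchArray => x !== null)
  .map((x) => ({ id: Number(x[1]), name: x[2], filename: x[0] }))
  .sort((a, b) => a.id - b.id)

console.log(parsed.map((m) => m.filename))
// → ['1.create-users.sql', '2.add-last-name.sql', '10.add-index.sql']
```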
  
  

After this, we read the content of each migration file:


migrationFiles.forEach((migration: IMigration) => {
  const filename = path.join(location, migration.filename)
  migration.data = fs.readFileSync(filename, 'utf-8')
})
migrations = migrationFiles

Now that we have read the content of each migration file, we grab the up and down queries:


migrations.map((migration) => {
  const [up, down] = migration.data.split(/^\s*--\s+?down\b/im)
  if (!down) {
    const message = `The ${migration.filename} file does not contain '-- Down' separator.`
    throw new Error(message)
  } else {
    migration.up = up.replace(/^-- .*?$/gm, '').trim() // Remove comments
    migration.down = down.trim() // trim whitespaces
  }
})
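As an illustration, here is the same split applied to a small in-memory sample whose content mirrors the migration file format shown earlier:

```typescript
// Sample migration content, matching the -- Up / -- Down format shown earlier
const data = `-- Up
ALTER TABLE users
ADD COLUMN last_name VARCHAR(255);

-- Down
ALTER TABLE users DROP COLUMN last_name;`

const [up, down] = data.split(/^\s*--\s+?down\b/im)
const upSql = up.replace(/^-- .*?$/gm, '').trim() // strip comment lines, trim
const downSql = down.trim()

console.log(upSql)   // the ALTER TABLE ... ADD COLUMN statement
console.log(downSql) // the ALTER TABLE ... DROP COLUMN statement
```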

After this, we get the list of already applied migrations. Here I'm using better-sqlite3 query methods:


// Get the list of already applied migrations
let dbMigrations = db
  .prepare(`SELECT id, name, up, down FROM "${table}" ORDER BY id ASC`)
  .all()
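Note that the snippets assume the migrations bookkeeping table already exists. A minimal bootstrap that can run once at startup is sketched below; the schema is an assumption inferred from the columns used in the queries (id, name, up, down):

```sql
-- Assumed bookkeeping schema; run once before the first query
CREATE TABLE IF NOT EXISTS "migrations" (
  id   INTEGER PRIMARY KEY,
  name TEXT NOT NULL,
  up   TEXT NOT NULL,
  down TEXT NOT NULL
);
```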

Now we will apply our pending migrations. I'm running these migrations in a transaction so that the database stays consistent if any error occurs:


// Grabbing the last migration id
const lastMigrationId = dbMigrations.length
  ? dbMigrations[dbMigrations.length - 1].id
  : 0
for (const migration of migrations) {
  if (migration.id > lastMigrationId) {
    db.exec('BEGIN')
    try {
      db.exec(migration.up)
      db.prepare(
        `INSERT INTO "${table}" (id, name, up, down) VALUES (?, ?, ?, ?)`
      ).run(migration.id, migration.name, migration.up, migration.down)
      db.exec('COMMIT')
    } catch (err) {
      db.exec('ROLLBACK')
      throw err
    }
  }
}
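The pending-migration selection above boils down to comparing ids against the last applied one; a tiny self-contained illustration with hypothetical ids:

```typescript
// Hypothetical state: migrations 1 and 2 are applied, 3 and 4 exist on disk
const applied = [{ id: 1 }, { id: 2 }]
const onDisk = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }]

const lastMigrationId = applied.length ? applied[applied.length - 1].id : 0
const pending = onDisk.filter((m) => m.id > lastMigrationId)

console.log(pending.map((m) => m.id)) // → [3, 4]
```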

Hurray! We have successfully applied our migrations.

But what about reverting a migration? For that, we have to run the down query. Whenever you want to roll back or revert a migration, run the following code:


for (const migration of dbMigrations
  .slice()
  .sort((a, b) => Math.sign(b.id - a.id))) {
  if (!migrations.some((x) => x.id === migration.id)) {
    db.exec('BEGIN')
    try {
      db.exec(migration.down)
      db.prepare(`DELETE FROM "${table}" WHERE id = ?`).run(migration.id)
      db.exec('COMMIT')
      dbMigrations = dbMigrations.filter((x) => x.id !== migration.id)
    } catch (err) {
      db.exec('ROLLBACK')
      throw err
    }
  } else {
    break
  }
}
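The loop reverts in descending id order and stops at the first migration that still has a file on disk. A self-contained illustration of that ordering, with hypothetical ids:

```typescript
// Hypothetical state: migrations 1, 2, 3 are applied, but files 2 and 3 were deleted
const dbMigrations = [{ id: 1 }, { id: 2 }, { id: 3 }]
const fileMigrations = [{ id: 1 }]

const toRevert: number[] = []
for (const m of dbMigrations.slice().sort((a, b) => b.id - a.id)) {
  if (!fileMigrations.some((x) => x.id === m.id)) {
    toRevert.push(m.id) // newest first
  } else {
    break // stop at the first migration still present on disk
  }
}

console.log(toRevert) // → [3, 2]
```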

The above code also undoes migrations that exist in the database but no longer have a corresponding file, reverting them from newest to oldest.

Finally, we have created our custom migration system from scratch. You can easily use this migration system in any Node.js application with any other database ORM.

Electron Specific Changes

Now let's go over the Electron-specific changes needed to run this migration system properly in production.

If you run the above migration code in development, it runs without any problem. But when you package or build your app, it crashes with a "migration path not found" exception, because the migrations folder is not included in the packaged app. To solve this, we have to copy the migrations folder into the packaged app.

We are using electron-builder to package our app. To copy the migrations folder into the packaged app, just add this to your electron-builder config:


extraResources:
  - './migrations/**'
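One related caveat worth noting: extraResources are copied into the packaged app's resources directory, so in a packaged build the folder is found under process.resourcesPath rather than process.cwd(). A common pattern is to resolve the location based on packaging state; the helper below is a sketch that takes Electron's app.isPackaged and process.resourcesPath as parameters so the logic is visible in isolation:

```typescript
import path from 'path'

// Sketch: choose the migrations folder depending on whether the app is packaged.
// In an Electron main process you would pass app.isPackaged and process.resourcesPath.
const resolveMigrationsDir = (isPackaged: boolean, resourcesPath: string): string =>
  isPackaged
    ? path.join(resourcesPath, 'migrations') // packaged: extraResources land here
    : path.join(process.cwd(), 'migrations') // development: project root

// Example call: resolveMigrationsDir(app.isPackaged, process.resourcesPath)
```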

Hi, I'm Avinash, a creative JavaScript developer. I love learning new technologies and developing software tools that can make our lives easier.

