Introduction
Migrating to Backstage’s new backend system is more than just an upgrade - it’s a step toward a more scalable, efficient, and streamlined platform. The new backend introduces a modern architecture that simplifies plugin integration, reduces complexity, and improves overall performance. This is a significant shift from the monthly updates and incremental improvements you may be used to. By adopting this new architecture, you’ll gain better control over plugin dependencies, reduce overhead, and set the stage for easier future development.
This migration is particularly exciting because it marks the culmination of a long journey towards a more modular and efficient backend. Originally introduced as an experimental feature, the new backend has matured into a fully supported system that greatly enhances how plugins interact with core services. It’s a big leap forward from the previous architecture, and while the transition may require some effort, the long-term benefits—such as improved scalability, clearer separation of concerns, and better dependency management—make it a worthwhile investment for teams looking to future-proof their Backstage setup.
Overview
In this guide, we’ll focus on helping you migrate your existing Backstage backend with minimal disruption. We’ll walk through key steps, such as backing up your current system, creating a bridge for your existing plugins, and gradually integrating them into the new architecture. By the end of this article, you’ll have a clear understanding of how to approach the migration while maintaining your current functionality, and how to unlock the benefits of Backstage’s new backend.
We’ll focus on performing a minimal migration that gets your main backend (index.ts in packages/backend) up and running in the new system. At this stage, we won't migrate all plugins, as that can be a more time-consuming process. Instead, we’ll leverage a temporary bridge function to convert existing plugins into the new system using a transitional environment, allowing you to start benefiting from the new architecture right away.
Preparation for Migration
Before diving into the migration process, it's essential to ensure that your new system will function as expected post-migration. Proper preparation can significantly reduce the risk of issues arising during or after the transition.
Establish a Testing Plan
- Manual Testing: Develop a comprehensive plan for manually testing your application with the new backend. This should include detailed scenarios that mimic real-world usage of your application. Identify critical paths and functionality that must work seamlessly in the new environment.
- Integration Tests: In addition to manual testing, it's crucial to have a robust set of integration tests. These tests should cover all functionalities, particularly those related to any custom modifications you've implemented in your existing system. Ensure that your tests validate the expected behavior of all integrated components (a minimal example follows this list).
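For example, a minimal Playwright API test against one of your critical plugin endpoints might look like the sketch below. It assumes your Playwright config points its baseURL at the backend, that the catalog plugin is mounted at its default path, and that no auth token is required; adjust these assumptions to your setup.
import { test, expect } from '@playwright/test';

// Sketch: verify a critical plugin API still responds after the migration.
test('catalog plugin responds after migration', async ({ request }) => {
  const response = await request.get('/api/catalog/entities');
  expect(response.status()).toBe(200);
});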
Pay special attention to any custom features or modifications you’ve made. These areas are often the most prone to issues during migration, so thorough testing is essential. And it goes without saying: make sure all your existing tests still pass!
By laying the groundwork with a well-defined testing plan—comprising both manual and automated integration tests—you can increase your confidence in the migration process. This preparation will help ensure that your new backend system operates smoothly and consistently after the migration is complete.
Migration Guide from Backstage
Before proceeding with this migration guide, it's essential to familiarize yourself with the resources available from Backstage. The official migration guide on the Backstage website provides valuable insights and best practices for transitioning your backend system. While there may be some overlap with this guide, repetition can reinforce key concepts and ensure that you don't miss critical steps.
Take the time to read through the Backstage migration guide before continuing with this document. Doing so will provide you with a solid framework and context that will aid in your migration efforts, ultimately leading to a smoother transition.
What’s the Plan for Rollout?
To ensure a seamless transition to the new backend, it’s crucial to carefully plan and prepare your rollout strategy. Here are some key considerations to guide your approach:
- Deployment Strategy:
  - New Image Creation: Consider creating a new Docker image for the updated backend. This allows you to maintain clear versioning and ensures that your deployment environment matches your development and testing environments.
  - Blue-Green Deployment: If feasible, implement a blue-green deployment strategy. This involves running two identical environments (blue and green) where one is live while the other is idle. You can switch traffic between them to minimize downtime and facilitate rollback if necessary.
- Monitoring and Metrics:
  - Set Up Monitoring Tools: Implement monitoring solutions to track performance metrics, error rates, and resource usage. Tools like Prometheus, Grafana, or your existing APM solutions can provide valuable insights during and after the rollout.
  - Health Checks: Ensure that health checks are in place to quickly identify any issues with the new backend as it goes live.
Migrating Your index.ts to the New Backstage Backend System
In this section, I’ll walk you through the process of migrating your Backstage backend to the new backend system. We’ll focus on updating your index.ts file, which serves as the main entry point for your backend. This migration will allow you to start leveraging the streamlined architecture and dependency injection provided by the new system.
Step 1: Backup Your Existing index.ts
Before starting any migration, it’s essential to create a backup of your current index.ts file. This way, you can reference it or roll back if needed. Let’s save this backup as index.backup.ts.
Additionally, to avoid issues with type checking while you’re in the middle of the migration, add @ts-nocheck at the top of your backup file.
// index.backup.ts
// @ts-nocheck
Step 2: Set Up the New index.ts
Now, create a new index.ts file, which will be the entry point for your new backend system. For this initial step, you can keep it as minimal as possible. The goal is to have a working backend skeleton, which we will build upon later.
Here’s an example of a minimal setup:
import { createBackend } from '@backstage/backend-defaults';
const backend = createBackend();
backend.start();
At this point, your new backend doesn’t do much—it's just an empty shell. But it’s important to verify that the basic setup works before adding any of your legacy plugins.
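Before moving on, it’s worth confirming that this empty shell actually starts. Assuming the standard app layout and the default backend port of 7007, you can run it the same way you normally would and check that it comes up without errors:
yarn start-backend
# in another terminal, confirm the backend is listening (a 404 JSON response is fine at this stage)
curl -i http://localhost:7007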
What’s Next?
In the next part of the migration, we’ll create a temporary legacy environment for your existing plugins and gradually integrate them into the new backend system. This approach allows for a smooth transition, letting you keep the old plugins running while starting to take advantage of the new architecture.
Creating a temporary plugin environment
To ease the migration process, Backstage provides a handy bridge function, makeLegacyPlugin, which helps create a temporary environment compatible with the old backend system. This allows your legacy plugins to continue working while you transition to the new backend architecture. The function ensures all required dependencies are injected into the plugin, simulating the old system’s behavior.
Here's an example of using makeLegacyPlugin:
import { coreServices } from '@backstage/backend-plugin-api';
import {
  cacheToPluginCacheManager,
  loggerToWinstonLogger,
  makeLegacyPlugin,
} from '@backstage/backend-common';
import { eventsServiceRef } from '@backstage/plugin-events-node';
// Note: eventBrokerService below refers to your existing event broker wiring (project-specific).

const legacyPlugin = makeLegacyPlugin(
{
logger: coreServices.logger,
cache: coreServices.cache,
database: coreServices.database,
config: coreServices.rootConfig,
reader: coreServices.urlReader,
discovery: coreServices.discovery,
tokenManager: coreServices.tokenManager,
permissions: coreServices.permissions,
scheduler: coreServices.scheduler,
events: eventsServiceRef,
eventBroker: eventBrokerService,
auth: coreServices.auth,
httpAuth: coreServices.httpAuth,
userInfo: coreServices.userInfo,
pluginName: coreServices.pluginMetadata,
identity: coreServices.identity
},
{
logger: log => loggerToWinstonLogger(log),
cache: cache => cacheToPluginCacheManager(cache),
},
);
In this setup, your makeLegacyPlugin function acts as a temporary replacement for the older makeCreateEnv function. If makeLegacyPlugin returns the same dependencies as makeCreateEnv, you're set for a smooth transition.
You can also provide type conversion functions within the second parameter to handle any type discrepancies between old and new service structures. For example, converting a logger or cache to the format expected by the new backend system.
Caveats and Considerations
One important note is that all of these dependencies will be instantiated for every plugin that uses the legacyPlugin bridge. This can become problematic, particularly if any dependencies involve heavy operations (e.g., opening database connections). While this bridge is a useful short-term solution, it's highly recommended to fully migrate your backend plugins to the new system as soon as possible.
Using the legacyPlugin
You can add your legacy plugins to the backend using legacyPlugin like this:
backend.add(legacyPlugin('todo', import('./plugins/todo')));
...
backend.start();
By focusing on migrating only your index.ts file initially, you minimize disruption while keeping the migration manageable. This strategy allows you to stay on top of the process while taking advantage of the new backend architecture in stages, ensuring a smooth transition.
Adding new backend system plugins
You can directly add your plugins that are already migrated to the new backend system into your index.ts file.
backend.add(legacyPlugin('todo', import('./plugins/todo')));
...
backend.add(import('@roadiehq/foo-bar')); // a backend plugin in the new system
...
backend.start();
Overriding Core Services
Backstage offers a robust architecture that includes a set of core services added to the application by default. When a plugin or service references a core service, it automatically receives the appropriate instance.
Custom Implementations
If you had custom implementations of certain services in your old system, you will need to override them in the new backend. This is crucial for maintaining functionality and ensuring that your custom logic remains intact.
Recommended Approach
To keep your Backstage directory structure organized and maintainable, I recommend creating a separate package for these service overrides. This approach promotes clarity and separation of concerns within your project.
- Package Naming Convention: In the upstream OSS version of Backstage, this package is commonly referred to as backend-defaults. Adopting this convention in your project will help standardize your codebase and make it easier for others to understand your structure.
Implementation Steps
- Create the backend-defaults Package:
  - Set up a new package in your Backstage repository specifically for service overrides.
- Implement Overrides:
  - Define the necessary overrides for the core services that require customization. Ensure that these overrides integrate seamlessly with the existing Backstage architecture.
- Update Your Application:
  - Modify your application’s configuration to reference the new backend-defaults package, ensuring that your custom implementations are used where needed.
The current core services that Backstage provides are the following:
// backstage/backstage/packages/backend-defaults/src/entrypoints
auth
cache
database
discovery
httpAuth
httpRouter
lifecycle
logger
permissions
rootConfig
rootHealth
rootHttpRouter
rootLifecycle
rootLogger
scheduler
urlReader
userInfo
You can import the core services from the respective path like this:
import {
RootHttpRouterConfigureContext,
rootHttpRouterServiceFactory
} from '@backstage/backend-defaults/rootHttpRouter'
Creating Your backend-defaults Package
To effectively manage your service overrides in Backstage, you can create a backend-defaults package using the Backstage CLI. This package will serve as a centralized location for your custom service implementations.
Step-by-Step Instructions
- Run the Backstage CLI Command: Execute the following command to create a new package:
npx backstage-cli new
- Select Package Type: When prompted, choose the option for creating a Node library:
  - Node Library: This will allow you to export shared functionality for backend plugins and modules.
- Set the Package ID: Add the package ID as backend-defaults when prompted. This step ensures that your package is correctly identified within your Backstage setup.
- Package Creation: Upon completion, this command will generate a new package in your packages directory named backend-defaults.
- Organizing Your Overrides: To keep your package structured and maintainable, I recommend creating a folder within the backend-defaults package called services. This folder will house your overrides or any new services you implement.
mkdir packages/backend-defaults/services
The following is an implementation of an override of the coreServices.database service. This is a potential fix for cases where your createEnv() and pluginID strings did not match, so in your old system the database name does not match the API path.
In the new backend system it will always be the case that your plugin’s ID determines the name of its database and the path it will be attached to. This “fix” is needed because the current upstream Backstage doesn't provide a way to override the database name for these rare edge cases.
import {
coreServices,
createServiceFactory,
} from '@backstage/backend-plugin-api';
import { ConfigReader } from '@backstage/config';
import { DatabaseManager } from '@backstage/backend-defaults/database';
export const databaseServiceFactory = createServiceFactory({
service: coreServices.database,
deps: {
config: coreServices.rootConfig,
lifecycle: coreServices.lifecycle,
pluginMetadata: coreServices.pluginMetadata,
},
async createRootContext({ config }) {
return config.getOptional('backend.database')
? DatabaseManager.fromConfig(config)
: DatabaseManager.fromConfig(
new ConfigReader({
backend: {
database: { client: 'better-sqlite3', connection: ':memory:' },
},
}),
);
},
async factory({ pluginMetadata, lifecycle }, databaseManager) {
const pluginId = pluginMetadata.getId();
let databaseName;
switch (pluginId) {
case 'foo-bar':
databaseName = 'foo_bar';
break;
case 'tech-insights':
databaseName = 'tech_insights';
break;
default:
databaseName = pluginId;
}
return databaseManager.forPlugin(databaseName, {
pluginMetadata,
lifecycle,
});
},
});
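With the factory in place, the last step is to register it in your backend. Here’s a minimal sketch, assuming the override above is exported from your backend-defaults package; the import path is hypothetical and depends on your package name and exports, and on older Backstage releases you may need to invoke the factory, i.e. databaseServiceFactory().
// packages/backend/src/index.ts
import { createBackend } from '@backstage/backend-defaults';
// hypothetical import path - adjust to match your backend-defaults package
import { databaseServiceFactory } from 'backend-defaults';

const backend = createBackend();
backend.add(databaseServiceFactory); // overrides coreServices.database for all plugins
// ...your plugins
backend.start();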
Configuring the httpRouter service
The backend system comes with a default configured httpRouterService, which is good for basic use cases. It contains multiple middleware and is responsible for the default configuration of the Express router.
If the default configuration is not enough, or you made customizations to your router in the old system, you can provide your own configuration like this:
// index.ts
import {
RootHttpRouterConfigureContext,
rootHttpRouterServiceFactory,
} from '@backstage/backend-defaults/rootHttpRouter';
...
backend.add(
rootHttpRouterServiceFactory({
configure: (context: RootHttpRouterConfigureContext) => {
const { app, config, logger, routes, applyDefaults } = context;
// register your custom middlewares
applyDefaults(); // apply the default middlewares from the Backstage core service
},
}),
);
Backstage comes with a default request logger. It cannot be turned off or overridden directly. If you want to replace it (or any of the default middleware) with your own request logger, you have only one choice: skip applyDefaults(), apply each of the default middlewares you still need individually, and add your custom implementations.
// index.ts
...
backend.add(
rootHttpRouterServiceFactory({
configure: (context: RootHttpRouterConfigureContext) => {
const { app, config, logger, routes, applyDefaults } = context;
const backstageMiddlewares = BackstageMiddlewareFactory.create({
config,
logger,
});
// we leave out the backstageMiddlewares.logging() and use our own logger.
app.use(myCustomRequestLoggingHandler());
// add rest of the default middlewares
app.use(backstageMiddlewares.helmet());
app.use(backstageMiddlewares.cors());
app.use(backstageMiddlewares.compression());
app.use(routes);
app.use(backstageMiddlewares.notFound());
app.use(backstageMiddlewares.error());
},
}),
);
Healthcheck
If you want to override the default healthcheck, you can easily do so by attaching a new endpoint for it.
You can create a backend plugin for your healthcheck and then register it in your index.ts file.
Use the backstage-cli to create a new plugin.
// run and select backend-plugin - A new backend plugin
npx backstage-cli new
This will create a new backend plugin inside your project under the plugins folder. It will be named after the ID that you provided in the prompt.
// plugins/healthcheck-backend
import {
  coreServices,
  createBackendPlugin,
} from '@backstage/backend-plugin-api';

export const healthCheck = createBackendPlugin({
  pluginId: 'healthcheck',
  register(env) {
    env.registerInit({
      deps: {
        rootHttpRouter: coreServices.rootHttpRouter,
      },
      init: async ({ rootHttpRouter }) => {
        // attach a simple healthcheck endpoint to the root HTTP router
        rootHttpRouter.use('/healthcheck', (_, res) => {
          res.json({ status: 'ok' });
        });
      },
    });
  },
});
// packages/backend/src/index.ts
backend.add(import('healthcheck-backend'));
Using the lifecycle hooks
The new backend system provides core services to hook into the different lifecycles of the process: a plugin-scoped lifecycle service and a root-scoped rootLifecycle service.
In the old system you might have hooked into the service start promise. In the example below, we call the function runOnStartup once the server has started:
// packages/backend/src/index.backup.ts
const service = createServiceBuilder(module)
service
.start().then((_server: Server) => {
logger.info(`Startup finished`, {
uptime: process.uptime(),
});
runOnStartup({
config
});
})
.catch(err => {
logger.error(
'The http server threw an unexpected error',
isNativeError(err) ? err.message : `unknown error raised: ${err}`,
);
process.exit(1);
});
The same functionality can be achieved with the lifecycle hooks in the new backend system.
Create a module for your functionality. The example demonstrates how to add a lifecycle hook to the tech-insights plugin:
import {
  coreServices,
  createBackendModule,
} from '@backstage/backend-plugin-api';
// featureFlagStoreServiceRef is a custom service reference from your own codebase

export const techInsightsModuleCalculateNew = createBackendModule({
pluginId: 'tech-insights',
moduleId: 'calculate-new',
register(reg) {
reg.registerInit({
deps: {
config: coreServices.rootConfig,
discovery: coreServices.discovery,
featureFlagStore: featureFlagStoreServiceRef,
lifecycle: coreServices.rootLifecycle,
},
async init({ config, discovery, featureFlagStore, lifecycle }) {
const onStartup = () => {
...
};
lifecycle.addStartupHook(onStartup); // Add the function to be run at startup
},
});
},
});
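The plugin-scoped lifecycle service works the same way for teardown. Here is a minimal sketch (the module name and cleanup logic are placeholders) that registers a shutdown hook to close connections or stop background workers when the process exits:
import {
  coreServices,
  createBackendModule,
} from '@backstage/backend-plugin-api';

export const techInsightsModuleCleanup = createBackendModule({
  pluginId: 'tech-insights',
  moduleId: 'cleanup',
  register(reg) {
    reg.registerInit({
      deps: {
        lifecycle: coreServices.lifecycle, // plugin-scoped lifecycle
        logger: coreServices.logger,
      },
      async init({ lifecycle, logger }) {
        lifecycle.addShutdownHook(async () => {
          // close database connections, flush buffers, stop workers, etc.
          logger.info('tech-insights cleanup finished');
        });
      },
    });
  },
});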
Testing the New Backend
Testing your migrated backend is a critical component of any upgrade process. Ensuring that your application functions as expected after migration can prevent a host of issues down the line.
Setting Up a Development/Test Environment
It is strongly recommended to establish a dedicated dev/test environment. This allows you to deploy your new version and conduct tests with minimal traffic interference. Since we are re-architecting the entire Express application and modifying how plugins are integrated, validating that your plugins are accessible is absolutely paramount.
Implementing Health Check Tests
A good starting point for testing is to implement basic health check tests using your existing end-to-end (e2e) or integration testing framework. These tests should verify that each plugin is mounted correctly to its expected path. Here’s an example of how you can do this with Playwright:
import { test, expect } from '@playwright/test';
test.describe('/healthcheck', () => {
test.describe('GET', () => {
test('The healthcheck endpoint is configured', async ({ request }) => {
const result = await request.get(`/healthcheck`);
expect(result.status()).toBe(200);
});
});
});
Reviewing End-to-End Test Coverage
Before migrating, take a moment to review your end-to-end test coverage. Understanding your current testing landscape will help identify gaps, especially given the extensive changes involved in this migration. The more coverage you have over critical services, the better equipped you will be to catch issues early.
Pay special attention to:
- Custom Implementations: Review any middleware, metrics, or logging mechanisms that may impact the migration.
- Company-Specific Plugins: Ensure that any frontend/backend plugins unique to your organization are covered.
- Startup Hooks: Verify that these are functioning correctly post-migration.
- Legacy Functions: Check the usage of custom legacy functions to avoid conflicts with plugin IDs and database names.
Manual Testing
Finally, don't underestimate the importance of manual testing. Take the time to navigate through your application before completing the merge. The migration touches a broad surface area, making it challenging to cover every scenario with automated tests. If you've managed to create comprehensive automated tests—kudos to you! But a thorough manual check can help catch any edge cases that might otherwise be missed.
Conclusion
By following these testing practices—establishing a robust test environment, implementing health checks, reviewing your test coverage, and conducting manual tests—you can significantly reduce the risks associated with your backend migration. This proactive approach will help ensure a smooth transition and a reliable application post-upgrade.
Quirks Encountered in the New Backend System
As we transitioned to the new backend system, we encountered several quirks that are important to highlight, particularly regarding database naming conventions and plugin path correlations.
Database Name and Plugin Path Correlations
In the updated architecture, database creation is directly tied to the plugin ID. While this design should streamline the process, it can lead to issues if there has been a lack of consistency in the previous backend, specifically within the createEnv function.
const createEnv = makeCreateEnv(config);
const todoEnv = useHotMemoize(module, () => createEnv('todo'));
In the legacy system, the database name was simply derived from the parameter passed to the createEnv function. The new backend generates databases using the plugin ID (or the string you pass to the legacy bridge function) and uses that same string as the API path. This is an issue if the parameter to your createEnv function and the path you attached your plugin to were not the same.
In the new system, the plugin ID takes precedence—it dictates both the database name and the API path.
Key Considerations
- Consistency is Crucial: Ensure that you consistently use kebab-case for plugin identifiers across both the legacy and new systems to avoid unintended database creations.
- No Official Override: Currently, there is no configuration option to override this behavior, making it essential to align naming conventions proactively.
Catching Errors in Plugin Configuration
It's crucial to ensure that any misconfigurations or missing settings are promptly identified. One common issue is that if a plugin configuration is not available, the application may still start up without any visible indications of the problem. This can lead to missed error messages and difficult-to-diagnose issues later on.
Effective Monitoring Strategies
To improve your visibility into potential errors during startup, I recommend running your backend service with output filtering focused on error and warning messages. Here’s how you can do this:
- Start Your Application: Launch your backend application as you normally would.
- Use Grep for Real-Time Monitoring: Pipe the output of your application to grep to filter for key terms like "error" and "warn". This can be done in a Unix-like terminal using the following command:
yarn start-backend | grep -E "error|warn"
This command allows you to see real-time log messages that could indicate issues with your plugin configuration.
By integrating these practices into your development workflow, you'll significantly enhance your ability to catch and respond to configuration errors early.
Conclusion
Understanding these quirks is vital for a smooth transition to the new backend system. By ensuring consistent naming conventions, you can mitigate potential issues related to database and plugin management. Catching errors and warnings early can save you a lot of time in the upgrade process. Stay vigilant to avoid complications that could arise from discrepancies in your configurations.