Overview
Booster framework extensions are called Rockets. A Rocket is nothing more than a node package that implements the public Booster Rocket interfaces to add new functionality to Booster applications.
Multi Provider Rocket
Since version 0.24.0, Booster is capable of creating Multi-Provider Rockets. A Multi-Provider Rocket can include implementations for different vendors in the same npm package.
Having all implementations in the same package reduces the need to repeat the same code in different Rocket implementations. Now we only need to publish one Rocket and it can be used by as many providers (AWS, Azure, Local, ...) as we implement support for.
When we build a Multi-Provider Rocket we can extract the common logic into one package and the provider-specific logic into others. In this case we will have at least as many packages as providers we support. For example, if we create a Rocket that supports the Azure and Local Providers and deploys infrastructure, we will have the following packages:
- Core: The entry point of your Rocket. Where the common logic resides.
- Azure: Azure specific logic. For example, code to write to a Cosmos DB table.
- Azure infrastructure: Tells Booster how to build the Azure infrastructure. For example, the Terraform CDK (cdktf) code to add a new CosmosDB table.
- Local: Local specific logic. For example, code to write to a NeDB table.
- Local infrastructure: As with the Azure infrastructure, tells Booster how to create a new NeDB table or a new endpoint.
- Types: Optionally, we could have a package with shared types to be used by different packages.
The Rocket Core is the orchestrator package: it is responsible for calling the correct provider package, and the provider packages in turn use their infrastructure packages as needed.
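As an illustration only (a minimal sketch, not the real Core code, which we will see later in this post), the orchestration boils down to loading the provider package configured by the user and delegating each call to it:
import { BoosterConfig } from '@boostercloud/framework-types'

// Hypothetical helper: load the provider package configured by the user,
// e.g. '@boostercloud/rocket-files-provider-local' or '...-provider-azure'
function loadProvider(rocketProviderPackage: string) {
  return require(rocketProviderPackage)
}

// Each Core method simply delegates to the equivalent provider function
async function presignedPut(config: BoosterConfig, rocketProviderPackage: string, directory: string, fileName: string): Promise<string> {
  return loadProvider(rocketProviderPackage).presignedPut(config, directory, fileName)
}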
Rocket example
A common requirement in many projects is to be able to handle files. Booster Core doesn’t support handling files, so a Rocket could help us with this.
Scaffolding
The files Rocket needs to provide a basic set of functionalities:
- Get a pre-signed url to upload a file to a folder.
- Get a pre-signed url to download a file from a folder.
- List all the files in a folder.
- Following the event sourcing approach, having an event each time a file is uploaded would be useful.
- Allow users to define multiple folders.
This Rocket will provide implementation for Azure and the Local Provider. You can find the full code on Github.
If you have any questions about how to start with Rockets, follow the official guide.
The first step will be to create the project scaffolding.
Local Provider infrastructure
Let’s start by creating the Local Provider infrastructure. In the index.ts file, export a constant that returns an InfrastructureRocket. This object will contain at least the mountStack method, which Booster will call during the deploy process:
const LocalRocketFiles = (params: RocketFilesParams): InfrastructureRocket => ({
mountStack: Infra.mountStack.bind(Infra, params),
})
export default LocalRocketFiles
The InfrastructureRocket needs at least one method to mount the stack. We map this method to our Infra class with the RocketFilesParams.
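For reference, this is roughly the shape of the interface we are implementing here, inferred from how mountStack is used in this post (check the Booster Local Provider infrastructure types for the exact definition):
import { BoosterConfig } from '@boostercloud/framework-types'
import { Router } from 'express'

// Simplified sketch: the Local Provider calls mountStack with the Booster
// config and the Express Router where the Rocket can mount its routes
export interface InfrastructureRocket {
  mountStack: (config: BoosterConfig, router: Router) => void
  unmountStack?: () => void
}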
When a user gets a pre-signed get, put or list url, it should point to an Express route that behaves like the Azure Blob Storage.
For the Local Provider, we publish our endpoints by creating Express routes. Because we want to have different endpoints based on the directory parameter, we iterate over our params to create the needed routes.
export class Infra {
public static mountStack(params: RocketFilesParams, config: BoosterConfig, router: Router): void {
params.params.forEach((parameter: RocketFilesParam) => {
router.use(`/${containerName}`, new FileController(parameter.directory).router)
fsWatch(parameter.directory)
})
}
}
For each directory in the parameter list we create a new Router for the /rocketFiles path. This way we can have a single Rocket to handle multiple directories.
The controller will have two endpoints for each directory, one for uploading files and another one to get our files. This is needed to simulate the Azure Blob get/put methods:
constructor(readonly directory: string) {
this.router.put(`/${directory}/:fileName`, this.uploadFile.bind(this))
this.router.get(`/${directory}/:fileName`, this.getFile.bind(this))
this._path = path.join(process.cwd(), containerName, this.directory)
}
We don’t need an extra endpoint for listing: if the user calls the root url of the directory, Express will just return the expected list of files.
Now we only need to define the methods for getting and putting files. Getting a file is as simple as calling the download method of the express.Response object:
public async getFile(req: express.Request, res: express.Response, next: express.NextFunction): Promise<void> {
const fileName = req.params.fileName
const filePath = path.join(this._path, fileName)
res.download(filePath)
}
Writing a file requires a few steps. First we need to create the destination path and then use the express.Request object to write the stream:
const fileName = req.params.fileName
const filePath = path.join(this._path, fileName)
const writeStream = fs.createWriteStream(filePath)
req.pipe(writeStream)
req.on('end', function () {
const result = {
status: 'success',
result: {
message: 'File uploaded',
data: {
name: fileName,
mimeType: DEFAULT_MIME_TYPE,
size: DEFAULT_FILE_SIZE,
},
},
} as APIResult
res.send(result)
})
writeStream.on('error', async function (e) {
const err = e as Error
await requestFailed(err, res)
next(e)
})
Now that we have the main methods, we need to implement the file system watcher. This will detect any changes to a folder and emit the Booster event we want to notify. For this, we implement a fsWatch function that watches the folders and calls the boosterRocketDispatcher method.
export function fsWatch(directory: string): void {
const _path = path.join(process.cwd(), containerName, directory)
if (!fs.existsSync(_path)) {
fs.mkdirSync(_path, { recursive: true })
}
fs.watch(_path, async (eventType: 'rename' | 'change', filename: string) => {
await boosterRocketDispatcher({
name: filename,
})
})
}
The boosterRocketDispatcher is a method Booster provides to interact with the Core functionalities. This method will dispatch the request payload to the function defined in the BOOSTER_ROCKET_FUNCTION_ID environment variable. We need to define this variable in the mountStack method for the Local Provider, so let’s add it:
export class Infra {
public static mountStack(params: RocketFilesParams, config: BoosterConfig, router: Router): void {
process.env[rocketFunctionIDEnvVar] = functionID
params.params.forEach((parameter: RocketFilesParam) => {
router.use(`/${containerName}`, new FileController(parameter.directory).router)
fsWatch(parameter.directory)
})
}
}
The boosterRocketDispatcher function not only forwards the request payload, but also gives access to the Booster config variable.
With this call, we can be sure that each time a file is dropped in the directory, Booster will dispatch to our handler. This handler will be defined in the Rocket Core package.
We are connecting our file watcher to our Local Provider logic through the Booster Core dispatcher, and all the providers will use this same flow to implement the connection.
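Summarizing the pieces we have built so far:
// Local Provider flow:
//
//   file created or changed in <cwd>/<containerName>/<directory>
//     -> fsWatch callback fires
//       -> boosterRocketDispatcher({ name: filename })
//         -> Booster Core looks up the function registered under the
//            BOOSTER_ROCKET_FUNCTION_ID environment variable
//           -> fileUploaded(config, request, params) in the Rocket Core package
//             -> an event is stored through config.provider.events.store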
Local Provider
The Local Provider package (rocket-files-local) implements the logic of the methods we will expose for our Express server:
- Get a presigned get url to get a file
- Get a presigned put url to put a file
- List a directory
To return the get and put urls we only need to concatenate the information we already have:
export async function presignedGet(config: BoosterConfig, directory: string, fileName: string): Promise<string> {
return `http://localhost:3000/${containerName}/${directory}/${fileName}`
}
export async function presignedPut(config: BoosterConfig, directory: string, fileName: string): Promise<string> {
return `http://localhost:3000/${containerName}/${directory}/${fileName}`
}
Those methods will be called from the Core package.
Listing all files in a directory is simply a matter of using the fs module:
export async function list(config: BoosterConfig, directory: string): Promise<Array<ListItem>> {
const result = [] as Array<ListItem>
const _path = path.join(process.cwd(), containerName, directory)
const files = fs.readdirSync(_path)
files.forEach((file) => {
const stats = fs.statSync(path.join(_path, file))
result.push({
name: file,
properties: {
lastModified: stats.ctime,
},
})
})
return result
}
Core
Once we have the Local Provider implementation, it’s time to build our Core package. This package calls the proper provider depending on the application configuration. The first step is to register our Rocket:
export class BoosterRocketFiles {
public constructor(readonly config: BoosterConfig, readonly params: RocketFilesParams) {
config.registerRocketFunction(functionID, async (config: BoosterConfig, request: unknown) => {
return fileUploaded(config, request, params)
})
}
}
We register our Rocket using a unique functionID identifier (see rocket-files-params.ts) and a Rocket function that Booster Core will call whenever our function is triggered. We will configure this trigger in the Azure provider later; for the Local Provider, it was set up in the fsWatch method.
We also need to provide two methods so our clients can set up the Rocket for each provider:
public rocketForAzure(): RocketDescriptor {
return {
packageName: '@boostercloud/rocket-files-provider-azure-infrastructure',
parameters: this.params,
}
}
public rocketForLocal(): RocketDescriptor {
return {
packageName: '@boostercloud/rocket-files-provider-local-infrastructure',
parameters: this.params,
}
}
These methods will be used later by the clients.
Next we need to implement the public methods that our clients will use. From the Core point of view, these methods should call the corresponding logic of the provider configured in the application. Now let’s define a class with the get, put and list methods:
public presignedGet(directory: string, fileName: string): Promise<string> {
this.checkDirectory(directory)
return this._provider.presignedGet(this.config, directory, fileName)
}
public presignedPut(directory: string, fileName: string): Promise<string> {
this.checkDirectory(directory)
return this._provider.presignedPut(this.config, directory, fileName)
}
public list(directory: string): Promise<Array<ListItem>> {
this.checkDirectory(directory)
return this._provider.list(this.config, directory)
}
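The checkDirectory helper is not shown above. As a purely hypothetical sketch (the real implementation is in the repository), it only needs a reference to the Rocket parameters to reject directories that were never configured:
private checkDirectory(directory: string): void {
  // Hypothetical sketch: reject calls for directories that were not configured
  const isConfigured = this.params.params.some(
    (param: RocketFilesParam) => param.directory === directory
  )
  if (!isConfigured) {
    throw new Error(`Directory "${directory}" is not configured for this Rocket`)
  }
}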
Each method delegates to the corresponding provider method, so a client using our Rocket Core package only talks to the Core. For example, if a client wants to get a presignedPut url, it can call our presignedPut method:
@Returns(String)
public static async handle(command: FileUploadPut, register: Register): Promise<string> {
const boosterConfig = Booster.config
const fileHandler = new FileHandler(boosterConfig)
return await fileHandler.presignedPut(command.directory, command.fileName)
}
We only need to implement the fileUploaded method in our Core package to generate a Booster event for each file uploaded. This event will be persisted using the config object. The config object gives us access to the provider events store:
async function processEvent(config: BoosterConfig, metadata: unknown): Promise<void> {
try {
const envelop = toEventEnvelop(metadata)
await config.provider.events.store([envelop], config, console)
} catch (e) {
console.log('[ROCKET#files] An error occurred while performing a PutItem operation: ', e)
}
}
So, for each file uploaded we are going to store an event:
const provider = require(params.rocketProviderPackage)
const metadata = provider.getMetadataFromRequest(request)
if (provider.validateMetadata(params, metadata)) {
return processEvent(config, metadata)
}
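The getMetadataFromRequest and validateMetadata functions are implemented by each provider package. For the Local Provider, where the request is the { name: filename } payload sent from fsWatch, a minimal hypothetical sketch could look like this (the real code lives in the repository):
export function getMetadataFromRequest(request: unknown): unknown {
  // The local file watcher already sends plain metadata, so we can return it as-is
  return request
}

export function validateMetadata(params: RocketFilesParams, metadata: unknown): boolean {
  // Ignore events without a file name (fs.watch can emit them)
  const name = (metadata as { name?: string })?.name
  return !!name
}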
Azure Provider infrastructure
Once we have the Local and Core functionality, we can start with the Azure provider infrastructure. As with the Local Provider, we need to export a constant with the mountStack method that builds the InfrastructureRocket:
const AzureRocketFiles = (params: RocketFilesParams): InfrastructureRocket => ({
mountStack: Synth.mountStack.bind(Synth, params),
})
export default AzureRocketFiles
The infrastructure we need to build in Azure consists of:
- A storage account
- A container
- A function
Using cdktf, it's easy to create new TerraformResources and add them to the applicationSynthStack.rocketStack:
const rocketStack = applicationSynthStack.rocketStack ?? []
const rocketStorage = TerraformStorageAccount.build(terraformStack, resourceGroup, appPrefix, utils, config)
rocketStack.push(rocketStorage)
For the FunctionApp we will need to set some specific values to tell Azure how to connect to our blob storage and our functionID:
return new FunctionApp(terraformStack, id, {
name: functionAppName,
location: resourceGroup.location,
resourceGroupName: resourceGroup.name,
appServicePlanId: applicationServicePlan.id,
appSettings: {
FUNCTIONS_WORKER_RUNTIME: 'node',
AzureWebJobsStorage: storageAccount.primaryConnectionString,
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING: storageAccount.primaryConnectionString,
WEBSITE_RUN_FROM_PACKAGE: '',
WEBSITE_CONTENTSHARE: id,
WEBSITE_NODE_DEFAULT_VERSION: '~14',
...config.env,
BOOSTER_ENV: config.environmentName,
BOOSTER_REST_API_URL: `https://${apiManagementServiceName}.azure-api.net/${config.environmentName}`,
COSMOSDB_CONNECTION_STRING: `AccountEndpoint=https://${cosmosDatabaseName}.documents.azure.com:443/;AccountKey=${cosmosDbConnectionString};`,
BOOSTER_ROCKET_FUNCTION_ID: functionID,
ROCKET_FILES_BLOB_STORAGE: rocketStorageAccount.primaryConnectionString,
},
osType: 'linux',
storageAccountName: storageAccount.name,
storageAccountAccessKey: storageAccount.primaryAccessKey,
version: '~3',
dependsOn: [resourceGroup],
lifecycle: {
ignoreChanges: ['app_settings["WEBSITE_RUN_FROM_PACKAGE"]'],
},
})
Next, we need to return our updated applicationSynthStack.
return applicationSynthStack
Finally we have to implement the functionApp code. For this we need to return an Array of our function definitions:
export class RocketFilesFileUploadedFunction {
static getFunctionDefinition(config: BoosterConfig): BlobFunctionDefinition {
return {
name: 'fileupload',
config: {
bindings: [
{
type: 'blobTrigger',
direction: 'in',
name: 'blobUpload',
path: `${containerName}/{name}`,
connection: 'ROCKET_FILES_BLOB_STORAGE',
},
],
scriptFile: config.functionRelativePath,
entryPoint: config.rocketDispatcherHandler.split('.')[1],
},
}
}
}
We are creating a definition of a function that will be bound to a blob path. The entryPoint and the scriptFile parameters will help Booster to execute our code. Booster provides utilities to connect with the Booster Core in the config object.
The last step for the Azure infrastructure package is to bind the remaining methods to mount the functions and to get the functionApp name. Our final implementation will be:
const AzureRocketFiles = (params: RocketFilesParams): InfrastructureRocket => ({
mountStack: Synth.mountStack.bind(Synth, params),
mountFunctions: Functions.mountFunctions.bind(Synth, params),
getFunctionAppName: Functions.getFunctionAppName.bind(Synth, params),
})
export default AzureRocketFiles
Azure Provider
The get, put, list and file-uploaded methods for Azure are implemented in the Azure provider package.
Let’s build a class to get all the information Azure provides using the @azure/storage-blob package. The get method will be:
public getBlobSasUrl(
directory: string,
fileName: string,
permissions = this.DEFAULT_PERMISSIONS,
expiresOnSeconds = this.DEFAULT_EXPIRES_ON_SECONDS
): string {
const key = BlobService.getKey()
const blobName = BlobService.getBlobName(directory, fileName)
const credentials = this.getCredentials(key)
const client = this.getClient(credentials)
const blobSASQueryParameters = BlobService.getBlobSASQueryParameters(
blobName,
permissions,
expiresOnSeconds,
credentials
)
const containerClient = client.getContainerClient(containerName)
const blobClient = containerClient.getBlobClient(blobName)
return blobClient.url + '?' + blobSASQueryParameters
}
We need to build other methods in the same way. The important part here is that we are returning a pre-signed url using the key we have defined in our function:
private static getKey(): string {
return process.env['ROCKET_STORAGE_KEY'] ?? ''
}
This configuration value is set in the Azure infrastructure package:
applicationSynthStack.functionApp!.addOverride('app_settings', {
ROCKET_STORAGE_KEY: `${rocketStorage.primaryAccessKey}`,
})
Once we have all the needed functions, we will export them:
export async function presignedGet(config: BoosterConfig, directory: string, fileName: string): Promise<string> {
const storageAccount = storageName(config.appName)
return new BlobService(storageAccount).getBlobSasUrl(directory, fileName)
}
export async function presignedPut(config: BoosterConfig, directory: string, fileName: string): Promise<string> {
const storageAccount = storageName(config.appName)
return new BlobService(storageAccount).getBlobSasUrl(directory, fileName, WRITE_PERMISSION)
}
export async function list(config: BoosterConfig, directory: string): Promise<Array<ListItem>> {
const storageAccount = storageName(config.appName)
return new BlobService(storageAccount).listBlobFolder(directory)
}
A client application
To add this Rocket to our Booster application, first we need to update our dependencies:
npm i --save @boostercloud/rocket-files-core
npm i --save @boostercloud/rocket-files-types
npm i --save @boostercloud/rocket-files-provider-local
npm i --save @boostercloud/rocket-files-provider-azure
Then we add the dev dependencies for the infrastructure packages:
npm i --save-dev @boostercloud/rocket-files-provider-azure-infrastructure
npm i --save-dev @boostercloud/rocket-files-provider-local-infrastructure
Next, we configure our Rocket in the config.ts file. For Azure:
Booster.configure('production', (config: BoosterConfig): void => {
config.appName = 'test-rockets-files020'
config.providerPackage = '@boostercloud/framework-provider-azure'
config.rockets = [
new BoosterRocketFiles(config, {
rocketProviderPackage: '@boostercloud/rocket-files-provider-azure' as RocketProviderPackageType,
params: [
{
directory: 'folder01',
},
{
directory: 'folder02',
},
],
} as RocketFilesParams).rocketForAzure(),
]
})
And for Local:
Booster.configure('local', (config: BoosterConfig): void => {
config.appName = 'test-rockets-files020'
config.providerPackage = '@boostercloud/framework-provider-local'
config.rockets = [
new BoosterRocketFiles(config, {
rocketProviderPackage: '@boostercloud/rocket-files-provider-local' as RocketProviderPackageType,
params: [
{
directory: 'folder01',
},
{
directory: 'folder02',
},
],
} as RocketFilesParams).rocketForLocal(),
]
})
To use our Rocket from the application, we create commands that add GraphQL mutations. The command to get a pre-signed put url to upload a file will be:
export class FileUploadPut {
public constructor(readonly directory: string, readonly fileName: string) {}
@Returns(String)
public static async handle(command: FileUploadPut, register: Register): Promise<string> {
const boosterConfig = Booster.config
const fileHandler = new FileHandler(boosterConfig)
return await fileHandler.presignedPut(command.directory, command.fileName)
}
}
And for getting a pre-signed url to get a file and to list all the files:
export class FileUploadGet {
public constructor(readonly directory: string, readonly fileName: string) {}
@Returns(String)
public static async handle(command: FileUploadGet, register: Register): Promise<string> {
const boosterConfig = Booster.config
const fileHandler = new FileHandler(boosterConfig)
return await fileHandler.presignedGet(command.directory, command.fileName)
}
}
export class FileUploadList {
public constructor(readonly directory: string) {}
@Returns(String)
public static async handle(command: FileUploadList, register: Register): Promise<string> {
const boosterConfig = Booster.config
const fileHandler = new FileHandler(boosterConfig)
const listItems = await fileHandler.list(command.directory)
return '[' + listItems.map((item: ListItem) => JSON.stringify(item)).join(',') + ']'
}
}
Now that we have all the functionalities ready to handle files, let’s create a ReadModel that projects the file upload events so that we know which files were uploaded. This ReadModel projects the UploadedFileEntity entity and is defined as:
export class UploadedFileEntityReadModel {
public constructor(public id: string, readonly metadata: unknown) {}
@Projects(UploadedFileEntity, 'id')
public static projectUploadedFileEntity(
entity: UploadedFileEntity,
currentUploadedFileEntityReadModel?: UploadedFileEntityReadModel
): ProjectionResult<UploadedFileEntityReadModel> {
console.log(`ReadModel Projects UploadedFileEntityReadModel ${entity}`)
return new UploadedFileEntityReadModel(entity.id, entity.metadata)
}
}
And that’s all. Run your Booster application and you will see the new mutations:
mutation {
FileUploadPut(input: {directory: "folder01", fileName: "3.txt"})
}
mutation {
FileUploadGet(input: {directory: "folder01", fileName: "3.txt"})
}
mutation {
FileUploadList(input: {directory: "folder02"})
}
And a query to get the uploaded files:
query{
UploadedFileEntityReadModels(filter: {}){
id
metadata
}
}
Both work on the Local Provider just as they do on the Azure provider.
Conclusions
With this real-world example we have reviewed how to extend Booster logic and infrastructure with support for any provider.
If you want to know more about how to create Rockets, please go to the official documentation.
Last but not least, if you have any questions about Booster or any related topic, we will be glad to hear them on our community channel.