Welcome back to another super post about microservices and NestJS! We have been working on this project for a while, and I decided to share with you how to use it and why we built a template project. So... let's begin!
Remember: the template project generator package is publicly hosted on GitHub
What are schematics?
If you don't know what schematics are, I'll give a quick overview of what they are, but I recommend taking a look at the official documentation.
To sum it up, schematics are a sort of blueprint, but for code. You define, with a special templating language, the files you want to generate, and then run the schematics engine on them to produce the final files.
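To give you an idea, a schematic is basically a factory function that returns a Rule describing the file transformations to apply. The snippet below is a minimal sketch (not taken from the actual template) of what such a factory could look like with @angular-devkit/schematics, assuming the templates live in a ./files folder:
import { strings } from '@angular-devkit/core';
import { Rule, apply, applyTemplates, chain, mergeWith, url } from '@angular-devkit/schematics';
// Hypothetical 'app' schematic factory: copies the templates under ./files,
// filling in placeholders with the options collected from the CLI prompts.
export function app(options: { name: string }): Rule {
  return chain([
    mergeWith(
      apply(url('./files'), [
        applyTemplates({
          ...strings, // string helpers (dasherize, classify, ...) available inside templates
          ...options, // the answers given to the prompts
        }),
      ]),
    ),
  ]);
}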
And what does all of this have to do with NestJS? The cool thing is that the Nest CLI uses Angular schematics under the hood and, even better, it allows you to reference a custom collection of schematics!
That means you can create your own custom schematics and still get all the features of the Nest CLI. Cool, right?
Why a project generator?
We at Rebellion believe a lot in the concept that Netflix presented in one of their tech talks, known as "paved roads".
Paved roads mean (from my point of view) developing and having the tools needed to speed up repetitive tasks.
For example, every time we have to create a new project for a microservice we have to generate the default NestJS project, add the microservices package, add the NATS package, add a persistence layer, create some global interceptors... This set of tasks is always the same and can be automated in an easy way. And that way is using a generator with schematics.
How to use it?
Using the project is simple: you just have to install it as a global NPM package and reference it in the Nest CLI.
npm install -g @rebellionpay/nest-ms-template
# for generating an app
nest g -c @rebellionpay/nest-ms-template app
# for generating a controller
nest g -c @rebellionpay/nest-ms-template ctrl
Then you just have to answer the questions and the files are going to be generated.
What does this template include?
THIS IS REALLY IMPORTANT
This project is, as said, a paved road for Rebellion Pay. This means that the functionalities included are the ones that fit the needs of our company and our team.
That being said, let me guide you through the generated files of a project.
These are the answers given to the questions:
? What name would you like to use for the new project? sample-project
? Who is the author of the project? Rebellion Pay <backend@rebellionpay.com>
? Which is the description of the project? This is a sample project
? Which is the project license? MIT
? In which port will it run? 3000
? Which transport layer would you like to use? NATS
? Are you building a pure app? No
? Are you going to use persistence in a DB? Yes
? Which database would you like to use? mongodb
? Are you going to make the CD with spinnaker? Yes
? What is the spinnaker API url? https://testing.spinnaker.com
? In which kubernetes namespace will this app going to be deployed? default
For that set of answers, the following files are generated:
.
├── .vscode
│ └── launch.json
├── kubernetes
│ └── manifest.yml
├── src
│ ├── config
│ │ ├── MongoConfigService.ts
│ │ └── NATSConfigService.ts
│ ├── factory
│ │ └── winstonConfig.ts
│ ├── filters
│ │ └── ExceptionsFilter.ts
│ ├── interceptors
│ │ ├── InjectMetadataInterceptor.ts
│ │ ├── MetricsInterceptor.ts
│ │ └── TimeoutInterceptor.ts
│ ├── interface
│ │ └── MicroserviceMessage.ts
│ ├── message
│ │ ├── message.module.ts
│ │ ├── message.service.spec.ts
│ │ └── message.service.ts
│ ├── metrics
│ │ ├── metrics.module.ts
│ │ └── metrics.service.ts
│ ├── app.controller.spec.ts
│ ├── app.controller.ts
│ ├── app.module.ts
│ ├── app.service.ts
│ └── main.ts
├── test
│ ├── app.e2e-spec.ts
│ └── jest-e2e.json
├── .env.example
├── .eslintignore
├── .eslintrc.js
├── .gitignore
├── .gitlab-ci.yml
├── .prettierrc
├── README.md
├── nest-cli.json
├── package.json
├── tsconfig.build.json
└── tsconfig.json
In the root folder we have a standard README file, the TypeScript config files, a package.json and a nest-cli.json, which is used for monorepo projects. We also have a sample .env file and the ESLint/Prettier config files. And, lastly, a .gitignore and a GitLab CI file.
There are also a .vscode and a kubernetes folder. The .vscode folder has a launch.json with a standard config for NestJS projects. The kubernetes folder contains a manifest file with a service and a deployment.
We are going to use block quotes to explain the differences in the code if you choose other answers to the questions.
main.ts
In the src folder we have the main.ts file. It contains the function that bootstraps the Nest app. In the case of our answers, main.ts bootstraps a hybrid app.
if (process.env.NODE_ENV === 'production') {
  initializeAPMAgent({
    serverUrl: process.env.ELASTIC_APM_SERVER_URL,
    serviceName: process.env.ELASTIC_APM_SERVICE_NAME,
    secretToken: process.env.ELASTIC_APM_SECRET_TOKEN,
  });
}

async function bootstrap() {
  const logger: LoggerConfig = new LoggerConfig();
  const winstonLogger = WinstonModule.createLogger(logger.console());

  const app = await NestFactory.create(AppModule.register(), {
    cors: true,
    logger: winstonLogger,
  });

  const natsConfigService: NATSConfigService = app.get(NATSConfigService);
  const configService: ConfigService = app.get<ConfigService>(ConfigService);

  app.use(helmet());
  app.useGlobalFilters(new AllExceptionsFilter());

  app.connectMicroservice({
    ...natsConfigService.getNATSConfig,
  });

  const globalInterceptors = [];
  if (process.env.NODE_ENV === 'production') {
    globalInterceptors.push(
      app.get(ApmHttpUserContextInterceptor),
      app.get(ApmErrorInterceptor),
    );
    const apmMiddleware = app.get(APM_MIDDLEWARE);
    app.use(apmMiddleware);
  }
  globalInterceptors.push(new TimeoutInterceptor());
  app.useGlobalInterceptors(...globalInterceptors);

  const port = configService.get<number>('PORT') || 3000;

  await app.startAllMicroservicesAsync();
  await app.listen(port, () => winstonLogger.log(`Hybrid sample-project test running on port ${port}`));
}
bootstrap();
As you can see, we start by configuring the APM agent, which sends data to Elastic APM. This is one of the tools we use for gathering metrics from our services.
We also use Winston for logging. That's why we instantiate the logger first and pass it to the Nest factory.
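The LoggerConfig class itself is not listed in this post (it lives in the generated factory folder, in winstonConfig.ts). A minimal sketch of what such a factory could look like, assuming a single console transport, is:
import { format, transports } from 'winston';
import { WinstonModuleOptions } from 'nest-winston';

// Hypothetical sketch: returns Winston options with one console transport.
export class LoggerConfig {
  console(): WinstonModuleOptions {
    return {
      transports: [
        new transports.Console({
          level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
          format: format.combine(format.timestamp(), format.json()),
        }),
      ],
    };
  }
}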
Then we add a global interceptor and a global filter. If we are in production, we finish setting up APM and push its interceptors.
The filter we are using is for catching all exceptions that occur in the app.
import { ArgumentsHost, Catch, ExceptionFilter, HttpException } from '@nestjs/common';

@Catch()
export class AllExceptionsFilter implements ExceptionFilter {
  catch(exception: unknown, host: ArgumentsHost): void {
    const ctx = host.switchToHttp();
    const request = ctx.getRequest();
    const response = ctx.getResponse();

    let status = 500;
    if (exception instanceof HttpException) {
      status = exception.getStatus();
    }

    response.status(status).json({
      statusCode: status,
      timestamp: new Date().toISOString(),
      path: request.url,
    });
  }
}
You can see that the filter just catches the exceptions and returns a standard response.
On the other hand, the timeout interceptor acts on every message sent, and if it takes more than 5 seconds to resolve, it throws an exception.
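The generated interceptor is essentially the classic RxJS timeout pattern; a minimal sketch (the exact exception type is an assumption on my side) would be:
import { CallHandler, ExecutionContext, Injectable, NestInterceptor, RequestTimeoutException } from '@nestjs/common';
import { Observable, TimeoutError, throwError } from 'rxjs';
import { catchError, timeout } from 'rxjs/operators';

@Injectable()
export class TimeoutInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, next: CallHandler): Observable<unknown> {
    return next.handle().pipe(
      // Fail the request if the handler does not resolve within 5 seconds.
      timeout(5000),
      catchError((err) =>
        err instanceof TimeoutError ? throwError(new RequestTimeoutException()) : throwError(err),
      ),
    );
  }
}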
If you choose a pure app, the code is almost the same, but we don't have to define any port, and the metadata interceptor can also be used as a global one.
app.*.ts
The app.*.ts files contain the main module code.
app.module.ts
@Module({})
export class AppModule {
  public static register(): DynamicModule {
    const imports = [
      MongooseModule.forRootAsync({
        useClass: MongoConfigService,
      }),
      WinstonModule.forRoot(logger.console()),
      ConfigModule.forRoot({
        isGlobal: true,
        validationSchema: JoiObject({
          NODE_ENV: JoiString()
            .valid('development', 'production', 'test')
            .default('development'),
          PORT: JoiNumber().port().default(3030),
          ELASTIC_APM_SERVER_URL: JoiString(),
          ELASTIC_APM_SERVICE_NAME: JoiString(),
          ELASTIC_APM_SECRET_TOKEN: JoiString(),
          NATS_URL: JoiString().required().default('nats://localhost:4222'),
          NATS_USER: JoiString(),
          NATS_PASSWORD: JoiString(),
          MONGODB_URI: JoiString().uri().required(),
          INFLUX_URL: JoiString().uri(),
        }),
      }),
      MetricsModule,
      MessageModule,
    ];

    if (process.env.NODE_ENV === 'production') {
      imports.push(
        ApmModule.forRootAsync({
          useFactory: async () => {
            return {
              httpUserMapFunction: (req: any) => {
                return {
                  ...req,
                };
              },
            };
          },
        }),
      );
    }

    const controllers = [AppController];
    const providers = [NATSConfigService, AppService];

    return {
      module: AppModule,
      imports,
      controllers,
      providers,
    };
  }
}
As you can see, we are using a dynamic module. We need it because we only register the APM module in production.
We use joi for validating the config that has to be present in the .env file.
In this case we are using mongoose for persistence but there is an option for choosing also MySQL and PostgreSQL.
We also import two modules, the metrics and message modules.
If you choose a pure app the metrics interceptor is going to be used also as a provider.
If you use a different persistence layer, you will see a different config service injected there. If no persistence is chosen, no config service is injected.
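To give you an idea, a config service like the generated MongoConfigService can be as small as a MongooseOptionsFactory that reads the URI from the environment. This is a sketch, not necessarily the exact generated code:
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { MongooseModuleOptions, MongooseOptionsFactory } from '@nestjs/mongoose';

@Injectable()
export class MongoConfigService implements MongooseOptionsFactory {
  constructor(private readonly configService: ConfigService) {}

  // Called by MongooseModule.forRootAsync({ useClass: MongoConfigService }).
  createMongooseOptions(): MongooseModuleOptions {
    return {
      uri: this.configService.get<string>('MONGODB_URI'),
    };
  }
}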
metrics|message.module.ts
The metrics module is simple and is used together with the metrics interceptor. The interceptor is placed in the middle of every request (HTTP or RPC) and calls the metrics service to send some data to our metrics infrastructure, InfluxDB.
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import * as Influx from 'influx';

@Injectable()
export class MetricsService {
  constructor(private configService: ConfigService) {}

  private influx = new Influx.InfluxDB(this.configService.get<string>('INFLUX_URL') || 'http://localhost:8086/telegraf');

  async send(measurement: string, fields: Record<string, any>): Promise<void> {
    await this.influx.writePoints([{
      measurement,
      fields,
    }]);
  }
}
It is what it is: a service for sending metrics to InfluxDB.
Then we have the message module. That module just imports the message service, which implements the same functions as the NestJS ClientProxy.
import { Inject, Injectable } from '@nestjs/common';
import { ClientProxy } from '@nestjs/microservices';
import { Observable } from 'rxjs';
import { MicroserviceMessage } from '../interface/MicroserviceMessage';

@Injectable()
export class MessageService {
  constructor(@Inject('MESSAGE_CLIENT') private client: ClientProxy) {}

  sendMessage<Toutput = any, Tinput = any>(pattern: Record<string, any> | string, message: MicroserviceMessage<Tinput>): Observable<Toutput> {
    return this.client.send<Toutput, MicroserviceMessage<Tinput>>(pattern, message);
  }

  emitMessage<Toutput = any, Tinput = any>(pattern: Record<string, any> | string, message: MicroserviceMessage<Tinput>): Observable<Toutput> {
    return this.client.emit<Toutput, MicroserviceMessage<Tinput>>(pattern, message);
  }
}
We use these wrapping functions because we want to centralize message sending in one place: if, at some point, we need to add more metrics, controls or anything else around sending messages, we only have to change it in a single place.
We also define our own MicroserviceMessage interface which includes, besides the pattern and payload, some metadata we use for logging.
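The exact interface is not listed in this post, but from the way it is used in the controller and message service, it boils down to something like this (anything beyond the data and metadata fields is an assumption):
// Sketch of the message shape used in the examples below.
export interface MicroserviceMessage<Tinput = any> {
  data: Tinput;                  // the actual payload
  metadata: Record<string, any>; // tracing/logging info, e.g. the request ID and the pattern
}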
Interceptors
In the project we have 3 interceptors: one for metrics, one for metadata, and a timeout interceptor. The timeout interceptor has already been explained above.
The metadata interceptor is another simple interceptor. It takes every RPC message, gets the pattern of the message (as we use NATS) and injects it into the message payload. We use this so that, for each request, we can print the metadata of the message along with the message pattern that triggered that log line. This allows us to trace (for a specific request ID) every message pattern sent in a user flow.
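A sketch of how such an interceptor can look with the NATS transport (where the subject carries the serialized pattern) is shown below; the actual generated file may differ:
import { CallHandler, ExecutionContext, Injectable, NestInterceptor } from '@nestjs/common';
import { NatsContext } from '@nestjs/microservices';
import { Observable } from 'rxjs';

@Injectable()
export class InjectMetadataInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, next: CallHandler): Observable<unknown> {
    if (context.getType() === 'rpc') {
      const payload = context.switchToRpc().getData();
      const natsContext = context.switchToRpc().getContext<NatsContext>();
      if (payload?.metadata) {
        // With NATS, the subject is the (serialized) message pattern.
        payload.metadata.pattern = natsContext.getSubject();
      }
    }
    return next.handle();
  }
}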
And the metrics interceptor is just an interceptor that sends metrics using the previously seen metrics service. It acts on each request and, depending on the request type (HTTP or RPC), gathers different metrics. At the end, it sends those metrics to InfluxDB using the metrics service.
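Again as a rough sketch (the measurement name and fields below are placeholders, not the real ones), the idea is little more than timing the handler and handing the result to the service shown earlier:
import { CallHandler, ExecutionContext, Injectable, NestInterceptor } from '@nestjs/common';
import { Observable } from 'rxjs';
import { tap } from 'rxjs/operators';
import { MetricsService } from '../metrics/metrics.service';

@Injectable()
export class MetricsInterceptor implements NestInterceptor {
  constructor(private readonly metricsService: MetricsService) {}

  intercept(context: ExecutionContext, next: CallHandler): Observable<unknown> {
    const startedAt = Date.now();
    const type = context.getType(); // 'http' | 'rpc'

    return next.handle().pipe(
      tap(() => {
        // Placeholder measurement/fields; the generated interceptor gathers
        // different data depending on the request type.
        this.metricsService.send('requests', {
          type,
          handler: context.getHandler().name,
          durationMs: Date.now() - startedAt,
        });
      }),
    );
  }
}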
Example service and controller
We also generate an app.service.ts and an app.controller.ts to illustrate the required configuration for each of them.
@UseInterceptors(MetricsInterceptor, InjectMetadataInterceptor)
@Controller()
export class AppController {
  constructor(
    private readonly appService: AppService,
    private readonly messageService: MessageService,
    @Inject(WINSTON_MODULE_PROVIDER) private readonly logger: Logger,
  ) {}

  @MessagePattern({ cmd: 'YOUR_CMD' })
  yourMessageHandler(@Payload() message: MicroserviceMessage): Record<string, unknown> {
    this.logger.info(message.data, message.metadata);
    return { data: message.data, metadata: message.metadata };
  }

  @Post()
  async yourPostHandler(
    @Body() body: Record<string, any>,
    @Headers('x-request-id') reqId: string,
  ): Promise<unknown> {
    this.logger.info({ body, reqId });
    const response = await this.messageService.sendMessage({ cmd: 'YOUR_CMD' }, { data: body, metadata: { reqId } });
    return (await response.toPromise()).data;
  }

  @Get()
  getHello(): string {
    return this.appService.getHello();
  }

  @Get('/status')
  getStatus(): string {
    return '[OK]';
  }
}
The main things here are the interceptors used. The metrics and metadata interceptors have to be injected here because, in a hybrid app, if you use them globally they are only triggered for HTTP requests.
All the other functions are standard controller functions, along with an example message pattern handler.
We use the @Headers('x-request-id') reqId: string annotation because we use Kong as our gateway and have a plugin set up to assign a unique ID to each request.
If you use a pure app, the interceptors are not placed there because they can be used globally. Also, you will not have the HTTP endpoints.
BONUS TRACK: Generating controllers
As I said, the controllers are different depending on whether you are using a pure app or a hybrid one. That's why the template project also supports generating a controller. The questions are really simple, and the only difference between a pure app controller and a hybrid app one is the interceptors.
If you use a pure app they won't be injected; if you use a hybrid one, they will.
Conclusion
As said at the beginning of the post, this is a paved road for our company. I think it's a cool project that can act as a starting point for someone else's project, or be useful for anyone bootstrapping their own microservices projects.
Please, feel free to comment if you want me to write another post talking about the technical part of the schematics and how I built this template project.
Also feel free to file an issue in the project repo if something is missing, or even open a pull request.
See you in the next post!!