The Memory App has a well-organized project structure made up of several important files and directories. The following is a brief description of each:
- The `app.js` file serves as the entry point for the application.
- The `config` directory contains the `db.js` file, which sets up the database connection.
- The `controllers` directory includes files that handle the logic for the different routes.
- The `middleware` directory includes files that handle authentication and error handling.
- The `models` directory includes files that define the schemas for the different data types.
- The `routes` directory includes files that define the various API endpoints for the app.
- The `services` directory includes files that handle the business logic for the different features.
- The `utils` directory includes files that provide utility functions.
- The `validators` directory includes files that handle input validation.
Docker is a containerization platform that enables developers to package their applications and dependencies together in a portable container. This provides consistent and reproducible environments across development, testing, and production. The Memory App can be easily installed and run using Docker.

Before installing the Memory App, make sure Docker is installed on your system. You can download Docker Desktop for Windows or Mac, or install Docker Engine for Linux.
Once Docker is installed, navigate to the root directory of the Memory App in your terminal. The app is configured to run using Docker Compose, a tool for defining and running multi-container Docker applications. The app's Docker Compose configuration is defined in the `docker-compose.yml` file.
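For orientation, a base Compose file for a Node/MongoDB app of this shape might look roughly like the following. The service names, port mapping, and image tag here are assumptions for illustration, not the app's actual file:

```yaml
version: "3.8"
services:
  api:
    build: .
    ports:
      - "5000:5000"   # assumed port, matching the npm run dev setup
    env_file:
      - .env
    depends_on:
      - mongo
  mongo:
    image: mongo:6    # illustrative image tag
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
```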
To start the app, run `docker-compose up` in the terminal. This command starts the app's containers and makes the app available at `localhost:5000`.
To stop the app and remove the containers, run the command `docker-compose down`.
To run the app in production mode, use the command `docker-compose -f docker-compose.yml -f docker-compose.prod.yml up`. This applies the production overrides on top of the base configuration.
You can pass an additional `-d` flag to the command to keep the containers running in the background (detached mode).
It is also possible to override the default settings of the Compose file using a separate override file by running the command `docker-compose -f docker-compose.yml -f docker-compose.override.yml up`. `docker-compose up` applies the override automatically if the file is named `docker-compose.override.yml`.
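As an example, an override file a developer might keep locally could look like this. The `command` and volume mount are assumptions for illustration, not the app's actual configuration:

```yaml
# docker-compose.override.yml — merged on top of docker-compose.yml
services:
  api:
    command: npm run dev   # assumed dev command from the setup steps
    volumes:
      - .:/app             # mount the source for live reload (illustrative path)
```

Because Compose merges these files key by key, only the settings listed here change; everything else comes from the base file.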
In production mode, the environment variables need to be set in the production environment. We could create a separate `.env` file for production and reference it in the `docker-compose.prod.yml` file. We could also containerize a MongoDB database.
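A production override along those lines might look roughly like the following. The start command, the `.env.production` filename, and the restart policy are all illustrative assumptions:

```yaml
# docker-compose.prod.yml — hypothetical production overrides
services:
  api:
    command: npm start      # assumed production start script
    env_file:
      - .env.production     # illustrative name for a separate production env file
    restart: always
  mongo:
    restart: always
```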
API for a social media application where users can post, share, view, comment on, and like memories.
Table of Contents
- API Docs
- Project Structure
- `cp .env.example .env`
- Set these environment variables in `.env`.
- `npm run dev` (runs on `:5000`)
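For reference, a filled-in `.env` might look something like this. The variable names below are hypothetical examples only; the authoritative list lives in `.env.example`:

```
# Hypothetical values — check .env.example for the real variable names
PORT=5000
MONGODB_URI=mongodb://localhost:27017/memories
JWT_SECRET=change-me
```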
- `docker compose up`
- To stop and remove containers: `docker compose down`
- To run in production mode: `docker compose -f docker-compose.yml -f docker-compose.prod.yml up`
- Pass an additional `-d` flag to keep the containers running in the background.
Open endpoints require no Authentication.
- Login: `POST /user/login/`
Endpoints that require Authentication
Closed endpoints require a valid Token to be included in the request header. A Token can be acquired from the Login endpoint above.
Endpoints for viewing and manipulating the Memories.