Here you'll find a step-by-step guide to get your ComposeDB setup working ASAP.
Quick primer: ComposeDB is a decentralized database on Ceramic Network. End-users truly own their data because they're the only ones that can write/edit their own records.
This workshop repo by Ceramic packs a lot of stuff in it. It works, but if you use it as a template like I did to build your own project, you may end up running into various issues... like I did.
Here are my own notes that helped me use this repo to consistently get the required components set up and ultimately win a bounty award at ETH Denver 2023.
I've broken this down into a few sections:
Understanding ComposeDB
- About the workshop repo & Issues I kept running into
- Essential Terminology
How to get the backend running:
- Run ceramic node
- Setting up your data structure
- Spinning up your GraphiQL server
Understanding ComposeDB
The workshop repo spins up a Decentralized Twitter app. It allows end-users to be in control of their Tweets. Elon Musk wouldn't be able to delete your profile or tweets.
You can see it in action by running "npm run dev". However, you'll see that this one command does a lot of things:
- run the Ceramic node,
- prepare and deploy the composite files that the decentralized Twitter app needs,
- spin up the GraphiQL server (to enable GraphQL queries).
When these all run together in a single terminal window as multiple child processes, they crash easily, and a crash can prevent your node from running properly afterwards.
Issues I kept running into
Here are some issues I kept running into and the solutions I got from several people at Ceramic.
Unexplainable Ceramic node errors
==> Delete the .ceramic folder in the root directory, or reboot your computer.
go-ipfs: "someone else has locked the file"
==> Delete the .lock file in the folder ~/.goipfs-
composedb.config.json file issues
- The workshop repo's config file had statestore.local-directory pointing to someone else's local directory. ==> Change it to the project root directory: "local-directory": ".ceramic/statestore/" (see the config sketch after this list).
- network.name: use "testnet-clay". If that gives you issues, you can try "inmemory", which will do everything locally.
Composite issues or model not getting indexed
==> You most likely missed a step when setting up the composite file. Run the commands one by one, slowly.
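For reference, here's roughly what the relevant fields looked like in my composedb.config.json. Treat the exact nesting, key names and paths as assumptions based on the file my daemon generated; your version may differ, so adjust against your own config rather than copying this verbatim.
{
  "network": {
    "name": "testnet-clay"
  },
  "state-store": {
    "mode": "fs",
    "local-directory": ".ceramic/statestore/"
  },
  "indexing": {
    "db": "sqlite:///path/to/ceramic/indexing.sqlite",
    "models": []
  }
}
If your config has a "models" array under indexing, that's where the model stream IDs to be indexed can be listed (more on that in the composite steps below), and indexing.db is the SQLite connection string that comes up again in the ending remarks.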
I ended up doing these things multiple times but once I started following my final step-by-step notes further below, I never had a problem getting set up again.
Essential Terminology
Things really only started to make sense after understanding the terminology better.
A model is essentially the schema of a data table. You define field names and types (e.g. Name: String, ID: String).
A composite turns that model into JSON format, which can then be used to deploy the model to the Ceramic network.
If you have multiple models, their composite JSON files can be merged into a single file. Users create "instances" of your model, known as documents.
Both models and documents are streams on the Ceramic network. You can access any model or document using its stream ID.
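To make that concrete, here's a minimal JavaScript sketch of loading a stream by its ID from a local node using the @ceramicnetwork/http-client package. The stream ID below is a made-up placeholder.
import { CeramicClient } from "@ceramicnetwork/http-client";

// Connect to the local Ceramic node (started with "npm run ceramic" below)
const ceramic = new CeramicClient("http://localhost:7007");

// Any model or document can be loaded by its stream ID (placeholder here)
const stream = await ceramic.loadStream("kjzl6...yourStreamId");

console.log(stream.id.toString()); // the stream ID
console.log(stream.content);       // the current content of the document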
Get the backend running
Run ceramic node
npm run ceramic
Run this on its own so that you can easily terminate it later if you need to. The workshop repo's npm run dev runs everything in the backend and the frontend together, so it runs into errors easily. By separating the Ceramic node, composite setup, GraphiQL server and frontend, you'll run into far fewer issues.
Setting up data structure the first time
Installing composedb
If you have it installed using npm, then uninstall it first:
npm uninstall @composedb/cli
You MUST install it using pnpm. How to install pnpm depends on your OS: https://pnpm.io/installation. For Linux it is:
wget -qO- https://get.pnpm.io/install.sh | ENV="$HOME/.bashrc" SHELL="$(which bash)" bash -
And you must install composedb globally:
pnpm add -g @composedb/cli
npm and yarn and npx all didn't work for me. Use pnpm. It can save you hours.
Compiling composite files
These steps are needed to deploy your model to the Ceramic network and to generate the files needed by the GraphQL server and by your frontend app.
- Define your model (schema) in a graphql file: the schema name, field names, data types, etc. Follow the format shown in the docs or in the workshop repo.
- Set your private key as an environment variable:
export DPK=yourprivatekey
echo $DPK
- Create a composite for your model:
composedb composite:create composites/termsheet.graphql --output=output/composite.json --ceramic-url=http://localhost:7007 --did-private-key=$DPK
If you open that output file you'll see a model ID in there identifying the model you just created.
- If you have multiple composite files, merge them into one file:
composedb composite:merge composite1.json composite2.json --output=merged-composite.json
- Deploy the single composite file. This will add all of its models to the model catalog:
composedb composite:deploy output/composite.json --ceramic-url=http://localhost:7007 --did-private-key=$DPK
- Compile the composite:
- To enable data interactions via the ComposeDB client, output it as JSON:
composedb composite:compile output/composite.json runtime-composite.json
- To enable importing it from JavaScript, output it as JS:
composedb composite:compile output/composite.json runtime-composite.js
Your frontend app will need this runtime-composite.js file.
Check if it was deployed properly:
npx composedb composite:from-model
You should be able to see your modelID somewhere in there. You can add it to your composedb config file to make sure it gets indexed.
Spin up a GraphiQL server that can interact with your model/composite:
composedb graphql:server --ceramic-url=http://localhost:7007 --graphiql runtime-composite.json --did-private-key=$DPK
That's all. You can now focus on building your web app. You just need a simple setup to use Ceramic, and all data queries use GraphQL. Here's an example of how composeClient is used to execute GraphQL queries.
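The snippet below is a minimal JavaScript sketch of that setup, not the exact code from my app. It assumes the compiled runtime-composite.js exports a definition object (check the name in your generated file) and uses a placeholder query name, so substitute whatever GraphiQL shows for your own model.
import { ComposeClient } from "@composedb/client";
import { definition } from "./runtime-composite.js";

// Point the client at the local Ceramic node and the compiled runtime composite
const composeClient = new ComposeClient({
  ceramic: "http://localhost:7007",
  definition,
});

// Example read query; "termsheetIndex" is a placeholder query name
// based on a model called Termsheet. Use your own model's query.
const result = await composeClient.executeQuery(`
  query {
    termsheetIndex(first: 10) {
      edges {
        node {
          id
        }
      }
    }
  }
`);

console.log(result.data);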
Ending remarks
Here are some questions I had earlier on about Ceramic & ComposeDB:
How does data live on Ceramic nodes?
==> Your data would need to be indexed by at least one node for the data to persist, so it's kind of like pinning files in IPFS. Fun fact: the data itself is stored in IPFS. ComposeDB abstracts everything away to work as a DB. In the end, anyone can spin up a node, and it doesn't cost a ton of money the way various EVM chain validators do.
How do users own the data if they need to connect to a node to read/write data?
==> On the frontend, when you authenticate with authenticateCeramic, you are cryptographically signing a message with your Ethereum wallet. Nodes can only write to your data on your behalf using that signed message.
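For reference, here's a rough JavaScript sketch of what that flow looks like using the did-session and @didtools/pkh-ethereum packages. This is my approximation of what the workshop repo's authenticateCeramic helper does rather than a copy of it, so treat the package and function names as assumptions and compare against the repo's utils. It reuses the composeClient from the example above.
import { DIDSession } from "did-session";
import { EthereumWebAuth, getAccountId } from "@didtools/pkh-ethereum";

// Ask the browser wallet (e.g. MetaMask) for the user's account
const ethProvider = window.ethereum;
const addresses = await ethProvider.request({ method: "eth_requestAccounts" });
const accountId = await getAccountId(ethProvider, addresses[0]);

// The user signs a message with their wallet; the resulting session
// only authorizes writes to the resources (models) listed here.
const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId);
const session = await DIDSession.authorize(authMethod, {
  resources: composeClient.resources,
});

// Attach the session DID so the client can write as this user
composeClient.setDID(session.did);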
How can I get historical data on the test network?
==> You'd need to enable historical data in the composedb config file. But to do that, you also need to change how the node stores indexed data: under indexing.db you'll see that it's using SQLite, but activating historical data would require you to set up a Postgres server.
I think ComposeDB opens up a lot of choices for decentralized apps. It allows high speed with good enough decentralization. Over time I'm sure the backend can be packaged to be easier to manage. By then, people would be able to easily run their own nodes on their own laptops to truly own their data without relying on RPC nodes.