This is going to be like the quickest tutorial ever. Underwhelming almost.
You don't need to know anything about IPFS or distributed nothing, not even static site generators.
Ready? Ok, the first step is you gotta open up your terminal and type this in:
mkdir -p dwebsite/public
cd dwebsite
echo '<h1>Hello, worlds!</h1>' >> public/index.html
yarn global add @agentofuser/ipfs-deploy
# or: npm install -g @agentofuser/ipfs-deploy
ipd
You with me? Ok, typed that. What else? Nothing.
What? Yep. You're done here.
Now you sit back and watch as the victory parade logs by:
ℹ 🤔 No path argument specified. Looking for common ones…
✔ Found local public directory. Deploying that.
✔ public weighs 24 B.
✔ It's pinned to Infura now with hash:
ℹ QmQzKWGdjjQeTXrruYL2vLkCqRP8TyXnG1a9QEJjDM8WTY
✔ Copied HTTP gateway URL to clipboard:
ℹ https://ipfs.infura.io/ipfs/QmQzKWGdjjQeTXrruYL2vLkCqRP8TyXnG1a9QEJjDM8WTY
✔ Opened web browser (call with -O to disable.)
And there you have it. Your very own l33t #dwebsite live on the hashlinked Merkleverse. Check it out. Share with your friends.
Sweet, huh? 😬
Slow down, what exactly just happened?
Alright, that was a bit much to take in at once. Let's rewind a little and look at it in slow-motion, with backstage commentary:
1. Where's the stuff?
Truth be told, I could have called ipd ./public, passing the directory to be deployed (public) explicitly.
But then you wouldn't see the "hard thinking" emoji as ipfs-deploy probes smartly about for one of the many, often undocumented, build destinations commonly used by static site generators.
ℹ 🤔 No path argument specified. Looking for common ones…
✔ Found local public directory. Deploying that.
Yes, I actually combed through staticgen.com, installed a bunch of those static site generators, and built little test sites just so I could claim that "zero-config" headline. It's the little things, you know.
This is what ipfs-deploy looks for when we're too lazy to type it out:
const guesses = [
'_site', // jekyll, hakyll, eleventy
'site', // forgot which
'public', // gatsby, hugo
'dist', // nuxt
'output', // pelican
'out', // hexo
'build', // metalsmith, middleman
'website/build', // docusaurus
'docs', // many others
]
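Under the hood, that guessing can be as simple as checking which of those candidates exists as a directory. Here's a minimal sketch of the idea in plain Node (illustrative only, not ipfs-deploy's actual implementation, and guessBuildDir is a made-up name):
const fs = require('fs')

// Return the first candidate that exists as a directory, or undefined.
function guessBuildDir(candidates) {
  return candidates.find(dir => {
    try {
      return fs.statSync(dir).isDirectory()
    } catch (err) {
      return false // path doesn't exist
    }
  })
}

// With the guesses array above, this prints 'public' in our dwebsite example.
console.log(guessBuildDir(guesses))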
As you can see from this blog's domain name, I'm a fan of Gatsby, but ipfs-deploy serves all in need. Bring your own SSG and we'll slap an interplanetary jetpack on its back and send it flying.
2. The upload
This is what we're here for, that is, putting the website into space (figuratively, for now). So a few seconds later, ipfs-deploy delivers:
✔ It's pinned to Infura now with hash:
ℹ QmQzKWGdjjQeTXrruYL2vLkCqRP8TyXnG1a9QEJjDM8WTY
Jackpot! That's the money shot right there.
That little rambling of a hash is the crux of the whole dweb judo. The magic utterance that summons by name your website from the cavernous depths of the connected dungeons, caring not where it lies, but only what it is.
Intrinsic addressing, unbound by location or route.
Welcome <grave pause> to the distributed future.
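To make "what it is, not where it is" a bit more concrete, here's a toy Node snippet using the built-in crypto module: hash the same bytes on any machine, anywhere, and you get the same digest. (An IPFS hash is not a bare SHA-256 of the file, it's a multihash over a chunked DAG, but the principle is the same.)
// Toy illustration of content addressing with plain SHA-256.
const crypto = require('crypto')

const page = '<h1>Hello, worlds!</h1>\n'
const digest = crypto.createHash('sha256').update(page).digest('hex')

// Identical on every machine that hashes these exact bytes,
// no matter where the file lives or how it got there.
console.log(digest)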
Wait a minute, I thought I heard you say "distributed" 🧐
Oh, you perspicacious reader.
You're right: if IPFS is supposed to be a peer-to-peer protocol, why are we even uploading at all, right? To a server!? 🤢
Shouldn't we just announce to the network that we have the hash and then wait to serve it ourselves to other peers as they request it?
Yeah, you can do that. Then you close your laptop, wifi goes down, 💩 happens, and poof ✨ there goes your website.
This is just like torrents: you need at least one seeder to be available for content to be reachable. If your website has tons of people who run their own IPFS node visiting it and re-serving it to others, then on average your uptime will be pretty high.
But there aren't that many such visitors around yet (hopefully Brave will fix that for us) and besides, it's a brand new website! 🐣 Poor thing wasn't born famous. Give it some time.
It would be a different story if browsers had a decent #asyncUX and visitors could easily tell them to queue the download for when a peer is available and then notify when it's ready (like a seamless "read it later" flow).
But as it stands, if there's no one live the moment a request is made, things just hang and then time out. Definitely not space-ready.
So we need a high-uptime seeder on our side. Or, in IPFS lingo, a "pinner."
Zero-Config Pinning with Infura.io
A big design goal of ipfs-deploy is to let you have that first happy experience of just seeing something you made up on IPFS as fast as possible.
One way to do that is to run a local IPFS daemon and have you serve the content yourself.
But as we saw above, that would be a little gimmicky as it doesn't represent what an actual deployment that you can share with your friends would look like. Gotta have a stable pinner.
Pinning stuff with decent uptime costs money though, so most pinning services understandably require you to at least sign up for a free tier before they'll agree to host your website.
Not Infura.io, though!
By some magic of careful rate-limiting, clever abuse prevention, growth capital, or reckless abandon, they let you just upload stuff out of the blue, unauthenticated, and they'll serve it for you indefinitely. (Even against your will it seems, as there is no clear way of *un*pinning things at the moment.)
So we owe that slick on-ramp to their generosity: thank y'all at Infura, and please keep it up!
Also, if you own a pinning service yourself and would like to be part of the zero-config welcome package, please consider adding an "even freer" tier that doesn't require signup.
Newly-created static websites don't take up much room, have very little traffic, and are a great gateway into further IPFS consumption.
You did it!
If you got this far, condragulations. You rock! 🗿
Not only did you deploy your first IPFS website, you can now boast you actually understand how it works by waving your hands and saying "oh, it's pretty much like git + bittorrent you know, easy peasy!"
If you diligently followed the instructions, and somehow things blew up in your face, that's on me, not you. This is my commitment: your first happy experience, or it's a bug!
So please tell me what went wrong and I'll smooth it out for you.
We're all about removing friction around here 🧹🥌
If you haven't had enough, stick around for some extra credit.
And if you are content with where we got for now, please share that feeling with others by spreading and upvoting this guide far and wide :) Thank you!
Bonus chapter
Free Redundancy with Pinata.cloud
Having a stable pinner is cool and all, but isn't much different from regular web hosting. (In the "distributedness" sense, that is. In the "content-addressableness" sense, it's night and day.)
One way to get a taste of the distributed nature of IPFS is by adding a second pinner, and ipfs-deploy makes that easier.
We're going to deploy both to Infura.io and Pinata.cloud so that visitors can download from both at the same time, or from one in case the other fails.
Resilience! 🤹
Pinata.cloud is a dedicated IPFS pinning service that gives you more control over what is being hosted.
It allows you to delete pins and to add metadata that you can later use to filter and manage your deployments.
There is a 1 GB free tier which is plenty enough for dev blogs, landing pages, documentation, and #YangGang fanpages. It does require signup, but it's pretty straightforward and doesn't require credit card or personal information.
After signing up and getting your API keys, go to your website's directory and copy your keys into a .env file like so:
# dwebsite/.env
IPFS_DEPLOY__PINATA__API_KEY=paste-the-api-key-here
IPFS_DEPLOY__PINATA__SECRET_API_KEY=and-the-secret-api-key-here
⚠️ You don't want to make that information public ⚠️, so when you're doing this in a repository that you're going to host publicly, make sure to add the .env file to your .gitignore:
echo .env >> .gitignore
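As far as Node is concerned, those keys are just environment variables. Here's a minimal sketch of how a tool can pick them up with the dotenv package (illustrative, not necessarily ipfs-deploy's exact code):
// Load .env into process.env, then read the Pinata credentials.
require('dotenv').config()

const pinataCredentials = {
  apiKey: process.env.IPFS_DEPLOY__PINATA__API_KEY,
  secretApiKey: process.env.IPFS_DEPLOY__PINATA__SECRET_API_KEY,
}

if (!pinataCredentials.apiKey || !pinataCredentials.secretApiKey) {
  throw new Error('Missing Pinata credentials in .env')
}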
One last config: to deploy to Pinata, you'll need to forward port 4002 on your router to your machine.
Why? Because deploying to Pinata works like this:
1. We first start a temporary local IPFS node,
2. Pin the website locally,
3. And send the hash to Pinata.
4. Pinata then connects to our local node as a peer,
5. Requests the hash we sent it,
6. Then downloads and hosts it itself.
Because of step 4, we need to be able to listen to external connections.
Manually forwarding ports is a pain, I know, but support for NAT traversal is coming this quarter to js-ipfs, so that's one less hoop we'll need to jump through soon.
By contrast, Infura exposes their IPFS node's HTTP API, so we can upload to it directly as an HTTP client without running a local node or listening for connections.
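If you're curious what "upload to it directly as an HTTP client" looks like, here's a hedged sketch against the standard IPFS HTTP API /api/v0/add endpoint, using the node-fetch and form-data packages. It assumes Infura's endpoint is still ipfs.infura.io:5001 and still accepts unauthenticated single-file uploads, which may have changed by the time you read this:
// Add a single file to an IPFS node over its HTTP API and return the hash.
const fs = require('fs')
const fetch = require('node-fetch')
const FormData = require('form-data')

async function addToInfura(filePath) {
  const form = new FormData()
  form.append('file', fs.createReadStream(filePath))

  const res = await fetch('https://ipfs.infura.io:5001/api/v0/add', {
    method: 'POST',
    body: form,
  })
  // The IPFS HTTP API answers with { Name, Hash, Size } for the added file.
  const { Hash } = await res.json()
  return Hash
}

addToInfura('public/index.html')
  .then(hash => console.log(`https://ipfs.infura.io/ipfs/${hash}`))
  .catch(console.error)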
Pinata's custom API sits at a different point on the flexibility vs. zero-config spectrum.
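And "send the hash to Pinata" boils down to one authenticated call to their pinning API. The endpoint and field names below are taken from Pinata's public docs and should be treated as assumptions; double-check the current documentation before relying on them:
// Ask Pinata to pin a hash it can fetch from the network
// (for example, from the temporary local node ipfs-deploy starts).
require('dotenv').config() // loads the Pinata keys from the .env file above
const fetch = require('node-fetch')

async function pinByHash(hash) {
  const res = await fetch('https://api.pinata.cloud/pinning/pinByHash', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      pinata_api_key: process.env.IPFS_DEPLOY__PINATA__API_KEY,
      pinata_secret_api_key: process.env.IPFS_DEPLOY__PINATA__SECRET_API_KEY,
    },
    body: JSON.stringify({ hashToPin: hash }),
  })
  if (!res.ok) throw new Error(`Pinata responded with ${res.status}`)
  return res.json()
}

pinByHash('QmQzKWGdjjQeTXrruYL2vLkCqRP8TyXnG1a9QEJjDM8WTY').catch(console.error)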
Thankfully, with ipfs-deploy we can have both.
Now that Pinata is set up, let's get back to the show. Here's what you run to deploy to both pinning services:
ipd -p infura -p pinata
And this is what you get:
ℹ 🤔 No path argument specified. Looking for common ones…
✔ Found local public directory. Deploying that.
✔ public weighs 24 B.
✔ It's pinned to Infura now with hash:
ℹ QmQzKWGdjjQeTXrruYL2vLkCqRP8TyXnG1a9QEJjDM8WTY
Nothing new so far. Now on to the Pinata part:
✔ Connected to temporary local IPFS node.
✔ Port 4002 is externally reachable.
✔ Pinned to temporary local IPFS node with hash:
ℹ QmQzKWGdjjQeTXrruYL2vLkCqRP8TyXnG1a9QEJjDM8WTY
✔ It's pinned to Pinata now with hash:
ℹ QmQzKWGdjjQeTXrruYL2vLkCqRP8TyXnG1a9QEJjDM8WTY
✔ Stopped temporary local IPFS node.
And there it is: same hash, different locations.
✔ Copied HTTP gateway URL to clipboard:
ℹ https://ipfs.infura.io/ipfs/QmQzKWGdjjQeTXrruYL2vLkCqRP8TyXnG1a9QEJjDM8WTY
✔ Opened web browser (call with -O to disable.)
You can see the same content on whichever gateway you want by replacing "ipfs.infura.io" with any of the public gateways listed here: https://ipfs.github.io/public-gateway-checker
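For example, at the time of writing these public gateways all resolve the same hash (availability varies, so treat the hostnames as examples):
// Same content, many doors: swap the gateway host, keep the /ipfs/<hash> path.
const hash = 'QmQzKWGdjjQeTXrruYL2vLkCqRP8TyXnG1a9QEJjDM8WTY'
const exampleGateways = ['ipfs.io', 'cloudflare-ipfs.com', 'gateway.pinata.cloud']

exampleGateways.forEach(host => console.log(`https://${host}/ipfs/${hash}`))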
I've said it before, but I find this so cool that it bears repeating:
Intrinsic addressing, unbound by location or route.
Or: calling data by what it is, not where it is.
That's cryptographic hashing for ya 💪
The telling-your-friends part 📣
Alright, we're done here, right? Feeling all great and distributed. Now go tell your friends about it over the phone.
– Hey Jessie, guess what?
– What?
– I've got my website up on the dwebs!!
– Yay friend, how cool! I bet that was super hard.
– Nah, there's this npm package, y'know...
– Yeah, yeah, lemme see the website, where's it at?
– Oh it's at i-p-f-s-dot-i-o-slash-i-p-f-s-slash... Er...
– Slash what?
– It's uh... You got a pen? It's uh... uppercase Q, lowercase m, uppercase Q, uppercase T... Ugh... How about I text you the URL?
– Ok, sure, but what if I'm just some random person who sees your URL on a billboard? You won't be able to text me then, will you?
– Hm, I guess not.
– You know what, how about you call me when you have something memorable I can type in the browser?
– Wow, harsh.
Ok that very lifelike conversation went south pretty fast.
It turns out content-addressing on its own doesn't gel all that well with the limited human memory buffer.
Plus, friends can be tough.
What do we do then?
A pretty URL
Human-readable naming for IPFS websites is definitely an area that needs smoothing out.
But if (and that's a big if) you keep within the constraint of using the free Cloudflare IPFS Gateway for now, ipfs-deploy wraps it all together in a pretty neat way.
Here is actual footage of me deploying interplanetarygatsby.com:
ipd -p infura -p pinata -d cloudflare
And this is what I see after the uploads are over:
✔ SUCCESS!
ℹ Updated DNS TXT interplanetarygatsby.com to:
ℹ dnslink=/ipfs/QmSNf4sScZUmpNqBWAAs9S5tC4XkQRNepA3KbF4aipGGeq
The sheer effortlessness of it makes me smile every time.
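If you want to double-check what it did, the record is just a TXT entry on the conventional _dnslink subdomain (which is where the Cloudflare gateway looks for it). A quick sketch of reading it from Node:
// Read the DNSLink TXT record that ipfs-deploy just updated.
const dns = require('dns').promises

dns.resolveTxt('_dnslink.interplanetarygatsby.com').then(records => {
  records
    .map(chunks => chunks.join('')) // each TXT record arrives as string chunks
    .filter(txt => txt.startsWith('dnslink='))
    .forEach(txt => console.log(txt)) // e.g. dnslink=/ipfs/QmSNf4...
})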
There are some one-time steps you need to take to get there though:
1. Buy a domain
2. Sign up for a Cloudflare account
3. Move your domain's DNS zone to Cloudflare
4. Hook up your domain to their IPFS gateway
5. Get your API keys
Between actually performing those configuration steps and waiting for DNS information to propagate, this whole thing can take a couple of hours.
That's why I put that part in this bonus chapter and left the base instructions zero-config. In time, we'll remove more and more friction and make that first happy experience ever happier 🧹🥌
So after steps 1-5 are done, the last thing you need to do is add your domain and Cloudflare API credentials to your website's .env file:
# dwebsite/.env
IPFS_DEPLOY_SITE_DOMAIN=example.com
IPFS_DEPLOY_CLOUDFLARE__API_KEY=paste-your-cloudflare-api-key-here
IPFS_DEPLOY_CLOUDFLARE__API_EMAIL=the-email-you-used-to-sign-up
Now go ahead, fire away with ipd -d cloudflare, and tell all those billboards about it! 📣📣📣
Postface: A Call for Curlers 🧹🥌
While writing this guide, I happened upon the "curling stone" emoji and felt an instant connection to it.
To be honest, I've always found the idea of curling a little... odd. But a whole team sport dedicated to frantically removing friction from a path just so that this peculiar artifact can gracefully and effortlessly glide its way onto a goal is so... moving.
That's how I want ipfs-deploy to feel to the user. A silk-smooth experience landing them right on the distributed web without breaking a sweat, and a wild sweeping crowd cheering them on.
If that sounds like your kind of sport, say hello in the issues and let's polish some stuff!