Lately, as my side projects grow and I learn new techniques and frameworks, I've been growing more and more frustrated by how long it takes me to make small changes and tweaks and re-train my models. My poor laptop just cannot handle it.
I also move around far too much for it to be viable to set up my own machine learning workstation in my apartment.
Soooo... to the cloud it is! (I'm skipping the big players such as AWS and GCP here.)
Here's a (by no means complete) list of the cloud providers I've tried out so far, how I found them, and my personal preferences:
Paperspace was one of the first providers I tried out, and I was definitely impressed by the maturity of the product and the overall user experience.
Paperspace provides an entire machine learning platform, spanning Jupyter notebooks, distributed training, running experiments and deployments.
In particular, I liked the fact that it provides an easy-to-use CLI as well as a web UI. This feels like it's aimed at people with varying levels of comfort with infrastructure.
So for example, I prefer using the CLI, probably because that's what I use for pretty much everything else in my day-to-day job. But I know plenty of data scientists and researchers who are far more comfortable using a UI, and Paperspace provides a really nice, easy-to-use one (as opposed to a lot of other cloud products).
It also has a range of VMs available which aren't outrageously priced.
One caveat to note: I was on the free plan, whose docs state I should have access to 'Free and Low' instance types, including some instances with GPUs. But when I went to set up a VM, it said I had to get in touch with support.
Additionally, I personally am not in favour of the current concept of one-click deployments - they sound great at first, but they don't scale and I don't think they advocate for great engineering practices in the AI space.
Overall, I think I would use Paperspace in a team setting, in particular for a team just getting up and running.
The next one I tried out was Lambda Labs, and I enjoyed the barebones, simple approach they have going on. At the time of writing, they provide two instance types: one with 4x 1080 Ti and one with 8x V100.
I then just ssh'd in to do my work. This is by far one of my favourite approaches for a single person working on a project by herself. No different than working on my own machine, really.
Their UI is also barebones, but easy to navigate and use. I also liked their near-realtime billing page, which showed exactly how long I'd used each instance for and the price. This helped me stay within my budget.
One big caveat here is that they are expensive: their cheapest instance is a whopping $1.50USD/hr.
Overall, I would use this provider for my own side projects, and only for training, because it does feel most similar to working on my own machine and doesn't disrupt my coding flow - but it is also expensive.
FloydHub provides some of the same things as other ML platforms, such as Jupyter notebooks and the ability to spin up training jobs. Additionally, they also provide project templates for common use cases such as sentiment analysis and object detection.
Initially I also liked the sound of projects: creating one felt reminiscent of creating a git repo, so I thought maybe I'd be able to connect my GitHub account and pull in projects from there - but no, it was just for creating notebooks 🤷♀️
Pricing is also less expensive than Lambda's: I was given two free hours of GPU time and 20 hours of CPU time, and purchasing more GPU hours starts at $12 for 10 hours.
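To put those prices side by side, here's a quick back-of-the-envelope comparison of effective hourly GPU rates. This uses only the numbers quoted in this post, which were accurate at the time of writing and will almost certainly change:

```python
# Effective hourly GPU rates, using only the prices quoted in this post.
lambda_labs_per_hour = 1.50       # cheapest Lambda Labs instance, USD/hr
floydhub_bundle_price = 12.00     # FloydHub starter GPU bundle, USD
floydhub_bundle_hours = 10        # hours included in that bundle

floydhub_per_hour = floydhub_bundle_price / floydhub_bundle_hours

print(f"Lambda Labs: ${lambda_labs_per_hour:.2f}/hr")
print(f"FloydHub:    ${floydhub_per_hour:.2f}/hr")
```

So FloydHub works out to $1.20/hr, a bit cheaper per hour than Lambda - though as below, you can't configure the machine the same way.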
However, I didn't see any way to spin up a VM instance and configure it how I wanted to.
So overall, I think this is more a tool for learning and experimenting - I wouldn't use it for building or training large standalone models.
VPS-Mart is, I think, my favourite so far. They provide GPU instances and charge a subscription rather than purely on usage like the others, which I think I prefer now that my usage is getting to a point where I can't use some of the more expensive usage-based services.
It is still a bit pricey - with the cheapest one being $45.00USD/month.
They do however provide 24/7/365 support and 99% uptime guarantees, which I personally have no need for, so it would be nice if there was a cheaper option for hobbyists.
Overall, I think I would recommend this to someone who knows they have fairly heavy GPU use now - maybe someone building almost production-level side projects, as opposed to experimenting or learning. When I first started, this definitely would not have been a good option.
The last category I haven't tried out at all, but I like the concept.
These providers offer "Decentralised AI": essentially, you can rent out your own GPU or rent someone else's.
This is a really cool idea to me cause screw big tech and all.
But I'm not really sure yet what the security or even legal implications are. I'll probably have to do some research before I experiment.
Hope this helps anyone looking for GPUs and more compute power. It's definitely not a complete list, and it's based on just my experience - so if anyone has any more suggestions, let me know!