Mike Chung

My failed attempt to set up an old MacBook Pro as a deep learning workstation

I recently got a new laptop but didn't want to say goodbye to my 2014 MacBook Pro (10,1) yet. Then I remembered it has an NVIDIA graphics card, so I thought maybe I could use it for training toy deep learning models. I had never trained a deep learning model before.

My goal was to train a tacotron2 model. Tacotron is Google's deep learning-based speech synthesis system. I was not able to set up my old laptop to train a tacotron2 model, but I found a different way to achieve my goal: using Google Colab. Still, I'm sharing my journey in case someone can learn from the mistakes I made.

  1. I installed Ubuntu 18.04. I searched for related documentation on help.ubuntu.com and wiki.ubuntu.com, but it seemed outdated, so I just googled "install ubuntu 18.04 MacBookPro" and found a Medium post. Following the instructions in the post just worked. I also installed an NVIDIA driver via Software Settings.

  2. Training tacotron2 requires PyTorch 1.0. Although it seemed like I could install PyTorch via pip, I wanted to try installing it from source to gain better control over it. So I decided to install CUDA & cuDNN myself. As of March 2020, CUDA 10.1 and cuDNN 7.6.x appeared to be the latest versions compatible with PyTorch.

  3. NVIDIA's instructions for installing CUDA and cuDNN weren't as straightforward as I had hoped. After running the main CUDA install command sudo apt-get install cuda-10-1 (not the unversioned cuda metapackage, because I wanted to control the CUDA version), I was seeing errors like

    ...
      found 'diversion of /usr/lib/x86_64-linux-gnu/libGL.so.1 to /usr/lib/x86_64-linux-gnu/libGL.so.1.distrib by nvidia-340'
    ...
    

    I found a solution on Stack Overflow and shamelessly applied it without understanding the commands, and it fixed the problem. I suspected the error was due to installing an NVIDIA driver in step 1 but never confirmed it; I also never figured out why the solution worked, but in the interest of time, I marched forward. The cuDNN installation was smooth.

  4. Now it was time to build PyTorch from source. Although I probably should have used conda, I just used pip3 in the interest of time and started following the instructions. After running the grand python setup.py install, I got stuck:

    ...
        /home/mjyc/.local/lib/python3.6/site-packages/torch/cuda/__init__.py:134: UserWarning:
            Found GPU0 GeForce GT 750M which is of cuda capability 3.0.
            PyTorch no longer supports this GPU because it is too old.
            The minimum cuda capability that we support is 3.5.
    ...
    

    It turns out the MacBookPro 10,1's GPU, a GeForce GT 750M, is too old for PyTorch 1.4 (the latest compatible PyTorch version seems to be 0.3.1). My first reaction to this error message was to just buy an external GPU (eGPU). However, a quick Google search showed that an eGPU enclosure alone costs around $300, and I learned that just choosing which GPU to buy requires some research. A small sanity check like the first sketch after this list would have caught the compute-capability problem before the long source build.

  5. At this point, I realized I had spent much more time and effort than I had originally budgeted, so I gave up on turning my old MacBook Pro into a workstation and started looking for an alternative. I read a Reddit thread suggesting a cloud solution and looked into which service would be a good starting point. Google Colab seemed like a good place to start (the second sketch after this list shows how to check which GPU a Colab runtime gives you), so I stopped my exploration here.
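
In hindsight, a few lines of Python would have told me everything I needed to know in steps 2-4 before the long build. Below is a minimal sketch of that sanity check; it assumes some importable PyTorch build (pip or source) and uses only standard torch APIs, so treat it as an illustration rather than part of my original setup.

    # Sanity-check the CUDA/cuDNN/GPU combination (a sketch; assumes any
    # importable PyTorch build, nothing specific to tacotron2).
    import torch

    print("PyTorch:", torch.__version__)
    print("CUDA version torch was built with:", torch.version.cuda)
    print("cuDNN version:", torch.backends.cudnn.version())

    if torch.cuda.is_available():
        name = torch.cuda.get_device_name(0)
        major, minor = torch.cuda.get_device_capability(0)
        print(f"GPU: {name}, compute capability {major}.{minor}")
        # Recent PyTorch releases dropped GPUs below compute capability 3.5,
        # which is exactly what bit the GeForce GT 750M (capability 3.0).
        if (major, minor) < (3, 5):
            print("This GPU is too old for recent PyTorch builds.")
    else:
        print("No usable CUDA device detected.")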
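
On the Colab side, the equivalent check is just as short. This is a sketch based on the default Colab setup as I understand it (a GPU runtime enabled via Runtime > Change runtime type, with PyTorch preinstalled), not something I had scripted at the time:

    # Check which GPU Colab allocated to the notebook (a sketch; assumes a
    # GPU runtime is enabled and Colab's preinstalled torch build).
    import torch

    if torch.cuda.is_available():
        print("Allocated GPU:", torch.cuda.get_device_name(0))
        print("Compute capability:", torch.cuda.get_device_capability(0))
    else:
        print("No GPU allocated; double-check the runtime type.")

If this prints a GPU name, the notebook is ready for training without any of the driver/CUDA/cuDNN setup above.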

To me, the lesson learned from this journey is to always focus on the end goal instead of the means. Given that my goal was to train a deep learning model, the means of achieving it (a laptop workstation or the cloud) should not have mattered.

Top comments (5)

Felix Terkhorn • Edited

Good try! I fiddled with my 2015 MBP in a similar vein, but have landed on Colab, Kaggle, etc. for now as well. Let's hope we can gain enough skills to justify our dreams of obscenely expensive hardware 😉

Mike Chung

Thanks! I haven't tried Kaggle before; how do you like it (or not) in comparison to Colab? Also, regarding expensive GPUs, I recently learned about Jetson boards nvidia.com/en-us/autonomous-machin... They are still expensive, but I'm tempted to buy one because they can become full computers. This one developer.nvidia.com/embedded/jets... looks especially interesting to me.

Felix Terkhorn

Wow, the Xavier board is beefy for such a little guy! That's really neat. I'm actually using a Jetson Nano to power my web-based Go (Baduk) system (github.com/Terkwood/BUGOUT/issues/67). I'm not a very good player, so it beats me despite its small GPU 😁

Kaggle is nice because it gives each user 9 hours of high-end GPU quota for free per week. It looks like another nice supplement to Colab for folks like me who are still learning ML. There are a lot of tutorials available from various competition winners and other learners in the community!

Hope to see you post about your experiences with Xavier if you go ahead with a build. Stay distant! 😷 🤜 🦠

Felix Terkhorn • Edited

One issue to watch out for with Xavier as a workstation is your software's compatibility with ARM. This isn't always a big deal, but popular packages can sometimes be harder to find prebuilt binaries for when you switch to ARM; I'm thinking of VS Code and Godot from my own recent experience. It might not be an issue for you; it just depends on how much of your development tooling you need to run versus the pure ML portion of your build (NVIDIA seems to have done a good job including the prebuilt libs needed for various ML apps).

Mike Chung

Huh, thanks for the heads-up. I don't think the Xavier NX is out yet, but I'll ping you when it comes out and give it a try ;)