DEV Community

How do YOU manage python environments

Waylon Walker

Top comments (19)

Maximilian Burszley
Waylon Walker

Thanks for the link, I've considered moving to poetry or pipenv. I like how lock files are a normal part of the process, unlike conda. Conda can do it, but it's not part of the typical workflow and requires an extra step.
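For reference, conda's closest analogue to a lock file is an exported environment spec, and as noted it's a separate manual step. A sketch of that workflow (wrapped in functions so nothing runs until called, since it assumes conda is on PATH):

```shell
# Sketch: conda's "lock file" is an exported spec, produced by hand.
conda_lock () {
    # capture the active env's exact dependencies to a spec file
    conda env export --no-builds > environment.yml
}

conda_restore () {
    # recreate the env elsewhere from the exported spec
    conda env create -f environment.yml
}
```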

José Coelho

Does Poetry also need virtual environments?

Maximilian Burszley

Yes, it handles management of them.
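The basic flow looks something like this (a sketch wrapped in a function rather than executed, since it assumes poetry is installed; the project name is just illustrative):

```shell
# Sketch of poetry's built-in virtualenv handling.
poetry_workflow () {
    poetry new demo-project        # scaffold a project with pyproject.toml
    cd demo-project || return 1
    poetry add requests            # resolve, write poetry.lock, install into the managed venv
    poetry run python -c 'import requests'   # execute inside that venv
    poetry env info                # show where poetry put the virtualenv
}
```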

Nikita Sobolev

I use poetry. Here are some examples:

GitHub: wemake-services / wemake-python-package

Bleeding edge cookiecutter template to create new python packages.


Purpose

This project is used to scaffold a python project structure. Just like poetry new but better.

Features

Installation

Firstly, you will need to install dependencies:

pip install cookiecutter jinja2-git lice

Then, create a project itself:

cookiecutter gh:wemake-services/wemake-python-package

Projects using it

Here's a nice list of real-life open-source usages of this template.

License

MIT. See LICENSE for more details.




Tomas Sheers

I use Conda. Whenever I start a new project, I create a new directory, cd into the directory, and then run a custom bash function called 'new' which creates a new Python 3.8 Conda environment with the name of the directory, activates it, and installs any requirements if the requirements.txt already exists.

I also had a function for cd which would activate the environment when cd-ing into a directory, if there was an environment with the name of that directory. But in some cases I didn't want that to happen, so I stopped using it. Instead I have an 'activate' alias which activates the correct Conda env.
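A minimal sketch of that kind of 'new' function, based on the description above (the exact flags and the 'activate' helper are assumptions):

```shell
# Create and activate a conda env named after the current directory,
# installing requirements.txt if it already exists. Sketch only.
new () {
    local env_name
    env_name="$(basename "$PWD")"
    conda create -y -n "$env_name" python=3.8 && \
        source activate "$env_name"
    # install project requirements if the file is present
    [ -f requirements.txt ] && pip install -r requirements.txt
}

# 'activate' helper that targets the env matching the current directory
alias activate='source activate "$(basename "$PWD")"'
```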

Waylon Walker

I like the idea of the auto-activation, but I can see where it could cause some frustration as well. I made a fuzzy conda environment activator with fzf to make it a bit less verbose.

a () {
    source activate "$(conda info --envs | fzf | awk '{print $1}')"
}
Waylon Walker

I'll start with me: I used to keep everything in one environment until it burned me one too many times, then I did a 180. I now keep EVERY project separate and do not install anything for one project into another project's environment. I also do a ton of exploration; at one point I had 70 conda environments installed on my machine.

conda create -n project python=3.8
source activate project
# ensure it activated
which pip
which python
Serge Matveenko

I use pipenv and poetry. Actually, I'm in the process of moving from pipenv to poetry in my projects.

Waylon Walker

Why the move from pipenv to poetry? If I wasn't in data science I would likely be using pipenv or poetry.

Serge Matveenko

Pipenv has its issues. The stable release is very old. Poetry was unacceptable for some use cases before 1.0.0. Now it just rocks:)

Fredrik Sjöstrand

Mostly I use virtualenv, but for some use cases I also use Docker.

Waylon Walker

Bonus points for using Docker. I have never used Docker for development; how do you like it as a python environment? It seems like it would be kind of heavy for a standard use case that doesn't need to run other components (databases, web servers, etc.), but that could be my lack of experience with it.

Fredrik Sjöstrand

I do prefer using virtualenv because, as you said, Docker is a bit heavy. I also think it is harder to debug inside a Docker container. I usually use the VS Code debugger, which automatically picks up virtual environments. What I do like about Docker is that it is guaranteed to work the same in production. E.g. multiprocessing in Python behaves differently on Windows and Linux, and I am working with both.
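The Windows/Linux difference mentioned here comes down to multiprocessing start methods: Linux defaults to fork, Windows to spawn, and spawn re-imports your module in each child process, which is why the __main__ guard matters. A small illustration:

```python
import multiprocessing as mp

def square(x):
    return x * x

def run_pool():
    # On Linux the default start method is "fork" (children inherit the
    # parent's memory); on Windows it is "spawn" (children re-import this
    # module), so module-level code must sit behind the __main__ guard.
    with mp.Pool(2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    print(run_pool())  # [1, 4, 9]
```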

rhymes

Poetry

Dian Fay

Poorly. I try to keep things more or less organized (an env for this ML project, an env for that major module) but conda seems to blow itself up if I so much as look at it funny.

Waylon Walker

Lol, I have had my fair share of conda environment blow-ups! I recently disabled pip inside my base environment to prevent some issues. Sometimes paths don't update correctly and you install things into base even while your prompt tells you that your env is activated.

At least this way, even if you only use one env, you can easily wipe it and start over without a full re-install.

Bozhao

For overall python I use conda

For model deployment, I use bentoml to manage my deployed services

Waylon Walker

I have never heard of bentoml, I'll have to check that out.