Disclaimer: I'm looking for ideas for the GitHub actions hackathon.
Personally, I would love a solution that intelligently figures out which tests need to run to verify that a code change is safe, rather than just blindly running all of the tests. Some test suites can take ages. I asked the #python discord channel but people weren't interested. What a pity!
What are your ideas?
Top comments (7)
Yup! This is something I'd love. I usually put a lot of smaller projects together in a monorepo, which means I end up running a lot of test suites for multiple projects even when they haven't changed.
Lots of the solutions I find are close but aren't 100%, and none of them seems platform-agnostic enough for what I'm doing.
Oh, good to know that some other people like the idea as well. Do you remember what solutions you found? Why were they lacking?
We tried a few approaches:
One approach was leveraging the CI environment. Some providers can "only run on changes to X files", where X is a file or folder. This works, except it doesn't handle dependencies: if I have one project that uses a `common` lib and I update the `common` lib, nothing but the `common` lib's own tests would be run. This could work but wasn't optimal.

Another approach was using specific tooling. The only tooling I found that offered something helpful and worked with what we were already doing (MEAN stack) was the nx-cli's affected commands. This handled the dependencies issue, but it requires specific tooling, is platform-specific, and was slightly more annoying to use since we had to manually figure out which commits to compare against to see what was affected and needed testing.
We started looking into other solutions but never got much further than the two above. We eventually leaned on just running tests all the time, and doing the more expensive stuff (building, deploying) manually or less often (either on a time trigger or "click to deploy").
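The "affected" computation that tooling like nx does can be sketched generically. A minimal version, assuming each project lives in a top-level folder and dependencies are declared in a hand-maintained map (the project names and layout here are hypothetical, not from any real tool):

```python
# Sketch: dependency-aware test selection. Assumes each project lives in a
# top-level folder and inter-project dependencies are listed by hand.
# Project names ("app", "api", "common") are illustrative only.

DEPS = {
    "app": {"common"},   # app imports from the common lib
    "api": {"common"},
    "common": set(),
}

def affected_projects(changed_files):
    """Return every project whose own files or whose dependencies changed."""
    # Projects touched directly by the diff (first path segment = project).
    touched = {path.split("/", 1)[0] for path in changed_files}
    touched = {p for p in touched if p in DEPS}

    # Walk reverse dependencies until the affected set stops growing.
    affected = set(touched)
    while True:
        extra = {p for p, deps in DEPS.items() if deps & affected} - affected
        if not extra:
            return affected
        affected |= extra

# Changing the common lib marks its dependents as needing tests too.
print(sorted(affected_projects(["common/utils.py"])))  # → ['api', 'app', 'common']
```

The awkward part this doesn't solve, as noted above, is picking which commits to diff against in the first place.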
Oh wow, nx seems pretty cool. I'll have to look into that, thanks!
How long do your tests take?
For the first prototype I was thinking of a script that would just ignore changes to certain non-code files/folders. For example, if the only change was to the readme, it would skip tests. Pretty basic. The script would come with a few default files it would ignore changes for (readme, license, etc.) and the user could specify what else they wanted to ignore. Would you use that at all? Changes to non-code files don't happen often (at least in my experience), so I'm not sure if people would be interested in faster CI for those files.
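That prototype could be as small as a diff filter. A minimal sketch, assuming glob-style ignore patterns (the default list and function name here are my own, not an existing tool):

```python
import fnmatch

# Default patterns whose changes alone shouldn't trigger tests; a user
# could extend this via config. These defaults are illustrative.
DEFAULT_IGNORES = ["README*", "LICENSE*", "*.md", "docs/*"]

def should_run_tests(changed_files, extra_ignores=()):
    """Run tests only if at least one changed file is NOT on the ignore list."""
    ignores = DEFAULT_IGNORES + list(extra_ignores)
    for path in changed_files:
        if not any(fnmatch.fnmatch(path, pat) for pat in ignores):
            return True   # found a real code change
    return False          # every changed file matched an ignore pattern

print(should_run_tests(["README.md"]))                # → False (docs only)
print(should_run_tests(["README.md", "src/app.py"]))  # → True
```

In a GitHub Actions job this would run against the diff of the push or pull request and exit early when it returns `False`.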
Right now tests don't take that long, but we also execute "sanity builds" which take a minute or so. We don't even test everything automatically yet; otherwise the parallelization of running all of the tests at the same time "freezes" the CI instance (I think due to lack of RAM?).
Having a list of things to not focus on is less useful to us than having a list of directories to only focus on; the latter would match our current CI environment settings (Google Cloud Build), which to my knowledge isn't supported with github-actions.
The main 3 requirements we'd be looking for in a "generic github-action conditional change checker" would be:
a `json` configuration file that works with `angular.json` to determine which projects/libs rely on each other. This also needs to be generic enough to work easily with most common tooling, executing multiple steps depending on which files changed.

I recently found github.com/dorny/paths-filter; it may be helpful for you. It can help you conditionally execute things based on file changes (assuming you use GitHub Actions). It doesn't have an option to compare against a previous commit / CI run, however.
Stuff that would help write and debug Actions faster would be great.
github.com/nektos/act exists, but it isn't a silver bullet either; there are a lot of improvements that could be made on the debugging and developer-experience sides of things.