Talk around unit tests and TDD in general has been quite polarizing. On one hand, it makes sense to have a measurable gatekeeper in place, especially for large projects. On the other hand, it's an awful lot of work for a task that doesn't feel productive. Not to mention the learning curve, which can vary wildly for beginners, and the lack of documentation or learning resources for some languages. But whatever your opinion on the matter, at the end of the day it's either going to be a job done or tech debt looming over us all.
Thankfully, the latest developments in AI have made it possible to automate part of the test-writing process. Keep in mind that many of these tools are still research previews or usable prototypes, which require signing up for access.
Here are some tools you can use to write tests from scratch, or to refine existing tests:
1. GitHub's TestPilot
After launching its AI pair programmer, Copilot, last year, GitHub cited TestPilot as one of its many exciting upcoming products. Like Copilot, TestPilot comes as part of a VSCode extension and is still a usable prototype.
To use it, you need to sign up for GitHub Copilot access, install the GitHub Copilot extension, and then install the GitHub Copilot Labs extension in VSCode. Keep in mind that GitHub Copilot access costs $10 per month at the time of writing.
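If you prefer the terminal, VSCode's own CLI can install both extensions. The extension IDs below are what I'd expect from the Marketplace listings, so double-check them before running:

```sh
# Install the Copilot and Copilot Labs extensions via VSCode's CLI.
# Extension IDs are assumed from the Marketplace; verify them first.
code --install-extension GitHub.copilot
code --install-extension GitHub.copilot-labs
```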
2. OpenAI Codex
A free alternative to TestPilot is OpenAI Codex, which is available as a limited beta. It doesn't come in the form of a VSCode extension, but instead as a website where you can paste your code and write the prompt.
To access this page, you need to sign up for an OpenAI account and then go to its Playground. The page provides many tuning options you can tinker with, which can be too verbose for beginners. It's suitable for when you want to experiment and need more freedom with prompts.
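As a rough illustration (the function and the instruction below are made up), a typical Playground prompt pastes the code first and then asks for tests in plain language:

```ts
// Paste the code you want tested into the Playground...
export function add(a: number, b: number): number {
  return a + b;
}

// ...then end the prompt with a plain-language instruction, e.g.:
// "Write unit tests for the add function above using Jest."
```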
3. Auto-test CLI
The next one is a shameless plug from myself. Auto-test is a CLI tool that uses OpenAI Codex under the hood, but hopefully provides a better experience since you can run the whole test-generation process from the CLI. You only need to give it a filename, and it will generate the test file complete with all the test cases inside.
To use it, you need to install auto-test and include it in the $PATH of your machine. Then, get your API key from OpenAI's account page and export it in your terminal profile. And that's it! You can run the auto-test command and point it at any file you want tested.
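A minimal session might look like the following. The environment variable name and file path are assumptions for illustration, so check the tool's README for the exact setup:

```sh
# Export your OpenAI API key (variable name assumed; see the README).
export OPENAI_API_KEY="sk-..."

# Point auto-test at the file you want tested; it generates a
# companion test file containing the test cases.
auto-test src/calculator.js
```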
The result of the test may vary. If the result is unsatisfactory, you can add a custom prompt that helps the AI better understand your code and generate better tests. What has worked for me is explaining a little bit about my code and what I want it to accomplish, usually with an example of input and output. Just providing a summary of the code would also suffice. You can add a custom prompt using the --prompt or -p flag, as shown below.
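For example (the file name and prompt text here are hypothetical):

```sh
# Give the AI some context about the code and an input/output example.
auto-test src/calculator.js --prompt "A calculator module; add(2, 3) returns 5."
```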
If you want more freedom with the prompt and want to completely override the default one, you can always add the --override or -o flag at the end of the command, like this:
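Again, the prompt below is hypothetical; with the override flag, whatever you pass replaces the default prompt entirely:

```sh
# The custom prompt fully replaces the built-in one.
auto-test src/calculator.js -p "Write Jest tests covering edge cases for add()." -o
```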
Keep in mind that in v0.1, the generated test file might lack important things such as imports for dependencies or mocks for external functions. You would still need to add these yourself, but on the bright side, at least we're no longer writing tests from scratch :)
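To make that concrete, here's a hypothetical touch-up of a generated Jest test, where the two commented lines are the kind of thing v0.1 may leave out:

```ts
import { add } from "../src/calculator"; // missing import, added by hand
jest.mock("../src/logger");              // missing mock, added by hand

describe("add", () => {
  it("adds two numbers", () => {
    expect(add(2, 3)).toBe(5);
  });
});
```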