I'm curious which small steps I should take to cover my existing app with tests from zero?
It seems obvious: just start writing tests, but maybe there are some hints? Please share your experience.
Top comments (10)
We use rspec-jumpstart for our clients who need test coverage help before beginning a Rails upgrade.
tjchambers / rspec-jumpstart
RSpec 3 code generator toward existing Ruby code. This gem helps you work on existing legacy code which has no tests.
Installation: rubygems.org/gems/rspec-jumpstart
Output example: unfortunately, lib/foo/bar_baz.rb has no test. That's too bad... OK, run rspec-jumpstart now! spec/foo/bar_baz_spec.rb will be created.

Wow, that's what I was looking for 👏. Will give it a try!
Maybe there's something to generate factory_bot factories from existing models?
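For reference, a hand-written factory_bot factory for a hypothetical User model is only a few lines, whichever generator you end up with:

```ruby
# spec/factories/users.rb — minimal sketch for an assumed User(name, email) model
FactoryBot.define do
  factory :user do
    name  { "Jane Doe" }
    email { "jane@example.com" }
  end
end
```

In a spec you would then call FactoryBot.build(:user) or FactoryBot.create(:user).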
I think request specs are a good place to make quick progress covering a lot of behavior. Simpler to write than higher or lower level specs.
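A minimal request spec sketch, assuming a Rails app with rspec-rails and an /articles route:

```ruby
# spec/requests/articles_spec.rb
require "rails_helper"

RSpec.describe "Articles", type: :request do
  it "lists articles successfully" do
    # Exercises routing, the controller, and the view/serializer in one go.
    get "/articles"
    expect(response).to have_http_status(:ok)
  end
end
```

One spec like this touches a whole vertical slice of the app, which is why request specs pay off quickly on an untested codebase.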
If you have some tooling budget, CodeClimate is a nice way to keep track of test coverage and get a scoreboard to watch early on in the process.
Thanks! By 'request specs' you mean controller tests, right?
Very good point: in time spent and complexity they sit somewhere between integration and unit tests.
I would recommend adding a tool like Coverband (github.com/danmayer/coverband). It tracks which code actually runs in production, so you know the most critical areas to cover with tests.
Great gem, thanks for sharing 😀. So far I had only been thinking about the SimpleCov gem.
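For reference, a minimal SimpleCov setup is only a few lines; the "rails" profile and the spec/spec_helper.rb location are assumptions about the project layout, and SimpleCov must be required before the application code loads:

```ruby
# spec/spec_helper.rb — very top of the file, before anything else is required
require "simplecov"

SimpleCov.start "rails" do
  add_filter "/spec/" # don't count the tests themselves as covered code
end
```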
The easiest approach is to identify the parts that are free of side effects and need only a small number of dependencies to execute, and cover those with tests first.
don't get it, sorry
When you write tests you need to know what output is produced for a given input. So if you can identify parts of the code that work without dependencies (for example, without calling the database and checking the results), you can test them easily. The more dependencies your code has (for example, on external microservices), the more you have to mock. And if you mock too much, you will end up with code that has pretty high coverage, but the program will still be full of bugs.
Since you have an existing codebase, you should start with the parts that are easiest to understand, totally, not partially. Controllers are a pretty bad choice because they span lots of logic spread across multiple modules. On the other hand, smaller models should be simpler, because they don't depend on other modules (only on the database, which you should mock).
Look, my project is a frontend over some kind of analytical database; I have no control over the data I'm processing. My application has coverage of about 90% from a single API call. That single test shows that my application can connect to the database, fetch the data, process it, and present it, yet we still have tons of problems, for example a title missing here and there. The thing is, the title comes from external data and I can't do anything about it. If I had a test that checked that all articles for a given keyword have titles, that test would be meaningless.
What I can do is take the opposite approach: mock the data and test my application's logic (assume that the title is provided, and add error handling for when it's not). Check that the connection is retried when a timeout happens. Unfortunately, this requires lots of mocking.
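A sketch of that "mock the data, test the logic" idea; the presenter class and the client interface here are hypothetical, with a plain RSpec double standing in for the data source:

```ruby
# Hypothetical presenter; the real data source is replaced by a test double.
class ArticlePresenter
  def initialize(client)
    @client = client
  end

  def title
    @client.fetch["title"] || "Untitled"
  end
end

RSpec.describe ArticlePresenter do
  it "falls back to a placeholder when the external data has no title" do
    client = double("AnalyticsClient", fetch: { "title" => nil })
    expect(described_class.new(client).title).to eq("Untitled")
  end
end
```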
Most of the tests in my application focus on checking that user input produces the correct database queries. The query-building logic lives in a few classes, all of them with no dependencies. I made them that way so they are easy to test. I chose that design under the influence of the "functional paradigm" (something different from object-oriented programming).
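A dependency-free query builder in that style might be tested roughly like this; the class name and query format below are made up for illustration:

```ruby
# Hypothetical query-building class: pure input -> output, no database needed.
class KeywordQueryBuilder
  def self.build(keyword:, limit: 10)
    { where: { title_contains: keyword }, limit: limit }
  end
end

RSpec.describe KeywordQueryBuilder do
  it "turns user input into a query description" do
    query = described_class.build(keyword: "ruby", limit: 5)
    expect(query).to eq(where: { title_contains: "ruby" }, limit: 5)
  end
end
```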
The best place to start is with the next bugs you encounter. Making tests for those bugs will stop them from coming back in the future.
Then, any new feature you program should have its tests too. If you have a very strict budget, test the regular case and an edge case. You'll at least know the feature works.
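As a sketch of that "regular case plus one edge case" budget, with a hypothetical parsing helper:

```ruby
# Hypothetical helper under test.
def parse_quantity(value)
  Integer(value)
rescue ArgumentError, TypeError
  0
end

RSpec.describe "parse_quantity" do
  it "parses a regular numeric string" do
    expect(parse_quantity("3")).to eq(3)
  end

  it "falls back to zero for garbage input (edge case)" do
    expect(parse_quantity("three")).to eq(0)
  end
end
```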
After this, if you have the time and money to do so, you can write tests for the older code that hasn't encountered bugs yet.
That's how I introduced unit/integration tests at my job, even though we didn't have a budget specifically for tests. It has led to fewer dumb or recurring bugs on our most recent sites.
Good luck!