
How I test on front-end

Daniel Poda πŸ‡¨πŸ‡¦ ・4 min read

On November 21, 2019 at 2:00 PM (EST), I'll be giving a presentation on vuemeetup.com

It will be about agile development with Vue, and in preparation I've come up with some content that I won't have time to cover. While this was meant for a Vue presentation, nothing in here is Vue-specific (which is partially why it didn't make the cut).

Why test?

The role of testing, in the context of agile development, is to give you confidence so that you can release more frequently.

My view on testing front-end projects is that I'm mostly testing for regressions.

I'm not automating tests to make sure that it matches ticket acceptance criteria, I'm writing tests to make sure that the feature I just added will not at some point stop working.

When I've just added a new feature, I usually know it works because I'm interacting with it while I'm coding. So if I'm writing a test for it, I find it easy to get lazy and write a test that doesn't capture enough of the functionality. If I think of the tests as trying to capture the functionality that I've implemented, I find it's a bit easier to get through the work of writing the tests.

How many tests do I write?

I was asked recently to quantify the amount of testing I would do for a (non-specific) project. I had a hard time giving a simple answer, not only because that's just the way I roll, but because it varies a lot from project to project.

I have one project that currently has no tests at all. I'm the sole (front-end) developer, and the changes range from bug fixes to one significant refactor I've done. It's mostly a dashboard with limited ability to effect change. But it's not going to see release soon, and some of the changes have caused major rework, so until the functionality of the UI gets solidified or the project gets a release date, I see adding tests as overhead I can save time/budget on for the time being. Eventually, before release, I will put together a set of tests so that I can release and make additional changes after the release with confidence.

On another project, I've got unit and integration tests. I've even written a script to diff visual snapshots to check the rendering in various browsers. It takes a while to run and is a nuisance to maintain, but it catches errors and every time it does, my dopamine levels surge.

I like long test names

Also helping me with tests is writing seemingly unnecessarily long descriptions.

For example, when your test fails, after a year of not looking at the code, which error message would you prefer?

it('checks for existing email', () => {})
it('opens modal with error when user submits with an existing email', () => {})

Not only will my future self thank me for that ridiculous message, I also find that when I start by writing the tests like this, it's easier to write the tests because I remember what I'm testing. In some cases, these can even come from a ticket acceptance criteria.

So if my tests read like a history of the various ticket acceptance criteria, I can change code with more confidence, and so can a dev that is seeing the project for the first time.

But I'm not a fan of snapshots

As of recently, I've resolved to stay away from snapshot tests (code snapshots, not visual/screenshot snapshots).

I find these tests are very easy to write. You have a single line of code, expect(myComponent).toMatchSnapshot();, and it guards against any change in the DOM. The problem, however, is that there are no useful assertions in that test. The test will show you the diff, highlighting which parts changed, but with so little context you may spend a lot of time making sense of it.

I was writing a new feature after 9 months of not looking at a project's code, and the snapshot test failed. The snapshot test is expected to fail, because I just added a feature, but I didn't have the slightest clue what I had been checking for in the snapshot. After a few minutes of staring at the diff, I assumed all was probably good and essentially blindly updated the snapshots so that they'd pass in the CI/CD pipeline. So what is the value of a test that tells you that something changed when you changed something? Take the time and write assertions.

I'll have extensive unit tests for some functionality, like a regex that validates emails. But unit testing a button seems pointless when your integration test is going to cover that.

I also rarely do TDD, because the paradigm of writing unit tests on front end components before writing the components just doesn't give me the ROI. On a CLI or an API server it makes sense, but for front-end it just seems like a lot of thrashing.

photo credit: https://unsplash.com/@sarahmcgaughey

Discussion

 

I also rarely do TDD, because the paradigm of writing unit tests on front end components before writing the components just doesn't give me the ROI. On a CLI or an API server it makes sense, but for front-end it just seems like a lot of thrashing.

can you explain that?

 

Of course, thanks for reading

In short, I find it's a lot more work to write tests for the front-end because I'm testing interaction, not only inputs and outputs.

Command line interfaces and APIs can be tested by providing an input and comparing the output to the expected output. Depending on the functionality of the service, you may also have some side effects to check for, but you would have an API design that is expected to match.

A front-end application ticket usually specifies what the functionality and interaction should be, not what the DOM structure will be; that part is up to the dev. Testing the inputs, I find, gets a lot more complicated: I'm dealing with the DOM, triggering events on inputs, and watching for classes or visibility changing. Not only are the input and the end result important, but there are often lots of side effects to watch for too. I don't always know exactly what the DOM structure will look like beforehand, so I would need to prepare the expected DOM result before I write the code under test. That usually results in having to update the test.

Granted, that's been my experience, your mileage may vary. If your experience has been to the contrary, I'd certainly like to hear about it.

 

I agree with you: it might be more difficult, but it's not impossible, and probably not even impractical.

You are right, implementing the complete test in detail might be difficult, but what you can always do as a first step is write down the expectations. E.g. you can use the Gherkin language to describe what should happen in natural language, without focusing on details like which button is to be clicked or which input field is to be filled; you define the expected behaviour.
That would not be TDD but rather BDD, though the goal is more or less the same.
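A sketch of what such a Gherkin feature could look like, borrowing the existing-email example from the article (the scenario wording is invented):

```gherkin
Feature: Sign-up form
  # Behaviour only; no button selectors or DOM details.
  Scenario: Submitting an email that is already registered
    Given a user account exists for "taken@example.com"
    When the visitor submits the sign-up form with "taken@example.com"
    Then an error modal is shown
    And the modal explains that the email is already registered
```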

In a second step you could even implement the test, including an imaginary DOM structure. If you use Page Objects, then changing the DOM should not be a big issue.

I'd even argue that you should write tests for buggy behaviour before you fix it: dev.to/jankaritech/should-you-writ...

I haven't come across gherkin yet, I'll have to give it a look.

Have a look at it; it's a great tool for writing expectations. For BDD in JS you can use cucumber-js to interpret it: github.com/cucumber/cucumber-js

 

+1. I don't understand what the difference is between developing a component on the front end and a new controller on the back end using TDD.

 

I like the way you wrote this and I agree with the way you see this subject.
I wrote something similar recently
dev.to/parsyvalentin/piece-of-mind...
Hope you'll see something interesting.
Hope to read you again soon.

 

I've been trying to take a more TDD approach lately and it's still kicking my ass.

until the functionality of the UI gets solidified or the project gets a release date, I see adding tests as adding overhead

I really relate to this, I tinker so much while building out UI that I feel if I did write my tests first I'd just be updating them twice a day because I changed my mind about what a certain element should show/do.

 

Glad you can relate. I just want to point out that by putting it off, I am (we are) taking a risk: when it's time to release, I'll be writing the tests when the functionality is no longer fresh in my mind, so they might not have as good coverage. It's important to be aware of the risk/benefit.

 

Great honesty about how/why you approach testing.