Originally published on my blog: https://sobolevn.me/2019/02/engineering-guide-to-user-stories
Agile people are obsessed with writing user stories. And it is a powerful instrument indeed. But in my experience, a lot of people are doing it wrong.
Let's see an example:
As a user
I want to receive issue webhooks from Gitlab
So that I can list all current tasks
Seems like a valid user story, doesn't it? In fact, this tiny story contains multiple issues. And if you cannot find at least 8 mistakes in it, this article will be worth reading for you.
This article is split into three main parts:
- Getting better with the default user story format
- Rewriting user stories with BDD to make them verifiable
- Linking user stories with tests, source code, and documentation
While some parts might look more interesting to different categories of readers, it is important for everyone to understand the full approach.
Spotting and fixing problems
As we all know, all our requirements must be correct, unambiguous, complete, consistent, ranked, verifiable, modifiable, and traceable, even if they do not look like requirements at first glance.
User stories tend not to have some of the given characteristics. We need to fix that.
Using consistent language
Is "receive issue webhooks" and "list all current tasks" somehow connected?
Are "tasks" and "issues" the same thing or not? It might be completely different things or just bad wording. How do we know?
That's what glossaries are for! Every project should start with defining the specific terms that will form the ubiquitous language for the future. How do we build this glossary in the first place? We ask domain experts. When we encounter a new term, we make sure that all domain experts understand it correctly and consistently. We should also keep in mind that the same term might be understood differently in different situations and contexts.
Let's say that in our case, after consulting a domain expert, we have found out that "task" is the same thing as "issue". We now need to remove the incorrect term.
As a user
I want to receive issue webhooks from Gitlab
+++So that I can list all current issues
---So that I can list all current tasks
Excellent. Using the same words for the same entities is going to make our requirements more clear and consistent.
Users do not want your stuff
When we modified the last line, it caught my attention that the user's goal is to "list all current issues". Why would this poor user want to list some issues? What is the point of doing it? No user wants that. This requirement is simply incorrect.
This is an indicator of a very important problem in writing requirements. We tend to mix our own goals with the user's goals. And while our goal is to please our users, we should concentrate on them in the first place, valuing their needs more than ours. And we should explicitly express that in our requirements.
How do we know what the user wants? Again, we don't. We need to consult real users or their representatives about it. Or make a hypothesis ourselves if we cannot ask anyone.
As a user
I want to receive issue webhooks from Gitlab
+++So that I can overview and track the issues' progress
---So that I can list all current issues
After collecting more feedback, we know that our users need to track the progress of a project, not to list issues. That's why we need to receive and store information about issues from the third-party service.
Removing technical details
Have you ever met a single person who literally wants to "receive issue webhooks"?
No one wants to do that. In this case, we also mix two different concerns together.
There's a clear separation between the user's goals and the technical ways to achieve them. And "to receive issue webhooks" is clearly an implementation detail. Tomorrow it can be changed to WebSockets, push notifications, etc. And the user's goal will not change because of that.
As a user
+++I want to have up-to-date information about Gitlab issues
---I want to receive issue webhooks from Gitlab
So that I can overview and track the issues' progress
See? Only the important information is left; implementation details are stripped away.
Clarifying roles
Just by the context, it is pretty clear that we are dealing with some kind of developer-related tool: we use Gitlab and issue management. So, it is not hard to guess that we will have different kinds of users: juniors, middle developers, and seniors. Maybe project managers and other people as well.
So, we come to the role definitions. All projects have different types of users, even if you think there are no explicit types. These roles can form depending on how or why your product is used. And these roles must be defined the same way we define terms for the project.
What kind of users are we talking about in this particular user story? Will junior devs overview and track the progress the same way as project managers and architects? Obviously not.
+++As an architect
---As a user
I want to have up-to-date information about Gitlab issues
So that I can overview and track the issues' progress
After making an intelligent guess, we can separate different user stories by different user roles. And it gives us fine-grained control over the features we ship and whom we ship these features to.
Extending user stories
This simple format, As a <role or persona>, I want <goal/need> so that <why>,
is great, since it is succinct and powerful at the same time. It gives us a perfect way to communicate. However, there are several disadvantages of this format that we should, at least, know about.
Making user stories verifiable
The problem that we still have with the given user story is that it is not verifiable.
How can we be sure that this story (now or still) works for our users? We cannot.
There is no clear mapping between this user story and our tests. It would be awesome if one could write user stories as tests...
Wait, but it is possible! That's why we have Behavior-driven development and the gherkin language. That's why BDD was created in the first place. It means that we can rewrite our user story in the gherkin format to make it verifiable.
Feature: Tracking issues' progress
  As an architect
  I want to have up-to-date information about Gitlab issues
  So that I can overview and track the issues' progress

  Scenario: new valid issue webhook is received
    Given issue webhook is valid
    When it is received
    Then a new issue is created
Now, this user story is verifiable. We can literally use it as a test and track its status. Moreover, we now have a mapping between our higher-order requirement and an implementation detail, which allows us to understand how exactly we are going to fulfill this requirement. Notice that we do not replace the business requirement with implementation details; we complement it.
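To show how literal this mapping can be, here is a minimal sketch that binds the scenario above to an automated test with pytest-bdd. The feature file path, the payload shape, and the handle_issue_webhook entry point are made-up placeholders, not part of the original story:

from pytest_bdd import scenario, given, when, then

from myapp.webhooks import handle_issue_webhook  # hypothetical entry point
from myapp.repositories import issues  # hypothetical storage


@scenario(
    'features/tracking_issues_progress.feature',  # hypothetical path
    'new valid issue webhook is received',
)
def test_new_valid_issue_webhook():
    """The gherkin scenario above now runs as a regular pytest test."""


@given('issue webhook is valid', target_fixture='payload')
def valid_payload():
    # A payload shaped roughly like Gitlab's issue webhook body.
    return {'object_kind': 'issue', 'object_attributes': {'title': 'Bug'}}


@when('it is received')
def webhook_is_received(payload):
    handle_issue_webhook(payload)


@then('a new issue is created')
def issue_is_created():
    assert issues.find_by_title('Bug') is not None

Each Given, When, and Then line maps to exactly one step function, so a broken requirement shows up as a failing test.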
Spotting the incompleteness
Once we started using gherkin to write our user stories, we began writing scenarios for them. And we found out that there might be several scenarios for the same user story.
Let's take a look at the first scenario we made: "new valid issue webhook is received". Wait, but what will happen when we receive an invalid webhook? Should we still save this issue or not? Maybe we will need to do some extra work as well?
Let's consult Gitlab's documentation as a source of information about what can go wrong and what to do in these cases.
It turns out we have two different invalid cases that we need to handle differently.
The first one: Gitlab accidentally sends us some garbage. The second one: our authentication tokens do not match.
Now we can add two more scenarios to make this user story complete.
Feature: Tracking issues' progress
  As an architect
  I want to have up-to-date information about Gitlab issues
  So that I can overview and track the issues' progress

  Scenario: new valid issue webhook is received
    Given issue webhook is valid
    And issue webhook is authenticated
    When it is received
    Then a new issue is created

  Scenario: new invalid issue webhook is received
    Given issue webhook is not valid
    When it is received
    Then no issue is created

  Scenario: new valid unauthenticated issue webhook is received
    Given issue webhook is valid
    And issue webhook is not authenticated
    When it is received
    Then no issue is created
    And webhook data is saved for future investigation
I like how this simple user story now feels like quite a complex one, because it reveals its internal complexity to us. And we can adjust our development process to this growing complexity.
Ranking user stories
Currently, it is not clear how important it is for architects to "overview and track issues' progress". Is it more important than the other user stories we have? Since it looks rather complex, maybe we can do something easier and more important instead?
Ranking and prioritization are crucial to any product, and we cannot ignore them, even if user stories are the only way we write requirements. There are different methods to prioritize your requirements, but I recommend sticking to the MoSCoW method. This simple method is based on four main categories: must, should, could, and won't. It implies that we will have a separate prioritized table of all user stories in a project somewhere in the documentation.
And again, we need to ask users about how important each feature is.
After several conversations with different architects who work with our product, we have found out that this feature is an absolute must:
| Feature | Priority |
| --- | --- |
| Authenticated users must be able to send private messages | Must |
| Architects must track issues' progress | Must |
| There should be a notification about an incoming private message | Should |
| Multiple message providers could be supported | Could |
| Encrypted private messages won't be supported | Won't |
So, we can now modify the user story's name to map it to the prioritized feature:
Feature: Architects must track issues' progress
  As an architect
  I want to have up-to-date information about Gitlab issues
  So that I can overview and track the issues' progress
  ...
We can even link them together. Just use hyperlinks from your ranked requirements table to the feature file with the user story.
This way we can be sure that this feature will be one of the first ones to be developed, since it has the highest priority.
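If you keep the priorities table in a machine-readable form, you can even check these links automatically. A rough sketch, assuming the table is exported to a hypothetical docs/priorities.csv with Feature and Priority columns, and that feature files live under features/:

import csv
from pathlib import Path

FEATURES_DIR = Path('features')  # hypothetical layout
PRIORITIES_TABLE = Path('docs/priorities.csv')  # hypothetical export of the table above


def test_every_must_requirement_has_a_feature_file():
    # Collect all "Feature:" names declared in the gherkin files.
    feature_names = {
        line.strip().split(':', 1)[1].strip()
        for feature_file in FEATURES_DIR.glob('**/*.feature')
        for line in feature_file.read_text().splitlines()
        if line.strip().startswith('Feature:')
    }
    # Every "Must" requirement should be backed by a feature file.
    with PRIORITIES_TABLE.open() as table:
        for row in csv.DictReader(table):
            if row['Priority'].strip().lower() == 'must':
                assert row['Feature'].strip() in feature_names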
Linking everything together
Without proper care, you will soon end up with a mess of user stories, tests, source code, and documentation. With the constant growth of your project, it will be impossible to tell which parts of the application are responsible for which business use-cases. To overcome this problem, we have to link everything together: requirements, source code, tests, and docs, so that each of them references the others.
I will use python to illustrate the principle.
I define use-cases as a set of unique high-level actions your app can perform (which looks pretty similar to Clean Architecture's point of view).
I usually define a package called usecases and put everything inside it, so it is easy to see all existing use-cases at once. Each file contains a simple class (or a function) that looks like this:
class CreateNewIssueFromWebhook(object):
    """
    Creates a new :term:`issue` from the incoming webhook payload.

    .. literalinclude:: path/to/your/bdd/user-story/file
       :language: gherkin

    .. versionadded:: 0.2.0
    """

    def __call__(self, webhook_payload: 'WebhookPayload') -> 'Issue':
        # Do something with the payload and return the created issue ...
        ...
I use sphinx and its literalinclude directive to include the very same file we use for tests into the documentation of the domain logic. I also use the glossary to indicate that issue is not just a random word: it is a specific term that we use in this project.
This way our tests, code, and docs will be as coupled as possible. And we will need to worry less about keeping them in sync. We can even automate this process and check that all classes inside usecases/ have a .. literalinclude directive in their docs.
You can also use this class to test our user story. This way you bind together the requirements, the tests, and the actual domain logic implementing this user story.
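For instance, the "When it is received" step from the earlier sketch can call the use case directly. The module path below is a hypothetical location inside the usecases package, and target_fixture on a when step requires a reasonably recent pytest-bdd release:

from pytest_bdd import when

from usecases.create_new_issue_from_webhook import CreateNewIssueFromWebhook  # hypothetical module


@when('it is received', target_fixture='created_issue')
def webhook_is_received(payload):
    # The step exercises the very class that implements the user story,
    # so the requirement, the test, and the domain logic stay linked.
    return CreateNewIssueFromWebhook()(payload)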
Job done!
Conclusion
This guide will help you write better user stories, focus on your users' needs, keep your source code clean, and reuse as much as possible for different (but similar) purposes.
Top comments (19)
Very good article! 🎉
I wonder how you manage really complex situations? Let me try to describe it with a dumb example. Imagine you're writing specs for an app similar to instagram (and with really fine-grained control), and you end up writing a scenario like:
I remember working with scenarios that had like 10 Given statements or so. And I couldn't really see how the number of statements could have been reduced.
You have to apply abstraction to merge several steps together. Or apply decomposition to separate them. Let me explain, please.
For example: you want to be notified when someone posts likes on your posts/images/etc. We all know and love this feeling. For this situation, it does not really matter how you created this content. You only write specs for the reaction part.
This is the first feature: receiving notifications. Disabling the ability to react is a different feature and should be specified as a different feature, because notifications only make sense when readers are allowed to post reactions. However, you can mention that all content for the feature "authors are notified about readers' reactions" is allowed to be reacted on. You can use the Background tag for that: relishapp.com/cucumber/cucumber/do...
I hope this helps you.
Thanks for the reply!
Then this implies to me that I might end up writing long Backgrounds. Do you maybe know some mature open source projects where they're doing BDD the right way? I would love to analyze something in depth.
Nope, sorry :sad:
I only know the theory, but I think the scenario setup can be abstracted the same way you talk about it in the office. That is, do you go around saying "let's make a great notification feature for posts with notifications enabled and permissions are allowed and the user clicks etc etc"? No, you have some shorthand that the organisation accepts (or is ready to adopt). Internal language can have deep, meaningful abstractions, and that's worth capitalising on. The BDD part of Gherkin is to identify those ways of talking.
Well, I think so at any rate. I think your question is really good and gets to the heart of the complexities with full behavioural test coverage.
Great article!
Hypothetical situation: how would you write a new story in a subsequent sprint if you wanted to add push notifications to the existing feature?
Well, you can copy-paste existing scenarios/features and edit them to match your new feature.
Or, you can use the Scenario Outline tag to parametrise your specs. You can use it if the features are absolutely similar.
Thanks for the response Nikita, however I don't think I was clear enough with my question.
Ignoring any acceptance tests, dealing with just the story:
So this was for using webhooks, but coming back in a second sprint to add push notifications, I'm unsure how the new story would be written, since it doesn't contain any implementation details to differentiate it from the original story. What do you think?
That's the most important part: your user story does not change. And it should not be duplicated either.
You can write new Scenarios in the same file. You can even use parametrisation to inject dependencies into some of your existing Scenarios. And cover new corner-cases inside new Scenario blocks.
Example:
Right got it, thanks for clarifying 👍
Once a user story is done, it should not go back to the backlog. If you have some enhancement, create a new story.
You can write it like this:
Ah, I might have misread your scenario. In case you meant push notifications between GitLab and your system (in addition to webhooks), you may create a technical task to address the issue.
rgalen.com/agile-training-news/201...
In the end, a technical task should only be considered if it has some business value in it. Try to find some way to describe the value. Otherwise, if you want to go fast and the product owner and stakeholders understand the value behind the lines, then just use a technical task.
This is amazing! The fine-tuning of a very common issue is an awesome example. Showing this to my work colleagues, and I'm going to try and use this technique going forward with whatever work I do. I always find that putting in a little extra effort upfront does wonders for the quality and speed of producing an outcome.
Thanks for the write-up!
P.S. compliment -> complement
Thanks!
P.S. Fixed.
Great tips. I plan to share this with my network.
I wonder if the user story could be improved even more, by splitting on the 'and'? For instance, "So that I can overview and track the issues' progress" becomes, "So that I can overview issues..." and "So that I can track issues"? Maybe I'm not familiar with the term "overview"?
Oh boy! That's a really interesting method, I'll give it a try for sure.
I'm having this trouble: a simple user story that's hard to implement, so the estimation is bad, and I didn't know how to solve this effectively. I think this method could do the job!
Thanks a ton.
THIS GUY GETS IT.
Thanks for the share, and for this nice approach to linking your stories better to your source! ❤️
Bloody awesome! I hope many developers (and BAs) read this. Clear thinking and clear user stories lead to clear and usable software.