DEV Community

Hudson Burgess

Posted on

How to systematically determine requirements?

My team has a serious problem with forgetting seemingly obvious requirements. The 1-2 weeks before a release are full of "oh wait, we forgot about x. But it's an easy change, no big deal." Simple omissions are bound to happen, but by the time we release, the list of omissions we caught is usually bigger than the list of initial requirements.

What's a good starting point for systematically determining requirements for new features up front?

Questions to consider:

  • are there any checklists you use?
  • are there any particular areas where more requirements are forgotten than others? (ex. security, UI layout, etc.)
  • any useful / widely accepted books addressing this subject?
  • have you been on a team that successfully addressed this problem; if so, how did you fix it?

Oldest comments (10)

Facundo Conde

I think your team might need a little more organization. Use tools like Jira to organize yourselves and the requirements, and try the Scrum methodology; you'll be more on track, I believe.

Hudson Burgess

The bigger issue for us is identifying the requirements in the first place, not so much keeping track of them. (Though we could improve that too)

Facundo Conde

Well, if that is the problem, it seems you need to work on your QA or test department. Have clear requirements from the client or PM, take half a day for that if necessary, but try to organize yourselves the best you can.

Roman Bardachev

My suggestion is to use best practices for user story creation, like following the INVEST principle. You also need grooming sessions for the stories that will go into a sprint. You should practice static testing of your available design documents and requirements; while doing that, you can find missed requirements. Try creating an FAS (feature analysis specification). You can also assign the role of requirements analyst to one tester or dev.
In short, it all depends on what SDLC/STLC you are following.

Bertil Muth

It would be helpful if you explained the context a bit.

What kind of requirements have been forgotten most in your past?
Functional requirements, quality requirements, constraints?
What was the reason they were forgotten?
Were all the right stakeholders involved early enough?

In my experience, quality requirements tend to be forgotten more often (e.g. the so-called -ilities: usability, reliability, and also things like performance).

There are lists of quality attributes that you can pick your checklist items from. The best known is probably FURPS.
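For a concrete starting point, a FURPS-based checklist can even live as a small script the team walks through during planning. This is just a sketch in Python; the prompts under each heading are examples of questions I'd ask, not an official list:

```python
# A rough FURPS-based requirements checklist. The prompts are illustrative;
# trim or extend them for your own product.
FURPS_CHECKLIST = {
    "Functionality": ["What are the main user tasks?", "Any security or audit needs?"],
    "Usability": ["Does it work at small screen sizes?", "Keyboard and accessibility support?"],
    "Reliability": ["What happens on failure or bad input?", "Any recovery or backup expectations?"],
    "Performance": ["Expected load and response times?", "Largest realistic data volumes?"],
    "Supportability": ["Logging and monitoring?", "Configuration, install, and upgrade paths?"],
}

def review(feature: str) -> None:
    """Print the checklist prompts for a requirements review of one feature."""
    print(f"Requirements review: {feature}")
    for category, prompts in FURPS_CHECKLIST.items():
        for prompt in prompts:
            print(f"  [{category}] {prompt}")

if __name__ == "__main__":
    review("CSV export")
```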

Concerning functional requirements, learning about Story Mapping may be interesting to you.

SharpDog

Run many sprints (Scrum or Scrum-like), each:

  1. Beginning with a product manager entering as many use cases (stories) as they can think of, no matter how doable they are, and prioritizing them in terms of user/business importance.

  2. This is followed by a brainstorming session with the whole team: pointing the stories, possibly adding more ("oh wait, X requires Y & Z"), potentially splitting or otherwise organizing stories, and adding associated tasks.

  3. Now do the sprint (dev & QA).

  4. Present the work done to the product manager and end users (if possible) and get feedback. Don't let anyone out of the room until you get good feedback.

Repeat (go back to step 1) several/many times before the scheduled deadline.

=> Do not skimp on steps 1, 2 & 4 because you want to concentrate on step 3. This is the most frequent problem and it's self-inflicted!

The actual use of the points (e.g. velocity, burndown) is optional.

Nested Software • Edited

If the problem is identifying requirements in the first place, one practice I would recommend is dogfooding. Sometimes teams narrow their focus to just the exact features they're developing and the precise success criteria that a customer or dev manager has defined ahead of time. They may even declare a feature complete when it only passes unit tests, without having actually tested it directly by interacting with the application as a whole. When people include directly using the product as part of their development activity, gaps and corner cases become obvious much sooner. Everyone, including developers, should be using the application as a whole and playing around with it. Try resizing the screen, using unusual gestures, or entering odd data into it - whatever is applicable - to see if something jumps out as missing or broken.
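A script is no substitute for actually playing with the product, but if you want to make the odd-data habit repeatable, something like the sketch below can help. It assumes Playwright is installed (`pip install playwright`, then `playwright install`) and uses a hypothetical signup page and selectors; swap in your own app:

```python
# Poke the running application with an unusual viewport and odd inputs,
# then eyeball the screenshots for anything missing or broken.
from playwright.sync_api import sync_playwright

ODD_INPUTS = ["", "   ", "'; DROP TABLE users; --", "0" * 10_000]

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    # An unusual viewport, in the spirit of "try resizing the screen"
    page.set_viewport_size({"width": 320, "height": 480})
    for i, value in enumerate(ODD_INPUTS):
        page.goto("https://example.com/signup")    # hypothetical page
        page.fill("#email", value)                 # hypothetical selector
        page.click("button[type=submit]")
        page.screenshot(path=f"signup-odd-input-{i}.png")
    browser.close()
```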

Blaine Osepchuk

The question is a little vague.

Do you guys do design reviews?

Or are you just in such a hurry to ship code that nobody is really sitting back and making sure you're building the right thing in the right order?

Hudson Burgess

Definitely the latter

Blaine Osepchuk • Edited

Right. Your team is presumably filled with smart, capable people. But even smart people have a tendency to make suboptimal decisions when the deadline is approaching.

It's something I'd address at your retrospectives. You are doing retrospectives, right? Learning from your actions? Making your team stronger? Etc.

Anyway, people get in a rush and stop doing the things they know need to be done to create working software. But we also know that it's very expensive to fix a missing requirement when you catch it in testing. Different studies come up with different numbers, but 10 to 100 times the cost of catching it up front is not unreasonable.

Just bringing up that cost should be enough to get people excited about spending a little more time making sure you haven't missed any major requirements. But some teams will need more convincing, which is okay too. If that's the case, your next step is to begin measuring the cost of missed requirements. Just keep track of it and report it at your retrospectives.

The number is typically huge. And once you see how wasteful missing requirements are, you can't unsee that. Then you just schedule some time for searching for missing requirements at the beginning of your planning instead of in testing. And keep track of that too.

So, now you have a running experiment. You can turn the "hours of searching for missing requirements" knob to whatever your team thinks is reasonable and observe the change in "hours of fixing missing requirements" value. And if you are working in short intervals (sprints) you'll soon optimize that value precisely for your team.
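A back-of-the-envelope version of that tracking can be as simple as the sketch below; the hours and the $75 blended rate are made up purely for illustration:

```python
# Track two numbers per sprint and watch how they move as you turn the
# "search time" knob. All figures here are invented for illustration.
HOURLY_COST = 75  # assumed blended team rate

# (sprint, hours searching for requirements up front, hours fixing missed ones late)
sprints = [
    ("sprint 1", 2, 40),
    ("sprint 2", 6, 22),
    ("sprint 3", 10, 8),
]

for name, search_hours, fix_hours in sprints:
    total = (search_hours + fix_hours) * HOURLY_COST
    print(f"{name}: searching={search_hours}h, late fixes={fix_hours}h, cost=${total:,}")
```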

Does that make sense?

What do you do to actually find your missing requirements? Let me refer you to Software Requirements by Karl Wiegers and Joy Beatty. If you have a requirements question, this book likely has the answer.