DEV Community

Discussion on: Why we ditched story points to be more value-oriented

Javier Aguirre • Edited

I read the post through twice and I still don't think I clearly understand why your company has chosen to drop story points.

Hello Alex! You're right, my article is a bit messy; I need to put more time into it, but that's hard for me. :-)

We have two main goals in the company regarding product delivery:

  • Speed up delivery and focus more on value for the client
  • Reduce friction in the dev team so we can deliver faster and with higher quality

For these two objectives, story points were, in our opinion, holding us back. We are a software agency, BTW.

It seems like they've just moved to working on what I'd see as an 'epic' level feature rather than the tasks that make up the epic - and doing so with no metric or indication of how long it'll take beyond 'it ideally needs to be done in 2 weeks so we can ship it'

Not really. We still have epics and features; what we want is to make all the features more or less "atomic", in the sense that we spend roughly the same time on each of them and they share a common goal: every feature or user story needs to give value to the client. When we deploy, the client needs to see something they can use.

We have two-week sprints, but we deliver when it's ready, usually at least two deploys per sprint. We have CI/CD, so that's quite easy if we focus on feature QA and testing, because when we ship we need to be sure that feature is correct and stable. If the features are "atomic", we can deploy each of them separately; we all know that's sometimes difficult, but it's something we want to achieve more and more.

The post also mentions that the company has changed several things at once, i.e. sprint duration from 1 week to 2 weeks and refocusing on value rather than workloads/estimates, among other things, which (in my experience) just means they'll be unable to measure or exactly pinpoint what has made the actual benefit (if any occurs at all), as each change muddies any potential metrics of the other changes.

We decided this together with the team (as we usually do): we set goals, and we agreed that, given our goals of delivering more and with higher quality, story points are not a metric that gives us anything.

Humans are terrible at estimating, and in our case, given how we work, it's not a first-class metric. We do estimate when preparing proposals for clients, of course.

Did they consider using another style of work such as kanban or changing the existing process - such as by empowering the teams to decide how they broke the work up so they could determine workloads themselves?

We do use Kanban and Scrum, or most of their ideas; of course, almost nobody follows them completely, and neither do we. :-)

I think there were other, far simpler and more measurable ways to tackle the issues you mentioned than what you've described.

We are simplifying the way we measure this. Our new metrics are the number of features deployed, code quality (duplication, complexity, code coverage, coding-standards issues), and deployment frequency. Story points weren't a good metric for us to know whether we're healthy and delivering fast.
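To make the delivery side of that concrete, here's a minimal sketch of how the first and last of those could be computed from a deploy log. The dates and feature names are invented for illustration, and the code-quality indicators come from a separate tool, so they aren't covered here:

```python
from datetime import date

# Hypothetical deploy log: (deploy date, features shipped in that deploy).
# The data and names are illustrative, not from a real project.
deploys = [
    (date(2019, 7, 1), ["export-to-csv"]),
    (date(2019, 7, 4), ["saved-filters", "filter-sharing"]),
    (date(2019, 7, 10), ["bulk-edit"]),
]

sprint_days = 14  # two-week sprint, as described above

features_deployed = sum(len(features) for _, features in deploys)
deploys_per_week = len(deploys) / (sprint_days / 7)

print(f"Features deployed this sprint: {features_deployed}")
print(f"Deployment frequency: {deploys_per_week:.1f} deploys/week")
```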

Would be happy to discuss further and I hope the changes do have some positive results for y'all 👍

Thank you, Alex! Thank you for taking the time to write up your ideas; I really appreciate good constructive criticism. We can talk about it further, of course!

Alex Gabites • Edited

Thank you for the detailed reply and more insight into the things that influenced your decisions 😃

For context on where I work and where my thinking comes from - currently I work full time with a small team of 3 developers (myself included). Previously I've worked at larger companies with 100+ developers working on the same project at once, so the last 6 months have been quite the change. I also do a small amount of contract work on the side for clients.

Currently at work we're tackling some of the same issues/goals you've mentioned:

  • Focus on the value a feature provides to our business/end users
  • Deliver quality over quantity
  • Break it fast and deliver often

Our approach to achieve these goals has been to break down our requirements into stories that are only the value-add/feature they provide and then create tasks within them; each feature, as you said, needs to be as 'atomic' as possible. We then point the feature with the effective 'cost' it will have for our team to complete (based on rough time, complexity and risk).
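To give a rough idea of what combining rough time, complexity and risk might look like, here's a purely illustrative heuristic (the scales and buckets are invented; our real pointing is a team discussion, not a formula):

```python
# Illustrative only: a made-up way to turn rough time, complexity and risk
# into a story-point bucket. Real pointing is a team discussion, not a formula.
POINT_BUCKETS = [1, 2, 3, 5, 8, 13]

def point_feature(rough_days: float, complexity: int, risk: int) -> int:
    """complexity and risk are 1 (low) to 3 (high); returns the nearest bucket."""
    raw_cost = rough_days + complexity + risk
    # Snap the raw cost to the closest bucket.
    return min(POINT_BUCKETS, key=lambda b: abs(b - raw_cost))

print(point_feature(rough_days=2, complexity=2, risk=1))  # -> 5
```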

We don't find that this process holds us back, as it only consumes 2-3 hours every fortnight (~45 minutes of planning at sprint start, and two 1-hour sessions on Wednesdays to discuss where we're at, what needs to come into the backlog and be refined for the next sprint, and then pointing what we have 'certainty' on), and we are able to estimate reasonably effectively how much we'll deliver each sprint.

So long as we remain focused on completing the tickets in the order we brought them into the sprint, we've found that we remain effective and keep delivering value quickly via our processes (and the CI improvements we've made in the last few months have enabled this further).

While not official, if I were to write down our sprint health metrics, they'd probably be along the lines of:

  • did we complete what we set out to do (committed vs completed) regardless of how much was completed
  • did any work items increase/decrease in scope (i.e. were they pointed over/under), why, and how can we avoid that in the future
  • did we deliver value to the business and our end-users with what we completed

and these are brought up and discussed at every retrospective we've had (see the sketch below for how the first two could be tracked).
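This is only a hypothetical sketch (the ticket IDs and point values are invented) of how the committed-vs-completed and scope-change questions could be tracked mechanically; in practice we just discuss them at retro:

```python
# Hypothetical sprint data, purely to show how the first two questions
# could be tracked; in practice we eyeball this at retro.
committed = {"FEAT-101": 3, "FEAT-102": 5, "FEAT-103": 2}  # points at sprint start
completed = {"FEAT-101": 3, "FEAT-103": 5}                 # points at sprint end

committed_vs_completed = len(completed) / len(committed)   # 2 of 3 tickets done

scope_changes = {
    ticket: completed[ticket] - points
    for ticket, points in committed.items()
    if ticket in completed and completed[ticket] != points
}

print(f"Completed {committed_vs_completed:.0%} of what we committed to")
print(f"Tickets that changed in scope: {scope_changes}")   # {'FEAT-103': 3}
```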

I think the best way to sum up how I'm thinking after reading your reply is that I think we both (and probably everyone else in the industry too!) are tackling the same issues and trying to find solutions to them - and I really appreciate reading other posts about people going through the same/similar things and trying different ways to solve them!

However, I still do not think that these new metrics you're deploying are any better than story points in themselves:

  • I just see the number of features deployed as the same metric as the number of points completed - in the end they'll be about the same, because if you do more features you'll see the number increase, just as you would if you completed more points (and vice versa)
  • code quality doesn't add value to the customer directly (they only care whether they get feature x this week or not), and from what I've learned in the past, I'd say it's notorious for being an arguably poor indicator of software quality
  • frequency of deployment doesn't strike me as a measurable metric, but that could just be the wording or my interpretation of it! 😉

I think there were other, far simpler and more measurable ways to tackle the issues you mentioned than what you've described.

I think I'd rephrase this now to say something more like: it would seem that you could continue to measure the things you want and get the metrics out by continuing to use story points, as they don't seem to have been the issue for your team(s) in my opinion. Rather, I believe you could make small, incremental process changes through actions such as reading the agile manifesto thoroughly and applying it better than you admit you have been; that may be more likely to provide the benefits and improvements you're looking for!

However, something I omitted in my original reply and removed before hitting submit was that I'll be the first to admit that I don't work within your organisation and I don't know the complexities or other challenges your teams face day to day. You all need to do what's best for you, and as long as you (and your teams) can learn from any mistakes, make changes and continually improve, eventually you'll find what works right for y'all.

Javier Aguirre

Hello again! :-)

While not official, if I were to write down our sprint health metrics, they'd probably be along the lines of:

  • did we complete what we set out to do (committed vs completed) regardless of how much was completed
  • did any work items increase/decrease in scope (i.e. were they pointed over/under), why, and how can we avoid that in the future
  • did we deliver value to the business and our end-users with what we completed

For me, those are pretty reasonable goals. The only difference in our approach is that we want to have two simple main goals (deliver more, and with higher quality), and only the last one you mentioned is at the top of our list. We also take care of the other goals you wrote, but what I want is to simplify our primary goals.

I think the best way to sum up how I'm thinking after reading your reply is that I think we both (and probably everyone else in the industry too!) are tackling the same issues and trying to find solutions to them - and I really appreciate reading other posts about people going through the same/similar things and trying different ways to solve them!

I think we share many metrics and approaches. What I wanted to clarify with this post is that we tried to change our primary goals and metrics so we could improve the company's direction. It wasn't bad before, but focusing on delivery and shipping made us more product-oriented. I'm not saying the other metrics are wrong; I'm saying they fulfill other goals that are less important for us right now.

I just see the number of features deployed as the same metric as the number of points completed - in the end they'll be about the same, because if you do more features you'll see the number increase, just as you would if you completed more points (and vice versa)

Yes! I totally agree with this one. :-)

code quality doesn't add value to the customer directly (they only care whether they get feature x this week or not), and from what I've learned in the past, I'd say it's notorious for being an arguably poor indicator of software quality

I fell into my own trap here! 😁 What I can say with the indicators we have (we use Codacy for this) is whether we're going to spend more or less time on something and what the health of the code is (less healthy means more risk and uncertainty), but it's NOT a delivery metric.

I agree with you again.

frequency of deployment doesn't strike me as a measurable metric, but that could just be the wording or my interpretation of it!

It's true, but it does say something about delivery health: if we want to have 'atomic' features and iterate fast, we need to deploy fast and often. It says something about the project and delivery, although I have to say we're still thinking about which metrics to focus on here.

What we're doing right now is following a 'shippable features that don't come back' approach rather than checking all these metrics. Still, in the coming months we need to have a good overview of what's happening with our features and delivery, that's for sure. :-)

However, something I omitted in my original reply and removed before hitting submit was that I'll be the first to admit that I don't work within your organisation and I don't know the complexities or other challenges your teams face day to day. You all need to do what's best for you, and as long as you (and your teams) can learn from any mistakes, make changes and continually improve, eventually you'll find what works right for y'all.

Thank you for your comment again! I appreciate the effort you put into responding, and I've really learned from your feedback. 💯

Alex Gabites

Thank you for being open to discussing and receiving feedback - I've certainly been able to learn from your replies as well 👍

Do a follow-up post in a few months - I'd love to know how it's going!

Javier Aguirre

That's a great idea! I'll do it! 😁