Bertil Muth

Frequent delivery - how?

Do you deliver quality software frequently (< 2 weeks)?

What’s the process you’re using, Scrum, Kanban, or something different?

What makes you successful? What are obstacles?

Top comments (16)

Nested Software • Edited

I've worked with several iterative/agile processes. It all comes down to iteration and feedback. To get a bit higher level than any specific process or method, I think this kind of regular delivery can be achieved with the help of two basic ideas: evolutionary prototyping and pipelining.

By evolutionary prototyping, I mean that we start with a very simple implementation of some feature and gradually build on it until it becomes the solid production version. To release something into production, we do want it to be ready, but we try to make it as simple as possible, so that we're not expending a lot of effort underwater, where no one but the developers can see what's happening. Once it's in the hands of users or testers, we can take that feedback to plan how the feature will evolve in the next cycle.

By pipelining, I mean that we stack features in parallel across members of the team: it typically takes 4 years to graduate from college, but there are still new graduates every year. For example, if we have a Kanban board with 4 stages and two team members at each stage, we can expect roughly two team members at a time delivering some piece of completed work. I don't think it has to be so regimented in practice, but I'd expect some kind of pipeline effect to be an emergent property.
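A toy simulation can show that pipeline effect emerging. The sketch below is purely illustrative; the stage names, the one-tick-per-stage duration, and the two-people-per-stage capacity are all assumptions, not a claim about any real team:

```python
from collections import deque

# Toy model of the pipeline effect: work items flow through fixed stages,
# with a limited number of people (slots) per stage. Stage names, capacity,
# and the one-tick-per-stage duration are made-up assumptions.
STAGES = ["analysis", "development", "review", "testing"]
PEOPLE_PER_STAGE = 2

def simulate(n_items: int, ticks: int) -> list[int]:
    backlog = deque(range(n_items))
    stages = [[] for _ in STAGES]   # items currently held at each stage
    delivered_per_tick = []
    for _ in range(ticks):
        # Items leaving the last stage this tick count as delivered.
        delivered_per_tick.append(len(stages[-1]))
        # Shift items forward, last stage first, so each item advances one stage.
        for i in reversed(range(1, len(STAGES))):
            stages[i] = stages[i - 1]
        # Pull new items from the backlog into the first stage.
        stages[0] = [backlog.popleft()
                     for _ in range(min(PEOPLE_PER_STAGE, len(backlog)))]
    return delivered_per_tick

# Once the four-stage pipeline fills (after 4 ticks), roughly two items
# complete every tick, even though each item spends 4 ticks in flight.
print(simulate(n_items=20, ticks=8))  # [0, 0, 0, 0, 2, 2, 2, 2]
```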

In some cases, delivering on a fast schedule, like every few weeks, to actual end users doesn't make sense. For example, I don't think that would work for an online banking application. But even in that case, it should be possible to deliver to teams that do QA and acceptance. Then we can bundle some number of such iterations into a real production release.

Bertil Muth

Interesting point: in theory, Scrum demands a "potentially shippable product increment" at the end of each sprint. That would require QA/testing to be completed.

On the other hand: if the market doesn't demand it, what's the damage done?

Nested Software • Edited

Scrum demands a "potentially shippable product increment" at the end of each sprint.

This sounds reasonable to me. I think there is a bit of nuance to consider though:

Assuming there is a separation of responsibility, where one team, say developers, hands off work to another team, like QA, there will always be some delay.

One way to deal with this delay in Scrum is to define separate sprints for the developer and QA teams. The sprints of the last team in the chain are the ones we use when we apply the "shippable" rule.

The protocol may look something like this:

  1. Developer completes a piece of work and makes sure it passes their own thorough testing (both automated and manual as appropriate) in "Dev Sprint 3."
  2. Developer demos their work to someone like the product manager or a member of the acceptance team to make sure it meets the business requirements as intended.
  3. Assuming all goes well, developer hands off this piece of work to the QA team for more comprehensive testing in the QA team's upcoming "QA Sprint 2."
  4. Whenever the testing team signs off on this work as part of their own sprint, that's when it is considered a potentially shippable product.

The idea would be similar in Kanban, except there aren't discrete sprints, just tasks that are transferred from one stage to another.
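To make that protocol concrete, here is a minimal sketch of the states a piece of work might move through. The state names and the sprint labels ("Dev Sprint 3", "QA Sprint 2") are just the hypothetical ones from the steps above, not part of Scrum itself:

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    IN_DEV = auto()      # step 1: built and tested by the developer
    DEMOED = auto()      # step 2: shown to product/acceptance
    IN_QA = auto()       # step 3: handed off to the QA team's sprint
    SHIPPABLE = auto()   # step 4: QA signed off

@dataclass
class WorkItem:
    name: str
    state: State = State.IN_DEV
    sprint: str = "Dev Sprint 3"   # hypothetical label from the example

    def advance(self) -> None:
        # Each transition follows one numbered step of the protocol above.
        transitions = {
            State.IN_DEV: (State.DEMOED, "Dev Sprint 3"),
            State.DEMOED: (State.IN_QA, "QA Sprint 2"),  # the cross-team hand-off
            State.IN_QA: (State.SHIPPABLE, "QA Sprint 2"),
        }
        self.state, self.sprint = transitions[self.state]

item = WorkItem("export-to-csv")
while item.state is not State.SHIPPABLE:
    item.advance()
print(item.name, item.state.name, item.sprint)  # export-to-csv SHIPPABLE QA Sprint 2
```

What the state machine makes explicit is that "shippable" is only reached inside the QA team's sprint, one or more steps after the developer's own definition of done.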

Bertil Muth

From a dogmatic perspective, Scrum requires the team to be able to deliver without depending on outside teams (that's what the Scrum Guide says).

From a pragmatic perspective, your scenario may work if demand for frequent delivery isn’t that high. :)

Bruno Paz • Edited

There should be no hand-off to a QA team. QA should be part of the development team and be involved in the development process from the beginning, for example by discussing possible testing scenarios, edge cases, or test automation strategies with the developer before they start working on the feature.

Also, from my experience, if the development team doesn't own the whole product development lifecycle from design to deployment and is dependent on other teams like Ops, those other teams will end up becoming a bottleneck and slowing the team down.

I really like the way Spotify and Netflix work.

Of course, this is the ideal scenario IMO. Sometimes it's not possible, and it depends a lot on the context of your organization.

Nested Software • Edited

There should be no hand-off to a QA team

I support the idea of cross-functional teams. However, I don't agree that one never needs to hand off a given piece of work for it to be thoroughly tested. That really depends on the nature of the software. For cases like financial, medical, or military software, I believe it can make sense to test the software in a way that the developer alone cannot, even if the developer has worked with QA and acceptance to carefully produce something they have confidence in.

Assuming the quality of the software handed over is high, much of this additional testing work may end up not revealing any new problems, but there are cases where that extra due diligence is required. Another example is when software has to be supported in a lot of different environments, such as different browsers, operating systems, and hardware.

Bruno Paz • Edited

I am not saying that there shouldn't be a QA phase after development is complete. Of course there should be. But I think having QA embedded in the development team makes the process and communication much easier.

I just don't like terms like "hand-off" because sometimes it can be interpreted as "my work is done, it's not my responsibility now", and the burden is put entirely on QA. I have experienced that, and worse, with a dedicated QA team shared across development teams.

Of course, it depends a lot on the context of the organization. As you said, financial software or medical software might require special attention.

Still, in 90% of cases, I would say a cross-functional team is the way to go.

 
Nested Software • Edited

If such a testing team had 2-week sprints, then the delivery frequency of shippable features would remain the same.

An alternative approach would be to have just a single development sprint. During the sprint, developers would hand finished features to QA for testing and would fix any problems that were identified. Only features that had passed QA during the sprint would be considered shippable. Work that had been completed by development but had not yet passed QA would just be moved to the next sprint. That's a bit simpler and ought to work as well.
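As a rough sketch of that carry-over rule at the sprint boundary (the item format is invented for illustration): only QA-passed items count as shippable, and everything else rolls into the next sprint.

```python
# Each item: (name, dev_done, qa_passed). Invented stand-in data.
sprint_items = [
    ("login-form", True, True),
    ("csv-export", True, False),   # finished by devs, not yet through QA
    ("audit-log", False, False),   # still in development
]

shippable = [name for name, dev_done, qa_passed in sprint_items if qa_passed]
carry_over = [name for name, dev_done, qa_passed in sprint_items
              if not qa_passed]   # both untested and unfinished work roll over

print(shippable)   # ['login-form']
print(carry_over)  # ['csv-export', 'audit-log']
```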

If a separate testing pass is not needed, that's fine, but I think there are valid cases where features have to be tested separately after they've been developed. It depends on the kind of software in question.

Bruno Paz

In your example, I guess the testing team would be shared across development teams. If it's a one-to-one relationship, then I would say it doesn't make sense to have that barrier between development and QA; just have a single cross-functional team.

Let's imagine the following: two development teams are working in 2-week sprints, the same as the QA team. We are on the last day of the sprint, and a developer from one team finishes some feature and it goes to QA. But the QA team is busy testing a very big feature from the other team. That team will probably not be able to deliver the feature on time.

How would you handle this situation? Have both development teams on different sprint schedules? Or would it just be a matter of managing priorities or better planning?

Still, even with better planning or priority management, not everything goes as planned, so there will definitely be cases where the testing team becomes the bottleneck that keeps other teams from getting work done. That would be avoided if they didn't have a strong dependency on an external team. I'm talking about QA in this case, but it could be Ops, for example.

But yeah, every organization and project is different, so what works for some might not work for others.

Nested Software • Edited

Thanks for your comment! I was actually thinking that a given QA team would only be working with a single team of developers; I did not consider sharing the QA team. So it was still, in a sense, a single cross-functional team in my scenario. I agree that sharing a group like that departs rather far from the core idea of what a process like Scrum is trying to do.

I was trying to reconcile two things: a) the need to have a separate testing activity after the developer considers a task completed and b) the fact that in Scrum, it's considered quite important to finish the tasks one commits to in a given sprint. The latter is not going to happen 100% of the time in the real world, but still, the whole philosophy of Scrum is to minimize the distractions so a team can make a commitment and stand by it in each sprint.

If we assume that it's not good enough for only the developers to consider a job done [1], then if we keep a single sprint, we'll have work that a developer completed just at the end of the current sprint going into the next sprint to be checked by QA. At the time, I thought we could mitigate this problem with separate sprints. However, now I think that's a bit of an artificial distinction, and maybe it creates extra structure without a clear benefit.

Probably it's simpler to just have a single sprint and accept that each time we'll need to carry over several tasks into the next sprint for QA. Maybe that makes a case for a more fluid project structure like Kanban in the first place, where we don't bother so much with the idea of sprints at all.

[1] In an ideal world, maybe we can expect the quality of work done by a given developer, or a pair of developers in the case of pair programming, to be high enough that we don't need a separate QA activity at all. I think that in practice there are a lot of cases where this won't work, though, even if the development team delivers high quality and works hard to thoroughly test its own work first.

Adrian B.G.

For a while (a few years) I worked in several teams that did weekly sprints with a minimal amount of agile/Scrum techniques, delivering daily releases.

It was fun for me, but there were consequences. Code quality and documentation were often lacking, but in a fast-moving industry you don't notice it. Sometimes the projects were buried, so we actually saved time and money by not doing tests and docs.

I was a dev, so I didn't think much about the management part, but some of the blockers were external dependencies and newcomers who were not used to this speed. Any small rock (a delay of half a day) can change the outcome of a sprint.

Bertil Muth

Not implying you said that, but do you think daily releases necessarily lead to bad quality?

Adrian B.G.

No, big companies do that well.

But daily releases with 1 or 2 devs surely do. There is no time in one day to build a feature, test it, write tests and docs, do QA, and release it a few hours before closing time, with time left to react if something goes bad.

I also want to point out that bad code quality is not related to product and company success. I found the opposite to be true in several cases, as in: bad code, but it made millions.

Chris James • Edited

The key to delivering software as quickly as possible is to have as little process as possible, but to place value on things that actually give you confidence.

Ask your engineers: if we deployed every commit to live, what would it take for you not to be scared?

  • Tests. No manual testing allowed if you're practising continuous delivery
  • Monitoring, so you know if something has gone wrong
  • Blue/green deployments
  • Pair programming
  • Breaking down stories into small tasks. If you're shipping small things frequently then the risk is low.
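As a rough illustration of how those pieces fit together, here is a hypothetical per-commit deploy gate combining automated tests, a monitoring health check, and a blue/green switch. Every function and name below is an invented stand-in, not a real tool's API:

```python
# Two identical production environments; traffic points at ACTIVE.
ACTIVE, IDLE = "blue", "green"

def tests_pass(commit: str) -> bool:
    return True   # stand-in: run the full automated suite for this commit

def healthy(environment: str) -> bool:
    return True   # stand-in: query monitoring/health-check endpoints

def ship(commit: str) -> str:
    global ACTIVE, IDLE
    if not tests_pass(commit):
        return f"{commit}: blocked by failing tests, {ACTIVE} stays live"
    # Deploy to the idle environment first (deploy step elided), verify it
    # via monitoring, and only then flip traffic. Rollback is flipping back.
    if not healthy(IDLE):
        return f"{commit}: {IDLE} unhealthy, traffic stays on {ACTIVE}"
    ACTIVE, IDLE = IDLE, ACTIVE
    return f"{commit}: live on {ACTIVE}"

print(ship("a1b2c3d"))   # a1b2c3d: live on green
```

Because each commit is small, a failure at either gate points at a small diff, which is exactly why shipping small things frequently keeps the risk low.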

Scrum, Kanban, etc. are all distractions, honestly.

Bertil Muth

Do you currently work that way? How are business priorities reflected in your way of working?

Chris James • Edited

Yes I do. It's very nice, and the teams I have worked on have always been less stressed, more productive, and happier compared to teams that release weekly or monthly.

Re the business

It's important to distinguish between business requirements and technical activities.

Files moving from one server to another (i.e. deploying) is not a business concern, just like creating functions is not a business concern.

Depending on the business, it may be better to ship features when they're ready, even in an MVP state. But if the business wants or needs control, use feature flags.
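A feature flag can be as small as a lookup in front of the new code path. In this sketch, the in-memory dict is a stand-in for a real flag store (database, config service, etc.), and the flag and function names are made up:

```python
FLAGS = {"new-checkout": False}   # code is deployed, feature not yet released

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def checkout(total: float) -> str:
    # The new path ships dark until the business turns the flag on.
    if is_enabled("new-checkout"):
        return f"new checkout flow, total {total:.2f}"
    return f"old checkout flow, total {total:.2f}"

print(checkout(14.99))          # old flow: the flag is off
FLAGS["new-checkout"] = True    # the business "releases" without a redeploy
print(checkout(14.99))          # new flow
```

That separation is what lets engineering deploy continuously while the business keeps control of when users actually see a feature.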

Edit:

Due to the fast-feedback, iterative nature of continuous delivery, business requirements are generally based more on actual, real feedback than on guesses. In addition, because we can ship things more easily, the business is more open to trying different ideas. So CD is not only a technical win but a business one too. It changes the way the business works for the better.