Discussion on: Day in the life of a Scrum team

Ste Griffiths

Nice rant for your first post, Dan 😊

Thinking about code review and your story above, I was surprised that engineers were expected to check out and run each of their colleagues' PRs locally.

IMHO, a CR should check for:

  • Functioning code (unit/integration tests on the CI server do this)
  • Correct style (linter on the CI server does this)
  • Business errors (a human checks this)

In most CR processes I have encountered, the peer reviewer is only expected to eyeball the diff in GitHub/Azure DevOps and sign off that it doesn't look wrong with respect to the business process. The rest can be covered by a mature CI process.
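
To make that concrete, here's a minimal sketch (in TypeScript, using Node's built-in test runner) of the kind of "functioning code" gate CI can own so the reviewer doesn't have to; applyDiscount is a made-up stand-in for whatever the PR actually changes:

```ts
// Minimal sketch: the sort of unit test CI runs on every PR.
// applyDiscount() is hypothetical, standing in for the code under review.
import test from "node:test";
import assert from "node:assert/strict";

function applyDiscount(total: number, percent: number): number {
  return total - total * (percent / 100);
}

test("applyDiscount takes the given percentage off the total", () => {
  assert.equal(applyDiscount(200, 10), 180);
});

test("a 0% discount leaves the total unchanged", () => {
  assert.equal(applyDiscount(200, 0), 200);
});
```

With checks like these (plus the linter) running on every push, the human reviewer really only has to read the diff.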

Secondly, since this is kind of a character attack on Scrum itself (😉), I'd suggest that the retrospective should shake out a lot of the worries above:

If the process isn't working (and it sounds like it isn't), then the retro is the team's opportunity to rant about and solve those problems. And the solution seems pretty straightforward:

  • Stop the requirement to check out and locally compile PRs
  • Improve your migrations

Anyway, thanks for your post, I hope you enjoy writing for the community and we see more from you. Perhaps a follow up? 😁

Dan Fletcher

Hey, thanks!

Yeah, there are a lot of things I could suggest be done differently in this fabricated story! I imagine this team probably would try the things you suggested.

Maybe I need to change the title, as I'm not meaning to attack Scrum itself.

I'm trying to figure out how to express a lot of the problems that arise when you distribute work to individual SWEs instead of practicing pair/mob programming.

Not sure if this is the best way to do it yet ;)

Will definitely have a follow up post for this soon!

Thanks for the feedback 😄

Dan Fletcher

I have a bit more time to properly reply to this now :)

Thanks again for your feedback!

Re:

IMHO, a CR should check for:

  • Functioning code (unit/integration tests on the CI server do this)
  • Correct style (linter on the CI server does this)
  • Business errors (a human checks this)

I think these are great points, but they don't completely eliminate the inefficiencies and interruptions of code reviews. Even if you don't have to pull and build branches because the right quality control measures are in place, reviews are still massive context switches.

You also still run into all sorts of other problems that I didn't cover in that little story.

What if you completely disagree with the approach someone took? There are a dozen ways to deal with that, some better than others, but I think it's common to let changes through even when the quality isn't up to everyone's standards.

It's also a lot harder to catch issues or bugs when it's not your code. Even more so if you don't actually pull it down on your machine and run it (or at least try it out on a deployable preview branch, which some companies do). The more bugs that slip past the development phase of a feature, the more work QA has to do.

Even assuming a 100% perfect test suite, tests can't catch a misinterpreted requirement or cover every scenario, because they encode the same assumptions the implementation does. It's why we like to have QA, right?
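
For example (the numbers, function, and framework here are all made up, just to illustrate): say the business rule is that a $10 coupon comes off after tax, but the dev read it as before tax, and wrote the test from that same misreading. The suite stays green:

```ts
// Hypothetical example: a passing test that encodes a misread requirement.
import test from "node:test";
import assert from "node:assert/strict";

const TAX_RATE = 0.25;

function checkoutTotal(subtotal: number, coupon: number): number {
  // Misinterpretation baked in: the coupon is subtracted BEFORE tax.
  return (subtotal - coupon) * (1 + TAX_RATE);
}

test("$100 order, $10 coupon, 25% tax", () => {
  // Passes, because the expectation was written from the same misreading.
  // The business actually wanted 100 * 1.25 - 10 = 115, not 112.50.
  assert.equal(checkoutTotal(100, 10), 112.5);
});
```

No amount of coverage flags that; only a human who knows the rule (a reviewer or QA) will.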

But the more work QA has to do, the more things get kicked back for rework, causing more context switching among devs. Kicking work back can also quickly jam up the release process, depending on your deployment strategy.

So it's a pretty good idea to include devs in the quality process. There are simply things a human is better at verifying than a computer, and if that human is a dev and spots a mistake, they can fix it more efficiently while the work is "in development" than by waiting for QA to catch it and kick it back.

I'm not making a claim that I know best, but after a year of giving mob programming a serious shot, I've drastically changed the way I think about delivering software.

There's no silver bullet, but mob programming eliminates a lot of these problems.