Talina

Posted on • Originally published at talinablogs.com

5 Big Mistakes Companies Make When Testing Software

Most software companies struggle with releasing quality code that doesn't break critical user workflows.

Companies invest significant money in testing to counter this, but often ship a P0 bug anyway.

Here are five common root causes of software bugs.

#1 Inconsistent Testing Environments

This is a common mistake, and often one of the harder ones to fix in the early phases of a product.

Naturally, if the pre-production environment never hits an edge case that can occur in production, it is impossible to validate enhancements accurately.

Thought: Deploy a snapshot of the production environment as a pre-release environment.
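
For illustration, here's a minimal sketch of that idea on AWS, using boto3 to restore the latest automated RDS snapshot of a production database as a throwaway staging instance. The instance identifiers and instance class are hypothetical placeholders, not a prescription:

```python
# Minimal sketch: clone the latest production RDS snapshot into a staging
# instance so pre-release tests run against production-shaped data.
# Assumes AWS RDS + boto3; "prod-db" / "staging-db" are hypothetical names.
import boto3

rds = boto3.client("rds")

# Find the most recent automated snapshot of the production database.
snapshots = rds.describe_db_snapshots(
    DBInstanceIdentifier="prod-db",
    SnapshotType="automated",
)["DBSnapshots"]
latest = max(snapshots, key=lambda s: s["SnapshotCreateTime"])

# Restore it as a disposable pre-release environment.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="staging-db",
    DBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
    DBInstanceClass="db.t3.medium",
)
```

Remember to scrub or mask sensitive data before pointing tests at a production clone.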

#2 Backwards Incompatibility

Engineering teams write code for the "present" and the "future", which makes version testing hard. But software cannot only face forward; it must also keep working with the state it has already created.

Example: Release #1 deploys software with a v1 dependency, and release #2 upgrades to v2 of that dependency. If the upgrade stores only the latest state, the change is destructive: it wipes out the value written under v1.

What happens to the deployed state that works with a v1 dependency? Does it break or work with the new release?

Thought: Always support the last X versions of existing data, and streamline rollbacks.
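
One way to do that is to tag stored records with a schema version and keep readers for the last few versions. A minimal Python sketch, with hypothetical field names:

```python
# Minimal sketch: keep readers for the last N schema versions, so a v2
# release can still load v1 data (and a rollback to v1 doesn't meet
# unreadable state). Field names are hypothetical.

def read_v1(raw: dict) -> dict:
    # v1 stored a single "name" field.
    return {"full_name": raw["name"]}

def read_v2(raw: dict) -> dict:
    # v2 split the field; normalize to the same in-memory shape.
    return {"full_name": f"{raw['first_name']} {raw['last_name']}"}

READERS = {1: read_v1, 2: read_v2}  # supported schema versions

def load_record(raw: dict) -> dict:
    version = raw.get("schema_version", 1)
    reader = READERS.get(version)
    if reader is None:
        raise ValueError(f"unsupported schema version: {version}")
    return reader(raw)

# Data written by either release loads to the same shape:
assert load_record({"schema_version": 1, "name": "Ada Lovelace"}) == \
       load_record({"schema_version": 2, "first_name": "Ada", "last_name": "Lovelace"})
```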

#3 Treating Infrastructure and Microservices Separately

As more folks campaign for infrastructure as code, we are moving towards fully automated infrastructure deployments.

But does a change in automation break existing infrastructure?

Thought: Can backward compatibility testing solve this problem?
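
It might: treat an infrastructure change like any other release and gate it on a compatibility check. The sketch below (assuming the Terraform CLI and an already-initialized working directory) renders a plan as JSON and fails if the automation change would destroy existing resources:

```python
# Minimal sketch: a backward-compatibility gate for infrastructure-as-code.
# Fails the build if a Terraform plan deletes or replaces existing resources.
# Assumes `terraform` is installed and `terraform init` has been run.
import json
import subprocess

subprocess.run(["terraform", "plan", "-out=tfplan"], check=True)
plan = json.loads(
    subprocess.run(
        ["terraform", "show", "-json", "tfplan"],
        check=True, capture_output=True, text=True,
    ).stdout
)

# A "replace" shows up as ["delete", "create"], so checking for "delete"
# catches both outright destruction and replacement.
destructive = [
    change["address"]
    for change in plan.get("resource_changes", [])
    if "delete" in change["change"]["actions"]
]
if destructive:
    raise SystemExit(f"Plan destroys existing infrastructure: {destructive}")
```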

#4 Ignoring the Network

Assuming the network behaves the same irrespective of time and region is a recipe for software failure.

Thought: Can Edge Testing gauge pre-release software inconsistencies?
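
As a starting point, here's a minimal latency probe in Python (using `requests`; the URL, sample count, and timeout are hypothetical). Run it from several regions and compare the output: a region whose median latency or failure count stands out is a pre-release red flag:

```python
# Minimal sketch: probe an endpoint repeatedly with a strict timeout and
# record latency, so the same check can be run from multiple regions and
# compared. URL and thresholds are hypothetical.
import statistics
import time

import requests

URL = "https://api.example.com/health"

def probe(samples: int = 10, timeout: float = 2.0) -> dict:
    latencies, failures = [], 0
    for _ in range(samples):
        start = time.monotonic()
        try:
            requests.get(URL, timeout=timeout).raise_for_status()
            latencies.append(time.monotonic() - start)
        except requests.RequestException:
            # Timeouts and HTTP errors both count as network failures here.
            failures += 1
    return {
        "median_s": statistics.median(latencies) if latencies else None,
        "failures": failures,
    }

print(probe())
```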

#5 Testing Features in Silos

With a microservices architecture, a change in one service can break seemingly unrelated user functionality. Testing a new release in a silo, against a fixed subset of features, isn't enough.

Thought: Replay tests for all critical user workflows on every release to generate an accurate failure heatmap.
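
Here's a toy version of that idea in Python: replay a recorded set of workflow steps against a release candidate and record pass/fail per step. The workflows, endpoints, and base URL are all hypothetical stand-ins for whatever your product's critical paths are:

```python
# Minimal sketch: replay recorded critical workflows against a release
# candidate and build a failure "heatmap" of workflow step vs. result.
# Workflows and base URL are hypothetical; uses `requests`.
import requests

BASE = "https://release-candidate.example.com"

WORKFLOWS = {
    "signup":   [("POST", "/users"), ("GET", "/users/me")],
    "checkout": [("POST", "/cart"), ("POST", "/orders"), ("GET", "/orders")],
}

def replay() -> dict:
    heatmap = {}
    for name, steps in WORKFLOWS.items():
        for method, path in steps:
            try:
                resp = requests.request(method, BASE + path, timeout=5)
                ok = resp.status_code < 500  # treat 5xx (or no response) as failure
            except requests.RequestException:
                ok = False
            heatmap[(name, path)] = "PASS" if ok else "FAIL"
    return heatmap

for (workflow, step), result in replay().items():
    print(f"{workflow:10} {step:12} {result}")
```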

Let me know if you've solved any of these mistakes and how.

Until later! :)
