Discussion on: How does one explain to non-tech people that it's difficult to anticipate all potential errors before deployment?

Eugene Cheah

The above scenario is something I constantly struggled with during my time as a vendor. And as counter-intuitive as it is, the answer is rarely logical, but emotional.

After all, the problem you are facing is ...

Any sufficiently advanced technology is indistinguishable from magic
~ Arthur C. Clarke

The thing is, especially for those outside tech, with the rate at which things are constantly changing, it might as well be magic at times. And they will somehow expect us, who claim to be "pros" in it, to come up with a magical solution.

This still happens on a smaller scale with anything that is difficult to explain. Medicine is a prominent example: many patients refuse to accept a doctor's advice that there is no "cure" for their problem.


So I digress, but once I came to terms with the above, I realized the best answer is at times not to answer emotions with logic. That is something I admit I am not the best at, and it is very case-by-case specific.

Another thing that I like to say personally is that

Technology can never solve human problems; only humans can

Especially for edge cases that involve humans, my personal litmus test is: "does an existing human solution solve the problem?"

  • If yes, then it's more a question of how technology can make it cheaper or better.
  • If no, then how can we humans solve this in place of technology? Even if it means hiring an army of teachers (in the above context).

Once we see how expensive a human solution can be, we can put the cost of the technological solution in perspective, along with how impractical it may be.


That being said, as for NAPLAN specifically, or for any other nationwide system that is extremely time-sensitive (such as voting):

I am of the stance that such systems should be built to be deployed as multiple self-isolated clusters, perhaps with a server in each test center. After all, the internet can and may go down at any one location.
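Just to sketch what I mean: a minimal store-and-forward example in Python, where a student's answer is persisted locally first and only synced to the central server when the link is up. Every name here (LOCAL_SERVER, CENTRAL_SERVER, the outbox table) is a hypothetical illustration of the pattern, not how NAPLAN actually works.

```python
import json
import sqlite3
import urllib.request

# Hypothetical endpoints -- purely for illustration.
LOCAL_SERVER = "http://testcenter.local/submit"    # server inside the test center
CENTRAL_SERVER = "https://central.example/submit"  # nationwide aggregation server

def post(url: str, payload: dict, timeout: float = 2.0) -> bool:
    """POST JSON to a server; return True on HTTP 2xx, False on any failure."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # covers URLError, timeouts, refused connections
        return False

def submit_answer(db: sqlite3.Connection, payload: dict) -> None:
    """Persist the answer locally first; the local copy is canonical
    until the central server acknowledges it."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS outbox (payload TEXT, synced INTEGER DEFAULT 0)"
    )
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(payload),))
    db.commit()
    # Best effort: hand the answer to the in-building server as well.
    post(LOCAL_SERVER, payload)

def sync_outbox(db: sqlite3.Connection) -> None:
    """Retry unsynced answers whenever connectivity returns (e.g. on a timer)."""
    rows = db.execute("SELECT rowid, payload FROM outbox WHERE synced = 0").fetchall()
    for rowid, raw in rows:
        if post(CENTRAL_SERVER, json.loads(raw)):
            db.execute("UPDATE outbox SET synced = 1 WHERE rowid = ?", (rowid,))
    db.commit()
```

The point of the local-first write is that a dropped internet link degrades into a delayed sync rather than a lost exam session.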

That being said, this could drive up costs drastically. And since I am way too disconnected from the topic to judge, perhaps a cost-benefit analysis was done for this, and the solution that was decided on was to fall back onto pen and paper if things go wrong.

If that's the case, then putting aside what the news media is reporting, it's a case of things going as planned (from a risk-management and contingency point of view).

Matthew Mack

As a mathematician I'm very accustomed to asking the 'whys' (since everything can be justified logically in maths), so I appreciate your additional insight - especially the Clarke quote!

I definitely wanted to give the NAPLAN techs the benefit of the doubt, mainly because I've noticed mainstream media tending to focus on and capitalise on mistakes for headlines, which pains me to some extent.

Maybe it's easier to shrug, say "Murphy's Law" and call it a day. Obviously I can't control how people end up perceiving tech as a result, but I sure don't want it to be made any worse! Maybe the answer is to normalise and get across the message that making mistakes in tech is normal. That's something I can definitely do!

Eugene Cheah • Edited

To make mistakes is human; to stumble is commonplace; to be able to laugh at yourself is maturity. ~ William Arthur Ward

What you suggested, normalizing problems, would really help to get others to understand them, instead of demanding that they be magically solved.

It would definitely help the developers who are working on it, and who may have made mistakes along the way, to fix it. And if it's a resource constraint, perhaps it would lead to more resources being allocated to it.

Saying this as someone whose code has crashed live systems in active use before 😢