
Arlene Andrews

Originally published at distort-roving.blogspot.com

Thinking forward for regulations

Listening is a skill that can bring some unexpected benefits. Being willing to listen without holding an opinion (or, in some cases, without any awareness of the subject being discussed) can yield useful bits of information on an issue you are facing.

This was the case a few days ago - a pair of us were discussing both a student who was having an issue with a topic, and the regulation of the future with regard to machine learning and AI. (Yes, my conversations do wander - we started out with woodworking.) The student was one who had made a program - since it worked, they were happy with it, as we all are when first learning a topic. Having someone with skill come in and redo nearly everything is a blow to the ego.

Approached correctly, this can teach so much. Praise for getting it working is an important step; then carefully showing how other options may have been a better, more performant choice is where the learning comes in. This needs to be done at a slow enough pace - depending on the student, it may take days to ensure they are grasping the concepts, rather than just feeling like they did nothing correct.

Learning is important, and making sure that what is there is the best it can be - then improving on it with other inputs - is the way to quality.

However, with the pace of technology increasing ever faster, there may not be time to iterate on regulations for technologies that will soon be common. The regulation that is needed has to be mostly correct from the start, written with the understanding that the underlying technologies may have advanced again by the time the rules are fully made.

And testing is a huge part of that. You may well have seen the potential issues with training machines to do a portion of the job that humans used to do, and if you look at some of the new innovations, the risk is growing that a problematic set of training data will cause damage, or that a situation not covered in the training will be overlooked. How are we going to test for the lack of bias that is hoped for in these technologies?
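
As one concrete illustration of what "testing for bias" can mean in practice, here is a minimal sketch in Python of one common fairness check - demographic parity, which compares how often a model produces a positive outcome for each group. It is only one of many possible metrics, and the group names and predictions below are made up for the example; a real check would load actual model outputs and a sensitive attribute.

```python
# A minimal sketch of one bias check: demographic parity difference.
# The data below is hypothetical; in practice you would load real
# predictions and a sensitive attribute (e.g. from a CSV).

from collections import defaultdict

# Each record: (group, model_prediction), where prediction 1 = "approved".
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

# Count approvals and totals per group.
totals = defaultdict(int)
approvals = defaultdict(int)
for group, pred in predictions:
    totals[group] += 1
    approvals[group] += pred

# Positive-outcome rate per group.
rates = {g: approvals[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: positive rate = {rate:.2f}")

# Demographic parity gap: difference between the highest and lowest
# group rates. Zero means equal rates; larger values flag disparity.
gap = max(rates.values()) - min(rates.values())
print(f"demographic parity gap = {gap:.2f}")
```

A check like this only catches one kind of disparity, which is part of the point: any serious test plan would need several such metrics, run against data that actually covers the situations the system will face.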

I'm sure someone out there has a plan for this - I don't have an answer at the moment for how to make sure that, worldwide, there is sufficient training data for all the projects, and that it is unbiased. Enshrining current prejudices and taboos will not serve us well into the future: we need an effort made to ensure that the data sets are as fair as possible. We may not be able to update the training in the future: let's plan for that.

Top comments (1)

Avery

Two books that I love regarding bias and automation are:

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor - by Virginia Eubanks

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy - by Cathy O'Neil