DEV Community

Discussion on: On Artificial (Un)Intelligence

Thomas H Jones II

To me it's less:

I refuse to believe the company in question did not know its AI would be biased. It is common knowledge that AIs often are (they always are, but that fact hasn't sunk into the public's perception yet). It is also obvious that decisions made by this AI can do harm.

than that it's simply irresponsible to assume a "black box" will always do the right thing. If you initiate an ongoing process, service, etc., you should be monitoring the results it produces. Even if you don't understand why those results are occurring, you should be able to tell that they are negative and take action to stop them. Failure to monitor and act is, to me, what creates culpability.