I refuse to believe the company in question did not know its AI would be biased. It is common knowledge that AIs often are (in truth, they always are, though that fact hasn't yet sunk into public perception). It is also obvious that decisions made by this AI can do harm.
To me it's less:
Than it simply being irresponsible to assume that a "black box" is always going to do the right thing. If you initiate an ongoing process, service, etc., you should be monitoring the results it produces. Even if you don't understand why those results are occurring, you should be able to tell that they are negative and take action to stop them. Failure to monitor and act is, to me, what creates culpability.
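The "monitor the outputs even if you can't explain the model" point above can be sketched concretely: track outcome rates per group downstream of the black box and alert when disparity crosses a threshold. This is a minimal illustration, not anyone's production system; the group labels and the 0.8 cutoff (borrowed from the "four-fifths rule" used in US hiring guidance) are assumptions for the example.

```python
from collections import defaultdict

def disparate_impact_alerts(decisions, threshold=0.8):
    """decisions: iterable of (group, approved: bool) pairs.

    Returns the groups whose approval rate falls below
    `threshold` times the best-performing group's rate.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# Hypothetical decision log: group "a" approved 80%, group "b" 50%.
decisions = ([("a", True)] * 80 + [("a", False)] * 20
             + [("b", True)] * 50 + [("b", False)] * 50)
print(disparate_impact_alerts(decisions))  # → ['b']
```

The point of the sketch is that none of this requires understanding *why* the model decides as it does; observing the decisions alone is enough to detect that something negative is happening and intervene.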