
Discussion on: AI is a threat! Really?

FJones • Edited

I think it's quite important to make a distinction here:

On the one hand, we have traditional Machine Learning Algorithms, which essentially just provide us with a sophisticated black box mapping input to output. One that may run on some form of reinforcement learning, but is still just a computation. Here, the threats can generally be categorized as unintended output or side-effects, driven by the rights and responsibilities we grant the AI. Without any malicious "intent", given enough responsibility, such an AI may well make mistakes on the scale of human error, which we already know can cause catastrophes. An MLA-run nuclear plant may pose less risk than a human-run one - or more, in the case of an incorrect assessment. A difficult balance. Similarly, a complex computer-generated algorithm may produce hard-to-debug errors that could cascade. These are the "oops, the Twitter bot looked at all the adult content on the Internet" and "the algorithm decided to reject your loan application" scenarios.
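To make the "black box" point concrete, here is a minimal sketch (synthetic data, invented feature names, not any real lending system): a tiny classifier trained by gradient descent. The only "explanation" it can offer for rejecting an application is a vector of learned weights.

```python
# Minimal sketch with hypothetical data: a tiny "loan" classifier.
# The decision emerges from learned weights, not from inspectable rules -
# exactly the hard-to-debug black box described above.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic applicants: [income, debt_ratio, years_employed] - invented features.
X = rng.normal(size=(500, 3))
true_w = np.array([1.5, -2.0, 0.8])                    # hidden "ground truth"
y = (X @ true_w + rng.normal(scale=0.5, size=500)) > 0

# Train logistic regression with plain gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                 # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)                  # gradient of the log loss

applicant = np.array([0.2, 1.1, -0.3])                 # one made-up application
approved = 1.0 / (1.0 + np.exp(-(applicant @ w))) > 0.5
print("approved:", approved)
print("the only 'explanation' available:", w)          # just numbers, no reasons
```

Even in this toy case the output is just a threshold over learned numbers; scale that up to millions of parameters and the "why was I rejected?" question has no human-readable answer.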

On the other hand, though, we have the Terminator scenario. This is driven by what is still generally assumed to be a target for AI research: Artificial Sentience. Now, we can debate (and have been debating) for years the achievability and ethics of that goal, but the point to note here is that if it is achieved, a sentient AI is no less dangerous than any other sentient being - including humans. We know well enough the tragedies wrought by man, so such tragedies are naturally a potential consequence of sentience as well. What makes it a substantial threat is that we assume - rightly so - that the cognitive capacity of such an AI may exceed that of humanity, and thus mankind would be powerless against it, should we find ourselves on the pointy end of the stick. Is that likely? No, for various reasons. Is it concerning, considering the pace at which even traditional Machine Learning is eclipsing our ability to understand its results? Absolutely.

AI isn't an imminent threat - we can't draw a straight line from image classification to cataclysm - but it is certainly a subject we need to treat with caution.