Originally I wanted to title this "When should we create laws for artificial intelligence?". But I figured that was too specific for what I have in mind. Well, at least the cover image remained the same. 😁
Over the past decade we have experienced nasty surprises on multiple occasions, or watched in disbelief as politicians and public figures tried to catch up with advancements in technology, often in reaction to some scandal or other serious problem. And on many occasions the implemented solutions didn't help at all, or made things worse for most people. If we look at other fields, we see the same annoying tendency to address problems in a systemic way only after a major scandal happens. Or at least that's how it goes with political decisions.
On the other hand, industries are most often capable of creating their own regulations, and the government only has to intervene when those prove insufficient, or when an actor disregards them and bad things eventually happen. But I still feel that we have been bumbling through all these advancements, and have been lucky that nothing extremely bad has happened and that the benefits have so far outweighed the bad parts.
But I do feel that we are getting close to having prototypes of technologies that have the potential to be revolutionary once again. Most notably advanced artificial intelligence, especially sentient artificial intelligence. The second one is probably a bit further out, but the implications are huge!
Science fiction is full of problems that we might face in the future, though they are often metaphors for current or past issues, and artificial intelligence is a very common theme, often ending badly for humanity (obviously not for the main heroes, who save the day). One of those issues is the treatment of sentient artificial intelligence.
Science fiction provides us with possible challenges, problems and solutions. Yet oftentimes we are like a deer in headlights when the prophesied problems come our way.
You might argue that we often don't know what impact a given technology is going to have, and hence creating regulations for it is meaningless, counter-productive, or might stall its progress. While I largely agree with those points, I don't think that should stop us from even trying.
With certain technologies we can make educated guesses about what the impacts and problems might be. In the case of sentient AI we even have some prepared solutions to draw upon, like Isaac Asimov's Three Laws of Robotics. Another close examination of sentience comes from Star Trek:
In this example the main issue boils down to how we treat and integrate sentient beings that are not human into our society.
Now let's scale back a bit. The discussion topic this time is rather broad. Either share with us your experience with a product/feature that had an impact on your existing business, where you had to create new rules or implement changes that you wish you had made earlier.
Or: when do you think we should draft laws for things like sentient AI, and why?
If you like my work, please consider supporting me on GitHub Sponsors ❤️.