DEV Community

Red Ochsenbein (he/him)

Originally published at ochsenbein.red

More thoughts on the state of AI

I recently wrote an article with my thoughts on the state of AI. Little did I know what a flurry of developments we would face in only a few days. We are now at a point where high-profile people are asking us to pause and think. And I am sure we should pause and think. Does this mean we should stop development? That's not even a question: the cat is out of the bag, and there is no stopping it now. What we should do is increase our efforts in the fields of AI safety, ethics, social development, and the definition of human work... by a lot.

There are more than enough examples from the past where voices asking for more consideration were silenced in the name of profit. We can't do that anymore; the price is simply too high. Do I have the answers? No, and nobody does. We just don't know what we are, and will be, dealing with.

Anyway, here are some thoughts that have been keeping my brain busy lately.

Noise-to-signal ratio

AIs help us create content at a much faster pace. Even without any training, you can put out texts, images and other things which a few years ago would have required quite some skill. Now those things are available at the click of a button. Sure, someone might argue skills are still important and that being able to build on the AI's output will lead to better results from experts. Even if that is true (I will discuss this a bit more in-depth below), the big problem will be finding the 'diamonds' within the noise. Not only is there more content burying the great content, the noise is also louder. In many cases, it takes an expert even to be able to discern the good from the not-so-good stuff.

Just recently GitHub Copilot X was presented, and on its site it was stated that 46% of code is already created by Copilot (I am not sure about that number, but let's just run with it). The same will probably happen with the flood of generated images and texts. Bots creating websites with the help of GPT-4 purely for SEO reasons are probably already at work. If we extrapolate this development, it's easy to see that in a not-too-distant future we will basically be training the models on 99% AI-generated content. This might lead to a whole bunch of problems down the line, like even further bias amplification and similar things.

Killing creativity and knowledge

Too often we hear the question "Should I even learn to...", especially when it comes to learning to code. The answer is "yes" most of the time. But think about artists creating digital art, music or texts: why should they spend thousands of hours honing their skills if anyone can create something with a short prompt which most people out there wouldn't be able to distinguish from their work? What does a skilled artist think about a model simply taking their work and creating thousands of images in a style they developed with years of hard work?

I think this might lead to a decline in artistic and creative work. There is simply no incentive to keep doing the work anymore. Sure, one might argue you should do it for fun and not for money. But that has been the argument which allowed the exploitation of artists, musicians and many other creative workers for decades.

As we have seen, the flood of generated content will drown out the work of skilled people, AIs will be trained on generated content, and artists will have fewer incentives to hone their skills. So, we will see a slow decline in creativity and knowledge.

Privacy and international laws

On March 31st, 2023 (a few days ago) Italy banned ChatGPT, and OpenAI was forced to block users from Italy. Italy claimed OpenAI did not comply with the GDPR, the EU's privacy and data protection law. The GDPR requires anyone providing a service in the EU to ensure certain things; amongst others, anyone has the right to have any data about them deleted or corrected.

Now, when you understand how a large language model is trained, it quickly becomes obvious this is no small thing to do. How do you delete data from a model which is a black box? How do you correct specific data? And if the model is just hallucinating facts about you, how can those be corrected? The same questions pop up when thinking about pictures of existing people; there are privacy rights like the right to one's own image.

These questions will be very important. I'm not entirely convinced Italy's step was the right one, but I am certain these are things that need to be addressed.

To AGI or not to AGI

With the release of GPT-4, the discussion of AGI came more into the picture. AGI is the idea of an AI displaying actual intelligence, enabling it to do intellectual tasks across a wide range of domains and contexts without having to be specifically programmed or trained for each task. It's a so-called emergent behaviour.

Human-level AI

Some people are interested in getting to human-level AI, meaning the AI should be able to do the same cognitive tasks as a human. I am thinking that maybe nature has already found the most efficient way to do this: the human. It might even be that the human 'flaws', like having to sleep, biases and imperfect perception, are a requirement for human-level intelligence. If that turned out to be true, then what would be the point of building the same thing again? At this point, we simply don't know.

Non-human AGI

The other possibility is that AGI might not need to be human-level. But then the AI might be closer to an alien species than to humans. If this is the case, how would we ever be able to make sure the AI's goals are aligned with ours? This opens up a whole new set of challenges and dangers known as the alignment problem.

Consciousness

Another thing to consider is whether those models could actually become conscious. How would we be able to determine consciousness in such a model? How would we have to treat it if we merely suspected it had some sort of consciousness? Wouldn't morality and even the law have to extend to machines then? (Well, to be fair, we, as humans, are already pretty bad at treating animals...)

Final words

Those are only a few - rather chaotic - thoughts on AI. We do not know what the future will bring. But I'm pretty sure we need to do better than we do today when it comes to AI safety, ethics, social impact and similar fields.

Top comments (7)

UgbabeOG

Indeed, what really is the answer to the big question 'should I even learn to...'? When you really think about it, why learn to code when with a few prompts a website is already up, or why be an artist?

Red Ochsenbein (he/him)

I think we will have to lean into the things that make us human: the flaws, imperfections and unpredictabilities, and our ability to interact with the physical world.
On the other hand, who will control and maintain future AIs if nobody understands how they work (well, to be fair, we don't know how they actually work now...)? So, there will be a strong incentive to know how to code, and to be an expert in the fields AI has 'taken over'. But will we be able to do that?

UgbabeOG • Edited

What I am bothered about is the newbies: what incentive do they have to learn how to code? Not machine learning and the like, but things like frontend development. Experienced developers might not be threatened now, but people willing to learn now are. Why do I need to learn to be a frontend developer?
Is this the end of frontend developer jobs and other jobs like it? Do we all go into the development and management of AI?

UgbabeOG

An expert in fields that AI has taken over? If AI has taken over my field, then nobody needs me anymore...
The flaws that make us human are what led us to the age of AI, after all.

Red Ochsenbein (he/him)

I think we are still far off from that. Current LLMs can deal with a few 10'000 tokens, which is not nearly enough for a complete, fairly complex application. And as long as the problem of hallucination is not fixed, I don't see a huge future for AI as a software developer.
Also, maybe aspiring developers shouldn't think in terms of 'frontend developer', 'backend developer' or even 'react developer', but rather concentrate on being a software developer, period. Focus on fundamentals, get good at different languages and concepts, and start understanding how to make them work together in the best possible way. AI systems will have a hard time doing those things in combination for the foreseeable future.
Mid- to long-term, yeah, we will be babysitters for AIs.
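The context-window point above can be sketched with a quick back-of-the-envelope estimate. This is only an illustration: the ~4 characters-per-token figure is a common rule of thumb, and the codebase numbers are made-up assumptions, not measurements.

```python
def estimate_tokens(num_files: int, avg_lines_per_file: int,
                    avg_chars_per_line: int = 40,
                    chars_per_token: float = 4.0) -> int:
    """Roughly estimate the token count of a codebase.

    Uses the common ~4 characters-per-token rule of thumb; all
    parameters are illustrative assumptions, not measured values.
    """
    total_chars = num_files * avg_lines_per_file * avg_chars_per_line
    return int(total_chars / chars_per_token)

# A modest hypothetical application: 300 files, ~150 lines each.
tokens = estimate_tokens(num_files=300, avg_lines_per_file=150)
print(tokens)  # ~450'000 tokens, an order of magnitude beyond a
               # context window of a few 10'000 tokens
```

Even with generous rounding, a fairly small codebase blows past today's context windows, which is why an LLM can't simply "read" a whole application at once.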

UgbabeOG

Hahah, babysitter for AI, that's a fun way to put it. I am sorry, I don't understand the "hallucination problem", could you please elaborate on it?

Red Ochsenbein (he/him) • Edited

en.wikipedia.org/wiki/Hallucinatio...

It's basically just AIs making stuff up and presenting it with high confidence.