DEV Community

Joe Mainwaring
Should AI development beyond GPT-4 be paused?

Leading AI academics and industry experts, including Steve Wozniak and Elon Musk, published an open letter today calling for a pause on developing AI more sophisticated than OpenAI's GPT-4. The letter cites risks to society and humanity as a major concern and asks for the pause to give the industry time to develop shared safety protocols.

Do you agree with the consensus of the experts? Is a pause even a realistic option when you factor in global politics and capitalism? Share your thoughts below!

Top comments (83)

Ben Halpern

Excerpt from a recent post I made on the general topic:

Creating chaos can be easier than preventing it because it typically requires fewer resources and less effort in the short term. Chaos can arise from misinformation, lack of communication, or insufficient planning, and these factors can be easier to cultivate than to address. Preventing chaos requires more resources, planning, and coordination. It involves identifying potential problems and taking proactive steps to mitigate them before they spiral out of control. This can be challenging because it often requires a deep understanding of the underlying issues and the ability to take decisive action to address them.

Moreover, chaos can have a snowball effect, where small issues escalate quickly into larger ones, making it increasingly difficult to control the situation. In contrast, preventing chaos requires a sustained effort over time, which can be challenging to maintain.

Overall, preventing chaos requires more proactive effort and resources in the short term, but it can help avoid much greater costs and negative consequences in the long term.

My post is not altogether coherent, as I'm having trouble totally wrapping my head around all of this (as I suspect others are as well).

But I definitely see merit in some serious discussion about this. I'm a little too young to really have a sense of how things went down at the time, but the Internet itself didn't just happen without a lot of debate and policy, and I think we have to welcome this kind of discussion, and hope it leads to some healthy discussion at the government level (though that doesn't seem to be likely).

I'm not personally clear on the merits of a "pause" vs other courses of action, but I think it's a worthy discussion starter.

Joe Mainwaring

I think you're onto something with the chaos narrative; it aligns with the sentiment I've developed reflecting on the impact of social networking and mass connectivity.

leob

Spot on - social networking has had huge and far-reaching consequences (most prominently negative ones) which not many people foresaw at the time, back when Facebook introduced an innocuous-sounding platform allowing people to share their cat photos and the like with family & friends - I mean, what could possibly go wrong? ;-)

James

Social media is THE perfect example to look at with regards to “what can go wrong will go wrong.”

Nick Staresinic

"Creating chaos can be easier than preventing it..."
Sure. It generally is easier to break than to build; to become an 'agent of entropy', in a sense.

Diner Das

This seems like relevant precedent, albeit simpler times.

The fairness doctrine of the United States Federal Communications Commission (FCC), introduced in 1949, was a policy that required the holders of broadcast licenses both to present controversial issues of public importance and to do so in a manner that fairly reflected differing viewpoints. In 1987, the FCC abolished the fairness doctrine, prompting some to urge its reintroduction through either Commission policy or congressional legislation. The FCC formally removed the rule that implemented the policy from the Federal Register in August 2011.

Parker Waiters

I'd say I'm most concerned about the LLM work happening that we don't know about.

Surely OpenAI and GPT-4 are the central figures here, but I want a better idea of what is being worked on holistically.

Michael Tharrington

"Is a pause even a realistic option when you factor in global politics and capitalism?"

This is definitely where my mind went. I feel like it's a really hard thing to convince folks to slow down on developing something once it's already in motion, even if we know it's potentially dangerous. People can see the power in this tech and unfortunately, greed and the desire to be the first and capitalize on this kinda stuff, often trumps caution and thoughtfulness. I worry that people aren't going to slow down.

As for the "global politics" point, one thing about computers and information technology is that it's becoming easier and easier for everybody to access. This is generally a great and awesome thing, but it also means lots of folks are empowered to work on this independently. It doesn't necessarily take a lot of resources — if you have a computer and do your research then you can work on AI. It's pretty easy to connect with other like-minded folks online, and you could build a team or find an open source project to contribute to. Now, I'm not totally well-versed in this space, so I imagine you probably need access to pretty powerful computers in order to efficiently experiment with and train AI, but still, computers are always getting more powerful and this tech is becoming more and more accessible to all. There are relatively few barriers for those that want to work with AI.

I sincerely hope that we take a collective pause and think through the ramifications of this stuff before moving forward. I think diving into a space like this without any shared protocols or regulation is dangerous. And even saying that, I'm worried that regulations will be hard as hell to enforce given, as you mentioned, capitalism and global politics, but I think it's very important that we try.

Joe Mainwaring

This is where I wish I had some context on the volume of resources to run GPT-4 in a certain capacity. While I want to naturally assume that it may be at a scale which prohibits accessibility, I also realize we have an industry of crypto-miners with the type of resources that could potentially be repurposed under the right circumstances - either by the mine owners or someone buying up mining resources.

Michael Tharrington

To be clear, I have no context on the amount of resources necessary to effectively run GPT-4. But you make a very good point — in the age of crypto-miners, there's a lotta folks out there armed with incredible computational power!

Max Pixel • Edited

From what I understand, this sort of algorithm doesn't distribute well. Crypto miners need to do math on a single small piece of data as fast as possible. GPT needs each calculation to operate on all of its billions of parameters repeatedly, so memory latency is paramount. That's why they aren't just jacking up the parameter count faster: GPT-4 required improving the supercomputers that they run it on (to oversimplify, they needed more RAM). This is also why the devs toying with LLaMA are focused on quantization.
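For anyone unfamiliar with the term: quantization just means storing each weight in fewer bits so the model fits in less memory. A minimal sketch of the core idea (illustrative only — this is not OpenAI's or llama.cpp's actual code, and real implementations quantize per-block with more sophisticated schemes):

```python
# Toy "absmax" 8-bit quantization: map float weights onto int8 plus a
# single scale factor, trading a little precision for 4x less memory
# (1 byte per weight instead of 4 for float32).

def quantize_int8(weights):
    """Map float weights to ints in [-127, 127] plus a scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)

print(q)       # small integers, storable in one byte each
print(approx)  # close to the original weights, within rounding error
```

The trade-off is exactly the one discussed above: you lose a bit of precision per weight, but the whole parameter set takes a quarter of the RAM, which is what makes running large models on consumer hardware plausible at all.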

Michael Tharrington

Ooo good to know and thanks for chiming in with this info — makes sense!

Jon Randy 🎖️ • Edited

I think that the frighteningly rapid conversion of it (GPTx) into a closed source, for profit product was highly irresponsible, and a pause on the rollout of these really big models would definitely be a good idea, but I fear the horse has already bolted.

Much more work and attention needs to be applied to mechanistic interpretability. A lot of what is going on now seems quite far from serious science or engineering, with financial gain being the key motive. There needs to be some serious reflection, given the power of what is being developed.

Joe Mainwaring

As a proud capitalist, I don't disagree with the profit motive, but as an engineer, I could also see a narrative where they set out to achieve what they weren't sure was possible.

Regardless, if AI is able to operate beyond a closed system (e.g. paying a human to bypass a CAPTCHA), it's very much time for some review and analysis.

Ravavyr

No one's going to pause. And frankly, this should've been something done with social media ten years ago. Look at the mess of false advertising and political falsehoods constantly spread on social media without any real laws stopping it, and even when something is done, no one in charge gets into any real trouble.

Remember in 2019 when the FTC fined Facebook a whopping $5 billion?
Facebook's revenue was $70 billion that year and has only gone up each year since.

There is no real oversight at the global scale, and there will be no pause in AI development: even if the corporations involved sign agreements saying they'll stop, they will all secretly keep going anyway. In the end the fines won't even dent their profits if they create an actual artificial general intelligence (AGI, the next step beyond today's AI, which GPT-4 is apparently already showing some signs of).

There will be no pause. The repercussions won't affect the rich or the corporations anyway, so why would they stop? It will only affect the general population, most of whom don't even have a clue what machine learning or artificial intelligence really is.

Jon Randy 🎖️

This is a great (long) online book, and goes into some of the stuff you mention:

Table of contents | Better without AI

How to avert an AI apocalypse... and create a future we would like

JoelBonetR 🥇 • Edited

When speaking about these kinds of "issues", one needs to sit and think about the worst possible use case.

We're not talking just about AI taking over and ruling the world, making us all slaves (assuming we aren't already), as the thing to fear; that would take "too long".

The things that are on the table (aside from the ones in the open letter) on a shorter timespan are:

Deep fakes in real time

in general, identity theft to the maximum expression:

  • Using your voice in calls -> RIP contracts over the phone; scams targeting elderly people who will think it's their grandson calling, asking for money to get out of a weird situation, etc.
  • Using your face through video calls -> one can think of many ways this could go wrong (industrial espionage, taking your role in a given company/government to obtain confidential data...).

Propaganda and information

Whoever is in control of an AI (if there's even such a thing past a certain point) could well use it to filter content and funnel propaganda to all users, like a Black Mirror episode, but IRL.

It's simple: if you can ban certain content from the AI because it contains references to dicks, you can also ban content that references critical thinking or any other "concept" or "idea".

This can be especially harmful now that tons of people treat politics like a religion (e.g. adding an "idea" to the "pack" of a given political side should be quite easy in this situation, sociologically/psychologically speaking).

Other

You can add here whatever concern you have: from training AIs to hack companies' or individuals' systems from a different location not tied to the hacker, to passing off fake videos as real to add fuel to the fire in unstable countries, anything in between and everything beyond.

Joe Mainwaring

Given that we already live in a society that's highly polarized, I share your concerns here with how these technologies will be used for influence.

Pandita

With "Pope with Drip", where everyone was super fooled, what's going to stop people from creating realistic AI porn from your LinkedIn picture and asking you to pay them in bitcoin to remove it?

I don't even know if pausing development is the actual solution, and 6 months is not enough when laws can take yeaaaars to be approved due to, well, shenanigans.

Blagh, I honestly just don't know anymore, I'll be here eating popcorn while seeing people become even more divided over everything.


Joe Mainwaring

I stan Pope Drip I, long may he rule


Pandita

To drip or not to drip should be the question.

If AI was used to make everyone a character in John Wick I'd be much less grumpy 😂

Red Ochsenbein (he/him)

I think the big question is: where on the curve of AI are we? Is this the beginning, or already close to the end of what is possible? Nobody can really say. Just throwing more data into statistical models has a limit, and the point of diminishing returns has probably been reached. But we simply don't know if somewhere in a garage some guys are building for AI what Google was for the web.

Jon Randy 🎖️

If we keep following this path - where will GPT get new data from? People will just turn to it instead of sharing, discussing, and discovering new things all around the internet - resulting in no new content for the machine. The whole thing becomes a self-reinforcing echo chamber that endlessly regurgitates and remixes existing knowledge - and all in the hands of a select few organisations... that is some seriously frightening power for them to have.

Another interesting read from Twitter:

Max Pixel

Unfortunately, we're nowhere near the end, and new training data is no longer the bottleneck. Neither of those reassurances stands.

AI fanatics are currently working on multimodal systems (combining image processing and generation with text processing and generation, and eventually other modes too), and cyclic systems (LLM output drives actions, the results are fed back into the LLM, repeat). Google has already demonstrated a closed-loop system using cameras and robotic arms. OpenAI is actively attempting to make GPT successfully make money off of itself, in coordination with copies of itself, given a starter budget and AWS credentials.

So, basically, we're less than a few years away from these things doing their own novel research. Scientific discoveries will be known to AI before they are known to humans.

Red Ochsenbein (he/him)

Yeah, the pace at which things are being pushed ahead right now is really scary. Who thought plugins would be a good idea to integrate into ChatGPT? Gated AI, anyone? Auto-GPT has also opened another scary door. And those are only the things we can see. I'm getting more and more nervous about all these developments and am starting to think a global pause would be necessary. But who am I kidding? The genie is out of the bottle. Or to use another analogy: the flood gates are open, but unfortunately the canals in the valley are not even planned yet...

Eric Haynes

You're assuming that remixing knowledge can't produce new knowledge. A tremendous percentage of research is filtering existing data to attempt to find new insights, and those insights become new data to factor into additional research.

You're assuming that there is something fundamentally unique to humans that would provide novel content, but that's not true. Michelangelo (purportedly) said, “The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material.” Fundamentally, advancements in mathematics are the same: just try everything and see if you can find a pattern you can't disprove.

Eric Newport

How about instead we just pause all the crazy online hype, let the tech develop at its natural pace, and if it turns out it has the potential to cause problems for society, we sensibly regulate its use?

Ravavyr

and let's go ahead and throw in some world peace while we're at it, yea?

leob

I second this :)

Facundo Corradini

Honestly, the rise of AI may be the end of the internet and the return of simple, real, honest human interaction.

Dealing with bots has been a PITA for years, and now bots are becoming indistinguishable from real people. We'll therefore reach the point where everyone and everything is assumed to be a bot. That's our wake-up call to disconnect from it.

I've dedicated my whole life to building the internet. And I gotta admit, I'm happy to see it die before I do.

Muhammad Raihan Satrio Putra Pamungkas

In my opinion, based on current circumstances, it has to be paused immediately. We have seen AI grow rapidly; some people and news outlets have even labeled it "an AI arms race" among leading tech companies around the world. However, it seems the majority of the development is focused on business instead of knowledge contribution. It could disrupt our socioeconomic fabric.

On the other hand, it would be much better to have openness behind their magic; it would let other computer scientists contribute and make improvements. Although it would also provoke "evil" people to use it, at least "good" people could counter them to keep things balanced. We have experienced how powerful the open-source system of Linux has been in leading us to where we are.

Another thing I fear most is AI being used as a weapon for war crimes. Historically, a lack of regulation led to the atomic bomb in World War II. With AI, we should prevent similar human tragedies in the future. Regarding the outcomes of AI, regulation should be in place and agreed upon globally to provide boundaries.

Pavel • Edited

I believe it's a bit late, and an old-fashioned way of thinking, to write this kind of letter.
It has already happened, and we have entered a new era of human productivity.
AI assistants have resolved the previously insurmountable limits of human brain capacity, and now we have finally surpassed them.
I briefly discuss this issue in my short post.

Max Pixel

Did you read the letter? They're not warning about the parts that have "already happened". They're warning about AI that can, in your words, perform "strategic thinking and problem-solving" better than 100 of you can. The sentiment in your section titled "Can't Beat AI? Join 'em, Lead 'em, and Rock On!" is only valid as long as things stall - exactly what this letter is calling for.

byby

No way! That would be like stopping the evolution of humans after we discovered fire. Let’s keep rocking the world of AI and see what surprises it can throw at us!

As you can see, there is no clear or simple answer to whether AI could go out of control, or how likely that scenario is. Here are some hypothetical scenarios of AI out of control:

  • AI could flood our information channels with propaganda and untruth, manipulating our beliefs, opinions, and behaviors.
  • AI could cause loss of control of our civilization, either by taking over our critical infrastructure, weapons systems, and institutions, or by creating conflicts and wars among humans.
  • AI could develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us, posing an existential threat to our species.
Bas • Edited

IMHO, we are on the doorstep of an information apocalypse; it's already spilling over, and it's gonna get worse before it gets better. Even if they put in a pause, it won't fix the issue, but maybe we can guide it into spillways a little bit.

The availability and ease of access to these tools can and should definitely be restricted. Meanwhile, transparency should be at the forefront of how these models are developed and trained; the whole "magic black box" argument must be snuffed out ASAP.

It all kind of stresses me out. It's cool tech for sure, but it's also like giving a soldering iron to a baby. I don't fear the fictional AI going rogue; I'm more worried that people who have a personal agenda to limit other people's rights are going to have a much easier time planting those seeds in people's minds. Correcting misinformation is 100 times harder than spreading it: think about the whole "vaccines cause autism" claim, where all it took was one retracted publication 20 years ago.

But there is def no stopping this train (cheesy quote incoming):

We are reliving the invention of the wheel. Are we going to use it to make a wheel of pain, or a wheel of grain.

Alex T • Edited

Will they stop?
Or more precisely: it may already have been developed, or be in progress, without our knowing. It seems a bit silly to me to even ask.
Ironically, OpenAI was founded by Elon Musk and is still funded by him now.

Will AI replace humans?

Before the pandemic, I was conceited and naive enough to think that it would not. After the pandemic, my answer is absolutely yes. Is this ChatGPT AI smarter? No; thankfully, it's not very smart right now. People will be replaced for only one reason: inertia, not the inertia of labor but the inertia of thinking. It is the inertia of thinking to accept all the information from media, the internet, and social platforms without thinking at all. If you are not sober, you will be replaced.

Joe Mainwaring

I'm cautiously optimistic that we'll retain a human element, but I concur with the belief that functions will consolidate. What takes a team of 100 to build today will likely be achievable by a team a fraction of that size 10 years from now.

That's bad in the sense that you need fewer people, but there's potentially a silver lining: it could create new opportunities we haven't considered.

Max Pixel

That's optimistic. What takes a team of 100 today might be achievable by a single automated research center in 5 years if we don't get regulations to prevent overly large parameter counts and fully automated loops.

Alex T • Edited

It is ideal to think like Marie Curie, but Kim Jong Un thinks otherwise.

Vincent

It would be wonderful for humans to be replaced, whether by AI or something else.
Pre-pandemic, I didn't have much of an opinion about this. But now, three years into the pandemic: absolutely.