So, we've had our fair share of an AI hype cycle by now. It's still something we hear about every day, and companies are out there trying to out-innovate each other, even if only in the marketing department.
Interestingly, most generative AI products - which are all the rage these days - focus on taking away the tasks that brought joy, and that were underappreciated, underpaid and exploited anyway: the actual writing when writing, the actual craft when creating art, the actual playing when making music. What's left when you take the skill and craft out of art? A hollow shell. A quick dopamine hit for a few minutes - if even that. Maybe we should stop for a moment and ask "Why?". Why are we doing this? What are we winning?
"Productivity" I hear some scream. "I'm 10 times more productive thanks to AI" they say. Even if this is true, what for?
Do we need 10 times the articles to read? Do we really need 10 times the songs to listen to? Do we need 10 times the images to look at? Do we need 10 times the emails, spam, websites or even code? And if not, what does the productivity gain do? Destroy 10 times the jobs? Kill 10 times the joy of creating art? I don't know the answer, but I've never heard one from those who claim to be 10 times (or even 100 times) more productive.
Another thing: I don't believe these tools are really that powerful. I see how easy it is to hit the limits of generative AI tools. I don't even have to try that hard to receive really bad code and get myself into a loop of pointing out errors, the AI saying "I'm sorry, you're right..." and repeating the same useless code again. And that's only one example. You've probably heard about generative AI being unable to count the 'r's in "strawberry", or similar things. Those problems are not the exception - they are the norm.
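For contrast, here's what that "hard" problem looks like as ordinary, deterministic code - a throwaway sketch (the helper name is mine), just to underline how trivial the task actually is:

```typescript
// Counting letters is deterministic and trivial in code -
// which makes an LLM stumbling over it all the more telling.
const countChar = (text: string, ch: string): number =>
  [...text].filter((c) => c === ch).length;

console.log(countChar("strawberry", "r")); // -> 3
```

One line of filtering, and the answer is always 3 - no apologies required.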
This is where I start to ask myself how useless someone must have been before, if they become 10 times more productive thanks to AI. Don't get me wrong: I believe there are fields where AI really can help you sift through huge amounts of data or improve some processes. But those applications usually don't require an LLM.
Let me ask: What are we even doing?
We let the algorithms and models take the things we like, the things that make the human experience valuable. All in the name of productivity. And we use more and more energy to run those things, even though we are burning away humanity's future already, without AI. Speaking of the climate: there are those who think we can and should build huge machines to capture carbon and store it somewhere. Since those technologies only make sense if we don't burn fossil fuels to operate them, we use solar cells to generate the energy. So, let me summarize: we use solar power to capture carbon from the atmosphere, to bind and store it somehow. Congratulations: you just invented a tree.
What are we even doing?
Can we all just stop and start asking "Why?" more often? Maybe then we'd find some better ways to exist. If we can get there, I'll start to be optimistic about technology and AI. Until then I'll stay a skeptic, and maybe even a bit cynical.
Top comments (10)
I'm sick of everything being tracked, of my own intelligence and efforts being stolen by companies that train AI models on them and then use those models to beat the creators of that knowledge. So I don't like AI, even though it's faster. For example: before, I had to do 10 tasks in 8 hours; with AI "helping" me, I have to do 15-20 tasks to still get the same salary. The competition keeps increasing and racing, and in the end I'm still just a link in the chain making money for capitalism.
Ah yes, don't even get me started on that topic...
Sounds to me like you need to just start using the tools - you'll find out what they can actually do.

For example: how many developers have tons of ideas they can never get done? AI makes it possible. Complex things that would take you weeks or months to learn, you can just ask about and have at your fingertips - with some caveats, but damn dude, it's incredible.
Of course spammers gonna spam
and I guess haters gonna hate
The abundance of posts on "how to write better prompts for LLMs" surely shows that the promise of interacting with computers using natural language is either miserably failing or deeply misguided. If you have to describe the specific language needed to get the required results from a system, you may as well be teaching programming - a language we already have to tell computers EXACTLY what to do whilst avoiding all the imprecision & ambiguity of spoken language.
IMHO the whole idea of 'development using AI' (specifically LLMs) is fundamentally flawed.
Humans seem to want to interact with machines using natural language because it 'feels right' - but maybe the whole idea is wrong... just another form of skeuomorphism. Familiar, sure - but maybe not the right tool for the job... and potentially limiting.
Human language developed over many thousands of years for a specific purpose - to allow humans to communicate with other humans. Computers are a wholly different thing that neither think nor 'understand' like we do (they're basically just calculators). I think some form of AI definitely has a place in improving our interactions with and use of computers, but from my experience so far I'm pretty sure LLMs (glorified autocomplete) almost certainly aren't the way forward.
You're thinking about it too hard.
Talking to an AI is just like using Google.
If you know the tricks of HOW to ask it questions, you can get much more precise results than if you're just using natural language.
AI is actually easier to use than Google, because you can explain to it in sentences what you need, whereas Google relies on tags and quotes and special characters.
I don't think "write better prompts" classes are useful at all, because they're all out to make a quick buck.
The key as with any skill is just practice. Ask something, ask it a different way, and another and another.
Tell it you want something more specific or less specific.
You should just try them out and play with them some more, and you'll find there's no real problem with the tools. Much like a hammer: anyone can use one, but many are gonna smash their fingers before they learn to place it properly.
I was referring to getting LLMs to write code for you, not asking them questions about coding/concepts. Upon re-reading my response, I realise that is not totally clear.
As to your point about it being like Googling something... on that front, using an LLM could not be more different.
When you search on Google, you get a variety of results based on your search query - which you are free to peruse and evaluate at will (e.g. seeing how Stack Overflow answers are actually rated by peers).
With an LLM you generally get ONE answer, with no way of knowing whether it is correct or not (it is generally an amalgam or pastiche of information relating to your question - which may or may not include 'hallucinations', making it potentially wrong). It is just whatever the LLM thinks is most likely to come next (autocomplete on steroids) - sometimes it's useful, sometimes it's maddeningly, obviously wrong or even unrelated to your question... all because the LLM actually understands nothing. If the generated answer is about something with a LOT of training data in its model, then the likelihood the answer is correct or close to correct is probably high... but the converse is also true. This is super dangerous for learners/beginners, as they'll develop a tendency to blindly believe everything the LLM tells them.
Yes - which are shorter, more precise, and do definite things. Much better tools for refining searches than playing around with the fuzziness of natural language, and dealing with the stochastic nature of LLM responses.
I think it all depends upon the type of information you are looking for. If you want a conversational style explanation of something (which may get precise details wrong) - then an LLM is quite probably a good route. If you want to find a very specific piece of information, or a bunch of results relating to some specific search terms - then a search engine is always going to be a better choice.
I guess I'm far beyond skepticism now. I'm doing things, in seconds, that I could never afford to pay people to do - things they'd hate doing anyway. I'm replacing wasteful, costly and pointless tasks with automated, assisted processes that are going to help people in their daily lives. All of this is possible due to LLMs. I tried these ideas with classical NLP and got nowhere near far enough.
On top of that, I'm saving vast amounts of time wading through examples that are tangential to what I need to know about new APIs or services I'll only need for a moment, with code that is good enough to get me started - God knows how much time I've saved on that, but it's days at least. For example, I built a Babel plugin the other day. I would never have bothered to learn the API for one thing I half needed, but I had it running in under an hour, and now my lazy-loading process reduces boilerplate by 90%.
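For a sense of scale, this is roughly all the boilerplate a Babel plugin needs - a hypothetical minimal skeleton, not my actual plugin (the name and the matching rule here are made up):

```typescript
// A hypothetical minimal Babel plugin skeleton - just to show
// how little ceremony the plugin API demands.
import type { PluginObj } from "@babel/core";

export default function lazyLoadSketch(): PluginObj {
  return {
    name: "hypothetical-lazy-load",
    visitor: {
      // Called for every static `import` in the file being compiled.
      ImportDeclaration(path) {
        const source = path.node.source.value;
        if (source.startsWith("./heavy/")) {
          // A real plugin would rewrite the matching import here,
          // e.g. turning it into a dynamic import() call.
        }
      },
    },
  };
}
```

A visitor object and a name - that's the whole shape; the rest is your transformation logic.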
I am never going to be a great 3D animator - no problem, I can ask an AI to make the animation I need and then tweak it. I need a seamless gold texture? Yeah, I've got one. I need this Excel spreadsheet totally reformatted and reimagined? Done in 2 minutes, without me needing to record strange macros etc.
Sure, there can be a glut of pointless, soulless generic art... OK, that's not so helpful. AI music that's never really going to be inspiring - that might be used for an advert that a musician capable of making generic, soulless content would have produced before... not sure I'm choosing that hill to defend. I'm not going to get an AI to write my documents for me, though I might get Grammarly to fix some of my greatest sins...
There are really bad things about AI being out there - to my mind, we're on a site that suffers greatly from them. But it's a mistake to consider that the sum total of what is possible. It's a tool: a flexible tool that is imperfect at certain tasks. Then again, so am I.
While I totally support everything that's said here (and am fed up with all this unsolicited AI madness), I consider it inevitable at this stage of technological evolution. Rewatching Singin' in the Rain last Friday strengthened me in this conviction - the parallels are so vivid. We'll either direct this wave properly and smooth its flow, or drown in it; we won't stop it.
Probably not!
Thank you for this