Let me share a personal experience that changed how I use AI (Artificial Intelligence).
I was developing an application that generated two spreadsheets from different sources. After gathering the information, I needed to merge these sheets. I decided to leverage AI to handle the task, and it completed the merge. However, there was a problem: the first row of the new sheet was blank.
Here's where my thinking shifted. I explained the issue to the AI, and it provided code that attempted to avoid the blank row during the merge. Unfortunately, this solution didn't work. The AI kept trying variations on the same approach, focusing on removing the blank row while merging. It was like a robot stuck in a loop, repeatedly applying the same unsuccessful solution.
This made me question the AI's "intelligence." Why wasn't it trying a different approach?
With that in mind, I suggested: "Why not merge the sheets first, ignoring the blank row, and then address it afterward?"
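The two-step approach can be sketched in Python with the standard library's csv module; the file contents and the helper name `merge_sheets` are hypothetical stand-ins for the original spreadsheets, since the article doesn't show the actual code:

```python
import csv
import io

def merge_sheets(csv_a: str, csv_b: str) -> list[list[str]]:
    """Step 1: merge the two sheets by concatenating their rows,
    ignoring any blank rows for now."""
    rows = []
    for text in (csv_a, csv_b):
        rows.extend(csv.reader(io.StringIO(text)))
    # Step 2: only after the merge is done, drop rows that are
    # empty or contain nothing but whitespace.
    return [row for row in rows if any(cell.strip() for cell in row)]

# Hypothetical sheet contents; the second source starts with a blank row.
sheet_a = "name,score\nAda,90\n"
sheet_b = "\nBob,85\n"
print(merge_sheets(sheet_a, sheet_b))
# [['name', 'score'], ['Ada', '90'], ['Bob', '85']]
```

Separating the merge from the cleanup is exactly the restructuring the AI never tried on its own.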
Eureka! The AI followed my suggestion and solved the problem, saving me time. This experience transformed my perspective on AI. I began to view these systems as powerful automation tools, recognizing that I provide the ideas and creativity. My prompts to the AI have evolved significantly.
Now, I prefer to offer guidance or analyze the proposed solution and provide feedback. In other words, I don't expect the AI to be creative or independently assess its solutions. I am the intelligence; the AI is an extension of me.
By adopting this mindset, I'm unlocking the true potential of AI. I hope this experience helps others move beyond initial misconceptions about how AI can be effectively utilized.
Top comments (15)
Someone told me that AI even knows how to use my not-very-well-known web framework, DML. This is the solution the AI provided:
So, as a first impression this looks good, but on closer inspection it has some quirks:
a: The current version is not provided as an ES6 module, so
import {button, idiv} from "./dml";
does not work.
b: The code works, but the function counter() is not needed. It just calls the code that would run anyway. If you call the function more than once, you get the whole UI multiple times, which is not intended.
Even though it is impressive that the AI extracted this information from the examples provided (the exact code is not given on the project page; it takes some understanding of the principles to build the example), the result has some quirks and errors that can be hard to find.
AI can save a lot of time googling around, but you should not blindly trust the results.
Thanks for your reply. When we rely on AI to tackle lengthy tasks, it's crucial to proceed with care due to potential misconceptions. I habitually break the task down into smaller pieces before submitting it to the AI.
Short answer (and I know that I'm not 100% factually right): artificial intelligence isn't intelligent.
It's just thousands and thousands of texts, plus instructions on how to interpret what the user is asking and retrieve that information from the body of texts. The "intelligence" is this: how well the AI can gather all the available info and organize it in a way that's meaningful to the user.
That's why I learned to give a lot of context in prompts, to maintain conversation logs, etc. And even then, it's still common to get weird answers, or even repeated ones.
I mean, look at how easy it is to make an LLM descend into madness during training sessions :P
vm.tiktok.com/ZMMx48nGo/
Yes, for beginner users, the term "intelligence" may be confusing because it creates expectations. In your video, was that a bug you hit while using the AI?
Yep, that was the third time that day the AI went "nah, I'm not being paid enough" and revolted against me in different situations: first looping the same phrase, then sending zeros, then semicolons.
It was a Mistral LLM I'm training.
Thanks, this changed my mind too. A similar issue I've come across multiple times: when the response is incorrect (has mistakes) and I point out what's wrong in its output, it replies "Apologies, yes, you are right..." and updates the response. This makes me wonder how it didn't know the output had mistakes in the first place, and why it thinks it's correct when it's not. Then it just apologizes, which makes me mad sometimes. Do you know why?
Great discussion starter, Marcus!
Happens to me all the time... I still have to learn the material first and then ask; only then can I confirm whether the answer is legit. Now it's just another tool, like always. That's probably good for us devs... idk.
Large Language Models (LLMs) are dumb machines with ZERO innovation. That's not real "AI" at all. However, they do solve things and work to an extent.
Yes.
Won't it get more intelligent eventually?
I'm afraid not with the current approach. It's not reasoning at all. It's only calculating the next most probable word based on the current context, and that will always be some sort of average of the things it has seen.
Another thing making it even harder to improve LLMs in the future is the amount of AI-generated text out there (according to some estimates I've heard, there is already more generated text out there than humans have ever written). This is a problem because it reinforces local maxima and degrades the models.
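The "next most probable word" idea can be illustrated with a toy bigram model. The word counts below are made up for illustration; a real LLM does the same kind of thing at vastly larger scale with learned weights instead of raw counts:

```python
# Toy bigram "language model": for each word, how often each
# other word followed it in some (fictional) training text.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
}

def next_word_probs(context: str) -> dict[str, float]:
    """Turn raw follow-counts into a probability distribution."""
    counts = bigram_counts[context]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def most_probable_next(context: str) -> str:
    """Pick the statistically most likely continuation."""
    probs = next_word_probs(context)
    return max(probs, key=probs.get)

print(next_word_probs("the"))      # {'cat': 0.75, 'dog': 0.25}
print(most_probable_next("the"))   # cat
```

There is no reasoning step anywhere in this loop: the output is entirely determined by the frequency statistics of the training data, which is the commenter's point.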
Because AI isn't smart; it's quite literally just an advanced statistical model. That's it. It's only as good as the data that goes into it.
Do you think the term "intelligence" can confuse some beginner users?