## DEV Community

Alvaro Montoro

Posted on • Originally published at alvaromontoro.com

# Is ChatGPT testing the users?

Why does ChatGPT sometimes give clearly wrong answers? I get that it can produce incorrect code or get confused by the natural language in a problem, but today it told me that the square root of 8 is 2 (just 2, no decimals). It's impossible that an AI (or any machine) believes that's the correct answer.

It was a simple question, too: "What is the square root of the cube of 2?" And it explained the logic correctly. It just gave a wrong result: 2³ = 8, √8 = 2.

After I asked it to recalculate because the result might be incorrect, ChatGPT provided the correct answer, along with what looked like a "snarky" comment about how the previous one was wrong, and the right calculation this time: 2³ = 8, √8 = 2.828...
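For reference, the arithmetic itself is trivial to verify with a few lines of Python, which makes the model's initial answer all the stranger:

```python
import math

# The cube of 2, then its square root: √(2³) = √8
cube = 2 ** 3           # 8
root = math.sqrt(cube)  # 2.8284271247461903, not 2
print(root)
```

√8 simplifies to 2√2 ≈ 2.828, so "2" is off by a wide margin, not a rounding quirk.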

So I guess ChatGPT is coded to provide wrong answers on purpose. But why? Is the system programmed to "test" the user? Is it some A/B testing? What and why?

It could be to get publicity online (people like me sharing incorrect results and talking about how unintelligent this Artificial Intelligence thing is). It's plausible, but I doubt it's just that. It would be bad publicity, too, though as they say, "bad publicity is better than no publicity."

There has to be more to it. What am I missing?

Jon Randy 🎖️

It's because it doesn't 'know' or 'calculate' anything. It's a dumb (but impressive) language-processing model. It's not intelligent in the slightest.

Timothy Foster

I don't think it's on purpose; it's probably more an artifact of how this AI processes language.

I tried the same experiment, but just said "That is the wrong answer". It proceeded to double down.

I figured the difference between what you put and what I put was the word "calculate", so I tried asking "Calculate the square root of the cube of 2". Interestingly, it's correct now.

Something about the word "calculate" might be cueing the AI that an actual calculation is needed. So my hypothesis is that the AI is often wrong because 1) language is extremely hard, and while this model is good, it isn't perfect, and 2) it's optimized for conversation, not accuracy.

Timothy Foster

Actually, they address this in the first two points of their stated limitations:

> during RL training, there's currently no source of truth

> given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.

Raldin Casidar

My friend also experienced this yesterday. We don't know why the AI sometimes provides wrong answers.