Is ChatGPT testing the users?

Photo by DeepMind on Unsplash

Often ChatGPT provides wrong answers, but why?

Why does ChatGPT sometimes give clearly wrong answers? I get that it can produce incorrect code or get confused by the natural language in a problem, but today it told me that the square root of 8 is 2 (just 2, no decimals). It's impossible that an AI (or any machine) thinks that's the correct answer.

It was a simple question, too: "what is the square root of the cube of 2?" And it explained the logic correctly. It just gave a wrong result: 2³ = 8, √8 = 2.

Question to ChatGPT: “What is the square root of the cube of 2”. And an elaborate answer by the AI explaining that it is 2 (8 is the cube of 2, and 2 is the square root of 8)

After I asked it to recalculate the result because it might be incorrect, ChatGPT provided the correct answer, along with what looked like a "snarky" comment about how the previous one was wrong, and the right calculation this time: 2³ = 8, √8 = 2.828...

Conversation with ChatGPT. Following the previous question, the user states that the answer may be wrong and to run the calculation again. ChatGPT apologizes for any confusion, and states that the previous answer is wrong. It does the calculations again, this time with the correct values and result.
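For reference, the arithmetic in question is trivial to verify with a few lines of Python (a quick sanity check, not anything ChatGPT runs internally):

```python
import math

# The question posed to ChatGPT: the square root of the cube of 2.
cube = 2 ** 3           # 2³ = 8
root = math.sqrt(cube)  # √8 ≈ 2.8284..., clearly not 2

print(cube, root)
```

Running this prints `8 2.8284271247461903`, matching ChatGPT's second (corrected) answer. The point is that a language model predicts plausible-sounding text rather than executing calculations, which is why the explanation can be right while the number is wrong.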

So I guess ChatGPT is coded to provide wrong answers on purpose. But why? Is the system programmed to "test" the user? Is it some A/B testing? What and why?

It could be to get publicity online (people like me posting about incorrect results and how unintelligent this Artificial Intelligence thing is). It's plausible, but I doubt it's just that; it would be bad publicity, too, although as they say, "Bad publicity is better than no publicity."

There has to be more to it. What am I missing?
