

It’s also just a language model. People have trouble internalising what this means, because it sounds smarter than it actually is.
ChatGPT does not reason in the way you think it does, even when it offers those little reasoning windows that show the “thought process”.
It’s still only predicting the most likely next word based on the words that came before. It can do that many times over, and extra words can be fed in to direct it one way or another, but that’s very different from understanding a topic and reasoning within it.
So as you keep pushing the model to learn more and more, you start seeing artifacts, because it isn’t actually learning the concepts - it’s just getting more data from which to infer “what’s the most likely word X to follow words Z, Y and A?”
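
To make that concrete, here's a toy sketch of the idea (nothing like GPT's real architecture - just a bigram frequency table standing in for a language model): it repeatedly asks "which word most often follows the last one?" and appends the answer.

```python
from collections import Counter, defaultdict

# Toy stand-in for a language model: count which word follows which.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=8):
    """Greedily predict the most likely next word, append it, repeat."""
    words = [start]
    for _ in range(length):
        candidates = bigrams[words[-1]]
        if not candidates:
            break
        # "What's the most likely word to follow the previous one?"
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints something like: "the cat sat on the cat sat on the"
```

Even in this tiny example you can see the kind of artifact described above: pure next-word prediction happily loops on plausible-sounding sequences with no notion of what it's saying. Real LLMs condition on far more context and are vastly better at it, but the underlying operation is the same "predict the next word" loop.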
You’re not wrong, but it’s still a language model, and language is not the entirety of how intelligence and reasoning work. There are clear limitations that don’t arise only from people toying around with ChatGPT; they’ve been known for decades from theoretical work on what language is.