While a bit humorous, one of the greatest current challenges with LLMs is getting them to acknowledge when they don't know something, rather than telling you what they think you want to hear and hallucinating an answer. They are improving in this respect.
Also, we need to get away from the idea that ChatGPT is the kind of AGI we're talking about more broadly. ChatGPT wasn't built to solve the mysteries of the universe; it was built for one very specific task: replicating human writing and speech.
It just so happens that an unintended consequence of it getting really good at this is that it opened doors to all sorts of other capabilities. You've basically built an interface that lets humans interact with AI the same way they would with a person, which is extremely powerful for conveying ideas.
An actual AGI would also have reasoning capability built in, which GPT doesn't explicitly have... though some reasoning does seem to emerge in its answers, which is fascinating in and of itself and has sparked a lot of philosophical debate about what reasoning actually is.