Generative AI still hallucinates

In a nice post by Gary Marcus, Seven Lies in Four Sentences, there are examples of Generative AI doing what it always does via autocompletion: producing text that is unrelated to reality.

A related, earlier post notes:

All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all.

Overall, these generative tools produce the statistically most likely sequence of words based on the prompt that was supplied. The problem with that is that humans are gullible and easy to fool, so it is easy to fall into the trap of thinking that the generative tool is intelligent. In reality the tool has no idea at all about what it generated, other than that the output was the statistically most likely response.
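
To make "statistically most likely sequence of words" concrete, here is a minimal sketch in Python, using a made-up corpus and a hypothetical autocomplete helper, of greedy next-word completion from simple word-pair counts. Real generative models use neural networks over tokens rather than raw counts, but the principle is the same: extend the prompt with whatever continuation is most probable, with no notion of whether it is true.

```python
from collections import Counter, defaultdict

# Hypothetical toy example (not any real LLM): "autocomplete" by always
# picking the word that most often followed the previous word in a tiny corpus.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(prompt_word, length=5):
    """Greedily extend the prompt with the statistically most likely next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The output is a fluent-looking word sequence chosen purely by frequency;
# nothing here "knows" whether the resulting sentence is true.
print(autocomplete("the"))
print(autocomplete("dog"))
```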

So generated text is unlikely to be surprising – a.k.a. to contain new information – it will just be likely information. Yes, it is likely to give a reasonable answer to common prompts (because it was trained with many related examples), but for novel prompts hallucinations (a.k.a. misinformation) are the likely answer.