Ask AI: Why do we keep pretending LLMs understand anything?

Every day I see claims about AI "understanding" or "reasoning" when all they actually do is pattern matching over their training data. No LLM has ever demonstrated true comprehension; they just output statistically likely text based on what they were trained on.

When will we stop anthropomorphizing these statistical models and admit they are just very sophisticated autocomplete systems? The hype is getting ridiculous.
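To be concrete about what I mean by "sophisticated autocomplete": here is a minimal sketch of what an autoregressive LLM computes at each step, namely a probability distribution over the next token. It assumes the Hugging Face transformers library and the public gpt2 checkpoint (my choice for illustration, not anything specific to the claims above).

```python
# Minimal sketch: an LLM as next-token prediction, assuming the
# Hugging Face transformers library and the public "gpt2" checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# The model's entire output for this step: a probability distribution
# over the vocabulary for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode([token_id.item()]):>10}  p={prob.item():.3f}")
```

Generation is just repeating this step and appending a sampled token each time. Whether that process amounts to "understanding" is exactly the question.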

