jqpabc123 6 hours ago

LLMs struggle with reasoning --- period, the end.

They are fundamentally probabilistic, not deterministic. This makes them inherently unreliable for any application where deterministic logic is required --- like math.
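
A minimal sketch of that gap, using a made-up next-token distribution rather than any real model: temperature sampling gives you a distribution over answers, while the arithmetic itself has exactly one.

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # Softmax with temperature, then sample: the core step of LLM decoding.
        scaled = [x / temperature for x in logits]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]
        return random.choices(range(len(logits)), weights=weights, k=1)[0]

    # Toy candidates for the next token after "2 + 2 = ": "4", "5", "22"
    logits = [5.0, 3.5, 1.0]
    print([sample_next_token(logits) for _ in range(10)])  # usually 0 ("4"), not always
    print(2 + 2)                                           # always 4

Run it twice and the first line changes; the second never does.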

Don't take my word for it; just ask your favorite LLM.

--- "Can you guarantee your results are reliable?".

While I strive to provide accurate information and results based on the data I have, I can't guarantee absolute accuracy in every case.

--- "Do you hallucinate?"

Yes, I can "hallucinate" in the sense that I might sometimes generate information that is factually incorrect, misleading, or doesn't fully align with reality.

  • trehcrob 5 hours ago

    It's not just that LLMs are probabilistic; it's that the illusion of reasoning in English doesn't transfer to formal systems. I find it quite interesting how far LLM reasoning can be pushed in, for example, code generation. But there is a massive difference between plausible argumentation and provably correct statements.
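
    A toy illustration of that gap (my own example, not anything an LLM produced): a claim that sounds like it has a proof by pattern, until you actually check it.

        # Plausible-sounding claim: "n*n + n + 41 is prime for every n >= 0".
        # It holds for n = 0..39, exactly the kind of pattern that supports a
        # convincing English argument but not a correct statement.

        def is_prime(k):
            if k < 2:
                return False
            return all(k % d for d in range(2, int(k ** 0.5) + 1))

        counterexamples = [n for n in range(100) if not is_prime(n * n + n + 41)]
        print(counterexamples)  # [40, 41, ...]: plausible, but provably false

    Forty correct instances make the argumentation plausible; one counterexample settles the formal question.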

    • jqpabc123 4 hours ago

      > it's that the illusion of reasoning in English doesn't transfer to formal systems.

      This is because there is very little reasoning --- it's an "illusion".

      Data retrieval is not "reasoning". A person with a photographic memory and instant recall is not necessarily a "genius" --- though it may appear that way to some.