BLUF

AIs have a serious problem with truth and correctness, often referred to as 'hallucinations', and imitating human thinking appears to be a big part of that problem.

Summary

  1. A new generation of AI models is taking a more experimental approach.
  2. OpenAI's new o1 model diverges from imitating human thinking.
  3. It spends some 'thinking time' before answering a prompt.
  4. During that time it generates a 'chain of thought' in which it considers options and reasons toward an answer.
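The chain-of-thought idea above can be illustrated with a minimal prompting sketch. The function names and prompt wording below are illustrative assumptions, not OpenAI's actual implementation; o1 performs its reasoning internally rather than via a user-visible prompt.

```python
# Minimal sketch (assumed wording, not OpenAI's implementation) contrasting
# a direct prompt with a chain-of-thought style prompt.

def direct_prompt(question: str) -> str:
    """Ask for an answer immediately, with no intermediate reasoning."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Ask the model to reason step by step before stating an answer."""
    return (
        f"Q: {question}\n"
        "Think step by step. Write out each intermediate step, "
        "check it, and only then state the final answer.\n"
        "Reasoning:"
    )

print(chain_of_thought_prompt("What is 17 * 24?"))
```

The second prompt elicits intermediate reasoning steps before the answer, which is the behaviour the summary describes o1 doing during its 'thinking time'.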

References