Have you ever tried to use an AI tool like ChatGPT to do some math and found it doesn’t always add up? It turns out there’s a reason for that.
As large language models (LLMs) like OpenAI's ChatGPT become more ubiquitous, people increasingly rely on them for work and research assistance. Yuntian Deng, assistant professor at the David R. Cheriton School of Computer Science, discusses some of the challenges in LLMs' reasoning capabilities, particularly in math, and explores the implications of using these models to aid problem-solving.
What flaw did you discover in ChatGPT’s ability to do math?
As I explained in a recent post on X, o1, the latest reasoning variant of ChatGPT, struggles with large-digit multiplication, especially when multiplying numbers beyond nine digits. Its performance is a notable improvement over the previous GPT-4o model, which struggled even with four-digit multiplication, but it remains a major flaw.
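To make the comparison concrete, here is a minimal sketch (not Professor Deng's actual evaluation code) of how a reader could spot-check a chatbot's long multiplication: Python's built-in integers are exact at any size, so the true product of two nine-digit numbers is easy to compute and compare against a model's answer. The variable `model_answer` is a hypothetical placeholder for whatever the chatbot returns.

```python
import random

# Pick two random nine-digit numbers, as in the multiplication tests described above.
a = random.randint(100_000_000, 999_999_999)
b = random.randint(100_000_000, 999_999_999)

# Python integers are arbitrary precision, so this product is exact.
exact_product = a * b

# Hypothetical placeholder: paste the chatbot's answer here to check it.
model_answer = exact_product

print(f"{a} x {b} = {exact_product}")
print("Model correct?", model_answer == exact_product)
```

A script like this makes it easy to see where a model's arithmetic starts to break down as the numbers grow longer.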