A new math benchmark just dropped and leading AI models can solve ‘less than 2%’ of its problems… oh dear

Sometimes I forget there’s a whole other world out there where AI models aren’t just used for basic tasks such as simple research and quick content summaries. Out in the land of bigwigs, they’re instead being used to help with everything from financial analysis to scientific research. That’s why their mathematical capabilities are so important; math also serves as a general marker of reasoning ability.

Which is why mathematical benchmarks exist. Benchmarks such as FrontierMath, just released by its maker Epoch AI, which puts LLMs through their paces with “hundreds of original, expert-crafted mathematics problems designed to evaluate advanced reasoning capabilities in AI systems” (via Ars Technica).

While today’s AI models don’t tend to struggle with other mathematical benchmarks such as GSM8K and MATH, according to Epoch AI, “they solve less than 2% of FrontierMath problems, revealing a substantial gap between current AI capabilities and the collective prowess of the mathematics community”.

To be clear, these are hard problems. As in, so hard that they “typically require hours or days for expert mathematicians to solve”, ranging “from computationally intensive problems in number theory and real analysis to abstract questions in algebraic geometry and category theory”.

What’s so different about this benchmark is that solving these mathematical problems requires “extended chains of precise reasoning, with each step building exactly on what came before”.

AI models have traditionally not been great at extended reasoning in general, let alone super-advanced math. This makes sense when you consider what AI models, at bottom, are doing. Take LLMs: they’re trained on mountains of text to predict which word is most likely to come next. There’s plenty of room to steer the model towards different words, of course, but the process is essentially probabilistic.
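To make that concrete, here’s a toy sketch of next-word sampling in Python. Everything in it (the phrase, the vocabulary, the probabilities) is invented for illustration; a real LLM does this over tens of thousands of tokens with a neural network, but the final step is the same: sample from a probability distribution.

```python
import random

# Invented toy "model": next-word probabilities for a single context.
next_word_probs = {
    "the cat sat on the": {"mat": 0.55, "sofa": 0.25, "roof": 0.15, "theorem": 0.05},
}

def pick_next_word(context: str, temperature: float = 1.0) -> str:
    """Sample the next word; temperature steers how adventurous the pick is."""
    probs = next_word_probs[context]
    # Low temperature sharpens the distribution (near-always "mat");
    # high temperature flattens it (gives "theorem" a fighting chance).
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights)[0]

print(pick_next_word("the cat sat on the"))                   # usually "mat"
print(pick_next_word("the cat sat on the", temperature=2.0))  # more of a gamble
```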

Of late, however, we’ve seen AI models apply that probabilistic “thinking” in a more directed fashion, working through intermediary steps along the way. In other words, we’ve seen a move towards AI models that attempt to reason through a problem step by step, rather than just jumping to a probabilistic conclusion.
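This also hints at why those “extended chains of precise reasoning” are so brutal for anything probabilistic. A back-of-the-envelope sketch, assuming (unrealistically) that every step is independent and equally reliable:

```python
# Back-of-the-envelope: if a model gets each reasoning step right with
# probability p, and (a big simplifying assumption) the steps are
# independent, then an n-step chain is only right with probability p**n.
p = 0.98  # a model that nails any single step 98% of the time
for n in (10, 50, 200):
    print(f"{n:>3} steps: {p**n:6.1%} chance the whole chain holds up")
# ->  10 steps:  81.7%
# ->  50 steps:  36.4%
# -> 200 steps:   1.8%
```

That last figure landing near FrontierMath territory is a coincidence of my invented 98%, not an explanation of Epoch AI’s results, but it shows why “each step building exactly on what came before” is such a tall order.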

There’s now a version of ChatGPT that uses this kind of step-by-step reasoning, for instance (and you’d better make sure you don’t question it). It’s also telling that you can now potentially be rewarded for submitting a question that AI can’t answer for “humanity’s last exam”.

Of course, these individual steps of reasoning might themselves be arrived at probabilistically (and could we expect any more from a non-sentient algorithm?), but the models do seem to be engaging in what we flesh-and-bloodies would, after the fact, consider to be “reasoning”.

We’re clearly a long way off from these AI models achieving the reasoning capabilities of our best and brightest, though. We can see that now we have a mathematical benchmark capable of really putting them to the test: 2% isn’t great, is it? (And take that, robots.)


Regarding the FrontierMath problems, Fields Medalist Terence Tao tells Epoch AI, “I think that in the near term basically the only way to solve them, short of having a real domain expert in the area, is by a combination of a semi-expert like a graduate student in a related field, maybe paired with some combination of a modern AI and lots of other algebra packages…”

While AI models might not be able to crack these difficult problems just yet, the FrontierMath benchmark looks set to serve as a good litmus test for future improvements, ensuring the models aren’t just spewing out mathematical nonsense that only an expert would catch.

We must, in the end, remember that AI is not truth-aiming, however closely we humans aim its probabilistic reasoning at results that tend towards the truth. The philosopher in me must ask: without an inner life aiming towards truth, can truth actually exist for the AI, even if it spews it out? Truth for us, yes, but for the AI? I suspect not, and that’s why benchmarks like these will be crucial as we move forward into this new industrial revolution, or whatever they’re calling it these days.
