Google’s Gemini Deep Think AI Earns Gold Medal at 2025 International Math Olympiad

At this year’s International Mathematical Olympiad (IMO), Google’s Gemini Deep Think AI competed under the same strict rules as human participants and achieved a gold-medal score, correctly solving 5 of the 6 problems. This marks a significant improvement over last year, when DeepMind’s combined AlphaProof and AlphaGeometry 2 system earned silver.

How Deep Think Outperformed Most Humans

  • Natural Language Processing: Unlike previous AI models that required manual translation of problems into formal math notation, Deep Think understands and solves problems directly in natural language.
  • Parallel Reasoning: Instead of following a single logical path, it explores multiple reasoning approaches simultaneously, improving accuracy (a simplified sketch of this idea follows the list).
  • Rigorous Training: Google used reinforcement learning with long-form proofs—not just final answers—to ensure the AI shows its work, a key IMO requirement.
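
Google has not published Deep Think’s internals, so the “parallel reasoning” point above is best read as something like self-consistency sampling: generate several independent solution attempts and keep the answer most of them agree on. The short Python sketch below illustrates only that general pattern under that assumption; the sample_reasoning_path function is a hypothetical stand-in, not Google’s actual system.

```python
import random
from collections import Counter


def sample_reasoning_path(problem: str, rng: random.Random) -> str:
    """Hypothetical stand-in for one independently sampled solution attempt.

    A real model would write out a full proof here; this toy version just
    returns a candidate final answer so the voting step has data to compare.
    """
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 99))


def parallel_reasoning(problem: str, num_paths: int = 16, seed: int = 0) -> str:
    """Run many reasoning paths and keep the answer they most often agree on."""
    rng = random.Random(seed)
    answers = [sample_reasoning_path(problem, rng) for _ in range(num_paths)]
    best_answer, _votes = Counter(answers).most_common(1)[0]
    return best_answer


if __name__ == "__main__":
    print(parallel_reasoning("A toy competition problem"))
```

In practice the aggregation step can be far more sophisticated, for example verifying or merging whole proofs rather than voting on final answers, but the key point is the same: several lines of reasoning run side by side, and the system does not have to commit to the first one.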

The One Problem It Missed

The sole miss came on Problem 6, a combinatorics question about covering a grid with the minimum number of rectangular tiles. Deep Think started from a flawed assumption that led it astray, yet even among human contestants only five solved the problem correctly.

Why the IMO Is a Tough AI Challenge

IMO problems test advanced algebra, combinatorics, geometry, and number theory—requiring creative, multi-disciplinary reasoning. Google’s success demonstrates:

  • AI can now match top-tier human mathematicians in structured problem-solving.
  • Natural language models can handle complex formal logic without manual intervention.

OpenAI’s Controversial Approach

While Google worked with IMO officials to have its solutions officially graded, OpenAI assessed its own performance, claiming a gold medal without independent verification. Google DeepMind’s Thang Luong emphasized:

“We confirmed with the IMO that we solved five perfectly. Others might’ve lost a point and gotten silver.”

What’s Next for AI in Math?

  • Gemini Deep Think will soon be available to Google AI Ultra subscribers ($250/month).
  • DeepMind plans to refine the model for a perfect 6/6 score in future competitions.

The Verdict: AI Is Catching Up to Human Genius

Google’s achievement shows that AI can now compete at the highest levels of mathematical reasoning, though creativity and intuition still give humans an edge. With further advances, IMO gold medals may soon become a routine feat for machines.

Mark Edward

