The International Olympiad in Informatics (IOI) and AI Progress
The International Olympiad in Informatics (IOI) is widely regarded as one of the most prestigious competitions in algorithmic programming, and it has become a key benchmark for evaluating the reasoning and problem-solving capabilities of large language models (LLMs). Reaching gold-medal performance at the IOI is therefore a significant milestone for AI in a highly competitive setting.
Achievements in AI: The Gold Medal Benchmark
Proprietary models have recently reported gold-level performance at the IOI, but their methodologies often remain undisclosed, which makes the results difficult to reproduce and build on. The open-weight model gpt-oss-120b has now reached gold-medal performance at IOI 2025 while operating under the same constraints as human contestants, including the limit of 50 submissions per problem.
The result was obtained with GenCluster, a transparent and reproducible test-time compute framework: a scalable pipeline that identifies the most promising solutions among thousands of candidates generated in parallel, using behavioral clustering and a tournament-style ranking.
The Performance of gpt-oss-120b at IOI 2025
Using gpt-oss-120b as the base model, GenCluster achieved a final score of 446.75 at IOI 2025, above the gold-medal threshold of 438.3. This is the first reported gold-level performance at the IOI with an open-weight model, and it establishes a transparent benchmark for future research in competitive programming and AI reasoning.
Scaling Trends in AI Performance
Our experiments show a clear scaling trend: larger candidate pools lead to higher scores in both constrained and unconstrained settings. This highlights the benefit of scaling test-time compute with GenCluster and suggests a path beyond gold-level performance.
How Does GenCluster Work?
The GenCluster framework operates in four stages, narrowing thousands of candidate solutions down to the most effective ones under IOI's submission constraints.
Parallel Candidate Generation
The process begins by generating thousands of candidate solutions for each problem concurrently. Instead of relying on a single attempt, GenCluster explores a large, diverse pool of possibilities, which greatly increases the chance that at least one near-optimal solution exists in the pool. At this stage, the model achieved a Score@5000 of 499.51 on IOI 2025; this is the score attainable with perfect selection, and it sets the upper bound for the subsequent choice of 50 submissions per problem.
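A minimal sketch of this stage, with the LLM call stubbed out (the function `generate_candidate`, the problem name, and the pool size are illustrative assumptions, not GenCluster's actual API):

```python
import concurrent.futures
import random


def generate_candidate(problem: str, seed: int) -> str:
    """Stand-in for one LLM sampling call; a real pipeline would
    query the model here with a distinct seed per sample."""
    rng = random.Random(seed)
    return f"// candidate for {problem}, variant {rng.randint(0, 9)}"


def generate_pool(problem: str, n: int, workers: int = 8) -> list[str]:
    """Sample n candidate solutions concurrently."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(lambda seed: generate_candidate(problem, seed),
                           range(n)))


pool = generate_pool("day1_task1", n=100)
```

Because each sample is independent, generation parallelizes trivially; in practice the pool would hold thousands of candidates per problem rather than the 100 shown here.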
Behavioral Clustering
Next, the generated solutions are grouped by behavior: each candidate is run on a set of LLM-generated test cases, and candidates that produce identical outputs are placed in the same cluster. This reduces thousands of individual programs to a manageable set of distinct problem-solving strategies.
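The grouping step can be sketched as keying candidates by their output signature across the test inputs. The toy `run` executor below is an assumption; a real pipeline would compile and sandbox each candidate program:

```python
from collections import defaultdict


def cluster_by_behavior(candidates, test_inputs, run):
    """Group candidates whose outputs agree on every test input.
    `run(candidate, test_input)` executes one candidate on one input."""
    clusters = defaultdict(list)
    for cand in candidates:
        # repr() makes list-valued outputs hashable for the dict key.
        signature = tuple(repr(run(cand, t)) for t in test_inputs)
        clusters[signature].append(cand)
    return list(clusters.values())


# Toy executor: two "correct" candidates sort their input, one does not.
def run(cand, xs):
    return xs if cand == "buggy" else sorted(xs)


clusters = cluster_by_behavior(["sol_a", "sol_b", "buggy"],
                               [[3, 1, 2], [5, 4]], run)
# sol_a and sol_b share a behavioral signature; buggy stands alone.
```

The key property is that candidates with textually different code but identical input/output behavior collapse into one cluster, so later stages compare strategies rather than individual programs.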
Ranking with Tournament
A tournament then determines the winning strategy. A representative solution from each cluster competes in head-to-head matchups judged by the LLM, and clusters are ranked by their number of wins, so the most promising strategies rise to the top.
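One simple tournament variant is all-pairs with win counting; this is a sketch under that assumption (the exact matchup schedule and the judge are not specified here, so the score-based judge below is purely illustrative):

```python
from itertools import combinations


def rank_clusters(representatives, judge):
    """All-pairs tournament: judge(a, b) returns whichever of the two
    representatives it prefers (in GenCluster, an LLM acts as judge).
    Clusters are ranked by total wins, best first."""
    wins = {r: 0 for r in representatives}
    for a, b in combinations(representatives, 2):
        wins[judge(a, b)] += 1
    return sorted(representatives, key=lambda r: wins[r], reverse=True)


# Stub judge preferring hypothetical subtask scores (for illustration only).
scores = {"greedy": 1, "dp": 3, "brute": 2}
ranking = rank_clusters(["greedy", "dp", "brute"],
                        lambda a, b: max(a, b, key=scores.get))
# ranking: ['dp', 'brute', 'greedy']
```

Ranking whole clusters by one representative keeps the number of LLM judgments proportional to the number of distinct strategies, not the number of raw candidates.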
Submission Strategy
Finally, a round-robin submission strategy makes the most of IOI's strict 50-attempt limit per problem. Solutions from the highest-ranked clusters are submitted in turn, beginning with the most complex subtasks. Within each cluster, solutions are ordered by the length of their reasoning trace, so the strongest candidates are evaluated first and every submission is used efficiently.
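The round-robin interleaving over ranked clusters can be sketched as follows (the cluster labels and the budget of four are illustrative; each cluster's list is assumed pre-sorted, e.g. by reasoning-trace length, so its best entry goes first):

```python
def submission_order(ranked_clusters, budget=50):
    """Interleave ranked clusters round-robin until the submission
    budget is spent: top remaining solution of cluster 1, then of
    cluster 2, and so on, cycling back to cluster 1."""
    order = []
    queues = [list(c) for c in ranked_clusters]
    while len(order) < budget and any(queues):
        for q in queues:
            if q and len(order) < budget:
                order.append(q.pop(0))
    return order


# Three ranked clusters, budget of four submissions:
plan = submission_order([["a1", "a2"], ["b1"], ["c1", "c2", "c3"]], budget=4)
# plan: ['a1', 'b1', 'c1', 'a2']
```

Cycling across clusters hedges against the ranking being wrong at the top: even if the first cluster's strategy is flawed, the second- and third-ranked strategies still get early attempts within the 50-submission budget.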
The Leading Open-Weight Model for IOI 2025
In our evaluation of leading open-weight models on competitive programming benchmarks, gpt-oss-120b consistently outperformed its peers and was the only model to reach gold-medal performance when scaled to 5,000 generations per problem. The gpt-oss family also shows the strongest gains as the number of generations grows, indicating that it scales effectively with test-time compute.
The Influence of the Maximum Number of Tokens
Previous studies have shown that longer reasoning traces often correlate with higher accuracy on hard problems, and our findings reinforce this trend. When we varied the maximum generation length, the gpt-oss models kept improving up to their token limits, whereas Qwen3-235B-A22B plateaued at around 48K tokens, well below the 80K length recommended by its authors.
Interestingly, the gpt-oss models not only produced longer, more detailed reasoning paths but also achieved the strongest overall performance, surpassing DeepSeek-R1-0528 and Qwen3-235B-A22B once larger compute budgets were applied.
Resource Availability in AI Research
This work demonstrates that open-weight models, combined with a scalable test-time compute framework, can closely approach the performance of leading closed systems on the IOI benchmark. By releasing a fully reproducible pipeline built entirely on open-weight models, we aim to make advanced reasoning research more accessible and verifiable, and to encourage future work that uses test-time compute to extend the capabilities of open models in algorithmic problem-solving.
Frameworks like GenCluster point to a promising trajectory for the intersection of competitive programming and AI: with transparent benchmarks and open, collaborative progress, there is ample room for future research.
Inspired by: Source

