Haocheng Wang (王浩丞)

Scientific Assistant @ ETH Zürich | Former Student Researcher @ ByteDance Seed & DeepSeek AI

I am a Scientific Assistant at the Department of Computer Science (D-INFK), ETH Zürich, focusing on Multi-gLLM Agents for autoformalization and formal reasoning in Theoretical Computer Science. My research interests lie in developing Large Language Models for formal mathematical reasoning, automated theorem proving, and the intersection of AI and mathematics.


About Me

I am currently working as a Scientific Assistant at ETH Zürich (D-INFK), focusing on verifiable Multi-gLLM Agents for autoformalization and formal reasoning in Theoretical Computer Science. I am working with Prof. Rasmus Kyng and Dr. Sorrachai Yingchareonthawornchai. Previously, I was a Student Researcher at ByteDance Seed and DeepSeek AI.

I received my Bachelor's degree in Mathematics and Applied Mathematics (Honours) from Xiamen University in 2025, where I formalized Auction Theory in Lean 4 and Mathlib4 as my final-year project under the supervision of Prof. Ma Jiajun.

Research Interests

My research focuses on:

  • Large Language Models for formal mathematical reasoning
  • Automated theorem proving with Lean 4 and Mathlib4
  • Multi-gLLM agents for autoformalization and formal reasoning
  • The intersection of AI and mathematics

Featured Projects

Z.Z. Ren, Zhihong Shao, Junxiao Song, Huajian Xin, Haocheng Wang, Wanjia Zhao, Liyue Zhang, Zhe Fu, Qihao Zhu, Dejian Yang, Z.F. Wu, Zhibin Gou, Shirong Ma, Hongxuan Tang, Yuxuan Liu, Wenjun Gao, Daya Guo, Chong Ruan
ArXiv Preprint

DeepSeek-Prover-V2: Advancing Formal Mathematical Reasoning via Reinforcement Learning for Subgoal Decomposition

DeepSeek-Prover-V2 Results

DeepSeek-Prover-V2 is a 671B-parameter model achieving SOTA results on miniF2F (88.9%) and PutnamBench (47 out of 658 problems). Our approach combines reinforcement learning with subgoal decomposition to enhance formal mathematical reasoning capabilities.

Huajian Xin, Z.Z. Ren, Junxiao Song, Zhihong Shao, Wanjia Zhao, Haocheng Wang, Bo Liu, Liyue Zhang, Xuan Lu, Qiushi Du, Wenjun Gao, Qihao Zhu, Dejian Yang, Zhibin Gou, Z.F. Wu, Fuli Luo, Chong Ruan
ICLR 2025

DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search

DeepSeek-Prover-V1.5 Results

A hybrid approach combining LLMs with Monte-Carlo tree search for automated theorem proving, achieving SOTA results on miniF2F (63.5%) and ProofNet (25.3%).

ByteDance Seed Team
ArXiv Preprint

Solving Formal Math Problems by Decomposition and Iterative Reflection

Formal Math Results

An agent-based training-free framework that enables general-purpose Large Language Models (gLLMs) to solve complex formal proofs in Lean 4.

Professional Experience

Scientific Assistant I
ETH Zürich
Aug 2025 - Present · Zürich, Switzerland
  • Building verifiable Multi-gLLM Agents for autoformalization and formal reasoning in Theoretical Computer Science
Student Researcher
ByteDance Co., Ltd.
Jan 2025 - Jul 2025 · Shanghai, China
  • Built a scoring and self-refinement agent pipeline for natural-language proofs
  • Proposed a sketch-incorporated long chain-of-thought formal reasoning method
  • Led data initiatives for DeltaProver
Student Researcher
DeepSeek AI Co., Ltd.
Jun 2024 - Sep 2024 · Beijing, China
  • Developed ProverBench, a domain-categorized benchmark with 325 problems for evaluating LLM theorem proving
  • Contributed to DeepSeek-Prover-V1.5 and V2 model development

Education

Xiamen University

Sep 2020 - Jun 2025

Bachelor of Science in Mathematics and Applied Mathematics (Honours)

Final Thesis: Formalization of Auction Theory using Lean 4 and Mathlib4
[GitHub]
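As a flavor of what formalizing auction theory in Lean 4 looks like, here is a minimal illustrative sketch (hypothetical names, not taken from the actual thesis code): modeling bidders with valuations and bids, and stating the winner condition of a sealed-bid auction.

```lean
import Mathlib.Data.Real.Basic

-- Illustrative sketch only: a sealed-bid auction over a bidder type I.
structure Auction (I : Type*) where
  valuation : I → ℝ  -- each bidder's private valuation
  bid : I → ℝ        -- each bidder's submitted bid

-- A bidder wins when no other bid exceeds theirs.
def isWinner {I : Type*} (A : Auction I) (i : I) : Prop :=
  ∀ j, A.bid j ≤ A.bid i
```

Statements such as the dominant-strategy property of second-price auctions can then be phrased as theorems over such structures and proved against Mathlib's order and real-number lemmas.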

Academic Service

Teaching Assistant

Invited Talks

Conference Reviewer

Workshops

Contact

Feel free to reach out to me via email at haocheng.wang [at] inf.ethz.ch or connect with me on GitHub.
