[Model] support VibeThinker model #9615
Conversation
Summary of Changes (Gemini Code Assist)
Hello @isLinXu, I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request integrates the VibeThinker-1.5B model into the framework, enabling users to leverage its specialized reasoning capabilities for tasks such as competitive programming. The changes ensure the model is properly recognized, configured with its chat format, and documented for easy discovery and use.
Code Review
This pull request adds support for the VibeThinker-1.5B model. The changes include adding the model to the documentation, defining a new chat template, and registering the model in the constants file. My review focuses on code maintainability and consistency. I've suggested a refactoring to reduce code duplication in the template definition and pointed out some minor ordering issues in the documentation files for better consistency.
| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |
| [TeleChat2](https://huggingface.co/Tele-AI) | 3B/7B/35B/115B | telechat2 |
| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |
| [VibeThinker-1.5B](https://huggingface.co/WeiboAI) | 1.5B | vibethinker |
register_template(
    name="vibethinker",
    format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
    format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
    format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
    format_function=FunctionFormatter(slots=["{{content}}<|im_end|>\n"], tool_format="qwen"),
    format_observation=StringFormatter(
        slots=["<|im_start|>user\n<tool_response>\n{{content}}\n</tool_response><|im_end|>\n<|im_start|>assistant\n"]
    ),
    format_tools=ToolFormatter(tool_format="qwen"),
    stop_words=["<|im_end|>"],
    replace_eos=True,
)
The new vibethinker template is almost an exact copy of the qwen template, with the only difference being the absence of a default_system message. This introduces code duplication. To improve maintainability, consider creating a copy of the qwen template and just overriding the default_system attribute. This avoids repeating the entire template definition.
from copy import deepcopy
vibethinker_template = deepcopy(TEMPLATES["qwen"])
vibethinker_template.default_system = ""
TEMPLATES["vibethinker"] = vibethinker_template
VibeThinker-1.5B
📁 GitHub | 🤖 ModelScope | 📄 Technical Report
Introduction
VibeThinker-1.5B is a 1.5-billion-parameter dense language model. With a total training cost of only $7,800 USD, it achieves reasoning performance comparable to that of much larger models such as GPT OSS-20B Medium.
Key Performance Data
💡 Mathematical Reasoning: On the three major math benchmarks AIME24, AIME25, and HMMT25, its scores (80.3, 74.4, and 50.4, respectively) all surpass those of the initial DeepSeek R1 model, which has over 400 times the parameters (scores of 79.8, 70.0, and 41.7, respectively).
🌱 Code Generation: It achieved scores of 55.9 on LiveCodeBench v5 and 51.1 on v6. Its v6 score slightly leads Magistral Medium (50.3), underscoring its strong reasoning performance.
🔁 On the AIME25 benchmark, VibeThinker-1.5B significantly extends the Pareto frontier of reasoning accuracy versus model scale, demonstrating that exceptional performance can be achieved with extreme parameter efficiency.
Training Pipeline
VibeThinker-1.5B's core innovation lies in the "Spectrum-to-Signal Principle" (SSP) training framework: it first explores solution diversity during the Supervised Fine-Tuning (SFT) stage, and then optimizes its policy to reinforce correct signals in the Reinforcement Learning (RL) stage. By systematically integrating these two phases, our approach establishes diversity as the central technical design principle, enabling VibeThinker-1.5B to achieve robust performance that surpasses conventional training paradigms.
Usage
Create a new file examples/train_full/vibethinker_sft.yaml with the following content:
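(The PR's exact YAML is not captured on this page; the following is a minimal sketch modeled on LLaMA-Factory's stock full-parameter SFT example configs. The model path and dataset names are assumptions, not values taken from the PR.)

```yaml
### model
# Assumed Hugging Face path; point to a local checkpoint if needed.
model_name_or_path: WeiboAI/VibeThinker-1.5B
trust_remote_code: true

### method
stage: sft
do_train: true
finetuning_type: full

### dataset
# Demo datasets shipped with LLaMA-Factory; replace with your own data.
dataset: identity,alpaca_en_demo
template: vibethinker
cutoff_len: 2048
preprocessing_num_workers: 16

### output
output_dir: saves/vibethinker-1.5b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
```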
Training
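The launch command is not shown on this page; with LLaMA-Factory's standard CLI, the run would be started as:

```bash
llamafactory-cli train examples/train_full/vibethinker_sft.yaml
```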
SFT results:

RL
Create a new file examples/train_lora/vibethinker_lora_dpo.yaml with the following content:
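(Again, the PR's YAML is not captured here; this sketch follows LLaMA-Factory's stock LoRA DPO example configs, with the model path and dataset as assumptions.)

```yaml
### model
model_name_or_path: WeiboAI/VibeThinker-1.5B  # assumed path
trust_remote_code: true

### method
stage: dpo
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
pref_beta: 0.1
pref_loss: sigmoid

### dataset
dataset: dpo_en_demo  # demo preference data; replace with your own
template: vibethinker
cutoff_len: 2048

### output
output_dir: saves/vibethinker-1.5b/lora/dpo
logging_steps: 10
save_steps: 500
plot_loss: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 5.0e-6
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
```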
Training
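As above, assuming the standard CLI:

```bash
llamafactory-cli train examples/train_lora/vibethinker_lora_dpo.yaml
```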