Together AI has released DeepSWE, a state-of-the-art, fully open-sourced software engineering agent trained entirely with reinforcement learning (RL). Built on top of the Qwen3-32B language model, DeepSWE achieves 59% accuracy on the SWE-Bench Verified benchmark with test-time scaling and 42.2% Pass@1, topping the leaderboard among open-weight models. The release marks a significant shift for Together AI, away from traditional pretraining pipelines and toward autonomous language agents that continuously learn and improve through real-world feedback.
Reinforcement Learning Meets Code Generation
DeepSWE is the result of post-training the Qwen3-32B foundation model with rLLM, Agentica's modular reinforcement learning framework for language agents. Unlike conventional supervised fine-tuning, rLLM lets agents adapt to real-world workflows through experience. DeepSWE was trained specifically to solve complex software engineering tasks through a feedback-driven loop rather than from static datasets.
The training pipeline incorporates Agentica's R2E-Gym dataset, a software engineering benchmark designed for RL-style agent development. The framework trains language models against action-oriented objectives, such as fixing bugs, completing functions, and editing code, rather than merely predicting next-token distributions. This aligns DeepSWE more closely with how human engineers iterate and learn from outcomes.
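The article does not reproduce rLLM's actual interfaces, but the outcome-driven loop it describes can be sketched in a few lines. The sketch below is purely illustrative: every function name is a hypothetical stand-in, and the reward is simply whether the project's test suite passes after the agent's edits.

```python
import random

def propose_patch(policy, issue):
    """Hypothetical stand-in: the LLM agent reads the issue and edits the repository."""
    return f"candidate patch for {issue}"

def run_test_suite(patch):
    """Hypothetical stand-in: execute the project's tests against the patched repository."""
    return random.random() < 0.3  # placeholder for a real pass/fail outcome

def policy_update(policy, trajectory, reward):
    """Hypothetical stand-in: one RL update (e.g. a policy-gradient step) on the trajectory."""
    policy["steps"] = policy.get("steps", 0) + 1

policy = {"base_model": "Qwen3-32B", "steps": 0}
issues = ["fix-null-deref", "complete-parser-fn", "update-api-call"]

for epoch in range(2):
    for issue in issues:
        patch = propose_patch(policy, issue)            # the agent acts in the environment
        reward = 1.0 if run_test_suite(patch) else 0.0  # feedback comes from execution, not labels
        policy_update(policy, (issue, patch), reward)
```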

Performance Benchmarks and Capabilities
On SWE-Bench Verified, one of the most rigorous benchmarks for software engineering agents, DeepSWE scores 59% with test-time scaling, significantly outperforming earlier open-weight models. In Pass@1 evaluations, which measure the probability that the agent solves a problem correctly on the first attempt, DeepSWE reaches 42.2%.
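Pass@1 is usually reported with the standard unbiased pass@k estimator introduced for code benchmarks; whether the DeepSWE evaluation uses this exact harness is not stated here, but the computation itself is simple:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator for a single problem.

    n: total attempts sampled, c: attempts that passed the tests, k: attempt budget.
    """
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 8 rollouts on one task, 3 of which resolve the issue
print(pass_at_k(n=8, c=3, k=1))  # 0.375; the benchmark score averages this over all tasks
```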
These results underscore the power of RL-based training for agentic behavior, particularly in domains that demand iterative reasoning and precise outputs, such as code synthesis. The model architecture, inherited from Qwen3-32B, allows it to scale effectively while remaining practical for real-world applications.

Open Source and Reproducibility at Its Core
One of the standout features of this release is its full transparency. Together AI and Agentica have open-sourced not only the DeepSWE model but also the entire training recipe, including the rLLM framework, the R2E-Gym dataset, and the training configuration scripts. This promotes reproducibility and invites the broader research and developer communities to extend or build upon DeepSWE without restriction.
Developers can access DeepSWE and the rLLM framework through the project's public model and code repositories.
From Language Reasoners to Language Agents
DeepSWE marks a philosophical and practical shift: from building models that reason about language to building agents that learn through interaction. Traditional LLMs show strong reasoning capabilities but often cannot adapt to feedback or improve with use. Reinforcement learning allows these models not only to perform well at launch but to get better over time, adapting to new problem distributions and domains.
This approach also opens the door to local deployment. Because DeepSWE is fully open-source and modular, it can be extended and retrained for organization-specific use cases. Developers and researchers can build their own agents on top of DeepSWE using rLLM to serve diverse domains such as web navigation, robotics, or autonomous research assistance.
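As a rough illustration of local use, an open-weight checkpoint like this can be loaded with the standard Hugging Face transformers API. The model id below is an assumption based on the public release (verify it against the official model card), and a 32B model requires substantial GPU memory or quantization:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id; check the official release before use.
model_id = "agentica-org/DeepSWE-Preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Fix the failing test in utils/date_parser.py and explain the change."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```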
Conclusion
DeepSWE is a milestone in the evolution of generative AI for software engineering. By applying reinforcement learning to large language models such as Qwen3-32B and releasing the entire training infrastructure, Together AI is enabling a future in which agents are not just pretrained and deployed but continually trained and improved. This leap from language understanding to action-oriented agency has significant implications for programming, automation, and intelligent system design.
All credit for this research goes to the researchers of this project.