Hi there! At Anterion we've been working on merging SWE-agent and OpenDevin to explore SWE-agent's open-ended problem-solving capabilities. We're excited to share our work with the wider community and see how well SWE-bench-benchmarked agents perform on general programming use cases!
In our experience, the guard-rail techniques used in SWE-agent translate well to solving basic real-world tasks and could be integrated into more holistic Devin-style solutions in the near future. The next step we'd like to take is to get community involvement in the project and build on SOTA agent approaches.
Going one step further, we would like to try adding RL-based methods such as the ones described in:
The current SWE-agent repo already supports reward, as it's built around the Gymnasium (formerly OpenAI Gym) environment interface and returns a reward at each environment step. We'd love to see what the RL community can come up with using the powerful SWE-agent as its basis! We'll also be adding Ollama support soon so all testing can be done completely locally.
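To make that concrete, here is a minimal sketch of what a per-step-reward loop over a Gymnasium-style environment looks like. The environment id `SWEEnv-v0`, its constructor arguments, and the random policy are illustrative assumptions rather than the actual SWE-agent API; the point is only that the standard `reset()`/`step()` interface surfaces a reward on every step, which is exactly what RL methods need.

```python
import gymnasium as gym

# Hypothetical env id and kwargs -- the real SWE-agent environment name and
# constructor arguments may differ. This only illustrates the Gymnasium loop.
env = gym.make("SWEEnv-v0", repo="astropy/astropy", issue_id=12345)

obs, info = env.reset()
episode_return = 0.0

for _ in range(50):  # cap the number of agent steps per episode
    # An RL policy (or an LLM-backed agent) would choose the action here;
    # we sample randomly just to keep the sketch self-contained.
    action = env.action_space.sample()

    # Gymnasium returns a reward on every step, which is what makes
    # per-step RL methods straightforward to layer on top of SWE-agent.
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward

    if terminated or truncated:
        break

env.close()
print(f"episode return: {episode_return}")
```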
We're excited to see more collaborators join us as well! GitHub: Anterion GitHub Page
Top comments (2)
Great work!
Hi, thanks!