## Evaluation Log

This log tracks our evaluation results and associated costs.

| Date | Executed by | Version | Dataset | # Instances | Model | Resolved Rate | API Cost | Notes |
|------------|-------------|------------------------|-----------------------------------|-------------|----------------------|---------------|----------|-----------------------------|
| 2025-07-08 | Yue Pan | v1.0 | SWE-Bench Lite | 300 | DeepSeek V3 | 28.67% | $70.05 | initial version |
| 2025-07-18 | Yue Pan | v1.0 | SWE-Bench Multilingual | 300 | DeepSeek V3 | 13.67% | $113.60 | initial version |
| 2025-07-31 | Yue Pan | v1.1 | SWE-Bench Lite | 300 | GPT-4o | 30.00% | $1569.73 | improved context retrieval |
| 2025-08-09 | Zhaoyang | v1.0 | SWE-Bench Verified | 500 | Devstral Medium 2507 | 33.00% | - | |
| 2025-08-11 | Yue Pan | v1.1 | SWE-Bench Verified | 500 | Devstral Medium 2507 | 38.40% | - | |
| 2025-11-06 | Yue Pan | v1.3 (with Athena) | dcloud347/SWE-bench_verified_lite | 50 | GPT-5 + gpt-4o | 70.00% | $200.79 | |
| 2025-11-06 | Yue Pan | v1.3 (without Athena) | dcloud347/SWE-bench_verified_lite | 50 | GPT-5 + gpt-4o | 56.00% | $367.73 | |
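For reference, a minimal sketch of how the Resolved Rate and a per-instance cost figure could be computed from raw counts. This assumes the standard SWE-Bench definition (resolved instances divided by total instances); the function names are illustrative, not part of our codebase.

```python
# Sketch under the assumption that Resolved Rate = resolved count / total instances.
# Function names are hypothetical, for illustration only.

def resolved_rate(num_resolved: int, num_instances: int) -> float:
    """Percentage of instances whose generated patch resolved the issue."""
    return round(num_resolved / num_instances * 100, 2)

def cost_per_instance(total_cost: float, num_instances: int) -> float:
    """Average API cost per evaluated instance, in dollars."""
    return round(total_cost / num_instances, 4)

# Example consistent with the 2025-07-08 row: 86 resolved out of 300.
print(resolved_rate(86, 300))         # 28.67
print(cost_per_instance(70.05, 300))  # 0.2335
```

Tracking cost per instance alongside the resolved rate makes runs on differently sized datasets (300 vs. 50 instances) easier to compare.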