hiyouga/LLaMA-Factory: Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Easily fine-tune 100+ large language models with zero-code CLI and Web UI
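A minimal sketch of the zero-code workflow. The `llamafactory-cli` subcommands come from the LLaMA-Factory README; the package name and the example YAML path are assumptions and may differ by version:

```shell
# Install LLaMA-Factory (assumed PyPI name; a cloned repo with `pip install -e .` also works)
pip install llamafactory

# Launch the Web UI (LLaMA Board) for point-and-click fine-tuning
llamafactory-cli webui

# Or fine-tune from the CLI with a YAML config -- no code required
# (example config path as shipped in the repo's examples/ directory)
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```

All training options (model, dataset, LoRA rank, learning rate, etc.) live in the YAML file, which is what makes the workflow "zero-code."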
Public notes from activescott tagged with both #fine-tuning and #code
We introduce SWE-bench, an evaluation framework consisting of 2,294 software engineering problems drawn from real GitHub issues and corresponding pull requests across 12 popular Python repositories. Given a codebase along with a description of an issue to be resolved, a language model is tasked with editing the codebase to address the issue. Resolving issues in SWE-bench frequently requires understanding and coordinating changes across multiple functions, classes, and even files simultaneously, calling for models to interact with execution environments, process extremely long contexts, and perform complex reasoning that goes far beyond traditional code generation tasks.
Easy-to-use, well-documented fine-tuning. Optimized for NVIDIA GPUs, with AMD support and Apple M-series support in the works.