This repository contains a series of experiments focused on fine-tuning, quantizing, and evaluating language models, including large language models (LLMs) and multimodal language models (MLLMs). It serves as a collection of hands-on examples exploring the practical aspects of working with these models. The projects cover a range of topics, including:
- Model Fine-Tuning: Adapting pre-trained models to specific tasks and datasets.
- Model Quantization: Techniques for reducing model size and improving inference speed.
- Model Evaluation: Profiling and comparing the performance of different models.
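As a flavor of the kind of technique explored here, the sketch below shows the core idea behind weight quantization: mapping float weights to a compact integer range plus a scale factor. This is a minimal, self-contained NumPy illustration of symmetric per-tensor int8 quantization, not code from any project in this repository; the function names are illustrative.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    # Scale chosen so the largest-magnitude weight maps to +/-127.
    scale = float(np.abs(weights).max()) / 127.0
    scale = max(scale, 1e-12)  # guard against an all-zero tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 tensor and scale."""
    return q.astype(np.float32) * scale

# Round-trip a small random weight matrix.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error per element is at most half a quantization step.
max_err = float(np.max(np.abs(w - w_hat)))
assert max_err <= scale / 2 + 1e-6
```

The int8 tensor takes a quarter of the memory of the float32 original, at the cost of a bounded per-element rounding error; real quantization schemes (per-channel scales, activation quantization, GPTQ-style calibration) build on this same primitive.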
The repository is organized into several project directories, each focusing on a specific test or experiment. Please refer to the README.md file within each project's folder for detailed information and instructions.
This project is licensed under the Apache License 2.0. See the LICENSE file for more details.
Contributions, issues, and feature requests are welcome. Feel free to check the issues page if you want to contribute.