learnbydoingwithsteven/Model_Tests

AI Model Tests and Experiments

This repository contains a series of experiments and tests focused on fine-tuning, quantizing, and evaluating open-source language models (LLMs, MLLMs, and more).

About This Project

This repository serves as a collection of hands-on examples and explorations into the practical aspects of working with AI models. The projects cover a range of topics, including:

  • Model Fine-Tuning: Adapting pre-trained models to specific tasks and datasets.
  • Model Quantization: Techniques for reducing model size and improving inference speed.
  • Model Evaluation: Profiling and comparing the performance of different models.
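To make the quantization topic concrete, here is a minimal sketch of symmetric int8 post-training quantization in pure Python. This is an illustration of the general technique only, not code from this repository; all names (`quantize_int8`, `dequantize`) are invented for the example.

```python
def quantize_int8(weights):
    """Symmetric quantization: map floats onto the integer range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0  # one scale for the whole tensor
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return [v * scale for v in q]

weights = [0.82, -1.54, 0.03, 2.01, -0.66]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding to integers loses at most scale / 2 per weight.
worst = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(worst, 4))
```

Real quantization schemes (per-channel scales, zero points, 4-bit formats such as GPTQ or GGUF) build on this same map-round-clip idea while storing extra calibration metadata.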

Projects

The repository is organized into several project directories, each focusing on a specific test or experiment. Please refer to the README.md file within each project's folder for detailed information and instructions.

License

This project is licensed under the Apache License 2.0. See the LICENSE file for more details.

Contributing

Contributions, issues, and feature requests are welcome. Feel free to check the issues page if you want to contribute.
