forked from ggml-org/llama.cpp
Description
Prerequisites
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.
Your exact command line to replicate the issue
./falcon_main ....
Environment and Context
- Physical (or virtual) hardware you are using, e.g. for Linux: Intel CPU
- Operating System, e.g. for Linux: CentOS
Steps to Reproduce
- ./falcon_main ...
- observed in the log: "falcon_model_load_internal: using CUDA for GPU acceleration"
- desired: "falcon_model_load_internal: using CUDA 11.8 for GPU acceleration"