Model: OpenBuddy Falcon 7B
Command: python falcon_convert.py openbuddy-falcon-7b-v6-bf16 openbuddy-ggllm use-f32
Error:
* Loading model from: openbuddy-falcon-7b-v6-bf16
Vocab size: 70144
Hidden size: 4544
Number of heads: 71
Number of layers: 32
Number of head_kv: 1
Number of head_dim: 64
Traceback (most recent call last):
File "/home/paloma/Git/ggllm.cpp/falcon_convert.py", line 111, in <module>
text = bytearray([byte_decoder[c] for c in reverse_vocab[i]])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/paloma/Git/ggllm.cpp/falcon_convert.py", line 111, in <listcomp>
text = bytearray([byte_decoder[c] for c in reverse_vocab[i]])
~~~~~~~~~~~~^^^
KeyError: '能'
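The KeyError comes from the byte-level decode loop at line 111 of falcon_convert.py: every character of a vocab entry is looked up in a GPT-2-style byte_decoder, but OpenBuddy's extended vocabulary apparently contains entries stored as plain Unicode text (such as '能') that have no such mapping. Below is a minimal sketch of the lookup together with one possible fallback (UTF-8-encoding unmapped characters); the helper name token_to_bytes is hypothetical and this is an assumption about a workaround, not the fix actually used by the script.

```python
# Sketch of the failing lookup with a hedged fallback. Assumes byte_decoder
# is the usual GPT-2 bytes_to_unicode() reverse mapping (unicode char -> byte).
def token_to_bytes(token: str, byte_decoder: dict) -> bytearray:
    out = bytearray()
    for c in token:
        if c in byte_decoder:
            # Normal byte-level BPE path: each surrogate character maps back
            # to exactly one raw byte.
            out.append(byte_decoder[c])
        else:
            # Characters like '能' sit in the vocab as plain Unicode text, so
            # (as an assumed workaround) fall back to their UTF-8 encoding
            # instead of raising KeyError.
            out.extend(c.encode("utf-8"))
    return out
```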
ls -lh openbuddy-falcon-7b-v6-bf16/
total 13G
-rw-r--r-- 1 paloma paloma 992 Jul 18 19:11 config.json
-rw-r--r-- 1 paloma paloma 2,6K Jul 18 19:11 configuration_RW.py
-rw-r--r-- 1 paloma paloma 111 Jul 18 19:11 generation_config.json
-rw-r--r-- 1 paloma paloma 47K Jul 18 19:11 modelling_RW.py
-rw-r--r-- 1 paloma paloma 9,4G Jul 18 19:05 pytorch_model-00001-of-00002.bin
-rw-r--r-- 1 paloma paloma 3,7G Jul 18 19:05 pytorch_model-00002-of-00002.bin
-rw-r--r-- 1 paloma paloma 17K Jul 18 19:11 pytorch_model.bin.index.json
-rw-r--r-- 1 paloma paloma 28 Jul 18 19:11 README.md
-rw-r--r-- 1 paloma paloma 281 Jul 18 19:11 special_tokens_map.json
-rw-r--r-- 1 paloma paloma 180 Jul 18 19:11 tokenizer_config.json
-rw-r--r-- 1 paloma paloma 3,5M Jul 18 19:11 tokenizer.json
Operating System: Arch Linux