
I'm on Windows 10. ProctorAI works fine when I use the OpenAI API with gpt-4o, but that's too expensive for me.
I launched llava, then started ProctorAI and selected ollama as the main model (no router model).
Let me know if any more information is needed.
00:00:03 - Traceback (most recent call last):
  File "E:\Proctor\ProctorAI-windows\src/main.py", line 275, in <module>
    main(
  File "E:\Proctor\ProctorAI-windows\src/main.py", line 235, in main
    control_sequence(*control_args)
  File "E:\Proctor\ProctorAI-windows\src/main.py", line 149, in control_sequence
    determination, total_cost = model_pipeline(
                                ^^^^^^^^^^^^^^^
  File "E:\Proctor\ProctorAI-windows\src/main.py", line 31, in model_pipeline
    response = model.call_model(
               ^^^^^^^^^^^^^^^^^
  File "E:\Proctor\ProctorAI-windows\src\api_models.py", line 347, in call_model
    response_string = data["message"]["content"]
                      ~~~~^^^^^^^^^^^
KeyError: 'message'
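For what it's worth, the `KeyError: 'message'` means the JSON that came back from the local server had no `"message"` key at all. When Ollama's `/api/chat` endpoint fails (e.g. the requested model isn't pulled, or it runs out of memory), it returns `{"error": "..."}` instead of a chat message; another common cause is leaving streaming on, in which case the reply is a sequence of JSON chunks rather than one object. A defensive parse along these lines would surface the real error instead of the bare `KeyError` (this is only a sketch; `parse_ollama_chat_response` is a hypothetical helper, not ProctorAI's actual code):

```python
def parse_ollama_chat_response(data: dict) -> str:
    """Extract the assistant text from an Ollama /api/chat JSON reply.

    Assumes the request was made with "stream": false, so the whole
    reply is a single JSON object rather than newline-delimited chunks.
    """
    # On failure Ollama responds with {"error": "..."} and no "message"
    # key -- exactly the shape that triggers the KeyError above.
    if "error" in data:
        raise RuntimeError(f"Ollama returned an error: {data['error']}")
    message = data.get("message")
    if not isinstance(message, dict) or "content" not in message:
        raise RuntimeError(f"Unexpected Ollama response shape: {data}")
    return message["content"]


if __name__ == "__main__":
    # Normal reply: the assistant text is nested under message.content.
    ok = {"model": "llava", "message": {"role": "assistant", "content": "hi"}}
    print(parse_ollama_chat_response(ok))

    # Error reply: raises with the server's actual error text.
    try:
        parse_ollama_chat_response({"error": "model 'llava' not found"})
    except RuntimeError as exc:
        print(exc)
```

Checking the raw JSON the server returns (e.g. by logging `data` before indexing into it) should show whether it's an `"error"` payload or a streaming chunk.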