Find answers from the community

Updated 3 months ago

[Question]: benchmark for the llama_inde...

This is my new issue; could anyone help me answer the question? https://github.com/run-llama/llama_index/issues/12143
1 comment
Can you check whether your GPU is being utilised?

You can make a simple query to your LLM, e.g. `print(llm.chat("hey how are you"))`,

then watch GPU utilisation while the query runs.
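A minimal sketch of that check, assuming you have an `llm` object already set up (note that llama_index chat APIs typically expect a list of `ChatMessage` objects rather than a bare string; the `gpu_utilization` helper below is an illustrative assumption that shells out to `nvidia-smi`, which must be on your PATH):

```python
import subprocess

def parse_gpu_utilization(csv_text):
    # nvidia-smi with csv,noheader,nounits prints one integer per GPU,
    # e.g. "87\n0\n" for a two-GPU machine.
    return [int(line) for line in csv_text.split() if line]

def gpu_utilization():
    """Return per-GPU utilisation percentages reported by nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_gpu_utilization(out)

# Hypothetical usage alongside a llama_index LLM (not run here):
#   from llama_index.core.llms import ChatMessage
#   print(llm.chat([ChatMessage(role="user", content="hey how are you")]))
#   print(gpu_utilization())  # non-zero values while the model is generating
```

If the utilisation stays at 0 during generation, the model is most likely running on CPU (e.g. model layers not offloaded to the GPU), which would explain poor benchmark numbers.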