Find answers from the community

Updated 6 months ago

[Question]: benchmark for the llama_inde...

At a glance

The community member has posted a new issue and is seeking help to answer a question. A fellow community member has suggested checking if the GPU is being utilized, and provided a code snippet to test this by making a simple query to the language model and observing the GPU usage.

Useful resources
This is my new issue; could anyone help me answer the question? https://github.com/run-llama/llama_index/issues/12143
1 comment
Can you check if your GPU is being utilised?

You can make a simple query to your LLM, e.g. print(llm.complete("hey how are you")) (note that in llama_index, llm.chat expects a list of ChatMessage objects rather than a plain string, so llm.complete is the simpler one-liner for a quick test).

Then check whether the GPU is being utilised.
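As a rough sketch of the check above (assuming an NVIDIA GPU with nvidia-smi on the PATH; the helper function names are hypothetical, not part of llama_index), you could issue the test query and then read per-GPU utilization from nvidia-smi's CSV output:

```python
import shutil
import subprocess


def parse_gpu_utilization(csv_text: str) -> list[int]:
    # Parse the output of:
    #   nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits
    # which is one integer percentage per GPU, one per line.
    return [int(line.strip()) for line in csv_text.strip().splitlines() if line.strip()]


def gpu_utilization() -> list[int]:
    # Return per-GPU utilization percentages, or [] if nvidia-smi is unavailable.
    if shutil.which("nvidia-smi") is None:
        return []
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader,nounits"],
        capture_output=True,
        text=True,
        check=True,
    ).stdout
    return parse_gpu_utilization(out)


if __name__ == "__main__":
    # Hypothetical usage: make a simple query, then check utilization.
    # llm = ...  # your llama_index LLM instance
    # print(llm.complete("hey how are you"))
    print(gpu_utilization())
```

If the utilization stays at 0 while the model is generating, the model is likely running on CPU; watching nvidia-smi in a second terminal during the query gives the same signal.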