[Question]: benchmark for the llama_inde...
Updated 3 months ago
Xiao
9 months ago
This is my new issue — could anyone help me answer the question?
https://github.com/run-llama/llama_index/issues/12143
1 comment
WhiteFang_Jr
9 months ago
Can you check whether your GPU is being utilised? Make a simple query to your LLM, e.g. print(llm.chat("hey how are you")), and then check whether the GPU is being used while it runs.
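The check described in the reply can be sketched as below. The nvidia-smi query flags used here are standard, but the `llm` object is assumed to come from your own llama_index setup, and whether `llm.chat` accepts a bare string depends on your llama_index version (newer versions expect a list of ChatMessage objects; `llm.complete` takes a plain string).

```python
import subprocess

def parse_gpu_stats(csv_text: str):
    """Parse 'utilization.gpu, memory.used' CSV rows as emitted by nvidia-smi."""
    stats = []
    for line in csv_text.strip().splitlines():
        util, mem = (field.strip() for field in line.split(","))
        stats.append({"util_pct": int(util), "mem_mib": int(mem)})
    return stats

def gpu_stats():
    """Query nvidia-smi for per-GPU utilisation (%) and memory use (MiB)."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_gpu_stats(out)

# Usage (requires an NVIDIA GPU and driver; llm is your llama_index LLM):
# before = gpu_stats()
# print(llm.chat("hey how are you"))
# after = gpu_stats()
# print(before, after)  # memory.used should rise if the model runs on the GPU
```

If memory use and utilisation stay flat while the query runs, the model is most likely executing on CPU, which would explain slow benchmark numbers.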