Anurag Agrawal
11 months ago
I tried, but couldn't get it to work. Most likely because I need to use the 'spawn' method for GPU, and llama_index might be setting it to 'fork' somewhere in the code? I'm getting this error: RuntimeError: context has already been set
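That error comes from Python's standard multiprocessing module, not from any one library: `set_start_method()` may only be called once per process, so a second call (by your code or by a library that got there first) raises `RuntimeError: context has already been set`. A minimal stdlib-only sketch of the failure and two common workarounds (nothing here is llama_index-specific):

```python
import multiprocessing as mp

# Workaround 1: force=True overrides whatever start method an earlier
# import or call already fixed for this process.
mp.set_start_method("spawn", force=True)

# A second, non-forced call reproduces the error from the post.
try:
    mp.set_start_method("spawn")
except RuntimeError as e:
    print(e)  # context has already been set

# Workaround 2: sidestep the global setting entirely with a private
# context, which does not conflict with other libraries' configuration.
ctx = mp.get_context("spawn")
print(ctx.get_start_method())  # spawn
```

`get_context("spawn")` is generally the safer option in library code, since it leaves the process-wide default untouched for everyone else.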
Logan M
11 months ago
I mean, it might depend on which LLM or other features you are using. I've never seen fork or spawn used anywhere in the codebase.
Anurag Agrawal
11 months ago
I am using llama-cpp for the LLM and BGE for embeddings. I am also using torch. I'll go and check if torch sets this method explicitly. Thanks @Logan M
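One quick way to check whether torch (or any other import) fixes the start method is to read it before and after the import with `get_start_method(allow_none=True)`, which returns `None` while nothing has set it yet. A diagnostic sketch; the stand-in import is an assumption, the suspect import goes where the comment says:

```python
import multiprocessing as mp

# None means no start method has been fixed in this process yet.
before = mp.get_start_method(allow_none=True)

# import torch  # <- place the suspect import here (torch, llama_index, ...)
import json      # harmless stand-in so the sketch runs without torch installed

after = mp.get_start_method(allow_none=True)
print("before:", before, "after:", after)

# If `after` differs from `before`, that import fixed the start method;
# mp.set_start_method("spawn", force=True) afterwards will override it.
```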