
Updated 11 months ago

I tried but couldn't get it to work. Most likely it's because I need to use the 'spawn' start method for the GPU, and llama_index might be setting it to 'fork' somewhere in the code? Getting this error: RuntimeError: context has already been set
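One way around this error, assuming it comes from a second `set_start_method` call after some import has already fixed the method: `multiprocessing.get_context` returns an independent context and never raises "context has already been set". A minimal sketch (the `worker` function here is a stand-in for whatever GPU task is being parallelized):

```python
import multiprocessing as mp

def worker(x):
    # Trivial stand-in task; the real code would run GPU inference here.
    return x * x

if __name__ == "__main__":
    # get_context("spawn") gives a private context with the spawn start
    # method, regardless of what another library set globally, so no
    # "RuntimeError: context has already been set" is raised.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(worker, [1, 2, 3]))  # [1, 4, 9]
```

Note the `if __name__ == "__main__"` guard: with spawn, child processes re-import the main module, so unguarded top-level code would run again in every worker.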
2 comments
I mean, it might depend on which LLM or other features you are using. I've never seen fork or spawn used anywhere in the codebase.
I am using llama-cpp for the LLM and BGE for embeddings. I am also using torch, so I'll go check whether torch sets this method explicitly. Thanks @Logan M
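A quick way to check whether some import has already fixed the start method, and to override it if so, is sketched below (assuming plain CPython `multiprocessing`; `set_start_method(..., force=True)` overwrites an already-set method instead of raising the error above):

```python
import multiprocessing as mp

if __name__ == "__main__":
    # allow_none=True reports the current method without implicitly
    # fixing it to the platform default (None means "not set yet").
    print(mp.get_start_method(allow_none=True))

    # force=True replaces whatever a library may have set earlier,
    # avoiding "RuntimeError: context has already been set".
    mp.set_start_method("spawn", force=True)
    print(mp.get_start_method())  # spawn
```

Running this before creating any pools or processes makes every later worker use spawn, which is what CUDA requires.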