Find answers from the community

Alex L
I think I stumbled upon an issue.
Looks like when llama_index.llms.OpenAI gets serialized, it also serializes the OpenAI API key... 😱
I'm using WandbCallbackHandler so that it logs traces to wandb, and I can clearly see my OpenAI key up there (see screenshot).
Is that expected, or am I doing something wrong?
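
For reference, a minimal sketch of what seems to be happening, assuming the pre-0.10 API where the OpenAI LLM is a pydantic model whose fields include api_key (the key and the .dict() call here are illustrative, not taken from the screenshot):

```python
# Sketch (assumed pre-0.10 API): the OpenAI LLM is a pydantic model with an
# api_key field, so plain serialization carries the key along with the config.
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", api_key="sk-...")  # placeholder key
payload = llm.dict()  # pydantic-style dump, as a callback handler might perform
print("api_key" in payload)  # True: the key travels with the serialized LLM
```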
7 comments
Hi everyone!
I can't seem to find a way to separate indexing and querying with LlamaIndex.
The setup is the following:
I have a separate indexing process that uses HuggingFace embeddings (so, no interaction with OpenAI: no OpenAI embeddings or LLM calls whatsoever) to fill the vector store.
And then there's a service that uses the index for QA, which gets deployed completely separately.

But to index the documents properly (set the embedding model, set the node parser, etc.) it seems that I must provide the service_context, which in turn requires providing an LLM. Yet in the indexing part I'm not using the LLM at all!

Am I missing something?
(Background: I used to do this with langchain and had no problem separating indexing and querying, but now I'd like to switch to LlamaIndex because it has some document loaders that work better for me than the langchain ones.)
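
One possible answer, sketched below under the assumption of the pre-0.10 API: ServiceContext.from_defaults accepts llm=None, which explicitly disables the LLM (a MockLLM is substituted), so indexing can run with only the HuggingFace embedding model. The data directory and model name are placeholders:

```python
# Indexing-only sketch (assumed pre-0.10 API): llm=None disables the LLM,
# so no OpenAI credentials are needed while building the index.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")  # example model
service_context = ServiceContext.from_defaults(llm=None, embed_model=embed_model)

documents = SimpleDirectoryReader("./data").load_data()  # placeholder path
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
index.storage_context.persist(persist_dir="./storage")  # hand off to the QA service
```

The separately deployed QA service can then load the persisted index with its own service_context, one that does include a real LLM.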
4 comments
Alex L

V0.10

Hi!
After upgrading to v0.10, do I still need to install llama-index, or only llama-index-core?
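
For context, a sketch of the v0.10 package split (the HuggingFace embeddings package is just an assumed example of an integration you might need):

```python
# Sketch of the v0.10 layout: "pip install llama-index" is a starter bundle
# (llama-index-core plus a default set of OpenAI-based integrations), while
# llama-index-core alone ships no integrations; each lives in its own package:
#   pip install llama-index-core llama-index-embeddings-huggingface
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader  # from llama-index-core
from llama_index.embeddings.huggingface import HuggingFaceEmbedding  # separate package
```

So llama-index itself is optional in v0.10 if you install llama-index-core plus the specific integration packages you use.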
3 comments