
Hi, I had a quick question about the tree summarize function: does it only summarize the top_k results it was initialized with, or more? Also, what is the base LLM that is used? Tree summarize works, but I have not provided an API key.
It's using a key from somewhere, since the default is text-davinci-003 from OpenAI 😅

Yea, if used in a vector index, then it's running across the top k nodes.
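For intuition, tree summarize works roughly like this: summarize the retrieved nodes in small groups, then summarize those summaries, and repeat until one answer remains. A toy sketch of that idea in plain Python — `fake_llm_summarize` is a made-up stand-in for the real LLM call, not LlamaIndex code:

```python
def fake_llm_summarize(texts):
    # Stand-in for an LLM summarization call; just joins the inputs.
    return " / ".join(texts)

def tree_summarize(chunks, group_size=2):
    # Repeatedly collapse groups of chunks into summaries,
    # building the tree bottom-up until one summary is left.
    while len(chunks) > 1:
        chunks = [
            fake_llm_summarize(chunks[i:i + group_size])
            for i in range(0, len(chunks), group_size)
        ]
    return chunks[0]

print(tree_summarize(["node 1", "node 2", "node 3", "node 4"]))
```

With 4 retrieved nodes and `group_size=2`, this makes 2 intermediate summaries and then 1 final one — so the LLM is called over the top_k nodes (and their summaries), nothing more.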
Okay, thank you for the quick response. I will look more into the key to see where it is coming from.
yea, like if OPENAI_API_KEY is set in your env, it would come from there πŸ€”
yeah I removed the api key from the env a few days ago but it still seems to be working
i have also been monitoring my usage and it does not seem to be increasing
Plain Text
import os

print(os.environ.get("OPENAI_API_KEY"))
if that prints None, I have no idea how it's working haha
Okay, I'll try that. I did printenv and didn't see anything, but I will let you know.
If I were to switch LLMs to use a free one rather than text-davinci-003, would I need to recreate the collections?
nope, you only need to re-create if you change the embed_model
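One way to see why only the embed_model matters: the stored vectors were produced by that model, and retrieval just compares a query vector against them — the LLM only reads the retrieved text afterwards. A toy illustration with made-up 2-d vectors (not real embeddings):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend these were produced by your embed model and stored in Chroma.
stored = {
    "doc about cats": [0.9, 0.1],
    "doc about finance": [0.1, 0.9],
}

# A query embedded by the SAME model lines up with the right doc.
query_vec = [0.8, 0.2]
best = max(stored, key=lambda doc: cosine(stored[doc], query_vec))
print(best)  # "doc about cats"

# A different embed model would place vectors in an unrelated space,
# so every stored vector would need re-computing. Swapping the LLM
# touches none of this: it only consumes the retrieved text.
```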
Phew, thank you! I got very worried. I am not changing the embed model, only the LLM, so as not to have cost associated with the search engine.
Sorry to keep bothering you. When the index is created, does it use an LLM?
Depends on the index. If it's a vector index, nope
Hype, thank you! And yes, I'm using a VectorStoreIndex with Chroma.
I finally got PaLM to work, but now there are multiple errors when I try tree summarize, and it outputs nothing when I just do no_text. Am I missing response_mode=ResponseMode.NO_TEXT?
Plain Text
)

query_engine = RetrieverQueryEngine(
    retriever=retriever,
    response_synthesizer=response_synthesizer,
)
response = query_engine.query(query)

print(response.source_nodes)
print("here")
return response
I thiiiiink the embed model may be incorrect.

I'm not sure how weighted embeddings work, but they probably need to be wrapped with our langchain embedding wrapper. See here
https://gpt-index.readthedocs.io/en/latest/core_modules/model_modules/embeddings/usage_pattern.html#embedding-model-integrations

Also, maybe just set a global service context instead of trying to figure out where to pass it in

Plain Text
from llama_index import set_global_service_context

set_global_service_context(service_context)
Ah okay, I will try that. I was just confused because it was working before I changed the LLM to PaLM, but now it is just outputting None.
You could also confirm that palm works

Plain Text
llm = PaLM()
print(llm.complete("Hello!"))
But overall I suspect it's just service context issues
Global should help
When I tried to switch it to global, now the error is: openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.
Lol whaaat

Did you set both the llm and the embed model in the global?
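Setting both in one place might look like this — a minimal sketch assuming the legacy ServiceContext API from the docs linked earlier; exact import paths depend on your llama_index version, and the `"local"` shorthand pulls in a local HuggingFace embed model so nothing falls back to OpenAI defaults:

```python
# Hypothetical setup: pin BOTH the llm and the embed_model globally,
# so no component silently falls back to OpenAI (which needs OPENAI_API_KEY).
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import PaLM

service_context = ServiceContext.from_defaults(
    llm=PaLM(api_key="..."),  # your PaLM key
    embed_model="local",      # local HuggingFace embeddings, no OpenAI calls
)
set_global_service_context(service_context)
```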
Yes, I thought I did, but I will double check. Also, I am so appreciative of all your help.
I think that worked thank you so much again!
Awesome! :dotsHARDSTYLE:
It worked! It just took forever; I am going to try using a GPU. Thank you!