Is anyone having problems setting up Groq as their LLM in LlamaIndex? At the top I am using
Settings.llm = new Groq()
but it's still defaulting to OpenAI and using it for both embeddings and queries.
Groq is only an LLM -- it will default to OpenAI for embeddings unless you change the default embed model.
Thanks for the reply. Yes, I know embeddings will always default to OpenAI unless specified otherwise. I am more concerned about the fact that it's using GPT for queries where it should be using Mixtral. I have the following settings:
Settings.llm = new Groq({ apiKey: process.env.GROQ_API_KEY, model: "mixtral-8x7b-32768" });
But queries require both an LLM and an embed model.
So it seems to me it's using OpenAI for embeddings, since you haven't changed that.
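(For anyone landing here later: a minimal sketch of what changing the embed model could look like. It assumes the llamaindex package exports Settings, Groq, and HuggingFaceEmbedding and that Settings.embedModel is the global default; exact imports and option names may differ between versions.)

```ts
import { Groq, HuggingFaceEmbedding, Settings } from "llamaindex";

// Groq only covers the LLM side; embeddings have to come from somewhere else.
Settings.llm = new Groq({
  apiKey: process.env.GROQ_API_KEY,
  model: "mixtral-8x7b-32768",
});

// Override the default embed model so nothing falls back to OpenAI.
// Here a local Hugging Face model is used; any other embedding class works too.
Settings.embedModel = new HuggingFaceEmbedding({
  modelType: "BAAI/bge-small-en-v1.5",
});
```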
In my OpenAI usage I have:
gpt-3.5-turbo-16k-0613 -- API requests: 88
text-embedding-ada-002-v2 -- API requests: 140

and they both keep increasing as I run queries. Shouldn't only the text embeddings be used here?
Also, if I input the query "What LLM are you using?", it always says GPT-3 by OpenAI.
Did you configure settings before or after creating your index?
I created a few indices before configuring the settings, and then indices for other documents after.
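(For reference, a rough sketch of that ordering: configure Settings first, then build the index and engines, on the assumption that they pick up whatever Settings.llm / Settings.embedModel hold at creation time. myEmbedModel is a placeholder for whichever non-OpenAI embed model you chose, and the query call signature may vary by version.)

```ts
import { Document, Groq, Settings, VectorStoreIndex } from "llamaindex";

// 1. Set the global defaults before any index or engine is created.
Settings.llm = new Groq({ apiKey: process.env.GROQ_API_KEY, model: "mixtral-8x7b-32768" });
Settings.embedModel = myEmbedModel; // placeholder: your non-OpenAI embed model

// 2. Only then build the index, so it uses these models rather than the
//    OpenAI defaults that were active when earlier indices were created.
const index = await VectorStoreIndex.fromDocuments([new Document({ text: "..." })]);

const queryEngine = index.asQueryEngine();
const response = await queryEngine.query({ query: "What LLM are you using?" });
```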

It's working now that I added
const chatEngine = new ContextChatEngine({ chatModel: Settings.llm, retriever, chatHistory: prev_chat, });
It's working fine. Just one more thing:
Is there some property in chatEngine that I can set to limit the tokens provided as the chat history gets bigger, or do we have to manually slice it?
Also, would the persistent storage work with only these commands? I can see it sending new embedding requests to OpenAI even after setting and storing indices in the persistence directory:
const storageContext = await storageContextFromDefaults({ persistDir: "./storage/assets/files/Cheatsheetpdf" });
const index = await VectorStoreIndex.fromDocuments([document], { storageContext });
It's still using OpenAI embeddings after I have set the storageContext.
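(For what it's worth: if fromDocuments re-embeds the documents on every run, which would explain those OpenAI embedding requests, then loading the persisted index instead of rebuilding it should avoid them. A rough sketch, assuming VectorStoreIndex.init can restore an index from an existing storageContext, which I haven't verified for your version; `document` is the same variable as in the snippet above.)

```ts
import * as fs from "node:fs";
import { VectorStoreIndex, storageContextFromDefaults } from "llamaindex";

const persistDir = "./storage/assets/files/Cheatsheetpdf";

// Check before creating the storage context, since that call may create the directory.
const alreadyPersisted = fs.existsSync(persistDir);
const storageContext = await storageContextFromDefaults({ persistDir });

let index;
if (alreadyPersisted) {
  // Reuse what is already on disk: no new embedding calls should be needed here.
  index = await VectorStoreIndex.init({ storageContext });
} else {
  // First run only: this is the step that actually calls the embed model.
  index = await VectorStoreIndex.fromDocuments([document], { storageContext });
}
```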
Re memory, I don't actually know for the TS package 😅 I know the Python package uses a memory buffer that you can set a token limit on. Not sure about TS.

Re storage, I also forget how it works in TS (I spend very little time in that library, I'm just the Python expert lol).
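(Re the token-limit question above: one workaround that doesn't depend on the TS package having a memory buffer is to trim the history yourself before passing it in. A rough sketch, assuming chatHistory is just an array of ChatMessage objects and using a crude characters-per-token estimate; ContextChatEngine, retriever and prev_chat are as in the snippet earlier in the thread.)

```ts
import type { ChatMessage } from "llamaindex";

// Very rough token estimate (~4 characters per token); swap in a real
// tokenizer if you need accuracy.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep only the most recent messages that fit within maxTokens.
function trimChatHistory(history: ChatMessage[], maxTokens = 3000): ChatMessage[] {
  const trimmed: ChatMessage[] = [];
  let total = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(String(history[i].content));
    if (total + cost > maxTokens) break;
    trimmed.unshift(history[i]);
    total += cost;
  }
  return trimmed;
}

// Then pass the trimmed history to the chat engine as before:
const chatEngine = new ContextChatEngine({
  chatModel: Settings.llm,
  retriever,
  chatHistory: trimChatHistory(prev_chat),
});
```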
You know, after the hell that has been this week, I might convert to Python for the rest of my life 😂
I have been having the same issue you did, but setting chatModel when instantiating ContextChatEngine did not work at all 😦 It keeps asking for the OpenAI key 😖

My biggest issue with the TS lib is that the docs examples themselves sometimes, or very often, do not work at all.