big_ol_tender
Offline, last seen 3 months ago
Joined September 25, 2024
Anyone have a solution in their RAG app for smart date parsing? E.g., to filter docs in a date range based on the query. I’m aware of things like spaCy, which can identify date entities, but then there is a missing intermediate step to go from something like “this year” to an actual date/time for a query filter.
8 comments
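One workable pattern, as a sketch: keep a small hand-rolled table for range phrases like “this year” and fall back to the third-party `dateparser` package for point-in-time expressions. The `phrase_to_range` helper below is hypothetical, not an existing API.

```python
# Sketch: resolve a relative date phrase to a (start, end) range for a
# metadata filter. `dateparser` is a third-party package (pip install
# dateparser); the phrase table is a hand-rolled helper.
from datetime import datetime

import dateparser

def phrase_to_range(phrase, now=None):
    """Map a date phrase to a (start, end) datetime range, or None."""
    now = now or datetime.now()
    phrase = phrase.strip().lower()
    # Range phrases that spaCy tags as DATE entities but that
    # point-in-time parsers can't resolve on their own.
    if phrase == "this year":
        return datetime(now.year, 1, 1), now
    if phrase == "last year":
        return datetime(now.year - 1, 1, 1), datetime(now.year, 1, 1)
    if phrase == "this month":
        return datetime(now.year, now.month, 1), now
    # Fall back to dateparser for expressions like "3 weeks ago" or
    # "June 2023", treating the parsed instant as the start of the range.
    parsed = dateparser.parse(phrase, settings={"RELATIVE_BASE": now})
    return (parsed, now) if parsed else None

print(phrase_to_range("this year"))  # (Jan 1 of this year, now)
```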
Hi, upgrading to 0.9.15.post2 broke the pickle-ability: “AttributeError: can’t pickle local object ‘LLM.set_completion_to_prompt.<locals>.<lambda>’”
14 comments
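For context, this is general Python pickling behavior rather than anything llama_index-specific: pickle can only serialize functions it can re-import by name, and the `<locals>.<lambda>` in the traceback marks a function defined inside another function’s scope. A minimal reproduction, with the usual workaround of a module-level function:

```python
import pickle

class BrokenLLM:
    def __init__(self):
        # A lambda defined inside a method lives in that method's local
        # scope, so pickle can't import it by name.
        self.completion_to_prompt = lambda completion: completion

def _identity_prompt(completion):
    """Module-level function: importable by name, hence picklable."""
    return completion

class PicklableLLM:
    def __init__(self):
        self.completion_to_prompt = _identity_prompt

pickle.dumps(PicklableLLM())        # works
try:
    pickle.dumps(BrokenLLM())
except AttributeError as exc:       # mirrors the reported error
    print(exc)  # Can't pickle local object 'BrokenLLM.__init__.<locals>.<lambda>'
```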
Do OpenAI agents work with Azure OpenAI deployments?
2 comments
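They can, provided the agent is handed an Azure-configured LLM. A minimal sketch against the 0.9-era llama_index API; the deployment name, endpoint, and key are placeholders, and the exact field name (`engine` vs. `deployment_name`) has shifted between versions:

```python
from llama_index.agent import OpenAIAgent
from llama_index.llms import AzureOpenAI

# Placeholders: substitute your own deployment, endpoint, and key.
llm = AzureOpenAI(
    engine="my-deployment",
    model="gpt-35-turbo",
    api_key="...",
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_version="2023-07-01-preview",
)

# The agent just needs an LLM that speaks the OpenAI function-calling API.
agent = OpenAIAgent.from_tools(tools=[], llm=llm, verbose=True)
print(agent.chat("hello"))
```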
Nvm nvm it’s not working 😕
20 comments
Ok, next issue 😊 trying to get streaming to work. I’m using a LangChain LLM (HuggingFaceTextGenInference) and streaming works from my inference endpoint. However, when using it with llama_index I get the error “LLM must support streaming”.
8 comments
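One thing worth checking, as a sketch assuming LangChain’s `HuggingFaceTextGenInference` and llama_index’s `LangChainLLM` wrapper: the underlying LangChain LLM has to be constructed with `streaming=True`, which is the usual trigger for that error when it’s left off:

```python
from langchain.llms import HuggingFaceTextGenInference
from llama_index.llms import LangChainLLM

# The wrapper consults the underlying LLM's streaming support, so enable
# it when constructing the LangChain LLM (the URL is a placeholder).
lc_llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080/",
    max_new_tokens=512,
    streaming=True,
)

llm = LangChainLLM(llm=lc_llm)
for chunk in llm.stream_complete("Hello, "):
    print(chunk.delta, end="")
```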
Having an issue with JSON mode… I’m getting thousands of newlines back from my request, with the actual response somewhere in the middle. Anyone seen this?
12 comments
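Not a confirmed diagnosis, but two knobs commonly checked when JSON mode emits runaway whitespace: the request must actually mention JSON in the messages (the API enforces this), and a `max_tokens` cap plus low temperature bounds the run-on output. A sketch with the openai 1.x client:

```python
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    # JSON mode requires the word "JSON" somewhere in the messages, and a
    # token cap limits the whitespace run-on if the model misbehaves anyway.
    messages=[
        {"role": "system", "content": "Reply with a single JSON object."},
        {"role": "user", "content": "List three colors as JSON."},
    ],
    max_tokens=256,
    temperature=0,
)
print(resp.choices[0].message.content.strip())
```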
Having an issue with Azure OpenAI… but only when running in a thread executor. Anyone else dealt with this? Is the AzureOpenAI object no longer thread-safe? It was working before I upgraded the llama_index and openai packages.

'AzureOpenAI' object has no attribute '_client'
28 comments
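One workaround pattern, sketched with the openai 1.x client (the same idea applies to the llama_index wrapper): construct the client inside each worker rather than sharing one instance across threads, so every thread gets a fully initialized object. Endpoint, key, and deployment name are placeholders:

```python
import os
from concurrent.futures import ThreadPoolExecutor

from openai import AzureOpenAI

def worker(prompt):
    # Build the client inside the thread instead of sharing one instance,
    # sidestepping any lazily initialized state (like a private _client).
    client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_version="2023-07-01-preview",
    )
    resp = client.chat.completions.create(
        model="my-deployment",  # placeholder deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

with ThreadPoolExecutor(max_workers=4) as pool:
    print(list(pool.map(worker, ["hi", "hello"])))
```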
My hunch is that when it goes past the context limit of a single call, something breaks.
29 comments
I recently upgraded from 0.8.6 and I’m getting a repeated “ImportWarning: __package__ != __spec__.parent”. Anyone else get this?
22 comments
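For anyone hitting the same thing: the warning is typically benign, and standard `warnings` filtering can silence it, or escalate it to an error to find which import triggers it. This is plain Python, not a llama_index-specific fix:

```python
import warnings

# Silence the noisy "__package__ != __spec__.parent" messages...
warnings.filterwarnings("ignore", category=ImportWarning)

# ...or escalate them to errors to get a traceback at the offending import:
# warnings.filterwarnings("error", category=ImportWarning)

import llama_index  # noqa: E402  (imported after the filter is installed)
```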
Having trouble with Azure. I have a key and everything set up, and I set my environment variables, but it still says “you must set OPENAI_API_BASE”. I’ve confirmed it’s correct and available in the environment.
18 comments
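A sketch of explicit wiring that sidesteps env-var discovery, which changed between the openai 0.x and 1.x clients; all values are placeholders, and whether your version reads `OPENAI_API_BASE` or `AZURE_OPENAI_ENDPOINT` depends on the installed packages:

```python
import os

from llama_index.llms import AzureOpenAI

# Cover the legacy variable name some code paths still read...
os.environ.setdefault("OPENAI_API_BASE", "https://my-resource.openai.azure.com/")

# ...and/or pass everything explicitly so env-var discovery is moot.
llm = AzureOpenAI(
    engine="my-deployment",                              # placeholder
    api_key="...",                                       # placeholder
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_version="2023-07-01-preview",
)
print(llm.complete("ping"))
```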
How can I rank my retrieved documents (from Weaviate) by date? (The date is stored as a property in Weaviate with each doc.)
3 comments
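One plain-Python option, assuming the date sits in each node’s metadata under a `date` key as an ISO string (an assumption about the schema): sort the retrieved nodes after retrieval.

```python
from datetime import datetime

def sort_nodes_by_date(nodes_with_scores, date_key="date", newest_first=True):
    """Sort retrieved llama_index nodes by a date stored in node metadata.

    Assumes ISO-formatted date strings under `date_key`; adjust the parser
    to match however the property is serialized in Weaviate.
    """
    def node_date(nws):
        raw = nws.node.metadata.get(date_key, "1970-01-01")
        return datetime.fromisoformat(raw)

    return sorted(nodes_with_scores, key=node_date, reverse=newest_first)

# usage with a llama_index retriever:
# nodes = retriever.retrieve("my query")
# nodes = sort_nodes_by_date(nodes)
```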
Hey all, is it possible to limit the chunk size in the node parser to be sentences? I get much better results with my data using sentence embeddings vs. embedding larger chunks. My current process is to use spaCy to identify the sentences semantically and then pass them to my embedding model. This is critical for the types of problems I’m trying to solve. Also, I wrote an API around this whole process; it would be nice to synthesize it with the node parser somehow.
1 comment
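One direct route, sketched below, is to bypass the node parser and build one `TextNode` per spaCy sentence, so each embedding covers exactly one sentence; the metadata keys are illustrative, and your own API call could slot in where the spaCy function sits:

```python
import spacy
from llama_index.schema import TextNode

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def text_to_sentence_nodes(text, doc_id):
    """One TextNode per spaCy sentence; metadata keys are illustrative."""
    doc = nlp(text)
    return [
        TextNode(text=sent.text.strip(),
                 metadata={"doc_id": doc_id, "sent_no": i})
        for i, sent in enumerate(doc.sents)
    ]

nodes = text_to_sentence_nodes("First sentence. Second one here.", "doc-1")
print([n.text for n in nodes])
```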