Find answers from the community

confused_skelly
Offline, last seen last month
Joined September 25, 2024
The bit on converting from documents to nodes will take documents that are >4k tokens and split them into smaller chunks.
6 comments
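For context, a minimal sketch of that document-to-node conversion, written against the node parser API of this era; the exact class names, defaults, and keyword arguments vary by llama_index version, so treat the chunk_size/chunk_overlap values as illustrative.

Python
from llama_index import SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser

# Load raw documents; anything longer than the chunk size gets split.
documents = SimpleDirectoryReader("data").load_data()

# The parser breaks each document into token-bounded chunks (with a
# little overlap) so no single node exceeds the model's context window.
parser = SimpleNodeParser.from_defaults(chunk_size=1024, chunk_overlap=20)
nodes = parser.get_nodes_from_documents(documents)
print(f"{len(documents)} documents -> {len(nodes)} nodes")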
Hey friend, I think I found a smol bug. The streaming flag in LlamaIndex's base query class checks to make sure the model is an OpenAI model. This is intentional, but it breaks when a ChatOpenAI instance is used. LangChain supports streaming for ChatOpenAI, but it gets blocked by the instance check on line 246 of llama_index/llm_predictor/base.py:

Plain Text
 ~/.local/lib/python3.9/site-packages/llama_index/llm_predictor/base.py in stream(self, prompt, **prompt_args)
    245         """
    246         if not isinstance(self._llm, OpenAI):
--> 247             raise ValueError("stream is only supported for OpenAI LLMs")
    248         formatted_prompt = prompt.format(llm=self._llm, **prompt_args)
    249         raw_response_gen = self._llm.stream(formatted_prompt)

ValueError: stream is only supported for OpenAI LLMs
13 comments
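To ground the claim above: streaming does work for ChatOpenAI on the LangChain side. A minimal sketch, assuming a LangChain version from this era where ChatOpenAI accepts a callbacks argument (the callback wiring changed across releases):

Python
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Streaming works on the LangChain side for ChatOpenAI; it's only the
# isinstance check in llama_index that refuses to use it.
llm = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
)
llm.predict("Say hello in five words.")

The minimal fix on the llama_index side would presumably be widening line 246's check to isinstance(self._llm, (OpenAI, ChatOpenAI)), though whether ChatOpenAI's stream interface matches what the rest of stream() expects downstream is untested here.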
Hey friends. Has anyone gotten streaming to work with the OpenAIAgent?
3 comments
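A minimal sketch of the pattern that's supposed to work, assuming the stream_chat / response_gen interface on OpenAIAgent (names per the llama_index agent module of this era; adjust to your version):

Python
from llama_index.agent import OpenAIAgent
from llama_index.tools import FunctionTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Wrap a plain function as a tool and hand it to the agent.
tool = FunctionTool.from_defaults(fn=multiply)
agent = OpenAIAgent.from_tools([tool], verbose=False)

# stream_chat yields tokens as they arrive instead of blocking
# until the full answer is ready.
streaming_response = agent.stream_chat("What is 12 times 34?")
for token in streaming_response.response_gen:
    print(token, end="", flush=True)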
Hey friends, does anyone have examples of building a composable graph more than 2 layers deep? I.e., instead of just an index of indices, having an index on top of graphs?
1 comment
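No canonical example that I know of, but one plausible shape for a third layer, sketched against the 0.5-era composability API. Whether ComposableGraph exposes its composed root via root_index (used below to re-compose) depends on the version and is an assumption here:

Python
from llama_index import GPTListIndex, GPTSimpleVectorIndex, SimpleDirectoryReader
from llama_index.indices.composability import ComposableGraph

# Layer 1: leaf indices over raw documents.
docs_a = SimpleDirectoryReader("data/a").load_data()
docs_b = SimpleDirectoryReader("data/b").load_data()
docs_c = SimpleDirectoryReader("data/c").load_data()
index_a = GPTSimpleVectorIndex.from_documents(docs_a)
index_b = GPTSimpleVectorIndex.from_documents(docs_b)
index_c = GPTSimpleVectorIndex.from_documents(docs_c)

# Layer 2: a graph over two of the leaves (the usual index of indices).
inner_graph = ComposableGraph.from_indices(
    GPTListIndex,
    [index_a, index_b],
    index_summaries=["Docs about A", "Docs about B"],
)

# Layer 3: compose again, treating the inner graph's root as a child.
outer_graph = ComposableGraph.from_indices(
    GPTListIndex,
    [inner_graph.root_index, index_c],
    index_summaries=["Combined A+B graph", "Docs about C"],
)

print(outer_graph.query("How does C relate to A and B?"))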
@Logan M great work on adding sources to the chat toolkit
1 comment
Hey friends, has anyone figured out how to stack indices with the new 0.5.0 update?
20 comments
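For reference, as I understand the 0.5.0 changes, index construction moved to from_documents and stacking moved to ComposableGraph.from_indices, with a summary required per child index. A minimal sketch of the new shape (paths and summaries illustrative):

Python
from llama_index import GPTListIndex, GPTSimpleVectorIndex, SimpleDirectoryReader
from llama_index.indices.composability import ComposableGraph

docs_a = SimpleDirectoryReader("data/project_a").load_data()
docs_b = SimpleDirectoryReader("data/project_b").load_data()

# 0.5.0: constructors moved to from_documents(...)
index_a = GPTSimpleVectorIndex.from_documents(docs_a)
index_b = GPTSimpleVectorIndex.from_documents(docs_b)

# 0.5.0: stacking goes through ComposableGraph.from_indices, with a
# summary per child so the root index can route queries between them.
graph = ComposableGraph.from_indices(
    GPTListIndex,
    [index_a, index_b],
    index_summaries=["Notes for project A", "Notes for project B"],
)

print(graph.query("What changed recently in project A?"))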