Hey @Logan M, @disiok, @ravitheja,

When we pass streaming=True to get_response_synthesizer():
Plain Text
        response_synthesizer = get_response_synthesizer(
            service_context=self.service_context,
            text_qa_template=qa_chat_prompt,
            response_mode=self.index_response_mode,
            streaming=True,
        )

        custom_index = RetrieverQueryEngine(
            retriever=custom_retriever,
            response_synthesizer=response_synthesizer,
            node_postprocessors=[
                SimilarityPostprocessor(similarity_cutoff=self.similarity_cutoff),
            ],
        )
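
For context, a minimal sketch of how we then consume the stream (query_text here is a placeholder, not our actual variable):
Plain Text
        # With streaming=True, query() returns a StreamingResponse whose
        # tokens arrive through a generator rather than as a finished string
        streaming_response = custom_index.query(query_text)

        for token in streaming_response.response_gen:
            print(token, end="", flush=True)

        # equivalently: streaming_response.print_response_stream()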

we get an error

we are using the following LLM in the service_context:
Plain Text
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    model=self.model_name,
    temperature=self.temperature,
    model_kwargs=model_kwargs,
    max_tokens=self.max_output_tokens,
    api_key=api_key,
    base_url=base_url,
)
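
One thing we noticed (an assumption on our part, not a confirmed cause): the LangChain model itself is not constructed with streaming enabled. ChatOpenAI takes a streaming flag, so a variant worth trying would be:
Plain Text
from langchain.chat_models import ChatOpenAI

# Same construction as above, but with token streaming enabled on the
# model itself so the wrapper can actually yield a stream
llm = ChatOpenAI(
    model=self.model_name,
    temperature=self.temperature,
    model_kwargs=model_kwargs,
    max_tokens=self.max_output_tokens,
    api_key=api_key,
    base_url=base_url,
    streaming=True,
)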

I see an open issue on GitHub: https://github.com/run-llama/llama_index/issues/9873
Can you please suggest a fix?
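
In case it helps narrow things down, a workaround we are considering (a sketch, assuming the problem is specific to the LangChain wrapper): swapping in LlamaIndex's native OpenAI LLM, which supports streaming directly. api_key and base_url are the same values used above:
Plain Text
from llama_index.llms import OpenAI

# Native LlamaIndex LLM in place of the LangChain ChatOpenAI wrapper
llm = OpenAI(
    model=self.model_name,
    temperature=self.temperature,
    max_tokens=self.max_output_tokens,
    api_key=api_key,
    api_base=base_url,
)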
1 comment
full error traceback