Hi, is there any way to use MultiStepQueryEngine without OpenAI?
No, that works. But there is no way to pass the service context to MultiStepQueryEngine, which results in an error.
I think changing it here should work:

Plain Text
# set Logging to DEBUG for more detailed outputs
from llama_index.query_engine.multistep_query_engine import (
    MultiStepQueryEngine,
)

# Since this query engine will be used, try changing the llm and model in this service_context!
query_engine = index.as_query_engine(service_context=service_context_gpt4)
query_engine = MultiStepQueryEngine(
    query_engine=query_engine,
    query_transform=step_decompose_transform,
    index_summary=index_summary,
)


See if this works
No, really, that's what I have. The culprit is response_synthesizer: if it's not passed, the engine tries to build one, and by default that uses OpenAI.
Plain Text
ValueError: No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys


During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Cell In[119], line 15
      5 from llama_index.response_synthesizers import (
      6     ResponseMode,
      7     get_response_synthesizer,
      8 )
     10 response_synthesizer = get_response_synthesizer(
     11     response_mode=ResponseMode.COMPACT, service_context=service_context
     12 )
---> 15 query_engine = MultiStepQueryEngine(
     16     query_engine= index.as_query_engine(service_context=service_context),
     17     query_transform=step_decompose_transform,
     18     index_summary="Used to search for code snippets",
     19     # response_synthesizer=response_synthesizer,
     20 )

File c:\Users\souya\.conda\envs\streamdiffusion\lib\site-packages\llama_index\query_engine\multistep_query_engine.py:53, in MultiStepQueryEngine.__init__(self, query_engine, query_transform, response_synthesizer, num_steps, early_stopping, index_summary, stop_fn)
     51 self._query_engine = query_engine
     52 self._query_transform = query_transform
---> 53 self._response_synthesizer = response_synthesizer or get_response_synthesizer(
     54     callback_manager=self._query_engine.callback_manager
     55 )
     57 self._index_summary = index_summary
     58 self._num_steps = num_steps
Smells like a missing feature imo
Ah, I see. Try doing this once:

Plain Text
from llama_index.response_synthesizers import get_response_synthesizer

# Reuse the callback manager of the query_engine you are going to pass into
# the multistep engine.
response_synthesizer = get_response_synthesizer(
    service_context=non_openAI_service_context,
    callback_manager=query_engine.callback_manager,
)

# Once you have this, pass the response_synthesizer into MultiStepQueryEngine as well.
query_engine = MultiStepQueryEngine(
    query_engine=query_engine,
    response_synthesizer=response_synthesizer,
    query_transform=step_decompose_transform,
    index_summary=index_summary,
)
OK, this is working; at least the LLM is responding.

However,

Current query: What is the purpose of StudentResource class?
New query: Based on the given context, I do not have enough information to ask a meaningful follow-up question about the purpose of the StudentResource class. Since the knowledge source only provides information about searching for code snippets, and there is no previous reasoning, I cannot extract any additional details about the StudentResource class.

Therefore, my answer is: None

So are the search results not being passed, or is something else the culprit?
The model is AWS Bedrock (Claude v2) via LangChain, btw.
Because using the index as a retriever does give search results.
Hmm, yeah, something is fishy!
You gotta debug the query engine and see whether it is even getting the results it needs to form further queries.
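One quick way to do that, as a sketch (assuming `index` is the one built earlier in your script):

Plain Text
import logging
import sys

# Turn on DEBUG logging (as the comment in the first snippet hints) to watch
# retrieval and sub-question generation step by step.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

# Also sanity-check retrieval outside the multistep loop:
retriever = index.as_retriever()
nodes = retriever.retrieve("What is the purpose of StudentResource class?")
print(len(nodes), [n.score for n in nodes])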
Found it: index_summary was the culprit. I removed "Used for" at the beginning, and it started generating queries.

Interestingly enough, Cohere was able to work around that wording, but Claude and Titan could not.
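For reference, the fix amounts to rewording the summary, roughly like this (the new wording is illustrative):

Plain Text
query_engine = MultiStepQueryEngine(
    query_engine=query_engine,
    query_transform=step_decompose_transform,
    response_synthesizer=response_synthesizer,
    # index_summary="Used to search for code snippets",  # tripped up Claude and Titan
    index_summary="Code snippets from the project",  # reworded: queries are generated again
)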
Awesome!!

AI IS GOING TO TAKE OVER!!πŸ˜… πŸ˜†
I still think there should be some way to pass a ServiceContext to everything
Plain Text
from llama_index import set_global_service_context

set_global_service_context(service_context)
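A minimal sketch of that approach, assuming the legacy import paths seen in the tracebacks above and the Bedrock-via-LangChain model mentioned earlier (the model id is illustrative):

Plain Text
from langchain.llms import Bedrock
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import LangChainLLM

# Wrap the LangChain Bedrock client so LlamaIndex can use it, then register
# the ServiceContext globally so components that build their own LLMs or
# response synthesizers pick it up instead of defaulting to OpenAI.
llm = LangChainLLM(llm=Bedrock(model_id="anthropic.claude-v2"))
service_context = ServiceContext.from_defaults(llm=llm)
set_global_service_context(service_context)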
Ahh, got it. Lemme check if it works, I mean, whether the other components pick it up.
Also hiccups for other queries. Maybe I will look for other alternatives
Plain Text
File c:\Users\souya\.conda\envs\streamdiffusion\lib\site-packages\llama_index\indices\query\query_transform\base.py:116, in HyDEQueryTransform.__init__(self, llm, hyde_prompt, include_original)
    105 """Initialize HyDEQueryTransform.
    106 
    107 Args:
   (...)
    112         string as one of the embedding strings
    113 """
    114 super().__init__()
--> 116 self._llm = llm or resolve_llm("default")
    117 self._hyde_prompt = hyde_prompt or DEFAULT_HYDE_PROMPT
    118 self._include_original = include_original

File c:\Users\souya\.conda\envs\streamdiffusion\lib\site-packages\llama_index\llms\utils.py:31, in resolve_llm(llm)
     29         validate_openai_api_key(llm.api_key)
     30     except ValueError as e:
---> 31         raise ValueError(
     32             "\n******\n"
     33             "Could not load OpenAI model. "
     34             "If you intended to use OpenAI, please check your OPENAI_API_KEY.\n"
     35             "Original error:\n"
     36             f"{e!s}"
     37             "\nTo disable the LLM entirely, set llm=None."
     38             "\n******"
     39         )
     41 if isinstance(llm, str):
     42     splits = llm.split(":", 1)
I don't see resolve_llm using the context
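Per the __init__ signature in the traceback, though, HyDEQueryTransform accepts an llm directly, so passing your non-OpenAI model explicitly should sidestep the resolve_llm("default") fallback (a sketch; `llm` is assumed to be your Bedrock model wrapped for LlamaIndex):

Plain Text
from llama_index.indices.query.query_transform import HyDEQueryTransform

# With llm passed explicitly, the `llm or resolve_llm("default")` fallback
# shown in the traceback above never fires.
hyde = HyDEQueryTransform(llm=llm, include_original=True)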
Finally got it working. Used a custom prompt for StepDecomposeQueryTransform.

Maybe the original prompt was too long and the model got confused, or it was tuned for ChatGPT or Llama.
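A rough sketch of that custom-prompt approach, assuming the 0.9-era StepDecomposeQueryTransform signature and the default prompt's template variables ({context_str}, {query_str}, {prev_reasoning}); the prompt wording here is illustrative:

Plain Text
from llama_index.indices.query.query_transform import StepDecomposeQueryTransform
from llama_index.prompts import PromptTemplate

# A much shorter decomposition prompt than the default, keeping the same
# template variables.
custom_prompt = PromptTemplate(
    "Knowledge source: {context_str}\n"
    "Question: {query_str}\n"
    "Previous reasoning: {prev_reasoning}\n"
    "Ask the next sub-question needed to answer the question, "
    "or answer 'None' if no more are needed: "
)
step_decompose_transform = StepDecomposeQueryTransform(
    llm=llm,  # the non-OpenAI LLM from earlier
    step_decompose_query_prompt=custom_prompt,
    verbose=True,
)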
Did anyone work with Cohere for multilingual semantic search? My source is in Arabic; it seems it cannot give me results when I give the query in English or any other language.
You'll have to check two things here.
  1. Whether your embedding model is able to correctly find the nearest nodes for your query, since your dataset is in a different language (a quick probe is sketched after this list).
  2. If the data is being retrieved correctly, whether your LLM is able to work with that data.
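A quick cross-lingual probe for point 1, as a sketch (`embed_model` is assumed to be the Cohere multilingual model from your ServiceContext; the sentences are illustrative):

Plain Text
import numpy as np

# Embed an English query and an Arabic passage with the same meaning; if
# cross-lingual matching works, the cosine similarity should be high.
q = embed_model.get_query_embedding("What is the capital of Egypt?")
d = embed_model.get_text_embedding("القاهرة هي عاصمة مصر.")  # "Cairo is the capital of Egypt."
cosine = np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d))
print(cosine)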
Also, just be aware that most of the prompts that are injected are in English. You might want to tune them; in my case generation wasn't working properly because the model was losing context due to too many examples.
Thanks for your reply. Yes:
  1. I used the multilingual embedding model from Cohere. I can see that it works properly.
  2. I use an LLM from Cohere, and it works as well.
@sansmoraxz would you like to give more info about it? How do I do that?
It seems everything works fine, but when it comes to the last part, it shows this message
Attachment: image.png
Those are code and an Arabic article
Thanks for your help
Is Cohere Command multilingual?
Also, maybe try a postprocessor for translation (see the sketch below)
Could you explain this to me?
The thing is, I cannot get the response, therefore I cannot apply a postprocessor (such as translation)
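Note that node postprocessors run on the retrieved nodes before the LLM synthesizes a response, so they don't depend on getting a final response first. A rough sketch of the idea (import paths assume the 0.9-era layout seen in the tracebacks; translate() is a stub for whatever translation backend you'd use):

Plain Text
from typing import List, Optional

from llama_index import QueryBundle
from llama_index.postprocessor.types import BaseNodePostprocessor
from llama_index.schema import NodeWithScore


def translate(text: str, target_lang: str = "en") -> str:
    # Placeholder: call your translation service (or an LLM prompt) here.
    return text


class TranslateNodePostprocessor(BaseNodePostprocessor):
    """Translate retrieved node text before it reaches the LLM."""

    def _postprocess_nodes(
        self,
        nodes: List[NodeWithScore],
        query_bundle: Optional[QueryBundle] = None,
    ) -> List[NodeWithScore]:
        for n in nodes:
            n.node.set_content(translate(n.node.get_content()))
        return nodes

You would then pass node_postprocessors=[TranslateNodePostprocessor()] to index.as_query_engine(...), so translation happens on the retrieved nodes even while final synthesis is failing.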
Let me check... I believe so