Find answers from the community

Jedi
Offline, last seen 4 months ago
Joined September 25, 2024
Hello, trying the finetune embeddings tutorial here: https://gpt-index.readthedocs.io/en/stable/examples/finetuning/embeddings/finetune_embedding.html#

Running llama_index 0.8.28 with LangChain's ChatOpenAI LLM, I get the following AttributeError: no attribute 'complete'. Should I be using a different version? Thanks for taking a look!
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[5], line 7
      6 llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, openai_api_key=OPENAI_API_KEY)
----> 7 train_dataset = generate_qa_embedding_pairs(train_nodes, llm=llm)
      8 val_dataset = generate_qa_embedding_pairs(val_nodes, llm=llm)
     10 train_dataset.save_json("train_dataset.json")

File ~/.pyenv/versions/3.11.1/envs/llm-3.11.1/lib/python3.11/site-packages/llama_index/finetuning/embeddings/common.py:80, in generate_qa_embedding_pairs(nodes, llm, qa_generate_prompt_tmpl, num_questions_per_chunk)
     76 for node_id, text in tqdm(node_dict.items()):
     77     query = qa_generate_prompt_tmpl.format(
     78         context_str=text, num_questions_per_chunk=num_questions_per_chunk
     79     )
---> 80 response = llm.complete(query)
     82 result = str(response).strip().split("\n")
     83 questions = [
     84     re.sub(r"^\d+[\).\s]", "", question).strip() for question in result
     85 ]

AttributeError: 'ChatOpenAI' object has no attribute 'complete'
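The traceback shows that generate_qa_embedding_pairs calls llm.complete(query), a method on LlamaIndex's own LLM interface; LangChain's ChatOpenAI exposes chat-style methods instead, so the call fails. A minimal illustration of the interface mismatch — the two stand-in classes below are hypothetical, purely to show why the attribute is missing:

```python
# Hypothetical stand-ins for the two LLM interfaces (not real library classes).

class LangChainStyleLLM:
    """Mimics LangChain's ChatOpenAI: chat-style API, no .complete."""
    def predict(self, text: str) -> str:
        return "..."

class LlamaIndexStyleLLM:
    """Mimics a LlamaIndex LLM: implements .complete(prompt)."""
    def complete(self, prompt: str) -> str:
        return "..."

def generate_pairs(llm, query: str) -> str:
    # Same call shape as finetuning/embeddings/common.py line 80.
    return llm.complete(query)

try:
    generate_pairs(LangChainStyleLLM(), "q")
except AttributeError as exc:
    print(exc)  # no attribute 'complete'

print(generate_pairs(LlamaIndexStyleLLM(), "q"))
```

The likely fix, assuming llama_index 0.8.x import paths, is to pass LlamaIndex's own OpenAI wrapper (from llama_index.llms import OpenAI) instead of ChatOpenAI, or to wrap the LangChain LLM with llama_index's LangChainLLM adapter so it gains .complete.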
5 comments
Hi! New to the LlamaIndex Discord. I'm running into issues using LLMSingleSelector (on 0.8.8). Code and a snippet of the trace below. Any help appreciated, thank you!

Plain Text
from llama_index.tools import ToolMetadata
from llama_index.selectors.llm_selectors import LLMSingleSelector


# choices as a list of tool metadata
choices = [
    ToolMetadata(description="I am choice 1", name="choice_1"),
    ToolMetadata(description="I am choice 2", name="choice_2"),
]

# choices as a list of strings
choices = ["choice 1 - description for choice 1", "choice 2: description for choice 2"]

selector = LLMSingleSelector.from_defaults()
selector_result = selector.select(choices, query="What choices do I have?")
print(selector_result.selections)


File ~/.pyenv/versions/llm-3.11.3/lib/python3.11/site-packages/llama_index/output_parsers/selection.py:56, in <listcomp>(.0)
     54 if isinstance(json_output, dict):
     55     json_output = [json_output]
---> 56 answers = [Answer.from_dict(json_dict) for json_dict in json_output]
     57 return StructuredOutput(raw_output=output, parsed_output=answers)

File ~/.pyenv/versions/llm-3.11.3/lib/python3.11/site-packages/dataclasses_json/api.py:70, in DataClassJsonMixin.from_dict(cls, kvs, infer_missing)
     65 @classmethod
     66 def from_dict(cls: Type[A],
     67               kvs: Json,
     68               *,
     69               infer_missing=False) -> A:
---> 70     return _decode_dataclass(cls, kvs, infer_missing)

File ~/.pyenv/versions/llm-3.11.3/lib/python3.11/site-packages/dataclasses_json/core.py:168, in _decode_dataclass(cls, kvs, infer_missing)
    165 if not field.init:
    166     continue
--> 168 field_value = kvs[field.name]
    169 field_type = types[field.name]
    170 if field_value is None:

KeyError: 'choice'
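The KeyError means the LLM's answer did not match the JSON schema the selector's output parser expects — each object must carry a "choice" key (and a "reason"). A minimal reproduction of the parsing step, using a simplified stand-in for llama_index's Answer dataclass (the real one lives in llama_index.output_parsers.selection):

```python
from dataclasses import dataclass

@dataclass
class Answer:
    """Simplified stand-in for the selector's Answer dataclass."""
    choice: int
    reason: str

def parse(json_output):
    # Mirrors selection.py: a single dict is wrapped in a list, then each
    # dict must supply every dataclass field by name -> KeyError otherwise.
    if isinstance(json_output, dict):
        json_output = [json_output]
    return [Answer(choice=d["choice"], reason=d["reason"]) for d in json_output]

print(parse([{"choice": 1, "reason": "best match"}]))   # parses fine

try:
    parse([{"answer": 1}])   # model drifted from the expected schema
except KeyError as exc:
    print("KeyError:", exc)
```

Since the failure depends on what the model happens to emit, common mitigations are retrying the query, using a model that is more reliable at structured output, or upgrading llama_index, where later releases hardened this parser.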
7 comments
Hello, I'm trying the correctness evaluator and hitting OpenAI API timeout errors. Has anyone else run into this? Thanks for taking a look.

Plain Text
site-packages/llama_index/evaluation/correctness.py", line 134, in aevaluate
    eval_response = await self._service_context.llm.apredict(
....
site-packages/openai/_base_client.py", line 1442, in _request
    raise APITimeoutError(request=request) from err
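APITimeoutError just means the request exceeded the client's configured timeout; depending on your openai/llama_index versions, the client usually accepts timeout and max_retries settings. A version-independent mitigation is to retry the evaluation with exponential backoff — a generic sketch below, where retry_call and the flaky stand-in are illustrative, not llama_index API:

```python
import time

def retry_call(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying on TimeoutError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * 2 ** attempt)

# Simulated evaluator call that times out twice before succeeding.
calls = {"n": 0}
def flaky_evaluate():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated API timeout")
    return "evaluation result"

result = retry_call(flaky_evaluate, base_delay=0.01)
print(result)  # succeeds on the third attempt
```

In practice you would wrap the evaluator.evaluate(...) call (catching the real APITimeoutError) rather than a toy function, or simply raise the client timeout if the judge prompts are long.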
7 comments
Hello, what are some recommended options for adding domain-specific synonyms during embedding/retrieval in a LlamaIndex workflow? cc: @Logan M thanks!
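One lightweight option is query-time expansion: rewrite the query with the domain synonyms before it is embedded, so retrieval covers both surface forms. The synonym map and expand_query helper below are hypothetical, not llama_index API (llama_index's query-transform components can play the same role):

```python
# Hypothetical domain synonym map; in practice this comes from your domain.
SYNONYMS = {
    "mi": ["myocardial infarction", "heart attack"],
    "bp": ["blood pressure"],
}

def expand_query(query: str) -> str:
    """Append known synonyms so the query embedding covers all surface forms."""
    expansions = []
    for word in query.lower().split():
        expansions.extend(SYNONYMS.get(word, []))
    if not expansions:
        return query
    return f"{query} ({'; '.join(expansions)})"

print(expand_query("patient history of mi"))
# patient history of mi (myocardial infarction; heart attack)
```

The expanded string would then be passed to the query engine in place of the raw query. The same idea can also be applied at ingestion time (indexing each chunk alongside its synonym-expanded text), at the cost of a larger index.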
2 comments
hi! Trying to get a CustomQueryEngine going, following https://gpt-index.readthedocs.io/en/latest/examples/query_engine/custom_query_engine.html

How do I make the response streaming-capable? Does the following look right?
Plain Text
    def custom_query(self, query_str: str):
        logger.info(f"Triggering custom engine for query: {query_str}")
        response_gen = self.llm.stream_complete(qa_prompt)
        response = StreamingResponse(response_gen)
        return response


However, this raises the following error upstream (in the Chainlit integration):
Plain Text
await response_message.stream_token(token=token)
TypeError: can only concatenate str (not "CompletionResponse") to str


Any help appreciated! thanks
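The TypeError happens because stream_complete yields CompletionResponse objects, while Chainlit's stream_token expects plain strings; each response's .delta field holds the newly generated text. A sketch of the conversion — fake_stream_complete is a stand-in simulating the LLM stream, not real llama_index code:

```python
from types import SimpleNamespace

def fake_stream_complete(prompt: str):
    # Simulates llama_index's stream: objects carrying a .delta (new text).
    for token in ["Hel", "lo ", "world"]:
        yield SimpleNamespace(text="", delta=token)

def token_stream(response_gen):
    """Convert a CompletionResponse stream into a stream of str tokens."""
    for resp in response_gen:
        yield resp.delta  # plain str, safe to pass to stream_token

tokens = list(token_stream(fake_stream_complete("qa_prompt")))
print("".join(tokens))  # Hello world
```

So on the Chainlit side, iterate the generator and pass resp.delta (rather than the response object itself) to stream_token, or expose a generator of deltas from custom_query.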
3 comments