Hello trying the finetune embeddings

Hello, trying the finetune embeddings tutorial here: https://gpt-index.readthedocs.io/en/stable/examples/finetuning/embeddings/finetune_embedding.html#

Running llama_index 0.8.28 with the ChatOpenAI LLM, I get the following AttributeError: no attribute 'complete'. Should I be using a different version? Thanks for taking a look!
Plain Text
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[5], line 7
      6 llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, openai_api_key=OPENAI_API_KEY)
----> 7 train_dataset = generate_qa_embedding_pairs(train_nodes, llm=llm)
      8 val_dataset = generate_qa_embedding_pairs(val_nodes, llm=llm)
     10 train_dataset.save_json("train_dataset.json")

File ~/.pyenv/versions/3.11.1/envs/llm-3.11.1/lib/python3.11/site-packages/llama_index/finetuning/embeddings/common.py:80, in generate_qa_embedding_pairs(nodes, llm, qa_generate_prompt_tmpl, num_questions_per_chunk)
     76 for node_id, text in tqdm(node_dict.items()):
     77     query = qa_generate_prompt_tmpl.format(
     78         context_str=text, num_questions_per_chunk=num_questions_per_chunk
     79     )
---> 80 response = llm.complete(query)
     82 result = str(response).strip().split("\n")
     83 questions = [
     84     re.sub(r"^\d+[\).\s]", "", question).strip() for question in result
     85 ]

AttributeError: 'ChatOpenAI' object has no attribute 'complete'
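For context, the loop inside llama_index/finetuning/embeddings/common.py that the traceback points at calls llm.complete(query) and then strips the numbering off each generated question. Here is a simplified, self-contained sketch of that parsing step, with the llm.complete call replaced by a canned response (the filter on empty lines is an assumption, not copied from common.py):

```python
import re

def parse_generated_questions(response_text: str) -> list[str]:
    # common.py splits the LLM response on newlines and removes leading
    # enumeration like "1.", "2)" from each line (see lines 82-85 above).
    result = str(response_text).strip().split("\n")
    questions = [re.sub(r"^\d+[\).\s]", "", q).strip() for q in result]
    # Drop blank lines so only real questions remain.
    return [q for q in questions if q]

canned = "1. What is the tutorial about?\n2) Which version is required?"
print(parse_generated_questions(canned))
```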
Have you tried passing it with llm=OpenAI?

Plain Text
service_context = ServiceContext.from_defaults(callback_manager=callback_manager,
    llm=OpenAI(model="gpt-3.5-turbo-16k", temperature=0, max_tokens=1000), chunk_size=1024, node_parser=node_parser
)
Looks to be a similar error with OpenAI.
Plain Text
      7 llm = OpenAI(model="gpt-3.5-turbo", temperature=0, openai_api_key=OPENAI_API_KEY)
----> 9 train_dataset = generate_qa_embedding_pairs(train_nodes, llm=llm)
     10 val_dataset = generate_qa_embedding_pairs(val_nodes, llm=llm)
     12 train_dataset.save_json("train_dataset.json")

File ~/.pyenv/versions/3.11.1/envs/llm-3.11.1/lib/python3.11/site-packages/llama_index/finetuning/embeddings/common.py:80, in generate_qa_embedding_pairs(nodes, llm, qa_generate_prompt_tmpl, num_questions_per_chunk)
     76 for node_id, text in tqdm(node_dict.items()):
     77     query = qa_generate_prompt_tmpl.format(
     78         context_str=text, num_questions_per_chunk=num_questions_per_chunk
     79     )
---> 80 response = llm.complete(query)
     82 result = str(response).strip().split("\n")
     83 questions = [
     84     re.sub(r"^\d+[\).\s]", "", question).strip() for question in result
     85 ]

AttributeError: 'OpenAI' object has no attribute 'complete'
I'm not quite sure exactly what you're trying to do but have you looked at this documentation? https://gpt-index.readthedocs.io/en/stable/core_modules/model_modules/llms/root.html
Seems like you are using a LangChain LLM and not a LlamaIndex LLM:

Plain Text
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
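The underlying issue is duck typing: generate_qa_embedding_pairs calls llm.complete(query), a method LlamaIndex LLMs implement but LangChain chat models do not, hence the AttributeError either way. A minimal illustration with stand-in classes (these are hypothetical mock classes, not the real langchain or llama_index types):

```python
class LangChainStyleChatModel:
    """Stand-in for a LangChain chat model: no .complete() method."""
    def predict(self, text: str) -> str:  # hypothetical method name for illustration
        return f"answer to: {text}"

class LlamaIndexStyleLLM:
    """Stand-in for a LlamaIndex LLM: implements the .complete() interface."""
    def complete(self, prompt: str) -> str:
        return f"answer to: {prompt}"

def generate_pairs(llm):
    # Mirrors common.py line 80: response = llm.complete(query)
    return llm.complete("Generate questions")

# The LangChain-style object lacks .complete, so common.py raises AttributeError.
print(hasattr(LangChainStyleChatModel(), "complete"))  # False
print(generate_pairs(LlamaIndexStyleLLM()))
```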
yes, thank you both!