
Hey llama gang, I noticed something very strange

At a glance
Hey llama gang. I noticed something very strange today. I was using a ComposableGraph to query against a set of docs generated from a BeautifulSoupWebReader data loader. Mind you, previously (like two days ago) I was getting beautiful results. Now my results are seriously dumbed down, and I can't fathom why. The docs have not changed, and I tried playing with my versions of langchain and llama to no luck. I'm going to leave an example here of the same question asked a few days apart; note the bottom result is the "dumber" version. Also some code snippets.


Plain Text
index1 = GPTSimpleVectorIndex.from_documents(documents)

Plain Text
graph = ComposableGraph.from_indices(GPTListIndex, [index1], index_summaries=[index1_summary])


Example of a query:
Plain Text
query_configs = [
    {
        "index_struct_type": "tree",
        "query_mode": "embedding",
        "query_kwargs": {
            "child_branch_factor": 5
        }
    }
]

response = graph.query("Provide a detailed answer to the following question based on the context. Never use the word context. If you don't know, say I don't know. What is a Filecoin storage provider?", query_configs=query_configs)
print(response)
Attachments
Screen_Shot_2023-04-03_at_9.07.39_PM.jpg
Screen_Shot_2023-04-03_at_9.07.13_PM.jpg
Screen_Shot_2023-04-03_at_9.07.32_PM.jpg
4 comments
I wonder if OpenAI "updated" their model? I see you are using a tree index, and I'm fairly sure nothing has changed recently with it πŸ€”

Your graph only has one index in it. I know it shouldn't matter, but does anything improve if you query the tree index directly? index1.query(..., child_branch_factor=5, mode="embedding")
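Spelling that suggestion out as a minimal sketch, assuming the llama_index 0.5-era API used in the snippets above and an OPENAI_API_KEY in the environment. Note that child_branch_factor is normally a tree-index knob; for a GPTSimpleVectorIndex the analogous dial is similarity_top_k, so both are shown here:

```python
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# Hypothetical docs folder standing in for the BeautifulSoupWebReader output.
documents = SimpleDirectoryReader("data").load_data()
index1 = GPTSimpleVectorIndex.from_documents(documents)

# Bypass the graph and query the vector index directly in embedding mode.
response = index1.query(
    "What is a Filecoin storage provider?",
    mode="embedding",
    similarity_top_k=5,  # vector-index analogue of child_branch_factor=5
)
print(response)
```

If the direct query gives the same degraded answer, the graph layer is unlikely to be the culprit.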
@Logan M thanks for the quick response. We tested this on multiple data sets and notebooks. The only logical explanation I can think of is that OpenAI "updated" their model. Correct me if I'm wrong, but llama defaults to
Plain Text
text-davinci-003
. Is that correct? I'm wondering if we can get around this by specifying a newer model.
Yes, that is the default! You could try text-davinci-002, or maybe gpt-3.5-turbo or gpt-4.
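Specifying a model explicitly looks roughly like this. A sketch, assuming the llama_index 0.5-era LLMPredictor/ServiceContext API and langchain's ChatOpenAI wrapper (names per those versions; requires an OPENAI_API_KEY). Pinning the model doesn't stop OpenAI from updating it behind the same name, but it at least makes the choice visible in the code:

```python
from langchain.chat_models import ChatOpenAI
from llama_index import (
    GPTSimpleVectorIndex,
    LLMPredictor,
    ServiceContext,
    SimpleDirectoryReader,
)

# Pin an explicit model instead of relying on the text-davinci-003 default;
# temperature=0 also reduces run-to-run variation in answers.
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# Rebuild the index with the pinned model, then query as before.
documents = SimpleDirectoryReader("data").load_data()  # hypothetical docs folder
index1 = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
```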
I wish openai was better about managing versions of their LLMs 😦