Find answers from the community

Kaveen
Joined September 25, 2024
I'm assuming it's API instability, but some anecdotal backup would have me more at ease
80 comments
With the example here:
https://github.com/run-llama/llama_index/blob/main/docs/examples/agent/openai_assistant_agent.ipynb
we load an OpenAI assistant agent from a file so that the built-in retriever is used, but how do we add files after the agent is created?
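Something like the following is what I'd expect to work, going straight at the underlying OpenAI client rather than the LlamaIndex wrapper (a sketch against the v1 Assistants API; the assistant_id is a placeholder for whatever id the agent was created with, and newer SDK versions moved file attachment to vector stores):
Python
# Sketch: attach another file to an existing assistant via the raw OpenAI
# client (v1 Assistants API); assistant_id is a placeholder.
from openai import OpenAI

client = OpenAI()
uploaded = client.files.create(file=open("new_doc.pdf", "rb"), purpose="assistants")
client.beta.assistants.files.create(
    assistant_id="asst_...",  # placeholder: the existing agent's assistant id
    file_id=uploaded.id,
)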
16 comments
Does ConversationSummaryBufferMemory work with the new changes?
I have something like
Plain Text
llm = OpenAI(model=model, temperature=0)
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model=model))

memory = ConversationSummaryBufferMemory(
    memory_key="memory",
    return_messages=True,
    llm=llm,
    max_token_limit=29000 if "gpt-4" in model else 7500,
)

But I can't run this; I get the following error:
Plain Text
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/discord/commands/core.py", line 124, in wrapped
    ret = await coro(arg)
  File "/usr/local/lib/python3.9/dist-packages/discord/commands/core.py", line 978, in _invoke
    await self.callback(self.cog, ctx, **kwargs)
  File "/home/kaveen/GPTDiscord/cogs/commands.py", line 755, in talk
    await self.index_cog.index_chat_command(ctx, model)
  File "/home/kaveen/GPTDiscord/cogs/index_service_cog.py", line 212, in index_chat_command
    await self.index_handler.start_index_chat(ctx, model)
  File "/home/kaveen/GPTDiscord/models/index_model.py", line 488, in start_index_chat
    memory = ConversationSummaryBufferMemory(
  File "/usr/local/lib/python3.9/dist-packages/langchain/load/serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File "/usr/local/lib/python3.9/dist-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ConversationSummaryBufferMemory
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
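My guess is that the OpenAI being passed in is llama_index's LLM class rather than langchain's, so pydantic rejects it as not being a langchain BaseLanguageModel. Assuming that's the cause, something like this should satisfy the validator:
Python
# Sketch of the fix, assuming the error comes from passing llama_index's
# OpenAI wrapper where langchain expects one of its own LLMs.
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory

lc_llm = ChatOpenAI(model_name=model, temperature=0)  # a langchain LLM
memory = ConversationSummaryBufferMemory(
    memory_key="memory",
    return_messages=True,
    llm=lc_llm,
    max_token_limit=29000 if "gpt-4" in model else 7500,
)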
18 comments
And to add on to this, the new native async support seems much worse than running a sync call with use_async=True inside an executor. I'm currently using the new async in a Discord bot, and it blocks for the entire duration of the query: if another bot command comes in while a query is running, nothing yields to it. With an executor and use_async, execution pauses in a proper async style and new work gets to run (see the sketch below).
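The executor workaround, as a minimal sketch (index and prompt are placeholders for an already-built llama_index index and the user's query):
Python
# Push the blocking sync query onto a thread executor so the bot's event
# loop stays responsive; use_async=True still parallelizes internal calls.
import asyncio
from functools import partial

async def run_query(index, prompt: str):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(
        None, partial(index.query, prompt, use_async=True)
    )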
7 comments
Is it a known issue that cost analysis doesn't work with aquery? Am I doing something wrong?
https://github.com/jerryjliu/gpt_index/issues/705
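For reference, this is the flow I mean, sketched against the gpt_index API of the time (documents is a placeholder; the sync query path updates the counter, aquery seemingly doesn't):
Python
# Sketch: token-usage bookkeeping around aquery. With index.query the
# predictor's last_token_usage updates; with aquery it appears stale.
import asyncio
from gpt_index import GPTTreeIndex, LLMPredictor

llm_predictor = LLMPredictor()
index = GPTTreeIndex(documents, llm_predictor=llm_predictor)  # documents: placeholder

async def main():
    await index.aquery("What does the document say about X?")
    print(llm_predictor.last_token_usage)  # expected to update, but doesn't

asyncio.run(main())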
10 comments
This would be awesome; async support on query would help a lot, especially with tree index queries.
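For instance, with a hypothetical aquery method, several slow tree-index queries could overlap instead of running back to back (sketch; index is an already-built index):
Python
# Sketch of what native async would enable: tree-index queries mostly wait
# on LLM calls, so gathering them overlaps that network time.
import asyncio

async def query_all(index, questions):
    return await asyncio.gather(*(index.aquery(q) for q in questions))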
2 comments
If I use a Pinecone index, is there any way to make it use only embeddings when querying with a GPTSimpleVectorIndex? My issue is the following: I want to embed chunks of text and store those embeddings in a Pinecone index, but I can't upload the plaintext to Pinecone, so the text will be encrypted when upserted alongside the vector. That will break things, especially querying, for all index types, right?
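Roughly the setup I mean, as a sketch outside llama_index (names and the key handling are placeholders; the point is that Pinecone only ever sees ciphertext, and retrieved chunks are decrypted client-side):
Python
# Sketch: store only vectors plus encrypted text in Pinecone, decrypt after
# retrieval. pinecone_index is an initialized pinecone.Index; the Fernet
# key would be persisted securely in practice.
from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())  # placeholder key management

def upsert_chunk(pinecone_index, chunk_id, text, embedding):
    ciphertext = fernet.encrypt(text.encode()).decode()
    # Plaintext never reaches Pinecone; only ciphertext rides along as metadata.
    pinecone_index.upsert([(chunk_id, embedding, {"text_enc": ciphertext})])

def fetch_texts(pinecone_index, query_embedding, top_k=5):
    res = pinecone_index.query(vector=query_embedding, top_k=top_k, include_metadata=True)
    # Decrypt locally so downstream answer synthesis sees the original chunks.
    return [
        fernet.decrypt(m["metadata"]["text_enc"].encode()).decode()
        for m in res["matches"]
    ]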
5 comments
I also just wanted to start a discussion about the new ChatGPT LLM predictor. Even with temperature 0 it seems unreliable for use in gpt-index's query pipelines. What's the plan for this in the future? https://github.com/jerryjliu/gpt_index/issues/590 Is this something that others have noticed too? Are there any things I can change (q&a prompt, etc.) that might help?
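For concreteness, this is the kind of q&a prompt override I mean (sketch against the gpt_index API of the time; the prompt wording is just illustrative and index is already built):
Python
# Sketch: pass a stricter custom Q&A prompt at query time.
from gpt_index.prompts.prompts import QuestionAnswerPrompt

QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using only this context and no prior knowledge, "
    "answer the question: {query_str}\n"
)
qa_prompt = QuestionAnswerPrompt(QA_TMPL)
response = index.query("my question", text_qa_template=qa_prompt)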
11 comments
Will mock functionality be added to the GPTKnowledgeGraphIndex in the future? Passing an embedding mock and an LLM mock into it fails.
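What I mean by passing mocks is the cost-prediction pattern that works for other index types (sketch; whether GPTKnowledgeGraphIndex accepts these is exactly what fails, and documents is a placeholder):
Python
# Sketch: the mock predictor/embedding pattern used for cost prediction
# elsewhere in llama_index; this is what errors for the knowledge graph index.
from llama_index import (
    GPTKnowledgeGraphIndex,
    MockEmbedding,
    MockLLMPredictor,
    ServiceContext,
)

service_context = ServiceContext.from_defaults(
    llm_predictor=MockLLMPredictor(max_tokens=256),
    embed_model=MockEmbedding(embed_dim=1536),
)
index = GPTKnowledgeGraphIndex.from_documents(
    documents, service_context=service_context  # documents: placeholder
)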
9 comments