shawtyisaten
Joined September 25, 2024
any ideas?
1 comment
is it just me, or does PineconeVectorStore not work when used with SimpleComposableMemory and an agent?

getting this error msg:

Plain Text
pinecone.core.openapi.shared.exceptions.NotFoundException: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Date': 'Tue, 15 Oct 2024 15:46:11 GMT', 'Content-Type': 'application/json', 'Content-Length': '55', 'Connection': 'keep-alive', 'x-pinecone-request-latency-ms': '41', 'x-pinecone-request-id': '6886694976858348241', 'x-envoy-upstream-service-time': '42', 'server': 'envoy'})
HTTP response body: {"code":5,"message":"Namespace not found","details":[]}
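One likely cause (an assumption, not confirmed in the thread): Pinecone only materializes a namespace on first upsert, so querying a namespace that has never been written to can return this 404. A minimal guard sketch, simulating the shape of a `describe_index_stats()` payload rather than calling the real API:

```python
def namespace_exists(stats: dict, namespace: str) -> bool:
    # describe_index_stats() reports a "namespaces" mapping; a namespace
    # that has never received an upsert simply is not listed
    return namespace in stats.get("namespaces", {})

# simulated stats payload (hypothetical values)
stats = {"namespaces": {"default": {"vector_count": 10}}}
```

Checking this before wiring the store into SimpleComposableMemory would at least separate "empty namespace" from a genuine integration bug.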
3 comments
does UpstashChatStore need to be updated?

when i run

Plain Text
UpstashChatStore(
    redis_url=os.environ.get("UPSTASH_REDIS_URL"),
    redis_token=os.environ.get("UPSTASH_REDIS_TOKEN"),
    ttl=300,  # optional: time to live in seconds
)


i get 'UpstashChatStore' object has no attribute '__pydantic_private__'
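This kind of error typically appears in pydantic v2 when a subclass overrides `__init__` without initializing the base class's internal state. A plain-Python analogy (this is not the UpstashChatStore source, just an illustration of the failure pattern):

```python
class Base:
    def __init__(self):
        # stands in for pydantic's __pydantic_private__ bookkeeping
        self._private = {}

class Broken(Base):
    def __init__(self, url):
        self.url = url          # super().__init__() never runs

class Fixed(Base):
    def __init__(self, url):
        super().__init__()      # base state initialized first
        self.url = url
```

If the installed integration package predates your pydantic version, upgrading `llama-index-storage-chat-store-upstash` may be the actual fix.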
1 comment
when I run the query on the primary and secondary memory sources separately, I get:

Plain Text
>>> composable_memory.primary_memory.get("my cpa")
[ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='You are a helpful marketing assistant', additional_kwargs={}), ChatMessage(role=<MessageRole.USER: 'user'>, content='my cpa is $350', additional_kwargs={}), ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, content='Are you looking for ways to reduce your CPA, or do you need help with analyzing or understanding this metric further?', additional_kwargs={})]

>>> composable_memory.secondary_memory_sources[0].get("what is my cpa")
[ChatMessage(role=<MessageRole.USER: 'user'>, content='my cpa is $350', additional_kwargs={}), ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, content='Are you looking for ways to reduce your CPA, or do you need help with analyzing or understanding this metric further?', additional_kwargs={})]
1 comment
guys i think there's a bug in the llama-index-llms-portkey package

when i run this notebook, https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/llm/portkey.ipynb

I get an error msg
Plain Text
PydanticUserError: `Portkey` is not fully defined; you should define `Modes`, then call `Portkey.model_rebuild()`.

For further information visit https://errors.pydantic.dev/2.9/u/class-not-fully-defined
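That error is pydantic's forward-reference problem: `Portkey` annotates a field with `Modes` before that type is defined, and `model_rebuild()` re-resolves the annotations once it exists. A stdlib analogy using `typing.get_type_hints` (the class names here are stand-ins, not the real Portkey internals):

```python
from typing import get_type_hints

class Portkey:           # hypothetical stand-in for the real class
    mode: "Modes"        # forward reference; Modes is not defined yet

# resolving the hints before Modes exists fails,
# much like the PydanticUserError above
try:
    get_type_hints(Portkey, globalns={})
    resolved_early = True
except NameError:
    resolved_early = False

class Modes:             # define the missing type ...
    pass

# ... then re-resolve; pydantic's Portkey.model_rebuild() is the analogue
hints = get_type_hints(Portkey, globalns={"Modes": Modes})
```

If the notebook fails out of the box, the package likely needs a release that calls `model_rebuild()` itself; reporting it on the repo seems reasonable.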
5 comments
curious, but is there a benefit to relying on LlamaIndex's agent to pick the tool versus just using OpenAI's Assistants API, or even just asking GPT-4 to pick one?
5 comments
Hi, I've built a tool_indexer like below, but it picks the wrong tool when I ask it to generate images.
Plain Text
tools_metadata = [
      {
        "query_engine": self.vector_query_engine,
        "metadata": ToolMetadata(
          name="user_uploaded_documents",
          description="use this tool when asked about specific documents uploaded by the user. Do not worry if the file does not mention the name of the file.",
        )
      },
      {
        "fn": generate_images,
        "description": (
          "generate_images(prompt: str, n: int) -> str\n\nOnly use this tool to create images or pictures upon user request. "
          "Useful for generating images"
        ),
        "fn_schema": DalleSchemaModel
      },
      {
        "fn": search_with_bing,
        "description": (
          "search_function(query: str) -> str\n\nUse this tool to retrieve real-time and up-to-date information to best answer a user query. "
          "This includes, but is not limited to, topics such as current events, weather updates, stock market data, and any other information that is subject to frequent changes"
        ),
        "fn_schema": BingSearchModel
      },
    ]
    tools = []
    for tool in tools_metadata:
      if "query_engine" in tool:
        tools.append(QueryEngineTool(query_engine=tool["query_engine"], metadata=tool["metadata"]))
      else:
        tools.append(FunctionTool.from_defaults(fn=tool["fn"], description=tool["description"], fn_schema=tool["fn_schema"]))

    tool_mapping = SimpleToolNodeMapping.from_objects(tools)
    tool_index = ObjectIndex.from_objects(
      tools,
      tool_mapping,
      VectorStoreIndex,
    )
    tool_retriever = tool_index.as_retriever(similarity_top_k=1)
    picked_tool = tool_retriever.retrieve(query)[0]
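One way to debug retrieval-based tool picking is to look at how much vocabulary the descriptions share with the query. A toy overlap scorer (a deliberately naive stand-in for the embedding similarity the ObjectIndex actually uses) shows why distinctive, non-overlapping wording in descriptions matters:

```python
def overlap(query: str, description: str) -> int:
    # naive token overlap; embedding similarity is the real mechanism,
    # but shared vocabulary drives mix-ups in both
    return len(set(query.lower().split()) & set(description.lower().split()))

def pick_tool(query: str, tools: dict) -> str:
    # pick the tool whose description best matches the query
    return max(tools, key=lambda name: overlap(query, tools[name]))

# trimmed versions of the descriptions above (hypothetical wording)
tools = {
    "user_uploaded_documents": "use this tool when asked about documents uploaded by the user",
    "generate_images": "create images or pictures upon user request",
}
```

With `similarity_top_k=1` there is no second chance, so phrasing like "do not worry if the file does not mention..." in one description can leak generic tokens that attract unrelated queries; keeping each description short and distinctive usually helps.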
3 comments
anyone have any ideas on this?
5 comments
i'm seeing that when I use the ReActAgent with a vector db, it can take up to 1 minute to come up with an answer to a complex query. Is there anything we can do to boost the speed?
6 comments
anyone know if it's possible to pass in the metadata from the retrieved documents into context_str?
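If the default prompt doesn't surface what you need, one option is to assemble the context string yourself from the retrieved nodes. A hedged sketch, using plain dicts as stand-ins for LlamaIndex node objects:

```python
def format_context(nodes) -> str:
    # prefix each retrieved chunk with its metadata before handing the
    # combined string to the LLM; the node shape here is hypothetical
    parts = []
    for node in nodes:
        meta = ", ".join(f"{k}: {v}" for k, v in node["metadata"].items())
        parts.append(f"[{meta}]\n{node['text']}")
    return "\n\n".join(parts)

nodes = [
    {"text": "Q3 revenue grew 12%.",
     "metadata": {"file_name": "report.pdf", "page": 4}},
]
```

As far as I know, LlamaIndex also injects node metadata into the LLM-visible text by default unless keys are excluded (see `excluded_llm_metadata_keys` on nodes), so check that before rolling your own formatting.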
13 comments
did all agents move to llama_index.core.agent? does llama_index.agent not exist anymore?
6 comments
is there a way to assign the chat store to an agent after initializing the agent? something like agent.chat_store = redis_chat_store
1 comment
what's the best way to go about fixing the ValueError: Could not parse output raised by reasoning_step by the ReActAgent?
3 comments
do chat agents have memory stored by default, or do we have to store the chat history separately and provide it to the agent?
1 comment
with chat agents like the ReActAgent, what is the difference between the .query and .chat methods?
1 comment
Hello, I'm getting an intermittent exception raised by agent/react/step.py for this message_content: Thought: blah blah.
it seems to be raised by https://github.com/run-llama/llama_index/blob/main/llama_index/agent/react/output_parser.py#L101 because the content doesn't enter any of the if cases. any idea how we can handle the case when message_content contains only a Thought:?
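A possible workaround is a more tolerant parser that accepts a bare `Thought:` instead of raising. This is a hypothetical fallback, not the library's parser; it degrades a lone thought into a final answer rather than failing the step:

```python
import re

def parse_react(output: str) -> dict:
    # tolerant ReAct-style parse: Answer wins, then Action, then a bare
    # Thought is treated as the final answer instead of raising
    thought = re.search(r"Thought:\s*(.*)", output)
    action = re.search(r"Action:\s*(\w+)", output)
    answer = re.search(r"Answer:\s*(.*)", output, re.DOTALL)
    if answer:
        return {"type": "answer", "text": answer.group(1).strip()}
    if action:
        return {"type": "action", "tool": action.group(1)}
    if thought:
        # degrade gracefully: lone thought becomes the answer
        return {"type": "answer", "text": thought.group(1).strip()}
    raise ValueError(f"Could not parse output: {output!r}")
```

Whether the agent exposes a hook to swap the parser in depends on the version you're running, so check the ReActAgent constructor signature before relying on this.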
3 comments
was looking to finetune an adapter as detailed here, https://docs.llamaindex.ai/en/stable/examples/finetuning/embeddings/finetune_embedding_adapter.html but it seems the hit rate is lower than if we were to use the ada-002 model. doesn't a higher hit rate mean the model is better at retrieving the right documents?
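Yes, a higher hit rate generally does mean better retrieval: it is the fraction of evaluation queries whose known-relevant document appears in the retrieved top-k. A minimal sketch of the metric (my own illustration, not the notebook's evaluation code):

```python
def hit_rate(retrieved: list, relevant: list) -> float:
    # retrieved: per-query lists of retrieved doc ids (top-k)
    # relevant:  per-query gold doc id; a "hit" means the gold id
    #            appears anywhere in that query's retrieved list
    hits = sum(1 for got, want in zip(retrieved, relevant) if want in got)
    return hits / len(relevant)
```

So if the finetuned adapter scores below ada-002 on this metric, the adapter is indeed retrieving the right document less often on that eval set; a small or mismatched training set is a common reason.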
14 comments