Find answers from the community

maybe goats dont exist
Offline, last seen 2 months ago
Joined September 25, 2024
This issue is specific to the Anthropic SDK, I think.
1 comment
Is there a way to filter by score using a VectorIndexRetriever?
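
One common approach (a minimal sketch, assuming llama-index's SimilarityPostprocessor API) is to retrieve first and filter the scored nodes afterwards:
Plain Text
from llama_index.core.postprocessor import SimilarityPostprocessor
from llama_index.core.retrievers import VectorIndexRetriever

retriever = VectorIndexRetriever(index=index, similarity_top_k=10)
nodes = retriever.retrieve("my query")

# drop anything below the cutoff; 0.7 is an arbitrary example value
filtered = SimilarityPostprocessor(similarity_cutoff=0.7).postprocess_nodes(nodes)
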
7 comments
Plain Text
from llama_index.core import PromptTemplate, get_response_synthesizer

custom_qa_prompt = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)
custom_qa_template = PromptTemplate(custom_qa_prompt)
response_synthesizer = get_response_synthesizer(
    llm=LLM_INSTANCES[model], text_qa_template=custom_qa_template
)


Is this the correct way to pass the response synthesizer a template?
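
For reference, a minimal sketch of wiring that synthesizer into a query engine (retriever here is assumed to exist already):
Plain Text
from llama_index.core.query_engine import RetrieverQueryEngine

query_engine = RetrieverQueryEngine(
    retriever=retriever,  # any existing retriever
    response_synthesizer=response_synthesizer,
)
response = query_engine.query("What does the context say?")
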
7 comments
Any idea why the final response of the model would get cut off like this?
Plain Text
**************************************************
** Response: **
assistant: Thought: I need to use a tool to help me answer the question.

Action: query_engine_tool
Action Input: {"input": "My friend Harry Potter, who is not the same as the character from the Harry Potter book series, is sometimes rude to me. How can I let him know that I don't like his rude behavior in a respectful way?"}

Observation: Here are some tips for letting your friend Harry know that you don't appreciate his rude behavior in a respectful manner:

1. Talk to him privately. Find a time when you can speak with Harry one-on-one in a calm environment without distractions. This will help keep the conversation focused and prevent embarrassment on either side.

2. Use "I" statements. Frame your concerns in terms of how his actions make you feel, rather than accusing or blaming him. For example, say something like "I feel disrespected when you speak to me that way" instead of "You're being rude."

3. Give specific examples. Provide concrete instances of the rude behavior so Harry understands exactly what you're talking about. Try to recall a particular conversation or interaction.

4. Explain the impact. Let him know how his rudeness affects you and your friendship. You could say something like "When you say those things, it really hurts my feelings and makes me not want to hang out."

5. Set boundaries. Clearly communicate that you value your friendship but won't tolerate being treated rudely. Set expectations for respectful communication going forward.

6. Listen to his perspective. There may be reasons behind Harry's behavior that you're unaware of. Give him a chance to share his side and respond to your concerns. 

7. Offer your support. Let Harry know that you care about him and your friendship. If he's going through something difficult that's causing him to lash out, offer to lend an ear or help however you can.

The key is to approach the conversation with empathy and respect, while still being assertive about how his actions impact you. Hopefully, by having an honest dialogue, you and Harry can improve your communication and strengthen your friendship.

Thought: The tool provided a thorough and helpful answer to the question. I don't think I need any additional information to respond to the original query.

Answer: Here are some tips for letting your friend Harry know

Is there a context window issue here or something? I am using Claude Opus.
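
Truncation like this is often the completion-token cap rather than the context window; a minimal sketch of raising it, assuming llama-index's Anthropic integration:
Plain Text
from llama_index.llms.anthropic import Anthropic

# max_tokens bounds the output length; the default is fairly small
llm = Anthropic(model="claude-3-opus-20240229", max_tokens=2048)
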
11 comments
Hey there, having an issue with Claude 3 where, in /core/chat_engine/types.py:
Plain Text
            if not self._aqueue.empty() or not self._is_done:
                try:
                    delta = await asyncio.wait_for(self._aqueue.get(), timeout=0.1)
                except asyncio.TimeoutError:
                    if self._is_done:
                        break
                    continue
                self._unformatted_response += delta
                yield delta

at the line
self._unformatted_response += delta
delta is None, causing the generator to throw.

Can I add an is-not-None check to fix it? Have you run into this at all?
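
A minimal sketch of that guard (skip empty deltas instead of raising):
Plain Text
if delta is not None:
    self._unformatted_response += delta
    yield delta
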
1 comment
Claude 3 coming soon?
9 comments
Having an issue migrating to 0.10: for some reason, even though I'm passing the embedding model to my query engines and retrievers, it's still complaining that I don't have an OpenAI key set, as it's trying to use that as a default.

This is OK because I'm not using OpenAI, but is there a better way to set this up? I saw something about Settings but I'm not sure. Quite annoying!
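
The 0.10 way to change the defaults is the global Settings object; a minimal sketch, assuming a local embedding model:
Plain Text
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# set once at startup; anything not given an explicit model falls back to these
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.llm = my_llm  # your non-OpenAI LLM instance
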
21 comments
Plain Text
return OpenAIAgent.from_tools(
    tools=[query_engine_tool],
    llm=get_default_llm(),
    chat_history=history,
)

If I define the LLM here in the tool, will it use it for the reasoning, but I can use another LLM for the final output?

I find coding models and GPT-4 do great at the tool usage and such, but sometimes I want to have the final generation done by a different model.
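
One way to approximate that split today (a sketch; reasoning_llm and generation_llm are hypothetical names for your two model instances):
Plain Text
from llama_index.agent.openai import OpenAIAgent
from llama_index.core.tools import QueryEngineTool

# the generation model answers inside the tool...
query_engine = index.as_query_engine(llm=generation_llm)
query_engine_tool = QueryEngineTool.from_defaults(query_engine=query_engine)

# ...while the reasoning model drives the agent loop
agent = OpenAIAgent.from_tools(
    tools=[query_engine_tool],
    llm=reasoning_llm,
    chat_history=history,
)
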
17 comments
Has anyone had an issue with the chat engine always receiving the user's query twice? I am only passing an empty chat history and a message, but the .chat() function gets the following:
Plain Text
[ChatMessage(role=<MessageRole.USER: 'user'>, content='test\n', additional_kwargs={}), ChatMessage(role=<MessageRole.USER: 'user'>, content='test\n', additional_kwargs={})]
62 comments
When streaming a chat with a ReAct agent I get some strange results:
Plain Text
def get_react_agent(
    vector_store: PineconeVectorStore,
    history: List[ChatMessage],
    user_id: str,
    model: ChatLLM,
) -> ReActAgent:
    query_engine = get_query_engine(
        user_id=user_id, vector_store=vector_store, model=model
    )
    query_engine_tool = QueryEngineTool.from_defaults(query_engine=query_engine)

    return ReActAgent.from_tools(
        tools=[query_engine_tool],
        llm=model.value,
        chat_history=history,
    )
           
agent = get_react_agent(
        vector_store=get_pinecone_vector_store(),
        history=history,
        user_id=user_id,
        model=model,
    )
# Valid types based on looking at source code
response = agent.stream_chat(message)  # pyright: ignore

... generator websocket push code ...

print(response.response)
# Output:
# <function_calls>
# <invoke>
# <tool_name>query_engine_tool</tool_name>
# <parameters>
#   "input": "hello"
# </parameters>
# </invoke>
# </function_calls>
#
# Thought: I need to use the query_engine_tool to help me understand and respond to the user's input.
#
# Action: query_engine_tool
#
# Action Input: {"input": "hello"}
15 comments
Trying to figure out why, even though I'm passing show_progress=False to my vector store index, index.insert_nodes still shows a progress bar. Do I need to provide it to that function as well?
3 comments
NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: tools', 'type': 'invalid_request_error', 'param': None, 'code': None}}

When trying to use the default OpenAI agent with Azure gpt-4-1106-preview.
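
On Azure this error usually means the deployment's api-version predates tool support; a minimal sketch, assuming llama-index's AzureOpenAI integration (deployment and endpoint names are hypothetical):
Plain Text
from llama_index.llms.azure_openai import AzureOpenAI

llm = AzureOpenAI(
    engine="my-gpt4-deployment",  # your Azure deployment name
    model="gpt-4-1106-preview",
    api_version="2023-12-01-preview",  # tool calling needs a recent api-version
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_key="...",
)
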
12 comments
Do y'all have any tips on improving file ingestion speed? I'm only using a node parser and embeddings, but large files are still quite slow.
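
Two knobs that often help (a minimal sketch; docs and embed_model are assumed to exist, and actual speedups depend on your setup):
Plain Text
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter

pipeline = IngestionPipeline(transformations=[SentenceSplitter(), embed_model])

# parallelize parsing/embedding across worker processes
nodes = pipeline.run(documents=docs, num_workers=4)

# larger embedding batches also cut API round-trips, e.g.
# OpenAIEmbedding(embed_batch_size=100)
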
14 comments
The chat engine "best" mode uses OpenAI function calling. If I'm using Azure, is there some setup I need to do to allow that to work?

I am getting an OpenAI bad request error as is:

openai.BadRequestError: Error code: 400 - {'error': {'message': 'Unrecognized request arguments supplied: tool_choice, tools', 'type': 'invalid_request_error', 'param': None, 'code': None}}
3 comments
Is there a way to pass a chat history into a chat engine?
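
A minimal sketch, assuming the high-level chat engine API (index is assumed to exist):
Plain Text
from llama_index.core.llms import ChatMessage

history = [
    ChatMessage(role="user", content="hi"),
    ChatMessage(role="assistant", content="hello! how can I help?"),
]
chat_engine = index.as_chat_engine(chat_mode="condense_plus_context")
response = chat_engine.chat("what did I just say?", chat_history=history)
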
3 comments
I'm running an ingestion pipeline on a document, but it's not producing any nodes. I checked the document and it seems to have proper content and be set up properly; any reason why this might be the case?
38 comments
Is there a way to load individual S3 files into a Document without downloading them locally first?
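
A minimal sketch that reads the object body straight into a Document, assuming boto3 and a text file (bucket and key names are hypothetical):
Plain Text
import boto3
from llama_index.core import Document

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="docs/file.txt")
doc = Document(text=obj["Body"].read().decode("utf-8"))
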
3 comments
Is there a way for me to easily get the retrieved nodes / source data when querying?
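
Responses carry their sources; a minimal sketch:
Plain Text
response = query_engine.query("my question")
for node_with_score in response.source_nodes:
    print(node_with_score.score)               # similarity score
    print(node_with_score.node.get_content())  # retrieved chunk text
    print(node_with_score.node.metadata)       # e.g. file name, page number
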
2 comments
Is there a way to easily add additional logging, to see what chunks are retrieved, etc.?
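
Two common options (a minimal sketch): the stdlib logger at DEBUG level, or the built-in debug callback handler:
Plain Text
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

# or, for structured per-event traces:
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

Settings.callback_manager = CallbackManager([LlamaDebugHandler(print_trace_on_end=True)])
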
4 comments
Is there a way for me to access the query engine / user message from within an OpenAI agent tool?
Plain Text
from moviepy.editor import VideoFileClip
from llama_index.core.tools import FunctionTool

def create_gif_from_timestamps(video_path: str, output_path: str, start_time: str, end_time: str) -> str:
    """
    Create a GIF from an MP4 video using the specified timestamps.

    Parameters:
    - video_path: The path to the input MP4 video file.
    - output_path: The path where the output GIF file will be saved.
    - start_time: The start timestamp in the format "HH:MM:SS" indicating the beginning of the GIF.
    - end_time: The end timestamp in the format "HH:MM:SS" indicating the end of the GIF.

    Returns:
    - The path to the generated GIF file.
    """
    clip = VideoFileClip(video_path).subclip(start_time, end_time)
    clip.write_gif(output_path)
    return output_path

gif_tool = FunctionTool.from_defaults(fn=create_gif_from_timestamps)

I want to be able to query the index, grab a document, use that document to get the file path / timestamps from its metadata, then make a GIF and return it.
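
One pattern is to close over the query engine when building the tool, so the tool can do the retrieval itself (a hypothetical sketch; the metadata keys are assumptions about what you stored at ingestion):
Plain Text
def make_gif_tool(query_engine):
    def gif_for_question(question: str) -> str:
        """Retrieve the most relevant clip and render it as a GIF."""
        response = query_engine.query(question)
        meta = response.source_nodes[0].node.metadata
        # "file_path", "start_time", "end_time" are hypothetical metadata keys
        return create_gif_from_timestamps(
            meta["file_path"], "output.gif", meta["start_time"], meta["end_time"]
        )
    return FunctionTool.from_defaults(fn=gif_for_question)
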
3 comments
Seems like yes, with Whisper. Is there a good way to make local indexing idempotent, i.e. if I add a new file to the folder and run ingestion, it only updates that one file rather than all of them?
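
Attaching a docstore to the ingestion pipeline enables exactly this kind of dedup (a minimal sketch; unchanged documents are skipped by id and hash):
Plain Text
from llama_index.core import SimpleDirectoryReader
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.storage.docstore import SimpleDocumentStore

docs = SimpleDirectoryReader("./data", filename_as_id=True).load_data()
pipeline = IngestionPipeline(
    transformations=[SentenceSplitter()],
    docstore=SimpleDocumentStore(),  # tracks doc ids and hashes across runs
)
nodes = pipeline.run(documents=docs)    # only new/changed docs produce nodes
pipeline.persist("./pipeline_storage")  # reload next run so state carries over
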
11 comments
Is Gemini Ultra actually Gemini Ultra? I thought that was unreleased.
3 comments
@Logan M I am trying to hook OpenTelemetry into the tracing callback handler, but fetching the context / getting the current span isn't working. Is there a way to pass the context in or to get it, from your knowledge?
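
A possible workaround is a small custom handler that opens and closes spans explicitly instead of relying on ambient OTel context (a sketch, assuming the BaseCallbackHandler interface):
Plain Text
from opentelemetry import trace
from llama_index.core.callbacks.base_handler import BaseCallbackHandler

tracer = trace.get_tracer("llama-index")

class OtelCallbackHandler(BaseCallbackHandler):
    """Opens one OTel span per llama-index event, keyed by event_id."""

    def __init__(self) -> None:
        super().__init__(event_starts_to_ignore=[], event_ends_to_ignore=[])
        self._spans = {}

    def on_event_start(self, event_type, payload=None, event_id="", parent_id="", **kwargs):
        self._spans[event_id] = tracer.start_span(str(event_type))
        return event_id

    def on_event_end(self, event_type, payload=None, event_id="", **kwargs):
        span = self._spans.pop(event_id, None)
        if span is not None:
            span.end()

    def start_trace(self, trace_id=None):
        pass

    def end_trace(self, trace_id=None, trace_map=None):
        pass
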
39 comments
Can I somehow pass image_urls into an agent that's using a multimodal model?
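
As far as I know agents don't accept image URLs directly; a minimal sketch of calling the multimodal LLM itself, assuming llama-index's OpenAI multimodal integration:
Plain Text
from llama_index.core.schema import ImageDocument
from llama_index.multi_modal_llms.openai import OpenAIMultiModal

mm_llm = OpenAIMultiModal(model="gpt-4-vision-preview")
response = mm_llm.complete(
    prompt="Describe this image.",
    image_documents=[ImageDocument(image_url="https://example.com/cat.png")],
)
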
3 comments
@Logan M do you think I could contribute this change, where I add an optional reasoning LLM that handles the reasoning portion and a generation LLM that does the final generation?
4 comments