Find answers from the community

cmosguy
Offline, last seen 2 weeks ago
Joined September 25, 2024
Has anyone started building an MCP server for LlamaIndex yet?

https://modelcontextprotocol.io/docs/first-server/python
1 comment
Hey @Logan M, so I am doing this tutorial:

https://github.com/run-llama/llamacloud-demo/blob/main/examples/report_generation/report_generation.ipynb

When I get to the cell that runs this:

Plain Text
ret = await agent.run(
    input="Tell me about the top-level assets and liabilities for Tesla in 2021, and compare it against those of Apple in 2021. Which company is doing better?"
)


I kept getting a too-many-tokens error for gpt-4o. Then, when I attempt to repeat the steps, I get:

Plain Text
Running step prepare_chat_history
Step prepare_chat_history produced event InputEvent
Running step handle_llm_input
Step handle_llm_input produced event ReportGenerationEvent
Running step generate_report
Step generate_report produced event StopEvent
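
One hedged mitigation for the token overflow, independent of the notebook's internals: trim the chat history with a token-aware memory buffer before each run. ChatMemoryBuffer is real llama-index API, but the token_limit value and its relevance to this workflow are assumptions on my part:

Plain Text
from llama_index.core.llms import ChatMessage
from llama_index.core.memory import ChatMemoryBuffer

# Hypothetical: trim an over-long history before handing it to the agent.
memory = ChatMemoryBuffer.from_defaults(token_limit=16000)  # illustrative limit
memory.put(ChatMessage(role="user", content="Tell me about the top-level assets..."))
trimmed_history = memory.get()  # returns only the most recent messages that fit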
6 comments
cmosguy · Video

@Logan M nice video, man: https://youtu.be/wuuO04j4jPc?si=t0d4cjARP5sab7B7 You should put the links in a comment on the video, btw. Also, what was that screen-capture system you were using to record the video and zoom in?
2 comments
cmosguy · Outputs

So, is LlamaIndex capable of handling OpenAI's structured outputs update? https://openai.com/index/introducing-structured-outputs-in-the-api/
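
For context, LlamaIndex can already coerce an LLM's output into a Pydantic schema; whether it uses OpenAI's strict structured-output mode under the hood depends on the library version. A minimal sketch, with a made-up Invoice schema:

Plain Text
from pydantic import BaseModel
from llama_index.core import PromptTemplate
from llama_index.llms.openai import OpenAI

class Invoice(BaseModel):
    # Hypothetical example schema, just for illustration.
    vendor: str
    total: float

llm = OpenAI(model="gpt-4o-2024-08-06")  # illustrative model choice
invoice = llm.structured_predict(
    Invoice,
    PromptTemplate("Extract the invoice fields from: {text}"),
    text="ACME Corp billed $1,200.00",
)
print(invoice.vendor, invoice.total)  # a validated Invoice instance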
2 comments
Hey, I am trying to use the SummaryIndex mechanism for the agent tool.

I have:
Plain Text
summary_index = SummaryIndex(nodes)
summary_index.storage_context.persist(os.path.join(persist_dir, "summary_index"))

summary_query_engine = summary_index.as_query_engine(
    llm=self.model,
    response_mode="tree_summarize",
    use_async=True,
)
summary_tool = QueryEngineTool.from_defaults(
    name=f"summary_tool_{class_name}",
    query_engine=summary_query_engine,
    description=f"Useful for summarization questions related to {class_name}",
)

I cannot figure out how to persist both the vector_index and the summary_index on disk so I do not have to regenerate them. How do you recommend I do that?

Also, how do I check that the summary mechanism is even working? summary_index.summary is None, which tells me something is off. Is the summary text generated and stored somewhere, by any chance?
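
A sketch of one way to persist both, assuming default local storage (the subdirectory paths are illustrative): give each index its own persist directory, then reload with load_index_from_storage instead of rebuilding:

Plain Text
import os
from llama_index.core import StorageContext, load_index_from_storage

# Persist each index to its own subdirectory.
vector_index.storage_context.persist(os.path.join(persist_dir, "vector_index"))
summary_index.storage_context.persist(os.path.join(persist_dir, "summary_index"))

# Later: rebuild the storage contexts and reload without re-indexing.
vector_index = load_index_from_storage(
    StorageContext.from_defaults(persist_dir=os.path.join(persist_dir, "vector_index"))
)
summary_index = load_index_from_storage(
    StorageContext.from_defaults(persist_dir=os.path.join(persist_dir, "summary_index"))
)

As far as I can tell, SummaryIndex does not pre-compute any summary text: summary_index.summary is an optional label you can set yourself, and with response_mode="tree_summarize" the actual summary is generated at query time, so None there is expected.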
4 comments
Hey, the author of this article gave you a shout-out: https://www.llamaindex.ai/blog/unlocking-the-3rd-dimension-for-generative-ai-part-1. Do you know how he generated all the example code possibilities for the API in his project? When he indexed the example code, did he compute the embeddings on the explanations only and place the code in the metadata?
1 comment
cmosguy · Text

Basically, I processed a bunch of text into a much more understandable structure and stored it as a tree of text data and attributes.
1 comment
cmosguy · Paper

I watched the DeepLearning.AI short course you guys just released. I noticed that you create a tool for each paper. My question is: why use this approach instead of creating one index over a bunch of papers?
3 comments
Hey, how are you? I am working on a side-project chatbot to interact with one of my ancestors' memoirs. The original was a PDF, and the system that digitized the text introduced errors. I was curious whether you had any thoughts on ways to clean up this text before I process the data, so it is done in a sane way.
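
One hedged option, assuming ingestion goes through an IngestionPipeline: a custom TransformComponent that applies regex fixes for common digitization errors before chunking. The patterns here are illustrative, not a complete cleanup pass:

Plain Text
import re
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.schema import TransformComponent

class OCRCleaner(TransformComponent):
    """Illustrative cleanup pass; extend the patterns for your scans."""
    def __call__(self, nodes, **kwargs):
        for node in nodes:
            text = node.text
            text = re.sub(r"-\n(\w)", r"\1", text)  # re-join hyphenated line breaks
            text = re.sub(r"[ \t]+", " ", text)     # collapse runs of whitespace
            node.text = text
        return nodes

pipeline = IngestionPipeline(transformations=[OCRCleaner()])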
3 comments
cmosguy · MMR

Carrying on from this discussion: https://github.com/run-llama/llama_index/issues/10682#issuecomment-2087996422. I fundamentally have an issue where my retriever pulls a ton of duplicate data from the database. I was thinking that MMR could help. What do you think I should do? Should I spend time checking that data going into the vector store is not duplicated multiple times, or do you think I should implement MMR? Don't others have this issue? If you continually ingest data, do you really have time to check for duplicates during indexing?
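
For reference, hedged sketches of the two usual options: deduplicate at ingestion by attaching a docstore to the IngestionPipeline (it skips documents whose content hash it has already seen), or request MMR at query time to diversify near-duplicate hits. MMR support varies by vector store backend:

Plain Text
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.storage.docstore import SimpleDocumentStore

# Option 1: hash-based dedup during ingestion (needs stable doc ids).
pipeline = IngestionPipeline(
    transformations=[SentenceSplitter()],
    docstore=SimpleDocumentStore(),
)

# Option 2: MMR at query time, assuming vector_index is your VectorStoreIndex.
retriever = vector_index.as_retriever(
    vector_store_query_mode="mmr",
    similarity_top_k=10,
    vector_store_kwargs={"mmr_threshold": 0.7},  # illustrative value
)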
9 comments
cmosguy · Create llama

Is there a create-llama project generator that is all in Python, by any chance? We have built our pipelines in Python; I don't want to switch languages to TypeScript.
8 comments
Hi everyone, I was following the guide here: https://learn.deeplearning.ai/courses/building-agentic-rag-with-llamaindex/lesson/5/building-a-multi-document-agent

I have over 500 documents, which effectively becomes 2x500 tools (a vector tool and a summary tool per document). However, with 1000 tools I found there is a tool limit from OpenAI when using this strategy. How do we work around this limit with tools? I have a ton of documents and I wanted to keep this scheme if possible.
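
For reference, the usual workaround (and, if I recall the later lessons correctly, the one that course itself uses) is to index the tools and retrieve only a few per query, so the LLM never sees all 1000 at once. A sketch, assuming all_tools is the list of QueryEngineTools:

Plain Text
from llama_index.core import VectorStoreIndex
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.objects import ObjectIndex

# Index the tools themselves so only the most relevant ones reach the LLM.
obj_index = ObjectIndex.from_objects(all_tools, index_cls=VectorStoreIndex)
tool_retriever = obj_index.as_retriever(similarity_top_k=3)

agent_worker = FunctionCallingAgentWorker.from_tools(
    tool_retriever=tool_retriever,  # tools fetched per query, not all 1000
    verbose=True,
)
agent = agent_worker.as_agent()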
3 comments
You guys gotta fix this ASAP /cc @Jerry Liu
17 comments
cmosguy · Slack

Hey @Logan M There seems to be something breaking with the Arize Phoenix instrumentation. The folks at Phoenix are telling me to downgrade to llama-index<=10.19.0, which does not feel right. Can you see our conversation here? https://arize-ai.slack.com/archives/C04R3GXC8HK/p1713806586565339?thread_ts=1713530589.312499&cid=C04R3GXC8HK

Basically, the issue is that when I start doing things like this:
Plain Text
set_global_handler("arize_phoenix")
query_engine = vector_index.as_query_engine(
    llm=llm,
    similarity_top_k=20,
    node_postprocessors=[reranker],
    refine_template=PromptTemplate(prompt_str),
    # text_qa_template=PromptTemplate(prompt_str),
)
output = query_engine.query("my query")

the grouping of the spans starts breaking. Do you have any thoughts here?
17 comments
cmosguy · ReAct

Hey @Logan M, hope you are well. I'm trying to learn the ReAct agent. It takes in a query engine as a tool. However, is there a way to feed in the DAG of a query pipeline instead? The query pipeline has a .run method, but the agent's query engine tool expects .query. I cannot figure out how to connect the two. Thoughts, or examples somewhere?
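
One hedged way to bridge them, assuming query_pipeline is an assembled QueryPipeline and llm is your LLM: wrap the pipeline's .run in a FunctionTool, which a ReAct agent accepts like any query engine tool. The tool name and description here are made up:

Plain Text
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool

def run_pipeline(query: str) -> str:
    """Run the query pipeline DAG and return its output as text."""
    # The kwarg name depends on your pipeline's entry module.
    return str(query_pipeline.run(input=query))

pipeline_tool = FunctionTool.from_defaults(
    fn=run_pipeline,
    name="query_pipeline_tool",  # hypothetical name
    description="Answers questions using the custom query pipeline.",
)
agent = ReActAgent.from_tools([pipeline_tool], llm=llm, verbose=True)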
2 comments
cmosguy · Metadata

Hey @Logan M, I have a bunch of English explanations of our code base. Instead of storing embeddings of the code itself, I store the English explanation of the code and embed that. At the same time, I store the code snippets as metadata in the Document, under the metadata key "code". My question: how do I retrieve the corresponding code snippets from that metadata along with the explanation text, and include them in the context when synthesizing the LLM prompt?
Thanks!
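
For what it's worth, node metadata is injected into the LLM-visible text by default unless the key is excluded, so the main knobs look like this. A sketch; the "code" key matches the setup described above:

Plain Text
from llama_index.core import Document
from llama_index.core.schema import MetadataMode

doc = Document(
    text="Explanation of the function ...",
    metadata={"code": "def add(a, b):\n    return a + b"},
    # Keep "code" out of the embedding text but leave it visible to the LLM.
    excluded_embed_metadata_keys=["code"],
)

# What the embedding model sees (no code) vs. what the LLM sees (code included):
print(doc.get_content(metadata_mode=MetadataMode.EMBED))
print(doc.get_content(metadata_mode=MetadataMode.LLM))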
7 comments
@kapa.ai I have a bunch of text in a vector store, but the text is mapped to code that I store in the metadata as part of the document. When I run the retriever during a query, is there a way to also pull the metadata into the context-window prompt for the synthesizer?
3 comments
@kapa.ai I’ve made a document summary index with embeddings, but it is returning a ton of distractors. What can I do to mitigate the issue?
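
A common hedged mitigation, regardless of index type: over-retrieve, then rerank so distractors are dropped before synthesis. A sketch assuming doc_summary_index is the index, with an illustrative cross-encoder model:

Plain Text
from llama_index.core.postprocessor import SentenceTransformerRerank

reranker = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-6-v2",  # illustrative choice
    top_n=4,
)
query_engine = doc_summary_index.as_query_engine(
    similarity_top_k=12,             # over-retrieve...
    node_postprocessors=[reranker],  # ...then keep only the best 4
)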
26 comments
@kapa.ai I have a document summary index and a vector store index. How do I search both indexes in a single query?
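
One hedged option: give each index a retriever, fuse them with QueryFusionRetriever, and put a query engine on top. Variable names are assumed:

Plain Text
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.retrievers import QueryFusionRetriever

fusion_retriever = QueryFusionRetriever(
    [
        vector_index.as_retriever(similarity_top_k=4),
        doc_summary_index.as_retriever(similarity_top_k=4),
    ],
    similarity_top_k=4,  # final number of nodes after fusion
    num_queries=1,       # disable query rewriting; just fuse the two result sets
)
query_engine = RetrieverQueryEngine.from_args(fusion_retriever)
response = query_engine.query("my question")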
9 comments
@kapa.ai how do I save a document summary index that uses Chroma DB, and then how do I reload it from the storage context?
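
A sketch of the usual split, assuming Chroma holds the vectors while the docstore and index store persist to disk (paths and the collection name are illustrative):

Plain Text
import chromadb
from llama_index.core import StorageContext, load_index_from_storage
from llama_index.vector_stores.chroma import ChromaVectorStore

# Chroma persists its own data; LlamaIndex persists the rest alongside it.
client = chromadb.PersistentClient(path="./chroma_db")
vector_store = ChromaVectorStore(
    chroma_collection=client.get_or_create_collection("summaries")
)

# After building the index, persist the non-vector stores:
index.storage_context.persist(persist_dir="./storage")

# On reload, re-attach the same Chroma collection:
storage_context = StorageContext.from_defaults(
    vector_store=vector_store, persist_dir="./storage"
)
index = load_index_from_storage(storage_context)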
43 comments
@Logan M I'm finding that a ton of "example" links go to GitHub pages that no longer exist.
7 comments
@kapa.ai I’m trying to update the nodes of a document that is stored in the chroma db. How do I do that with an ingestion pipeline?
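
For reference, a hedged sketch: if the pipeline has both a docstore and the Chroma vector store attached, and your documents keep stable doc_ids, re-running it upserts changed documents instead of duplicating them:

Plain Text
from llama_index.core.ingestion import DocstoreStrategy, IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.storage.docstore import SimpleDocumentStore

pipeline = IngestionPipeline(
    transformations=[SentenceSplitter()],
    docstore=SimpleDocumentStore(),
    vector_store=vector_store,  # your ChromaVectorStore
    docstore_strategy=DocstoreStrategy.UPSERTS,
)
# Documents whose doc_id matches but whose content hash changed are
# re-processed, and their old nodes are replaced in Chroma.
pipeline.run(documents=updated_documents)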
15 comments
Hey guys, I have a metadata attribute called html_source, which is a long string. How do I apply a metadata filter that matches a substring of html_source, without it having to be an exact-match filter?
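
A hedged sketch: MetadataFilters has a TEXT_MATCH operator for substring-style matching, though whether it is honored depends on the vector store backend:

Plain Text
from llama_index.core.vector_stores import (
    FilterOperator,
    MetadataFilter,
    MetadataFilters,
)

filters = MetadataFilters(
    filters=[
        MetadataFilter(
            key="html_source",
            value="some-substring",  # illustrative value
            operator=FilterOperator.TEXT_MATCH,
        )
    ]
)
retriever = index.as_retriever(filters=filters)  # index is assumed to exist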
4 comments