0xDrTuna
Joined September 25, 2024
Hello everyone, has anyone had any success using the Neo4j property graph index as a chat engine? I have this code, and it sort of works, but the citations I usually get with a pgvector implementation are gone. Am I doing this right?
My code:
Plain Text
    from llama_index.core import PropertyGraphIndex
    from llama_index.graph_stores.neo4j import Neo4jPropertyGraphStore

    logger.info("Connecting to index from Neo4j DB...")
    graph_store = Neo4jPropertyGraphStore(
        username="neo4j",
        password="password",
        url="bolt://localhost:7687",
    )
    index = PropertyGraphIndex.from_existing(
        property_graph_store=graph_store,
    )
    logger.info("Finished connecting to index from Neo4j DB.")
    return index.as_chat_engine(
        similarity_top_k=app_config.TOP_K,
        # system_prompt=system_prompt,
        chat_mode="context",
        # response_mode="tree_summarize",
        # vector_store_query_mode=app_config.VECTOR_STORE_QUERY_MODE,
    )
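In case it helps anyone reading: LlamaIndex chat responses generally expose the retrieved chunks on `response.source_nodes`, so citations can be rebuilt from there when the retriever returns any. A minimal sketch of that formatting step, using stand-in objects rather than a real chat engine (field names mirror typical node metadata, but your documents may differ):

```python
from dataclasses import dataclass, field

# Stand-ins for a LlamaIndex chat response and its NodeWithScore entries;
# the real objects carry node metadata (e.g. file_name, page_label) and a score.
@dataclass
class FakeSourceNode:
    metadata: dict
    score: float

@dataclass
class FakeChatResponse:
    response: str
    source_nodes: list = field(default_factory=list)

def format_citations(chat_response) -> list[str]:
    """Build human-readable citation strings from response.source_nodes."""
    citations = []
    for i, node in enumerate(chat_response.source_nodes, start=1):
        name = node.metadata.get("file_name", "unknown source")
        page = node.metadata.get("page_label", "?")
        citations.append(f"[{i}] {name}, p. {page} (score={node.score:.2f})")
    return citations

resp = FakeChatResponse(
    response="Neo4j stores nodes and relationships.",
    source_nodes=[FakeSourceNode({"file_name": "graphs.pdf", "page_label": "3"}, 0.87)],
)
print(format_citations(resp))  # one citation line per retrieved node
```

If `source_nodes` comes back empty with the property graph retriever, that's a separate retrieval issue, but when nodes are present this is one way to surface them.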
3 comments
0xDrTuna · Workflows
What's the best way to try them out? Can I boot up a create-llama app with them?
2 comments
Hey all! I use create-llama to scaffold a new LlamaIndex app with FastAPI. I'm interested in the new PDF viewer for citations. However, it doesn't seem to work in my case, because it expects a URL in the response, but my data source is a "data" folder of PDFs. So it forwards the filepath on my Linux machine to the frontend, which fails to render the PDF. Has anyone got it to work? How would I set this up so I can host the PDFs online and have them be visible? I have tried using the db reader but I don't seem to understand how to make it work properly. Any guides or walkthroughs? Thank you
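One workaround: serve the data folder behind a static file route and rewrite the local path into a URL before it reaches the frontend, so the viewer always gets something it can load. A minimal sketch of the rewrite step (the base URL and folder name here are assumptions for illustration):

```python
from pathlib import PurePosixPath

def filepath_to_url(local_path: str, data_dir: str = "data",
                    base_url: str = "https://example.com/files") -> str:
    """Rewrite a server-side file path into a public URL the PDF viewer can load."""
    parts = PurePosixPath(local_path).parts
    if data_dir not in parts:
        raise ValueError(f"{local_path!r} is not under the {data_dir!r} folder")
    # Keep everything after the data dir so subfolders survive the rewrite.
    relative = PurePosixPath(*parts[parts.index(data_dir) + 1:])
    return f"{base_url}/{relative}"

print(filepath_to_url("/home/me/app/data/recipes/cake.pdf"))
# → https://example.com/files/recipes/cake.pdf
```

The same mapping can be applied to node metadata before the response is streamed, so the frontend only ever sees URLs.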
7 comments
0xDrTuna · Sec-insights
Hey folks! I've been using the sec-insights repo and noticed it's built on LlamaIndex 0.9.7. Has anyone gone through the process of upgrading it to the newer way of doing things in LlamaIndex? How long did it take you?
2 comments
From what I gather, there's a small delay between a feature appearing in Python and it being ported to TypeScript.
2 comments
Hey guys! I'm running a routes/events.py event generator that sends tool-call events to the frontend with data about each tool call. I'm using the ReAct agent as the chat engine.
When I do anything that involves more than one tool, my system breaks. It seems to stem from the fact that no matter how many tools I call, only the event data for the first tool called gets sent:

Plain Text
""
8:[{"type": "events", "data": {"title": "Calling tool: recipe_lookup_thought with inputs: {'recipe_name': 'Carrot Cake'}"}}]
8:[{"type": "events", "data": {"title": "Calling tool: recipe_id_lookup_thought with inputs: {'recipe_id': 'bc495a05-bd5b-451e-bdfc-88eea3fccfe8'}"}}]
8:[{"type": "events", "data": {"title": "Calling tool: recipe_card_tool with inputs: {'recipeId': 'bc495a05-bd5b-451e-bdfc-88eea3fccfe8"}}]
0:" The"
8:[{"type": "tools", "data": {"toolOutput": {"output": "The recipeId for carrot cake is bc495a05-bd5b-451e-bdfc-88eea3fccfe8", "isError": false}, "toolCall": {"id": null, "name": "recipe_lookup_thought", "input": {"args": [], "kwargs": {"food_name": "CarrotCake"}}}}}]
0:" recipe"
0:" for"


However, my desired outcome is to have several 8: tool-type event lines - has anyone faced this issue before? Any tips on how to get more granular control over the ReAct agent flow besides modifying the ReAct prompt template?
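Not sure of the root cause, but one pattern that helps reason about it: accumulate every tool-call event in order and serialize each one as its own `8:` line, rather than re-emitting whatever was captured first. A toy sketch of that serialization step (the event shape is copied from the dump above):

```python
import json

def serialize_tool_events(events: list[dict]) -> list[str]:
    """Emit one `8:` data line per tool-call event, in call order."""
    lines = []
    for event in events:
        payload = [{"type": "events", "data": event}]
        lines.append("8:" + json.dumps(payload))
    return lines

calls = [
    {"title": "Calling tool: recipe_lookup_thought with inputs: {'recipe_name': 'Carrot Cake'}"},
    {"title": "Calling tool: recipe_card_tool with inputs: {'recipeId': '...'}"},
]
for line in serialize_tool_events(calls):
    print(line)  # two separate 8: lines, one per tool call
```

If only one line ever shows up on the wire with this in place, the problem is upstream in how the agent surfaces its tool calls, not in the streaming layer.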
3 comments
0xDrTuna · Postgres
@Logan M Have you encountered this before? I've searched my old threads but can't seem to find anything.
12 comments
Has anyone run into this? It keeps happening to me and it's very frustrating - for some reason the frontend just stops outputting the AI's response mid-message. I know from the logs that it did receive the full response, but the console keeps saying Unexpected token U.
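For what it's worth, `Unexpected token U` is usually `JSON.parse` choking on an unquoted chunk (often a raw `Unauthorized` or `undefined` string arriving where JSON was expected). If the backend streams text deltas, each one needs to be JSON-encoded before going on the wire. A minimal sketch, using the `0:` line prefix seen in the stream dumps elsewhere on this page:

```python
import json

def text_delta_line(chunk: str) -> str:
    """Wrap a raw text delta as a parseable `0:"..."` stream line."""
    # json.dumps adds the surrounding quotes and escapes any quotes or
    # newlines inside the chunk, so the client's JSON.parse never sees
    # a bare token.
    return "0:" + json.dumps(chunk)

print(text_delta_line('The "best" recipe\nis carrot cake'))
# the embedded quotes and newline arrive escaped, not raw
```

If a proxy or error handler ever writes plain text (like an auth error page) into the same stream, the parser dies at the first unquoted character, which matches the mid-message cutoff described here.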
5 comments
Hey folks! I'm using LlamaIndex with pgvector. I'm using text-embedding-3-large with an embedding size of 1536.
When using the vector store and asking a question, I get this error:
sqlalchemy.exc.StatementError: (builtins.ValueError) expected 1536 dimensions, not 3072
This is odd because embedding with 3072 dimensions doesn't work with pgvector due to Postgres's max dimensions. I'm assuming there must be a dimension size hardcoded somewhere? Has anyone had a similar issue?
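In case it's useful: text-embedding-3-large defaults to 3072 dimensions, so the 1536 size only applies if the `dimensions` setting is actually passed through to the embed model. If it isn't, 3072-dim vectors get generated against a 1536-dim vector column, which matches this error exactly. A small pure-Python guard that catches the mismatch before it hits SQL (the table size here is an assumption):

```python
def check_embedding_dim(embedding: list[float], table_dim: int = 1536) -> list[float]:
    """Fail fast with a clear message when the model and table dims disagree."""
    if len(embedding) != table_dim:
        raise ValueError(
            f"expected {table_dim} dimensions, got {len(embedding)}; "
            "check that the embed model's dimensions setting matches embed_dim"
        )
    return embedding

check_embedding_dim([0.0] * 1536)      # passes silently
# check_embedding_dim([0.0] * 3072)    # would raise the same mismatch as the SQL error
```

Worth double-checking both the embed model config and the vector store's embed_dim, since the two are configured separately.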
9 comments
Hello friends. I want to build a dashboard that lets me interact with an LLM via a website. I want to tell it my ingredients and have it use RAG over a list of PDF recipes stored in a folder. I want the LLM to list the top 3 recipes I can make with my ingredients, with URL links to the PDFs. Here's the key part: I want the dashboard to automatically open the recipe PDF that the LLM has cited. How would you approach this last step? Build a custom tool for it and give it to the agent?
2 comments
Hey. Has anyone had any luck using GoogleDriveReader? I've used it and it can access Google Docs, but the PDF reader implementation doesn't seem to work for me. I'm thinking of combining the drive reader with LlamaParse - does anyone have any good ideas on this?
4 comments
Hey, has anyone had luck getting the GoogleDriveReader to work?
2 comments
Hey folks. What's the latest version of Python that I can safely use LlamaIndex with?
2 comments