Find answers from the community

isra_j
Joined September 25, 2024
Please clear up this confusion: are OpenAI tokens consumed when creating embeddings of PDF files?
1 comment
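Short answer to the question above: yes — OpenAI embedding models are billed per input token, so embedding the text extracted from PDFs consumes tokens just like chat completions do. A rough pre-flight estimate can be sketched in plain TypeScript; note that the 4-characters-per-token ratio is a common rough heuristic (not an exact tokenizer) and the price constant is a placeholder to replace with current OpenAI pricing:

```typescript
// Rough cost check before embedding. The ~4-chars-per-token ratio is a
// heuristic, not an exact tokenizer; use a real tokenizer for exact counts.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function estimateEmbeddingCostUSD(
  texts: string[],
  pricePerMillionTokens = 0.02, // placeholder value; check current pricing
): number {
  const tokens = texts.reduce((sum, t) => sum + estimateTokens(t), 0);
  return (tokens / 1_000_000) * pricePerMillionTokens;
}
```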
I'm facing a weird issue. I'm using DataStax Astra to upload my embeddings. The issue arises when I load files from a local directory containing 6 PDF files: when I check documents.length, I get 485. Why is that, and how do I solve it?
Plain Text
const dataPath = path.resolve('../documents');
const reader = new SimpleDirectoryReader();
const documents = await reader.loadData({ directoryPath: dataPath });
6 comments
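A likely cause for the count above: the directory reader's PDF loader typically produces one Document per PDF page, so 6 PDFs totalling roughly 485 pages would give documents.length === 485 — it is expected behavior, not a bug. A quick way to confirm is to count documents per source file. A minimal sketch, assuming each document carries a metadata.file_name field (the actual metadata key may differ in your version, so inspect one document first):

```typescript
// Sketch: count loaded documents per source file to confirm the
// "one Document per PDF page" hypothesis. The `file_name` metadata key
// is an assumption for illustration.
interface LoadedDoc {
  metadata: { file_name: string };
}

function countDocsPerFile(documents: LoadedDoc[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const doc of documents) {
    const name = doc.metadata.file_name;
    counts.set(name, (counts.get(name) ?? 0) + 1);
  }
  return counts;
}
```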
Can someone guide me on how to connect LlamaIndex with Pinecone? I'm unable to find any code in JavaScript; all the available code is in Python.
18 comments
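While I can't vouch for a specific LlamaIndex-with-Pinecone JavaScript wiring here, it may help to see what the vector store does once it is connected: a top-k similarity query over stored vectors. Below is a minimal in-memory sketch of that query in TypeScript — purely illustrative; a real setup would go through the official @pinecone-database/pinecone client rather than an in-memory array:

```typescript
// Minimal sketch of the top-k cosine-similarity query a vector store like
// Pinecone performs under the hood. Illustrative only.
interface StoredVector {
  id: string;
  values: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(store: StoredVector[], query: number[], k: number): StoredVector[] {
  return [...store]
    .sort((x, y) => cosine(y.values, query) - cosine(x.values, query))
    .slice(0, k);
}
```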
isra_j · Agent

I built a system using LlamaIndex to answer questions about pet products (food, treats, medicine) from my list. It works great for those items, but if someone asks about something not in my list, I just get a "not found" message.

Ideally, I'd like a more conversational AI that can:

Search the web for info on products not in my list.
Provide general info on the user's query.
Avoid "not found" errors for missing items.
Would a ReAct agent be a good option for this, or are there other suggestions?
2 comments
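A ReAct-style agent with two tools — the product index plus a web-search tool — is a reasonable fit for the wish list above. The routing logic it needs can be sketched independently of any agent framework: answer from the index when retrieval is confident, otherwise fall back to web search instead of returning "not found". Both tool callbacks and the 0.75 threshold below are hypothetical stand-ins, not a real LlamaIndex or agent API:

```typescript
// Sketch of the fallback pattern: use the product index when retrieval
// scores well, otherwise hand the query to a web-search tool. The tool
// functions are hypothetical stand-ins injected by the caller.
interface Retrieved {
  text: string;
  score: number; // similarity score in [0, 1]
}

async function answerWithFallback(
  query: string,
  searchIndex: (q: string) => Promise<Retrieved[]>,
  searchWeb: (q: string) => Promise<string>,
  minScore = 0.75, // assumed threshold; tune against your data
): Promise<string> {
  const hits = await searchIndex(query);
  if (hits.length > 0 && hits[0].score >= minScore) {
    return hits[0].text; // confident match from the product index
  }
  return searchWeb(query); // fall back instead of "not found"
}
```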
I have a document of 700 pages. The issue is that when I try to create embeddings for that file and upload them to my vector store (in my case DataStax Astra), I get this error:
"error": "Error: Command "insertMany" failed with the following errors: [{"message":"Document size limitation violated: indexed String value (property 'content') length (17900 bytes) exceeds maximum allowed (8000 bytes)","errorCode":"SHREDDOC_LIMIT_VIOLATION"}]"
Is there any way I can split each document? Please help, I'm new to LlamaIndex.TS.
I guess I have to use a text splitter, or what?
6 comments
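Yes — a text splitter (as guessed above) is the usual fix: the Astra error says a single document's content field exceeds the 8000-byte indexed-string limit, so documents must be chunked before insertMany. The byte-budget logic can be sketched in plain TypeScript; this whitespace-only splitter is purely illustrative, and a real pipeline would use a proper sentence/token splitter instead:

```typescript
// Sketch: split text into chunks that stay under a byte budget.
// maxBytes defaults below Astra's 8000-byte limit to leave headroom.
// Limitation: a single word longer than maxBytes is emitted as-is.
function splitByBytes(text: string, maxBytes = 7000): string[] {
  const encoder = new TextEncoder();
  const words = text.split(/\s+/).filter((w) => w.length > 0);
  const chunks: string[] = [];
  let current = "";
  for (const word of words) {
    const candidate = current.length === 0 ? word : current + " " + word;
    if (encoder.encode(candidate).length > maxBytes && current.length > 0) {
      chunks.push(current); // current chunk is full; start a new one
      current = word;
    } else {
      current = candidate;
    }
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```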
I want to throw a custom error message if the query does not return relevant information. I'm using the TypeScript package of LlamaIndex.
Here is my code:
const index = await VectorStoreIndex.fromVectorStore(astraVS, ctx);
const dynamicPrompt = createPrompt(query, LlamaContext);
const retriever = await index.asRetriever({ similarityTopK: 3 });
const queryEngine = await index.asQueryEngine({ retriever });
const result = await queryEngine.query({ query: dynamicPrompt });
1 comment
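One way to get that behavior: retrieve first, check the best node's similarity score against a threshold, and throw a custom error before ever calling the query engine. A sketch, where NodeWithScore is a simplified stand-in for the shape LlamaIndex.TS retrievers return and the 0.7 threshold is an assumption to tune:

```typescript
// Sketch: reject queries whose best retrieved node scores below a
// relevance threshold, instead of letting the LLM answer from weak context.
class NoRelevantContextError extends Error {
  constructor(query: string) {
    super(`No relevant information found for query: ${query}`);
    this.name = "NoRelevantContextError";
  }
}

interface NodeWithScore {
  score?: number; // similarity score; may be absent on some retrievers
}

function assertRelevant(query: string, nodes: NodeWithScore[], minScore = 0.7): void {
  const best = nodes[0]?.score ?? 0;
  if (nodes.length === 0 || best < minScore) {
    throw new NoRelevantContextError(query);
  }
}
```

Call assertRelevant on the retriever's results before handing the query to the query engine, and catch NoRelevantContextError at the API layer to return the custom message.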