Find answers from the community

Milkman
Joined September 25, 2024
When I try to import SimpleDirectoryReader using this line: from llama_index.core import SimpleDirectoryReader, it somehow gives me this error: ImportError: llama-index-readers-file package not found. I tried uninstalling but still face the same issue.
4 comments
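In recent llama-index releases, SimpleDirectoryReader pulls in the separate llama-index-readers-file package, so the usual fix is installing that extra rather than uninstalling. A minimal sketch, assuming the new package layout (./data is a placeholder directory):

    # pip install llama-index llama-index-readers-file
    from llama_index.core import SimpleDirectoryReader

    documents = SimpleDirectoryReader("./data").load_data()
    print(len(documents))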
Solved by defining embed_model and the service context, but got a new problem. When I call pack.run, it says: ValueError: Only one free input key is allowed.
8 comments
I'm an Azure OpenAI user, and it seems fairly hard now to add the embedding model to my service context definition. Is there an example that I can follow?
2 comments
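A rough sketch of one way to do it, assuming a llama-index version that still uses ServiceContext; the import path, deployment name, endpoint, and API version below are placeholders rather than exact values:

    from llama_index import ServiceContext
    from llama_index.embeddings import AzureOpenAIEmbedding  # path differs across versions

    embed_model = AzureOpenAIEmbedding(
        model="text-embedding-ada-002",
        deployment_name="my-embedding-deployment",  # placeholder Azure deployment
        api_key="<azure-api-key>",
        azure_endpoint="https://my-resource.openai.azure.com/",
        api_version="2023-07-01-preview",
    )
    service_context = ServiceContext.from_defaults(embed_model=embed_model)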
I'm trying to use AzureOpenAI and the latest gpt-4-1106-preview model to extract tables from PDFs. I was able to set up the AzureOpenAI object and run the complete method for a completion. However, when I try to create a VectorStoreIndex, it fails and gives me the error AttributeError: module 'openai' has no attribute 'error'. Has anyone run into this issue before?
7 comments
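A likely cause is a version mismatch: openai 1.x removed the openai.error module that older llama-index releases still reference. A hedged way to check and align the two (the exact version pins are illustrative):

    import openai
    print(openai.__version__)  # >= 1.0 means openai.error no longer exists

    # Then either upgrade llama-index to a release built for openai>=1.0:
    #   pip install -U llama-index
    # or pin openai back for an older llama-index:
    #   pip install "openai<1.0"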
I'm using Milvus Vector Store as my backend. If I'm using the VectorStoreIndex implementation from LlamaIndex, how can I better utilize the metadata I fetched from the nodes?
2 comments
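One hedged way to put node metadata to work at query time is metadata filtering on the retriever; the import path and the "category" key are assumptions for illustration, and index stands for your existing Milvus-backed VectorStoreIndex:

    from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

    filters = MetadataFilters(filters=[ExactMatchFilter(key="category", value="finance")])
    retriever = index.as_retriever(filters=filters)  # index: existing VectorStoreIndex
    nodes = retriever.retrieve("What changed in the Q3 budget?")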
Say I don't have access to the OpenAI 0613 function calling API, but only have access to the 0315 one. How can I leverage Pydantic for more structured outputs?
6 comments
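A hedged sketch of prompt-based structured output that doesn't rely on the function calling API: a text completion program wrapping a Pydantic output parser. The import paths and the Album/Song models are illustrative assumptions.

    from pydantic import BaseModel
    from llama_index.output_parsers import PydanticOutputParser
    from llama_index.program import LLMTextCompletionProgram

    class Song(BaseModel):
        title: str
        length_seconds: int

    class Album(BaseModel):
        name: str
        songs: list[Song]

    program = LLMTextCompletionProgram.from_defaults(
        output_parser=PydanticOutputParser(output_cls=Album),
        prompt_template_str="Generate an example album inspired by {movie_name}.",
    )
    album = program(movie_name="The Shining")  # parsed into an Album instance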
I'm trying to use the SimpleDirectoryReader to read a directory of pptx files, but I'm getting OSError: cannot find loader for this WMF file. Wondering what's the fastest way for me to bypass the error and just skip the file. I think the error happens because I have images in the files that can't be extracted.
1 comment
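One hedged workaround is to load the files one at a time and skip any that raise, so a single bad WMF image doesn't abort the whole run; ./slides is a placeholder directory:

    from pathlib import Path
    from llama_index.core import SimpleDirectoryReader

    documents = []
    for path in Path("./slides").glob("*.pptx"):
        try:
            documents.extend(SimpleDirectoryReader(input_files=[str(path)]).load_data())
        except OSError as exc:
            print(f"Skipping {path}: {exc}")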
I'm using the metadata extractor to generate some metadata for documents. However, when I try to load the documents into Milvus, I get the error: TypeError: Object of type set is not JSON serializable. Fix: I'm using the entity extractor and it returns a set.
1 comment
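A hedged fix matching the note above: convert any set-valued metadata to a list before inserting the nodes into Milvus, since sets aren't JSON serializable. Here nodes stands for the nodes produced by the extractor:

    for node in nodes:
        for key, value in node.metadata.items():
            if isinstance(value, set):
                node.metadata[key] = list(value)  # lists serialize cleanly to JSON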
When using the SubQuestionQueryEngine, I keep running into KeyError: 'comparison'. I have multiple documents as query engine tools and feed them into the query_engine_tools parameter when constructing the SubQuestionQueryEngine.
14 comments
If I have the doc_id, is there a way to access the doc text using the doc_id?
4 comments
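A hedged sketch, assuming the default docstore is in use; index stands for your existing index and doc_id for the id you already have:

    doc = index.docstore.get_document(doc_id)  # raises if the id is unknown
    print(doc.get_content())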
Token indices sequence length is longer than the specified maximum sequence length for this model (1215 > 1024). Running this sequence through the model will result in indexing errors. I updated the library to the latest version and saw this error. Wondering where I could specify the length to be more than 1024.
5 comments
Say I already have a collection of tree indices. How should I compose a graph over these tree indices to handle users' questions on them? How do I generate an effective summary over them?
10 comments
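A hedged sketch using the (older) composability API: build a root index over the existing tree indices and attach one short summary per index, which is what the root uses to route questions. Import paths and names vary across versions, and the summaries are placeholders:

    from llama_index import SimpleKeywordTableIndex
    from llama_index.indices.composability import ComposableGraph

    graph = ComposableGraph.from_indices(
        SimpleKeywordTableIndex,            # root index type placed over the children
        [tree_index_a, tree_index_b],       # your existing tree indices
        index_summaries=[
            "Covers the 2022 annual report",
            "Covers the 2023 annual report",
        ],
    )
    response = graph.as_query_engine().query("Compare revenue across both years.")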
Will the choice of chunk_size_limit also affect the query performance for GPTListIndex?
6 comments
Hi all, I'm trying to compose a SimpleKeywordTableIndex over a tree index, but when I query the graph, it gives me the error: integer division or modulo by zero.
79 comments
When I tried to use the AzureOpenAI LLM as input for the OpenAI agent, it says the LLM must be an OpenAI instance. Does it not take an Azure instance as the LLM?
2 comments
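A hedged sketch, assuming a recent llama-index where its own AzureOpenAI LLM class subclasses the OpenAI LLM (which is what the agent checks for); the deployment, endpoint, and API version below are placeholders:

    from llama_index.agent import OpenAIAgent
    from llama_index.llms import AzureOpenAI

    llm = AzureOpenAI(
        engine="my-gpt4-deployment",  # placeholder Azure deployment name
        model="gpt-4",
        api_key="<azure-api-key>",
        azure_endpoint="https://my-resource.openai.azure.com/",
        api_version="2023-07-01-preview",
    )
    agent = OpenAIAgent.from_tools([], llm=llm, verbose=True)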
I have a use case where a set of documents needs to be categorized using one or a few of the tags in a pre-defined list. Will LlamaIndex be a good tool for this use case?
2 comments
I'm using the PDFReader to read in my document. Is there a way to specify how many pages you want in one chunk? I think for now, the default is to have each page as one document object. I'd like to have 5-10 pages in one document object.
6 comments
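A hedged workaround, since PDFReader returns one Document per page by default: merge every N page-documents into a single Document yourself before indexing. The import path and report.pdf are assumptions:

    from llama_index.core import Document
    from llama_index.readers.file import PDFReader

    pages = PDFReader().load_data("report.pdf")  # one Document per page
    pages_per_doc = 5
    documents = [
        Document(text="\n".join(p.get_content() for p in pages[i : i + pages_per_doc]))
        for i in range(0, len(pages), pages_per_doc)
    ]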
I was using the QA summary graph as my query engine. One question that I want to ask my document is a value-extraction question, which uses the vector store index. The response I get is incorrect, but when I went back and checked the log, I realized it got it right in the initial attempt and then messed it up with the refine template. I know this issue is kind of vague, but are there ways for us to stop refining after getting it right the first time?
1 comment
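There's no switch I know of that stops refining once the first answer looks right, but one hedged mitigation on the underlying vector store index is the compact response mode, which packs as much retrieved context as possible into each LLM call and so often avoids extra refine passes for short extraction answers; index and the question are placeholders:

    query_engine = index.as_query_engine(response_mode="compact")
    response = query_engine.query("What is the total contract value?")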
I don't see a load_from_disk method for indices anymore.
4 comments
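A hedged sketch of the persistence API that replaced save_to_disk/load_from_disk in newer releases (the import path moves to llama_index.core in the latest versions):

    from llama_index import StorageContext, load_index_from_storage

    index.storage_context.persist(persist_dir="./storage")   # save an existing index
    storage_context = StorageContext.from_defaults(persist_dir="./storage")
    index = load_index_from_storage(storage_context)          # load it back later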