Find answers from the community

datum
Joined September 25, 2024
Hey guys,

I have been using a SummaryIndex with SentenceSplitter for the nodes, but the response time when querying the index is slow. Are there any ways to speed this up?
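A hedged note on the likely cause: a summary index sends every node through the LLM at query time, so latency grows roughly linearly with node count, while a vector index only retrieves the top-k most similar nodes. A back-of-envelope sketch (node count and per-call latency below are made-up illustrative numbers):

```python
# Illustrative arithmetic only: how query cost scales for an index that
# reads every node vs. one that retrieves a fixed top-k.
num_nodes = 200        # nodes produced by SentenceSplitter (hypothetical)
per_llm_call_s = 0.5   # assumed latency of one LLM call (hypothetical)

# SummaryIndex-style query: roughly one LLM pass over every node.
summary_latency_s = num_nodes * per_llm_call_s

# VectorStoreIndex-style query: retrieve top-k nodes, then one synthesis call.
top_k = 2
vector_latency_s = (top_k + 1) * per_llm_call_s

print(summary_latency_s, vector_latency_s)  # 100.0 1.5
```

In practice the usual mitigations are switching to a VectorStoreIndex when the query only needs a few relevant chunks, or keeping the summary index but using a tree-summarize style response mode so the per-node calls can be batched.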
6 comments
datum

Hey everyone,

I am facing an issue with the KnowledgeGraphQueryEngine. Whenever I run a query through it, I consistently get this error:
Plain Text
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LLMPredictStartEvent
template
  none is not an allowed value (type=type_error.none.not_allowed)


And here's the code snippet:
Plain Text
query_engine = KnowledgeGraphQueryEngine(
    storage_context=storage_context,
    llm=llm,
    verbose=True,
)
response = query_engine.query("Tell me about Peter Quill?")


Any help will be appreciated! Thanks in advance.

P.S. I am using Neo4jGraphStore instead of NebulaGraph.
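For what it's worth, this kind of `template / none is not an allowed value` error usually means the engine's graph-query-synthesis prompt ended up as `None`, which the pydantic event model then rejects. A hedged illustration of that failure mode (the prompt table below is hypothetical, not LlamaIndex internals); if this is the cause, passing a `graph_query_synthesis_prompt` explicitly to `KnowledgeGraphQueryEngine` is the natural workaround to try:

```python
# Hedged illustration (hypothetical table, not LlamaIndex internals): if the
# engine picks a default graph-query-generation prompt per graph store and
# the configured store has no entry, the template is None and validation
# fails downstream with "none is not an allowed value".
DEFAULT_GRAPH_QUERY_PROMPTS = {
    "nebulagraph": "Generate a NebulaGraph query for: {query_str}",
}

template = DEFAULT_GRAPH_QUERY_PROMPTS.get("neo4j")  # no Neo4j entry
print(template)  # None
```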
3 comments
datum

Hey guys,

Do you have any support for ArangoDB instances? Like vectorstores, graphstores, etc.?
4 comments
How do I parse the LlamaIndex response object to obtain the document IDs, the source text, and the similarity scores of those sources against the query?
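A minimal sketch of the usual pattern: iterate `response.source_nodes`, where each entry carries a node and a similarity score. The classes below are lightweight stand-ins for the shape of LlamaIndex's `Response`/`NodeWithScore` so the sketch is self-contained; with a real query response you would drop them and call `extract_sources(response)` directly.

```python
from dataclasses import dataclass, field

# Stand-ins mirroring the shape of a query Response and its scored sources.
@dataclass
class Node:
    node_id: str
    text: str

@dataclass
class NodeWithScore:
    node: Node
    score: float

@dataclass
class Response:
    response: str
    source_nodes: list = field(default_factory=list)

def extract_sources(response):
    """Return (doc_id, source_text, similarity_score) for each source."""
    return [(s.node.node_id, s.node.text, s.score)
            for s in response.source_nodes]

resp = Response("an answer",
                [NodeWithScore(Node("doc-1", "some source chunk"), 0.87)])
print(extract_sources(resp))  # [('doc-1', 'some source chunk', 0.87)]
```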
3 comments
datum

Hey guys,

How can I use Cohere LLMs and Cohere embeddings? I've upgraded my LlamaIndex to the latest version, but I can't find Cohere embeddings, or even Hugging Face embeddings, in it.
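A hedged note: in newer LlamaIndex releases (0.10 and later) the integrations moved out of the core package into separate pip packages, which is the usual reason these classes stop being importable after an upgrade. A sketch of the likely fix (package names assume the post-0.10 layout):

```shell
pip install llama-index-llms-cohere llama-index-embeddings-cohere
pip install llama-index-embeddings-huggingface
```

After installing, the imports would look like `from llama_index.llms.cohere import Cohere` and `from llama_index.embeddings.cohere import CohereEmbedding`.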
2 comments
@kapa.ai does LlamaIndex have support for ArangoDB?
2 comments
datum

Chunk size

Hey everyone,

I am trying to follow the full-stack web app tutorial on LlamaIndex, but using a Hugging Face model. Whenever I try to run the model, this is the error I get:

Got a larger chunk overlap (-3) than chunk size (-39), should be smaller.

and here's a snippet of my code:
Plain Text
def initialize_index():
    global index

    llm_predictor = HuggingFaceLLMPredictor(
        max_input_size=512, 
        max_new_tokens=512,
        tokenizer_name="facebook/opt-iml-max-1.3b",
        model_name="facebook/opt-iml-max-1.3b",
        model_kwargs={"load_in_8bit": True},
        generate_kwargs={
            "do_sample": True,
            "top_k": 4,
            "penalty_alpha": 0.6, 
        }
    )

    prompt_helper = PromptHelper(context_window=512, chunk_size_limit=256, num_output=512)
    embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, embed_model=embed_model, 
                                                   prompt_helper=prompt_helper)

    
    if os.path.exists("../indices"):
        storage_context = StorageContext.from_defaults(persist_dir="../indices")
        index = load_index_from_storage(storage_context=storage_context, 
                                        service_context=service_context)

    else:
        storage_context = StorageContext.from_defaults()
        documents = SimpleDirectoryReader("../data").load_data()
        index = GPTListIndex.from_documents(documents=documents, service_context=service_context, storage_context=storage_context)
        
        index.set_index_id("paul_graham_essay")
        index.storage_context.persist("../indices")

    return index, service_context


Would appreciate your help in solving this error.
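A hedged note on the error itself: `PromptHelper` has to fit both the prompt and the model's output inside the same context window, so with `context_window=512` and `num_output=512` there are zero tokens left for the retrieved text, and once prompt-template overhead is subtracted the effective chunk size goes negative. The arithmetic, roughly:

```python
# Rough arithmetic behind "chunk overlap (-3) than chunk size (-39)":
context_window = 512   # from PromptHelper(context_window=512, ...)
num_output = 512       # tokens reserved for the model's generated answer

tokens_left_for_prompt = context_window - num_output
print(tokens_left_for_prompt)  # 0 -- nothing left for context chunks

# Subtracting prompt-template overhead from zero yields the negative chunk
# sizes in the error. Reserving fewer output tokens (e.g. 128), or using a
# model with a larger context window, leaves real room for chunks:
num_output = 128
print(context_window - num_output)  # 384 tokens available for context
```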
13 comments
@Logan M I'm trying to use the HuggingFaceHub from LangChain along with LlamaIndex. I tried the solution you shared on GitHub, but somehow it's not working for me. Can you please help me out?

Here's your github response:
https://github.com/jerryjliu/llama_index/issues/3290#issuecomment-1546914037
11 comments