Find answers from the community

paapi
Joined September 25, 2024
Hello, I am working with LlamaIndex to get answers from my own documents with Azure OpenAI. I am trying to find out whether LlamaIndex has functionality that can return the relevant piece of text from which it answered the query. I found get_formatted_sources(), but it doesn't provide the relevant piece of text, only the whole relevant document.
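One likely route (a sketch, not confirmed as paapi's setup; `index` is a placeholder for an already-built VectorStoreIndex with Azure OpenAI configured via Settings): response.source_nodes holds the individual retrieved chunks with their similarity scores, so it gives text at chunk granularity rather than the whole source document.

Python
# Sketch: print the chunk-level text an answer was synthesized from.
# Assumes `index` is an existing VectorStoreIndex and the Azure OpenAI
# LLM/embedding models are configured via Settings; the query string
# is a placeholder.
query_engine = index.as_query_engine()
response = query_engine.query("your question here")

for node_with_score in response.source_nodes:
    print(f"score: {node_with_score.score}")
    print(node_with_score.node.get_content())  # one retrieved chunk, not the full file
    print("=" * 60)

Chunk size is decided when documents are parsed into nodes, so a smaller chunk size in the node parser yields more focused source text.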
6 comments
Hi, I am working on RAG with LlamaIndex and ChromaDB as the vector store. While querying, I am trying to retrieve the document used to answer the query. The issue with using response.source_nodes is that even for a query like the one in the image, it still returns a document where no such thing is mentioned at all, so I am not sure how to fix this. Is there an alternative option that would work for my requirement?
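One common fix (a sketch; `index` is a placeholder for the Chroma-backed index, and the 0.75 cutoff is an assumption to tune against your own data): attach a SimilarityPostprocessor so chunks below a similarity threshold are dropped, which leaves response.source_nodes empty when nothing relevant matched.

Python
# Sketch: filter out weakly matching chunks with a similarity cutoff.
# Assumes `index` is an existing Chroma-backed VectorStoreIndex; the
# 0.75 threshold is a guess to tune for your data.
from llama_index.core.postprocessor import SimilarityPostprocessor

query_engine = index.as_query_engine(
    similarity_top_k=5,
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.75)],
)
response = query_engine.query("your question here")

if not response.source_nodes:
    print("No relevant document passed the cutoff.")
else:
    for n in response.source_nodes:
        print(n.score, n.node.metadata.get("file_name"))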
7 comments
Hey, I am using a RAG system with LlamaIndex and I am on Python 3.10. I am having this issue; can anyone help me with it?
Plain Text
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core import Settings, StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore
import chromadb
import os

 2 from llama_index.llms.azure_openai import AzureOpenAI
      3 from llama_index.llms.azure_openai import AzureOpenAI
      4 from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding

File c:\Chat-bot\venv\lib\site-packages\llama_index\llms\azure_openai\__init__.py:1
----> 1 from llama_index.llms.azure_openai.base import (
      2     AzureOpenAI,
      3     SyncAzureOpenAI,
      4     AsyncAzureOpenAI,
      5 )
      7 __all__ = ["AzureOpenAI", "SyncAzureOpenAI", "AsyncAzureOpenAI"]

File c:\Chat-bot\venv\lib\site-packages\llama_index\llms\azure_openai\base.py:4
      1 from typing import Any, Callable, Dict, Optional, Sequence
      3 import httpx
----> 4 from llama_index.core.base.llms.types import ChatMessage
      5 from llama_index.core.bridge.pydantic import Field, PrivateAttr, root_validator
      6 from llama_index.core.callbacks import CallbackManager

File c:\Chat-bot\venv\lib\site-packages\llama_index\core\__init__.py:19
     15 from llama_index.core.embeddings.mock_embed_model import MockEmbedding
...
--> 169     raise TypeError(f"Plain {arg} is not valid as type argument")
    170 if isinstance(arg, (type, TypeVar, ForwardRef, types.UnionType, ParamSpec)):
    171     return arg

TypeError: Plain typing.TypeAlias is not valid as type argument
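This TypeError is usually a version clash rather than a bug in the imports above: on Python 3.10, older typing_extensions releases reject typing.TypeAlias when newer llama-index code passes it as a type argument. A commonly suggested fix, which you would need to verify in your own venv, is upgrading the stack:

Plain Text
pip install --upgrade typing_extensions
pip install --upgrade llama-index llama-index-llms-azure-openai llama-index-embeddings-azure-openai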
7 comments
paapi

Response

Hello, I am working with LlamaIndex with ChromaDB to get a response for a query. When I get the response, I am trying to find out what document it is referring to. I tried this:

import re

if hasattr(response, 'metadata'):
    document_info = str(response.metadata)
    # [^']* (with the * quantifier) matches the whole value between the
    # quotes; without it, the pattern only matches one-character values.
    find = re.findall(r"'page_label': '[^']*', 'file_name': '[^']*'", document_info)

    print('\n' + '=' * 60 + '\n')
    print('Context Information')
    print(str(find))
    print('\n' + '=' * 60 + '\n')
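A more robust alternative (a sketch; 'file_name' and 'page_label' are the keys SimpleDirectoryReader typically sets, so verify them for your loader) is to read the metadata dict on each source node directly instead of pattern-matching the string form of response.metadata:

Python
# Sketch: pull document info from the source nodes' metadata dicts
# directly, instead of regexing over str(response.metadata).
for node_with_score in response.source_nodes:
    meta = node_with_score.node.metadata
    print('\n' + '=' * 60 + '\n')
    print('Context Information')
    print(f"file: {meta.get('file_name')}, page: {meta.get('page_label')}")
    print('\n' + '=' * 60 + '\n')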
4 comments
paapi

Sub folders

Hey, I have a question: if I want to index a files folder that has sub folders, each containing files to index, is there any way I could do that? I am kinda confused.
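SimpleDirectoryReader can do this on its own; a minimal sketch, assuming the top-level folder is ./files (a placeholder path):

Python
# Sketch: load and index every file under ./files, descending into
# sub folders via recursive=True. The path is a placeholder.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader(input_dir="./files", recursive=True).load_data()
index = VectorStoreIndex.from_documents(documents)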
3 comments