
I’m using LlamaIndex with the llm.complete() function to generate responses based on a document base. However, I want to keep track of which document corresponds to each response. What strategies can I use to achieve this, given that I want to use only the built-in features provided by LlamaIndex?

Example structure -

Prompt - "Match the following CVs with the provided JD".
JD - type(Document) - Read using SimpleDirectoryReader()
CV - type(Document) - Read using SimpleDirectoryReader()

LLM response corresponding to each CV -
CV1 - "This is a good CV ..."
CV2 - "This is not a good CV..."

In this way, I want to keep track of which response came from which CV.
You can put the filename and any other info you need in the metadata of each document. Then, when you use the document, you can pull the filename out of that metadata.

Plain Text
from llama_index import Document  # legacy import path; newer versions use llama_index.core

# You create the first document
document = Document(text="This is text 1")

# Add info as metadata on this document
document.metadata = {"key": "pair"}  # put the filename etc. here as needed

# You can access it the same way
print(document.metadata)
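If you load documents from disk, SimpleDirectoryReader can attach file info as metadata for you. A minimal sketch, assuming the legacy import path and that your version sets a "file_name" key (the exact metadata keys vary by version):

Plain Text
from llama_index import SimpleDirectoryReader

# SimpleDirectoryReader typically attaches file info such as "file_name"
# to each document's metadata automatically, so you don't have to set it by hand.
cvs = SimpleDirectoryReader("./cvs").load_data()
for cv in cvs:
    print(cv.metadata)  # e.g. a dict containing "file_name", depending on version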
The document metadata is already provided in the input. In the response object, I am not getting any document metadata.

Plain Text
from llama_index.llms import OpenAI, ChatMessage, MessageRole
from llama_index.prompts import ChatPromptTemplate

llm = OpenAI(temperature=0.1)
# service_context = ServiceContext.from_defaults(llm=llm)

hr_manager_prompt_chat = ChatPromptTemplate([
    ChatMessage(
        role=MessageRole.SYSTEM,
        content=(
            "You are the HR manager of a company. You have received a CV "
            "and a job description. You need to match the CV to the job description."
        ),
    ),
    ChatMessage(
        role=MessageRole.USER,
        content=(
            """Please match the CV to the job description.
            jd - {jd}
            cv - {cv}"""
        ),
    ),
])

# jd and cvs are lists of Documents read with SimpleDirectoryReader()
total_prompt = hr_manager_prompt_chat.format(jd=str(jd[0]), cv=str(cvs[0]))
print(total_prompt)
response = llm.complete(total_prompt)
Yeah, you won't receive any metadata from complete(), as this is calling the LLM directly.
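Since complete() just returns the raw LLM output, one option is to do the bookkeeping yourself: call complete() once per CV and key each response by the CV's metadata. A minimal sketch, assuming cvs and jd were loaded with SimpleDirectoryReader and that each CV carries a "file_name" metadata key (both are assumptions):

Plain Text
# Call complete() once per CV and key the responses by each CV's filename,
# so every response stays linked to the document it came from.
responses_by_cv = {}
for i, cv in enumerate(cvs):
    prompt = hr_manager_prompt_chat.format(jd=str(jd[0]), cv=str(cv))
    response = llm.complete(prompt)
    name = cv.metadata.get("file_name", f"cv_{i}")  # fall back to an index
    responses_by_cv[name] = response.text

for name, answer in responses_by_cv.items():
    print(name, "->", answer)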
Can you please guide me on whether there is any other way to do this?
Since you are providing the document, format the prompt in such a way that it also states which file it is using to answer, based on the metadata. That way the model can try to include it in the response.
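A rough sketch of that idea, assuming a "file_name" metadata key (adjust to whatever keys your documents actually carry):

Plain Text
# Inject the CV's filename into the prompt so the model can echo it back
# in its answer; "file_name" is an assumed metadata key.
cv = cvs[0]
cv_name = cv.metadata.get("file_name", "unknown")
prompt = hr_manager_prompt_chat.format(
    jd=str(jd[0]),
    cv=f"[source file: {cv_name}]\n{cv.text}",
)
response = llm.complete(prompt)
print(cv_name, "->", response.text)

Even if the model ignores the injected filename, you still have cv_name on hand to pair with the response yourself.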