Find answers from the community

Shaun
When we set the embedding model to something from Hugging Face, I assume it will be downloaded into the $TRANSFORMERS_CACHE folder. For example,
"""
from llama_index import ServiceContext
from llama_index.embeddings import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name='BAAI/bge-large-en-v1.5')
service_context = ServiceContext.from_defaults(
    chunk_size=1024,
    chunk_overlap=256,
    llm=llm,
    embed_model=embed_model,
)
"""
I would expect to see 'BAAI/bge-large-en-v1.5' downloaded and saved in the transformers cache folder, but I don't see it there. Am I missing something?
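For anyone hitting the same thing: recent versions of huggingface_hub default to ~/.cache/huggingface/hub, controlled by the HF_HOME and HF_HUB_CACHE environment variables, and the older TRANSFORMERS_CACHE variable is deprecated, so the model may be landing there instead. Below is a minimal sketch to see where models actually went, assuming huggingface_hub is installed; scan_cache_dir() is its real cache-inspection helper.
"""
# Sketch: find where Hugging Face actually cached the model.
# The env vars below are the ones the hub consults;
# TRANSFORMERS_CACHE is the legacy, now-deprecated one.
import os
from huggingface_hub import scan_cache_dir

print("HF_HOME:", os.environ.get("HF_HOME"))
print("HF_HUB_CACHE:", os.environ.get("HF_HUB_CACHE"))
print("TRANSFORMERS_CACHE:", os.environ.get("TRANSFORMERS_CACHE"))

# Defaults to ~/.cache/huggingface/hub when none of these are set.
for repo in scan_cache_dir().repos:
    print(repo.repo_id, f"{repo.size_on_disk / 1e6:.1f} MB", repo.repo_path)
"""
If the model shows up under the hub cache, you can either export HF_HOME before launching Python or pass an explicit cache location when constructing the embedding (recent llama-index versions accept a cache_folder argument on HuggingFaceEmbedding, but treat the exact kwarg as version-dependent).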
4 comments
I have a basic RAG setup with TheBloke/Llama-2-13B-chat-GPTQ and BAAI/bge-large-en-v1.5 working on 2 PDF docs. I would like:
  1. the response to include the source document
  2. to enable continuous conversation with chat history
Can this be done with llama-index?
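Both are supported. Here is a minimal sketch, assuming a recent llama-index where responses expose source_nodes and indexes expose as_chat_engine; the "./pdfs" path is hypothetical, import paths differ between the legacy llama_index package and the newer llama_index.core, and you would still wire your GPTQ LLM and bge embeddings in via the service context or Settings before building the index.
"""
# Sketch: (1) source documents via response.source_nodes,
# (2) multi-turn chat via a chat engine with a memory buffer.
# Assumes llama-index >= 0.10 import paths (llama_index.core);
# older versions use `from llama_index import ...` instead.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.memory import ChatMemoryBuffer

# SimpleDirectoryReader records file_name in each node's metadata,
# which is what identifies the source PDF later.
documents = SimpleDirectoryReader("./pdfs").load_data()
index = VectorStoreIndex.from_documents(documents)

# 1. Source documents: every response carries the retrieved nodes.
query_engine = index.as_query_engine()
response = query_engine.query("What does the report conclude?")
for node in response.source_nodes:
    print(node.node.metadata.get("file_name"), node.score)

# 2. Chat history: "condense_plus_context" rewrites follow-up
# questions using the conversation so far, kept in the memory buffer.
memory = ChatMemoryBuffer.from_defaults(token_limit=3000)
chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context", memory=memory
)
print(chat_engine.chat("Summarize the first PDF.").response)
print(chat_engine.chat("And how does the second one differ?").response)
"""
There is also a dedicated CitationQueryEngine if you want the sources cited inline in the answer text rather than read off the source nodes.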
2 comments