I'm using llama2 70b as the LLM and "local:BAAI/bge-large-en-v1.5" as the embedding model, but all of my content is in German.
I'm not sure the embedding model is really the best choice then, since its name says "en", and I can't find anything that supports German or anything in that direction.
Also, llama2 only responds in English, even though it can respond in German when asked to, but somehow doesn't when it's run through LlamaIndex.

Any advice on improving the whole pipeline when using exclusively German data to index and query?
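Why the embedding model matters here: retrieval ranks chunks by vector similarity, so an English-only model that places a German query far from the German chunk that answers it will surface the wrong context. A toy, stdlib-only sketch of that ranking step, with made-up 3-d vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up vectors for illustration only. A multilingual embedding model
# should place a German query near the German chunk that answers it;
# an English-only model may not, so the wrong chunk wins the ranking.
query_de = [0.9, 0.1, 0.2]
chunk_relevant_de = [0.8, 0.2, 0.3]   # the chunk we want retrieved
chunk_irrelevant = [0.1, 0.9, 0.1]

scores = {
    "relevant": cosine(query_de, chunk_relevant_de),
    "irrelevant": cosine(query_de, chunk_irrelevant),
}
best = max(scores, key=scores.get)
print(best)  # with these toy vectors, the relevant chunk ranks first
```

This is only the ranking step; the actual embeddings come from whatever model you configure, which is exactly why swapping in a multilingual model (as suggested below) changes retrieval quality.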
3 comments
Yeah, if you're using open-source models it helps to find a multilingual or language-specific model. One of the main reasons I use OpenAI is that they actually handle my language well with both their LLM and embeddings.
Have you tried setting a prompt that always specifies to answer in German?
Yes, after switching to a multilingual embedding model the retrieved context was much more fitting. I'm currently trying to adjust the prompts to specify German.
this is the default prompt right?
https://github.com/run-llama/llama_index/blob/9b798f819bb0afa6dabf418f8f2db87a31125d5e/llama_index/prompts/default_prompts.py#L99
I don't want to change too much
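A minimal sketch of the "change as little as possible" approach: keep the default QA template and just prepend one German instruction. The template text below is paraphrased from llama_index's default_prompts.py and may differ by version; in llama_index you would typically wrap the final string in a prompt template object and pass it as the query engine's text_qa_template, but the string manipulation itself is all that's shown here.

```python
# Paraphrased from llama_index's default text-QA prompt; check
# default_prompts.py in your installed version for the exact wording.
DEFAULT_TEXT_QA_PROMPT = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Minimal change: prepend a single instruction instead of rewriting
# the whole template.
GERMAN_QA_PROMPT = "Antworte immer auf Deutsch.\n" + DEFAULT_TEXT_QA_PROMPT

prompt = GERMAN_QA_PROMPT.format(
    context_str="(retrieved German chunks)",
    query_str="Worum geht es in dem Dokument?",
)
print(prompt.splitlines()[0])
```

Keeping the default wording and adding one line up front makes it easy to diff against upstream when the default prompt changes.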