Updated 6 months ago

Hi everyone, I am using RetrieverEvaluator for a document in Vietnamese with a 4-bit quantized version of Llama-3-8B-Instruct:
llm = HuggingFaceLLM(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    system_prompt=system_prompt,
    query_wrapper_prompt=query_wrapper_prompt,
    context_window=8192,
    max_new_tokens=256,
    model_kwargs={
        "token": hf_token,
        "torch_dtype": torch.bfloat16,  # remove this line and keep quantization_config below to load in 4-bit
        "quantization_config": quantization_config,
    },
    generate_kwargs={
        "do_sample": True,
        "temperature": 0.1,
        "top_p": 0.3,
    },
    tokenizer_name="meta-llama/Meta-Llama-3-8B-Instruct",
    tokenizer_kwargs={"token": hf_token},
    stopping_ids=stopping_ids,
)
My run follows this example: https://docs.llamaindex.ai/en/stable/examples/evaluation/retrieval/retriever_eval/. However, when I inspect the dataset saved in eval_dataset.json, all of the generated questions are in English. Do you have any insight into why this happens, and how can I make sure the questions generated from the dataset are in Vietnamese (or any other language besides English)?
1 comment
The prompt used to generate the questions probably needs to be modified: either write the prompt itself in Vietnamese, or explicitly instruct the model to produce the questions in Vietnamese.
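As a rough sketch of that suggestion: the linked retriever_eval example builds its dataset with `generate_question_context_pairs`, whose default prompt template is written in English, which would explain the English questions. That function accepts a `qa_generate_prompt_tmpl` parameter, so passing a template that demands Vietnamese output should help. The template text below is an illustrative assumption, not the library default; it just needs to keep the `{context_str}` and `{num_questions_per_chunk}` placeholders the function fills in.

```python
# Assumed custom template for generate_question_context_pairs; it must
# retain the {context_str} and {num_questions_per_chunk} placeholders.
qa_generate_prompt_tmpl = """\
Context information is below.

---------------------
{context_str}
---------------------

Given the context information and no prior knowledge, generate \
{num_questions_per_chunk} questions based on the context. \
Write every question in Vietnamese only; do not translate to English.
"""

# Quick check that the placeholders render and the language
# instruction survives formatting:
prompt = qa_generate_prompt_tmpl.format(
    context_str="Hà Nội là thủ đô của Việt Nam.",
    num_questions_per_chunk=2,
)
print("Vietnamese" in prompt and "{context_str}" not in prompt)

# With llama-index installed and `nodes`/`llm` from your pipeline,
# the dataset would then be generated along these lines:
# from llama_index.core.evaluation import generate_question_context_pairs
# eval_dataset = generate_question_context_pairs(
#     nodes, llm=llm, num_questions_per_chunk=2,
#     qa_generate_prompt_tmpl=qa_generate_prompt_tmpl,
# )
```

If the model still occasionally slips into English at low temperatures, tightening the instruction (or writing the whole template in Vietnamese, as suggested above) is the usual next step.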