Hello again. I'm still trying to refine the behavior between Vicuna and llama_index: I can get the responses from the model, but it looks like they get lost because of the "second question".
I have this prompt template:

```python
QA_PROMPT_TMPL = (
    "### Human: Considering the following code:\n"
    "{context_str}\n"
    "{query_str}\n ### Assistant: \n"
)
```
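For completeness, the QA_PROMPT used in the query below is built from that template; here I'm assuming llama_index's `QuestionAnswerPrompt` wrapper, the usual way to supply a `text_qa_template`:

```python
from llama_index import QuestionAnswerPrompt

QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)
```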
If I print the response inside the `CustomLLM._call` method, I see this answer for the question "Who creates the code?":
- Production Machine Data Source is a data source class for vending machines that provides a set of APIs to interact with the machine. The creator of this class is "Sergio Casero" and it was created on 18/04/2023.
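For context, that print happens in my wrapper, which looks roughly like this (a minimal sketch, assuming the LangChain `LLM` base class that llama_index custom LLMs build on; `vicuna_generate` is a hypothetical stand-in for the actual Vicuna call):

```python
from typing import Any, List, Mapping, Optional

from langchain.llms.base import LLM


class CustomLLM(LLM):
    """Routes llama_index prompts to a local Vicuna model."""

    @property
    def _llm_type(self) -> str:
        return "custom"

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"model": "vicuna"}

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # vicuna_generate is a placeholder for the actual model invocation
        response = vicuna_generate(prompt)
        print(response)  # this print shows the full, correct answer
        return response
```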
So the model's answer is exactly what I want. But if I print the response from
```python
response = index.query("Who creates the code?", text_qa_template=QA_PROMPT, similarity_top_k=1)
```
I get an empty response. Any ideas?
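In case it helps, this is how I'm checking the result; I'm assuming the query returns a `Response` object whose text lives in `.response`:

```python
response = index.query("Who creates the code?", text_qa_template=QA_PROMPT, similarity_top_k=1)
print(repr(response.response))  # empty, even though _call printed the full answer
print(response.source_nodes)    # checking whether the retrieved context is at least present
```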