
Updated 6 months ago

Hey all, anyone here using `create-llama` as a starting point? I used it for a quick test of building a backend and it worked great! Now I'm looking for some advice: what's the easiest way to add the retrieved nodes and their corresponding metadata to the response? I want to maintain the streaming response, and I don't think I can modify `index.as_chat_engine(...)` directly. Should I just build a custom chat engine like this?

https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/usage_pattern/#low-level-composition-api
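One way to keep the streamed tokens while still shipping the retrieved nodes is to exhaust the token generator first and then append the node metadata as a final tagged payload. A minimal pure-Python sketch, assuming `token_gen` and `source_nodes` come from a `chat_engine.stream_chat(...)` response object; the helper name and the `__SOURCES__` tag are mine, not part of LlamaIndex:

```python
import json

def stream_with_sources(token_gen, source_nodes):
    """Yield response tokens as they arrive, then one final JSON line
    carrying the retrieved nodes' metadata.

    token_gen    -- e.g. streaming_response.response_gen from stream_chat()
    source_nodes -- e.g. streaming_response.source_nodes (NodeWithScore list)
    """
    for token in token_gen:
        yield token
    # After the text stream ends, emit the node metadata as a tagged line
    # that the frontend can split off from the answer text.
    payload = [
        {
            "score": getattr(n, "score", None),
            "metadata": getattr(getattr(n, "node", n), "metadata", {}),
        }
        for n in source_nodes
    ]
    yield "\n__SOURCES__:" + json.dumps(payload)
```

The frontend then renders tokens until it sees the `__SOURCES__` marker and parses the remainder as JSON.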
2 comments
The nodes should already be on the response object (e.g. `response.source_nodes`), but you might need to modify how the API endpoint responds in order to include them in the API response.
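For anyone else landing here: the `NodeWithScore` objects on `source_nodes` aren't directly JSON-serializable, so the endpoint has to flatten them before putting them in the response. A duck-typed sketch of that flattening (the output field names are my choice, not a fixed schema):

```python
def nodes_to_json(source_nodes, max_chars=200):
    """Flatten NodeWithScore objects into JSON-serializable dicts so they
    can be returned by the API endpoint alongside the streamed answer."""
    out = []
    for nws in source_nodes:
        node = getattr(nws, "node", nws)  # NodeWithScore wraps the node
        out.append({
            "id": getattr(node, "node_id", None),
            "score": getattr(nws, "score", None),
            # get_content() is the LlamaIndex accessor for the node text;
            # truncate it so the payload stays small.
            "text": node.get_content()[:max_chars],
            "metadata": getattr(node, "metadata", {}),
        })
    return out
```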
I'm still struggling here. To simplify what I'm after: I basically want to make an OpenAI chat-completions-compatible endpoint, but have it use my vector DB. For the out-of-the-box `/api/chat` route from create-llama, I can't figure out how to modify it to essentially emulate that. Any ideas? https://github.com/run-llama/create-llama/
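To emulate the OpenAI streaming wire format, each token goes out as a `chat.completion.chunk` SSE event, followed by a final chunk with `finish_reason: "stop"` and the `data: [DONE]` sentinel. A protocol-level sketch (pure Python; the wire format follows the OpenAI API, the function name and model label are mine) that you could feed the token generator from your chat engine's `stream_chat()`:

```python
import json
import time
import uuid

def openai_sse_chunks(token_gen, model="my-rag-backend"):
    """Wrap a plain token generator in OpenAI chat-completions SSE framing,
    suitable for returning from a /api/chat (or /v1/chat/completions) route
    as a text/event-stream response."""
    chunk_id = "chatcmpl-" + uuid.uuid4().hex
    created = int(time.time())
    for token in token_gen:
        chunk = {
            "id": chunk_id,
            "object": "chat.completion.chunk",
            "created": created,
            "model": model,
            "choices": [
                {"index": 0, "delta": {"content": token}, "finish_reason": None}
            ],
        }
        yield "data: " + json.dumps(chunk) + "\n\n"
    # Final chunk signals completion, then the sentinel clients expect.
    done = {
        "id": chunk_id,
        "object": "chat.completion.chunk",
        "created": created,
        "model": model,
        "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}],
    }
    yield "data: " + json.dumps(done) + "\n\n"
    yield "data: [DONE]\n\n"
```

Because the framing matches what the official OpenAI clients parse, you can point any OpenAI-compatible SDK at this route and it will stream your vector-DB-backed answers.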