Find answers from the community

Updated last year


At a glance

A community member is new to LlamaIndexTS and has been using the Python version successfully. They have installed the Next example and can ingest their own documents and answer queries, but the system also answers queries outside of their private documents, a problem they don't have with the Python version. They are looking for a way to limit responses to their private data only.

Other community members suggest changing the prompt, that is, the instruction the language model uses to generate responses, which may address the issue. The community member eventually says they think they have figured it out and thanks the others for their help.

Useful resources
GitHub - run-llama/ts-playground: https://github.com/run-llama/ts-playground
Hi All, Noobie to LlamaIndexTS here. Looks like a great project.

I have been using the Python version and have a working example there that ingests some of my own docs and interrogates them using gpt-3.5-turbo. I have installed the Next example from https://github.com/run-llama/ts-playground and have it working, ingesting my own documents and responding to queries. The problem is that it also answers queries outside of my private documents, which I don't see with very similar code in Python. Am I missing something simple?

TL;DR: How do I get answers only from my private data?

Also, big up to the devs. The documentation and site are well done.
6 comments
Did you try changing the prompt? Maybe that could help here.
Hey WhiteFang_Jr thanks for responding
So if I ask something from my private docs I get the correct response, BUT I can also ask an arbitrary question like 'Who won the world series in 1932?' and get the correct answer. I'm trying to limit it to only my docs.
By prompt I mean the instruction that your LLM uses while generating the response.
Yeah, I think I figured it out. Thanks so much for helping.
Awesome! πŸ’ͺ
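For readers who land here with the same question: the fix WhiteFang_Jr is pointing at is to replace the default text-QA prompt with one that forbids answering from prior knowledge. Below is a minimal sketch, assuming the LlamaIndexTS API of this era (`ResponseSynthesizer`, `CompactAndRefine`, `RetrieverQueryEngine`, and `serviceContextFromDefaults` from the `llamaindex` package); names and signatures may differ in newer releases.

```ts
import {
  CompactAndRefine,
  Document,
  ResponseSynthesizer,
  RetrieverQueryEngine,
  VectorStoreIndex,
  serviceContextFromDefaults,
} from "llamaindex";

// A text-QA prompt that restricts the model to the retrieved context.
// Prompts in this version of LlamaIndexTS are plain template functions.
const strictQaPrompt = ({ context = "", query = "" }: Record<string, string | undefined>) =>
  `Context information is below.
---------------------
${context}
---------------------
Using ONLY the context above and no prior knowledge, answer the query.
If the answer is not in the context, reply exactly: "I don't know."
Query: ${query}
Answer:`;

async function main() {
  const serviceContext = serviceContextFromDefaults();

  // Replace with your own private documents.
  const docs = [new Document({ text: "..." })];
  const index = await VectorStoreIndex.fromDocuments(docs, { serviceContext });

  // Wire the strict prompt into the response synthesizer.
  const responseSynthesizer = new ResponseSynthesizer({
    serviceContext,
    responseBuilder: new CompactAndRefine(serviceContext, strictQaPrompt),
  });
  const queryEngine = new RetrieverQueryEngine(index.asRetriever(), responseSynthesizer);

  // An off-topic question: with the strict prompt this should now refuse.
  const response = await queryEngine.query("Who won the world series in 1932?");
  console.log(response.toString());
}

main().catch(console.error);
```

The important part is the instruction to use only the supplied context: the default prompt lets the model fall back on its training data, which is why off-topic questions like the 1932 World Series one still get answered.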