Phalguna
Joined September 25, 2024
hey, I have a question (the first of many I might ask here, so bear with me for a few days).
I have been using the query engines for querying GPT. I have used both the sub-query and multi-query functionalities and they work splendidly, but how do I use these same engines while also adding context to, say, the ChatGPT bot before querying it? I have a pretty complicated setup with CSVs and PDFs containing the information to be queried. I may need to add some context for few-shot learning, to instruct ChatGPT in the way it should answer the queries, or even to pass extra information to it.
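One way to do this in llama-index is to attach a custom QA prompt to the query engine, so every call (including each sub-query) carries your instructions and few-shot examples. A minimal sketch, assuming a pre-0.10 llama-index (the "./data" path, the example question, and the few-shot pairs are placeholders):

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.prompts import PromptTemplate

# Few-shot QA template: the Q/A pairs here are placeholders -- swap in
# examples from your own CSV/PDF domain.
qa_template = PromptTemplate(
    "You answer questions about the attached CSV/PDF data.\n"
    "Follow the style of these examples:\n"
    "Q: <example question>\n"
    "A: <example answer>\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using only the context above, answer: {query_str}\n"
)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# The template is applied to every synthesis call the engine makes.
query_engine = index.as_query_engine(text_qa_template=qa_template)
response = query_engine.query("How many rows mention late shipments?")
print(response)
```

For a SubQuestionQueryEngine, attach the template to each underlying tool's query engine the same way; the sub-answers are then all produced under your instructions.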
2 comments
Phalguna · Models
hey guys,
I have experience using text-based LLMs like GPT, LLaMA, etc., but I am curious about the generative AI models that convert images to short videos (lasting a few seconds), adding flow, wind, or other moving elements. What tools do this? I would like to use one through a Python API to generate videos from images. Which APIs give the best video quality?
Also, if possible, where can I learn about, or rather read up on, these architectures?
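One openly available option is Stable Video Diffusion, which the Hugging Face diffusers library exposes as an image-to-video pipeline (hosted services such as Runway and Stability AI's platform offer similar endpoints). A minimal sketch, assuming a CUDA GPU with enough VRAM; the input and output file names are placeholders:

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the image-to-video model in half precision to save VRAM.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# The model was trained at 1024x576, so resize the source image to match.
image = load_image("input.jpg").resize((1024, 576))

# Generate a short clip (a couple of seconds at 7 fps) from the single image.
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```

For background reading, the Stable Video Diffusion paper and the diffusers documentation describe the underlying latent video diffusion architecture.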
3 comments
Phalguna
I have a CSV and a PDF file containing some text data. I read them using llama-index's SimpleDirectoryReader and stored them in Pinecone as vectors. I didn't have to specify which embedding model to use, but the LLM for the service context was OpenAI. Now I want to query those vectors that I upserted: basically, I pass a sentence and need to get back the vectors most similar to it.
Is there any way I can do that?
I thought of querying Pinecone directly, but when I convert the text to embeddings using OpenAI, it throws an error when I pass the result as the vector for the query filter.
Is there any way I can pass a string and get just the top-k results from the Pinecone DB (not the generated answer, just the vectors along with their metadata)?
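One way to do this with no LLM in the loop is to embed the query string yourself and call Pinecone's query method directly. A sketch using the pre-1.0 openai and v2 pinecone-client interfaces (keys, environment, and index name are placeholders); the embedding model must match the one used at upsert time, and llama-index's OpenAI default is text-embedding-ada-002:

```python
import openai
import pinecone

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder
pinecone.init(api_key="YOUR_PINECONE_KEY", environment="YOUR_ENV")  # placeholders
index = pinecone.Index("your-index-name")  # placeholder

def top_k_matches(text: str, k: int = 5):
    # Embed with the same model used at upsert time
    # (text-embedding-ada-002 is llama-index's OpenAI default).
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    vector = resp["data"][0]["embedding"]  # a plain list of floats

    # Pass the embedding as `vector=`; `filter=` is only for metadata
    # conditions, which is a likely source of the error described above.
    return index.query(vector=vector, top_k=k, include_metadata=True)

for match in top_k_matches("your search sentence").matches:
    print(match.id, match.score, match.metadata)
```

Alternatively, llama-index exposes the same behaviour without answer synthesis: index.as_retriever(similarity_top_k=5).retrieve("your search sentence") returns the matched nodes with their scores and metadata rather than a generated answer.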
2 comments
Phalguna · prompt
I have this RAG setup with sub-queries (which isn't the main thing; I don't mind dropping the sub-queries), but the problem is that even when the sub-queries say there is no answer in the given context, I still get an answer from the LLM's own knowledge. Why is this, and how do I stop it from happening? Right now I am using GPT-3.5/4. How do I prevent the RAG pipeline from generating an answer about something that isn't provided as context?
I just need it to not answer when the question is not covered by the documents I provided.
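The usual fix is to tighten the synthesis prompt so the model must refuse when the context lacks the answer, and to run the LLM at temperature 0 so it follows that instruction reliably. A minimal sketch, again assuming a pre-0.10 llama-index and an existing `index` (the exact refusal wording is a placeholder):

```python
from llama_index import ServiceContext
from llama_index.llms import OpenAI
from llama_index.prompts import PromptTemplate

# Strict QA template: forbid prior knowledge and force an explicit refusal.
strict_qa = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question using ONLY the context above; do not use prior "
    "knowledge. If the context does not contain the answer, reply exactly: "
    "\"I don't know based on the provided documents.\"\n"
    "Question: {query_str}\n"
    "Answer: "
)

# Temperature 0 makes the refusal behaviour much more consistent.
service_context = ServiceContext.from_defaults(
    llm=OpenAI(model="gpt-3.5-turbo", temperature=0)
)
query_engine = index.as_query_engine(
    text_qa_template=strict_qa, service_context=service_context
)
```

If irrelevant chunks are still being retrieved and answered from, a similarity cutoff (e.g. llama-index's SimilarityPostprocessor with a similarity_cutoff) can drop low-scoring nodes before they ever reach the LLM.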
5 comments