Find answers from the community

Teemu
Think this is deprecated? The QA template does work, I think, but that just uses a regular prompt, right?
28 comments
I mean, it starts printing out/streaming the response nodes, but they're nonsensical; it just keeps repeating one source node and doesn't behave the same way it does when not streaming.
5 comments
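For reference against the behavior described above, a minimal streaming sketch, assuming a 0.6-era llama_index API where streaming is enabled on the query engine; the 'data' directory and the query text are placeholders:
Python
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents)

# Enable streaming on the query engine rather than on the index itself
query_engine = index.as_query_engine(streaming=True, similarity_top_k=3)
streaming_response = query_engine.query("your question")

# Print tokens as they arrive
streaming_response.print_response_stream()

# Source nodes are still available on the streaming response
for source_node in streaming_response.source_nodes:
    print(source_node.node.get_text()[:200])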
How would you save this to disk with the newest update?
Python
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

# Load documents
documents = SimpleDirectoryReader('data').load_data()

# Build the GPTVectorStoreIndex (service_context is defined elsewhere in the app)
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
6 comments
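With the 0.6+ storage refactor, persisting and reloading looks roughly like this; a sketch that reuses the service_context from the snippet above and assumes a hypothetical ./storage directory:
Python
from llama_index import (
    GPTVectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)

# Build the index as before
documents = SimpleDirectoryReader('data').load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

# Persist the index to disk
index.storage_context.persist(persist_dir="./storage")

# Later: rebuild the storage context and reload the index
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)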
Did you get the gpt_index version working? I've only managed to get the langchain one working
1 comment
Could llamaindex be used for building and configuring the plugins?
3 comments
Hey, I have an application that lets the user upload financial-report PDFs (analyst briefs on specific stocks), which are then automatically turned into GPTSimpleVectorIndex embeddings.

How could I improve the performance? Currently, when querying the index, the results it returns tend to be filled with filler information such as disclaimers and warnings.

Is there a way to filter out this information using the GPT Index libraries, or has someone experimented with another method, like fine-tuning, for this purpose?
2 comments
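One low-tech option for the question above is to strip the boilerplate out of the document text before it ever gets embedded. A sketch, assuming a 0.5-era GPTSimpleVectorIndex.from_documents and Document objects with a mutable text attribute; the marker phrases and the 'reports' directory are made up:
Python
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# Hypothetical phrases that mark disclaimer/warning sections in the analyst PDFs
BOILERPLATE_MARKERS = ["disclaimer", "important disclosures", "not investment advice"]

documents = SimpleDirectoryReader("reports").load_data()

for doc in documents:
    # Drop paragraphs that look like legal boilerplate before they get embedded
    paragraphs = doc.text.split("\n\n")
    kept = [p for p in paragraphs if not any(m in p.lower() for m in BOILERPLATE_MARKERS)]
    doc.text = "\n\n".join(kept)

index = GPTSimpleVectorIndex.from_documents(documents)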
OK, I think I found the issue. When creating the embeddings, it looks like the encoding doesn't allow certain characters such as ', which is causing errors. Hmm.
9 comments
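If it really is apostrophes/curly quotes tripping up the embedding step, normalizing the text before indexing usually sidesteps it. A plain-Python sketch, nothing llama_index-specific beyond the reader; the 'data' directory is a placeholder:
Python
import unicodedata
from llama_index import SimpleDirectoryReader

def clean_text(text: str) -> str:
    # Normalize unicode, then map curly quotes/apostrophes to plain ASCII ones
    text = unicodedata.normalize("NFKC", text)
    replacements = {"\u2018": "'", "\u2019": "'", "\u201c": '"', "\u201d": '"'}
    for bad, good in replacements.items():
        text = text.replace(bad, good)
    # As a last resort, drop any remaining characters that won't encode as ASCII
    return text.encode("ascii", errors="ignore").decode()

documents = SimpleDirectoryReader("data").load_data()
for doc in documents:
    doc.text = clean_text(doc.text)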
This program is so vast I don't even know where to start. I currently have a totally separate bot that uses the ada-002 embeddings model to create embeddings (from a long, large text document), and then a Python bot that answers questions against them using a davinci model.

How would I go about recreating this using GPT Index? For my use case, the bot needs to answer extremely specifically (think legal statutes: very fine-detail specific). What avenue should I start with?
38 comments
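A starting-point sketch for the setup described above, assuming a 0.5-era API (GPTSimpleVectorIndex, ServiceContext, langchain's OpenAI wrapper): ada-002 embeddings are the default, davinci handles the answering, and a smaller chunk size plus a higher similarity_top_k tends to help with fine-detail questions. Paths, model names, and the query are placeholders:
Python
from langchain.llms import OpenAI
from llama_index import (
    GPTSimpleVectorIndex,
    LLMPredictor,
    ServiceContext,
    SimpleDirectoryReader,
)

# davinci answers the questions; text-embedding-ada-002 is the default embed model
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))

# Smaller chunks keep each retrieved passage narrowly focused
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, chunk_size_limit=512)

documents = SimpleDirectoryReader("docs").load_data()
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)

# Retrieve several chunks so fine-grained details aren't missed
response = index.query("your very specific question", similarity_top_k=5)
print(response)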
What's the most up-to-date refine_template or import class for the chat turbo model? I've been using the Chat_Refine_Prompt, but it started doing the thing where the completions mention the refine/context information.
8 comments
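For reference, the chat-specific refine prompt from that era could be wired up roughly like this. A sketch only: it assumes the CHAT_REFINE_PROMPT import path from llama_index.prompts.chat_prompts and an index.query that accepts refine_template, both of which have moved between releases:
Python
from langchain.chat_models import ChatOpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor, ServiceContext, SimpleDirectoryReader
from llama_index.prompts.chat_prompts import CHAT_REFINE_PROMPT

# gpt-3.5-turbo via the chat model wrapper
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)

# Pass the chat-style refine template at query time
response = index.query("your question", refine_template=CHAT_REFINE_PROMPT)
print(response)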
What's the best way to approach it with GPT index?
6 comments
Does llama-index have any multimodal implementations that can do image to text?
2 comments
Teemu · Flare
Is it just me, or is the FLAREInstructQueryEngine still very much experimental?
7 comments
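For anyone trying it out, the basic wiring is roughly as below; a sketch assuming the documented constructor arguments (query_engine, max_iterations, verbose) and placeholder data and query:
Python
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
from llama_index.query_engine import FLAREInstructQueryEngine

documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents)

# Base query engine that FLARE will call into
index_query_engine = index.as_query_engine(similarity_top_k=2)

# Wrap it with FLARE-style iterative, forward-looking retrieval
flare_query_engine = FLAREInstructQueryEngine(
    query_engine=index_query_engine,
    max_iterations=7,
    verbose=True,
)

response = flare_query_engine.query("your question")
print(response)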
Teemu · Streaming
Ah, alright, no worries. I was wondering because none of the imports seemed to work 😅
1 comment
Even this example snippet (I might be misunderstanding the formatted-sources module): https://gpt-index.readthedocs.io/en/latest/guides/primer/usage_pattern.html#parsing-the-response
15 comments
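For comparison, the usage pattern on that docs page boils down to roughly this; a sketch assuming an already-built index and a 0.6-era response object:
Python
query_engine = index.as_query_engine()
response = query_engine.query("your question")

# The synthesized answer
print(str(response))

# Human-readable summary of the retrieved sources
print(response.get_formatted_sources())

# Or inspect the raw source nodes directly
for source_node in response.source_nodes:
    print(source_node.score, source_node.node.get_text()[:200])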
Is there a way to pass chunk_size_limit to a Streamlit app that uses the data-loader widget for creating embeddings from PDFs? My load-from-disk function in the app has chunk_size_limit defined, but it isn't being applied.

Is there another way to apply the chunk_size_limit?
1 comment
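Worth noting that chunk_size_limit only takes effect when the nodes are created, i.e. at index-build time, so setting it in a load-from-disk path won't re-chunk anything. A sketch of passing it through the ServiceContext used in the build step; the 'pdfs' directory is a placeholder:
Python
from llama_index import GPTVectorStoreIndex, ServiceContext, SimpleDirectoryReader

# chunk_size_limit has to be on the ServiceContext used when the index is BUILT;
# applying it only when loading from disk has no effect on already-created nodes
service_context = ServiceContext.from_defaults(chunk_size_limit=512)

documents = SimpleDirectoryReader("pdfs").load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)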
I haven't really had time yet. Do you know the command for changing the prompt with GPT Index?
3 comments
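A sketch of the older pattern for overriding the QA prompt, assuming a release where QuestionAnswerPrompt is importable from the top-level package and index.query accepts text_qa_template:
Python
from llama_index import GPTSimpleVectorIndex, QuestionAnswerPrompt

# Custom QA template; {context_str} and {query_str} are filled in by the library
QA_PROMPT_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question: {query_str}\n"
)
QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)

# index built elsewhere, e.g. GPTSimpleVectorIndex.from_documents(...)
response = index.query("your question", text_qa_template=QA_PROMPT)
print(response)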
Is there a way to increase the response length with GPTSimpleVectorIndex? I think the default max is 256 tokens?
4 comments
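Yes, the default output budget is 256 tokens; raising it means bumping both the LLM's max_tokens and the PromptHelper's num_output. A sketch assuming the davinci completion model and a 0.5-era ServiceContext/PromptHelper API:
Python
from langchain.llms import OpenAI
from llama_index import (
    GPTSimpleVectorIndex,
    LLMPredictor,
    PromptHelper,
    ServiceContext,
    SimpleDirectoryReader,
)

# Allow up to 512 output tokens instead of the 256 default
num_output = 512
prompt_helper = PromptHelper(max_input_size=4096, num_output=num_output, max_chunk_overlap=20)
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=num_output)
)
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor, prompt_helper=prompt_helper
)

documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)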
Thank you for responding. I read through all the docs and I just can't get it to work
4 comments