gpt_index/SimpleIndexDemo-ChatGPT.ipynb ...

I need this too.
https://github.com/jerryjliu/gpt_index/blob/main/examples/vector_indices/SimpleIndexDemo-ChatGPT.ipynb
I found this somewhere, but it does not work like a chat interface.
@tshu @kkkkkkk To have a conversation, use something like langchain. You can also combine langchain with llama index, where llama index is used as a sort of search engine for the chat

https://github.com/jerryjliu/gpt_index/blob/main/examples/langchain_demo/LangchainDemo.ipynb
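As a rough illustration of that pattern (a sketch in the spirit of the linked demo -- the tool name, agent type, and query here are assumptions, not the notebook's exact code):

Plain Text
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from langchain.agents import Tool, initialize_agent
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.llms import OpenAI

# build the index once; the agent treats it as its "search engine"
documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex(documents)

tools = [
    Tool(
        name="GPT Index",
        func=lambda q: str(index.query(q)),
        description="Useful for answering questions about the loaded documents.",
    ),
]

# conversation memory is what makes this behave like a chat interface
memory = ConversationBufferMemory(memory_key="chat_history")
agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent="conversational-react-description",
    memory=memory,
)

print(agent.run("What did the author do growing up?"))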
@Logan M I have a question: can we only use the text-davinci-003 model to build a llama index? Can't we use gpt-3.5-turbo?
You can use gpt-3.5-turbo! But unless you are using a knowledge graph or tree index, the LLM isn't used during index construction
@Logan M Does it support multilingual indexing and querying? Chinese/Japanese etc.
I've seen a few people using it with Chinese/Japanese characters, so I think so! I'm not fluent though, so I can't speak to the quality haha
@Logan M Using text-davinci-003 to create an index is very expensive. It would be better to find a way to use gpt-3.5-turbo to create a general index.
@kkkkkkk gpt-3.5-turbo is supported. There is a demo here for creating a vector index: https://github.com/jerryjliu/gpt_index/blob/main/examples/vector_indices/SimpleIndexDemo-ChatGPT.ipynb

(at the bottom, there is also a useful beta feature that might work for you as well)
@Logan M I tried, but got the following error: AttributeError: 'ChatGPTLLMPredictor' object has no attribute '_llm'
You tried something like this and got that error?

Plain Text
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from gpt_index.langchain_helpers.chatgpt import ChatGPTLLMPredictor

llm_predictor = ChatGPTLLMPredictor()

documents = SimpleDirectoryReader('data').load_data()

# vector index construction only uses the text-embedding-ada-002 embedding model
index = GPTSimpleVectorIndex(documents, chunk_size_limit=512)

# query with ChatGPT
response = index.query(
    "What did the author do during his time at RISD?",
    llm_predictor=llm_predictor
)
This is the code I ran:
Plain Text
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from llama_index.langchain_helpers.chatgpt import ChatGPTLLMPredictor

# Preparing ChatGPTLLMPredictor
llm_predictor = ChatGPTLLMPredictor(
    prepend_messages = [
        {"role": "system", "content": "You are a helpful assistant."},
    ]
)

# Index creation
documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex(
    documents=documents,
    llm_predictor=llm_predictor
)

index.save_to_disk("gpt-index.json")
index = GPTSimpleVectorIndex.load_from_disk('gpt-index.json', llm_predictor=ChatGPTLLMPredictor())

answer = index.query("What did the author do after his time at Y Combinator?")
Try this instead; for a vector index, we only need the LLM at query time:

Plain Text
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from llama_index.langchain_helpers.chatgpt import ChatGPTLLMPredictor

# Preparing ChatGPTLLMPredictor
llm_predictor = ChatGPTLLMPredictor(
    prepend_messages = [
        {"role": "system", "content": "You are a helpful assistant."},
    ]
)

# Index creation
documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex(documents=documents)

index.save_to_disk("gpt-index.json")
index = GPTSimpleVectorIndex.load_from_disk('gpt-index.json')

answer = index.query("What did the author do after his time at Y Combinator?", llm_predictor=llm_predictor)
same error @Logan M
[Attachment: 1678463081558.png]
@jerryjliu0 am I missing something here? In the codebase, I see ChatGPTLLMPredictor is not setting _llm, which seems like a bug?
@kkkkkkk @Logan M Ah, I should honestly remove ChatGPTLLMPredictor. We've changed some abstractions since then; in the meantime, just use langchain's ChatGPT LLM object in an LLMPredictor: https://github.com/jerryjliu/gpt_index/blob/main/examples/vector_indices/SimpleIndexDemo-ChatGPT.ipynb
sorry about the hassle!
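For reference, that swap looks roughly like this (a minimal sketch based on the linked notebook; the model name and temperature are assumptions):

Plain Text
from langchain.chat_models import ChatOpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor, SimpleDirectoryReader

# wrap langchain's chat model in a plain LLMPredictor instead of ChatGPTLLMPredictor
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))

documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex(documents)

# for a vector index, the LLM is only needed at query time
response = index.query(
    "What did the author do growing up?",
    llm_predictor=llm_predictor,
)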
Good to know! Thanks!
@jerryjliu0 I have a question: can we only use the text-davinci-003 model to build a llama index? Can't we use gpt-3.5-turbo?
Hey y'all, found this thread when searching through Discord. I've been using LLMPredictor instead of ChatGPTLLMPredictor, but noticed it doesn't support prepend_messages, which is something I really need. I was wondering if it was possible to either add this param or, maybe temporarily, downgrade gpt_index to a version where ChatGPTLLMPredictor still works. Is this possible, or is it too dependent downstream on langchain?
Hey! I think you can actually still do this with recent versions of llama index (it's just a little less convenient tbh -- there could probably be a better interface for this)

Here are the current general default prompts: https://github.com/jerryjliu/llama_index/blob/main/gpt_index/prompts/default_prompts.py
Here are the ones optimized specifically for ChatGPT: https://github.com/jerryjliu/llama_index/blob/main/gpt_index/prompts/chat_prompts.py

With these two files in mind, we can create custom prompts. I'll assume you are using a vector or list index for this example (this is slightly untested, but should work):

Plain Text
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate
)

from llama_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt
from llama_index.prompts.chat_prompts import CHAT_REFINE_PROMPT_TMPL_MSGS
from llama_index.prompts.default_prompts import DEFAULT_TEXT_QA_PROMPT_TMPL

my_prepend_messages = [SystemMessagePromptTemplate.from_template("my prepended system message here")]

# concat two lists -- CHAT_REFINE_PROMPT_TMPL_MSGS is already a list
langchain_refine_template = ChatPromptTemplate.from_messages(my_prepend_messages + CHAT_REFINE_PROMPT_TMPL_MSGS)
llama_refine_template = RefinePrompt.from_langchain_prompt(langchain_refine_template)

# DEFAULT_TEXT_QA_PROMPT_TMPL is a plain string, so wrap it in a message template first
langchain_qa_template = ChatPromptTemplate.from_messages(my_prepend_messages + [HumanMessagePromptTemplate.from_template(DEFAULT_TEXT_QA_PROMPT_TMPL)])
llama_qa_template = QuestionAnswerPrompt.from_langchain_prompt(langchain_qa_template)

# ... build your vector or list index as usual ...
index.query("my query", text_qa_template=llama_qa_template, refine_template=llama_refine_template)
Wow, the more I wrote here, the more janky this got hahaha. I will look into making a PR to make this work better; this is a common-ish question

You can try downgrading llama_index and langchain as well, but you probably will have to downgrade both
Ah thanks for this @Logan M ! Helps a ton, the PR would be great if you could.
Otherwise I might just play around with downgrading for now until the PR is merged πŸ™‚
Just wondering, is support for prepend_messages in LLMPredictor planned for the near future?
I would like to get something merged in the next few days. Just figuring out the best way to do it when I have time to look at it lol
I'm working with Spanish content, so I will follow this too! I have already translated all the prompts.
@Logan M beautiful, I'll keep an eye out. Thanks!
just to clarify, the use case is to "prepend" messages for chatgpt?
@jerryjliu0 yes, exactly
Just made an initial PR for this πŸ‘
Could you link it when you have a chance? Purely out of curiosity
amazing thanks @Logan M , will take a look soon πŸ™‚
Update for anyone following along. Accidentally closed the PR last night (whoops), but that's ok since there was a decent amount of refactoring https://github.com/jerryjliu/llama_index/pull/873
Just took a look, thanks for this @Logan M ! Wanted to confirm, it looks like ChatGPTLLMPredictor is not going to be deprecated? I should be good to use it?
Yea for now, it should be supported (and it's pretty easy to support moving forward)

Check out the bottom of this notebook for updated imports and usage

https://github.com/jerryjliu/llama_index/blob/main/examples/vector_indices/SimpleIndexDemo-ChatGPT.ipynb
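Based on the snippets earlier in this thread, the restored usage looks roughly like this (a sketch, not verified against the updated notebook -- the import path and argument names are carried over from the old interface and may have moved):

Plain Text
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from llama_index.langchain_helpers.chatgpt import ChatGPTLLMPredictor

# prepend a system message to every ChatGPT call
llm_predictor = ChatGPTLLMPredictor(
    prepend_messages=[
        {"role": "system", "content": "You are a helpful assistant."},
    ]
)

documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex(documents)

# the predictor (and its prepended messages) is only used at query time
response = index.query("my query", llm_predictor=llm_predictor)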
yes big shoutout to @Logan M for bringing it back!
Beautiful, thanks @Logan M for doing this! Just implemented it and it's working phenomenally