Hey all, I'm trying to use LlamaIndex as a memory module for a ChatGPT LangChain predictor. Could anyone tell me if I am on the right track? I'm having trouble getting my head around it.
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from llama_index import GPTListIndex
from llama_index.langchain_helpers.memory_wrapper import GPTIndexChatMemory

chat_history_index = GPTListIndex(
    index_struct=index_struct,
    docstore=docstore,
    # Disabled, see the Edit: in this Discord post
    # llm_predictor=ChatGPTLLMPredictor()
)
memory = GPTIndexChatMemory(
    index=chat_history_index,
    memory_key="history",
    ai_prefix="AI",
    human_prefix="Human",
    query_kwargs={"response_mode": "compact"},
    return_source=True,
    return_messages=True,
)
llm_chain = ConversationChain(
    llm=ChatOpenAI(**open_ai_params),
    prompt=chat_prompt,
    memory=memory,
)
llm_chain.run(input=input)
Predictions come back successfully, and each conversation turn is written to the GPTListIndex, but the stored memory doesn't appear to be injected into the prompt when the LLM is called.
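For anyone debugging the same thing: LangChain memories follow a two-method contract, `save_context` (persist a turn after the chain runs) and `load_memory_variables` (return what gets templated into the prompt before the next turn). A toy sketch of that contract in plain Python (no LangChain or LlamaIndex; `ToyIndexMemory` and the list-based store are stand-ins I made up) shows where to look — if `load_memory_variables` effectively returns nothing, the LLM never sees history even though writes succeed:

```python
# Toy sketch of the memory contract LangChain expects
# (method names match langchain's BaseMemory interface).
# A plain list stands in for the GPTListIndex.

class ToyIndexMemory:
    def __init__(self, memory_key="history", human_prefix="Human", ai_prefix="AI"):
        self.memory_key = memory_key
        self.human_prefix = human_prefix
        self.ai_prefix = ai_prefix
        self.store = []  # stand-in for the real index

    def save_context(self, inputs, outputs):
        # Called after each turn: persist the exchange.
        self.store.append(f"{self.human_prefix}: {inputs['input']}")
        self.store.append(f"{self.ai_prefix}: {outputs['response']}")

    def load_memory_variables(self, inputs):
        # Called before each turn: this string is what gets injected
        # into the prompt. If it's empty, the LLM sees no history.
        return {self.memory_key: "\n".join(self.store)}

toy_memory = ToyIndexMemory()
toy_memory.save_context({"input": "Hi, I'm Don"}, {"response": "Hello Don!"})
print(toy_memory.load_memory_variables({})["history"])
```

Calling `memory.load_memory_variables({})` on the real `GPTIndexChatMemory` after a few turns is a quick way to check whether the retrieval side is returning anything at all.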
EDIT: Logan M explained that ChatGPTLLMPredictor is broken/deprecated, so please disregard the following.
At this point I'm getting the following error:
File "/Users/dondo/Library/Caches/pypoetry/virtualenvs/vana-gpt-me-IE1VmXUs-py3.10/lib/python3.10/site-packages/llama_index/llm_predictor/base.py", line 222, in predict
formatted_prompt = prompt.format(llm=self._llm, **prompt_args)
AttributeError: 'ChatGPTLLMPredictor' object has no attribute '_llm'. Did you mean: 'llm'?
I'm using the latest llama-index and langchain packages.