Updated 2 years ago
Hey all, I'm trying to use LlamaIndex as a memory module for a ChatGPT LangChain predictor. Could anyone tell me if I am on the right track? I'm having trouble getting my head around it πŸ˜…

Plain Text
    chat_history_index = GPTListIndex(
        index_struct=index_struct,
        docstore=docstore,
        # Disabled, see the Edit: in this Discord post
        # llm_predictor=ChatGPTLLMPredictor()
    )

    memory = GPTIndexChatMemory(
        index=chat_history_index, 
        memory_key="history", 
        ai_prefix="AI",
        human_prefix="Human",
        query_kwargs={"response_mode": "compact"},
        return_source=True,
        return_messages=True
    )

    llm_chain = ConversationChain(
        llm=ChatOpenAI(**open_ai_params),
        prompt=chat_prompt,
        memory=memory,
    )

    llm_chain.run(input=input)


Predictions are returned successfully, but memory does not appear to be used when prompting the LLM. Memory is written to the GPTListIndex.

EDIT: Logan M explained that ChatGPTLLMPredictor is broken/deprecated, so please disregard the following.

At this point I'm getting the following error:

Plain Text
  File "/Users/dondo/Library/Caches/pypoetry/virtualenvs/vana-gpt-me-IE1VmXUs-py3.10/lib/python3.10/site-packages/llama_index/llm_predictor/base.py", line 222, in predict
    formatted_prompt = prompt.format(llm=self._llm, **prompt_args)
AttributeError: 'ChatGPTLLMPredictor' object has no attribute '_llm'. Did you mean: 'llm'?


Using the latest llama-index and langchain packages
7 comments
ChatGPTLLMPredictor is currently broken/deprecated (but might actually be fixed soon!)

See the bottom of this notebook for an example of using llama index as chat memory https://github.com/jerryjliu/llama_index/blob/main/examples/langchain_demo/LangchainDemo.ipynb
Ah okay, I followed that guide without incorporating ChatGPTLLMPredictor. One difference for my use case is that, instead of using initialize_agent(... memory=memory), I'm passing memory=memory to ConversationChain with ChatOpenAI as the LLM.

I was able to produce predictions successfully, and messages were written into the memory index when I ran them, but it seemed like the memory wasn't incorporated into the LLM prompt, i.e. if I asked the LLM to repeat the last thing it said, it would have no knowledge of it.

I will keep digging but I thought maybe something I was doing was fundamentally wrong... ChatGPTLLMPredictor was a shot in the dark but I'm not actually sure what it would do πŸ˜…
Ohhh asking questions like "repeat the last thing you said" might not work well when using llama index as the memory.

it won't have the concept of "time" or message history. It'd be more for previous fact retrieval, if that makes sense

A conversation history buffer, by contrast, just sees the previous history and would probably respond well to that query.

Maybe in the future we can introduce some kind of time concept into the llama index memory buffer
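The distinction can be sketched in plain Python (an illustrative toy only; `FactMemory` and `BufferMemory` are hypothetical names, not LlamaIndex or LangChain classes):

```python
# Toy sketch contrasting retrieval-style memory (unordered, keyword-matched,
# like an index) with a chronological conversation buffer.
# These classes are hypothetical, not LlamaIndex or LangChain APIs.

class FactMemory:
    """Retrieval-style memory: returns stored messages that share keywords
    with the query, with no notion of which message came last."""
    def __init__(self):
        self.messages = []

    def add(self, text):
        self.messages.append(text)

    def query(self, question):
        words = set(question.lower().split())
        return [m for m in self.messages if words & set(m.lower().split())]


class BufferMemory:
    """Chronological buffer: always returns the most recent messages."""
    def __init__(self):
        self.messages = []

    def add(self, text):
        self.messages.append(text)

    def last(self, n=1):
        return self.messages[-n:]


facts = FactMemory()
buffer = BufferMemory()
for msg in ["The capital of France is Paris.", "My dog is named Rex."]:
    facts.add(msg)
    buffer.add(msg)

# Fact retrieval works: keyword overlap finds the relevant message.
print(facts.query("Tell me about my dog"))  # ['My dog is named Rex.']
# But "repeat the last thing you said" shares no keywords with anything
# stored, so retrieval-style memory comes back empty...
print(facts.query("Repeat that"))           # []
# ...while a chronological buffer answers it trivially.
print(buffer.last())                        # ['My dog is named Rex.']
```

This is why the index-backed memory handles "what is my dog's name?" but not "repeat the last thing you said": the latter depends on ordering, not content.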
That is really helpful to know, thanks!

I just tried providing a made-up fact and asking for it, and it had no idea what I was talking about, so there are probably still some kinks to work out here. Maybe I should take a step back and find a solution that is more conversation oriented.
For sure! I think the most stable llama index + langchain combo is using llama index as a tool in langchain.

Like this tutorial, if you haven't seen it yet: https://gpt-index.readthedocs.io/en/latest/guides/building_a_chatbot.html
I've been playing around w/LlamaIndex as a tool.. it's so powerful! Probably biting off more than I can chew here with also trying to persist the entire conversation memory with LlamaIndex. I had it working pretty well with regular GPT-3, the idea being for the chat history itself to become a useful long-term data source that could subsequently be summarized, queried, etc. It was also nice to persist to disk on each prediction so the application itself can live as a serverless function.
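The persist-on-each-prediction idea can be sketched with the standard library alone (`ChatLog` is a hypothetical helper for illustration, not the LlamaIndex save/load API):

```python
# Minimal sketch of persisting chat history to disk on every turn so a
# stateless/serverless function can reload it on the next invocation.
# ChatLog is a hypothetical helper, not part of LlamaIndex or LangChain.
import json
from pathlib import Path

class ChatLog:
    def __init__(self, path):
        self.path = Path(path)
        self.messages = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def record(self, role, text):
        self.messages.append({"role": role, "content": text})
        # Write the full history after every turn; a fresh process
        # (e.g. the next serverless invocation) picks up where we left off.
        self.path.write_text(json.dumps(self.messages))


# Start from a clean slate for the demo.
Path("chat_history.json").unlink(missing_ok=True)

log = ChatLog("chat_history.json")
log.record("human", "Hi there!")
log.record("ai", "Hello! How can I help?")

# Simulate a brand-new process: reload the history from disk.
reloaded = ChatLog("chat_history.json")
print(len(reloaded.messages))  # 2
```

The persisted log could then be fed into an index for summarization or querying later, which matches the long-term-data-source idea above.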
Anyway, thanks for the guidance!