
Updated 2 years ago

Claude LLM

At a glance

The post asks whether a GPTVectorStoreIndex can be created using the Claude LLM in a service context; the poster had previously used OpenAI LLMs. The comments walk through setting up the service context with the Anthropic LLM, including enabling streaming and replacing ChatOpenAI with Anthropic. One community member hit an error because the Anthropic LLM object had no 'stream' attribute; another posted a working example using the LLMPredictor and ServiceContext from the llama_index library, noting that it needs to be adapted to the specific documents being used.

Useful resources
Can I create GPTVectorStoreIndex using Claude LLM in service context?
Before I have used OpenAI LLMs.
11 comments
Plain Text
from llama_index import ServiceContext
from llama_index.llms import Anthropic

service_context = ServiceContext.from_defaults(llm=Anthropic())
Yes, but would streaming work there like before? This is what we used previously:

llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo", streaming=True))
Yes, it should work.
Just replace ChatOpenAI with Anthropic
service_context = ServiceContext.from_defaults(llm=Anthropic(streaming=True))

Can I do it like this?
You can set streaming=True on as_query_engine and it should work.
Plain Text
query_engine = index.as_query_engine(streaming=True)
streaming_response = query_engine.query("Who is Paul Graham.")

https://gpt-index.readthedocs.io/en/stable/core_modules/query_modules/query_engine/root.html#usage-pattern
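To show what a streaming response looks like once it is consumed, here is a minimal, self-contained sketch. `fake_response_gen` is a stand-in (not a llama_index API) for the real `streaming_response.response_gen`, which yields text chunks as the model produces them:

```python
# Stand-in generator mimicking streaming_response.response_gen:
# it yields the answer in chunks instead of one final string.
def fake_response_gen():
    for chunk in ["Paul ", "Graham ", "is ", "an ", "essayist."]:
        yield chunk

# Consume the stream chunk by chunk, accumulating the full answer.
text = ""
for token in fake_response_gen():
    text += token

print(text)  # Paul Graham is an essayist.
```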
llm_predictor = Anthropic()

service_context = ServiceContext.from_defaults(
llm_predictor=llm_predictor,
chunk_size=1024
)

I have defined service context like that, and created Index.
index = GPTVectorStoreIndex.from_documents(document, service_context=service_context)

query_engine = index.as_query_engine(
    response_mode="compact",
    similarity_cutoff=0.6,
    similarity_top_k=2,
    streaming=True,
    text_qa_template=QA_PROMPT,
)

response = query_engine.query("How are you?")

But I got this error.

'Anthropic' object has no attribute 'stream'

Using OpenAI's models, it worked perfectly.
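The error above happens because llama_index's streaming path expects the LLM object to expose a `stream` attribute. A hedged sketch of the failure mode; the classes here are hypothetical stand-ins, not llama_index or LangChain APIs:

```python
# Hypothetical stand-ins illustrating the AttributeError: the streaming
# code path calls llm.stream(...), so an LLM wrapper without that
# attribute fails at query time.
class NoStreamLLM:
    def complete(self, prompt):
        return "a full, non-streamed answer"

class StreamingLLM(NoStreamLLM):
    def stream(self, prompt):
        # Yield the answer chunk by chunk, as a streaming LLM would.
        yield from ["a full, ", "streamed answer"]

# The Anthropic wrapper used above behaved like NoStreamLLM:
print(hasattr(NoStreamLLM(), "stream"))   # False
print(hasattr(StreamingLLM(), "stream"))  # True
```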
Plain Text
from llama_index import GPTVectorStoreIndex, LLMPredictor, ServiceContext
from langchain.llms import Anthropic

llm_predictor = LLMPredictor(llm=Anthropic())

service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor
)

# `document` is a list of Document objects loaded elsewhere
index = GPTVectorStoreIndex.from_documents(document, service_context=service_context)

query_engine = index.as_query_engine(streaming=True)

streaming_response = query_engine.query("Who is Paul Graham.")
You'll need to adapt this to your documents.
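On adapting it to your documents: in llama_index, `document` is typically produced by a reader such as `SimpleDirectoryReader(...).load_data()`. A minimal stand-in sketch of that shape; the `load_texts` helper is hypothetical, just to show that you end up with a list of document contents:

```python
import pathlib
import tempfile

# Hypothetical stand-in for a document reader: collects the text of
# every .txt file in a folder, analogous to what a llama_index reader
# returns (a list, one entry per document).
def load_texts(folder):
    return [p.read_text() for p in sorted(pathlib.Path(folder).glob("*.txt"))]

with tempfile.TemporaryDirectory() as d:
    (pathlib.Path(d) / "essay.txt").write_text("Paul Graham is an essayist.")
    docs = load_texts(d)

print(len(docs))  # 1
```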