Updated 3 months ago

I'm using:
# Assumes embed_model, llm, callback_manager, and question are defined earlier.
from llama_index import ServiceContext, SimpleDirectoryReader, SummaryIndex

service_context = ServiceContext.from_defaults(
    embed_model=embed_model,
    llm=llm,
    callback_manager=callback_manager,
    chunk_size=50000,
)

# Load every file under ./data into Document objects.
documents = SimpleDirectoryReader('data').load_data()

# Build a summary index over the documents.
index = SummaryIndex.from_documents(
    documents=documents,
    service_context=service_context,
    show_progress=True,
)

query_engine = index.as_query_engine(
    service_context=service_context,
    response_mode="refine",
    use_async=True,
)

llm_response_obj = query_engine.query(question)
7 comments
did you use from llama_index.llms import Anthropic to build the llm? or something else?
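(For reference, a minimal sketch of what that would look like, assuming the legacy llama_index API; the model name "claude-2" is an assumption, not from the thread:)
Plain Text
# Sketch of building the llm via the Anthropic wrapper the comment mentions.
# The model name "claude-2" is an assumption.
from llama_index.llms import Anthropic

llm = Anthropic(model="claude-2")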
I'm testing with Claude v2 and its 100K limit, trying to send an entire document to the LLM, but from the trace I see that it is doing more templating than it should.
How can I send the document in one go?
ah, Bedrock, yeah
you'll have to set the context window manually
Plain Text
service_context = ServiceContext.from_defaults(
    embed_model=embed_model,
    llm=llm,
    callback_manager=callback_manager,
    chunk_size=50000,
    # Match Claude v2's 100K-token window so prompts aren't split.
    context_window=100000,
)
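(With the window set to match the model, the refine synthesizer can fit the whole document into a single LLM call instead of templating it across many chunks. As an alternative sketch, assuming the legacy Bedrock wrapper and its context_size parameter, the window can also be declared on the LLM itself; the model id here is an assumption:)
Plain Text
# Hedged alternative: declare the window on the Bedrock LLM directly.
# context_size and the model id are assumptions based on the legacy API.
from llama_index.llms import Bedrock

llm = Bedrock(model="anthropic.claude-v2", context_size=100000)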
thank you