Following the customized LLM example in the documentation, upon initializing the service context

    service_context = ServiceContext.from_defaults(
        llm=llm,
        context_window=context_window,
        num_output=num_output,
    )

I am getting an error. Did I miss a configuration somewhere?
This is the error:

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    Cell In[24], line 52
         49 # define our LLM
         50 llm = OurLLM()
    ---> 52 service_context = ServiceContext.from_defaults(
         53     chunk_size=1024,
         54     llm=llm,
         55 )
         57 # Load your data
         58 documents = SimpleDirectoryReader('./data').load_data()

    File ~/neo4nan/myenv/lib/python3.9/site-packages/llama_index/indices/service_context.py:157, in ServiceContext.from_defaults(cls, llm_predictor, llm, prompt_helper, embed_model, node_parser, llama_logger, callback_manager, system_prompt, query_wrapper_prompt, chunk_size, chunk_overlap, context_window, num_output, chunk_size_limit)
        155 llm_predictor = llm_predictor or LLMPredictor(llm=llm)
        156 if isinstance(llm_predictor, LLMPredictor):
    --> 157     llm_predictor.llm.callback_manager = callback_manager
        158 if system_prompt:
        159     llm_predictor.system_prompt = system_prompt

    File ~/neo4nan/myenv/lib/python3.9/site-packages/pydantic/v1/main.py:405, in BaseModel.__setattr__(self, name, value)
        402 else:
        403     self.__dict__[name] = value
    --> 405 self.__fields_set__.add(name)

    AttributeError: __fields_set__
Hmm, seems like an issue in pydantic's compat layer

Can you install pydantic v1?

    pip install pydantic==1.10.12
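
A quick way to confirm which version actually got picked up (assuming pip is installing into the same environment the notebook runs in):

    python -c "import pydantic; print(pydantic.VERSION)"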
I have tried that, same error. I am pretty new to this, but could it be that the LLM class I have isn't implemented properly? I did the bare minimum for the interface.
Possibly? What does the LLM class look like?
    class OurLLM(CustomLLM):
        def __init__(self):
            role = sagemaker.get_execution_role()  # execution role for the endpoint
            sess = sagemaker.session.Session()  # sagemaker session for interacting with different AWS APIs
            region = sess._region_name  # region name of the current SageMaker Studio environment
            account_id = sess.account_id()  # account_id of the current SageMaker Studio environment
            endpoint_name = "some llama2 endpoint"
            predictor = sagemaker.Predictor(
                endpoint_name=endpoint_name,
                sagemaker_session=sess,
                serializer=serializers.JSONSerializer(),
                deserializer=deserializers.StringDeserializer(),
            )

        @property
        def metadata(self) -> LLMMetadata:
            """Get LLM metadata."""
            return LLMMetadata(
                context_window=context_window,
                num_output=num_output,
                model_name=endpoint_name,
            )

        @llm_completion_callback()
        def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
            response = predictor.predict(
                {"inputs": prompt, "parameters": {"max_new_tokens": num_output, "do_sample": "false"}},
                custom_attributes='accept_eula=true',
            )
            # only return newly generated tokens
            text = response["generation"]
            return CompletionResponse(text=text)

        @llm_completion_callback()
        def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
            raise NotImplementedError()
It basically wraps a SageMaker endpoint that hosts my LLM.
If you are going to add an __init__, add super().__init__() at the end as well.
It's causing some pydantic stuff to not get set up properly, I think.
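
Something like this, as a minimal sketch (your SageMaker setup elided):

    class OurLLM(CustomLLM):
        def __init__(self):
            # ... your SageMaker session/endpoint setup from above ...
            # then let pydantic initialize its internals (e.g. __fields_set__),
            # which the traceback shows were never created
            super().__init__()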
Got it, let me try.
Thank you, that worked!