Hey all! I'm working on using CustomLLMs, and when working through the tutorial here: https://docs.llamaindex.ai/en/stable/module_guides/models/llms/usage_custom.html#example-using-a-custom-llm-model-advanced

I am met with the following error:

Plain Text
  File "/root/.cache/pypoetry/virtualenvs/pa-MPXmvNdN-py3.11/lib/python3.11/site-packages/llama_index/indices/prompt_helper.py", line 117, in from_llm_metadata
    context_window = llm_metadata.context_window
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'property' object has no attribute 'context_window'


I poked around the LLM and CustomLLM classes and things look OK at a cursory glance, but before I start digging in I wanted to see if anyone else has faced this problem?
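(For reference, the OurLLM example from the linked tutorial looks roughly like the sketch below. This is paraphrased from the 0.9.x docs, so exact imports and signatures may vary by version.)

Plain Text
from typing import Any

from llama_index.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.llms.base import llm_completion_callback


class OurLLM(CustomLLM):
    context_window: int = 3900
    num_output: int = 256
    model_name: str = "custom"
    dummy_response: str = "My response"

    @property
    def metadata(self) -> LLMMetadata:
        # Metadata that the prompt helper reads (context_window, num_output, ...).
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Dummy completion -- a real custom LLM would call your model here.
        return CompletionResponse(text=self.dummy_response)

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # Stream the dummy response back token by token.
        response = ""
        for token in self.dummy_response:
            response += token
            yield CompletionResponse(text=response, delta=token)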
How did you set up your custom LLM?
Exactly as in the example
hmmm one sec
I copied the code sample from the docs exactly and it worked πŸ˜…
What llama-index version are you on?
Plain Text
(pa-py3.11) /rags/pa/backend # poetry show |grep llama
llama-index        0.9.11.post1 Interface between LLMs and your data
So, for reference, I copied the code sample exactly into a python file (custom_llm.py), then ran it -- python ./custom_llm.py

You did the same?
I created a template project from create-llama, dropped into the backend dir and into a poetry shell, installed all deps, added a class for my custom model, and tested. I also tested the generic OurLLM from the example. Same problem
So, not quite the same, got it πŸ˜… let me check how that would work
Was going to try subclassing LLM directly next, but wanted to ask before I got too deep in there πŸ™‚
dropped back to llama_index==0.9.11 instead of post1 and no dice
yea, going to run create-llama and plop in the LLM -- will let you know how it goes πŸ‘
Thanks @Logan M ! Let me know if I can help in any way!
soooo it also worked for me lol
lol well crap
Alright - I was dev'ing in a container loop - dropping that out...standby
  • created a fastapi backend with create-llama
  • copied the imports + OurLLM() class into backend/app/utils/index.py at the top
  • ran poetry shell + poetry install + pip install "transformers[torch]" (last one was so I could use local embeddings)
  • updated the service context to be
Plain Text
service_context = ServiceContext.from_defaults(
    llm=OurLLM(),
    embed_model="local"
)
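(To sanity-check that setup, the same service context can then be used to build and query an index. A minimal sketch with the 0.9.x API, assuming some documents sit in a ./data directory:)

Plain Text
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load documents from ./data (assumed path) and build an index using the
# custom LLM + local embeddings configured in service_context above.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

query_engine = index.as_query_engine()
print(query_engine.query("Test question"))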
ha, oy, I think I wasn't instantiating the class correctly and left off the () - time for more coffee.
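(That lines up with the traceback: if the class itself is passed instead of an instance, OurLLM.metadata is the bare property descriptor rather than an LLMMetadata object, so the prompt helper fails when it reads context_window off of it. A minimal illustration, assuming the OurLLM sketch above:)

Plain Text
# Wrong: passing the class object. llm.metadata is then the property
# descriptor itself, which has no context_window attribute, hence the
# AttributeError from the original post.
service_context = ServiceContext.from_defaults(
    llm=OurLLM,
    embed_model="local",
)

# Right: pass an instance, so metadata resolves to a real LLMMetadata.
service_context = ServiceContext.from_defaults(
    llm=OurLLM(),
    embed_model="local",
)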
Got my cup next to me already β˜•
Thanks Logan! Apologies for wasting time πŸ₯΄
haha no worries, a good sanity check!