
edhenry
Joined September 25, 2024
Hey all! I'm working on using CustomLLMs, and while working through the tutorial here: https://docs.llamaindex.ai/en/stable/module_guides/models/llms/usage_custom.html#example-using-a-custom-llm-model-advanced

I am met with the following error:

Plain Text
  File "/root/.cache/pypoetry/virtualenvs/pa-MPXmvNdN-py3.11/lib/python3.11/site-packages/llama_index/indices/prompt_helper.py", line 117, in from_llm_metadata
    context_window = llm_metadata.context_window
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'property' object has no attribute 'context_window'


I poked around the LLM and CustomLLM classes and things look OK at a cursory glance. Before I start digging in further, I wanted to see if anyone else has faced this problem?
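For what it's worth, one common way to hit this exact AttributeError in plain Python is accessing a `@property` on the class rather than on an instance, e.g. passing the CustomLLM class itself where an instance is expected. A minimal sketch, assuming that's the cause (`OurLLM` and its metadata dict are made up for illustration):

```python
class OurLLM:
    """Stand-in for a CustomLLM subclass whose metadata is a @property."""

    @property
    def metadata(self):
        # On a real CustomLLM this would return an LLMMetadata object.
        return {"context_window": 4096}


# Class-level access does NOT evaluate the getter: you get the
# property object itself, which has no 'context_window' attribute.
class_level = OurLLM.metadata
print(type(class_level))  # <class 'property'>

# Instance-level access evaluates the getter as intended.
instance_level = OurLLM().metadata
print(instance_level["context_window"])  # 4096
```

If that matches, double-check that an instance (`MyCustomLLM()`), not the class, is being handed to the service context.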
22 comments
Hey all! I've got streaming responses working on a token level, but now I'd like to move into passing more complex JSON objects through the streaming interface, something like passing document metadata back to our front-end. I've found a few closed issues on the repo asking about something similar, but I'm unable to surface an example. Has anyone tried this yet, and/or is there a working example I might reference?
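One pattern that works for this (not specific to LlamaIndex; the event shapes here are made up) is newline-delimited JSON: wrap each token in a small event object and interleave metadata events in the same stream, so the front-end can `json.loads` each line as it arrives. A sketch:

```python
import json


def stream_with_metadata(token_gen, metadata):
    """Interleave metadata and token events as newline-delimited JSON."""
    # Send metadata up front so the front-end has it before tokens arrive.
    yield json.dumps({"type": "metadata", "data": metadata}) + "\n"
    for tok in token_gen:
        yield json.dumps({"type": "token", "text": tok}) + "\n"
    # A terminal event lets the client close out the stream cleanly.
    yield json.dumps({"type": "done"}) + "\n"


events = list(stream_with_metadata(iter(["Hello", " world"]), {"doc_id": "42"}))
for line in events:
    print(line, end="")
```

The token-level generator you already have can be passed straight in as `token_gen`; the front-end switches on the `type` field per line.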
1 comment
Hey all - has anyone toyed with using custom tokenizers within the TokenCountingHandler? Does the get_llm_token_counts method expect a usage field in the response from a model? Is it possible to implement something that doesn't rely on this?
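I believe TokenCountingHandler accepts a tokenizer callable (text in, list of tokens out), in which case counts can come from the callable rather than from a usage field on the response. A stand-in sketch of that callable shape (the whitespace tokenizer and `count_tokens` helper below are made up; you'd swap in e.g. a tiktoken encoder's `.encode`):

```python
def whitespace_tokenizer(text: str) -> list[str]:
    """Toy tokenizer with the expected shape: text -> list of tokens."""
    return text.split()


def count_tokens(texts: list[str], tokenizer) -> int:
    """Count tokens by running the callable; no model 'usage' field needed."""
    return sum(len(tokenizer(t)) for t in texts)


prompt = "What does a custom tokenizer look like?"
completion = "It is any callable that maps text to a token list."
total = count_tokens([prompt, completion], whitespace_tokenizer)
print(total)  # 18
```

Counting locally like this also covers models whose API responses omit usage entirely.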
4 comments
Hey all!

Toying with some examples and using a SimpleVectorStore saved to disk. Generating the embeddings and issuing the first query works as expected, but when I issue another query I receive the following error:

Plain Text
 "/root/.cache/pypoetry/virtualenvs/pa-MPXmvNdN-py3.11/lib/python3.11/site-packages/llama_index/vector_stores/simple.py", line 306, in from_persist_path
    raise ValueError(
ValueError: No existing llama_index.vector_stores.simple found at ./storage/vector_store.json, skipping load.


I've found what looks like a similar issue in the repo: https://github.com/run-llama/llama_index/issues/9110

Has anyone seen this before?
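The usual shape of the fix is: build and persist on the first run, and only attempt to load from the persist dir once the files actually exist. Since the guard is the point, here's that pattern sketched with a plain JSON file standing in for the vector store (paths and the helper are made up for illustration; with llama_index you'd persist via the storage context and load with its loader instead):

```python
import json
import os
import tempfile


def load_or_build_store(persist_path, build_fn):
    """Load a persisted store if present; otherwise build and persist one."""
    if os.path.exists(persist_path):
        # Only load when the file exists -- loading first on a fresh
        # directory is what raises the "No existing ... found" error.
        with open(persist_path) as f:
            return json.load(f)
    store = build_fn()
    os.makedirs(os.path.dirname(persist_path), exist_ok=True)
    with open(persist_path, "w") as f:
        json.dump(store, f)
    return store


persist_path = os.path.join(tempfile.mkdtemp(), "vector_store.json")

# First call builds and persists; second call loads from disk.
store = load_or_build_store(persist_path, lambda: {"embeddings": [[0.1, 0.2]]})
store_again = load_or_build_store(persist_path, lambda: {"embeddings": []})
print(store_again)
```

If the second query goes through a freshly constructed storage context pointed at a directory that was never persisted to, you'd see exactly this ValueError.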
7 comments