Hi All,

I'm facing an error while trying to use the RAPTOR pack: AttributeError: 'NoneType' object has no attribute 'context_window'


Attaching my source code here

How do I resolve this issue?

You are using LangChain to import the LLM and callback; they may not have the attribute that is causing this error.

I would suggest you import these from LlamaIndex and then try once.
Actually, this is a predefined LLM on a production server and I'm not in a position to change it.
Is there any alternative way I can take the LangChain-imported LLM and use it in LlamaIndex for utilising the RAPTOR pack? @WhiteFang_Jr
Not sure on this; I don't think you can.
If you cannot remove those imports, can you add the LlamaIndex imports as well and use those?
Like: from llama_index.core.llms import CustomLLM

https://docs.llamaindex.ai/en/stable/module_guides/models/llms/usage_custom/#example-using-a-custom-llm-model-advanced
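For reference, the pattern on that page boils down to roughly the following (a minimal sketch, untested here; the context_window/num_output values and the dummy response text are illustrative, not your production values):

Plain Text
from typing import Any

from llama_index.core.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback


class OurLLM(CustomLLM):
    # Illustrative values -- set these to match your actual model.
    context_window: int = 3900
    num_output: int = 256
    model_name: str = "custom"

    @property
    def metadata(self) -> LLMMetadata:
        # This metadata (including context_window) is what LlamaIndex
        # components such as the RAPTOR pack read from the LLM object.
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Call your model endpoint here; a hardcoded reply keeps the sketch runnable.
        return CompletionResponse(text="dummy response")

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        yield CompletionResponse(text="dummy response", delta="dummy response")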
I have already gone through the above documentation, but I was not able to get a feasible solution.

from typing import Any, List, Optional
import json

import requests
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class CustomLLM(LLM):
    model_id: str
    endpoint: str
    decoding_method: str
    temperature: float
    top_p: float
    top_k: int
    repetition_penalty: float
    validation_sequences: dict
    stopping_sequences: List[str]
    min_new_tokens: int
    max_new_tokens: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        # stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # if stop is not None:
        #     raise ValueError("stop kwargs are not permitted.")
        url = "http://123.12.12.123:123/fu/ge"
        data = {
            "prompt": prompt,
            "model_id": self.model_id,
            "decoding_method": self.decoding_method,
            "temperature": self.temperature,
            "top_p": self.top_p,
            "top_k": self.top_k,
            "repetition_penalty": self.repetition_penalty,
            "validation_sequences": self.validation_sequences,
            "stopping_sequences": self.stopping_sequences,
            "min_new_tokens": self.min_new_tokens,
            "max_new_tokens": self.max_new_tokens,
            "endpoint": self.endpoint,
        }
        # Send the generation request and return the raw response body.
        response = requests.post(url, data=json.dumps(data))
        return response.text


llm = CustomLLM(
    model_id="mistralai/mixtral-8x7b-instruct-v0-1",
    endpoint="",
    decoding_method="greedy",  # can be one of "greedy" and "sample"
    temperature=0.7,
    top_p=0.9,
    top_k=50,
    repetition_penalty=1.0,
    validation_sequences={},
    stopping_sequences=["######"],
    min_new_tokens=1,
    max_new_tokens=200,
)
I already have the above class. Is there any way for me to make my above class inherit from llama_index.core.llms's CustomLLM, so that the context_window can be accommodated somehow? @WhiteFang_Jr
You do not have that class; you made your own class, which is different. If you check the link I shared above, the example inherits from LlamaIndex's class:
class OurLLM(CustomLLM):

Actually, your issue is not context_window.
Your object is of NoneType, which means it did not get created successfully. Not sure fully, but it could be because you are using LangChain imports while doing things the LlamaIndex way.
"Actually this is a predefined LLM in production server and I'm not in a position to change them" -- you don't have to change it, you just have to wrap it:

pip install llama-index-llms-langchain

Plain Text
from llama_index.llms.langchain import LangChainLLM


llm = LangChainLLM(custom_lc_llm)
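In this thread that would mean wrapping the CustomLLM instance defined earlier, for example (a sketch; setting Settings.llm is one way to make LlamaIndex pick it up, assuming you don't pass the LLM to the pack directly):

Plain Text
from llama_index.core import Settings
from llama_index.llms.langchain import LangChainLLM

# Wrap the LangChain LLM defined earlier so LlamaIndex sees a native
# LLM object (with metadata such as context_window).
wrapped_llm = LangChainLLM(llm=llm)
Settings.llm = wrapped_llm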
Oh, we can do this 🙌
Hi @Logan M @WhiteFang_Jr,

I made the above change like you mentioned. Now I'm facing another error (why is OpenAI in the picture here?):

ModuleNotFoundError: No module named 'openai.types.chat.chat_completion_token_logprob'


Attaching the modified code for reference.

Please help me with the same.

Thanks in advance!
Try pip install -U openai
Hi @Logan M @WhiteFang_Jr,

It worked, thank you so much!

Now I'm able to run the RAPTOR pack without any issues; however, I'm not able to do the retrieval, as I'm facing another error:

ValidationError: 1 validation error for EmbeddingEndEvent
embeddings -> 0 -> 0
value is not a valid float (type=type_error.float)

What might be the reason for the above validation error?
What is embedding_function doing?
from typing import Any, List
import json

import numpy as np
from langchain.embeddings.base import Embeddings
from pydantic import BaseModel
from requests import Response, request


class EmbeddingsModelAPI(BaseModel, Embeddings):
    def __init__(__pydantic_self__, **data: Any) -> None:
        super().__init__(**data)

    def _make_request(self, instruction: str, snippets: List[str]) -> np.ndarray:
        response: Response = request(
            "POST",
            "http://123.12.1.123:1234/fun/embedding",
            data=json.dumps({"instruction": instruction, "snippets": snippets}),
            headers={"Content-Type": "application/json"},
        )
        # embeddings: np.ndarray = np.reshape(
        #     np.frombuffer(response.content, dtype=np.float32), (len(snippets), -1)
        # )
        # embeddings: np.ndarray = np.array(json.loads(response.text))

        embeddings = json.loads(response.text)
        return embeddings

    def embed_documents(self, texts: List[str]) -> np.ndarray:
        return self._make_request(instruction="", snippets=texts)

    def embed_query(self, text: str) -> np.ndarray:
        return self._make_request(
            instruction="Represent this sentence for searching relevant passages: ",
            snippets=[text],
        )


embedding_function = EmbeddingsModelAPI()

The embedding function is another API call.

@WhiteFang_Jr @Logan M
Can you check if you are getting data from the embedding API call?
Yes, I am @WhiteFang_Jr
[Attachment: image.png]
I think you need to return the first element of this response
[Attachment: image.png]
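A minimal sketch of that change, assuming the endpoint returns one embedding per snippet (a list of lists), so the query embedding should be the inner list:

Plain Text
def embed_query(self, text: str) -> List[float]:
    result = self._make_request(
        instruction="Represent this sentence for searching relevant passages: ",
        snippets=[text],
    )
    # The API returns a list with one embedding per snippet; a query
    # embedding must be a flat list of floats, so take the first element.
    return result[0]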
So instead of providing the embedding as a list of lists, just return the first element.

I tried that variation, but it's throwing an error while triggering the RAPTOR pack itself.

@WhiteFang_Jr
[Attachments: image.png, image.png]
Your custom embedding class does not contain the required methods to create embeddings
The embedding, like the LLM, is not written in LlamaIndex.

Is there any wrapper like the one we used for the LangChain-implemented LLM?
from llama_index.llms.langchain import LangChainLLM
llm = LangChainLLM(custom_lc_llm)

@WhiteFang_Jr
There is, let's try that.
pip install llama-index-embeddings-langchain

Plain Text
from llama_index.embeddings.langchain import LangchainEmbedding
embed_model = LangchainEmbedding(custom_embed_model)
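Applied to this thread's class, that would look something like the following (a sketch; setting Settings.embed_model is an assumption about how the pack picks up the embedding model):

Plain Text
from llama_index.core import Settings
from llama_index.embeddings.langchain import LangchainEmbedding

# Wrap the LangChain-style embeddings class defined earlier.
embed_model = LangchainEmbedding(embedding_function)
Settings.embed_model = embed_model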
Tried the above solution:

It's still not working 😦

ValidationError: 1 validation error for EmbeddingEndEvent
embeddings -> 0 -> 0
value is not a valid float (type=type_error.float)

Any thoughts on how to mitigate the issue? @WhiteFang_Jr
I found where your error is occurring: https://github.com/run-llama/llama_index/blob/ff73754c5b68e9f4e49b1d55bc70e10d18462bce/llama-index-core/llama_index/core/instrumentation/events/embedding.py#L15

If you look, embeddings is expected to be of type List[List[float]], but in your case the value at embeddings[0][0] is not a valid float. I tried with the BGE embedding just now and did not get any issue with it.
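In other words, the event validator expects a shape like this (values illustrative):

Plain Text
# What EmbeddingEndEvent expects for `embeddings`:
embeddings = [
    [0.12, -0.03, 0.98],  # one flat vector of floats per input text
    [0.44, 0.27, -0.61],
]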
Hi @WhiteFang_Jr @Logan M,

I was just checking the return type of my embedding API call as well, and it follows the same type, list[list[float]],

as you can see from the attached image.
[Attachment: image.png]
If that's not the issue, what could be the reason for:

ValidationError: 1 validation error for EmbeddingEndEvent
embeddings -> 0 -> 0
value is not a valid float (type=type_error.float)
Is there any reason to be using LangChain? I feel like that's the root issue here 😅
Again, the embedding is also a service which is already in prod, and I cannot really make any changes there 😅😅.

Is there any possibility to make this work? I've been on this for several days now 🫤.

Any possible workaround would be greatly appreciated 🙏

@Logan M
I'm not saying you have to change the model, but just use a llama-index class rather than langchain
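For example, the same prod endpoint could be wrapped directly in LlamaIndex's BaseEmbedding instead of LangChain's Embeddings (a minimal sketch, untested here; the endpoint and payload are copied from the EmbeddingsModelAPI class earlier in the thread, and the class name is made up):

Plain Text
import json
from typing import List

import requests
from llama_index.core.embeddings import BaseEmbedding


class APIEmbedding(BaseEmbedding):
    """LlamaIndex-native wrapper around the same prod embedding endpoint."""

    def _make_request(self, instruction: str, snippets: List[str]) -> List[List[float]]:
        response = requests.post(
            "http://123.12.1.123:1234/fun/embedding",  # endpoint from earlier in the thread
            data=json.dumps({"instruction": instruction, "snippets": snippets}),
            headers={"Content-Type": "application/json"},
        )
        return json.loads(response.text)

    def _get_query_embedding(self, query: str) -> List[float]:
        # A single query embedding must be a flat list of floats.
        return self._make_request(
            instruction="Represent this sentence for searching relevant passages: ",
            snippets=[query],
        )[0]

    def _get_text_embedding(self, text: str) -> List[float]:
        return self._make_request(instruction="", snippets=[text])[0]

    def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        return self._make_request(instruction="", snippets=texts)

    async def _aget_query_embedding(self, query: str) -> List[float]:
        return self._get_query_embedding(query)


embed_model = APIEmbedding()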
Okay, is there any dummy code I can use for reference to recreate my below LangChain embedding class?

class EmbeddingsModelAPI(BaseModel, Embeddings):
    def __init__(__pydantic_self__, **data: Any) -> None:
        super().__init__(**data)

    def _make_request(self, instruction: str, snippets: List[str]) -> np.ndarray:
        response: Response = request(
            "POST",
            "http://169.46.6.156:8000/functional/embeddings",
            data=json.dumps({"instruction": instruction, "snippets": snippets}),
            headers={"Content-Type": "application/json"},
        )
        # embeddings: np.ndarray = np.reshape(
        #     np.frombuffer(response.content, dtype=np.float32), (len(snippets), -1)
        # )
        # embeddings: np.ndarray = np.array(json.loads(response.text))

        embeddings = json.loads(response.text)
        return embeddings

    def embed_documents(self, texts: List[str]) -> np.ndarray:
        return self._make_request(instruction="", snippets=texts)

    def embed_query(self, text: str) -> np.ndarray:
        return self._make_request(
            instruction="Represent this sentence for searching relevant passages: ",
            snippets=[text],
        )


embedding_function = EmbeddingsModelAPI()

@Logan M
Hi @WhiteFang_Jr @Logan M,

I was just trying to build a demo embedding model wrapped with LlamaIndex.

I'm still getting issues like the below when running the RAPTOR pack:

value is not a valid list (type=type_error.list)
embeddings -> 8
value is not a valid list (type=type_error.list)
embeddings -> 9
value is not a valid list (type=type_error.list)

Please help me with this.
You are not getting the required output. As you can see, at indices 8 and 9 you are getting values that are not lists of vectors.

I would debug what is being sent from your embed-model server and what you receive and return back to LlamaIndex for further processing.
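A quick way to debug that (a sketch; get_text_embedding_batch is LlamaIndex's public batch method on embedding models, and the sample texts are arbitrary):

Plain Text
# Sanity-check the shape of what the embedding wrapper returns
# before LlamaIndex validates it.
vectors = embed_model.get_text_embedding_batch(["hello", "world"])
for i, vec in enumerate(vectors):
    assert isinstance(vec, list), f"element {i} is {type(vec)}, not a list"
    assert all(isinstance(x, float) for x in vec), f"element {i} has non-float values"
print(len(vectors), len(vectors[0]))  # e.g. number of texts x embedding dim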
Hi @WhiteFang_Jr,

I fixed the issue, thanks!