Find answers from the community

man33sh
Offline, last seen 3 months ago
Joined September 25, 2024
Can we filter the message history before sending it to the chat engine, like we can do in LangChain, instead of directly cutting it by token length?
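Something like the sketch below is what I mean (here `index` and `full_history` are assumed to already exist, and `filter_history` is just a hypothetical helper that keeps the last few turns):
Plain Text
from llama_index.core.llms import ChatMessage

# hypothetical helper: keep only the last `max_turns` user/assistant exchanges
def filter_history(history: list[ChatMessage], max_turns: int = 4) -> list[ChatMessage]:
    return history[-max_turns * 2:]

chat_engine = index.as_chat_engine(chat_mode="condense_plus_context")
response = chat_engine.chat(
    "what does my policy cover",
    chat_history=filter_history(full_history),
)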
2 comments
man33sh

Hi all,
I was facing this issue all of a sudden; it was working fine until I restarted my kernel.

----> 4 Settings.llm = Gemini()
5 Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")

File c:\Users\Maneesh\AppData\Local\Programs\Python\Python312\Lib\site-packages\llama_index\llms\gemini\base.py:141, in Gemini.__init__(self, api_key, model, temperature, max_tokens, generation_config, safety_settings, callback_manager, api_base, transport, model_name, default_headers, **generate_kwargs)
    138 # Explicitly passed args take precedence over the generation_config.
    139 final_gen_config = {"temperature": temperature, **base_gen_config}
--> 141 self._model = genai.GenerativeModel(
    142     model_name=model,
    143     generation_config=final_gen_config,
    144     safety_settings=safety_settings,
    145 )
    147 self._model_meta = genai.get_model(model)
    149 supported_methods = self._model_meta.supported_generation_methods

File c:\Users\Maneesh\AppData\Local\Programs\Python\Python312\Lib\site-packages\pydantic\main.py:837, in BaseModel.__setattr__(self, name, value)
    832 raise AttributeError(
    833     f'{name!r} is a ClassVar of {self.__class__.__name__} and cannot be set on an instance. '
    834     f'If you want to set a value on the class, use {self.__class__.__name__}.{name} = value.'
    835 )
    836 elif not _fields.is_valid_field_name(name):
...
    826 else:
    827     # this is the current error
    828     raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')

AttributeError: 'Gemini' object has no attribute '__pydantic_private__'
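For context, a minimal setup matching the lines in the traceback would look roughly like this (imports inferred from the error paths; Gemini() picks up GOOGLE_API_KEY from the environment unless api_key is passed):
Plain Text
from llama_index.core import Settings
from llama_index.llms.gemini import Gemini
from llama_index.embeddings.openai import OpenAIEmbedding

# the two lines from the traceback, with their imports
Settings.llm = Gemini()
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")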
4 comments
man33sh

Hi All,
Is anyone facing this issue while trying to retrieve data from LlamaCloud?

ValidationError: 1 validation error for ParsingModel[List[llama_cloud.types.pipeline.Pipeline]]
root -> 0 -> eval_parameters -> llm_model
value is not a valid enumeration member; permitted: 'GPT_3_5_TURBO', 'GPT_4', 'GPT_4_TURBO' (type=type_error.enum; enum_values=[<SupportedEvalLlmModelNames.GPT_3_5_TURBO: 'GPT_3_5_TURBO'>, <SupportedEvalLlmModelNames.GPT_4: 'GPT_4'>, <SupportedEvalLlmModelNames.GPT_4_TURBO: 'GPT_4_TURBO'>])

The code I'm using is below:
Plain Text
from llama_index.indices.managed.llama_cloud import LlamaCloudIndex
from llama_index.llms.openai import OpenAI

from llama_index.core.vector_stores import (
    MetadataFilter,
    MetadataFilters,
    FilterOperator,
)

index = LlamaCloudIndex(
    name="insurance-detail",
    project_name="policy_gpt",
    organization_id="a1b3bd00-6958-4b47-bd51-f4c85ad7f8de",
    api_key="llx-xxxxxxxxxxxxxi",
)

llm = OpenAI(model="gpt-4o-mini")

# create metadata filter
filters = MetadataFilters(
    filters=[
        MetadataFilter(
            # key="file_name", operator=FilterOperator.EQ, value="HDFC"
            key="file_name",
            operator=FilterOperator.EQ,
            value="HDFC ERGO hospital-daily-cash-rider HDHHLIP21344V02202.pdf",
        ),
    ]
)

# configure retriever
retriever = index.as_retriever(
    dense_similarity_top_k=10,
    sparse_similarity_top_k=3,
    # alpha=1.0,
    # llm=llm,
    enable_reranking=True,
    rerank_top_n=3,
    filters=filters,
)
nodes = retriever.retrieve("what are the exclusions in my policy")
3 comments
man33sh

Deploy

Has anyone deployed LlamaIndex workflows in FastAPI?
And is there any option like the human interruption we have in LangGraph?
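A sketch of the kind of pause/resume I mean, using the human-in-the-loop events that LlamaIndex workflows expose (InputRequiredEvent / HumanResponseEvent); the workflow and the auto-reply here are assumptions for illustration, not the FastAPI wiring itself:
Plain Text
from llama_index.core.workflow import (
    Workflow, StartEvent, StopEvent, Context, step,
    InputRequiredEvent, HumanResponseEvent,
)

class ApprovalWorkflow(Workflow):
    @step
    async def ask_human(self, ctx: Context, ev: StartEvent) -> InputRequiredEvent:
        # pause the workflow and surface a question to whoever is driving it
        return InputRequiredEvent(prefix="Approve this action? ")

    @step
    async def finish(self, ctx: Context, ev: HumanResponseEvent) -> StopEvent:
        # resume once the human's answer arrives
        return StopEvent(result=f"human said: {ev.response}")

async def run_with_human() -> str:
    handler = ApprovalWorkflow(timeout=None).run()
    async for event in handler.stream_events():
        if isinstance(event, InputRequiredEvent):
            # in FastAPI this is where the question would go back to the client
            # and the reply would come in on another endpoint
            handler.ctx.send_event(HumanResponseEvent(response="yes"))
    return await handler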
3 comments