```python
pg_vector_store = PGVectorStore.from_params(
    **POSTGRES_SETTINGS.model_dump(exclude_none=True),
    table_name="embeddings",
    embed_dim=384,
)

pipeline = IngestionPipeline(
    transformations=[
        HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),
    ],
    docstore=postgres_docstore,
    vector_store=pg_vector_store,
)
```

This fails with:

```
validation_error = ValidationError(model='IngestionPipeline', errors=[{'loc': ('vector_store',), 'msg': "Can't instantiate abstract class...VectorStore without an implementation for abstract methods 'add', 'client', 'delete', 'query'", 'type': 'type_error'}])

    def __init__(__pydantic_self__, **data: Any) -> None:
        """
        Create a new model by parsing and validating input data from keyword arguments.

        Raises ValidationError if the input data cannot be parsed to form a valid model.
        """
        # Uses something other than `self` the first arg to allow "self" as a settable attribute
        values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
        if validation_error:
>           raise validation_error
E           pydantic.v1.error_wrappers.ValidationError: 1 validation error for IngestionPipeline
E           vector_store
E             Can't instantiate abstract class BasePydanticVectorStore without an implementation for abstract methods 'add', 'client', 'delete', 'query' (type=type_error)
```
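Not certain this is the cause, but this exact `ValidationError` often shows up when two copies of the vector store base class end up in the environment (for example, mixed legacy `llama_index` and post-0.10 `llama_index.core` imports): pydantic's isinstance check against `BasePydanticVectorStore` fails, and it falls back to trying to instantiate the abstract base. A minimal sanity check, assuming the namespaced packages and placeholder connection parameters:

```python
# Hedged sketch: verify the PGVectorStore instance really is an instance of
# the same BasePydanticVectorStore that IngestionPipeline validates against.
from llama_index.core.vector_stores.types import BasePydanticVectorStore
from llama_index.vector_stores.postgres import PGVectorStore

store = PGVectorStore.from_params(
    host="localhost",      # placeholder connection params for the sketch
    port="5432",
    database="vectordb",
    user="postgres",
    password="postgres",
    table_name="embeddings",
    embed_dim=384,
)
print(isinstance(store, BasePydanticVectorStore))  # False would explain the error
```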
Is it expected for `trace_map` in `end_trace` to be None? In attempting to add arize-phoenix to privateGPT I'm noticing significantly different behavior from the toy project I created to test phoenix. For some reason the traces never get emitted in privateGPT, and it seems to be because `OpenInferenceTraceCallbackHandler.end_trace()` is not called with `trace_map`. `end_trace` is called from other places as well.
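Not an answer to the `trace_map` question directly, but a common reason for traces never being emitted is that the handler isn't attached to the callback manager the components actually use, so `start_trace`/`end_trace` never run with a populated trace map. A minimal sketch, assuming the older `phoenix.trace.llama_index` import path and the post-0.10 global `Settings`:

```python
# Hedged sketch: register the phoenix handler globally so every LlamaIndex
# operation drives begin/end trace on it.
from phoenix.trace.llama_index import OpenInferenceTraceCallbackHandler
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager

Settings.callback_manager = CallbackManager([OpenInferenceTraceCallbackHandler()])
```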
```python
query_stream = chat_service.stream_chat(
    messages=all_messages,
    use_context=True,
)
```

The return value of `chat_service.stream_chat()` is of type `CompletionGen`, which only contains a list of the sources. I'd like to keep around a copy of the whole prompt that is sent to the LLM for each invocation of `stream_chat`, for debugging purposes.
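One possible way to capture the prompt, sketched under the assumption that you can reach the global `Settings` the service uses: LlamaIndex's built-in `LlamaDebugHandler` records the payload of every LLM event, which includes the formatted prompt/messages.

```python
# Hedged sketch: record LLM events, then read the prompts back out after
# an invocation of stream_chat.
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, CBEventType, LlamaDebugHandler

debug_handler = LlamaDebugHandler(print_trace_on_end=False)
Settings.callback_manager = CallbackManager([debug_handler])

# ... after stream_chat has run:
for start_event, end_event in debug_handler.get_event_pairs(CBEventType.LLM):
    print(start_event.payload)  # contains the messages/prompt sent to the LLM
```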
I want to take `ChatResponse` from `llama_index.core.llms` and extend it with a custom pydantic model. I tried adding `MyModel`s to `ChatResponse` via subclassing, like so:

```python
from pydantic import BaseModel
from llama_index.core.llms import ChatResponse


class MyModel(BaseModel):
    foo: str


class CompletionWithAttributions(ChatResponse):
    attributions: list[MyModel] | None = None
```
The alternative is a standalone model that doesn't subclass `ChatResponse`, like so:

```python
from pydantic import BaseModel


class MyModel(BaseModel):
    foo: str


class CompletionWithAttributions(BaseModel):
    attributions: list[MyModel] | None = None
```
I know I can import `BaseModel` from llama-index itself like this, but that seems VERY kludgy, as elsewhere in my app I am importing `BaseModel` from pydantic as per usual:

```python
from llama_index.core.bridge.pydantic import BaseModel
```
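If the subclassing route keeps fighting the v1/v2 split, a minimal sketch of a workaround, assuming your own models stay on pydantic v2, is composition instead of inheritance: carry the `ChatResponse` as an attribute rather than subclassing it.

```python
# Hedged sketch: keep pydantic-v2 models on your side and hold the
# (bridged-v1) ChatResponse as an opaque attribute.
from pydantic import BaseModel, ConfigDict
from llama_index.core.llms import ChatResponse


class MyModel(BaseModel):
    foo: str


class CompletionWithAttributions(BaseModel):
    # arbitrary_types_allowed lets a non-v2 object live inside a v2 model
    model_config = ConfigDict(arbitrary_types_allowed=True)

    response: ChatResponse
    attributions: list[MyModel] | None = None
```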
Fetching individual documents works:

```python
doc = docstore.get_document(llama_id)
assert doc is not None

doc_content = doc.get_content()
assert doc_content is not None
```

but this:

```python
docstore_ref_doc_info = docstore.get_all_ref_doc_info()
```

raises:

```
NotImplementedError: Vector store integrations that store text in the vector store are not supported by ref_doc_info yet.
```
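A possible workaround, sketched against the generic key-value docstore API (`docstore.docs` and `node.ref_doc_id`, nothing privateGPT-specific): enumerate the stored nodes directly and group them by the source document they came from.

```python
# Hedged sketch: walk the docstore contents without get_all_ref_doc_info().
from collections import defaultdict

nodes_by_ref_doc = defaultdict(list)
for node_id, node in docstore.docs.items():
    # ref_doc_id links a node back to the document it was split from
    nodes_by_ref_doc[node.ref_doc_id].append(node_id)

print(dict(nodes_by_ref_doc))
```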
Why can't my tooling resolve anything from the `llama_index` package? It seems to be able to import stuff from `llama_cloud`, like `TextNode`, but it has absolutely no idea what is exported from the `llama_index` package. Thanks so much for the help, I've spent a ridiculous 3 hours on this issue today 🤦♂️

```python
from llama_index.core.bridge.pydantic import (
    BaseModel
)
```
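A hedged guess at the cause: since llama-index 0.10, `llama_index` is a namespace package assembled from separate distributions (`llama-index-core` plus integration packages), and some editors and type checkers resolve a regular package like `llama_cloud` fine while failing on the namespace root. Importing from the concrete subpackages usually resolves cleanly:

```python
# Hedged sketch: prefer concrete subpackage imports over the namespace root.
from llama_index.core.schema import TextNode
from llama_index.core.bridge.pydantic import BaseModel
```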
`MetadataFilters` comes from `llama_index.core.vector_stores` and is used in this request body:

```python
class ChatBody(BaseModel):
    messages: list[OpenAIMessage]
    use_context: bool = False
    conversation_id: str | None = None
    context_filter: MetadataFilters | None = None
    include_sources: bool = True
    stream: bool = False
```

The trouble is that `MetadataFilters` is built on `llama_index.core.bridge.pydantic`, which is a v1 model.
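One sketch of a way around mixing a bridged-v1 model into a v2 request body, assuming the endpoint can do the conversion itself: accept the filter as a plain dict in the FastAPI schema and parse it into `MetadataFilters` at the boundary.

```python
# Hedged sketch: keep the request model pure pydantic v2 and convert to the
# bridged-v1 MetadataFilters inside the endpoint.
from typing import Any

from pydantic import BaseModel
from llama_index.core.vector_stores import MetadataFilters


class ChatBody(BaseModel):
    messages: list[dict]  # simplified for the sketch
    context_filter: dict[str, Any] | None = None


def parse_context_filter(body: ChatBody) -> MetadataFilters | None:
    if body.context_filter is None:
        return None
    # parse_obj is the pydantic-v1 way to build a model from a dict
    return MetadataFilters.parse_obj(body.context_filter)
```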
`LangChainLLM()` is fundamentally different from other LLMs like `LlamaCPP()` or `SagemakerLLM()`, specifically in that you cannot set `messages_to_prompt` or `completion_to_prompt` on `LangChainLLM`, but you can on the others. `LangChainLLM` is the only one that extends `LLM` instead of `CustomLLM`.

```python
hf = HuggingFaceTextGenInference(
    inference_server_url="https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
)
prompt_style = get_prompt_style(settings.huggingface.prompt_style)
self.llm = LangChainLLM(llm=hf)
```

I'd like to apply the prompt style's `messages_to_prompt` function here. Is there a way to do this?
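A hedged sketch of one workaround, assuming privateGPT's prompt styles (which expose a `messages_to_prompt()` method) and the `prompt_style`/`self.llm` objects from the snippet above: format the chat messages yourself and send the result through the completion path, so `LangChainLLM` never needs a `messages_to_prompt` hook.

```python
# Hedged sketch: apply the prompt style manually, then call complete().
from llama_index.core.llms import ChatMessage, MessageRole

messages = [
    ChatMessage(role=MessageRole.SYSTEM, content="You are a helpful assistant."),
    ChatMessage(role=MessageRole.USER, content="Hello!"),
]

# Turn the chat history into a single formatted prompt string...
prompt = prompt_style.messages_to_prompt(messages)

# ...and send it as a plain completion instead of a chat call.
response = self.llm.complete(prompt)
print(response.text)
```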