Is it possible for metadata to be excluded from the existing-document cache check? I'm running into an issue where some of our metadata is generated via LLM, and given the nature of these models, the metadata is slightly different across iterations. This leads to duplicate documents and reprocessing of docs that already exist in various stores. Or is there a different way to go about indexing that circumvents this, such as generating metadata post-index?
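One workaround, sketched below, is to make the duplicate check depend only on content you control: derive a deterministic hash (or doc ID) from the text plus the stable metadata fields, excluding the LLM-generated ones. The `stable_doc_hash` helper here is hypothetical, not part of any library; it only illustrates the idea, assuming your ingestion path lets you supply your own document identity.

```python
import hashlib
import json

def stable_doc_hash(text: str, metadata: dict, volatile_keys: set) -> str:
    """Hash the text plus only the stable metadata keys, so LLM-generated
    fields (which vary between runs) don't change the document's identity."""
    stable_meta = {k: v for k, v in metadata.items() if k not in volatile_keys}
    payload = text + json.dumps(stable_meta, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Two runs that produce different LLM summaries yield the same hash:
meta_run1 = {"source": "report.pdf", "llm_summary": "A report about X."}
meta_run2 = {"source": "report.pdf", "llm_summary": "This report covers X."}
h1 = stable_doc_hash("body text", meta_run1, volatile_keys={"llm_summary"})
h2 = stable_doc_hash("body text", meta_run2, volatile_keys={"llm_summary"})
assert h1 == h2
```

If the pipeline dedupes on a doc ID rather than a hash, the same trick applies: set the ID deterministically from the stable fields and attach the LLM metadata afterwards.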
Anyone know what additional kwargs are allowed/expected for the OpenAILike LLM? I was hoping to be able to pass parameters such as top-p/top-k, repetition penalty, etc., but it doesn't seem those are supported. I assume that since it's just a wrapper for OpenAI, options are limited. Is it worth creating a custom implementation, or is it possible to modify the wrapper?
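For what it's worth, the OpenAI-family LLM classes in llama_index expose an `additional_kwargs` constructor argument whose contents get merged into the request body, which may already cover `top_p` and friends; whether the backend honors non-OpenAI parameters like `repetition_penalty` depends on the server. A standalone sketch of that merge pattern (this class only illustrates the mechanism; it is not the real wrapper):

```python
# Sketch of how an OpenAI-style wrapper can forward extra sampling
# parameters: anything not modeled as a first-class field is carried in
# an `additional_kwargs` dict and merged into the request payload.

class OpenAILikeSketch:
    def __init__(self, model, temperature=0.1, additional_kwargs=None):
        self.model = model
        self.temperature = temperature
        self.additional_kwargs = additional_kwargs or {}

    def build_request(self, prompt):
        payload = {
            "model": self.model,
            "temperature": self.temperature,
            "prompt": prompt,
        }
        # Extra parameters (top_p, repetition_penalty, ...) ride along here.
        payload.update(self.additional_kwargs)
        return payload

llm = OpenAILikeSketch(
    "local-model",
    additional_kwargs={"top_p": 0.9, "repetition_penalty": 1.1},
)
req = llm.build_request("hello")
assert req["top_p"] == 0.9
```

If `additional_kwargs` turns out not to reach your backend, subclassing the wrapper and overriding the method that builds the request kwargs is likely less work than a full custom implementation.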
Experiencing some odd behavior with PostgresDocumentStore (deployed to AWS Aurora Serverless v2). We've added support for the SSL parameters "sslmode," "sslcert," "sslkey," and "sslrootcert." Synchronous calls with SSL work great. Asynchronous calls without SSL also work great. Async + SSL produces an error: "ConnectionDoesNotExist: Future not awaited - connection closed mid operation." What is confusing is that this error only happens when trying to run aget_nodes and adelete, whereas aquery, achat, and aretrieve (recursive retriever) all work fine. The fact that some of these calls work leads me to believe it's not an SSL issue.