Hey, I want to use HuggingFaceInferenceAPIEmbedding to call Text Embedding Inference, which I have deployed on a virtual machine with GPU access. This program will run on a different virtual machine.
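For a TEI server reachable over the network, LlamaIndex also ships a `TextEmbeddingsInference` embedding class that takes a `base_url`; under the hood it just POSTs to TEI's `/embed` endpoint. A minimal sketch of that raw call (the host, port, and payload helper below are assumptions about your deployment, not verified against it):

```python
import json
import urllib.request

TEI_URL = "http://10.0.0.5:8080"  # assumption: your GPU VM's address and mapped port


def build_embed_payload(texts):
    # TEI's /embed route expects a JSON body of the form {"inputs": [...]}
    return {"inputs": texts}


def embed(texts):
    req = urllib.request.Request(
        f"{TEI_URL}/embed",
        data=json.dumps(build_embed_payload(texts)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # list of embedding vectors


# embeddings = embed(["hello world"])  # requires the TEI server to be reachable
```

The only cross-VM requirement is that the TEI port is open between the two machines; the embedding class (or this raw call) then works the same as a local one.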
query_engine_tools = [
    QueryEngineTool(
        query_engine=query_engine,
        metadata=ToolMetadata(
            name="codebase",
            description=(
                "Provides the entire code base. "
                "Use a detailed plain text question as input to the tool."
            ),
        ),
    )
]
Traceback (most recent call last):
  File "/Users/bhavyagiri/Developer/studypod/agent/core/main.py", line 13, in <module>
    indexing_local_repo(path,language)
  File "/Users/bhavyagiri/Developer/studypod/agent/core/main.py", line 7, in indexing_local_repo
    indexing.indexing_local_repo(path,language)
  File "/Users/bhavyagiri/Developer/studypod/agent/core/engine/indexing.py", line 10, in indexing_local_repo
    nodes = get_nodes(documents,language)
  File "/Users/bhavyagiri/Developer/studypod/agent/core/engine/utils.py", line 28, in get_nodes
    splitter = CodeSplitter(
  File "/Users/bhavyagiri/Developer/studypod/agent/.venv/lib/python3.11/site-packages/llama_index/core/node_parser/text/code.py", line 63, in __init__
    parser = tree_sitter_languages.get_parser(language)
  File "tree_sitter_languages/core.pyx", line 19, in tree_sitter_languages.core.get_parser
  File "tree_sitter_languages/core.pyx", line 14, in tree_sitter_languages.core.get_language
TypeError: __init__() takes exactly 1 argument (2 given)
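This `TypeError` inside `tree_sitter_languages.get_language` is a known symptom of a version mismatch: the prebuilt `tree_sitter_languages` wheels were compiled against the older `tree_sitter` API, and `Language.__init__` changed its signature in `tree_sitter` 0.22. A commonly suggested workaround (the exact pin is an assumption; check which versions you have installed) is to pin the older runtime:

```shell
pip install "tree-sitter<0.22"
```

Reinstalling in the same virtualenv that `CodeSplitter` runs in should make `get_parser(language)` work again.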
Need help: the goal is to query a Milvus collection with specific doc_ids. When a user enters a query, instead of querying the whole collection, I want the search restricted to some specific doc IDs.
from ..config import settings
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.core import Settings
from llama_index.vector_stores.milvus import MilvusVectorStore
from llama_index.llms.openai import OpenAI
from llama_index.core.vector_stores.types import VectorStoreQuery
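Given that `VectorStoreQuery` is already imported above: it accepts a `doc_ids` argument that the vector store can use to restrict the search to those documents. An untested sketch (the `vector_store` and `embed_model` objects, and the helper name, are assumed to come from the surrounding setup):

```python
from llama_index.core.vector_stores.types import VectorStoreQuery


def query_subset(vector_store, embed_model, query_text, doc_ids, top_k=5):
    # Embed the user's query, then search only within the given doc_ids
    # instead of the whole collection.
    query_embedding = embed_model.get_query_embedding(query_text)
    vs_query = VectorStoreQuery(
        query_embedding=query_embedding,
        similarity_top_k=top_k,
        doc_ids=doc_ids,  # e.g. ["doc_1", "doc_2"] -- hypothetical IDs
    )
    return vector_store.query(vs_query)
```

An alternative is building a retriever with metadata filters (`index.as_retriever(filters=...)`) keyed on whatever ID field your nodes carry.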
Is there a way to parse a complex PDF, extracting tables, images, and text, maybe even with an LLM (GPT-4o, or perhaps a local multimodal one)? Maybe while reading the files? @Logan M
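One option people use for table- and image-heavy PDFs is LlamaParse, plugged into `SimpleDirectoryReader` as a per-extension file extractor. A rough, untested sketch (requires a LlamaCloud API key; the directory path is a placeholder):

```python
from llama_parse import LlamaParse
from llama_index.core import SimpleDirectoryReader

# LlamaParse can return the PDF content as markdown, preserving table structure.
parser = LlamaParse(result_type="markdown")

documents = SimpleDirectoryReader(
    input_dir="./pdfs",                 # placeholder path
    file_extractor={".pdf": parser},    # route only PDFs through LlamaParse
).load_data()
```

For a fully local pipeline, a multimodal model can be pointed at rendered page images instead, but that means rasterizing pages yourself first.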
Failed to load file /Users/bhavyagiri/Developer/studypod/indexing/hc_verma/hc_verma.pdf with error: RetryError[<Future at 0x127c61b50 state=finished raised PdfReadError>]. Skipping...
BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'tools[0].function.name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'tools[0].function.name', 'code': 'invalid_value'}}
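OpenAI rejects tool/function names containing characters outside `^[a-zA-Z0-9_-]+$` (spaces are the usual culprit). A small sanitizer you could apply to each `ToolMetadata.name` before building the tools (the function name here is hypothetical):

```python
import re


def sanitize_tool_name(name: str) -> str:
    # Replace anything outside OpenAI's allowed pattern ^[a-zA-Z0-9_-]+$
    # with an underscore so the tool name validates.
    return re.sub(r"[^a-zA-Z0-9_-]", "_", name)


print(sanitize_tool_name("code base (v2)"))  # -> code_base__v2_
```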
I am building a query engine with LlamaIndex based on a large database of PDFs, and I would like to be able to retrieve the page number along with the sources. @Logan M
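LlamaIndex's default PDF reader stores the page number in each node's metadata under `page_label`, so it can be read off the response's source nodes. A sketch of the formatting step (the attribute layout mirrors `NodeWithScore`; metadata keys depend on which reader loaded the PDFs):

```python
def format_sources(source_nodes):
    # Each source node carries the originating file and page in its metadata
    # ("file_name" and "page_label" are set by the default PDF loader).
    lines = []
    for sn in source_nodes:
        meta = sn.node.metadata
        lines.append(f"{meta.get('file_name', '?')} p.{meta.get('page_label', '?')}")
    return lines


# usage, after response = query_engine.query("..."):
# for line in format_sources(response.source_nodes):
#     print(line)
```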