Find answers from the community

geoHeil
Offline, last seen 3 months ago
Joined September 25, 2024
I am exploring the SQL query engine. For the SQL one I get: "I'm sorry, but I can't execute SQL queries or access databases to provide real-time results. However, I can guide you on how to write the SQL query for the question you've asked." (The pandas one works fine.)
5 comments
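That refusal suggests the model is being asked to *run* the SQL itself instead of just writing it; a text-to-SQL engine should have the model generate the query text and then execute it against the database on the client side. A minimal sketch of that loop with sqlite3 and a stubbed LLM (the `fake_llm`, the `city` table, and the prompt format are illustrative assumptions, not llama-index internals):

```python
import sqlite3

def text_to_sql_query(question: str, conn: sqlite3.Connection, llm) -> list:
    """Ask the LLM only to WRITE the SQL; execute it ourselves."""
    schema = "\n".join(
        row[0] for row in conn.execute(
            "SELECT sql FROM sqlite_master WHERE type='table'"
        )
    )
    prompt = (
        f"Schema:\n{schema}\n\n"
        f"Question: {question}\n"
        "Return only a SQL query, no explanation."
    )
    sql = llm(prompt)                    # the model generates SQL text only
    return conn.execute(sql).fetchall()  # we run it, not the model

# --- demo with an in-memory DB and a stubbed LLM ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT, population INTEGER)")
conn.executemany("INSERT INTO city VALUES (?, ?)",
                 [("Berlin", 3_600_000), ("Vienna", 1_900_000)])

def fake_llm(prompt: str) -> str:
    return "SELECT name FROM city ORDER BY population DESC LIMIT 1"

print(text_to_sql_query("Which city is largest?", conn, fake_llm))
# [('Berlin',)]
```

If the real engine returns a refusal instead of rows, the generated prompt is likely phrased so the model believes it must execute the query itself.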
I am creating an OpenAI assistant using a) llama index and b) the native OpenAI API. In both cases I upload some grounding documents, set an instruction prompt, and enable the retrieval toolkit. However, in case (a) LlamaIndex generates quite bad texts, while variant (b) with the native OpenAI client works like a charm. What could be causing such issues? I was expecting similar results, as internally the OpenAI APIs should be called in both cases.
3 comments
I am facing an issue with llamaindex version 0.8.66 when trying to create an embedding with OpenAIEmbedding

I am using the Azure OpenAI 1.2.2 client library.

-> llamaindex fails with POST https://myname.openai.azure.com/embeddings "HTTP/1.1 404 Resource Not Found"

-> plain binary (from openai import AzureOpenAI) works for POST https://myname.openai.azure.com//openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15 "HTTP/1.1 200 OK"

How can this be fixed?

As you can see, the native client does not have this problem.
13 comments
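The 404 is consistent with the request missing Azure's deployment-scoped path: the working request goes to `/openai/deployments/<deployment>/embeddings?api-version=...`, while the failing one posts to the bare `/embeddings` path, which Azure does not serve. A small sketch comparing the two URLs (endpoint, deployment, and api-version values are taken from the question):

```python
def azure_embeddings_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build the deployment-scoped path that Azure OpenAI actually serves."""
    return (
        f"{endpoint}/openai/deployments/{deployment}"
        f"/embeddings?api-version={api_version}"
    )

endpoint = "https://myname.openai.azure.com"

bad = f"{endpoint}/embeddings"  # what the failing client hit -> 404
good = azure_embeddings_url(endpoint, "text-embedding-ada-002", "2023-05-15")

print(bad)
print(good)
```

In llama-index terms this usually means the embedding class must be configured with the Azure endpoint, deployment name, and `api_version` (e.g. via its Azure-specific embedding class) rather than the stock OpenAI defaults, which build the bare path above.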
Why is the embedding null when uploading it to Qdrant? What can I do to debug it? Manually executing the embedding computation yields non-null results.
16 comments
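A quick way to localize this is to validate each vector immediately before the upsert call, so a null or malformed embedding fails loudly on the client side instead of silently turning up as null in Qdrant. A minimal, library-free check (the function name and checks are a sketch; adapt it to whatever point structure your client builds):

```python
import math

def validate_vector(vec, expected_dim=None):
    """Raise early if an embedding is unusable, before sending it to the store."""
    if vec is None:
        raise ValueError("embedding is None - the embed call likely failed upstream")
    if expected_dim is not None and len(vec) != expected_dim:
        raise ValueError(f"dimension mismatch: got {len(vec)}, want {expected_dim}")
    if any(v is None or not math.isfinite(v) for v in vec):
        raise ValueError("embedding contains None/NaN/inf values")
    return vec

validate_vector([0.1, -0.2, 0.3], expected_dim=3)  # passes

try:
    validate_vector(None)
except ValueError as e:
    print("caught:", e)
```

If the manual computation passes this check but the stored vector is still null, the problem is likely in how the point payload is assembled (e.g. the vector field name or collection dimension), not in the embedding itself.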
Are there any plans to upgrade to NumPy 2? I am hitting conflicts like: llama-index-core==0.11.10 depends on numpy<2.0.0
2 comments
Can we use llama_cpp embeddings as well? I.e., instead of HuggingFaceEmbedding, use https://github.com/abetlen/llama-cpp-python/blob/main/examples/high_level_api/high_level_api_embedding.py with llm.create_embedding("Hello world!")? Does it make sense to standardize on one serving layer rather than mixing them?
3 comments
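llama-cpp-python's `create_embedding` returns an OpenAI-style response dict, so one way to standardize on a single serving layer is a thin adapter exposing one uniform embedding interface. The sketch below injects the backend as a callable, so it works with `llm.create_embedding` or anything else with the same shape (the response format `{"data": [{"embedding": [...]}]}` and the class/method names are assumptions for illustration):

```python
class CallableEmbedding:
    """Adapter: wrap any create_embedding-style callable behind one interface."""

    def __init__(self, embed_fn):
        self._embed_fn = embed_fn  # e.g. llama_cpp's llm.create_embedding

    def get_text_embedding(self, text: str) -> list:
        resp = self._embed_fn(text)
        # OpenAI-style response: {"data": [{"embedding": [...]}], ...}
        return resp["data"][0]["embedding"]

# demo with a stub backend standing in for llama_cpp
def stub_backend(text: str) -> dict:
    return {"data": [{"embedding": [float(len(text)), 0.0]}]}

emb = CallableEmbedding(stub_backend)
print(emb.get_text_embedding("Hello world!"))  # [12.0, 0.0]
```

To plug into llama-index proper you would subclass its base embedding class instead of this standalone adapter. Mixing serving layers works, but standardizing on one backend avoids subtle dimension or normalization mismatches between indexing and querying.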
geoHeil · Model

How can I use AutoModelForCausalLM.from_pretrained('TheBloke/leo-hessianai-7B-chat-GGUF', model_file="leo-hessianai-7b-chat.Q4_K_M.gguf", model_type="llama") from ctransformers with LlamaIndex? This currently fails with: "TheBloke/leo-hessianai-7B-chat-GGUF does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack." It seems the AutoModelForCausalLM that LlamaIndex uses comes from the regular transformers library.
25 comments
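That error message is what the *transformers* library raises when it finds no PyTorch/TF/Flax checkpoint: GGUF files are not a format it loads, and ctransformers ships its own class with the identical name `AutoModelForCausalLM`, so the first thing to check is which module the class was imported from. The sketch below shows only that routing decision as a testable helper (the function is illustrative; the actual fix is `from ctransformers import AutoModelForCausalLM` and then wrapping the resulting model behind a custom LLM class for LlamaIndex):

```python
def pick_loader(model_file: str) -> str:
    """Route model files to the library that can actually read them.

    GGUF/GGML quantized weights need ctransformers (or llama-cpp);
    transformers' AutoModelForCausalLM only reads pytorch_model.bin,
    tf_model.h5, model.ckpt, or flax_model.msgpack checkpoints.
    """
    if model_file.endswith((".gguf", ".ggml")):
        return "ctransformers"
    return "transformers"

print(pick_loader("leo-hessianai-7b-chat.Q4_K_M.gguf"))  # ctransformers
print(pick_loader("pytorch_model.bin"))                  # transformers
```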
geoHeil · Function

How can I use a function-calling API with any Hugging Face model? https://platform.openai.com/docs/guides/gpt/function-calling
3 comments
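Most Hugging Face models have no native function-calling endpoint, so the usual workaround is prompt-engineered function calling: describe the tools in the prompt, ask the model to reply with a JSON call, then parse and dispatch it yourself. A minimal sketch with a stubbed model standing in for any HF text-generation pipeline (the prompt format, tool, and stub are illustrative assumptions):

```python
import json

def call_with_tools(question: str, tools: dict, model):
    """Prompt-based function calling: the model emits JSON, we dispatch."""
    spec = ", ".join(f"{name}(...)" for name in tools)
    prompt = (
        f"You can call these functions: {spec}\n"
        'Reply ONLY with JSON like {"name": ..., "arguments": {...}}.\n'
        f"Question: {question}"
    )
    call = json.loads(model(prompt))                 # parse the model's JSON reply
    return tools[call["name"]](**call["arguments"])  # dispatch locally

def get_weather(city: str) -> str:
    return f"sunny in {city}"

def stub_model(prompt: str) -> str:  # stands in for an HF text-generation model
    return '{"name": "get_weather", "arguments": {"city": "Berlin"}}'

print(call_with_tools("Weather in Berlin?", {"get_weather": get_weather}, stub_model))
# sunny in Berlin
```

With a real model, reliability depends on the model following the JSON format; instruction-tuned models plus a retry-on-parse-failure loop are the common mitigation.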
How can I use llama index for data cleaning? Are there some examples available? So far I have seen function calling examples like https://wandb.ai/darek/llmapps/reports/Using-LLMs-to-Extract-Structured-Data-OpenAI-Function-Calling-in-Action--Vmlldzo0Nzc0MzQ3 which use the native libraries, but something more generic based on llama index would be nice.
2 comments
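A generic pattern for LLM-assisted data cleaning, independent of any particular library, is to send each messy value with a normalization instruction and keep the surrounding pipeline in plain Python. The sketch below uses a lookup-table stub in place of a real model call (the function, the instruction text, and the stub data are illustrative assumptions):

```python
def clean_column(values, llm, instruction="Normalize this country name"):
    """Map each messy value through the LLM, caching repeats to save calls."""
    cache = {}
    cleaned = []
    for v in values:
        if v not in cache:
            cache[v] = llm(f"{instruction}: {v}").strip()
        cleaned.append(cache[v])
    return cleaned

# stub LLM standing in for a real model call
def stub_llm(prompt: str) -> str:
    fixes = {"germny": "Germany", "U.S.A": "United States", "Germany": "Germany"}
    return fixes[prompt.rsplit(": ", 1)[1]]

print(clean_column(["germny", "U.S.A", "Germany"], stub_llm))
# ['Germany', 'United States', 'Germany']
```

Swapping the stub for a llama-index LLM object's completion call (or its structured-output program) would give the library-based variant the question asks about.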