Find answers from the community

Tay
Joined September 25, 2024
Is there any LlamaIndex API example that combines an LLM with a directory scanner to build an AI with domain knowledge? My old setup broke completely.
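For reference, a minimal sketch of the pattern being asked about, using the legacy llama_index API that appears later on this page (SimpleDirectoryReader scans a folder of documents, and the index grounds the LLM's answers in them). The ./data path and the query are placeholders; with no further configuration this falls back to OpenAI defaults for the LLM and embeddings:

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Scan a directory of domain documents (txt, pdf, md, ...) into Document objects.
documents = SimpleDirectoryReader("./data").load_data()

# Build a vector index over them; the query engine retrieves relevant chunks
# and passes them to the LLM as context, which is what gives it domain knowledge.
index = VectorStoreIndex.from_documents(documents)
response = index.as_query_engine().query("What does our documentation say about X?")
print(response)
```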
2 comments
Heya,
I'm currently developing an AI for a company.
It works as of right now, but I have some issues.

  • It hallucinates American answers even though we need German answers. For example, the German "Pflanzenschutzmittel" and the American "pesticides" are different.
  • It hallucinates user questions that were never asked. For example, when I ask a question, it answers it, but then immediately assumes a follow-up question I never asked, mentions it in the same response, and answers it too.
  • I'm trying to display its sources, but response.source_nodes does not contain anything.
Is there anything I can try to fix these issues?
Thanks in advance.
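For the language issue, one thing to try is a German text_qa_template, so the model is instructed in-prompt to answer in German with German terminology; and response.source_nodes is populated by the query engine's retriever, so querying through a plain query engine (rather than a bare LLM call) should fill it. A sketch against the legacy llama_index API, assuming an existing index; {context_str} and {query_str} are the standard template slots, the prompt wording is illustrative:

```python
from llama_index.prompts import PromptTemplate

# German QA template: forces German answers and German terminology.
qa_prompt = PromptTemplate(
    "Kontextinformationen:\n{context_str}\n"
    "Beantworte die folgende Frage ausschließlich auf Deutsch und verwende "
    "deutsche Fachbegriffe (z. B. 'Pflanzenschutzmittel', nicht 'Pesticides').\n"
    "Frage: {query_str}\nAntwort:"
)

query_engine = index.as_query_engine(text_qa_template=qa_prompt)
response = query_engine.query("Was sind Pflanzenschutzmittel?")

# source_nodes is filled by the retriever behind the query engine.
for node in response.source_nodes:
    print(node.node.metadata.get("file_name"), node.score)
```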
16 comments
What is the app.engine module from create-llama? I tried app and 1app, but there is nothing on PyPI.
2 comments
So the generate.py code is telling me that app.engine is not a module. What should I do?
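For context on the two app.engine posts above: create-llama scaffolds a local project, and app.engine most likely refers to the generated app/ package inside that project, not a package on PyPI, so generate.py needs to be run from the project root where that package lives. A hedged sketch of the expected layout (folder names follow the create-llama Python template):

```
my-app/
├── app/
│   ├── __init__.py
│   └── engine/       # "app.engine" is this local package, not a PyPI module
└── generate.py       # run from my-app/ so that "app" is importable
```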
82 comments
So, my AI only responds with that. Before, it didn't respond at all; now it does, but this happens instead. Does anyone have an idea?
2 comments
Tay

KeyError

Plain Text
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=index.as_query_engine(),
    condense_question_prompt=custom_prompt,
    chat_history=custom_chat_history,
    verbose=True,
    llm=llm,
    service_context=service_context,
)

Gives me
Plain Text
KeyError: 'custom_chat_history'



Full Code in Thread
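A hedged guess at the cause, reproducible in plain Python: the condense prompt is rendered with str.format-style variables, and CondenseQuestionChatEngine supplies only {chat_history} and {question}; a custom_prompt template that contains {custom_chat_history} therefore fails with exactly this KeyError. Minimal demonstration (template strings are illustrative):

```python
# The engine fills in {chat_history} and {question}; any other placeholder
# in the template raises KeyError at query time.
template_ok = (
    "Given the conversation:\n{chat_history}\n"
    "Rewrite the follow-up as a standalone question: {question}"
)
template_bad = (
    "Given the conversation:\n{custom_chat_history}\n"
    "Rewrite the follow-up as a standalone question: {question}"
)

def render(template: str) -> str:
    # Mimics how the engine substitutes its two known variables.
    return template.format(chat_history="(history)", question="(question)")

print(render(template_ok))      # renders fine
try:
    render(template_bad)
except KeyError as exc:
    print(exc)                  # 'custom_chat_history'
```

So the fix would be renaming {custom_chat_history} to {chat_history} inside custom_prompt; the chat_history= keyword argument to from_defaults is a separate thing and keeps its name.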
11 comments
Tay

Plain Text
import chromadb
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.vector_stores import ChromaVectorStore
from llama_index.storage.storage_context import StorageContext
from llama_index.llms import LlamaCPP

# initialize client
db = chromadb.PersistentClient(path="./chroma_db")
llm = LlamaCPP(
    # You can pass in the URL to a GGML model to download it automatically
    # optionally, you can set the path to a pre-downloaded model instead of model_url
    model_path="./models/em_german_13b_v01.Q8_0.gguf",
    temperature=0.1,
    max_new_tokens=4048,
    # requested context window; note that plain Llama 2 models only support
    # 4096 tokens, so the model must actually support a size this large
    context_window=8128,
    # kwargs to pass to __call__()
    generate_kwargs={},
    # kwargs to pass to __init__()
    # set to at least 1 to use GPU
    # model_kwargs={"n_gpu_layers": 1},
    # transform inputs into Llama2 format
    # messages_to_prompt=messages_to_prompt,
    # completion_to_prompt=completion_to_prompt,
    verbose=True,
    
)
# get collection
chroma_collection = db.get_or_create_collection("quickstart")

# assign chroma as the vector_store to the context
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(llm=llm)

# load your index from stored vectors
index = VectorStoreIndex.from_vector_store(
    vector_store, storage_context=storage_context, service_context=service_context
)

# create a query engine
query_engine = index.as_query_engine()
response = query_engine.query("Hallo, wie geht es dir?")
print(response)


Gives me a connection error. It seems to be trying to reach OpenAI. Is there a way to make it work with my model?
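The connection error is most likely the embedding step rather than the LLM: ServiceContext.from_defaults(llm=llm) still defaults to OpenAI embeddings, so every query tries to call the OpenAI API to embed the question. A sketch of one possible fix against the legacy llama_index API used above (llm is the LlamaCPP instance from the post; this line replaces the service_context line there):

```python
from llama_index import ServiceContext

# "local" switches llama_index to a local HuggingFace embedding model
# instead of the default OpenAI embedding API, so no network call is made.
service_context = ServiceContext.from_defaults(
    llm=llm,              # the LlamaCPP instance defined in the post above
    embed_model="local",
)
```

Note that the vectors already stored in Chroma must have been created with the same embedding model used at query time, otherwise retrieval will mismatch; if the collection was built with OpenAI embeddings, it needs to be re-ingested.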
54 comments
So, I tell the bot in my system message that it shouldn't output the system message, yet it still does. Any idea?
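One plausible cause, given the LlamaCPP setup earlier on this page: messages_to_prompt/completion_to_prompt were left commented out, so the model never sees the prompt in the chat format it was trained on and may echo raw system text back. A sketch, assuming the em_german prompt format (system text first, then USER:/ASSISTANT: markers); the system wording is a placeholder:

```python
def completion_to_prompt(completion: str) -> str:
    # em_german prompt format: system text first, then USER:/ASSISTANT: markers.
    # Keeping the system instruction outside the USER turn makes the model
    # less likely to echo it back as part of its answer.
    system = "Du bist ein hilfreicher Assistent."
    return f"{system} USER: {completion} ASSISTANT:"

# Pass it to LlamaCPP(completion_to_prompt=completion_to_prompt, ...) so every
# request is wrapped in the format the model expects.
print(completion_to_prompt("Was sind Pflanzenschutzmittel?"))
```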
20 comments
Hi,
So I tried everything to get ChromaDB running and saving its DB.

But it still doesn't save.

Code is in the thread:
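A minimal persistence sketch, assuming a recent chromadb release where PersistentClient writes to disk automatically on every add (older releases needed an explicit client.persist() call and a duckdb+parquet setting); the path, collection name, and dummy embedding are placeholders:

```python
import chromadb

# PersistentClient stores the collection under the given path on every write,
# so the data survives process restarts without an explicit save call.
db = chromadb.PersistentClient(path="./chroma_db")
collection = db.get_or_create_collection("quickstart")
collection.add(
    ids=["doc-1"],
    embeddings=[[0.1, 0.2, 0.3]],  # explicit vector, skips the default embedder
    documents=["Pflanzenschutzmittel werden in Deutschland streng reguliert."],
)

# Re-opening the same path should find the stored document again.
db2 = chromadb.PersistentClient(path="./chroma_db")
print(db2.get_collection("quickstart").count())
```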
91 comments