How can I print the prompt message sent to the LLM when LlamaIndex queries the index?

Please stand by @eric3232332 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
To print the prompt message sent to the LLM when LlamaIndex queries the index, you can use the TokenCountingHandler in LlamaIndex. This handler tracks each token usage event in an object called a TokenCountingEvent, which includes the prompt string sent to the LLM or Embedding model.

Here is an example of how you can print the prompt message:

Plain Text
print("prompt: ", token_counter.llm_token_counts[0].prompt[:100], "...\n")


In this example, token_counter.llm_token_counts[0].prompt is the prompt string sent to the LLM for the first recorded call. The [:100] slice prints only the first 100 characters of the prompt; adjust or remove it as needed.

Please note that the token_counter object must be set up and attached as a callback before the query runs, so that it is filled with token counting events.
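As a rough sketch (assuming the legacy llama_index package layout, a gpt-3.5-turbo tokenizer, and that your documents are already loaded into documents), the handler is registered through a CallbackManager before building and querying the index:

Plain Text
import tiktoken
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.callbacks import CallbackManager, TokenCountingHandler

# Create the handler and register it in a callback manager
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
service_context = ServiceContext.from_defaults(
    callback_manager=CallbackManager([token_counter])
)

# Build and query the index with the callback attached
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
response = index.as_query_engine().query("Your question here")

# Each LLM call is recorded as a TokenCountingEvent that carries the full prompt
print("prompt: ", token_counter.llm_token_counts[0].prompt[:100], "...\n")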

For more details, you can refer to the TokenCountingHandler notebook in the LlamaIndex GitHub repository.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai I want to build a local document Q&A app. My documents are a few hundred markdown documents. Can you give me suggestions on how to build the index?
Let me check the knowledge base for you. One moment! :books:
----------------------
Sure, you can use LlamaIndex to build an index over your markdown documents and create a local document Q&A app. Here are the steps you can follow:

  1. Load the Documents: The first step is to load your markdown documents. LlamaIndex provides a variety of data loaders that return Documents through the load_data function. You can use the SimpleDirectoryReader to load documents from a directory. Here is an example:
Plain Text
from llama_index import SimpleDirectoryReader

documents = SimpleDirectoryReader('./path_to_your_markdown_files').load_data()


  2. Build the Index: After loading the documents, you can construct the index using the VectorStoreIndex.from_documents method. This method takes in the documents and builds an index over them. Here is an example:
Plain Text
from llama_index import VectorStoreIndex

index = VectorStoreIndex.from_documents(documents)


  3. Query the Index: Once the index is built, create a query engine with index.as_query_engine() and call its query method, which takes a query string and returns the response from the index. Here is an example:
Plain Text
query_engine = index.as_query_engine()
response = query_engine.query("Your question here")
print(response)


Please note that you need to replace './path_to_your_markdown_files' with the actual path to your markdown files and "Your question here" with the actual question you want to ask.
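Putting the three steps together, here is a minimal end-to-end sketch. It assumes the legacy llama_index package layout and a hypothetical ./docs folder; the required_exts and recursive arguments of SimpleDirectoryReader restrict loading to markdown files in nested folders:

Plain Text
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load only markdown files, recursing into subdirectories
documents = SimpleDirectoryReader(
    "./docs",               # hypothetical path to your markdown files
    required_exts=[".md"],  # skip any non-markdown files
    recursive=True,
).load_data()

# Build a vector index over the documents and query it
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("Your question here"))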

For more details, you can refer to the LlamaIndex documentation and the LlamaIndex GitHub repository.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai What type of index should I use to retrieve from hundreds of small markdown documents?
Let me check the knowledge base for you. One moment! :books:
----------------------
For a few hundred small markdown documents, you can use the DocArrayInMemoryVectorStore or DocArrayHnswVectorStore provided by LlamaIndex.

The DocArrayInMemoryVectorStore stores documents in memory and is a great starting point for small datasets, where you may not want to launch a database server. Here is an example of how to use it:

Plain Text
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores import DocArrayInMemoryVectorStore
from llama_index.storage.storage_context import StorageContext

# load documents
documents = SimpleDirectoryReader("../data/paul_graham").load_data()

# Initialization and indexing
vector_store = DocArrayInMemoryVectorStore()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = GPTVectorStoreIndex.from_documents(documents, storage_context=storage_context)


The DocArrayHnswVectorStore is a lightweight Document Index implementation that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite. Here is an example of how to use it:

Plain Text
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores import DocArrayHnswVectorStore
from llama_index.storage.storage_context import StorageContext

# load documents
documents = SimpleDirectoryReader("../data/paul_graham").load_data()

# Initialization and indexing
vector_store = DocArrayHnswVectorStore(work_dir="hnsw_index")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = GPTVectorStoreIndex.from_documents(documents, storage_context=storage_context)


Please replace "../data/paul_graham" with the path to your markdown files.
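Whichever store you pick, querying then works the same way as before; a short usage sketch, assuming the index built above:

Plain Text
# Query the index exactly as with the default vector store
query_engine = index.as_query_engine()
response = query_engine.query("Your question here")
print(response)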
For more details, you can refer to the DocArrayInMemoryVectorStore and DocArrayHnswVectorStore examples in the LlamaIndex GitHub repository.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------