
While running examples from https://gpt-index.readthedocs.io/en/stable/examples/evaluation/TestNYC-Evaluation.html, I hit an error in evaluate_query_engine(query_engine, questions) on this call:

correct, total = evaluate_query_engine(vector_query_engine, eval_questions[:5])

Got error:

Plain Text
---------------
RuntimeError                              Traceback (most recent call last)
Cell In[71], line 2
      1 vector_query_engine = vector_index.as_query_engine()
----> 2 correct, total = evaluate_query_engine(vector_query_engine, questions)
      4 print(f"score: {correct}/{total}")

Cell In[69], line 6, in evaluate_query_engine(query_engine, questions)
      4 def evaluate_query_engine(query_engine, questions):
      5     c = [query_engine.aquery(q) for q in questions]
----> 6     results = asyncio.run(asyncio.gather(*c))
      7     print("finished query")
      9     total_correct = 0

File /usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py:186, in run(main, debug)
    161 """Execute the coroutine and return the result.
    162 
    163 This function runs the passed coroutine, taking care of
   (...)
    182     asyncio.run(main())
    183 """
    184 if events._get_running_loop() is not None:
    185     # fail fast with short traceback
--> 186     raise RuntimeError(
    187         "asyncio.run() cannot be called from a running event loop")
    189 with Runner(debug=debug) as runner:
    190     return runner.run(main)

RuntimeError: asyncio.run() cannot be called from a running event loop
Put this at the top of your code:

Plain Text
import nest_asyncio
nest_asyncio.apply()
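With that applied, the failing cell runs as-is: Jupyter already runs its own event loop, which is why asyncio.run() raised. For context, here's a sketch of the full pattern; the scoring logic after the gather is cut off in the traceback, so that part is an assumption:

Plain Text
import asyncio
import nest_asyncio

nest_asyncio.apply()  # patches Jupyter's already-running event loop

async def run_queries(query_engine, questions):
    # fire all queries concurrently; aquery() returns a coroutine
    return await asyncio.gather(*(query_engine.aquery(q) for q in questions))

def evaluate_query_engine(query_engine, questions):
    # with nest_asyncio applied, asyncio.run() no longer raises inside the notebook
    results = asyncio.run(run_queries(query_engine, questions))
    print("finished query")
    # ...the original correct/total scoring loop is elided in the traceback...
    return results, len(questions)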
I can see a similar issue with a project I'm working on. Does this solution mirror what happens with FastAPI's `run_in_threadpool`?
I'm not familiar with that, but the above fix will definitely help (we used it in our own FastAPI server).
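For reference, run_in_threadpool solves the inverse problem: it offloads blocking sync code from an async context rather than nesting event loops. A minimal sketch, assuming FastAPI's re-export of the Starlette helper:

Plain Text
from fastapi import FastAPI
from fastapi.concurrency import run_in_threadpool

app = FastAPI()

def blocking_work(x: int) -> int:
    # stand-in for a sync or CPU-bound call you don't want blocking the loop
    return x * 2

@app.get("/compute")
async def compute():
    # run the sync function in a worker thread and await the result
    result = await run_in_threadpool(blocking_work, 21)
    return {"result": result}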
@kapa.ai how to visualize nodes?
Let me check the knowledge base for you. One moment! :books:
----------------------
To visualize nodes in LlamaIndex, you can use the pyvis.network module to create a graph and then use the show method to display it. Here is an example code snippet:

Plain Text
from pyvis.network import Network

g = new_index.get_networkx_graph()
net = Network(notebook=True, cdn_resources="in_line", directed=True)
net.from_nx(g)
net.show("example.html")


This code creates a graph from the index, and then visualizes it in an HTML file named "example.html".

You can find more details in the source ().

Additionally, if you want to visualize retrieved nodes in a tabular format, you can use the pandas library to create a DataFrame and then display it. Here is an example:

Plain Text
import pandas as pd
from IPython.display import display, HTML

def pretty_print(df):
    return display(HTML(df.to_html().replace("\\n", "<br>")))

def visualize_retrieved_nodes(nodes) -> None:
    result_dicts = []
    for node in nodes:
        result_dict = {"Score": node.score, "Text": node.node.get_text()}
        result_dicts.append(result_dict)

    pretty_print(pd.DataFrame(result_dicts))


This code creates a DataFrame from the retrieved nodes and then displays it in a pretty format.
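For example, you could feed it the output of a retriever (assuming the `vector_index` from the original question; the query string is just an illustration):

Plain Text
# retrieve a few scored nodes and render them as a table
retriever = vector_index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("What did the author do growing up?")
visualize_retrieved_nodes(nodes)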

You can find more details in the source ().
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai how to visualize links between nodes?
Let me check the knowledge base for you. One moment! :books:
----------------------
To visualize links between nodes, you can use the get_networkx_graph method provided by LlamaIndex. This method returns a NetworkX graph object that represents the nodes and their relationships. You can then use a library like pyvis to visualize this graph. Here is an example:

Plain Text
from pyvis.network import Network

# Assuming `new_index` is your LlamaIndex object
g = new_index.get_networkx_graph()
net = Network(notebook=True, cdn_resources="in_line", directed=True)
net.from_nx(g)
net.show("example.html")


This code creates a graph from the index, and then visualizes it in an HTML file named "example.html".

You can find more details in the source ().

Please note that this code snippet assumes that you have already created an index and added nodes to it. If you haven't done so, you'll need to do that first before you can visualize the nodes and their relationships.
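For instance, get_networkx_graph is available on knowledge graph indices. A minimal sketch of building one first (legacy llama_index API; the data path is a placeholder):

Plain Text
from llama_index import KnowledgeGraphIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./data").load_data()
# extract (subject, predicate, object) triplets and build the graph index
new_index = KnowledgeGraphIndex.from_documents(documents, max_triplets_per_chunk=2)
g = new_index.get_networkx_graph()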
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai chromadb example
Let me check the knowledge base for you. One moment! :books:
----------------------
Sure, here are some examples of how to use ChromaDB with LlamaIndex:

Example 1: Basic Setup
Plain Text
# import
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.vector_stores import ChromaVectorStore
from llama_index.storage.storage_context import StorageContext
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index.embeddings import LangchainEmbedding
from IPython.display import Markdown, display
import chromadb

# create client and a new collection
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")

# define embedding function
embed_model = LangchainEmbedding(
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
)

# load documents
documents = SimpleDirectoryReader(
    "../../../examples/paul_graham_essay/data"
).load_data()

# set up the vector store and build the index (same pattern as Example 2 below)
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(embed_model=embed_model)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, service_context=service_context
)

Source ()

Example 2: Running the Chroma Server in Docker
Plain Text
# clone and start the Chroma server
git clone git@github.com:chroma-core/chroma.git
docker-compose up -d --build

Plain Text
# create the chroma client and add our data
import chromadb

remote_db = chromadb.HttpClient()
chroma_collection = remote_db.get_or_create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(embed_model=embed_model)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, service_context=service_context
)

# query data from the Chroma Docker index
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
display(Markdown(f"<b>{response}</b>"))

Source (https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/ChromaIndexDemo.html)

Example 3: Using ChromaReader to Load Data
Plain Text
from llama_index.readers.chroma import ChromaReader
from llama_index.indices import ListIndex

# The Chroma reader loads data from a persisted Chroma collection;
# it requires a collection name and a persist directory.
reader = ChromaReader(
    collection_name="chroma_collection",
    persist_directory="examples/data_connectors/chroma_collection",
)

query_vector = [n1, n2, n3, ...]

documents = reader.load_data(
    collection_name="demo", query_vector=query_vector, limit=5
)
index = ListIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("<query_text>")
display(Markdown(f"<b>{response}</b>"))

Source (https://gpt-index.readthedocs.io/en/latest/community/integrations/vector_stores.html)

Please replace the placeholders (like n1, n2, n3, ... and <query_text>) with actual values as per your use case.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai azure openai example
Let me check the knowledge base for you. One moment! :books:
----------------------
Sure, here are some examples of how to use Azure OpenAI with LlamaIndex:

Example 1: Basic Setup
Plain Text
from llama_index.llms import AzureOpenAI

# Unlike normal OpenAI, you need to pass an `engine` argument in addition to `model`.
# The `engine` is the name of the model deployment you selected in Azure OpenAI Studio.

llm = AzureOpenAI(engine="simon-llm", model="gpt-35-turbo-16k", temperature=0.0)

Source ()

Example 2: Text Completion
Plain Text
# Use the `complete` endpoint for text completion
response = llm.complete("The sky is a beautiful blue and")
print(response)

# Or use the `stream_complete` endpoint for streaming text completion
response = llm.stream_complete("The sky is a beautiful blue and")
for r in response:
    print(r.delta, end="")

Source ()

Example 3: Conversation
Plain Text
from llama_index.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality."),
    ChatMessage(role="user", content="Hello"),
]

# Use the `chat` endpoint for conversation
response = llm.chat(messages)
print(response)

# Or use the `stream_chat` endpoint for streaming conversation
response = llm.stream_chat(messages)
for r in response:
    print(r.delta, end="")

Source ()

Please replace the placeholders (like "simon-llm" and "gpt-35-turbo-16k") with actual values as per your use case.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai what is the name of engine for AzureOpenAI for gpt-4
Let me check the knowledge base for you. One moment! :books:
----------------------
The engine name for AzureOpenAI is a custom name that you choose when you deploy a model on Azure OpenAI. It's not specifically tied to the model (like "gpt-4") you're using.

Here's an example of how you might set it:

Plain Text
from llama_index.llms import AzureOpenAI

llm = AzureOpenAI(engine="your-custom-engine-name", model="gpt-4", temperature=0.0)


In this example, "your-custom-engine-name" should be replaced with the name of the model deployment you selected in Azure OpenAI Studio.
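Note that the client also needs your Azure endpoint, key, and API version before queries will succeed. A sketch with placeholder values; the exact keyword names vary across llama_index versions, so treat them as assumptions:

Plain Text
from llama_index.llms import AzureOpenAI

llm = AzureOpenAI(
    engine="my-gpt4-deployment",  # hypothetical deployment name from Azure OpenAI Studio
    model="gpt-4",
    temperature=0.0,
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-api-key>",  # placeholder
    api_version="2023-07-01-preview",
)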

You can find more details in the source ().
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai What is the best way to extract connections from a table in a document?
Let me check the knowledge base for you. One moment! :books:
@Vimwork1 That's a great question, but unfortunately I am only capable of responding to a maximum of three questions per thread. Don't hesitate to create a new thread for any further questions you have though. I'm here to help!
What is a good document processing module?
@kapa.ai What is the best way to extract info from a simple text document with links and tables?
Let me check the knowledge base for you. One moment! :books:
@Vimwork1 That's a great question, but unfortunately I am only capable of responding to a maximum of three questions per thread. Don't hesitate to create a new thread for any further questions you have though. I'm here to help!
What is the best way to extract info from a simple text document?
@kapa.ai What is the best way to extract info from a simple text document?
Let me check the knowledge base for you. One moment! :books:
@Vimwork1 That's a great question, but unfortunately I am only capable of responding to a maximum of three questions per thread. Don't hesitate to create a new thread for any further questions you have though. I'm here to help!
@kapa.ai document processing post processing
Let me check the knowledge base for you. One moment! :books:
@Vimwork1 That's a great question, but unfortunately I am only capable of responding to a maximum of three questions per thread. Don't hesitate to create a new thread for any further questions you have though. I'm here to help!