Got error:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[71], line 2
      1 vector_query_engine = vector_index.as_query_engine()
----> 2 correct, total = evaluate_query_engine(vector_query_engine, questions)
      4 print(f"score: {correct}/{total}")

Cell In[69], line 6, in evaluate_query_engine(query_engine, questions)
      4 def evaluate_query_engine(query_engine, questions):
      5     c = [query_engine.aquery(q) for q in questions]
----> 6     results = asyncio.run(asyncio.gather(*c))
      7     print("finished query")
      9     total_correct = 0

File /usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py:186, in run(main, debug)
    161 """Execute the coroutine and return the result.
    162
    163 This function runs the passed coroutine, taking care of
    (...)
    182     asyncio.run(main())
    183 """
    184 if events._get_running_loop() is not None:
    185     # fail fast with short traceback
--> 186     raise RuntimeError(
    187         "asyncio.run() cannot be called from a running event loop")
    189 with Runner(debug=debug) as runner:
    190     return runner.run(main)

RuntimeError: asyncio.run() cannot be called from a running event loop
```
One workaround is to run the blocking call in a separate thread (e.g. with `run_in_threadpool`).
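In a notebook, though, the usual fix is to allow nested event loops, since Jupyter already runs one (which is exactly what triggers this error). A minimal sketch, assuming the third-party `nest_asyncio` package is installed (`pip install nest-asyncio`):

```python
import asyncio

import nest_asyncio

# Jupyter's kernel already runs an event loop, so plain asyncio.run()
# raises RuntimeError. nest_asyncio patches the loop to allow re-entry.
nest_asyncio.apply()

def evaluate_query_engine(query_engine, questions):
    coros = [query_engine.aquery(q) for q in questions]
    # Works inside the notebook once nest_asyncio.apply() has been called.
    results = asyncio.run(asyncio.gather(*coros))
    print("finished query")
    return results
```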
You can use the `pyvis.network` module to create a graph and then use the `show` method to display it. Here is an example code snippet:

```python
from pyvis.network import Network

g = new_index.get_networkx_graph()
net = Network(notebook=True, cdn_resources="in_line", directed=True)
net.from_nx(g)
net.show("example.html")
```
You can use the `pandas` library to create a DataFrame and then display it. Here is an example:

```python
import pandas as pd
from IPython.display import display, HTML

def pretty_print(df):
    return display(HTML(df.to_html().replace("\\n", "<br>")))

def visualize_retrieved_nodes(nodes) -> None:
    result_dicts = []
    for node in nodes:
        result_dict = {"Score": node.score, "Text": node.node.get_text()}
        result_dicts.append(result_dict)
    pretty_print(pd.DataFrame(result_dicts))
```
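To see it in action, feed it the nodes returned by a retriever. A hypothetical usage sketch, assuming the `vector_index` from the earlier snippet and an example query string:

```python
# Retrieve the top-3 most similar nodes for a query and display them.
retriever = vector_index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("What did the author do growing up?")
visualize_retrieved_nodes(nodes)
```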
You can use the `get_networkx_graph` method provided by LlamaIndex. This method returns a NetworkX graph object that represents the nodes and their relationships. You can then use a library like `pyvis` to visualize this graph. Here is an example:

```python
from pyvis.network import Network

# Assuming `new_index` is your LlamaIndex object
g = new_index.get_networkx_graph()
net = Network(notebook=True, cdn_resources="in_line", directed=True)
net.from_nx(g)
net.show("example.html")
```
```python
# import
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.vector_stores import ChromaVectorStore
from llama_index.storage.storage_context import StorageContext
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index.embeddings import LangchainEmbedding
from IPython.display import Markdown, display
import chromadb

# create client and a new collection
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")

# define embedding function
embed_model = LangchainEmbedding(
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
)

# load documents
documents = SimpleDirectoryReader(
    "../../../examples/paul_graham_essay/data"
).load_data()
```
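The snippet stops after loading documents; following the ChromaIndexDemo notebook linked below, the remaining steps wrap the collection in a `ChromaVectorStore` and build the index over it. A sketch, assuming the same legacy llama_index API as above:

```python
# set up ChromaVectorStore and build the index over the documents
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(embed_model=embed_model)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, service_context=service_context
)

# query the data
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
display(Markdown(f"<b>{response}</b>"))
```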
**Source** (<https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/ChromaIndexDemo.html>)

**Example 3: Using ChromaReader to Load Data**
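The code for this example was lost in the transcript. Judging from the placeholders referenced just below (`n1, n2, n3, ...` and `<query_text>`) and the integrations page linked after it, it looked roughly like this (a reconstruction, not the verbatim original):

```python
from llama_index import SummaryIndex
from llama_index.readers.chroma import ChromaReader

# The Chroma reader loads data from a persisted Chroma collection;
# it needs a collection name and a persist directory.
reader = ChromaReader(
    collection_name="chroma_collection",
    persist_directory="examples/data_connectors/chroma_collection",
)

# The query vector is an embedding representation of your query.
query_vector = [n1, n2, n3, ...]

documents = reader.load_data(collection_name="demo", query_vector=query_vector, limit=5)
index = SummaryIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("<query_text>")
```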
**Source** (<https://gpt-index.readthedocs.io/en/latest/community/integrations/vector_stores.html>)
Please replace the placeholders (like `n1, n2, n3, ...` and `<query_text>`) with actual values as per your use case.
----------------------
```python
from llama_index.llms import AzureOpenAI

# Unlike normal OpenAI, you need to pass an `engine` argument in addition
# to `model`. The `engine` is the name of the model deployment you
# selected in Azure OpenAI Studio.
llm = AzureOpenAI(engine="simon-llm", model="gpt-35-turbo-16k", temperature=0.0)
```
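This assumes your Azure credentials are already configured. A minimal sketch of passing them explicitly, assuming a legacy llama_index build on openai v1 where `AzureOpenAI` accepts these keyword arguments (older versions read `OPENAI_API_KEY`/`OPENAI_API_BASE`-style environment variables instead):

```python
# Hypothetical values; replace with your own Azure resource details.
llm = AzureOpenAI(
    engine="simon-llm",
    model="gpt-35-turbo-16k",
    temperature=0.0,
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-azure-openai-key>",
    api_version="2023-07-01-preview",
)
```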
```python
# Use the `complete` endpoint for text completion
response = llm.complete("The sky is a beautiful blue and")
print(response)

# Or use the `stream_complete` endpoint for streaming text completion
response = llm.stream_complete("The sky is a beautiful blue and")
for r in response:
    print(r.delta, end="")
```
```python
from llama_index.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality."),
    ChatMessage(role="user", content="Hello"),
]

# Use the `chat` endpoint for conversation
response = llm.chat(messages)
print(response)

# Or use the `stream_chat` endpoint for streaming conversation
response = llm.stream_chat(messages)
for r in response:
    print(r.delta, end="")
```
"simon-llm"
and "gpt-35-turbo-16k"
) with actual values as per your use case.Note: you can ask me a follow up question by @ mentioning me again
```python
from llama_index.llms import AzureOpenAI

llm = AzureOpenAI(engine="your-custom-engine-name", model="gpt-4", temperature=0.0)
```
"your-custom-engine-name"
should be replaced with the name of your model deployment you selected in Azure OpenAI Studio.Note: you can ask me a follow up question by @ mentioning me again