----------------------
Sure! Here's an example of using LlamaIndex with GPTVectorStoreIndex and gpt-3.5-turbo. This example uses an in-memory vector store, which is the default option for GPTVectorStoreIndex.
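Before running the snippets (which follow the 0.6-era llama_index API), make sure your OpenAI API key is available. Both llama_index and langchain read it from the OPENAI_API_KEY environment variable; setting it inline like this is just a convenience for quick tests:
import os

# Both libraries read the key from this environment variable.
# Replace the placeholder, or export OPENAI_API_KEY in your shell instead.
os.environ["OPENAI_API_KEY"] = "sk-..."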
First, you need to import the necessary libraries and set up logging:
import logging
import sys

# Send llama_index's log output (LLM calls, token counts) to stdout.
# basicConfig already attaches a stdout handler, so an extra
# addHandler call would print every message twice.
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, LLMPredictor, ServiceContext
from langchain.chat_models import ChatOpenAI
Next, load the documents using SimpleDirectoryReader:
# Read every file in the ./data folder into Document objects.
documents = SimpleDirectoryReader('data').load_data()
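If you want to sanity-check the load (assuming your files sit in a data folder next to the script, and the 0.6-era Document objects, which expose a text attribute), you can inspect what came back:
# Each file becomes one or more Document objects with the raw text attached.
print(f"Loaded {len(documents)} document(s)")
print(documents[0].text[:200])  # peek at the first 200 characters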
Create an LLM Predictor with gpt-3.5-turbo and a ServiceContext:
# gpt-3.5-turbo is a chat model, so it goes through langchain's ChatOpenAI wrapper.
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", streaming=True))
# chunk_size_limit=512 caps each text chunk at 512 tokens before it is embedded.
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, chunk_size_limit=512)
Build the GPTVectorStoreIndex using the documents and the service_context:
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
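Since the index is held in memory, it is lost when the process exits. If your llama_index version includes the StorageContext API (added around 0.6), you can persist the index to disk and reload it later instead of re-embedding everything; a sketch, assuming that API:
# Write the index (vectors, docstore, index metadata) to ./storage.
index.storage_context.persist(persist_dir="./storage")

# Later: rebuild the index from disk instead of calling from_documents again.
from llama_index import StorageContext, load_index_from_storage
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)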
Finally, query the GPTVectorStoreIndex:
# as_query_engine() wires up a default retriever and response synthesizer over the index.
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
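Because the ChatOpenAI above was constructed with streaming=True, you can also stream the answer token by token. In the 0.6-era API that is done by requesting a streaming query engine; treat this as a sketch for that version:
# Request a streaming response; tokens print as the model generates them.
streaming_engine = index.as_query_engine(streaming=True)
streaming_response = streaming_engine.query("What did the author do growing up?")
streaming_response.print_response_stream()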
This example is based on the SimpleIndexDemo-ChatGPT notebook (https://gpt-index.readthedocs.io/en/latest/examples/customization/llms/SimpleIndexDemo-ChatGPT.html).
----------------------