You've been so helpful, thanks. Just wanted to share how I ended up solving this. It took lots of trial and error and piecing together ideas from fragmented docs, but this actually works...
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

api_key = "[REDACTED]"
base_url = "[REDACTED]"

# Point both the embedding model and the LLM at the custom endpoint,
# otherwise LlamaIndex falls back to the default OpenAI base URL.
Settings.embed_model = OpenAIEmbedding(
    api_base=base_url,
    api_key=api_key,
)
Settings.llm = OpenAI(
    api_base=base_url,
    api_key=api_key,
)

# Load every file under ./data, embed it, and build an in-memory vector index.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Ask a question against the indexed documents.
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
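One follow-up in case it helps anyone hitting this later: the snippet above re-reads and re-embeds everything under data on every run. If that gets slow, the index can be persisted to disk once and reloaded afterwards using LlamaIndex's storage API. A minimal sketch — the "./storage" directory name is just an example, and this assumes the index was built as above:

```python
from llama_index.core import StorageContext, load_index_from_storage

# After building the index once, save its docstore and embeddings to disk.
index.storage_context.persist(persist_dir="./storage")

# On later runs, reload from disk instead of re-embedding ./data.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
query_engine = index.as_query_engine()
```

Note that reloading still needs Settings.embed_model and Settings.llm configured first, since queries embed the question and call the LLM at query time.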