Find answers from the community

giohax
Is there a way to speed up the process or otherwise optimize it? I ran a ListIndex query to get a summarization, but it took about 60 seconds even with gpt-3.5-turbo.
6 comments
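A minimal sketch of one common way to speed up the slow list-index summarization described above, assuming the legacy llama_index API used elsewhere in these threads: switch the query engine to tree_summarize with async LLM calls, so the intermediate summarization requests run concurrently instead of one after another. The "data" directory and the query text are placeholders.

import openai
from llama_index import ListIndex, SimpleDirectoryReader

openai.api_key = "sk-..."  # placeholder; normally read from the environment

# "data" is a placeholder folder; swap in the real document directory
documents = SimpleDirectoryReader("data").load_data()
index = ListIndex.from_documents(documents)

# tree_summarize builds the summary hierarchically, and use_async lets the
# per-chunk LLM calls run concurrently, which usually cuts wall-clock time
query_engine = index.as_query_engine(
    response_mode="tree_summarize",
    use_async=True,
)
response = query_engine.query("Summarize these documents.")
print(response)

Whether this helps depends on how many chunks the index holds; with a single small document the bottleneck is the model call itself rather than the synthesis strategy.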
The documentation is hard to read
3 comments
Hi, I'm having issues deploying this FastAPI + LlamaIndex app to Heroku.
I have my code set up like this:

import os

import openai
from fastapi import FastAPI
from pydantic import BaseModel
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import OpenAI

os.environ['OPENAI_API_KEY'] = "sk-APIKEY"
openai.api_key = os.getenv('OPENAI_API_KEY')

app = FastAPI()

# request body with the user's search query (model assumed; not shown in the original post)
class Query(BaseModel):
    content: str

@app.post("/stream")
async def stream(query: Query):
    # define LLM and service context
    llm = OpenAI(temperature=0, model="gpt-3.5-turbo", max_tokens=1000)
    service_context = ServiceContext.from_defaults(llm=llm, chunk_size=51200)

    # load the product documents and build the index (this happens on every request)
    documents = SimpleDirectoryReader('data2').load_data()
    index = VectorStoreIndex.from_documents(documents, service_context=service_context)
    query_engine = index.as_query_engine(similarity_top_k=10)

    query_preamble = "Given the provided product details, recommend several products from the provided data, and describe them, that satisfies the following search query or question: "
    prompt = query_preamble + query.content
    response = query_engine.query(prompt + ". Now output this data in a numbered list without including 'Product Name' and 'Description' keywords. Then summarize everything at the end.")

    return {"response": response}
19 comments
Also, does the name of the file matter?
34 comments