Updated 2 months ago

File

Can anyone help? I have a simple text file (~5,500 tokens) and want to use it to answer my queries. I hit this often, around 10K requests per day, and each request consumes about 5,000 tokens. How can I reduce the cost?
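For scale, the numbers in the question work out to a large daily token volume. A back-of-envelope sketch (the per-token price below is a placeholder assumption, not a quoted rate; check your model's actual pricing):

```python
# Back-of-envelope token volume and cost for the stated workload.
REQUESTS_PER_DAY = 10_000
TOKENS_PER_REQUEST = 5_000  # roughly the ~5,500-token file sent as context each time

# Hypothetical input price in USD per 1K tokens -- an assumption for illustration only.
PRICE_PER_1K_TOKENS = 0.0015

tokens_per_day = REQUESTS_PER_DAY * TOKENS_PER_REQUEST
daily_cost = tokens_per_day / 1_000 * PRICE_PER_1K_TOKENS

print(tokens_per_day)        # 50000000 tokens per day
print(round(daily_cost, 2))  # 75.0 (at the assumed rate)
```

The point of the arithmetic: because the whole file is resent on every request, cost scales with requests × context size, so shrinking either factor (smaller retrieved context, fewer LLM calls) cuts the bill proportionally.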
What kind of index are you using?
@Logan M

This is my code:
import os

from dotenv import load_dotenv
from llama_index import StorageContext, load_index_from_storage

load_dotenv()  # loads OPENAI_API_KEY from .env

# Load the previously persisted index from disk
storage_context = StorageContext.from_defaults(persist_dir="apg_index")
new_index = load_index_from_storage(storage_context)

new_query_engine = new_index.as_query_engine()
userQuery = input("Query: ")
response = new_query_engine.query(userQuery)
print(response)
print("\n")
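Since the same document backs every request, repeated or near-identical questions can be answered from a cache instead of re-hitting the API. A minimal sketch of that idea, using a stand-in function in place of `new_query_engine.query` (the `run_query` stub and its call counter are illustrative, not part of llama_index):

```python
from functools import lru_cache

# Stand-in for new_query_engine.query(); swap in the real call in your script.
def run_query(question: str) -> str:
    run_query.calls += 1  # count how many times the (expensive) backend is hit
    return f"answer to: {question}"

run_query.calls = 0

@lru_cache(maxsize=1024)
def cached_query(question: str) -> str:
    # Normalize so trivially different phrasings share a cache entry,
    # then fall through to the real query only on a cache miss.
    return run_query(question.strip().lower())

cached_query("What is APG?")
cached_query("What is APG?")  # served from cache, no second backend call
print(run_query.calls)        # 1
```

At 10K requests/day, even a modest cache-hit rate on repeated questions translates directly into fewer paid calls; the other common lever is passing a smaller retrieved context (e.g. a lower `similarity_top_k` on the query engine) so each remaining call sends fewer tokens.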