I am already using use_async=True, but here is the output... If I were to use embeddings, would things be faster? (A rough sketch of my setup follows the log.)
langchainapi-langchain-1 | 17-Jun-23 19:28:00 - > Building index from nodes: 1 chunks
langchainapi-langchain-1 | 17-Jun-23 19:28:05 - message='OpenAI API response' path=https://api.openai.com/v1/completions processing_ms=4881 request_id=64146b50553b7ef6b43bc8e7e21a30ac response_code=200
langchainapi-langchain-1 | 17-Jun-23 19:28:06 - message='OpenAI API response' path=https://api.openai.com/v1/completions processing_ms=5480 request_id=935cbe4f2158adcb864e902a03a424d1 response_code=200
langchainapi-langchain-1 | 17-Jun-23 19:28:11 - > [get_response] Total LLM token usage: 508 tokens
langchainapi-langchain-1 | 17-Jun-23 19:28:11 - > [get_response] Total embedding token usage: 0 tokens
langchainapi-langchain-1 | 17-Jun-23 19:28:11 - > [get_response] Total LLM token usage: 6311 tokens
langchainapi-langchain-1 | 17-Jun-23 19:28:11 - > [get_response] Total embedding token usage: 0 tokens
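For context, here is roughly how things are set up. This is a minimal sketch against the llama_index 0.6-era API; the index types, the "data" directory, and the query strings are placeholders rather than my exact code. The second half is the embedding-based variant I am asking about:

    from llama_index import SimpleDirectoryReader, GPTListIndex, GPTVectorStoreIndex

    documents = SimpleDirectoryReader("data").load_data()  # placeholder path

    # Current setup (as I understand it): a list-style index, so answering a
    # query pushes the chunks through the LLM itself. That matches the log
    # above: nonzero LLM token usage, "Total embedding token usage: 0 tokens".
    index = GPTListIndex.from_documents(documents)
    query_engine = index.as_query_engine(
        response_mode="tree_summarize",
        use_async=True,  # already enabled, as noted above
    )
    response = query_engine.query("placeholder question")

    # The embedding variant I am asking about: a vector index embeds chunks
    # once at build time, then retrieves only the top-k most similar chunks
    # at query time, so each query sends far fewer tokens through the LLM.
    vector_index = GPTVectorStoreIndex.from_documents(documents)
    vector_engine = vector_index.as_query_engine(similarity_top_k=2)
    response = vector_engine.query("placeholder question")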