
Noble
I am currently working on an integration in llama_index and am trying to fix the linting errors in it. I know that running make format; make lint formats the code and shows me the errors that need to be fixed.
Can this be configured in VS Code, through an extension of some form, so that I don't have to run these commands every time I fix something and can instead resolve the errors in the underlined areas alone?
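Not from the thread, but one possible setup, assuming the repo's make format / make lint targets run black and ruff: install the Black Formatter and Ruff VS Code extensions, then add a workspace .vscode/settings.json roughly along these lines so that saving a file formats it and the remaining lint errors show up as underlines:

JSON
{
  "[python]": {
    "editor.defaultFormatter": "ms-python.black-formatter",
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
      "source.fixAll.ruff": true
    }
  }
}

If make lint also runs mypy, the Mypy Type Checker extension can surface those errors inline as well; the make targets remain the source of truth for what CI checks.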
8 comments
Noble · Pip

I see, I checked the change-logs now and realised it's part of the unreleased changes 😅
9 comments
Noble · LLM Calls

Hey Llama_index Team,

I wanted to bring up a concern I've noticed while working with the OpenAI integration in Llama_index.

It seems that when I use the index from my local environment and perform queries, the system makes multiple calls to the OpenAI chat method. As a result, I'm seeing more OpenAI calls being made than intended (attaching the code snippet used below). This raises concerns about unnecessary costs incurred by these extra calls, which I believe are unintentional.

Plain Text
from llama_index import ServiceContext, StorageContext, load_index_from_storage
from llama_index.llms import OpenAI

# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir="./storage")

# load index
index = load_index_from_storage(storage_context)

llm = OpenAI(model="gpt-3.5-turbo", temperature=0, max_tokens=256)
service_context = ServiceContext.from_defaults(llm=llm)

print("querying the index.")

query_engine = index.as_query_engine(service_context=service_context)

res = query_engine.query("What did the author do after his time at Y Combinator?")

print(res)


I wanted to bring this to your attention because it seems to be affecting the cost-efficiency of using Llama_index in my workflow. I would greatly appreciate it if you could look into this matter and potentially optimize the way the OpenAI integration is being used to avoid these extra calls.

I'm happy to offer my assistance in contributing to the resolution of this issue. If you could provide me with the necessary context, I'd be more than willing to help identify the root cause and work on implementing a solution.
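
Not part of the original post, but one way to confirm how many LLM calls a single query actually makes is to attach a token-counting callback. The sketch below targets the same legacy ServiceContext API used above and assumes that version of llama_index ships TokenCountingHandler:

Python
import tiktoken
from llama_index import ServiceContext, StorageContext, load_index_from_storage
from llama_index.callbacks import CallbackManager, TokenCountingHandler
from llama_index.llms import OpenAI

# count every LLM call and its token usage via a callback
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
callback_manager = CallbackManager([token_counter])

llm = OpenAI(model="gpt-3.5-turbo", temperature=0, max_tokens=256)
service_context = ServiceContext.from_defaults(llm=llm, callback_manager=callback_manager)

storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

query_engine = index.as_query_engine(service_context=service_context)
res = query_engine.query("What did the author do after his time at Y Combinator?")

# each entry in llm_token_counts corresponds to one chat/completion call
print(len(token_counter.llm_token_counts), "LLM calls")
print(token_counter.total_llm_token_count, "LLM tokens")

If the extra calls turn out to be the refine step of response synthesis (the answer being built up over several retrieved chunks), they are expected behaviour rather than a bug; passing a smaller similarity_top_k or response_mode="compact" to as_query_engine usually brings the count down.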
6 comments
The errors that I encountered were:
Plain Text
FAILED tests/indices/test_utils.py::test_expand_tokens_with_subtokens - LookupError:
FAILED tests/indices/keyword_table/test_utils.py::test_expand_tokens_with_subtokens - LookupError:
FAILED tests/indices/query/test_compose.py::test_recursive_query_table_list - LookupError:
FAILED tests/indices/query/test_compose.py::test_recursive_query_list_table - LookupError:
FAILED tests/indices/query/test_compose_vector.py::test_recursive_query_vector_table - LookupError:
FAILED tests/indices/query/test_compose_vector.py::test_recursive_query_vector_table_query_configs - LookupError:
FAILED tests/indices/query/test_compose_vector.py::test_recursive_query_vector_table_async - LookupError:
ERROR tests/llm_predictor/vellum/test_predictor.py::test_predict__basic - ModuleNotFoundError: No module named 'vellum'
ERROR tests/llm_predictor/vellum/test_predictor.py::test_predict__callback_manager - ModuleNotFoundError: No module named 'vellum'
ERROR tests/llm_predictor/vellum/test_predictor.py::test_stream__basic - ModuleNotFoundError: No module named 'vellum'
ERROR tests/llm_predictor/vellum/test_predictor.py::test_stream__callback_manager - ModuleNotFoundError: No module named 'vellum'
ERROR tests/llm_predictor/vellum/test_prompt_registry.py::test_from_prompt__new - ModuleNotFoundError: No module named 'vellum'
ERROR tests/llm_predictor/vellum/test_prompt_registry.py::test_from_prompt__existing - ModuleNotFoundError: No module named 'vellum'
ERROR tests/llm_predictor/vellum/test_prompt_registry.py::test_get_compiled_prompt__basic - ModuleNotFoundError: No module named 'vellum'
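
Not from the thread, but for context: the LookupError failures are typically caused by missing NLTK data (the keyword-table utilities look up the stopwords corpus at runtime), and the vellum errors simply mean the optional vellum client package isn't installed in the test environment. A likely fix for the former, assuming the standard corpora are what's missing:

Python
import nltk

# fetch the corpora the tokenizer/stopword helpers look up at runtime
nltk.download("stopwords")
nltk.download("punkt")

The vellum test errors should disappear after installing the optional vellum dependency, or those tests can simply be deselected when running the suite locally.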
4 comments