From the Medium doc, from llama_index.service_context import ServiceContext fails to import. Is the .service_context module path a bug?
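A minimal sketch of the import that works for me, assuming the Medium article is simply out of date and ServiceContext is re-exported at the package root (as the official docs show):
Plain Text
# documented import in recent llama_index versions; the
# llama_index.service_context path in the Medium post may be stale
from llama_index import ServiceContext

service_context = ServiceContext.from_defaults()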
4 comments
I just updated llama_index and ResponseSynthesizer is no longer in the package. Has it been removed? I ended up rolling back to a pre-0.7 version, and that worked. Just trying to understand the changes and the reasoning as these wonderful APIs hurtle forward. Thank you.
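If it helps, here is a sketch of what I believe replaced it in 0.7: my reading of the changelog is that the ResponseSynthesizer class was folded into a get_response_synthesizer() factory, so the module and parameter names below are assumptions to verify against your version:
Plain Text
from llama_index.response_synthesizers import get_response_synthesizer
from llama_index.query_engine import RetrieverQueryEngine

# build the synthesizer explicitly, then wire it into a query engine
response_synthesizer = get_response_synthesizer(response_mode="compact")
query_engine = RetrieverQueryEngine(
    retriever=index.as_retriever(),
    response_synthesizer=response_synthesizer,
)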
4 comments
I have been going directly from index.as_query_engine() to query(). With query(), each call takes time and costs money. Does it make sense to use the Retriever classes first, to check how well the chunks returned from the index can answer the query, or is that prone to too much tweaking to be worth it?
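For context, this is the kind of check I mean; retrieve() only embeds the query and does the similarity lookup, with no LLM completion, so inspecting the chunks should be cheap (sketch, assuming an existing index):
Plain Text
# pull back the top chunks without paying for an LLM completion
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("my question here")
for node_with_score in nodes:
    # eyeball whether these chunks could plausibly answer the query
    print(node_with_score.score, node_with_score.node.get_text()[:200])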
6 comments
I am having a devilish time trying to analyze the cost of a query with MockLLMPredictor. I keep getting:
Plain Text
 --------------------------------------------------------------------------
AuthenticationError                       Traceback (most recent call last)
File c:\Users\happy\Documents\Projects\askLavinia\.venv\lib\site-packages\tenacity\__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
    381 try:
--> 382     result = fn(*args, **kwargs)
    383 except BaseException:  # noqa: B902

File c:\Users\happy\Documents\Projects\askLavinia\.venv\lib\site-packages\llama_index\embeddings\openai.py:106, in get_embedding(text, engine, **kwargs)
    105 text = text.replace("\n", " ")
--> 106 return openai.Embedding.create(input=[text], model=engine, **kwargs)["data"][0][
    107     "embedding"
    108 ]

File c:\Users\happy\Documents\Projects\askLavinia\.venv\lib\site-packages\openai\api_resources\embedding.py:33, in Embedding.create(cls, *args, **kwargs)
     32 try:
---> 33     response = super().create(*args, **kwargs)
     35     # If a user specifies base64, we'll just return the encoded string.
     36     # This is only for the default case.

File c:\Users\happy\Documents\Projects\askLavinia\.venv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py:149, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
    127 @classmethod
    128 def create(
    129     cls,
   (...)
    136     **params,
...
--> 326     raise retry_exc from fut.exception()
    328 if self.wait:
    329     sleep = self.wait(retry_state)

RetryError: RetryError[] 
Yet the query works fine when I set it up with st.session_state['query_engine'] = index.as_query_engine(verbose=True). Has anyone managed to retrieve token counts and then figure out the cost? Thank you.
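Looking at the traceback, it dies inside openai.Embedding.create, so the real embedding model is apparently still being called even with MockLLMPredictor in place. A sketch of the full mock setup from the cost-analysis docs of that era (exact names may vary by version, so treat this as an assumption):
Plain Text
from llama_index import GPTVectorStoreIndex, MockLLMPredictor, MockEmbedding, ServiceContext

# mock both the LLM and the embedding model so no OpenAI call is made
llm_predictor = MockLLMPredictor(max_tokens=256)
embed_model = MockEmbedding(embed_dim=1536)
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor, embed_model=embed_model
)

index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
response = index.as_query_engine().query("my question here")

# token counts recorded by the mocks, from which cost can be estimated
print(llm_predictor.last_token_usage)
print(embed_model.last_token_usage)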
24 comments
I need to pass in a header so that I can get the content back from a web crawl using BeautifulSoupWebReader. I am getting a 403, and I know the page can be scraped because langchain's web scraper was able to pass in a header. My challenge is that I want to use llamaindex, and ideally the two documents would be identical types, but sadly they aren't. Is there a way to pass in the header? I couldn't find the source code to check (or if it is available, my bad, I couldn't find it). This gives a 403 for the page content:
Plain Text
from llama_index import GPTVectorStoreIndex, download_loader

BeautifulSoupWebReader = download_loader("BeautifulSoupWebReader")

loader = BeautifulSoupWebReader()
documents = loader.load_data(urls=['https://www.kirklandreporter.com/tag/football/'])
Help appreciated. Thank you.
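In the meantime, the workaround I'd try is fetching the page myself with a browser-like header and wrapping the text in a Document, so the index doesn't care that the loader can't pass headers (sketch; the User-Agent string is just an example):
Plain Text
import requests
from bs4 import BeautifulSoup
from llama_index import Document, GPTVectorStoreIndex

# a browser-like User-Agent usually clears the 403 (example value, nothing special)
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
resp = requests.get("https://www.kirklandreporter.com/tag/football/", headers=headers)
resp.raise_for_status()

# extract the visible text and hand it to llama_index as a plain Document
text = BeautifulSoup(resp.text, "html.parser").get_text()
index = GPTVectorStoreIndex.from_documents([Document(text=text)])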
7 comments
I did something like this:
Plain Text
from tqdm import tqdm  # wraps the docs iterable to show a progress bar
index = VectorStoreIndex.from_documents(tqdm(docs, desc="Indexing documents"), storage_context=storage_context)
3 comments
Regardless, it seems to me that the current ChatMode.CONDENSE_QUESTION's rewritten query, the "this query is better than yours" step, will always be open to challenge on whether it keeps the integrity of the user's intent? ...
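One way to judge that concretely: run the chat engine with verbose=True, which prints the condensed standalone question, and compare it against the user's original wording (sketch, assuming an existing index):
Plain Text
# verbose=True prints the standalone question the condense step produces,
# so you can check it against the intent of the original follow-up
chat_engine = index.as_chat_engine(chat_mode="condense_question", verbose=True)
response = chat_engine.chat("and how does that compare to last season?")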
2 comments