Find answers from the community

Ashish kumar
Offline, last seen last month
Joined September 25, 2024

LLM

GPT4All embeddings error

Plain Text
import tiktoken
from llama_index import (
    LLMPredictor, 
    ServiceContext,
    set_global_service_context
)
from langchain.llms import GPT4All
from langchain.embeddings import GPT4AllEmbeddings
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

callbacks = [StreamingStdOutCallbackHandler()]
local_path = "/path/to/gpt4 model/llama-2-7b-chat.ggmlv3.q4_0.bin"

# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callbacks=callbacks, backend="gptj", verbose=True)

service_context = ServiceContext.from_defaults(
    llm_predictor=LLMPredictor(llm=llm), 
    embed_model=GPT4AllEmbeddings()
)

# set the global default!
set_global_service_context(service_context)

OUTPUT:
Plain Text
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_83194/1998556855.py in <module>
     53 # )
     54 
---> 55 service_context = ServiceContext.from_defaults(
     56     llm_predictor=llm_predictor,

~/anaconda3/lib/python3.10/site-packages/llama_index/indices/service_context.py in from_defaults(cls, llm_predictor, llm, prompt_helper, embed_model, node_parser, llama_logger, callback_manager, chunk_size, chunk_overlap, context_window, num_output, chunk_size_limit)
    163         # NOTE: the embed_model isn't used in all indices
    164         embed_model = embed_model or OpenAIEmbedding()
--> 165         embed_model.callback_manager = callback_manager
    166 
    167         prompt_helper = prompt_helper or _get_default_prompt_helper(

~/.local/lib/python3.10/site-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__setattr__()

ValueError: "GPT4AllEmbeddings" object has no field "callback_manager"

10 comments
SimpleInputPrompt is not working

Plain Text
from llama_index.prompts.prompts import SimpleInputPrompt

DEFAULT_SIMPLE_INPUT_TMPL = (
    "{query_str} \n"
    "by using words 'permission'"
)
DEFAULT_SIMPLE_INPUT_PROMPT = SimpleInputPrompt(DEFAULT_SIMPLE_INPUT_TMPL)
retriever = VectorIndexRetriever(
    index=index,
    similarity_top_k=10,
    vector_store_query_mode=VectorStoreQueryMode.HYBRID
)
response_synthesizer = ResponseSynthesizer.from_args(
    streaming=True,
    service_context=service_context,
    simple_template=DEFAULT_SIMPLE_INPUT_PROMPT,
)
query_engine = RetrieverQueryEngine(
    retriever=retriever,
    response_synthesizer=response_synthesizer,
)
# Note: this second engine replaces the RetrieverQueryEngine built above.
query_engine = index.as_query_engine(
    streaming=True,
    simple_template=DEFAULT_SIMPLE_INPUT_PROMPT,
)
response = query_engine.query(query_str)

The output is different; it is not using the SimpleInputPrompt. I checked by setting the logging level to DEBUG:
Plain Text
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

The logging shows that the default prompt is being used:

Plain Text
LOGGING:

DEBUG:llama_index.indices.response.response_builder:> Initial prompt template: Context information is below. 
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge, answer the question: {query_str}

I have inspected the response synthesizer: it contains text_qa_template and refine_template variables, but no simple_template variable.

Plain Text
print(vars(response_synthesizer._response_builder))
{'_service_context': <...>,
 '_streaming': <...>,
 'text_qa_template': <llama_index.prompts.prompts.QuestionAnswerPrompt at 0x7fde16b0dfc0>,
 '_refine_template': <llama_index.prompts.prompts.SimpleInputPrompt at 0x7fde09d6bd90>}

Can anyone please help me with this?
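A workaround sketch, assuming llama_index ~0.6.x: as the printed vars show, the response builder only consults text_qa_template and refine_template, so a simple_template is ignored whenever context is retrieved. Folding the instruction into a custom QuestionAnswerPrompt should take effect:

Plain Text
from llama_index.prompts.prompts import QuestionAnswerPrompt

# Embed the extra instruction in the QA template the response builder actually uses.
CUSTOM_QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question by using the word 'permission': {query_str}\n"
)

query_engine = index.as_query_engine(
    streaming=True,
    text_qa_template=QuestionAnswerPrompt(CUSTOM_QA_TMPL),
)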
2 comments
Hello,

I am using QuestionAnswerPrompt in llama-index 0.6.8, and I get the following error whenever I use it:

Plain Text
QA_PROMPT_TMPL = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)
QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)

query_engine = index.as_query_engine(streaming=True, similarity_top_k=10, text_qa_template=QuestionAnswerPrompt)

response = query_engine.query(query_str)

OUTPUT ERROR:

TypeError: Prompt.partial_format() missing 1 required positional argument: 'self'

Can anyone help here? Thanks!
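Judging from the traceback, the cause is that text_qa_template is given the QuestionAnswerPrompt class itself rather than the QA_PROMPT instance defined above, so partial_format() is called on the class and self is never bound. Passing the instance should fix it:

Plain Text
# Pass the instantiated prompt, not the class.
query_engine = index.as_query_engine(
    streaming=True,
    similarity_top_k=10,
    text_qa_template=QA_PROMPT,
)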
1 comment
I am getting an error sometimes
Plain Text
agent = OpenAIAgent.from_tools(
    [multiply_tool, add_tool],
    llm=llm, 
    verbose=True,
    callback_manager=callback_manager
)
response = agent.chat("your_query")

OUTPUT ERROR:

JSONDecodeError                           Traceback (most recent call last)
/tmp/ipykernel_85164/1112129131.py in <module>
----> 1 response = agent.chat("a ate 2 apples, b ate 9 apples and c ate 1 apples. I apple cost is 7.8. how much total cost will be")
      2 print('response = ',response)

~/anaconda3/lib/python3.10/site-packages/llama_index/agent/openai_agent.py in chat(self, message, chat_history, function_call)
    143                 break
    144 
--> 145             function_message, tool_output = call_function(
    146                 tools, function_call_, verbose=self._verbose
    147             )

~/anaconda3/lib/python3.10/site-packages/llama_index/agent/openai_agent.py in call_function(tools, function_call, verbose)
     41         print(f"Calling function: {name} with args: {arguments_str}")
     42     tool = get_function_by_name(tools, name)
---> 43     argument_dict = json.loads(arguments_str)
     44     output = tool(**argument_dict)
     45     if verbose:

. . . 

~/anaconda3/lib/python3.10/json/decoder.py in raw_decode(self, s, idx)
    351         """
    352         try:
--> 353             obj, end = self.scan_once(s, idx)
    354         except StopIteration as err:
    355             raise JSONDecodeError("Expecting value", s, err.value) from None

JSONDecodeError: Expecting ',' delimiter: line 2 column 14 (char 15)

@Logan M
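A defensive sketch, assuming the intermittent failure happens when the model emits malformed JSON in its function-call arguments (which call_function then fails to parse): catch the decode error and retry the query a bounded number of times. The chat_with_retry helper below is illustrative, not part of the llama_index API:

Plain Text
import json

# Hypothetical retry wrapper; the agent call itself is unchanged.
def chat_with_retry(agent, message, max_attempts=3):
    for attempt in range(max_attempts):
        try:
            return agent.chat(message)
        except json.JSONDecodeError:
            # The model produced malformed function-call JSON; try again.
            if attempt == max_attempts - 1:
                raise

response = chat_with_retry(agent, "your_query")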
3 comments
Error while querying

Plain Text
query_engine = index.as_query_engine()
response = query_engine.query(query)


ERROR OUTPUT
Plain Text
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_113122/3918986344.py in <module>
      1 query_engine = index.as_query_engine()
----> 2 respons = query_engine.query(query)

. . . . .

~/.local/lib/python3.10/site-packages/llama_index/data_structs/node.py in __post_init__(self)
     64         # NOTE: for Node objects, the text field is required
     65         if self.text is None:
---> 66             raise ValueError("text field not set.")
     67 
     68         if self.node_info is None:

ValueError: text field not set.

@Logan M @ravitheja
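One guess, based on where the traceback lands: the index contains a Node built from a document whose text is None (for example, an empty file or a reader that returned no text), and node.py rejects it at query time. A sketch that filters such documents before building the index, assuming documents came from a reader such as SimpleDirectoryReader:

Plain Text
# Drop documents with no text before indexing (illustrative filter).
documents = [doc for doc in documents if doc.text and doc.text.strip()]
index = GPTVectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()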
8 comments
No API key error in OpenAIAgent
Plain Text
from llama_index.agent import OpenAIAgent
agent = OpenAIAgent.from_tools(query_engine_tools, verbose=True, llm=llm)
agent.chat_repl()

OUTPUT ERROR:

===== Entering Chat REPL =====
Type "exit" to exit.

Human: when parliament building inaugurated
=== Calling Function ===
Calling function: new_parmialment with args: {
  "input": "When was the parliament building inaugurated?"
}

---------------------------------------------------------------------------
AuthenticationError                       Traceback (most recent call last)
.
.
.
.
AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details, or email support@openai.com if you have any questions.

The above exception was the direct cause of the following exception:

RetryError                                Traceback (most recent call last)
.
.
.
.
RetryError: RetryError[<Future at 0x7fd9199b00a0 state=finished raised AuthenticationError>]

@jerryjliu0 , @ravitheja @Logan M
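The error message itself points at the fix: the OpenAI key is not visible to the process running the agent, and the RetryError merely wraps the repeated AuthenticationError. Setting the key before constructing the agent should clear both:

Plain Text
import os
import openai

# Either export OPENAI_API_KEY in the shell, or set it in code (placeholder key):
os.environ["OPENAI_API_KEY"] = "sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]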
36 comments
The Google Docs reader is working, but the Google Drive reader is not.

Plain Text
from llama_index import download_loader

GoogleDriveReader = download_loader('GoogleDriveReader')

loader = GoogleDriveReader()
documents = loader.load_data(file_ids=['file_id'])

OUTPUT:
TypeError: GoogleDriveReader._load_from_file_ids() takes 2 positional arguments but 3 were given

The Notion reader is also giving an error.

Plain Text
from llama_index import GPTListIndex, NotionPageReader
from IPython.display import Markdown, display
import os
integration_token = 'notion_integration_token'
page_ids = ["page_id"]
notion_reader = NotionPageReader(integration_token=integration_token)
documents = notion_reader.read_page(page_id=page_ids)

OUTPUT:
~/.local/lib/python3.10/site-packages/llama_index/readers/notion.py in _read_block(self, block_id, num_tabs)
     58             data = res.json()
     59 
---> 60             for result in data["results"]:
     61                 result_type = result["type"]
     62                 result_obj = result[result_type]

KeyError: 'results'



Can anyone please help me with this? @jerryjliu0 @Logan M @ravitheja
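For the Notion error, the likely cause is that read_page expects a single page-ID string, while page_ids here is a list, so the Notion API returns an error payload without a 'results' key. load_data accepts a list:

Plain Text
# load_data takes a list of page IDs; read_page takes one ID string.
documents = notion_reader.load_data(page_ids=page_ids)

For the GoogleDriveReader signature mismatch, one guess is a stale cached copy of the loader; re-fetching it with download_loader('GoogleDriveReader', refresh_cache=True) may help.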
13 comments
Plain Text
from llama_index import SimpleDirectoryReader, StorageContext, load_index_from_storage
from llama_index.readers import WeaviateReader
from llama_index.vector_stores import WeaviateVectorStore

documents = SimpleDirectoryReader('test_doc').load_data()

storage_context = StorageContext.from_dict()

TypeError: StorageContext.from_dict() missing 1 required positional argument: 'save_dict'


I want to use the Weaviate vector store.

Plain Text
storage_context = StorageContext.from_dict(
     vector_store=WeaviateVectorStore(weaviate_client=weaviate_client),
)

Then I got an error:
TypeError: StorageContext.from_dict() got an unexpected keyword argument 'vector_store'

If I use from_defaults instead of from_dict with the Weaviate vector store and then try to save to a dict using to_dict(), I get a ValueError:

Plain Text
storage_context = StorageContext.from_defaults(
     vector_store=WeaviateVectorStore(weaviate_client=weaviate_client),
)

index = GPTVectorStoreIndex.from_documents(documents, storage_context=storage_context,service_context=service_context)
index.storage_context.to_dict()

ValueError: to_dict only available when using simple doc/index/vector stores


In previous versions of llama_index we had save_to_disk, save_to_dict, and save_to_string (and the corresponding load_from_* methods). In the new version, are there any ways to save and load indices other than persisting and loading from storage?
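A sketch of the current pattern, assuming llama_index ~0.6.x: to_dict/from_dict only work with the simple in-memory stores, as the ValueError says. With Weaviate the vectors live in Weaviate itself, so you persist only the docstore/index store to disk and reconnect to the vector store when loading:

Plain Text
# Save: writes the docstore/index store; vectors stay in Weaviate.
index.storage_context.persist(persist_dir="./storage")

# Load: rebuild the storage context, then reload the index.
storage_context = StorageContext.from_defaults(
    vector_store=WeaviateVectorStore(weaviate_client=weaviate_client),
    persist_dir="./storage",
)
index = load_index_from_storage(storage_context, service_context=service_context)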
1 comment