GPT-4

Can we use GPT-4 for querying?
43 comments
Yup! There's an example here

Plain Text
llm_predictor_gpt4 = LLMPredictor(
    llm=ChatOpenAI(temperature=0, model_name="gpt-4")
)


https://github.com/jerryjliu/llama_index/blob/main/examples/test_wiki/TestNYC-Benchmark-GPT4.ipynb
Everything there looks like it's 50/50 with results.
I would love to see some coding examples, as that is where gpt4 really shines.
Thank you @Logan M, will check that.
@Meathead Anthropic's Claude+ is really good for coding
Hard to find out what the cost would be to run it. πŸ˜›
There is pricing in their FAQ. Looks like it's a LOT cheaper than GPT, but not sure
@Logan M but I can't use that when creating an index using the simple vector index
Plain Text
GPTSimpleVectorIndex(documents_nm_stk,LLMPredictor=llm_predictor_gpt4)
Try this, you had a small error/typo
Plain Text
GPTSimpleVectorIndex(documents_nm_stk,llm_predictor=llm_predictor_gpt4)
hmm that doesn't work either:
Plain Text
ValueError: llm must be an instance of langchain.llms.base.LLM
Oh really? Must be something small somewhere

Check out this notebook, that also uses gpt4 for a complete example https://github.com/jerryjliu/llama_index/blob/main/examples/test_wiki/TestNYC-Tree-GPT4.ipynb
If you still have troubles, just send your current code and I can help you out
Have been trying to troubleshoot, but no luck. Looks like some issue with the modules. Here is the whole code:
Plain Text
import os

from llama_index import GPTSimpleVectorIndex, GPTListIndex, Document, SimpleDirectoryReader, download_loader, LLMPredictor
from langchain.llms import OpenAI
from pathlib import Path
from langchain.chat_models import ChatOpenAI


os.environ['OPENAI_API_KEY'] = ''


BeautifulSoupWebReader = download_loader("BeautifulSoupWebReader")

loader = BeautifulSoupWebReader()

#html_links_nm_stk is a list of >700 html links

documents_nm_stk  = loader.load_data(html_links_nm_stk)
#documents_nm_stk loaded successfully

#gpt-4
llm_predictor_gpt4 = LLMPredictor(
    llm=ChatOpenAI(temperature=0, model_name="gpt-4")
)

index_nm_stk_gpt4 = GPTSimpleVectorIndex(documents_nm_stk, llm_predictor=llm_predictor_gpt4)
Huh, that looks good to me. What's the error/stack trace with that?
Plain Text
ValueError: llm must be an instance of langchain.llms.base.LLM
@Logan M
Plain Text
index_nm_stk_gpt4 = GPTSimpleVectorIndex(documents_nm_stk, llm_predictor=llm_predictor_gpt4)
Traceback (most recent call last):

  Cell In[7], line 1
    index_nm_stk_gpt4 = GPTSimpleVectorIndex(documents_nm_stk, llm_predictor=llm_predictor_gpt4)

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/vector_store/vector_indices.py:84 in __init__
    super().__init__(

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/vector_store/base.py:63 in __init__
    super().__init__(

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/base.py:87 in __init__
    self._prompt_helper = prompt_helper or PromptHelper.from_llm_predictor(

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/prompt_helper.py:69 in from_llm_predictor
    llm_metadata = llm_predictor.get_llm_metadata()

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/langchain_helpers/chain_wrapper.py:89 in get_llm_metadata
    return _get_llm_metadata(self._llm)

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/langchain_helpers/chain_wrapper.py:36 in _get_llm_metadata
    raise ValueError("llm must be an instance of langchain.llms.base.LLM")

ValueError: llm must be an instance of langchain.llms.base.LLM
Wow that's super weird. My last guess is try running pip install --upgrade llama_index langchain
If that doesn't work, maybe @jerryjliu0 sees the error lol
Thanks, I upgraded but no luck either. I got rid of that error by not passing the LLMPredictor= keyword, but there is a new error
Plain Text
llm_predictor_gpt4 = LLMPredictor(
    llm=ChatOpenAI(temperature=0, model_name="gpt-4")
)

index_nm_stk_gpt4 = GPTSimpleVectorIndex(documents_nm_stk, llm_predictor_gpt4)
Traceback (most recent call last):

  Cell In[19], line 5
    index_nm_stk_gpt4 = GPTSimpleVectorIndex(documents_nm_stk, llm_predictor_gpt4)

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/vector_store/vector_indices.py:89 in __init__
    super().__init__(

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/vector_store/base.py:63 in __init__
    super().__init__(

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/base.py:79 in __init__
    if index_struct is None and documents is None:

ValueError: Only one of documents or index_struct can be provided.
@Azmath it looks like your llama-index is out of date
have you tried upgrading?
oh nvm i just saw your most recent message
do index_nm_stk_gpt4 = GPTSimpleVectorIndex(documents_nm_stk, llm_predictor=llm_predictor_gpt4) (you have to specify as a kwarg not a positional argument)
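[Editor's note: a minimal sketch of why the keyword matters. `build_index` below is a hypothetical stand-in, not the real `GPTSimpleVectorIndex` signature: when the predictor is passed positionally it fills the next positional slot (`index_struct`), which is exactly what raised the "Only one of documents or index_struct" error above.]

```python
# Hypothetical stand-in for the constructor's argument handling.
def build_index(documents=None, index_struct=None, llm_predictor=None):
    # Mirrors the guard from llama_index/indices/base.py in the traceback.
    if documents is not None and index_struct is not None:
        raise ValueError("Only one of documents or index_struct can be provided.")
    return {"documents": documents, "llm_predictor": llm_predictor}

docs = ["doc one", "doc two"]
predictor = object()  # stands in for an LLMPredictor instance

# Positional: the predictor silently fills the index_struct slot -> ValueError
try:
    build_index(docs, predictor)
except ValueError as err:
    print(err)  # Only one of documents or index_struct can be provided.

# Keyword: the predictor ends up where it belongs
index = build_index(docs, llm_predictor=predictor)
```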
But that gives me the error:
Plain Text
 index_nm_stk_gpt4 = GPTSimpleVectorIndex(documents_nm_stk, llm_predictor=llm_predictor_gpt4)
Traceback (most recent call last):

  Cell In[26], line 1
    index_nm_stk_gpt4 = GPTSimpleVectorIndex(documents_nm_stk, llm_predictor=llm_predictor_gpt4)

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/vector_store/vector_indices.py:89 in __init__
    super().__init__(

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/vector_store/base.py:63 in __init__
    super().__init__(

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/base.py:87 in __init__
    self._include_extra_info = include_extra_info

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/indices/prompt_helper.py:69 in from_llm_predictor
    llm_metadata = llm_predictor.get_llm_metadata()

  File ~/opt/anaconda3/lib/python3.9/site-packages/llama_index/llm_predictor/base.py:179 in get_llm_metadata
    return _get_llm_metadata(self._llm)

NameError: name '_get_llm_metadata' is not defined
Sorry, something got messed up on my end. I can't even run on GPT-3 which I used to run before
Plain Text
index_nm_stk = GPTSimpleVectorIndex.load_from_disk('index_nm_stk')
and gets the error
Plain Text
NameError: name 'OpenAI' is not defined 
maybe something broke when I installed langchain
I'm not sure, maybe try pip uninstalling both langchain/llama-index and try reinstalling?
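[Editor's note: a clean reinstall would look something like the following; package names are as used elsewhere in the thread, and the commands should be run in the same environment the IDE uses.]

```shell
# Remove both packages, then reinstall fresh
pip uninstall -y llama-index langchain
pip install llama-index langchain openai
```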
okay I will try that
Just in case it would help to troubleshoot in future here are the versions I have.
Plain Text
llama-index                   0.4.39
langchain                     0.0.123
openai                        0.27.2 
@jerryjliu0 @Logan M Thanks a lot. I reinstalled all the three modules and restarted IDE and it worked!
But I am not able to query
Plain Text
 response = indexNmstk.query(query, LLMPredictor=llm_predictor_gpt4)
Plain Text
openai.error.InvalidRequestError: The model: `gpt-4` does not exist
I tried
Plain Text
response = indexNmstk.query(query, llm_predictor=llm_predictor_gpt4)
as well
The query inside the parentheses in the above code is a string " "
The second code sample is the right one.

This should be working
Plain Text
 
llm_predictor_gpt4 = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-4"))
...
response = indexNmstk.query(query, llm_predictor=llm_predictor_gpt4)


If that still isn't working, maybe try updating some packages

Plain Text
pip install --upgrade langchain llama_index openai
Only
Plain Text
model_name="gpt-3.5-turbo"
in that code works so far. I tried "gpt-3" and "gpt-4" but both show errors. Maybe I need a company account with OpenAI to have access to that.
For gpt3, it's actually the default, so if you don't specify an llm predictor that's what it uses.

Maybe your account does not have access to gpt 4 yet? I think they are still doing a waitlist
Yes, I think so. Will wait for it 🤞 Thanks for all the help