
Custom LLM

Could anyone tell me why, using this exact code with gpt-4 specified as the model, I still only get usage for gpt-3.5-turbo? How do I use gpt-4 here?

https://github.com/aamyren/CalHacks2023/blob/133a8c144cfb925d400c125e37b60b5446d91e41/app.py#L3

I specified:
Plain Text
llm_predictor = LLMPredictor(
        llm=openai(temperature=0.7, model_name="gpt-4", max_tokens=num_outputs)
    )


but the OpenAI usage page still shows the requests being treated and billed as gpt-3.5-turbo.
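
A likely culprit, if this matches the linked code: the LLMPredictor is constructed but never passed into a ServiceContext, so llama_index falls back to its default gpt-3.5-turbo for queries. A minimal sketch of the wiring that should route requests to gpt-4 (512 stands in for num_outputs here):

Plain Text
from llama_index import LLMPredictor, ServiceContext
from langchain.chat_models import ChatOpenAI

# sketch: ChatOpenAI (not the completion-style OpenAI class) is needed
# for chat models such as gpt-4; 512 stands in for num_outputs
llm_predictor = LLMPredictor(
    llm=ChatOpenAI(temperature=0.7, model_name="gpt-4", max_tokens=512)
)
# the predictor only takes effect if this ServiceContext is actually
# passed to the index that serves the queries
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)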
3 comments
@Emanuel Ferreira this way?


Plain Text
from llama_index import (
    SimpleDirectoryReader,
    GPTVectorStoreIndex,
    ServiceContext,
    PromptHelper,
)
from langchain.chat_models import ChatOpenAI

def construct_index(directory_path):
    max_input_size = 4096
    num_outputs = 512  # was 500
    max_chunk_overlap = 1
    chunk_size_limit = 600
    prompt_helper = PromptHelper(
        max_input_size,
        num_outputs,
        max_chunk_overlap,
        chunk_size_limit=chunk_size_limit,
    )

    # define LLM (a chat model wrapper is required for gpt-4)
    llm = ChatOpenAI(temperature=0.1, model="gpt-4")
    service_context = ServiceContext.from_defaults(
        llm=llm, prompt_helper=prompt_helper
    )
    documents = SimpleDirectoryReader(directory_path, num_files_limit=1).load_data()
    index = GPTVectorStoreIndex.from_documents(
        documents, service_context=service_context
    )
    # note: persist() expects a directory path, not a file name
    index.storage_context.persist("index.json")

    return index
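
For a quick sanity check that gpt-4 is actually the model answering, a hypothetical usage example (assuming a local ./docs folder with a test document):

Plain Text
# hypothetical usage; assumes a local ./docs folder with a test document
index = construct_index("./docs")
query_engine = index.as_query_engine()
print(query_engine.query("Summarize the document in one sentence."))

The request should then show up as gpt-4 on the usage page.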


edit:

ok, it does use GPT-4, thanks for that!
LlamaIndex provides its own LLM wrappers as well:


Plain Text
from llama_index.llms import OpenAI

llm = OpenAI(temperature=0.1, model="gpt-4")
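
It plugs into a ServiceContext the same way; a sketch with the same settings, assuming the same ./docs folder as above:

Plain Text
from llama_index import GPTVectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.llms import OpenAI

llm = OpenAI(temperature=0.1, model="gpt-4")
service_context = ServiceContext.from_defaults(llm=llm)
index = GPTVectorStoreIndex.from_documents(
    SimpleDirectoryReader("./docs").load_data(),
    service_context=service_context,
)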