Aneerudh
Joined September 25, 2024
Plain Text
BadRequestError: Error code: 400 - {'error': {'message': "'$.tools' is too long. Maximum length is 128, but got 221 items.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Hello guys, I'm getting this error even with gpt-4-1106-preview, which has higher context and token limits. I'm trying to build a RAG pipeline: a PandasQueryEngine for the CSV-based data and UnstructuredReader to parse the PDFs.
Data source formats: PDF, TXT, CSV
I'm using GPT-4 as the LLM and OpenAIEmbedding as the embed model.
Here is my service context:
Plain Text
from llama_index import ServiceContext
from llama_index.llms import OpenAI
from llama_index.embeddings import OpenAIEmbedding

llm = OpenAI(
    model="gpt-4-1106-preview",
    # model="gpt-4",
    # model="gpt-3.5-turbo",
    temperature=0.5,
)
OpenAI_embeddings = OpenAIEmbedding()
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=OpenAI_embeddings,
    chunk_size=1024,
)


I'm also testing both RouterQueryEngine and SubQuestionQueryEngine.
Plain Text
root_query_engine_tool = QueryEngineTool(
    # query_engine=root_query_engine,
    query_engine=router_query_engine,
    metadata=ToolMetadata(
        name="router_query_engine",
        # name="sub_question_query_engine",
        description="useful for queries that require analyzing multiple sections of the documents",
    ),
)

# Combine all query engine tools into a single tools list
tools = individual_query_engine_tools + [root_query_engine_tool]

The indices are stored locally in a persistent directory and loaded at runtime,
and I'm using OpenAI data agents with these tools.
How do I resolve this error?
16 comments
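In case it helps anyone hitting this: OpenAI's chat completions API rejects any request whose `tools` array has more than 128 entries, so 221 query-engine tools can never be sent in one call, whatever the model or context window. One workaround is to send only a relevant subset of tools per query (LlamaIndex's tool retrieval over an ObjectIndex implements this idea properly). A minimal plain-Python sketch of the idea, with made-up tool names and a naive keyword score standing in for embedding retrieval:

```python
# OpenAI caps the `tools` list at 128 entries per request, so a subset
# must be chosen per query. Tool names/descriptions below are made up.

MAX_OPENAI_TOOLS = 128

def select_tools(tools, query, max_tools=MAX_OPENAI_TOOLS):
    """Rank (name, description) pairs by naive keyword overlap with the
    query and keep at most max_tools of them."""
    query_words = set(query.lower().split())

    def overlap(tool):
        # tool is a (name, description) pair in this sketch
        return len(query_words & set(tool[1].lower().split()))

    return sorted(tools, key=overlap, reverse=True)[:max_tools]

# 221 dummy tools, matching the count in the error message
all_tools = [(f"tool_{i}", f"answers questions about section {i}")
             for i in range(221)]
subset = select_tools(all_tools, "questions about section 7")
assert len(subset) == MAX_OPENAI_TOOLS
```

In practice you would embed the tool descriptions rather than match keywords, or collapse groups of engines behind a few router tools so the flat tool list the agent sees stays well under 128.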
Aneerudh

Vertex

Hey guys, how do I use Vertex AI with LlamaIndex? I have the service-account JSON file path stored in the GOOGLE_APPLICATION_CREDENTIALS environment variable (macOS). What else should I configure? I still get this error:

Plain Text
PermissionDenied: 403 Request had insufficient authentication scopes. [reason: "ACCESS_TOKEN_SCOPE_INSUFFICIENT"
domain: "googleapis.com"
metadata {
  key: "service"
  value: "aiplatform.googleapis.com"
}
metadata {
  key: "method"
  value: "google.cloud.aiplatform.v1.ModelGardenService.GetPublisherModel"
}


for this code:
Plain Text
import vertexai
from google.cloud import aiplatform
from vertexai import preview

project_id = "project-id"
location = "us-central1"

vertexai.init(project=project_id, location=location)

aiplatform.init(
    # your Google Cloud Project ID or number
    # environment default used if not set
    project=project_id,
    # the Vertex AI region you will use
    # defaults to us-central1
    location=location,
    # Google Cloud Storage bucket in same region as location
    # used to stage artifacts
    #     staging_bucket='gs://my_staging_bucket',
    # custom google.auth.credentials.Credentials
    # environment default credentials used if not set
    #     credentials=my_credentials,
    # customer managed encryption key resource name
    # will be applied to all Vertex AI resources if set
    #     encryption_spec_key_name=my_encryption_key_name,
    # the name of the experiment to use to track
    # logged metrics and parameters
    #     experiment='my-experiment',
    # description of the experiment above
    #     experiment_description='my experiment description'
)

The code above initialises GCP. Then I ran this:
Plain Text
from llama_index.llms.vertex import Vertex

llm = Vertex(model="text-bison", temperature=0, additional_kwargs={})
print(llm.complete("Hello this is a sample text").text)


Help me out @everyone
3 comments
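For anyone debugging this: ACCESS_TOKEN_SCOPE_INSUFFICIENT usually means the access token was minted without the https://www.googleapis.com/auth/cloud-platform scope, which commonly happens when gcloud user credentials get picked up instead of the service-account key. A plain-stdlib sanity check (a hypothetical helper, not part of google-cloud or LlamaIndex) that GOOGLE_APPLICATION_CREDENTIALS really points at a usable key file:

```python
# Hypothetical debugging helper: verify that GOOGLE_APPLICATION_CREDENTIALS
# points at a readable service-account key with the expected fields.
import json
import os

def check_adc_key(path=None):
    """Return the credential type if the key file looks usable, else raise."""
    path = path or os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if not path or not os.path.isfile(path):
        raise FileNotFoundError("GOOGLE_APPLICATION_CREDENTIALS is unset or wrong")
    with open(path) as f:
        key = json.load(f)
    for field in ("type", "project_id", "client_email", "private_key"):
        if field not in key:
            raise ValueError(f"key file missing {field!r}")
    return key["type"]  # should be "service_account"
```

If the key file checks out, another common fix is re-creating application default credentials with `gcloud auth application-default login`, which grants the cloud-platform scope.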
Hello, I have created a chatbot using LlamaIndex over various data sources in text format, with an OpenAI LLM. We created a vector index for each data source and a query engine for each index. I wrap the engines with SubQuestionQueryEngine/RouterQueryEngine, create a QueryEngineTool, and use it with an OpenAIAgent. When I query the chatbot, it doesn't fetch the data even though the data is chunked in the indices; it replies with a general answer like "I don't have info... kindly visit the official website". Is there any way to improve the performance and fetch the appropriate data from the indices according to the query?
1 comment
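One frequent cause of this "generic answer" behaviour is vague tool descriptions: routers and agents choose a query engine purely from its description text, so a description like "useful for answering questions" gives the LLM nothing to route on, and it falls back to answering from general knowledge. An illustrative plain-Python sketch (not LlamaIndex code; names and scoring are made up) of why specific, source-naming descriptions matter:

```python
# Toy router: pick the tool whose description best overlaps the query;
# return None when nothing matches (the "generic answer" case).

def route(query, tools, threshold=1):
    """tools is a list of (name, description) pairs."""
    query_words = set(query.lower().split())
    best, best_score = None, 0
    for name, description in tools:
        score = len(query_words & set(description.lower().split()))
        if score > best_score:
            best, best_score = name, score
    return best if best_score >= threshold else None

# A vague description routes nowhere; a specific one routes correctly.
vague = [("docs", "useful for answering questions")]
specific = vague + [("pricing", "pricing plans, subscription costs and billing")]
assert route("what does a subscription cost", vague) is None
assert route("what does a subscription cost", specific) == "pricing"
```

In LlamaIndex terms: give each QueryEngineTool a description that names the actual data source and the topics it covers, and it is also worth checking similarity_top_k on the underlying retrievers so enough chunks are pulled in for the sub-questions.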