So I'm running the latest version and I get this error:
2023-11-13T19:55:49.415598Z ERROR embed:embed{inputs="title: blblablabla_ API Parameters Key Type Example Description token string \"token\":\" <your_token> \" Required." truncate=false permit=OwnedSemaphorePermit { sem: Semaphore { ll_sem: Semaphore { permits: 486 } }, permits: 1 }}: text_embeddings_core::infer: core/src/infer.rs:100: Input validation error: `inputs` must have less than 512 tokens. Given: 602
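For reference, here's roughly how one could double-check the token count of the flagged input against the 512 limit (just a sketch; it assumes the transformers package is installed and that the BAAI/bge-large-en-v1.5 tokenizer matches what the server uses):

# Sketch only: count tokens the way the embedding server presumably does.
# Assumes `transformers` is installed and that TEI tokenizes with the
# model's own tokenizer (BAAI/bge-large-en-v1.5).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")

text = "title: blblablabla_ API Parameters Key Type Example ..."  # the flagged input
n_tokens = len(tokenizer.encode(text))
print(n_tokens)  # anything over 512 triggers the validation error above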
Here's how I'm running it:
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.embeddings import TextEmbeddingsInference
from llama_index.llms import OpenAI
from llama_index.node_parser import SentenceWindowNodeParser, SimpleNodeParser

# Embeddings served by the local text-embeddings-inference instance
embed_model = TextEmbeddingsInference(
    model_name="BAAI/bge-large-en-v1.5",
    base_url="http://127.0.0.1:8080",
    timeout=60,  # timeout in seconds
    embed_batch_size=30,
)

# Parser that attaches a window of surrounding sentences as node metadata
node_parser = SentenceWindowNodeParser.from_defaults(
    window_size=10,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)
simple_node_parser = SimpleNodeParser.from_defaults()

llm = OpenAI(model="gpt-3.5-turbo-16k", temperature=0.1)

ctx = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
)

# nodes are built from the window parser earlier in the script (sketch below)
sentence_index = VectorStoreIndex(nodes, service_context=ctx, show_progress=True)
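(For completeness, nodes comes from the window parser earlier in my script, roughly like this; the document loading is just a placeholder, SimpleDirectoryReader and the path aren't the exact code:)

from llama_index import SimpleDirectoryReader

documents = SimpleDirectoryReader("./data").load_data()  # placeholder loader/path
nodes = node_parser.get_nodes_from_documents(documents)   # window metadata gets attached here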
Am I missing something?