```python
from llama_index import VectorStoreIndex, StorageContext, ServiceContext
from llama_index.llms import OpenAI
from llama_index.vector_stores import PineconeVectorStore

# Define the LLM
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")

# Create the service context with the LLM
service_context = ServiceContext.from_defaults(llm=llm, chunk_size_limit=64)

# Create the vector store (assumes `pinecone_index` was created earlier with the Pinecone client)
vector_store = PineconeVectorStore(pinecone_index=pinecone_index, namespace="test")

# Create the storage context with the vector store
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Create the VectorStoreIndex with the service context and storage context
# (assumes `nodes` is a list of nodes you have already parsed)
index = VectorStoreIndex(nodes, service_context=service_context, storage_context=storage_context)
```
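Once the index is built, you can query it. Below is a minimal sketch assuming the standard `as_query_engine()` API; the question text is just a placeholder:

```python
# Build a query engine on top of the index and run a test query
query_engine = index.as_query_engine()
response = query_engine.query("What does the document say about X?")  # placeholder question
print(response)
```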
The `NodeWithScore` object in LlamaIndex has a `score` attribute that you can use to get the score value. Here is an example of how you can access it:

```python
node_with_score = NodeWithScore(node=Node(text="text"), score=0.7)
print(node_with_score.score)
```

This prints `0.7`, which is the score of the `NodeWithScore` object.

There is also a `get_score()` method that can be used to get the score value. Here is how you can use it:

```python
score = node_with_score.get_score()
```

This returns the score of the `NodeWithScore` object. The `get_score()` method has an optional parameter `raise_error`. If `raise_error` is set to `True` and the score is `None`, it will raise an error. By default, `raise_error` is set to `False`.
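As a minimal sketch of that behaviour (reusing the `Node` class from the example above; the exact exception type is an assumption and may vary by LlamaIndex version):

```python
# A NodeWithScore created without a score
unscored = NodeWithScore(node=Node(text="text"))

# With raise_error=True, a missing score raises instead of returning a default
try:
    unscored.get_score(raise_error=True)
except ValueError as exc:  # assumed exception type
    print(f"Score not available: {exc}")
```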
As for setting a stop token with the `OpenAILike` class in LlamaIndex: the `OpenAILike` class has several parameters such as `model`, `temperature`, `max_tokens`, `additional_kwargs`, `max_retries`, `timeout`, `default_headers`, `reuse_client`, `api_key`, `api_base`, `api_version`, `context_window`, `is_chat_model`, `is_function_calling_model`, and `tokenizer`. However, none of these parameters seem to be related to setting a stop token.
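For reference, a constructor call using a few of the parameters listed above might look like the sketch below. The model name, server URL, and values are placeholders, and the import path can differ between LlamaIndex versions, so treat this as illustrative only:

```python
from llama_index.llms import OpenAILike  # import path may vary by version

llm = OpenAILike(
    model="my-local-model",               # placeholder model name
    api_base="http://localhost:8000/v1",  # assumed OpenAI-compatible endpoint
    api_key="fake-key",                   # many local servers accept any key
    temperature=0,
    max_tokens=256,
    context_window=4096,
    is_chat_model=True,
)
```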
For more details about `OpenAILike`, you may want to refer to the official LlamaIndex documentation or reach out to the LlamaIndex community or support.

If you are calling the OpenAI API directly, you can set stop sequences with the `stop` parameter. This parameter accepts a list of strings that the model will treat as end tokens for the generated text. Here's an example:

```python
import openai

openai.api_key = 'your-api-key'

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Once upon a time",
    stop=[".", "?", "!"],  # generation stops at the first of these sequences
)
```

Remember to replace `'your-api-key'` with your actual OpenAI API key.
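To inspect the result with the legacy `openai` Python client (pre-1.0) used above, the completion text lives in `response.choices`; a quick check might look like this:

```python
# Print the generated text and the reason generation stopped
print(response.choices[0].text)
print(response.choices[0].finish_reason)  # e.g. "stop" when a stop sequence was hit
```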