Embeddings

Hi, I'm trying to use Optimum ONNX embeddings. I'm not sure if what I'm doing is working, because I'm still seeing requests to OpenAI's embeddings.
Did you remember to set the embed model in the service context and pass it in?
yup, let me show you
Plain Text
        service_context = ServiceContext.from_defaults(
            llm=OpenAI(
                embed_model=embed_model,
                model="gpt-4", max_tokens=1500, temperature=0.5,
                system_prompt="Keep your answers technical and based on facts and do not hallucinate in responses. In addition, make sure all responses look natural, no Answer: or Query: in the response. You can also respond in any language including but not limited to English,Spanish,German,Dutch,Chinese,Thai,Korean and Japanese.  Try to keep translation short to about 4-5 sentences. Always attempt to query database.",
            )
        )
        set_global_service_context(service_context)
        vector_store = QdrantVectorStore(client=qdrant_client, collection_name="gaiafinal")
        storage_context = StorageContext.from_defaults(vector_store=vector_store)
        index = VectorStoreIndex.from_documents(
            docs, storage_context=storage_context, service_context=service_context,
        )
        return index
I think I'm stupid
Yea that will do it πŸ˜…
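For anyone landing on this thread later: the embed_model keyword was passed to the OpenAI LLM constructor instead of to ServiceContext.from_defaults, so the default OpenAI embeddings were still being used. Below is a minimal sketch of the corrected setup. The OptimumEmbedding export step and the BAAI/bge-small-en-v1.5 model name are assumptions for illustration (the thread never shows how the ONNX embedding was built); the Qdrant client, docs, collection name, and system prompt are taken from the snippet above.
Plain Text
import os

from llama_index import (
    ServiceContext,
    StorageContext,
    VectorStoreIndex,
    set_global_service_context,
)
from llama_index.embeddings import OptimumEmbedding
from llama_index.llms import OpenAI
from llama_index.vector_stores import QdrantVectorStore


def build_index(docs, qdrant_client):
    # One-time export of a HuggingFace model to ONNX, then load it locally.
    # Model name and folder are placeholders -- swap in your own.
    if not os.path.exists("./bge_onnx"):
        OptimumEmbedding.create_and_save_optimum_model(
            "BAAI/bge-small-en-v1.5", "./bge_onnx"
        )
    embed_model = OptimumEmbedding(folder_name="./bge_onnx")

    # The LLM gets only LLM settings; embed_model does not belong here.
    llm = OpenAI(
        model="gpt-4",
        max_tokens=1500,
        temperature=0.5,
        system_prompt="...",  # same system prompt as in the snippet above
    )

    # embed_model is passed to the service context, so indexing and queries
    # use the local ONNX embeddings instead of OpenAI's embeddings API.
    service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
    set_global_service_context(service_context)

    vector_store = QdrantVectorStore(client=qdrant_client, collection_name="gaiafinal")
    storage_context = StorageContext.from_defaults(vector_store=vector_store)
    index = VectorStoreIndex.from_documents(
        docs, storage_context=storage_context, service_context=service_context,
    )
    return index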