Hi there, are there any ways to force the LLM to return a response in markdown format?

I have the following function and it does not return a response in .md format:

Python
# Imports assumed for llama_index 0.9.x (legacy ServiceContext API):
import pinecone
from llama_index import ServiceContext, StorageContext, VectorStoreIndex
from llama_index.llms import OpenAI
from llama_index.vector_stores import PineconeVectorStore


def initialize_index(self, namespace: str, model_name="gpt-3.5-turbo-1106"):
    service_context = ServiceContext.from_defaults(
        chunk_size=512,
        # the llama_index OpenAI wrapper takes `model=`, not `model_name=`
        llm=OpenAI(temperature=0.7, model=model_name),
        callback_manager=self.callback_manager,
    )
    pinecone_index = pinecone.Index(PINECONE_INDEX_ID)
    vector_store = PineconeVectorStore(
        pinecone_index=pinecone_index,
        namespace=namespace,
    )
    storage_context = StorageContext.from_defaults(
        docstore=self.docstore,
        index_store=self.index_store,
        vector_store=vector_store,
    )
    self.index = VectorStoreIndex.from_documents(
        [], storage_context=storage_context, service_context=service_context
    )


def query_stream(self, query: str, namespace: str, model: str):
    full_query = (
        "Please make sure to respond ONLY with content in the .md format "
        "as the response, here is my prompt: " + query
    )
    self.initialize_index(namespace, model)
    streaming_response = self.index.as_query_engine(
        streaming=True, similarity_top_k=20,
    ).query(full_query)

    for text in streaming_response.response_gen:
        yield text
1 comment
I see that you are adding the prompt in the query itself.

I would suggest that you try setting up the prompt separately.

For prompts, you can check out: https://docs.llamaindex.ai/en/stable/module_guides/models/prompts.html