Find answers from the community

JakeAM
Joined September 25, 2024
JakeAM

Guardrails

I'm trying to use GuardRails with LlamaIndex, but I'm struggling to understand how to use the output parser. Does anyone have any fairly simple examples or documentation they can share please?
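As I recall from the LlamaIndex docs of that era, the pattern was to build a GuardrailsOutputParser from a rail spec (something like GuardrailsOutputParser.from_rail_string(...)), call its format() on the QA prompt template so the schema instructions reach the LLM, and attach it to the prompt passed as text_qa_template — exact class and method names may differ by version, so verify against the output-parser page in the docs. Conceptually, any output parser does two jobs: inject format instructions into the prompt, then validate and structure the raw LLM output. This stdlib-only toy (ToyOutputParser is made up for illustration, not a real LlamaIndex class) shows both:

```python
import json

class ToyOutputParser:
    """Toy stand-in for an output parser such as GuardrailsOutputParser:
    (1) format() appends schema instructions to the prompt template,
    (2) parse() validates and structures the raw LLM output."""

    def __init__(self, required_keys):
        self.required_keys = required_keys

    def format(self, prompt_tmpl):
        # Tell the LLM what shape of answer we expect.
        return prompt_tmpl + "\nRespond as JSON with keys: " + ", ".join(self.required_keys)

    def parse(self, raw_output):
        # Validate the model's raw text against the expected schema.
        data = json.loads(raw_output)
        missing = [k for k in self.required_keys if k not in data]
        if missing:
            raise ValueError(f"missing keys: {missing}")
        return data

parser = ToyOutputParser(["answer", "confidence"])
print(parser.format("Answer the question: {query_str}"))
print(parser.parse('{"answer": "42", "confidence": 0.9}'))
```

The real GuardrailsOutputParser plays the same two roles, with the rail spec supplying the schema and validators.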
1 comment
Does anyone know why response is none?
import os
from flask import Flask, request
from langchain.chat_models import ChatOpenAI
from llama_index import GPTKeywordTableIndex, LLMPredictor, ServiceContext, SimpleDirectoryReader

os.environ["OPENAI_API_KEY"] = "MY KEY HERE"

app = Flask(__name__)

documents = SimpleDirectoryReader("C:\\temp\\Test").load_data()
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
index = GPTKeywordTableIndex.from_documents(documents, service_context=service_context)

@app.route("/query", methods=["GET"])
def query():
    query_text = request.args.get("text", None)
    if query_text is None:
        # Without this guard, a request with no ?text= parameter passes None
        # into index.query(), one common way to end up with a None response.
        return "No text found, please include a ?text= parameter", 400
    response = index.query(query_text)
    return str(response), 200

if __name__ == "__main__":
    app.run()
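One way to isolate the issue without Flask or an LLM in the loop: request.args.get("text", None) returns None whenever the URL has no ?text= parameter, and that None then flows into index.query(). This stdlib-only sketch mirrors the route logic (handle_query and fake_query are hypothetical stand-ins, not part of any library):

```python
# Stdlib-only sketch of the /query route logic, no Flask or LlamaIndex needed.
# handle_query and fake_query are made-up stand-ins for illustration.

def handle_query(args, query_fn):
    """Mirror of the route: fetch 'text', guard against None, call the index."""
    query_text = args.get("text", None)
    if query_text is None:
        # args.get("text", None) yields None when ?text= is absent; letting
        # None propagate into query() is a classic source of a None response.
        return "No text found, please include a ?text= parameter", 400
    return str(query_fn(query_text)), 200

def fake_query(q):
    """Stand-in for index.query so the flow is visible without an LLM."""
    return f"answer to: {q}"

print(handle_query({}, fake_query))              # missing-param case -> 400
print(handle_query({"text": "hi"}, fake_query))  # normal case -> 200
```

A keyword table index can also return an empty result when no extracted keywords match the query, so it is worth checking both the request parameters and whether the index's keywords cover the question.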
9 comments
JakeAM

Summarize

What index and query mode should I use for summarization?
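For what it's worth, the LlamaIndex docs at the time pointed to a list index (GPTListIndex) queried with response_mode="tree_summarize" for summarization, since a list index visits every node rather than routing by keywords — version details may have changed, so treat that as a pointer to check against the docs. The idea behind tree summarization, folding chunk summaries pairwise until one summary remains, can be sketched in plain Python (combine stands in for the per-step LLM summarization call):

```python
def tree_summarize(chunks, combine):
    """Toy illustration of tree-style summarization: repeatedly combine
    adjacent chunk summaries until a single summary remains."""
    level = list(chunks)
    while len(level) > 1:
        # Pair up neighbours; an odd chunk at the end carries over unchanged.
        level = [combine(level[i], level[i + 1]) if i + 1 < len(level) else level[i]
                 for i in range(0, len(level), 2)]
    return level[0]

# Stand-in "LLM" that just concatenates, so the fold structure is visible.
result = tree_summarize(["a", "b", "c", "d", "e"], lambda x, y: f"({x}+{y})")
print(result)  # "(((a+b)+(c+d))+e)"
```

Each combine call only ever sees two summaries at once, which is why this mode copes with documents far larger than a single prompt window.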
1 comment