import re

from langchain.llms import OpenAI
from llama_index import (
    GPTVectorStoreIndex,
    LLMPredictor,
    PromptHelper,
    ServiceContext,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)


def create_index(path):
    max_input = 4096
    tokens = 512
    chunk_size = 600
    max_chunk_overlap = 0.2
    promptHelper = PromptHelper(max_input, tokens, max_chunk_overlap, chunk_size_limit=chunk_size)

    # Define LLM
    llmPredictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=tokens))

    # Load data
    docs = SimpleDirectoryReader(path).load_data()

    # Create vector index and persist it to disk
    service_context = ServiceContext.from_defaults(llm_predictor=llmPredictor, prompt_helper=promptHelper)
    vectorIndex = GPTVectorStoreIndex.from_documents(documents=docs, service_context=service_context)
    vectorIndex.storage_context.persist(persist_dir='index_store')
    return vectorIndex

def answerMe(question):
    # `rewriting`, `phrases`, and `agent` are assumed to be defined elsewhere in the script
    storage_context = StorageContext.from_defaults(persist_dir='index_store')
    index = load_index_from_storage(storage_context)
    query_engine = index.as_query_engine(response_mode='refine')

    # If the question matches a "rewriting" trigger, ask for a longer answer
    for sent in rewriting:
        if sent in question:
            qq = "Write a lengthy response to this query"
            response = query_engine.query(qq + '\n\n' + question)
            return response

    response = query_engine.query(question)

    # Fall back to the agent when the response contains a trigger phrase,
    # then convert Markdown links [text](url) into HTML anchors
    for phrase in phrases:
        if phrase in str(response):
            content = agent({"input": question})
            actualcontent = content['output']
            response = re.sub(r'\[(.*?)\]\((.*?)\)', r'<a href="\2" target="_blank">\1</a>', actualcontent)

    return response

How can I customize this so the response and outputs are organized into a well-structured shape, not just a paragraph with no fixed schema?
Please stand by @3bs as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
To customize the response and outputs from LlamaIndex, you can use the structured output parsing feature. This allows you to format the output in a more organized way, rather than just a paragraph.

Here is an example of how you can do this:

Plain Text
from llama_index.query_modules.structured_outputs import output_parser
from llama_index.query_modules.structured_outputs import QuestionAnswerPrompt, RefinePrompt

# format each prompt with output parser instructions
fmt_qa_tmpl = output_parser.format(DEFAULT_TEXT_QA_PROMPT_TMPL)
fmt_refine_tmpl = output_parser.format(DEFAULT_REFINE_PROMPT_TMPL)

qa_prompt = QuestionAnswerPrompt(fmt_qa_tmpl, output_parser=output_parser)
refine_prompt = RefinePrompt(fmt_refine_tmpl, output_parser=output_parser)

# obtain a structured response
query_engine = index.as_query_engine(
    service_context=ServiceContext.from_defaults(
        llm_predictor=llm_predictor
    ),
    text_qa_template=qa_prompt, 
    refine_template=refine_prompt, 
)
response = query_engine.query(
    "What are the three items the author did growing up?", 
)
print(response)

In this example, the output is structured as a dictionary with keys and values, which is more organized than a simple paragraph. You can customize the keys and values according to your needs.

For more details, you can refer to the Output Parsing section in the LlamaIndex documentation.
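
As a rough sketch (not from the docs), assuming the output parser renders the answer as JSON text, the structured response could be consumed like this:

Plain Text
import json

# Hypothetical follow-up: if the structured response prints as JSON text,
# it can be loaded into a dict and the fields accessed by key.
structured = json.loads(str(response))
for key, value in structured.items():
    print(f"{key}: {value}")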
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai Import "llama_index.query_modules.structured_outputs" could not be resolved
Let me check the knowledge base for you. One moment! :books:
----------------------
I apologize for the confusion. The import statement provided in the previous response seems to be incorrect as it's not found in the provided extracts.

In the extracts, we have examples of importing specific classes or functions from llama_index, but not the entire structured_outputs module. Here's how the imports are done in the extracts:

Plain Text
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.output_parsers import LangchainOutputParser
from llama_index.llm_predictor import StructuredLLMPredictor
from llama_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt
from llama_index.prompts.default_prompts import DEFAULT_TEXT_QA_PROMPT_TMPL, DEFAULT_REFINE_PROMPT_TMPL


Please replace the incorrect import statement with the correct ones based on your needs. If you're still having issues, it might be helpful to check the LlamaIndex documentation for more information.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai "output_parser" is not defined
Let me check the knowledge base for you. One moment! :books:
----------------------
I apologize for the confusion. The output_parser is an instance of an output parsing class, which is used to format and parse the output from the LLM. In the provided extracts, two types of output parsers are mentioned: GuardrailsOutputParser and LangchainOutputParser.

Here's how you can define an output_parser using GuardrailsOutputParser:

Plain Text
from llama_index.output_parsers import GuardrailsOutputParser
from llama_index.llm_predictor import StructuredLLMPredictor

llm_predictor = StructuredLLMPredictor()

rail_spec = """
<rail version="0.1">
...
</rail>
"""

output_parser = GuardrailsOutputParser.from_rail_string(rail_spec, llm=llm_predictor.llm)


And here's how you can define an output_parser using LangchainOutputParser:

Plain Text
from llama_index.output_parsers import LangchainOutputParser
from llama_index.llm_predictor import StructuredLLMPredictor
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

llm_predictor = StructuredLLMPredictor()

response_schemas = [
    ResponseSchema(name="Education", description="Describes the author's educational experience/background."),
    ResponseSchema(name="Work", description="Describes the author's work experience/background.")
]

lc_output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
output_parser = LangchainOutputParser(lc_output_parser)


Please replace the ... in the rail_spec with your own specifications. For more details, you can refer to the LlamaIndex documentation.
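
To tie the corrected imports, the parser definition, and the structured prompts together, here is a minimal end-to-end sketch (the data path, the response schema names, and the question are placeholders reused from the examples above):

Plain Text
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llm_predictor import StructuredLLMPredictor
from llama_index.output_parsers import LangchainOutputParser
from llama_index.prompts.default_prompts import DEFAULT_REFINE_PROMPT_TMPL, DEFAULT_TEXT_QA_PROMPT_TMPL
from llama_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt

# Build the index over your documents (path is a placeholder)
docs = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(docs)

# Define the schema of the structured answer
llm_predictor = StructuredLLMPredictor()
response_schemas = [
    ResponseSchema(name="Education", description="Describes the author's educational experience/background."),
    ResponseSchema(name="Work", description="Describes the author's work experience/background."),
]
lc_output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
output_parser = LangchainOutputParser(lc_output_parser)

# Attach the output parser instructions to both prompts
fmt_qa_tmpl = output_parser.format(DEFAULT_TEXT_QA_PROMPT_TMPL)
fmt_refine_tmpl = output_parser.format(DEFAULT_REFINE_PROMPT_TMPL)
qa_prompt = QuestionAnswerPrompt(fmt_qa_tmpl, output_parser=output_parser)
refine_prompt = RefinePrompt(fmt_refine_tmpl, output_parser=output_parser)

# Query with the structured prompts
query_engine = index.as_query_engine(
    service_context=ServiceContext.from_defaults(llm_predictor=llm_predictor),
    text_qa_template=qa_prompt,
    refine_template=refine_prompt,
)
response = query_engine.query("What did the author do growing up?")
print(response)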
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------