
Updated 12 months ago

how can I solve this problem?

@kapa.ai how can I solve this problem?
6 comments
Please stand by @Giray as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
I'm sorry, but your question is quite broad. Could you please provide more details about the problem you're trying to solve? Is it related to LlamaIndex or Large Language Models (LLMs)? The more specific you can be, the better I can assist you.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai It is generally related to LLMs, but I think it can be solved using LlamaIndex. My goal is to extract information from reports, which are PDFs. I have lots of economic reform definitions, and I want to check whether these reforms appear in the reports. If they do, I want to extract their names and dates into a table.
Let me check the knowledge base for you. One moment! :books:
----------------------
Based on the information provided in the extracts, you can use LlamaIndex to achieve your goal. Here's a general approach:

  1. Data Connectors: Use data connectors to ingest your existing data from its native source and format. In your case, these are the PDF reports.
  2. Data Indexes: Structure your data in intermediate representations that are easy and performant for LLMs to consume. This will help in extracting the required information from the reports.
  3. Engines: Use engines to provide natural language access to your data. Query engines can be used as powerful retrieval interfaces for knowledge-augmented output.
  4. Data Agents: These are LLM-powered knowledge workers augmented by tools, from simple helper functions to API integrations and more. They can help in extracting the required information.
  5. Application Integrations: Tie LlamaIndex back into the rest of your ecosystem. This could be LangChain, Flask, Docker, ChatGPT, or anything else.
For extracting terms and definitions from text, you can follow the tutorial provided in the extracts. Here's a simplified version of the code:

Python
from llama_index.core import (
    Settings,
    SimpleDirectoryReader,
    VectorStoreIndex,
)
from llama_index.llms.openai import OpenAI

# Use a deterministic OpenAI model for all queries
Settings.llm = OpenAI(temperature=0, model="gpt-3.5-turbo")

# Load document and build index
reader = SimpleDirectoryReader(input_files=["./data/10k/your_pdf_file.pdf"])
data = reader.load_data()

index = VectorStoreIndex.from_documents(data)

query_engine = index.as_query_engine(streaming=True, similarity_top_k=3)

# Stream response with page citation
response = query_engine.query(
    "What was the impact of COVID? Show statements in bullet form and show"
    " page reference after each statement."
)
response.print_response_stream()
Please note that you need to replace "./data/10k/your_pdf_file.pdf" with the path to your PDF file. Also, replace the query string with your specific query.
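To get from the query engine's free-text answers to the table you described, one option is to prompt it to answer in a fixed line format (for example, one "reform name; date" pair per line) and then parse those lines. A minimal post-processing sketch, using only the Python standard library; the "name; date" line format and both helper functions are assumptions for illustration, not part of the LlamaIndex API:

```python
import csv
import io


def parse_reform_rows(answer: str) -> list[tuple[str, str]]:
    """Parse lines like '- Tax Reform; 2020-03-01' from an LLM answer
    into (name, date) tuples. Lines without a ';' are ignored."""
    rows = []
    for line in answer.splitlines():
        # Strip leading bullet characters the model may add
        line = line.strip().lstrip("-* ")
        if ";" in line:
            name, _, date = line.partition(";")
            rows.append((name.strip(), date.strip()))
    return rows


def rows_to_csv(rows: list[tuple[str, str]]) -> str:
    """Render the extracted (name, date) rows as a CSV table."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["reform", "date"])
    writer.writerows(rows)
    return buf.getvalue()
```

You could then loop over your reform definitions, call `query_engine.query(...)` once per definition with an instruction to answer in that "name; date" format, and feed each answer through `parse_reform_rows` before writing the combined CSV. For more robust structured output, LlamaIndex also offers Pydantic-based extraction, which is worth checking in its documentation.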

For more detailed instructions, you can refer to the LlamaIndex documentation and the GitHub examples.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------