----------------------
Sure, here's an example of how you can use FireCrawl with LlamaIndex to load data from a URL and then query it:
First, you need to initialize the FireCrawlWebReader with your API key and desired mode:
from llama_index.readers.web import FireCrawlWebReader

firecrawl_reader = FireCrawlWebReader(
    api_key="<your_api_key>",  # Replace with your actual API key from https://www.firecrawl.dev/
    mode="scrape",  # "scrape" fetches a single page; "crawl" follows links across a whole site
    params={"additional": "parameters"},  # Optional additional parameters
)
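As a side note, rather than hardcoding the key, you might read it from an environment variable. Here is a minimal sketch of that pattern; the variable name FIRECRAWL_API_KEY and the helper function are my own choices for illustration, not something mandated by FireCrawl:

```python
import os

def get_firecrawl_api_key(env_var="FIRECRAWL_API_KEY"):
    # Assumed convention: the key is stored in an environment variable
    # (FIRECRAWL_API_KEY here) instead of being written into source code.
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} to your FireCrawl API key")
    return key
```

You could then pass `api_key=get_firecrawl_api_key()` to the reader so the key never lands in version control.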
Then, you can load documents from a single page URL:
documents = firecrawl_reader.load_data(url="http://paulgraham.com/")
Next, you can create an index from the documents:
from llama_index.core import SummaryIndex

index = SummaryIndex.from_documents(documents)
Finally, you can query the index:
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
Remember to replace <your_api_key> with your actual API key from FireCrawl, "http://paulgraham.com/" with your own URL, and "What did the author do growing up?" with your own query.
----------------------