Building Our Firecrawl Tool Module

Welcome to the heart of our course!

In this module, we'll build a production-ready web scraping tool module using Naptha's tool architecture.

The run.py file contains our core implementation and serves as the entry point for the tool module.

Understanding Tool Architecture

Before we write any code, let's explore how our tool module works:

  • Receives requests through a standardized interface
  • Processes inputs according to schema validation (see the sketch after this list)
  • Interacts with external APIs securely
  • Returns structured responses
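To make this concrete, here is a minimal sketch of the kind of Pydantic input schema the module validates against. The field names (`tool_name`, `tool_input_data`, `query`) are assumptions for this sketch rather than a fixed Naptha contract; we'll use them consistently through the rest of the module:

```python
# schemas.py -- illustrative input schema for the tool module.
# Field names are assumptions for this sketch, not a fixed Naptha contract.
from typing import Optional

from pydantic import BaseModel


class InputSchema(BaseModel):
    tool_name: str                # operation to run, e.g. "scrape_website"
    tool_input_data: str          # target URL for the operation
    query: Optional[str] = None   # extraction query, used only by extract_data
```

Pydantic rejects malformed requests before any of our code runs, which is what provides the schema-level validation described above.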

Building the Core Implementation

Let's break down our Firecrawl tool implementation into its key components:

Tool Initialization

The initialization sets up our tool with the necessary configuration. We use environment variables for sensitive data like API keys and validate them early:
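Here is a minimal sketch of that initialization, assuming the firecrawl-py SDK and python-dotenv; the class name `FirecrawlTool` is our choice for this module:

```python
# run.py -- tool initialization (sketch).
import os

from dotenv import load_dotenv       # python-dotenv, convenient for local runs
from firecrawl import FirecrawlApp   # firecrawl-py SDK

load_dotenv()


class FirecrawlTool:
    def __init__(self):
        # Fail fast if the key is missing, so misconfiguration surfaces at
        # startup rather than in the middle of a request.
        api_key = os.getenv("FIRECRAWL_API_KEY")
        if not api_key:
            raise ValueError("FIRECRAWL_API_KEY environment variable is not set")
        self.app = FirecrawlApp(api_key=api_key)
```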

Web Scraping Implementation

The scrape_website method handles basic web scraping:
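A sketch of the method, continuing the `FirecrawlTool` class above. It delegates to the SDK's `scrape_url` call; the exact shape of the returned document varies between firecrawl-py versions, so we pass it through as-is:

```python
    def scrape_website(self, url: str) -> dict:
        """Scrape a single page and return its content."""
        if not url:
            raise ValueError("A URL is required for scraping")
        # scrape_url fetches and renders the page through Firecrawl; the
        # return shape (markdown/html keys) depends on the SDK version.
        result = self.app.scrape_url(url)
        return {"url": url, "content": result}
```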

Data Extraction Implementation

The extract_data method provides targeted data extraction:
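A sketch of the method, again continuing the class. Recent firecrawl-py releases expose LLM-based extraction, while older versions route it through scrape options, so treat the `extract` call and its `prompt` keyword as assumptions and check the docs for the SDK version you have installed:

```python
    def extract_data(self, url: str, query: str) -> dict:
        """Extract targeted data from a page using a natural-language query."""
        if not query:
            raise ValueError("A query is required for data extraction")
        # Assumption: recent firecrawl-py releases expose an `extract` call
        # that accepts a prompt; older versions use scrape options instead.
        result = self.app.extract([url], prompt=query)
        return {"url": url, "query": query, "data": result}
```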

Module Entry Point

The run function serves as our module's entry point:
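A sketch of the entry point. Naptha's exact run signature and input wrapper have varied across SDK versions, so the `module_run["inputs"]` access below is an assumption; the validate-then-dispatch pattern is the takeaway:

```python
def run(module_run: dict, *args, **kwargs):
    """Validate inputs against the schema, then dispatch to the right method."""
    inputs = InputSchema(**module_run["inputs"])  # shape assumed for this sketch
    tool = FirecrawlTool()

    if inputs.tool_name == "scrape_website":
        return tool.scrape_website(inputs.tool_input_data)
    elif inputs.tool_name == "extract_data":
        return tool.extract_data(inputs.tool_input_data, inputs.query)
    else:
        raise ValueError(f"Unknown tool_name: {inputs.tool_name}")
```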

Usage Examples

We'll also add a sample usage example to the module so that we can test our tool locally:
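One way to do that is a `__main__` block at the bottom of run.py, mirroring the input shape assumed above:

```python
if __name__ == "__main__":
    # Local smoke test: exercise the module without the Naptha runtime.
    example_run = {
        "inputs": {
            "tool_name": "scrape_website",
            "tool_input_data": "https://example.com",
        }
    }
    print(run(example_run))
```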

Error Handling Strategy

Our implementation follows a hierarchical error-handling approach (see the sketch after this list):

  1. Schema-level validation (Pydantic)
  2. Method-specific validation (e.g., checking for required query parameter)
  3. API-level error handling (HTTP response validation)
  4. General exception handling with detailed logging
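To illustrate the outer layers, a call site might wrap a scrape like this. `safe_scrape` is a hypothetical helper for the sketch, not part of any SDK, and since the specific exception types Firecrawl raises depend on the SDK version, the generic except clause stands in for API-level handling:

```python
import logging

logger = logging.getLogger(__name__)


def safe_scrape(tool: FirecrawlTool, url: str) -> dict:
    # Hypothetical helper showing layers 3 and 4 of the hierarchy above.
    try:
        return tool.scrape_website(url)
    except ValueError:
        raise  # layer-2 validation errors propagate unchanged
    except Exception as exc:
        # Layer 4: log full context, return a structured error response.
        logger.exception("Scrape failed for %s", url)
        return {"url": url, "error": str(exc)}
```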

Next Steps

In the next module, we'll explore:

  • Testing our Firecrawl tool module
  • Deploying to Naptha Hub

Try experimenting with different scraping configurations before moving on. Understanding how the pieces fit together will help when we move to deployment.