Data Fetcher’s error handling lets you control how consecutive errors are handled for manual and automated runs. It helps you catch systematic failures while tolerating occasional issues.

How to configure error handling

  1. On the request or sequence screen, click to open the Advanced settings.
  2. Scroll down to find the Error Handling section.
  3. Choose how errors should be handled:
    • Stop on first error
    • Continue through errors
  4. Configure the specific thresholds:
    • Stop after X errors
    • Pause automated runs after Y errors
  5. (Optional) Enable Treat empty results as an error

Notify and stop after consecutive errors

This setting controls when Data Fetcher will notify you of errors and stop the current execution.
  • For manual runs: you’ll see the error message and execution will stop.
  • For automated runs: you’ll receive an email notification and the current run will stop, but the automation remains active for future runs.
Example scenarios:
  • Set to 1 for critical data where any error needs immediate attention
  • Set to 5 for resilient imports that should tolerate a few sporadic failures

Pause automated runs after consecutive errors

This setting determines when Data Fetcher pauses your automation (schedules, webhook URLs, and triggers) to prevent runaway failures. It applies only to automated runs; manual runs are not affected.
When this threshold is reached, your automation is paused and you receive an email notification. You’ll need to manually re-enable the automation after fixing the underlying issue.
Example scenarios:
  • Set to 5 for immediate protection against systematic failures
  • Set to 50 for maximum resilience before pausing
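Taken together, the two thresholds behave roughly like the following sketch for an automated run. This is a hypothetical model for illustration only; the class and method names are not Data Fetcher’s actual internals.

```python
# Hypothetical model of how the two thresholds interact for an automated run.
# Class and method names are illustrative, not Data Fetcher's real internals.

class ErrorHandler:
    def __init__(self, notify_after, pause_after):
        self.notify_after = notify_after   # "Notify and stop" threshold
        self.pause_after = pause_after     # "Pause automated runs" threshold
        self.consecutive_errors = 0
        self.automation_paused = False

    def record_run(self, success):
        """Record one run's outcome; return the actions it triggers."""
        actions = []
        if success:
            self.consecutive_errors = 0    # a success resets the counter
            return actions
        self.consecutive_errors += 1
        if self.consecutive_errors >= self.notify_after:
            actions.append("notify_and_stop")   # email sent, this run stops
        if self.consecutive_errors >= self.pause_after:
            self.automation_paused = True       # needs manual re-enable
            actions.append("pause_automation")
        return actions
```

With settings 1 / 5, every failing run triggers a notification, and the fifth consecutive failure pauses the automation; a single success in between resets the count.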

Treat empty results as an error

When enabled, a run that returns zero records after transformation and mapping is treated as an error — even if the API call itself succeeded. This is useful when you expect data to always be present (e.g. daily sales, inventory updates) and want to be alerted when it’s missing. Leave this off if data volume naturally varies or empty periods are normal for your use case. Empty results count toward your consecutive error thresholds.
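As a sketch, the decision could be modeled like this. The function is illustrative, not Data Fetcher’s actual code, and assumes the run’s records are available after transformation and mapping.

```python
# Illustrative sketch, not Data Fetcher's actual code: decide whether a run
# counts as an error, given the "Treat empty results as an error" toggle.

def run_is_error(api_succeeded, records, treat_empty_as_error):
    if not api_succeeded:
        return True                        # a failed API call is always an error
    if treat_empty_as_error and len(records) == 0:
        return True                        # zero records after mapping counts too
    return False
```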

Understanding consecutive errors

Data Fetcher counts consecutive errors, meaning successful runs reset the error counter to zero. Example with “Notify after 3 errors” setting:
  • Run 1: Error (count: 1)
  • Run 2: Error (count: 2)
  • Run 3: Success (count: reset to 0)
  • Run 4: Error (count: 1)
  • Run 5: Error (count: 2)
  • Run 6: Error (count: 3) → Notification sent
This prevents sporadic, unrelated errors from triggering notifications while still catching systematic problems.
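The six-run example above can be reproduced with a short simulation. The helper function is hypothetical; only the reset-on-success rule comes from the behavior described here.

```python
# Minimal simulation of the six-run example: the counter increments on error
# and resets to zero on success. The function name is illustrative.

def consecutive_counts(outcomes, notify_after=3):
    counts, notified_at = [], None
    count = 0
    for i, ok in enumerate(outcomes, start=1):
        count = 0 if ok else count + 1
        counts.append(count)
        if count == notify_after and notified_at is None:
            notified_at = i               # first run that trips the threshold
    return counts, notified_at

# Outcomes from the example: error, error, success, error, error, error
counts, notified_at = consecutive_counts([False, False, True, False, False, False])
# counts == [1, 2, 0, 1, 2, 3]; the notification is sent on run 6
```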

Why this matters: A practical example

Let’s say you have a scheduled request that imports customer data from your CRM every hour.

Scenario 1: API key expires

Your CRM API key expires, causing authentication errors. With settings 1 / 5:
  • First error → You get notified immediately via email
  • If you don’t fix it, after 5 consecutive failures the schedule pauses
With settings 10 / 50:
  • After 10 consecutive errors → You get notified via email
  • If you don’t fix it, after 50 consecutive failures the schedule pauses

Scenario 2: Running for each record with some 404s

You’re running a request for each of 100 customer IDs in your Airtable base, calling an API to get order details for each customer. Some customers may not have any orders, causing the API to return 404 errors for those IDs. With settings 1 / 5:
  • First 404 error → Request stops, only processes 1 customer
With settings 10 / 50:
  • Individual 404 errors are tolerated, request continues processing all customers
  • Only systematic issues (like API authentication failing) trigger notifications

Scenario 3: Intermittent rate limit errors

Your API occasionally returns rate limit errors during peak hours. With settings 1 / 5:
  • Every rate limit → Immediate notification
  • Frequent pausing of your schedule
With settings 10 / 50:
  • Occasional rate limits don’t trigger notifications
  • Only systematic issues (like 10 consecutive rate limits) trigger alerts

Sequences vs individual requests

For sequences: error handling settings on the sequence take precedence over individual request settings when the sequence runs.
For individual requests: each request uses its own error handling settings when run individually.
This gives you centralized control over sequence behavior while maintaining flexibility for standalone request execution.
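The precedence rule can be sketched as follows. The function and argument names are illustrative; `None` stands for “not running inside a sequence”.

```python
# Hypothetical sketch of the precedence rule: sequence-level settings win when
# a request runs inside a sequence; the request's own settings apply otherwise.

def effective_settings(request_settings, sequence_settings=None):
    """Return the error-handling settings that apply to this execution."""
    if sequence_settings is not None:      # running as part of a sequence
        return sequence_settings
    return request_settings                # standalone run
```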

Best practices

  • For critical data: use lower thresholds (1-3) for quick notification and close monitoring.
  • For resilient syncing: use higher thresholds (10-20) to handle API variability while preventing runaway failures.
  • For development: start with conservative settings (1 / 5) while building requests, then adjust based on observed API behavior.