Documents that ship themselves
Cron schedules, event triggers, or on-demand. Delivery to email, Slack, Drive, SharePoint, S3, or any webhook. Per-recipient routing pulled from your data.
Most recurring reporting work isn’t the writing or the design — those are solved by the template engine. It’s the orchestration. Remembering to run the report. Sending it to the right people. Updating the calendar reminder when the cadence changes. Re-sending when someone forwards the wrong version.
The scheduling layer handles that orchestration. Pick when a document should be generated, where it should land, and who should receive it. The platform runs the generation, delivers the output, and logs what happened.
What it does
Three ways to trigger a generation:
Scheduled. Standard cron expressions — every Monday at 9am, the last business day of each quarter, the 1st of every month. Time zones are explicit; nothing is left to the server’s default.
Event-triggered. A field changes in your CRM, a record updates in Airtable, a webhook arrives from an external system. The platform listens for the event and generates a document in response. Useful when the work follows a workflow stage rather than a calendar — a QBR deck that builds when an opportunity moves to renewal, an onboarding pack triggered when a new account is created.
On-demand. Triggered manually by a user in the UI or programmatically via the API. Most teams use this for ad-hoc generation alongside scheduled runs.
Delivery is independent from generation. The same generated document can land in multiple places: emailed to a client, archived to a Google Drive folder, dropped into a Slack channel, written to S3 for downstream processing, posted to a webhook your own system listens to.
How it works
A schedule is defined on a template-and-source pair. You specify:
- When — a cron expression or an event trigger.
- What — which template to use, which data source, and any filters (which records, which date range).
- Where — one or more delivery destinations.
- To whom — recipient list, or a routing rule that picks recipients per record.
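Put together, a schedule definition might look like the sketch below. The field names and overall shape are illustrative assumptions, not the platform's actual API schema — the point is only that the four questions above map to four sections of one configuration object.

```python
# Hypothetical schedule definition. All field names ("trigger",
# "template_id", "destinations", etc.) are illustrative, not the
# platform's real schema.
schedule = {
    # When: a cron expression with an explicit time zone
    "trigger": {
        "type": "cron",
        "expression": "0 9 5 * *",          # 9am on the 5th of each month
        "timezone": "Europe/Lisbon",
    },
    # What: template, data source, and record filters
    "template_id": "tpl_monthly_report",
    "source": {"id": "src_crm", "filter": {"status": "active"}},
    # Where: one or more delivery destinations
    "destinations": [
        {"type": "email"},
        {"type": "gdrive", "folder": "Client Reports"},
    ],
    # To whom: a routing rule that resolves recipients per record
    "recipients": {"mode": "per_record", "field": "client_contact_email"},
}

def validate(s: dict) -> bool:
    """Minimal structural check: every section of the definition is present."""
    required = ("trigger", "template_id", "source", "destinations", "recipients")
    return all(k in s for k in required)
```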
Once configured, the schedule runs in the background. Each run produces a generation record in the platform — visible in the UI with its inputs, outputs, delivery status, and any errors.
Most teams settle into the same pattern: a few scheduled generations for the predictable cadence work, event triggers for the workflow-driven cases, and on-demand for everything else.
What’s included
- Cron-style scheduling with full crontab syntax.
- Event triggers from supported data sources — record creation, record update, field-value change.
- Webhook-triggered generation from arbitrary external systems.
- Multiple delivery destinations per generation. Email, Slack, Google Drive, SharePoint, S3, custom webhook.
- Per-recipient routing. When the data source has owner/recipient fields, recipients can be resolved per record — different CSMs get different account reports from a single scheduled run.
- Retry on failure. Generation or delivery failures are retried with exponential backoff. Persistent failures are flagged in the UI with structured error information.
- Pause and resume. Schedules can be paused without losing their configuration. Useful when a data source is being restructured or a template is being revised.
- Generation history. A searchable log of every run — what was generated, when, where it was delivered, what failed.
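The retry-on-failure behavior can be sketched as a small backoff loop. The attempt count, delay schedule, and error classification here are assumptions for illustration; the platform's actual retry policy is internal.

```python
import time

class TransientError(Exception):
    """A failure worth retrying (rate limit, network blip)."""

def deliver_with_retries(deliver, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry a delivery callable with exponential backoff.

    Transient failures are retried with doubling delays; once the
    attempt budget is exhausted, the failure is raised so it can be
    flagged instead of retried indefinitely. Numbers are illustrative.
    """
    for attempt in range(max_attempts):
        try:
            return deliver()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # persistent: surface it rather than loop forever
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, 8s, ...
```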
Where it fits
Recurring client reporting. A marketing agency producing monthly reports for 50 clients runs one scheduled generation on the 5th of each month. The data source filters records by client; the recipient list pulls the client contact from the same source. One configured schedule, 50 delivered reports.
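The one-schedule, fifty-reports pattern reduces to grouping source records and resolving a recipient per group. A minimal sketch, with hypothetical field names ("client", "client_contact") standing in for whatever your source actually uses:

```python
from collections import defaultdict

def plan_runs(records, group_field="client", recipient_field="client_contact"):
    """Split one scheduled run into per-client generations.

    Groups source records by client and resolves each group's recipient
    from the records themselves -- there is no separate recipient
    directory. Field names are hypothetical.
    """
    groups = defaultdict(list)
    for rec in records:
        groups[rec[group_field]].append(rec)
    return [
        {"client": client, "records": recs, "recipient": recs[0][recipient_field]}
        for client, recs in groups.items()
    ]
```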
Workflow-driven generation. A customer success team where every account hitting “renewal in 90 days” needs a QBR deck. An event trigger watches the CRM field; when an account flips into that state, the deck generates and lands in the assigned CSM’s email and Drive folder.
Pre-event refresh cycles. Event organizers running multi-day conferences where the program changes daily during the final week. A nightly scheduled generation at 11pm picks up the day’s session changes and regenerates the printable program by morning.
Investor updates and fund reporting. A monthly schedule that fires on the 1st, pulls the prior month’s metrics, generates the update document, and delivers it to the investor distribution list. Once configured, the founder or fund admin stops thinking about it.
Honest limits
- Minimum schedule interval is hourly. We don’t support sub-hour cron expressions. Generations are not designed for high-frequency execution; if you need a document every few minutes, you probably want a dashboard instead.
- Delivery is best-effort with retries, not guaranteed. We retry transient failures and log everything, but webhook delivery (and email, to a lesser extent) isn’t an exactly-once contract. If your downstream workflow requires exactly-once processing, dedupe by generation ID on your side.
- Per-recipient routing requires the routing data to be in your data source. We don’t maintain a separate recipient directory — the recipient for a record needs to be a field on that record (or derivable from one).
- No built-in escalation chains. If a generation fails, we surface it in the UI and via webhook; we don’t have native “alert person A, then escalate to person B after 24 hours” behavior. That logic belongs in your own monitoring layer (PagerDuty, Slack alerts via the webhook destination, etc.).
- No conditional approval workflows yet. A generation runs and delivers; there’s no built-in “send to a human for approval before delivery.” Teams that need approval gates currently insert that step in their own workflow before triggering generation.
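Deduplicating on your side of a webhook destination is a few lines. The sketch below assumes a payload field named `generation_id` and keeps seen IDs in memory; a real consumer would persist them in durable storage (a database table, a Redis set).

```python
def make_handler(process):
    """Wrap a webhook handler so repeated deliveries of the same
    generation are processed once.

    The payload field name ("generation_id") and the in-memory seen-set
    are illustrative assumptions.
    """
    seen = set()

    def handle(payload):
        gen_id = payload["generation_id"]
        if gen_id in seen:
            return "duplicate"   # already processed; safely ignore
        seen.add(gen_id)
        process(payload)
        return "processed"

    return handle
```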
FAQ
Can I pause a schedule temporarily?
Yes. Pause keeps the configuration; resume restarts at the next scheduled run. No “missed” runs are caught up — if you paused for a week, the week’s runs are skipped, not queued.
What happens if a generation fails — does it retry?
Yes, with exponential backoff for transient failures (data source temporarily unreachable, rate limit hit, network blip). Persistent failures (template references a missing field, source returns malformed data) surface in the UI immediately and aren’t retried indefinitely.
Can multiple recipients get different versions of the same document?
Yes, with per-record routing. One scheduled generation that produces 50 different account reports can deliver each report to that account’s owner, pulled from a “CSM” field on the account record. The 50 outputs are 50 different documents, each delivered to one person.
Can a completed generation trigger another generation?
Indirectly — the generation-complete webhook can fire to a system you control, which then calls the API to start a downstream generation. We don’t have a native “chained generation” primitive inside the platform, but the building blocks support it.
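That indirect chaining can be sketched as a webhook receiver that maps a completed schedule to the next one. Everything here is an assumption for illustration: the event field names, the schedule-to-schedule map, and `start_generation`, which stands in for a call to the platform API.

```python
def chain_generations(event, start_generation, downstream_map):
    """On a generation-complete webhook, kick off a downstream generation.

    `event` is the webhook payload (field names assumed), `downstream_map`
    maps a finished schedule ID to the schedule to run next, and
    `start_generation` is a stand-in for the API call that starts a run.
    """
    if event.get("status") != "complete":
        return None  # only chain off successful completions
    next_schedule = downstream_map.get(event["schedule_id"])
    if next_schedule is None:
        return None  # this schedule has no downstream step
    return start_generation(
        schedule_id=next_schedule,
        parent_generation=event["generation_id"],  # for traceability
    )
```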
How does time zone handling work?
Schedules are configured with an explicit time zone. A schedule that runs “Monday at 9am Europe/Lisbon” runs at 9am Lisbon time year-round; the platform handles DST transitions automatically.
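Why the explicit time zone matters is easiest to see in UTC terms: Lisbon is UTC+0 in winter and UTC+1 in summer, so the same "9am Europe/Lisbon" schedule fires at different UTC instants across the DST boundary. A stdlib sketch (the dates chosen are just two Mondays in 2025):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

LISBON = ZoneInfo("Europe/Lisbon")

def run_time_utc(year, month, day):
    """9am local time in Europe/Lisbon, expressed in UTC, for a given date."""
    local = datetime(year, month, day, 9, 0, tzinfo=LISBON)
    return local.astimezone(timezone.utc)

# Lisbon observes WET (UTC+0) in winter and WEST (UTC+1) in summer,
# so "Monday at 9am Lisbon" maps to different UTC times.
winter = run_time_utc(2025, 1, 6)   # a Monday in January -> 09:00 UTC
summer = run_time_utc(2025, 7, 7)   # a Monday in July    -> 08:00 UTC
```

A schedule pinned to a raw UTC offset instead of a named zone would drift by an hour twice a year; a named zone like `Europe/Lisbon` absorbs the transition.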
What if a recipient’s email bounces?
The bounce is logged on the generation record. We don’t automatically retry to the same address; we surface the failure so you can update your recipient data and re-trigger.