The Problem Nobody Talks About
You've invested in automation. Your CRM syncs with your invoicing tool. Your field service platform talks to your database. Client onboarding runs on autopilot. Everything works — until it doesn't.
To be fair, most automation platforms do notify you when something fails. Make.com emails you when a scenario errors out. Airtable flags a broken automation. That part works — when the platform itself is working.
And then there's the failure mode that generates no alert at all: gradual degradation. A platform doesn't crash — it just starts responding slower. 200ms becomes 800ms. No error. No notification. But downstream, automations start timing out, syncs fall behind, and small delays compound into missed invoices, late contractor notifications, or compliance checks that never complete.
You find out days later when a client calls — or worse, when a regulator asks questions.
The Situation
I manage automation systems across multiple clients — each running a stack of interconnected tools: Airtable, Make.com, Zoho FSM, QuickBooks, HubSpot, and custom API integrations. These aren't simple one-step workflows. They're multi-system orchestrations where a single failure in one connection can cascade downstream.
The monitoring options I evaluated didn't fit. Generic uptime checkers and per-platform dashboards each cover one slice of the stack; none of them see the whole picture. I needed something purpose-built: lightweight, centralized, and designed specifically for the kind of automation infrastructure I manage.
The Decision: Build It
Instead of stitching together another set of tools, I designed and built AutoPulse — a centralized automation health monitoring system.
How It Works
AutoPulse has three layers, each doing one job: a checker worker that probes every monitored service, a database that stores each result, and a dashboard that surfaces current status at a glance.
When something goes red, an alert fires to Slack. No more checking five different dashboards. No more finding out from a client.
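The alert step can be sketched against Slack's incoming-webhook API. The `{ text: ... }` payload shape is Slack's standard webhook format; the function names, message wording, and alert fields below are illustrative assumptions, not AutoPulse's actual code:

```typescript
// Minimal alert sketch: format a failing check and POST it to a Slack
// incoming webhook. Alert shape and wording are hypothetical.
interface Alert {
  service: string;
  tier: string;
  latencyMs: number;
}

// Build the JSON body Slack expects: an object with a "text" field.
function buildSlackPayload(a: Alert): string {
  return JSON.stringify({
    text: `:red_circle: ${a.service} is ${a.tier} (${a.latencyMs}ms)`,
  });
}

// Fire the alert. The webhook URL would come from config or a secret.
async function sendAlert(webhookUrl: string, a: Alert): Promise<void> {
  await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildSlackPayload(a),
  });
}
```

Keeping payload construction separate from delivery makes the formatting testable without touching the network.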
The Architecture
Skip this section if you don't care about the how — the results section is next.
AutoPulse runs on Cloudflare Workers — serverless functions that execute at the edge, close to the services they're monitoring. The database is Cloudflare D1 (SQLite at the edge). The two workers communicate via Service Bindings — a direct internal connection that bypasses the public internet.
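The checker layer can be sketched as a single probe function: time a request, and abort it if it hangs so a stalled service registers as a failure instead of blocking the whole run. This is an illustrative sketch, not AutoPulse's actual implementation; `probeEndpoint`, `HealthSample`, and the 5-second default timeout are assumptions:

```typescript
// One health sample per check: what was probed, whether it responded,
// and how long it took. A row like this would land in D1.
interface HealthSample {
  url: string;
  ok: boolean;
  latencyMs: number;
  checkedAt: string;
}

// Probe one endpoint. AbortController caps the wait, so a hung service
// comes back as ok=false rather than stalling the scheduled run.
async function probeEndpoint(url: string, timeoutMs = 5000): Promise<HealthSample> {
  const started = Date.now();
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    return {
      url,
      ok: res.ok,
      latencyMs: Date.now() - started,
      checkedAt: new Date().toISOString(),
    };
  } catch {
    // Network error, DNS failure, or timeout: record it as a failed check.
    return {
      url,
      ok: false,
      latencyMs: Date.now() - started,
      checkedAt: new Date().toISOString(),
    };
  } finally {
    clearTimeout(timer);
  }
}
```

In a Workers deployment this would run on a cron trigger, with results written to D1 and read by the dashboard worker over a Service Binding.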
The architecture is intentionally modular: each component does one job and can be replaced or extended without touching the others.
The health check classifies response times into tiers — Excellent (<200ms), Good (<500ms), Warning (<1500ms), Critical (≥1500ms) — and stores both the raw latency and the classification. The dashboard renders latency trends using Chart.js, color-coded by performance tier.
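The tier logic fits in a few lines. This is an illustrative reimplementation of the thresholds described above, not AutoPulse's actual code, and the exact boundary handling is an assumption:

```typescript
// Performance tiers as described in the article.
type Tier = "excellent" | "good" | "warning" | "critical";

// Map a raw latency measurement to its tier. First matching
// threshold wins; 1500ms and above is treated as critical.
function classifyLatency(ms: number): Tier {
  if (ms < 200) return "excellent";
  if (ms < 500) return "good";
  if (ms < 1500) return "warning";
  return "critical";
}
```

Storing both the raw number and the tier, as the article notes, means thresholds can be retuned later without losing historical precision.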
The Results
What This Demonstrates
AutoPulse wasn't a client project. Nobody asked me to build it. I built it because the problem existed and no available tool solved it the right way for my context.
This is what I mean by Builder-Architect: I saw a gap in how automation operations were being managed, designed a system to fill it, and built it end-to-end — from database schema to API design to production dashboard — in a weekend.
The same thinking applies to every system I build for clients: understand the real operational problem, design the architecture, build it, and make sure it actually works in production. Not just the happy path — the failure modes, the edge cases, the "what happens at 2am when nobody's watching" scenarios.
If your business runs on automation, someone should be watching whether it's actually running.