
DeadManPing
Job monitoring that verifies outcomes, not just execution.

DeadManPing monitors scheduled tasks and automated jobs without touching how they run: one curl line, zero execution changes. It is result-aware, verifying job outcomes rather than mere execution, so it detects silent failures, wrong results, and missing runs. Works with cron, systemd timers, scheduled tasks, and any automated job. Free tier with 20 monitors; multi-currency pricing in USD and EUR.

I kept getting burned by scheduled jobs that "succeeded" but did nothing useful. My backup script ran every night, returned exit code 0, and I thought everything was fine - until I needed that backup and found an empty file. Same thing with data sync jobs: they'd run, log "success," but process zero rows.
Traditional monitoring tools didn't help. They'd tell me if the job crashed, but stayed silent when it ran and produced garbage.
The problem:
I looked at existing solutions and they all had the same issue: they wanted me to change how I work. Either migrate from my current scheduler to their platform (no thanks), write custom webhook connectors (I have better things to do), or accept simple ping monitoring that misses the real problems.
Sure, I could add validation logic directly in my scripts - check file sizes, verify row counts, validate timestamps. But then every time I need to adjust a threshold or add a new check, I'm editing code, testing, deploying. That's not scalable.
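For a sense of what I mean, here's the kind of check I kept baking into scripts (the paths and threshold here are made up for illustration):

```
# Inline validation hard-coded into the job itself - the approach
# I wanted to escape. Every threshold change means edit, test, redeploy.
MIN_BYTES=1000000
SIZE=$(stat -c %s /backups/mydb.sql)   # GNU stat; BSD/macOS: stat -f %z
if [ "$SIZE" -lt "$MIN_BYTES" ]; then
  echo "backup too small: ${SIZE} bytes" >&2
  exit 1
fi
```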
I just wanted something that:
- Works with my existing setup (cron, systemd timers, scheduled tasks, whatever)
- Actually checks if the job did what it was supposed to do
- Doesn't require rewriting my scripts
- Lets me change validation rules on the fly, without touching code
- Catches those silent failures
How it evolved:
I built the first version as a simple ping monitor: add one curl line to your script, done. But then I ran into the same problem from a different angle: a job can ping successfully and still be broken.
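Concretely, a cron job just gains a trailing curl call, something like this (the endpoint URL and monitor ID below are placeholders, not the real API - use the one from your dashboard):

```
#!/usr/bin/env bash
set -e  # if the job itself fails, the ping never fires

# The job does its normal work...
pg_dump mydb > "/backups/mydb-$(date +%F).sql"

# ...then one extra line reports "I ran".
curl -fsS "https://deadmanping.example/ping/YOUR-MONITOR-ID"
```

With set -e, a crashed job never pings, so the missed check-in is itself the alert.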
That's when I added payload validation. Now your script can send data like "processed 150 rows" or "file size: 2.5MB" and DeadManPing checks if that makes sense. You set the rules in the dashboard - no code changes needed.
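As a sketch, that one-liner just carries data now (again, the endpoint and field name are illustrative, not the documented payload format):

```
# Count what the job actually produced and report it.
ROWS=$(wc -l < /data/sync-output.csv)

# Send the outcome; the "rows must be at least N" rule lives in the dashboard.
curl -fsS -X POST \
  -d "rows_processed=${ROWS}" \
  "https://deadmanping.example/ping/YOUR-MONITOR-ID"
```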
The breakthrough was realizing we should separate execution from evaluation. Your scheduler runs your scripts. Your scripts tell us what happened. We check if that's okay. And the best part? You can adjust those checks anytime in the dashboard. Need to change the minimum file size threshold? Update it in the UI, done. No redeploy, no code changes, no downtime.
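Put together, the backup script from earlier ends up looking like this (paths and endpoint are placeholders): it measures and reports, while the minimum-size threshold lives entirely in the dashboard.

```
BACKUP="/backups/mydb-$(date +%F).sql"
pg_dump mydb > "$BACKUP"

# Report the size in bytes; whether that size is acceptable is decided
# server-side, so tightening the rule never touches this file.
SIZE=$(stat -c %s "$BACKUP")   # GNU stat; BSD/macOS: stat -f %z
curl -fsS -X POST \
  -d "file_size_bytes=${SIZE}" \
  "https://deadmanping.example/ping/YOUR-MONITOR-ID"
```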
The state-aware alerts were a lifesaver too. No more spam - you only get notified when something actually changes (healthy → failed, or failed → recovered).
Try it free with 20 monitors - no credit card needed. Would love to hear what you think!