
Your Cron Job Isn't Running. Here's How to Find Out Why.

A debugging checklist for when the schedule said it should have fired and your inbox is suspiciously quiet.

Say you've got a backup script that's been running every night for fourteen months. One Monday morning somebody on the team goes looking for Friday's snapshot and can't find it. They check Saturday. Then Sunday. The job hadn't fired on any of those nights. Nobody saw anything because cron, by default, doesn't tell you when a job doesn't run. It just sits there, not running.

This is a checklist for when you're in that spot. The job was scheduled. The job didn't run. Or maybe it did run, technically, but didn't actually do the thing. You're trying to figure out which.

I'll go through it the way I'd think about it: fastest checks first, weirdest gotchas last.

1. Is cron actually running on the box?

Cron is a daemon. Daemons stop. Package upgrades stop them. “Cleaning up” stops them. A config change that needed a restart and never got one stops them. The box rebooted and the service didn't come back up.

systemctl status cron       # Debian/Ubuntu
systemctl status crond      # CentOS/RHEL/Amazon Linux
service cron status         # the older one

If it's not running, that's your answer. Start it back up. Then go figure out why it stopped.
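On systemd boxes, the service's own log usually tells you why it stopped. These are diagnostics to run on the affected machine; the unit name varies by distro, same as above:

```shell
journalctl -u cron --since "3 days ago"   # Debian/Ubuntu; use crond on RHEL-family
uptime                                    # a short uptime means the box rebooted
```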

On macOS, cron was deprecated in favor of launchd. If you're on a Mac and your cron job mysteriously stopped firing, double-check whether cron is even what you should be using.

2. Are you reading the right user's crontab?

Every user has their own crontab. So does root. So does any service user. crontab -l only shows the crontab for whoever you're currently logged in as. Not root's. Not www-data's. Not the user who actually owns the script.

crontab -l                  # mine
sudo crontab -l             # root's
sudo crontab -u www-data -l # www-data's

I have lost more time to this than I'd like to admit. You SSH in as your normal user, run crontab -l, see three jobs, and conclude the backup job was never scheduled. Meanwhile it's sitting in /var/spool/cron/crontabs/root exactly as expected.

There's also /etc/crontab and /etc/cron.d/ for system-level cron jobs. The format there has an extra username field. Worth a look.
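For reference, a line in /etc/crontab or a file in /etc/cron.d/ looks like this. The sixth field is the user the command runs as; the job path here is made up:

```shell
# m  h  dom  mon  dow  user  command
17   3  *    *    *    root  /opt/jobs/backup.sh
```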

3. Is the schedule expression actually what you think it is?

The five fields are minute, hour, day-of-month, month, day-of-week. That last field is where most people get tripped up.

0 0 * * 1     # midnight every Monday
0 0 1 * *     # midnight on the 1st of every month

Those look almost identical and mean very different things.

A few specific traps:

  • Day-of-week 0 vs 7. Most cron implementations accept both for Sunday. Some don't.
  • */15 granularity. This works for “every 15 minutes” because 60 is divisible by 15. For something like */7, it does not mean “every 7 minutes.” It means “every minute that's a multiple of 7”: :00, :07, :14, :21, :28, :35, :42, :49, :56, then back to :00 the next hour with a 4-minute gap. Cron is not aware of clock rollovers in the way you'd assume.
  • The percent sign. % is special in crontab: everything after the first unescaped % is fed to the command's stdin, and any further % signs become newlines. If your command has date +%Y-%m-%d in it, either escape every % as \% or move the command into a script and have cron call the script.
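So a filename built from date needs every % escaped before cron will pass the command through intact. The paths in this crontab line are illustrative:

```shell
0 3 * * * tar czf /backups/site-$(date +\%Y-\%m-\%d).tar.gz /var/www
```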

If you're not sure your expression means what you think it means, paste it into a translator. We have one at crondoctor.com/cron-format. There are plenty of others. Read what it says back to you in English. Sometimes you go “huh, that's not what I meant.”

4. What timezone is the box in?

This one bites people moving between dev and prod, between cloud regions, between containers. Cron uses the system timezone. If your box is in UTC and you wrote 0 9 * * * thinking “9am my time,” congratulations, you've scheduled something for 9am UTC.

date              # what cron sees
timedatectl       # the official answer on systemd boxes

Most production servers run in UTC. Most developers do not think in UTC. The mismatch is a regular source of “the job ran four hours late” tickets.

Daylight saving makes this worse. If you schedule a job for 2:30am every day in a timezone that observes DST, twice a year that job runs zero times or twice. You probably want UTC for anything where exact timing matters.
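Some cron implementations (cronie, the default on RHEL-family and many current distros) support a CRON_TZ variable that pins a crontab's schedule to a timezone regardless of the system setting. Check man 5 crontab on your box before relying on it; classic Vixie cron ignores it. The job path below is made up:

```shell
CRON_TZ=UTC
0 9 * * * /opt/jobs/report.sh    # 9am UTC no matter what the box is set to
```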

5. Does the script run when you run it manually?

Half of “my cron job isn't running” turns out to be “my cron job ran, but it failed in the first three seconds, and I didn't see the error because cron's stdout went somewhere I'm not looking.” Try running it yourself first.

But run it the way cron will. Cron has a minimal environment. No ~/.bashrc. No shell aliases. A stripped PATH that often doesn't include /usr/local/bin, where you may have installed things via Homebrew or a package manager. No HOME unless you set it. No LANG. None of your custom environment variables.

env -i HOME="$HOME" PATH=/usr/bin:/bin bash -c './your-script.sh'

That runs the script in something close to cron's environment. If it fails this way but works in your normal shell, the bug is environmental. The fix is usually one of:

  • Hardcode full paths to anything you call (/usr/local/bin/python3 instead of python3).
  • Set PATH explicitly at the top of your crontab, before any jobs.
  • Source a clean environment file at the start of your script.
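The second fix looks like this in practice. A PATH line at the top of the crontab applies to every job below it, and a temporary env-dump job shows you exactly what cron's environment contains: let it fire once, read the file, then delete the line.

```shell
PATH=/usr/local/bin:/usr/bin:/bin

# Temporary debugging job: capture cron's actual environment for inspection.
* * * * * env > /tmp/cron-env.txt
```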

6. Where does its output go?

By default, cron mails stdout and stderr to the user's local mail. On most cloud servers, mail is not configured. So that output goes nowhere. The first thing you do for any cron job worth running is redirect its output somewhere you can read it.

0 3 * * * /opt/jobs/backup.sh >> /var/log/jobs/backup.log 2>&1

>> appends instead of overwriting; 2>&1 sends stderr to wherever stdout is pointing, which is why it has to come after the >>. Without that redirect, you're flying blind. The job could fail every night with a clear error message and you'd never know.

While you're in there, set up log rotation on those files. They will grow.
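A minimal logrotate sketch for that, assuming the log path from above; drop it in /etc/logrotate.d/ and adjust the retention to taste:

```
/var/log/jobs/*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```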

7. Permissions, SELinux, AppArmor.

Cron will refuse to run a script that isn't executable. chmod +x first.

If you're on a system with SELinux (RHEL, CentOS, Fedora, Amazon Linux 2) or AppArmor (Ubuntu), a cron job can be blocked from accessing files even when the file permissions look fine. The symptom is the script appearing to start and immediately die. Sometimes with a permission error, sometimes silently. Check /var/log/audit/audit.log (SELinux) or dmesg (AppArmor) for denials.
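The denial checks, concretely. These run against the live system; ausearch comes with the audit package and needs root:

```shell
sudo ausearch -m avc -ts recent            # SELinux denials
sudo dmesg | grep -i 'apparmor.*denied'    # AppArmor denials
```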

Rare. But it happens. The kind of thing you don't suspect until you've ruled out everything else.

8. Disk full or inode exhaustion.

When the disk is full, cron may fail to spool a job, fail to write a log entry, or fail to write the temp file the job creates. You may not see any error. The job just doesn't go.

df -h     # space
df -i     # inodes

I have seen perfectly healthy servers run out of inodes (not bytes, inodes) because some other job was creating a million tiny files in /tmp and never cleaning them up. Plenty of free space on disk. No inode entries left to allocate. Cron silently failed for hours.
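If df -i says you're out, a count of directory entries is a rough but serviceable proxy for inode usage when you're hunting for the hog. A sketch, not exact accounting:

```shell
#!/bin/sh
# Count filesystem entries under a directory: a rough proxy for inode use.
# -xdev keeps the count on one filesystem; errors (unreadable dirs) are dropped.
count_entries() {
    find "$1" -xdev 2>/dev/null | wc -l
}

count_entries /tmp
count_entries /var/tmp
```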

9. The job IS running. It's just failing silently.

This is the hardest case, and it's the one the opener describes. The job runs. Cron is happy. The script even exits 0. But the work didn't actually happen.

Maybe cp was supposed to copy a file and didn't, because the source path had a typo introduced last week. Maybe pg_dump was supposed to back up a database and got an empty result because the password got rotated and the script kept running with --quiet. Maybe the script ran, did its work, but a downstream service was down and nothing picked up the artifact, so functionally nothing happened. From cron's perspective: success. From the data's perspective: it isn't there.

This is why steps 1 through 8 aren't enough on their own. They catch the cases where cron didn't fire or the script crashed early. They don't catch the case where the script reported success and lied.

The only fix for that last one is to make the script tell you when it actually finished its work. Not exit 0. Not write to a log. Tell you, in the form of a heartbeat to something that's listening. That's what cron monitoring services exist for. CronDoctor does it. So do Cronitor, Healthchecks.io, Better Stack. The point isn't which one you pick. It's that you have one. A curl line at the end of your script saying “I'm done and the data is in place” is the difference between “the cron didn't run” mysteries on Monday and finding out within five minutes when something went wrong on Friday.
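As a sketch, here's a wrapper that pings only when the job actually succeeded. Everything in it is illustrative: the ping URL is whatever your monitoring service hands you, and the function name is mine.

```shell
#!/bin/sh
# Run a job command; send a heartbeat only if it exits 0.
# The ping URL is a placeholder -- substitute the one your monitor gives you.
run_with_heartbeat() {
    job_cmd=$1
    ping_url=$2
    if sh -c "$job_cmd"; then
        # -f: treat HTTP errors as failures; -sS: quiet, but keep real errors.
        curl -fsS --retry 3 "$ping_url" > /dev/null
    else
        echo "job failed; not pinging" >&2
        return 1
    fi
}
```

The crontab line then calls a script that ends with the ping, and silence on the monitoring side means failure, not success.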

If you're in the “I check the logs once a quarter” stage of cron monitoring, that's a problem. The data won't be there to check the day you really need it.

CronDoctor is a heartbeat monitor for cron jobs. It catches the silent-success case in step 9 — the one a checklist can't catch on its own. First job is free, $2/month each after that.


Written by Brad Wiederholt, founder of Huladyne Labs and builder of CronDoctor.