Often, crontab scripts are not executed on schedule or as expected. There are numerous reasons for that:
- wrong crontab notation
- permissions problem
- environment variables
This community wiki aims to aggregate the top reasons for crontab scripts not being executed as expected. Write each reason in a separate answer.
Please include one reason per answer - details about why it's not executed - and fix(es) for that one reason.
Please write only cron-specific issues, e.g. commands that execute as expected from the shell but fail when run by cron.
Different environment
Cron passes a minimal set of environment variables to your jobs. To see the difference, add a dummy job like this:
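For example, a temporary entry that dumps cron's environment once a minute (remove it once you have the file):

```
* * * * * env > /tmp/env.output
```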
Wait for /tmp/env.output to be created, then remove the job again. Now compare the contents of /tmp/env.output with the output of env run in your regular terminal.
A common 'gotcha' here is the PATH environment variable being different. Maybe your cron script uses the command somecommand found in /opt/someApp/bin, which you've added to PATH in /etc/environment? cron ignores PATH from that file, so running somecommand from your script will fail when run with cron, but work when run in a terminal. It's worth noting that variables from /etc/environment will be passed on to cron jobs, just not the variables cron specifically sets itself, such as PATH.
To get around that, just set your own PATH variable at the top of the script. E.g.
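A sketch of such a script header, assuming the hypothetical /opt/someApp/bin directory from above (the rest of the PATH is a common default, adjust to taste):

```shell
#!/bin/sh
# Prepend the app's bin directory to a known-good PATH so commands
# resolve the same way under cron as in an interactive shell.
PATH=/opt/someApp/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH
echo "$PATH"
```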
Some prefer to just use absolute paths to all the commands instead. I recommend against that. Consider what happens if you want to run your script on a different system, and on that system the command is in /opt/someAppv2.2/bin instead. You'd have to go through the whole script replacing /opt/someApp/bin with /opt/someAppv2.2/bin, instead of just doing a small edit on the first line of the script.
You can also set the PATH variable in the crontab file, which will apply to all cron jobs. E.g.
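A sketch of a crontab with its own PATH (directories and job are illustrative):

```
PATH=/opt/someApp/bin:/usr/local/bin:/usr/bin:/bin

0 5 * * * somecommand --nightly
```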
My top gotcha: if you forget to add a newline at the end of the crontab file. In other words, the crontab file should end with an empty line.
The relevant section in the man pages for this issue is near the end of man crontab.
Cron daemon is not running. I really screwed up with this some months ago.
Type:
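For example, assuming pgrep is available (the daemon may be named cron, crond, or similar depending on the distribution):

```shell
# Print the PID of the cron daemon, if any; fall back to a message otherwise.
pgrep cron || pgrep crond || echo "no cron process found"
```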
If you see no number, then cron is not running. sudo /etc/init.d/cron start can be used to start cron.
EDIT: Rather than invoking init scripts through /etc/init.d, use the service utility, e.g. sudo service cron start.
The script filenames in cron.d/, cron.daily/, cron.hourly/, etc., should not contain a dot (.), otherwise run-parts will skip them. See run-parts(8).
So, if you have cron scripts backup.sh or analyze-logs.pl in the cron.daily/ directory, you'd best remove the extensions.
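A quick way to see the effect; this sketch assumes a Debian-style run-parts with a --test option, and uses a throwaway /tmp directory for demonstration:

```shell
# Create a demo directory with one executable script whose name contains a dot.
mkdir -p /tmp/cron.daily.demo
printf '#!/bin/sh\necho backup\n' > /tmp/cron.daily.demo/backup.sh
chmod +x /tmp/cron.daily.demo/backup.sh
run-parts --test /tmp/cron.daily.demo   # the dot disqualifies backup.sh
# Drop the extension and the script becomes eligible.
mv /tmp/cron.daily.demo/backup.sh /tmp/cron.daily.demo/backup
run-parts --test /tmp/cron.daily.demo   # now lists the script
```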
In many environments cron executes commands using sh, while many people assume it will use bash.
Suggestions to test or fix this for a failing command:
- Try running the command in sh to see if it works.
- Wrap the command in a bash subshell to make sure it gets run in bash.
- Tell cron to run all commands in bash by setting SHELL=/bin/bash at the top of your crontab.
- If the command is a script, make sure the script contains a shebang such as #!/bin/bash.
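The fixes above might look like this in practice (script path and schedule are illustrative):

```
# In the crontab: run everything under bash instead of sh
SHELL=/bin/bash

# Or wrap just one command in a bash subshell:
0 2 * * * /bin/bash -c '/home/<user>/scripts/nightly.sh'
```

Inside the script itself, a first line of #!/bin/bash ensures the right interpreter when the script is invoked directly.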
I had some issues with the time zones. Cron was running with the fresh-installation time zone. The solution was to restart cron (e.g. sudo service cron restart) after correcting the time zone.
Absolute paths should be used for scripts:
For example, /bin/grep should be used instead of plain grep. This is especially tricky, because the same command will work when executed from the shell. The reason is that cron does not have the same PATH environment variable as the user.
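For instance (log and output paths are illustrative):

```
# works under cron - absolute path:
0 * * * * /bin/grep error /var/log/syslog > /tmp/errors.txt
# may fail under cron - relies on PATH:
0 * * * * grep error /var/log/syslog > /tmp/errors.txt
```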
If your crontab command has a % symbol in it, cron tries to interpret it. So if you were using any command with a % in it (such as a format specification to the date command) you will need to escape it.
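For example, every % passed to date must be escaped, since cron treats an unescaped % as a newline and sends everything after it to the command's standard input (log path is illustrative):

```
# escaped - runs as intended:
0 0 * * * /bin/date +\%Y-\%m-\%d >> /tmp/datestamp.log
```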
That and other good gotchas here:
http://www.pantz.org/software/cron/croninfo.html
Cron is calling a script which is not executable.
By running chmod +x /path/to/script the script becomes executable, which should resolve this issue.
It is also possible that the user's password has expired. Even root's password can expire. You can tail -f /var/log/cron.log and you will see cron fail with 'password expired'. You can set the password to never expire by doing this: passwd -x -1 <username>
In some systems (Debian, Ubuntu) logging for cron is not enabled by default. In /etc/rsyslog.conf or /etc/rsyslog.d/50-default.conf, the cron logging line should be uncommented (e.g. edit with sudo nano /etc/rsyslog.conf).
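On a stock Debian/Ubuntu install the relevant directive typically ships commented out with a leading #; removing the # enables it (exact file and spacing vary by release):

```
cron.*                          /var/log/cron.log
```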
After that, you need to restart rsyslog via sudo service rsyslog restart or sudo systemctl restart rsyslog.
Source: Enable crontab logging in Debian Linux
In some systems (Ubuntu) a separate log file for cron is not enabled by default, but cron-related logs appear in the syslog file. One may use grep CRON /var/log/syslog to view cron-related messages.
If your cronjob invokes GUI apps, you need to tell them what DISPLAY they should use.
Example: Firefox launch with cron. Your script should contain export DISPLAY=:0 somewhere.
Permissions problems are quite common, I'm afraid.
Note that a common workaround is to execute everything using root's crontab, which sometimes is a Really Bad Idea. Setting proper permissions is definitely a largely overlooked issue.
Insecure cron table permissions
A cron table is rejected if its permissions are insecure. The problem is solved by tightening the permissions, e.g. running chmod 600 on the crontab file and reloading it with crontab <file>.
Script is location-sensitive. This is related to always using absolute paths in a script, but not quite the same. Your cron job may need to cd to a specific directory before running; e.g. a rake task on a Rails application may need to be in the application root for Rake to find the correct task, not to mention the appropriate database configuration, etc.
So a crontab entry of
23 3 * * * /usr/bin/rake db:session_purge RAILS_ENV=production
would be better as
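For instance (the application path is illustrative):

```
23 3 * * * cd /var/www/production/current && /usr/bin/rake db:session_purge RAILS_ENV=production
```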
Or, to keep the crontab entry simpler and less brittle:
23 3 * * * /home/<user>/scripts/session-purge.sh
with the following code in /home/<user>/scripts/session-purge.sh:
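A minimal sketch of such a wrapper script (the application path is an assumption):

```
#!/bin/bash
# Change to the app root so rake picks up the right Rakefile and config.
cd /var/www/production/current || exit 1
/usr/bin/rake db:session_purge RAILS_ENV=production
```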
Crontab specs which worked in the past can break when moved from one crontab file to another. Sometimes the reason is that you've moved the spec from a system crontab file to a user crontab file or vice-versa.
The cron job specification format differs between users' crontab files (/var/spool/cron/username or /var/spool/cron/crontabs/username) and the system crontabs (/etc/crontab and the files in /etc/cron.d).
The system crontabs have an extra field 'user' right before the command-to-run.
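Side by side (schedule and command are illustrative; the username matches the error example below):

```
# user crontab (crontab -e) - no user field:
0 1 * * * /usr/local/bin/nightly-backup
# /etc/crontab or /etc/cron.d/ - user field before the command:
0 1 * * * george /usr/local/bin/nightly-backup
```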
This will cause errors stating things like george; command not found when you move a command out of /etc/crontab or a file in /etc/cron.d into a user's crontab file.
Conversely, cron will deliver errors like /usr/bin/restartxyz is not a valid username or similar when the reverse occurs.
Cron script is invoking a command with the --verbose option
I had a cron script fail on me because I was on autopilot while typing the script and I included the --verbose option. The script ran fine when executing from the shell, but failed when running from crontab, because the verbose output goes to stdout when run from the shell, but nowhere when run from crontab. The easy fix was to remove the 'v' from the options.
The most frequent reason I have seen cron fail is an incorrectly stated schedule. It takes practice to specify a job scheduled for 11:15 pm as 15 23 * * * instead of * * 11 15 * or 11 15 * * *. Day of the week for jobs after midnight also gets confused: M-F is 2-6 after midnight, not 1-5. Specific dates are usually a problem, as we rarely use them: * * 3 1 * is not March 3rd. If you are not sure, check your cron schedules online at https://crontab.guru/.
If you work with different platforms, using unsupported options such as 2/3 in time specifications can also cause failures. This is a very useful option but not universally available. I have also run across issues with lists like 1-5 or 1,3,5.
Using unqualified paths has also caused problems. The default path is usually /bin:/usr/bin, so only standard commands will run. These directories usually don't have the desired command. This also affects scripts using non-standard commands. Other environment variables can also be missing.
Clobbering an existing crontab entirely has caused me problems. I now load from a file copy. The copy can be recovered from the existing crontab using crontab -l if it gets clobbered. I keep the copy of the crontab in ~/bin. It is commented throughout and ends with the line # EOF. It is reloaded daily from a crontab entry.
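Such a reload entry might look like this (time and path are assumptions; the trick is that the saved file's bang path runs crontab on the file itself):

```
# ~/bin/crontab begins with '#!/usr/bin/crontab', so executing it reloads it:
26 6 * * * ~/bin/crontab
```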
The reload relies on the saved crontab file being executable, with a bang path that runs crontab. Some systems require running crontab in the command and specifying the file. If the directory is network-shared, then I often use crontab.$(hostname) as the name of the file; this will eventually correct cases where the wrong crontab is loaded on the wrong server.
Using the file provides a backup of what the crontab should be, and allows temporary edits (the only time I use crontab -e) to be backed out automatically. There are headers available which help with getting the scheduling parameters right; I have added them when inexperienced users would be editing a crontab.
Rarely, I have run into commands that require user input. These fail under crontab, although some will work with input redirection.
If you are accessing an account via SSH keys, it is possible to log in to the account but not notice that the password on the account is locked (e.g. due to expiry or invalid password attempts).
If the system is using PAM and the account is locked, this can stop its cronjob from running. (I've tested this on Solaris, but not on Ubuntu.)
You may find related messages in /var/adm/messages.
All you should need to do is run passwd -u <username> as root to unlock the account, and the crontab should work again.
If you have a command that redirects its output to a file, e.g. /tmp/output, and it doesn't work and you can't see any output, it doesn't necessarily mean cron isn't working. The script could be broken, with its output going to stderr, which doesn't get passed to /tmp/output. Check this isn't the case by capturing stderr as well (redirect it with 2>&1), to see if this helps you catch your issue.
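For example (script path illustrative):

```
# stdout only - errors on stderr vanish:
* * * * * /home/<user>/script.sh > /tmp/output
# capture stderr too:
* * * * * /home/<user>/script.sh > /tmp/output 2>&1
```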
Docker alert
If you're using Docker, it's worth adding that I couldn't manage to make cron run in the background. To run a cron job inside the container, I used supervisor and ran cron -f, together with the other process.
Edit: Another issue - I also didn't manage to get it to work when running the container with host networking. See this issue: https://github.com/phusion/baseimage-docker/issues/144
I was writing an install shell script that creates another script to purge old transaction data from a database. As part of the task it had to configure a daily cron job to run at an arbitrary time, when the database load was low.
I created a file mycronjob with the cron schedule, username & the command, and copied it to the /etc/cron.d directory. My two gotchas:
- the mycronjob file had to be owned by root to run
- I had to set the file's permissions to 644; 664 would not run
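A sketch of such a drop-in file (schedule, user, and command are assumptions):

```
# /etc/cron.d/mycronjob - owned by root, mode 644, with the username field
30 1 * * * dbadmin /usr/local/bin/purge-old-transactions
```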
A permission problem will show up in /var/log/syslog, whether the offending entry is in the /etc/crontab file itself or in a file placed under /etc/cron.d.
Line written in a way crontab doesn't understand. It needs to be correctly written. Here's CrontabHowTo.
Cron daemon could be running, but not actually working. Try restarting cron, e.g. sudo service cron restart.
Writing to cron via crontab -e with the username argument in a line. I've seen examples of users (or sysadmins) writing their shell scripts and not understanding why they don't automate. The 'user' argument exists in /etc/crontab, but not in the user-defined files. So a personal crontab entry omits the username, whereas an /etc/crontab entry includes it before the command.
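Side by side (paths and schedule are illustrative):

```
# personal file (crontab -e) - no username:
0 4 * * * /home/<user>/scripts/cleanup.sh
# /etc/crontab - note the username field:
0 4 * * * <user> /home/<user>/scripts/cleanup.sh
```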
So, why would you do the latter? Well, depending on how you want to set your permissions, this can become very convoluted. I've written scripts to automate tasks for users who don't understand the intricacies, or don't want to bother with the drudgery. By setting permissions to --x------, I can make the script executable without them being able to read (and perhaps accidentally change) it. However, I might want to run this command with several others from one file (thus making it easier to maintain) but make sure file output is assigned the right owner. Doing so (at least in Ubuntu 10.10) breaks on both the inability to read the file and to execute it, plus the aforementioned issue with putting periods in /etc/crontab (which, funnily enough, causes no error when going through crontab -e).
As an example, I've seen instances of sudo crontab -e used to run a script with root permissions, with a corresponding chown username file_output in the shell script. Sloppy, but it works. IMHO, the more graceful option is to put it in /etc/crontab with username declared and proper permissions, so file_output goes to the right place and owner.
Building off what Aaron Peart mentioned about verbose mode, sometimes scripts not in verbose mode will initialize but not finish if the default behavior of an included command is to output a line or more to the screen once the proc starts. For example, I wrote a backup script for our intranet which used curl, a utility that downloads or uploads files to remote servers, and is quite handy if you can only access said remote files through HTTP. Using 'curl http://something.com/somefile.xls' was causing a script I wrote to hang and never complete because it spits out a newline followed by a progress line. I had to use the silent flag (-s) to tell it not to output any information, and write in my own code to handle if the file failed to download.
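A sketch of the fix just described, using the URL from the example above (error handling and output path are illustrative):

```
# -s suppresses curl's progress output, which can stall non-interactive runs;
# -f makes curl return a non-zero exit code on HTTP errors so we can react.
if ! curl -sf -o /tmp/somefile.xls http://something.com/somefile.xls; then
    echo "download failed" >&2
fi
```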
Although you can define environment variables in your crontab, you're not in a shell script, so constructions that reference other variables won't work. This is because variables are not interpreted in the crontab: all values are taken literally. And this is the same if you omit the brackets. So your commands won't run, and your log files won't be written.
Instead you must define all your environment variables straight, writing each value out in full.
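For example (directory names are assumptions):

```
# does NOT work - cron stores the value literally; ${HOME} is never expanded:
LOGDIR=${HOME}/log
# works - write the value out in full:
LOGDIR=/home/<user>/log

59 23 * * * touch ${LOGDIR}/heartbeat 2>> ${LOGDIR}/error.log
```

Note the variable is still usable in the command line itself, since the shell that cron spawns expands it there; only the crontab's own assignments are literal.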
When a task is run within cron, stdin is closed. Programs that act differently based on whether stdin is available or not will behave differently between the shell session and in cron.
An example is the program goaccess for analysing web server log files. Invoking it with the log file named as an argument does NOT work in cron: goaccess shows the help page instead of creating the report. In the shell this can be reproduced by running it with stdin redirected from /dev/null. The fix for goaccess is to make it read the log from stdin instead of reading from the file, so the solution is to change the crontab entry accordingly.
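Illustratively (log path, output path, and flags are assumptions and may vary by goaccess version):

```
# fails under cron - goaccess sees the closed stdin and prints its help:
0 5 * * * goaccess -f /var/log/nginx/access.log -o /var/www/report.html
# reproduce the failure in a shell by closing stdin:
goaccess -f /var/log/nginx/access.log -o report.html < /dev/null
# fix - feed the log on stdin instead of naming it with -f:
0 5 * * * goaccess -o /var/www/report.html - < /var/log/nginx/access.log
```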
In my case, cron and crontab had different owners, and jobs were not running.
Basically I had to run cron-config and answer the questions correctly. There is a point where I was required to enter my Win7 user password for my 'User' account. From the reading I did, it looks like this is a potential security issue, but I am the only administrator on a single home network, so I decided it was OK. Running cron-config again and answering the prompts correctly is what got me going.
On my RHEL7 servers, root cron jobs would run, but user jobs would not. I found that without a home directory, the jobs won't run (but you will see good errors in /var/log/cron). When I created the home directory, the problem was solved.
If you edited your crontab file using a Windows editor (via Samba or something) and it replaced the newlines with \r\n, or left stray \r characters, cron won't run.
Also, if you're using /etc/cron.d/* and one of those files has a \r in it, cron will move through the files and stop when it hits a bad file. Not sure if that's the problem?
Use cat -v on the file to check: carriage returns show up as ^M. dos2unix (or tr -d '\r') can fix the file.