Generic solution to prevent a long cron job from running in parallel?

I am looking for a simple and generic solution that would allow you to execute any script or application from crontab and prevent it from running twice in parallel.
The solution should be independent of the executed command.
I assume it should look like lock && (command ; unlock), where lock returns false if another lock is already held.
The second part means: if the lock was acquired, run the command and release the lock after the command has finished, even if it exits with an error.
Tags: bash, cron
asked May 25 '12 at 9:03 by sorin
7 Answers
Accepted answer (30 votes), answered Jun 3 '12 at 18:14 by andrewsomething:
Take a look at the run-one package. From the manpage for the run-one command:
run-one is a wrapper script that runs no more than one unique instance of some command with a unique set of arguments.
This is often useful with cronjobs, when you want no more than one copy running at a time.
Like time or sudo, you just prepend it to the command. So a cronjob could look like:
*/60 * * * * run-one rsync -azP $HOME example.com:/srv/backup
For more information and background, check out the blog post introducing it by Dustin Kirkland.
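If the run-one package is not available, a similar effect can be approximated with flock(1) from util-linux and a lock file derived from the command line. This is only a rough sketch of the idea, not how run-one itself is implemented; the wrapper name, lock directory and hashing scheme are assumptions:
#!/bin/bash
# run-one-ish: start the given command only if no other copy started
# through this wrapper is still holding the corresponding lock.
# The lock file name is derived from a hash of the full command line,
# so different commands (or different arguments) get different locks.
lockdir="${XDG_RUNTIME_DIR:-/tmp}"
lockfile="$lockdir/run-one-ish-$(printf '%s' "$*" | md5sum | cut -d' ' -f1).lock"

# flock -n fails immediately instead of waiting if the lock is taken;
# the lock is released automatically when the command exits or dies.
exec flock -n "$lockfile" "$@"
Put that wrapper somewhere on the PATH and prepend it to the cron command, just like the run-one call above.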
Answer (4 votes), answered May 25 '12 at 10:17 by OrangeTux:
A very simple way of setting up a lock:
if mkdir /var/lock/mylock; then
echo "Locking succeeded" >&2
else
echo "Lock failed - exit" >&2
exit 1
fi
A script that wants to run needs to create the lock first. If the lock already exists, another script is busy, so the first script can't run. If the directory does not exist, no script has acquired the lock, so the current script acquires it. When the script has finished, the lock needs to be released by removing the directory.
For more information about locking in bash, check this page.
You'll also want an EXIT trap that removes the lock on exit: echo "Locking succeeded" >&2; trap 'rm -rf /var/lock/mylock' EXIT
– geirha, May 25 '12 at 10:23
Ideally you'd want to use advisory flock from a process that runs the command you want as a subtask. That way if they all die the flock is released automatically, which using the presence of a lock file doesn't do. Using a network port would work in a similar way - though that's a way smaller namespace, which is a problem.
– Alex North-Keys, Mar 19 '15 at 16:45
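Putting the answer together with the EXIT trap suggested in the comment above, a minimal self-contained sketch could look like this (the lock path and the final command are placeholders):
#!/bin/bash
lock=/var/lock/mylock

if mkdir "$lock" 2>/dev/null; then
    echo "Locking succeeded" >&2
    # Remove the lock when the script exits, even on error or a signal.
    trap 'rm -rf "$lock"' EXIT
else
    echo "Lock failed - exit" >&2
    exit 1
fi

# ... the actual long-running work goes here ...
your_long_running_command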
Answer (3 votes), answered Oct 1 '14 at 19:52 by Michael Kowhan:
No need to install some fancy package:
#!/bin/bash
pgrep -xf "$*" > /dev/null || "$@"
It's faster to write that script yourself than to run "apt-get install", isn't it?
You might want to add "-u $(id -u)" to the pgrep to check for instances run by the current user only.
this does not guarantee a single instance. two scripts can pass to the other side of the || operator at the same time, before either has had a chance to start the script.
– Sedat Kapanoglu, Aug 9 '16 at 20:45
@SedatKapanoglu Granted, that script is not race-condition-proof, but the original question was about long-running cron jobs (which are started at most once a minute). If your system needs more than a minute to create the process, you have other issues. However, if needed for some other reason, you could use flock(1) to protect the above script against race conditions.
– Michael Kowhan, Aug 10 '16 at 22:59
I used this, but for a bash script that should check itself. The code is this: v=$(pgrep -xf "/bin/bash $0 $@") [ "$v/$BASHPID/" != "" ] && exit 2
– ahofmann, Jan 19 '17 at 13:14
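For example, assuming the two-line script above is saved as /usr/local/bin/once (the name and path are made up here) and marked executable, a crontab entry could look like this:
# Skip the run if an identical command line is already active.
*/15 * * * * /usr/local/bin/once rsync -azP $HOME example.com:/srv/backup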
Answer (3 votes), answered Mar 21 '13 at 20:39 by DNA:
See also Tim Kay's solo, which performs locking by binding a port on a loopback address unique to the user:
http://timkay.com/solo/
In case his site goes down:
Usage:
solo -port=PORT COMMAND
where
PORT some arbitrary port number to be used for locking
COMMAND shell command to run
options
-verbose be verbose
-silent be silent
Use it like this:
* * * * * solo -port=3801 ./job.pl blah blah
Script:
#!/usr/bin/perl -s
#
# solo v1.7
# Prevents multiple cron instances from running simultaneously.
#
# Copyright 2007-2016 Timothy Kay
# http://timkay.com/solo/
#
# It is free software; you can redistribute it and/or modify it under the terms of either:
#
# a) the GNU General Public License as published by the Free Software Foundation;
# either version 1 (http://dev.perl.org/licenses/gpl1.html), or (at your option)
# any later version (http://www.fsf.org/licenses/licenses.html#GNUGPL), or
#
# b) the "Artistic License" (http://dev.perl.org/licenses/artistic.html), or
#
# c) the MIT License (http://opensource.org/licenses/MIT)
#
use Socket;
alarm $timeout if $timeout;
$port =~ /^\d+$/ or $noport or die "Usage: $0 -port=PORT COMMAND\n";
if ($port)
{
    # To work with OpenBSD: change to
    #     $addr = pack("CnC", 127, 0, 1);
    # but make sure to use different ports across different users.
    # (Thanks to www.gotati.com .)
    $addr = pack("CnC", 127, $<, 1);
    print "solo: bind ", join(".", unpack("C4", $addr)), ":$port\n" if $verbose;
    $^F = 10;   # unset close-on-exec
    socket(SOLO, PF_INET, SOCK_STREAM, getprotobyname('tcp')) or die "socket: $!";
    bind(SOLO, sockaddr_in($port, $addr)) or $silent ? exit : die "solo($port): $!\n";
}
sleep $sleep if $sleep;
exec @ARGV;
Answer (1 vote), answered Jun 6 at 9:58 by styrofoam fly:
You need a lock. run-one does the job, but you may also want to look into flock from the util-linux package.
It is a standard package provided by the kernel developers, allows for more customization than run-one, and is still very simple.
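For example, a crontab entry using flock's non-blocking mode might look like this (the lock file path and the command are arbitrary examples):
# -n: give up immediately if a previous run still holds the lock,
# so overlapping invocations are simply skipped instead of queued.
*/10 * * * * flock -n /var/lock/nightly-rsync.lock rsync -azP $HOME example.com:/srv/backup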
Answer (0 votes), answered May 14 '15 at 15:37 by domih:
A simple solution from bash-hackers.org that worked for me was using mkdir. It is an easy way to make sure that only one instance of your program is running. Create a directory with mkdir .lock, which returns
true if the creation was successful and
false if the lock directory already exists, indicating that one instance is currently running.
So this simple snippet does all the locking logic:
if mkdir .lock; then
echo "Locking succeeded"
eval startYourProgram.sh ;
else
echo "Lock file exists. Program already running? Exit. "
exit 1
fi
echo "Program finished, Removing lock."
rm -r .lock
Answer (0 votes), answered Jan 19 '17 at 13:20 by ahofmann:
This solution is for a bash script that needs to check itself:
v=$(pgrep -xf "/bin/bash $0 $*")
[ "${v/$BASHPID/}" != "" ] && exit 0