Servers and cron jobs are easier to write if you can guarantee that only one copy runs at a time. If a script fires up and starts processing a job that another copy of the same script is already working on, bad things happen.
The traditional solution is a bit of a hack: when the script starts, it writes its unique process id (PID) to a magic file somewhere, and deletes the file when it exits. This works somewhat, but fails if the script is killed impolitely -- the "lock" file still exists even though the script is no longer running.
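For concreteness, here is a minimal sketch of that traditional PID-file approach. The file path and helper names are hypothetical, and this sketch deliberately shows the flaw: if the process is killed before release_pidfile() runs, the stale file blocks every future run.

```python
import os
import sys

PIDFILE = '/tmp/myscript.pid'  # hypothetical lock-file location

def acquire_pidfile():
    # If the file exists, assume another copy is running and bail out.
    if os.path.exists(PIDFILE):
        sys.exit('lock file exists -- another copy may be running')
    with open(PIDFILE, 'w') as f:
        f.write(str(os.getpid()))

def release_pidfile():
    # Never reached if the process is killed impolitely -- the flaw.
    os.remove(PIDFILE)
```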
The following trick avoids that failure mode. When the script starts, it registers itself under a unique name in a global namespace held by the OS (an abstract Unix domain socket). If it can't register the name, another copy is already running and it should exit. When the script dies or is killed -- however impolitely -- the OS de-registers the name itself, so the script can run again.
Alas, this trick is Linux-only: abstract socket names are a Linux extension to Unix domain sockets.
import os, socket, sys

def get_lock(process_name=None):
    # prevent multiple copies of this process running at the same time
    if not process_name:
        process_name = os.path.basename(sys.argv[0])
    lock_socket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        # abstract socket: the OS frees the name when the process dies
        lock_socket.bind('\0' + process_name)
    except socket.error:
        sys.exit('%s is already running' % process_name)
    return lock_socket

# keep a reference so the lock socket is not garbage-collected
sys.process_lock = get_lock()
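To see the mechanism in action without spawning two processes, you can bind the same abstract name from two sockets in one process: the second bind fails while the first socket is alive. The name used here is hypothetical; any string will do as long as it starts with a NUL byte.

```python
import socket

name = '\0demo-lock-example'  # hypothetical abstract-socket name

first = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
first.bind(name)  # succeeds: we hold the name

second = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
try:
    second.bind(name)  # fails: the name is already taken
    taken = False
except OSError:
    taken = True

# once `first` is closed (or its process dies), the name is free again
first.close()
third = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
third.bind(name)  # succeeds again
third.close()
```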