Deferred Signals
In Perls before 5.7.3, installing Perl code to deal with signals exposed
you to two dangers. First, few system library functions are re-entrant:
if the signal interrupts Perl while it is executing one function (such as
malloc(3) or printf(3)), and your signal handler then calls the same
function again, you could get unpredictable behaviour--often, a core
dump. Second, Perl itself is not re-entrant at the lowest levels: if the
signal interrupts Perl while Perl is changing its own internal data
structures, similarly unpredictable behaviour may result.
Knowing this, there were two things you could do: be paranoid or be
pragmatic. The paranoid approach was to do as little as possible in your
signal handler: set an existing integer variable that already has a
value, and return. Even that doesn't help you if you're in a slow system
call, which will simply restart; in that case you have to die in order
to longjmp(3) out of the handler. Even this is a little cavalier for the
true paranoiac, who avoids die in a handler because the system is out to
get you. The pragmatic approach was to say ``I know the risks, but
prefer the convenience'', to do anything you wanted in your signal
handler, and to be prepared to clean up core dumps now and again.
In Perl 5.7.3 and later, signals are ``deferred'' to avoid these
problems. That is, when the signal is delivered to the process by the
system (to the C code that implements Perl), a flag is set and the
C-level handler returns immediately. Then, at strategic ``safe'' points
in the Perl interpreter (e.g. when it is about to execute a new opcode),
the flags are checked and the Perl-level handler from %SIG is executed.
The ``deferred'' scheme allows much more flexibility in the coding of
signal handlers, because we know the Perl interpreter is in a safe
state, and that we are not inside a system library function when the
handler is called. However, the implementation does differ from previous
Perls in the following ways:
Interrupting IO. When a signal is delivered, the operating system breaks
into IO operations like read(2) (used to implement Perl's <> operator).
On older Perls the handler was called immediately (and as read is not
``unsafe'' this worked well). With the ``deferred'' scheme the handler
is not called immediately, and if Perl is using the system's stdio
library, that library may restart the read without returning to Perl and
giving it a chance to call the %SIG handler. If this happens on your
system, the solution is to use the :perlio layer to do IO--at least on
those handles which you want to be able to break into with signals. (The
:perlio layer checks the signal flags and calls %SIG handlers before
resuming an IO operation.) Note that the default in Perl 5.7.3 and later
is to automatically use the :perlio layer.
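For example, if reads on a particular handle are not being interrupted,
you can push the :perlio layer onto that handle explicitly. This is a
sketch; the handler, filename, and message are illustrative:

```perl
# With deferred signals, this handler runs at a safe point in the
# interpreter rather than in the middle of a C library call.
$SIG{INT} = sub { die "interrupted\n" };

open my $fh, '<', '/dev/tty' or die "open: $!";
binmode $fh, ':perlio';     # ensure the signal-aware :perlio layer is used

# A Ctrl-C during the blocking read now reaches the %SIG handler,
# which die()s out of the eval instead of letting stdio restart the read.
my $line = eval { <$fh> };
print "got: $line" if defined $line;
```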
Note that some networking library functions like gethostbyname() are
known to have their own implementations of timeouts which may conflict
with your timeouts. If you are having problems with such functions, you
can try using the POSIX sigaction() function, which bypasses Perl's safe
signals (note that this means subjecting yourself to possible memory
corruption, as described above). Instead of setting $SIG{ALRM}, try
something like the following:
    use POSIX;
    sigaction SIGALRM, new POSIX::SigAction sub { die "alarm\n" }
        or die "Error setting SIGALRM handler: $!\n";
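Such a handler is typically combined with alarm() and eval to impose a
timeout on a call that has its own internal timeouts. This is a sketch;
the hostname and the 10-second timeout are illustrative:

```perl
use POSIX;

# Install an immediate (unsafe) ALRM handler, bypassing deferred signals.
sigaction SIGALRM, POSIX::SigAction->new(sub { die "alarm\n" })
    or die "Error setting SIGALRM handler: $!\n";

my $result = eval {
    alarm(10);                          # raise SIGALRM after 10 seconds
    my @host = gethostbyname("example.com");
    alarm(0);                           # cancel the pending alarm
    \@host;
};
if ($@) {
    die $@ unless $@ eq "alarm\n";      # propagate unexpected errors
    warn "lookup timed out\n";
}
```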
Restartable system calls. On systems that supported it, older versions
of Perl used the SA_RESTART flag when installing %SIG handlers, so
restartable system calls would continue rather than returning when a
signal arrived. In order to deliver deferred signals promptly, Perl
5.7.3 and later do not use SA_RESTART. Consequently, restartable system
calls can fail (with $! set to EINTR) in places where they previously
would have succeeded.
Note that the default :perlio layer will retry read, write and close as
described above, and that interrupted wait and waitpid calls will always
be retried.
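On handles where you bypass :perlio (for example with sysread), you may
therefore need to handle EINTR yourself. A sketch, assuming $fh is an
already-opened filehandle:

```perl
use Errno qw(EINTR);

# Retry a sysread that may be interrupted by a signal before any
# data arrives; any other failure falls through to the die below.
my ($buf, $n);
do {
    $n = sysread($fh, $buf, 4096);    # may fail with $! == EINTR
} until (defined $n or $! != EINTR);  # retry only interrupted reads
die "sysread: $!" unless defined $n;
```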
Signals triggered by operating system state. On some operating systems
certain signal handlers are supposed to ``do something'' before
returning. One example is CHLD (or CLD), which indicates that a child
process has completed; on some operating systems the handler is assumed
to wait for the completed child process. On such systems the deferred
signal scheme will not work for those signals (it does not do the
wait). Again the failure will look like a loop, as the operating system
will re-issue the signal because there are un-waited-for completed child
processes.
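A common workaround is to do the wait yourself in the CHLD handler,
reaping every completed child with a non-blocking waitpid. A sketch:

```perl
use POSIX qw(WNOHANG);

$SIG{CHLD} = sub {
    # Reap all completed children so the operating system does not
    # keep re-issuing the signal for un-waited-for processes.
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        # $? holds the child's exit status here if you need it.
    }
};
```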
If you want the old signal behaviour back, regardless of the possible
memory corruption, set the environment variable PERL_SIGNALS to
"unsafe" (a feature new in Perl 5.8.1).
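The variable is consulted when the interpreter starts, so it must be set
in the environment before running perl rather than from within the
script. For example (the script name is illustrative):

```shell
# Run a script with the pre-5.8.0 immediate, unsafe signal delivery.
PERL_SIGNALS=unsafe perl your_script.pl
```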