Language clean ups

This commit is contained in:
Michael Kerrisk 2007-06-14 21:11:22 +00:00
parent 8666f20911
commit fc15f317eb
1 changed file with 75 additions and 74 deletions


@ -18,7 +18,7 @@
.\"
.\" Davide Libenzi <davidel@xmailserver.org>
.\"
.TH EPOLL 7 2002-10-23 "Linux" "Linux Programmer's Manual"
.TH EPOLL 7 2007-06-22 "Linux" "Linux Programmer's Manual"
.SH NAME
epoll \- I/O event notification facility
.SH SYNOPSIS
@ -27,8 +27,8 @@ epoll \- I/O event notification facility
.B epoll
is a variant of
.BR poll (2)
that can be used either as Edge or Level Triggered interface and scales
well to large numbers of watched fds.
that can be used either as an edge-triggered or a level-triggered
interface and scales well to large numbers of watched file descriptors.
Three system calls are provided to
set up and control an
.B epoll
@ -45,36 +45,36 @@ Interest for certain file descriptors is then registered via
.BR epoll_ctl (2).
Finally, the actual wait is started by
.BR epoll_wait (2).
.SS Level Triggered and Edge Triggered
.SS Level-Triggered and Edge-Triggered
The
.B epoll
event distribution interface is able to behave both as Edge Triggered
( ET ) and Level Triggered ( LT ).
event distribution interface is able to behave both as edge-triggered
(ET) and level-triggered (LT).
The difference between the ET and LT
event distribution mechanisms can be described as follows.
Suppose that
this scenario happens:
.TP
.B 1
The file descriptor that represents the read side of a pipe (
.B RFD
) is added inside the
The file descriptor that represents the read side of a pipe
.RI ( rfd )
is added inside the
.B epoll
device.
.TP
.B 2
Pipe writer writes 2Kb of data on the write side of the pipe.
A pipe writer writes 2Kb of data on the write side of the pipe.
.TP
.B 3
A call to
.BR epoll_wait (2)
is done that will return
.B RFD
as ready file descriptor.
.I rfd
as a ready file descriptor.
.TP
.B 4
The pipe reader reads 1Kb of data from
.BR RFD .
.IR rfd .
.TP
.B 5
A call to
@ -82,7 +82,7 @@ A call to
is done.
.PP
If the
.B RFD
.I rfd
file descriptor has been added to the
.B epoll
interface using the
@ -91,17 +91,18 @@ flag, the call to
.BR epoll_wait (2)
done in step
.B 5
will probably hang because of the available data still present in the file
input buffers and the remote peer might be expecting a response based on the
will probably hang despite the available data still present in the file
input buffer;
meanwhile the remote peer might be expecting a response based on the
data it already sent.
The reason for this is that Edge Triggered event
The reason for this is that edge-triggered event
distribution delivers events only when events happen on the monitored file.
So, in step
.B 5
the caller might end up waiting for some data that is already present inside
the input buffer.
In the above example, an event on
.B RFD
.I rfd
will be generated because of the write done in
.BR 2
and the event is consumed in
@ -112,19 +113,18 @@ does not consume the whole buffer data, the call to
.BR epoll_wait (2)
done in step
.B 5
might lock indefinitely.
The
.B epoll
interface, when used with the
might block indefinitely.
An application that employs the
.B EPOLLET
flag ( Edge Triggered )
flag (edge-triggered)
should use non-blocking file descriptors to avoid having a blocking
read or write starve the task that is handling multiple file descriptors.
read or write starve a task that is handling multiple file descriptors.
The suggested way to use
.B epoll
as an Edge Triggered
as an edge-triggered
.RB ( EPOLLET )
interface is below, and possible pitfalls to avoid follow.
interface is as follows:
.RS
.TP
.B i
@ -138,15 +138,16 @@ or
return EAGAIN
.RE
.PP
On the contrary, when used as a Level Triggered interface,
By contrast, when used as a level-triggered interface,
.B epoll
is by all means a faster
is simply a faster
.BR poll (2),
and can be used wherever the latter is used since it shares the
same semantics.
Since even with the Edge Triggered
Since even with the edge-triggered
.B epoll
multiple events can be generated up on receipt of multiple chunks of data,
multiple events can be generated upon receipt of multiple chunks of data,
the caller has the option to specify the
.B EPOLLONESHOT
flag, to tell
@ -156,17 +157,17 @@ to disable the associated file descriptor after the receipt of an event with
When the
.B EPOLLONESHOT
flag is specified,
it is caller responsibility to rearm the file descriptor using
it is the caller's responsibility to rearm the file descriptor using
.BR epoll_ctl (2)
with
.BR EPOLL_CTL_MOD .
.SS Example for Suggested Usage
While the usage of
.B epoll
when employed like a Level Triggered interface does have the same
semantics of
when employed as a level-triggered interface does have the same
semantics as
.BR poll (2),
an Edge Triggered usage requires more clarification to avoid stalls
the edge-triggered usage requires more clarification to avoid stalls
in the application event loop.
In this example, listener is a
non-blocking socket on which
@ -177,7 +178,7 @@ file descriptor until EAGAIN is returned by either
.BR read (2)
or
.BR write (2).
An event driven state machine application should, after having received
An event-driven state machine application should, after having received
EAGAIN, record its current state so that at the next call to do_use_fd()
it will continue to
.BR read (2)
@ -214,12 +215,11 @@ for(;;) {
}
.fi
When used as an Edge triggered interface, for performance reasons, it is
possible to add the file descriptor inside the epoll interface (
.B EPOLL_CTL_ADD
) once by specifying (
.BR EPOLLIN | EPOLLOUT
).
When used as an edge-triggered interface, for performance reasons, it is
possible to add the file descriptor inside the epoll interface
.RB ( EPOLL_CTL_ADD )
once by specifying
.RB ( EPOLLIN | EPOLLOUT ).
This allows you to avoid
continuously switching between
.B EPOLLIN
@ -232,31 +232,30 @@ with
.SS Questions and Answers
.TP
.B Q1
What happens if you add the same fd to an epoll_set twice?
What happens if you add the same file descriptor to an epoll_set twice?
.TP
.B A1
You will probably get EEXIST.
However, it is possible that two
threads may add the same fd twice.
threads may add the same file descriptor twice.
This is a harmless condition.
.TP
.B Q2
Can two
.B epoll
sets wait for the same fd?
sets wait for the same file descriptor?
If so, are events reported to both
.B epoll
sets fds?
file descriptors?
.TP
.B A2
Yes.
Yes, and events would be reported to both.
However, it is not recommended.
Yes it would be reported to both.
.TP
.B Q3
Is the
.B epoll
fd itself poll/epoll/selectable?
file descriptor itself poll/epoll/selectable?
.TP
.B A3
Yes.
@ -264,24 +263,24 @@ Yes.
.B Q4
What happens if the
.B epoll
fd is put into its own fd set?
file descriptor is put into its own file descriptor set?
.TP
.B A4
It will fail.
However, you can add an
.B epoll
fd inside another epoll fd set.
file descriptor inside another epoll file descriptor set.
.TP
.B Q5
Can I send the
.B epoll
fd over a unix-socket to another process?
file descriptor over a unix-socket to another process?
.TP
.B A5
No.
.TP
.B Q6
Will the close of an fd cause it to be removed from all
Will closing a file descriptor cause it to be removed from all
.B epoll
sets automatically?
.TP
@ -289,7 +288,7 @@ sets automatically?
Yes.
.TP
.B Q7
If more than one event comes in between
If more than one event occurs between
.BR epoll_wait (2)
calls, are they combined or reported separately?
.TP
@ -297,19 +296,20 @@ calls, are they combined or reported separately?
They will be combined.
.TP
.B Q8
Does an operation on an fd affect the already collected but not yet reported
events?
Does an operation on a file descriptor affect the
already collected but not yet reported events?
.TP
.B A8
You can do two operations on an existing fd.
You can do two operations on an existing file descriptor.
Remove would be meaningless for
this case.
Modify will re-read available I/O.
.TP
.B Q9
Do I need to continuously read/write an fd until EAGAIN when using the
Do I need to continuously read/write a file descriptor
until EAGAIN when using the
.B EPOLLET
flag ( Edge Triggered behavior ) ?
flag (edge-triggered behavior)?
.TP
.B A9
No, you don't.
@ -322,26 +322,26 @@ next EAGAIN.
When and how you will use such a file descriptor is entirely up
to you.
Also, the condition that the read/write I/O space is exhausted can
be detected by checking the amount of data read/write from/to the target
be detected by checking the amount of data read from / written to the target
file descriptor.
For example, if you call
.BR read (2)
by asking to read a certain amount of data and
.BR read (2)
returns a lower number of bytes, you can be sure to have exhausted the read
returns a lower number of bytes,
you can be sure of having exhausted the read
I/O space for such a file descriptor.
Same is valid when writing using the
.BR write (2)
function.
The same is true when writing using the
.BR write (2).
.SS Possible Pitfalls and Ways to Avoid Them
.TP
.B o Starvation ( Edge Triggered )
.B o Starvation (edge-triggered)
.PP
If there is a large amount of I/O space,
it is possible that while trying to drain it,
other files will not get processed, causing starvation.
This is not specific to
.BR epoll .
(This problem is not specific to
.BR epoll .)
.PP
The solution is to maintain a ready list
and mark the file descriptor as ready
@ -349,32 +349,33 @@ in its associated data structure, thereby allowing the application to
remember which files need to be processed but still round robin amongst
all the ready files.
This also supports ignoring subsequent events you
receive for fd's that are already ready.
receive for file descriptors that are already ready.
.TP
.B o If using an event cache...
.PP
If you use an event cache or store all the fd's returned from
If you use an event cache or store all the file descriptors returned from
.BR epoll_wait (2),
then make sure to provide a way to mark
its closure dynamically (ie- caused by
its closure dynamically (i.e., caused by
a previous event's processing).
Suppose you receive 100 events from
.BR epoll_wait (2),
and in event #47 a condition causes event #13 to be closed.
If you remove the structure and
.BR close (2)
the fd for event #13, then your
event cache might still say there are events waiting for that fd causing
confusion.
the file descriptor for event #13, then your
event cache might still say there are events waiting for that
file descriptor, causing confusion.
.PP
One solution for this is to call, during the processing of event 47,
.BR epoll_ctl ( EPOLL_CTL_DEL )
to delete fd 13 and
to delete file descriptor 13 and
.BR close (2),
then mark its associated
data structure as removed and link it to a cleanup list.
If you find another
event for fd 13 in your batch processing, you will discover the fd had been
event for file descriptor 13 in your batch processing,
you will discover the file descriptor had been
previously removed and there will be no confusion.
.SH VERSIONS
.BR epoll (7)