Index: ossp-pkg/pth/THANKS RCS File: /v/ossp/cvs/ossp-pkg/pth/THANKS,v rcsdiff -q -kk '-r1.74' '-r1.75' -u '/v/ossp/cvs/ossp-pkg/pth/THANKS,v' 2>/dev/null --- THANKS 2000/06/18 09:14:46 1.74 +++ THANKS 2000/07/10 06:12:34 1.75 @@ -46,7 +46,7 @@ o Jim Jagielski o Jeremie o Dmitry E. Kiselyov - o Thomas Klausner + o Thomas Klausner o Martin Kraemer o Christian Kuhtz o Kriton Kyrimis Index: ossp-pkg/pth/pth.pod RCS File: /v/ossp/cvs/ossp-pkg/pth/pth.pod,v co -q -kk -p'1.138' '/v/ossp/cvs/ossp-pkg/pth/pth.pod,v' | diff -u /dev/null - -L'ossp-pkg/pth/pth.pod' 2>/dev/null --- ossp-pkg/pth/pth.pod +++ - 2024-05-14 00:58:25.067900388 +0200 @@ -0,0 +1,2317 @@ +## +## GNU Pth - The GNU Portable Threads +## Copyright (c) 1999-2000 Ralf S. Engelschall +## +## This file is part of GNU Pth, a non-preemptive thread scheduling +## library which can be found at http://www.gnu.org/software/pth/. +## +## This library is free software; you can redistribute it and/or +## modify it under the terms of the GNU Lesser General Public +## License as published by the Free Software Foundation; either +## version 2.1 of the License, or (at your option) any later version. +## +## This library is distributed in the hope that it will be useful, +## but WITHOUT ANY WARRANTY; without even the implied warranty of +## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +## Lesser General Public License for more details. +## +## You should have received a copy of the GNU Lesser General Public +## License along with this library; if not, write to the Free Software +## Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 +## USA, or contact Ralf S. Engelschall . +## +## pth.pod: Pth manual page +## + +# ``Real programmers don't document. 
+# Documentation is for wimps who can't +# read the listings of the object deck.'' + +=pod + +=head1 NAME + +B - GNU Portable Threads + +=head1 VERSION + +GNU Pth PTH_VERSION_STR + +=head1 SYNOPSIS + +=over 4 + +=item B + +pth_init, +pth_kill, +pth_ctrl, +pth_version. + +=item B + +pth_attr_of, +pth_attr_new, +pth_attr_init, +pth_attr_set, +pth_attr_get, +pth_attr_destroy. + +=item B + +pth_spawn, +pth_once, +pth_self, +pth_suspend, +pth_resume, +pth_yield, +pth_nap, +pth_wait, +pth_cancel, +pth_abort, +pth_raise, +pth_join, +pth_exit. + +=item B + +pth_fdmode, +pth_time, +pth_timeout, +pth_sfiodisc. + +=item B + +pth_cancel_point, +pth_cancel_state. + +=item B + +pth_event, +pth_event_typeof, +pth_event_extract, +pth_event_concat, +pth_event_isolate, +pth_event_walk, +pth_event_occurred, +pth_event_free. + +=item B + +pth_key_create, +pth_key_delete, +pth_key_setdata, +pth_key_getdata. + +=item B + +pth_msgport_create, +pth_msgport_destroy, +pth_msgport_find, +pth_msgport_pending, +pth_msgport_put, +pth_msgport_get, +pth_msgport_reply. + +=item B + +pth_cleanup_push, +pth_cleanup_pop. + +=item B + +pth_atfork_push, +pth_atfork_pop, +pth_fork. + +=item B + +pth_mutex_init, +pth_mutex_acquire, +pth_mutex_release, +pth_rwlock_init, +pth_rwlock_acquire, +pth_rwlock_release, +pth_cond_init, +pth_cond_await, +pth_cond_notify, +pth_barrier_init, +pth_barrier_reach. + +=item B + +pth_sigwait_ev, +pth_accept_ev, +pth_connect_ev, +pth_select_ev, +pth_poll_ev, +pth_read_ev, +pth_readv_ev, +pth_write_ev, +pth_writev_ev, +pth_recv_ev, +pth_recvfrom_ev, +pth_send_ev, +pth_sendto_ev. + +=item B + +pth_usleep, +pth_sleep, +pth_waitpid, +pth_sigmask, +pth_sigwait, +pth_accept, +pth_connect, +pth_select, +pth_poll, +pth_read, +pth_readv, +pth_write, +pth_writev, +pth_pread, +pth_pwrite, +pth_recv, +pth_recvfrom, +pth_send, +pth_sendto. 
+ +=back + +=head1 DESCRIPTION + + ____ _ _ + | _ \| |_| |__ + | |_) | __| '_ \ ``Only those who attempt + | __/| |_| | | | the absurd can achieve + |_| \__|_| |_| the impossible.'' + +B is a very portable POSIX/ANSI-C based library for Unix platforms which +provides non-preemptive priority-based scheduling for multiple threads of +execution (aka `multithreading') inside event-driven applications. All threads +run in the same address space of the application process, but each thread has +its own individual program counter, run-time stack, signal mask and C +variable. + +The thread scheduling itself is done in a cooperative way, i.e., the threads +are managed and dispatched by a priority- and event-driven non-preemptive +scheduler. The intention is that this way both better portability and run-time +performance is achieved than with preemptive scheduling. The event facility +allows threads to wait until various types of internal and external events +occur, including pending I/O on file descriptors, asynchronous signals, +elapsed timers, pending I/O on message ports, thread and process termination, +and even results of customized callback functions. + +B also provides an optional emulation API for POSIX.1c threads +(`Pthreads') which can be used for backward compatibility to existing +multithreaded applications. See B's pthread(3) manual page for +details. + +=head2 Threading Background + +When programming event-driven applications, usually servers, lots of +regular jobs and one-shot requests have to be processed in parallel. +To efficiently simulate this parallel processing on uniprocessor +machines, we use `multitasking' -- that is, we have the application +ask the operating system to spawn multiple instances of itself. On +Unix, typically the kernel implements multitasking in a preemptive and +priority-based way through heavy-weight processes spawned with fork(2). +These processes usually do I share a common address space. 
Instead +they are clearly separated from each other, and are created by direct +cloning a process address space (although modern kernels use memory +segment mapping and copy-on-write semantics to avoid unnecessary copying +of physical memory). + +The drawbacks are obvious: Sharing data between the processes is +complicated, and can usually only be done efficiently through shared +memory (but which itself is not very portable). Synchronization is +complicated because of the preemptive nature of the Unix scheduler +(one has to use I locks, etc). The machine's resources can be +exhausted very quickly when the server application has to serve too many +long-running requests (heavy-weight processes cost memory). And when +each request spawns a sub-process to handle it, the server performance +and responsiveness is horrible (heavy-weight processes cost time to +spawn). Finally, the server application doesn't scale very well with the +load because of these resource problems. In practice, lots of tricks +are usually used to overcome these problems - ranging from pre-forked +sub-process pools to semi-serialized processing, etc. + +One of the most elegant ways to solve these resource- and data-sharing +problems is to have multiple I threads of execution +inside a single (heavy-weight) process, i.e., to use I. +Those I usually improve responsiveness and performance of the +application, often improve and simplify the internal program structure, +and most important, require less system resources than heavy-weight +processes. Threads are neither the optimal run-time facility for all +types of applications, nor can all applications benefit from them. But +at least event-driven server applications usually benefit greatly from +using threads. + +=head2 The World of Threading + +Even though lots of documents exists which describe and define the world +of threading, to understand B, you need only basic knowledge about +threading. 
The following definitions of thread-related terms should at +least help you understand thread programming enough to allow you to use +B. + +=over 2 + +=item B B vs. B + +A process on Unix systems consists of at least the following fundamental +ingredients: I, I, I, I, I, I, I, I. On every process switch, the kernel +saves and restores these ingredients for the individual processes. On +the other hand, a thread consists of only a private program counter, +stack memory, stack pointer and signal table. All other ingredients, in +particular the virtual memory, it shares with the other threads of the +same process. + +=item B B vs. B threading + +Threads on a Unix platform traditionally can be implemented either +inside kernel-space or user-space. When threads are implemented by the +kernel, the thread context switches are performed by the kernel without +the application's knowledge. Similarly, when threads are implemented in +user-space, the thread context switches are performed by an application +library, without the kernel's knowledge. There also are hybrid threading +approaches where, typically, a user-space library binds one or more +user-space threads to one or more kernel-space threads (there usually +called light-weight processes - or in short LWPs). + +User-space threads are usually more portable and can perform faster +and cheaper context switches (for instance via swapcontext(2) or +setjmp(3)/longjmp(3)) than kernel based threads. On the other hand, +kernel-space threads can take advantage of multiprocessor machines and +don't have any inherent I/O blocking problems. Kernel-space threads are +usually scheduled in preemptive way side-by-side with the underlying +processes. User-space threads on the other hand use either preemptive or +non-preemptive scheduling. + +=item B B vs. 
B thread scheduling + +In preemptive scheduling, the scheduler lets a thread execute until a +blocking situation occurs (usually a function call which would block) +or the assigned timeslice elapses. Then it detracts control from the +thread without a chance for the thread to object. This is usually +realized by interrupting the thread through a hardware interrupt +signal (for kernel-space threads) or a software interrupt signal (for +user-space threads), like C or C. In non-preemptive +scheduling, once a thread received control from the scheduler it keeps +it until either a blocking situation occurs (again a function call which +would block and instead switches back to the scheduler) or the thread +explicitly yields control back to the scheduler in a cooperative way. + +=item B B vs. B + +Concurrency exists when at least two threads are I at the +same time. Parallelism arises when at least two threads are I +simultaneously. Real parallelism can be only achieved on multiprocessor +machines, of course. But one also usually speaks of parallelism or +I in the context of preemptive thread scheduling +and of I in the context of non-preemptive thread +scheduling. + +=item B B + +The responsiveness of a system can be described by the user visible +delay until the system responses to an external request. When this delay +is small enough and the user doesn't recognize a noticeable delay, +the responsiveness of the system is considered good. When the user +recognizes or is even annoyed by the delay, the responsiveness of the +system is considered bad. + +=item B B, B and B functions + +A reentrant function is one that behaves correctly if it is called +simultaneously by several threads and then also executes simultaneously. +Functions that access global state, such as memory or files, of course, +need to be carefully designed in order to be reentrant. Two traditional +approaches to solve these problems are caller-supplied states and +thread-specific data. 
+ +Thread-safety is the avoidance of I, i.e., situations +in which data is set to either correct or incorrect value depending +upon the (unpredictable) order in which multiple threads access and +modify the data. So a function is thread-safe when it still behaves +semantically correct when called simultaneously by several threads (it +is not required that the functions also execute simultaneously). The +traditional approach to achieve thread-safety is to wrap a function body +with an internal mutual exclusion lock (aka `mutex'). As you should +recognize, reentrant is a stronger attribute than thread-safe, because +it is harder to achieve and results especially in no run-time contention +between threads. So, a reentrant function is always thread-safe, but not +vice versa. + +Additionally there is a related attribute for functions named +asynchronous-safe, which comes into play in conjunction with signal +handlers. This is very related to the problem of reentrant functions. An +asynchronous-safe function is one that can be called safe and without +side-effects from within a signal handler context. Usually very few +functions are of this type, because an application is very restricted in +what it can perform from within a signal handler (especially what system +functions it is allowed to call). The reason mainly is, because only a +few system functions are officially declared by POSIX as guaranteed to +be asynchronous-safe. Asynchronous-safe functions usually have to be +already reentrant. + +=back + +=head2 User-Space Threads + +User-space threads can be implemented in various way. The two +traditional approaches are: + +=over 3 + +=item B<1.> + +B + +Here the global procedures of the application are split into small +execution units (each is required to not run for more than a few +milliseconds) and those units are implemented by separate functions. 
+Then a global matrix is defined which describes the execution (and +perhaps even dependency) order of these functions. The main server +procedure then just dispatches between these units by calling one +function after each other controlled by this matrix. The threads are +created by more than one jump-trail through this matrix and by switching +between these jump-trails controlled by corresponding occurred events. + +This approach gives the best possible performance, because one can +fine-tune the threads of execution by adjusting the matrix, and the +scheduling is done explicitly by the application itself. It is also very +portable, because the matrix is just an ordinary data structure, and +functions are a standard feature of ANSI C. + +The disadvantage of this approach is that it is complicated to write +large applications with this approach, because in those applications +one quickly gets hundreds(!) of execution units and the control flow +inside such an application is very hard to understand (because it is +interrupted by function borders and one always has to remember the +global dispatching matrix to follow it). Additionally, all threads +operate on the same execution stack. Although this saves memory, it is +often nasty, because one cannot switch between threads in the middle of +a function. Thus the scheduling borders are the function borders. + +=item B<2.> + +B + +Here the idea is that one programs the application as with forked +processes, i.e., one spawns a thread of execution and this runs from the +begin to the end without an interrupted control flow. But the control +flow can be still interrupted - even in the middle of a function. +Actually in a preemptive way, similar to what the kernel does for the +heavy-weight processes, i.e., every few milliseconds the user-space +scheduler switches between the threads of execution. But the thread +itself doesn't recognize this and usually (except for synchronization +issues) doesn't have to care about this. 
+ +The advantage of this approach is that it's very easy to program, +because the control flow and context of a thread directly follows +a procedure without forced interrupts through function borders. +Additionally, the programming is very similar to a traditional and well +understood fork(2) based approach. + +The disadvantage is that although the general performance is increased, +compared to using approaches based on heavy-weight processes, it is decreased +compared to the matrix-approach above. Because the implicit preemptive +scheduling does usually a lot more context switches (every user-space context +switch costs some overhead even when it is a lot cheaper than a kernel-level +context switch) than the explicit cooperative/non-preemptive scheduling. +Finally, there is no really portable POSIX/ANSI-C based way to implement +user-space preemptive threading. Either the platform already has threads, +or one has to hope that some semi-portable package exists for it. And +even those semi-portable packages usually have to deal with assembler +code and other nasty internals and are not easy to port to forthcoming +platforms. + +=back + +So, in short: the matrix-dispatching approach is portable and fast, but +nasty to program. The thread scheduling approach is easy to program, +but suffers from synchronization and portability problems caused by its +preemptive nature. + +=head2 The Compromise of Pth + +But why not combine the good aspects of both approaches while avoiding +their bad aspects? That's the goal of B. B implements +easy-to-program threads of execution, but avoids the problems of +preemptive scheduling by using non-preemptive scheduling instead. + +This sounds like, and is, a useful approach. Nevertheless, one has to +keep the implications of non-preemptive thread scheduling in mind when +working with B. The following list summarizes a few essential +points: + +=over 2 + +=item B + +B. 
+ +This is, because it uses a nifty and portable POSIX/ANSI-C approach for +thread creation (and this way doesn't require any platform dependent +assembler hacks) and schedules the threads in non-preemptive way (which +doesn't require unportable facilities like C). On the other +hand, this way not all fancy threading features can be implemented. +Nevertheless the available facilities are enough to provide a robust and +full-featured threading system. + +=item B + +B. + +The reason is the non-preemptive scheduling. Number-crunching +applications usually require preemptive scheduling to achieve +concurrency because of their long CPU bursts. For them, non-preemptive +scheduling (even together with explicit yielding) provides only the old +concept of `coroutines'. On the other hand, event driven applications +benefit greatly from non-preemptive scheduling. They have only short +CPU bursts and lots of events to wait on, and this way run faster under +non-preemptive scheduling because no unnecessary context switching +occurs, as it is the case for preemptive scheduling. That's why B +is mainly intended for server type applications, although there is no +technical restriction. + +=item B + +B. + +This nice fact exists again because of the nature of non-preemptive +scheduling, where a function isn't interrupted and this way cannot be +reentered before it returned. This is a great portability benefit, +because thread-safety can be achieved more easily than reentrance +possibility. Especially this means that under B more existing +third-party libraries can be used without side-effects than its the case +for other threading systems. + +=item B + +B. + +This means that B runs on almost all Unix kernels, because the +kernel does not need to be aware of the B threads (because they +are implemented entirely in user-space). On the other hand, it cannot +benefit from the existence of multiprocessors, because for this, kernel +support would be needed. 
In practice, this is no problem, because +multiprocessor systems are rare, and portability is almost more +important than highest concurrency. + +=back + +=head2 The life cycle of a thread + +To understand the B Application Programming Interface (API), it +helps to first understand the life cycle of a thread in the B +threading system. It can be illustrated with the following directed +graph: + + NEW + | + V + +---> READY ---+ + | ^ | + | | V + WAITING <--+-- RUNNING + | + : V + SUSPENDED DEAD + +When a new thread is created, it is moved into the B queue of the +scheduler. On the next dispatching for this thread, the scheduler picks +it up from there and moves it to the B queue. This is a queue +containing all threads which want to perform a CPU burst. There they are +queued in priority order. On each dispatching step, the scheduler always +removes the thread with the highest priority only. It then increases the +priority of all remaining threads by 1, to prevent them from `starving'. + +The thread which was removed from the B queue is the new +B thread (there is always just one B thread, of +course). The B thread is assigned execution control. After +this thread yields execution (either explicitly by yielding execution +or implicitly by calling a function which would block) there are three +possibilities: Either it has terminated, then it is moved to the B +queue, or it has events on which it wants to wait, then it is moved into +the B queue. Else it is assumed it wants to perform more CPU +bursts and immediately enters the B queue again. + +Before the next thread is taken out of the B queue, the +B queue is checked for pending events. If one or more events +occurred, the threads that are waiting on them are immediately moved to +the B queue. + +The purpose of the B queue has to do with the fact that in B +a thread never directly switches to another thread. A thread always +yields execution to the scheduler and the scheduler dispatches to the +next thread. 
So a freshly spawned thread has to be kept somewhere until the scheduler gets a chance to pick it up for scheduling. That is what the B<NEW> queue is for.

The purpose of the B<DEAD> queue is to support thread joining. When a thread is marked unjoinable, it is directly kicked out of the system after it terminated. But when it is joinable, it enters the B<DEAD> queue. There it remains until another thread joins it.

Finally, there is a special separate queue named B<SUSPENDED>, to which threads can be manually moved from the B<NEW>, B<READY> or B<WAITING> queues by the application. The purpose of this special queue is to temporarily absorb suspended threads until they are resumed again by the application. Suspended threads do not cost scheduling or event handling resources, because they are temporarily completely out of the scheduler's scope. If a thread is resumed, it is moved back to the queue from which it originally came and this way re-enters the scheduler's scope.

=head1 APPLICATION PROGRAMMING INTERFACE (API)

In the following, the B<Pth> I<Application Programming Interface> (API) is discussed in detail. With the knowledge given above, it should now be easy to understand how to program threads with this API. In good Unix tradition, B<Pth> functions use special return values (C<NULL> in pointer context, C<FALSE> in boolean context and C<-1> in integer context) to indicate an error condition and set (or pass through) the C<errno> system variable to pass more details about the error to the caller.

=head2 Global Library Management

The following functions act on the library as a whole. They are used to initialize and shut down the scheduler and fetch information from it.

=over 4

=item int B<pth_init>(void);

This initializes the B<Pth> library. It has to be the first B<Pth> API function call in an application, and is mandatory. It is usually done at the beginning of the main() function of the application.
This implicitly spawns the internal scheduler thread and transforms the single execution unit of the current process into a thread (the `main' thread). It returns C<TRUE> on success and C<FALSE> on error.

=item int B<pth_kill>(void);

This kills the B<Pth> library. It should be the last B<Pth> API function call in an application, but is not really required. It is usually done at the end of the main() function of the application. At least, it has to be called from within the main thread. It implicitly kills all threads and transforms the calling thread back into the single execution unit of the underlying process. The usual way to terminate a B<Pth> application is either a simple `C<pth_exit(0);>' in the main thread (which waits for all other threads to terminate, kills the threading system and then terminates the process) or a `C<pth_kill(); exit(0);>' (which immediately kills the threading system and terminates the process). pth_kill() returns immediately with a return value of C<FALSE> if it is not called from within the main thread. Otherwise it kills the threading system and returns C<TRUE>.

=item long B<pth_ctrl>(unsigned long I<query>, ...);

This is a generalized query/control function for the B<Pth> library. The argument I<query> is a bitmask formed out of one or more C<PTH_CTRL_>I<XXXX> queries. Currently the following queries are supported:

=over 4

=item C<PTH_CTRL_GETTHREADS>

This returns the total number of threads currently in existence.
This query +actually is formed out of the combination of queries for threads in a +particular state, i.e., the C query is equal to the +OR-combination of all the following specialized queries: + +C for the number of threads in the +new queue (threads created via pth_spawn(3) but still not +scheduled once), C for the number of +threads in the ready queue (threads who want to do CPU bursts), +C for the number of running threads +(always just one thread!), C for +the number of threads in the waiting queue (threads waiting for +events), C for the number of +threads in the suspended queue (threads waiting to be resumed) and +C for the number of threads in the new queue +(terminated threads waiting for a join). + +=item C + +This requires a second argument of type `C' (pointer to a floating +point variable). It stores a floating point value describing the exponential +averaged load of the scheduler in this variable. The load is a function from +the number of threads in the ready queue of the schedulers dispatching unit. +So a load around 1.0 means there is only one ready thread (the standard +situation when the application has no high load). A higher load value means +there a more threads ready who want to do CPU bursts. The average load value +updates once per second only. The return value for this query is always 0. + +=item C + +This requires a second argument of type `C' which identifies a +thread. It returns the priority (ranging from C to +C) of the given thread. + +=item C + +This requires a second argument of type `C' which identifies a +thread. It returns the name of the given thread, i.e., the return value of +pth_ctrl(3) should be casted to a `C'. + +=item C + +This requires a second argument of type `C' to which a summary +of the internal B library state is written to. The main information +which is currently written out is the current state of the thread pool. + +=back + +The function returns C<-1> on error. 
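As a minimal sketch of how these queries look in practice (this assumes a system with B<Pth> installed and the program linked with C<-lpth>; error handling is mostly omitted):

```c
#include <stdio.h>
#include <pth.h>

int main(void)
{
    float load;

    if (!pth_init())                 /* mandatory first Pth call */
        return 1;

    /* total number of threads currently in existence */
    printf("threads: %ld\n", pth_ctrl(PTH_CTRL_GETTHREADS));

    /* exponentially averaged scheduler load, updated once per second */
    if (pth_ctrl(PTH_CTRL_GETAVLOAD, &load) != -1)
        printf("load: %.2f\n", load);

    pth_kill();
    return 0;
}
```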
+ +=item long B(void); + +This function returns a hex-value `0xIIII' which describes the +current B library version. I is the version, I the revisions, +I the level and I the type of the level (alphalevel=0, betalevel=1, +patchlevel=2, etc). For instance B version 1.0b1 is encoded as 0x100101. +The reason for this unusual mapping is that this way the version number is +steadily I. The same value is also available under compile time as +C. + +=back + +=head2 Thread Attribute Handling + +Attribute objects are used in B for two things: First stand-alone/unbound +attribute objects are used to store attributes for to be spawned threads. +Bounded attribute objects are used to modify attributes of already existing +threads. The following attribute fields exists in attribute objects: + +=over 4 + +=item C (read-write) [C] + +Thread Priority between C and C. +The default is C. + +=item C (read-write) [C] + +Name of thread (up to 40 characters are stored only), mainly for debugging +purposes. + +=item C (read-write> [C] + +The thread detachment type, C indicates a joinable thread, C +indicates a detached thread. When a the is detached after termination it is +immediately kicked out of the system instead of inserted into the dead queue. + +=item C (read-write) [C] + +The thread cancellation state, i.e., a combination of C or +C and C or +C. + +=item C (read-write) [C] + +The thread stack size in bytes. Use lower values than 64 KB with great care! + +=item C (read-write) [C] + +A pointer to the lower address of a chunk of malloc(3)'ed memory for the +stack. + +=item C (read-only) [C] + +The time when the thread was spawned. +This can be queried only when the attribute object is bound to a thread. + +=item C (read-only) [C] + +The time when the thread was last dispatched. +This can be queried only when the attribute object is bound to a thread. + +=item C (read-only) [C] + +The total time the thread was running. 
+This can be queried only when the attribute object is bound to a thread. + +=item C (read-only) [C] + +The thread start function. +This can be queried only when the attribute object is bound to a thread. + +=item C (read-only) [C] + +The thread start argument. +This can be queried only when the attribute object is bound to a thread. + +=item C (read-only) [C] + +The scheduling state of the thread, i.e., either C, +C, C, or C +This can be queried only when the attribute object is bound to a thread. + +=item C (read-only) [C] + +The event ring the thread is waiting for. +This can be queried only when the attribute object is bound to a thread. + +=item C (read-only) [C] + +Whether the attribute object is bound (C) to a thread or not (C). + +=back + +The following API functions exists to handle the attribute objects: + +=over 4 + +=item pth_attr_t B(pth_t I); + +This returns a new attribute object I to thread I. Any queries on +this object directly fetch attributes from I. And attribute modifications +directly change I. Use such attribute objects to modify existing threads. + +=item pth_attr_t B(void); + +This returns a new I attribute object. An implicit pth_attr_init() is +done on it. Any queries on this object just fetch stored attributes from it. +And attribute modifications just change the stored attributes. Use such +attribute objects to pre-configure attributes for to be spawned threads. + +=item int B(pth_attr_t I); + +This initializes an attribute object I to the default values: +C := C, C := `C', +C := C, C := +C, C := 64*1024 and +C := C. All other C attributes are +read-only attributes and don't receive default values in I, because they +exists only for bounded attribute objects. + +=item int B(pth_attr_t I, int I, ...); + +This sets the attribute field I in I to a value +specified as an additional argument on the variable argument +list. 
The following attribute I and argument pairs can +be used: + + PTH_ATTR_PRIO int + PTH_ATTR_NAME char * + PTH_ATTR_JOINABLE int + PTH_ATTR_CANCEL_STATE unsigned int + PTH_ATTR_STACK_SIZE unsigned int + PTH_ATTR_STACK_ADDR char * + +=item int B(pth_attr_t I, int I, ...); + +This retrieves the attribute field I in I and stores its +value in the variable specified through a pointer in an additional +argument on the variable argument list. The following I and +argument pairs can be used: + + PTH_ATTR_PRIO int * + PTH_ATTR_NAME char ** + PTH_ATTR_JOINABLE int * + PTH_ATTR_CANCEL_STATE unsigned int * + PTH_ATTR_STACK_SIZE unsigned int * + PTH_ATTR_STACK_ADDR char ** + PTH_ATTR_TIME_SPAWN pth_time_t * + PTH_ATTR_TIME_LAST pth_time_t * + PTH_ATTR_TIME_RAN pth_time_t * + PTH_ATTR_START_FUNC void *(**)(void *) + PTH_ATTR_START_ARG void ** + PTH_ATTR_STATE pth_state_t * + PTH_ATTR_EVENTS pth_event_t * + PTH_ATTR_BOUND int * + +=item int B(pth_attr_t I); + +This destroys a attribute object I. After this I is no +longer a valid attribute object. + +=back + +=head2 Thread Control + +The following functions control the threading itself and form the main API of +the B library. + +=over 4 + +=item pth_t B(pth_attr_t I, void *(*I)(void *), void *I); + +This spawns a new thread with the attributes given in I (or +C for default attributes - which means that thread priority, +joinability and cancel state are inherited from the current thread) with the +starting point at routine I. This entry routine is called as +`pth_exit(I(I))' inside the new thread unit, i.e., I's +return value is fed to an implicit pth_exit(3). So the thread usually can exit +by just returning. Nevertheless the thread can also exit explicitly at any +time by calling pth_exit(3). But keep in mind that calling the POSIX function +exit(3) still terminates the complete process and not just the current thread. 
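A minimal spawn/join sketch, again assuming a B<Pth> environment linked with C<-lpth> (the C<worker> function and its doubling behavior are invented for illustration):

```c
#include <stdio.h>
#include <pth.h>

/* hypothetical worker: its return value is fed to an implicit pth_exit() */
static void *worker(void *arg)
{
    long n = (long)arg;
    return (void *)(n * 2);
}

int main(void)
{
    pth_attr_t attr;
    pth_t tid;
    void *result;

    pth_init();

    attr = pth_attr_new();                          /* unbound attribute object */
    pth_attr_set(attr, PTH_ATTR_NAME, "worker");
    pth_attr_set(attr, PTH_ATTR_JOINABLE, TRUE);
    pth_attr_set(attr, PTH_ATTR_STACK_SIZE, 64*1024);

    tid = pth_spawn(attr, worker, (void *)21);
    pth_attr_destroy(attr);

    pth_join(tid, &result);        /* picks the terminated thread off the DEAD queue */
    printf("result: %ld\n", (long)result);

    pth_kill();
    return 0;
}
```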
There is no B<Pth>-internal limit on the number of threads one can spawn, except the limit implied by the available virtual memory. B<Pth> internally keeps track of threads in dynamic data structures. The function returns C<NULL> on error.

=item int B<pth_once>(pth_once_t *I<ctrlvar>, void (*I<func>)(void *), void *I<arg>);

This is a convenience function which uses a control variable of type C<pth_once_t> to make sure a constructor function I<func> is called only once as `I<func>(I<arg>)' in the system. In other words: Only the first call to pth_once(3) by any thread in the system succeeds. The variable referenced via I<ctrlvar> should be declared as `C<pth_once_t> I<ctrlvar> C<= PTH_ONCE_INIT;>' before calling this function.

=item pth_t B<pth_self>(void);

This just returns the unique thread handle of the currently running thread. This handle itself has to be treated as an opaque entity by the application. It is usually used as an argument to other functions which require an argument of type C<pth_t>.

=item int B<pth_suspend>(pth_t I<tid>);

This suspends a thread I<tid> until it is manually resumed again via pth_resume(3). For this, the thread is moved to the B<SUSPENDED> queue and this way is completely out of the scheduler's event handling and thread dispatching scope. Suspending the current thread is not allowed. The function returns C<TRUE> on success and C<FALSE> on errors.

=item int B<pth_resume>(pth_t I<tid>);

This function resumes a previously suspended thread I<tid>, i.e., I<tid> has to stay on the B<SUSPENDED> queue. The thread is moved to the B<NEW>, B<READY> or B<WAITING> queue (dependent on what its state was when the pth_suspend(3) call was made) and this way again enters the event handling and thread dispatching scope of the scheduler. The function returns C<TRUE> on success and C<FALSE> on errors.

=item int B<pth_raise>(pth_t I<tid>, int I<sig>)

This function raises a signal for delivery to thread I<tid> only. When one just raises a signal via raise(3) or kill(2), it is delivered to an arbitrary thread which does not have this signal blocked. With pth_raise(3) one can send a signal to a particular thread and it is guaranteed that only this thread gets the signal delivered.
But keep in mind that, nevertheless, the signal's I is still
+configured I-wide. When I is 0, plain thread checking is
+performed, i.e., `C' returns C when thread I
+still exists in the B system but doesn't send any signal to it.
+
+=item int B(pth_t I);
+
+This explicitly yields execution control back to the scheduler thread.
+Usually execution is implicitly transferred back to the scheduler when a
+thread waits for an event. But when a thread has to do larger CPU bursts, it
+can be reasonable to interrupt it explicitly by doing a few pth_yield(3) calls
+to give other threads a chance to execute, too. This obviously is the
+cooperating part of B. A thread I to yield execution, of
+course. But when you want to program a server application with good response
+times, the threads should be cooperative, i.e., they should split their CPU
+bursts into smaller units with this call.
+
+Usually one specifies I as C to indicate to the scheduler that it
+can freely decide which thread to dispatch next. But if one wants to indicate
+to the scheduler that a particular thread should be favored on the next
+dispatching step, one can specify this thread explicitly. This allows the
+usage of the old concept of I where a thread/routine switches to a
+particular cooperating thread. If I is not C and points to a I
+or I thread, it is guaranteed that this thread receives execution
+control on the next dispatching step. If I is in a different state (that
+is, not in C or C) an error is reported.
+
+The function usually returns C for success and only C (with
+C set to C) if I specifies an invalid thread or one which
+is not yet in the new or ready state.
+
+=item int B(pth_time_t I);
+
+This function suspends the execution of the current thread until I
+has elapsed. I is of type C and thus theoretically has
+a resolution of one microsecond. In practice you should neither rely on this
+nor on the thread being awakened exactly after I has elapsed.
It is
+only guaranteed that the thread will sleep at least I. But because
+of the non-preemptive nature of B it can last longer (when another thread
+keeps the CPU for a long time). Additionally the resolution is dependent on
+the implementation of timers by the operating system, and these usually have
+only a resolution of 10 milliseconds or larger. But usually this isn't
+important for an application unless it tries to use this facility for real
+time tasks.
+
+=item int B(pth_event_t I);
+
+This is the link between the scheduler and the event facility (see below for
+the various pth_event_xxx() functions). It is modeled like select(2), i.e., one
+gives this function one or more events (in the event ring specified by I)
+on which the current thread wants to wait. The scheduler awakens the thread
+when one or more of them have occurred, after tagging them as occurred. The I
+argument is a I to an event ring which isn't changed except for the
+tagging. pth_wait(3) returns the number of occurred events and the application
+can use pth_event_occurred(3) to test which events occurred.
+
+=item int B(pth_t I);
+
+This cancels a thread I. How the cancellation is done depends on the
+cancellation state of I which the thread can configure itself. When its
+state is C, a cancellation request is just made pending.
+When it is C, it depends on the cancellation type what is
+performed. When it is C, again the cancellation request is
+just made pending. But when it is C, the thread is
+immediately canceled before pth_cancel(3) returns. The effect of a thread
+cancellation is equal to implicitly forcing the thread to call
+`C' at one of its cancellation points. In B,
+threads enter a cancellation point either explicitly via pth_cancel_point(3)
+or implicitly by waiting for an event.
+
+=item int B(pth_t I);
+
+This is the cruel way to cancel a thread I. When it is already dead and
+waiting to be joined, it just joins it (via `CIC<, NULL)>') and
+this way kicks it out of the system.
Else it forces the thread to be non-joinable
+and to allow asynchronous cancellation, and then cancels it via
+`CIC<)>'.
+
+=item int B(pth_t I, void **I);
+
+This joins the current thread with the thread specified via I. It first
+suspends the current thread until the I thread has terminated. Then it is
+awakened and stores the value of I's pth_exit(3) call into *I (if
+I is not C) and returns to the caller. A thread can be joined
+only when it was I spawned with C. A thread can only be
+joined once, i.e., after the pth_join(3) call the thread I is removed
+from the system.
+
+=item void B(void *I);
+
+This terminates the current thread. Whether it is immediately removed from the
+system or inserted into the dead queue of the scheduler depends on its join
+type which was specified at spawning time. When it was spawned with
+C it is immediately removed and I is ignored.
+Else the thread is inserted into the dead queue and I is remembered
+for a pth_join(3) call by another thread.
+
+=back
+
+=head2 Utilities
+
+The following functions are utility functions.
+
+=over 4
+
+=item int B(int I, int I);
+
+This switches the non-blocking mode flag on file descriptor I. The
+argument I can be C for switching I into blocking
+I/O mode, C for switching I into non-blocking I/O
+mode, or C for just polling the current mode. The current mode
+is returned (either C or C) or
+C on error. Keep in mind that since B 1.1 there is no
+longer a requirement to manually switch a file descriptor into non-blocking
+mode in order to use it. This is automatically done temporarily inside B.
+Instead, when you now switch a file descriptor explicitly into non-blocking
+mode, pth_read(3) or pth_write(3) will never block the current thread.
+
+=item pth_time_t B(long I, long I);
+
+This is a constructor for a C structure; it is a convenience
+function to avoid temporary structure values. It returns a I
+structure which holds the absolute time value specified by I and I.
+
+=item pth_time_t B(long I, long I);
+
+This is a constructor for a C structure; it is a convenience
+function to avoid temporary structure values. It returns a I
+structure which holds the absolute time value calculated by adding I and
+I to the current time.
+
+=item Sfdisc_t *B(void);
+
+This function is always available, but only reasonably usable when B
+was built with B support (C<--with-sfio> option) and C is
+then defined by C. It is useful for applications which want to use the
+comprehensive B I/O library with the B threading library. Then this
+function can be used to get an B discipline structure (C)
+which can be pushed onto B streams (C) in order to let these
+streams use pth_read(3)/pth_write(3) instead of read(2)/write(2). The benefit
+is that this way I/O on the B stream blocks only the current thread
+instead of the whole process. The application has to free(3) the C
+structure when it is no longer needed. The Sfio package can be found at
+http://www.research.att.com/sw/tools/sfio/.
+
+=back
+
+=head2 Cancellation Management
+
+B supports POSIX style thread cancellation via pth_cancel(3) and the
+following two related functions:
+
+=over 4
+
+=item void B(int I, int *I);
+
+This manages the cancellation state of the current thread. When I
+is not C, the function stores the old cancellation state under the
+variable pointed to by I. When I is not C<0>, it sets the
+new cancellation state. The old state is saved before the new state is set. A
+state is a combination of C or C and
+C or C.
+C (or C) is the
+default state where cancellation is possible but only at cancellation points.
+Use C to completely disable cancellation for a thread and
+C for allowing asynchronous cancellations, i.e.,
+cancellations which can happen at any time.
+
+=item void B(void);
+
+This explicitly enters a cancellation point. When the current cancellation
+state is C or no cancellation request is pending, this has
+no side-effect and returns immediately. Else it calls
+`C'.
+
+=back
+
+=head2 Event Handling
+
+B has a very flexible event facility which is linked into the scheduler
+through the pth_wait(3) function. The following functions provide the handling
+of event rings.
+
+=over 4
+
+=item pth_event_t B(unsigned long I, ...);
+
+This creates a new event ring consisting of a single initial event. The type
+of the generated event is specified by I. The following types are
+available:
+
+=over 4
+
+=item C
+
+This is a file descriptor event. One or more of C,
+C or C have to be OR-ed into
+I to specify on which state of the file descriptor you want to wait. The
+file descriptor itself has to be given as an additional argument. Example:
+`C'.
+
+=item C
+
+This is a multiple file descriptor event modeled directly after the select(2)
+call (actually it is also used to implement pth_select(3) internally). It is a
+convenient way to wait for a large set of file descriptors at once and at each
+file descriptor for a different type of state. Additionally, as a nice
+side-effect, one receives the number of file descriptors which caused the
+event to occur (using BSD semantics, i.e., when a file descriptor occurs in
+two sets it is counted twice). The arguments correspond directly to the
+select(2) function arguments except that there is no timeout argument (because
+timeouts can already be handled via C events).
+
+Example: `C' where
+C has to be of type `C', C has to be of type `C' and
+C, C and C have to be of type `C' (see
+select(2)). The number of occurred file descriptors is stored in C.
+
+=item C
+
+This is a signal set event. The two additional arguments have to be a pointer
+to a signal set (type `C') and a pointer to a signal number
+variable (type `C'). This event waits until one of the signals in
+the signal set occurs. As a result the occurred signal number is stored in
+the second additional argument. Keep in mind that the B scheduler doesn't
+block signals automatically.
So when you want to wait for a signal with this
+event you have to block it via sigprocmask(2), or it will be delivered without
+your notice. Example: `C'.
+
+=item C
+
+This is a time point event. The additional argument has to be of type
+C (usually generated on-the-fly via pth_time(3)). This event
+waits until the specified time point has elapsed. Keep in mind that the value
+is an absolute time point and not an offset. When you want to wait for a
+specified amount of time, you have to add the current time to the offset
+(usually achieved on-the-fly via pth_timeout(3)). Example:
+`C'.
+
+=item C
+
+This is a message port event. The additional argument has to be of type
+C. This event waits until one or more messages are received
+on the specified message port. Example: `C'.
+
+=item C
+
+This is a thread event. The additional argument has to be of type C.
+One of C, C, C
+or C has to be OR-ed into I to specify on which
+state of the thread you want to wait. Example:
+`C'.
+
+=item C
+
+This is a custom callback function event. Three additional arguments
+have to be given with the following types: `C',
+`C' and `C'. The first is a function pointer to
+a check function and the second argument is a user-supplied context
+value which is passed to this function. The scheduler calls this
+function on a regular basis (on its own scheduler stack, so be very
+careful!) and the thread is kept sleeping while the function returns
+C. Once it has returned C the thread will be awakened. The
+check interval is defined by the third argument, i.e., the check
+function is not polled again until this amount of time has elapsed. Example:
+`C'.
+
+=back
+
+=item unsigned long B(pth_event_t I);
+
+This returns the type of event I. It is a combination of the describing
+C and C values. This is especially useful to know
+which arguments have to be supplied to the pth_event_extract(3) function.
+
+=item int B(pth_event_t I, ...);
+
+When pth_event(3) is treated like sprintf(3), then this function is
+sscanf(3), i.e., it is the inverse operation of pth_event(3). This means that
+it can be used to extract the ingredients of an event. The ingredients are
+stored into variables which are given as pointers on the variable argument
+list. Which pointers have to be present depends on the event type and has to
+be determined by the caller beforehand via pth_event_typeof(3).
+
+To make it clear: when you constructed I via `C' you have to extract it via
+`C', etc. For multiple arguments of an event the
+order of the pointer arguments is the same as for pth_event(3). But always
+keep in mind that you have to supply I to I and
+these variables have to be of the same type as the arguments of pth_event(3)
+required.
+
+=item pth_event_t B(pth_event_t I, ...);
+
+This concatenates one or more additional event rings to the event ring I
+and returns I. The end of the argument list has to be marked with a
+C argument. Use this function to create real event rings out of the
+single-event rings created by pth_event(3).
+
+=item pth_event_t B(pth_event_t I);
+
+This isolates the event I from possibly appended events in the event ring.
+When only one event exists in I, this returns C. When remaining
+events exist, they form a new event ring which is returned.
+
+=item pth_event_t B(pth_event_t I, int I);
+
+This walks to the next (when I is C) or previous
+(when I is C) event in the event ring I and
+returns this newly reached event. Additionally C can be
+OR-ed into I to walk to the next/previous occurred event in the
+ring I.
+
+=item int B(pth_event_t I);
+
+This checks whether the event I occurred. This is a fast operation because
+only a tag on I is checked, which was either set or not yet set by the
+scheduler. In other words: this doesn't check the event itself, it just checks
+the last knowledge of the scheduler.
+
+=item int B(pth_event_t I, int I);
+
+This deallocates the event I (when I is C) or all
+events appended to the event ring under I (when I is
+C).
+
+=back
+
+=head2 Key-Based Storage
+
+The following functions provide thread-local storage through unique keys
+similar to the POSIX B API. Use this for thread-specific global data.
+
+=over 4
+
+=item int B(pth_key_t *I, void (*I)(void *));
+
+This creates a new unique key and stores it in I. Additionally I
+can specify a destructor function which is called on the current thread's
+termination with the I.
+
+=item int B(pth_key_t I);
+
+This explicitly destroys a key I.
+
+=item int B(pth_key_t I, const void *I);
+
+This stores I under I.
+
+=item void *B(pth_key_t I);
+
+This retrieves the value under I.
+
+=back
+
+=head2 Message Port Communication
+
+The following functions provide message ports which can be used for efficient
+and flexible inter-thread communication.
+
+=over 4
+
+=item pth_msgport_t B(const char *I);
+
+This returns a pointer to a new message port with name I. The I
+can be used by other threads via pth_msgport_find(3) to find the message port
+in case they do not directly know the pointer to it.
+
+=item void B(pth_msgport_t I);
+
+This destroys a message port I. Beforehand, all pending messages on it
+are replied to their origin message port.
+
+=item pth_msgport_t B(const char *I);
+
+This finds a message port in the system by I and returns the pointer to
+it.
+
+=item int B(pth_msgport_t I);
+
+This returns the number of pending messages on message port I.
+
+=item int B(pth_msgport_t I, pth_message_t *I);
+
+This puts (or sends) a message I to message port I.
+
+=item pth_message_t *B(pth_msgport_t I);
+
+This gets (or receives) the top message from message port I. Incoming
+messages are always kept in a queue, so there can be more pending messages, of
+course.
+
+=item int B(pth_message_t *I);
+
+This replies a message I to the message port of the sender.
+
+=back
+
+=head2 Thread Cleanups
+
+The following functions provide per-thread cleanup functions.
+
+=over 4
+
+=item int B(void (*I)(void *), void *I);
+
+This pushes the routine I onto the stack of cleanup routines for the
+current thread. These routines are called in LIFO order when the thread
+terminates.
+
+=item int B(int I);
+
+This pops the top-most routine from the stack of cleanup routines for the
+current thread. When I is C the routine is additionally called.
+
+=back
+
+=head2 Process Forking
+
+The following functions provide some special support for process forking
+situations inside the threading environment.
+
+=over 4
+
+=item int B(void (*I)(void *), void (*)(void *I), void (*)(void *I), void *I);
+
+This function declares forking handlers to be called before and after
+pth_fork(3), in the context of the thread that called pth_fork(3). The
+I handler is called before fork(2) processing commences. The
+I handler is called after fork(2) processing completes in the parent
+process. The I handler is called after fork(2) processing completes in
+the child process. If no handling is desired at one or more of these three
+points, the corresponding handler can be given as C. Each handler is
+called with I as the argument.
+
+The order of calls to pth_atfork_push(3) is significant. The I and
+I handlers are called in the order in which they were established by
+calls to pth_atfork_push(3), i.e., FIFO. The I fork handlers are
+called in the opposite order, i.e., LIFO.
+
+=item int B(void);
+
+This removes the top-most handlers on the forking handler stack which were
+established with the last pth_atfork_push(3) call. It returns C when no
+more handlers could be removed from the stack.
+
+=item pid_t B(void);
+
+This is a variant of fork(2) with the difference that only the current thread
+is forked into a separate process, i.e., in the parent process nothing changes
+while in the child process all threads are gone except for the scheduler and
+the calling thread. When you really want to duplicate all threads in the
+current process you should use fork(2) directly. But this is usually not
+reasonable. Additionally this function takes care of forking handlers as
+established by pth_atfork_push(3).
+
+=back
+
+=head2 Synchronization
+
+The following functions provide synchronization support via mutual exclusion
+locks (B), read-write locks (B), condition variables (B)
+and barriers (B). Keep in mind that in a non-preemptive threading
+system like B this might sound unnecessary at first glance, because a
+thread isn't interrupted by the system. Actually, when you have a critical
+code section which doesn't contain any pth_xxx() functions, you don't need any
+mutex to protect it, of course.
+
+But when your critical code section contains any pth_xxx() function the chance
+is high that these temporarily switch to the scheduler. And this way other
+threads can make progress and enter your critical code section, too. This is
+especially true for critical code sections which implicitly or explicitly use
+the event mechanism.
+
+=over 4
+
+=item int B(pth_mutex_t *I);
+
+This dynamically initializes a mutex variable of type `C'.
+Alternatively one can also use static initialization via `C'.
+
+=item int B(pth_mutex_t *I, int I, pth_event_t I);
+
+This acquires a mutex I. If the mutex is already locked by another
+thread, the current thread's execution is suspended until the mutex is
+unlocked again or, additionally, the extra events in I occurred (when I
+is not C). Recursive locking is explicitly supported, i.e., a thread is
+allowed to acquire a mutex more than once before it is released.
But it then also has to be
+released the same number of times until the mutex is again lockable by others.
+When I is C this function never suspends execution. Instead it
+returns C with C set to C.
+
+=item int B(pth_mutex_t *I);
+
+This decrements the recursion locking count on I and when it reaches zero
+it releases the mutex I.
+
+=item int B(pth_rwlock_t *I);
+
+This dynamically initializes a read-write lock variable of type
+`C'. Alternatively one can also use static initialization
+via `C'.
+
+=item int B(pth_rwlock_t *I, int I, int I, pth_event_t I);
+
+This acquires a read-only (when I is C) or a read-write
+(when I is C) lock I. When the lock is only locked
+by other threads in read-only mode, the lock succeeds. But when one thread
+holds a read-write lock, all locking attempts suspend the current thread until
+this lock is released again. Additionally, events can be given in I to let
+the locking time out, etc. When I is C this function never suspends
+execution. Instead it returns C with C set to C.
+
+=item int B(pth_rwlock_t *I);
+
+This releases a previously acquired (read-only or read-write) lock.
+
+=item int B(pth_cond_t *I);
+
+This dynamically initializes a condition variable of type
+`C'. Alternatively one can also use static initialization via
+`C'.
+
+=item int B(pth_cond_t *I, pth_mutex_t *I, pth_event_t I);
+
+This awaits a condition situation. The caller has to follow the semantics of
+POSIX condition variables: I has to be acquired before this
+function is called. The execution of the current thread is then suspended
+either until the events in I occurred (when I is not C) or
+I was notified by another thread via pth_cond_notify(3). While the
+thread is waiting, I is released. Before it returns, I is
+reacquired.
+
+=item int B(pth_cond_t *I, int I);
+
+This notifies one or all threads which are waiting on I. When
+I is C all threads are notified, else only a single
+(unspecified) one.
+
+=item int B(pth_barrier_t *I, int I);
+
+This dynamically initializes a barrier variable of type `C'.
+Alternatively one can also use static initialization via `CIC<)>'. + +=item int B(pth_barrier_t *I); + +This function reaches a barrier I. If this is the last thread (as +specified by I on init of I) all threads are awakened. +Else the current thread is suspended until the last thread reached the barrier +and this way awakes all threads. The function returns (beside C on +error) the value C for any thread which neither reached the barrier as +the first nor the last thread; C for the thread which +reached the barrier as the first thread and C for the +thread which reached the barrier as the last thread. + +=back + +=head2 Generalized POSIX Replacement API + +The following functions are generalized replacements functions for the POSIX +API, i.e., they are similar to the functions under `B' but all have an additional event argument which can be used +for timeouts, etc. + +=over 4 + +=item int B(const sigset_t *I, int *I, pth_event_t I); + +This is equal to pth_sigwait(3) (see below), but has an additional event +argument I. When pth_sigwait(3) suspends the current threads execution it +usually only uses the signal event on I to awake. With this function any +number of extra events can be used to awake the current thread (remember that +I actually is an event I). + +=item int B(int I, const struct sockaddr *I, socklen_t I, pth_event_t I); + +This is equal to pth_connect(3) (see below), but has an additional event +argument I. When pth_connect(3) suspends the current threads execution it +usually only uses the I/O event on I to awake. With this function any +number of extra events can be used to awake the current thread (remember that +I actually is an event I). + +=item int B(int I, struct sockaddr *I, socklen_t *I, pth_event_t I); + +This is equal to pth_accept(3) (see below), but has an additional event +argument I. When pth_accept(3) suspends the current threads execution it +usually only uses the I/O event on I to awake. 
With this function any +number of extra events can be used to awake the current thread (remember that +I actually is an event I). + +=item int B(int I, fd_set *I, fd_set *I, fd_set *I, struct timeval *I, pth_event_t I); + +This is equal to pth_select(3) (see below), but has an additional event +argument I. When pth_select(3) suspends the current threads execution it +usually only uses the I/O event on I, I and I to awake. With +this function any number of extra events can be used to awake the current +thread (remember that I actually is an event I). + +=item int B(struct pollfd *I, unsigned int I, int I, pth_event_t I); + +This is equal to pth_poll(3) (see below), but has an additional event argument +I. When pth_poll(3) suspends the current threads execution it usually only +uses the I/O event on I to awake. With this function any number of extra +events can be used to awake the current thread (remember that I actually +is an event I). + +=item ssize_t B(int I, void *I, size_t I, pth_event_t I); + +This is equal to pth_read(3) (see below), but has an additional event argument +I. When pth_read(3) suspends the current threads execution it usually only +uses the I/O event on I to awake. With this function any number of extra +events can be used to awake the current thread (remember that I actually +is an event I). + +=item ssize_t B(int I, const struct iovec *I, int I, pth_event_t I); + +This is equal to pth_readv(3) (see below), but has an additional event +argument I. When pth_readv(3) suspends the current threads execution it +usually only uses the I/O event on I to awake. With this function any +number of extra events can be used to awake the current thread (remember that +I actually is an event I). + +=item ssize_t B(int I, const void *I, size_t I, pth_event_t I); + +This is equal to pth_write(3) (see below), but has an additional event argument +I. When pth_write(3) suspends the current threads execution it usually +only uses the I/O event on I to awake. 
With this function any number of +extra events can be used to awake the current thread (remember that I +actually is an event I). + +=item ssize_t B(int I, const struct iovec *I, int I, pth_event_t I); + +This is equal to pth_writev(3) (see below), but has an additional event +argument I. When pth_writev(3) suspends the current threads execution it +usually only uses the I/O event on I to awake. With this function any +number of extra events can be used to awake the current thread (remember that +I actually is an event I). + +=item ssize_t B(int I, void *I, size_t I, int I, pth_event_t I); + +This is equal to pth_recv(3) (see below), but has an additional event +argument I. When pth_recv(3) suspends the current threads execution it +usually only uses the I/O event on I to awake. With this function any +number of extra events can be used to awake the current thread (remember that +I actually is an event I). + +=item ssize_t B(int I, void *I, size_t I, int I, struct sockaddr *I, socklen_t *I, pth_event_t I); + +This is equal to pth_recvfrom(3) (see below), but has an additional event +argument I. When pth_recvfrom(3) suspends the current threads execution it +usually only uses the I/O event on I to awake. With this function any +number of extra events can be used to awake the current thread (remember that +I actually is an event I). + +=item ssize_t B(int I, const void *I, size_t I, int I, pth_event_t I); + +This is equal to pth_send(3) (see below), but has an additional event +argument I. When pth_send(3) suspends the current threads execution it +usually only uses the I/O event on I to awake. With this function any +number of extra events can be used to awake the current thread (remember that +I actually is an event I). + +=item ssize_t B(int I, const void *I, size_t I, int I, const struct sockaddr *I, socklen_t I, pth_event_t I); + +This is equal to pth_sendto(3) (see below), but has an additional event +argument I. 
When pth_sendto(3) suspends the current thread's execution it
+usually only uses the I/O event on I to awake. With this function any
+number of extra events can be used to awake the current thread (remember that
+I actually is an event I).
+
+=back
+
+=head2 Standard POSIX Replacement API
+
+The following functions are standard replacement functions for the POSIX API.
+The difference is mainly that they suspend only the current thread instead of
+the whole process in case the file descriptors would block.
+
+=over 4
+
+=item int B(unsigned int I);
+
+This is a variant of the 4.3BSD usleep(3) function. It suspends the current
+thread's execution until I microseconds (= I * 1/1000000 sec)
+have elapsed. The thread is guaranteed not to be awakened before this time,
+but because of the non-preemptive scheduling nature of B, it can be
+awakened later, of course. The difference between usleep(3) and pth_usleep(3)
+is that pth_usleep(3) suspends only the execution of the current thread and
+not the whole process.
+
+=item unsigned int B(unsigned int I);
+
+This is a variant of the POSIX sleep(3) function. It
+suspends the current thread's execution until I seconds have elapsed.
+The thread is guaranteed not to be awakened before this time, but because of
+the non-preemptive scheduling nature of B, it can be awakened later, of
+course. The difference between sleep(3) and pth_sleep(3) is that
+pth_sleep(3) suspends only the execution of the current thread and not the
+whole process.
+
+=item pid_t B(pid_t I, int *I, int I);
+
+This is a variant of the POSIX waitpid(2) function. It suspends the
+current thread's execution until I information is available for a
+terminated child process I. The difference between waitpid(2) and
+pth_waitpid(3) is that pth_waitpid(3) suspends only the execution of the
+current thread and not the whole process. For more details about the
+arguments and return code semantics see waitpid(2).
+
+=item int B(int I, const sigset_t *I, sigset_t *I)
+
+This is the B thread-related equivalent of POSIX sigprocmask(2) and
+pthread_sigmask(3), respectively. The arguments I, I and I directly
+relate to sigprocmask(2), because B internally just uses sigprocmask(2)
+here. So alternatively you can also directly call sigprocmask(2), but for
+consistency reasons you should use this function pth_sigmask(3).
+
+=item int B(const sigset_t *I, int *I);
+
+This is a variant of the POSIX.1c sigwait(3) function. It suspends the current
+thread's execution until a signal in I occurs and stores the signal
+number in I. The important point is that the signal is not delivered to a
+signal handler. Instead it is caught by the scheduler only, in order to awake
+the pth_sigwait() call. The trick and noticeable point here is that this way
+you get an application that is aware of asynchronous events yet written
+completely synchronously. When you think about the problem of I
+functions you should recognize that this is a great benefit.
+
+=item int B(int I, const struct sockaddr *I, socklen_t I);
+
+This is a variant of the 4.2BSD connect(2) function. It establishes a
+connection on a socket I to the target specified by I and I.
+The difference between connect(2) and pth_connect(3) is that
+pth_connect(3) suspends only the execution of the current thread and not the
+whole process. For more details about the arguments and return code semantics
+see connect(2).
+
+=item int B(int I, struct sockaddr *I, socklen_t *I);
+
+This is a variant of the 4.2BSD accept(2) function. It accepts a connection on
+a socket by extracting the first connection request on the queue of pending
+connections, creating a new socket with the same properties as I, and
+allocating a new file descriptor for the socket (which is returned). The
+difference between accept(2) and pth_accept(3) is that pth_accept(3)
+suspends only the execution of the current thread and not the whole process.
+For more details about the arguments and return code semantics see accept(2).
+
+=item int B(int I, fd_set *I, fd_set *I, fd_set *I, struct timeval *I);
+
+This is a variant of the 4.2BSD select(2) function. It examines the I/O
+descriptor sets whose addresses are passed in I, I, and I to
+see if some of their descriptors are ready for reading, are ready for writing,
+or have an exceptional condition pending, respectively. For more details
+about the arguments and return code semantics see select(2).
+
+=item int B(struct pollfd *I, unsigned int I, int I);
+
+This is a variant of the SysV poll(2) function. It examines the I/O
+descriptors which are passed in the array I to see if some of them are
+ready for reading, are ready for writing, or have an exceptional condition
+pending, respectively. For more details about the arguments and return code
+semantics see poll(2).
+
+=item ssize_t B(int I, void *I, size_t I);
+
+This is a variant of the POSIX read(2) function. It reads up to I
+bytes into I from file descriptor I. The difference between read(2)
+and pth_read(3) is that pth_read(3) suspends execution of the current
+thread until the file descriptor is ready for reading. For more details about
+the arguments and return code semantics see read(2).
+
+=item ssize_t B(int I, const struct iovec *I, int I);
+
+This is a variant of the POSIX readv(2) function. It reads data from
+file descriptor I into the first I rows of the I vector. The
+difference between readv(2) and pth_readv(3) is that pth_readv(3)
+suspends execution of the current thread until the file descriptor is ready
+for reading. For more details about the arguments and return code semantics
+see readv(2).
+
+=item ssize_t B(int I, const void *I, size_t I);
+
+This is a variant of the POSIX write(2) function. It writes I bytes
+from I to file descriptor I.
The difference between write(2) and
+pth_write(2) is that pth_write(2) suspends execution of the current
+thread until the file descriptor is ready for writing. For more details about
+the arguments and return code semantics see write(2).
+
+=item ssize_t B<pth_writev>(int I<fd>, const struct iovec *I<iov>, int I<iovcnt>);
+
+This is a variant of the POSIX writev(2) function. It writes data to
+file descriptor I<fd> from the first I<iovcnt> rows of the I<iov> vector. The
+difference between writev(2) and pth_writev(2) is that pth_writev(2)
+suspends execution of the current thread until the file descriptor is ready for
+writing. For more details about the arguments and return code semantics see
+writev(2).
+
+=item ssize_t B<pth_pread>(int I<fd>, void *I<buf>, size_t I<nbytes>, off_t I<offset>);
+
+This is a variant of the POSIX pread(3) function. It performs the same action
+as a regular read(2), except that it reads from a given position in the file
+without changing the file pointer. The first three arguments are the same as
+for pth_read(3) with the addition of a fourth argument I<offset> for the
+desired position inside the file.
+
+=item ssize_t B<pth_pwrite>(int I<fd>, const void *I<buf>, size_t I<nbytes>, off_t I<offset>);
+
+This is a variant of the POSIX pwrite(3) function. It performs the same
+action as a regular write(2), except that it writes to a given position in the
+file without changing the file pointer. The first three arguments are the same
+as for pth_write(3) with the addition of a fourth argument I<offset> for the
+desired position inside the file.
+
+=item ssize_t B<pth_recv>(int I<fd>, void *I<buf>, size_t I<nbytes>, int I<flags>);
+
+This is a variant of the SUSv2 recv(2) function and equal to
+``pth_recvfrom(fd, buf, nbytes, flags, NULL, 0)''.
+
+=item ssize_t B<pth_recvfrom>(int I<fd>, void *I<buf>, size_t I<nbytes>, int I<flags>, struct sockaddr *I<from>, socklen_t *I<fromlen>);
+
+This is a variant of the SUSv2 recvfrom(2) function. It reads up to
+I<nbytes> bytes into I<buf> from file descriptor I<fd> while using
+I<flags> and I<from>/I<fromlen>.
The difference between recvfrom(2) and
+pth_recvfrom(2) is that pth_recvfrom(2) suspends execution of the
+current thread until the file descriptor is ready for reading. For more
+details about the arguments and return code semantics see recvfrom(2).
+
+=item ssize_t B<pth_send>(int I<fd>, const void *I<buf>, size_t I<nbytes>, int I<flags>);
+
+This is a variant of the SUSv2 send(2) function and equal to
+``pth_sendto(fd, buf, nbytes, flags, NULL, 0)''.
+
+=item ssize_t B<pth_sendto>(int I<fd>, const void *I<buf>, size_t I<nbytes>, int I<flags>, const struct sockaddr *I<to>, socklen_t I<tolen>);
+
+This is a variant of the SUSv2 sendto(2) function. It writes I<nbytes>
+bytes from I<buf> to file descriptor I<fd> while using I<flags> and
+I<to>/I<tolen>. The difference between sendto(2) and pth_sendto(2) is
+that pth_sendto(2) suspends execution of the current thread until
+the file descriptor is ready for writing. For more details about the
+arguments and return code semantics see sendto(2).
+
+=back
+
+=head1 EXAMPLE
+
+The following example is a useless server which does nothing more than
+listening on TCP port 12345 and displaying the current time to the
+socket when a connection was established. For each incoming connection a
+thread is spawned. Additionally, to see more multithreading, a useless
+ticker thread runs simultaneously which outputs the current time to
+C<stderr> every 5 seconds. The example contains I<no> error checking and
+is I<only> intended to show you the look and feel of B<Pth>.
+
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+ #include <signal.h>
+ #include <time.h>
+ #include <unistd.h>
+ #include <netdb.h>
+ #include <sys/types.h>
+ #include <sys/socket.h>
+ #include <netinet/in.h>
+ #include <arpa/inet.h>
+ #include "pth.h"
+
+ #define PORT 12345
+
+ /* the socket connection handler thread */
+ static void *handler(void *_arg)
+ {
+     int fd = (int)_arg;
+     time_t now;
+     char *ct;
+
+     now = time(NULL);
+     ct = ctime(&now);
+     pth_write(fd, ct, strlen(ct));
+     close(fd);
+     return NULL;
+ }
+
+ /* the stderr time ticker thread */
+ static void *ticker(void *_arg)
+ {
+     time_t now;
+     char *ct;
+     float load;
+
+     for (;;) {
+         pth_sleep(5);
+         now = time(NULL);
+         ct = ctime(&now);
+         ct[strlen(ct)-1] = '\0';
+         pth_ctrl(PTH_CTRL_GETAVLOAD, &load);
+         printf("ticker: time: %s, average load: %.2f\n", ct, load);
+     }
+ }
+
+ /* the main thread/procedure */
+ int main(int argc, char *argv[])
+ {
+     pth_attr_t attr;
+     struct sockaddr_in sar;
+     struct protoent *pe;
+     struct sockaddr_in peer_addr;
+     int peer_len;
+     int sa, sw;
+
+     pth_init();
+     signal(SIGPIPE, SIG_IGN);
+
+     attr = pth_attr_new();
+     pth_attr_set(attr, PTH_ATTR_NAME, "ticker");
+     pth_attr_set(attr, PTH_ATTR_STACK_SIZE, 64*1024);
+     pth_attr_set(attr, PTH_ATTR_JOINABLE, FALSE);
+     pth_spawn(attr, ticker, NULL);
+
+     pe = getprotobyname("tcp");
+     sa = socket(AF_INET, SOCK_STREAM, pe->p_proto);
+     sar.sin_family = AF_INET;
+     sar.sin_addr.s_addr = INADDR_ANY;
+     sar.sin_port = htons(PORT);
+     bind(sa, (struct sockaddr *)&sar, sizeof(struct sockaddr_in));
+     listen(sa, 10);
+
+     pth_attr_set(attr, PTH_ATTR_NAME, "handler");
+     for (;;) {
+         peer_len = sizeof(peer_addr);
+         sw = pth_accept(sa, (struct sockaddr *)&peer_addr, &peer_len);
+         pth_spawn(attr, handler, (void *)sw);
+     }
+ }
+
+=head1 BUILD ENVIRONMENTS
+
+In this section we will discuss the canonical ways to establish the build
+environment for a B<Pth> based program. The possibilities supported by B<Pth>
+range from very simple environments to rather complex ones.
+
+=head2 Manual Build Environment (Novice)
+
+As a first example, assume we have the above test program staying in the
+source file C<foo.c>. Then we can create a very simple build environment by
+just adding the following C<Makefile>:
+
+ $ vi Makefile
+ | CC      = cc
+ | CFLAGS  = `pth-config --cflags`
+ | LDFLAGS = `pth-config --ldflags`
+ | LIBS    = `pth-config --libs`
+ |
+ | all: foo
+ | foo: foo.o
+ |     $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
+ | foo.o: foo.c
+ |     $(CC) $(CFLAGS) -c foo.c
+ | clean:
+ |     rm -f foo foo.o
+
+This imports the necessary compiler and linker flags on-the-fly from the
+B<Pth> installation via its C<pth-config> program. This approach is
+straight-forward and works fine for small projects.
+
+=head2 Autoconf Build Environment (Advanced)
+
+The previous approach is simple but inflexible. First, to speed up
+building, it would be nice not to expand the compiler and linker flags
+every time the compiler is started. Second, it would be useful to
+also be able to build against an uninstalled B<Pth>, that is, against
+a B<Pth> source tree which was just configured and built, but not
+installed. Third, it would also be useful to allow checking of the
+B<Pth> version to make sure it is at least a minimum required version.
+And finally, it would also be great to make sure B<Pth> works correctly
+by first performing some sanity compile and run-time checks. All this
+can be done if we use GNU B<autoconf> and the C<AC_CHECK_PTH> macro
+provided by B<Pth>. For this, we establish the following three files:
+
+First we again need the C<Makefile>, but this time it contains B<autoconf>
+placeholders and additional cleanup targets.
And we create it under the name
+C<Makefile.in>, because it is now an input file for B<autoconf>:
+
+ $ vi Makefile.in
+ | CC      = @CC@
+ | CFLAGS  = @CFLAGS@
+ | LDFLAGS = @LDFLAGS@
+ | LIBS    = @LIBS@
+ |
+ | all: foo
+ | foo: foo.o
+ |     $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
+ | foo.o: foo.c
+ |     $(CC) $(CFLAGS) -c foo.c
+ | clean:
+ |     rm -f foo foo.o
+ | distclean:
+ |     rm -f foo foo.o
+ |     rm -f config.log config.status config.cache
+ |     rm -f Makefile
+
+Because B<autoconf> generates additional files, we added a canonical
+C<distclean> target which cleans these up, too. Second, we write
+a (minimalistic) B<autoconf> script specification in a file
+C<configure.in>:
+
+ $ vi configure.in
+ | AC_INIT(Makefile.in)
+ | AC_CHECK_PTH(1.3.0)
+ | AC_OUTPUT(Makefile)
+
+Then we let the C<aclocal> program generate for us an C<aclocal.m4>
+file containing B<Pth>'s C<AC_CHECK_PTH> macro. Then we generate the final
+C<configure> script out of this C<aclocal.m4> file and the C<configure.in>
+file:
+
+ $ aclocal --acdir=`pth-config --acdir`
+ $ autoconf
+
+After these steps, the working directory should look similar to this:
+
+ $ ls -l
+ -rw-r--r--  1 rse  users    176 Nov  3 11:11 Makefile.in
+ -rw-r--r--  1 rse  users  15314 Nov  3 11:16 aclocal.m4
+ -rwxr-xr-x  1 rse  users  52045 Nov  3 11:16 configure
+ -rw-r--r--  1 rse  users     63 Nov  3 11:11 configure.in
+ -rw-r--r--  1 rse  users   4227 Nov  3 11:11 foo.c
+
+If we now run C<configure> we get a correct C<Makefile> which
+immediately can be used to build C<foo> (assuming that B<Pth> is already
+installed somewhere, so that C<pth-config> is in C<$PATH>):
+
+ $ ./configure
+ creating cache ./config.cache
+ checking for gcc... gcc
+ checking whether the C compiler (gcc ) works... yes
+ checking whether the C compiler (gcc ) is a cross-compiler... no
+ checking whether we are using GNU C... yes
+ checking whether gcc accepts -g... yes
+ checking how to run the C preprocessor... gcc -E
+ checking for GNU Pth...
version 1.3.0, installed under /usr/local
+ updating cache ./config.cache
+ creating ./config.status
+ creating Makefile
+ $ make
+ gcc -g -O2 -I/usr/local/include -c foo.c
+ gcc -L/usr/local/lib -o foo foo.o -lpth
+
+If B<Pth> is installed in non-standard locations or C<pth-config>
+is not in C<$PATH>, one just has to drop the C<configure> script
+a note about the location by running C<configure> with the option
+C<--with-pth=>I<dir> (where I<dir> is the argument which was used with
+the C<--prefix> option when B<Pth> was installed).
+
+=head2 Autoconf Build Environment with Local Copy of Pth (Expert)
+
+Finally let us assume the C<foo> program stays under either a I<GPL> or
+I<LGPL> distribution license and we want to make it a stand-alone package for
+easier distribution and installation. That is, we don't want the
+end-user to first have to install B<Pth> just to allow our C<foo> package to
+compile. For this, it is a convenient practice to include the required
+libraries (here B<Pth>) into the source tree of the package (here C<foo>).
+B<Pth> ships with all necessary support to allow us to easily achieve this
+approach. Say, we want B<Pth> in a subdirectory named C<pth> and this
+directory should be seamlessly integrated into the configuration and build
+process of C<foo>.
+
+First we again start with the C<Makefile.in>, but this time it is a more
+advanced version which supports subdirectory movement:
+
+ $ vi Makefile.in
+ | CC      = @CC@
+ | CFLAGS  = @CFLAGS@
+ | LDFLAGS = @LDFLAGS@
+ | LIBS    = @LIBS@
+ |
+ | SUBDIRS = pth
+ |
+ | all: subdirs_all foo
+ |
+ | subdirs_all:
+ |     @$(MAKE) $(MFLAGS) subdirs TARGET=all
+ | subdirs_clean:
+ |     @$(MAKE) $(MFLAGS) subdirs TARGET=clean
+ | subdirs_distclean:
+ |     @$(MAKE) $(MFLAGS) subdirs TARGET=distclean
+ | subdirs:
+ |     @for subdir in $(SUBDIRS); do \
+ |         echo "===> $$subdir ($(TARGET))"; \
+ |         (cd $$subdir; $(MAKE) $(MFLAGS) $(TARGET) || exit 1) || exit 1; \
+ |         echo "<=== $$subdir"; \
+ |     done
+ |
+ | foo: foo.o
+ |     $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
+ | foo.o: foo.c
+ |     $(CC) $(CFLAGS) -c foo.c
+ |
+ | clean: subdirs_clean
+ |     rm -f foo foo.o
+ | distclean: subdirs_distclean
+ |     rm -f foo foo.o
+ |     rm -f config.log config.status config.cache
+ |     rm -f Makefile
+
+Then we create a slightly different B<autoconf> script specification in
+C<configure.in>:
+
+ $ vi configure.in
+ | AC_INIT(Makefile.in)
+ | AC_CONFIG_AUX_DIR(pth)
+ | AC_CHECK_PTH(1.3.0, subdir:pth --disable-tests)
+ | AC_CONFIG_SUBDIRS(pth)
+ | AC_OUTPUT(Makefile)
+
+Here we provided a default value for C<configure>'s C<--with-pth> option as
+the second argument to C<AC_CHECK_PTH> which indicates that B<Pth> can be
+found in the subdirectory named C<pth>. Additionally we specified that the
+C<--disable-tests> option of B<Pth> should be passed to the C<pth>
+subdirectory, because we need only to build the B<Pth> library itself. And we
+added a C<AC_CONFIG_SUBDIRS(pth)> call which indicates to B<autoconf> that it
+should configure the C<pth> subdirectory, too. The C<AC_CONFIG_AUX_DIR(pth)>
+directive was added just to make B<autoconf> happy, because it wants to find
+an C<install.sh> or C<shtool> script if C<AC_CONFIG_SUBDIRS> is used.
+
+Now we let the C<aclocal> program again generate for us an
+C<aclocal.m4> file with the contents of B<Pth>'s C<AC_CHECK_PTH> macro.
+Finally we generate the C<configure> script out of this C<aclocal.m4>
+file and the C<configure.in> file.
+
+ $ aclocal --acdir=`pth-config --acdir`
+ $ autoconf
+
+Now we have to create the C<pth> subdirectory itself.
For this, we extract the
+B<Pth> distribution into the C<foo> source tree and just rename it to C<pth>:
+
+ $ gunzip -c pth-X.Y.Z.tar.gz | tar xf -
+ $ mv pth-X.Y.Z pth
+
+To reduce the size of the C<pth> subdirectory, we can strip down
+the B<Pth> sources to a minimum with the I<striptease> feature:
+
+ $ cd pth
+ $ ./configure
+ $ make striptease
+ $ cd ..
+
+After this the source tree of C<foo> should look similar to this:
+
+ $ ls -l
+ -rw-r--r--  1 rse  users    709 Nov  3 11:51 Makefile.in
+ -rw-r--r--  1 rse  users  16431 Nov  3 12:20 aclocal.m4
+ -rwxr-xr-x  1 rse  users  57403 Nov  3 12:21 configure
+ -rw-r--r--  1 rse  users    129 Nov  3 12:21 configure.in
+ -rw-r--r--  1 rse  users   4227 Nov  3 11:11 foo.c
+ drwxr-xr-x  2 rse  users   3584 Nov  3 12:36 pth
+ $ ls -l pth/
+ -rw-rw-r--  1 rse  users  26344 Nov  1 20:12 COPYING
+ -rw-rw-r--  1 rse  users   2042 Nov  3 12:36 Makefile.in
+ -rw-rw-r--  1 rse  users   3967 Nov  1 19:48 README
+ -rw-rw-r--  1 rse  users    340 Nov  3 12:36 README.1st
+ -rw-rw-r--  1 rse  users  28719 Oct 31 17:06 config.guess
+ -rw-rw-r--  1 rse  users  24274 Aug 18 13:31 config.sub
+ -rwxrwxr-x  1 rse  users 155141 Nov  3 12:36 configure
+ -rw-rw-r--  1 rse  users 162021 Nov  3 12:36 pth.c
+ -rw-rw-r--  1 rse  users  18687 Nov  2 15:19 pth.h.in
+ -rw-rw-r--  1 rse  users   5251 Oct 31 12:46 pth_acdef.h.in
+ -rw-rw-r--  1 rse  users   2120 Nov  1 11:27 pth_acmac.h.in
+ -rw-rw-r--  1 rse  users   2323 Nov  1 11:27 pth_p.h.in
+ -rw-rw-r--  1 rse  users    946 Nov  1 11:27 pth_vers.c
+ -rw-rw-r--  1 rse  users  26848 Nov  1 11:27 pthread.c
+ -rw-rw-r--  1 rse  users  18772 Nov  1 11:27 pthread.h.in
+ -rwxrwxr-x  1 rse  users  26188 Nov  3 12:36 shtool
+
+Now when we configure and build the C<foo> package it looks similar to this:
+
+ $ ./configure
+ creating cache ./config.cache
+ checking for gcc... gcc
+ checking whether the C compiler (gcc ) works... yes
+ checking whether the C compiler (gcc ) is a cross-compiler... no
+ checking whether we are using GNU C... yes
+ checking whether gcc accepts -g... yes
+ checking how to run the C preprocessor... gcc -E
+ checking for GNU Pth...
version 1.3.0, local under pth
+ updating cache ./config.cache
+ creating ./config.status
+ creating Makefile
+ configuring in pth
+ running /bin/sh ./configure --enable-subdir --enable-batch
+ --disable-tests --cache-file=.././config.cache --srcdir=.
+ loading cache .././config.cache
+ checking for gcc... (cached) gcc
+ checking whether the C compiler (gcc ) works... yes
+ checking whether the C compiler (gcc ) is a cross-compiler... no
+ [...]
+ $ make
+ ===> pth (all)
+ ./shtool scpp -o pth_p.h -t pth_p.h.in -Dcpp -Cintern -M '==#==' pth.c
+ pth_vers.c
+ gcc -c -I. -O2 -pipe pth.c
+ gcc -c -I. -O2 -pipe pth_vers.c
+ ar rc libpth.a pth.o pth_vers.o
+ ranlib libpth.a
+ <=== pth
+ gcc -g -O2 -Ipth -c foo.c
+ gcc -Lpth -o foo foo.o -lpth
+
+As you can see, B<autoconf> now automatically configures the local
+(stripped down) copy of B<Pth> in the subdirectory C<pth> and the
+C<Makefile> automatically builds the subdirectory, too.
+
+=head1 SYSTEM CALL WRAPPER FACILITY
+
+B<Pth> per default uses an explicit API, including the system calls. For
+instance you have to explicitly use pth_read(3) when you need a thread-aware
+read(3) and cannot expect that by just calling read(3) only the current thread
+is blocked. Instead with the standard read(3) call the whole process will be
+blocked. But for some applications (mainly those consisting of lots of
+third-party stuff) this can be inconvenient. Here it is required that a call
+to read(3) `magically' means pth_read(3). The problem here is that such
+magic B<Pth> cannot provide per default because it is not really portable.
+Nevertheless B<Pth> provides a two-step approach to solve this problem:
+
+=head2 Soft System Call Mapping
+
+This variant is available on all platforms and can I<always> be enabled by
+building B<Pth> with C<--enable-syscall-soft>. This then triggers some
+C<#define>'s in the C<pth.h> header which map for instance read(3) to
+pth_read(3), etc.
Currently the following functions are mapped: fork(2),
+sleep(3), sigwait(3), waitpid(2), select(2), poll(2), connect(2),
+accept(2), read(2), write(2).
+
+The drawback of this approach is just that really all source files
+of the application where these function calls occur have to include
+C<pth.h>, of course. And this also means that existing libraries,
+including the vendor's B<libc>, usually will still block the whole
+process if one of their I/O functions blocks.
+
+=head2 Hard System Call Mapping
+
+This variant is available only on those platforms where the syscall(2)
+function exists and there it can be enabled by building B<Pth> with
+C<--enable-syscall-hard>. This then builds wrapper functions (for instance
+read(3)) into the B<Pth> library which internally call the real B<Pth>
+replacement functions (pth_read(3)). Currently the following functions are
+mapped: fork(2), sleep(3), waitpid(2), select(2), poll(2), connect(2),
+accept(2), read(2), write(2).
+
+The drawback of this approach is that it depends on the syscall(2) interface
+and prototype conflicts can occur while building the wrapper functions
+due to different function signatures in the vendor C header files.
+But the advantage of this mapping variant is that the source files of
+the application where these function calls occur do not have to include
+C<pth.h> and that existing libraries, including the vendor's B<libc>,
+magically become thread-aware (and then block only the current thread).
+
+=head1 IMPLEMENTATION NOTES
+
+B<Pth> is very portable because it has only one part which perhaps has
+to be ported to new platforms (the machine context initialization). But
+it is written in a way which works on almost all Unix platforms which
+support makecontext(2) or at least sigstack(2) or sigaltstack(2) [see
+C<pth_mctx.c> for details]. All other B<Pth> code is POSIX and ANSI C
+based only.
+
+The context switching is done via either SUSv2 makecontext(2) or POSIX
+[sig]setjmp(3) and [sig]longjmp(3).
Here all CPU registers, the
+program counter and the stack pointer are switched. Additionally the
+B<Pth> dispatcher also switches the global Unix C<errno> variable [see
+C<pth_mctx.c> for details] and the signal mask (either implicitly via
+sigsetjmp(3) or in an emulated way via explicit sigprocmask(2) calls).
+
+The B<Pth> event manager is mainly select(2) and gettimeofday(2) based,
+i.e., the current time is fetched via gettimeofday(2) once per context
+switch for time calculations and all I/O events are implemented via a
+single central select(2) call [see C<pth_sched.c> for details].
+
+The thread control block management is done via virtual priority
+queues without any additional data structure overhead. For this, the
+queue linkage attributes are part of the thread control blocks and the
+queues are actually implemented as rings with a selected element as the
+entry point [see C<pth_tcb.h> and C<pth_pqueue.c> for details].
+
+Most time critical code sections (especially the dispatcher and event
+manager) are sped up by inlined functions (implemented as ANSI C
+pre-processor macros). Additionally any debugging code is I<completely>
+removed from the source when not built with C<-DPTH_DEBUG> (see Autoconf
+C<--enable-debug> option), i.e., not even stub functions remain [see
+C<pth_debug.c> for details].
+
+=head1 RESTRICTIONS
+
+B<Pth> (intentionally) provides no replacements for non-thread-safe
+functions (like strtok(3) which uses a static internal buffer) or
+synchronous system functions (like gethostbyname(3) which doesn't
+provide an asynchronous mode where it doesn't block). When you want to
+use those functions in your server application together with threads,
+you have to either link the application against special third-party
+libraries (or, for thread-safe/reentrant functions, possibly against an
+existing C<libc_r> of the platform vendor). For an asynchronous DNS
+resolver library use the GNU B<adns> package from Ian Jackson (see
+http://www.gnu.org/software/adns/adns.html).
+
+=head1 HISTORY
+
+The B<Pth> library was designed and implemented between February and
+July 1999 by I<Ralf S. Engelschall> after evaluating numerous (mostly
+preemptive) thread libraries and after intensive discussions with
+I<Peter Simons>, I<Martin Kraemer>, I<Lars Eilebrecht> and I<Ralph
+Babel> related to an experimental (matrix based) non-preemptive C++
+scheduler class written by I<Peter Simons>.
+
+B<Pth> was then implemented in order to combine the I<non-preemptive>
+approach of multithreading (which provides better portability and
+performance) with an API similar to the popular one found in B<Pthread>
+libraries (which provides easy programming).
+
+So the essential idea of the non-preemptive approach was taken over from
+I<Peter Simons>' scheduler. The priority based scheduling algorithm was
+suggested by I<Martin Kraemer>. Some code inspiration also came from
+an experimental threading library (B<rsthreads>) written by I<Robert
+S. Thau> for an ancient internal test version of the Apache webserver.
+The concept and API of message ports was borrowed from AmigaOS' B<Exec>
+subsystem. The concept and idea for the flexible event mechanism came
+from I<Paul Vixie>'s B<eventlib> (which can be found as a part of
+B<BIND> v8).
+
+=head1 BUG REPORTS AND SUPPORT
+
+If you think you have found a bug in B<Pth>, you should send a report as
+complete as possible to I<bug-pth@gnu.org>. If you can, please try to
+fix the problem and include a patch, made with 'C<diff -u3>', in your
+report. Always, at least, include a reasonable amount of description in
+your report to allow the author to deterministically reproduce the bug.
+
+For further support you additionally can subscribe to the
+I<pth-users> mailing list by sending an Email to
+I<pth-users-request@gnu.org> with `C<subscribe pth-users>' (or
+`C<subscribe pth-users> I<address>
' if you want to subscribe
+from a particular Email I<address>
) in the body. Then you can
+discuss your issues with other B<Pth> users by sending messages to
+I<pth-users@gnu.org>. Currently (as of January 2000) you can reach about
+50 Pth users on this mailing list.
+
+=head1 SEE ALSO
+
+=head2 Related Web Locations
+
+`comp.programming.threads Newsgroup Archive',
+http://www.deja.com/topics_if.xp?search=topic&group=comp.programming.threads
+
+`comp.programming.threads Frequently Asked Questions (F.A.Q.)',
+http://www.lambdacs.com/newsgroup/FAQ.html
+
+`I<Multithreading - Definitions and Guidelines>',
+Numeric Quest Inc 1998;
+http://www.numeric-quest.com/lang/multi-frame.html
+
+`I<The Single UNIX Specification, Version 2 - Threads>',
+The Open Group 1997;
+http://www.opengroup.org/onlinepubs/007908799/xsh/threads.html
+
+SMI Thread Resources,
+Sun Microsystems Inc;
+http://www.sun.com/workshop/threads/
+
+Bibliography on threads and multithreading,
+Torsten Amundsen;
+http://liinwww.ira.uka.de/bibliography/Os/threads.html
+
+=head2 Related Books
+
+B. Nichols, D. Buttlar, J.P. Farrell:
+`I<Pthreads Programming>',
+O'Reilly 1996;
+ISBN 1-56592-115-1
+
+B. Lewis, D. J. Berg:
+`I<Multithreaded Programming with Pthreads>',
+Sun Microsystems Press, Prentice Hall 1998;
+ISBN 0-13-680729-1
+
+B. Lewis, D. J. Berg:
+`I<Threads Primer: A Guide To Multithreaded Programming>',
+Prentice Hall 1996;
+ISBN 0-13-443698-9
+
+S. J. Norton, M. D. Dipasquale:
+`I<Thread Time: The Multithreaded Programming Guide>',
+Prentice Hall 1997;
+ISBN 0-13-190067-6
+
+D. R. Butenhof:
+`I<Programming with POSIX Threads>',
+Addison Wesley 1997;
+ISBN 0-201-63392-2
+
+=head2 Related Manpages
+
+pth-config(1), pthread(3).
+
+getcontext(2), setcontext(2), makecontext(2), swapcontext(2),
+sigstack(2), sigaltstack(2), sigaction(2), sigemptyset(2), sigaddset(2),
+sigprocmask(2), sigsuspend(2), sigsetjmp(3), siglongjmp(3), setjmp(3),
+longjmp(3), select(2), gettimeofday(2).
+
+=head1 AUTHOR
+
+ Ralf S. Engelschall
+ rse@engelschall.com
+ www.engelschall.com
+
+=cut