ossp-pkg/sio/BRAINSTORM/Apache-Dean-Thoughts.txt
From dgaudet@arctic.org Mon Jun 28 19:06:50 1999
Path: engelschall.com!mail2news!apache.org!new-httpd-owner-rse+apache=en.muc.de
From: dgaudet@arctic.org (Dean Gaudet)
Newsgroups: en.lists.apache-new-httpd
Subject: Re: async routines
Date: 28 Jun 1999 17:33:24 +0200
Organization: Mail2News at engelschall.com
Lines: 96
Approved: postmaster@m2ndom
Message-ID: <Pine.LNX.3.96dg4.990628081527.31647G-100000@twinlark.arctic.org>
Reply-To: new-httpd@apache.org
NNTP-Posting-Host: en1.engelschall.com
X-Trace: en1.engelschall.com 930584004 99816 141.1.129.1 (28 Jun 1999 15:33:24 GMT)
X-Complaints-To: postmaster@engelschall.com
NNTP-Posting-Date: 28 Jun 1999 15:33:24 GMT
X-Mail2News-Gateway: mail2news.engelschall.com
Xref: engelschall.com en.lists.apache-new-httpd:31280
[hope you don't mind me cc'ing new-httpd, Zach -- I think others will be
interested.]
On Mon, 28 Jun 1999, Zach Brown wrote:
> so Dean, I was wading through the mpm code to see if I could munge the
> sigwait stuff into it.
>
> as far as I could tell, the http protocol routines are still blocking.
> what does the future hold in the way of async routines? :) I basically
> need a way to do something like..
You're still waiting for me to get the async stuff in there... I've done
part of the work -- the BUFF layer now supports non-blocking sockets.
However, the HTTP code will always remain blocking. There's no way I'm
going to try to educate the world on how to write async code... and since
our HTTP code has arbitrary call-outs to third-party modules... It'd
have a drastic effect on everyone to make this change.
But I honestly don't think this is a problem. Here are my observations:
All the popular HTTP clients send their requests in one packet (or two
in the case of a POST and Netscape). So the HTTP code would almost
never have to block while processing the request. It may block while
processing a POST -- something which someone else can worry about later;
my code won't be any worse than what we already have in Apache. So
any effort we put into making the HTTP parsing code async-safe would
be wasted on the 99.9% case.
Most responses fit in the socket's send buffer, and again don't require
async support. But we currently do the lingering_close() routine which
could easily use async support. Large responses also could use async
support.
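To make the lingering_close() point concrete, here's roughly what that
routine boils down to in its current blocking form (a simplified,
stand-alone sketch, not the actual Apache code). Every one of those waits
is exactly the kind of thing an async MPM could take over:

    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void lingering_close_sketch(int sock)
    {
        char junk[512];
        struct pollfd pfd;

        shutdown(sock, SHUT_WR);        /* we're done sending */

        pfd.fd = sock;
        pfd.events = POLLIN;
        pfd.revents = 0;

        /* drain whatever the client still sends; stop at EOF or after a
           2 second wait with nothing arriving */
        while (poll(&pfd, 1, 2000) > 0 &&
               read(sock, junk, sizeof(junk)) > 0)
            ;   /* discard */

        close(sock);
    }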
The goal of HTTP parsing is to figure out which response object to
send. In most cases we can reduce that to a bunch of common response
types:
- copying a file to the socket
- copying a pipe/socket to the socket (IPC, CGIs)
- copying a mem region to the socket (mmap, some dynamic responses)
So what we do is modify only the response handlers. We teach them
how to send async responses.
There will be a few new primitives which will tell the core "the response
fits one of these categories, please handle it". The core will do the
rest -- and for MPMs which support async handling, the core will return
to the MPM and let the MPM do the work async... the MPM will call a
completion function supplied by the core. (Note that this will simplify
things for lots of folks... for example, it'll let us move range request
handling to a common spot so that more than just default_handler
can support it.)
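Roughly, the primitives might look something like this -- the names and
signatures are only illustrative, nothing here is a final API:

    #include <stddef.h>
    #include <sys/types.h>

    typedef struct request_rec request_rec;   /* per-request state, opaque here */

    /* "copy this file (or a byte range of it) to the client socket" */
    int ap_core_send_file(request_rec *r, int fd, off_t offset, size_t length);

    /* "copy this pipe or socket to the client socket" (CGI output, IPC) */
    int ap_core_send_fd(request_rec *r, int fd);

    /* "copy this memory region to the client socket" (mmap, dynamic content) */
    int ap_core_send_mem(request_rec *r, const void *base, size_t length);

The handler only classifies the response; whether the core finishes it on
the spot or hands it to the MPM to complete asynchronously is not the
handler's business.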
I expect this to be a simple message passing protocol (pass by reference).
Well rather, that's how I expect to implement it in ASH -- where I'll
have a single thread per-process doing the select/poll stuff; and the
other threads are in a pool that handles the protocol stuff. For your
stuff you may want to do it another way -- but we'll be using a common
structure that the core knows about... and that structure will look like
a message:
    struct msg {
        enum {
            MSG_SEND_FILE,
            MSG_SEND_PIPE,
            MSG_SEND_MEM,
            MSG_LINGERING_CLOSE,
            MSG_WAIT_FOR_READ,          /* for handling keep-alives */
            ...
        } type;
        BUFF *client;                   /* the client connection */
        void (*completion)(struct msg *, int status);  /* called when done */
        union {
            ... extra data here for whichever types need it ...
        } x;
    };
The nice thing about this is that these operations are protocol
independent... at this level there's no knowledge of HTTP, so the same
MPM core could be used to implement other protocols.
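To make the hand-off concrete, here's what one step of draining a
MSG_SEND_MEM-style message might look like. The parameters stand in for
fields that would live in the message's union; this is just the shape of
the idea, not real code:

    #include <errno.h>
    #include <poll.h>
    #include <stddef.h>
    #include <unistd.h>

    /* one step of pushing a memory region out a non-blocking socket */
    static void send_mem_step(int client_fd, const char *base, size_t len,
                              size_t *done, void (*completion)(int status))
    {
        struct pollfd pfd;
        ssize_t n;

        pfd.fd = client_fd;
        pfd.events = POLLOUT;
        pfd.revents = 0;

        /* wait until the socket can take more data */
        if (poll(&pfd, 1, -1) < 1)
            return;

        /* push as much of the rest as the send buffer will accept */
        n = write(client_fd, base + *done, len - *done);
        if (n > 0)
            *done += (size_t)n;

        if ((n < 0 && errno != EAGAIN) || *done == len)
            completion(n < 0 ? -1 : 0);   /* finished or failed: tell the core */

        /* otherwise keep the message queued and wait for the next POLLOUT */
    }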
> so as I was thinking about this stuff, I realized it might be neat to have
> 'classes' of non blocking pending work and have different threads with
> different priorities hacking on it. Say we have a very high priority
> thread that accepts connections, does initial header parsing, and
> sendfile()ing data out. We could have lower priority threads that are
> spinning doing 'harder' BUFF work like an encryption layer or gzipping
> content, whatever.
You should be able to implement this in your MPM easily I think... because
you'll see the different message types and can distribute them as needed.
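For instance, the dispatch could be as simple as switching on the message
type. Something like this (struct msg and the MSG_* types are the ones
sketched above; the queue helpers are made up, and which types count as
"cheap" is only a guess):

    typedef struct work_queue work_queue;       /* hypothetical priority queue */
    extern work_queue *high_prio_q, *low_prio_q;
    void work_queue_push(work_queue *q, struct msg *m);

    static void dispatch(struct msg *m)
    {
        switch (m->type) {
        case MSG_SEND_FILE:
        case MSG_SEND_MEM:
            /* plain copies to the socket: cheap, keep latency low */
            work_queue_push(high_prio_q, m);
            break;
        default:
            /* heavier work: pipes, filtered/encrypted output, cleanup */
            work_queue_push(low_prio_q, m);
            break;
        }
    }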
Dean