--- mm.pod 2002/07/26 13:04:40 1.19
+++ mm.pod 2002/12/19 09:14:58 1.20
@@ -6,7 +6,7 @@
## are met:
##
## 1. Redistributions of source code must retain the above copyright
-## notice, this list of conditions and the following disclaimer.
+## notice, this list of conditions and the following disclaimer.
##
## 2. Redistributions in binary form must reproduce the above copyright
## notice, this list of conditions and the following disclaimer in
@@ -116,12 +116,12 @@
dependent implementation details (allocation and locking) when dealing with
shared memory segments and on the second (higher) layer it provides a
high-level malloc(3)-style API as a convenient and well-known way to work
-with data-structures inside those shared memory segments.
+with data-structures inside those shared memory segments.
The abbreviation B<OSSP mm> historically comes from the phrase
``I<memory mapped>'' as used by the POSIX.1 mmap(2) function, because this
facility is internally used by the library on most platforms to establish the
-shared memory segments.
+shared memory segments.
=head2 LIBRARY STRUCTURE
@@ -249,13 +249,13 @@
If a chunk of memory has to be allocated, the internal list of free chunks
is searched for a minimal-size chunk which is larger than or equal to the size of
-the to be allocated chunk (a I<best fit> strategy).
+the to be allocated chunk (a I<best fit> strategy).
If a chunk is found which matches this best-fit criterion but is still much
larger than the requested size, it is split into two chunks: one with exactly
the requested size (which is the resulting chunk given back) and one with the
remaining size (which is immediately re-inserted into the list of free
-chunks).
+chunks).
If no fitting chunk is found at all in the list of free chunks, a new one is
created from the spare area of the shared memory segment until the segment is
@@ -267,7 +267,7 @@
into the internal list of free chunks. The insertion operation automatically
merges the chunk with a previous and/or a next free chunk if possible, i.e.
if the free chunks are physically seamless (one directly after another) in memory, to
-automatically form larger free chunks out of smaller ones.
+automatically form larger free chunks out of smaller ones.
This way the shared memory segment is automatically defragmented when memory
is deallocated.
@@ -275,7 +275,7 @@
=back
This strategy reduces memory waste and fragmentation caused by small and
-frequent allocations and deallocations to a minimum.
+frequent allocations and deallocations to a minimum.
The internal implementation of the list of free chunks is not specially
optimized (for instance by using binary search trees or even I<splay> trees,
@@ -464,7 +464,7 @@
=item size_t B<mm_available>(MM *I<mm>);
-Returns the amount in bytes of still available (free) memory in the
+Returns the amount in bytes of still available (free) memory in the
shared memory pool I<mm>.
=item char *B<mm_error>(void);
@@ -524,7 +524,7 @@
This returns the size in bytes of I<core>. This size is exactly the size which
was used for creating the shared memory area via mm_core_create(3). The
function is provided merely for convenience, so as not to require the
-application to remember the memory size behind I<core> itself.
+application to remember the memory size behind I<core> itself.
=item size_t B<mm_core_maxsegsize>(void);
@@ -587,7 +587,7 @@
currently the high-level malloc(3)-style API just uses a single shared memory
segment as the underlying data structure for an C<MM> object, which means that
the maximum amount of memory an C<MM> object represents also depends on the
-platform.
+platform.
This could be changed in later versions by allowing at least the
high-level malloc(3)-style API to internally use multiple shared memory