Try to map the space so that accesses can be cached and/or
prefetched by the system.
If this flag is not specified, the
implementation should map the space so that it will not be cached or
prefetched.

This flag must have a value of 1 on all implementations for backward
compatibility.

BUS_SPACE_MAP_LINEAR

Try to map the space so that its contents can be accessed linearly via
normal memory access methods (e.g. pointer dereferencing and structure
accesses).
This is useful when software wants to do direct access to a memory
device, e.g. a frame buffer.
If this flag is specified and linear
mapping is not possible, the
bus_space_map()
call should fail.
If this
flag is not specified, the system may map the space in whatever way is
most convenient.

Not all combinations of flags make sense or are supported with all
spaces.
For instance,
BUS_SPACE_MAP_CACHEABLE
may be meaningless when
used on many systems' I/O port spaces, and on some systems
BUS_SPACE_MAP_LINEAR
without
BUS_SPACE_MAP_CACHEABLE
may never work.
When the system hardware or firmware provides hints as to how spaces should be
mapped (e.g. the PCI memory mapping registers'
"prefetchable"
bit), those
hints should be followed for maximum compatibility.
On some systems,
requesting a mapping that cannot be satisfied (e.g. requesting a
non-cacheable mapping when the system can only provide a cacheable one)
will cause the request to fail.

Some implementations may keep track of use of bus space for some or all
bus spaces and refuse to allow duplicate allocations.
This is encouraged
for bus spaces which have no notion of slot-specific space addressing,
such as ISA and VME, and for spaces which coexist with those spaces
(e.g. EISA and PCI memory and I/O spaces co-existing with ISA memory and
I/O spaces).

Mapped regions may contain areas for which there is no device on the
bus.
If space in those areas is accessed, the results are
bus-dependent.

These flags (BUS_SPACE_BARRIER_READ and BUS_SPACE_BARRIER_WRITE) can be
combined (or-ed together) to enforce ordering on both read and write
operations.

All of the specified type(s) of operation which are done to the region
before the barrier operation are guaranteed to complete before any of the
specified type(s) of operation done after the barrier.

Example: Consider a hypothetical device with two single-byte ports: a
write-only input port (at offset 0) and a read-only output port (at
offset 1).
Operation of the device is as follows: data bytes are written
to the input port, and are placed by the device on a stack, the top of
which is read by reading from the output port.
The sequence to correctly
write two data bytes to the device then read those two data bytes back
would be:

The first barrier makes sure that the first write finishes before the
second write is issued, so that two writes to the input port are done
in order and are not collapsed into a single write.
This ensures that
the data bytes are written to the device correctly and in order.

The second barrier makes sure that the writes to the input port finish
before any of the reads from the output port are issued, thereby making
sure that all of the writes are finished before data is read.
This ensures
that the first byte read from the device really is the last one that was
written.

The third barrier makes sure that the first read finishes before the
second read is issued, ensuring that data is read correctly and in order.

The barriers in the example above are specified to cover the absolute
minimum number of bus space locations.
It is correct (and often
easier) to make barrier operations cover the device's whole range of bus
space, that is, to specify an offset of zero and the size of the
whole region.