Marc,
> The problem is that the affected objects can sometimes be statically
> used without prior "allocation",
I understand that, but according to:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/network/hh/network/103ndisx_7mk2.asp
"Before a driver calls NdisAcquireSpinLock, NdisDprAcquireSpinLock, or
any of the NdisInterlockedXxx functions, it must call
NdisAllocateSpinLock to initialize the spin lock passed as a required
parameter to these NdisXxx functions."
So it seems that the NDIS spinlock in an NDIS object must always be
initialized with NdisAllocateSpinLock(), at which time you can allocate
the Linux spinlock, and have the NDIS spinlock be in reality just a
pointer to the Linux spinlock.
Similarly, it seems that NDIS drivers must call NdisFreeSpinLock() on a
spinlock when they no longer use it, at which point you could free the
Linux spinlock.
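To make the idea concrete, here is a minimal user-space sketch of that pairing. Everything here is invented for illustration: `linux_lock_t` stands in for a real kernel spinlock_t, `ndis_spin_lock_t` is a guessed layout for the emulated NDIS lock, and the `emu_` prefix marks hypothetical wrapper names — the real emulation would live in the kernel:

```c
#include <assert.h>
#include <stdlib.h>

/* Stub standing in for a real Linux spinlock (spinlock_t). */
typedef struct { int locked; } linux_lock_t;

/* Invented layout for the emulated NDIS spin lock: just a pointer,
 * which the real emulation would pad to sizeof(NDIS_SPIN_LOCK). */
typedef struct { linux_lock_t *lin; } ndis_spin_lock_t;

/* If every driver pairs NdisAllocateSpinLock() with
 * NdisFreeSpinLock(), the emulation can own the Linux lock's
 * storage outright. */
static void emu_NdisAllocateSpinLock(ndis_spin_lock_t *l)
{
    l->lin = calloc(1, sizeof *l->lin);  /* allocate the Linux lock */
}

static void emu_NdisFreeSpinLock(ndis_spin_lock_t *l)
{
    free(l->lin);                        /* release it again */
    l->lin = NULL;
}
```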
What am I missing?
> so it is not possible in this context
> to just store a pointer to dynamically allocated storage, since there
> is no explicit way of finding out when the object should be freed later
> on.
Mmm, I see. Maybe you are talking about non-NDIS spinlocks, in which
case the allocation of the Linux kernel spinlock can occur when
KeInitializeSpinLock() is called, but you are right apparently there is
no way to know when the spinlock is no longer being used.
> We have thought of a few possible solutions but they generally add
> significant overhead when spinlocks should remain extremely light and
> efficient.
That is an interesting issue. I'm thinking about this solution:
  /--------------------------------------------\
  |                                            |
  v                                            |
/-Emulated_Win_lock--\                         |
| struct blob *lin; -+------------\            |
\--------------------/            |            |
                                  v            |
                  /- struct blob ------------\ |
                  | Emulated_Win_lock *win; -+-/
                  | Linux_lock lock;         |
                  \--------------------------/
In other words, the emulated Windows spin lock is just a pointer to a
struct blob. This is only the size of a pointer (4 bytes on IA-32), so
it is guaranteed to be no larger than a real Windows spin lock, and you
can always pad it to the size of a real Windows spin lock.
Given that we always know when a Windows spinlock is initialized ("the
assumption"), we can at this time allocate the struct blob from a linked
list of unused struct blobs, move that struct blob to the linked list of
in-use struct blobs, then set 'lin' to point to it, and then set 'win'
to point back.
Benefit: Acquiring and releasing the Emulated Win lock is O(1), you just
dereference the lin pointer, then you perform the acquire/release
operation on the Linux lock. The pointer dereference doesn't add
significant overhead, it is going to be really cheap compared to the
atomic operation that locks the bus (which is highly expensive on a P4).
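A user-space C model of this scheme, purely illustrative — `win_lock_t`, `blob_t`, the `emu_` function names, and the stub `linux_lock_t` (a plain flag instead of a real spinlock_t) are all invented for the sketch:

```c
#include <assert.h>
#include <stddef.h>

/* Stub standing in for a real Linux spinlock (spinlock_t). */
typedef struct { int locked; } linux_lock_t;

struct blob;

/* Emulated Windows spin lock: just a pointer, padded in the real
 * thing to the size of a real Windows spin lock. */
typedef struct { struct blob *lin; } win_lock_t;

/* One blob per live Windows lock, kept on a free or in-use list. */
typedef struct blob {
    win_lock_t  *win;   /* back-pointer, used when memory is freed */
    linux_lock_t lock;  /* the real Linux lock */
    struct blob *next;
} blob_t;

#define NBLOBS 4
static blob_t pool[NBLOBS];
static blob_t *free_list, *inuse_list;

static void pool_init(void)
{
    for (int i = 0; i < NBLOBS; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
    inuse_list = NULL;
}

/* Emulated NdisAllocateSpinLock()/KeInitializeSpinLock(); assumes
 * the free list is non-empty (the real thing would grow the pool). */
static void emu_init_lock(win_lock_t *w)
{
    blob_t *b = free_list;          /* take a blob off the free list */
    free_list = b->next;
    b->next = inuse_list;           /* move it to the in-use list */
    inuse_list = b;
    b->win = w;                     /* 'win' points back... */
    b->lock.locked = 0;
    w->lin = b;                     /* ...and 'lin' points forward */
}

/* Emulated acquire/release: one dereference, then the Linux op. */
static void emu_acquire(win_lock_t *w) { w->lin->lock.locked = 1; }
static void emu_release(win_lock_t *w) { w->lin->lock.locked = 0; }
```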
Now, how do you know when a lock is no longer in use? Well, locks
always live somewhere in memory, and memory comes in two kinds:
o either static memory, i.e. part of the Windows driver data segment,
and you know when that one goes away, it is when you unload the Windows
driver
o or dynamic memory (i.e. memory allocated by the Windows driver via
some malloc()-like Windows API that you emulate), and you know when that
one goes away too, because it goes away via some free()-like Windows API
that you emulate.
In other words, you know precisely _when_ memory goes away, and _which
exact range_ of memory goes away. When that happens, it is just a matter
of walking your linked list of in-use struct blobs, and see for each of
them if their 'win' pointer points inside the range of memory that is
being freed. If that is the case, then you can safely "free" the Linux
lock, by moving that struct blob back to the unused linked list.
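The walk over the in-use list can be sketched as below; again the types, names, and the linear scan are all illustrative (the hash-table refinement mentioned next would replace the scan):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Minimal model: each in-use blob remembers where its emulated
 * Windows lock lives, so a range check can find stale locks. */
typedef struct blob {
    void        *win;    /* address of the emulated Windows lock */
    struct blob *next;
} blob_t;

static blob_t *inuse_list, *free_list;

/* Called from the emulated free()-like API (or at driver unload):
 * reclaim every blob whose Windows lock lies in [base, base+len). */
static void reclaim_range(void *base, size_t len)
{
    uintptr_t lo = (uintptr_t)base, hi = lo + len;
    blob_t **pp = &inuse_list;

    while (*pp) {
        blob_t *b = *pp;
        uintptr_t a = (uintptr_t)b->win;
        if (a >= lo && a < hi) {        /* lock was in freed memory */
            *pp = b->next;              /* unlink from in-use list */
            b->next = free_list;        /* back to the unused list */
            free_list = b;
        } else {
            pp = &b->next;
        }
    }
}
```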
Possible issues:
1) Freeing memory is a frequent operation, and checking if there is a
Windows lock inside the region we are about to free is O(number of locks
in use), so this could be quite costly. I agree with that, but you can
easily reduce that cost to almost nothing by using a hash table of your
in-use struct blobs, indexed by the 'win' field of the struct blob.
2) If you manipulate lists, you need one instance of a Linux lock, that
is global to the entire Linux driver, to protect against concurrency.
That central lock might become the bottleneck of a carefully designed
Windows driver (that would purposely use several smaller locks instead
of one central lock). I say sure, but it is better to work slowly
(central lock) than not work at all (which is currently the case when
CONFIG_DEBUG_SPINLOCK is set). Also I'm pretty sure you can implement
the same idea with lock-free structures.
3) What if "the assumption" is wrong? Well first of all, I don't really
think it is. But even if it is, you can always make sure that when
memory (static or dynamic) is allocated to the Windows driver, it is
always zeroed. Then in the acquire/release APIs you emulate, just check
if 'lin' is NULL before dereferencing it (read of 4 bytes w/o locking
the bus, which is cheap). If it is NULL, then allocate a struct blob as
described above, and atomically compare and exchange the 'lin' pointer
(lock the bus, expensive, but only happens once).
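That fallback path can be modeled with a C11 atomic compare-and-exchange; all names here are invented for the sketch, and `calloc` stands in for taking a blob off the free list:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct { int locked; } linux_lock_t;   /* stub Linux lock */
typedef struct blob { linux_lock_t lock; } blob_t;

/* Emulated Windows lock: a single pointer, zeroed because the
 * emulated allocator always hands out zeroed memory. */
typedef struct { _Atomic(blob_t *) lin; } win_lock_t;

/* On first acquire, 'lin' may still be NULL ("the assumption" was
 * wrong); allocate a blob and CAS it in, so exactly one of the
 * racing CPUs wins and the others reuse the winner's blob. */
static blob_t *get_blob(win_lock_t *w)
{
    blob_t *b = atomic_load(&w->lin);   /* cheap read, no bus lock */
    if (b == NULL) {
        blob_t *fresh  = calloc(1, sizeof *fresh);
        blob_t *expect = NULL;
        if (atomic_compare_exchange_strong(&w->lin, &expect, fresh)) {
            b = fresh;                  /* we installed our blob */
        } else {
            free(fresh);                /* someone beat us to it */
            b = expect;                 /* use the installed blob */
        }
    }
    return b;
}
```

The expensive bus-locking exchange runs at most once per lock; every later acquire takes only the plain load.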
It seems to me that this scheme covers everything that is needed to
emulate Windows locks efficiently, even if the Linux kernel is compiled
with CONFIG_DEBUG_SPINLOCK. But again, you guys have been thinking about
this for a while, and there is probably a catch that I don't see.
> Therefore it was decided to simply exclude support of the
> CONFIG_DEBUG_SPINLOCK option for now since it isn't meant to be used in
> production systems anyway.
That is a bit restrictive. I use wireless at home (non-production system),
but I like to enable CONFIG_DEBUG_SPINLOCK either to find bugs in the
kernel and report them, or just to help me while I develop Linux
drivers.
If you have read this far, you are courageous,
Thanks,
--
Regis