Little's law describes a system in steady state from a queuing perspective, i.e. one in which arrival and departure rates are balanced. In this case it is a crude way of modelling a system with a contention percentage of 100% under Amdahl's law, in that throughput becomes one over latency.
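As a back-of-the-envelope sketch of that claim (the 250 ns critical-section latency is an assumed figure, not from the article): Little's law says L = λW, and with the lock serialising all work the concurrency L is 1, so throughput λ collapses to 1/W.

```java
// Little's law: L = lambda * W (occupancy = arrival rate * residence time).
// With 100% contention the lock serialises all work, so L = 1 and
// throughput lambda collapses to 1 / W.
public class LittlesLaw {
    public static void main(String[] args) {
        double latencySeconds = 250e-9;            // assumed 250 ns critical section
        double throughput = 1.0 / latencySeconds;  // ops/sec when fully serialised
        System.out.println(throughput + " ops/sec");
    }
}
```

So a 250 ns serialised critical section caps the whole system at about 4 million operations per second, no matter how many threads you add.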

However, this is an inaccurate way to model a system with locks. Amdahl's law does not account for coherence costs. For example, if you wrote a microbenchmark with a single thread to measure the lock cost, the measured cost would be much lower than in a multi-threaded environment, where cache coherence, other OS costs such as scheduling, and lock implementations need to be considered.
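A minimal sketch of such a microbenchmark, using plain `synchronized` and wall-clock timing rather than a proper harness like JMH (class name, thread count, and iteration count are all arbitrary choices of mine):

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical microbenchmark sketch: time a synchronized increment on a
// single thread, then time the same lock shared by four threads, where
// cache-coherence traffic and scheduling costs come into play.
public class LockCostSketch {
    static final Object lock = new Object();
    static long counter = 0;

    static long nanosPerOp(int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            synchronized (lock) { counter++; }
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) throws InterruptedException {
        final int iterations = 200_000;
        long uncontendedNs = nanosPerOp(iterations);   // single thread, no sharing

        final int threads = 4;
        CountDownLatch done = new CountDownLatch(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            new Thread(() -> { nanosPerOp(iterations); done.countDown(); }).start();
        }
        done.await();
        long contendedNs = (System.nanoTime() - start) / ((long) threads * iterations);

        System.out.println("uncontended ns/op: " + uncontendedNs);
        System.out.println("contended   ns/op: " + contendedNs);
    }
}
```

On typical hardware the contended figure comes out markedly higher, because each acquire drags the lock word and the protected data across cores.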

When modelling locks it is necessary to consider how contention and coherence costs vary depending on how the locks are implemented. Consider how in Java we have biased locking, thin locks, fat locks, inflation, and bias revocation, which can cause safepoints that bring all threads in the JVM to a stop, with a significant coherence component.

A former bicycle thief has revealed the tricks of the trade in an interview, which shows clearly, and shockingly, the lengths thieves will go to in order to steal a bike. He talks about the motivations behind the thefts, the tools used to crack locks, and how the bikes were moved on and sold for significant sums. He also gives tips on how to prevent your bike from being stolen.
[...]

'Don’t be fooled by Kryptonite locks, they’re not as tough as made out to be. Also D-bars with tubular locks, never use them, they’re the most easy to pick with a little tool. It’s small and discreet, no noise and it looks like you are just unlocking your bike. With the bolt cutters we would go out on high performance motorbikes, two men on a bike.'

This is pretty nuts. If biased locking in the HotSpot JVM is causing performance issues, it can be turned off:

You can avoid biased locking on a per-object basis by calling System.identityHashCode(o). If the object is already biased, assigning an identity hashCode will result in revocation, otherwise, the assignment of a hashCode() will make the object ineligible for subsequent biased locking.
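A minimal sketch of the per-object technique quoted above (the lock object and surrounding code are hypothetical):

```java
// Taking an object's identity hash code occupies the header slot a bias
// would use, so this object can never be biased-locked afterwards.
public class AvoidBias {
    public static void main(String[] args) {
        Object lock = new Object();
        int hash = System.identityHashCode(lock); // revokes or precludes a bias
        synchronized (lock) {
            // monitor enter here goes down the thin/fat lock path, not a bias
        }
        System.out.println("identity hash: " + hash);
    }
}
```

The JVM-wide switch is the `-XX:-UseBiasedLocking` flag; note that in modern JDKs biased locking was deprecated and disabled by default from JDK 15 onward (JEP 374).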

Interviews with two New York bike thieves (one bottom feeder, one professional), reviewing the current batch of bicycle locks. Summary: U-locks are good when used correctly, particularly the Kryptonite New York Lock ($80). On the other hand, Dublin's recent spate of thefts is largely driven by the wide availability of battery-powered angle grinders (thanks, Lidl!), which, according to this article, are relatively quiet and extremely fast. :(

Firstly, this is three orders of magnitude greater latency than what I illustrated in the previous article using just memory barriers to signal between threads. The cost comes about because the kernel needs to get involved to arbitrate between the threads for the lock, and then manage the scheduling for the threads to awaken when the condition is signalled. The one-way latency to signal a change is pretty much the same as what is considered state of the art for network hops between nodes via a switch: it is possible to get ~1µs latency with InfiniBand and less than 5µs with 10GigE and user-space IP stacks.
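To get a feel for where that kernel-arbitrated cost comes from, here is a sketch (round count and class name are my own choices) that ping-pongs a signal between two threads with a `ReentrantLock` and two `Condition`s, so each hand-off parks and unparks a thread, and reports the mean round-trip time:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Ping-pong a signal between two threads via lock/condition so every
// hand-off goes through kernel park/unpark; report mean round-trip time.
public class SignalLatency {
    static final ReentrantLock lock = new ReentrantLock();
    static final Condition ping = lock.newCondition();
    static final Condition pong = lock.newCondition();
    static int turn = 0; // 0: initiator's turn, 1: responder's turn

    public static void main(String[] args) throws InterruptedException {
        final int rounds = 10_000;
        Thread responder = new Thread(() -> {
            for (int i = 0; i < rounds; i++) {
                lock.lock();
                try {
                    while (turn == 0) ping.awaitUninterruptibly();
                    turn = 0;
                    pong.signal();
                } finally { lock.unlock(); }
            }
        });
        responder.start();

        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            lock.lock();
            try {
                turn = 1;
                ping.signal();
                while (turn == 1) pong.awaitUninterruptibly();
            } finally { lock.unlock(); }
        }
        long rtt = (System.nanoTime() - start) / rounds;
        System.out.println("mean round-trip ns: " + rtt);
        responder.join();
    }
}
```

Expect the round trip to land in the microseconds, versus tens of nanoseconds for the memory-barrier signalling from the previous article.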

Secondly, the impact is clear when letting the OS choose which CPUs the threads get scheduled on, rather than pinning them manually. I've observed the same issue across many use cases: Linux, with its scheduler in the default configuration, will greatly hurt the performance of a low-latency system by migrating threads across cores, resulting in cache pollution. Windows by default seems to do a better job of this.