On Mon, 11 Sep 2006 21:06:20 +0200, Alan Barrett <apb@cequrux.com> wrote:
> On Mon, 11 Sep 2006, Sumantra Kundu wrote:
> > Taking a cue from the above observation, we now intend to implement a
> > congestion control algorithm (uvm_cca) inside the uvm. However,
> > instead of observing process behaviour, we intend to "infer
> > congestion" by observing the dynamics of dirty pages w.r.t. a
> > specific IO device.
> >
> > Since no two IO devices are the same, this implies we need a
> > mechanism that is able to capture and understand the "capabilities",
> > "limitations", and "performance" of such a device at run time, and make
> > such performance figures available to the UVM, before any sort of
> > device-directed IO throttling could be initiated. On top of that, writes
> > are not all of the same cost; the cost can generally be thought of as a
> > function of the disk seek time.
>
> This sounds awfully complicated. I think Thor is right: measure
> things like the amount of data in flight and the time to service each
> request; feed those into an algorithm a lot like TCP to get a limit (per
> process/device pair) on the rate of new requests. If this works, you
> don't need to model the device's seek time or data transfer rate, you
> just need to measure the number of outstanding requests and the time to
> service the requests.
>
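For what it's worth, a minimal sketch of such a TCP-like limiter, keyed only on measured service latency (all names and constants here are hypothetical, not from any existing NetBSD code): keep a smoothed latency estimate per (process, device) pair, grow the allowed window of outstanding requests additively while completions stay near the estimate, and halve it on a latency spike, which is the usual sign of queueing at the device.

```c
#include <assert.h>

/*
 * Hypothetical per-(process, device) AIMD limiter.  Only two pieces of
 * state are needed: the current window of allowed outstanding requests
 * and a smoothed estimate of the device's service time.  No model of
 * seek time or transfer rate is involved.
 */
struct io_limiter {
	int window;	/* max outstanding requests allowed */
	int srtt_ms;	/* smoothed service time, milliseconds */
};

void
limiter_init(struct io_limiter *l, int initial_srtt_ms)
{
	l->window = 1;
	l->srtt_ms = initial_srtt_ms;
}

/* Called when a request completes, with its measured service time. */
void
limiter_update(struct io_limiter *l, int service_ms)
{
	if (service_ms > 2 * l->srtt_ms) {
		/* Latency spike: multiplicative decrease, floor of 1. */
		l->window = l->window > 1 ? l->window / 2 : 1;
	} else {
		/* Normal completion: additive increase. */
		l->window += 1;
	}

	/* EWMA with gain 1/8, as in TCP's RTT estimator (RFC 6298). */
	l->srtt_ms += (service_ms - l->srtt_ms) / 8;
}
```

The point of the sketch is that the underlying hardware never appears: a fast device simply services requests quickly, so its window keeps growing, while a slow or congested one shows latency spikes and gets throttled back automatically.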
That was my reaction, too. Don't worry about the underlying technology
-- which is guaranteed to change -- just measure average request
latency.
--Steven M. Bellovin, http://www.cs.columbia.edu/~smb