Jens Axboe wrote:
> On Fri, Jun 14 2002, Martin Dalecki wrote:
> 
>> Thu Jun 13 22:59:54 CEST 2002 ide-clean-91
>>
>> - Realize that the only place where ata_do_taskfile gets used is ide-disk.c
>>   move it and its "friends" over there.
> 
> Ehm, isn't that a bit odd? The typical "I'm not looking at the
> interface, if only one person is currently using it then heck lets move
> it" Martin approach (refer to prep_rq_fn cdb builder as well)

Yes, this is the usual Marcin approach. The fscking best road to bloat
is providing interfaces "on the heap" and "just in case", which
then nobody sane uses. When one discovers later that they are inadequate,
we try to support them until the end of days, because
some silly Python-based, incompetently written setup application
turns out to love doing you-know-what with them.

And since application-level programming experience shows me that in 99%
of the cases where interfaces are designed before being used,
they turn out to be inadequate, I don't do it.

Want to see some silly things like the situation described above?
Please just count the number of ioctls for /dev/random.
90% of them qualify as "debugging during development" or are
working against the purpose of the thing. That's just one example.
Another, even more striking one, is the whole /proc/ mess.

> The above is just a minor thing, the reason I'm mailing is really:
> 
> - current 2.5 bk deadlocks reading partition info off disk. Again a
>   locking problem I suppose, to be honest I just got very tired when
>   seeing it happen and didn't want to spend time looking into it.

2.5.21 + the patches I did doesn't. Likely it's the driverfs?

> I thought IDE locking couldn't get worse than 2.4, but I think you are
> well into doing just that. What exactly are your plans with the channel
> locks? Right now, to me, it seems you are extending the use of them to
> the point where they would be used to serialize the entire request
> operation start? Heck, ide-cd is even holding the channel lock when
> outputting the cdb now.

After extracting out 80% of the host controller register file
access, one has to realize that in reality we were releasing the lock
just to regain it immediately. Or, alternatively, accessing them
without any lock protection at all. (The same goes for BIO upper layer
members.) This is why the locks are pushed up. It's just avoiding the
"windows" between unlock and immediate relock and making the real
behaviour more "obvious". You have just realized this.

2.4 prevents the locking problems basically by gorgeously
disabling IRQs. Too-fine-grained locking is a futile exercise.
Unless I see the time spent in system state during concurrent disk
access really going up (it doesn't right now), I don't see anything
bad in making the locking more coarse. Locks don't come for free, and
having fine-grained locking doesn't justify itself.

Another "usual Marcin approach" - don't optimize for the sake of it.
See the futile unlikely() tagging and inlining in tcq.c for example.
I don't do something like that. I have just written too much
numerical code which was really time-constrained to do something
like this before looking at benchmarks.
Really constrained means having a program running 7 or "just"
5 *days*. This can make a difference, a difference in hard real
money in the range of multiple kEUR!

Finally - unless there appears some other way to block access to
busy devices at the generic block layer, I do it the only way
we have right now - spinlocks. (... looking forward to work
queue plugging and the work done by Adam Richter.)
Unless there is a sane way to signal partial completion, we will
be doing it all at once. Unless we have a sane async I/O infrastructure,
most of the above will likely not be solved anyway.

> - ata_end_request(). why didn't you just remove the last argument to
>   __ata_end_request() instead of just changing the comment saying why
>   we pass nr_secs == 0 in from some sites?

One step after another. Please look at the hard_xxx members of struct
request to see why I hesitated.

> - what's the reasoning behind moving the active request into the
>   ata_device?! we are serializing requests for ata_device's on the
>   ata_channel anyways, which is why it made sense to have the active
>   request there.

Because it is going to go away altogether. We need it there
only for the purpose of the default ide_timer_expiry function, which
has been subject to go away for a long time. And finally, because
it doesn't hurt.

Further on, I refer you to the discussion we had (or was it Linus?)
once about the fact that attaching physical properties of the
device to the request queue and then replicating those parameters in
every single request struct is, well... "unpleasant" on behalf of the
upper layers. Loop devices expose the same problem.
Once again, just grepping for the hard_ members of struct
request makes it obvious.

Some people say that using the greatest common denominator in
the case of the loop devices is the solution,
but I think that it's rather a workaround.

> And finally a small plea for more testing. Do you even test before
> blindly sending patches off to Linus?! Sometimes just watching how
> quickly these big patches appear makes it impossible that they have
> gotten any kind of testing other than the 'hey it compiles', which I
> think is just way too little for something that could possibly screw
> peoples data up very badly. Frankly, _I'm_ too scared to run 2.5 IDE
> currently. The success ratio of posted over working patches is too big.

Fact is: many of the patches are just big because they contain
host chip handling cleanups done by others, and because
we have just too many different drivers for the same purpose:
ATAPI packet command devices. Which I'm more and more tempted
to scrap... in favour of just ide-scsi.c. But that's another
story. (Adam J. Richter is putting constant pressure on me to do
just that, and I really start tending to admit that he is just right.)

As for testing. Well, at least I can assure you that I'm eating my own
dog food, since I constantly run on the patches. (Heck, even while I
write this.) But I don't use kernels from the BK repository at all.
In fact, I just don't use BK at all, and I don't intend to.