Here is a patch for MM livelock. The original bug report follows after the
patch. To reproduce the bug on my computer, I had to change bs=4M to
bs=65536 in the examples in the original report.

Mikulas

---

Fix starvation in memory management.

The bug happens when one process is doing sequential buffered writes to
a block device (or file) and another process is attempting to execute
sync(), fsync() or direct-IO on that device (or file). This syncing
process will wait indefinitely, until the first writing process
finishes.

The bug is caused by sequential walking of address space in
write_cache_pages and wait_on_page_writeback_range: if some other
process is constantly making dirty and writeback pages while these
functions run, the functions will wait on every new page, resulting in
indefinite wait.

To fix the starvation, I declared a mutex starvation_barrier in struct
address_space. When the mutex is taken, anyone making dirty pages on
that address space should stop. The functions that walk the address space
sequentially first estimate the number of pages to process. If they
process more than this amount (plus some small delta), it means that
someone is making dirty pages under them --- so they take the mutex and
hold it until they finish. While the mutex is held, the function
balance_dirty_pages will wait and not allow more dirty pages to be made.

The mutex is not really used as a mutex; it does not protect any
critical section. It is used as a barrier that blocks new dirty
pages from being created. I use a mutex and not a wait queue to make sure
that the starvation can't happen the other way around (with a wait
queue, excessive calls to write_cache_pages and
wait_on_page_writeback_range would block balance_dirty_pages forever;
with a mutex it is guaranteed that every process eventually makes
progress).

The essential property of this patch is that if the starvation doesn't
happen, no additional locks are taken and no atomic operations are
performed. So the patch shouldn't damage performance.
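
To make this concrete, here is a minimal sketch of the idea in kernel-style
C. The field name starvation_barrier comes from the description above; the
walker loop and the process_one_page() helper are simplified placeholders,
not the actual patch:

/* added to struct address_space (sketch):
 *	struct mutex	starvation_barrier;
 */

static bool process_one_page(struct address_space *mapping);	/* placeholder for the real loop body */

/* Walker side (a write_cache_pages-style loop, simplified): estimate the
 * work up front; if we overrun the estimate plus a small delta, someone is
 * dirtying pages under us, so take the barrier and hold it until we finish.
 */
static void walk_pages_with_barrier(struct address_space *mapping)
{
	long estimate = mapping->nrpages + 16;	/* the "small delta" is arbitrary here */
	long processed = 0;
	bool barrier_held = false;

	while (process_one_page(mapping)) {
		if (++processed > estimate && !barrier_held) {
			mutex_lock(&mapping->starvation_barrier);
			barrier_held = true;
		}
	}
	if (barrier_held)
		mutex_unlock(&mapping->starvation_barrier);
}

/* Dirtier side (called from balance_dirty_pages): block only while some
 * walker holds the barrier, so the common case takes no lock at all. */
static void starvation_wait(struct address_space *mapping)
{
	if (unlikely(mutex_is_locked(&mapping->starvation_barrier))) {
		mutex_lock(&mapping->starvation_barrier);
		mutex_unlock(&mapping->starvation_barrier);
	}
}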

The indefinite starvation was observed in write_cache_pages and
wait_on_page_writeback_range. Another place where it could happen
is invalidate_inode_pages2_range.

There are two more functions that walk all the pages in address space,
but I think they don't need this starvation protection:
truncate_inode_pages_range: it is called with i_mutex locked, so no new
pages are created under it.
__invalidate_mapping_pages: it skips locked, dirty and writeback pages,
so there's no point in protecting the function against starving on them.

On Mon, 22 Sep 2008 17:10:04 -0400 (EDT)
Mikulas Patocka <mpatocka@redhat.com> wrote:
> The bug happens when one process is doing sequential buffered writes to
> a block device (or file) and another process is attempting to execute
> sync(), fsync() or direct-IO on that device (or file). This syncing
> process will wait indefinitely, until the first writing process
> finishes.
>
> For example, run these two commands:
> dd if=/dev/zero of=/dev/sda1 bs=65536 &
> dd if=/dev/sda1 of=/dev/null bs=4096 count=1 iflag=direct
>
> The bug is caused by sequential walking of address space in
> write_cache_pages and wait_on_page_writeback_range: if some other
> process is constantly making dirty and writeback pages while these
> functions run, the functions will wait on every new page, resulting in
> indefinite wait.

Shouldn't happen. All the data-syncing functions should have an upper
bound on the number of pages which they attempt to write. In the
example above, we end up in generic_file_direct_write(), whose
filemap_write_and_wait() will attempt to write at most 2* the number of
pages which are in cache for that inode.
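
For reference, the bound being discussed is the nr_to_write value set up in
__filemap_fdatawrite_range(). Reconstructed roughly from the kernels of that
era, so details may differ slightly:

int __filemap_fdatawrite_range(struct address_space *mapping, loff_t start,
				loff_t end, int sync_mode)
{
	int ret;
	struct writeback_control wbc = {
		.sync_mode = sync_mode,
		/* the bound in question: at most twice the pages now cached */
		.nr_to_write = mapping->nrpages * 2,
		.range_start = start,
		.range_end = end,
	};

	if (!mapping_cap_writeback_dirty(mapping))
		return 0;

	ret = do_writepages(mapping, &wbc);
	return ret;
}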

I'd say that either a) that logic got broken or b) you didn't wait long
enough, and we might need to do something to make it not wait so long.

But before we patch anything we should fully understand what is
happening and why the current anti-livelock code isn't working in this
case.


--- so if it goes by points (1) and (2), the counter is not decremented,
yet the function waits for the page. If there is a constant stream of
writeback pages being generated, it waits on each of them --- that is,
forever. I have seen a livelock in this function. Doesn't that example with
two dd's, one doing a buffered write and the other a direct-IO read, work
for you? For me it livelocks here.

wait_on_page_writeback_range is another example where the livelock
happened; there is no protection at all against starvation.

BTW. that .nr_to_write = mapping->nrpages * 2 looks like a dangerous thing
to me.

Imagine this case: You have two pages with indices 4 and 5 dirty in a
file. You call fsync(). It sets nr_to_write to 4.

Meanwhile, another process makes pages 0, 1, 2, 3 dirty.

The fsync() process goes to write_cache_pages, writes the first 4 dirty
pages and exits because it goes over the limit.

result --- you violate fsync() semantics, pages that were dirty before
call to fsync() are not written when fsync() exits.
> I'd say that either a) that logic got broken or b) you didn't wait long
> enough, and we might need to do something to make it not wait so long.
>
> But before we patch anything we should fully understand what is
> happening and why the current anti-livelock code isn't working in this
> case.

Mikulas

um, OK. So someone else is initiating IO for this inode and this
thread *never* gets to initiate any writeback. That's a bit of a
surprise.

How do we fix that? Maybe decrement nr_to_write for these pages as
well?
>
> BTW. that .nr_to_write = mapping->nrpages * 2 looks like a dangerous thing
> to me.
>
> Imagine this case: You have two pages with indices 4 and 5 dirty in a
> file. You call fsync(). It sets nr_to_write to 4.
>
> Meanwhile, another process makes pages 0, 1, 2, 3 dirty.
>
> The fsync() process goes to write_cache_pages, writes the first 4 dirty
> pages and exits because it goes over the limit.
>
> result --- you violate fsync() semantics, pages that were dirty before
> call to fsync() are not written when fsync() exits.

yup, that's pretty much unfixable, really, unless new locks are added
which block threads which are writing to unrelated sections of the
file, and that could hurt some workloads quite a lot, I expect.

Hopefully high performance applications are instantiating the file
up-front and are using sync_file_range() to prevent these sorts of
things from happening. But they probably aren't.
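
For what it's worth, a minimal userspace illustration of the
sync_file_range() approach being suggested (the file name and the range are
made up, and error handling is kept short):

#define _GNU_SOURCE		/* sync_file_range() is Linux-specific */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("datafile", O_WRONLY);	/* hypothetical file */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* ... write some data into the range [0, 1 MB) ... */

	/*
	 * Ask the kernel to write back just this range and wait for it,
	 * instead of fsync()ing the whole file and racing with other
	 * writers dirtying unrelated pages.  Note that, unlike fsync(),
	 * this does not flush file metadata.
	 */
	if (sync_file_range(fd, 0, 1 << 20,
			    SYNC_FILE_RANGE_WAIT_BEFORE |
			    SYNC_FILE_RANGE_WRITE |
			    SYNC_FILE_RANGE_WAIT_AFTER) < 0)
		perror("sync_file_range");

	close(fd);
	return 0;
}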


And what do you want to do with wait_on_page_writeback_range? When I
solved that livelock in write_cache_pages(), I got another livelock in
wait_on_page_writeback_range.
> > BTW. that .nr_to_write = mapping->nrpages * 2 looks like a dangerous thing
> > to me.
> >
> > Imagine this case: You have two pages with indices 4 and 5 dirty in a
> > file. You call fsync(). It sets nr_to_write to 4.
> >
> > Meanwhile, another process makes pages 0, 1, 2, 3 dirty.
> >
> > The fsync() process goes to write_cache_pages, writes the first 4 dirty
> > pages and exits because it goes over the limit.
> >
> > result --- you violate fsync() semantics, pages that were dirty before
> > call to fsync() are not written when fsync() exits.
>
> yup, that's pretty much unfixable, really, unless new locks are added
> which block threads which are writing to unrelated sections of the
> file, and that could hurt some workloads quite a lot, I expect.

It is fixable with the patch I sent --- it doesn't take any locks unless
the starvation happens. Then, you don't have to use .nr_to_write for
fsync anymore.

Another solution could be to record in the page structure the jiffies
value at which the page entered the dirty state and the writeback state.
The start-writeback/wait-on-writeback functions could then trivially
ignore pages that were dirtied or put under writeback while the function
was in progress.
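
A rough sketch of that alternative (the per-page timestamp is hypothetical
--- nothing like it exists in struct page --- and only the comparison is
shown):

/* page_dirtied_when would be the new per-page timestamp, stamped with
 * jiffies at set_page_dirty() time. */
static bool dirtied_after_sync_started(unsigned long page_dirtied_when,
				       unsigned long sync_start)
{
	return time_after(page_dirtied_when, sync_start);
}

A write_cache_pages()-style loop would then simply skip any page for which
this returns true, since that page was dirtied under it and is not its
responsibility.
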
> Hopefully high performance applications are instantiating the file
> up-front and are using sync_file_range() to prevent these sorts of
> > things from happening. But they probably aren't.

--- for databases it is quite possible that one thread is writing
already-journaled data (so it doesn't care when that data is really
written) while another thread is calling fsync() on the same inode
simultaneously --- so fsync() could mistakenly write the data generated by
the first thread and ignore the data generated by the second thread, which
is the data it should really write.

Mikulas

09-23-2008, 11:50 PM


Re: [PATCH] Memory management livelock

On Tue, 23 Sep 2008 19:11:51 -0400 (EDT)
Mikulas Patocka <mpatocka@redhat.com> wrote:
>
> > > wait_on_page_writeback_range is another example where the livelock
> > > happened, there is no protection at all against starvation.
> >
> > um, OK. So someone else is initiating IO for this inode and this
> > thread *never* gets to initiate any writeback. That's a bit of a
> > surprise.
> >
> > How do we fix that? Maybe decrement nr_to_write for these pages as
> > well?
>
> And what do you want to do with wait_on_page_writeback_range?

Don't know. I was asking you.
> When I
> solved that livelock in write_cache_pages(), I got another livelock in
> wait_on_page_writeback_range.
>
> > > BTW. that .nr_to_write = mapping->nrpages * 2 looks like a dangerous thing
> > > to me.
> > >
> > > Imagine this case: You have two pages with indices 4 and 5 dirty in a
> > > file. You call fsync(). It sets nr_to_write to 4.
> > >
> > > Meanwhile, another process makes pages 0, 1, 2, 3 dirty.
> > >
> > > The fsync() process goes to write_cache_pages, writes the first 4 dirty
> > > pages and exits because it goes over the limit.
> > >
> > > result --- you violate fsync() semantics, pages that were dirty before
> > > call to fsync() are not written when fsync() exits.
> >
> > yup, that's pretty much unfixable, really, unless new locks are added
> > which block threads which are writing to unrelated sections of the
> > file, and that could hurt some workloads quite a lot, I expect.
>
> It is fixable with the patch I sent --- it doesn't take any locks unless
> the starvation happens. Then, you don't have to use .nr_to_write for
> fsync anymore.

I agree that the patch is low-impact and relatively straightforward.
The main problem is making the address_space larger - there can be (and
often are) millions and millions of these things in memory. Making it
larger is a big deal. We should work hard to seek an alternative, and
afaict that isn't happening here.

We already have existing code and design which attempts to avoid
livelock without adding stuff to the address_space. Can it be modified
so as to patch up this quite obscure and rarely-occurring problem?
> Another solution could be to record in page structure jiffies when the
> page entered dirty state and writeback state. The start writeback/wait on
> writeback functions could then trivially ignore pages that were
> dirtied/writebacked while the function was in progress.
>
> > Hopefully high performance applications are instantiating the file
> > up-front and are using sync_file_range() to prevent these sorts of
> > things from happening. But they probably aren't.
>
> --- for databases it is pretty much possible that one thread is writing
> already journaled data (so it doesn't care when the data are really
> written) and another thread is calling fsync() on the same inode
> simultaneously --- so fsync() could mistakenly write the data generated by
> the first thread and ignore the data generated by the second thread, that
> it should really write.
>
> Mikulas

09-24-2008, 07:00 PM


Re: [PATCH] Memory management livelock

> > > yup, that's pretty much unfixable, really, unless new locks are added
> > > which block threads which are writing to unrelated sections of the
> > > file, and that could hurt some workloads quite a lot, I expect.
> >
> > It is fixable with the patch I sent --- it doesn't take any locks unless
> > the starvation happens. Then, you don't have to use .nr_to_write for
> > fsync anymore.
>
> I agree that the patch is low-impact and relatively straightforward.
> The main problem is making the address_space larger - there can (and
> often are) millions and millions of these things in memory. Making it
> larger is a big deal. We should work hard to seek an alternative, and
> afaict that isn't happening here.
>
> We already have existing code and design which attempts to avoid
> livelock without adding stuff to the address_space. Can it be modified
> so as to patch up this quite obscure and rarely-occurring problem?

I reworked my patch to use a bit in address_space->flags and hashed wait
queues, so it doesn't take any extra memory. I'm sending it in three
parts:
1 - make a generic function wait_action_schedule
2 - fix the livelock; the logic is exactly the same as in my previous
patch, with wait_on_bit_lock used instead of mutexes (a rough sketch of
the barrier follows below)
3 - remove that nr_pages * 2 limit, because it causes misbehavior and
possible data loss.
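
A minimal sketch of how such a bit-based barrier can look; the AS_STARVATION
bit number and the helper names are illustrative, taken from the description
above rather than copied from the actual patch:

#define AS_STARVATION	30	/* hypothetical bit in mapping->flags */

/* Walker side: raise the barrier once starvation is detected, drop it
 * when the walk finishes. */
static void take_starvation_barrier(struct address_space *mapping)
{
	wait_on_bit_lock(&mapping->flags, AS_STARVATION,
			 wait_action_schedule, TASK_UNINTERRUPTIBLE);
}

static void release_starvation_barrier(struct address_space *mapping)
{
	clear_bit(AS_STARVATION, &mapping->flags);
	smp_mb__after_clear_bit();
	wake_up_bit(&mapping->flags, AS_STARVATION);
}

/* Dirtier side (balance_dirty_pages): pass through the barrier, i.e.
 * wait only while some walker is holding it.  The fast path is a plain
 * test_bit(), so no extra locks or atomics when there is no starvation. */
static void starvation_wait(struct address_space *mapping)
{
	if (unlikely(test_bit(AS_STARVATION, &mapping->flags))) {
		take_starvation_barrier(mapping);
		release_starvation_barrier(mapping);
	}
}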

Mikulas

09-24-2008, 07:00 PM


[PATCH 1/3] Memory management livelock

A generic function wait_action_schedule that allows wait_on_bit_lock to be
used just like a mutex.
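
The patch body itself was not preserved in this archive; presumably the
helper is essentially just the following (a reconstruction, not the original
diff):

/* Action callback for wait_on_bit_lock(): simply sleep until woken, so
 * the bit behaves like a mutex for its callers. */
int wait_action_schedule(void *word)
{
	schedule();
	return 0;
}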


09-24-2008, 07:00 PM


[PATCH 3/3] Memory management livelock

Fix violation of sync()/fsync() semantics. Previous code walked up to
mapping->nrpages * 2 pages. Because pages could be created while
__filemap_fdatawrite_range was in progress, this could lead to misbehavior.
Example: there are two pages in address space with indices 4, 5. Both are dirty.
Someone calls __filemap_fdatawrite_range, it sets .nr_to_write = 4.
Meanwhile, some other process creates dirty pages 0, 1, 2, 3.
__filemap_fdatawrite_range writes pages 0, 1, 2, 3, finds out that it reached
the limit and exits.
Result: pages that were dirty before __filemap_fdatawrite_range was invoked were
not written.

With starvation protection from the previous patch, this mapping->nrpages * 2
logic is no longer needed.


This sequence is repeated three or four times and should be pulled out
into a well-commented function. That comment should explain the logic
behind the use of these barriers, please.


*What* is, forever? Data integrity syncs should have pages operated on
in-order, until we get to the end of the range. Circular writeback could
go through again, possibly, but no more than once.

> > I have seen livelock in this function. For you that example with
> > two dd's, one buffered write and the other directIO read doesn't work?
> > For me it livelocks here.
> >
> > wait_on_page_writeback_range is another example where the livelock
> > happened, there is no protection at all against starvation.
>
> um, OK. So someone else is initiating IO for this inode and this
> thread *never* gets to initiate any writeback. That's a bit of a
> surprise.
>
> How do we fix that? Maybe decrement nr_to_write for these pages as
> well?

What's the actual problem, though? nr_to_write should not be used for
data integrity operations, and it should not be critical for other
writeout. Upper layers should be able to deal with it rather than
have us lying to them.

> > BTW. that .nr_to_write = mapping->nrpages * 2 looks like a dangerous
> > thing to me.
> >
> > Imagine this case: You have two pages with indices 4 and 5 dirty in a
> > file. You call fsync(). It sets nr_to_write to 4.
> >
> > Meanwhile, another process makes pages 0, 1, 2, 3 dirty.
> >
> > The fsync() process goes to write_cache_pages, writes the first 4 dirty
> > pages and exits because it goes over the limit.
> >
> > result --- you violate fsync() semantics, pages that were dirty before
> > call to fsync() are not written when fsync() exits.

Wow, that's really nasty. Sad we still have known data integrity problems
in such core functions.

> yup, that's pretty much unfixable, really, unless new locks are added
> which block threads which are writing to unrelated sections of the
> file, and that could hurt some workloads quite a lot, I expect.

Why is it unfixable? Just ignore nr_to_write, and write out everything
properly, I would have thought.

Some things may go a tad slower, but those are going to be the things
that are using fsync, in which case they are going to hurt much more
from the loss of data integrity than a slowdown.

Unfortunately because we have played fast and loose for so long, they
expect this behaviour, were tested and optimised with it, and systems
designed and deployed with it, and will notice performance regressions
if we start trying to do things properly. This is one of my main
arguments for doing things correctly up-front, even if it means a
massive slowdown in some real or imagined workload: at least then we
will hear about complaints and be able to try to improve them rather
than setting ourselves up for failure later.
/rant

Anyway, in this case, I don't think there would be really big problems.
Also, I think there is a reasonable optimisation that might improve it
(2nd last point, in attached patch).

OK, so after glancing at the code... wow, it seems like there are a lot
of bugs in there.

10-03-2008, 02:50 AM


Re: [PATCH] Memory management livelock

On Fri, 3 Oct 2008 12:32:23 +1000 Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > yup, that's pretty much unfixable, really, unless new locks are added
> > which block threads which are writing to unrelated sections of the
> > file, and that could hurt some workloads quite a lot, I expect.
>
> Why is it unfixable? Just ignore nr_to_write, and write out everything
> properly, I would have thought.

That can cause fsync to wait arbitrarily long if some other process is
writing the file. This happens.

10-03-2008, 03:00 AM


Re: [PATCH] Memory management livelock

On Friday 03 October 2008 12:40, Andrew Morton wrote:
> On Fri, 3 Oct 2008 12:32:23 +1000 Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > > yup, that's pretty much unfixable, really, unless new locks are added
> > > which block threads which are writing to unrelated sections of the
> > > file, and that could hurt some workloads quite a lot, I expect.
> >
> > Why is it unfixable? Just ignore nr_to_write, and write out everything
> > properly, I would have thought.
>
> That can cause fsync to wait arbitrarily long if some other process is
> writing the file.

It can be fixed without touching non-fsync paths (see my next email for
the way to fix it without touching fastpaths).

> This happens.

What does such a thing? It would have been nicer to ask them to not do
that then, or get them to use range syncs or something. Now that's much
harder because we've accepted the crappy workaround for so long.

It's far far worse to just ignore data integrity of fsync because of the
problem. Should at least have returned an error from fsync in that case,
or make it interruptible or something.

OK, I have been able to reproduce it somewhat. It is not a livelock,
but what is happening is that direct IO read basically does an fsync
on the file before performing the IO. The fsync gets stuck behind the
dd that is dirtying the pages, and ends up following behind it and
doing all its IO for it.

The following patch avoids the issue for direct IO, by using the range
syncs rather than trying to sync the whole file.
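
(The patch itself is not reproduced in this archive. As a rough illustration
of the idea, in the O_DIRECT read path something along these lines replaces
the whole-file sync; the exact helpers and call site in the real patch may
differ:)

	/* Before: the O_DIRECT read path synced and waited on the whole file:
	 *	retval = filemap_write_and_wait(mapping);
	 * After: restrict writeback and waiting to the byte range that this
	 * direct-IO request actually covers. */
	retval = filemap_fdatawrite_range(mapping, pos, pos + count - 1);
	if (retval == 0)
		retval = wait_on_page_writeback_range(mapping,
				pos >> PAGE_CACHE_SHIFT,
				(pos + count - 1) >> PAGE_CACHE_SHIFT);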

The underlying problem I guess is unchanged. Is it really a problem,
though? The way I'd love to solve it is actually by adding another bit
or two to the pagecache radix tree, that can be used to transiently tag
the tree for future operations. That way we could record the dirty and
writeback pages up front, and then only bother with operating on them.

That's *if* it really is a problem. I don't have much pity for someone
doing buffered IO and direct IO to the same pages of the same file :)

10-03-2008, 03:20 AM


Re: [PATCH] Memory management livelock

On Fri, 3 Oct 2008 12:59:17 +1000 Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> On Friday 03 October 2008 12:40, Andrew Morton wrote:
> > On Fri, 3 Oct 2008 12:32:23 +1000 Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > > > yup, that's pretty much unfixable, really, unless new locks are added
> > > > which block threads which are writing to unrelated sections of the
> > > > file, and that could hurt some workloads quite a lot, I expect.
> > >
> > > Why is it unfixable? Just ignore nr_to_write, and write out everything
> > > properly, I would have thought.
> >
> > That can cause fsync to wait arbitrarily long if some other process is
> > writing the file.
>
> It can be fixed without touching non-fsync paths (see my next email for
> the way to fix it without touching fastpaths).
>

yup.

>
> > This happens.
>
> What does such a thing?

I forget. People do all sorts of weird stuff.
> It would have been nicer to ask them to not do
> that then, or get them to use range syncs or something. Now that's much
> harder because we've accepted the crappy workaround for so long.
>
> It's far far worse to just ignore data integrity of fsync because of the
> problem. Should at least have returned an error from fsync in that case,
> or make it interruptible or something.

If a file has one dirty page at offset 1000000000000000 then someone
does an fsync() and someone else gets in first and starts madly writing
pages at offset 0, we want to write that page at 1000000000000000.
Somehow.

I expect there's no solution which avoids blocking the writers at some
stage.

10-03-2008, 03:50 AM


Re: [PATCH] Memory management livelock

On Friday 03 October 2008 13:14, Andrew Morton wrote:
> On Fri, 3 Oct 2008 12:59:17 +1000 Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > On Friday 03 October 2008 12:40, Andrew Morton wrote:
> > > That can cause fsync to wait arbitrarily long if some other process is
> > > writing the file.
> >
> > It can be fixed without touching non-fsync paths (see my next email for
> > the way to fix it without touching fastpaths).
>
> yup.
>
> > > This happens.
> >
> > What does such a thing?
>
> I forget. People do all sorts of weird stuff.

Damn people...

I guess they also do non-weird stuff like expecting fsync to really fsync.

> > It would have been nicer to ask them to not do
> > that then, or get them to use range syncs or something. Now that's much
> > harder because we've accepted the crappy workaround for so long.
> >
> > It's far far worse to just ignore data integrity of fsync because of the
> > problem. Should at least have returned an error from fsync in that case,
> > or make it interruptible or something.
>
> If a file has one dirty page at offset 1000000000000000 then someone
> does an fsync() and someone else gets in first and starts madly writing
> pages at offset 0, we want to write that page at 1000000000000000.
> Somehow.
>
> I expect there's no solution which avoids blocking the writers at some
> stage.

See my other email. Something roughly like this would do the trick
(hey, it actually boots and runs and does fix the problem too).

It's ugly because we don't have quite the right radix tree operations
yet (eg. lookup multiple tags, set tag X if tag Y was set, proper range
lookups). But the theory is to up-front tag the pages that we need to
get to disk.

Completely no impact or slowdown to any writers (although it does add
8 bytes of tags to the radix tree node... but doesn't increase memory
footprint as such due to slab).
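
Sketched out, the up-front tagging idea reads something like the following.
PAGECACHE_TAG_FSYNC and the helpers are hypothetical; as discussed below,
the radix tree did not yet have the right primitives for passes like these:

/* hypothetical extra tag next to PAGECACHE_TAG_DIRTY / _WRITEBACK */
#define PAGECACHE_TAG_FSYNC	2

static void fsync_range_upfront(struct address_space *mapping,
				pgoff_t start, pgoff_t end)
{
	/* Pass 1: under mapping->tree_lock, tag every page in [start, end]
	 * that is currently dirty or under writeback with the FSYNC tag.
	 * Pages dirtied after this point never receive the tag, so a
	 * concurrent writer cannot make this sync run forever. */
	tag_dirty_and_writeback_pages(mapping, start, end);	/* placeholder */

	/* Pass 2: start writeback on only the FSYNC-tagged pages. */
	write_tagged_pages(mapping, PAGECACHE_TAG_FSYNC);	/* placeholder */

	/* Pass 3: wait on only those pages, clearing the tag as we go. */
	wait_on_tagged_pages(mapping, PAGECACHE_TAG_FSYNC);	/* placeholder */
}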

10-03-2008, 04:00 AM


Re: [PATCH] Memory management livelock

On Fri, 3 Oct 2008 13:47:21 +1000 Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > I expect there's no solution which avoids blocking the writers at some
> > stage.
>
> See my other email. Something roughly like this would do the trick
> (hey, it actually boots and runs and does fix the problem too).

It needs exclusion to protect all those temp tags. Is do_fsync()'s
i_mutex sufficient? It's quite unobvious (and unmaintainable?) that all
the callers of this stuff are running under that lock.
> It's ugly because we don't have quite the right radix tree operations
> yet (eg. lookup multiple tags, set tag X if tag Y was set, proper range
> lookups). But the theory is to up-front tag the pages that we need to
> get to disk.

Perhaps some callback-calling radix tree walker.
> Completely no impact or slowdown to any writers (although it does add
> 8 bytes of tags to the radix tree node... but doesn't increase memory
> footprint as such due to slab).

Can we reduce the amount of copy-n-pasting here?

10-03-2008, 04:10 AM


Re: [PATCH] Memory management livelock

On Friday 03 October 2008 13:56, Andrew Morton wrote:
> On Fri, 3 Oct 2008 13:47:21 +1000 Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > > I expect there's no solution which avoids blocking the writers at some
> > > stage.
> >
> > See my other email. Something roughly like this would do the trick
> > (hey, it actually boots and runs and does fix the problem too).
>
> It needs exclusion to protect all those temp tags. Is do_fsync()'s
> i_mutex sufficient? It's quite unobvious (and unmaintainable?) that all
> the callers of this stuff are running under that lock.

Yeah... it does need a lock, which I brushed under the carpet :P
I was going to just say use i_mutex, but then we really would start
impacting on other fastpaths (eg writers).

Possibly a new mutex in the address_space? That way we can say
"anybody who holds this mutex is allowed to use the tag for anything"
and it doesn't have to be fsync specific (whether that would be of
any use to anything else, I don't know).

> > It's ugly because we don't have quite the right radix tree operations
> > yet (eg. lookup multiple tags, set tag X if tag Y was set, proper range
> > lookups). But the theory is to up-front tag the pages that we need to
> > get to disk.
>
> Perhaps some callback-calling radix tree walker.

Possibly, yes. That would make it fairly general. I'll have a look...

> > Completely no impact or slowdown to any writers (although it does add
> > 8 bytes of tags to the radix tree node... but doesn't increase memory
> > footprint as such due to slab).
>
> Can we reduce the amount of copy-n-pasting here?

Yeah... I went to break the sync/async cases into two, but it looks like
it may not have been worthwhile. Just another branch might be the best
way to go.

As far as the c&p in setting the FSYNC tag, yes that should all go away
if the radix-tree is up to scratch. Basically:

radix_tree_tag_set_if_tagged(start, end, ifWRITEBACK|DIRTY, setFSYNC);

should be able to replace the whole thing, and we'd hold the tree_lock, so
we would not have to take the page lock etc. Basically it would be much
nicer... even somewhere close to a viable solution.

10-03-2008, 04:20 AM


Re: [PATCH] Memory management livelock

On Fri, 3 Oct 2008 14:07:55 +1000 Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> On Friday 03 October 2008 13:56, Andrew Morton wrote:
> > On Fri, 3 Oct 2008 13:47:21 +1000 Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > > > I expect there's no solution which avoids blocking the writers at some
> > > > stage.
> > >
> > > See my other email. Something roughly like this would do the trick
> > > (hey, it actually boots and runs and does fix the problem too).
> >
> > It needs exclusion to protect all those temp tags. Is do_fsync()'s
> > i_mutex sufficient? It's quite unobvious (and unmaintainable?) that all
> > the callers of this stuff are running under that lock.
>
> Yeah... it does need a lock, which I brushed under the carpet :P
> I was going to just say use i_mutex, but then we really would start
> impacting on other fastpaths (eg writers).
>
> Possibly a new mutex in the address_space?

That's another, umm 24 bytes minimum in the address_space (and inode).
That's fairly ouch, which is why Mikulas did that hokey bit-based
thing.
> That way we can say
> "anybody who holds this mutex is allowed to use the tag for anything"
> and it doesn't have to be fsync specific (whether that would be of
> any use to anything else, I don't know).
>
>
> > > It's ugly because we don't have quite the right radix tree operations
> > > yet (eg. lookup multiple tags, set tag X if tag Y was set, proper range
> > > lookups). But the theory is to up-front tag the pages that we need to
> > > get to disk.
> >
> > Perhaps some callback-calling radix tree walker.
>
> Possibly, yes. That would make it fairly general. I'll have a look...
>
>
> > > Completely no impact or slowdown to any writers (although it does add
> > > 8 bytes of tags to the radix tree node... but doesn't increase memory
> > > footprint as such due to slab).
> >
> > Can we reduce the amount of copy-n-pasting here?
>
> Yeah... I went to break the sync/async cases into two, but it looks like
> it may not have been worthwhile. Just another branch might be the best
> way to go.

Yup. Could add another do-this flag in the writeback_control, perhaps.
Or even a function pointer.
> As far as the c&p in setting the FSYNC tag, yes that should all go away
> if the radix-tree is up to scratch. Basically:
>
> radix_tree_tag_set_if_tagged(start, end, ifWRITEBACK|DIRTY, setFSYNC);
>
> should be able to replace the whole thing, and we'd hold the tree_lock, so
> we would not have to take the page lock etc. Basically it would be much
> nicer... even somewhere close to a viable solution.