Re: still nfs problems [Was: Linux 2.6.37-rc8]

From: George Spelvin
Date: Sat Jan 01 2011 - 00:44:53 EST


>> 1) Look again; it's O(1) work per entry, or O(n) work for an n-entry
>> directory. And O(1) space. With very small constant factors,

> Yes. I was thinking about it this morning (after coffee).

Thank you for the second look.

> One variant on those algorithms that might make sense here is to save
> the current cookie each time we see that the result of a cookie search
> is a filp->f_pos offset < the current filp->f_pos offset. That means we
> will in general only detect the loop after going through an entire
> cycle, but that should be sufficient...

All of these low-overhead algorithms can take a couple of loop iterations
before they detect it; their job is to achieve a reasonably low constant
factor in time using O(1) space.

The worst case for the power-of-two algorithm is when the loop is n = 2^k+1
items long. When you get to item 2^(k+1), you'll be comparing to item
2^k, which is a mismatch. Then you'll save the cookie from 2^(k+1) and
have to go one full cycle further, to 2^(k+1) + 2^k + 1, or about 3*n,
before detecting it.
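For concreteness, here's a sketch (plain userspace C, not kernel code)
of the power-of-two scheme. Everything here is invented for
illustration: next_fn stands in for whatever fetches the next readdir
cookie from the server, and the limit is just a give-up bound.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical iterator: returns the cookie following 'c'. */
typedef uint64_t (*next_fn)(uint64_t c);

/*
 * Power-of-two (Brent's) cycle detection, O(1) space: one saved
 * cookie.  Returns true if the cookie stream starting at 'start'
 * revisits the saved cookie, false once 'limit' total steps pass
 * without a match.
 */
static bool cookie_loop(uint64_t start, next_fn next, unsigned long limit)
{
	uint64_t saved = start;		/* cookie saved at the last power of 2 */
	unsigned long power = 1;	/* resave after this many steps */
	unsigned long steps = 1;	/* steps since the last resave */
	unsigned long total = 1;
	uint64_t c = next(start);

	while (c != saved) {
		if (total++ > limit)
			return false;	/* give up: no loop within 'limit' */
		if (steps == power) {	/* power of 2 reached: resave */
			saved = c;
			power *= 2;
			steps = 0;
		}
		c = next(c);
		steps++;
	}
	return true;
}

/* Demo stream: tail 0->1->2->3, then a 7-entry cycle 3->4->...->9->3. */
static uint64_t demo_next(uint64_t c)
{
	return (c == 9) ? 3 : c + 1;
}

/* Demo stream with no cycle at all. */
static uint64_t inc_next(uint64_t c)
{
	return c + 1;
}
```

With these demo streams, cookie_loop(0, demo_next, 100) reports the
loop, while cookie_loop(0, inc_next, 50) gives up at the limit.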

I don't consider this a problem, because it wastes a few seconds of
computer time, to be followed by wasting a few hours trying to pass
a bug report upstream about the broken NFS server...

I don't quite follow how your proposed variant works. Pardon my
ignorance of NFS, but is filp->f_pos something that comes from the
server, or something that is synthesized locally? Obviously, if you
keep a record of all the server cookies, you can detect loops quite
easily.

If it comes from the server, there's a risk that the cycle contains two
backward jumps; then the saved cookie gets overwritten at the second
jump before it can ever be revisited, and you'll never notice the loop.
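To make that risk concrete, here is a small standalone simulation
(invented cookie/offset values, nothing NFS-specific). The detector
below saves one cookie at each backward jump in the offset and reports
a loop when the saved cookie reappears: it catches a cycle with one
backward jump, but spins forever on a cycle with two, because the
second jump overwrites the saved cookie every time around.

```c
/* One entry of a made-up cookie stream: the server cookie plus the
 * offset the client ends up at after looking it up. */
struct entry {
	unsigned long cookie;
	unsigned long offset;
};

/*
 * Save-on-backward-jump detector (my reading of the quoted proposal):
 * remember one cookie, resaving whenever the offset jumps backward;
 * report a loop if the saved cookie is ever seen again.  Returns the
 * step count at detection, or -1 if nothing is reported within 'max'
 * steps.  'cyc' is treated as an n-entry cycle.
 */
static long detect(const struct entry *cyc, int n, long max)
{
	unsigned long saved = 0, prev_off = 0;
	int have_saved = 0;

	for (long step = 0; step < max; step++) {
		const struct entry *e = &cyc[step % n];

		if (have_saved && e->cookie == saved)
			return step;		/* loop detected */
		if (step > 0 && e->offset < prev_off) {
			saved = e->cookie;	/* backward jump: resave */
			have_saved = 1;
		}
		prev_off = e->offset;
	}
	return -1;
}
```

A 4-entry cycle with offsets 10,20,30,40 has a single backward jump (on
the wrap) and is detected; one with offsets 10,20,5,15 has two backward
jumps and never is.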
