Re: SEEK_HOLE and SEEK_DATA support?

From: Jim Dennis
Date: Fri Mar 03 2006 - 12:02:08 EST



On Fri, 03 Mar 2006 10:15:05 +0100 Arjan van de Ven wrote:

> On Fri, 2006-03-03 at 03:33 -0500, Lee Revell wrote:
>> On Thu, 2006-03-02 at 13:49 -0800, Jim Dennis wrote:

>>> I ask primarily because of the interplay between 64-bit systems and
>>> things like /var/log/lastlog (which appears as a 1.2TiB file due to
>>> the nfsnobody UID of 4294967294).

>>> (I realize that adding support for these additional seek() flags
>>> wouldn't solve the problem ... archiving tools would still have to
>>> implement it. And I can also hear the argument that Red Hat and
>>> other distributions should re-implement lastlog handling to use a
>>> more modern and efficient hashing/index format and perhaps that
>>> they should set nfsnobody to "-1" ...

[correction: they should NOT set nfsnobody to "-1" ...]

>> So the presence of very high UIDs causes lastlog to be huge? That
>> just sounds like a RedHat bug.

> it causes it to be a sparse file

> lastlog is an array based file format ;) but sparse

Perhaps I should have been a bit clearer. /var/log/lastlog has
been a sparse file in most implementations for ... well ... forever.

The example issue is that support for large UIDs and the convention
of placing nfsnobody near the top of the UID range (4294967294, i.e.
-2 as a 32-bit uid_t) combine to create a file whose apparent size is
very large. The du of the file is (in my case) only about 100KiB. So
there's a small cluster of used blocks for the valid corporate UIDs
that have ever accessed this machine ... then a huge allocation hole,
and then one block storing the lastlog timestamp for nfsnobody.
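
To make that concrete, here's a rough sketch of how an array-indexed
format like lastlog ends up sparse. The record layout below is a
simplified stand-in rather than the real struct lastlog (whose exact
size varies by libc and architecture), and record_login() is just an
illustrative name:

#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Simplified stand-in for a lastlog record; the sizes are made up. */
struct fake_lastlog {
        int32_t time;
        char    line[32];
        char    host[256];
};

static int record_login(int fd, uid_t uid, int32_t when)
{
        struct fake_lastlog rec;

        memset(&rec, 0, sizeof(rec));
        rec.time = when;

        /*
         * The file is indexed directly by UID: one pwrite() at
         * uid * sizeof(rec).  With uid = 4294967294 that offset lands
         * a bit over a terabyte out, and everything between the
         * highest "real" UID and this record is left as a hole.
         */
        if (pwrite(fd, &rec, sizeof(rec),
                   (off_t)uid * sizeof(rec)) != sizeof(rec))
                return -1;
        return 0;
}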

However, this message was not intended to dwell on the cause of that
huge sparse file ... but rather to raise the core issue: how do we
efficiently handle skipping over (potentially huge) allocation holes
in a portable fashion that might be adopted by archiving and other
tools? I provided the example simply to point out that this does
happen in the real world and has a significant cost (40 minutes to
scan through the NULs with which the filesystem fills the hole for
read()s).

OpenSolaris has implemented a mechanism for doing this (the SEEK_HOLE
and SEEK_DATA whence values for lseek()), and it sounds reasonable
from my admittedly superficial perspective.
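
For the curious, here's roughly how I'd expect an archiver to walk a
file with that interface, going by my reading of the Solaris
description. walk_data() and copy_region() are names I've made up,
the numeric values for SEEK_DATA/SEEK_HOLE are what OpenSolaris
appears to use, and of course none of this exists in today's Linux
headers, which is rather the point:

#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef SEEK_DATA
#define SEEK_DATA       3       /* values OpenSolaris appears to use */
#define SEEK_HOLE       4
#endif

/*
 * Visit every allocated extent of fd, skipping the holes entirely.
 * copy_region() stands in for whatever the archiver does with the
 * bytes in [start, end).
 */
static int walk_data(int fd,
                     int (*copy_region)(int fd, off_t start, off_t end))
{
        off_t data = 0, hole;

        for (;;) {
                data = lseek(fd, data, SEEK_DATA);
                if (data == (off_t)-1)
                        /* ENXIO: no data at or after this offset. */
                        return errno == ENXIO ? 0 : -1;

                hole = lseek(fd, data, SEEK_HOLE);
                if (hole == (off_t)-1)
                        return -1;

                if (copy_region(fd, data, hole))
                        return -1;

                data = hole;    /* continue the scan past this extent */
        }
}

Since there is (as I understand it) an implicit hole at end-of-file,
a completely non-sparse file just comes back as one big data extent,
so a tool could take this path unconditionally once the flags exist.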

--
Jim Dennis