Re: [PATCH 0/3] readfile(2): a new syscall to make open/read/close faster
From: Greg KH
Date: Mon Jul 06 2020 - 07:18:54 EST
On Mon, Jul 06, 2020 at 08:07:46AM +0200, Jan Ziak wrote:
> On Sun, Jul 5, 2020 at 1:58 PM Greg KH <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
> >
> > On Sun, Jul 05, 2020 at 06:09:03AM +0200, Jan Ziak wrote:
> > > On Sun, Jul 5, 2020 at 5:27 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On Sun, Jul 05, 2020 at 05:18:58AM +0200, Jan Ziak wrote:
> > > > > On Sun, Jul 5, 2020 at 5:12 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> > > > > >
> > > > > > You should probably take a look at io_uring. That has the level of
> > > > > > complexity of this proposal and supports open/read/close along with many
> > > > > > other opcodes.
> > > > >
> > > > > Then glibc can implement readfile using io_uring and there is no need
> > > > > for a new single-file readfile syscall.
> > > >
> > > > It could, sure. But there's also a value in having a simple interface
> > > > to accomplish a simple task. Your proposed API added a very complex
> > > > interface to satisfy needs that clearly aren't part of the problem space
> > > > that Greg is looking to address.
> > >
> > > I believe that we should look at the single-file readfile syscall from
> > > a performance viewpoint. If an application expects to read a
> > > couple of small/medium-size files per second, then neither readfile
> > > nor readfiles makes sense in terms of improving performance. The
> > > benefits start to show up only when an application expects to
> > > read at least a hundred files per second. The "per second" part is
> > > important and cannot be left out. Because readfile only improves
> > > performance for many-file reads, the syscall that applications
> > > performing many-file reads actually want is the multi-file version,
> > > not the single-file version.
> >
> > It is also a measurable improvement even when reading just a single file.
> > Here's my really really fast AMD system doing just one call to readfile
> > vs. one call sequence to open/read/close:
> >
> > $ ./readfile_speed -l 1
> > Running readfile test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
> > Took 3410 ns
> > Running open/read/close test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
> > Took 3780 ns
> >
> > 370ns isn't all that much, yes, but it is 370ns that could have been
> > used for something else :)
>
> I am curious as to how you amortized or accounted for the fact that
> readfile() first needs to open the dirfd and then close it later.
I do not open a dirfd; look at the benchmark code in the patch, it's all
right there.
I can make it simpler, and will do that for the next round, as I want to
make it really obvious for people to test on their hardware.
> From a performance viewpoint, only code where readfile() is called
> multiple times from within a loop makes sense:
>
> dirfd = open(...);
> for (...) {
>         readfile(dirfd, ...);
> }
> close(dirfd);
No need to open a dirfd at all; my benchmarks did not do that. Just pass
in an absolute path if you don't want one. But if you do want to, because
you want to read a bunch of files from one directory, you can, and it will
be faster than reading those individual files without it :)
thanks,
greg k-h