Re: zram: per-cpu compression streams
From: Sergey Senozhatsky
Date: Thu Mar 24 2016 - 21:45:52 EST
On (03/25/16 08:41), Minchan Kim wrote:
> > Test #10 iozone -t 10 -R -r 80K -s 0M -I +Z
> > Initial write     3213973.56    2731512.62    4416466.25*
> > Rewrite           3066956.44*   2693819.50     332671.94
> > Read              7769523.25*   2681473.75     462840.44
> > Re-read           5244861.75    5473037.00*    382183.03
> > Reverse Read      7479397.25*   4869597.75     374714.06
> > Stride read       5403282.50*   5385083.75     382473.44
> > Random read       5131997.25    5176799.75*    380593.56
> > Mixed workload    3998043.25    4219049.00*   1645850.45
> > Random write      3452832.88    3290861.69    3588531.75*
> > Pwrite            3757435.81    2711756.47    4561807.88*
> > Pread             2743595.25*   2635835.00     412947.98
> > Fwrite           16076549.00   16741977.25*  14797209.38
> > Fread            23581812.62*  21664184.25    5064296.97
> > = real    0m44.490s   0m44.444s   0m44.609s
> > = user    0m0.054s    0m0.049s    0m0.055s
> > = sys     0m0.037s    0m0.046s    0m0.148s
> > so when the number of active tasks becomes larger than the number
> > of online CPUs, iozone reports data that is a bit hard to understand.
> > I can assume that, since we now keep preemption disabled longer in
> > the write path, a concurrent operation (READ or WRITE) can no longer
> > preempt the current one... slightly suspicious.
> > the other hard-to-understand thing is why the READ-only tests have
> > such huge jitter. READ-only tests don't depend on streams; they
> > don't even use them, since we supply the compressed data directly
> > to the decompression API.
> > maybe it's better to retire iozone and never use it again.
> > "118 insertions(+), 238 deletions(-)": the patches remove a big
> > pile of code.
> First of all, thank you very much!
> At a glance, the write workloads show a huge win, but it's worth
> investigating how such fluctuation/regression happens on the
> read-related tests (read and mixed workload).
yes, I was going to investigate in more detail but got interrupted;
I'll get back to it today/tomorrow.
> Could you send your patchset? I will test it.
oh, sorry, sure! attached (it's not a real patch submission yet, but
the patches look more or less ready, I guess).
patches are against next-20160324.