Re: [PATCH v5 2/7] crypto: SHA1 multibuffer crypto hash infrastructure

From: Tim Chen
Date: Tue Jul 29 2014 - 18:28:55 EST


On Fri, 2014-07-25 at 09:25 -0700, Tim Chen wrote:
> On Fri, 2014-07-25 at 09:08 +0200, Peter Zijlstra wrote:
> > On Tue, Jul 22, 2014 at 03:09:32PM -0700, Tim Chen wrote:
> > > +/* Called in workqueue context, do one real cryption work (via
> > > + * req->complete) and reschedule itself if there are more work to
> > > + * do. */
> >
> > You seem to manage the 'normal' comment style in other places, this one
> > 'special' for a reason?
>
> Yes, I'll need to correct it.
>
> >
> > > +static void mcryptd_queue_worker(struct work_struct *work)
> > > +{
> > > +	struct mcryptd_cpu_queue *cpu_queue;
> > > +	struct crypto_async_request *req, *backlog;
> > > +	int i;
> > > +
> > > +	/*
> > > +	 * Need to loop through more than once for multi-buffer to
> > > +	 * be effective.
> > > +	 */
> > > +
> > > +	cpu_queue = container_of(work, struct mcryptd_cpu_queue, work);
> > > +	i = 0;
> > > +	while (i < MCRYPTD_BATCH || single_task_running()) {
> > > +		/*
> > > +		 * preempt_disable/enable is used to prevent
> > > +		 * being preempted by mcryptd_enqueue_request()
> > > +		 */
> > > +		local_bh_disable();
> > > +		preempt_disable();
> > > +		backlog = crypto_get_backlog(&cpu_queue->queue);
> > > +		req = crypto_dequeue_request(&cpu_queue->queue);
> > > +		preempt_enable();
> > > +		local_bh_enable();
> > > +
> > > +		if (!req)
> > > +			return;
> > > +
> > > +		if (backlog)
> > > +			backlog->complete(backlog, -EINPROGRESS);
> > > +		req->complete(req, 0);
> > > +		if (!cpu_queue->queue.qlen)
> > > +			return;
> > > +		++i;
> > > +	}
> > > +	if (cpu_queue->queue.qlen)
> > > +		queue_work(kcrypto_wq, &cpu_queue->work);
> > > +}
> >
> > Right, so I don't think you need that single_task_running() thing for
> > this. Also, it's a rather inconclusive test: a task can come in while
> > we're processing the request, preempt the processing, and complete, and by
> > the time we're back to check whether we want to continue the loop it's
> > only us again.
> >
> > So it's actually misleading.
> >
> > You could do that, but then you have to keep preemption disabled across
> > the entire thing and break out on need_resched(); that would entirely
> > obviate the need for single_task_running(), but it would increase the
> > preemption latency by however long the req processing takes, so that's
> > bad too.
> >
> > But you can get similar 'fuzzy' semantics as above by using something
> > like:
> >
> > static inline bool did_context_switch(unsigned long *nivcs)
> > {
> > 	if (*nivcs != current->nivcsw) {
> > 		*nivcs = current->nivcsw;
> > 		return true;
> > 	}
> > 	return false;
> > }
> >
> > static void mcryptd_queue_worker()
> > {
> > 	...
> > 	unsigned long nivcs = current->nivcsw;
> >
> > 	...
> >
> > 	while (!did_context_switch(&nivcs) || i < MCRYPTD_BATCH)
>
> Won't I need something like
>
> 	while ((!did_context_switch(&nivcs) && single_task_running()) ||
> 	       i < MCRYPTD_BATCH)
>
> for the case where a new task lands in my run queue but I don't get
> pre-empted? I should not continue to run in that case.
>
> >
> > ...
> > }
> >
> > It'll terminate earlier, I suppose; it's the inverse test. But I think you
> > want to terminate early if there's any activity.
>
> I have some doubts here.
> If I'm still the only task running, why not continue, even if I've been
> pre-empted? Since there's nothing else to run, why not take
> advantage of the cpu?
>

Peter,

Is my explanation adequate, or do you still have objections to the
implementation? What I'm trying to decide here is whether to extend the
batch processing done by the crypto daemon (i.e. prolong its busy period)
based on whether there are other tasks to run. Being pre-empted and then
resuming the crypto daemon is okay: even if a pre-emption and context
switch happen in between, we can still do extended processing as long as
there's nothing else to run. Otherwise the cpu will just go idle.
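
To make the condition concrete, roughly what I have in mind is the sketch
below (untested, just for discussion). It combines the did_context_switch()
helper from your mail, using task_struct's nivcsw counter for the check,
with the single_task_running() test from the patch, and keeps the same
queue-draining body:

static inline bool did_context_switch(unsigned long *nivcsw)
{
	if (*nivcsw != current->nivcsw) {
		*nivcsw = current->nivcsw;
		return true;
	}
	return false;
}

static void mcryptd_queue_worker(struct work_struct *work)
{
	struct mcryptd_cpu_queue *cpu_queue;
	struct crypto_async_request *req, *backlog;
	unsigned long nivcsw = current->nivcsw;
	int i = 0;

	cpu_queue = container_of(work, struct mcryptd_cpu_queue, work);

	/*
	 * Always process up to MCRYPTD_BATCH requests; beyond that,
	 * keep batching only while nothing else wants the cpu (no
	 * other task queued and no context switch observed), so the
	 * extension can only consume otherwise idle cycles.
	 */
	while ((!did_context_switch(&nivcsw) && single_task_running()) ||
	       i < MCRYPTD_BATCH) {
		/*
		 * preempt_disable/enable is used to prevent
		 * being preempted by mcryptd_enqueue_request()
		 */
		local_bh_disable();
		preempt_disable();
		backlog = crypto_get_backlog(&cpu_queue->queue);
		req = crypto_dequeue_request(&cpu_queue->queue);
		preempt_enable();
		local_bh_enable();

		if (!req)
			return;

		if (backlog)
			backlog->complete(backlog, -EINPROGRESS);
		req->complete(req, 0);
		if (!cpu_queue->queue.qlen)
			return;
		++i;
	}
	if (cpu_queue->queue.qlen)
		queue_work(kcrypto_wq, &cpu_queue->work);
}

The only change from the patch is the loop condition: the MCRYPTD_BATCH
term still bounds the guaranteed batch, and the extra term only extends
the batch when the cpu would otherwise sit idle.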

Tim
