Re: [PATCH RFC ticketlock] Auto-queued ticketlock
From: Steven Rostedt
Date: Tue Jun 11 2013 - 13:13:59 EST
On Tue, 2013-06-11 at 09:43 -0700, Paul E. McKenney wrote:
> > > I am a bit concerned about the size of the head queue table itself. RHEL6,
> > > for example, had defined CONFIG_NR_CPUS to be 4096, which means a table
> > > size of 256. Maybe it is better to dynamically allocate the table at
> > > init time depending on the actual number of CPUs in the system.
> >
> > Yeah, it can be allocated dynamically at boot.
>
> But let's first demonstrate the need. Keep in mind that an early-boot
> deadlock would exercise this code.
I think an early-boot deadlock has more problems than this :-)
Now if we allocate this table before the other CPUs are enabled, there's no
need to worry about it being accessed before it exists. The queue heads can
only be used on contention, and there can be no contention while we are
running on just one CPU.
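
Just to make that concrete, something along these lines would do (only a
sketch, not the actual patch; the tkt_q_* names, the placeholder struct, and
the one-head-per-16-CPUs sizing taken from the 4096 -> 256 numbers quoted
above are all illustrative). An early initcall runs before smp_init() brings
up the secondary CPUs, so the table is there before anyone can contend:

#include <linux/bug.h>
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/slab.h>

struct tkt_q_head {
	void *spin_queue;		/* placeholder for the real per-queue state */
};

static struct tkt_q_head *tkt_q_heads;	/* hypothetical queue-head table */
static unsigned int tkt_q_nheads;

static int __init tkt_q_init(void)
{
	/*
	 * One queue head per 16 possible CPUs, matching the 4096-CPU ->
	 * 256-entry ratio quoted above, but sized from what this boot
	 * actually has rather than from CONFIG_NR_CPUS.
	 */
	tkt_q_nheads = max(num_possible_cpus() / 16, 1U);
	tkt_q_heads = kcalloc(tkt_q_nheads, sizeof(*tkt_q_heads), GFP_KERNEL);
	BUG_ON(!tkt_q_heads);
	return 0;
}
early_initcall(tkt_q_init);	/* pre-SMP: runs before other CPUs come online */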
> Yes, it is just a check for NULL,
> but on the other hand I didn't get the impression that you thought that
> this code was too simple. ;-)
I wouldn't change the code that uses it. The NULL case should never be hit,
and if an early-boot deadlock did trigger it, I think that would actually be
a plus. An early-boot deadlock normally causes the system to hang with no
feedback whatsoever, leaving the developer hours of crying for mommy and
pulling out their hair, because the system just stops doing anything except
showing them a blinking cursor that blinks "haha, haha, haha".
But if an early-boot deadlock were to cause this code to be triggered and do
a NULL pointer dereference, the system would crash, most likely producing a
backtrace that gives the developer a lot more information about what is
happening. Sure, it may confuse them at first, but then they can say: "Why is
this code triggering before we have other CPUs? Oh, I have a deadlock here,"
and go fix the code in a matter of minutes instead of hours.
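
To be explicit about what I mean, continuing the made-up names from the
sketch above (again, just the shape of such code, not the actual patch): the
contention slow path would index the table with no NULL check at all:

static bool tkt_q_do_spin(arch_spinlock_t *lock)
{
	struct tkt_q_head *head = &tkt_q_heads[0];	/* real code would hash the lock; elided */

	/*
	 * Deliberately no "if (!tkt_q_heads)" check: if an early-boot
	 * deadlock somehow got here before tkt_q_init() ran, this load
	 * oopses and prints a backtrace instead of hanging silently.
	 */
	return head->spin_queue != NULL;
}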
Note, I don't even see this triggering with an early-boot deadlock. The only
way that can happen is if the task tries to take a spinlock it already owns,
or an interrupt goes off and grabs a spinlock that the task currently holds
without having disabled interrupts. In either case the ticket counter would
only be at 2, which is far below the threshold that triggers queuing.
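
For reference, the check I'm talking about is just the spread between the
ticket tail and head measured against a switch threshold; a rough sketch
(TKT_Q_SWITCH, its value, and the helper name are all illustrative):

#define TKT_Q_SWITCH	8	/* made-up value; the patch picks the real one */

/*
 * Queue only when the spread between the next ticket to hand out (tail)
 * and the ticket being served (head) goes past the threshold.  A
 * self-deadlock leaves the spread at 2, so this never returns true and
 * the queue table is never touched.
 */
static inline bool tkt_should_queue(unsigned int head, unsigned int tail)
{
	return tail - head > TKT_Q_SWITCH;
}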
-- Steve