Re: [PATCH 1/2] SOUND: kill gameport bits
From: Takashi Iwai
Date: Mon Aug 25 2014 - 03:13:49 EST
At Sun, 24 Aug 2014 07:07:16 +0200,
Andreas Mohr wrote:
>
> On Thu, Aug 21, 2014 at 01:29:03PM +0200, Takashi Iwai wrote:
> > I did a quick hack and it seems to work on my box.
> > The patch is below.
>
> Thanks!!
>
> Further comments below.
>
> I will be testing this ASAP.
> > +static bool use_ktime = true;
> > +module_param(use_ktime, bool, 0400);
>
> Towards the final commit, this should probably gain param docs on what
> can be switched here and why.
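> Something like this would do (wording just a suggestion):
>
> 	MODULE_PARM_DESC(use_ktime,
> 			 "Use ktime instead of the TSC for measuring I/O speed");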
>
> > +
> > /*
> > * gameport_mutex protects entire gameport subsystem and is taken
> > * every time gameport port or driver registered or unregistered.
> > @@ -76,6 +80,36 @@ static unsigned int get_time_pit(void)
> >
> > static int gameport_measure_speed(struct gameport *gameport)
> > {
> > + unsigned int i, t, tx;
> > + u64 t1, t2;
> > + unsigned long flags;
> > +
> > + if (gameport_open(gameport, NULL, GAMEPORT_MODE_RAW))
> > + return 0;
> > +
> > + tx = ~0;
> > +
> > + for (i = 0; i < 50; i++) {
> > + local_irq_save(flags);
> > + t1 = ktime_get_ns();
> > + for (t = 0; t < 50; t++)
> > + gameport_read(gameport);
> > + t2 = ktime_get_ns();
> > + local_irq_restore(flags);
> > + udelay(i * 10);
> > + if (t2 - t1 < tx)
> > + tx = t2 - t1;
>
> This implementation does not do the more complex t3/t2/t1 calculation
> that the PIT implementation does (likely, though uncommented, to
> eliminate the timer's own I/O delay from the measurement).
> Do the ktime/TSC implementations need such an I/O timing correction,
> or are they so fast relative to gameport I/O delays
> that it does not matter? (probably the case for TSC, at least)
It's based on the x86-64 implementation, which doesn't take t3 into
account. I don't think it matters much on recent systems, but it
certainly can't hurt to measure it, too.
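In case we want that, here is an untested sketch of carrying the PIT
code's correction over to the ktime loop (same names as in the patch,
plus t3; the extra back-to-back read samples the timer cost itself):

	unsigned int i, t;
	u64 t1, t2, t3, tx = ~0ULL;
	unsigned long flags;

	for (i = 0; i < 50; i++) {
		local_irq_save(flags);
		t1 = ktime_get_ns();
		for (t = 0; t < 50; t++)
			gameport_read(gameport);
		t2 = ktime_get_ns();
		t3 = ktime_get_ns();	/* cost of one more timer read */
		local_irq_restore(flags);
		udelay(i * 10);
		/* keep the fastest batch, minus the timer read overhead */
		if ((t2 - t1) - (t3 - t2) < tx)
			tx = (t2 - t1) - (t3 - t2);
	}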
> Oh, and is there any reason that such a speed calculation remains
> painfully duplicated in both source files? That's possibly done for
> layering reasons, but I'd have to analyze it further.
Yeah, layering would be one reason. Another is that the TSC read
has to be a macro, so you'd need a reimplementation anyway, either
as a static inline or some such.
In my patch, I didn't want to change too much in one shot. It just
adds the replacement using ktime, that's all. If you'd like to work on
this further, feel free to do so.
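If someone does take it up: one conceivable shape (only a sketch, not
part of this patch) would be a shared core with the timestamp source
injected, so both files could call it; the TSC path would still need a
per-file static inline wrapper around GET_TIME() to pass in:

	/* sketch: fastest of 50 batches of 50 reads, timer injected */
	static u64 measure_fastest(struct gameport *gameport,
				   u64 (*now)(void))
	{
		unsigned int i, t;
		u64 t1, t2, tx = ~0ULL;
		unsigned long flags;

		for (i = 0; i < 50; i++) {
			local_irq_save(flags);
			t1 = now();
			for (t = 0; t < 50; t++)
				gameport_read(gameport);
			t2 = now();
			local_irq_restore(flags);
			udelay(i * 10);
			if (t2 - t1 < tx)
				tx = t2 - t1;
		}
		return tx;
	}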
> > +static inline u64 get_time(void)
> > +{
> > + if (use_ktime) {
> > + return ktime_get_ns();
> > + } else {
> > + unsigned int x;
> > + GET_TIME(x);
> > + return x;
> > + }
> > +}
>
> It might be useful to have a first commit to introduce these helpers,
> and a second commit to then add ktime support (to keep review code size
> down).
The very purpose of this helper is ktime. For TSC, the helper
*is* GET_TIME(). So splitting the commit without introducing ktime
doesn't make much sense.
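Note that callers also have to take differences consistently:
ktime_get_ns() is a monotonic u64 while GET_TIME() values are 32-bit
and wrap, so a companion helper along these lines is wanted (DELTA()
being the driver's existing wrap-safe difference macro):

	static inline unsigned int delta(u64 x, u64 y)
	{
		if (use_ktime)
			return y - x;
		else
			return DELTA((unsigned int)x, (unsigned int)y);
	}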
Nevertheless: did anyone test the patch at all...?
thanks,
Takashi