RE: [PATCH] cpu-topology: Fix the potential data corruption

From: Zengtao (B)
Date: Mon Mar 02 2020 - 21:59:03 EST


> -----Original Message-----
> From: Sudeep Holla [mailto:sudeep.holla@xxxxxxx]
> Sent: Monday, March 02, 2020 7:11 PM
> To: Zengtao (B)
> Cc: Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
> linux-kernel@xxxxxxxxxxxxxxx; Sudeep Holla
> Subject: Re: [PATCH] cpu-topology: Fix the potential data corruption
>
> On Sat, Feb 29, 2020 at 01:41:47AM +0000, Zengtao (B) wrote:
> > > -----Original Message-----
> > > From: Sudeep Holla [mailto:sudeep.holla@xxxxxxx]
> > > Sent: Friday, February 28, 2020 6:41 PM
> > > To: Zengtao (B)
> > > Cc: Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
> > > linux-kernel@xxxxxxxxxxxxxxx; Sudeep Holla
> > > Subject: Re: [PATCH] cpu-topology: Fix the potential data corruption
> > >
> > > On Fri, Feb 28, 2020 at 04:35:45PM +0800, Zeng Tao wrote:
> > > > Currently there are only 10 bytes to store the cpu-topology info.
> > > > That is:
> > > > snprintf(buffer, 10, "cluster%d",i);
> > > > snprintf(buffer, 10, "thread%d",i);
> > > > snprintf(buffer, 10, "core%d",i);
> > > >
> > > > In the boundary test, if the cluster number exceeds 100, there
> > > > will be a
> > >
> > > I don't understand your mention of 100 in particular above. I can
> > > see the issue if there are clusters with more than a 2-digit id.
> > > Though highly unlikely for now, I have no objection to the patch.
> > >
> >
> > It's the same meaning: an id with more than 2 digits means the
> > number is 100 or more, right?
>
> Yes. Maybe it is obvious, but I would prefer the commit message to be
> worded accordingly. Mentioning 100 specifically makes at least me
> think of something very specific to 100, rather than of any id with
> more than 2 digits.
>

Do you think I need to update the commit message and resend the patch?
I also don't mind if you modify the commit message yourself; either way
is fine with me, and it's a very trivial change.
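
For reference, here is a minimal userspace sketch (not the kernel code
itself; the 10-byte buffer simply mirrors the snprintf calls quoted
above) showing how the truncation makes different cluster ids collapse
to the same name:

#include <stdio.h>

int main(void)
{
	char buffer[10];

	/* "cluster99" is 9 characters + NUL = 10 bytes, so it still fits. */
	snprintf(buffer, sizeof(buffer), "cluster%d", 99);
	printf("%s\n", buffer);		/* prints "cluster99" */

	/* "cluster100" needs 11 bytes; snprintf truncates it to "cluster10",
	 * so cluster 100 (and 101..109) ends up with the same name as
	 * cluster 10. */
	snprintf(buffer, sizeof(buffer), "cluster%d", 100);
	printf("%s\n", buffer);		/* prints "cluster10" */

	return 0;
}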

> > Here 100 is from the tester/user perspective. We found this issue
> > when testing with QEMU.
>
> OK.
>
> --
> Regards,
> Sudeep