Re: [RESEND PATCH] arm64: v8.4: Support for new floating point multiplication variant

From: Suzuki K Poulose
Date: Mon Dec 11 2017 - 13:59:00 EST


Hi gengdongjiu

Sorry for the late response. I have a similar patch adding support for "FHM", which I was about to post this week.

On 11/12/17 13:29, Dave Martin wrote:
On Mon, Dec 11, 2017 at 08:47:00PM +0800, gengdongjiu wrote:

On 2017/12/11 19:59, Dave P Martin wrote:
On Sat, Dec 09, 2017 at 03:28:42PM +0000, Dongjiu Geng wrote:
ARM v8.4 extensions include support for new floating point
multiplication variant instructions to the AArch64 SIMD

Do we have any human-readable description of what the new instructions
do?

Since the v8.4 spec itself only describes these as "New Floating
Point Multiplication Variant", I wonder what "FHM" actually stands
for.
Thanks for pointing that out.
In fact, this feature only adds two instructions:
FP16 * FP16 + FP32
FP16 * FP16 - FP32

The spec calls this bit ID_AA64ISAR0_EL1.FHM, and I do not know why
it is named "FHM"; I think calling it "FMLXL" might be better, since
that could stand for the FMLAL/FMLSL instructions.
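
As a rough illustration of those two operations -- my own scalar sketch, not
anything from the patch -- one lane can be modelled in C as below. The product
of two FP16 values is always exactly representable in single precision (the
11-bit significands multiply into at most 22 bits), so widening first and then
doing a single add or subtract into the FP32 accumulator gives the "no
intermediate rounding" behaviour discussed further down the thread (_Float16
needs an AArch64 compiler with FP16 support):

/* Scalar model of one lane of the new instructions (sketch only). */
static float fmlal_lane(float acc, _Float16 a, _Float16 b)
{
	return acc + (float)a * (float)b;	/* FP16 * FP16 + FP32 */
}

static float fmlsl_lane(float acc, _Float16 a, _Float16 b)
{
	return acc - (float)a * (float)b;	/* FP16 * FP16 - FP32 */
}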

Although "FHM" is cryptic, I think it makes sense to keep this as "FHM"
to match the ISAR0 field name -- we've tended to follow this policy
for other extension names unless there's a much better or more obvious
name available.

For "FMLXL", new instructions might be added in the future that match
the same pattern, and then "FMLXL" could become ambiguous. So maybe
this is not the best choice.

I think FHM stands for "FP Half-precision Multiplication instructions". I vote for keeping the feature bit in sync with the register bit definition, i.e. FHM.
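
Concretely, keeping the two in sync would look something like the entry below
in arch/arm64/kernel/cpufeature.c. The shift comes from the v8.4 spec (FHM is
ID_AA64ISAR0_EL1 bits [51:48]), but the identifiers here are only placeholders
for whatever names we settle on, not the posted patch:

#define ID_AA64ISAR0_FHM_SHIFT	48	/* ID_AA64ISAR0_EL1.FHM, bits [51:48] */

static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
	/* ... existing entries ... */
	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT,
		  FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDFHM),
	{},
};

A matching string ("asimdfhm" or "asimdfml") would also need to go into the
hwcap_str[] table in cpuinfo.c so the feature shows up in /proc/cpuinfo.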

However, my version of the patch names the HWCAP bit "asimdfml", following the compiler name for the feature option "fp16fml", which
is not perfect either. I think FHM is the safe option here.
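
Whichever name wins, userspace would then test for the feature via the
auxiliary vector in the usual way. A minimal check -- with the HWCAP name and
bit value as placeholders until the real one is allocated -- would be:

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_ASIMDFHM
#define HWCAP_ASIMDFHM	(1UL << 23)	/* placeholder until a bit is allocated */
#endif

int main(void)
{
	unsigned long hwcaps = getauxval(AT_HWCAP);

	printf("FMLAL/FMLSL %ssupported\n",
	       (hwcaps & HWCAP_ASIMDFHM) ? "" : "not ");
	return 0;
}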


Maybe something like "widening half-precision floating-point multiply
accumulate" is acceptable wording consistent with the existing
architecture, but I just made that up, so it's not official ;)

How about something like "performing a multiplication of each FP16
element of one vector by the corresponding FP16 element of a second
vector, and adding or subtracting this, without an intermediate
rounding, to the corresponding FP32 element in a third vector"?

We could have that, I guess.


I agree, and that matches the feature description.