Re: [PATCH v3 1/4] Input: ts-overlay - Add touchscreen overlay object handling
From: Javier Carrasco
Date: Thu Jun 29 2023 - 03:54:41 EST
Hi Jeff,
On 29.06.23 05:29, Jeff LaBundy wrote:
> Hi Javier,
>
> On Wed, Jun 28, 2023 at 08:44:51AM +0200, Javier Carrasco wrote:
>
> [...]
>
>>>>>> +static const char *const ts_overlay_names[] = {
>>>>>> + [TOUCHSCREEN] = "overlay-touchscreen",
>>>>>
>>>>> I'm a little confused why we need new code for this particular function; it's
>>>>> what touchscreen-min-x/y and touchscreen-size-x/y were meant to define. Why
>>>>> can't we keep using those?
>>>>>
>>>> According to the bindings, touchscreen-min-x/y define the minimum
>>>> reported values, but the overlay-touchscreen is actually setting a new
>>>> origin. Therefore I might be misusing those properties. On the other
>>>> hand touchscreen-size-x/y would make more sense, but I also considered
>>>> the case where someone would like to describe the real size of the
>>>> touchscreen outside of the overlay node as well as the clipped size
>>>> inside the node. In that case using the same property twice would be
>>>> confusing.
>>>> So in the end I thought that the origin/size properties are more precise
>>>> and applicable for all objects and not only the overlay touchscreen.
>>>> These properties are needed for the buttons anyway, and in the future
>>>> more overlays would use the same properties.
>>>
>>> Ah, I understand now. touchscreen-min-x/y define the lower limits of the axes
>>> reported to input but they don't move the origin. I'm aligned with the reason
>>> to introduce this function.
>>>
>>> This does beg the question as to whether we need two separate types of children
>>> and related parsing code. Can we not simply have one overlay definition, and
>>> make the decision as to whether we are dealing with a border or virtual button
>>> based on whether 'linux,code' is present?
>>>
>> A single overlay definition would be possible, but in case more objects
>> are added in the future, looking for single properties and then deciding
>> what object it is might get messy pretty fast. You could end up needing
>> a decision tree and the definition in the DT would get more complex.
>>
>> Now the decision tree is straightforward (linux,code -> button), but
>> that might not always be the case. In the current implementation there
>> are well-defined objects and adding a new one will never affect the
>> parsing of the rest.
>> Therefore I would like to keep it more readable and easily extendable.
>
> As a potential customer of this feature, I'm ultimately looking to describe
> the hardware as succinctly as possible. Currently we have two overlay types,
> a border and button(s). The former will never have linux,code defined, while
> the latter will. From my naive perspective, it seems redundant to define the
> overlay types differently when their properties imply the difference already.
>
> Ultimately it seems we are simply dealing with generic "segments" scattered
> throughout a larger touch surface. These segments start to look something
> like the following:
>
> struct touch_segment {
> 	unsigned int x_origin;
> 	unsigned int y_origin;
> 	unsigned int x_size;
> 	unsigned int y_size;
> 	unsigned int code;
> };
>
> You then have one exported function akin to touchscreen_parse_properties() that
> simply walks the parent device looking for children named "touch-segment-0",
> "touch-segment-1", etc. and parses the five properties, with the fifth (keycode)
> being optional.
>
> And then, you have one last exported function akin to touchscreen_report_pos()
> that processes the touch coordinates. If the coordinates are in a given segment
> and segment->code == KEY_RESERVED (i.e. linux,code was never given), then this
> function simply passes the shifted coordinates to touchscreen_report_pos().
>
> If however segment->code != KEY_RESERVED, it calls input_report_key() based on
> whether the coordinates are within the segment. If this simplified solution
> shrinks the code enough, it may even make sense to keep it in touchscreen.c
> which this new feature is so tightly coupled to anyway.
>
> I'm sure the devil is in the details however, and I understand the value in
> future-proofing. Can you help me understand a potential future case where this
> simplified view would break, and the existing definitions would be better?
>
> Kind regards,
> Jeff LaBundy
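If I understand your proposal correctly, the reporting side would boil
down to something like the following (untested sketch reusing your
struct touch_segment from above; function and parameter names are just
placeholders):

#include <linux/input.h>
#include <linux/input/touchscreen.h>

static bool touch_in_segment(const struct touch_segment *seg,
			     unsigned int x, unsigned int y)
{
	return x >= seg->x_origin && x < seg->x_origin + seg->x_size &&
	       y >= seg->y_origin && y < seg->y_origin + seg->y_size;
}

/* drivers would call this instead of touchscreen_report_pos() */
static void touchscreen_report_segment(struct input_dev *input,
				       struct touchscreen_properties *prop,
				       const struct touch_segment *seg,
				       unsigned int x, unsigned int y,
				       bool multitouch)
{
	if (!touch_in_segment(seg, x, y))
		return;

	if (seg->code != KEY_RESERVED) {
		/* linux,code was given: the segment acts as a button */
		input_report_key(input, seg->code, true);
		return;
	}

	/* no linux,code: shift the origin and use the generic helper */
	touchscreen_report_pos(input, prop, x - seg->x_origin,
			       y - seg->y_origin, multitouch);
}

(Key release handling left out for brevity.)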
I agree that your approach would reduce the code, and that moving this
feature to touchscreen.c would then be reasonable. So if in the end that
is the desired solution, I will go for it. But there are some points
where I think the bit of extra code would be worth it.
From a DT perspective, I can imagine some scenarios where a bunch of
segments scattered around would be messy. An example would be a keypad
with, say, N=9 buttons. It could be described easily with a buttons node
and the keys inside, and understanding what the node describes would be
straightforward as well, even more so for a much bigger N.
You could argue that the buttons node could contain segments instead of
buttons, but in the case where a cropped touchscreen is also described,
you would end up with one segment outside the buttons node and the rest
inside it. That would reduce the parsing savings. Some labeling would
help in that case, but it would not be as clear as the current
implementation.
There is another point that I will only touch upon because I have no
experience in the matter. I have seen that some keys use the
'linux,input-type' property to define themselves as keys, switches, etc.
If that property, or any other I am not aware of, is necessary for some
implementations, a button object will cover them better than a generic
segment where half of the properties would be meaningless in some
scenarios. Buttons/keys are so ubiquitous that a dedicated object for
them does not look that bad imho.
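For example, a dedicated button object could pick up such optional
properties roughly the way gpio-keys does for its children (just a
sketch with made-up names, not code from this series):

#include <linux/input.h>
#include <linux/property.h>

static int overlay_parse_button(struct input_dev *input,
				struct fwnode_handle *child)
{
	u32 input_type = EV_KEY;
	u32 code;

	/* mandatory for a button */
	if (fwnode_property_read_u32(child, "linux,code", &code))
		return -EINVAL;

	/* optional, defaults to EV_KEY when absent */
	fwnode_property_read_u32(child, "linux,input-type", &input_type);

	input_set_capability(input, input_type, code);

	return 0;
}

A generic segment parser would have to carry these optional properties
for every segment type, even where they make no sense.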
But as I said, I do not want to make a strong statement here because I
have seen that you maintain several bindings where these properties are
present, so I am not the right person to explain that to you... or to
anyone else out there, actually :)
Talking about the code itself, having a dedicated structure for buttons
is handy because you can keep track of the button state (e.g. pressed),
and in the end it is just a child of the base shape that is used for the
overlay touchscreen. The same applies to any function that handles
buttons: it just wraps around the shape functions and adds the
button-specific handling. So, parsing aside, merging the two object
types would not save much code, and the current split is much more
readable and comprehensible.
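Roughly what I mean, as a simplified sketch (not literally the code in
the patch):

#include <linux/input.h>

struct overlay_shape {
	u32 x_origin;
	u32 y_origin;
	u32 x_size;
	u32 y_size;
};

struct overlay_button {
	struct overlay_shape shape;	/* same base object as the touchscreen */
	u32 key;
	bool pressed;			/* button-specific state */
};

static bool overlay_point_in_shape(const struct overlay_shape *shape,
				   unsigned int x, unsigned int y)
{
	return x >= shape->x_origin && x < shape->x_origin + shape->x_size &&
	       y >= shape->y_origin && y < shape->y_origin + shape->y_size;
}

static bool overlay_button_event(struct input_dev *input,
				 struct overlay_button *button,
				 unsigned int x, unsigned int y)
{
	bool contacted = overlay_point_in_shape(&button->shape, x, y);

	/* wrap the shape helper, only the state handling is button-specific */
	if (contacted != button->pressed) {
		input_report_key(input, button->key, contacted);
		button->pressed = contacted;
	}

	return contacted;
}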
Thank you for your efforts to improve these patches and for the
constructive discussion.
Best regards,
Javier Carrasco