Message ID: 20230425115049.870003-1-javier.carrasco@wolfvision.net
Series: Input: support virtual objects on touchscreens
Hi Pavel,

On 4/27/23 13:04, Pavel Machek wrote:
> Hi!
>
>> Some touchscreens are shipped with a physical layer on top of them where
>> a number of buttons and a resized touchscreen surface might be
>> available.
>
> Yes, it is quite common, for example Motorola Droid 4 has 4 virtual
> buttons below touchscreen.

Are those buttons configurable in some way? Or do they have a fixed
purpose? How does Android handle those buttons, BTW?

> One question is if this should be handled inside the kernel. It will
> make it compatible with existing software, but it will also reduce
> flexibility.

I would say that it should be described in the device tree if the purpose
is fixed. For example, if there is no display behind the touchscreen at a
certain point but a printed sheet (e.g., with a home or return symbol),
then it is clear that this button is not going to change. In such a case
I doubt that flexibility is required.

Best regards,
Michael

> Best regards,
> Pavel
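For illustration of the fixed-purpose case discussed here, a minimal sketch
of how a driver could pull such overlay buttons out of the device tree. The
node layout, property names (x-origin, y-origin, x-size, y-size, linux,code)
and helper names are assumptions for the sketch, not the binding actually
proposed in this series:

/*
 * Assumed node layout (illustrative only):
 *
 *	button-home {
 *		x-origin = <0>;
 *		y-origin = <320>;
 *		x-size = <240>;
 *		y-size = <40>;
 *		linux,code = <KEY_HOME>;
 *	};
 */
#include <linux/input.h>
#include <linux/property.h>

struct ts_virt_button {
	u32 x, y, width, height;	/* button area in touch coordinates */
	u32 keycode;			/* key to emit, e.g. KEY_HOME */
};

static int ts_parse_virt_button(struct fwnode_handle *node,
				struct ts_virt_button *btn,
				struct input_dev *input)
{
	int error;

	error = fwnode_property_read_u32(node, "x-origin", &btn->x);
	if (!error)
		error = fwnode_property_read_u32(node, "y-origin", &btn->y);
	if (!error)
		error = fwnode_property_read_u32(node, "x-size", &btn->width);
	if (!error)
		error = fwnode_property_read_u32(node, "y-size", &btn->height);
	if (!error)
		error = fwnode_property_read_u32(node, "linux,code",
						 &btn->keycode);
	if (error)
		return error;

	/* Advertise the key so it shows up as a device capability */
	input_set_capability(input, EV_KEY, btn->keycode);

	return 0;
}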
Hi,

On 25.04.23 18:02, Jeff LaBundy wrote:
> Hi Thomas,
>
> On Tue, Apr 25, 2023 at 05:29:39PM +0200, Thomas Weißschuh wrote:
>> Hi Javier,
>>
>> On 2023-04-25 13:50:45+0200, Javier Carrasco wrote:
>>> Some touchscreens are shipped with a physical layer on top of them where
>>> a number of buttons and a resized touchscreen surface might be available.
>>>
>>> In order to generate proper key events from overlay buttons and adjust the
>>> touch events to a clipped surface, these patches offer a documented,
>>> device-tree-based solution by means of helper functions.
>>> An implementation for a specific touchscreen driver is also included.
>>>
>>> The functions in ts-virtobj provide a simple workflow to acquire
>>> physical objects from the device tree, map them into the device driver
>>> structures as virtual objects and generate events according to
>>> the object descriptions.
>>>
>>> This solution has been tested with a JT240MHQS-E3 display, which uses
>>> the st1624 as a touchscreen and provides two overlay buttons and a frame
>>> that clips its effective surface.
>>
>> There are quite a few notebooks from Asus that feature a printed
>> numpad on their touchpad [0]. The mapping from the touch events to the
>> numpad events needs to happen in software.
>
> That example seems like a somewhat fringe use case in my opinion; I think
> the gap filled by this RFC is the case where a touchscreen has a printed
> overlay with a key that represents a fixed function.

Exactly, this RFC addresses precisely such printed overlays.

>
> One problem I do see here is something like libinput or multitouch taking
> hold of the input device, and swallowing the key presses because it sees
> the device as a touchscreen and is not interested in these keys.

Unfortunately I do not know libinput or multitouch and I might be getting
you wrong, but I guess the same would apply to any event consumer that
treats touchscreens as touch event producers and nothing else.

Should they not check the supported events of the device instead of making
such assumptions? This RFC adds key events defined in the device tree, so
they are available and published as device capabilities. That is, for
example, what evtest does to report the supported events, which are then
notified accordingly. Is that not the right way to do it?

Thanks a lot for your feedback!

>
> Therefore, my first impression is that the virtual keypad may be better
> served by registering its own input device.
>
> Great work by the way, Javier!
>
>>
>> Do you think your solution is general enough to also support this
>> use case?
>>
>> The differences I see are
>> * not device-tree based
>> * touchpads instead of touchscreens
>>
>>> [..]
>>
>> [0] https://unix.stackexchange.com/q/494400
>
> Kind regards,
> Jeff LaBundy
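As a rough illustration of the workflow the cover letter describes, a hedged
sketch of how a contact could be matched against an overlay button and
reported either as a key or as a regular touch event. The function names are
invented and do not reflect the actual ts-virtobj API; the ts_virt_button
structure is reused from the earlier sketch:

/* Check whether a contact falls inside a virtual button rectangle */
static bool ts_contact_on_button(const struct ts_virt_button *btn,
				 u32 x, u32 y)
{
	return x >= btn->x && x < btn->x + btn->width &&
	       y >= btn->y && y < btn->y + btn->height;
}

static void ts_report_contact(struct input_dev *input,
			      const struct ts_virt_button *buttons,
			      unsigned int nbuttons,
			      u32 x, u32 y, bool pressed)
{
	unsigned int i;

	for (i = 0; i < nbuttons; i++) {
		if (ts_contact_on_button(&buttons[i], x, y)) {
			/* Contact sits on an overlay button: emit its key */
			input_report_key(input, buttons[i].keycode, pressed);
			input_sync(input);
			return;
		}
	}

	/* Otherwise report a regular touch on the effective surface */
	input_report_abs(input, ABS_X, x);
	input_report_abs(input, ABS_Y, y);
	input_report_key(input, BTN_TOUCH, pressed);
	input_sync(input);
}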
Hi Pavel,

On Thu, Apr 27, 2023 at 03:15:19PM +0200, Pavel Machek wrote:
> Hi!
>
> > >> Some touchscreens are shipped with a physical layer on top of them where
> > >> a number of buttons and a resized touchscreen surface might be
> > >> available.
> > >
> > > Yes, it is quite common, for example Motorola Droid 4 has 4 virtual
> > > buttons below touchscreen.
> >
> > Are those buttons configurable in some way? Or do they have a fixed purpose?
>
> Fixed.
>
> > How does Android handle those buttons, BTW?
>
> No idea.
>
> > > One question is if this should be handled inside the kernel. It will
> > > make it compatible with existing software, but it will also reduce
> > > flexibility.

That's a great question; I think there are arguments for both.

On one hand, we generally want the kernel to be responsible for nothing
more than handing off the raw coordinate and touch information to user
space. Any further translation of that represents policy, which would not
belong here.

On the other hand, the notion of what buttons exist and where is very much
a hardware statement for the use case targeted by this RFC. Ideally, the
kernel and user space would not both need to carry knowledge of the same
piece of hardware. So I think it is OK for the driver to give some help by
doing some of its own interpretation, much like some hardware-accelerated
solutions already do.

While there are obviously exceptions in either case, I don't see any
reason to prohibit having a simple option like this in the kernel,
especially since it doesn't preclude having something in user space for
more advanced cases.

> >
> > I would say that it should be described in the device tree if the purpose is
> > fixed. For example, if there is no display behind the touchscreen at a
> > certain point but a printed sheet (e.g., with a home or return symbol),
> > then it is clear that this button is not going to change. In such a case
> > I doubt that flexibility is required.
>
> I agree it should be in the device tree.
>
> AFAICT hardware can do drags between the buttons, and drag between the
> buttons and touchscreen. Turning it into buttons prevents that.
>
> Plus, real buttons can do simultaneous presses on all of them,
> touchscreens will have problems with that.

I interpreted the RFC and its example to accommodate multitouch support,
so I don't see any problem here unless the vendor built such a module
without a multitouch panel, which would not make sense. Let me know in
case I have misunderstood the concern.

> Best regards,
> Pavel
> --
> People of Russia, stop Putin before his war on Ukraine escalates.

Kind regards,
Jeff LaBundy
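For the clipped-surface side of this discussion, a hedged sketch of how a
driver might restrict reported coordinates to the effective frame described
in the device tree; the frame fields and helper names are assumptions for
illustration only:

struct ts_virt_frame {
	u32 x, y, width, height;	/* effective touch surface */
};

/* Drop contacts that land outside the frame, e.g. on the printed border */
static bool ts_contact_in_frame(const struct ts_virt_frame *frame,
				u32 x, u32 y)
{
	return x >= frame->x && x < frame->x + frame->width &&
	       y >= frame->y && y < frame->y + frame->height;
}

/* Shift coordinates so the clipped surface starts at (0, 0) */
static void ts_adjust_contact(const struct ts_virt_frame *frame,
			      u32 *x, u32 *y)
{
	*x -= frame->x;
	*y -= frame->y;
}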
Hi Jeff,

On 27.04.23 19:23, Jeff LaBundy wrote:
> Hi Javier,
>
> On Thu, Apr 27, 2023 at 05:59:42PM +0200, Javier Carrasco wrote:
>> Hi,
>>
>> On 25.04.23 18:02, Jeff LaBundy wrote:
>>> Hi Thomas,
>>>
>>> On Tue, Apr 25, 2023 at 05:29:39PM +0200, Thomas Weißschuh wrote:
>>>> Hi Javier,
>>>>
>>>> On 2023-04-25 13:50:45+0200, Javier Carrasco wrote:
>>>>> Some touchscreens are shipped with a physical layer on top of them where
>>>>> a number of buttons and a resized touchscreen surface might be available.
>>>>>
>>>>> In order to generate proper key events from overlay buttons and adjust the
>>>>> touch events to a clipped surface, these patches offer a documented,
>>>>> device-tree-based solution by means of helper functions.
>>>>> An implementation for a specific touchscreen driver is also included.
>>>>>
>>>>> The functions in ts-virtobj provide a simple workflow to acquire
>>>>> physical objects from the device tree, map them into the device driver
>>>>> structures as virtual objects and generate events according to
>>>>> the object descriptions.
>>>>>
>>>>> This solution has been tested with a JT240MHQS-E3 display, which uses
>>>>> the st1624 as a touchscreen and provides two overlay buttons and a frame
>>>>> that clips its effective surface.
>>>>
>>>> There are quite a few notebooks from Asus that feature a printed
>>>> numpad on their touchpad [0]. The mapping from the touch events to the
>>>> numpad events needs to happen in software.
>>>
>>> That example seems like a somewhat fringe use case in my opinion; I think
>>> the gap filled by this RFC is the case where a touchscreen has a printed
>>> overlay with a key that represents a fixed function.
>>
>> Exactly, this RFC addresses precisely such printed overlays.
>>>
>>> One problem I do see here is something like libinput or multitouch taking
>>> hold of the input device, and swallowing the key presses because it sees
>>> the device as a touchscreen and is not interested in these keys.
>>
>> Unfortunately I do not know libinput or multitouch and I might be
>> getting you wrong, but I guess the same would apply to any event
>> consumer that treats touchscreens as touch event producers and nothing else.
>>
>> Should they not check the supported events of the device instead of
>> making such assumptions? This RFC adds key events defined in the device
>> tree, so they are available and published as device capabilities. That
>> is, for example, what evtest does to report the supported events, which
>> are then notified accordingly. Is that not the right way to do it?
>
> evtest is just that, a test tool. It's handy for ensuring the device emits
> the appropriate input events in response to hardware inputs, but it is not
> necessarily representative of how the input device may be used in practice.

You are right. I might have been biased by my use case though, where a
touchscreen with key capabilities is exactly that, and there is no reason
to ignore any event if the capabilities are available. Well, props to
evtest for being representative of at least that practical use.

>
> I would encourage you to test this solution with a simple use case such as
> Raspbian, and the virtual keys mapped to easily recognizable functions like
> volume up/down.
>
> Here, you will find that libinput will grab the device and declare it to be
> a touchscreen based on the input events it advertises. However, you will
> not see the volume up/down keys handled.
>
> If you break out the virtual keypad as a separate input device, however, you
> will see libinput additionally recognize it as a keyboard, and the volume
> up/down keys will be handled. It is for this reason that a handful of drivers
> with this kind of mixed functionality (e.g. ad714x) already branch out
> multiple input devices for each function.
>
> As a matter of principle, I find it to be most flexible for logically separate
> functions to be represented as logically separate input devices, even if those
> input devices all stem from the same piece of hardware. Not only does it allow
> you to attach different handlers to each device (i.e. file descriptor), but it
> also allows user space to inhibit one device but not the other, etc.

I had complex devices in mind where many capabilities are provided (like a
mouse with several buttons, wheels, and who knows what else, or a bunch of
other complex pieces of hardware) but are still registered as a single
input device. That makes the whole functionality accessible within a
single object that translates 1:1 to the actual hardware, but on the other
hand it lacks the flexibility you mention.

Nevertheless, in the end this RFC applies to touchscreens, and if the
existing tools do not expect them to have key events, the keys must be
advertised in a different way. As I want any tool to identify the
touchscreen and the keys properly, I will go for the multi-device solution.

> Maybe the right approach, which your RFC already seems to support, is to simply
> let the driver decide whether to pass the touchscreen input_dev or a different
> input_dev. The driver would be responsible for allocating and registering the
> keypad; your functions simply set the capabilities for, and report events from,
> whichever input_dev is passed to them. This is something to consider for your
> st1232 example.

I would let the drivers register the devices that fit best in each case
according to the objects defined in the device tree and the hardware
configuration. Of course I could include the device registration too, but
that would probably reduce flexibility with no real gain.

This RFC will not work out of the box with several input devices from a
single driver, because it sets the key capabilities right away, assuming
there is only one input device. But splitting that part is rather trivial,
and the rest does not need to change much as it works with generic input
devices. The st1232 example will need some bigger changes though, so that
part will change a bit in the next version.

>
>>
>> Thanks a lot for your feedback!
>>>
>>> Therefore, my first impression is that the virtual keypad may be better
>>> served by registering its own input device.
>>>
>>> Great work by the way, Javier!
>>>
>>>>
>>>> Do you think your solution is general enough to also support this
>>>> use case?
>>>>
>>>> The differences I see are
>>>> * not device-tree based
>>>> * touchpads instead of touchscreens
>>>>
>>>>> [..]
>>>>
>>>> [0] https://unix.stackexchange.com/q/494400
>>>
>>> Kind regards,
>>> Jeff LaBundy
>
> Kind regards,
> Jeff LaBundy

Thanks again for your feedback, I will keep your comments in mind for the
next version.

Best regards,
Javier Carrasco
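A hedged sketch of the two-device approach discussed above, loosely modeled
on the pattern drivers such as ad714x follow; the device names, axis ranges,
and keycodes below are placeholders, not values from this series:

#include <linux/input.h>

static int ts_register_devices(struct device *dev)
{
	struct input_dev *ts, *keypad;
	int error;

	/* Touchscreen device: carries only touch capabilities */
	ts = devm_input_allocate_device(dev);
	if (!ts)
		return -ENOMEM;

	ts->name = "example-ts";
	input_set_abs_params(ts, ABS_X, 0, 239, 0, 0);
	input_set_abs_params(ts, ABS_Y, 0, 319, 0, 0);
	input_set_capability(ts, EV_KEY, BTN_TOUCH);

	/* Separate keypad device for the overlay buttons */
	keypad = devm_input_allocate_device(dev);
	if (!keypad)
		return -ENOMEM;

	keypad->name = "example-ts-keys";
	input_set_capability(keypad, EV_KEY, KEY_VOLUMEUP);
	input_set_capability(keypad, EV_KEY, KEY_VOLUMEDOWN);

	error = input_register_device(ts);
	if (error)
		return error;

	/* A separate device lets libinput treat the keys as a keyboard */
	return input_register_device(keypad);
}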
On Tue, Apr 25, 2023 at 1:51 PM Javier Carrasco
<javier.carrasco@wolfvision.net> wrote:

> Some touchscreens are shipped with a physical layer on top of them where
> a number of buttons and a resized touchscreen surface might be available.

The APQ8060 DragonBoard even shipped with two different stickers to be put
over the touchscreen: one for Android and another one for Windows Mobile.
True story!

Yours,
Linus Walleij