I have a GLTextureView inside a ZoomableViewGroup: essentially a canvas that can be scaled with pinch-zoom gestures and panned with two fingers, while single-finger touches are used for finger-painting on the canvas. On a layer above that, there are fragments holding predefined ImageViews, which the user can add and remove at will. These fragments can also be scaled and panned, but only after a long-click puts them into an editable mode.

My problem is that it is entirely too easy for users to accidentally long-click one of these fragments when they are actually trying to scale the canvas underneath. For example, if a user puts two fingers on the canvas and one of them happens to land on a fragment, the system delivers two separate MotionEvents, each with only one pointer: one goes to the canvas (which draws some finger-painting instead of pinch-zooming) and the other goes to the fragment (which thinks it is being long-clicked and switches into editable mode).
Because the two fingers rest on separate views, the two-pointer gesture is being split into two single-pointer gestures! I verified this by querying getPointerCount() on the fragment views while performing the gesture. I would like to put logic into my fragment's onLongClick method that essentially says, "If a second pointer is down anywhere on the screen, do nothing." But I'm unsure how to detect that second pointer from within the fragment. Is this possible?
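To make the idea concrete, here is a rough sketch of the kind of check I'm imagining, written without any Android classes (the class and method names are my own invention, not framework APIs). The intent is that some top-level view would bump the counter on every ACTION_DOWN/ACTION_POINTER_DOWN and decrement it on every ACTION_UP/ACTION_POINTER_UP/ACTION_CANCEL, and the fragment would consult it before honoring a long-click:

```java
// Hypothetical sketch: a screen-wide pointer counter that a root view
// would update from its touch dispatch, and that a fragment's
// onLongClick handler would query. None of these names are Android APIs.
public class ScreenPointerTracker {
    private static int activePointers = 0;

    // Would be called on ACTION_DOWN / ACTION_POINTER_DOWN.
    public static synchronized void onPointerDown() {
        activePointers++;
    }

    // Would be called on ACTION_UP / ACTION_POINTER_UP / ACTION_CANCEL.
    public static synchronized void onPointerUp() {
        if (activePointers > 0) activePointers--;
    }

    // The fragment would ignore the long-click whenever more than one
    // finger is down anywhere on the screen.
    public static synchronized boolean shouldHandleLongClick() {
        return activePointers <= 1;
    }
}
```

The open question is whether the fragment can actually see (or be told about) those down/up events for pointers that land outside its own bounds, since touch events are normally dispatched only to the view under the finger.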
tl;dr: Given touch or long-click events on a view within a fragment, I need to detect the number of pointers currently down: not just the pointers on top of the view itself, but all pointers touching the screen anywhere. Is this possible?