
TouchExtraPadding works fine but there's no way to distinguish between mouse and touch input #2334

Open
ifarbod opened this issue Feb 6, 2019 · 8 comments


ifarbod commented Feb 6, 2019

Version/Branch of Dear ImGui:

  • Version: 1.68 WIP
  • Branch: master

Back-end/Renderer/Compiler/OS

  • Back-ends: imgui_impl_sdl2.cpp + imgui_impl_opengl3.cpp
  • Compiler: Clang 5.0 from Android NDK and VS2017 (15.9.x) on Windows
  • Operating System: Windows 10.0.17763, Android 9

My Issue/Question:

So I've started running the official examples on mobile and hit a small issue. As the title says, it's related solely to this field in the global style struct: I've set its value and it works as intended. However, I've noticed that touch input is simply passed as mouse input in all of the official backends (even third-party ones like SFML, openFrameworks, etc.), and the API itself has no way of passing touch events separately. That's understandable. My actual issue, on Android specifically, is that in addition to the typical imprecise finger touches, there are more precise input sources like styli (MotionEvent.TOOL_TYPE_STYLUS) and actual mice, unlike iOS (I might be wrong about that one). So I'd like to apply TouchExtraPadding in my global style but exclude such cases.

I've had a few ideas to work around this limitation; none of them is ideal:

  1. Duplicating the mouse fields in ImGuiIO as Touch/TouchDown or something similar, until better touch support is implemented. This would likely be an API-breaking change(?)
  2. A flag to distinguish touch events from actual mouse move events, plus an additional check, probably here.
  3. A low-effort quick fix for the time being: setting TouchExtraPadding on touch events and resetting it to 0 on stylus/mouse events. I'm unsure how well that would work, and it's counter-intuitive IMO.
  4. A tiny workaround, something like this:
#if defined(__ANDROID__)
ImGui::GetStyle().TouchExtraPadding = ImVec2(4.0F, 4.0F);
#endif

And this goes somewhat against Dear ImGui's platform-independence philosophy IMO.
Again, these aren't the best ways to do this.

I know that Dear ImGui is designed to be used primarily with a mouse and keyboard. However, since gamepad support was added not so long ago, I think this would also make for a great addition, and it would make the library much nicer to use on a phone.

This issue is mostly intended to start a discussion. I'm not sure if it has been mentioned here before; I looked through some of the issues but couldn't find anything closely related (#1237 #2074 #1470 #443). Sorry if this comes off as vague. I've filled out the template just in case, and I'm willing to send PRs if needed :)

Screenshots/Video

2019-02-07_12-33-16-am
Not especially relevant (taken on Windows), but it demonstrates that the padding should apply only to touch input, not mouse input.

Standalone, minimal, complete and verifiable example:

// In a typical Initialize() function:
ImGui::CreateContext();
ImGui::GetStyle().TouchExtraPadding = ImVec2(4.0F, 4.0F);

// ...

ImGui::Begin("Example Bug");
// This isn't just limited to buttons, it affects all other widgets as well
if (ImGui::Button("Testing button"))
{
    // ...
}
ImGui::End();
@ocornut ocornut added the inputs label Feb 7, 2019

ocornut commented Feb 7, 2019

Hello @ifarbod, and thanks for the effort you put into this post.

The problem is that I haven't explored touch controls properly, and we don't even have many examples associated with touch devices.

My short-term suggestion would be to implement your idea in (3).
This would essentially be a one-liner in your own backend (overwrite TouchExtraPadding on click/touch events). I agree this is a little fugly, but it will do the job perfectly in the meantime.

Down the line, I think we will implement (2) in core imgui. I already have a plan to rework the ImGuiIO API toward a more event-driven approach, something like https://gist.github.com/ocornut/8417344f3506790304742b07887adf9f. It's actually not much work, and it has the advantage that old backends will still work, which removes a lot of the friction toward implementing something like this. I can see it happening in the next few months. So we could perfectly well add explicitly named Mouse and Touch functions. Both would probably lead to setting io.MousePos, with extra info/flags on the side that alter behavior.
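For illustration only, a minimal self-contained sketch of what such an event queue could look like (all names are invented for this sketch, not the actual gist API): backends push input events as they arrive, and the core drains them in order once per frame, so inputs shorter than one frame are never lost and the source of each event is known.

```cpp
#include <cassert>
#include <vector>

// Hypothetical event-driven input queue (illustrative names only).
enum class InputType { MousePos, TouchPos };

struct InputEvent
{
    InputType type;
    float x, y;
};

struct InputQueue
{
    std::vector<InputEvent> events;

    void AddMousePosEvent(float x, float y) { events.push_back({InputType::MousePos, x, y}); }
    void AddTouchPosEvent(float x, float y) { events.push_back({InputType::TouchPos, x, y}); }

    // Drain the queue: the latest position event (of either source) wins the
    // frame, and its type tells the caller whether it came from a touch.
    InputEvent ResolvePointer()
    {
        InputEvent last = { InputType::MousePos, 0.0f, 0.0f };
        for (const InputEvent& e : events)
            last = e;
        events.clear();
        return last;
    }
};
```

A real implementation would carry more event kinds (buttons, wheel, characters) and trickle them over multiple frames; the point is only that tagging events by source makes mouse/touch discrimination possible without breaking old backends.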

(4) isn't really valid, not because of the platform #ifdef (that would be acceptable, and we actually have some in there, e.g. the ConfigMacOSXBehaviors default value is set based on an #ifdef __APPLE__ block), but because, as you pointed out, the issue in your case is that you can have multiple input sources simultaneously. While adding the line you suggested would provide a slightly better default, it also feels like doing things half-way.

The issue is mostly that, while we are at it, we should handle touch better, e.g. handle scrolling with dual-finger gestures, and that will add more design work. And as you mention stylus events, how would they fit in there? Is a stylus emulating a mouse the ideal?

If you are able to explore better support for touch (incl. scrolling with two fingers) and provide ImGuiIO API suggestions and/or a possible implementation, that would help move this topic ahead faster.


ifarbod commented Feb 7, 2019

Hi @ocornut and thanks for your response!

The problem is that I haven't explored touch controls properly, and we don't even have many examples associated with touch devices.

Same, until this was brought to my attention. I haven't looked thoroughly at the Allegro5/Marmalade examples yet; do they handle touch events better than the SDL2 backend? I'm unfamiliar with them, and at a quick peek, Allegro5 doesn't have anything related to touchscreens, while Marmalade handles them in ImGui_Marmalade_PointerButtonEventCallback; I'm not sure how it's handled under the hood.

My short-term suggestion would be to implement your idea in (3)

Right, I'm already doing this as we speak; it seems like a reasonable workaround for now.

Down the line, I think we will implement (2) in core imgui. I already have a plan to rework the ImGuiIO API toward a more event-driven approach, something like [snip]

Nice! This looks much more extensible than the current ImGuiIO API. I'm looking forward to it!
A little off-topic, though: would key events transition to the same style? I'm asking because AddInputCharacter() seemed like the first iteration of this new API style.

(4) isn't really valid, not because of the platform #ifdef (that would be acceptable, and we actually have some in there, e.g. the ConfigMacOSXBehaviors default value is set based on an #ifdef __APPLE__ block), but because, as you pointed out, the issue in your case is that you can have multiple input sources simultaneously. While adding the line you suggested would provide a slightly better default, it also feels like doing things half-way.

As far as the examples are concerned, none of them sets TouchExtraPadding, and this problem overall isn't limited to one platform (e.g. Windows has WM_TOUCH messages). While exploring other people's uses of Dear ImGui on mobile platforms, a common pattern was setting the UI elements to double their default sizes on mobile and leaving them untouched on desktop platforms. So I think it's safe to pass on this semi-solution.

The issue is mostly that while we are at it, we should handle touch better, namely e.g. handle scrolling with dual-finger gestures and that will add more design work.

Agreed. Is proper touch support more of a long-term goal for Dear ImGui?
I'll try implementing some of the most common touch gestures in a fork (preferably public here on GitHub) and will report back on the potential issues I face in the upcoming weeks; no strong promises, though.

And as you mention stylus events, how would they fit in there? Is a stylus emulating a mouse the ideal?

In Android's case, styli are handled like normal touch events, with a few exceptions.
When you point a stylus at the screen, it generates hover events similar to those of an actual mouse: onGenericMotionEvent is called with these attributes:

getToolType -> TOOL_TYPE_STYLUS (or TOOL_TYPE_ERASER)
getSources -> SOURCE_STYLUS or SOURCE_BLUETOOTH_STYLUS

getX and getY return screen coordinates, just like absolute mouse events.

Once the stylus makes contact with the screen, it's reported through the usual touch events (onTouch), with the exception of getSources and getSize:

getSources -> SOURCE_STYLUS bitwise-OR'd with SOURCE_TOUCHSCREEN - probably as a compatibility measure for apps that don't need to handle it differently.
getSize -> Reports a noticeably lower value than finger touches (it's on the same precision level as a mouse pointer).
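To make the distinction concrete, here is a self-contained sketch of classifying a pointer from these values (the numeric constants mirror the Android SDK's MotionEvent values; the enum and function names are invented for this illustration):

```cpp
#include <cassert>

// Android MotionEvent tool-type constants (values per the Android SDK).
enum AndroidToolType
{
    TOOL_TYPE_UNKNOWN = 0,
    TOOL_TYPE_FINGER  = 1,
    TOOL_TYPE_STYLUS  = 2,
    TOOL_TYPE_MOUSE   = 3,
    TOOL_TYPE_ERASER  = 4,
};

enum class PointerClass { Touch, Pen, Mouse };

// Classify a pointer: styli and erasers are "precise" like a mouse, so a
// backend could skip TouchExtraPadding for them; fingers would get it.
PointerClass ClassifyToolType(int tool_type)
{
    switch (tool_type)
    {
    case TOOL_TYPE_STYLUS:
    case TOOL_TYPE_ERASER: return PointerClass::Pen;
    case TOOL_TYPE_MOUSE:  return PointerClass::Mouse;
    default:               return PointerClass::Touch; // finger or unknown
    }
}
```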

A similarity (or difference) worth noting is the buttons found on stylus pens. Most styli I've seen have one button (e.g. the S-Pen on the Note 8/9); some have two.
For the primary button (BUTTON_STYLUS_PRIMARY), Android's official documentation says it should typically invoke a context menu. This button was reported as BUTTON_SECONDARY in older Android versions, which is identical to a right mouse click.

For the secondary button, similarly, the Android docs say it should invoke a secondary action. It was reported as BUTTON_TERTIARY in older Android versions, which is equal to a middle mouse click. Popular apps like Google Chrome handle it the same way, e.g. pressing it on a tab closes the tab. Drawing apps may make more specialized use of them.
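The button mapping described above could be sketched like this (the constants mirror the Android SDK's MotionEvent button masks; the function name and the 0=left/1=right/2=middle index convention, which matches Dear ImGui's io.MouseDown array, are assumptions of this sketch):

```cpp
#include <cassert>

// Android MotionEvent stylus button masks (values per the Android SDK).
enum AndroidButton
{
    BUTTON_STYLUS_PRIMARY   = 0x20,
    BUTTON_STYLUS_SECONDARY = 0x40,
};

// Map a stylus button state to a mouse button index (0=left, 1=right,
// 2=middle), following the legacy behavior described above: the primary
// stylus button acted like a right click, the secondary like a middle click.
int StylusButtonToMouseButton(int button_mask)
{
    if (button_mask & BUTTON_STYLUS_PRIMARY)   return 1; // right click
    if (button_mask & BUTTON_STYLUS_SECONDARY) return 2; // middle click
    return 0; // plain tip contact -> left click
}
```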

Stylus pens are also capable of reporting other features such as pressure, tilt, orientation, and distance. But I think that's out of scope for Dear ImGui.

One thing worth mentioning here: there were old phones capable of reporting finger hover events much like a stylus, namely the Galaxy S4 from 2013, but newer Samsung phones dropped this feature.

SDL2 itself doesn't support stylus events separately; I've modified their Android shim and the event-handler logic to accommodate this. I'm also thinking of writing a Dear ImGui backend with something much more lightweight than SDL2 for mobile phones, possibly using GLFM, since most of the required input APIs are exposed to the Android NDK, or extracting parts of my own game engine for this.

Most apps and games get away with handling stylus input the same as touch events. SDL also has a flag named SeparateMouseAndTouch that controls whether touch events are reported as mouse events, disregarding multi-touch events in the process.

Since Dear ImGui is targeted more toward content creators and power users, I think this would benefit drawing apps and games alike, as most palm-rejection methods on Android use TOOL_TYPE_STYLUS to distinguish between input types.

If you are able to explore better support for touch (incl. scrolling with two fingers) and provide ImGuiIO API suggestions and/or a possible implementation, that would help move this topic ahead faster.

Sure thing, I'll see what I can do. Can the issue remain open for the time being? :)


BrutPitt commented Feb 9, 2019

Hi,
I'd also be interested in Dear ImGui with touchscreens.

In case it's of interest: I've recently implemented a WebGL version (via WebAssembly/Emscripten) of my project, usable on tablets and smartphones, that uses the touchscreen and ImGui (of course).
I use GLFW to get mouse events (also under Emscripten), and native Emscripten callbacks (to get touch events) only for the WebGL release (GLFW has no support for touch).

Actually I used just a workaround (like the one described by @ifarbod) to "fool" ImGui:
I created a new separate function, without modifying ImGui code, that replicates the functionality of ImGui_ImplGlfw_UpdateMousePosAndButtons from imgui_impl_glfw.cpp, but for touchscreen events instead of mouse ones:

void emsMDeviceClass::imGuiUpdateTouch()
{
    ImGuiIO& io = ImGui::GetIO();
    const ImVec2 mouse_pos_backup = io.MousePos;
    io.MousePos = ImVec2(-FLT_MAX, -FLT_MAX);
    io.MouseHoveredViewport = 0;

    // Update buttons
    for (int i = 0; i < IM_ARRAYSIZE(io.MouseDown); i++)
    {
        // If a touch event came, always pass it as "mouse held this frame", so we don't miss click-release events that are shorter than 1 frame.
        io.MouseDown[i] = imguiJustTouched[i];    // Touch state instead of mouse button
    }

    ImGuiPlatformIO& platform_io = ImGui::GetPlatformIO();
    for (int n = 0; n < platform_io.Viewports.Size; n++)
    {
        ImGuiViewport* viewport = platform_io.Viewports[n];
        GLFWwindow* window = (GLFWwindow*)viewport->PlatformHandle;
        IM_ASSERT(window != NULL);
        IM_ASSERT(platform_io.Viewports.Size == 1);

        // if (focused) -> // Removed also "focus test", because I'm in EMSCRIPTEN
        {
            // double mouse_x, mouse_y;
            // glfwGetCursorPos(window, &mouse_x, &mouse_y);
            // instead to call it, I have saved touch coords from last event
            io.MousePos = ImVec2((float)touchX + viewport->Pos.x, (float)touchY + viewport->Pos.y);

            for (int i = 0; i < IM_ARRAYSIZE(io.MouseDown); i++) {
                io.MouseDown[i] |= imguiJustTouched[i];     // Touch state instead of mouse button
            }
        }
    }
}

It simply replaces the mouse coordinates with the touch ones (touchX and touchY) and the mouse buttons with taps (imguiJustTouched[i]). I call it (if touch is enabled) after ImGui_ImplOpenGL3_NewFrame() and ImGui_ImplGlfw_NewFrame() (to override the mouse coordinates that ImGui_ImplGlfw_NewFrame() just updated via ImGui_ImplGlfw_UpdateMousePosAndButtons), and just before ImGui::NewFrame(), like this:

    ImGui_ImplOpenGL3_NewFrame();
    ImGui_ImplGlfw_NewFrame();
    if(theApp->isTabletMode()) theApp->getEmsDevice().imGuiUpdateTouch();
    ImGui::NewFrame();

The WebGL version is online here: wglChAoS.P (if you want to test it).
This "touch" release has only been online for 3-4 days, and I haven't yet committed the source changes to my project... (I will do it as soon as possible: glChAoS.P, on my profile.)
I just updated my project:
The emsMDeviceClass class manages the touch events; it is declared in emsTouch.h and its members are in emsTouch.cpp.
(They are excluded from VS and/or desktop builds: you need the Emscripten CMake build to compile them... read the main project page for instructions.)
The *NewFrame() functions, instead, are called here:
https://github.com/BrutPitt/glChAoS.P/blob/9a675f7797fbcbac36c6896dfc5611b41ece1a9c/src/src/ui/uiMainDlg.cpp#L2182-L2188

I know: this is a workaround, as written, and the function overrides (mouse) values that were just written, but I don't want to modify ImGui code, which is in continuous evolution.
Anyhow, it seems to work reasonably well... though I don't need more than two touches for my application, and currently I use only one touch for ImGui.
Perhaps replicating (or creating a similar correspondence between) the mouse coordinates and the touch ones, and the mouse buttons and taps, could be the easiest initial way.
In the same way (like mouse buttons), it might be possible to replicate the keyboard "modifiers" with the presence of multiple touches on screen...
I thought of the current keyboard+drag combinations that might need 2-3 touches (e.g. Shift+drag: 2 separate fingers, 1 staying still while the other drags... 3 for Alt+drag, with 2 staying still...), or also a simple popup (like the ColorEdit control) with the possibility of selecting the speed/precision.
But this was just my rumination...
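As a purely hypothetical sketch of the finger-count-to-modifier mapping described above (none of this is ImGui API; the enum, function, and the specific counts are invented for illustration):

```cpp
#include <cassert>

// Hypothetical mapping from the number of fingers on screen during a drag to
// an emulated keyboard modifier: 1 finger = plain drag, 2 fingers (1 still,
// 1 dragging) = Shift+drag, 3 fingers (2 still) = Alt+drag.
enum class DragModifier { None, Shift, Alt };

DragModifier ModifierForFingerCount(int fingers)
{
    switch (fingers)
    {
    case 2:  return DragModifier::Shift;
    case 3:  return DragModifier::Alt;
    default: return DragModifier::None; // 1 finger, or anything unrecognized
    }
}
```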

Thanks.


ifarbod commented Feb 9, 2019

@BrutPitt I've looked at your code, and I think it's good enough for the time being. I personally had a similar idea: use the touch events given by the platform and simulate mouse events at the end (e.g. a dragging gesture producing proper io.MouseWheel values).

I know: this is a workaround, as written, and the function overrides (mouse) values that were just written, but I don't want to modify ImGui code, which is in continuous evolution.

We should come up with proper touch support in Dear ImGui itself so we can ditch our current workarounds :D

Off-topic: how are you using GLFW with Emscripten? Isn't it supposed to be used only on desktop platforms like Win32 and macOS? It seems Emscripten has a GLFW emulation layer in addition to the SDL 1.2 one.


BrutPitt commented Feb 10, 2019

First of all, this method emulates "touches" on top of ImGui's existing mouse management: it's not real touch-event management.
Moreover, it was designed to have the least possible impact on ImGui (for my personal use).

That said, I think a similar workaround would be quite simple to implement in ImGui (mostly without modifying current behavior/functionality):
To get the screen position (touch or mouse), for example in GLFW, instead of calling glfwGetCursorPos directly, use a function pointer initially assigned to glfwGetCursorPos (or a function that wraps it, without exposing GLFWwindow), and expose (internally to imgui_impl_glfw.cpp) a helper function to substitute the default assignment with a custom one (e.g. a personal getTouchPos)... (or allow changing the function-pointer assignment even without a helper function).
It's similar in SDL with SDL_GetMouseState in imgui_impl_sdl.cpp, and on Windows with ::GetCursorPos in imgui_impl_win32.cpp... similar again in Marmalade with the s3ePointerGetX and s3ePointerGetY functions, which can be grouped into one.
imgui_impl_freeglut.cpp is a little different, but the current ImGui_ImplFreeGLUT_MouseFunc could work with a small change, or by simply passing numberOfFingers instead of buttonPressed (to be evaluated).
(I have not looked at the other files not mentioned.)
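A rough, self-contained sketch of the function-pointer idea above (all names are hypothetical; in a real GLFW backend the default hook would wrap glfwGetCursorPos):

```cpp
#include <cassert>

// Hypothetical sketch: the backend reads the pointer position through a
// replaceable hook instead of calling the platform function directly, so an
// application can substitute a touch-coordinate getter.
static void DefaultGetPointerPos(double* x, double* y)
{
    // In a real GLFW backend this would call glfwGetCursorPos(window, x, y).
    *x = 0.0; *y = 0.0;
}

static void (*g_GetPointerPos)(double*, double*) = DefaultGetPointerPos;

// Helper the backend could expose to swap in a custom getter
// (passing nullptr restores the default).
void SetPointerPosGetter(void (*getter)(double*, double*))
{
    g_GetPointerPos = getter ? getter : DefaultGetPointerPos;
}

// Example custom getter returning the last touch coordinates recorded by an
// application's own touch-event handler.
static double g_TouchX = 120.0, g_TouchY = 240.0;
static void GetTouchPos(double* x, double* y) { *x = g_TouchX; *y = g_TouchY; }
```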

Second:
Wanting to consider using 1 finger present == button0, 2 fingers present == button1... and so on: add a helper function (to call on touch events) where you pass the number of active fingers (to get/change the number of fingers present)... and finally use something like io.MouseDown[i] = g_MouseJustPressed[i] || imguiJustTouched[i] (just like it already happens, only also adding imguiJustTouched) to test for a mouse click or a tap on screen (whichever happens).

But I know it is necessary to consider everything globally and not just for this particular example/use, and for now I have tested this touch method only on SDL and GLFW with Emscripten.
I also haven't tested/considered "multitouch" inside ImGui (only in my application)... but ImGui currently uses only one screen position (mouse: x, y), and it should be up to the user to choose which pair of finger coordinates (x, y) to pass to ImGui.

However, if @ocornut is open to the method, and above all has plans or the desire to simulate touch events on top of the current mouse management, I can try to implement a working example.
Obviously, suggestions and/or directives are welcome.

@ifarbod
Yes, Emscripten has had a GLFW emulation layer for several years (since GLFW2, I think), and it also supports SDL2.


ifarbod commented Apr 8, 2022

@ocornut Is there any chance of looking back at this again now that we have the new input API? :)

swinefeaster commented
I am looking to handle touch pinch-zoom in one of my canvases inside ImGui... I didn't see anything in ImGuiIO. It looks like I need WM_GESTURE and GID_ZOOM.

ocornut added a commit that referenced this issue Apr 4, 2023
…ent(). (#2334, #2702)

SDL doesn't distinguish Pen yet, but we don't need it as much as TouchScreen, which will alter trickling.

ocornut commented Apr 4, 2023

@ocornut Is there any chance of looking back at this again now that we have the new input API? :)

I just pushed a16f99c + f070497 (and f33a0de, unrelated to this) to support that discrimination.

However, I didn't apply this to a varying TouchExtraPadding, as that would require the backend to 100% support this discrimination.
I think you can now poll io.MouseSource after NewFrame() and alter TouchExtraPadding based on it.
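As a sketch of that (the enum below is a local stand-in for ImGuiMouseSource so the snippet is self-contained and testable; in a real app you would read io.MouseSource after ImGui::NewFrame() and write the result to ImGui::GetStyle().TouchExtraPadding):

```cpp
#include <cassert>

// Local stand-in for ImGuiMouseSource (the real enum distinguishes Mouse,
// TouchScreen, and Pen) so this sketch compiles on its own.
enum class MouseSource { Mouse, TouchScreen, Pen };

struct Vec2 { float x, y; };

// Pick the TouchExtraPadding to apply this frame from the source of the most
// recent pointer event: fingers get padding, mouse and pen do not.
Vec2 TouchPaddingForSource(MouseSource src)
{
    return (src == MouseSource::TouchScreen) ? Vec2{4.0f, 4.0f}
                                             : Vec2{0.0f, 0.0f};
}
```

The choice of 4x4 padding mirrors the value used earlier in this thread; the important part is only that the decision can now be made per frame from the reported source.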

I am not sure what else there is to do, or how much of @BrutPitt's message #2334 (comment) is still valid. On my setup, even though GLFW doesn't have explicit touch support, it does report touch screens as mouse events. If there is a missing component (Emscripten version, etc.?), please open a separate issue/PR and we'll address it. Note that our Mouse vs TouchScreen discrimination for imgui_impl_glfw.cpp is Windows-only for now (see f070497). I'm OK to have other workarounds in our GLFW backend until GLFW adds support.
