I wrote an application that detects all active windows and puts them into a list.

Is there a way to simulate a mouse click at a spot on the screen, relative to a window's location, without actually moving the cursor?

I don't have access to the handle of the button that is supposed to be clicked, only to the handle of the window.


1 Answer

Is there a way to simulate a mouse click at a spot on the screen, relative to a window's location, without actually moving the cursor?

To answer your specific question: no. Mouse clicks can only be directed where the mouse cursor actually resides at the time of the click. The correct way to simulate mouse input is to use SendInput() (or mouse_event() on older systems). But those functions inject simulated events into the same input queue that the actual mouse driver posts to, so they have a physical effect on the mouse cursor, i.e. they move it around the screen.
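To make that side effect concrete, here is a minimal, untested sketch of a SendInput()-based click (the ClickAt() helper name and the coordinate math are my own, for illustration only). Notice that the first event has to physically relocate the real cursor; there is no flag that clicks somewhere while the cursor stays put:

```cpp
#include <windows.h>

// Hypothetical helper: click the left button at screen pixel (x, y).
void ClickAt(int x, int y)
{
    // MOUSEEVENTF_ABSOLUTE expects coordinates normalized to 0..65535.
    int screenW = GetSystemMetrics(SM_CXSCREEN);
    int screenH = GetSystemMetrics(SM_CYSCREEN);

    INPUT inputs[3] = {};

    // This event moves the actual cursor - the side effect at issue.
    inputs[0].type = INPUT_MOUSE;
    inputs[0].mi.dx = (x * 65535) / screenW;
    inputs[0].mi.dy = (y * 65535) / screenH;
    inputs[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;

    inputs[1].type = INPUT_MOUSE;
    inputs[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;

    inputs[2].type = INPUT_MOUSE;
    inputs[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;

    SendInput(3, inputs, sizeof(INPUT));
}
```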

How do I simulate input without SendInput?

SendInput operates at the bottom level of the input stack. It is just a backdoor into the same input mechanism that the keyboard and mouse drivers use to tell the window manager that the user has generated input. The SendInput function doesn't know what will happen to the input. That is handled by much higher levels of the window manager, like the components which hit-test mouse input to see which window the message should initially be delivered to.

When something gets added to a queue, it takes time for it to come out the front of the queue

When you call SendInput, you're putting input packets into the system hardware input queue. (Note: not the official term. That's just what I'm calling it today.) This is the same input queue that the hardware device driver stack uses when physical devices report events.

The messages go into the hardware input queue, where the Raw Input Thread picks them up. The Raw Input Thread runs at high priority, so it's probably going to pick them up really quickly, but on a multi-core machine, your code can keep running while the second core runs the Raw Input Thread. And the Raw Input Thread has some stuff it needs to do once it dequeues the event. If there are low-level input hooks, it has to call each of those hooks to see if any of them want to reject the input. (And those hooks can take who-knows-how-long to decide.) Only after all the low-level hooks sign off on the input is the Raw Input Thread allowed to modify the input state and cause GetAsyncKeyState to report that the key is down.
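As a rough illustration of that hook stage (my own untested sketch, not part of the quoted article): a WH_MOUSE_LL hook is one of those low-level input hooks, and returning nonzero from it rejects the event before the input state is ever updated:

```cpp
#include <windows.h>

// A low-level mouse hook: return nonzero to reject (swallow) the event,
// otherwise pass it down the hook chain. The Raw Input Thread has to
// wait for this verdict before it may update the input state.
LRESULT CALLBACK MouseHookProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION && wParam == WM_LBUTTONDOWN)
        return 1; // reject every left-button press (demo only!)

    return CallNextHookEx(NULL, nCode, wParam, lParam);
}

// Installation; the installing thread must pump messages for the hook to run:
//   HHOOK hHook = SetWindowsHookEx(WH_MOUSE_LL, MouseHookProc,
//                                  GetModuleHandle(NULL), 0);
```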

The only real way to do what you are asking for is to find the HWND of the UI control that is located at the desired screen coordinates. Then you can either:

  1. send WM_LBUTTONDOWN and WM_LBUTTONUP messages directly to it. Or, in the case of a standard Win32 button control, send a single BM_CLICK message instead (see the first sketch after this list).

  2. use the AccessibleObjectFromWindow() function of the Microsoft Active Accessibility API to access the control's IAccessible interface, and then call its accDoDefaultAction() method, which for a button will click it (see the second sketch after this list).
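A minimal, untested sketch of option 1, assuming you already have the control's HWND (hControl, clientX and clientY are placeholder names). One caveat: these messages bypass the input queue entirely, so GetAsyncKeyState() and capture-sensitive controls won't see a "real" click:

```cpp
#include <windows.h>

// Variant 1: synthesize the raw button-down/button-up pair at a
// client-area point (coordinates are packed into lParam).
void FakeRawClick(HWND hControl, int clientX, int clientY)
{
    LPARAM pos = MAKELPARAM(clientX, clientY);
    PostMessage(hControl, WM_LBUTTONDOWN, MK_LBUTTON, pos);
    PostMessage(hControl, WM_LBUTTONUP, 0, pos);
}

// Variant 2: for a standard Win32 button control, one message suffices.
void FakeButtonClick(HWND hButton)
{
    SendMessage(hButton, BM_CLICK, 0, 0);
}
```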
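And a similarly hedged sketch of option 2 (untested; assumes COM has been initialized on the calling thread and that the project links oleacc.lib):

```cpp
#include <windows.h>
#include <oleacc.h>
#pragma comment(lib, "oleacc.lib")

// Ask for the accessible object representing the control itself, then
// invoke its default action - for a push button, that is a click.
HRESULT ClickViaMsaa(HWND hButton)
{
    IAccessible* pAcc = NULL;
    HRESULT hr = AccessibleObjectFromWindow(hButton, OBJID_CLIENT,
                                            IID_IAccessible, (void**)&pAcc);
    if (SUCCEEDED(hr) && pAcc)
    {
        VARIANT self;
        self.vt = VT_I4;
        self.lVal = CHILDID_SELF; // act on the object itself, not a child
        hr = pAcc->accDoDefaultAction(self);
        pAcc->Release();
    }
    return hr;
}
```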

That being said, ...

I don't have access to the handle of the button that is supposed to be clicked.

You can access anything that has an HWND. Have a look at WindowFromPoint(), for instance. You can use it to find the HWND of the button that occupies the desired screen coordinates (with caveats, of course: WindowFromPoint, ChildWindowFromPoint, RealChildWindowFromPoint, when will it all end?).
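Putting that together, a minimal untested sketch (FindControlAt() is a name I made up) that resolves a screen coordinate to the control occupying it, drilling from the top-level hit down into child windows:

```cpp
#include <windows.h>

// Hypothetical helper: which HWND occupies this screen coordinate?
HWND FindControlAt(POINT screenPt)
{
    // Window directly under the point (may be a top-level window).
    HWND hwnd = WindowFromPoint(screenPt);
    if (hwnd == NULL)
        return NULL;

    // Drill down: convert to the window's client coordinates and ask
    // which child control sits there. RealChildWindowFromPoint copes
    // better with grouped controls (hence the article's caveats).
    POINT clientPt = screenPt;
    ScreenToClient(hwnd, &clientPt);

    HWND hChild = RealChildWindowFromPoint(hwnd, clientPt);
    return (hChild != NULL) ? hChild : hwnd;
}
```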

