

This sounds like an addition that would fit into [Extra Automation Tasks > Keyboard Simulation].

The idea is to pass in an array of string sequences that will be evaluated as a chain of "Dispatch Key Combo" objects.

 

Example input: 

[
	"⇧⌘←",
	"⇧↑",
	"⇧↑",
	"⌘B"
]

 

That way we could reduce the number of individual components on the canvas and programmatically define key sequences that require more than one dispatch.
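
For illustration, here is a rough Swift sketch of how such a chain could be evaluated with CGEvent-based key simulation. The symbol and key-code tables are abbreviated and all names are mine, so treat it as a sketch rather than a spec:

import CoreGraphics
import Foundation

// Abbreviated maps from combo symbols to modifier flags and US-ANSI virtual key codes.
let modifierFlags: [Character: CGEventFlags] = [
    "⌘": .maskCommand, "⇧": .maskShift, "⌥": .maskAlternate, "⌃": .maskControl
]
let keyCodes: [Character: CGKeyCode] = [
    "←": 0x7B, "→": 0x7C, "↓": 0x7D, "↑": 0x7E, "↩": 0x24, "B": 0x0B, "C": 0x08
]

// Dispatches one combo string such as "⇧⌘←" as a key-down/key-up pair.
func dispatchCombo(_ combo: String) {
    var flags: CGEventFlags = []
    var key: CGKeyCode?
    for ch in combo {
        if let flag = modifierFlags[ch] { flags.insert(flag) } else { key = keyCodes[ch] }
    }
    guard let code = key, let source = CGEventSource(stateID: .hidSystemState) else { return }
    for down in [true, false] {
        let event = CGEvent(keyboardEventSource: source, virtualKey: code, keyDown: down)
        event?.flags = flags
        event?.post(tap: .cghidEventTap)
    }
}

// Evaluate the chain from the example input, pausing between dispatches.
for combo in ["⇧⌘←", "⇧↑", "⇧↑", "⌘B"] {
    dispatchCombo(combo)
    usleep(50_000) // 50 ms; GUI automation usually needs a short pause
}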


I’m wary of the complexity vs. utility of this. I doubt many people would take advantage of it, and it couldn’t truly be just a dispatch of combos in a row, because those often need delays between them as they’re essentially triggering GUI automation. So it would need at least one delay setting, which could be crummy if applied equally between all of them when only one of the combos needs a long delay and the others don’t. That takes us back to either the original approach, or shoving in even more ways to define delays.

 

What are some examples of situations where you set up multiple combos in a row? I ask in particular because you’re a skilled coder, so I want to understand the situations where you’re having to resort to multiple key combos.

2 hours ago, vitor said:

What are some examples of situations where you set up multiple combos in a row? I ask in particular because you’re a skilled coder, so I want to understand the situations where you’re having to resort to multiple key combos.

 

I am reasoning about an extensible "Inference Task" configuration scheme for my GPT workflows. Such a "task" defines, among other things, the system prompt, the parameters, and the desired behavior for handling the result, e.g. whether the result should be pasted into the frontmost application at the end or just copied.
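
To make that concrete, here is a minimal Swift sketch of what one task definition might look like. All names are placeholders from my draft scheme, not anything final:

import Foundation

// Hypothetical shape of one "Inference Task" definition.
struct InferenceTask: Codable {
    let name: String
    let systemPrompt: String
    let temperature: Double

    // What to do with the model output once inference finishes.
    enum ResultBehavior: String, Codable {
        case paste // paste into the frontmost application
        case copy  // only copy to the clipboard
    }
    let resultBehavior: ResultBehavior

    // Optional chain of key combos dispatched before inference,
    // e.g. to select the text the task should operate on (see below).
    let snippetTrigger: [String]?
}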

 

Concrete, simple examples are definitions for Universal Actions to correct the spelling of text, change the tone, or create summaries. Another use case would be a configuration for a chat session that effectively turns the current session into a translation engine or an etymological dictionary.

 

And yet another use case, the one that would require the key combo dispatches, is "snippet triggers". If the predefined combos could be evaluated in one go, it would be possible to define very application-specific behaviors. For example, in text editors an action could be configured to select only the last paragraph as the target for the task (⌥⇧↑, ⌘C, →, ↩, ↩), or the entire page, or actions could be primed to react very specifically to certain applications (e.g. Logseq).
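
With the task shape sketched above, the last-paragraph example could be expressed like this (purely illustrative values):

// Hypothetical task: the snippet trigger selects the last paragraph,
// copies it, then moves the caret and opens two new lines for the result.
let fixLastParagraph = InferenceTask(
    name: "Fix Last Paragraph",
    systemPrompt: "Correct the spelling of the given text. Return only the corrected text.",
    temperature: 0.2,
    resultBehavior: .paste,
    snippetTrigger: ["⌥⇧↑", "⌘C", "→", "↩", "↩"]
)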

 

If this were possible via Alfred, I wouldn't have to rebuild the entire NSEvent simulation just for that, i.e. I would probably just drop that whole part. 😄

 

I hope that makes sense, and granted, this may be a bit niche. But I figured, since the core functionality already exists, maybe it's not such a stretch to ask.

If the internals don't allow for an easy implementation, so be it~ As for the delay issue, maybe an approach similar to that of the "Simulate Typing Text" automation task would suffice: fast, medium, and slow dispatch speeds.
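
Something like this, where a preset picks the pause between dispatches (the values are guesses, just to illustrate):

// Hypothetical speed presets, mirroring the "Simulate Typing Text" task.
// Raw values are the pauses between combo dispatches, in microseconds.
enum DispatchSpeed: UInt32 {
    case fast = 15_000
    case medium = 50_000
    case slow = 150_000
}

// Between two combo dispatches:
usleep(DispatchSpeed.medium.rawValue)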

 

If you're interested, I'd be happy to show you the drafts of the configuration scheme to give you a better idea of what's going on, or if you're curious about pkl (github), which I'm playing with to set it up.
