Started to work on startup time improvements and, more specifically, "cold" (first) start.
Here are startup times for the latest pack for GTA5 by @linx
- https://eyeauras.net/share/S202503032233289fPxqP4AA0N0
v8215 - 33.41s
v8224 - 28.89s
So the new version is approximately 15% faster on the first launch.
Subsequent launches:
v8215 - 29.38s
v8224 - 27.10s
~8% faster.
I will keep working on improving those stats.
Sometimes, you do not need ALL the functionality built into EyeAuras in your Packs. For example, if your pack does not use computer vision at all, it does not make much sense to load CV-related modules into memory - that is a waste of both CPU and RAM. With the new functionality around working with applications directly - such as reading memory - it is expected that there will be more and more cases where you just want to use EyeAuras as a platform for the development and distribution of your own app.
This feature is intended for exactly that purpose - you can now prevent some parts of the application from loading. The blacklist is part of the Pack configuration. By default, everything is shipped, but for now there are two types of modules which can be excluded:
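As a sketch, a module blacklist inside a Pack configuration might look something like the following. Note that the property names and module names here are purely illustrative - they are not the actual EyeAuras schema:

```json
{
  "modules": {
    "blacklist": [
      "ComputerVision",
      "MachineLearning"
    ]
  }
}
```

The idea is simply that anything listed is skipped at startup, saving both load time and memory for packs that never use those subsystems.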
KeyLogin is a way to sign in using a one-time license key — no account creation required. This system is intended to be used along with Sublicenses to allow your users to access your cool Packs as quickly as humanly possible.
Here is how the process looks right now.
For the end user (the person who will be using the pack):
Download pack
That is it. I think this is as simple as it could get and we'll stay in that state for some time and see how it goes.
For you (the author creating the pack):
Yeah... for authors the process is not as straightforward. I will streamline it this year.
Checks whether the color of a specific pixel (or region) matches the selected one.
Probably one of the most useful nodes out there - fast, very easy to use and very flexible.
Finds an image somewhere on the screen or within a selected region.
Added a new node intended for those who are more comfortable with "classic" logic building.
A new node in BTs and Macros which allows you to run an ML search over a region of a screen/window.
It is still missing many of the options currently present in MLSearchTrigger, such as Confidence/IoU thresholds, inference method, number of threads, etc. - those will be added in the near future.
Note that by itself this node does not do anything - you have to pair it with other nodes such as MLFindClass or MouseMove to get anything from it.
By default, it will pick the very first object and make it available for clicking via MouseMove - for some very simple scenarios that will be enough; for more complex ones, use MLFindClass.
The node will return Success only if there is at least one found prediction.
This node must be used in pair with MLSearch and allows you to pick some class (~object) from the output generated by MLSearch. You can set filtering parameters such as class name(s), minimum confidence threshold, size restrictions, etc. Predictions which do not match any of those criteria are filtered out.
Out of the predictions left after filtering, you can make the node pick exactly one - the very first in the list, the one with the highest confidence, or even the one currently closest to the cursor.
The node will return Success only if there is at least one found prediction that matches all the criteria.
Of course, both MLSearch and MLFindClass can be used in combination with all other nodes. Here is, for example, a priority-based target selection:
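To make the filter-then-pick behavior of MLFindClass more concrete, here is a small illustrative C# sketch. These types and method names are NOT the actual EyeAuras API - they just mirror the logic described above: filter predictions by class name, confidence and size, then pick exactly one of the survivors.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical prediction shape - for illustration only.
record Prediction(string ClassName, double Confidence, double Width, double Height);

static class MlFindClassSketch
{
    public static Prediction? Pick(IEnumerable<Prediction> predictions,
                                   string className,
                                   double minConfidence,
                                   double minWidth)
    {
        var matching = predictions
            .Where(p => p.ClassName == className)      // class name filter
            .Where(p => p.Confidence >= minConfidence) // confidence threshold
            .Where(p => p.Width >= minWidth);          // size restriction

        // Pick exactly one - here, the highest-confidence prediction;
        // "first in list" or "closest to cursor" are alternative strategies.
        return matching.OrderByDescending(p => p.Confidence).FirstOrDefault();
    }
}
```

A null result corresponds to the node returning Failure; a non-null result corresponds to Success.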
Let's welcome a new node which is accessible in Macros and BTs. It is the first in a batch of upcoming nodes focusing on computer vision. These nodes will be a direct replacement for the triggers you're currently using in Auras. Migration will take a lot of time, and the new Nodes will probably reach feature parity with the corresponding Triggers only months from now, but eventually we'll get there.
Note that Nodes DO NOT tick on their own - image capture and analysis happen at exactly the moment the node is executed.
Initially, the performance of the new Nodes may be worse than what we had in Triggers - this is due to the new mechanisms implemented for the Computer Vision API which powers them.
New Nodes write their results to Variables, which are available via Bindings.
For now, there is a single variable, CvLastFoundRegion, which contains the last found region - the coordinates of a found pixel, a found image, or an ML class.
In MouseMove, there is now a new button which lets you very quickly and easily use that variable as a source of coordinates.
Just click the button and, when executed, MouseMove will try to read coordinates from that variable.
Implemented the first prototype of the system which should allow using the CV API in BTs and Macros. The general idea is that the methods you call, be it ImageSearch/ColorSearch/etc, will now cache the underlying capture/processing mechanisms, which should make subsequent similar calls from other parts of the program much, much cheaper. We're talking seconds vs milliseconds here.
The mechanism tries to track which parts of the program use which parts of the CV API and caches them accordingly. Monitor memory usage! Such systems usually tend to leak memory. A lot. Especially until properly tuned.
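Conceptually, this kind of caching keys the expensive capture/processing pipeline by its parameters, so a second caller with the same configuration reuses the already-built instance. Here is a minimal illustration of the idea - not the real EyeAuras internals:

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative sketch only. Expensive pipelines (capture, inference, etc.)
// are cached per configuration key, so repeated similar calls pay
// milliseconds instead of seconds.
sealed class PipelineCache<TKey, TPipeline> where TKey : notnull
{
    private readonly ConcurrentDictionary<TKey, Lazy<TPipeline>> cache = new();
    private readonly Func<TKey, TPipeline> factory;

    public PipelineCache(Func<TKey, TPipeline> factory) => this.factory = factory;

    public TPipeline GetOrCreate(TKey key) =>
        cache.GetOrAdd(key, k => new Lazy<TPipeline>(() => factory(k))).Value;

    // This is why monitoring memory matters: without eviction,
    // every distinct key keeps its pipeline alive forever.
    public void Clear() => cache.Clear();
}
```

The memory-leak warning above follows directly from the design: entries accumulate for every distinct key until something evicts them.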
This API is still very rough around the edges and will be improved in upcoming releases.
This method is now part of the Computer Vision API and allows you to entirely bypass the caching mechanism if you do not need it - e.g. when your entire program is a single C# script. In that case it does not make sense to rely on EA caching mechanisms and pay the extra (yet small) price for them.
Added a Clear() method to MiniProfiler.

In the latest version I've enabled nodes such as Cooldown/Selector/Sequence for use in Macros as well. The assumption was that this would make creating rotations a bit easier and bring BT goodies to Macros too. First testing has shown that this functionality still needs more time in the oven, as it raises too many questions, especially around the drag'n'drop functionality.
I've disabled those nodes for now; they will become available later, as soon as the UX around them is ready.
The macro system has received significant upgrades, expanding both its functionality and the number of supported operations. In practice, behavior trees have proven to be fairly complex to grasp, while Macros were always meant to occupy the niche between very simple auras and more sophisticated behavior trees. These changes aim not only to reinforce that position but also to incorporate many of the strengths from both Auras and Trees.
Let’s dive in:
This is the core concept from the Auras system. When something happens, a trigger fires. Then, in the aura, the following blocks are called sequentially:
This combination of blocks can describe many situations and automate gameplay. However, it often lacked flexibility — for example, it wasn’t possible to add additional conditions within blocks or repeat actions multiple times. Auras simply weren’t designed for that, but these features are fundamental to Macros.
The one thing Macros were missing was the ability to check their own current state — to know whether they should be active at a given moment.
CheckIsActive
This is where the new node CheckIsActive comes in. Its sole purpose is to let the macro body check whether the macro is currently active. This node alone allows Macros to replicate the exact logic that Auras have used for years.
There are many possible ways this kind of macro could look. Here’s just one example:
The idea is simple:
- The first block (OnEnter) runs on entry.
- The second block (WhileActive) is actually a Repeat loop that runs every 250ms as long as the macro remains active.
- When the macro deactivates, the final block (OnExit) takes over to handle any exit logic.

In the near future, additional improvements will make creating "Aura-like" macros even easier.
Here’s another example – a macro that waits for deactivation and performs an action only on exit. This mirrors the behavior of an aura with just OnExit.
A whole set of nodes previously exclusive to Behavior Trees is now supported in Macros as well:
Note: The first three nodes (Selector/Sequence/Cooldown) are most useful inside a Repeat block — that’s where they shine.
Let’s build a simple rotation with two skills and a basic attack, using macros:
Nothing too special — we check cooldowns, then availability, then cast in order. If skills are unavailable, we fall back to a basic attack.
Now here’s the exact same logic, wrapped inside a macro:
The logic is nearly identical, except now we're building the rotation inside the macro itself with a loop, and using a Selector to control flow.
Side note: Looking at this screenshot, it seems like I went a bit overboard with icons and formatting — it’s a bit too "noisy". I’ll try to clean that up.
Finally — in the foreseeable future, macros will also be editable in a graph view. This feature is still in development, but many users may find it easier to understand and work with macros this way.
A new mechanism which allows you to register hotkeys right from scripts.
[Keybind("p")] // Simple example - triggers when 'p' is pressed
[Keybind("Ctrl+2")] // Triggers only when 'Ctrl + 2' is pressed
[Keybind("Ctrl+2", IgnoreModifiers = true)] // Triggers on ANY combination containing 'Ctrl + 2' (e.g., 'Ctrl + Alt + 2')
public void OnKey(){
Log.Info("Key pressed");
}
[Keybind(Hotkey = "4", SuppressKey = false)] // Handles the key, but it will still pass through to other apps
public void HandleKeyWithInjectedServices(IAuraEventLoggingService loggingService){
Log.Info("Key pressed");
loggingService.LogMessage(new AuraEvent(){ Text = "Message", Loglevel = FluentLogLevel.Info });
}
If you're using GetService somewhere in your code, you're already using the DI system in EyeAuras.
I've made some improvements in that part which should simplify scripts.
Here is how the exact same script looks in three different forms.
Current approach - getting APIs via GetService:
var sendInput = GetService<ISendInputScriptingApi>();
sendInput.MouseMoveTo(200, 100); // Moves the mouse to X200 Y100
sendInput.MouseRightClick(); // Performs a right mouse click
Now, two new approaches which you can (and should!) start using in your scripts.
I am thinking about automatically injecting current APIs (such as SendInput) into your scripts - this would simplify and generalize the code.
E.g. for ISendInputScriptingApi the name would be SendInput (as in the example below), for IPlaySoundScriptingApi it would be PlaySound, and for ComputerVision it could be something like IComputerVisionExperimentalScriptingApi Cv { get; init; }.
I will publish this as a separate update.
ISendInputScriptingApi SendInput { get; init; } // Automatically initialized when the script starts
SendInput.MouseMoveTo(200, 100); // Moves the mouse to X200 Y100
SendInput.MouseRightClick(); // Performs a right mouse click
And the second approach - via the [Dependency] attribute:
[Dependency] ISendInputScriptingApi SendInput { get; set; }
SendInput.MouseMoveTo(200, 100); // Moves the mouse to X200 Y100
SendInput.MouseRightClick(); // Performs a right mouse click