
Effect System Proposal #717

Closed
fredizzimo opened this issue Sep 3, 2016 · 6 comments
Comments

@fredizzimo
Contributor

What is the Effect System?

The "Effect System" is a new library that I'm proposing. It's an improved version of the "Visualizer", a library for visualising things on the keyboard: it can control the backlight, the standard keyboard LEDs, per-key LEDs, LCD screens, and basically anything controllable that can be attached to a keyboard.

I have previously called it the Visualizer, because everything it controlled on the Infinity Ergodox was visual. However, I don't see why the same system could not also control, for example, the audio system.

It could of course also be used to control things that are currently not available on keyboards. Maybe some future keyboards will have some kind of vibration motor that you could feel with your fingertips. It might not be as crazy as it sounds; one use case I can think of would be indicating spelling errors.

Therefore, instead of calling it the "Visualizer", I will call it the "Effect System". I don't particularly like the name, but at least it describes what it does, so feel free to propose a better one.

How the current Visualizer works

The best way to get a feel for the Visualizer is probably to look at the example included in my TMK fork for the Infinity Ergodox. I have tried to comment it quite generously, because it's also meant to act as the main documentation.

In short, it's a system for defining keyframe animations, which can be started and stopped based on what's going on with the keyboard. The frames can have any duration, including zero, which makes them instant. Frames with a non-zero duration additionally have an update function that is called at regular intervals, so you can do fading, for example, but also much more advanced effects.
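As a rough illustration of that model, a keyframe could carry a duration and an update callback, and the animation would just be an array of frames. Everything here is hypothetical, not the actual Visualizer API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of the keyframe model described above; the names
 * are illustrative, not the real Visualizer API. */
typedef struct {
    uint32_t duration_ms;          /* 0 means the frame is instant */
    void (*update)(uint32_t t_ms); /* called at regular intervals while active */
} keyframe_t;

/* Return the index of the frame that is active at time t_ms. */
static size_t frame_at(const keyframe_t *frames, size_t n, uint32_t t_ms) {
    uint32_t end = 0;
    for (size_t i = 0; i < n; i++) {
        end += frames[i].duration_ms;
        if (t_ms < end) return i;
    }
    return n - 1; /* past the end: clamp to the last frame */
}
```

A zero-duration frame contributes nothing to the accumulated time, so it fires and is immediately passed over, which matches the "instant" behaviour described above.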

On the Infinity Ergodox, each half runs the exact same code, and the idea is that by acting on the same input, the halves will stay in sync, which is mostly the case in practice.

The input is called visualizer_state_t in the code, and contains things like the active layer, the keyboard suspend status, and the states of the standard keyboard LEDs (Caps Lock, Num Lock and so on). The input is synchronized to other physical devices using the "Serial Link" library.
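Based only on the fields mentioned above, the shape of that input is roughly the following. Field names here are made up for illustration; the real definition is the visualizer_state_t in the Infinity Ergodox fork:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the input state described above; field names
 * are placeholders, not the actual visualizer_state_t members. */
typedef struct {
    uint32_t layer;     /* bitmask of currently active layers */
    bool     suspended; /* keyboard suspend status */
    uint8_t  leds;      /* host LED bits: Caps Lock, Num Lock, ... */
} visualizer_state_sketch_t;
```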

On the Infinity Ergodox, the visualizer runs in its own thread, so things that take a relatively long time, like drawing to the LCD, don't have any effect on the normal keyboard loop. However, for keyboards with slower processors and less advanced visualization, it would be quite easy to do the same thing every scan loop instead.

The problems with the current Visualizer

While the Visualizer perhaps does its job for simple things, I don't particularly like the solution. There are several problems.

  1. The visualizer_state_t struct is defined on the library side, and contains far too few things. For example, at the moment it lacks information about which keys have been pressed. Standard things could quite easily be added, but then we would need to figure out every possible thing a user might need.

    Furthermore, since the definition is on the library side, keymaps can't really add their own custom state. Keymaps could replicate the functionality of the library and define their own remote objects for the extra data, but that's not a nice way of doing it.

  2. The actual keyframe functions have a fixed signature, and the parameters are always automatically passed. That makes code re-use very hard. Many of the keyframe functions would definitely benefit from taking parameters so that they are customizable.

  3. The starting and stopping of animations is purely based on inspecting the state of the visualizer_state_t struct. This makes some things much harder than it really should be, for example just starting an animation from a keyboard macro is hard. If we assume that number 1. is fixed, and you can add custom fields, then you need to add a variable representing what you want to do and check for that in the visualizer code. Then that variable has to somehow be reset, which isn't really possible without huge hacks.

  4. It's not really possible to combine effects. One thing that I think should easily be possible is to have layers, for example when you change the brightness of the LCD screen, it would be nice to have the current brightness being displayed as text on the screen. This text would fade away after a while, and the LCD would return to displaying what it was before.

  5. It's easy to make mistakes when defining the animations. For example you could specify the wrong number of keyframes, or you might be adjusting the length of the wrong frame.

How the Effect System would work and solve those problems

Note: the numbers here do not correspond to the numbers in the problem list above; I'm just listing the key technical design points. The list also mostly covers differences from the current Visualizer, so things not listed here will most likely stay almost the same.

  1. Instead of having the global visualizer_state_t, each effect has its own parameters. These parameters are given when starting the effect, but it should also be possible to change them while the effect is playing.

  2. These effect parameters are regularly synchronized over the serial link, along with the list of which effects are playing and the starting time of each effect. If we additionally synchronize the clocks on all the devices, then all effects should stay synchronized across all devices.

    In the case of the Ergodox Infinity, the targets are the slave devices; if we use the same system for the host commands described in Brainstorming for usb endpoint for host commands #692, then the host would be the PC and the target would be the attached keyboard.

  3. You start and stop effects and update their parameters using regular functions, which can be called at any time. These functions take all the parameters needed, so the usage would be much like the current RGB light effects, for example rgblight_effect_breathing(55), but the functions will probably be named start, stop, and set instead. In fact, at some point we should probably convert those existing functions to use the effect system.

    This should make it very easy to integrate into existing keymaps. It's also easy to implement empty versions of these functions when support for the hardware is disabled, or, in the case of the Ergodox, where the Infinity would support more kinds of effects than the EZ. In both cases the keymap can freely call the functions. I think the linker will be smart enough to see that these functions are not used and optimize the actual calls away, but that has to be tested.

  4. The effect system is ticked once per frame. For a thread-less implementation this is where the actual physical effect is rendered, and data sent over the serial link. For a threaded implementation, this is where the effect parameters are synchronized with the thread.

  5. Effect combining is something that I don't have a very good solution for yet, but I'm leaning towards having "meta-effects", which can read the output of other effects and combine them, most likely with simple BitBlt-like functions. We could have one generic meta-effect that simply uses a layering system and always draws the highest layer, and keymaps could then write more specialized ones.

  6. Reducing the number of possible mistakes is something that I don't have any solution for in C. We might be able to work around things with some clever macros, but it would be pretty type-unsafe, with horrible casts everywhere, and many errors would still most likely only be detected at runtime. Keep in mind that the rest of the system designed here will put more requirements on defining the effects than the current Visualizer does.

    But I'm personally very confident in C++ and its template metaprogramming capabilities, so I'm pretty sure that I can make something that is easy to use even for people who don't know C++. In any case, C++ would only be required for declaring the full effects/animations, and maybe for the combiner described in 5. Keymaps would still call regular C functions for starting, stopping and updating the effects, as described in 3. Finally, the actual keyframe functions would also be written as regular C functions.

    There should also not be any runtime overhead, as this system won't need the C++ runtime library. In fact, the memory requirements would probably be lower, since using C++ would allow me to reserve exactly the amount of memory needed, for both ROM and RAM, statically at compile time.

    Once the C++ implementation is done, we could have another look and see if it could be translated to C, perhaps with fewer features, so people have a choice to use that if needed.
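To make point 3 concrete, here is a sketch of what the per-effect control functions could look like. All names are hypothetical, modelled on the rgblight_effect_breathing(55) style mentioned above; the real names and signatures are not designed yet:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-effect parameter block; each effect owns its own
 * parameters instead of reading a global visualizer_state_t. */
typedef struct {
    uint8_t speed;
    bool    running;
} breathing_params_t;

static breathing_params_t breathing;

/* Start the effect with its parameters; callable at any time, e.g.
 * straight from a keyboard macro. */
void effect_breathing_start(uint8_t speed) {
    breathing.speed   = speed;
    breathing.running = true;
}

/* Update parameters while the effect is playing. */
void effect_breathing_set(uint8_t speed) {
    breathing.speed = speed;
}

void effect_breathing_stop(void) {
    breathing.running = false;
}
```

When hardware support is disabled, these could compile down to empty functions, so keymaps can call them unconditionally.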

The drawbacks of uGFX

The current Visualizer uses the uGFX library for drawing to the LCD screen. It also exposes the LEDs as a virtual screen, which lets you access individual pixels but also call higher-level things like circle drawing functions, or even text drawing.

I have found problems with uGFX, however. It's quite bloated and slow: many of the drawing functions repeatedly call through function pointers for each pixel, and those low-level pixel drawing functions also require branching and calculation for every pixel that is drawn.

Another problem is that it doesn't handle different colour spaces very well. For example, on the Ergodox Infinity the LCD screen is black and white, the LEDs are represented as a grayscale image, the backlight is RGB, and so on.

I also think we really should use a frame buffer model, at least logically, since many effects would need to read from the screen, and doing this in memory rather than going to the hardware would make things considerably faster. Yes, it will use more memory, but with modern microprocessors like the Teensies I don't think that's a problem. And for keyboards with smaller processors, using just LEDs shouldn't add much memory: 100 LEDs with 8-bit grayscale is just 100 bytes. Finally, you don't need to use the image abstraction layer; you can also access the hardware directly from the keyframe functions.
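A minimal sketch of the logical frame buffer idea, with arbitrary placeholder dimensions and names: effects read and write pixels in RAM, and the result is flushed to the hardware once per frame.

```c
#include <stdint.h>

/* 8-bit grayscale LED frame buffer; 100 "LEDs" here cost 100 bytes,
 * matching the memory estimate above. Dimensions are placeholders. */
enum { FB_WIDTH = 10, FB_HEIGHT = 10 };
static uint8_t framebuffer[FB_HEIGHT][FB_WIDTH];

static inline void fb_set(int x, int y, uint8_t v) { framebuffer[y][x] = v; }

/* Effects can read back what was drawn, which a write-only hardware
 * interface would not allow. */
static inline uint8_t fb_get(int x, int y) { return framebuffer[y][x]; }
```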

Unfortunately it's very hard to find alternatives. I have spent quite a lot of time searching and have only really found one. You would think it would be easy to find a software rendering library that outputs to RAM, but it isn't. There are completely bloated ones like Cairo, but it wouldn't be easy to integrate, would use too much memory, and uses floating point, which is a very bad idea for the controllers we are using.

So the only alternative I have found is uGUI. It's not perfect either: it can't use different colour formats per display, the drawpixel function is not inlined in this model, which makes it slower than it needs to be, and I don't see any functions for reading pixels.

But since uGUI is so simple, those problems would be quite easy to solve by modifying the code, perhaps by compiling the same code many times with different macro definitions for different pixel formats. C++ and templates could also be used for that, but then we push the C++ requirement onto the keyframe functions as well.

I'm very much open to suggestions for other libraries as well.

What do you think?

I realise that this is quite a long proposal, but I still hope that some of you have time to read it and comment. Note that I have left out a lot of details, for two reasons: to keep this reasonably short, and to not tie down the implementation too much. I like to let the final technical design take shape as I build the system through unit tests.

I think I will start working on this as soon as tomorrow, but you don't need to hurry with the comments; things can always change until we have the final implementation.

@fredizzimo
Contributor Author

I realize that the above is a little too long and abstract, but I just want to let you know that I'm still working on this and thinking about it. I have also decided to change things a little: rather than being one big effect system, it will consist of three parts.

  1. The animation system, a general-purpose keyframe-based animation system. I decided to use pure C rather than C++, since the C++ implementation would have been rather complex, and I found a quite clean way of using C designated initializers for it.
  2. The actual drawing. I'm still leaning towards using uGUI, perhaps with some minor modifications.
  3. The synchronization, which will use the serial_link system. I'm actually thinking about synchronizing the uGUI widgets and some simple output from the animations. I could go into much more detail, but at the risk of no one reading it, I won't.
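The designated-initializer approach mentioned in point 1 could look roughly like this. It's a sketch, not the actual API, but it shows why the approach is clean: frame count and per-frame settings stay together, which also helps with the mistake-proneness discussed in the original proposal.

```c
#include <stdint.h>

/* Sketch only: a keyframe type and an animation defined with C
 * designated initializers. The real API may differ. */
typedef struct {
    uint32_t duration_ms;
    void (*frame)(uint32_t t_ms);
} keyframe_t;

static void fade_in(uint32_t t)  { (void)t; /* drawing goes here */ }
static void hold(uint32_t t)     { (void)t; }
static void fade_out(uint32_t t) { (void)t; }

/* The array length is derived from the initializer, so the "wrong
 * number of keyframes" class of mistakes goes away. */
static const keyframe_t logo_animation[] = {
    { .duration_ms = 500,  .frame = fade_in  },
    { .duration_ms = 2000, .frame = hold     },
    { .duration_ms = 500,  .frame = fade_out },
};
```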

@fredizzimo
Contributor Author

I have changed my mind and will write more about the synchronization part. It's a complex problem, and I keep moving back and forth between different ideas, which is why I have trouble focusing on the simpler problem of making the animation system. So it would be good if someone else had comments or other ideas.

When I'm talking about synchronization, I'm actually talking about two different things:

  1. Synchronizing the updating and the rendering on a single keyboard between two different threads
  2. Synchronizing the same information over some kind of communication channel on a split keyboard

The problems are very similar so I think they could be handled in the same way.

You might wonder why we need number 1, so let me explain that first. We need to be able to run the normal keyboard scan loop at a very fast rate, preferably more than 1000 times per second. The problem is that with rich visualization that's no longer possible; for example, drawing to an LCD screen takes at least an order of magnitude longer.

Simple visualization, like updating LEDs, is probably fine though, and could be handled by drawing directly from the keyframe animation system. The goal is still for the base keyframe animation system to run on the smaller AVR-based keyboards, so its memory and CPU requirements should be similar to the existing RGB LED system. But for rich visualization I think we can assume a quite powerful processor, at least something like a Teensy 3.0, though the additional memory of a Teensy 3.1 could definitely help.

Because the actual rendering takes so long, we need to do it from another thread that has lower priority than the main scan loop and runs whenever the main loop is waiting for something. But at the same time we need to be able to control what is drawn from the main loop, since it makes things much easier to write if you can enable or disable something directly in the keymap when something happens.

Something like uGUI could help quite a bit with this, since the main loop could control which widgets and windows should be visible, as well as their contents, and the renderer would just render that state. The problem is the synchronization, and I don't see many alternatives to adding some sort of double buffering to the system, which would mean quite heavily modifying the uGUI code.

This would work OK if the renderer is completely stateless by itself and just reads values from the windows and widgets. Animation can also be supported this way, but you would need special widgets describing what kind of animation should be rendered and all its current parameters, like the current fade value, the current color and so on.

Certain animations could be hard to implement that way, though. Take a cross-fade, which would work like this: render the end of the previous frame with an alpha of 1−t, followed by the next frame with an alpha of t, where t is a value that goes from 0 to 1 over time (note that by scaling the values this can be done using integer math). This means we need three sets of parameters: one for the previous frame, one for the current frame and one for the next frame. So things could quickly get complex, at least compared to the normal case, where it would perhaps be enough to have some kind of union that combines the parameters for the different animation types, plus an enum that tells which kind of frame to render.
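The integer-math version of that blend could look like this (a sketch; t is scaled to the range 0..255 instead of 0..1):

```c
#include <stdint.h>

/* Cross-fade one grayscale pixel with integer math only: t8 runs from
 * 0 to 255 over the fade, playing the role of t in 0..1 above. */
static inline uint8_t crossfade_pixel(uint8_t prev, uint8_t next, uint8_t t8) {
    uint32_t mixed = prev * (uint32_t)(255u - t8) + next * (uint32_t)t8;
    return (uint8_t)((mixed + 127u) / 255u); /* +127 rounds to nearest */
}
```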

One workaround for the cross-fade could be to use three different widgets, but then there needs to be a way to reference the other widgets, and it doesn't feel like a good idea.

The alternative would be to let the renderer have its own state that it can write to. But then there's another problem: if it wants to animate the same properties that the main thread controls, it's hard to determine which version it should use, and I haven't found a clean way of doing that.

There's also a completely different way of doing the synchronization: using events. All changes are sent as events with parameters, which the renderer then processes in the order they were sent. For a single application this might work, but when we send things over some sort of physical link things get more complex, especially if the link can be disconnected and re-connected at any time; then we still need a way to send the whole state. That's why I haven't looked much into that option.

So that was the shortest overview of the synchronization that I could come up with. I would really appreciate it if you have other ideas for the problems presented, but I understand if you don't, especially since my description of the problems probably isn't clear enough.

@fredizzimo
Contributor Author

I just realized that a cross-fade can be implemented simply by duplicating the original window/widget, and then creating a new one (or just showing a hidden one) on top of it containing the following frame. Then it's just a matter of having the right alpha blending mode and adjusting their respective alphas. So it seems like even this could be controlled by just adjusting the windows and parameters.

Alpha blending is not currently supported by uGUI, though it would be fairly simple to add. We might also need more control over the window z-order.

@belak
Contributor

belak commented Feb 24, 2017

What's the status on this? It would be awesome to have this for the Ergodox Infinity LCD displays.

@fredizzimo
Contributor Author

fredizzimo commented Mar 26, 2017

I'm sorry, I haven't been able to make much progress on this for a while. There are several reasons:

  1. For a long time I was stuck on the design, trying to make a more complex system than my available time allows. I think I have a reasonably simple approach now, and a somewhat working base system here, but it needs some final touches before I can make a pull request. And of course it also needs to be integrated with the keyboard itself.
  2. Before I had time to finish that, we started the discussions about the host communication protocol here, so I decided to concentrate on that. It is also almost complete in this branch; I don't remember exactly what's left to do, but there shouldn't be much. It then of course needs to be integrated with the serial_link and the new effect system.
  3. As you can see, I haven't made any progress for almost two months. I have been very busy and stressed with my day job, so I haven't done anything for QMK. I have also spent much of my free time replacing the floors of my house.

Now I hope I can get back to QMK, but I won't make any promises about when this will be done. The good thing is that a few people have got the old visualizer system at least partly working, and I would be very happy if someone could spend some time making that a bit more official until we have the new system.

**Edit:** As described in #1122, I will enable the old visualizer support first.

@stale stale bot added the solved label Nov 21, 2019
@drashna drashna added discussion and removed solved labels Nov 21, 2019
@qmk qmk deleted a comment from stale bot Nov 21, 2019
@jackhumbert
Member

We'd still love to see progress on this, but I know @fredizzimo has been pretty busy with stuff - I'm going to close this for now, but if anyone is interested in discussing/working on this more, we can reopen it.

BlueTufa pushed a commit to BlueTufa/qmk_firmware that referenced this issue Aug 6, 2021
petrovs12 pushed a commit to petrovs12/qmk_firmware_sval that referenced this issue May 21, 2024