nanovg for plugin UIs

DSP, Plugin and Host development discussion.

Post

hey,

curious if anyone is using nanovg for their UIs? looks like it could be really cool.

oli

Post

You mean https://github.com/memononen/nanovg ??
Looks very useful indeed...

(but no, I don't)
We are the KVR collective. Resistance is futile. You will be assimilated.
My MusicCalc is served over https!!

Post

There is also https://github.com/wjakob/nanogui which happens to use nanovg, but I haven't tried either one
~stratum~

Post

You still need an OpenGL framework around it, but from the GUIs I've seen friends build in other domains, they look really neat.

Post

I use it a lot; it's the drawing backend for all the UI on my private play-around, never-finished stuff :lol:
Support for different rendering backends makes it really nice. The GLES backend runs very well on the Raspberry Pi, for instance; no more X11 and stuff needed.. boot your app, create a GLES context and draw.

For desktop you need some kind of window framework underneath to create the OpenGL context and handle the window loop, events & co.
On top of it you might want to run some UI engine (like nanogui - nanovg itself only draws basic shapes).
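
A minimal sketch of what that can look like on desktop, assuming GLFW as the window framework and nanovg's GL3 backend; the GL loader and the translation unit that actually compiles the backend (with NANOVG_GL3_IMPLEMENTATION) are left out, and the window size, colors etc. are made up:

#include <GLFW/glfw3.h>
#include "nanovg.h"
#define NANOVG_GL3 1          // declarations only; backend compiled elsewhere
#include "nanovg_gl.h"

int main()
{
    if (!glfwInit()) return 1;
    GLFWwindow* win = glfwCreateWindow(800, 600, "nanovg demo", nullptr, nullptr);
    glfwMakeContextCurrent(win);
    // ... initialise your GL function loader (glad/glew) here ...

    NVGcontext* vg = nvgCreateGL3(NVG_ANTIALIAS | NVG_STENCIL_STROKES);

    while (!glfwWindowShouldClose(win))
    {
        int w, h, fbw, fbh;
        glfwGetWindowSize(win, &w, &h);
        glfwGetFramebufferSize(win, &fbw, &fbh);   // can differ on hi-dpi displays
        glViewport(0, 0, fbw, fbh);
        glClear(GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

        nvgBeginFrame(vg, (float)w, (float)h, (float)fbw / (float)w);
        nvgBeginPath(vg);
        nvgCircle(vg, 400.0f, 300.0f, 50.0f);      // stand-in for a knob
        nvgFillColor(vg, nvgRGBA(200, 100, 0, 255));
        nvgFill(vg);
        nvgEndFrame(vg);                           // nanovg flushes its GL draw calls here

        glfwSwapBuffers(win);
        glfwPollEvents();
    }

    nvgDeleteGL3(vg);
    glfwTerminate();
    return 0;
}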

Post

done that in a plugin? i guess there is some threading stuff to be dealt with

Post

It's not multithreading-safe.
Look at it as a vector graphic API -> 3D rendering backend API translator.

I have not tried it, but I think it should perfectly fit into a framework like:
https://juce.com/doc/classOpenGLContext
You get a GL context and a repaint callback, it's all you need.
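
An untested sketch of that idea (as said, I haven't tried it): a JUCE component that owns an OpenGLContext and drives nanovg from the render callback. The class name NanoVGEditor is made up, and the nanovg GL3 backend is assumed to be compiled elsewhere. Keeping all nvg* calls inside renderOpenGL() also sidesteps the threading question, since nanovg itself is not thread-safe.

#include <juce_gui_basics/juce_gui_basics.h>
#include <juce_opengl/juce_opengl.h>
#include "nanovg.h"
#define NANOVG_GL3 1                 // declarations only; backend compiled elsewhere
#include "nanovg_gl.h"

class NanoVGEditor : public juce::Component,
                     private juce::OpenGLRenderer
{
public:
    NanoVGEditor()
    {
        context.setRenderer(this);
        context.setContinuousRepainting(true);   // repaint callback every frame
        context.attachTo(*this);
    }

    ~NanoVGEditor() override { context.detach(); }

private:
    void newOpenGLContextCreated() override
    {
        vg = nvgCreateGL3(NVG_ANTIALIAS | NVG_STENCIL_STROKES);
    }

    void renderOpenGL() override   // called on the GL thread; keep all nvg* calls here
    {
        juce::OpenGLHelpers::clear(juce::Colours::black);

        nvgBeginFrame(vg, (float)getWidth(), (float)getHeight(),
                      (float)context.getRenderingScale());
        nvgBeginPath(vg);
        nvgRoundedRect(vg, 10.0f, 10.0f, getWidth() - 20.0f, getHeight() - 20.0f, 6.0f);
        nvgStrokeColor(vg, nvgRGBA(255, 255, 255, 255));
        nvgStroke(vg);
        nvgEndFrame(vg);
    }

    void openGLContextClosing() override
    {
        nvgDeleteGL3(vg);
        vg = nullptr;
    }

    juce::OpenGLContext context;
    NVGcontext* vg = nullptr;
};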

Post

I have a branch of my code using nanoVG in an attempt to have one code base for Mac AND PC. It's truly awful looking right now--8-bit graphics, but without the charm. I've supplemented with a few other bits and bobs of code, but I'm still missing mouse-overs, which I need for tooltips.

Also, mousing is not very responsive at all, making it difficult to turn knobs and drag sliders. Efficiency-wise, it appears to use the same number of CPU cycles as the native GUI.

At this point, I've pretty much abandoned the idea and am steeling myself to learn Windows GDI... :?
I started on Logic 5 with a PowerBook G4 550Mhz. I now have a MacBook Air M1 and it's ~165x faster! So, why is my music not proportionally better? :(

Post

At this point, I've pretty much abandoned the idea and am steeling myself to learn Windows GDI... :?
The problem is not that you don't use GDI, but rather that if you do not know GDI you probably do not know Win32 programming (and the related GUI message-handling mechanism).
GDI is outdated and kept only for compatibility, and there is conflicting information on the web about whether it's still properly hardware accelerated or not.
~stratum~

Post

Efficiency-wise, it appears to use the same number of CPU cycles as the native GUI.
:o

I get about 2-3000 fps on a GTX 1070 when drawing simple texture-based (knob textures) UIs.
GDI can reach about hmm.. 25fps ..?

It hardly drops below 200fps, even when doing lots of antialiased line-drawing.

I would be really astonished if nanovg takes the same amount of CPU cycles as the native GUI (i.e. GDI).
GDI is software drawing, nanovg is hardware drawing.
Actually drawing the scene should not take any CPU cycles with nanovg, since it happens on the GPU, not on the CPU.

What takes CPU cycles with nanovg is calculating the scene geometry.
Especially if you do lots of anti-aliased bezier curve drawing, the computing effort to build the vertex buffers can be significantly higher than actually drawing them.
But you can address that with proper scene management.
i.e. that's the reason why each UI element creates its own "layer" (which is a texture) in my UI. So the geometry is only re-calculated for a single element if it changes; otherwise it's a simple/fast texture blend (=> no CPU cycles, all the work is done by the GPU).
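
A hedged sketch of that layer idea, using nanovg's framebuffer helpers from nanovg_gl_utils.h; the ElementLayer struct and the two helper functions are made up for illustration, and details like hi-dpi scaling are left out:

#include "nanovg.h"
#include "nanovg_gl_utils.h"   // nvgluCreateFramebuffer / nvgluBindFramebuffer

struct ElementLayer
{
    NVGLUframebuffer* fb = nullptr;   // the element's private texture
    int w = 0, h = 0;
    bool dirty = true;                // geometry changed -> needs re-tessellation
};

// Re-draw the element's vector content into its own texture, but only when needed.
void updateLayer(NVGcontext* vg, ElementLayer& layer, void (*drawElement)(NVGcontext*))
{
    if (!layer.dirty)
        return;                       // nothing changed: keep the cached texture
    if (!layer.fb)
        layer.fb = nvgluCreateFramebuffer(vg, layer.w, layer.h, 0);  // NVG_IMAGE_FLIPY may be needed

    nvgluBindFramebuffer(layer.fb);   // render into the layer texture
    glViewport(0, 0, layer.w, layer.h);
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    nvgBeginFrame(vg, (float)layer.w, (float)layer.h, 1.0f);
    drawElement(vg);                  // the expensive bezier / anti-aliasing work
    nvgEndFrame(vg);

    nvgluBindFramebuffer(nullptr);    // back to the default framebuffer
    layer.dirty = false;
}

// Composing is then just one cheap textured quad per element.
void composeLayer(NVGcontext* vg, const ElementLayer& layer, float x, float y)
{
    NVGpaint img = nvgImagePattern(vg, x, y, (float)layer.w, (float)layer.h,
                                   0.0f, layer.fb->image, 1.0f);
    nvgBeginPath(vg);
    nvgRect(vg, x, y, (float)layer.w, (float)layer.h);
    nvgFillPaint(vg, img);
    nvgFill(vg);
}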

Post

PurpleSunray wrote:
Efficiency-wise, it appears to use the same number of CPU cycles as the native GUI.
:o

I get about 2-3000 fps on a GTX 1070 when drawing simple texture-based (knob textures) UIs.
GDI can reach about hmm.. 25fps ..?

It hardly drops below 200fps, even when doing lots of antialiased line-drawing.

I would be really astonished if nanovg takes the same amount of CPU cycles as the native GUI (i.e. GDI).
GDI is software drawing, nanovg is hardware drawing.
Actually drawing the scene should not take any CPU cycles with nanovg, since it happens on the GPU, not on the CPU.

What takes CPU cycles with nanovg is calculating the scene geometry.
Especially if you do lots of anti-aliased bezier curve drawing, the computing effort to build the vertex buffers can be significantly higher than actually drawing them.
But you can address that with proper scene management.
i.e. that's the reason why each UI element creates its own "layer" (which is a texture) in my UI. So the geometry is only re-calculated for a single element if it changes; otherwise it's a simple/fast texture blend (=> no CPU cycles, all the work is done by the GPU).
My GUI is all live-drawn with no prerendered components, so CPU time should be about the same, if not slightly better with OpenGL. My native GUI takes up less than 1% of my CPU, so I can't just eyeball it to see how efficient the coding is. Unfortunately, the drawing quality I've gotten so far from nanoVG, I've seen better from a three-year old with broken, dull crayons. :hihi:

I've got to look again at what I was doing since I've put it away for a couple of months. I usually start with the naive approach to see how far I can get. It helps me learn the whys vs hows faster than trying to do things "the right way". I've just picked my code back up again, but I've been playing with filters, fixed a bug and did some rearranging to add some features. Someday, I'll have a working Windows VST, but not today. I have most of the code slopped in, I just don't have a working Windows compiler yet to do the rest of the stuff.

Unfortunately, most of these API kits are more lines of code than my entire VST, so I tend to shy away from them as they seem like overkill. I think it may be time, however, to get something a little more complete than nanoVG + whatever else I got that I've forgotten.
I started on Logic 5 with a PowerBook G4 550Mhz. I now have a MacBook Air M1 and it's ~165x faster! So, why is my music not proportionally better? :(

Post

My GUI is all live-drawn with no prerendered components, so CPU time should be about the same, if not slightly better with OpenGL.
It really depends on how you set it up ;)
The big difference is that with OpenGL all the pixel-drawing work is off-loaded to the GPU.
But in order to benefit from it, you need to be set up for it (there are some pitfalls with direct hardware rendering).

Example:
If you want to blend 20 textures (like knob buttons) onto the final surface with a transformation applied (i.e. rotation), OpenGL should make a huge difference.

On the CPU this is roughly: read a pixel from the surface, read a pixel from the texture, blend, write to the final surface, get the next pixel ... until all pixels are done. This can be multi-threaded, so your CPU will most likely compute 4/8/16 pixels in parallel, depending on how many cores it has.

On a GPU this is roughly: upload the texture, upload a vertex buffer, send a draw (blend) command to the GPU; the GPU starts computing on 2000+ shader cores in parallel (vs. a handful of CPU cores), then you wait until the final surface is ready.

So this is all about saving CPU cycles ;)

Maybe you mixed up "CPU cycles" with "total runtime"?
Like, the first frame rendered with nanovg will take longer than drawing it in software, simply because you first need to upload textures and prepare stuff.
But as soon as the texture is in video RAM already, blending images should take waaaaaaaay less CPU load (actually it should not cause any CPU load, because all the CPU does is submit 8 vertices (a couple of bytes) and a draw command to the GPU and wait until it is done).
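
A hedged sketch of that knob-texture case: the bitmap goes to video RAM once, and each later frame only submits a few vertices and a draw command per knob. The file name, sizes and layout below are made up:

#include "nanovg.h"

static int knobImage = 0;

void initKnobs(NVGcontext* vg)
{
    // One-time upload to video RAM (the slow "first frame" part).
    knobImage = nvgCreateImage(vg, "knob.png", 0);
}

// Call between nvgBeginFrame()/nvgEndFrame(); the blending itself runs on the GPU.
void drawKnobs(NVGcontext* vg, int count, float angle)
{
    const float size = 48.0f;
    for (int i = 0; i < count; ++i)
    {
        nvgSave(vg);
        nvgTranslate(vg, 30.0f + i * 60.0f, 100.0f);   // place the knob
        nvgRotate(vg, angle);                          // rotation = knob value
        NVGpaint p = nvgImagePattern(vg, -size * 0.5f, -size * 0.5f,
                                     size, size, 0.0f, knobImage, 1.0f);
        nvgBeginPath(vg);
        nvgRect(vg, -size * 0.5f, -size * 0.5f, size, size);
        nvgFillPaint(vg, p);
        nvgFill(vg);
        nvgRestore(vg);
    }
}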

Post

I'm just doing this for the cross-platform benefits. Even if my CPU time suddenly runs to 2% or even 5% of my CPU, I'm still happy. Some commercial GUIs eat about 15%! Like I said, nothing is prerendered, but everything is updated at 60fps (or whatever I have it set to).

Looking at nanogui, I might give it a whirl although it will require me to rewrite my entire GUI engine and still write code to create my specialized widgets.
I started on Logic 5 with a PowerBook G4 550Mhz. I now have a MacBook Air M1 and it's ~165x faster! So, why is my music not proportionally better? :(

Post

syntonica wrote: I'm just doing this for the cross-platform benefits. Even if my CPU time suddenly runs to 2% or even 5% of my CPU, I'm still happy. Some commercial GUIs eat about 15%! Like I said, nothing is prerendered, but everything is updated at 60fps (or whatever I have it set to).
Yeah. I was expecting something like that.
If your CPU needs 2% to render a scene at 60fps, your GPU is basically idle doing the same.
So what you measure is the overhead of OpenGL vs. software drawing.
OpenGL starts to get interesting when your CPU is at, say, 50% to hit 60fps, because then you can free the CPU completely from pixel-drawing work by moving it to the GPU.

But it comes down to the question of why you want a hardware drawing backend in your VST at all.

Polygon count in a UI will probably never reach the level where you absolutely need a GPU to handle it because the CPU can't (this is no 3D open-world scene, just a couple of 2D buttons).
But where you reach the "break-even point" pretty fast is texture blending. The massive number of parallel cores in combination with fragment shaders is what makes the GPU very attractive for that.

So if your code just draws tons of circles live, you won't gain much.
In order to draw that circle, nanovg first needs to calculate the beziers, then tessellate them and send the result for drawing. With some interpolation, you could just as well rasterize that outline in software during tessellation.
So you haven't gained much from using OpenGL in this case.

In order to gain from it, you need some scene management around it.
Example: my nanovg delivers textures into a composer.
nanovg does not compose the window; it draws elements into textures.
The composer then blends the textures into the final window.

My veryComplexSelfDrawnElememt has a draw function that paints 100 million circles when I call it with the force flag or if it needs a repaint. The composer's draw function collects the textures first (each element updates its texture if it has changed), and then blends it all together.
I do not actually have a veryComplexSelfDrawnElememt, only elements that load a texture on their own and add a frame around it, some text or other simple stuff.

So a UI with 100 rotating buttons would render like this:
(CPU, but only the first frame) ButtonElement loads (or draws) the button texture.
(CPU) The UI engine places ButtonElement at 100 positions on the final window, with a rotate transform (creates a vertex buffer).
(GPU) The composer renders the ButtonElement texture 100 times; the CPU can do other stuff in the meantime.
-> 100 rotating buttons render at 3000fps with 1% CPU load
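
A hedged sketch of those three steps, reusing the hypothetical ElementLayer / updateLayer / composeLayer helpers from the layer sketch further up; the Button struct and its fields are likewise made up:

#include <vector>
#include "nanovg.h"

struct Button
{
    ElementLayer layer;            // the button's cached texture
    void (*draw)(NVGcontext*);     // expensive vector drawing, rarely executed
    float x = 0, y = 0, angle = 0; // placement + knob rotation
};

void renderFrame(NVGcontext* vg, std::vector<Button>& buttons, int winW, int winH)
{
    // (CPU, usually only the first frame or when a value changes)
    for (auto& b : buttons)
        updateLayer(vg, b.layer, b.draw);

    // (GPU) composing is just one cheap textured quad per button
    nvgBeginFrame(vg, (float)winW, (float)winH, 1.0f);
    for (auto& b : buttons)
    {
        nvgSave(vg);
        nvgTranslate(vg, b.x, b.y);
        nvgRotate(vg, b.angle);    // rotate the cached texture, not fresh geometry
        composeLayer(vg, b.layer, -b.layer.w * 0.5f, -b.layer.h * 0.5f);
        nvgRestore(vg);
    }
    nvgEndFrame(vg);
}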

Post

I use nanovg + glfw in my modular synth project: https://github.com/austensatterlee/VOSIMSynth

The simple API was a big draw for me, especially since I was planning on drawing most of it on the fly. Once you get to the point where your window and OpenGL context are set up, nanovg is a pretty smooth ride.
