Double Buffering
- KVRAF
- Topic Starter
- 2245 posts since 25 Sep, 2014 from Specific Northwest
I'm working on my GUI design here and am weighing a design decision: which subsystem to use for drawing on the Mac — Cocoa/Quartz or OpenGL?
Way back in the dark ages when IBM mainframes the size of dinosaurs roamed the earth and Ataris had a special chip just for copying large pages of memory, we double-buffered graphics by drawing the page off-screen and then swapping the pointers.
Given the new hotness of video cards and graphics subsystems, is this still even necessary? Adding it in after the fact is trivial, but I'd like to focus on getting it right the first time!
I should add that the GUI will be all drawn with no bitmaps so that it will be easily scalable.
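For anyone who never saw the old-school technique, here's a minimal sketch of pointer-swap double buffering (buffer names and sizes are illustrative, not from any real system):

```objc
// Classic software double buffering: draw a complete frame into the
// back buffer, then swap pointers so the finished frame becomes the
// front buffer. No pixels are copied, only two pointers exchanged.
#include <stdint.h>

enum { WIDTH = 640, HEIGHT = 480 };

static uint32_t bufferA[WIDTH * HEIGHT];
static uint32_t bufferB[WIDTH * HEIGHT];
static uint32_t *front = bufferA;   /* what the display scans out   */
static uint32_t *back  = bufferB;   /* what we draw into off-screen */

static void renderFrame(uint32_t *dst, uint32_t color) {
    for (int i = 0; i < WIDTH * HEIGHT; i++)
        dst[i] = color;             /* stand-in for real drawing */
}

static void presentFrame(void) {
    uint32_t *tmp = front;          /* swap the pointers, not the pixels */
    front = back;
    back  = tmp;
}
```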
I started on Logic 5 with a PowerBook G4 550 MHz. I now have a MacBook Air M1 and it's ~165x faster! So, why is my music not proportionally better?
- u-he
- 28065 posts since 8 Aug, 2002 from Berlin
In Cocoa you tell the system which rectangle you consider "dirty" and it calls you back when it's time to draw. You don't need to worry about double buffering. It says "go!", hands you a device context, and you draw your stuff into the rectangle it identifies. It may sometimes call into your drawing code even if you haven't asked for it, e.g. when windows are moved or something.
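In code terms, a minimal sketch of that model (the class name and colors are just illustrative):

```objc
// A custom NSView: mark regions dirty with -setNeedsDisplayInRect:,
// and Cocoa calls -drawRect: later with a drawing context already set up.
#import <Cocoa/Cocoa.h>

@interface MyPluginView : NSView
@end

@implementation MyPluginView

- (void)drawRect:(NSRect)dirtyRect {
    // Cocoa has already focused the view and set up the context;
    // just draw. dirtyRect may be smaller than the view's bounds.
    [[NSColor darkGrayColor] setFill];
    NSRectFill(dirtyRect);
}

- (void)mouseDown:(NSEvent *)event {
    // After a state change, request a redraw; never draw directly here.
    [self setNeedsDisplayInRect:[self bounds]];
}

@end
```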
With OpenGL you send the things you want drawn to the OpenGL device, and the device will eventually decide when to draw them. That is, to get it right, you let go of control over when it's drawn (you can wait for it to be drawn, but that's not how you're supposed to go about this).
As Quartz supposedly uses OpenGL (or is that Metal nowadays?), the CGContext is probably a wrapper around the OpenGL device. You say what you want drawn, and something below that will eventually draw it.
Both Quartz and OpenGL offer methods to retrieve the drawn pieces as pixel buffers, i.e. as bitmaps. This is extremely slow. Don't do it if you can avoid it.
- u-he
- 28065 posts since 8 Aug, 2002 from Berlin
Oh yes, there used to be one of these ultra-fast bitmap blitting routines in MacOS X... a long time ago.
There's still one on Windows, but for Mac development this was abandoned with the move to 64 bit and the demise of Carbon and QuickDraw.
-
- KVRian
- 1256 posts since 15 Mar, 2007 from Yorkshire, England
At the low level, double buffering still goes on, and in fact sometimes more than two buffers are used.
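As a concrete example, a sketch of requesting a double-buffered context through Apple's (now-deprecated) NSOpenGL API — the driver owns the buffers and the "pointer swap" happens inside the flush call:

```objc
#import <Cocoa/Cocoa.h>
#import <OpenGL/gl.h>

// Ask for a double-buffered pixel format; the driver manages the
// front/back buffers and swaps them in -flushBuffer.
static NSOpenGLView *makeGLView(NSRect frame) {
    NSOpenGLPixelFormatAttribute attrs[] = {
        NSOpenGLPFADoubleBuffer,     // back buffer + front buffer
        NSOpenGLPFAColorSize, 24,
        NSOpenGLPFADepthSize, 16,
        0
    };
    NSOpenGLPixelFormat *fmt =
        [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
    return [[NSOpenGLView alloc] initWithFrame:frame pixelFormat:fmt];
}

// Per frame: draw into the back buffer, then swap:
//   glClear(GL_COLOR_BUFFER_BIT);
//   ... draw ...
//   [[view openGLContext] flushBuffer];  // the modern "pointer swap"
```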
- KVRAF
- Topic Starter
- 2245 posts since 25 Sep, 2014 from Specific Northwest
Cool! Thanks guys for the replies! It looks like I'm thinking too low-level for what's happening these days...
So, I'll just supply my draw routines and call it a day.
One last quick Cocoa-related question: I have the NSView subclass object supplied by the host. Do I just need to get a graphics port/context and I'm good, or do I have an empty container waiting to be filled with another canvas-type object?
Sorry for what's probably a simple question, but I've been away from Cocoa for many years now and after having given up on all of the cross-platform GUI kits and deciding to write my own, my brain is hurting again! Otherwise, it's far less code than I thought it would be since I just need a few drawing routines.
- u-he
- 28065 posts since 8 Aug, 2002 from Berlin
You'd typically derive from NSView to build your own view, and get mouse handling and painting working by overriding the appropriate methods. Then you embed this (your own) NSView into the NSView you're given by the host software.
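Roughly like this (MyPluginView is a hypothetical NSView subclass of your own):

```objc
#import <Cocoa/Cocoa.h>

// Sketch: the host hands you a parent NSView; you create your own
// subclass and add it as a subview that tracks the parent's size.
@interface MyPluginView : NSView
@end

static void attachEditor(NSView *hostView) {
    MyPluginView *editor =
        [[MyPluginView alloc] initWithFrame:[hostView bounds]];
    [editor setAutoresizingMask:NSViewWidthSizable | NSViewHeightSizable];
    [hostView addSubview:editor];
}
```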
- KVRAF
- Topic Starter
- 2245 posts since 25 Sep, 2014 from Specific Northwest
Awesome! Thanks! Y'all are the greatest!
- u-he
- 28065 posts since 8 Aug, 2002 from Berlin
Chris-S wrote: Hi Urs, may I ask which graphics subsystem you are using for Win-OS?
We rolled our own, pretty much. Back in the day I initially used Anti-Grain Geometry before it went GPL, but we have since replaced it with routines that are a lot more specialized and faster. At the end of our drawing process we blit a bitmap using that ultra-fast blitting routine provided by the Windows APIs.
We've planned to refactor the code so that one can have multiple drawing environments, so that we can step by step adopt OpenGL or maybe Cairo. The reason is that we want one system for all platforms, and it needs to be hardware accelerated. Paradoxically, we think the best way to concentrate on one environment is to make multiple available at first.
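One hypothetical way to structure that "multiple drawing environments" idea (none of these names come from u-he's actual code): widgets draw through an abstract interface, and a Quartz, OpenGL, or Cairo backend implements it.

```objc
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

// Abstract drawing backend; concrete classes would wrap Quartz,
// OpenGL, or Cairo behind the same calls.
@protocol DrawingBackend <NSObject>
- (void)beginFrame:(CGSize)size;
- (void)fillRect:(CGRect)rect red:(float)r green:(float)g blue:(float)b;
- (void)endFrame;   // the backend decides how/when pixels hit the screen
@end

// Widget code draws against the protocol, never a concrete API:
static void drawKnob(id<DrawingBackend> gc, CGRect where) {
    [gc fillRect:where red:0.2f green:0.2f blue:0.2f];
}
```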
-
- KVRAF
- 2256 posts since 29 May, 2012
OpenGL on Windows appears to be a gray area in which one sometimes finds lower-quality drivers compared to DirectX (http://stackoverflow.com/questions/3146 ... in-windows). I don't know if this is still the case.
~stratum~
- u-he
- 28065 posts since 8 Aug, 2002 from Berlin
stratum wrote: opengl in windows appears to be one gray area in which one sometimes finds lower quality drivers when compared to directx. I don't know if this still is the case.
That's what I'm afraid of, and why we haven't gone all-OpenGL on all platforms. People who did told me that there are all kinds of pitfalls to be expected.
-
- KVRAF
- 2256 posts since 29 May, 2012
There is an OpenGL emulator for Windows that maps OpenGL calls to DirectX, called ANGLE. Apparently some versions of Qt can be configured to use it (https://wiki.qt.io/Qt_5_on_Windows_ANGLE_and_OpenGL), so it should be good enough for that purpose. The source repository is here if anyone wants to try it: https://github.com/google/angle
- KVRist
- 444 posts since 11 May, 2016 from Serbia
Urs wrote: We've planned to refactor the code so that one can have multiple drawing environments, so that we can step-by-step adopt OpenGL or maybe Cairo.
In my Youlean Loudness Meter plugin all graphics are drawn with Cairo and FreeType. All I can say is that Cairo is really easy to use, it is very fast (Samsung's Tizen uses Cairo), and it is actively developed. Another plus is that you should be able to use the same code for OpenGL and CPU drawing, which makes programming much easier than writing straight OpenGL. Yes, it is not as fast as pure OpenGL, but we are not doing games. The best part of Cairo is that you can scale the GUI with one line of code (if you draw everything as vectors) with a very minimal performance penalty (it depends on the scale ratio, but it should be at most 5% slower than manual scaling).
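The "one line of code" is Cairo's uniform transform — a minimal sketch (rectangle coordinates are just illustrative):

```objc
#include <cairo/cairo.h>

// Apply a uniform scale before drawing, and every vector path,
// stroke width, and gradient follows automatically.
static void drawScaled(cairo_t *cr, double uiScale) {
    cairo_save(cr);
    cairo_scale(cr, uiScale, uiScale);   /* the one line */
    /* all drawing below happens in unscaled "design" coordinates */
    cairo_rectangle(cr, 10, 10, 100, 20);
    cairo_set_source_rgb(cr, 0.9, 0.9, 0.9);
    cairo_fill(cr);
    cairo_restore(cr);
}
```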
Website: https://youlean.co/
- KVRAF
- Topic Starter
- 2245 posts since 25 Sep, 2014 from Specific Northwest
Just wanted to add: never mix C++ and Objective-C. What a horrific nightmare! Header files and cross-dependencies and lions and tigers and bears! It took me a week to draw a square in the window! (When I set up my project I didn't include any frameworks, as I wasn't thinking that far ahead, but apparently you need to add the Cocoa framework if you want to link against it!)
But now that I'm getting back into Objective-C again... it would have been easier and faster to just use Interface Builder and glue the code in.
Thanks again everyone for being here! It gives me hope that I will still have some hair left by the time I'm done. Learning DSP from scratch, c++ and relearning objective-c and cocoa, it's the hardest thing I've ever done.
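For anyone else fighting the C++/Objective-C header tangle, one common way to contain it is the opaque-pointer (pimpl) pattern with Objective-C++ — a sketch with hypothetical names:

```objc
// MyEditor.h — pure Objective-C, safe to #import anywhere.
// The header only forward-declares an opaque struct; no C++ leaks out.
#import <Cocoa/Cocoa.h>

struct EditorImpl;                  // defined only in the .mm file

@interface MyEditor : NSView {
    struct EditorImpl *impl;        // pointer only, no C++ types here
}
@end

// MyEditor.mm — compiled as Objective-C++, so C++ is allowed here:
//   #include <vector>
//   struct EditorImpl { std::vector<float> meterValues; };
// Only .mm files ever see the struct's C++ contents.
```

The win is that any plain Objective-C (or C) file can import the header without dragging in C++ headers or needing to be compiled as Objective-C++.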
-
- KVRAF
- 2256 posts since 29 May, 2012
The topic starter wrote: Learning DSP from scratch, c++ and relearning objective-c and cocoa, it's the hardest thing I've ever done.
The last two look like way too much torture for someone who remembers the blitter (*) chip inside Ataris.
(*) That's what it was called on the Amiga.