"Does C++ Have a Future?"

DSP, Plugin and Host development discussion.

Post

About speed: today Java is not that much slower than C++, as you can see in those benchmarks too (especially with the latest JVMs). The real difference is memory footprint: Java can use 100x the memory required by the same application in C++. Anyway, it is a matter of tools, Google says. They write a lot of things in Java because there are MORE tools available for Java (libraries, helpers and so on). Yes, core applications like Bigtable are written in C++, and they have a lot of open-source projects in C++ for CPU-intensive tasks. But the complex wrapper is written in Java, not in C++. Memory is not a problem any more, and a lot of things nowadays are written in Java or even in JavaScript (this is a sort of new trend; Java is no longer the first choice for many business-logic tasks). Why JavaScript or Java? Because the development time is shorter: you have a very fast compile time, or no compile step at all.

I'll move the discussion in a different direction. I'm a "normal" programmer, not a genius, and I'm incredibly faster on the Java side for the same business-logic tasks, even though I've spent the same amount of hours on both. I see the same kind of improvement with C++ compared to assembler on my side (I'm MUCH slower in assembler than in C++, MUCH slower in C++ than in Java, and MUCH slower in Java than in JavaScript). Just saying. People who say they are exactly the same really surprise me; they are NOT the same at all. If they were the same, there would NOT be a single reason to use Java, which wastes 100 times the memory of C++.
Last edited by Zaphod (giancarlo) on Mon Mar 27, 2017 12:09 pm, edited 1 time in total.

Post

C++ syntax is so complex that a team of geniuses isn't sufficient to make a good IDE for it. It's not just about compile time. IntelliSense in Visual Studio works about 70% of the time. I don't think it's because they've only tested it on toy-sized projects; it's just a difficult job.
~stratum~

Post

stratum wrote:C++ syntax is so complex that a team of geniuses isn't sufficient to make a good IDE for it. It's not just about compile time. IntelliSense in Visual Studio works about 70% of the time. I don't think it's because they've only tested it on toy-sized projects; it's just a difficult job.

Agreed!
On templates it really goes nuts, for example.

Post

I don't rely on IntelliSense for much of anything, although the IDE itself is really great.
And I think you forgot about Python as the wrapper language at Google; it may be used far more than Java.

Post

I forgot to mention Java's B&D approach to error trapping/exception handling. What a colossal waste of my time... I can only think of one case in Java where I could run into true exceptions...

I do like automatic garbage collection in applications where I'm not worrying about speed, per se. In Java, I just force it at certain times when I can afford the little speed bump. Done after a particularly heavy heap workout, the user never notices it anyway--it's just part of the work time of whatever operation the user selected. Java may grab a large heap up front, but with a little management, you can keep it in check. The database I wrote keeps everything in memory for speed reasons, but I still only need about 100 MB for the heap, and that gives me a practical limit of about 100k entries, more than enough for average needs. If you've ever had to deal with Apple's retain/release nonsense prior to ARC, auto GC is a breath of fresh air.

One thing I've noticed about Java programmers is that they love to write wrappers for wrappers and then wrap them in more code. :roll: Meta-programming is fine, but really?

I was originally just pointing out that the basic grammar and iterative/OOP approach were very similar between all these languages. The differences are under the hood, so jumping back and forth between these languages is simple enough, allowing you to choose which feature set you need.

I'd like to see a convergence of them all to a single language with the use of pragmas to select which features are desired--JVM, auto gc, closures, exception handling, etc., rather than relying on third-party crap. I'm rather tired of monolithic languages that demand things be handled in the one-size-fits-nobody way.

It's interesting that someone mentioned the Pascal-like languages vs the C-like languages. I could tell Pascal was a dead end way back in the '80s, but I could never put the issue into words. Pascal is like a C weapon with no bullets loaded. Maybe? Considering it was originally meant as a teaching language, I'm surprised it got as far as it did. I haven't looked at the latest incarnation of Ada, however. It might actually be palatable... Nah... :lol:

As for external libraries, I try to avoid them at all costs. It's code that I didn't write, but will have to maintain if the original author suddenly declines to do so themselves. And don't get me started on licensing issues... :evil:

Anyways, more random thoughts... :D
I started on Logic 5 with a PowerBook G4 550 MHz. I now have a MacBook Air M1 and it's ~165x faster! So, why is my music not proportionally better? :(

Post

Zaphod (giancarlo) wrote: I'll move the discussion in a different direction. I'm a "normal" programmer, not a genius, and I'm incredibly faster on the Java side for the same business-logic tasks, even though I've spent the same amount of hours on both. I see the same kind of improvement with C++ compared to assembler on my side (I'm MUCH slower in assembler than in C++, MUCH slower in C++ than in Java, and MUCH slower in Java than in JavaScript). Just saying. People who say they are exactly the same really surprise me; they are NOT the same at all. If they were the same, there would NOT be a single reason to use Java, which wastes 100 times the memory of C++.
They are not the same.. tell that to whoever says they are the same.
ASM vs C vs Java vs JavaScript are completely different levels of abstraction. More abstraction means easier coding.. like, try doing this in C at the same speed (meaning without memcpy):
(node.js)

Code:

var buffer1 = new Buffer ([some_data]);
var buffer2 = new Buffer ([some_more_data]);
var newBuffer = Buffer.concat([buffer1, buffer2]);
console.log (newBuffer);
In C you need to implement everything that sits underneath a node.js Buffer yourself. Buffer.concat in no-brainer C is malloc + memcpy. If you want it to be fast, you need Buffer objects that can be chained into a linked list representing one big buffer, an iterator for reading, maybe also a splice function, etc. pp... that is a hell of a lot of typing (see the rough C++ sketch below). So the JScript coder will laugh at you.. HA! SEE!! Less code, and it even runs faster than your C crap..
Now ask the JScript coder again to run that piece of code on 50 blocks in parallel. Erm.. wait.. parallel.. JScript?
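For illustration only, here is a minimal C++ sketch of that chained-buffer idea (all names hypothetical, not taken from node.js or any real library): concat just links blocks instead of copying them, and an iteration helper walks the chain as if it were one contiguous buffer.

Code:

// Copy-free "concat": blocks are chained instead of memcpy'd together.
#include <cstddef>
#include <cstdio>
#include <vector>

struct ChainedBuffer {
    std::vector<const unsigned char*> blocks; // borrowed pointers to blocks
    std::vector<std::size_t>          sizes;  // size of each block

    void append(const unsigned char* data, std::size_t size) {
        blocks.push_back(data);               // O(1): no copy, just chain it
        sizes.push_back(size);
    }

    // "concat" is just appending the other buffer's chain.
    void concat(const ChainedBuffer& other) {
        for (std::size_t i = 0; i < other.blocks.size(); ++i)
            append(other.blocks[i], other.sizes[i]);
    }

    // Visit every byte as if this were one contiguous buffer.
    template <typename Fn>
    void forEachByte(Fn fn) const {
        for (std::size_t b = 0; b < blocks.size(); ++b)
            for (std::size_t i = 0; i < sizes[b]; ++i)
                fn(blocks[b][i]);
    }
};

int main() {
    const unsigned char a[] = {1, 2, 3};
    const unsigned char b[] = {4, 5};

    ChainedBuffer buf1, buf2;
    buf1.append(a, sizeof a);
    buf2.append(b, sizeof b);

    buf1.concat(buf2);                        // no memcpy happened
    buf1.forEachByte([](unsigned char x) { std::printf("%d ", x); });
    std::printf("\n");                        // prints: 1 2 3 4 5
}

And this still has no splice, no ownership or lifetime handling, etc., which is exactly the "typing work" the post is talking about.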

These are just completely different use cases, and a one-fits-all language is the worst idea ever, because nothing fits all.
It has been tried several times with all kinds of _UNSAFE_ constructs inside managed languages, and it always ended up as an unusable mess.

Post

The point is not what you can achieve on the performance side but on the programming side.
For multithreaded stuff, node.js is "today" more than good enough, especially if the server itself is part of the picture (i.e. a node in a ring). Or Erlang.
Nobody will tell you that Java (or derivatives like Scala) should replace C++ in game engines or real-time embedded systems. But the current diffusion of Java explains everything. And the increasing interest in JavaScript for tasks where, not long ago, you were using assembler, then C, then C++ and finally Java tells you even more.

I would never write a plugin in Java today, but I did write one in assembler 15 years ago. I would not be surprised if you end up using Java or JavaScript for similar tasks in 2030, if, and I keep reiterating "if", we are still programmers in 2030. Yes, the garbage collector is still an issue today, but the JVM keeps getting smarter, computers keep getting faster, and people just want simple things to maintain.

That said, the whole discussion about the future is a waste of time: the singularity is close, and the first machine-learning applications dedicated to auto-programming and auto-debugging are already available. They will program in whatever language is at hand, without any human at all.

Post

Guillaume Piolat wrote:I've written about this.
https://www.auburnsounds.com/blog/2016- ... ntime.html

You can circumvent anything; in particular, for plugins it made no real difference to run with the D GC in UI threads only vs removing the GC altogether (which is what I do now, but for macOS compatibility and memory usage reasons, not perf). In D the GC sits "atop" the otherwise RAII destructors of stack objects. It calls destructors for heap objects too, but really shouldn't even try. That makes the whole story harder to explain than C++-style resource management (which is possible in D, just not what happens by default), and those non-deterministic calls produce accidental correctness.

Fortunately D has imho other things going for it.
Yes, your article was actually what I was writing about :) D indeed does. I'm not so sure about your experiments, however: if you're loading one plugin, then yes, nobody will notice the difference. But again, if you're pushing a system to its limits you will start to notice problems - consider 200 different instanced plugins using slightly different GC versions/algorithms, traversing the heap and constantly attaching and detaching domains. You can develop concurrent GCs, but they provably use many times more memory (and CPU).

I of course wouldn't be mentioning this problem if I hadn't struggled so much with it myself.
Zaphod (giancarlo) wrote:Memory is not a problem any more
This depends entirely on context, and any scientifically oriented programmer will of course tell you that the only interesting factor is scale. You can make a "throw-more-hardware-at-it" argument for any small toy program, but here we're specifically in a context where we don't control the hardware, and the user expects to push the system to 95% without running into real-time trouble.

Also about development time:
The fact that I may be more productive writing Word/Excel macros for general-purpose tasks than writing C++ doesn't mean it's a good idea. And I'll wager that in our context, development time is far less important than quality, consistent speed, etc.
syntonica wrote:I'm rather tired of monolithic languages that demand things be handled in the one-size-fits-nobody way
It's rather funny you would say that, since C++'s philosophy is specifically not that - you can write in functional, imperative, OO, generic paradigms etc., although most people never really utilize (or discover) that.
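As a trivial illustration of that multi-paradigm point (a made-up sketch, not from any of the posts above): the same task written three ways, all in plain standard C++.

Code:

// The same "scale a buffer by a gain" task in three of C++'s paradigms.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// Imperative / procedural
void scale_loop(std::vector<float>& v, float gain) {
    for (std::size_t i = 0; i < v.size(); ++i) v[i] *= gain;
}

// Generic: works for any container of any multipliable element type
template <typename Container, typename T>
void scale_generic(Container& c, T gain) {
    for (auto& x : c) x *= gain;
}

// Functional style with <algorithm> and a lambda
void scale_functional(std::vector<float>& v, float gain) {
    std::transform(v.begin(), v.end(), v.begin(),
                   [gain](float x) { return x * gain; });
}

int main() {
    std::vector<float> v{1.0f, 2.0f, 3.0f};
    scale_loop(v, 0.5f);
    scale_generic(v, 2.0f);
    scale_functional(v, 0.25f);
    for (float x : v) std::printf("%g ", x);  // 0.25 0.5 0.75
    std::printf("\n");
}

Nothing forces you to pick one of these styles for the whole program; that is rather the point.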
Zaphod (giancarlo) wrote:stratum wrote:
C++ syntax is so complex that a team of geniuses isn't sufficient to make a good IDE for it. It's not just about compile time. IntelliSense in Visual Studio works about 70% of the time. I don't think it's because they've only tested it on toy-sized projects; it's just a difficult job.
Agreed!
On templates it really goes nuts, for example.
Annoying, yes, but expected: inside an uninstantiated template the compiler literally has no information about the context or the types, hence there's nothing to display. Concepts should fix this. I would say it's still better than not being able to write a generic numeric class in C# because you cannot prove to the compiler that a binary operator exists for some type T. Of course, in Java you wouldn't even have the chance, as there's no operator overloading. Except for strings, because that was just too inconvenient.
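For what it's worth, here is a rough sketch of what that fix looks like with C++20 concepts (a hypothetical example, not from the post): the constraint spells out which operations T must support, so both the compiler and the IDE have something concrete to work with inside the template.

Code:

// C++20: a concept declares which operations a type T must support.
#include <concepts>
#include <cstdio>

template <typename T>
concept Addable = requires(T a, T b) {
    { a + b } -> std::convertible_to<T>;
};

// Inside this template, tooling at least knows operator+ exists for T.
template <Addable T>
T sum3(T a, T b, T c) {
    return a + b + c;
}

int main() {
    std::printf("%d\n", sum3(1, 2, 3));        // 6
    std::printf("%g\n", sum3(1.5, 2.0, 0.5));  // 4
    // sum3("a", "b", "c");  // error: const char* does not satisfy Addable
}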

Post

Guillaume Piolat wrote:For example Delphi had inline assembly
Not only "had" - Delphi still has an inline assembler, in both the 32- and 64-bit versions, compared to, for example, Visual Studio, where it doesn't work for x64.

An inline assembler exists in Free Pascal / Lazarus too, again for both x86 and x64, and on ARM as well (at least 32-bit, because I haven't had a chance to check the 64-bit ARM version yet).

Post

Inline assembly is a very bad choice in almost every place it has ever been used. You can still link assembly objects and call into those functions on amd64 via Visual Studio, which is always a far more practical solution where assembly is still needed.

The use cases that made inline assembly a practical solution just don't exist anymore.
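A minimal sketch of that split (hypothetical names; the .asm would be assembled separately, e.g. with ml64 /c, and the resulting object added to the link - the C++ file alone won't link without it):

Code:

// main.cpp - calls a routine implemented in a separately assembled file.
//
// add_u64.asm (assembled separately, not inline):
//
//     .code
//     add_u64 PROC        ; Windows x64 ABI: args in RCX, RDX; result in RAX
//         mov rax, rcx
//         add rax, rdx
//         ret
//     add_u64 ENDP
//     END
//
#include <cstdint>
#include <cstdio>

extern "C" std::uint64_t add_u64(std::uint64_t a, std::uint64_t b);

int main() {
    std::printf("%llu\n", (unsigned long long)add_u64(40, 2));  // 42
}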
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

aciddose wrote:Inline assembly is a very bad choice in almost every place it has ever been used. You can still link assembly objects and call into those functions on amd64 via Visual Studio, which is always a far more practical solution where assembly is still needed.

The use cases that made inline assembly a practical solution just don't exist anymore.
I just stated some facts about the inline assembler in Delphi / Free Pascal / VS. I didn't write my opinion, and I certainly didn't ask for your opinion, nor do I intend to spend a single second of my precious time arguing about it.

Post

Mayae wrote:Yes, your article was actually what I was writing about :) D indeed does. I'm not so sure about your experiments, however: if you're loading one plugin, then yes, nobody will notice the difference. But again, if you're pushing a system to its limits you will start to notice problems - consider 200 different instanced plugins using slightly different GC versions/algorithms, traversing the heap and constantly attaching and detaching domains. You can develop concurrent GCs, but they provably use many times more memory (and CPU).
Thanks Mayae for a reasonable discussion.

For me, at this point, the cost was very precisely:
- 2x increased memory usage (would be a problem for < 100 instances in 32-bit). This is probably because of false pointers; memory allocated by the PNG decoder did not get reclaimed.
- about 120 mutex locks per second (graphics timer threads attached, then detached on exit).

However, the mutex locks were uncontended, so when I removed the GC (after a long period of reducing its usage) no speed improvement was measured. At 200 instances, yes, that many memory barriers at once would probably create more problems. It felt cleaner to remove them for that reason.

There are no other costs AFAIK for the D GC; it doesn't create threads, and it doesn't run without allocation. Of course I prefer it when it's not there :wink:
Mayae wrote:I of course wouldn't be mentioning this problem if I hadn't struggled so much with it myself.
Well, I won't teach you anything by saying that not all GCs are the same. The same word "GC" covers very different characteristics. The one I've been talking about is not pay-as-you-go but rather "stop-paying-as-you-remove".
When I was making small games in JavaScript, it was near impossible to avoid GC pauses (as of 2011). Another time I was dispatched to a UK TV headquarters to fix stutters in video playback. The client was blaming the video output of our appliance (problems in video are usually timestamps), when in reality the C# player they were developing had GC pauses. :)
So I'm well aware how much pain they can inflict.
Check out our VST3/VST2/AU/AAX/LV2:
Inner Pitch | Lens | Couture | Panagement | Graillon

Post

The worst thing about C++ is that some crazy coders love to abstract the f**k out of it, as if programming is some kind of 'cleverness' test.
It isn't, and try debugging that obfuscated mess!

It's NOT about the journey, guys; it's just about getting there - by any means necessary.
You want things to run quickly and the code to be readable - use normal OO C++. You'll be thankful for it in the future.

Post

quikquak wrote:The worst thing about C++ is that some crazy coders love to abstract the f**k out of it, as if programming is some kind of 'cleverness' test.
The mess is not created by cleverness wannabes; it's created by the sheer size of the codebase. That's why such recommendations often don't work. Reality is something different. The "leave that work now, continue with this stuff, which is our current priority" kind of management practice is another important problem.
~stratum~

Post

quikquak wrote:The worst thing about C++ is that some crazy coders love to abstract the f**k out of it, as if programming is some kind of 'cleverness' test.
It isn't, and try debugging that obfuscated mess!
I have never understood this.. why does someone think he is a good programmer because he writes complex code?
I know that type of dev too.. very proud of the new template-meta thing that looks so f*cking complex for doing the simplest things. Code that has more <> :: . and -> than normal ASCII chars is a nightmare to me.

This is what I like about good C code compared to C++... the logic is in the data model, not in the code. Like.. look at the Linux kernel.. 90% of the code is just assigning pointers around. You don't understand anything from looking at it.. until you find out that you don't need to look at the function but at the struct... then things suddenly get super easy.
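A tiny made-up illustration of that style (hypothetical names, written as C-style C++, nothing like real kernel code): the "logic" lives in which functions the struct points at; the generic code just follows the pointers.

Code:

// The "logic lives in the struct" style: a device is a table of function
// pointers, and the generic code only dereferences them.
#include <cstdio>

struct device_ops {
    void (*open)(const char* name);
    int  (*read)(void);
};

// Two implementations
static void null_open(const char* name) { std::printf("open %s (null)\n", name); }
static int  null_read(void)             { return 0; }

static void four_open(const char* name) { std::printf("open %s (four)\n", name); }
static int  four_read(void)             { return 4; }

// The data model: which ops a device uses is plain data.
static const device_ops null_dev = { null_open, null_read };
static const device_ops four_dev = { four_open, four_read };

// Generic code: just pointer chasing; the behaviour was decided above.
static int use(const device_ops* dev, const char* name) {
    dev->open(name);
    return dev->read();
}

int main() {
    std::printf("%d\n", use(&null_dev, "dev0"));  // 0
    std::printf("%d\n", use(&four_dev, "dev1"));  // 4
}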
