Normally I know better than to jump into silly arguments almost a week late, but what the hell. I've got 40 minutes to kill before a meeting, and then I can get on with meaningful work.
Actually I think C# is VB with a silly hat
And I think C++ is assembly in a cheap suit.
Fundamentally, back in the dark ages, people wrote code directly in the instruction set the hardware understood, and that was fine, because it was all we had.
Then some bright spark invented the high-level language, and all the people who'd spent days of their lives learning to program in assembly stood around tutting scornfully and saying it would never catch on, because everyone knew that if you wanted your program to perform well you had to write directly in assembly. Except, you know, within a decade or so compiler design had advanced to the point that the average compiler really could optimise code better than the average programmer could. The conveniences of HLLs far outweighed the drawbacks, and everyone recognised the people who still programmed in assembly for what they were - grognards who didn't want to let go of what they'd learned, or who liked feeling superior because they understood the computer at a lower level than everyone else.
Then, a little while later, some clever guy invented memory-safe, garbage-collected runtime environments and JIT bytecode compilers and all the other things C# and Java rely on, and a lot of the C++ programmers are standing around tutting scornfully and saying it'll never catch on, because everyone knows that if you want your program to perform well you have to write code with pointers and manual memory allocation and all that guff. They've learned C++, and because it's easier to code in C# they presume it can't be as good... or they like the feeling of superiority they get from knowing things C# developers don't, or maybe they genuinely feel they can't get the job done unless they're free to create access violations at will. The amount of bollocks spouted in this thread alone demonstrates that a good number of them haven't even bothered to learn how it works before deriding it.
Now, there was a bit missed off the beginning of the story. Right at the beginning, people didn't even have assembly - they had to build hardware to do what they wanted. And people still do that, for a vanishingly small set of applications, like the kind of hardware you get in GPUs. And people still write assembly in some small, specialised set of circumstances, because it genuinely is the best answer sometimes. And I have no doubt that people will continue to use direct-memory-manipulation high-level languages like C++ for an ever-smaller set of applications as we go forward. C# apps run on top of a layer of C++ code, which quite possibly runs on top of some hand-crafted assembler, which runs on purpose-built hardware.
But I also quite seriously expect that - while it may take another ten years - serious development will increasingly use languages and runtimes like C# over languages like C++, because the performance loss genuinely is negligible a lot of the time (in fact, because the GC allocates large chunks of memory in advance and hands them out with little more than a pointer bump, allocation-heavy code can sometimes outperform natively-compiled C++), and because the advantages to the programmer truly are great and useful. When those advantages outweigh the disadvantages in a field, that field transitions to the new thing. It's already happening in business software, and I see it as only a matter of time for games. Maybe it won't be C#, maybe it'll be some other language/runtime that comes along later, but I'm pretty sure it'll happen.
And ten/fifteen years after that, some smart-arse will come up with an even better way of coding, and I'll be writing derisive posts to Internet forums telling kids that it'll never catch on, because everyone knows that the GC/bytecode-JIT model is best for performance.