Topic: The grumpy old programmer room
oahda
« Reply #4860 on: July 04, 2015, 10:58:02 AM »

Re C++ and initialisation:

Quote
I'd love to see default initialisation in C++. Ideally I think the default should be zero initialisation with uninitialised variables requiring specific declaration. There are some significant barriers to making this happen though.


Check C++11's class member initialization (at definition, not ctor) and default initialization via bracing:
Quote
An empty pair of braces indicates default initialization. Default initialization of POD types usually means initialization to binary zeros, whereas for non-POD types default initialization means default construction:
Code:
//C++11: default initialization using {}
int n{}; //zero initialization: n is initialized to 0
int *p{}; //initialized to nullptr
double d{}; //initialized to 0.0
char s[12]{}; //all 12 chars are initialized to '\0'
string s{}; //same as: string s;
char *p=new char [5]{}; // all five chars are initialized to '\0'
Code:
class C {
   int x=7; //class member initializer
public:
   C();
};
http://www.informit.com/articles/article.aspx?p=1852519


I'm in love with C++11

That's about as verbose as actually writing 0 manually, tho, and it still has to be remembered, so I really don't see the difference TBH. Tongue

Maybe I can at least shorten for loops from i{0} to i{}. Durr...?
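For what it's worth, i{} does work there; a trivial C++11 sketch just to illustrate the point (nothing project-specific assumed):
Code:
#include <iostream>

int main() {
    int total{};                     // zero-initialized accumulator
    for (int i{}; i < 5; ++i) {      // i{} is the same as i{0} here
        total += i;
    }
    std::cout << total << '\n';      // prints 10
}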
« Last Edit: July 04, 2015, 12:27:20 PM by Prinsessa »

InfiniteStateMachine
« Reply #4861 on: July 04, 2015, 12:04:13 PM »

I've begun just memsetting structs to zero now, because most of the time I can't use C++11 due to lack of compiler support. Sad

To keep up with the times, though, I'm working on my own project in which I'm trying to use modern conventions whenever possible. I've been reading Scott Meyers' new rules book for C++11, and it's pretty crazy how much this language has grown. It probably makes it more important than ever to have a development-practices doc for C++ projects.

Everything I'm reading is cool, but a lot of the subtle edge cases are scary. std::thread is an interesting one. I'm not sure I like RAII being used for the threading mechanism. In most cases it's fine, but there are times when I just want to kick off a thread inside a function and don't need to keep a handle on it. I know there's the detach method, but all my reading says not to use it, presumably because it defeats the point of RAII.
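One common middle ground is a tiny RAII wrapper that joins in its destructor, so a fire-and-forget thread still gets cleaned up deterministically without calling detach. A minimal C++11 sketch (the ThreadJoiner name is just illustrative, not from any particular library):
Code:
#include <iostream>
#include <thread>

// Destroying a joinable std::thread calls std::terminate, so this wrapper
// guarantees join() runs when the owning scope ends.
class ThreadJoiner {
public:
    explicit ThreadJoiner(std::thread t) : t_(std::move(t)) {}
    ~ThreadJoiner() { if (t_.joinable()) t_.join(); }
    ThreadJoiner(const ThreadJoiner &) = delete;
    ThreadJoiner &operator=(const ThreadJoiner &) = delete;
private:
    std::thread t_;
};

void doWork() {
    ThreadJoiner worker{std::thread([] {
        std::cout << "background work\n";
    })};
    // ... do other things on this thread ...
}   // worker joins here instead of being detached or terminating the program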


oahda
« Reply #4862 on: July 04, 2015, 12:28:39 PM »

What's RAII?

Layl
« Reply #4863 on: July 04, 2015, 12:56:41 PM »

RAII is "Resource Acquisition Is Initialization": the concept of having your constructor be the point where a resource is acquired and initialized, and your destructor be the point where it's cleaned up. The reason for doing this is that it prevents you from accidentally leaving orphaned resources.
InfiniteStateMachine
« Reply #4864 on: July 04, 2015, 02:00:36 PM »

yeah it's basically a fancy way to say

Code:
struct Widget {};  // placeholder resource type

class Owner {
public:
    Owner()  { mSomePointer = new Widget(); }  // acquire in the constructor
    ~Owner() { delete mSomePointer; }          // release in the destructor
private:
    Widget *mSomePointer;
};
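(In C++11 terms the same ownership pattern is usually spelled with a smart pointer rather than a raw new/delete pair; a minimal sketch, reusing the placeholder Widget from above:)
Code:
#include <memory>

struct Widget {};

class Owner {
public:
    Owner() : mSomePointer(new Widget()) {}  // acquired in the constructor
    // no destructor needed: std::unique_ptr releases the Widget automatically
private:
    std::unique_ptr<Widget> mSomePointer;
};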

ThemsAllTook
« Reply #4865 on: July 04, 2015, 10:42:49 PM »

I'm working on a collision system. Intersection tests between pairs of objects are rigorously unit tested for just about every conceivable case, and I have a test harness application that lets me drag them around and see how they would collide under arbitrary circumstances. After polishing it up and tying off all the loose ends, everything appears to be pure perfection.

I put together a small demo with 20 balls bouncing around inside a square, and EVERYTHING EXPLODES.

Fuck.

J-Snake
« Reply #4866 on: July 05, 2015, 04:02:51 AM »

Quote
I'm working on a collision system. Intersection tests between pairs of objects are rigorously unit tested for just about every conceivable case, and I have a test harness application that lets me drag them around and see how they would collide under arbitrary circumstances. After polishing it up and tying off all the loose ends, everything appears to be pure perfection.

I put together a small demo with 20 balls bouncing around inside a square, and EVERYTHING EXPLODES.

Fuck.

The problem is, you don't reach perfection by trying to unit test every possibility. Perfection can only be reached by fundamentally understanding the scalability properties of your physics system and its numerical implications. Only that will grant you perfection. When you reach that level of understanding, you won't need to unit test everything.

ThemsAllTook
« Reply #4867 on: July 05, 2015, 02:11:12 PM »

Quote
The problem is, you don't reach perfection by trying to unit test every possibility. Perfection can only be reached by fundamentally understanding the scalability properties of your physics system and its numerical implications.

Unit tests help me do that. I understand their limitations just fine.

BorisTheBrave
« Reply #4868 on: July 05, 2015, 03:20:31 PM »

Energy conservation? If each collision is generally ok, but adds a bit of energy to the system, then overall things would blow up.

Numerical accuracy is pretty hard to unit test.

Quote
Perfection can only be reached by fundamentally understanding the scalability properties of your physics system and its numerical implications. Only that will grant you perfection. When you reach that level of understanding, you won't need to unit test everything.

Very zen, but not really helpful. The same could be said of all tests. Tests are there because we are human, and true perfect understanding is not within reach. You might as well say "once you understand something well enough there are no bugs in your code, so there is no need to test". Well, duh.
ThemsAllTook
« Reply #4869 on: July 05, 2015, 03:40:30 PM »

I've tracked down the problem now. The entire system is using fixed point math for everything, and it's giving me a chance to get to know the limitations of fixed point much more closely than I did before. I've had to work around about 5 separate instances now where either the precision or the range of one of my numbers is inadequate, so I have to fudge it and do the calculation some other way to avoid the problem.

If floating point calculations were consistent across architectures, this would all be so much simpler...
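(To make the precision/range issue concrete: in a 16.16 format the usable range is only about ±32768, so even an innocent squared length overflows long before the inputs look large. A generic sketch of the kind of fudge involved, widening intermediates and rescaling; purely illustrative, not code from the project:)
Code:
#include <cstdint>

typedef int32_t fixed16_16;   // 16.16 fixed point, usable range roughly +/-32768

// x*x already exceeds that range once |x| passes ~181, even though x itself
// fits comfortably. One fudge: drop some low bits first, widen the intermediate
// to 64 bits, and have the caller remember the result is scaled by 2^(2*shift).
inline fixed16_16 lengthSquaredScaled(fixed16_16 x, fixed16_16 y, int shift) {
    int64_t sx = x >> shift, sy = y >> shift;
    return (fixed16_16)((sx * sx + sy * sy) >> 16);
}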

J-Snake
« Reply #4870 on: July 05, 2015, 04:13:24 PM »

Quote
Quote
Perfection can only be reached by fundamentally understanding the scalability properties of your physics system and its numerical implications. Only that will grant you perfection. When you reach that level of understanding, you won't need to unit test everything.

Very zen, but not really helpful.

Nothing zen here, just a sharp analytical mind.

Quote
The entire system is using fixed point math for everything, and it's giving me a chance to get to know the limitations of fixed point much more closely than I did before.

See? That's what you should understand in the first place.

ThemsAllTook
« Reply #4871 on: July 05, 2015, 05:00:06 PM »

Quote
Nothing zen here, just a sharp analytical mind.

Saying that doesn't make it true. It actually implies the opposite.

J-Snake
« Reply #4872 on: July 05, 2015, 05:22:08 PM »

You should understand the exact limitations of fixed point in the first place, to avoid the complications that can arise from them.

http://forums.tigsource.com/index.php?topic=35880.0

Dacke
« Reply #4873 on: July 05, 2015, 05:42:51 PM »

Have a little more trust, it is well known that J-Snake writes bug-free code:
http://forums.tigsource.com/index.php?topic=28284.0

gimymblert
« Reply #4874 on: July 06, 2015, 07:03:13 AM »

This reminds me of a discussion about Dijkstra the man, about European vs. American programming style and how the American style won out. Programmers in Europe used to see programming as mathematics and felt they had to prove their code correct (which Dijkstra did when he proposed his famous pathfinding algorithm), the way you have to prove a mathematical theorem (not sure about the term here lol). J-Snake seems to follow that tradition. The American style was more results-oriented and saw that approach as a quirk: until it's broken, no need to fix it. Dijkstra lamented this mindset in an interview.

gimymblert
« Reply #4875 on: July 06, 2015, 07:06:57 AM »

Quote
“Real mathematicians don't prove”

Roughly speaking, there are two ways in which people try to reason about programs; I shall distinguish them as “the postulational method” and “the operational method”.

The first method is called “postulational” because it postulates how the program text and the specification define the lemmata to be proved in order to show that the program meets the specification. (In effectively discharging this proof obligation, the predicate calculus has shown itself to be an indispensable tool.) The postulational method treats the program text as a mathematical object in its own right, i.e. semantic equivalence of two programs means that they meet the same specifications.

The second method is called “operational” because it tries to analyse what computations could be evoked under control of the program and to establish that each possible computation is compatible with the given specification. It relies on a computational model with respect to which the program text is interpreted as executable code.

The tragedy of today's world of programming is that, to the extent that it reasons about programs at all, it does so almost exclusively operationally. I call this a tragedy because, from a purely technical point of view, the operational method is by several orders of magnitude inferior to the postulational one. With growing size or sophistication of the program, the operational argument quickly becomes impossible to carry through, and the general adherence to operational reasoning has to be considered one of the main causes of the persistence of the software crisis.

A possibly very fundamental flaw of the operational method is that it begs the question how to reason about algorithms, because it translates the possible effects of a given algorithm into those of another one, viz. the program interpreter (i.e. the abstract machine that underlies the computational model). We shall not pursue that potentially severe shortcoming here. Our current concerns are much more pragmatic: by admitting —or should we say: by generating?— the possible computations into our considerations we open the door to a combinational explosion, the effect of which quickly defies exhaustive analysis. [Remember the archetypical programmer's excuse for a bug “Oh, but that was a very special case.”!] Instead of finding out how to cope with the effects of such combinational explosions, it is much more effective to prevent the combinatorial explosion from occurring in the first place; this is what the postulational method achieves by not taking into account that the program text admits the interpretation of executable code. The postulational method deals with the program text as a parsed but otherwise uninterpreted formula.

[...]


Quote
prof.dr.Edsger W.Dijkstra
Department of Computer Sciences
The University of Texas at Austin
Austin, TX 78712-1188
USA

More here
https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EWD1012.html


So it may just be that the ongoing discussion is a cultural clash.


EDIT:
Because relevant

Quote
As soon as the postulational method began to be forcefully advocated, it met equally forceful opposition, all of which was quite predictable. I mention, by way of illustration

(0) It is of a “theoretical level” that “places it beyond the scope of most amateurs”, or: “but that would require a lot of education”. (Standard answers varying from “I never claimed that amateurs should be able to program well.” to “Well, education is my business.”.)

(1) The postulational method is of no relevance for the real world, for real programmers don't think that way. (Standard counterquestion: “Do you mean to say that I am a virtual programmer?”.)

(2) It cannot be any good because backward reasoning and weakest preconditions are counterintuitive. (Standard answer: “If a simple calculus can achieve what is so ‘counterintuitive’ that it is beyond the unaided mind, so much the better for that simple calculus.”.)

(3) It may be okay for toy programs, but you'll never be able to apply it in the case of real programs. (Standard answer: “Yes, scaling up is a problem, but the operational argument becomes much sooner impossible than the postulational one.”.)

(4) By imposing such strict logical constraints, you stifle the programmer's creativity. (Various answers are possible, such as “Unbridled creativity has done more harm than good.” and “If the programmer really wants to be an impressionistic poet, he is in the wrong business.”. For a more sophisticated audience you can explain that, at each stage of the design, the explicit statement of the designer's obligations is at the same time an explicit statement of his freedom, thereby inviting him to explore alternative designs the traditional programmer almost certainly overlooks. If time permits, you can give an example.)

(5) Etc.

The moral of the story is clear: real programmers don't reason about their programs, for reasoning isn't macho. They rather get their substitute for intellectual satisfaction from not quite understanding what they are doing in their daring irresponsibility and from the subsequent excitement of chasing the bugs they should not have introduced in the first place.


EDIT2:

Just adding that you should read it in its entirety; I quoted the more sensationalist parts to spark thought, but the whole letter is interesting.
« Last Edit: July 06, 2015, 07:25:36 AM by Jimym GIMBERT »

InfiniteStateMachine
« Reply #4876 on: July 06, 2015, 08:09:50 AM »

Quote
I've tracked down the problem now. The entire system is using fixed point math for everything, and it's giving me a chance to get to know the limitations of fixed point much more closely than I did before. I've had to work around about 5 separate instances now where either the precision or the range of one of my numbers is inadequate, so I have to fudge it and do the calculation some other way to avoid the problem.

If floating point calculations were consistent across architectures, this would all be so much simpler...

How are you doing this? Considering modern processors have no native support for fixed point, are you typedefing an int and then defining all the operations you can perform on it?

Sik
« Reply #4877 on: July 06, 2015, 08:28:26 AM »

Quote
That's about as verbose as actually writing 0 manually tho and it still has to be remembered so I really don't see the difference TBH. Tongue

Try doing this to an array.

But yeah, this is inherited from C if I recall correctly: any values not explicitly written in the initializer list get initialized to 0.
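A quick illustration of that, assuming C++11 (the zero-filling of unnamed elements works the same way in C aggregate initializers):
Code:
struct Ball { float x, y, vx, vy; };

int positions[256] = {};      // all 256 elements zero-initialized
int partial[8]     = {1, 2};  // the remaining six elements become 0
Ball balls[32]     = {};      // every member of every Ball is zeroed

// Compare with doing it by hand: a loop or a memset per array, and it's
// easy to forget one when a new array or member gets added.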
ThemsAllTook
« Reply #4878 on: July 06, 2015, 09:10:54 AM »

Quote
How are you doing this? Considering modern processors have no native support for fixed point, are you typedefing an int and then defining all the operations you can perform on it?

Yep. http://ludobloom.com/svn/StemLibProjects/gamemath/trunk/source/gamemath/FixedPoint.h http://ludobloom.com/svn/StemLibProjects/gamemath/trunk/source/gamemath/FixedPoint.c
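(For reference, the gist of the "typedef an int and define the operations" approach looks roughly like the sketch below; this is a generic illustration, not an excerpt from the FixedPoint.h/.c linked above:)
Code:
#include <cstdint>

typedef int32_t fixed16_16;                  // raw value = real value * 65536
const fixed16_16 FIXED_ONE = 1 << 16;

inline fixed16_16 intToFixed(int i)        { return (fixed16_16)(i * FIXED_ONE); }
inline int        fixedToInt(fixed16_16 x) { return x >> 16; }

// Addition and subtraction are plain integer ops; only multiplication and
// division need the shift, plus a wider intermediate so they don't overflow.
inline fixed16_16 fixedMul(fixed16_16 a, fixed16_16 b) {
    return (fixed16_16)(((int64_t)a * b) >> 16);
}
inline fixed16_16 fixedDiv(fixed16_16 a, fixed16_16 b) {
    return (fixed16_16)(((int64_t)a * FIXED_ONE) / b);
}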

InfiniteStateMachine
« Reply #4879 on: July 06, 2015, 10:31:30 AM »

Quote
Quote
How are you doing this? Considering modern processors have no native support for fixed point, are you typedefing an int and then defining all the operations you can perform on it?

Yep. http://ludobloom.com/svn/StemLibProjects/gamemath/trunk/source/gamemath/FixedPoint.h http://ludobloom.com/svn/StemLibProjects/gamemath/trunk/source/gamemath/FixedPoint.c

Gotcha. I wonder how much of a perf hit that would be. Is this standard practice? Would something like Bullet do that?
