Author Topic: Denormalized floats???  (Read 1038 times)
tjcbs
« on: October 12, 2014, 03:39:35 PM »

WTF
http://stackoverflow.com/questions/9314534/why-does-changing-0-1f-to-0-slow-down-performance-by-10x

I've been programming for a long time, and I've never heard of this crap. I'm about to add some simple physics code, and now I have to complicate it by worrying about numbers getting too close to 0?

Do you worry about this when you code??

This is just as stupid as unsigned: both create catastrophic traps around the most common number, 0!
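For the curious, here is a minimal sketch of the effect from that link, modeled on the Stack Overflow code (assuming an x86 CPU that runs denormal arithmetic through a slow microcode path, and no fast-math compiler flags; the arrays and constants are illustrative, and exact timings vary by machine):

Code:
#include <chrono>
#include <cstdio>
#include <initializer_list>

int main() {
    float x[16], z[16], y[16];
    for (int i = 0; i < 16; ++i) {
        x[i] = 1.1f + 0.1f * i;
        z[i] = x[i] + 0.02f;  // x[i]/z[i] is just under 1
    }

    for (float offset : {0.1f, 0.0f}) {
        for (int i = 0; i < 16; ++i) y[i] = x[i];

        auto t0 = std::chrono::steady_clock::now();
        for (int j = 0; j < 9000000; ++j) {
            for (int i = 0; i < 16; ++i) {
                y[i] *= x[i];
                y[i] /= z[i];    // y decays geometrically toward zero
                y[i] += offset;  // 0.1f: tiny y is rounded away to exactly 0 (fast)
                y[i] -= offset;  // 0:    y lingers in the denormal range (slow)
            }
        }
        auto t1 = std::chrono::steady_clock::now();
        std::printf("offset=%g: %lld ms (y[0]=%g)\n", offset,
                    (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count(),
                    y[0]);
    }
    return 0;
}

With offset = 0.1f the values collapse to exactly zero and the loop stays fast; with offset = 0 they settle a few ulps above zero in the denormal range and stay there for the rest of the run, which is where the roughly 10x slowdown comes from.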

Gtoknu
« Reply #1 on: October 12, 2014, 04:04:01 PM »

Good stuff to know, although I wouldn't freak out about it.

epcc
« Reply #2 on: October 12, 2014, 04:29:29 PM »

Don't worry about it.
The only place I can imagine this having any noticeable effect is when you repeatedly multiply a large array of numbers by 0.9999 or something. Damping object speeds seems like a place where this could happen, but if you're worried about performance, you wouldn't update physics for objects that move that slowly anyway.
A hundred times slower sounds awful, yeah, but since those numbers are so rare in practice, it's likely that your code is filled with much worse bottlenecks.

tjcbs
« Reply #3 on: October 12, 2014, 05:09:48 PM »

I freak out about losing my car keys. Still, it always makes me mad when I'm burdened by the bad decisions of others. Apparently this was really controversial when floating point was standardized, and, as so often happens, the wrong people won.

Quote from: epcc on October 12, 2014, 04:29:29 PM
Don't worry about it.
The only place I can imagine this having any noticeable effect is when you repeatedly multiply a large array of numbers by 0.9999 or something. Damping object speeds seems like a place where this could happen, but if you're worried about performance, you wouldn't update physics for objects that move that slowly anyway.
A hundred times slower sounds awful, yeah, but since those numbers are so rare in practice, it's likely that your code is filled with much worse bottlenecks.

But even testing that the speed is that low would be 100x slower when the speeds are denormalized, unless you explicitly flush the value to 0.0. Maybe that would have a significant impact with 100 or so objects. Or maybe not, but it is still annoying to have another little worry lodged in the brain.
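A sketch of that explicit flush; the 1e-6f threshold is an arbitrary choice for illustration:

Code:
#include <cmath>

struct Body { float vx, vy; };

// Damp a body's velocity, snapping near-zero components to exactly 0.0f
// so repeated damping can never walk them down into the denormal range.
void damp(Body& b, float damping) {  // damping e.g. 0.99f
    const float kFlushEps = 1e-6f;   // illustrative threshold
    b.vx *= damping;
    b.vy *= damping;
    if (std::fabs(b.vx) < kFlushEps) b.vx = 0.0f;
    if (std::fabs(b.vy) < kFlushEps) b.vy = 0.0f;
}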

epcc
« Reply #4 on: October 13, 2014, 02:38:55 AM »

You can just move inactive objects to another list when their speed drops below 1 px/s.
Then you don't have to check their speeds every frame, and no number ever gets down into the 1e-44 range.
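A rough sketch of that scheme; the names and the damping factor are made up for illustration:

Code:
#include <cstddef>
#include <vector>

struct Body { float x, y, vx, vy; };

const float kSleepSpeed = 1.0f;  // 1 px/s, as suggested above

// Integrate only the active list. Bodies that slow down past the threshold
// get their velocity zeroed once and are parked in `sleeping`, so nothing
// is ever damped down into the denormal range.
void step(std::vector<Body>& active, std::vector<Body>& sleeping, float dt) {
    for (std::size_t i = 0; i < active.size(); ) {
        Body& b = active[i];
        b.x += b.vx * dt;
        b.y += b.vy * dt;
        b.vx *= 0.99f;  // damping
        b.vy *= 0.99f;
        if (b.vx * b.vx + b.vy * b.vy < kSleepSpeed * kSleepSpeed) {
            b.vx = b.vy = 0.0f;  // flush once, on the way out
            sleeping.push_back(b);
            b = active.back();   // swap-remove; don't advance i
            active.pop_back();
        } else {
            ++i;
        }
    }
}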

Columbo
« Reply #5 on: October 13, 2014, 01:37:40 PM »

If you're worried about it, you can set the floating-point hardware to flush to zero. It basically says that if a number would end up denormalized, set it to zero instead. I wouldn't recommend turning it on in a large established codebase, just in case it has unexpected knock-on effects, but if you're writing new code it's unlikely to cause any problems.
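On x86 with SSE, one concrete way to do this is through the MXCSR control register: FTZ flushes denormal results to zero, and DAZ treats denormal inputs as zero. Note that it's per-thread and x86-specific, which matters for the next question:

Code:
#include <xmmintrin.h>  // _MM_SET_FLUSH_ZERO_MODE (FTZ)
#include <pmmintrin.h>  // _MM_SET_DENORMALS_ZERO_MODE (DAZ)

// Call once per thread, before the simulation runs. FTZ makes denormal
// results come out as zero; DAZ makes denormal inputs read as zero.
void enable_flush_to_zero() {
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}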

tjcbs
« Reply #6 on: October 13, 2014, 02:00:35 PM »

Is there a cross-platform way to do that?

RandyGaul
« Reply #7 on: October 13, 2014, 04:00:38 PM »

Some CPU vendors clamp denormalized numbers to 0 for execution speed. The one place I've seen denormals be a big issue is in writing some GLSL shaders. When the problem appeared, we identified the numerical robustness issues and corrected them as needed, so in our case running into the problem gave us some insight into how to deal with it.

It's generally not a big deal and can be dealt with when a problem arises. Proper use of epsilons and clamping usually avoids numeric issues.
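A small sketch of that epsilon-and-clamp habit; the function and threshold are illustrative, not from any particular codebase:

Code:
#include <cmath>

const float kEpsilon = 1e-6f;  // illustrative; tune per use case

// Guard a division so the divisor can neither be denormal nor blow up
// the result: tiny magnitudes are clamped up to +/-kEpsilon first.
float safe_inverse(float x) {
    if (std::fabs(x) < kEpsilon)
        x = (x < 0.0f) ? -kEpsilon : kEpsilon;
    return 1.0f / x;
}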

tjcbs
« Reply #8 on: October 13, 2014, 06:05:37 PM »

Crap, even shaders have this problem??? How did you detect it? I can't believe GPU makers would conform to that.

Layl
« Reply #9 on: October 13, 2014, 09:56:31 PM »

Quote from: tjcbs on October 13, 2014, 06:05:37 PM
Crap, even shaders have this problem??? How did you detect it? I can't believe GPU makers would conform to that.

Because if they didn't, two things would happen:
1. Some calculations would break down for no clear reason.
2. Denormal numbers couldn't just be piled into a buffer and sent over; the driver or GPU (or worse, the program) would first have to do a processing pass to strip out any denormal numbers.

RandyGaul
« Reply #10 on: October 13, 2014, 10:51:37 PM »

Quote from: tjcbs on October 13, 2014, 06:05:37 PM
Crap, even shaders have this problem??? How did you detect it? I can't believe GPU makers would conform to that.
There were some crazy visual artifacts. The code looked harmless, but a senior programmer noted that a dubious division was occurring. We ended up just clamping some numbers, I think. :)

tjcbs
« Reply #11 on: October 13, 2014, 10:57:31 PM »

So then it wasn't necessarily a denormal problem, which is purely a performance bug, at least on CPUs. Or it was a denormal problem, and (at least some) GPUs simply don't support them?