Author Topic: Dynamic arrays are killing me!  (Read 10988 times)
Mikademus (Level 10, The Magical Owl)
« Reply #40 on: April 15, 2009, 10:15:01 AM »

Quote
I appreciate your constant condescension, though, Mikademus.
I apologise, that was not my intention.

Quote
I do believe that programming is like art, but I believe it not because I think that it gives you an excuse not to study, but because it is a great exercise for creative thinking and critical analysis.
With this I agree. (In fact, if we were to compare code and actual practices, I'm rather certain we would agree on more than we disagree on, but then, this is the internet).

Quote
I may use bare pointers, but I know exactly where, when and how my objects are cleaned.
Actually, deallocation isn't the main problem with naked pointers; exception safety is. If you don't use exceptions, your app will terminate on errors and the OS will clean up memory anyway, so naked pointers are, well, "ok" I guess. Also, as your app grows you'll find it more and more difficult to know exactly where, when and how your objects are cleaned. In fact, I'd argue (and I think this is uncontroversial) that a programming style that places great demands on the programmer to always do everything right manually is a worse choice than an equivalent one that relieves the programmer of minute considerations (like manual deallocation vs. storing pointers in smart containers). We don't program much in assembler nowadays, and most who still program in C seem to be old-timers or programmers of constrained-resource platforms.
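To make the exception-safety point concrete, here's a minimal sketch. The Texture/load_texture/upload_to_gpu names are made up, and I'm using a hand-rolled guard instead of a real smart pointer class just to keep it self-contained:

Code:
#include <stdexcept>

struct Texture { /* ... */ };
Texture* load_texture() { return new Texture; }
void upload_to_gpu(Texture&) { throw std::runtime_error("driver error"); }

// Naked pointer: if upload_to_gpu() throws, delete is never reached -> leak.
void leaky()
{
    Texture* t = load_texture();
    upload_to_gpu(*t);
    delete t;
}

// Minimal RAII guard (this is what smart pointers do for you): the destructor
// runs during stack unwinding, so the Texture is freed even when a throw occurs.
struct TextureGuard
{
    Texture* p;
    explicit TextureGuard(Texture* q) : p(q) {}
    ~TextureGuard() { delete p; }
private:
    TextureGuard(const TextureGuard&);            // non-copyable
    TextureGuard& operator=(const TextureGuard&);
};

void safe()
{
    TextureGuard t(load_texture());
    upload_to_gpu(*t.p); // throws, but ~TextureGuard still deletes
}

int main()
{
    try { safe(); } catch (const std::exception&) { /* texture already freed */ }
    // leaky() would leak if called here, for exactly the same input.
}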
Logged

"There's a tendency among the press to attribute the creation of a game to a single person," says Warren Spector, creator of Thief and Deus Ex. --IGN
My compilation of game engines for indies

Zaphos (Guest)
« Reply #41 on: April 15, 2009, 05:10:30 PM »

Exceptions in C++ are super obnoxious ... I think a lot of C++ code is simply not exception safe.  If you care about them, smart pointers do seem nicer.

(edit: To clarify, I certainly know people who do 'real' software design who don't use exceptions in C++, or who even think exceptions are a bad idea in general.  Also, last time I checked, research methods in software engineering seemed pretty hopeless, so I don't trust anyone who claims to know anything beyond 'this worked out for me in practice')
« Last Edit: April 15, 2009, 05:16:28 PM by Zaphos » Logged
Core Xii (Level 10, the resident dissident)
« Reply #42 on: April 15, 2009, 05:24:40 PM »

Quote
Never got an answer from Ivan or Core XII, which I guess means they were convinced by my post. Either that or they disconnected before losing so as not to ruin their stats.

I just got bored of the discussion. If you want it so bad, sure. (By the way, if you look at my name, you can see that the i's are lowercase, not capitalized. It's Xii, not XII.)

Quote
Dynamic allocation in itself isn't evil. However, the normal pattern of allocating into naked pointers and deallocating "when done" or at scope end etc. is dangerous and bug-prone. [...] the habit of using naked pointers will lead to memory leaks. Using naked pointers will lead to code that is not exception-safe. Also, storing new'd pointers in STL containers is a bad idea that generally exacerbates both these problems. Passing pointers as parameters risks passing invalid or null pointers.

Looks to me like the reasons behind your argument come down to pure user error. Yeah, I completely agree: if you have no frigging idea what you're doing, you should probably play it safe and let the computer, which doesn't make mistakes, take care of it for you.

On the other hand, if you do know what you're doing, you'll save resources by doing it yourself.

Quote
It is also a good habit to use static allocation as often as possible, since that allows the compiler to pre-allocate space, determine memory use beforehand, and avoid the dynamic allocation overhead.

...And force you to recompile whenever you change something, kill the possibility for user-created content, etc.
Logged
Zaphos (Guest)
« Reply #43 on: April 15, 2009, 06:07:38 PM »

When people talk about avoiding dynamic allocation do they really mean allocating on the stack, and knowing the limits at compile time, or do they mean allocating on the heap but in big chunks upon loading a level, so the total allocation doesn't change from frame to frame?
Logged
raigan (Level 5)
« Reply #44 on: April 15, 2009, 07:32:58 PM »

Quote
kill the possibility for user-created content, etc.

That's not true -- you're going to need bounds regardless! For instance Little Big Planet's level editor definitely doesn't allow unlimited creation of objects -- if it did, physics/collision/graphics/etc might grind to a halt. So instead it introduces a "temperature/thermometer" to indicate how many more objects can be added to a level before it's "full". There's also a per-object max number of polygon edges, which IMHO indicates an "allocate max number of segs for each polygon" approach, similar to what Box2d does.

Avoiding dynamic allocation doesn't mean having to know everything at compile time; it means having to know everything at level-load time. And really, even that is optional -- as long as your buffers are larger than you could possibly need (and you assert when they overflow), you don't really need to care. If you set reasonably high limits, chances are your game is going to grind to a halt (due to updating/collision/etc being a bottleneck) long before you actually hit MAX_NUM active instances.
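Roughly what I mean by "buffers larger than you could possibly need, plus an assert" -- just a sketch, with made-up names and sizes:

Code:
#include <cassert>
#include <cstddef>

struct Particle { float x, y, vx, vy; };

const std::size_t MAX_PARTICLES = 4096; // sized generously, fixed at level load

struct ParticleBuffer
{
    Particle    items[MAX_PARTICLES];
    std::size_t count;

    ParticleBuffer() : count(0) {}

    Particle& spawn()
    {
        // If this ever fires, the game was unplayably slow long before now.
        assert(count < MAX_PARTICLES && "particle buffer overflow");
        return items[count++];
    }

    void clear() { count = 0; } // reset at level load; nothing to free
};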

Quote
When people talk about avoiding dynamic allocation do they really mean allocating on the stack, and knowing the limits at compile time, or do they mean allocating on the heap but in big chunks upon loading a level, so the total allocation doesn't change from frame to frame?

Good question... I meant "avoid using new() in the main loop of your game", but probably both/either use makes sense.
Logged
Mikademus (Level 10, The Magical Owl)
« Reply #45 on: April 16, 2009, 03:51:52 AM »

Quote
Never got an answer from Ivan or Core XII, which I guess means they were convinced by my post. Either that or they disconnected before losing so as not to ruin their stats.

Quote
I just got bored of the discussion. If you want it so bad, sure. (By the way, if you look at my name, you can see that the i's are lowercase, not capitalized. It's Xii, not XII.)

Whatever, I just got bored with the distinction.

Quote
Looks to me like the reasons behind your argument come down to pure user error. Yeah, I completely agree: if you have no frigging idea what you're doing, you should probably play it safe and let the computer, which doesn't make mistakes, take care of it for you.

On the other hand, if you do know what you're doing, you'll save resources by doing it yourself.

Funny you should say this. Studies in software development have shown that code by expert developers contains about as many bugs (about one per three lines) as code by novices. Also, the "I can do it faster" argument is a fallacy, since tests have shown that programmers are rarely able to improve on the compiler, even when tuning code in assembler.
Logged

"There's a tendency among the press to attribute the creation of a game to a single person," says Warren Spector, creator of Thief and Deus Ex. --IGN
My compilation of game engines for indies

Zaphos (Guest)
« Reply #46 on: April 16, 2009, 08:46:06 AM »

Computers aren't good at everything just because they're good at one thing ... A study that shows people can't hand-optimize ASM very well does not show that people can't optimize memory management better than [insert method here].  It doesn't even show that people can't hand-optimize ASM well, since there are times when people can help the compiler, like when writing code to take advantage of SSE instructions.  It just shows that some people couldn't improve on some compiler within some time & resource constraints for some code examples.
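To be concrete about the SSE point, a toy sketch (illustrative only -- it assumes n is a multiple of 4 and the arrays are 16-byte aligned):

Code:
#include <xmmintrin.h> // SSE intrinsics

// Add two float arrays four elements at a time. A compiler won't always
// vectorise the equivalent scalar loop for you.
void add_arrays(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; i += 4)
    {
        __m128 va = _mm_load_ps(a + i);  // aligned loads
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(out + i, _mm_add_ps(va, vb));
    }
}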

Also, I'm wondering how 'number of bugs' is measured, and how meaningful it could be ... my experience with novice programmers is that the main trouble is learning how to debug, so the bugs are much more of a problem.  Is that the number of bugs before giving them a chance to debug?  Also I wonder how the 'expert' vs 'novice' thing works, and if 'skilled' vs 'unskilled' would be different.  One per three lines sounds a bit extreme, but then it's hard to say what it means, because I guess it depends on what was being programmed.
Logged
Mikademus (Level 10, The Magical Owl)
« Reply #47 on: April 16, 2009, 11:40:04 AM »

Some of the studies I recall were commissioned by NASA a few years back. I think they sampled code in COBOL, C and Java, though this is from memory. Part of the research was establishing best practices for robust programming. Since they have a zero-bug tolerance and a very special development situation, I don't think any normal developer could (or should) adopt their practices, but the studies were certainly rigorous and scientific. They were also quoted in course literature I used for teaching a few years back. If I put my back into it I could provide full references etc., but seriously, I can't be bothered, because anyone really interested can find it relatively easily through any CompSci dept at their local university.

As for hand-optimisation: yeah, there are things that expert developers can do better than trusting the normal alternative. For instance, STL memory allocation is a common-denominator solution that is good for most situations but not optimal for fringe ones, so custom memory management may improve performance significantly. However, for "normal" developers and situations, manually using new/delete or malloc/free will not be an improvement over higher-level facilities, and though it is of course not good to believe that the compiler can magically turn turds into gold, it is very difficult to improve on the optimisation done by optimising compilers (GCC, or especially MSVC). Except in very particular situations, it is generally not worth the development time or the increased code brittleness to aggressively optimise code (in technically esoteric ways, that is), especially peripheral code.
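To give "custom memory management" some shape, here's a minimal sketch of a bump (arena) allocator of the kind a fringe situation might justify. Names and the alignment policy are made up for illustration:

Code:
#include <cassert>
#include <cstddef>
#include <new>      // placement new (see usage comment below)

// One big block grabbed up front; an "allocation" is just a pointer bump,
// and everything is released at once (e.g. at end of level).
class Arena
{
public:
    explicit Arena(std::size_t bytes)
        : buffer_(new char[bytes]), size_(bytes), used_(0) {}
    ~Arena() { delete[] buffer_; }

    void* allocate(std::size_t bytes)
    {
        bytes = (bytes + 15) & ~std::size_t(15); // keep things 16-byte aligned
        assert(used_ + bytes <= size_ && "arena exhausted");
        void* p = buffer_ + used_;
        used_ += bytes;
        return p;
    }

    void reset() { used_ = 0; } // "frees" everything in one go

private:
    char*       buffer_;
    std::size_t size_;
    std::size_t used_;

    Arena(const Arena&);            // non-copyable
    Arena& operator=(const Arena&);
};

// Usage: Arena level_arena(8 * 1024 * 1024);
//        Thing* t = new (level_arena.allocate(sizeof(Thing))) Thing;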
Logged

"There's a tendency among the press to attribute the creation of a game to a single person," says Warren Spector, creator of Thief and Deus Ex. --IGN
My compilation of game engines for indies

Zaphos (Guest)
« Reply #48 on: April 16, 2009, 01:11:42 PM »

I'm not sure it's really possible to have a rigorous and scientific study which samples existing code ... that doesn't sound like a controlled experiment at all.  And once you have a controlled experiment you end up with a rather artificial situation.  But anything that would say one bug per three lines of code (doesn't that strike you as insanely bad quality ...) and not instead give you a function of project size, language, the type of problem, skill, amount of time spent debugging, tools for debugging, etc ... is probably not going to be very meaningful.
Logged
Will Vale (Level 4)
« Reply #49 on: April 16, 2009, 03:06:35 PM »

Quote
I just wanted to point out that there are many different options and it really depends on what fits the game best.

I totally agree with your earlier post about it being acceptable (and in many real game cases desirable) to avoid dynamic allocation. But in principle I agree with the above statement even more -- it doesn't do to be too dogmatic about these things. I recall reading a discussion on sweng-gamedev which pointed out that Halo 2 and Stranger's Wrath were built using 'competing' methodologies, but both ended up AAA games with excellent visuals, interesting gameplay with lots of entities, and good frame rates.

That said, it's always tempting to support the "beware high level features" message because there's so much "beware low level features" advice around.

I think that indie games in general are in a privileged situation - they're usually smaller games, running on relatively modern PC and Mac hardware, so many assumptions like "it's OK to allocate memory in the frame" or "it's OK to use auto-growing std::vector" or "smart pointers don't cost worthwhile performance" or "it's OK to optimise hotspots later" are perfectly valid. But if you wanted to port those games to e.g. WiiWare or DSiWare, you would likely find many situations which test or break those assumptions, requiring significant rework.
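For instance (purely illustrative, hypothetical names and numbers), the "auto-growing std::vector" assumption versus reserving a budget up front:

Code:
#include <vector>

struct Bullet { float x, y, dx, dy; };

std::vector<Bullet> bullets;

// Fine on a modern PC: push_back may reallocate and copy mid-frame.
void spawn_bullet(const Bullet& b)
{
    bullets.push_back(b);
}

// Friendlier to constrained platforms: reserve a ceiling at level load,
// so frames never trigger a reallocation (as long as you stay under it).
void on_level_load()
{
    bullets.clear();
    bullets.reserve(2048); // hypothetical per-level bullet budget
}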

The ones to watch out for are assumptions that are used everywhere - they give you the dreaded flat graph in the profiler, because everything is slow! Sometimes you can do something systemic to fix this afterwards, but often you can't.

Will

Logged
Mikademus (Level 10, The Magical Owl)
« Reply #50 on: April 16, 2009, 03:35:44 PM »

Quote
I'm not sure it's really possible to have a rigorous and scientific study which samples existing code ... that doesn't sound like a controlled experiment at all.  And once you have a controlled experiment you end up with a rather artificial situation.  But anything that would say one bug per three lines of code (doesn't that strike you as insanely bad quality ...) and not instead give you a function of project size, language, the type of problem, skill, amount of time spent debugging, tools for debugging, etc ... is probably not going to be very meaningful.

I don't really think their main focus was a rigorously executed experimental scientific study with statistical paraphernalia and double-blind topping, but rather to take live samples from the field and draw pragmatic conclusions. And no, one bug per three lines seems relatively reasonable: note that these do not need to be crashing bugs (depending on your definition of "bug"), but things like writing == instead of >= and other small or large mistakes. The samples were taken, iirc, from live production code produced by professional developers. Having read interviews with NASA coders, it is interesting that it is almost as common for them to leave the NASA code vats for the private sector as it is for them to return because they couldn't stand the sloppiness elsewhere. (I think I found some links on Joel Spolsky's site, or perhaps Jeff Atwood's Coding Horror, a few years back.)

Also, I think Will Vale is absolutely correct when he says "it's always tempting to support the 'beware high level features' message because there's so much 'beware low level features' advice around". The design pattern frenzy that went around in recent years is one example of this.

I do believe many people in this place are excellent coders, and this breed tends to assume everyone else is too, and thus won't make mistakes while also being able to output efficient code. The problem, however, is that for every excellent coder who knows his stuff and doesn't make silly mistakes there are ten or a hundred... eh... competence-challenged ones. These will have to be worked with in any project that isn't a one-man show, and since skilled programmers also tend to be strong individualists with even stronger convictions, preferences and dislikes, it is evident that good practices are needed to avoid absolute disaster. If you're lucky enough to work with skilled co-coders who are good personality and coding-style fits, then I truly envy you, and recommend you get yourselves surgically made into Siamese twins so they can't escape.
Logged

"There's a tendency among the press to attribute the creation of a game to a single person," says Warren Spector, creator of Thief and Deus Ex. --IGN
My compilation of game engines for indies

Core Xii (Level 10, the resident dissident)
« Reply #51 on: April 16, 2009, 08:10:41 PM »

Quote
That's not true -- you're going to need bounds regardless! For instance Little Big Planet's level editor definitely doesn't allow unlimited creation of objects -- if it did, physics/collision/graphics/etc might grind to a halt. So instead it introduces a "temperature/thermometer" to indicate how many more objects can be added to a level before it's "full". There's also a per-object max number of polygon edges, which IMHO indicates an "allocate max number of segs for each polygon" approach, similar to what Box2d does.

You do realize that Little Big Planet knows exactly what its bounds are because it's an Xbox game, right? It's a console; everyone who plays it has the exact same system specs. This is not true on PC; sure, there's an average upper limit you can avoid going over, like 1 gigabyte of RAM, but the nice thing about PCs is that they evolve over time. If your game allocates too much memory for some people, next year they might already have more.
Logged
Will Vale (Level 4)
« Reply #52 on: April 16, 2009, 08:57:47 PM »

Yeah, but any PC game knows how much physical memory it has available as well. You just find out how much is installed in the system, divide it by (say) 2 to make room for everything else, and that's your hard ceiling. You can then allocate that block and slice it up to give you a frame-static memory picture which is right for each PC, just like you would on a console.
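As a sketch (Windows-specific, and the function name is my own; error handling and the 32-bit address-space clamp are left out):

Code:
#include <windows.h>
#include <cstddef>
#include <cstdlib>

// Query installed physical RAM and claim roughly half of it up front.
// The rest is left for the OS, drivers, and everything else.
char* allocate_game_heap(std::size_t* out_size)
{
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);
    GlobalMemoryStatusEx(&status);

    std::size_t budget = static_cast<std::size_t>(status.ullTotalPhys / 2);
    char* block = static_cast<char*>(std::malloc(budget));
    // A real game would retry with smaller sizes if this fails.
    *out_size = block ? budget : 0;
    return block; // slice this up into per-system pools
}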
Logged
raigan (Level 5)
« Reply #53 on: April 17, 2009, 04:53:00 AM »

Quote
That's not true -- you're going to need bounds regardless! For instance Little Big Planet's level editor definitely doesn't allow unlimited creation of objects -- if it did, physics/collision/graphics/etc might grind to a halt. So instead it introduces a "temperature/thermometer" to indicate how many more objects can be added to a level before it's "full". There's also a per-object max number of polygon edges, which IMHO indicates an "allocate max number of segs for each polygon" approach, similar to what Box2d does.

Quote
You do realize that Little Big Planet knows exactly what its bounds are because it's an Xbox game, right? It's a console; everyone who plays it has the exact same system specs. This is not true on PC; sure, there's an average upper limit you can avoid going over, like 1 gigabyte of RAM, but the nice thing about PCs is that they evolve over time. If your game allocates too much memory for some people, next year they might already have more.

My point was that you're going to hit a practical limit (processing load of AI/collision/etc) WAY before you hit a memory limit. Being able to allocate 99999999 rigid bodies in theory is great, but if that slows your game down to 1 fps, then you're going to have to make sure that never happens regardless of whether or not you can accommodate it memory-wise. Also, LBP is PS3, not Xbox.

But even on PC you need a minimum spec for memory use; it just becomes more of a "soft" limit, since things will slow down when you exhaust physical memory and start hitting virtual memory. Then again, this is likely to ruin your smooth framerate, so you might want to treat it as a "hard" limit just like on consoles.
Logged
Alex May (Level 10, hen hao wan)
« Reply #54 on: April 17, 2009, 05:21:15 AM »

I like the cut of raigan's jib.
Logged

BorisTheBrave (Level 10)
« Reply #55 on: April 17, 2009, 03:40:57 PM »

I like games not to have built-in limits, so that years later you can use your entire machine to play them. I'm sure Total Annihilation supports map and army sizes that would never have run when it was launched. Or who can forget how much more fun Liero was when you turned off reload times and spammed the entire arena with projectiles?

The argument about consoles and memory applies just as much to processor power, by the way. I can say when an LBP level will max out the console's resources, but not a PC's. Worse, as levels are shared, I couldn't even try to estimate for LBP-like games -- either the limits are too low for the vast majority of computers, missing out on potential, or you have to exclude low-performance PCs.
I don't think it unreasonable for end users to have some comprehension of performance, and to appreciate that slow machines may not manage in some circumstances. Of course, you must still design your game so that difficult scenes aren't critical or encouraged, or so that graphics can fall back, but I don't see why you should go so far as to ban complex scenes entirely.
Logged
Mikademus (Level 10, The Magical Owl)
« Reply #56 on: April 17, 2009, 04:41:49 PM »

Quote
I like games not to have built-in limits, so that years later you can use your entire machine to play them. I'm sure Total Annihilation supports map and army sizes that would never have run when it was launched. Or who can forget how much more fun Liero was when you turned off reload times and spammed the entire arena with projectiles?

I agree with avoiding artificial limits, but that's not really what he was talking about. Say you're making an RTS that will never have more than 300 units at once. If you can optimise the game for a static pool, then why allow 999999 units if that incurs overhead or whatnot when 999699 of that potential will never be used?
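A rough sketch of the kind of static pool I mean (the cap and the Unit fields are of course hypothetical):

Code:
#include <cassert>

struct Unit { float x, y, hp; bool alive; };

const int MAX_UNITS = 300; // known design limit for this particular RTS

class UnitPool
{
public:
    UnitPool() : free_count_(MAX_UNITS)
    {
        for (int i = 0; i < MAX_UNITS; ++i)
            free_list_[i] = i;
    }

    Unit* spawn()
    {
        assert(free_count_ > 0 && "unit cap reached");
        Unit* u = &units_[free_list_[--free_count_]];
        u->alive = true;
        return u;
    }

    void kill(Unit* u)
    {
        u->alive = false;
        free_list_[free_count_++] = static_cast<int>(u - units_);
    }

private:
    Unit units_[MAX_UNITS];      // one block, allocated once, reused forever
    int  free_list_[MAX_UNITS];  // indices of currently unused slots
    int  free_count_;
};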
Logged

"There's a tendency among the press to attribute the creation of a game to a single person," says Warren Spector, creator of Thief and Deus Ex. --IGN
My compilation of game engines for indies

Core Xii (Level 10, the resident dissident)
« Reply #57 on: April 17, 2009, 09:38:27 PM »

Quote
why allow 999999 units if that incurs overhead or whatnot when 999699 of that potential will never be used?

Because you don't know that. Someone might use it. Years later, someone might still revitalize your game with a crazy level design or something. If you put in a needless, arbitrary hard limit, you're effectively locking your game to one point in time and limiting creativity.
Logged
Ivan (Owl Country, Level 10)
« Reply #58 on: April 17, 2009, 09:57:12 PM »

I have no problem statically allocating stuff at the game level, and I do it myself all the time. My only comment was that at the engine level, you need to allow for unbounded allocation. There is no reason to limit a 3D scene to a fixed number of objects in the engine. In a specific game, however, there's nothing wrong with having a set number of objects.
Logged

http://polycode.org/ - Free, cross-platform, open-source engine.
Will Vale (Level 4)
« Reply #59 on: April 17, 2009, 11:33:23 PM »

The way I like to deal with this one is to configure engine systems with budgets at init time. The systems then allocate pools from the heap with exactly enough space for the maximum numbers of (nodes, entities, etc.) requested by the budget.

I think this gives a reasonable tradeoff, and the gamecode can choose to specify fixed budgets (e.g. for small games or console games), compute budgets to suit available memory, or even load budgets from an INI file.

As Raigan points out, you should also budget your available frame time - this is typically something console games are also good at, and PC games bad at.
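Something like this, with completely made-up names, just to show the budget idea:

Code:
#include <cstddef>
#include <vector>

// Per-system caps decided by the gamecode at init time, not by the engine.
struct EngineBudget
{
    std::size_t max_entities;
    std::size_t max_particles;
    std::size_t max_scene_nodes;
};

class Engine
{
public:
    explicit Engine(const EngineBudget& budget)
    {
        // Each system grabs exactly what the budget allows, once, so memory
        // use is known (and flat) after init.
        entities_.reserve(budget.max_entities);
        particles_.reserve(budget.max_particles);
        scene_nodes_.reserve(budget.max_scene_nodes);
    }

private:
    std::vector<int> entities_;    // placeholder element types
    std::vector<int> particles_;
    std::vector<int> scene_nodes_;
};

// A small or console game might hard-code its budget; a PC game could compute
// it from installed RAM or read it from an INI file instead:
//     EngineBudget b = { 2048, 16384, 4096 };
//     Engine engine(b);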
Logged