TIGSource Forums > Developer > Technical (Moderator: ThemsAllTook)
Author Topic: Help me optimize, por favor!  (Read 7326 times)
Albert Lai
TIGSource Editor
« on: April 07, 2007, 06:42:30 PM »

I admit, the por favor was just to sound exotic. Sad

I've been trying to program a bit more, just to get some working familiarity with putting something together from the ground up, and I decided to try putting together some shmup-esque program, which I will promptly shoot upon completion.

Anyway - what's the best way to handle collision detection? So far I've used a loop that scans through all the collidable objects on the screen and then, for each one, scans through all the shots to see if any of them overlap. It's unwieldy as all get out, though, and I'm expecting huge performance problems.

So, uh, gurus. Is there anything I can do about this? Should I learn about those mythical beasts called "threads"?
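For reference, here's a minimal sketch of the all-pairs approach described above (all names and the axis-aligned-box entity format are made up for illustration):

```python
# Naive all-pairs collision check: for every object, test every shot.
# Cost grows as O(objects * shots), which is what hurts in a shmup.

def overlaps(a, b):
    """Axis-aligned bounding-box overlap test; each entity is (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def brute_force_hits(objects, shots):
    """Return every (object_index, shot_index) pair that overlaps."""
    hits = []
    for i, obj in enumerate(objects):
        for j, shot in enumerate(shots):
            if overlaps(obj, shot):
                hits.append((i, j))
    return hits
```

With a few hundred objects and a few hundred shots that inner product is tens of thousands of tests per frame, which is where the replies below come in.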
PoV
« Reply #1 on: April 07, 2007, 07:54:47 PM »

Divide the screen into regions, and note which objects exist in each region.  Then you only need to test for collisions against other objects in the same regions as you.

You can implement this a number of ways.
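One common way to implement those regions is a uniform grid of buckets. A sketch in Python (the cell size, the AABB tuple format, and all names are assumptions, not anything from the thread):

```python
from collections import defaultdict

CELL = 32  # grid cell size in pixels; tune to roughly your typical object size

def overlaps(a, b):
    """Axis-aligned bounding-box overlap test; each entity is (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def cells_for(box):
    """Yield every grid cell an AABB (x, y, w, h) touches."""
    x, y, w, h = box
    for cx in range(int(x) // CELL, int(x + w) // CELL + 1):
        for cy in range(int(y) // CELL, int(y + h) // CELL + 1):
            yield (cx, cy)

def grid_hits(objects, shots):
    """Bucket shots by cell, then test each object only against
    shots that share at least one of its cells."""
    buckets = defaultdict(list)
    for j, shot in enumerate(shots):
        for cell in cells_for(shot):
            buckets[cell].append(j)
    hits = set()  # a shot can share several cells with an object; dedupe
    for i, obj in enumerate(objects):
        for cell in cells_for(obj):
            for j in buckets.get(cell, ()):
                if overlaps(obj, shots[j]):  # narrow-phase test
                    hits.add((i, j))
    return sorted(hits)
```

Rebuilding the buckets each frame is cheap (one pass over the shots), and the pair tests drop from all-pairs to only the pairs that are actually near each other.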

Mike Kasprzak | Sykhronics Entertainment - Smiles (HD), PuffBOMB, towlr, Ludum Dare - Blog
DrDerekDoctors
« Reply #2 on: April 08, 2007, 03:50:37 AM »

Yuss, agree with PoV. Bucket-sort all the way! That's what I do in Retrengine (to an arbitrary level of density) and it's very effective.

Me, David Williamson and Mark Foster do an Indie Games podcast. Give it a listen. And then I'll send you an apology.
http://pigignorant.com/
Albert Lai
« Reply #3 on: April 08, 2007, 04:05:56 PM »

Ah, thanks a bunch!

While I was mulling this over, a thought occurred to me - I could have the program extrapolate the path of friendly shots as they're created and see whether it intersects any enemies (assuming the enemies move in fixed patterns). Maybe not a great idea, but slap some graphics on it and the player would see whether and where their shots will hit as they fire. Pretty spooky, I'd imagine.
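That extrapolation can be sketched by just stepping the shot forward against the enemy's pattern (everything here is illustrative; it assumes the enemy's position is a pure function of the frame number, i.e. a truly fixed pattern):

```python
def predict_hit(shot_pos, shot_vel, enemy_path, radius=8.0, max_steps=300):
    """Step a straight-flying shot forward against an enemy whose position
    at frame t is enemy_path(t). Return the first frame of impact, or None.
    Only valid for deterministic enemy patterns, as noted above."""
    x, y = shot_pos
    vx, vy = shot_vel
    for t in range(max_steps):
        ex, ey = enemy_path(t)
        if (x - ex) ** 2 + (y - ey) ** 2 <= radius ** 2:
            return t
        x += vx
        y += vy
    return None
```

The returned frame number is exactly what you'd need to draw a "this shot will connect here" marker at fire time.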
PoV
« Reply #4 on: April 08, 2007, 10:00:19 PM »

Yes, technically you could do that.  Either through fancy math (which may or may not take into account other objects occluding the shot), or by running the game's update step several frames ahead.  It might be funny to see enemies that know they're going to be hit freak out in some way before it happens.  Or it might give off a unique vibe to see enemies ejecting, or being tagged in some way to mark who's going to die very soon.  Or even showing you who you would kill if you fired now.  A similar vein to Jonathan Blow's Braid.
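The "run the game step several frames ahead" variant is straightforward if the update function is deterministic: advance a copy of the state and read the predictions off the copy (a sketch; `state` and `step` are placeholder names):

```python
import copy

def lookahead(state, step, frames):
    """Advance a deep copy of the game state `frames` steps using the real
    update function and return the predicted copy. The live state is left
    untouched, so the result can safely drive 'doomed enemy' effects."""
    future = copy.deepcopy(state)
    for _ in range(frames):
        step(future)
    return future
```

Deep-copying the whole state every frame can itself get expensive, so in practice you'd copy only what the prediction needs.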
Alex May
« Reply #5 on: April 09, 2007, 04:18:24 AM »

Careful... IMO you shouldn't really optimise unless

1: the routine is known to be slow or
2: you know you will never touch the routine again (and even then, see 1)

Optimising all over the shop can reduce code readability, make things harder to debug, and even introduce bugs, quite apart from the fact that you're potentially spending schedule time optimising a function that doesn't need it.

Still, collision is often one of the things that needs optimisation (doesn't sound like you've got much 3D animation going on but that's another big sink usually IME) so you've picked a good one to hack away at.

Use profiling (easiest way is to set up timers all over the shop) to find out which bits of your program are slow, and then concentrate on those. I wouldn't consider optimising unless there was a good need i.e. the game was slowing down.
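The timers-all-over-the-shop approach might look like this in Python (a sketch; `timed` and `totals` are made-up names):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, totals):
    """Accumulate wall-clock time spent inside a labelled block into
    `totals`, a dict mapping label -> seconds. Crude, but enough to see
    which part of the frame is eating the budget."""
    start = time.perf_counter()
    try:
        yield
    finally:
        totals[label] = totals.get(label, 0.0) + time.perf_counter() - start
```

Wrap the suspect sections (`with timed("collision", totals): ...`), print `totals` once a second, and concentrate on whichever label dominates.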
ravuya
« Reply #6 on: April 09, 2007, 05:16:22 AM »

I wouldn't even begin to optimize unless I had a pile of profiler data covering various run cycles at hand.
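In Python, that kind of profiler data comes straight from the standard library's `cProfile` module; a sketch (the wrapper name is made up):

```python
import cProfile
import io
import pstats

def profile_run(fn, *args):
    """Run fn under cProfile and return (result, report), where report
    lists the hottest calls sorted by cumulative time."""
    prof = cProfile.Profile()
    prof.enable()
    result = fn(*args)
    prof.disable()
    buf = io.StringIO()
    pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(10)
    return result, buf.getvalue()
```

Running it over a few seconds of gameplay, across different situations (bullet hell, quiet menus, boss fights), gives the "various run cycles" coverage mentioned above.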
DrDerekDoctors
« Reply #7 on: April 09, 2007, 06:02:07 AM »

I wouldn't optimise general code without a profiler, but it's VERY safe to assume in a shoot 'em up that collisions are something worth optimising (unless you're doing Space Invaders with 1 player bullet at a time) and that bucket sorts are an excellent way to optimise them. It's just common sense and unless you write the code HORRIFICALLY you'll get faster code.
ravuya
« Reply #8 on: April 09, 2007, 09:38:39 AM »

A profiler is practically indispensable for me -- I still can't believe VS2005 doesn't ship with one considering the quality of Apple's profiling tools (particularly Shark).

Of course, a profiler is no substitute for just not using a stupid algorithm in the first place; a bucket sort/quadtree is probably a good start for collision.
PoV
« Reply #9 on: April 09, 2007, 08:07:34 PM »

Quote
Careful... IMO you shouldn't really optimise unless

1: the routine is known to be slow or
2: you know you will never touch the routine again (and even then, see 1)

Agreed.  But it's also worth noting that code can be optimized for simplicity and readability: refactoring, as some like to call it.  That detail tends to be omitted in most conversations about when and when not to optimize.  As usual it's a judgment call, but sometimes refactoring is a very positive courtesy to yourself.

Quote
A profiler is practically indispensable for me -- I still can't believe VS2005 doesn't ship with one considering the quality of Apple's profiling tools (particularly Shark).

Or option B, don't use profilers, and spend your time learning the intricate details of your platforms, engine design, and code in general to know where bottlenecks do show up.  Wink

Actually, I guess I do profile, but without fancy tools.  What I used to do while working on the Game Boy was watch the current scanline relative to the vertical sync: log it to my console before and after the chunk of code I was profiling, or set breakpoints before and after and look at the register when I had that luxury.  I <3 NoCa$h.  Smiley
Chris Whitman
« Reply #10 on: April 09, 2007, 09:49:08 PM »

Profiling aside, you are always going to want to pick a better algorithm if you can.

Something that is O(n) or O(n log n) is basically always preferable to something which is O(n²). Even if they appear to behave the same during profiling under average conditions, the former is going to scale better than the latter.


As a matter of note, a code level optimization which makes the code unreadable is almost never worth it. There is a huge chance of introducing bugs during optimization which are difficult to notice, and you or someone else is almost certainly going to have to look at that code at some point in the future, even if it's only for review and testing. Unless you can prove with a profiler that you are making a significant improvement, I would err on the side of readable code.

Keep in mind as well that it's very difficult at times to do better than your compiler's native optimization. Code level optimizations should always be done with a profiler. If you don't have one available, I'd recommend steering clear of them entirely, actually.
« Last Edit: April 09, 2007, 09:50:55 PM by I Like Cake »

Formerly "I Like Cake."
PoV
« Reply #11 on: April 10, 2007, 12:09:55 AM »

Quote
Something that is O(n) or O(n log n) is basically always preferable to something which is O(n²). Even if they appear to behave the same during profiling under average conditions, the former is going to scale better than the latter.

Anyone who can actually understand O(x) algorithm performance should already know the difference. :D

And if they don't, they'll need a reference.  Here's one, but it's complicated.

In English, it describes how an algorithm's running time grows with the size of its input (strictly speaking it's an upper bound, and it's usually quoted for the worst case).

O(1) is constant time, meaning the cost doesn't depend on how many elements go in.
O(n) is linear time, meaning 3 elements going in take about 3 units of time.
O(n²) is quadratic time (not exponential, despite how it feels), meaning 3 elements going in take about 9 units of time, and 10 take about 100.
O(log n) is logarithmic time, meaning the work grows far more slowly than the number of elements: as the number of elements grows, the workload grows by less than the number of elements. Pretty much any algorithm time with a log in the description is better than O(n) time (so long as higher powers of n aren't in there too).

And there are many mathematical variations on top of that describing more and less than O(n). You can build your own using all your favorite math operators, but that's for keeping the mathematicians happy, not us.
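A quick way to see those families diverge is just to count rough work units as n grows (illustrative only, not anything measured):

```python
import math

def work_units(n):
    """Rough operation counts for the common complexity classes,
    showing how quickly they diverge as n grows."""
    return {
        "O(log n)": math.log2(n),
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n ** 2,
    }

# At n=8 the classes are all within a factor of ~20 of each other;
# at n=512 the quadratic one is already ~500x the n log n one.
for n in (8, 64, 512):
    print(n, {k: round(v) for k, v in work_units(n).items()})
```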
Chris Whitman
« Reply #12 on: April 10, 2007, 12:29:39 AM »

Literally, if some function, call it f(n), is O(g(n)) for some function g of n, that means there is a scalar multiple of g(n), such as five or ten times g(n), which 'upwardly bounds' (is greater than) f(n) for all values of n greater than some chosen value k.

In layman's terms, it divides functions into 'families' based on their efficiency. Particular implementations of one thing or another may take different amounts of time based on all sorts of variables, but if an algorithm acting on n inputs (such as collision detection between n shots and objects) is in the O(log n) family, it will basically always outperform functions in the O(n) family as n scales up. It's also not only about the worst case; some algorithms are O of different functions in their best, average and worst cases.
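That bound can be written compactly (the standard textbook definition, not something from the thread):

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ k \ge 0 \ \text{such that}\ |f(n)| \le c\,|g(n)| \quad \text{for all } n > k
```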

These are cases where computational theory saves you a lot of work and lets you make smart choices even before you start programming. If you don't want to get too far into the theory, just try to pick algorithms which are O of smaller functions over ones which are O of larger functions.

So... uh, school is cool?
PoV
« Reply #13 on: April 10, 2007, 12:37:17 AM »

Quote
So... uh, school is cool?

Stay in School.  Winners don't use drugs.

[image: the 'Winners Don't Use Drugs' splash screen]

See?  The image says so.
ravuya
« Reply #14 on: April 10, 2007, 07:24:32 AM »

Historically, I doubt that the same kids who were playing arcade games were the same ones who were doing drugs. But I digress.