Author Topic: The Experimental Technology Thread  (Read 13066 times)
st33d
Guest
« on: September 12, 2009, 04:33:00 AM »

This thread is for strange and useful algorithms that you may not have heard of or thought of.

By sharing the weird ways we solve some problems, perhaps we can come up with some new ideas for games.

I'll start:

The Bresenham Algorithm

This is a fast line-drawing algorithm, invented to plot a pixelated line quickly using only integer arithmetic - no trigonometry involved and very little overhead per pixel. Let Wikipedia explain.
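For reference, here's a rough C sketch of the algorithm (the all-octant, integer-only error-accumulator form; visit() is a hypothetical stand-in for whatever per-pixel work you want, e.g. plotting or a collision check):

Code:
#include <stdlib.h>

/* Walks every pixel of the line from (x0,y0) to (x1,y1), calling
   visit() on each. Integer arithmetic only: no trig, no floats. */
void bresenham(int x0, int y0, int x1, int y1,
               void (*visit)(int x, int y))
{
    int dx =  abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;              /* error term; dy is stored negative */

    for (;;) {
        visit(x0, y0);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   /* step along x */
        if (e2 <= dx) { err += dx; y0 += sy; }   /* step along y */
    }
}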

So what can we do with the Bresenham Algorithm? I used it in commercial projects and in an art project.

Bomba is basically a mouse avoider. Bresenham came to the rescue here because I was checking for collisions against pixels. My modification of the algorithm (in the demo below) returns whether any pixels were intercepted between one mouse position and the next. No matter how fast you move the mouse - you can't cheat my mouse avoider.

What else?

I'm using it to do raycast lighting in a roguelike. If you want a tile-based field of view - Bresenham works pretty well.

Of course, none of this is very interesting without some code and a demo.

So here's a simple downloadable AS3 demo you can play around with that has the class for you to use.
raigan
« Reply #1 on: September 12, 2009, 06:16:51 AM »

Great thread idea!!

One thing to note is that Bresenham was developed for drawing lines -- visually approximating a 1-pixel-wide line on a grid. That's a bit different, in terms of desired behaviour, from walking along a ray through a grid: Bresenham will sometimes jump diagonally (which looks better visually), which you wouldn't want when raycasting (e.g. for line-of-sight tests).

(Sorry if this is unclear -- if you draw a ray and the cells marked by Bresenham at the same time, you'll find that there are some cells touched by the ray which are *not* touched by Bresenham.)

A traversal algorithm very similar to Bresenham, but which visits all cells touched by the ray, is Amanatides & Woo's "A Fast Voxel Traversal Algorithm for Ray Tracing": http://www.cs.yorku.ca/~amana/research/grid.pdf
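A rough C sketch of that traversal, under some assumptions (unit-sized cells, a segment given in cell coordinates, and a hypothetical visit() callback; exact corner hits may need extra care in production code):

Code:
#include <math.h>

/* Amanatides & Woo style traversal: visits every cell the segment
   (x0,y0)->(x1,y1) passes through, not just Bresenham's approximation. */
void grid_walk(double x0, double y0, double x1, double y1,
               void (*visit)(int cx, int cy))
{
    int cx = (int)floor(x0), cy = (int)floor(y0);
    int ex = (int)floor(x1), ey = (int)floor(y1);
    int sx = x1 > x0 ? 1 : -1, sy = y1 > y0 ? 1 : -1;
    double dx = fabs(x1 - x0), dy = fabs(y1 - y0);
    /* tMax*: ray parameter at the next cell boundary on each axis;
       tDelta*: parameter advance per whole cell crossed. */
    double tDeltaX = dx > 0 ? 1.0 / dx : INFINITY;
    double tDeltaY = dy > 0 ? 1.0 / dy : INFINITY;
    double tMaxX = dx > 0 ? (sx > 0 ? cx + 1 - x0 : x0 - cx) / dx : INFINITY;
    double tMaxY = dy > 0 ? (sy > 0 ? cy + 1 - y0 : y0 - cy) / dy : INFINITY;
    /* each step moves exactly one cell along one axis, so the total
       step count is the Manhattan distance between start and end cells */
    int steps = (ex > cx ? ex - cx : cx - ex)
              + (ey > cy ? ey - cy : cy - ey);

    visit(cx, cy);
    while (steps-- > 0) {
        if (tMaxX < tMaxY) { tMaxX += tDeltaX; cx += sx; }
        else               { tMaxY += tDeltaY; cy += sy; }
        visit(cx, cy);
    }
}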


Um... I can't think of any other algorithms off the top of my head! Maybe my Bresenham alternative can count as my contribution for now Smiley

I really like the idea of this thread, let's keep it going!
Triplefox
« Reply #2 on: September 12, 2009, 07:10:33 AM »

Some highly useful and applicable things that don't get nearly enough attention:


The latter two are favorites of mine as they point towards programming methodology that goes beyond just writing code and towards formalizations that let you understand a problem better and faster. I want more of those.

st33d
Guest
« Reply #3 on: September 12, 2009, 07:18:41 AM »

Fair point. I actually use a different algorithm to raycast against tiles, to deal with the voxel problem. It's a bit of a mess at the moment though  Undecided

Funny thing is though, I just keep finding uses for that Bresenham code. It's so bloody fast!  Screamy

A couple of other bits of tech:

Neural Net drawing recognition

Scent tracking (a fast pathfinding trick)
mewse
« Reply #4 on: September 12, 2009, 07:23:03 AM »

Okay, here's my favourite algorithm:

The Radix Sort

Imagine that you've got yourself a huge array of unsorted data, and you want to sort it into some useful order.  You could use a bubble sort or a mergesort or another similar sorting algorithm... but by far the fastest sort out there for really large data sets is the Radix Sort.  And most people have never heard of it.  I'd link to the Wikipedia article, but it's not particularly comprehensible.

The thing that typically makes sorting algorithms slow is having to compare the items being sorted against each other;  that's why most sorting algorithms get disproportionately more expensive to run the more data you feed them (O(n log n) at best for comparison-based sorts, O(n^2) for the simpler ones).  The Radix Sort avoids this problem by never performing a comparison between the items being sorted.  How on earth does that work?

Well, to understand the Radix Sort, we need to start with the Bucket Sort.

For the Bucket Sort, let's assume that our list of data is made up of integers in the range of 0 through 9.  To perform a bucket sort, we create ten temporary lists (which we call "buckets").  You then iterate over your list of data and move each item into the matching bucket.  That is, all '0' items go into Bucket 0, all '1' items into Bucket 1, etc.  Once you're done, you move all the elements back from the buckets into your master list, starting with the items in Bucket 0 and finishing with those in Bucket 9.  And voila, your data is now sorted from low to high, without any individual element ever having been compared against any other.  It doesn't matter how many elements your list holds;  each new element you add requires exactly the same amount of extra processing time as the previous one did.

In fact, if you really wanted to, you could do a bucket sort to sort almost any type of data, or integers of any size;  you'd just need a ridiculously huge number of buckets to sort your data into.  In practice, though, that would require a prohibitive amount of memory, and so you don't actually want to do that.
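Here's a minimal C sketch of the ten-bucket sort just described, written as a counting sort -- per-bucket counts and offsets into one output array rather than ten literal lists, but the idea is identical:

Code:
/* Bucket-sorts n values in the range 0..9 into out[]. Stable, and
   never compares one element against another. */
void bucket_sort_0to9(const int *a, int *out, int n)
{
    int count[10] = {0}, start[10];
    for (int i = 0; i < n; i++) count[a[i]]++;        /* size each bucket */
    start[0] = 0;
    for (int b = 1; b < 10; b++)                      /* bucket offsets */
        start[b] = start[b - 1] + count[b - 1];
    for (int i = 0; i < n; i++)
        out[start[a[i]]++] = a[i];                    /* distribute */
}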

Instead, the Radix Sort performs multiple Bucket Sorts across different parts of the value we're sorting.  For example, let's assume that we have data which ranges in value from 0 to 999.  We don't want to actually have 1,000 buckets to sort into, so instead, let's just have the same ten buckets that we had in the Bucket Sort example above.

What you do is perform a Bucket Sort exactly as described above, but sorting only on the value of the "ones" digit.  So if we have the list of values 12, 73, 104, 721, 510, 11, then after the first bucket sort our sorted-by-ones, re-concatenated list would be 510, 721, 11, 12, 73, 104.  Note that the "ones" digits of these values are now in order, even though the full numbers are not.

Starting with this re-ordered list, we next do the same Bucket Sort again, but this time sorting based upon the value in the "tens" digit.  This time, we end up with:  104, 510, 11, 12, 721, 73.  Notice that the ones digits are no longer in order, but the tens are in order.  And elements with the same tens digit also have their ones digit sorted correctly.  (That is, 510, 11, and 12 all have '1' in the tens position, and the numbers in their 'ones' position are also sorted correctly with respect to each other.)

Next, we do the same Bucket Sort again, but this time sorting based upon the value in the "hundreds" digit.  And this time, we end up with:  11, 12, 73, 104, 510, 721.  As we've now sorted by every digit in our numbers, our list is fully sorted.

Or, if you wanted to spend extra memory in exchange for doing fewer bucket sorts, you could use 100 buckets at a time and sort on two digits per pass, roughly halving the number of passes needed.


I mentioned this above, but it's worth stressing that, due to the overhead of moving data items around between lists, the radix sort typically doesn't end up being faster than mergesort or similar sorting algorithms unless there is a lot of data to be sorted;  you'd want an absolute minimum of 1,000 elements before it's even worth considering this sorting method.

It's also worth mentioning that it's fastest to sort on a power-of-two radix, where you can grab an individual digit of a number with a bitwise mask, such as "number & 0xf", instead of using the more expensive modulo math required to grab a digit from a decimal number, such as "number % 10".
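Putting the pieces together, here's a rough C sketch of a radix sort on 32-bit unsigned values, grabbing one byte at a time with a mask (256 buckets, so four passes in total; tmp is caller-provided scratch space the same size as a):

Code:
#include <string.h>

void radix_sort_u32(unsigned *a, unsigned *tmp, int n)
{
    for (int shift = 0; shift < 32; shift += 8) {
        int count[256] = {0}, start[256];
        for (int i = 0; i < n; i++)
            count[(a[i] >> shift) & 0xff]++;         /* histogram this byte */
        start[0] = 0;
        for (int b = 1; b < 256; b++)
            start[b] = start[b - 1] + count[b - 1];  /* bucket offsets */
        for (int i = 0; i < n; i++)
            tmp[start[(a[i] >> shift) & 0xff]++] = a[i]; /* stable scatter */
        memcpy(a, tmp, n * sizeof *a);  /* this pass's output feeds the next */
    }
}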
Gold Cray
« Reply #5 on: September 12, 2009, 07:39:52 AM »

Radix sort is my favorite linear time sorting algorithm.

I'm afraid I haven't got any algorithms that I can think of right now, but I'll come back to this thread if any come to me.
brog
« Reply #6 on: September 12, 2009, 07:57:29 AM »

Bead Sort is a hilarious algorithm for sorting positive integers in linear time (well, linear if you count dropping a whole column of beads as one operation).
You line up the numbers you want to sort, like so:

7 XXXXXXX
3 XXX
5 XXXXX
2 XX
1 X

Then apply gravity!

X
XXXXXXX
XXX
XXXXX
XX

X
XX
XXXXXXX
XXX
XXXXX

X
XX
XXX
XXXXXXX
XXXXX

X        -> 1
XX       -> 2
XXX      -> 3
XXXXX    -> 5
XXXXXXX  -> 7

And voila!  List of numbers is sorted!
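For fun, here's a C sketch of how you might simulate this (assuming every value is between 0 and max; note the "linear" time hides O(n*max) bead-pushing, so it's only sane for small values):

Code:
#include <stdlib.h>

/* beads[c] counts the beads in column c, i.e. how many values exceed c.
   After "gravity", the r-th smallest value is the number of columns
   still holding at least n-r beads. */
void bead_sort(int *a, int n, int max)
{
    int *beads = calloc(max, sizeof *beads);
    for (int i = 0; i < n; i++)
        for (int c = 0; c < a[i]; c++)
            beads[c]++;                        /* lay out each row of beads */
    for (int r = 0; r < n; r++) {
        int v = 0;
        while (v < max && beads[v] >= n - r)   /* read row r from the top */
            v++;
        a[r] = v;
    }
    free(beads);
}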

Other things..
Antiobjects - I saw a cool Pac-Man AI based on this; haven't tried using it myself.
Seam Carving for Content-Aware Image Resizing - this is just an amazing idea; watch the video.
Izzimach
« Reply #7 on: September 12, 2009, 08:54:33 AM »

I remember learning about nondeterministic FSMs, which can be used for regular expression matching.  Later on I used them to detect special moves in a simple fighting game, where each special move was encoded as a regular expression ("{D}+{F}+1" was fireball).
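As a sketch of the idea (using POSIX regexes here rather than whatever the original game used; the token encoding is made up -- D = down, F = forward, 1 = punch, newest input last):

Code:
#include <regex.h>
#include <stdio.h>

int main(void)
{
    regex_t fireball;
    /* the post's "{D}+{F}+1", anchored to the end of the input history */
    regcomp(&fireball, "D+F+1$", REG_EXTENDED | REG_NOSUB);

    const char *history = "FFDDDFF1";   /* recent inputs, oldest first */
    if (regexec(&fireball, history, 0, NULL, 0) == 0)
        printf("fireball!\n");

    regfree(&fireball);
    return 0;
}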

But really I'm here to mention the PID controller, which is used in everything from air-conditioners to robotics.

The primary use I'd see in most games is using PID control to aim guns and steer cars.  For example, if your car is facing angle A and the target is at angle B, a simple solution is to modify A by a small amount each frame until it reaches B:

Code:
// this is run each frame
// angles are in degrees!
if (A < B) A = A + 1
if (A > B) A = A - 1

But this makes your car turn really slowly, and if B=50.5, for example, your car will flutter back and forth between A=50 and A=51 each frame.

With PID control you take the error e=B-A and use some constants P, I, D to update A.  In fact, you can often use just P (proportional gain) and get good results:

Code:
// run each frame
e = B-A
A = A + P*e

Using this method, the car turns fast when the angle error is large (i.e., the desired target is behind it) and in small increments when it is close to the desired angle.  The "proper" value of P has to be determined;  there are all sorts of papers and theories about how to compute it, but usually you just end up fiddling with P manually until the results look good.

Once you have proportional gain sort of working, you can add in I (integral) and D (derivative) gains.  In many cases you may not even need I and D, but they are useful for tweaking your control to avoid overshoot and other undesirables.
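A compact C sketch of the full three-term controller, for when P alone isn't enough (the gains are yours to tune, and dt is your frame time):

Code:
typedef struct {
    double P, I, D;      /* tuning gains */
    double integral;     /* accumulated error (I term memory) */
    double prev_err;     /* last error (for the D term slope) */
} Pid;

/* Returns the correction to apply this frame. */
double pid_step(Pid *c, double target, double current, double dt)
{
    double err = target - current;
    c->integral += err * dt;                  /* I: mops up steady-state error */
    double deriv = (err - c->prev_err) / dt;  /* D: damps the approach */
    c->prev_err = err;
    return c->P * err + c->I * c->integral + c->D * deriv;
}

/* usage, continuing the car example (start with integral = prev_err = 0):
       A += pid_step(&ctrl, B, A, dt);                                     */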

Afinostux
« Reply #8 on: September 12, 2009, 09:06:56 AM »

I've tried a couple of AI learning techniques, and by far my favorite has been the ID3 algorithm. It has the distinction of producing the best result in the shortest time, at least in the tests I ran. Depending on the complexity of what you're doing with it, it can actually learn fast enough to be used on the fly, in-game.

It's at its absolute best when dealing with data made entirely of true/false values classified by some other algorithm, although there are extensions to it that can split numeric values into ranges.

Basically, say you have a set of data from a bunch of previous battles:
Code:
1) outgun: True outhealth: True won: True
2) outgun: True outhealth: False won: True
3) outgun: False outhealth: True won: False
4) outgun: False outhealth: False won: False
5) outgun: True outhealth: False won: True
6) outgun: False outhealth: False won: False

outgun is classified based on whether or not the agent has better firepower, outhealth based on whether the agent can take more hits, and won is whether or not the agent won the battle.

Now, it's CLASSIFIER TIME!
I'm going to refer to the above list by numbers from now on, for the sake of sanity
Code:
Let's classify based on whether or not we won
node 1(start)
| 1(true)
| 2(true)
| 3(False)
| 4(False)
| 5(True)
| 6(False)

That data is not terribly useful yet. Let's split it up.
When choosing which attribute to split on, you need to pick the one that classifies the data best.

A fast way I've found of doing this is to XOR each attribute with the value you're trying to classify and take the sum. It only works in some circumstances, but it's very fast:

sum of outgun XOR won:
(true)(true) = 0
(true)(true) = 0
(false)(false) = 0
(false)(false) = 0
(true)(true) = 0
(false)(false) = 0
sum = 0

sum of outhealth XOR won:
(true)(true) = 0
(false)(true) = 1
(true)(false) = 1
(false)(false) = 0
(false)(true) = 1
(false)(false) = 0
sum = 3

As you can see, outgun corresponds best with the outcome of the battle -- in fact, it corresponds exactly. What we do now is add branches to the tree, split on the best-corresponding attribute:

node 1(start)
| node 2 (outgun = true)
| | 1(true)
| | 2(true)
| | 5(true)
|
| node 3 (outgun = false)
| | 3(false)
| | 4(false)
| | 6(false)

Now the AI knows that it should run away from battles where it has inferior firepower!

But what happens if the AI later wins a battle where it was outgunned but had more health?

node 1(start)
| node 2 (outgun = true)
| | 1(true)
| | 2(true)
| | 5(true)
|
| node 3 (outgun = false)
| | 3(false)
| | 4(false)
| | 6(false)
| | 7(true)

The tree is still split on outgun, but it's no longer useful for decisions. We need to reclassify node 3 based on what we've found out. The only attribute left to split on is outhealth, so let's do it:

node 1(start)
| node 2 (outgun = true)
| | 1(true)
| | 2(true)
| | 5(true)
|
| node 3 (outgun = false)
| | node4 (outhealth = true)
| | | 3(false)
| | | 7(true)
| |
| | node5 (outhealth = false)
| | | 4(false)
| | | 6(false)

Still not perfectly classified, but that's as far as we can go on the current information set. Still, it provides enough information for the AI to go on:

outgun & outhealth: attack
outgun only:        attack
outhealth only:     do whatever
neither:            retreat!

I hope you found that useful. The sum-of-XOR way of measuring uncertainty is most accurate when there are few duplicate entries, and it can actually fail if there are enough duplicates. You can save processing time by rejecting duplicate entries up front rather than doing any special math while building the tree.
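A minimal C sketch of just the attribute-picking step described above (the full ID3 would recurse on the winner; textbook ID3 uses information gain rather than this XOR shortcut):

Code:
/* attrs is n rows of n_attr 0/1 values; label is the n 0/1 outcomes.
   Returns the attribute whose XOR-with-label sum is lowest, i.e. the
   one that agrees with the outcome most often. (A sum near n would
   mean perfect anti-correlation -- also a good split -- but this
   keeps things as simple as the description above.) */
int best_split(const int *attrs, int n_attr, const int *label, int n)
{
    int best = 0, best_sum = n + 1;
    for (int a = 0; a < n_attr; a++) {
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += attrs[i * n_attr + a] ^ label[i]; /* 1 where they differ */
        if (sum < best_sum) { best_sum = sum; best = a; }
    }
    return best;
}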

Go here for a much more long-winded explanation.

Sam
« Reply #9 on: September 12, 2009, 09:08:23 AM »

Kind of similar to the "pacman scent" in that antiobjects example: pathfinding for many units by having them share a global distance-from-target map.

Basically, you use your processing time to generate a value for each tile in the game indicating how far it is from the target (taking obstacles into account), then have the units blindly follow that map -- which is a computationally simple task.  The upshot is that computation time increases very little as you increase the number of units following the map.  I happen to have recently written up how I've used this.
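A rough C sketch of building such a map with a breadth-first flood fill (the grid size and the 0 = open / 1 = wall encoding are assumptions):

Code:
#include <string.h>

#define W 64
#define H 64

/* Fills dist with steps-to-target per tile (-1 = unreachable). Units
   then just step to whichever neighbour has the smallest dist -- no
   per-unit pathfinding at all. */
void build_flow_map(const int grid[H][W], int dist[H][W], int tx, int ty)
{
    static int qx[W * H], qy[W * H];            /* BFS queue */
    int head = 0, tail = 0;
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};

    memset(dist, -1, sizeof(int) * W * H);      /* all bytes 0xff == -1 */
    dist[ty][tx] = 0;
    qx[tail] = tx; qy[tail] = ty; tail++;

    while (head < tail) {
        int x = qx[head], y = qy[head]; head++;
        for (int d = 0; d < 4; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx >= 0 && nx < W && ny >= 0 && ny < H &&
                grid[ny][nx] == 0 && dist[ny][nx] < 0) {
                dist[ny][nx] = dist[y][x] + 1;  /* one step farther out */
                qx[tail] = nx; qy[tail] = ny; tail++;
            }
        }
    }
}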


Quote
Seam Carving for Content-Aware Image Resizing - this is just an amazing idea; watch the video.
That is extremely excellent.  As is this whole thread in fact!
raigan
« Reply #10 on: September 12, 2009, 10:13:44 AM »

On radix sort -- this is pretty magical; a couple of good references are:
http://codercorner.com/RadixSortRevisited.htm
http://www.stereopsis.com/radix.html

@st33d: AFAIK the grid-marching algorithm I posted should be the same speed/complexity as Bresenham -- they're both implementations of the same basic idea (http://en.wikipedia.org/wiki/Digital_Differential_Analyzer_(graphics_algorithm)) -- so please do try it out, since it's likely to be useful! There are very few cases aside from line rasterization where Bresenham is what you actually want -- you might want a DDA, just not that exact one. This makes sense, since Bresenham was designed only to produce good-looking lines, which is a totally different requirement from other sorts of uses!
John Nesky
« Reply #11 on: September 12, 2009, 10:20:41 AM »

Quote
you use your processing time to generate a value for each tile in the game indicating how far it is from the target (taking obstacles into account), then have the units blindly follow that map

I did that for Ludum Dare a while ago. The result was this, and you can see a visualization of the map being generated here.

Right now I'm experimenting with using LLVM to recompile my inner loop at runtime (aka JIT compilation), except I programmatically remove parts of the inner loop that turn out to be unnecessary before compiling. It's the hardcore-est, wanker-est optimization I've ever done.
st33d
Guest
« Reply #12 on: September 13, 2009, 04:25:57 AM »

@raigan - what you're basically suggesting adds another check at each step along the gradient. Since each check is a method call, it would be slower overall.

And imagine what would happen if we were walking a pure diagonal: not only would I have to check each voxel I'm walking through, but each voxel to either side as well, if I want something that truly handles every corner case.

But I think you're right that a true voxel ray caster would be more complete if it could handle corners. For now I've got my cheap and cheerful algorithm, but I'm not opposed to expanding on it at some point in the future.
raigan
« Reply #13 on: September 13, 2009, 06:12:03 AM »

Quote
@raigan - what you're basically suggesting adds another check at each step along the gradient. Since each check is a method call, it would be slower overall.

And imagine what would happen if we were walking a pure diagonal: not only would I have to check each voxel I'm walking through, but each voxel to either side as well, if I want something that truly handles every corner case.

Yes, but the traversal algorithm itself isn't really slower; it visits more cells, and so there are more cells to process, which might make the whole job slower. But that's unavoidable, since those cells must be checked if what you want is to model a ray.

For instance, I have an even faster solution which is somewhat less accurate than Bresenham: don't check any cells -- 0 method calls Smiley

I guess it really depends on what you're trying to model and/or your definition of correct behaviour.
Overkill
« Reply #14 on: September 13, 2009, 07:01:08 AM »

Oh, and in addition to using Bresenham on lines, you may want to check out the Cohen-Sutherland line clipping algorithm.

This takes care of boundary checks before a line is drawn or iterated through: completely out-of-bounds lines are trivially rejected, and lines that run past the boundaries have their ends clipped off before the more expensive line-iteration part starts (which would be slow if you had to bounds-check every line step).
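The heart of it is the outcode test; a quick C sketch of that part (the edge-by-edge clipping loop that follows is in any graphics text):

Code:
/* Four bits per endpoint saying which side(s) of the clip rectangle
   it falls on. If the two endpoints share a set bit, the whole line
   is off one side: trivially rejected. If both outcodes are zero,
   the line is entirely inside: trivially accepted. Anything else
   gets clipped against one flagged edge and retested. */
enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

int outcode(double x, double y,
            double xmin, double ymin, double xmax, double ymax)
{
    int code = INSIDE;
    if      (x < xmin) code |= LEFT;
    else if (x > xmax) code |= RIGHT;
    if      (y < ymin) code |= BOTTOM;
    else if (y > ymax) code |= TOP;
    return code;
}

/* trivial tests:
     (out0 & out1) != 0  -> reject the whole line
     (out0 | out1) == 0  -> accept the whole line  */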

increpare
Guest
« Reply #15 on: September 13, 2009, 10:01:18 AM »

I've been looking into graph-related data-mining algorithms today -- specifically, stuff relating to the paper 'A Multi-Level Parallel Implementation of a Program for Finding Frequent Patterns in a Large Sparse Graph'.  I was hoping I'd get a chance to do something with Nauty again -- it's one of the few computational/mathematical libraries I've been able to make use of in the past -- but it's not entirely suited to this particular job.  An issue with the data-mining algorithms is that they're rather heavy, and may not be suited to real-time (or even turn-based) stuff; my requirements aren't as strict as theirs (and my problems are slightly different).  It's interesting to read about them though Smiley
Pishtaco
« Reply #16 on: September 13, 2009, 10:44:09 AM »

Quote
I've been looking into graph-related data-mining algorithms today -- specifically, stuff relating to the paper 'A Multi-Level Parallel Implementation of a Program for Finding Frequent Patterns in a Large Sparse Graph'. [...]
What are you using it for?

increpare
Guest
« Reply #17 on: September 13, 2009, 10:47:30 AM »

Quote
What are you using it for?
AI stuff; trying to get agents to be able to recognise repeating patterns in their experiences without having to define what those patterns should look like.

In the most complex scenario I end up having to write a scheduler.  In a simpler scenario I can have agents randomly test a couple of combinations each tick; there may be a lot of overlap and the results may not be as good, but at least I won't have to worry about slowness.  I'll probably try the latter first and see just how bad the results are.  Work on this is a couple of weeks off anyway.
LemonScented
« Reply #18 on: September 13, 2009, 03:21:12 PM »

I played with something last year that involved Delaunay triangulation and Voronoi diagrams (http://en.wikipedia.org/wiki/Delaunay_triangulation). I wanted a way to "shatter" polygons into arbitrary, interestingly-shaped pieces, and it worked rather well. Generate a random set of points inside the polygon, and each cell of the corresponding Voronoi diagram is a new polygon; once you've generated all the new polygons you can just remove the original shape. The way I implemented it was slightly different: I actually generated the Delaunay triangulation, then created the polygons by iterating through every edge in the triangulation and "slicing" the polygon(s) along the line perpendicular to that edge through its midpoint -- i.e. the edge's perpendicular bisector, which is exactly where the Voronoi cell boundaries lie.

Anyway, I can imagine a few other uses for these now that I know about them. A Voronoi diagram subdivides a space into cells, where each cell is the region closer to its generating point than to any other point. That could be good for knowing which security camera the player is closest to in a stealth game, or for generating a simpler, high-level model for RTS AIs to plan strategies and judge how the battle is going.

Similarly, Delaunay triangulation is quite useful if you ever need to turn a set of arbitrary points representing the edge of a shape into a triangle mesh suitable for reasonably optimal rendering or collision detection.
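For the "nearest camera" sort of query you don't even need to build the diagram explicitly -- point-in-cell membership is just nearest-site search. A small C sketch:

Code:
/* A point belongs to the Voronoi cell of its nearest site. Brute
   force is fine for a handful of sites (cameras, bases, ...); build
   the real diagram only when you need the cell geometry itself. */
int nearest_site(double px, double py,
                 const double *sx, const double *sy, int n)
{
    int best = 0;
    double best_d = -1.0;
    for (int i = 0; i < n; i++) {
        double dx = sx[i] - px, dy = sy[i] - py;
        double d = dx * dx + dy * dy;       /* squared distance is enough */
        if (best_d < 0 || d < best_d) { best_d = d; best = i; }
    }
    return best;
}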

st33d
Guest
« Reply #19 on: September 14, 2009, 05:01:04 AM »

That seam carving stuff is amazing.

And I've found it done in Flash!

Also really interested in the ID3 stuff. I reckon I'm gonna have a crack at that algorithm myself when I get the time.

Great stuff guys, keep it coming.

Also, whilst I'm reminded of Klingemann's work: a great video here on reading QR (Quick Response) tags in Flash. It's a pretty good argument for leveraging Flash's pixel-manipulation capabilities to speed up calculations. I'm already using BitmapData objects quite a lot to manipulate data that isn't necessarily image data.