Author Topic: Light simulation thing  (Read 9536 times)
ChevyRay
« Reply #40 on: February 08, 2013, 03:39:29 PM »

I too could not resist.



This one's in Flash... the only way I could get it to run at 60 FPS was to use a 200x150 resolution.

Here you can try it out in real time though (use the mouse):
http://dl.dropbox.com/u/1532933/LightTest.swf
BleakProspects
« Reply #41 on: February 08, 2013, 04:05:25 PM »

Wonderful! But I don't think the light is bouncing...

ChevyRay
« Reply #42 on: February 08, 2013, 04:10:12 PM »

No, it's not; I haven't added that yet. I want to add a reflection map to control surface specularity.
_Tommo_
« Reply #43 on: February 08, 2013, 07:37:19 PM »

That's awesome, and the light is much more "dense" than in previous attempts, while having no lag at all!

Meanwhile, I'm trying a cone-tracing-based approach that should work pretty well everywhere using... triangles.
Basically, it fakes a "tight light emitter" as an object with an "emissive edge" (a segment) and two "light vectors" (the light directions at points A and B of the emissive edge).
Then it finds the edges intersecting the truncated cone built from them, builds the contours of the lit area, and tessellates the mesh needed to display it.
Bounces are done by spawning another "emitter" for each (part of an) edge that was hit.

And you might even have noticed that this is essentially just a 2D stencil shadow.
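
A rough geometric sketch of that "emissive edge" idea, purely for illustration (Python/NumPy, not the actual triangle/stencil implementation; every name and number is invented): an emitter is a segment AB plus two light directions, the lit region is the quad obtained by extruding A and B along those directions, and wall edges falling inside that quad are where a bounce emitter would be spawned.

Code:
# Geometric sketch of the "emissive edge" emitter; illustration only.
import numpy as np

def lit_quad(a, b, dir_a, dir_b, reach):
    """Corners of the truncated-cone-like lit region: segment AB extruded
    along the two light directions (assumed to point to the same side)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    da = np.asarray(dir_a, float) / np.linalg.norm(dir_a)
    db = np.asarray(dir_b, float) / np.linalg.norm(dir_b)
    return np.array([a, b, b + reach * db, a + reach * da])

def point_in_convex_quad(p, quad):
    """True if p lies inside the convex quad (consistent winding assumed)."""
    p = np.asarray(p, float)
    signs = []
    for i in range(4):
        ex, ey = quad[(i + 1) % 4] - quad[i]
        vx, vy = p - quad[i]
        signs.append(ex * vy - ey * vx)        # 2D cross product
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

def edge_touches_light(edge, quad, samples=16):
    """Cheap test: sample points along a wall edge and check whether any fall
    inside the lit quad.  A hit is where a bounce "emitter" would be spawned."""
    p0, p1 = np.asarray(edge[0], float), np.asarray(edge[1], float)
    return any(point_in_convex_quad(p0 + t * (p1 - p0), quad)
               for t in np.linspace(0.0, 1.0, samples))

# Example: emitter on the segment (0,0)-(0,1) shining right, wall at x = 5.
quad = lit_quad((0, 0), (0, 1), (1, -0.2), (1, 0.2), reach=10.0)
print(edge_touches_light(((5, -1), (5, 2)), quad))   # True: the wall is partly lit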

Schrompf
« Reply #44 on: February 09, 2013, 05:08:04 AM »

I've already lamented that I don't have time to experiment myself, but if I could, I'd still go for the GPU approach. I wouldn't simulate explicit particles, though - they're just a means to an end. I'd use a cone-tracing technique similar to how Crassin et al. gather global illumination in http://hal.archives-ouvertes.fr/docs/00/65/01/73/PDF/GIVoxels-pg2011-authors.pdf

a) Render direct lighting from the light source, for example by projecting silhouette edges or by cascaded ray marching.
b) Downsample repeatedly by accumulating 2x2 pixels weighted by their "openness": walls = 0, open space = 1.
c) For every wall pixel, find the normal and cast a few cones along it, sampling from the increasing mip levels produced in b).
d) Repeat b) and c) for additional bounces.

It's just a rough sketch of an idea. Maybe you'd have to change the downsampling weighting function to cover only walls instead of free area, so that step c) accumulates only the light reflected from walls. I really need to try this out at some point; I've been toying with the idea for ages as a way to get cheap global illumination for my game.
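
A minimal CPU sketch of steps b) and c) in Python/NumPy, for illustration only (this is not shader code, and the weights and parameters are made up): build an "openness" mip chain by 2x2 averaging, then step away from a wall pixel along its normal, reading from ever coarser mips so the sample footprint grows with distance like a crude cone.

Code:
# CPU sketch of steps b) and c); illustration only, all parameters invented.
import numpy as np

def build_openness_mips(openness, levels=6):
    """Step b): repeatedly average 2x2 blocks of the openness map
    (walls = 0, open space = 1)."""
    mips = [openness.astype(float)]
    for _ in range(levels):
        m = mips[-1]
        h, w = (m.shape[0] // 2) * 2, (m.shape[1] // 2) * 2
        m = m[:h, :w]
        mips.append(0.25 * (m[0::2, 0::2] + m[1::2, 0::2] +
                            m[0::2, 1::2] + m[1::2, 1::2]))
    return mips

def cone_sample(mips, pos, normal, steps=6):
    """Step c), one cone: step away from a wall pixel along its normal,
    reading from coarser and coarser mips so the footprint (and the step
    length) roughly doubles with distance.  Returns an 'openness seen' value."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    pos = np.asarray(pos, float)
    total, weight, dist = 0.0, 0.0, 1.0
    for level in range(min(steps, len(mips))):
        p = pos + n * dist
        m = mips[level]
        y = int(np.clip(p[0] / 2 ** level, 0, m.shape[0] - 1))
        x = int(np.clip(p[1] / 2 ** level, 0, m.shape[1] - 1))
        w = 1.0 / (1.0 + dist)          # nearer samples count more
        total += w * m[y, x]
        weight += w
        dist *= 2.0
    return total / weight

# Tiny example: a 64x64 room with a solid wall strip down the middle.
openness = np.ones((64, 64))
openness[:, 30:34] = 0.0
mips = build_openness_mips(openness)
print(cone_sample(mips, pos=(32, 29), normal=(0, -1)))   # cone facing away from the wall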
bart_the_13th
« Reply #45 on: February 19, 2013, 08:33:30 PM »

I just couldn't help it:
http://filebin.gamedevid.org/v/1018a/

The unblurred version:
http://filebin.gamedevid.org/v/10189/
gimymblert
« Reply #46 on: February 19, 2013, 09:34:01 PM »

I was going to point to Crassin's and Samuli's work too.

JobLeonard
« Reply #47 on: June 24, 2013, 02:28:22 AM »

Plus, you can't just blur straight out; you have to sample the walls in some way while doing it to prevent unwanted light leaking.
Free bloom, at least.
oahda
« Reply #48 on: September 22, 2013, 06:14:34 PM »

Yaaaay, bumping.

This thread deserves it, anyway.

So, I just have to ask.

I've never offloaded stuff like this onto the GPU, or used stencil buffers, or whatever it is that the OP and others are doing here. I have no idea how to do it. Could anyone give some directions? I'd like to play around with a similar system.

If there's already a free system I can use with SDL in C++ (what's the status on the OP's system, for example?), I wouldn't say no to that either, whether to learn from it or to steal it completely.

powly
« Reply #49 on: September 23, 2013, 04:41:23 AM »

My implementation was very brute force: I updated the positions of a large set of particles (though drawn as lines between their current and previous positions) by ping-ponging between two textures and colliding them against a distance-field texture. Far more elegant approaches probably exist - all kinds of blur systems or light propagation could probably be done; the problem is not as hard in 2D as it is in 3D. Or rather, the problem itself is more or less the same, but far less computation is required.
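
A rough CPU analogue of that ping-pong scheme in Python/NumPy, for illustration only (no real GPU textures here, and all names and numbers are invented): one array is the "current" particle state, a copy is the "previous" one, the distance field is stood in for by an occupancy grid, and each step moves the particles, splats the segment between old and new positions into a light buffer, and kill-and-respawns anything that hits a wall.

Code:
# CPU sketch of the ping-pong particle update; illustration only.
import numpy as np

rng = np.random.default_rng(0)
H, W, N = 128, 128, 4096

# Stand-in for the distance field: a boolean occupancy grid (True = wall).
walls = np.zeros((H, W), dtype=bool)
walls[60:68, 40:100] = True

def spawn(n):
    """Fresh photons at the light source with random directions."""
    pos = np.full((n, 2), 20.0)
    ang = rng.uniform(0, 2 * np.pi, n)
    return pos, np.stack([np.cos(ang), np.sin(ang)], axis=1)

pos, vel = spawn(N)
light = np.zeros((H, W))

for _ in range(200):                      # one iteration = one ping-pong update
    prev = pos.copy()                     # the "previous" texture
    pos = pos + vel                       # the "current" texture
    # Splat a few samples along the prev->pos segment (the drawn line).
    for t in np.linspace(0.0, 1.0, 4):
        p = prev + t * (pos - prev)
        yi = np.clip(p[:, 1].astype(int), 0, H - 1)
        xi = np.clip(p[:, 0].astype(int), 0, W - 1)
        np.add.at(light, (yi, xi), 0.25)
    # Collide against the field: kill and respawn anything inside a wall or
    # off screen (a fancier version would reflect instead of respawning).
    yi = np.clip(pos[:, 1].astype(int), 0, H - 1)
    xi = np.clip(pos[:, 0].astype(int), 0, W - 1)
    dead = (walls[yi, xi] | (pos[:, 0] < 0) | (pos[:, 0] >= W)
                          | (pos[:, 1] < 0) | (pos[:, 1] >= H))
    pos[dead], vel[dead] = spawn(int(dead.sum()))

print("total light deposited:", light.sum())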
oahda
« Reply #50 on: September 23, 2013, 06:27:37 AM »

Ah, yes, I want to do it in 2D too, I might add.

Reading up a little on stencil buffers and so on, it seems to me that the calculations and raycasting would still need to be done just like in a normal program, in the normal logic code - so where does the "extra power from the GPU" that people keep mentioning come into this?

muki
« Reply #51 on: September 23, 2013, 02:35:09 PM »

Quote from: ChevyRay
I too could not resist.

This one's in Flash... the only way I could get it to run at 60 FPS was to use a 200x150 resolution.

Here you can try it out in real time though (use the mouse):
http://dl.dropbox.com/u/1532933/LightTest.swf

This is so fucking cool! I imagine this kind of lighting, even with one-tenth the rays, could be used as projectiles in a side-scrolling shooter or something. Energy balls! Even at one-tenth the precision and ray count, it would still make jaws drop in an actual game!
muki
« Reply #52 on: September 23, 2013, 02:38:29 PM »

You guys are so awesome! It makes me hate the fact that I have poor math/programming skills and mostly rely on GameMaker-type stuff for my projects.

More demos! More games!
TheHermit
« Reply #53 on: September 27, 2013, 07:39:10 AM »

Quote from: oahda
Ah, yes, I want to do it in 2D too, I might add.

Reading up a little on stencil buffers and so on, it seems to me that the calculations and raycasting would still need to be done just like in a normal program, in the normal logic code - so where does the "extra power from the GPU" that people keep mentioning come into this?

Basically, you'd load the occluders into a texture and then, for each pixel of the level/screen, fling a couple of photons on the GPU and update a lightmap texture based on the result. However, without coherency between the samples for neighboring pixels, you're going to get a much noisier appearance.
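
A CPU sketch of that per-pixel ("gather") idea in Python/NumPy, purely for illustration (the occluder layout, sample counts and names are all invented): each pixel independently marches a few jittered rays toward the light through the occluder texture and records how many got through - trivially parallel per pixel, but noisy exactly as described.

Code:
# CPU sketch of the per-pixel photon/visibility idea; illustration only.
import numpy as np

rng = np.random.default_rng(1)
H, W = 64, 64
occluders = np.zeros((H, W), dtype=bool)
occluders[30:34, 10:54] = True             # a horizontal wall
light_pos = np.array([8.0, 32.0])          # (row, col)

def visible(p, q, steps=48):
    """March from p to q through the occluder texture; False if blocked."""
    for t in np.linspace(0.0, 1.0, steps):
        s = p + t * (q - p)
        if occluders[int(s[0]), int(s[1])]:
            return False
    return True

SAMPLES = 4                                # "a couple of photons" per pixel
lightmap = np.zeros((H, W))
for y in range(H):
    for x in range(W):
        hits = 0
        for _ in range(SAMPLES):
            jitter = rng.uniform(-1.0, 1.0, 2)   # incoherent samples -> noise
            if visible(np.array([y, x], float), light_pos + jitter):
                hits += 1
        lightmap[y, x] = hits / SAMPLES

print("lit fraction:", (lightmap > 0.5).mean())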

Another way to do it would be to assign a separate GPU thread to each photon, and accumulate a fixed-length list of sites (pixels) each photon visits, which you'd then collapse into a light map using ~log(N) extra GPU passes to implement one of the 'sum N numbers in parallel' algorithms and get the final light value. This would preserve coherency between pixels (and give you the streaky light rays you get in the software renderer), but would be harder to code.

Maybe more shader-savvy people can suggest a better way?
oahda
« Reply #54 on: September 27, 2013, 08:18:06 AM »

Yes, yes, but,

Quote from: TheHermit
and accumulate a fixed-length list of sites (pixels) each photon visits

this must be done normally, no? Just checking that I'm not missing some core concept here, because, obviously, when a photon collides with something, I have to determine whether to kill it or let it pass through, whether and how to refract it, whether to assign it a new colour if, for example, it shines through a blue sapphire, and so on, right? In the normal program, no?

Conker534
« Reply #55 on: September 27, 2013, 08:21:03 AM »

purdy
oahda
« Reply #56 on: September 27, 2013, 09:03:23 AM »

dum

TheHermit
« Reply #57 on: September 27, 2013, 09:06:26 AM »

Quote from: oahda
Yes, yes, but,

Quote from: TheHermit
and accumulate a fixed-length list of sites (pixels) each photon visits

this must be done normally, no? Just checking that I'm not missing some core concept here, because, obviously, when a photon collides with something, I have to determine whether to kill it or let it pass through, whether and how to refract it, whether to assign it a new colour if, for example, it shines through a blue sapphire, and so on, right? In the normal program, no?

Well, in serial code you can have an accumulation buffer and march the photon forward, adding to each site as it goes. On GPU, you have to do everything in parallel, which means that you would have problems with two threads trying to simultaneously write to the same pixel of the lightmap.

So instead, you basically have every thread create a list of pixels it wants to add to, and then you do a hierarchical addition to actually get the final texture. This would be a list of every pixel the photon passes through, not just the places where it collides.

Because you have as many lists as photons, if you had to do that addition on the CPU, the GPU wouldn't be gaining you all that much (at best a factor-of-2 improvement). So you do the addition in parallel too.

At least, that's my thinking on how you might do this on the GPU. There may be shortcuts that make it a bit simpler.
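
A CPU sketch in Python/NumPy of that per-photon-list plus hierarchical-addition idea, for illustration only (here each photon gets its own small scratch lightmap rather than a compact pixel list, and every number is made up): each "thread" records the sites its photon passes through into private memory, and then the per-photon maps are pairwise-summed in log2(N) passes - the "sum N numbers in parallel" shape mentioned above.

Code:
# CPU sketch of per-photon site lists + log2(N) pairwise reduction; illustration only.
import numpy as np

rng = np.random.default_rng(2)
H, W = 64, 64
N_PHOTONS = 256            # one "thread" per photon (power of two for the reduction)
LIST_LEN = 32              # fixed-length visit list per photon

# Each photon writes only into its own scratch lightmap, so no two "threads"
# ever touch the same memory - the stand-in for the per-thread pixel list.
partial = np.zeros((N_PHOTONS, H, W))
origin = np.array([32.0, 32.0])
for i in range(N_PHOTONS):
    ang = rng.uniform(0, 2 * np.pi)
    step = np.array([np.sin(ang), np.cos(ang)])
    p = origin.copy()
    for _ in range(LIST_LEN):                # the fixed-length list of sites
        y, x = int(p[0]), int(p[1])
        if not (0 <= y < H and 0 <= x < W):
            break
        partial[i, y, x] += 1.0              # "record this site"
        p += step

# Hierarchical addition: pairwise-sum the per-photon maps in log2(N) passes.
passes = 0
while partial.shape[0] > 1:
    half = partial.shape[0] // 2
    partial = partial[:half] + partial[half:]
    passes += 1
lightmap = partial[0]
print("reduction passes:", passes, "- total energy:", lightmap.sum())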
oahda
« Reply #58 on: September 27, 2013, 09:16:16 AM »

Quote from: TheHermit
Well, in serial code you can have an accumulation buffer and march the photon forward, adding to each site as it goes. On GPU, you have to do everything in parallel, which means that you would have problems with two threads trying to simultaneously write to the same pixel of the lightmap.
I guess I have a little reading up to do.

Quote from: TheHermit
So instead, you basically have every thread create a list of pixels it wants to add to, and then you do a hierarchical addition to actually get the final texture. This would be a list of every pixel the photon passes through, not just the places where it collides.
Well, "the places it collides" is what is confusing me and tripping me up. How could it possibly detect something like that on the GPU? And creating those lists of pixels.

Quote from: TheHermit
Because you have as many lists as photons, if you had to do that addition on the CPU, the GPU wouldn't be gaining you all that much (at best a factor-of-2 improvement). So you do the addition in parallel too.

At least, that's my thinking on how you might do this on the GPU. There may be shortcuts that make it a bit simpler.
Again, I believe I need to do some reading up... Links would be greatly appreciated, if you have any.

BleakProspects
« Reply #59 on: September 27, 2013, 09:16:34 AM »

Oh, you resurrected my thread!

I got a real-time version working purely on the GPU in XNA:



The source code is here:
http://www.dwarfcorp.com/Prototypes/LightSim/LightSim2.zip

Basically it works like this:

There are N^2 photons (my GPU could handle N = 256 at 60 FPS).

I store separate N x N textures for different aspects of a photon.

The textures are: status, position, angle, and color

I have different render passes on the GPU that do different parts of the update function.

The first render pass updates the status texture. This stores whether or not the particle should be respawned, and whether its light/angle should change.

The second pass updates the position texture by computing the velocity of the particle from its angle and adding it to the position. It also respawns particles' positions.

The third pass updates velocities by considering the status texture.

The fourth pass converts every photon to a vertex and renders it to the light buffer with additive blending.

The final pass renders the light buffer and the material buffer together using HDR blending.
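
For anyone who'd rather skim the pass structure than open the project, here's a rough CPU analogue in Python/NumPy. It's only a reading of the description above, not the actual XNA/HLSL source, and all the array names, sizes, and constants are invented: one N x N array per photon attribute, and each "pass" is a whole-array update.

Code:
# CPU analogue of the multi-pass photon textures described above; illustration only.
import numpy as np

rng = np.random.default_rng(3)
N = 64                                     # N^2 photons (the post uses N = 256)
H, W = 128, 128
walls = np.zeros((H, W), dtype=bool)
walls[:, 100:104] = True

def pass_status(position):
    """Pass 1: mark photons that left the screen or hit a wall."""
    x = np.clip(position[..., 0].astype(int), 0, W - 1)
    y = np.clip(position[..., 1].astype(int), 0, H - 1)
    off = ((position[..., 0] < 0) | (position[..., 0] >= W) |
           (position[..., 1] < 0) | (position[..., 1] >= H))
    return off | walls[y, x]

def pass_position(position, angle, status):
    """Pass 2: velocity from angle, add to position, respawn dead photons."""
    vel = np.stack([np.cos(angle), np.sin(angle)], axis=-1)
    position = position + vel
    position[status] = 64.0                # respawn at the light source
    return position

def pass_angle(angle, status):
    """Pass 3: give respawned photons a fresh random direction."""
    return np.where(status, rng.uniform(0, 2 * np.pi, angle.shape), angle)

def pass_accumulate(position, light):
    """Pass 4: additively splat every photon into the light buffer."""
    x = np.clip(position[..., 0].astype(int), 0, W - 1).ravel()
    y = np.clip(position[..., 1].astype(int), 0, H - 1).ravel()
    np.add.at(light, (y, x), 0.05)

# One N x N "texture" per photon attribute (the color texture is omitted here).
position = np.full((N, N, 2), 64.0)
angle = rng.uniform(0, 2 * np.pi, (N, N))
light = np.zeros((H, W))

for _ in range(300):                       # pass 5 (the HDR composite) is omitted
    status = pass_status(position)
    position = pass_position(position, angle, status)
    angle = pass_angle(angle, status)
    pass_accumulate(position, light)
print("brightest cell:", light.max())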
