powly
« Reply #20 on: February 05, 2013, 10:30:50 PM »
"while rasterizing a line"
Rasterizing... seriously? Current graphics hardware is the result of around fifteen years of iteration on doing rasterization as fast as possible, and your CPU implementation is faster? I find that hard to believe, even though lines aren't usually a priority for GPU vendors.
JakobProgsch
« Reply #21 on: February 06, 2013, 02:21:23 AM »
It's not the rasterizing that's problematic but checking for collision while doing it... There's no way to tell the GPU "rasterize this line until you hit a pixel with a specific value, then stop"...
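A CPU-side sketch (in Python, with a made-up toy occupancy grid) of exactly the operation that's hard to express on the GPU: marching along a line and stopping at the first occupied cell.

```python
def march_line(grid, x0, y0, x1, y1):
    """Step along the line (x0,y0)->(x1,y1) one cell at a time and stop
    at the first cell where grid[y][x] is truthy. Returns (x, y, hit):
    the last cell reached and whether it was a collision.
    grid is a 2D list of 0/1 occupancy values (a toy collision mask)."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return x0, y0, bool(grid[y0][x0])
    for i in range(1, steps + 1):
        x = x0 + round(dx * i / steps)
        y = y0 + round(dy * i / steps)
        if grid[y][x]:
            return x, y, True   # stop: this is the part a rasterizer can't do
    return x1, y1, False
```

The fixed-function rasterizer has no equivalent of that early `return` — it commits to the whole primitive before any fragment runs.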
_Tommo_
« Reply #22 on: February 06, 2013, 04:33:39 AM »
Rasterizing.. Seriously - current graphics hardware is based on around fifteen years of iterations to do rasterization as fast as possible, and your CPU implementation is faster? I find that hard to believe, even though lines aren't usually the priority for GPU vendors.
If you want to do this entirely on the GPU, you'd need to rasterize the line *in a shader*, so you get nothing out of those fifteen years of iteration... but yeah, finding a way to exploit the GPU's rasterizer would be very nice.
powly
« Reply #23 on: February 06, 2013, 04:42:37 AM »
Jakob: You can do this with a vertex shader. Set one end of the line wherever you left off last iteration and move the other end in steps, reading the background hit layer from a texture. You can't march infinitely, though, so there would probably be a noticeable speed of light. And this isn't how you'd really do it anyway, because the endpoint must be saved for the next iteration. The simulation step should be done by ping-ponging between two position textures, followed by a rendering step that reads the old and new positions and draws lines between them, accumulating into a light buffer.
Tommo: You only need to give the vertex shader the two endpoints, so you get fully hardware-accelerated line rasterization.
It seems I'm either very bad at explaining this or I'm missing something - I'll give it a try now and see what happens.
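The ping-pong scheme can be sketched on the CPU - here in Python, with plain lists standing in for the two position textures (a GPU version would alternate between two FBOs or textures instead):

```python
def simulate_ping_pong(positions, velocity, steps):
    """Toy ping-pong update: the 'read' buffer holds the previous photon
    positions, 'write' receives the new ones; after each step the two
    buffers swap roles. The (old, new) pairs collected in `segments` are
    what a render pass would draw lines between, accumulating light."""
    read = list(positions)
    write = [None] * len(positions)
    segments = []
    for _ in range(steps):
        for i, (x, y) in enumerate(read):
            vx, vy = velocity[i]
            write[i] = (x + vx, y + vy)
            segments.append(((x, y), write[i]))  # old -> new line segment
        read, write = write, read                # swap buffers
    return read, segments
```

The swap is the whole point: neither buffer is ever read and written in the same pass, which is exactly the constraint textures impose on the GPU.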
JakobProgsch
« Reply #24 on: February 06, 2013, 05:51:01 AM »
That's not the same thing as "checking for collision while rasterizing", though, which was the original statement. Sure, the rasterization after finding the endpoint is fast, but by that point you've already run a shader with tons of texture reads and dynamic branching - you don't benefit from the lightning-fast rasterization for the collision part, if you will.
Anyway, the advancement of the photons could be done via transform feedback, alternating between two buffers (or even with image objects, if using OpenGL 4 is an option). Collision lookups could also be reduced by abusing mipmap levels (essentially an octree of the scene).
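The mipmap trick can be sketched in Python, with nested lists standing in for texture mip levels (this assumes a square, power-of-two collision mask; on the GPU you'd sample coarser mips with textureLod/texelFetch):

```python
def build_occupancy_pyramid(mask):
    """Build coarser levels of a square 0/1 collision mask by OR-ing
    2x2 blocks -- the 'mipmap as quadtree' idea: if a coarse texel is 0,
    the whole region beneath it is empty and a ray can skip it in one go."""
    levels = [mask]
    n = len(mask)
    while n > 1:
        n //= 2
        prev = levels[-1]
        levels.append([[1 if (prev[2*y][2*x] or prev[2*y][2*x+1] or
                              prev[2*y+1][2*x] or prev[2*y+1][2*x+1]) else 0
                        for x in range(n)] for y in range(n)])
    return levels

def region_empty(levels, level, x, y):
    """True if texel (x, y) at the given pyramid level is empty, i.e.
    nothing anywhere in the corresponding block of the base mask."""
    return levels[level][y][x] == 0
```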
powly
« Reply #25 on: February 06, 2013, 08:43:08 AM »
... which nicely brings us exactly to where I started: accumulation is not the real problem, collisions are. Octrees might indeed be good, but with reasonable sprite counts distance fields would do the job too, and might be a tad easier to implement. Not making the light instant-hit would probably also cut it.
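A toy Python sketch of the distance-field idea - 2D sphere tracing against a single hypothetical circle obstacle. At every point the field reports the distance to the nearest obstacle, so the ray can safely jump that far instead of crawling pixel by pixel:

```python
import math

def circle_sdf(px, py, cx, cy, r):
    """Signed distance from (px, py) to a circle of radius r at (cx, cy)."""
    return math.hypot(px - cx, py - cy) - r

def raymarch(ox, oy, dx, dy, sdf, max_dist=100.0, eps=1e-3):
    """2D sphere tracing: the distance field gives a safe step size at
    every point, so the ray leaps over empty space. (dx, dy) must be a
    unit direction. Returns (hit, distance_travelled)."""
    t = 0.0
    while t < max_dist:
        d = sdf(ox + dx * t, oy + dy * t)
        if d < eps:
            return True, t
        t += d
    return False, t
```

For a whole scene you'd take the min over all obstacle SDFs, or bake the field into a texture.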
« Last Edit: February 06, 2013, 08:48:25 AM by powly »
BleakProspects
« Reply #26 on: February 06, 2013, 04:54:49 PM »
In the process of accumulating/interpolating light with the GPU: this is the result of a single frame. Each time a photon collides with an object, the GPU is informed that the "start point" and color of the photon have changed, and from there the line is interpolated. These frames will be added together into a single HDR image.
"... which nicely brings us exactly where I started; accumulation is not the real problem, collisions are. Octrees might indeed be good, but with reasonable sprite counts distance fields would do the job too and maybe be a tad easier to implement."
Octrees? Distance fields? I get O(1) per collision with my collision mask. Unless you're talking about dynamic environments... an octree would be O(log n) (and I'm pretty sure you mean quadtree, since this is 2D). A distance field would obviously also give me O(1), but the distance calculation would be an unnecessary second step. I should also mention that I don't care if light penetrates stuff (as long as it's not too much).
_Tommo_
« Reply #27 on: February 06, 2013, 05:52:08 PM »
Cool stuff Is it faster? Meanwhile, I added collisions with apparently no cost! I'm using an octree and AABBs in place of a pixel grid, and this is actually faster & better, because:
- moving objects!
- I can easily compute where a ray will end and then blindly rasterize it - no reads to check for collision at each step (those are like 25% of the cost of your algorithm, I'd guess)
- more properties: I can derive a normal, plus specular and refraction coefficients, for tight reflections and even refraction caustics (prisms are quite awesome).
But this still pegs my beefy i7 at 100%, no way it can become a game... and the image is still quite noisy and "dirty". I'm thinking about an approach based on "reflection frustums", which are basically quads extrapolated from a surface when it's hit by a light. That should be fast enough to run even on mobile.
Btw, sorry if I jumped on your idea and/or your thread, but it was really cool... yeah, I might have been rude. I'll leave it to you if you want
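The "compute where the ray ends, then blindly rasterize" step boils down to a ray-vs-AABB slab test. A Python sketch of that test (not _Tommo_'s actual code; axis-aligned boxes only):

```python
def ray_aabb(ox, oy, dx, dy, xmin, ymin, xmax, ymax):
    """Slab test: returns the entry parameter t of the ray
    origin + (dx, dy) * t against the box, or None on a miss.
    Once t is known, the segment from the origin to the hit point
    can be rasterized blindly -- no per-pixel collision reads."""
    tmin, tmax = 0.0, float('inf')
    for o, d, lo, hi in ((ox, dx, xmin, xmax), (oy, dy, ymin, ymax)):
        if d == 0.0:
            if o < lo or o > hi:       # parallel ray outside the slab
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            tmin, tmax = max(tmin, t1), min(tmax, t2)
            if tmin > tmax:            # slabs don't overlap: miss
                return None
    return tmin
```

With an octree on top, you'd run this against the nearest candidate boxes only.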
« Last Edit: February 06, 2013, 05:57:13 PM by _Tommo_ »
BleakProspects
« Reply #28 on: February 06, 2013, 07:03:53 PM »
GPU drawing is getting better, but I can't figure out how to do global tonemapping. I'll probably do two passes: one to find the min/max values and one to do the tonemapping. Stick with me, I'm just learning shaders.
powly
« Reply #29 on: February 06, 2013, 10:56:16 PM »
You can just adjust the brightness based on the last frame; there usually aren't that many sudden jumps in brightness. And yes, I was thinking about dynamic scenes. I'm interested in how your collision mask thing works, though - or do you just step one pixel at a time?
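The "adjust based on the last frame" idea, as a tiny Python sketch (the smoothing constant is made up; any eased blend toward the new target works):

```python
def update_exposure(prev_exposure, frame_max, smoothing=0.9):
    """Temporal auto-exposure: the scale applied this frame is derived
    from the *previous* frame's measured maximum, eased toward the new
    target so sudden brightness jumps don't cause visible flicker."""
    target = 1.0 / max(frame_max, 1e-6)   # guard against an all-black frame
    return smoothing * prev_exposure + (1.0 - smoothing) * target
```

Each frame you multiply the light buffer by the returned exposure and measure its max for the next call.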
BleakProspects
« Reply #30 on: February 07, 2013, 08:04:30 AM »
Photons usually step between 1 and 4 pixels per frame. They just sample the collision mask and get a random velocity whenever they detect a collision.
My scheme for rendering the lines on the GPU has some issues... photons will often collide in the simulation before the GPU can draw the line from the light source to their collision point. So there are these sort of "pre-shadows": it's much darker near the edges of obstacles, since only the line *leaving* the obstacle is drawn, when both the line leaving and the line going from the light source to the obstacle need to be drawn. To fix this, I'll have to store a few previous data points for each photon so that the GPU can draw lines between them.
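A toy Python version of the stepping scheme described above (the grid, speeds and bounce rule are simplified stand-ins; the real thing scatters with a proper distribution rather than a uniform random angle):

```python
import math, random

def step_photon(x, y, angle, speed, mask, rng=random):
    """One photon update: advance up to `speed` pixels along the current
    direction, sampling the collision mask each pixel; on a hit, stay at
    the last free cell and pick a fresh random direction."""
    vx, vy = math.cos(angle), math.sin(angle)
    for _ in range(speed):
        nx, ny = x + vx, y + vy
        if mask[int(ny)][int(nx)]:                 # hit: bounce randomly
            return x, y, rng.uniform(0.0, 2.0 * math.pi)
        x, y = nx, ny
    return x, y, angle
```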
JakobProgsch
« Reply #31 on: February 07, 2013, 08:14:40 AM »
Not knowing the details, but wouldn't it be sufficient to randomize when the start point of the line collides instead of when the end point does? Essentially just reverse the order from "1. move, 2. randomize if collided" to "1. randomize if collided, 2. move" - that way the position will always step into the obstacle.
BleakProspects
« Reply #32 on: February 07, 2013, 08:32:12 AM »
Not knowing the details, but wouldn't it be sufficient to randomize when the start point of the line collides instead of when the end point does? Essentially just reverse the order from "1. move, 2. randomize if collided" to "1. randomize if collided, 2. move" - that way the position will always step into the obstacle.
Well, the issue is that my "move" step runs much, much faster than frames are rendered (I'm running 8 light-simulation threads at the highest rate the CPU will allow). To compensate, I have the GPU interpolate lines between the last position sent and the current one. The problem is that if the GPU's assumption - that the light moved in a straight line through the entire frame - is broken (because the light collided and started moving along another straight line), I have to reset the start of the line to the collision point, and some of the pixels the GPU would have drawn had the light continued along a straight line are lost, resulting in a "shadow" between the light source and the obstacle.
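Storing a short per-photon history of breakpoints, as mentioned earlier in the thread, could look like this Python sketch (the names and history length are made up; drawing a segment per consecutive pair covers both the path into an obstacle and the path leaving it):

```python
def record_position(history, point, max_points=4):
    """Append a photon position (including mid-frame collision points)
    to its short history, keeping only the last few entries."""
    history.append(point)
    del history[:-max_points]   # drop everything but the newest entries
    return history

def segments_to_draw(history):
    """Line segments between consecutive stored points; drawing all of
    them leaves no gap between the light source and the obstacle."""
    return list(zip(history, history[1:]))
```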
powly
« Reply #33 on: February 08, 2013, 06:22:39 AM »
Okay, I think I've hit a reason for another post: a million photon-things, 720p, 60fps. It accumulates over a few frames, though; otherwise it'd look even less smooth.
rivon
« Reply #34 on: February 08, 2013, 06:30:40 AM »
Why don't you guys blur the resulting light? That way it would look smoother and overall better (especially in the low-light areas).
powly
« Reply #35 on: February 08, 2013, 06:35:36 AM »
"something hasn't been implemented" != "something isn't going to be implemented"
Plus you can't just blur straight out - you have to sample the walls in some way while doing it, to prevent unwanted light leaking.
I'll stop hogging Bleak's thread now, though!
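A Python sketch of a blur that samples the walls to prevent leaking - here a naive box blur restricted to non-wall neighbours, which is just one possible way to do it:

```python
def wall_aware_blur(light, walls):
    """Box blur that respects a wall mask: each output pixel averages
    itself and its free (non-wall) 4-neighbours only, so light can't
    bleed across an obstacle the way a naive blur would let it."""
    h, w = len(light), len(light[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if walls[y][x]:
                continue                      # walls stay dark
            total, count = light[y][x], 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and not walls[ny][nx]:
                    total += light[ny][nx]
                    count += 1
            out[y][x] = total / count
    return out
```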
BleakProspects
« Reply #36 on: February 08, 2013, 09:26:34 AM »
Very nice. I may stop working on mine anyway - it was just a curiosity. I now christen this thread "2D Photon Mapping."
Edit: Also, I think the "leaking" is necessary for the illusion of lighting (otherwise everything looks flat). It's also important to have a medium-darkness background and fairly bright materials for the obstacles. I'm not sure what's going on with your render, but it looks like you're adding the light to the scene rather than multiplying it.
« Last Edit: February 08, 2013, 02:04:48 PM by BleakProspects »
oahda
« Reply #37 on: February 08, 2013, 09:31:08 AM »
MOAR.
WHY ARE YOU DOING THIS TO ME? I DON'T HAVE TIME TO EXPERIMENT RIGHT NOW. DON'T AWAKEN MY LUSTS. ;_;
Ben_Hurr
« Reply #38 on: February 08, 2013, 10:13:51 AM »
I have a huge science boner for this thread now.
Fallsburg
« Reply #39 on: February 08, 2013, 10:37:35 AM »
Blur using anisotropic diffusion; it should preserve edges while blurring everything else.
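A minimal Python sketch of Perona-Malik-style anisotropic diffusion, in 1D for brevity (the 2D image version adds north/south terms; k and dt are made-up parameters):

```python
import math

def perona_malik_1d(signal, iterations=10, k=1.0, dt=0.2):
    """1D anisotropic diffusion: the smoothing strength falls off with
    gradient magnitude via g = exp(-(grad/k)^2), so flat regions get
    blurred while sharp edges are left almost untouched."""
    u = list(signal)
    for _ in range(iterations):
        nxt = list(u)
        for i in range(1, len(u) - 1):
            ge = u[i + 1] - u[i]               # east gradient
            gw = u[i - 1] - u[i]               # west gradient
            ce = math.exp(-(ge / k) ** 2)      # edge-stopping weights
            cw = math.exp(-(gw / k) ** 2)
            nxt[i] = u[i] + dt * (ce * ge + cw * gw)
        u = nxt
    return u
```

Applied to the light buffer, noise in open areas smooths out while shadow boundaries (large gradients) survive.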