Author Topic: The happy programmer room  (Read 678851 times)
JWki
« Reply #5020 on: September 20, 2017, 01:25:00 AM »

Just want to mention that with RenderDoc you can actually do step-by-step debugging for pixel shaders, at least for D3D/HLSL. It's a bit more difficult because you step through bytecode, obviously, but immensely helpful.

powly
« Reply #5021 on: September 20, 2017, 01:36:20 AM »

The VS HLSL debugger steps through the actual shader source, which is pretty neat. It's probably the same machinery that drives the other debuggers, running their software implementation of D3D. The only real downside is that it takes half a minute to load up. And, as a small annoying detail, it seems to enforce strict register usage, so a variable's value can't be checked after its last use.

gimymblert
« Reply #5022 on: September 20, 2017, 01:49:48 PM »

This prompts a tangentially related question: how much of a game can you code entirely on the GPU side?

powly
« Reply #5023 on: September 20, 2017, 02:06:07 PM »

I've been thinking about this, and there's really nothing stopping you from using the GPU for everything apart from I/O (controls, networking, asset loading) and, obviously, the host-side support for the GPU calls -- though at least CUDA can do kernel launches from inside compute kernels, so it might be possible to have infinite tail recursion and no host code at some point? Not sure it'd ever be worth it though, with all kinds of synchronization problems and so on.

For example, the printf could also have its formatting expanded on the GPU, and it could directly set up an indirect draw call to render the output; then the host would only have to call DrawIndirect with some proper parameters and you'd get the results directly in the framebuffer.

Btw, has anyone tried the shader-printf out? Any thoughts on how to improve the usage, either C++ or GLSL?

Crimsontide
« Reply #5024 on: September 20, 2017, 02:15:23 PM »

Quote from: powly on September 20, 2017, 02:06:07 PM
Btw, has anyone tried the shader-printf out? Any thoughts on how to improve the usage, either C++ or GLSL?

I used the tessellation shader stage to output glyphs for text, if that's what you mean. Each glyph was a single 'vertex' in a buffer; the tessellation stages expanded that into 'n' triangles, 'n' being the number of tris for a particular glyph (I wasn't using a standard bitmap; each character was actually a 2D mesh). The geometry shader then transformed the tessellation-generated tris, handled texture coords, etc. It worked pretty well.

I guess you could use a compute kernel to create the initial text buffer; seems like it would be pretty straightforward.

Or were you thinking of something else?

ProgramGamer
« Reply #5025 on: September 20, 2017, 02:46:35 PM »

Quote from: gimymblert on September 20, 2017, 01:49:48 PM
This prompt me a tangentially related question, how much of a game can you code entirely from the GPU side?

One that comes to mind is Conway's Game of Life.

powly
« Reply #5026 on: September 20, 2017, 02:54:01 PM »

Quote from: powly on September 20, 2017, 02:06:07 PM
Btw, has anyone tried the shader-printf out? Any thoughts on how to improve the usage, either C++ or GLSL?

Quote from: Crimsontide on September 20, 2017, 02:15:23 PM
I used the tesselation shader stage to output glyphs for text [...] Or were you thinking of something else?
Yes, see the previous page. I guess I should make an actual thread to ask for feedback, though.

Your solution sounds cool -- were the meshes predefined, or did you also somehow expand them adaptively in the tessellator? You'll probably get nice quality and performance regardless. I would probably have gone with a compute kernel for the expansion step (look up the number of required triangles per input primitive, then a prefix scan to find the proper indices for all of the vertices), but your solution is likely faster since it does the allocations locally in the tessellator. What's your upper limit for the amount of possible geometry per glyph?

gimymblert
« Reply #5027 on: September 20, 2017, 03:29:45 PM »

Quote from: gimymblert on September 20, 2017, 01:49:48 PM
This prompt me a tangentially related question, how much of a game can you code entirely from the GPU side?

Quote from: ProgramGamer on September 20, 2017, 02:46:35 PM
One that comes to mind is Conway's game of life.

I mean """real game™""", like """"mario 64""""

JWki
« Reply #5028 on: September 20, 2017, 03:35:57 PM »

Quote:
Cool! Are you doing anything special to generate the thumbnails, or would they be stretched if they weren't square already? Or cropped?

I actually reworked the thumbnails, so now they maintain the image's aspect ratio and fill the area around it with black to keep all thumbnails the same size:




Crimsontide
« Reply #5029 on: September 20, 2017, 06:18:57 PM »

Quote from: powly on September 20, 2017, 02:54:01 PM
Your solution sounds cool, were the meshes predefined or did you also somehow expand them adaptively in the tessellator? [...] What's your upper limit for the amount of possible geometry per glyph?

The meshes were pre-generated from a TTF file and stored in a texture. I used the pixel shader to evaluate the quadratic curves for the glyphs, so I didn't need dynamic tessellation (in the sense of increasing or decreasing triangle count on the fly). By using the pixel shader for quadratic curve rendering, you can get pixel-perfect precision at any resolution with a surprisingly small mesh (usually 20-50 tris/glyph; it really depends on the font).

I think I could get at most 8k triangles out of a single mesh. If I remember correctly (it was a while back), you can tessellate each side of a planar mesh up to 64 times, which gives you 4k quads, or 8k tris.

I never seriously tested the performance. The idea was to use it for a UI engine where everything was a 'glyph' and almost everything executed on the GPU. You could program animations with keyframes for each glyph, have them respond to input, etc. The catch was that all the keyframe animation was relatively complex (nothing worse than, say, skinned animation, but still more than usual for UI). So I could do all the calculations once per glyph in the vertex shader, then send the transformation data to the geometry shader via constants. This meant I only had to do the keyframe animation computations once per glyph as opposed to once per vertex. It was working great, and then Vulkan came out...

powly
« Reply #5030 on: September 21, 2017, 10:54:30 AM »

Quote from: powly on September 20, 2017, 02:54:01 PM
Your solution sounds cool, were the meshes predefined or did you also somehow expand them adaptively in the tessellator? [...]

Quote from: Crimsontide on September 20, 2017, 06:18:57 PM
The meshes were pre-generated from a TTF file and stored in a texture. [...] It was working great, and then Vulkan came out...
Doesn't it require the tessellated parts to be connected, though? Like subdividing a triangle? That amount of geometry is pretty nice, though; you could do a lot with it.
Sounds like a cool project in general, too bad you dropped it!

Crimsontide
« Reply #5031 on: September 21, 2017, 03:53:29 PM »

Well, that's where the geometry shader comes in. Since you can work with triangles (not just vertices), there's no restriction on topology.

It worked well in a D3D11 context, but moving to Vulkan I couldn't justify the added complexity. In Vulkan, recording command buffers is dirt cheap and trivially multithreaded, so I can just blast out another command buffer every frame without breaking a sweat (so to speak); and with 8-, 10-, 12-, even 16-core behemoths on the horizon, what else are we going to use the computational power for?

The idea was a lot of fun as a proof of concept, but in the end the pros didn't outweigh the cons. I also found that, as cool as using meshes to represent glyphs (and other 2D graphics) is, signed distance fields are generally still superior. Sure, meshes are resolution-independent while signed distance fields have limits, but with signed distance fields things like shadows, auras, and outlines -- all sorts of cool effects -- are trivial, while with meshes many of these effects are very non-trivial.

I think if I were to use the technique again in the future, it'd most likely be for some sort of advanced particle effect: use the vertex shader to do the heavy computational lifting, the tessellation stages to multiply/amplify that, and the geometry shader as a sort of second-stage vertex shader. The advantage is that it can be done in a single pass, with no intermediate buffers needed for stream-out.

I'd also like to play around a bit with micro-poly culling in the geometry shader, but as of yet I don't have that need.

Photon
« Reply #5032 on: September 29, 2017, 12:56:42 PM »

Man, the flexibility of the Haxe language is so amazing:

"What if I did XYZ to solve a problem? Hmm, that sounds nice, but is it feasible? Can I even do that in Haxe? Probably not. I mean, I guess I could try it and see if it builds. I have a feeling it's not going to compile, but hey, let's just--oh, it compiled. Is that right? Let me fuss with a few other things... OK, it's still compiling. I think I can do it that way... nice."

Cheezmeister
« Reply #5033 on: September 29, 2017, 03:12:30 PM »

It may compile, but does it run?

powly
« Reply #5034 on: September 30, 2017, 04:20:54 AM »


It may compile and run, but is it comprehensible?

ferreiradaselva
« Reply #5035 on: September 30, 2017, 10:43:17 AM »

It compiles, runs, and is comprehensible, but is it what you really wanted?




I finally got a nice implementation to wrap OpenGL VBOs and shaders (@qMopey, thanks for that example).

Goes like this:
Code:
/*----------------*/
/* Initialization */
/*----------------*/

cshader *shader = new_shader("default", vertex_source, NULL, fragment_source); /* no geometry source, so NULL */
cvertex_data *vertex_data = new_vertex_data("default");

vertex_data_make(vertex_data, 256 /* vertex count supported */, false /* no normals */, false /* no UV */, true /* has color */, VERTEX_DATA_USAGE_DYNAMIC);


/*-----------------*/
/* Render callback */
/*-----------------*/

cshader *shader = get_shader_by_name("default");
cvertex_data *vertex_data = get_vertex_data_by_name("default");
cmatrix projection = matrix_ortho(-128.0f, 128.0f, 128.0f, -128.0f, 0.0f, 1.0f);
cmatrix view = matrix_look_at(to_vector3(0.0f, 0.0f, 1.0f), to_vector3(0.0f, 0.0f, 0.0f), to_vector3(0.0f, 1.0f, 0.0f));
cmatrix position = matrix_translation(to_vector3(0.0f, 0.0f, 0.0f));
cmatrix rotation = matrix_rotation_z(0.0f);
ctriangle triangle_a;
ctriangle triangle_b;

/* Omitted code to set the coordinates of the triangles and make a rectangle */

vertex_data_clear(vertex_data);
vertex_data_bind(vertex_data);

vertex_data_map(vertex_data, false /* Read? */, true /* Write? */);
    vertex_data_push_face(vertex_data, &triangle_a, position, rotation);
    vertex_data_push_face(vertex_data, &triangle_b, position, rotation);
vertex_data_unmap();

/* Draw vertex data with shader */

framebuffer_clear();
shader_use(shader);
shader_set_uniform_matrix("pv", matrix_multiply_matrix(projection, view));
shader_draw(shader, vertex_data, VERTEX_DATA_PRIMITIVE_TRIANGLES);


I'm finally happy with this because the shader and VBO wrappers are flexible enough that you don't have to touch OpenGL directly. There are still some optimizations to make and some functions to wrap, but the most important part is done.

Photon
« Reply #5036 on: October 02, 2017, 09:04:16 AM »

Quote from: ferreiradaselva on September 30, 2017, 10:43:17 AM
It compile, run and is comprehensible, but is it what you really wanted?
The funny thing is: no, it turned out not to be. Still, I was pleasantly surprised by the ramifications of doing it that way (in this instance, inheriting a typedef/type template with its type parameter already specified), and it's nice information to have moving forward.

ferreiradaselva
« Reply #5037 on: October 02, 2017, 05:20:24 PM »



Some WIP of a game I'm making with my framework :D

The game will be 3D-ish, so I'll be able to test the 3D math of my API. It will be 3D-ish because the characters will simply be planes (like in Paper Mario), while the environment is "real" 3D.

Edit:

Also, I decided to remove the whole networking part from my API, since it was only an ugly wrapper over libuv anyway. Instead, I will just leave some instructions on how to use the API + libuv. Better than providing a half-assed API.
« Last Edit: October 02, 2017, 05:25:49 PM by ferreiradaselva »

qMopey
« Reply #5038 on: October 02, 2017, 08:30:07 PM »

Looks very cute :)

ZebraInFlames
« Reply #5039 on: October 03, 2017, 09:04:11 AM »



Got SAT collision resolution working, which I was testing with a triangle and a box. It had been bugging me more than it should have. The collider on the character is the same size as the sprite in this case, so it's colliding on the lower-left corner there. Also, I haven't even tested rotated sprites yet, but they should work :P

Now, to build some prototype gameplay :P

EDIT: that printf-from-shader trick is awesome :o
« Last Edit: October 03, 2017, 09:14:00 AM by ZebraInFlames »