Author Topic: General programming discussion  (Read 28922 times)
ferreiradaselva
« Reply #200 on: December 16, 2017, 12:17:58 PM »

So, in my framework, I have a VBO wrapper (`struct vertex_data`).

Currently, it can hold position, normals, color, and texture coordinates, in that classic interleaved layout:

Code:
.------------------------------------------.
| p.x,p.y,p.z | n.x,n.y,n.z | r,g,b,a | u,v|
|------------------------------------------|
| p.x,p.y,p.z | n.x,n.y,n.z | r,g,b,a | u,v|
|------------------------------------------|
| p.x,p.y,p.z | n.x,n.y,n.z | r,g,b,a | u,v|
`------------------------------------------´

I betcha loved my ASCII layout.
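
For concreteness, one row of that layout as a plain C struct (field names are mine, assuming tightly packed floats):

Code:
struct vertex {
    float position[3];  /* p.x, p.y, p.z */
    float normal[3];    /* n.x, n.y, n.z */
    float color[4];     /* r, g, b, a */
    float uv[2];        /* u, v */
};
/* stride of one vertex = sizeof(struct vertex) = 12 * sizeof(float) */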

So, the function that creates the vertex_data is like this:

Code:
void init_vertex_data(struct vertex_data *vd,
    uint32_t size,
    bool has_normal,
    bool has_uv,
    bool has_color,
    enum vertex_data_usage usage);

Then the structure `vertex_data` stores which components it has. I have no idea if this is limiting. Are there any cases where you'd need more than those components (position, normal, color, texture coordinates)?

Also related to that, my shader object (`struct shader`) takes a `struct vertex_data` to start a draw. Obviously, the shader already assumes that limitation and checks what the `vertex_data` holds. Based on that, the shader function:

Code:
void shader_draw_range(struct shader *shader,
    struct vertex_data *vertex_data,
    enum vertex_data_primitive primitive,
    uint32_t first,
    uint32_t count)

will call the attribute pointer functions (`glVertexAttribPointer()`) according to what the `vertex_data` holds.
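
For illustration, a minimal sketch of that flag-driven setup (the attribute locations 0-3, the `has_*` fields, and the `vertex_data_stride()` helper are my assumptions, not the actual framework code):

Code:
GLsizei stride = vertex_data_stride(vd);  /* hypothetical: full size of one vertex */
uintptr_t offset = 0;

glEnableVertexAttribArray(0);  /* position is always present */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void *)offset);
offset += 3 * sizeof(float);

if (vd->has_normal) {
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void *)offset);
    offset += 3 * sizeof(float);
}
if (vd->has_color) {
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, stride, (void *)offset);
    offset += 4 * sizeof(float);
}
if (vd->has_uv) {
    glEnableVertexAttribArray(3);
    glVertexAttribPointer(3, 2, GL_FLOAT, GL_FALSE, stride, (void *)offset);
}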

All this works nicely. Really nicely. However, I'm not sure what limitations I might run into in the future.

oahda
« Reply #201 on: December 16, 2017, 12:30:02 PM »

Is there anything about this design preventing you from easily adding stuff later? I started out with something similar but recently added attributes for joints/weights to support animations and the additions didn't break anything.

I'm guessing your code that passes the data to GL expects a particular memory layout, but it can't be too much work to update if you ever add more attributes, can it? I'm thinking the bigger issue is having to pass a boolean for every attribute into init_vertex_data() since adding more attributes will break every reference to that function.

Isn't it better to have a design where you add/enable each attribute individually? Or is there something about the way you allocate stuff that forces you to allocate everything in one go, knowing all the attributes you will be using beforehand? In that case you could replace all those bool parameters with one bitflag parameter instead. That way you can easily add more flags in the future without breaking anything.
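
A sketch of the bitflag version, with hypothetical flag names, against the init_vertex_data() signature above:

Code:
enum vertex_data_flags {
    VERTEX_DATA_NORMAL = 1 << 0,
    VERTEX_DATA_UV     = 1 << 1,
    VERTEX_DATA_COLOR  = 1 << 2
    /* a future attribute is just a new bit; old call sites keep compiling */
};

void init_vertex_data(struct vertex_data *vd,
    uint32_t size,
    uint32_t flags,
    enum vertex_data_usage usage);

/* call site */
init_vertex_data(&vd, 1024, VERTEX_DATA_NORMAL | VERTEX_DATA_UV, usage);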

ferreiradaselva
« Reply #202 on: December 16, 2017, 12:52:53 PM »

Quote from: oahda
Is there anything about this design preventing you from easily adding stuff later? I started out with something similar but recently added attributes for joints/weights to support animations and the additions didn't break anything.

I'm guessing your code that passes the data to GL expects a particular memory layout, but it can't be too much work to update if you ever add more attributes, can it? I'm thinking the bigger issue is having to pass a boolean for every attribute into init_vertex_data() since adding more attributes will break every reference to that function.

With the current design, I can't add stuff later; it's limited to those four. By the way, joints/weights are something I hadn't thought of before, so that's already one limitation of this design.

Quote from: oahda
Isn't it better to have a design where you add/enable each attribute individually?

Like adding a "list" with information about the attributes? Like this:

Code:
struct attribute {
   GLint attrib_location;
   int32_t offset;
   int32_t stride;
};

struct vertex_data {
   struct attribute attributes[MAX_ATTRIBUTES];
   GLvoid *mapped_buffer;
};

*I'm not sure whether that "attributes" list should go in the vertex_data object (which holds the VBO) or the shader object (which holds the vertex/geometry/fragment shaders).
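
For what it's worth, a hypothetical helper to fill that list (assuming an `attribute_count` field is added next to the fixed array) might be:

Code:
bool vertex_data_add_attribute(struct vertex_data *vd,
    GLint attrib_location, int32_t offset, int32_t stride)
{
    if (vd->attribute_count >= MAX_ATTRIBUTES)
        return false;  /* list is full */
    vd->attributes[vd->attribute_count].attrib_location = attrib_location;
    vd->attributes[vd->attribute_count].offset = offset;
    vd->attributes[vd->attribute_count].stride = stride;
    vd->attribute_count++;
    return true;
}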

Quote from: oahda
Or is there something about the way you allocate stuff that forces you to allocate everything in one go, knowing all the attributes you will be using beforehand? In that case you could replace all those bool parameters with one bitflag parameter instead. That way you can easily add more flags in the future without breaking anything.

There's nothing that forces me to use the booleans or to allocate beforehand. One more reason it (probably) would be good to get rid of the booleans.

ThemsAllTook
« Reply #203 on: December 16, 2017, 12:55:37 PM »

I had a situation like that recently. Those four should be enough for static data most of the time (though you might also want tangents if you do normal mapping, possibly?), but animated models needed a couple of extra attributes. The way I solved this was to keep an "animated" boolean in my Renderable class, which toggles between two different vertex formats. Each renderable also has its own VAO with attribute binding set up appropriately. I use a different vertex shader for animated objects than for static ones.

Originally, I wanted to do a more general system that would allow me to specify any combination of vertex attributes for any mesh, but that seemed unnecessarily complicated. I feel like it's a better approach to consider immediate use cases, implement just those and nothing else, and refactor if the need arises later to add more capabilities. Constraints are good. Trying to support everything up front leads to a lot of extra complexity.
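
As a sketch of that split (names are mine; four joints per vertex is one common choice):

Code:
typedef struct {
    float position[3], normal[3], uv[2];
} StaticVertex;

typedef struct {
    float position[3], normal[3], uv[2];
    uint8_t joints[4];  /* indices into the skeleton's joint list */
    float weights[4];   /* per-joint influence, summing to 1 */
} AnimatedVertex;

/* the Renderable's "animated" boolean picks which VAO setup and
   which vertex shader get used at draw time */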

ferreiradaselva
« Reply #204 on: December 16, 2017, 01:16:16 PM »

Quote from: ThemsAllTook
I had a situation like that recently. Those four should be enough for static data most of the time (though you might also want tangents if you do normal mapping, possibly?), but animated models needed a couple of extra attributes. The way I solved this was to keep an "animated" boolean in my Renderable class, which toggles between two different vertex formats. Each renderable also has its own VAO with attribute binding set up appropriately. I use a different vertex shader for animated objects than for static ones.

Originally, I wanted to do a more general system that would allow me to specify any combination of vertex attributes for any mesh, but that seemed unnecessarily complicated. I feel like it's a better approach to consider immediate use cases, implement just those and nothing else, and refactor if the need arises later to add more capabilities. Constraints are good. Trying to support everything up front leads to a lot of extra complexity.

So it seems I want something more flexible than what I have, but not so flexible that it becomes complex.

I will do some research on what kinds of vertex data could be needed for animation (something I had never considered), and see how I can expand my current design without making it too complex.

Thanks for the help, both of you!  Coffee

Also, if anyone has another opinion, I will gladly hear it! :D

oahda
« Reply #205 on: December 16, 2017, 01:20:47 PM »

Sometimes you'll also want non-standard attributes, so it's always good to at least consider the possibility. There was someone recently on Twitter who passed normals for both smooth and flat shading and then blended between them in shader (I guess in this particular case the blend could be precalculated and passed as one attribute unless it's dynamic, but you get the point). Another interesting one was from a talk about animating destructible props (like a fence) in shader, where there was animation data for where the planks would go and so on passed as vertex attributes and animated in the vertex shader.

Tho if possible it's common just to repurpose some of the less commonly used ones, like tangents and colours, instead of adding more. But should you ever need all of them AND your own special ones, or more special ones than you have standard ones to spare for repurposing... Tongue

qMopey
« Reply #206 on: December 16, 2017, 01:22:17 PM »

Yes, you absolutely need some way to add generic float attributes to your vertex data definition. The whole point of abstracting GL like this is to speed up iteration by providing a better API than what GL comes with out of the box. GL added generic attributes eventually simply because the old `glNormalPointer` and similar functions were too limited. You should not create an API that regresses to this older style. Oftentimes people will need to add a single float to their vertex data definition just to try out some obscure graphical effect. This is an extremely common use case, and your API will need it.

I know you've seen my tinygl header, and we have talked about these points before on tigsource forums. Did you see a problem with this style (let me know if you noticed a weakness or anything):

Code:
#define TG_ATTRIBUTE_MAX_COUNT 16

typedef struct
{
    const char* name;   /* attribute name, matched against the shader */
    uint64_t hash;      /* hash of name, for quick comparisons */
    uint32_t size;      /* number of components */
    uint32_t type;      /* component type */
    uint32_t offset;    /* byte offset within one vertex */
    uint32_t location;  /* attribute location */
} tgVertexAttribute;

/* tgVertexAttribute comes first so the array member below has a complete type */
typedef struct
{
    uint32_t buffer_size;
    uint32_t vertex_stride;
    uint32_t primitive;
    uint32_t usage;

    uint32_t attribute_count;
    tgVertexAttribute attributes[ TG_ATTRIBUTE_MAX_COUNT ];
} tgVertexData;

void tgMakeVertexData( tgVertexData* vd, uint32_t buffer_size, uint32_t primitive, uint32_t vertex_stride, uint32_t usage );
void tgAddAttribute( tgVertexData* vd, char* name, uint32_t size, uint32_t type, uint32_t offset );

It is an array of vertex component definitions, handed off to `glVertexAttribPointer` later on.

Generally when people use tinygl the first few types of attributes added to their vertex definitions will be position, color and normals. However, once they figure out the API and get the basics working, surely people will start adding in arbitrary floats/ints as needed for whatever kind of effect they are going for. The API is agnostic to the purpose of these attributes, and simply gets them to the shader, leaving it up to the user to match attribute names in shaders and in C code.
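
So a user's setup would presumably look something like this (my sketch; the GL constants and `a_*` attribute names are assumptions about a typical call site, not tinygl's documented behavior):

Code:
#include <stddef.h>  /* offsetof */

typedef struct { float x, y, z; float r, g, b, a; float glow; } Vertex;

tgVertexData vd;
tgMakeVertexData( &vd, 1024 * sizeof( Vertex ), GL_TRIANGLES, sizeof( Vertex ), GL_STATIC_DRAW );
tgAddAttribute( &vd, "a_position", 3, GL_FLOAT, offsetof( Vertex, x ) );
tgAddAttribute( &vd, "a_color",    4, GL_FLOAT, offsetof( Vertex, r ) );
/* the "obscure effect" case: one extra float is one extra line */
tgAddAttribute( &vd, "a_glow",     1, GL_FLOAT, offsetof( Vertex, glow ) );
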
ferreiradaselva
« Reply #207 on: December 16, 2017, 01:45:32 PM »

Quote from: oahda
Sometimes you'll also want non-standard attributes, so it's always good to at least consider the possibility. There was someone recently on Twitter who passed normals for both smooth and flat shading and then blended between them in shader (I guess in this particular case the blend could be precalculated and passed as one attribute unless it's dynamic, but you get the point). Another interesting one was from a talk about animating destructible props (like a fence) in shader, where there was animation data for where the planks would go and so on passed as vertex attributes and animated in the vertex shader.

Tho if possible it's common just to repurpose some of the less commonly used ones, like tangents and colours, instead of adding more. But should you ever need all of them AND your own special ones, or more special ones than you have standard ones to spare for repurposing... Tongue

Quote from: qMopey
Yes, you absolutely need some way to add generic float attributes to your vertex data definition. The whole point of abstracting GL like this is to speed up iteration by providing a better API than what GL comes with out of the box. GL added generic attributes eventually simply because the old `glNormalPointer` and similar functions were too limited. You should not create an API that regresses to this older style. Oftentimes people will need to add a single float to their vertex data definition just to try out some obscure graphical effect. This is an extremely common use case, and your API will need it.

Ahh, that was exactly my fear, that non-standard attributes are a thing. Since I'm making a framework that I want to be useful for others, I should take those into consideration, even if I don't use them myself.

Quote from: qMopey
I know you've seen my tinygl header, and we have talked about these points before on tigsource forums. Did you see a problem with this style (let me know if you noticed a weakness or anything): [the tinygl snippet and explanation above]

Yes, your tinygl design is what I remembered. Is there any particular reason the list of attributes lives in the vertex data and not in the "tgShader" object? I ask because tgVertexData (like my vertex data) is an abstraction for the VBO, and the VBO just holds data (any data). Shouldn't the shader object (or the draw call) be the one that takes the list of tgVertexAttribute? Or maybe I'm missing the point of keeping the list of attributes associated with the vertex data.

"Draw this data considering these attributes"

qMopey
« Reply #208 on: December 16, 2017, 02:01:54 PM »

Quote from: ferreiradaselva
Is there any particular reason the list of attributes lives in the vertex data and not in the "tgShader" object? I ask because tgVertexData (like my vertex data) is an abstraction for the VBO, and the VBO just holds data (any data). Shouldn't the shader object (or the draw call) be the one that takes the list of tgVertexAttribute? Or maybe I'm missing the point of keeping the list of attributes associated with the vertex data.

"Draw this data considering these attributes"

The idea is vertex data definitions are coupled with buffers, since buffers hold vertices. Sometimes a buffer can hold data that needs to be drawn by two or three different shaders. For example, say our player has vertices with attributes A, B and C. A and B are position and UV coordinates. C is normals. The player buffer needs to be rendered most of the time with A and B; a typical and simple textured player. However, when the player casts the "Clone Visage" spell, two copies of the player are shown (sort of like this for example) to the left and right of the player, though drawn with a ghostly shader. The ghostly shader only uses normals from C to apply a surface effect (kind of like the metal mario in smash bros).

The same vertex data for the character can be used for two different shaders. And as you'll see in a moment, the same shader can be used by two different `renderable` structs.

In tinygl, composing a draw call looks kind of like:

Code:
tgRenderable render;
tgMakeRenderable(&render, &shader, &vertex_data);

tgDrawCall call;
call.render = &render;
call.other_settings = settings;

tgPushCall(&call);

// draw
tgFlush();

So `vertex_data` defines the layout of vertices and the buffer dimensions. A `vertex_attribute` defines components that belong in the `vertex_data`. `shader` is literally just a small wrapper around a shader, mostly for interfacing with uniforms. `renderable` is a marriage of a `shader` and a `vertex_data`, and `renderable` goes into a `draw_call`. `draw_call` structs are POD and get pushed into a buffer to later be flushed all at once.

That's the overall idea. The different struct types are all separated so the user can mix and match whatever they need, whenever they need, without any limitations.

For example, it would be possible to change the API so that shader + renderable are basically the same thing. But then the player spell use case I mentioned would not be possible: the API would force the user to make two different buffers for the two different shaders, pushing them into a more inefficient and cumbersome (more code) strategy.
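
In code, the spell example might come out like this (a sketch reusing the snippet above; the shader names are hypothetical):

Code:
/* one player buffer, two shaders, two renderables */
tgRenderable player, ghost;
tgMakeRenderable( &player, &textured_shader, &player_vertex_data );
tgMakeRenderable( &ghost,  &ghostly_shader,  &player_vertex_data );
/* both push draw calls against the same VBO; the ghostly
   shader simply reads only the normal attribute */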



Of course, nothing is stopping you from adding similar low-level struct types to your framework and then also wrapping them to provide a higher-level API. The higher-level API can hard-code UV, color, normal, etc. and end up being *much* easier to use. This can be a good idea. The higher-level code path can even be a great example for users of how to use the lower-level struct types to create their own effects Smiley
ferreiradaselva
« Reply #209 on: December 16, 2017, 02:26:36 PM »

Thanks for the explanation; the association of the attributes with the vertex data structure makes sense to me now. Thank you Smiley

qMopey
« Reply #210 on: December 16, 2017, 02:29:48 PM »

No problem. Keep posting here if you come up with more ideas! I'm interested in hearing more about this stuff Smiley
JWki
« Reply #211 on: December 17, 2017, 01:34:43 AM »

FWIW, in most APIs other than OpenGL (and, in a sense, in modern OpenGL, i.e. ~4.5), vertex attribute layout is completely separate from both buffers and shaders, which I think is a valuable option to consider.
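
For reference, in GL 4.3+ that separation is explicit: the format is declared once, and buffers attach to a binding point independently. A minimal sketch:

Code:
/* describe the layout once; no buffer needs to be bound yet */
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);                  /* position */
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float));  /* normal */
glVertexAttribBinding(0, 0);  /* both attributes read binding point 0 */
glVertexAttribBinding(1, 0);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

/* later, attach any compatible buffer to that binding point */
glBindVertexBuffer(0, vbo, 0, 6 * sizeof(float));
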
ferreiradaselva
« Reply #212 on: December 17, 2017, 06:03:40 AM »


Last night I was implementing it with the attributes in the vertex data structure, but then I got hung up on needing a MAX_VERTEX_ATTRIBUTES for the fixed array. The solution I came up with is maybe exactly what you describe, JWki:

Code:
void shader_draw(struct shader           *shader,
                 struct vertex_data      *vertex_data,
                 enum primitive           primitive,
                 struct vertex_attribute *vertex_attributes,
                 uint32_t                 vertex_attributes_count)

Instead of storing the array of vertex attributes in the vertex data or shader object, I will pass the array to the function.
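
Call sites then build the list on the stack, something like this (a sketch assuming `struct vertex_attribute` has the location/offset/stride fields from my earlier snippet, and a hypothetical primitive enum value):

Code:
struct vertex_attribute attributes[] = {
    { .attrib_location = 0, .offset = 0,                 .stride = 12 * sizeof(float) },  /* position */
    { .attrib_location = 1, .offset = 3 * sizeof(float), .stride = 12 * sizeof(float) },  /* normal */
};
shader_draw(&shader, &vertex_data, PRIMITIVE_TRIANGLES,
            attributes, sizeof(attributes) / sizeof(attributes[0]));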

gimymblert
« Reply #213 on: December 18, 2017, 09:28:11 PM »

Here we go, it's starting to get close:

[video] Microsoft Quantum Development Kit: Introduction and step-by-step demo

Crimsontide
« Reply #214 on: December 18, 2017, 10:08:33 PM »

I've been looking into quantum computing from time to time. Certainly intrigued by the possibilities. That said, all the 'presentation' type examples leave a lot of details out. I'd love to see an example of the traveling salesman problem (i.e. the classic NP-complete problem). Also, if we can instantaneously teleport information between entangled qubits, doesn't that break down the notion of causality?
Cheesegrater
« Reply #215 on: December 19, 2017, 07:42:09 AM »

I don't think you'll see one.

It's not proven yet, but the current thinking seems to be that quantum computers will be able to solve some problems in NP (integer factoring, discrete logarithms, etc.) in polynomial time, but not NP-complete problems such as traveling salesman.

Quote from: Crimsontide
Also, if we can instantaneously teleport information between entangled qubits
Entanglement hasn't been shown to transmit information.
Crimsontide
« Reply #216 on: December 19, 2017, 10:12:29 PM »

Quote from: Cheesegrater
I don't think you'll see one.

It's not proven yet, but the current thinking seems to be that quantum computers will be able to solve some problems in NP (integer factoring, discrete logarithms, etc.) in polynomial time, but not NP-complete problems such as traveling salesman.

Hmm interesting...  I really need to read more on this when I have some time...

Quote from: Cheesegrater
Entanglement hasn't been shown to transmit information.

But isn't that what the lady said her program was doing?  They were entangling qubits and sending state across them.  Or did I miss something?

Cheesegrater
« Reply #217 on: December 20, 2017, 03:44:41 AM »

Quote from: Crimsontide
But isn't that what the lady said her program was doing? They were entangling qubits and sending state across them. Or did I miss something?

Quantum teleportation requires that for every qubit transported, a classical bit must also be transported, so all information exchange is limited to the speed of light. It's done not because it's FTL communication, but because this is the way to move qubits from place to place. You can read more about it at https://en.wikipedia.org/wiki/Quantum_teleportation
Crimsontide
« Reply #218 on: December 20, 2017, 08:50:01 AM »

Quote from: Crimsontide
But isn't that what the lady said her program was doing? They were entangling qubits and sending state across them. Or did I miss something?

Quote from: Cheesegrater
Quantum teleportation requires that for every qubit transported, a classical bit must also be transported, so all information exchange is limited to the speed of light. It's done not because it's FTL communication, but because this is the way to move qubits from place to place. You can read more about it at https://en.wikipedia.org/wiki/Quantum_teleportation

But doesn't that then break down the quantum superposition? I thought the idea of using entanglement was to allow data transfer between qubits while maintaining superposition. Using a classical bit seems like it would undermine any quantum algorithm.
Cheesegrater
« Reply #219 on: December 20, 2017, 09:20:34 AM »

Quote from: Crimsontide
But doesn't that then break down the quantum superposition?

Nope! That's why one would bother with the quantum channel at all.

Quote from: Crimsontide
Using a classical bit seems like it would undermine any quantum algorithm.

It doesn't, though. What's happening, in general terms, is that the sender has two qubits: one (A) is part of an entangled pair, and the second (B) is just some qubit whose state should be sent to the recipient.

The sender does what's called a Bell measurement on their pair of qubits. Then they send the result of the Bell measurement to the recipient in classical bits (actually 2 are required, I misspoke earlier).

The recipient, with the result of the Bell measurement in hand, then knows which of four operations to perform on their entangled qubit. After that operation, their qubit will be in the same state the 'sent' qubit B was in before the Bell measurement.
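
For the curious, the standard derivation behind "which of four operations": write the sender's qubit together with the shared Bell pair, then regroup the three-qubit state in the Bell basis of the sender's two qubits:

Code:
(\alpha|0\rangle + \beta|1\rangle) \otimes \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)
  = \tfrac{1}{2}\Big[ |\Phi^+\rangle(\alpha|0\rangle + \beta|1\rangle)
                    + |\Phi^-\rangle(\alpha|0\rangle - \beta|1\rangle)
                    + |\Psi^+\rangle(\alpha|1\rangle + \beta|0\rangle)
                    + |\Psi^-\rangle(\alpha|1\rangle - \beta|0\rangle) \Big]

The two classical bits tell the recipient which Bell state was measured, and the matching fix-up on their qubit is I, Z, X, or ZX respectively.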

Nothing actually 'teleported' (a misleading word here); the state was reconstructed at the far end of the connection, while the sender's copy was destroyed by the Bell measurement (so the no-cloning theorem isn't violated).