Author Topic: Getting started with shaders  (Read 3038 times)
Eendhoorn
« on: November 02, 2014, 11:28:38 AM »

Hello,

After avoiding shaders for ages I finally took the step to look into them. I specifically took a look at this tutorial series: http://cgcookie.com/unity/cgc-courses/noob-to-pro-shader-writing-for-unity-4-beginner/. After that I fiddled around and learned some more about manipulating vertices with heightmaps and the like. It's pretty fun.

I've looked around on the interwebz but I cannot find answers to some of my (seemingly) simple questions.
To add a little more context to my questions: I'm using Unity3D, where shaders are attached to a material and the material is attached to the mesh.

1. Scene-wide shaders
This might be specific to Unity, but if I understand correctly, it is not possible to have a shader affect all the objects in your scene?
I'm wondering how certain effects that apply to multiple objects are handled.

Take this Animal Crossing shader for example: it displaces all the objects on the Y axis based on their distance to the camera. I actually managed to create a similar effect, but had to apply the shader to all my scene's objects separately, which seems annoying.
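
For reference, that kind of displacement sits in the vertex stage of a Unity shader (structured like the wave shader further down this post). This is only a sketch: the _Curvature property is invented for illustration and is not from the actual Animal Crossing shader.

Code:
// Sketch: push vertices down on Y the further they are from the camera.
// Plugs into a shader like "Custom/Wave" below; _Curvature is an invented
// float property, _WorldSpaceCameraPos comes from UnityCG.cginc.
v2f vert (appdata_t v)
{
    v2f o;

    float4 worldV = mul(_Object2World, v.vertex);             // object -> world
    float dist = distance(worldV.xyz, _WorldSpaceCameraPos);  // distance to the camera
    worldV.y -= dist * dist * _Curvature;                      // drop Y with distance squared

    worldV = mul(_World2Object, worldV);                       // back to object space
    o.vertex = mul(UNITY_MATRIX_MVP, worldV);
    o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
    return o;
}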


Additionally, I encountered some shader effects which span multiple objects, which makes it even more confusing.



Can anyone enlighten me on how this is done?

2. Effects surpassing mesh/texture bounds
I don't fully understand how some shaders apply effects that extend beyond their texture's bounds. If I applied a blur or a glow to a texture, how could the effect reach beyond the object's bounds?
I understand how a fullscreen glow effect just blurs the screen buffer on a texture.


http://unitycoder.com/blog/2014/01/06/sprite-rgb-split-shader-test/
This is a decent example; the effect expands a lot beyond the initial texture.
When I look at the source code, my still-limited shader knowledge makes me guess that he is expanding the vertices to make room, but I am not completely sure.
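
If that guess is right, the vertex stage would do something like the sketch below: scale the quad up so there is empty space around the original sprite, and stretch the UVs by the same factor so the texture keeps its size and the new border is left for the effect to draw into. The _Expand property is invented and this assumes the quad's pivot is at its center; it is not the actual code from that blog post.

Code:
// Sketch: grow a sprite quad so an effect has room outside the texture's area.
// _Expand is an invented property (0 = no growth); not the linked shader's code.
v2f vert (appdata_t v)
{
    v2f o;

    // Scale the quad outward around its (assumed centered) pivot...
    v.vertex.xy *= 1.0 + _Expand;
    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);

    // ...and stretch the UVs by the same factor around the UV center, so the
    // original texture keeps its size and the border UVs fall outside 0..1,
    // where the fragment shader can place the glow / RGB-split samples.
    o.texcoord = (v.texcoord - 0.5) * (1.0 + _Expand) + 0.5;
    return o;
}

The fragment shader then has to treat samples outside the 0..1 range as transparent (or rely on a clamped texture with transparent edges), otherwise the border just smears the edge pixels.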

3. Shaders on shared materials
How come I can apply two different colors to two different objects using the same material? I would expect them both to have the same color, since the shader is attached to that specific material.

4. Vertex shader neighbours
If I wanted a shader to only affect the top vertices of a mesh, how would I handle that? Since the vertex shader runs on a per-vertex basis, you don't have any information about the mesh's other vertices, right? I'm guessing you could extract the information from normals or mark certain vertices with vertex colors, but I was just wondering if my assumptions are correct.
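
For what it's worth, the vertex-color idea is straightforward: paint the top vertices with, say, red in the modelling tool, read the COLOR attribute in the vertex shader, and use it as a mask. A sketch against a shader shaped like the wave shader below (the red-channel mask convention is just an assumption):

Code:
// Sketch: only vertices painted with red > 0 get displaced.
struct appdata_t {
    float4 vertex : POSITION;
    float2 texcoord : TEXCOORD0;
    float4 color : COLOR;        // per-vertex color painted in the modelling tool
};

v2f vert (appdata_t v)
{
    v2f o;

    float mask = v.color.r;      // 1 on the "top" vertices, 0 elsewhere
    v.vertex.y += sin(_Time.y * _Speed + v.vertex.x * _Frequency) * _Amplitude * mask;

    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
    o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
    return o;
}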

5. Vertex world position
This one is shamefully specific. I made a shader which takes a mesh and applies a "wave" effect to it. I had to convert the vertices from object space to world space, but I actually don't have a clue what to do when passing them to the fragment shader.

Code:
// Unlit shader. Simplest possible textured shader.
// - no lighting
// - no lightmap support
// - no per-material color

Shader "Custom/Wave" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _Color ("Color", Color) = (1.0, 1.0, 1.0, 1.0)
        _Frequency ("WaveFrequency", float) = 10
        _Amplitude ("Amplitude", float) = 1
        _Speed ("Speed", float) = 50
    }

    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata_t {
                float4 vertex : POSITION;
                float2 texcoord : TEXCOORD0;
            };

            struct v2f {
                float4 vertex : SV_POSITION;
                half2 texcoord : TEXCOORD0;
            };

            uniform sampler2D _MainTex;
            uniform float4 _MainTex_ST;
            uniform float4 _Color;
            uniform float _Frequency;
            uniform float _Amplitude;
            uniform float _Speed;

            v2f vert (appdata_t v)
            {
                v2f o;

                // wave
                float4 worldV = mul(_Object2World, v.vertex);

                float speed = _Time * _Speed;
                float frequency = (worldV.x + worldV.y) / _Frequency;
                float amplitude = _Amplitude;
                worldV.z += sin(frequency + speed) * amplitude;

                worldV = mul(_World2Object, worldV);

                o.vertex = mul(UNITY_MATRIX_MVP, worldV);
                o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);

                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.texcoord);
                col *= _Color;
                return col;
            }
            ENDCG
        }
    }
}



Applying the shader as it currently is scales the mesh by a large amount.


These questions might seem awfully basic, but I just cannot find any resources on them.
Thanks in advance.

epcc
« Reply #1 on: November 02, 2014, 02:25:53 PM »

I dunno about 1, but in answer to 2, those effects are called post-effects. You basically render your scene to a texture and then render a quad over the whole screen using that texture and a shader.
Unity even has special functions for that.
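
In Unity those special functions are OnRenderImage and Graphics.Blit: a script on the camera receives the rendered frame as a RenderTexture and blits it through a material, whose shader then sees the whole screen as _MainTex. A minimal sketch of such an image-effect shader (the _Tint parameter is just an invented example):

Code:
// Minimal full-screen image effect (sketch). A camera script feeds the rendered
// frame into _MainTex, e.g. via OnRenderImage(src, dest) + Graphics.Blit(src, dest, mat).
Shader "Custom/ScreenTint" {
    Properties {
        _MainTex ("Screen", 2D) = "white" {}
        _Tint ("Tint", Color) = (1, 1, 1, 1)   // invented parameter for illustration
    }
    SubShader {
        Pass {
            ZTest Always Cull Off ZWrite Off
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag

            #include "UnityCG.cginc"

            uniform sampler2D _MainTex;
            uniform fixed4 _Tint;

            // vert_img / v2f_img come from UnityCG.cginc and just pass the
            // full-screen quad through with its UVs.
            fixed4 frag (v2f_img i) : SV_Target
            {
                return tex2D(_MainTex, i.uv) * _Tint;
            }
            ENDCG
        }
    }
}

Because the shader runs on the already rendered image, it executes once per screen pixel rather than once per object, which is why the effect is not tied to any particular mesh.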

Ashaman73
« Reply #2 on: November 02, 2014, 10:17:46 PM »

In general, the basic concepts of shaders:
I. You have only a single shader per triangle/pixel; if you want to combine multiple shaders, you need to put them into a single one, or you need to render the triangle multiple times.
II. There are two basic rendering approaches: you render your mesh and apply a shader, and/or you apply a shader to the already rendered "image". The latter is done by rendering a full-screen quad and applying a shader to it.
III. A shader is more or less a simple function, taking textures and parameters as input and outputting pixels (or vertices).

1. Scene-wide shaders:
If you want to use a certain effect on all objects, you can either try to implement a post-processing shader (does your engine support it?) or you need to incorporate the effect in all object/mesh shaders. I use a pre-processor to generate my final shaders; this way I can include certain effects in multiple shaders. See the sketch below for how the same idea looks in Unity.
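
In Unity the equivalent of that pre-processor trick is a shared include file plus a global shader property (the file, function and property names below are invented for illustration): every object shader does #include "WorldBend.cginc" and calls ApplyWorldBend() in its vert function, and a script sets the strength once for the whole scene with Shader.SetGlobalFloat.

Code:
// WorldBend.cginc (invented name): one shared effect, included by every shader that needs it.
// Assumes the including shader already has #include "UnityCG.cginc" for _WorldSpaceCameraPos.
#ifndef WORLD_BEND_INCLUDED
#define WORLD_BEND_INCLUDED

float _BendAmount;   // global property, set scene-wide via Shader.SetGlobalFloat("_BendAmount", x)

// Drops a world-space position on Y with its squared distance from the camera.
float4 ApplyWorldBend(float4 worldPos)
{
    float d = distance(worldPos.xyz, _WorldSpaceCameraPos);
    worldPos.y -= d * d * _BendAmount;
    return worldPos;
}

#endif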

2. Effects surpassing mesh/bounds:
Either post-processing or an additional render pass is applied, e.g. the mesh is rendered multiple times with different shaders.

3. Shaders have parameters. E.g. you could use the object color as a parameter (depends on the engine architecture).

4. Vertex neighbours
You are correct, you can only apply a (vertex) shader to a single vertex. If you need to evaluate a vertex in the context of others, you either need to add additional vertex attributes (e.g. a color or custom attribute) or use a texture and fetch additional data from memory while processing the vertex (this needs engine/hardware support!).

5. World position
The fragment/pixel shader needs the position in projection space; if you want to use another space, you need to calculate it and pass the coordinate from the vertex shader to the fragment shader (e.g. as a varying in GLSL/OpenGL). Be aware that camera space is often the better space for calculating extra stuff, because it is easier to transfer all needed information into camera space than into world space.

E.g. if you wanted to save the pixel position in world space, you would need to save x, y, z as floats (3x4 = 12 bytes per pixel). In camera space you only need to save the depth, because you can reconstruct the complete camera-space position from the pixel's screen position and the depth (only 1 float to save). Look up deferred shading/rendering to learn more about this (g-buffer, position reconstruction, light calculation in camera space, SSAO, etc.).
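
Concretely, in Unity/Cg passing a value to the fragment shader just means adding an extra interpolator to the v2f struct and filling it in the vertex shader; the fragment shader then receives the value interpolated per pixel. A sketch against the wave shader from the first post (the height-based darkening at the end is only an invented example use):

Code:
// Sketch: pass the world-space position from the vertex to the fragment shader.
struct v2f {
    float4 vertex   : SV_POSITION;
    half2  texcoord : TEXCOORD0;
    float3 worldPos : TEXCOORD1;   // extra interpolator carrying the world position
};

v2f vert (appdata_t v)
{
    v2f o;

    float4 worldV = mul(_Object2World, v.vertex);
    o.worldPos = worldV.xyz;                      // hand the world position to the fragment stage

    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);   // clip-space position still built from object space
    o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.texcoord) * _Color;
    col.rgb *= saturate(i.worldPos.y);            // invented example use: darken everything below y = 1
    return col;
}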





Eendhoorn
« Reply #3 on: November 07, 2014, 06:00:59 PM »

Quote from: Ashaman73 on November 02, 2014, 10:17:46 PM
(Reply #2 quoted in full.)


1. Ahh, that's a bitch; your solution seems like a convenient one though.
2. I actually read an explanation of shadow mapping in the meantime and it makes more sense to me now, indeed done as post-processing!
3. Yeah, I had already learned about parameters, but since the shader is attached to a material I find it weird that a material shared between multiple meshes still affects them differently, even though it's the same material used on both meshes.
4. Alright!
5. Makes sense. I still have no idea which specific formula to use in my case; my question was too focused on Unity I guess :p

Thanks for the answers, makes it all a bit clearer. Still sad to know that there is no kind of magic for knowing anything about surrounding vertices, but there ain't always a magic solution I guess :p
