Author Topic: Specter | 2D action-platformer [now with demo!]  (Read 11495 times)
irabonus
« Reply #20 on: July 30, 2013, 03:04:56 PM »

We had the opportunity to get a friend of ours to do some voice acting.
This is what came out of it:


Blog: darioseyb.com My game project: Specter
Tiago_BinaryPig
« Reply #21 on: August 03, 2013, 05:56:14 AM »

Great article about Specter's render process, and awesome intro!

Just out of sheer curiosity, when are you going to further explain the distortion and render process? It's a hard topic to find material on and I'm very interested in it.

Keep up the great work!

irabonus
« Reply #22 on: August 03, 2013, 06:05:08 AM »

I'm trying to find the time to write a follow-up article, but I just started a full-time internship and there is still so much left to do for Specter.
Still, maybe I can get it done this weekend!

irabonus
« Reply #23 on: August 06, 2013, 01:06:03 PM »

Distortion Effect

As requested, here is a quick overview of the distortion effect! In the image you can see the effect used for an attack you can unlock via a perk.

Low Quality GIF



Final image


A quick overview of the current state of the pipeline.



Distortion Stages

Distortion is treated as a full-screen post-processing effect: it takes a pre-stage render target as input and outputs a post-stage render target. In this case, the input is the world render after glow has been added.

Distortion render target


A post-processing stage is defined like this:

Code:
interface PostprocessingStage
{
    //Takes the pre-stage target as input and writes the result to the post-stage target
    void Apply(RenderTarget2D _pre, ref RenderTarget2D _post, SpriteBatch _spriteBatch);
    //Saves the stage's render targets to disk (useful for debugging)
    void SaveRTs(string _dir, string _time);
}

I render the distortion information to a separate render target. I then draw the pre-stage texture to the post-stage render target, sampling from the distortion texture along the way.
The distortion texture contains offset information for each pixel on the screen, encoded in the following format:

R: X-Offset to the left
G: X-Offset to the right
B: Y-Offset upwards
A: Y-Offset downwards

You may ask why I don't just use the red channel for the x-offset and the green channel for the y-offset, interpreting 0.5 as no offset.
Splitting it up into four channels allows me to render each distortion effect additively. If two effects overlap, the distortions will either cancel out or be added together.

rgba(0, 1, 0, 1) + rgba(1, 0, 1, 0) = rgba(1, 1, 1, 1)
offset (1, 1) + offset (-1, -1) = offset (0, 0)
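The cancellation above can be sketched in a few lines of Python (purely an illustration of the channel math; none of these names come from the game's code):

```python
def encode(dx, dy):
    """Split a signed offset into four non-negative channels (R, G, B, A)."""
    return (max(-dx, 0), max(dx, 0), max(-dy, 0), max(dy, 0))

def combine(c1, c2):
    """Additive blending, as the GPU does when two effects overlap."""
    return tuple(p + q for p, q in zip(c1, c2))

def decode(c):
    """Recover the signed offset: x = G - R, y = A - B."""
    r, g, b, a = c
    return (g - r, a - b)

# Opposite offsets cancel out after additive blending:
assert encode(1, 1) == (0, 1, 0, 1)
assert encode(-1, -1) == (1, 0, 1, 0)
assert combine(encode(1, 1), encode(-1, -1)) == (1, 1, 1, 1)
assert decode(combine(encode(1, 1), encode(-1, -1))) == (0, 0)
```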

Again, as mentioned in the last post, this was not my idea, all credit goes to Eric Wheeler!

So, how does the distortion target actually get filled?

1. Render stuff to the distortion render target

Code:
device.SetRenderTarget(distortionTarget);
device.Clear(Color.Transparent);

foreach (DistortionBaseEffect effect in effects)
{
    effect.Apply(_spriteBatch, _pre);
}

I have an abstract base class called "DistortionBaseEffect" which is responsible for rendering to the distortion target. This allows me to implement different methods of generating distortion. At the moment I have implemented rendering:

  • Textures (using the alpha channel for distortion strength)
  • A wave effect (used in the gif)

Each method uses a different pixel shader. The shaders are identical except for their offset function:

Code:
float4 main(float2 TexCoord : TEXCOORD0, float2 screenPos : TEXCOORD1) : COLOR0
{
    float2 offset = offsetFunc(TexCoord);

    float4 distortion = float4(0, 0, 0, 0);
    if (offset.x < 0)
        distortion.r = abs(offset.x);
    else
        distortion.g = offset.x;

    if (offset.y < 0)
        distortion.b = abs(offset.y);
    else
        distortion.a = offset.y;

    return distortion * 5;
}

Textures
I can use any texture I want for distortion. In fact, I can just attach the "TextureDistortion" component to a GameObject with a sprite attached, and it will automatically render the sprite to the distortion target instead of the regular world target.
The offset function looks like this:

Code:
float2 offsetFunc(float2 _pos)
{
    //Sample from the texture
    float4 dist = tex2D(TextureSampler, _pos);
    //Calculate the offset from the alpha channel
    float2 offset = float2(dist.a, dist.a * 1.5) * strength / 100;
    return offset;
}

Wave Effect
I implemented the wave effect to get a smoother result than I'd ever get from a texture. I also get more control over parameters like the speed, radius, height, and width of the wave.

Code:
float2 offsetFunc(float2 _pos)
{
    //Scale texture coordinates appropriately
    _pos -= radius + width;
    _pos /= scale;

    //Distance from the center of the rendered quad
    float x = length(_pos);

    //Calculate the offset strength
    float val = cos((x - currentRadius) / (width / PI)) * strength / 2 + strength / 2;

    //Clip ranges outside the width
    if (abs(x - currentRadius) > width)
        val = 0;

    //Offset in the direction away from the center
    return normalize(_pos) * val;
}
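For reference, the falloff above can be replayed in Python (illustrative only; the names mirror the shader's variables but are my own): the cosine pulse peaks at currentRadius and falls smoothly to zero at a distance of width on either side of the ring.

```python
import math

def wave_val(x, current_radius, width, strength):
    """Cosine pulse centred on current_radius, clipped to zero outside +/- width."""
    if abs(x - current_radius) > width:
        return 0.0
    return math.cos((x - current_radius) / (width / math.pi)) * strength / 2 + strength / 2

assert wave_val(10.0, 10.0, 2.0, 1.0) == 1.0        # full strength exactly on the ring
assert abs(wave_val(12.0, 10.0, 2.0, 1.0)) < 1e-9   # smoothly reaches zero at the edge
assert wave_val(13.0, 10.0, 2.0, 1.0) == 0.0        # clipped outside the ring
```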

I built the wave function in GeoGebra. This is what it looked like in the program:




2. Render distortion to the post-stage target

To get the final output, I first render the pre-stage target without any effects. This works around a sampling issue I had where slight distortion showed up in places where there shouldn't be any; clipping the distortion render for pixels whose total distortion is under a certain threshold fixed it.

Code:
device.SetRenderTarget(_post);
distortionApplyEffect.Parameters["distTex"].SetValue(distortionTarget);
distortionApplyEffect.Parameters["screenDim"].SetValue(GraphicsEngine.Instance.Resolution);

_spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointClamp,
                   DepthStencilState.None, RasterizerState.CullNone, null);
_spriteBatch.Draw(_pre, GraphicsEngine.Instance.ScreenRectangle, Color.White);
_spriteBatch.End();

_spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointClamp, 
                   DepthStencilState.None, RasterizerState.CullNone, distortionApplyEffect);
_spriteBatch.Draw(_pre, GraphicsEngine.Instance.ScreenRectangle, Color.White);
_spriteBatch.End();

I then render the pre-stage target again, this time with the distortion apply effect. This effect samples the distortion information from the distortion target, uses it to offset the texture coordinates, and then samples the pre-stage target's color from the calculated location.

Code:
float4 main(float2 TexCoord : TEXCOORD0, float2 screenPos : TEXCOORD1) : COLOR0
{
    //Sample from the distortion texture
    float4 sampleOffset = tex2D(DistortionSampler, screenPos) / 5;
    //Calculate the final offset
    float2 offset = float2(0, 0);
    offset.x = -sampleOffset.r + sampleOffset.g;
    offset.y = -sampleOffset.b + sampleOffset.a;
    //Clip if under the threshold
    clip(length(offset) < 0.001 ? -1 : 1);
    //Sample from the new position
    return tex2D(TextureSampler, screenPos + offset);
}
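Putting the two shaders side by side: the write pass scales the channels up by 5, the apply pass divides by 5 again, and anything below the 0.001 threshold is clipped. A Python sketch of that round trip (illustrative only; the function names are mine, not from the game):

```python
def write_distortion(dx, dy):
    """First main(): split the signed offset into channels, scaled up by 5."""
    channels = (max(-dx, 0.0), max(dx, 0.0), max(-dy, 0.0), max(dy, 0.0))
    return tuple(5 * v for v in channels)

def apply_distortion(channels, threshold=0.001):
    """Second main(): scale back down, decode, and clip tiny offsets."""
    r, g, b, a = (v / 5 for v in channels)
    offset = (-r + g, -b + a)
    if (offset[0] ** 2 + offset[1] ** 2) ** 0.5 < threshold:
        return None  # clipped: this pixel keeps its undistorted color
    return offset

# The encode/decode round trip preserves the offset; zero offsets are clipped:
assert apply_distortion(write_distortion(0.5, -0.25)) == (0.5, -0.25)
assert apply_distortion(write_distortion(0.0, 0.0)) is None
```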


Conclusion

Whoa, this post is long... I hope I explained everything adequately. If not, feel free to ask questions!
 
One open task: I could use depth information to decide whether an effect should be rendered behind or in front of an object. That would add a lot of depth to the effect and make it useful in scenes with a lot of parallax, where distortion that isn't occluded would be obvious.

Tiago_BinaryPig
« Reply #24 on: August 07, 2013, 02:31:01 PM »

Great article, thanks for spending the time writing it!

Though I understood most of it, I'm confused about one particular detail.
There are two main functions; when do you use one and when do you use the other?

Let's say I want to apply distortion in these three situations:

- the character explodes, creating a shockwave with the shape of the character's texture.
- from a circular texture, ripple it and scale it to create the same effect as the .gif.
- distort like the .gif (do you distort the entire render target or only what's inside a texture?)

Again, thanks for both articles!

irabonus
« Reply #25 on: August 08, 2013, 12:57:16 AM »

I guess with "two main functions" you mean the two different methods to generate distortion (texture and the wave function)?
Those are just the ones I've implemented so far. You could use anything you can do in a pixel shader to get offset values.
That is, if you wanted, you could write another shader which combines an analytic method (like the wave) with textures.

1. I'd use the texture distortion with the character texture and scale the texture up over time.
2. I found that scaling textures too much leads to artifacts in the distortion. That's why I decided to go with a function for the wave. You do have more overall control when using a texture though.
3. I render a quad that fits the current size of the effect. I don't render a fullscreen quad, because I want to reduce overdraw, which is already quite a big performance problem.

Tiago_BinaryPig
« Reply #26 on: August 08, 2013, 01:08:28 AM »


Yeah, sorry, I meant the two float4 main functions.
You use one or the other, not both at the same time. OK, I was kind of confused about that.

I'd implemented a distortion effect before you wrote the article, but I'll now take some of the things you explained into consideration.

Thanks again, and looking forward to seeing more of this game.
« Last Edit: August 08, 2013, 07:35:28 AM by KopanoGS »

Tiago_BinaryPig
« Reply #27 on: August 08, 2013, 11:17:45 AM »

Sorry to keep pressing on this topic; I don't want to ruin the thread (I'll delete my posts after your reply).

Which float4 main function do you use for each distortion type (from a texture and from the scene quad)? And which float2 offset functions go with them? I'm trying to modify my shader to be as flexible as yours, but I'm getting weird results.

I don't get when to use the second float4 function and when to use the first one.

Also, could you, for example, take your main character and make him distort the map? What would that look like?

When I try to distort my scene with a texture, if that texture has solid colors instead of fading colors, it shakes instead of rippling/distorting.
« Last Edit: August 08, 2013, 11:48:10 AM by KopanoGS »

irabonus
« Reply #28 on: August 09, 2013, 11:21:05 AM »

No need to be sorry, I'm completely fine with answering questions!

The first main (float4) function renders to the distortion target; the second renders the distortion to the final texture and is used in the "DistortionApply" shader.

I actually have three different shaders: one for the texture distortion, one for the wave distortion, and the "DistortionApply" shader.
The texture and wave distortions share the first main function and differ in which offsetFunc they use.

If I added a TextureDistortion component to the character, it would look like this:



I can't really help you with the shake problem. As you can see in the image above, the distortion has pretty hard edges if the sprite has no fading transparency; maybe that has something to do with it?

Tiago_BinaryPig
« Reply #29 on: August 09, 2013, 12:11:21 PM »

Oh, I see.
So, in pseudo-code, it would look like this:

Code:
public void Draw()
{
    // set render target: pre-shader-processing

    // draw everything in the level

    // set render target: first pass

    // draw the textures that will have distortion, using the shader with the
    // first (float4) main + one of the (float2) distortion offset functions

    // set render target: second pass

    // here's my doubt:
    // set the distortion sampler to the first-pass target
    // draw the pre-shader-processing render target with the second (float4) main

    // set render target: final

    // draw the second pass
}
Am I close to understanding your process, or not really?
I've been reading different articles and they all use different processes; plus, the way I've done my shader is completely different from yours.

By the way, can you make a wave out of that distortion texture, or can a distortion texture only be scaled up and down? To be more precise, can you do what you did in the .gif, but with a texture? "Explode" the character into a wave with the shape of the texture?
« Last Edit: August 10, 2013, 03:56:36 PM by KopanoGS »

irabonus
« Reply #30 on: August 24, 2013, 09:36:04 AM »

Hey, I'm sorry for the late answer, I've been really busy lately.
You've got it about right. Any differences are probably specific to each game anyway.

You can probably do that with a little playing around. I haven't tried it, though.

irabonus
« Reply #31 on: August 29, 2013, 07:36:25 AM »

Specter is on Greenlight now!

We've got a lot of feedback via Greenlight and it's been awesome so far (~40% yes votes). Some people asked what's so different about Specter, with all those other 2D platformers out there. I addressed that in an announcement; you can read it below or check it out on Greenlight via the link above.

Quote
What makes Specter unique?

You guys raise a very important point! The uniqueness of our game is something we put a lot of thought into, and I'll try to explain it as well as I can.

TL;DR: You get to unlock 35+ abilities, 40 perks, and 5 weapons, and use them all in the same fight in fighting-game-style key-mashing madness!

A lot of action-platformers either feature a vast world to explore (metroidvania-likes) or, more recently, a random level generation element (e.g. Rogue Legacy, Spelunky).
We are going for a more traditional arcadey feel. We have hand-made levels, with a high-score system and challenges that would not be possible with random generation.
While Cloudberry Kingdom shows how powerful an intelligent random level generator can be, it is still severely limited in what it can do. The levels are usually very short and don't allow for the kind of cleverness that's possible when a game designer spends weeks optimizing flow, difficulty, and design.

Also, the combat in recent action-platformers has been quite simplistic. In Rogue Legacy you have your sword, a secondary attack, and a few additional effects from items (like sprint and dash). In Specter the combat is more like what you'd see in a fighting game. You'll be able to unlock five weapons with five active abilities per weapon. You can switch between those weapons instantly at any time, which allows you to pull off complex combos.
Additionally, you can unlock 8 perks for each weapon. Those perks aren't passive like "5% more crit chance"; they add secondary effects to your abilities, like making your standard attack break brittle walls, or even unlocking an "ultimate" ability with a long cooldown (check out development update number one for that one).

ericmbernier
« Reply #32 on: August 29, 2013, 06:32:19 PM »

This looks beautiful. Following.

irabonus
« Reply #33 on: October 10, 2013, 01:45:36 PM »

We are still very much working on the game! It's our first big project and everything is taking longer than expected.
I implemented some new options for lighting today, because we wanted some more variety and it turns out that they work really well!




irabonus
« Reply #34 on: October 18, 2013, 02:25:10 AM »

In the spirit of Feedback Friday, here is a build for you to try!

Specter Alpha 0.1.15, 90 MB, Windows only

We are looking especially for feedback on the level design. Too hard/too easy? Frustrating? Interesting challenges? etc.

irabonus
« Reply #35 on: October 22, 2013, 11:30:11 AM »

Specter's Lighting Explained


using this screenshot...

Why 2D lighting?
The lighting system is one of Specter's main features. It allows us to add a lot of atmosphere to a level and make art reuse less obvious. Also, it looks pretty. The base system is actually older than this project. I first implemented it for a project which unfortunately got called off (that's also where I met Blair, the artist and game designer behind Specter). At the time it produced results like this:




Since then I spent quite a bit of time on it, adding new features, optimizing, etc.

Pipeline overview (Also read this post):



Lighting modes:
For the longest time there was only one lighting mode, the "effect" mode. Two weeks ago Blair asked for lights that would let him color in bigger parts of the level without standing out too much.
This spawned the "ambient" mode. Ambient lights have a smoother and slower falloff than effect lights. Effect lights still make great key lights, and we use them to highlight important parts of the level, like checkpoints and deadly pits.
The biggest difference between ambient and effect lights is the way they are combined with the world render target.
Ambient lights should change the color of the areas they affect without adding too much brightness. Effect lights must be able to blow out the image and give the bloom post-processing effect some bright pixels to work with.

Light rendering:
All lighting is computed in screen space, per pixel. There are two render targets and two shader passes, one for effect lights and one for ambient lights. The render targets are set to half the screen resolution to save performance; the visual difference between half and full resolution is negligible.
I render a quad for each light that is on screen. If the light is a spotlight I set a flag; point lights and spotlights are handled in the same shader. Every light has an optional depth value, and lights can be occluded by objects in the world if the appropriate flag is set.

Code:
//Simple early-out condition (roughly: off-screen, inactive, or not part of this pass)
if ((light.Position - mainCamPos).Length() > halfScreenRectSizeLength + light.Distance || !light.IsActive ||
    light.OnlyColor != _color)
    continue;

//Create a rectangle based on the light's radius and position, and transform it to screen space.
Rectangle lightScreenRect = Camera.Main.WorldToScreen(
      new Rectangle((int)(light.Position.X - light.Distance),
                    (int)(light.Position.Y - light.Distance),
                    (int)(light.Distance * 2),
                    (int)(light.Distance * 2)));

lightEffect.CurrentTechnique = lightEffect.Techniques["DeferredLight"];

//Set light parameters
lightEffect.Parameters["lightStrength"].SetValue(light.Intensity);              
lightEffect.Parameters["lightColor"].SetValue(light.Color.ToVector3());
lightEffect.Parameters["lightRadius"].SetValue(light.Distance/Camera.Main.Zoom);
lightEffect.Parameters["depth"].SetValue(light.Depth);
lightEffect.Parameters["checkDepth"].SetValue(light.CheckDepth);

//Set spotlight parameters
if (light.Type == LightType.Spot)
{
    lightEffect.Parameters["isSpotlight"].SetValue(true);
    lightEffect.Parameters["angleCos"].SetValue((float)Math.Cos(MathHelper.ToRadians(light.Angle)));
    lightEffect.Parameters["lightNormal"].SetValue( Vector2.Transform(new Vector2(1, 0), Matrix.CreateRotationZ(light.Rotation)));
}
else
    lightEffect.Parameters["isSpotlight"].SetValue(false);

//Set vertices for the quad (vertex shader handles transform to normalized screen coordinates)
vertices[0].Position = new Vector3(lightScreenRect.X, lightScreenRect.Y, 0);
vertices[1].Position = new Vector3(lightScreenRect.Right, lightScreenRect.Y, 0);
vertices[2].Position = new Vector3(lightScreenRect.X, lightScreenRect.Bottom, 0);
vertices[3].Position = new Vector3(lightScreenRect.Right, lightScreenRect.Bottom , 0);
                
//Apply Pass
if (light.OnlyColor)
    lightEffect.CurrentTechnique.Passes["Color"].Apply();
else
    lightEffect.CurrentTechnique.Passes["Light"].Apply();

// Draw the quad
Device.SetVertexBuffer(vertexBuffer);
Device.DrawUserPrimitives(PrimitiveType.TriangleStrip, vertices, 0, 2);
lightsRendered++;
Code executed for each light; light.OnlyColor marks an ambient light.

Diving into the shader:
The lighting shader takes these parameters:

Code:
//World depth texture
Texture2D depthTex;

sampler DepthSampler = sampler_state
{
    Texture = <depthTex>;
    AddressU = clamp;
    AddressV = clamp;        
};

//Screen dimensions in pixel units
float screenWidth;
float screenHeight;

//Current zoom level
float scale;
//Size of the screen in world units
float2 viewSize;

//Light specific information
float lightStrength;
float lightRadius;
float3 lightColor;
float depth;
bool checkDepth;

//Data needed for spotlights
bool isSpotlight;
float angleCos;
float2 lightNormal;

Vertex shader:
The vertex shader handles all transformations that can be done on a per vertex basis. The last conversion from texture coordinates to world coordinates is necessary to keep the lights the same size in world coordinates (and not in pixel coordinates) if the camera is zoomed in.

Code:
float2 VertexToPixelShader(inout float2 texCoord: TEXCOORD0, inout float4 position : POSITION) : TEXCOORD1
{
    //Half pixel offset
    position.xy -= 0.5;
    //Convert from pixel to screen coordinates (x and y between 0 and 1)
    position.xy = position.xy / float2(screenWidth, screenHeight);
    //Save for easy depth texture sampling
    float2 screenPos = position.xy;
    //Transform to normalized device coordinates (x and y between -1 and 1, with (-1, -1) at the bottom left of the screen)
    position.xy *= float2(2, -2);
    position.xy -= float2(1, -1);
    //Convert texture coordinates to world coordinates
    texCoord = lightRadius * 2 * texCoord * scale;
    return screenPos;
}
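As a sanity check, the pixel-to-NDC math from the vertex shader can be replayed in Python (illustrative only; the function name is mine):

```python
def pixel_to_ndc(x, y, screen_w, screen_h):
    """Mirror of the vertex shader: half-pixel offset, pixel -> [0,1] screen,
    then [-1,1] NDC with a flipped Y axis."""
    x -= 0.5
    y -= 0.5
    sx, sy = x / screen_w, y / screen_h  # screen coordinates in [0, 1]
    # position.xy *= float2(2, -2); position.xy -= float2(1, -1)
    return (sx * 2 - 1, sy * -2 + 1)

# The top-left pixel center maps to (-1, 1), the screen center to (0, 0):
assert pixel_to_ndc(0.5, 0.5, 800, 600) == (-1.0, 1.0)
assert pixel_to_ndc(400.5, 300.5, 800, 600) == (0.0, 0.0)
```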

Ambient lights:
Let's start with the easier part. The pixel shader for the ambient lights is pretty straight forward. The only thing you might find weird is that I use the alpha channel for the light's strength. After all, we are rendering in additive mode anyway, why don't I just multiply the color with the strength (black meaning no light)?
The ting is that I need the alpha channel later on to know how to combine the ambient lights with the world color texture. The way I do it the colors are still blended correctly, but I also now how strong the lighting is at each pixel. This also allows me to add black ambient lights and actually darken certain areas.

Code:
float4 AmbientShader(float2 TexCoord : TEXCOORD0, float2 screenPos : TEXCOORD1) : COLOR0
{
    //Sample the depth value from the world depth texture.
    float depthVal = tex2D(DepthSampler, screenPos);
    //Discard the pixel if the depth check is enabled and the difference between the depth of the light and the depth of the sampled pixel is too high.
    clip( !checkDepth || abs(depthVal - depth) < 0.05 ? 1 : -1 );

    float2 pixelPosition = TexCoord;
    //Transform the lightRadius from pixel to world coordinates (the light sits in the middle of the quad, which is sized (lightRadius, lightRadius) * 2, hence its position is (lightRadius, lightRadius)).
    float2 lightPosition = float2(lightRadius, lightRadius) * scale;
    float2 lightDirection = (pixelPosition - lightPosition) / scale;

    float distance =  length(lightDirection);

    //Linear fall off.
    float coneAttenuation = saturate(1.0f - distance / lightRadius);
    if(isSpotlight)
    {
        //Multiply spotlight falloff
        coneAttenuation *= coneFactor(lightDirection);
    }

    float3 shading = lightColor;
    //Write light's color to the render target, with the strength of the light at this pixel as alpha value.
    return float4(shading.r, shading.g, shading.b, coneAttenuation * lightStrength);
}

Effect lights:
The pixel shader for the effect lights is quite a bit more complex, mostly because of the edge detection. If you look at it for a while and read through the comments, though, it shouldn't be too hard to understand.

Code:
float4 EffectShader(float2 TexCoord : TEXCOORD0, float2 screenPos : TEXCOORD1) : COLOR0
{
    //The same as in the ambient shader, just that an effect light will light everything behind it, not just close things.
    float depthVal = depthTex.Sample(DepthSampler, screenPos);

    clip( !checkDepth || depth < depthVal ? 1 : -1 );

    float2 pixelPosition = TexCoord;

    float2 lightPosition = float2(lightRadius, lightRadius) * scale;
    float2 lightDirection = (pixelPosition - lightPosition) / scale;

    float distance =  length(lightDirection);
    //Edge detection starts here
    //Calculate offset for edge detection samples
    float2 offset = (5.0f*(1.0f - distance/lightRadius))/viewSize;
    //Sample depth values around the current pixel
    float4 sample1 = depthTex.Sample(DepthSampler, screenPos - float2(offset.x, 0));
    float4 sample2 = depthTex.Sample(DepthSampler, screenPos - float2(0, offset.y));
    float4 sample3 = depthTex.Sample(DepthSampler, screenPos + float2(offset.x, 0));
    float4 sample4 = depthTex.Sample(DepthSampler, screenPos + float2(0, offset.y));
    //Write all samples into one vector (for nicer math)
    float4 fEdges = {
        sample1.r,
        sample2.r,
        sample3.r,
        sample4.r
    };
    //Calculate differences to current pixel.
    float4 delta = abs(depthVal.xxxx - fEdges);
    //0.005 is the difference threshold for an edge, if the difference is bigger in some direction, it is one.
    float4 edges = step(0.005, delta);
    //This produces the step/ring effect of effect lights. (one ring being 20 world units wide)
    distance = (int)((int)distance/(20/scale))*(20/scale);

    //Spotlight stuff, same as for ambient lights
    float coneAttenuation = saturate(1.0f - distance / lightRadius);
    if(isSpotlight)
    {
        coneAttenuation *= coneFactor(lightDirection);
    }

    //Decide if the current pixel is on an edge
    bool condition = dot(edges, 1.0) != 0.0 && depthVal.r < depth;
    //Cubic falloff
    float3 shading = pow(coneAttenuation * lightColor * lightStrength, 3);

    //Make edges a lot brighter
    if(condition)
        shading *= 3.0f;

    //Make things in front of the light darker, but still let some light through (in case depth test isn't enabled).
    if(depthVal.r > depth)
    {
        return float4(shading.r, shading.g, shading.b, 1.0f);
    }
    else
    {
        if(condition)
            return float4(shading.r, shading.g, shading.b, 1.0f);
        else
            return float4(shading.r, shading.g, shading.b, 1.5f)/1.5;
    }
}

The equation for spotlight cones:
This piece of code is the result of a lot of trial and error. The variables "angleCos" and "lightNormal" are properties of the light: "angleCos" is the cosine of the light cone's angle, and "lightNormal" is the normalized direction the spotlight is facing.
Honestly, I can't quite remember why it works (maybe someone can help me out here?), but it produces a very nice cone shape with smooth edges. Remember, comments in your code are important!

Code:
float coneFactor(float2 lightDirection)
{
    float dotP = dot(lightNormal, normalize(lightDirection));
    return saturate(dotP - ((1-dotP)/(1-angleCos) * angleCos));
}
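Regarding the open question above: expanding the expression gives dotP - ((1 - dotP)/(1 - angleCos)) * angleCos = (dotP - angleCos)/(1 - angleCos), i.e. a linear remap of the cosine from the range [angleCos, 1] onto [0, 1], which saturate then clamps. Assuming lightNormal is the spotlight's normalized facing direction, the factor is 1 along the axis and fades to 0 at the cone's edge, which would explain the smooth cone. A quick numerical check in Python (illustrative):

```python
import math

def cone_factor(dot_p, angle_cos):
    """The shader's expression, with saturate() written as a clamp to [0, 1]."""
    return max(0.0, min(1.0, dot_p - ((1 - dot_p) / (1 - angle_cos)) * angle_cos))

def linear_remap(dot_p, angle_cos):
    """Equivalent closed form: map [angle_cos, 1] linearly onto [0, 1]."""
    return max(0.0, min(1.0, (dot_p - angle_cos) / (1 - angle_cos)))

angle_cos = math.cos(math.radians(30))  # a 30-degree half-angle cone
for dot_p in (-1.0, 0.0, angle_cos, 0.9, 1.0):
    assert abs(cone_factor(dot_p, angle_cos) - linear_remap(dot_p, angle_cos)) < 1e-9
assert cone_factor(1.0, angle_cos) == 1.0       # full strength on the axis
assert cone_factor(angle_cos, angle_cos) == 0.0 # zero at the cone's edge
```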


Combining the lighting with the rest:
Now that we have two render targets full of lighting information, we need to decide how to use it to light the scene. There isn't really a wrong way to do this; it depends on the look you are going for.
I draw the previously rendered world target to an intermediate render target (we need two of those for post-processing anyway), using another shader.
In addition to the world texture, this shader takes the ambient light and effect light textures. I can also set an overall ambient color.

Code:
spriteBatch.Begin(SpriteSortMode.Immediate,
                  BlendState.AlphaBlend,
                  SamplerState.PointClamp,
                  DepthStencilState.Default,
                  null,
                  combine);
combine.Parameters["ambientTex"].SetValue(rtAmbientLight);
combine.Parameters["effectTex"].SetValue(rtEffectLights);
combine.Parameters["ambientColor"].SetValue(AmbientColor.ToVector4());
spriteBatch.Draw(rtWorld, ScreenRectangle, Color.White);
spriteBatch.End();

This time we can use the default XNA vertex shader and only need to specify a pixel shader.

Code:
float4 main(float2 TexCoord : TEXCOORD0) : COLOR0
{
    //Sample colors from input textures
    float4 worldColor = tex2D(TextureSampler, TexCoord);
    float4 amCol = tex2D(AmbientSampler, TexCoord);
    float4 efCol = tex2D(EffectSampler, TexCoord);

    //Mix the ambient color with the world color, based on the alpha channel of the ambient lighting textures.
    //(Remember, the alpha channel is the strength of the lighting at that point)
    //Add half of the ambient lights, just because that is what Blair wanted
    worldColor = (worldColor * (1-amCol.a) + (worldColor * amCol ) * amCol.a) + amCol /2;

    //Basic multiply with the flat ambient color
    float4 finalColor = worldColor * ambientColor;

    //Add the effect light color (multiplied by the base color, because it looked less flat)
    return finalColor + (efCol * worldColor);
}
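The per-channel arithmetic of that combine step can be sketched in Python (illustrative only; a scalar stand-in for the float4 math, with names of my choosing):

```python
def combine_pixel(world, ambient, ambient_alpha, effect, flat_ambient):
    """Mirror of the combine shader for a single channel.
    ambient_alpha is the ambient light's strength at this pixel."""
    # Blend the ambient color in by its alpha, then add half of it on top
    world = world * (1 - ambient_alpha) + (world * ambient) * ambient_alpha + ambient / 2
    # Multiply with the flat ambient color
    final = world * flat_ambient
    # Add the effect light, multiplied by the base color so it looks less flat
    return final + effect * world

# With no ambient light, only the flat ambient and effect lights contribute:
assert abs(combine_pixel(0.5, 0.0, 0.0, 0.2, 1.0) - 0.6) < 1e-9
```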


Conclusion
I hope you were able to get something out of this post. I think it's a neat way to do things: extensions like shadows or normal mapping are easy to integrate, and it performs quite well.
Thanks to soolstyle for this article, which inspired me to start this!

And that's it! If you have any questions, feel free to ask!
« Last Edit: October 22, 2013, 12:01:34 PM by irabonus »

Mystic River Games
« Reply #36 on: October 25, 2013, 12:21:08 AM »

I just tried your alpha build, and I wanted to give you some feedback about it.

Difficulty: fine overall, but in some places it was hard to figure out what to do, especially at the entrance of level 2, I think, where you have to dash. I didn't know I could dash, so I got stuck between the green wall and the crumbling top; I was actually trying to break the ceiling, which didn't work, and only by luck did I manage to dash through. A tutorial for that would help new players. Besides that it seems OK, sometimes too easy. Also, add more secrets or make the environment a little more interesting; it's not bad, but it needs some more work.

Sound: normalize your sounds. One sound that was really annoying is the jump sound of some skulls with legs; it's too loud compared to the rest of the sounds.

Death and respawn: between the death and respawn sequence I can see a few frames of nonsense. I'm not sure what that is, but it confuses me sometimes.

There are small quirks here and there. In some places I got killed without knowing why: I hit the spikes, and there was a hole right after; it was my first hit from the spikes, but I died as if I had fallen, even though I didn't fall.

Looks very promising, keep up the good work!

irabonus
« Reply #37 on: October 25, 2013, 12:05:03 PM »

Hey, thanks for the feedback!

We actually do have tutorials for all abilities; they didn't load for some reason, and I've fixed that now. The cracked wall parts can be destroyed once you've unlocked a certain perk.

We are working with a sound designer now, so the sounds should get a lot better!

Yeah, that's caused by the somewhat slow loading. If it's confusing, I'll definitely have to do something about it. It didn't occur to me that that could be a problem, so thanks for pointing it out!

airman4
« Reply #38 on: October 25, 2013, 11:54:22 PM »

Nice game!
But it's very, maybe too much, like Rogue Legacy. Did you make that game as well? Sorry, I don't know.

Anyway, the game could benefit from moving the camera down when you crouch, like in the Sonic games, to see what's going on at the bottom of the screen and avoid dying stupidly in a hole.

The combat is kind of repetitive; I'd hoped for at least a vertical attack, that would be cool.
The level design is well crafted, sometimes hard, sometimes too easy, but it's very good nonetheless.

Keep it up, the lighting is very cool.

irabonus
« Reply #39 on: October 26, 2013, 09:54:53 AM »

We are not the guys who made Rogue Legacy (though we absolutely love that game). I know there are some similarities and the comparison is bound to come up, but I don't think the games are too similar.
Yes, you are a knight and it's a 2D action-platformer, but that's pretty much where it stops.
There are vast differences in gameplay and level structure, and progression works very differently as well. Please refer to this post for more details.

Anyway, thank you for your feedback! A lot of people have been asking for a vertical attack and we are looking into how we could do that.
