Author Topic: Using Noise and Median-Cut to Create Pixel-Nebula  (Read 2133 times)
Deckhead
« on: August 14, 2019, 08:22:57 PM »

Originally posted here.

One of the things I wanted to have in The Last Boundary was nebulae. They look awesome, and space doesn't look like space without them; it just looks like a few scattered white pixels. But because the game is supposed to be pixel-art-ish, I had to find a way to make my noise library produce pixelly images.

Here are some examples:













The single-colour examples above use 8 different colours, while the others use 16. In this article I'll show you how I created pixelly-looking nebulae for The Last Boundary.


Ordinarily, whatever noise library you use, whether it's LibNoise, whatever is in your game engine, or one you rolled yourself, you're usually going to end up with values between -1 and 1. For the purposes of 2D textures this is usually mapped to between 0 and 1, and for the sake of argument we can say this ends up as RGB(0,0,0) to RGB(255,255,255).


Perlin Noise generated from the x,y coordinate of each pixel scaled by 0.3f

Then you might use Fractional Brownian Motion to give that nice puffy cloud look.


Perlin Noise put through Fractional Brownian Motion with 8 octaves, 0.01 frequency, 0.5 persistence, and 2.0 lacunarity.

I've noticed that there are a lot of incorrect implementations of Perlin Noise, Simplex Noise, and fBm on the web, and there seems to be a lot of confusion about which is which. Take care that you're using a correct implementation, because when you chain things together as I have below, an incorrect implementation might not produce the desired results.

Let's pretend we wanted to produce a smoke effect; this looks like it will work nicely. But our pixel-art game would look funny with the sudden introduction of a whole bunch of new colours, from RGB(0,0,0) through to RGB(255,255,255). We could have 256 new grey values in our game all of a sudden.

What we need to do, is transform it into just a few colours. That's what we'll be doing later. But first...

Deckhead
« Reply #1 on: August 14, 2019, 08:23:43 PM »

Generating a Random Nebula

I followed existing tutorials on generating random nebulae, but I added some of my own steps, and I also used my own noise library. I rolled my own library years ago because I wanted to properly understand Perlin Noise and how it can be used along with other concepts to produce textures.

Depending on what you're using, you might be able to follow along step by step, or you may need to code up some new things to affect your noise. For everything other than the initial generation of noise and the fBm, I'll explain what's happening so that you can code it yourself; I think it's safe to assume that you already have noise generation and fBm available.

Firstly, here's the final result for the nebula we're making:


The final result

It's important to note that this hasn't been pixellated yet. It's the full range of colour, with a pixelly star field. We'll pixelate it later.

The first thing I do is generate five different textures. Red, Green, Blue, Alpha, and a Mask. The Red, Green, and Blue textures are for those respective channels for the final colour. In reality, I only generate one or two of the colour channels, because I've found that using all three at once produces a crazy disco nebula that just looks bad. Any single colour works well and any combination of two also works well.

The Alpha channel is important, because that's what decides whether the underlying stars shine through the nebula. To illustrate that, here's the Alpha channel of the ongoing example.


The final alpha result of our example

Anywhere that's white is a value moving towards 1.0, which will result in an Alpha of 255. Black is the opposite, and as a result see-through. So if you compare it to the example, you'll see that the black sections match up with areas where we can see the underlying starfield.


Starfield example

These aren't the same stars as in the example, because they're randomly generated in each screenshot. But that hopefully won't affect your understanding of the nebula generation.

My noise library is module-based, with LibNoise as the inspiration. For those unfamiliar, it basically means everything is a "module", and you chain them together. Some modules generate new values (Perlin, Constant Value), some combine values from other modules (Multiply, Add), and some just run an operation on a value (Lerp, Clamp).
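
To make the idea concrete, here's a rough sketch of how such a module chain can be expressed. This is just a minimal C++ illustration of the concept, not my actual library:

Code:
#include <functional>

// A module is just a function from (x, y) to a value.
using Module = std::function<float(float, float)>;

// A module that always returns the same value.
Module constant(float value) {
    return [=](float, float) { return value; };
}

// Combine two modules by multiplying their outputs at the same coordinate.
Module multiply(Module a, Module b) {
    return [=](float x, float y) { return a(x, y) * b(x, y); };
}

// Run an operation on another module's value, e.g. clamp it into [lo, hi].
Module clampModule(Module src, float lo, float hi) {
    return [=](float x, float y) {
        float v = src(x, y);
        return v < lo ? lo : (v > hi ? hi : v);
    };
}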

Colour Channels

Whether we're doing one, two, or even three colours doesn't matter. The Red, Green and Blue channels are all generated the same way; I just use different initial seed values for them. My seeds are just based on the current system time.

The images below are all grey-scale, but in reality each would only hold values for one of the three channels. Grey-scale here is just to help illustrate the results.

1. Perlin Noise

Just as above, the starting point is Perlin Noise. You can use Simplex Noise if you prefer; I don't believe the 2D implementation is owned by Ken Perlin, but I could be wrong. Mathematically, Simplex Noise uses fewer instructions, so it would be quicker to generate an equivalent nebula. Because it uses simplexes rather than a grid, it also produces slightly nicer-looking noise, but we're going to do a lot to it, so it really won't matter much.

The below isn't the actual source used, because the real source x,y values are adjusted by the fBm in step 3. This is just the x,y coordinate of the image multiplied by a static scale factor.


Perlin Noise generated from the x,y coordinate of each pixel scaled by 0.3f. I.e. PixelValue=PerlinNoise(x * 0.3f, y * 0.3f)

The values produced by Perlin Noise are roughly between -1 and 1, so they were remapped to fall between 0 and 1 to produce the normal greyscale image above. I tested the actual range of the values so that when mapping I could get the highest contrast (the lowest value mapped to 0 and the highest value mapped to 1).
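
If you want the same contrast-stretched mapping, a sketch looks like this (perlin(x, y) stands in for whatever noise function you're using):

Code:
#include <algorithm>
#include <vector>

// Sample the noise for every pixel, then remap the measured min/max to 0..1
// so the darkest sample becomes 0 and the brightest becomes 1.
std::vector<float> remapToUnitRange(int width, int height, float scale,
                                    float (*perlin)(float, float)) {
    std::vector<float> values(width * height);
    float lo = 1e9f, hi = -1e9f;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float v = perlin(x * scale, y * scale);
            values[y * width + x] = v;
            lo = std::min(lo, v);
            hi = std::max(hi, v);
        }
    }
    for (float& v : values)
        v = (v - lo) / (hi - lo);
    return values;
}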

2. Multiply

The next module I use multiplies the generated noise by 5. This could be considered a contrast adjustment. The negative values are darker and the positive values are brighter.

There's nothing to show you because once I map the values from between -5 and 5 into the range of 0 and 1 the result is the same as above.

3. Fractional Brownian Motion

This is the step that turns the noise into what a lot of people think the "noise effect" is. This is where you run octaves of increasingly smaller samples from the noise function (in our case perlin(x,y) is the function) to produce the fluffy look.


Fractional Brownian Motion of the above Perlin Noise. 8 octaves, .01f frequency, .5f persistence, and 2.5f lacunarity

You can see the beginning of something here now. The above image wasn't generated by scaling the x,y pixel coordinates, the fBM handles that. Again, the values were mapped back into the 0 and 1 range from the possible range of -5 to 5.
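
If your library doesn't already provide fBm, a basic version over a 2D noise function looks roughly like this sketch, using the same parameter names as above (perlin(x, y) is assumed to return values roughly between -1 and 1):

Code:
// Each octave samples the noise at a higher frequency (scaled by lacunarity)
// and contributes less to the sum (scaled by persistence).
float fbm(float x, float y, int octaves, float frequency,
          float persistence, float lacunarity,
          float (*perlin)(float, float)) {
    float sum = 0.0f;
    float amplitude = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum += perlin(x * frequency, y * frequency) * amplitude;
        frequency *= lacunarity;
        amplitude *= persistence;
    }
    return sum;
}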

4. Clamp

Now I clamp the values between -1 and 1. This completely obliterates everything outside this range.


The same fBm clamped between -1 and 1

The overall effect this has is to pull our values back into a smaller range, and at the same time it creates steeper gradients and larger areas of complete white or complete black. These dead or washed out areas are important for the nebula effect we need later. If we hadn't multiplied by 5 first, clamping wouldn't have done anything.

5. Add 1

Now I take the values from the clamp, and add 1 to them. This has the effect of putting the values into the range of 0 and 2. When remapped, the results would look the same as above.

6. Divide by 2

You could probably guess that I then divide the result by 2 (multiply by .5). Again, it's the same visual as before.

Steps 5 and 6 combine to bring our values into the range of 0 to 1.
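
Chained together, steps 2 through 6 amount to something like this sketch. It reuses the fbm() from above; multipliedPerlin is just a hypothetical wrapper for step 2:

Code:
float perlin(float x, float y);   // your base noise function (assumed available)

// step 2: multiply the raw noise (hypothetical wrapper for illustration)
float multipliedPerlin(float x, float y) { return perlin(x, y) * 5.0f; }

float colourChannelValue(float x, float y) {
    float v = fbm(x, y, 8, 0.01f, 0.5f, 2.5f, multipliedPerlin);  // step 3
    if (v < -1.0f) v = -1.0f;                                     // step 4: clamp
    if (v >  1.0f) v =  1.0f;
    return (v + 1.0f) * 0.5f;                                     // steps 5 and 6: into 0..1
}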

7. Produce a Distortion Texture

Next, I create a distortion texture. This is done with Perlin Noise (with a new seed) > Multiply by 4 > fBm. The fBm in this case uses 5 octaves, 0.025 frequency, 0.5 persistence, and 1.5 lacunarity.


Distortion Texture

The idea of this texture is to provide more detail than the nebula texture so far. The nebula is a fairly large, billowy cloud; this texture is going to make small adjustments throughout it. You can start to see the grid nature of Perlin Noise come through in this one.

8. Displace the Colour Texture using the Distortion Texture

Next I take these two textures, and use one to displace the coordinates of the other by a factor. In this case, the combination looks like this:


Displacement Result

The way this works is that the distortion texture is used to change the x,y coordinates that you're looking for in the original noise result.

Remember, the images I've shown you so far have been for illustration purposes. At any one time, all we really have is a noise function: we give it an x,y value, and it spits out a number. The range that number can be in differs some of the time, but above we've been mapping it back to greyscale in order to produce an image. The image is produced by using each x,y coordinate in the image as the x,y we provide to the noise function.

So, when we say:

Give me a value for the top-left corner pixel X=0 and Y=0

The noise function gives us a number. If we're asking Perlin Noise, we know it'll be -1 to 1; if we've clamped, added, and multiplied the value as before, we know we'll have a value between 0 and 1.

So with that in mind, we know that the distortion noise function is producing values between -1 and 1. So to do the displacement, when we say:

Give me a value for the top-left corner pixel X=0 and Y=0

The displace module first asks the displacement function for a value at coordinates x,y. The result is between -1 and 1 (as above). This is then multiplied by 40 (the factor I'm using), giving a value between -40 and 40.

Next we take that value, add it to the x,y coordinates we're looking up, and use the result to look up the colour texture. We prevent negatives by clamping at 0, because we can't look up negative x,y coordinates from our noise functions (at least you can't in my noise library, but you possibly can in yours).

So in summary it works like:

Code:
ColourFunction(x,y) = value in the range of 0 to 1
DisplaceFunction(x,y) = value in the range of -1 to 1
DoDisplace(x,y) = {
    v = DisplaceFunction(x,y) * factor   // factor = 40, so v is in -40 to 40
    x = x + v
    y = y + v
    if x < 0 then x = 0                  // can't look up negative coordinates
    if y < 0 then y = 0
    return ColourFunction(x,y)
}

Hopefully that makes sense. You're basically not looking at the x,y you thought you were; instead it's offset by some amount. And because the amount is also a smooth gradient, it's smoothly displaced.

There are other ways to do the displacement. For example, I have a module in my noise library that produces a Spiral Displacement, which can be used to draw the texture spiralling down toward a series of points.

That's it. We do the above three times, using new seed values for each colour channel. We may produce one colour channel, or we may produce two; I don't think three is worth it.

Alpha Channel

The Alpha Channel is produced much in the same way as the Colour Channels:

  • Start with Perlin Noise
  • Multiply by 5
  • fBM with 8 octaves, frequency 0.005, persistence 0.5, and lacunarity 2.5
  • Clamp the results between -1 and 1, add 1, divide by 2 (i.e. shift the range from -1 to 1 into 0 to 1).
  • Shift the result by a small amount in the negative direction. I'm shifting by 0.4. This has the effect of just turning everything slightly darker.
  • Clamp the results between 0 and 1. Because we shifted everything a little darker, we've basically created more 0 areas, and some areas have gone into negative.
The results are this Alpha Channel texture.


Alpha texture

As already mentioned, the black areas will be transparent and the white areas will be opaque.
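
As a sketch, the whole alpha pipeline chained together looks something like this (again reusing the earlier fbm() and the hypothetical multipliedPerlin() wrapper):

Code:
float alphaValue(float x, float y) {
    float v = fbm(x, y, 8, 0.005f, 0.5f, 2.5f, multipliedPerlin);
    if (v < -1.0f) v = -1.0f;        // clamp to -1..1
    if (v >  1.0f) v =  1.0f;
    v = (v + 1.0f) * 0.5f;           // shift into 0..1
    v -= 0.4f;                       // darken; more of the image drops to 0
    if (v < 0.0f) v = 0.0f;          // final clamp to 0..1
    if (v > 1.0f) v = 1.0f;
    return v;                        // 0 = transparent, 1 = opaque
}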

Mask Channel

This last texture is used to produce the shadows that sit over the top of everything. It begins the same as the other textures:

  • Perlin Noise
  • Multiply by 5
  • fBm, 5 octaves, 0.01 frequency, 0.1 persistence, 0.1 lacunarity. This is a small persistence, which results in a less busy cloud
  • Shift from -1 to 1 into 0 to 1
But we produce two of them:


Mask A


Mask B

We put these two textures through what I call a Select module. Basically, we use the value from either Module A or Module B; which one we use is based on the value of Module C. It also requires two other values, the Select Point and the Falloff.

If the value at x,y in Module C is greater than or equal to the SelectPoint, we use the value at x,y in Module B. If it is less than or equal to SelectPoint - Falloff, we use the value at x,y in Module A.

If it's between SelectPoint - Falloff and SelectPoint, we linearly interpolate between the values at x,y from Module A and Module B.

Code:
float select(x, y, moduleA, moduleB, moduleC, selectPoint, falloff)
{
    float s = moduleC(x, y);
    if (s >= selectPoint)
        return moduleB(x, y);
    else if (s <= selectPoint - falloff)
        return moduleA(x, y);
    else
    {
        float a = moduleA(x, y);
        float b = moduleB(x, y);
        // interpolate from A at (selectPoint - falloff) up to B at selectPoint
        return lerp(a, b, (s - (selectPoint - falloff)) / falloff);
    }
}

In our case, Module A is a Constant module with value 0, Module B is our Mask A texture, and the selector, Module C, is Mask B. The SelectPoint is 0.4 and the falloff is 0.1. The result:


Final Mask

A larger or smaller SelectPoint decreases or increases the amount of black in the mask. A larger or smaller falloff increases or decreases the softness of the mask's edges. I could have used a Constant module with the value 1 instead of one of the masks, but I like to add a bit more randomness to the "unmasked" areas.

Blending the Colour Channel with the Mask

Now we need to apply the mask to each of the Colour Channels. This is done via a Blending module, which combines a percentage of the value from two modules such that the two percentages add up to 100%.

So we could take 50% of the value at x,y from Module A and 50% of the value at x,y from Module B, or 75% and 25%, and so on. The percentage we take from each is based on another module, Module C. If the value at x,y from Module C is 0, we take 100% from Module A and 0% from Module B; if it's 1, it's the other way around. I think you get the idea.

For each colour texture we combine:

  • Module A - a constant value of 0
  • Module B - the colour channel you've seen before
  • Module C - the mask result
This means that our colour channel noise will only show through where the mask has values above 0 (the areas heading toward white), and the amount it shows through is based on the value from the mask.
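
As a sketch, the blend is really just a linear interpolation driven by the third module (using the Module alias from the earlier sketch):

Code:
float blend(float x, float y, Module a, Module b, Module c) {
    float t = c(x, y);                            // 0 = all of A, 1 = all of B
    return a(x, y) * (1.0f - t) + b(x, y) * t;
}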

Here's the result for the example we've been using so far:


Final result

Compare this to the original, before applying the blend using the mask.


Prior to blending with the mask

This example might not show it well (due to the random nature it's hard to produce a good example on purpose), but the mask effect is what produces those darker areas. Of course, we could adjust the Mask to make it more pronounced, as discussed above.

What's important is that the same Mask is applied to all of our colour channels so that the same areas are in shadow.

Combining Everything Together

Our original final example:


Final example

It uses Red, Green, and Alpha channels:


Red Channel


Green Channel


Alpha Channel

And then you just put it on top of your starfield.

Now that looks pretty good, but it's probably not well suited to a pixel-art game. We need to reduce the number of colours...

Deckhead
« Reply #2 on: August 14, 2019, 08:24:19 PM »

Median Cut

Now this part could be applied to anything really. Maybe you generate a marble texture to apply to something and you want the number of colours reduced. This is where median cut comes in. We're going to use it to reduce the number of colours in our nebula above.

This happens before it's placed onto the star field. The number of colours you want to use is completely arbitrary.

The Median Cut algorithm as per wikipedia:

Quote

Suppose we have an image with an arbitrary number of pixels and want to generate a palette of 16 colors. Put all the pixels of the image (that is, their RGB values) in a bucket. Find out which color channel (red, green, or blue) among the pixels in the bucket has the greatest range, then sort the pixels according to that channel's values. For example, if the blue channel has the greatest range, then a pixel with an RGB value of (32, 8, 16) is less than a pixel with an RGB value of (1, 2, 24), because 16 < 24. After the bucket has been sorted, move the upper half of the pixels into a new bucket. (It is this step that gives the median cut algorithm its name; the buckets are divided into two at the median of the list of pixels.) Repeat the process on both buckets, giving you 4 buckets, then repeat on all 4 buckets, giving you 8 buckets, then repeat on all 8, giving you 16 buckets. Average the pixels in each bucket and you have a palette of 16 colors.
Since the number of buckets doubles with each iteration, this algorithm can only generate a palette with a number of colors that is a power of two. To generate, say, a 12-color palette, one might first generate a 16-color palette and merge some of the colors in some way.
(Source: https://en.wikipedia.org/wiki/Median_cut)

I found this to be a pretty bad explanation and not very helpful; implementing the algorithm this way results in pretty ugly images. I've implemented the algorithm with some changes:

  1. Keep a container of boxes, each stored along with a value representing its range (more on that later). A box simply holds some dynamic number of pixels from our source image.
  2. Add all the pixels from the source image as the first box, with a range of 0.
  3. So long as the total number of boxes is less than the number of colours we want, we repeat the following steps.
  4. For each current box, if its range value is 0, determine what the primary colour channel of that box is, and sort the pixels in that box according to that channel.
    The primary channel is whichever of Red, Green, Blue, and Alpha has the widest range, i.e. redRange=Max(Red) - Min(Red).
    Sorting is simply done by comparing each pixel's value in that primary channel, ignoring the other channels.
  5. Take note of that primary channel's range and store it alongside the box in our container of boxes. We do this partly so we don't re-calculate a box that has already been done.
  6. After we've done steps 4 and 5 for each box, we sort the container of boxes by range. This is different to the Wiki explanation: we take the box with the biggest range and subdivide it, rather than subdividing boxes that may only hold a tiny range of pixels. We always subdivide the biggest box because it is the most likely to contain too many colours to be represented well by a single palette entry.
  7. Grab the biggest box (biggest == biggest range) and remove it from the container of boxes. Split it into two equally sized halves and put them back into the container with a range of 0 (so they'll be re-calculated later). Remember that the pixels in it were ordered in a previous step, so one half has the larger values and the other the smaller values. This allows other colour channels to take over once we calculate the primary channel again.
Once we've reached a number of boxes that equals the number of colours we want, we simply average all the pixels in each box to determine the palette entry that best represents those colours. I just used a Euclidean distance, but there are perceptual measures that might do a better job.
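
Here's a compact sketch of that variant. The types are hypothetical (pixels as RGBA floats), error handling for tiny boxes is omitted, and it's meant to illustrate the steps rather than be my exact code:

Code:
#include <algorithm>
#include <vector>

struct Pixel { float c[4]; };          // r, g, b, a

struct Box {
    std::vector<Pixel> pixels;
    float range = 0.0f;                // widest channel range; 0 means "recalculate"
};

// Find the channel with the widest range, sort the box by it, and cache the range.
static void computeRangeAndSort(Box& box) {
    int widest = 0;
    float best = -1.0f;
    for (int ch = 0; ch < 4; ++ch) {
        auto [lo, hi] = std::minmax_element(box.pixels.begin(), box.pixels.end(),
            [ch](const Pixel& a, const Pixel& b) { return a.c[ch] < b.c[ch]; });
        float r = hi->c[ch] - lo->c[ch];
        if (r > best) { best = r; widest = ch; }
    }
    std::sort(box.pixels.begin(), box.pixels.end(),
        [widest](const Pixel& a, const Pixel& b) { return a.c[widest] < b.c[widest]; });
    box.range = best;
}

std::vector<Pixel> medianCut(std::vector<Pixel> image, std::size_t paletteSize) {
    std::vector<Box> boxes;
    boxes.push_back(Box{std::move(image), 0.0f});
    while (boxes.size() < paletteSize) {
        for (Box& b : boxes)
            if (b.range == 0.0f && b.pixels.size() > 1) computeRangeAndSort(b);
        // Always split the box with the biggest range.
        auto biggest = std::max_element(boxes.begin(), boxes.end(),
            [](const Box& a, const Box& b) { return a.range < b.range; });
        std::size_t half = biggest->pixels.size() / 2;
        Box upper;
        upper.pixels.assign(biggest->pixels.begin() + half, biggest->pixels.end());
        biggest->pixels.resize(half);
        biggest->range = 0.0f;         // force both halves to be recalculated
        boxes.push_back(std::move(upper));
    }
    std::vector<Pixel> palette;        // average each box into one palette entry
    for (const Box& b : boxes) {
        Pixel avg{};
        for (const Pixel& p : b.pixels)
            for (int ch = 0; ch < 4; ++ch) avg.c[ch] += p.c[ch];
        for (int ch = 0; ch < 4; ++ch) avg.c[ch] /= float(b.pixels.size());
        palette.push_back(avg);
    }
    return palette;
}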

Here's an image that will hopefully explain things better. For demonstration purposes it only uses RGB, because Alpha is hard to show.



So let's apply this method to our example image


Original


Median Cut down to 16 colours

I've found that when we're using two colour channels, 16 total colours in the palette gives a good effect. But consider that we're also using an alpha channel here, which counts toward the distance between colours, so you might not need that many colours if you're not concerned about transparency. Because my median cut can use an arbitrary number of colours, rather than the wiki-described power of 2, we can fine-tune it however we need.


16 down to 2 colours

The way we chose a colour from each box was to simply average all the values, but that's not the only way to do it. You may have noticed that our result isn't as luminescent as the original. Depending on what you're after, you may want to favour colours in the upper ranges over the lower ranges by weighting the range determination. Alternatively, you could select the 1, 2, or maybe 3 most luminescent colours in the image and add them to the palette: if you want 16 colours, generate the palette with 13 colours and manually add your luminescent highlight colours.


Palette with the 3 highest luminescent colours

Now that looks pretty good, but it's splotchy. There are big patches of single colours, like blobbing in a Paradox game. What we need to do now is smooth it out.

Dithering

I don't need to tell you what dithering is, because you're already a pixel-artist. So, we can just apply a dithering algorithm, of which there's a lot, to get a smoother look.

I implemented Floyd–Steinberg dithering, which is simple to do; there were no surprise gotchas. The effect is pretty profound though. Here's our example again:


Original

Then we reduced the palette to 16 colours


Values mapped to a 16 colour palette

And now dithered when mapped:


Dithered final result
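
For reference, a sketch of Floyd–Steinberg against a fixed palette looks like this. nearest() is a hypothetical helper that returns the closest palette entry (e.g. by Euclidean distance), and Pixel is the RGBA struct from the median-cut sketch:

Code:
void floydSteinberg(std::vector<Pixel>& img, int width, int height,
                    const std::vector<Pixel>& palette) {
    auto at = [&](int x, int y) -> Pixel& { return img[y * width + x]; };
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Pixel old = at(x, y);
            Pixel chosen = nearest(old, palette);   // nearest(): hypothetical palette lookup
            at(x, y) = chosen;
            // Push the quantisation error onto the unvisited neighbours
            // with the classic 7/16, 3/16, 5/16, 1/16 weights.
            for (int ch = 0; ch < 4; ++ch) {
                float err = old.c[ch] - chosen.c[ch];
                if (x + 1 < width)                   at(x + 1, y).c[ch]     += err * 7.0f / 16.0f;
                if (x > 0 && y + 1 < height)         at(x - 1, y + 1).c[ch] += err * 3.0f / 16.0f;
                if (y + 1 < height)                  at(x, y + 1).c[ch]     += err * 5.0f / 16.0f;
                if (x + 1 < width && y + 1 < height) at(x + 1, y + 1).c[ch] += err * 1.0f / 16.0f;
            }
        }
    }
}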

I hope that was informative and interesting. If I can explain anything better, please let me know and I'll do my best.

Daid
« Reply #3 on: August 14, 2019, 11:49:12 PM »

Nicely done! I'll most likely steal some ideas at some point.

Note:
Quote
The noise function gives us a number. If we're asking Perlin Noise, we know it'll be -1 to 1
That's not true:
http://digitalfreepen.com/2017/06/20/range-perlin-noise.html

Note that adding octaves+persistence changes this range.
