1  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: October 06, 2015, 06:47:31 am
This is a serious suggestion: I think you should publish a book

Biggest problem with actually doing this is that I forget 90% of everything that's happened when a game is done. Even going through the devlog leaves me wondering wtf. The current level of posts is about all I can manage while also working and not completely ignoring my family. I appreciate the encouragement though. Maybe there's some way to put this stuff together in a more professional way that would get closer to a proper book. Hard to reproduce the animated gifs though.


Animating the player actually getting on and off the ship is something I thought would be cool, but didn't expect to have the time or energy to actually implement it. The arm rig in the game is tuned very specifically for realtime IK and using it for straight animation would be difficult. My backup plan was to show a black screen and just play the sounds of climbing. Ditto for climbing the ropes on the ship.

Yesterday I had the idea that it might be reasonable to animate by taking the skin mesh from the complex arm rig, duplicating it for left/right, and re-rigging that to a much simpler HumanIK rig in Maya. For in-game animations, I can hide the IK hand and show the two simpler ones instead. Once that was sorted it was really fast to animate. Mercifully I didn't even need to set the vertex weighting on the new rig - the default crappy weights ended up being good enough for how the hands are seen.

Boarding and disembarking (ship is an empty test version)

The in-game interface for climbing works just like doors. The hand reaches for knobs - pressing space when close enough activates the non-interactive climb animation.

I'm not totally happy with the animation itself but it's good enough. The real comedy is in the Maya scene, where you can see the heroic cheating I'm doing with the arms.

2  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: October 04, 2015, 08:19:36 pm
MPL might be a good license? Kind of splits the difference between MIT and GPL/LGPL -- you have to share your changes, but only for the original source files you modified; it doesn't virally affect the rest of someone's project.

Ah thanks. That does look pretty good. What I really want to prevent is someone from zipping it up, dumping it on the asset store, and charging $X for it. It might not be worth worrying about that though since even the best license won't stop unscrupulous people. Maybe I'll just add a "Any product built on this code must prominently display a link to this free version" note to the eventual github page.

Though seeing it, it's kinda fun to look at the first post now :D
The lower bound for finishing this game is around 3 months but realistically I think it'll take me around half a year.

Oh man :D

This project is way, way bigger than I expected, and it's not even that big by indie game standards. The combination of using a new engine, re-learning Maya, building pipelines from scratch, adding custom features, modeling the ship and characters, etc, has all taken ages. The gameplay is really simple which lured me into thinking the whole thing would be easy. Internally I like to blame Maya but the truth is that I was in over my head from the start. Some day I may go over all the stuff that's slowed me down in detail but it's probably not that interesting.

In Greyscale

As part of testing the lightmapping stuff, I used a different post filter to render in full greyscale instead of 1-bit.

Testing the lightmapping in greyscale

Doesn't look half bad. Way easier to see the face at least. I'm irreversibly committed to 1-bit at this point though so this is mostly just a "Huh.."
3  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: October 03, 2015, 06:06:21 pm
Some more technical fire hosing.

Custom Lightmapping in Unity

I got fed up with Unity's new lighting system and spent the last week or so writing my own custom lightmapping solution. Unity 5's Enlighten can give amazing results if you're going for a next-gen look but it's completely unsuited for what I want. And in the past 8 months of betas and release, I've never once gotten a decent lightmap of the Obra Dinn out of it.

Thankfully, Unity is a flexible engine with especially great editor integration, and rolling my own custom direct-lighting lightmapper wasn't too onerous. The basic idea is to take what the dynamic shadowcasting lights generate, pretty it up a little bit, and bake it into the lightmaps. This kind of direct-only lighting works fine for 1-bit so there's no need for bounces, emission, AO, final gather, etc - all the stuff that makes Unity 5 light baking take hours and hours, eat gigs of memory, and crash my computer in ways I've never seen.

Lightmap generated with custom GPU-accelerated "lightcaster"

In developing this system my two criteria were that it has to be fast and it has to look decent. "Fast" means GPU-only, which introduces some interesting constraints on the algorithms. "Decent" means soft shadowing.

The Process

Here are the basic steps of the process I ended up with. I call it lightcasting to emphasize that it's direct light only.

Step 1: UV2 Unwrap
Unwrap each object into the second UV channel. Unity's built-in model import unwrapping isn't bad but I found Maya's auto-UV generation a little better so that's what I'm using.

Torus and its unwrapped UV2 channel

Step 2: UV2 Scale
For every object, determine the texel scale for its lightmap. This scale is based on the area of the triangles in the unwrapped UV2 channel, the area of the triangles in the final transformed model, and the desired texel density of the lightmap. Create a blank unlit local lightmap at the right size for each object.
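For the curious, here's roughly what that area-based sizing works out to in C# (the helper and parameter names are mine, not the project's actual code):

using UnityEngine;

public static class LightmapSizing
{
    // Estimate a square lightmap size for one object from its UV2 unwrap area,
    // its world-space surface area, and a desired texel density.
    public static int EstimateLightmapSize(Mesh mesh, Transform transform, float texelsPerWorldUnit)
    {
        Vector2[] uv2 = mesh.uv2;
        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.triangles;

        float uvArea = 0f, worldArea = 0f;
        for (int i = 0; i < tris.Length; i += 3)
        {
            // Triangle area in the UV2 unwrap ([0,1] x [0,1] space).
            Vector2 a = uv2[tris[i]], b = uv2[tris[i + 1]], c = uv2[tris[i + 2]];
            uvArea += 0.5f * Mathf.Abs((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));

            // Same triangle's area after the object's transform is applied.
            Vector3 wa = transform.TransformPoint(verts[tris[i]]);
            Vector3 wb = transform.TransformPoint(verts[tris[i + 1]]);
            Vector3 wc = transform.TransformPoint(verts[tris[i + 2]]);
            worldArea += 0.5f * Vector3.Cross(wb - wa, wc - wa).magnitude;
        }

        // World units covered per UV unit, times texels per world unit, gives the map size.
        float worldUnitsPerUV = Mathf.Sqrt(worldArea / Mathf.Max(uvArea, 1e-6f));
        int size = Mathf.NextPowerOfTwo(Mathf.CeilToInt(worldUnitsPerUV * texelsPerWorldUnit));
        return Mathf.Clamp(size, 16, 2048);
    }
}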


Step 3: Build UV2->Pos/Norm Maps
Key step: For each object, generate a mapping that will transform UV2 coordinates to a local position and normal at that point on the model. The overall lightcasting technique is based on realtime shadowmapping, which requires the world position and normal for each point on screen to compare against the shadowmap depth. Because lightmaps are rendered into a UV2-mapped texture and not the screen, we need a way to convert those UV2 coordinates to position/normal. The UV2->position and UV2->normal maps can be rendered into a target with a simple shader.

Mappings from UV2 to local position & normal.
These are used to translate UV2 lightmap coordinates to worldspace position/normal.

Step 4: Render Light View Depths
For each light: Build a frustum and render the depth buffer at some suitable resolution - same as realtime shadowmapping. Spot and area lights are easy but point lights require 6 frustums (to form a ~cubemap) and special handling to average all 6 results together and avoid artifacts on the edges. Also, large area lights can be broken up into multiple smaller ones to keep the depth buffer at a manageable size.

A typical spotlight depth view
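A rough sketch of the per-light depth render in Unity C# terms; "Hidden/LightcastDepth" is a stand-in name for a simple depth-writing replacement shader, not something from the actual project:

using UnityEngine;

public static class LightDepthRender
{
    // Renders scene depth as seen from a spot light into a temporary float target.
    public static RenderTexture RenderSpotDepth(Light spot, int resolution)
    {
        var rt = RenderTexture.GetTemporary(resolution, resolution, 24, RenderTextureFormat.RFloat);

        var go = new GameObject("LightDepthCamera");
        var cam = go.AddComponent<Camera>();
        cam.enabled = false;                              // render on demand only
        cam.transform.position = spot.transform.position;
        cam.transform.rotation = spot.transform.rotation;
        cam.fieldOfView = spot.spotAngle;                 // frustum matches the light cone
        cam.aspect = 1f;
        cam.nearClipPlane = 0.05f;
        cam.farClipPlane = spot.range;
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = Color.white;                // "max depth" where nothing is drawn
        cam.targetTexture = rt;

        // Replacement shader that just writes out depth (placeholder name).
        cam.RenderWithShader(Shader.Find("Hidden/LightcastDepth"), "");

        Object.DestroyImmediate(go);
        return rt;
    }
}

Point lights would run this six times with 90-degree frustums, and big area lights get split into several smaller casters, as mentioned above.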

Step 5: Render Light/Shadow Into Local Lightmaps
For each light: For each object within the light's frustum: Use the UV2->pos/norm maps to perform standard lighting and shadowmapping into the local lightmap. Most forms of soft-shadowing are handled at this stage. Light cookies are easy to support here - something that Unity's lightmap baking has never supported for some reason.

Lighting and shadows rendered into a local lightmap

Step 6: Prune Unaffected Receivers
Determine which objects were actually lit by something. That means checking each object's local lightmap and searching for any pixel > 0. This step is a little tricky because we want everything to run on the GPU where searching isn't efficient. The solution is to repeatedly shrink each local lightmap down into a temporary buffer. At each shrinking step, sample the nearby area to find the maximum pixel value. So the brightest pixel in a 4x4 area will become the new value in a 1/4 size reduced buffer. Do that enough times until your buffer is 1x1, then write that value into another buffer that's collecting the results of all objects. This all happens on the GPU. Once all objects have been collected, you can copy the collection buffer to the CPU and read out the max pixel values for each object to determine which ones were hit by a light.

Reducing local lightmap to a single pixel containing the max brightness
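The shrink loop is simple to drive from C#; something along these lines, where maxReduceMaterial stands in for a little shader that outputs the max of the block it samples (names are illustrative, not the project's code):

using UnityEngine;

public static class LightmapPruning
{
    // Repeatedly downsample a local lightmap, keeping the brightest texel of each
    // 4x4 block, until only one pixel is left.
    public static RenderTexture ReduceToMax(RenderTexture source, Material maxReduceMaterial)
    {
        RenderTexture current = source;
        while (current.width > 1 || current.height > 1)
        {
            int w = Mathf.Max(1, current.width / 4);
            int h = Mathf.Max(1, current.height / 4);
            var smaller = RenderTexture.GetTemporary(w, h, 0, current.format);

            Graphics.Blit(current, smaller, maxReduceMaterial);   // max-of-block shader

            if (current != source)
                RenderTexture.ReleaseTemporary(current);
            current = smaller;
        }
        return current;   // 1x1: the brightest pixel of the whole local lightmap
    }
}

Each object's 1x1 result then gets copied into the shared collection buffer so there's only a single GPU-to-CPU readback per bake.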

Step 7: Atlas Local Lightmaps Into Global Lightmap
Atlas all the (lit) local lightmaps into multiple pages of a global lightmap. Unity has a basic built-in texture atlasing feature, but if you want optimal support for multiple pages you need to use something else. I ended up porting a really good C++ solution to C#.

Local object lightmaps atlased into a single global lightmap
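For reference, handing a packed result back to a renderer uses the standard Unity properties mentioned further down; roughly this, where rect is the object's packed rectangle in its atlas page, normalized 0-1:

using UnityEngine;

public static class AtlasAssignment
{
    // Point the renderer at its atlas page and remap its UV2s into its packed rect.
    public static void Apply(Renderer renderer, int atlasPage, Rect rect)
    {
        renderer.lightmapIndex = atlasPage;
        // xy = scale, zw = offset: lightmapUV = uv2 * scale + offset
        renderer.lightmapScaleOffset = new Vector4(rect.width, rect.height, rect.x, rect.y);
        // The finished atlas pages themselves go into LightmapSettings.lightmaps
        // (at least in the collapsed "global" mode described later; the layered mode
        // uses its own materials and lookups instead).
    }
}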

Step 8: Dilate Global Lightmap
Dilate the lightmap to hide any seams.

Dilating the final lightmap atlas

Soft Shadowing

Lightcasting is based on realtime shadowmapping, but offline baking lets me use some beefy soft-shadowing techniques that'd be harder to justify for realtime performance. I implemented a few different techniques, each with its own pluses and minuses.


For reference, the scene with no shadows and with Unity's dynamic realtime lighting.

Percentage Closer Filtering (PCF)

A standard technique for softening shadowmap sampling. This is basically a uniform blur on the entire shadow.

GOOD Simple, looks ok, not many artifacts
BAD Softens everything equally and doesn't simulate penumbras

Variance Shadow Maps (VSM)

Another standard technique that allows blurring the shadow map in a separate, very fast pass. If you need something simple and fast that looks good for realtime shadowing, VSM is one of the better solutions.

GOOD Simple, looks good, simulates a poor man's penumbra with perspective frustums
BAD Weak penumbras and bad artifacts in complex scenes at higher blur settings

Percentage Closer Soft Shadowing (PCSS)

An advanced technique that attempts to estimate more accurate penumbras. It's a little heavier processing-wise and the blocker estimation breaks down for complex scenes but for simple scenes it looks ok most of the time.

GOOD Decent penumbra simulation
BAD Artifacts where shadows overlap, hard penumbra edges when reducing artifacts


Jittering

This is by far the best-looking technique, and the closest to modeling shadow penumbras accurately. It's also pretty simple: render into the lightmap multiple times, moving/rotating the light slightly for each pass. The big downside is that it's slow - each jitter pass is nearly equivalent to adding another light to the bake process. There's a rough sketch of the loop after the list below.

GOOD Looks great and simulates penumbras accurately
BAD Can be very slow. Long shadows require many, many jitter passes to avoid banding
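A minimal sketch of that jitter loop, with the Step 5 render treated as a black box (the delegate below is a placeholder, not real API):

using UnityEngine;

public static class JitteredBake
{
    // Accumulate many slightly-moved copies of the light into the lightmap; the sum
    // of all the slightly-different hard shadows forms the penumbra.
    public static void BakeJittered(Light light, RenderTexture lightmap, int passes,
        float jitterRadius, System.Action<Light, RenderTexture, float> renderLightIntoLightmap)
    {
        Vector3 basePos = light.transform.position;
        Quaternion baseRot = light.transform.rotation;

        for (int i = 0; i < passes; i++)
        {
            // Nudge the light inside a small sphere and wobble its aim a touch.
            light.transform.position = basePos + Random.insideUnitSphere * jitterRadius;
            light.transform.rotation = baseRot * Quaternion.Euler(Random.insideUnitSphere * 0.5f);

            // Each pass contributes 1/passes of the final intensity (this is Step 5 above).
            renderLightIntoLightmap(light, lightmap, 1f / passes);
        }

        light.transform.position = basePos;
        light.transform.rotation = baseRot;
    }
}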

So, that's a lot of choices. For now I'm keeping them all as optional settings. Different techniques look best in different circumstances. With a distant sky light for example, PCF works best with its general blur and reduced artifacts.

Custom = More

Since this is now a custom system, it's possible to go beyond simple lightmapping. The main extra features I wanted were the ability to blend between multiple lightmap snapshots (for animating moving lights), and multiple baked lightmap layers, where each layer can adjust the color/intensity of its contribution at runtime.

Animated Snapshots

If I can be happy with monochromatic lighting per lightmap (I can), then a simple form of multiple lightmap snapshots is possible by storing each frame (up to 4) in a different color channel.
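At runtime the blend is just a per-channel weight pushed to the shader. A minimal sketch of the driving side (the _SnapshotWeights property name is made up; the shader would dot() the lightmap sample with it):

using UnityEngine;

// Crossfades between up to 4 monochrome snapshots stored in the R/G/B/A channels
// of a single lightmap texture.
public class SnapshotBlender : MonoBehaviour
{
    public Material target;
    public int snapshotCount = 4;
    public float cycleSeconds = 4f;

    void Update()
    {
        // Position within the looping animation, measured in snapshots.
        float t = (Time.time / cycleSeconds * snapshotCount) % snapshotCount;
        int a = Mathf.FloorToInt(t) % snapshotCount;
        int b = (a + 1) % snapshotCount;
        float blend = t - Mathf.Floor(t);

        // Weight the two neighboring snapshots; all other channels stay at zero.
        var weights = Vector4.zero;
        weights[a] = 1f - blend;
        weights[b] = blend;
        target.SetVector("_SnapshotWeights", weights);
    }
}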

Blending between snapshots baked with hard shadows

Blending between snapshots baked with soft shadows

All snapshots/channels

This works well enough with subtle movement and very soft shadows. I could also use it for coarser state changes, say to lightmap a room with the door closed and the door open and to blend between them when the door state changes. With standard global lightmapping this would give me 4 frames of animation for the entire level. For more localized animation and more frames I created a layers system.


Layers

A layer is a collection of lights and the lightmap they generate. Each layer controls its own parameters for format, texel density, shadowing technique, etc.

Lightcaster with multiple layers

It's possible for any object to be lit by multiple layers and thus need to perform multiple lightmap lookups during in-game rendering. Compared to actual realtime dynamic lighting though, a few extra texture lookups per object is cheap.

The main complexity from adding multiple layers is in dealing with the newly-required material variations/assets/shaders during the baking process. Critically, you want to minimize the number of materials in your scene. This is important for draw-call performance and especially Unity's built-in static batching system. 

Also, now that an object may reference more than one lightmap the built-in Renderer lightmapping properties are not enough. By default Unity provides Renderer.lightmapScaleOffset to specify how to transform the local UV2 coordinates into the larger lightmap atlas. This is a great feature and lets you lightmap instances differently while sharing the same UV2 channels. Unfortunately, there's only a single lightmapScaleOffset per Renderer; if an object is lit by more than one layer you're SOL. To further complicate things, static batching will bake lightmapScaleOffset into the UV2 channel, so that shader value becomes useless after batching. These are all esoteric details but the point is that it complicates support for multiple lightmaps per object.

During baking the steps above are performed for each layer in turn, then there are additional steps to combine them all:

Step 9: Create Receiver Groups
Group every lit object based on which materials it currently has applied and which layers affect it. These are called receiver groups. Generate a new material for each receiver group that combines the group's layer lightmaps with the old material that was applied before.

Step 10: Encode Multiple Lightmap Coordinates per Receiver
Hard part: Encode the multiple lightmap coordinates for each object. This is done by building a unique uv table texture for each receiver group that encodes the lightmap scale+offset for all objects in the group. That uv table texture gets added to each receiver group material and each object's index into that table is encoded into lightmapScaleOffset in a way that it can be recovered both before and after static batching.
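The uv table itself is tiny - one texel per object in the group. A rough sketch of building it (the tricky part, hiding each object's row index in lightmapScaleOffset so it survives static batching, is deliberately not shown here):

using UnityEngine;

public static class UvTableBuilder
{
    // Texel i holds the lightmap scale+offset (xy = scale, zw = offset) of object i
    // in one receiver group.
    public static Texture2D Build(Vector4[] scaleOffsets)
    {
        var table = new Texture2D(scaleOffsets.Length, 1, TextureFormat.RGBAFloat, false, true);
        table.filterMode = FilterMode.Point;   // rows must not bleed into each other

        var pixels = new Color[scaleOffsets.Length];
        for (int i = 0; i < scaleOffsets.Length; i++)
            pixels[i] = new Color(scaleOffsets[i].x, scaleOffsets[i].y,
                                  scaleOffsets[i].z, scaleOffsets[i].w);

        table.SetPixels(pixels);
        table.Apply(false, false);
        return table;
    }
}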

Fading the green pointlight layer while leaving the spotlight unaffected

Stay Global

The layers are useful for the "present day" part of the game where stuff actually moves. On the other hand, everything is frozen still for the flashbacks so the layers/receiver groups/animation/etc is all overkill. In those cases, the system supports collapsing all the layers to a single set of lightmaps that can be slotted into Unity's built-in lightmapping table. No additional materials or tricks are required for that to work.

The layer data is still kept around even in global mode though so one nice thing is that individual layers can be re-baked, then re-combined into the global lightmaps without having to re-bake the entire level.


Performance

Except for jittering, all the shadowing techniques are blazingly fast when run offline. Going through all the steps can take a while though, especially with lots of lights and receivers in the scene. On my relatively fast iMac, the small test scene bakes in about 3 seconds. A huge scene like the Obra Dinn takes around 30 seconds with a single large area light.

Baking the lightmaps in the small test scene

Open Source

I'd like to eventually release this code as open source. It still needs some more production testing, and I want to figure out a license that prevents people from trying to just resell it on the asset store, but it's modular enough that someone else may get some use out of it.
4  Feedback / DevLogs / Re: Manifold Garden (previously known as Relativity) on: September 28, 2015, 09:38:04 pm
Just dropping in to say that A) looking good, as always, and B) "Manifold Garden" is a great, great name. A thorough homerun.
5  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: September 23, 2015, 08:36:01 am
interesting stuff, but I don't think a post effect would do the trick. the geometry would need to be manipulated to get a really consistent look, otherwise it might just look like you're drunk? I'm sure you will know best how it looks, this is just my imagination.

You're right, it definitely has a wobbly drunk feel to it. In the flashback context it feels just about right to me.

Yeah, for the flashbacks, a way to display the remaining time could be to slowly blur the scene, and when it's too blurry it'll transition away. One thing I was thinking is directional blur, might make a (imo) nicer transition than a uniform blur. Maybe even motion blur? P:

The blur isn't _quite_ smooth enough for that to work. It's more like big discrete steps. As marcfgx mentioned though I could pull the depth-of-field distance in for a smoother effect.

I really appreciate the novelty of this "1-bit-blur" idea, and its crafty execution, so tons of kudos, but, result-wise, I feel that this feature seriously hurts that amazingly perfect superposition of "typical visual style of old '80s Macintosh games" and "3D", which I loved so much.  Or, said differently, the original crisp tidiness of this unique and expressive rendering style.

Yeah I definitely understand this. I'm still mostly on the fence about it, but I did find two functionally useful places to put the blur: For depth-blurring in flashbacks so they feel markedly foreign, and for blurring objects that have appeared/moved in the current time. The best example being the doors that open after visiting a flashback - blurring them is a really nice indicator that they've inherited some property from the past (being open). That works best if it's understood that "blurry" = "past".

In other news, I've been working on another big technical task and I'll try to get a post up about it soon.
6  Feedback / DevLogs / Re: Creepy Castle - Sideview Retro Puzzle RPG (Greenlit & Kickstarted) on: September 04, 2015, 06:33:51 am
Since the last time I made a palette swap shader gif there have been some new palettes added:

7  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: August 29, 2015, 08:53:26 am
Screen Warping - 1-bit Edition

In modeling the ship and testing things out in-game I got the distinct and unmistakable feeling that "this could use some screen-warping post effects to roughen up the low-poly straight lines everywhere." I'm sure you know the feeling.

I have been down a dark path. Let me take you there.

Starting with this innocent screen:

Testing screen, untouched

The first experiment is with Photoshop's "Wave" distortion filter:

Wobble things around a bit with Photoshop's "Wave" distortion filter

B+. Warps the lines around to make them look less polygonal. The problem is that Photoshop's algorithm doesn't work in 1-bit, so you need to apply it, then re-threshold or re-dither the result. Not a huge deal, but it does double some pixels and erase others. That hurts the legibility so what if there was a way to just shift pixels around and not affect their color at all?

Next try, just offset each pixel by a random amount, using some custom Haxe testing code:

Each pixel shifted by a random amount

Ok, too much. Let's offset large blocks in a random direction instead, to reduce the frequency:

Blocks of pixels shifted by a random amount

Not bad. In motion though, there are large stretches of lost or doubled pixels, which destroy the nice single-pixel-wide lines and read more like a post effect than the soft warping I want. But altogether a decent solution, and solved very quickly... Too quickly. Let's identify an arbitrary problem with this randomized approach that we can spend days and days trying to solve.

The Problem

With the low-res visual style here, and just 1-bit to work with, each pixel is important. The "wireframe" lines make lost or doubled pixels especially ugly. That leads to a rule: If we move one pixel, it shouldn't ever overlap another pixel or leave behind a hole.

... which sounds pretty dumb because of course a pixel can only move in 4 directions (ignore diagonals), and each direction already has a pixel there. The key is that the first pixel move has to go offscreen, leaving a hole - then another pixel can move into that spot, creating another hole - etc. Wind your way around the entire screen this way and you can move each pixel one spot and never overlap or double another pixel. How?

Using Mazes

Build a special 2D maze (with no branches and no dead-ends) where the solution visits each square once, then while walking the solution pick up the pixel in front of you and put it down where you're standing. This shifts each pixel one space along a winding path without overlapping or doubling. If the maze is sufficiently winding, then the direction each pixel shifts is nearly random. In pictures:

To start, build a maze using the trivial depth-first algorithm, with a bias towards changing directions at every possible step:

Simple depth-first maze. Note the branches and dead ends
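In code the generator is the textbook recursive backtracker, just with the direction-change bias baked in. A C# sketch of the idea (not the actual Haxe testing code):

using System.Collections.Generic;

public static class MazeGen
{
    // Depth-first ("recursive backtracker") maze with a bias towards changing direction
    // at every step so the result winds a lot. walls[x, y, d] == true means the wall on
    // side d of cell (x, y) is still up (d: 0 = N, 1 = E, 2 = S, 3 = W).
    struct Cell { public int x, y, dir; }

    static readonly int[] dx = { 0, 1, 0, -1 };
    static readonly int[] dy = { 1, 0, -1, 0 };

    public static bool[,,] Generate(int w, int h, System.Random rng)
    {
        var walls = new bool[w, h, 4];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                for (int d = 0; d < 4; d++)
                    walls[x, y, d] = true;

        var visited = new bool[w, h];
        var stack = new Stack<Cell>();
        stack.Push(new Cell { x = 0, y = 0, dir = -1 });
        visited[0, 0] = true;

        while (stack.Count > 0)
        {
            Cell c = stack.Peek();

            // Unvisited neighbors of the current cell.
            var options = new List<int>();
            for (int d = 0; d < 4; d++)
            {
                int nx = c.x + dx[d], ny = c.y + dy[d];
                if (nx >= 0 && ny >= 0 && nx < w && ny < h && !visited[nx, ny])
                    options.Add(d);
            }
            if (options.Count == 0) { stack.Pop(); continue; }   // dead end, back up

            // The bias: if we can turn, usually do, instead of continuing straight ahead.
            var turns = options.FindAll(d => d != c.dir);
            int dirChosen = (turns.Count > 0 && rng.NextDouble() < 0.9)
                ? turns[rng.Next(turns.Count)]
                : options[rng.Next(options.Count)];

            // Knock down the wall between the two cells (from both sides) and move on.
            int mx = c.x + dx[dirChosen], my = c.y + dy[dirChosen];
            walls[c.x, c.y, dirChosen] = false;
            walls[mx, my, (dirChosen + 2) % 4] = false;
            visited[mx, my] = true;
            stack.Push(new Cell { x = mx, y = my, dir = dirChosen });
        }
        return walls;
    }
}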

Next, this needs to be converted to a unicursal maze - a single path through without any branches or dead-ends. Thanks to Thomas Hooper for the technique for this - it's pretty cool. To convert a normal maze into a unicursal one, first get a solution that visits every square in the maze. Any solving technique will work but the easiest is to just follow the left-hand wall as you walk:

Solving the maze by sticking to the left-hand wall

Next, take that solution and use it to insert a new set of walls between the existing ones:

Solution collapsed to a single line, then added as a new set of walls in-between the existing walls.

The new maze (double the size of the original one) can be solved by walking straight through without any decisions:

Unicursal maze - single solution that visits each square once

So, making a unicursal maze the size of the screen and shifting each pixel one spot along the solution path gives us this:

Shifting each pixel using offsets from a full-screen unicursal maze solution

Ok, that looks cool. Compare it to the random offset image above and it maintains a bit more order. Lines are a consistent width and there are no breaks or extra-thickened areas. But again it's too high-frequency to be considered a gentle warping. We need to reduce the frequency. I'll come back to this image below though.
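The shifting step itself is trivial once the solution path exists. A CPU sketch of the idea (the in-game version is a shader reading per-pixel offsets):

using UnityEngine;

public static class MazeShift
{
    // "path" is the unicursal maze solution as flat pixel indices (y * width + x);
    // it visits every pixel of the image exactly once. Each pixel moves one step back
    // along the path, so nothing is ever doubled or lost; here the pixel at the start
    // of the path wraps around to the end instead of falling offscreen.
    public static Color32[] ShiftAlongPath(Color32[] pixels, int[] path)
    {
        var shifted = new Color32[pixels.Length];
        for (int i = 0; i < path.Length; i++)
            shifted[path[i]] = pixels[path[(i + 1) % path.Length]];
        return shifted;
    }
}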

Instead of generating a full 640x360 maze and applying it directly to the screen, generate a much smaller maze, scale it up, then apply it. This gets its own section...

Maze Scaling

The trick to scaling a (unicursal) maze in this case is that we need to maintain the property that the solution traverses the entire maze. For the mazes above, there's just one solution track so it's easy. When we scale the maze up, there are multiple independent solution tracks - each one must trace uninterrupted through to the end. I now realize that this is hard to describe in words. More pictures are needed.

If we take the unicursal maze above and represent its solution as a "flow":

A flow with one track that traverses the entire maze (starting in the top left)

Scaling that flow up arbitrarily yields this, which breaks our "must trace uninterrupted" rule at the turns:

Scaled flow with broken turns

Turns have to be handled specially during the scale:

Scaled flow with fixed turns - each track can be traced from start to finish

Scaled x5 and animated:

Each track runs through the maze independently. In the end every pixel is visited exactly once

Now we can generate a small maze, scale it up, and use that to shift the screen pixels at a lower frequency:

Small maze scaled up and applied to the screen

Hmm, ok. Every pixel is accounted for. That's good. But there are disjoints where the tracks pass by each other in the opposite direction. This ends up shifting adjoining pixels by 2 spots, which makes some lines appear to break. And with such a low resolution maze the structure is faintly evident. Luckily, we have multiple tracks and can apply a sine wave to the track offsets. Applied to the x5 animated gif above, that would mean shifting the white track by 0 spots, the green track by 1, the cyan by 0, the purple by -1 and the blue track by 0, roughly.

Tracks offset in a sine wave to reduce disjoints

Same effect, exaggerated and applied to a test grid

Obligatory Lena

Multiple mazes at differing resolutions can be stacked. Still no pixels are lost or doubled.

Back in the game, just one low-frequency maze

In motion (rounding errors in my shader are eating some pixels)

Ok! That looks good... Well, it's what I wanted and it holds together pretty well in-game. There's a nice warping which definitely makes it feel more ropey than straight low-poly shapes. Unfortunately it's a bit too distracting. Maybe it'll only be applied in the flashbacks. Maybe not. No surprise then that this rabbit hole has only dirt at the end.


There was something cool in the middle there.

1-Bit Screen Blur

That intermediate, high-frequency maze-offset thing looks a lot like a blur - using just offsets, no blending:

If you don't think this looks cool then go back in time and unread this whole post

This 1-bit blur is something I never wanted or needed but hey let's see what we can do with it anyways.

Just Blur

Here the high-frequency maze is layered with lower-frequency mazes to add irregularity. 
This maintains legibility much better than randomized offsets would

Another spot. This can be globally faded in and out, so may be good for a transition effect or something

Depth of Field

Scaling the effect with distance

Scaling between low and high-frequency mazes based on distance

Focal Point

Scaling the effect based on world distance from a hand-placed focal point

Same thing, different angle

These hold up surprisingly well in motion. I can't prove that though b/c the animated gifs are too big.

That last one might work especially well to highlight the killer/killed in each flashback. Or not. After all this work I'm still not sure where or if it'll end up in the game. I really just wanted an excuse to post those maze-solving animated gifs.
8  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: July 11, 2015, 08:29:16 pm
Weird question, I've been noticing that all the rendered lines look really nice and easy to see. Is there anything you specifically did to make them look like that, or did they just turn out nice on their own?

Probably the only particular trick is ensuring that they're always 1 pixel wide, which requires some special handling in the post-processing shader. There's a post here (and some afterwards) about how the lines are rendered.
9  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: July 11, 2015, 08:05:10 am
Back to Ship Modeling

I've finally got the entire ship roughed in - all rooms, areas, etc for the top, gun, orlop, and cargo decks. Fitting everything in was a bitch. They didn't design these ships for first person adventure games unfortunately.

I've had to add buffer space here and there to give the player enough room to move around comfortably. And because the ship has to properly articulate for the different flashbacks I couldn't cheat and leave out a critical capstan or threading hole for the anchor ropes for instance.

At the moment I'm going through and decorating each deck. One of the key ways to identify people will be to recognize which rooms they spend most of their time in. So, for example, the carpenter's room has to look the part. This bit is pretty fun to work on.

An in-progress Maya shot of the orlop deck. It'll get a lot busier before it's done:

Orlop deck

Also, this is what OCD looks like:

Old carpenter's walk - off-center grating and passageway width varies along the ship

New carpenter's walk - more centered grating and consistent passageway width

That took ~4 hours to "fix"...
10  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: July 10, 2015, 06:39:35 am
[...] You could make marking something in the Muster Roll permanent, too, since you can't exactly erase ink. After you decide on the fate of someone and mark it in the ledger, you get some sort of feedback from the family themselves - a line of dialogue or something small like that. Your role as investigator and arbiter gains some narrative weight, and it's harder to play the game "wrong".

There's a cool idea here. This game is probably too fantasy-oriented to work exactly like this - I'm not planning on injecting any pathos for the families, or including them at all really. But the idea that the fate entries have to be "finalized" before getting feedback is interesting.

Maybe I could have a "finalize these fates" button on each page, or a global "finalize all current fates" that can be used X times. Then you can "spend" those finalizations to find out if you're correct. Players could game this a little bit into something like Mastermind, but not much. Or they could wait until the end and just hit it once and get some special reward if they're all correct.

I think this is at least better than instantly getting feedback on page 1.

okay this is fantastic! even though when i looked at the sky with the white pepper noise i felt like my screen was going to explode, everything is so nice and unique!

I promise to fix the sky :D
11  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: June 25, 2015, 06:41:04 am
80 > 60

Finished sculpting and painting faces for 60 passengers and crew. 

From a distance

There are supposed to be 80 people in the game - I'm cutting the remaining 20. If I find an easy way to add them back in later I may go for it. 60 is way too few for sailing a boat of this size (120 would be a realistic minimum), but I just don't think I can build out that much content in a reasonable time. The character models themselves aren't actually that hard; it's integrating the characters into the story, adding them to flashbacks, and making sure the player has enough information about each one to deduce their identity. Also there are some UI issues with enabling the player to efficiently sort through that many characters; even 60 will be a bitch.

Feature Randomizer

At my fastest, I was able to sculpt+paint 6 faces in one day. That rate was mostly thanks to a simple feature randomizer tool I built from blend targets. I could either select the features manually or hit a button to generate randomized facial geometry. Most combinations were useless but after a few clicks something inspirational would come up that I could tweak and paint fairly quickly.

Making heads from randomized blend target sets. Textures are hand-painted after finding something good.

Reference and Consistency

Most of the faces are just sketched from my imagination but I did use references for some. Referenced faces took a lot longer, and the detail gives them a slightly different look. Overall I wasn't able to stay very consistent with my technique. Some faces are realistically detailed and some are more painterly. Luckily, in-game this comes across as making the characters look unique (a useful thing for the mechanics) as opposed to out of place. Another benefit of outputting to 1-bit 640x360.

Russian sailor reference and hand-painted result

Two different characters. Left is off-the-cuff, right is from reference.

Accessories Etc

The next step for the characters is to model and attach all their clothes and accessories and add unique scars/tattoos/etc. That process is pretty straightforward so I may take a detour and assemble some flashbacks first. Would be nice to know if 60 people is actually enough, and how hard it is to arrange that many characters.
12  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: June 25, 2015, 02:25:00 am
Did you find a solution to the magically open doors?
You could do something like this: when the flashback ends, the player stays in the position he was in during the flashback. That way he can be behind doors that were closed, and can undo a latch or just open them from the inside.

I don't have a good solution for this yet. Restoring the player's last flashback position is potentially way more complicated than it looks in the dev build. The configuration of the ship (cargo, crew, broken stuff) will change a lot more between flashbacks so some places you could stand in a flashback won't translate to the current time.

More than that though, I kinda like the idea that viewing a flashback is completely passive and you're always reset afterwards. I'd really like a better mechanic than "magically opening doors" but part of the problem is that I only plan on having 3 locked door gates in the whole game. So either I integrate a proper mechanic and use it a lot more or I punt on the whole thing and just swing the doors open when needed.
13  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: June 02, 2015, 10:00:25 pm
These tools are amazing. You make me feel like I don't push myself nearly hard enough with my own games when I see you putting this much detail and attention into your work. Inspiring and soul-shattering all at the same time.

Thanks! Don't sell yourself short though. I just played your Masochisia alpha and it's great.

The only thing that I did not find cool amazeballs is that in a previous post it seems you used the 8-head proportion for the characters.
The 8-head is a heroic, super-hero proportion; if you are still changing things regarding character models I would suggest playing with 6- or 7-head proportions, just to see how it looks.
But then I am not a 3D artist, and it is not a big deal either.

This is good feedback. I'm not a fan of hero proportions either and I chose an "average male" reference of 7.5 heads for this game. That seems pretty normal after a quick google check.

Something I have found though is that due to the first person view/perspective, the character's proportions feel more top-heavy in-game. Changing it to 6 or even 7 heads might push things a little too cartoony so I'll probably just leave it where it is. I will keep this in mind though, thanks.
14  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: June 01, 2015, 06:41:28 pm
Do you feel pretty confident that most of the tools (and scripts?) you've written so far have saved (or will save) you time and hassle in the long run?

Well, I hope it'll save me time in the end. It's really distracting for me to work with inefficient tools so I'm always tempted to smooth out any rough edges. I guess that motivates me more than something concrete like saving X hours or whatnot. I try to be a little careful though. One reason why I avoided Blender for this project is that I could get seriously sidetracked making source-level changes there.

Are you thinking through the plot and puzzle flow throughout the process?  Have you come across anything that's caused you to reconsider the scope of the game?

Yeah, I'm always rolling the plot/characters/flashbacks/fates around in my head. The only big semi-recent change is that I'll probably limit the number of flashbacks to ~20. I was originally thinking around 60 but that's both too much work for me and too open of a playing field to sort through for the player. So there'll be a fair few characters that either survive, die "off camera", or die simultaneously.
15  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: June 01, 2015, 01:07:55 am
Thanks again everyone for the encouragement. Right, then.. character tools.

Character Tools

From the start of this project I knew creating all the sailors was gonna be a huge task. I looked around at various "create a character" libraries and wasn't impressed so I decided I'd just try to make them all myself. For the dev build I settled on a pretty basic system of blend targets applied to a common base character, with animated weighting to switch between them. That worked ok for a few characters in a few scenes but I could see pretty quickly that it wouldn't scale up to 80 characters in >20 scenes.

It took me a while, with lots of trial and error, and now I've finally got the character pipeline sorted. My goal was to be able to quickly create characters as both original builds and as components of existing pieces. I also wanted to keep things fairly flexible so I could build characters piecemeal - going back to edit or adjust them whenever I felt like. The current result is a set of tools: ObraHumans, ObraPainter, ObraSculptor (all Maya MEL-scripts), and ComposeHuman (Haxe command-line app).


ObraHumans

This is the main tool for creating and editing characters in Maya. There's no special meaning for "human" here, I just went through most of the obvious names with previous tools (character, crew, sailor, etc) and wanted something new.

Flipping between characters in the ObraHumans tool

For this game I've modeled and rigged only a single base "neutral" character - unique characters are created as variations of this. Each unique character is made up of:

  • The neutral base body mesh (rigged)
  • Blend targets on the body for changing features (always 100%, non-animated: face shape, injuries, expressions, etc)
  • Separate hair/clothing/hat/etc meshes (optional)
  • An overall scale (applied to the rig, so elongating limbs instead of a raw scale factor)

Switching between characters is a process of applying blend targets, showing/hiding different piece meshes, and setting a rig scale. The script handles this for me and I can easily add variations, new clothes, hats, etc, then choose them from the drop-downs for any character. It works a lot like a typical character builder, just integrated into Maya and organized for easy additions.

Adjusting pieces/targets/clothes on one character


ComposeHuman

ComposeHuman is a command-line tool for compositing the texture layers. The compositing logic is based on the stuff I wrote about earlier, encoding a sort of alpha value into RGB-painted textures. I originally had a fairly complicated process here but moving that complexity to the ObraHumans script let me simplify ComposeHuman a lot.

ObraHumans calls out to ComposeHuman whenever adding or removing a piece to a character. That updates the texture and it's reloaded into Maya. In the end, each character has two unique textures (body and clothes) that are composited based on the selected pieces. Most pieces just use the body texture but some clothes actually have to overlay the body and need the separate texture for that.

ComposeHuman has just two modes: "build" or "paint". Build does a simple composite:

> ComposeHuman build seaman2 base-male face-seaman2 pants-long

Compositing layers into a single texture

Paint mode also composites but tints the base layers so they can later be separated. This is the texture that I paint on directly using ObraPainter (explained below).
> ComposeHuman paint face-seaman2 <workingDir> base-male

Paint-ready texture with ignored pixels in blue/cyan


ObraPainter

ObraPainter is the MEL-based painting script. Maya's built-in 3dpaint tool is so limited and buggy that I needed to add a bunch of workarounds and features to get a smooth pipeline. ComposeHuman was a big part of that (all the alpha-in-RGB and compositing stuff) but I also had to add a custom file-based undo/redo system since Maya's was so flaky. And the ObraPainter window includes the most useful shortcuts for brush colors, alpha values, showing wireframes, toggling reflection, etc.

All the actual painting is done in Maya by hand on a Bamboo tablet. For non-stroke stuff I can also quickly edit the texture in Photoshop - to shift some pixels around or whatever.

Painting a character


ObraSculptor

The last character tool is a simple mode-switching helper. Since all of the character variations are defined as blend targets for the base body mesh, I needed an easy way to edit these. You don't want to edit the base mesh directly, and you don't want all your blend targets hanging around visible in your scene most of the time. That's what I had while making the dev build and it was getting way out of control. So ObraSculptor just unhides the desired mesh and sets it up for vertex editing.

Sculpting a character's blend target

In Use

With all this character tool work mostly solved I've been able to start proper arting on the sailors. It's a huge relief that the pipeline actually works ok. Everything is fast enough that I can make around 2 characters per day, which puts me on track to finish all of them in a month or two.

Right now they all look sorta samey, especially in-game with 1-bit rendering. I'm not too worried about that since most people do look kinda similar anyways. Still, identifying sailors by appearance is important in this game, so hopefully after adding clothing variations, tattoos, and other custom stuff they'll differentiate more.

Here's a few faces I've done so far:

16  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: May 28, 2015, 12:31:28 am
Tool Talk

Making this game by myself has been something. I like changing between the multiple tasks often but I've learned that in my heart I'm more of an engineer than an artist. Drarring n stuff is great but what really keeps me happy is writing code. Obra Dinn is overall a fairly straight-forward game technically so that itch doesn't get scratched as much. I often find myself making excuses to write code instead of model or draw.

One of the best ways to quench that thirst is by writing tools. A good tool can save a lot of time and given how over-my-head I am on this project art-wise, I need all the time savers I can get. Here's a couple Maya tools I've written to both keep my inner engineer happy and my inner artist lazy and unproductive.

Wrapped Ropes

More rope scripts. As I was filling some of the lower decks, I had a need to secure objects to keep them from moving around. As before, nothing gets used as much on a sailing ship as a good rope. You'll often see ropes wrapped around things to keep them in place. Unfortunately, wrapping ropes around things is not a well-supported feature of most modeling packages. I took a stab at using extruded splines, but lining up the points just right is a huge pain. And any adjustment to the objects being wrapped requires basically remodeling the ropes too.

Mel script and a fun technical challenge to the rescue again. In this case, I decided to specify rope-wrapping volumes that get intersected with a set of target objects to define a convex hull, which is then munged to get me a nice closed rope. Here's an example where I need to secure two galley stools to the fore mast:

Rope volumes used to generate wrapped ropes


If I move the stools around or add another object there that needs to also get wrapped, I just hit Ctrl-R twice and it drops me back to the original volumes and regenerates the ropes. A few more examples:

Wrap everything securely

There were a lot of little quirks to getting this working right, mostly due to Maya's totally worthless boolean operations. Boolean ops for arbitrary 3D models are a difficult problem - one that was solved convincingly 30 years ago. That Maya's boolean ops don't just work in 2015 is some kind of criminal offense. Anyways, I lucked out in that the rope volumes are always closed box shapes, so I can just use plane slicing instead of actual boolean intersections for the operations I need.

What is nice about Maya is that this procedure was possible in Mel without any complex math programming. Mel is a pretty crappy language altogether, but it's very domain-specific and you're able to plug directly into all (most) of Maya's built-in features. So things that you'd normally write a big complex model editing function for can usually just be done with one or two Mel commands instead. Sometimes you need to get a little clever to get what you want but that's fun too.

Tiedown Ropes

The first cousin of wrapping things with ropes is tying them down with ropes. Actually tiedowns are probably more important than wraps, but I didn't have a need for these until later. The basic idea is to again have a rope volume that defines where a rope attaches (at two ends) and where it passes over to "tie down" something.

Generating tiedowns over a set of crates by script

This script re-uses some of the logic of the wrap rope script and integrates it with the rigging rope script. So the rope volumes here are used to first generate a convex hull over the target shapes. Then a spline is built along that hull down the center of the original rope volume. That spline is then passed through the rigging rope logic to extrude it and put anchors at the ends. The nice thing is that, once the anchor points are marked with two red vertices, the rope volume can encompass any shape and look ok.

These would be pretty laborious to model by hand


I made a bunch of character tools too. Will post about those soon.
17  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: May 27, 2015, 11:12:02 pm
Hey guys! Still alive! Still working!

There've been a bunch of small things keeping me from posting here, biggest one being that a lot of my work in the last 2 months or so has been on tools and there's just not much to show. I'll make a post about some of those tools in a minute though.

How much time do you think it will take/takes at this point to play through the entire game?

Honestly I have no idea. I think in the end it depends on how streamlined I want to make traveling between all the different flashbacks. I could make that quick and easy or I could make it ponderous as in the current dev build. I prefer quick and easy even if it shortens the overall game time.
18  Feedback / DevLogs / Re: ◁ DO NOT CROSS ▷ on: April 12, 2015, 01:57:56 am
Wow! Love the concept and the art. Your first-person view looks especially great.
19  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: April 01, 2015, 02:41:14 am
what I was wondering: do you have night and day scenes? does the sun position change in your flashbacks? If I recall correctly you are not always jumping back to the same day/time. the last composed image would have to be night, but the shadows are strong. I hope the moon is up :)

The moon is up, but yeah I just now notice that the screenshots seem weird with the bright light and black sky when the moon is to your back. It feels ok when you're in the game and can see a bit of the moon light source though.

Are you changing the mesh geometry each frame? That's the only way you'd be sending a lot of stuff to the GPU. Mesh data resides on the GPU's memory after it's allocated by the driver. I'd say it's likely the slowdown was because of Unity dynamic batching breaking down due to your multi-pass shader, resulting in excessive draw calls.

Except for a few small objects, all meshes are static and marked for static batching. There's not much room for dynamic batching to help. I render each pass with Camera.RenderWithShader, not as in-shader "Pass"es. I don't even know how to do it in-shader with a render target switch between passes.

Also, the old sectioning pass used very fast forward rendering and the old lighting/sectioning pass used deferred. The new single pass is just deferred. So there might've been room for multipass optimizations, but it's hard to make guarantees about Unity's batching and it'd never be as fast as a single pass.
20  Feedback / DevLogs / Re: Return of the Obra Dinn [Playable Build] on: March 28, 2015, 07:54:54 pm

Working on the lower decks, I've had some performance problems building up for a while now. Even though the game's resolution is super low and there are no obviously fancy surface shaders going on, the geometry/object count is pretty high. Normally that wouldn't be a problem, but since the 1-bit rendering technique requires two passes, sending all the geometry to the GPU twice eats up a lot of frame time. Yesterday I decided to sit down and see if I could get all the rendering done in a single pass (plus post-processing).

The Old Way

The original technique required rendering the scene in two passes:

Scene Pass 1: Sectioning 

The sectioning pass just draws the vertex colors, which have been pre-processed to define areas that should be separated by edges. Wireframe lines are later drawn along these edges in post. 

Old method: Sectioning pass

     RED = Tool-generated random hash (lower bits)
     GREEN  = Manually-set adjustment color
     BLUE = Tool-generated random hash (upper bits)
     ALPHA = unused

Scene Pass 2: Lighting/Texturing

Lighting/texturing runs the full Unity SurfaceShader pipeline to generate lightmaps + dynamic light + textures. Dithering and other logic is applied to these results in post.

Old method: Lighting/texturing pass

     RED = Light value
     GREEN = Markup value (0: normal, 0.5: ordered dithering, 1:dust)
     BLUE = Texture value
     ALPHA = unused

Post-processing Combiner

These scene passes are written to two render textures which are then combined in a post process to make the final buffer. Having the light and texture in separate channels from the lighting/texturing buffer enables me to adjust their gradient ramps separately, which I use to make the hard shadows and blown-out textures that help with legibility.

The final post-processed output, 30fps

Separating the two passes like this makes sense for a couple reasons:

  1. Easy to visualize the two main features of the rendering style: wireframe lines and dithered 1-bit surfaces
  2. Two 32-bit RGBA buffers give plenty of space for the data.

Neither of those reasons is worth the framerate hit though. Unity's not very good at reusing scene data for subsequent passes, so even though the sectioning pass alone runs at 100+ fps, sending all the geometry twice bogs things down too much.


The New Way

So the goal was to combine the two scene passes into one, with the hope that performance improves on lower-end video cards. That last bit is important because it precludes me from using MRT. For a single pass, everything has to fit in 32 bits.

One particular complication is that Unity's SurfaceShaders by default don't allow writing to the alpha channel, so you're effectively limited to just 24 bits in an RGBA buffer. I tried to be happy with that for a long time before finally tracking down a fix, which just became possible in Unity 5.

The problem. Alpha being "helped" to 1 for all surface shaders.

You can't easily edit the generated code directly, but you can redefine the offending macro in your own code, which is thankfully included after UnityCG.cginc:

The fix requires undefining the macro in your own surface shaders

With that, you have use of the full 32 bits and can write anything to the alpha as long as it's non-zero.

Single Scene Pass

So now I just had to pack 48 bits from the two separate passes into 32 bits for the single scene pass. The basic idea was to chop off the lower 8 bits of the sectioning hash (leaving 16 bits), reduce the lighting/texturing output to a single 8-bit channel, and use the alpha as a markup value to specify which "pass" the RGB values were coming from. Because the final output is dithered 1-bit, very few bits are actually required for the lighting/texturing. The result:



I was also able to use Unity's shader Queue tags to control draw order: set the sky shader to "Queue = Background" and the dust shader to "Queue = Transparent". 
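Back to the packing for a second - to make the bit budget concrete, here's one possible layout written out in C#. The actual packing lives in the game's shaders and may well be arranged differently:

using UnityEngine;

public static class SinglePassPacking
{
    //   R,G : 16-bit sectioning hash (the old hash minus its lower 8 bits)
    //   B   : 8-bit light/texture value (plenty, since the output gets dithered to 1-bit)
    //   A   : markup flags saying which "pass" the RGB data belongs to (must stay non-zero)
    public static Color32 Pack(int sectionHash16, float lightValue, byte markup)
    {
        return new Color32(
            (byte)(sectionHash16 & 0xFF),
            (byte)((sectionHash16 >> 8) & 0xFF),
            (byte)Mathf.RoundToInt(Mathf.Clamp01(lightValue) * 255f),
            markup);
    }

    public static void Unpack(Color32 c, out int sectionHash16, out float lightValue, out byte markup)
    {
        sectionHash16 = c.r | (c.g << 8);
        lightValue = c.b / 255f;
        markup = c.a;
    }
}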

Post-processing Combiner

The post-processing step now just has to do the edge detection (with 2 channels instead of 3), the darkness check (to invert wireframe lines in darkness), the dithering (bluenoise or ordered based on the alpha bits), and the dust inversion. There is some extra cost to doing more of the lighting/texturing combination in the scene shader instead of the post processing shader. But the final output looks identical to the old two-pass method, and it runs significantly faster.

Single scene pass, 60fps

Optimizing Early

There's always a danger in doing optimizations like this before the game content is mostly done. Now that I have an extra 30fps to work with, there's a good chance I'll paint myself back into poor performance. A common trick for smarter programmers is to keep a few secret optimizations in their pocket until the very end - if you optimize too early, the artists will just fill up the scene again and you'll be in a spot where it's much harder to win back framerate. I'll have to rely on self-discipline instead.

I am glad this worked out ok though. There's a (justified) perception that a 640x360 1-bit game should not have framerate problems on any machine. There's a lot going on behind the scenes that makes the rendering more intensive than it looks initially, but I'd rather match the perception than make excuses.