Author Topic: Spooky Revelations - An adult game about uncovering desires  (Read 604 times)
« on: October 13, 2022, 06:02:13 AM »

So, first post of the first game.
Meh. I'm supposed to be interesting.


I'll write a tad about my problem with setting the drawing order of images.
My game is supposed to have a lot of images on top of each other, and at first they were just drawn in order of type (items would be drawn first, then people) and then by creation order (so people created last would get drawn on top of every other person). This was obviously less than ideal.

This is an image of the problem (left), next to the solution (right)

So I looked for sorting algorithms; being a n00b, a simple algorithm would be suitable. I checked Wikipedia's Sorting algorithm page. A generally fast algorithm is nice, but:
  • Algorithms can be simpler or more complicated, with complicated algorithms taking more time to implement
  • Different algorithms are faster in different conditions
  • Some algorithms are stable, meaning that items that compare equal keep their relative order. I'm using floating point for X and Y positions, so exact ties are unlikely, but I want to avoid flickering caused by two images swapping between front and back across frames
  • Some algorithms are adaptive, meaning they run faster when the input is already close to sorted. That's important because between frames the sorting order will mostly stay the same
So I just decided to pick insertion sort: it's simple, stable and adaptive.

Thanks to Wikipedia's page, adding insertion sort was just a matter of translating pseudocode to Lua and switching from zero-based to one-based arrays (in Lua, arrays start at 1).

I did hit a small snag after implementing it: the output numbers looked fine, but some of the drawables were no longer being drawn. Turns out I'm sorting by position on the X axis (the Y axis comes later), so the comparison should use a property of the objects, but the things being moved around are the objects themselves; comparing the objects directly doesn't work. So I made a small modification to the algorithm to separate the sorting criteria from what's being sorted:

This is Wikipedia's pseudocode (for zero-based arrays)
i ← 1
while i < length(A)
    x ← A[i]
    j ← i - 1
    while j >= 0 and A[j] > x
        A[j+1] ← A[j]
        j ← j - 1
    end while
    A[j+1] ← x
    i ← i + 1
end while

So at this point mine, because my arrays start at 1 and because I'm sorting objects by a property, looks like this:
i ← 2
while i <= length(A)
    obj_ref ← A[i]
    j ← i - 1
    while j > 0 and A[j].property > obj_ref.property
        A[j+1] ← A[j]
        j ← j - 1
    end while
    A[j+1] ← obj_ref
    i ← i + 1
end while

After that I added the ability to pass an offset function for the sorting criteria, given that I don't actually want to sort by the images' X and Y positions, because those are at the top-left corner; the offset function just adds half the width and half the height (edit: @ThemsAllTook noticed half the height is not ideal either) so images are sorted by their center rather than their origin.
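To make that concrete, here's a minimal Lua sketch of the modified insertion sort with the criteria passed as a function. The names are my own for illustration; the actual game code may differ.

```lua
-- Insertion sort for 1-based Lua arrays. `criteria` is the offset
-- function described above: it takes an object and returns the value
-- to sort by, so the comparison uses a property while the objects
-- themselves are what get moved.
local function insertion_sort(list, criteria)
    for i = 2, #list do
        local obj_ref = list[i]
        local key = criteria(obj_ref)
        local j = i - 1
        -- strict ">" keeps the sort stable: equal keys are not swapped
        while j > 0 and criteria(list[j]) > key do
            list[j + 1] = list[j]
            j = j - 1
        end
        list[j + 1] = obj_ref
    end
end

-- Example criteria: sort drawables by the horizontal center of their
-- sprite instead of the top-left origin (field names are hypothetical)
-- insertion_sort(drawables, function(d) return d.x + d.width / 2 end)
```

Calling it twice, once with an X criteria and once with a Y criteria, gives the two-pass sorting the devlog describes, and stability keeps the passes from fighting each other on ties.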

That worked well, and after sorting by the X axis, I just repeated the sorting for the Y axis. So yes! Finally, no one's feet are on anyone's face!

I don't expect a lot of (any?) people to read this, but if you do, know I'll continue posting, though erratically at first, until I get into a schedule.

Finally, because I copy-pasted the pseudocode from Wikipedia I'm required to talk legalese and say this devlog entry is licensed to the public under the CC-BY-SA 4.0 License.
« Last Edit: November 24, 2022, 10:18:57 AM by phanual »

« Reply #1 on: October 13, 2022, 10:30:43 AM »

Sorting for draw order can be surprisingly tricky! One thing that seems like it might become necessary in your case is to sort by bottom center instead of center on both X and Y axes - otherwise, a short background object might appear in front of a tall foreground object.

« Reply #2 on: October 13, 2022, 05:23:08 PM »

Quote from: ThemsAllTook
"Sorting for draw order can be surprisingly tricky! One thing that seems like it might become necessary in your case is to sort by bottom center instead of center on both X and Y axes - otherwise, a short background object might appear in front of a tall foreground object."

That makes sense: that would be the center of contact with the floor, though it would probably lead to a situation where people's feet are on other people's feet. I suppose I didn't notice/care because my main character is a floating ghost. Thanks for that, I couldn't ask for a better kind of feedback.

« Reply #3 on: October 30, 2022, 09:26:35 AM »

This entry will be about developing visual art styles!

I want a style that allows a simple workflow and is forgiving of mistakes. I also want to avoid the uncanny valley and to maintain the suspension of disbelief. This is based on the kind of game I want to make.

I plan to have two types of gameplay, and they are better suited to different styles:

  • The world has many objects, people and the like. The main character moves on a field, and can interact with various things
  • Scenes with a well-defined start and end, that act as a reward, and usually have a single person and a single object

The world gets the simplest style. The scenes will have fewer drawables, but each drawable will have more components, will be bigger, and its animations will have more keyframes.

For the world’s simplified style, I took inspiration mostly from Witch Trainer(NSFW), Helltaker and the lesser known Ghostlight.

  • Witch Trainer uses a 3/4 perspective for most situations. This makes sense: it's best if the characters face the screen, even though they are supposed to be facing each other in-game
  • Helltaker uses a flat (no shading) style with a limited color palette
  • Ghostlight also has flat colors with a limited palette, but with more varied shapes than Helltaker

So I ended with things like these:

For the scenes, I took inspiration from 13 images. 9 were realistic images found through search engines, while 4 of them were drawings. Since 7 of them are NSFW, I'll leave them to your imagination.

I want the scenes' style to be a more stimulating version of the world's style. That meant avoiding line art, using a limited palette, and keeping similar shapes. Some details omitted in the world's visuals appear in the scenes, like the whites of the eyes, eyelids, fingers and nails.

I had problems due to the lack of line art and shadowing, especially with the clothing removed. The shapes of overlapping body parts are hard to tell apart, since they share the same base color. To fix that, I had to do things like add borders to arms. In the future I will use line art at least for the skin, but I may also use it for clothing and hair. This is the style I ended up with:

That’s it for the static visual style, but what about dynamic animation styles? Some questions I had were, do I use bone animation or frame-by-frame animation? Do I want cartoony movement or realistic movement?

I have little experience with animation, just a bit with Blender. I prefer to use Libre Software if possible, so I looked at 2D alternatives. Among them were Pencil2D, Synfig, and OpenToonz. Pencil2D is for hand-drawn animations, but I’m using vectors created in Inkscape. OpenToonz seemed a bit too advanced for what I wanted, so I tried Synfig.

Synfig is capable of doing what I wanted (and in the end I did), but I found several issues:

  • Outdated documentation
  • Data-corrupting bugs
  • A few operations are excessively slow
  • Incomplete interoperability with Inkscape. The best option ended up being the outdated .sif export from Inkscape

Feature-wise, the only thing I wish Synfig would have is animation using vertex weights. Spine, Spriter, Dragonbones and OpenToonz’s plastic tool represent what I would like. Synfig has bones, but there's no fine-grained control. This meant the choice of bones vs frame-by-frame was made for me in favor of frame-by-frame.

Regardless, Synfig made it easier to learn animation concepts than a more complex tool would have. I learned about keyframes, tweening and walk cycles while I battled through animating. This made me realize I want realistic movement, simplified by omitting details.

I also realized, after finishing the scene’s animation, that I had done a pseudo 3D animation:

Even if the shading is flat, the leg movement resembles movement in a 3D space. That’s when I realized I should have done the scene’s animation in Blender all along, duh! Either way, I plan to remake the Scene in Blender, and that should also lead to a better workflow.

For now, the world’s animation will remain in Synfig. I tried to switch to Dragonbones, but it turns out it only works with raster images, and the same goes for Spine and Spriter. I’m also less reliant on the documentation now, and I learnt workarounds for the bugs. I’m not opposed to switching, but I'll need a good reason; if I do switch, my options are either Blender with an orthographic projection, or OpenToonz.

To recap: I formed an idea of the visual direction based on the game concept, looked for inspiration, and tried to find a suitable toolset. Both the visual direction and the toolset were modified as I did actual work with them.

I'm currently switching the scene to Blender and refining the style for the Scene, and details matter. For an artist, not having an idea of how you want a detail to look is the evil cousin of a programmer hunting a bug that's hard to reproduce consistently. Right now I'm trying to decide whether nostrils should be drawn or not. I had to go through dozens of images to find something similar to what I want, and I'm going for something similar to #3 in this Pinterest post.

I have yet to define the visual style formally, and you can notice the inconsistency in some areas like the perspective, but for now these attempts help shape the style.

I'll continue the licensing trend: this post is under a CC-BY-SA 4.0 License. It probably won’t matter anyway.

« Reply #4 on: November 24, 2022, 04:31:26 PM »

Unexpectedly, the third devlog post is about building a 3D character. I thought I'd face more difficulties while programming, but it turns out art has its own complexities. There are also many paragraphs, so I'll call out the main learning points.

This post is not about concept art, which is likely what comes to mind when character design is mentioned; I've decided to adapt the concepts to my goals as I implement them, to keep development incremental. Instead, I wrote a lot about 3D character modelling and animation.

Some context: the previous detailed scene is a 2D vector animation in Synfig, and I'm currently turning it into a true 3D animation in Blender.

Translating 2D to 3D confirmed that I should keep designing the 2D world in 2D software. 3D is more complex not only in shape but in color, you have more vertices to work with, and you're forced to deal with things you can ignore in 2D, like normals and cameras. So 3D design is not only more complex, it also has extra steps, and the technicalities of the transition from 2D to 3D could easily fill a post of their own. The upside: once everything is set up, animations are easier, since you don't have to handle shading directly or create new shapes when you change angles.

Input & output

At this point, there's still a chance of going back to 2D animations, but I feel I need a bit more experience with both 2D and 3D before making a final choice.

I'll now describe the parts that were interesting, and how I dealt with them.

Anatomical baseline
To get a baseline 3D model I decided to try MakeHuman Community, which creates a 3D humanoid mesh from input variables. Its focus on realism overcomplicates things: input variables are unitless statistical averages instead of commonly used SI units, and you can adjust the eye socket's vertical size at the inner, middle and outer portions while the eyeball has an independent size value. At the same time, being a volunteer project aiming at a difficult task, it falls short of realism in many ways. I have dealt with detailed anatomy before, so I kept using the software instead of looking for alternatives.

Default MakeHuman output in Blender

What I did was keep track of any values I would want to change for all models, plus the value changes of specific models. This felt like overkill while I did it, but afterwards I'm glad I did: I had to adjust and remake the model multiple times because I wanted to change a couple of things that affected how the other values are interpreted. Of course, I also saved the file with the model's changes, but having a separate, ordered record of thirty-something changes saved time compared to checking hundreds of values in the save file to see which were different.

I noticed that many values, while statistically average, were not at all like bodies are depicted in the media, or even how I picture them in my head; for those I changed the values to less realistic but aesthetically pleasing ones. I'm making a game, not a real-life medical sim.

Final MakeHuman output with Rig helper

There's other software for this:
  • MB-Lab is more advanced but with more restrictive licensing (so it was discarded)
  • VRoid Studio with Japanese anime characters
  • Metahuman creator for realistic characters which can be freely used with the Unreal Engine (and therefore discarded)
  • ...and probably more I don't know about

I could also have tweaked a pre-made free or paid model, or even made a new model from scratch, but that's best done by someone with a stronger foundation in character design.

Hair modeling
I am not aiming at realistic hair, so I modified the shape and colors. Multiple learning sources say it's hard to be efficient at creating toon hair, because the methods that easily make the required 3D shapes fail when outlines, cel-shading or non-realistic texturing are applied: materials don't look as intended and UV coords are hard to visualize. This leads artists to finish by shaping one big mesh with lots of small incremental tweaks, vertex by vertex, then unwrapping UVs face by face. Because I don't hate myself, I compromised on a flexible workflow with a limited number of steps and minimal tweaking at the end. It's not streamlined, but not too complex either.

The first big takeaway came from real-life cosmetology: split hair by areas. Cosmetologists typically split hair into 7 sections, but they want to grab hair with their hands, and I want to build a 3D mesh. Most sources split hair into front, sides and back, which is enough in most cases, but since I'm prototyping I just did front and back, with the sides resulting from merging those areas.

Current hair. Shirt is really black so overlaps are invisible

Wearables modeling
My first attempt at clothing used Blender's Cloth modifier. Even though the result isn't an animation, this method requires simulating physics for a few seconds, then stopping and keeping the deformations. Turns out mathematical cloth simulation has many shortcomings, being an approximation of something that requires precision, so it's best reserved for improving wrinkles. Even though I managed to avoid jitter after dozens of tries, I noped the hell away from a timesink I don't really need.

Unstable cloth simulation & stable cloth sim after tweaks and hacks

My second try was modifying MakeHuman's assets, but I didn't spend much time on it since I wasn't confident they would animate well. I now think that if you have premade assets meant for your base mesh, tweaking them is the best approach to adding clothes; you'll have to do even more tweaking of vertices and weight paint with whatever you create yourself anyway.

Regardless, I ended up duplicating part of the body mesh and editing it, in order to know exactly how the clothes work. The shoes were the only part where I felt I'd get a good result from premade assets.


Graphical style
It seems weird to work on 'style' after shaping the meshes, but some issues showed only as a consequence of the shapes.

The black color for everything but skin, irises and genitals was chosen to hide mistakes on the first model. Choosing forgiving parameters for the initial stages was a great idea: most tweaks were due to mesh overlaps, and this avoided having to tweak even more for now.

I'll also point out that after I was packaging the game I noticed I forgot to add hair shine. I'll just add it in the next release.

Next, since I'm going for a toon style, considering an outline seems natural. I tried multiple methods.

The first try that seemed to work was the 'inverted hull' method. It uses a slightly bigger duplicate of the mesh with the outline color, showing only the faces not facing the camera; most of the duplicate is covered by the original, leaving a protruding edge. The main downside is that there are more vertices and materials to process, and it may need tweaking for skin-tight clothing: inverted hull outlines can protrude through clothes at some angles, even if the clothes have their own outline. Usually a vertex group for the covered part of the body is made and hidden, but this doesn't work for the translucent wearables I'm using. You can instead be very exact with the size of the outlines and the distance of the cloth from the body (which can be set through physics calculations), but it gets complicated fast. A big advantage of inverted hull outlines is that they can be implemented in a game engine without shaders, but I'm not rendering 3D in-game, so that's irrelevant for me.

Too small for you to see, too big for players to ignore
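The geometric core of the inverted hull is tiny: every outline vertex is the original vertex pushed outward along its normal by the outline thickness, and the shell is then drawn with flipped faces so only its back-facing silhouette shows. A minimal sketch in Lua (my own illustration, not Blender's implementation):

```lua
-- Push a vertex outward along its (normalized) vertex normal.
-- `v` and `normal` are tables with x/y/z fields; `thickness` is the
-- outline width. These names are hypothetical, chosen for the sketch.
local function hull_vertex(v, normal, thickness)
    return {
        x = v.x + normal.x * thickness,
        y = v.y + normal.y * thickness,
        z = v.z + normal.z * thickness,
    }
end
```

Applying this to every vertex of a duplicate mesh produces the slightly bigger hull; the "protruding through clothes" problem above is exactly what happens when `thickness` exceeds the gap between body and clothing.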

I also tried generating 3D mesh lines using edge detection from the camera's point of view (Blender's version is named Line Art). It lets you customize the lines more than an inverted hull, but it looks weird while modelling and builds lines in places you don't want more often than an inverted hull does. In general, though, dynamically generating an outline has fewer downsides than an inverted hull.

Better hide the messy lines if not editing them

Another option is rasterized outlines as 2D post-processing (Freestyle in Blender). It's visually similar to building lines, but the lines are not shown while modelling, unlike the methods above; some people dislike it for that reason. Edge detection after 3D rendering is also the fastest method: this video compares render times, showing it is ~5 times faster than generating 3D meshes for lines. However, according to Blender's documentation, a limitation is memory usage.

Check a box and everything gets an outline

After all this, I tried simply using a material as an outline. I left it for last because I incorrectly assumed it would lead to the uncanny valley. With 2D art it's easier to draw lines than to shade, but shading best demonstrates depth, even when used as a fake outline. To get an outline-like material, use Fresnel equations. For visual artists, a 'Fresnel effect' just means the light you see reflected changes depending on your viewing angle, so colors differ depending on curvature. This invites the uncanny valley in static images, but material-based outlines are beautiful when animated. The big downside is giving up control: you can change the overall thickness, but the relative thickness of each part is dictated by the faces' angles, and the outline is always drawn on the mesh itself.

Curve-following outlines with materials
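The view-angle idea behind the Fresnel outline can be sketched numerically. This is my own simplification in Lua, not Blender's shader code: the closer a face's normal is to being perpendicular to the view direction (i.e. near the silhouette), the stronger the darkening.

```lua
-- Dot product of two 3D vectors given as x/y/z tables.
local function dot(a, b)
    return a.x * b.x + a.y * b.y + a.z * b.z
end

-- `normal` and `view` must be normalized; `power` controls how fast
-- the darkening falls off toward the center of the mesh, which is
-- what determines the apparent outline thickness.
local function fresnel_factor(normal, view, power)
    local facing = math.abs(dot(normal, view))
    return (1 - facing) ^ power
end

-- Treating everything above a threshold as outline color gives the
-- hard-edged toon look (threshold and power are arbitrary here):
local function is_outline(normal, view)
    return fresnel_factor(normal, view, 2) > 0.5
end
```

This also shows why control is lost: the factor depends only on the geometry's curvature relative to the camera, so you can tune `power` globally but not the thickness of individual parts.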

You can also create a texture and paint the outlines. The downside is that the outlines are not actually 3D, so when you rotate the mesh they will rotate with it. I didn't spend time on it.

In practice, people often combine these methods, typically inverted hull + cel-shading + textures with different tones. I'm going for a simple aesthetic, so I'm just going to use Fresnel cel-shading for now. I found it more pleasing than the other methods: it best shows the shape of the mesh, it fits the style of the world, where drawables don't have outlines either (the main character does have an outline, but I'm planning to change that for consistency), and as a bonus it's one of the less complex methods to set up. In the future I may add an inverted hull in the same color as the shading, just to give the outline a minimum thickness.

Rigging and skinning
I went through quite a bit of effort to get approximately accurate automagic rigging and weight painting on the body. Some protrusions remain when bending, and manual weight-paint retouching is needed. Retouching will sometimes mess up other movements; this usually means there are no edges in the mesh geometry perpendicular enough in the bending zone, so retopology is in order. A third option is using control bones to reduce the influence of bones that cause too much bending, but I didn't need that. This (5 minute!) video for an older version of Blender is the best resource I found on weight painting. Either way, you should always expect some manual retouching of weight paint.

Another option is transferring weights from the body, but that also requires retouching, and results may be similar to automatic weight painting. Even if it didn't need retouching, you don't want clothes to deform like skin. For example, look at your naked shoulder and lift your arm high; you don't want that crease even with skin-tight clothing.
Yet another method is copying vertex transforms from nearby vertices of another mesh (Blender does this with the Surface Deform modifier). I experimented with two subdivided planes, noticed it can be finicky (normals sometimes flip, and later transformations sometimes affect previous ones), so I went back to weight painting.

Weight painting works well enough, but it's a separate skill to learn, because it's more like sculpting (ZBrush style) than painting. You also need to learn to configure the tool so it behaves predictably: for example, instead of replacing weights, consider adding or subtracting paint in small amounts. Enable auto-normalize by default and learn what that means, but be aware that if only a single bone influences a vertex, its weight will be stuck at 1.

Finally, weight painting requires quality control. Check the entire range of motion of each joint from multiple angles, especially the extreme ones that only let you see the bending area while the rest of the mesh is covered by whatever else is in the scene.
Ugly protrusions during animations are so well known that I don't feel imagery is needed.

Whew! I'm almost done!

There's also a lot of software for animating:
  • Animaze allows realtime facial expression motion tracking and mapping to a 3D model
  • Live2D Cubism has pseudo-3D animations from a 2D character image, again with face tracking
  • Cascadeur is a 3D character animator with a free tier
  • Daz 3D is a 3D character animator that's easy to use with pre-made models for sale
  • Poser is an easy-to-use 3D character animator; you buy the program and can buy pre-made model sets
  • Adobe Fuse CC is a discontinued 3D character animator
  • Mixamo is an online animating service with many predefined animations
  • ...and probably many more
Overwhelmed by the options, I decided to stick with Blender for now. I feel animation is the most technical, repetitive and time-consuming part of the workflow, if only because you have to deal with the mistakes from previous stages too. So save time and, like everything in 3D, reuse and mirror along the X axis when there's symmetry.

Everything in this post above this line is licensed to the public under the CC-BY-SA 4.0 License.
« Last Edit: November 25, 2022, 08:18:50 AM by phanual »
