Author Topic: game art tricks  (Read 82630 times)
gimymblert
Level 10
The archivest master, leader of all documents
« Reply #40 on: May 12, 2014, 11:37:31 AM »

http://bglatzel.movingblocks.net/publications/
http://michaldrobot.com/publications/
http://www.cs.cornell.edu/projects/layered-sg14/
gimymblert
« Reply #41 on: May 15, 2014, 04:43:11 AM »

Everything 'bout gamma:

http://filmicgames.com/archives/299
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html
http://www.arcsynthesis.org/gltut/Illumination/Tut12%20Monitors%20and%20Gamma.html
http://molecularmusings.wordpress.com/2011/11/21/gamma-correct-rendering/
http://www.poynton.com/notes/color/GammaFQA.html
http://radsite.lbl.gov/radiance/refer/Notes/gamma.html
http://filmicgames.com/archives/327
http://www.graphics.cornell.edu/~westin/gamma/gamma.html




https://docs.unity3d.com/Documentation/Manual/LinearLighting.html
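The common thread in all these links: do your lighting math in linear space and apply the gamma curve only for display. A minimal Python sketch of the standard sRGB transfer functions (the piecewise curve from the sRGB spec) and the classic mid-gray blending mistake they exist to avoid:

```python
def srgb_to_linear(c):
    """Decode an sRGB value in [0, 1] to linear light (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode linear light back to sRGB for display."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# The classic bug: averaging black and white directly in sRGB...
naive = (0.0 + 1.0) / 2                # 0.5, noticeably too dark
# ...versus averaging in linear space, then encoding for display:
correct = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
# correct is about 0.735: a 50/50 mix of light is much brighter than sRGB 0.5
```

The same decode/encode pair is what engines apply to textures and the framebuffer when linear lighting is enabled (as in the Unity doc above).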
gimymblert
« Reply #42 on: May 19, 2014, 08:04:13 PM »

Cel shading, from concept to execution
http://www.polycount.com/forum/showthread.php?t=117071


http://imgur.com/a/dEWXP

texture resolution discussion
http://www.polycount.com/forum/showthread.php?t=134911&page=2

Edge flow, loops and poles
http://www.olborne.com/formationBlender/page/modelisation/topologie/topologie.html

« Last Edit: June 07, 2014, 05:47:14 PM by Gimym JIMBERT »

gimymblert
« Reply #43 on: June 18, 2014, 10:38:20 AM »

Screen-space real-time global illumination (radiosity + reflections + AO) for the cost of a traditional AO pass
http://graphics.cs.williams.edu/papers/DeepGBuffer14/
gimymblert
« Reply #44 on: July 01, 2014, 10:43:17 AM »

Specular cubemap illumination
http://c0de517e.blogspot.ca/2014/06/oh-envmap-lighting-how-do-we-get-you.html
gimymblert
« Reply #45 on: July 03, 2014, 02:49:04 PM »

Mario galaxy
http://heliocide.co.uk/2012/12/super-mario-galaxy-study/
gimymblert
« Reply #46 on: July 04, 2014, 08:56:14 PM »

PBR encyclopedia
http://www.polycount.com/forum/showthread.php?t=136390
gimymblert
« Reply #47 on: July 07, 2014, 06:58:12 AM »

http://www.polycount.com/forum/showthread.php?t=136891

texture course




Normal-map light manipulation in Photoshop using adjustment layers
https://www.youtube.com/watch?v=N9CTOw1lxQ0&feature=youtu.be&t=13m10s

normal map online
http://www.polycount.com/forum/showthread.php?t=136779

More PBR explanation




texture set up
https://www.youtube.com/watch?v=DAyx-C3O1Hc

Quote
You can find the albedo values for almost everything with a point-and-shoot camera (as long as it has a flash) and a cheap pair of passive 3D glasses.

Put one lens horizontally across the flash and the other vertically over the camera lens, then take your photo. This will strip out all of the specular highlights (for best results, do this in a dark room where there is little or no ambient light to interfere).

Getting the reflectance values is a little more tricky: it involves taking the same photo from the exact same angle and position, but without the filters, converting both images to linear space, and then subtracting the highlight from the flat image. HOWEVER, a lot of that work is quite unneeded... keep your reflectance values low (or just use the metalness workflow) and you'll be surprised how many non-metals will work with this method.
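The subtraction step from the quote can be sketched per pixel in Python. Note the photos must be linearized before subtracting, exactly as the quote says; the function names here are mine, not from any tool:

```python
def srgb_to_linear(c):
    """Decode an sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def isolate_specular(unfiltered_px, cross_polarized_px):
    """Per-pixel specular estimate: linearize both photos, then subtract
    the cross-polarized (diffuse-only) shot from the unfiltered one."""
    return tuple(
        max(0.0, srgb_to_linear(u) - srgb_to_linear(d))
        for u, d in zip(unfiltered_px, cross_polarized_px)
    )
```

Pixels where the two shots agree (no highlight) come out as zero, leaving only the specular contribution.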

Scanning albedo

http://www.polycount.com/forum/showthread.php?p=2089327
« Last Edit: July 07, 2014, 07:36:17 AM by Gimym JIMBERT »

gimymblert
« Reply #48 on: July 12, 2014, 01:16:34 PM »

The newbie heaven
http://www.polycount.com/forum/showthread.php?t=137096
BomberTREE
Level 9
was kitheif, am @suttebun
« Reply #49 on: July 14, 2014, 09:36:56 AM »

Wow what is this thread? Amazing!
gimymblert
« Reply #50 on: July 14, 2014, 09:46:27 AM »

Photogrammetry
http://web.forret.com/tools/megapixel_chart.asp
http://en.wikipedia.org/wiki/Comparison_of_photogrammetry_software

http://www.123dapp.com/catch
http://repos.archeos.eu/apt/pool/main/p/ppt/
http://ti.arc.nasa.gov/tech/asr/intelligent-robotics/ngt/stereo/
http://ccwu.me/vsfm/
http://mementify.com/
http://www.phov.eu/
http://www.my3dscanner.com/
http://www.agisoft.ru/products/stereoscan/
http://www.hypr3d.com/
http://www.arc3d.be/
http://dlt.fmt.bme.hu/

http://opensourcephotogrammetry.blogspot.com/2013/08/openmvg-c-open-source-multiple-view.html
http://makezine.com/2013/08/21/diy-photogrammetry-open-source-alternatives-to-123d-catch/
http://wedidstuff.heavyimage.com/index.php/2013/07/12/open-source-photogrammetry-workflow/
http://www.palaeo-electronica.org/blog/?p=615



gimymblert
« Reply #51 on: July 14, 2014, 11:22:03 AM »

Photogrammetry test





Taken around 15:16 on Monday, 14 July 2014, carelessly (18 photos).
The process took less than an hour (20 min of cloud computing on Autodesk's free 123D Catch web service), without hurry and with plenty of distractions,
with an Olympus Stylus VG-180 at these settings: 3 MP (1980x1440), 4:3, ISO 100, auto WB, normal compression, no flash.
The camera cost 69€ locally in the West Indies (Martinique).
http://www.amazon.com/Olympus-Stylus-VG-180-Megapixel-Compact/dp/B00C7NX884

gimymblert
« Reply #52 on: July 14, 2014, 06:42:44 PM »

Differences in PBR between Marmoset and UE4


Quote
I'm pretty sure the science behind this is in how we process our IBL vs. how they do.

Since Toolbag is a standalone rendering platform, all we really need to worry about is making sure things look good while still performing well on our minimum specs. UE4 doesn't have that liberty; they need to make sure their renderer works within the context of a full game, with potentially hundreds of characters within a very large scene instead of one character or a small scene.

So when you import a cubemap into UE4, it pre-processes a lot of the calculations for blurring based on roughness, and stores that information in the cube's mipmap chain, as well as generating a lookup texture to match up to the mip chain. This makes it less strenuous on the GPU, as it no longer needs to do as much math in realtime: all of the computations are done ahead of time and the results stored in textures. The downside is that there is clearly some small loss of clarity, but looking at the images above I'd say it's a very worthwhile tradeoff.

Toolbag 2, on the other hand, does everything in realtime; we don't precompute anything at all. This means you can load in any panorama from any source and have it "just work" as your IBL contribution instantly. The resulting image is about as sharp as it can be, but it's a lot more work for your processor to do in realtime.


Also, as EQ points out: currently Toolbag's reflection model is Phong-based while UE4's is GGX-based. We'll be releasing our own GGX-based reflection model soon, so a more accurate comparison can be had.
http://www.polycount.com/forum/showpost.php?p=2096337&postcount=127




update:


Quote
Threw one together for Marmoset's new GGX specular, compared with its regular Blinn-Phong.

I think I'll add a page to my website about this soon. It's very interesting to see side-by-side comparisons of how these shaders behave. Unfortunately, the only other engine I can get to, besides Marmoset and UE4, is CryEngine; there are so many more out there, though.
« Last Edit: July 16, 2014, 07:45:25 PM by Gimym JIMBERT »

gimymblert
« Reply #53 on: July 16, 2014, 06:56:57 PM »

Shadow of the Colossus breakdown
http://www.froyok.fr/blog/2012-10-breakdown-shadow-of-the-colossus-pal-ps2

Assassin's Creed 2 breakdown
http://www.mapcore.org/page/features/_/articles/technical-breakdown-assassins-creed-ii-r24

Batman: Arkham City breakdown
http://www.froyok.fr/blog/2012-09-breakdown-batman-arkham-city

"How many polygons does a character have?" thread on Beyond3D (mesh breakdowns)
http://beyond3d.com/showthread.php?p=1690590

Texture and compression
http://www.gamasutra.com/view/feature/130877/making_quality_game_textures.php?print=1

Procedural
http://www.polycount.com/forum/showthread.php?t=137234

Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs
http://jcgt.org/published/0003/02/03/
« Last Edit: July 18, 2014, 07:35:40 AM by Gimym JIMBERT »

gimymblert
« Reply #54 on: August 03, 2014, 03:44:38 PM »

Assassin's Creed 3 water breakdown
http://www.fxguide.com/featured/assassins-creed-iii-the-tech-behind-or-beneath-the-action/
gimymblert
« Reply #55 on: August 06, 2014, 06:12:09 AM »

Dev article on Guilty Gear Xrd
http://www.polycount.com/forum/showthread.php?t=121144&page=8
http://www.4gamer.net/games/216/G021678/20140703095/
I'll do a compact report of the interesting bits later.

Quote from: Chev;2099538
This is an abridged amateur translation of the article. May it inform you still. Anyone fluent in japanese is invited to report glaring errors.

Original article there

The article is the first in a new series of technical articles on 4gamer by Zenji Nishikawa about game graphics, and Guilty Gear Xrd, which came out in arcades in February, is an obviously good candidate. There's a lot in common between this article and the recent one in CGWorld, but they aren't formally linked AFAIK.

The game runs in UE3, chosen because it was cheap (UE4 had just come out so there were big discounts), mature and stable with a lot of support behind it, and they didn't want to spend tons of time implementing their own engine (they tried on GG Overture and it didn't turn out too well). It'd also make ports to home consoles easy. They didn't have a lot of programmers, so the scripting possibilities were also a big plus, since non-programmers could change some parts of the game directly. So they ported their whole gameplay engine from previous games to UE3 so they could use their usual tools with it. The 3D modeling program is Softimage, and the same shaders were created in UE3 and Softimage so modelers could see directly what the final result would look like.

Team of 25 that includes four programmers, three planners in charge of game design, 12 artists. Add to that a hundred persons they outsource work to.

First concept emerged in 2008, production started in 2011. They made a prototype movie that convinced them to use 3D graphics. Production ramped up to full scale mid-2012 and continued until the end of 2013. They'd experimented with 3D graphics since 2007 but hadn't found them to be expressive enough and there wasn't much of a point in using them over sprites while screen resolution remained low.

The game runs on Sega's Ringedge 2 arcade system, OS is 32 bit embedded windows XP. The arcade version runs at 60FPS in 1280x720 (the PS3 version will be the same while the PS4 version will be in 1080p). It doesn't use MSAA but FXAA for anti-aliasing. The scene budget is about 800K polygons, 550K for the background and 250K for the characters, but each actual character is about 40K, the remainder is for all the stuff they can spawn on-screen.

There are separate heads for wide shots and close-ups. It's not LoD, in that wide shot head don't necessarily have lower poly counts, instead they have features that are still readable and expressive from afar, bigger eyes and all that. See Milia for an example (close-up head, viewed at fight resolution, full fighting character with close up head, wide shot head, fight resolution, full character). Sol, the hero, has 460 bones, though some characters use up to 600. Faces have 170 bones, with a lot of them being used to change the face shape to look more 2D.

They use forward rendering, not deferred, though there's something about a z prepass. That means each character is rendered four times (z and color, once for the outline mesh and once for the actual mesh). About 160 MB of textures per scene, 70-80 different shaders, all made in UE.

Physical accuracy isn't a factor, everything is at the service of the anime look. Backgrounds have no real lighting, all the shadows are painted in. There is one light per scene that's exclusively used to cast character shadows on the ground. Characters don't receive light from the environment, instead they each have their own local light source, that is animated on a frame by frame basis to get the best look (environment lighting looking subpar, local lighting looking good).

The basis of the shading is a classical cel shader, ie if incident light is over a threshold value the surface is lit otherwise it's shadowed, but the shadows will give away the polygonal nature of the character so there's a number of measures to affect the light response and make it more anime-like.

The R channel of the vertex color is a shadow bias, i.e. it's like ambient occlusion except artist-directed. Pixels with a lower value will fall into shadow more easily (occlusion term, model without it, model with it). There's a second shadow bias in the green channel of a lighting control texture (the ILM texture) that allows making areas always lit or always shadowed, essentially faking self-shadowing. Characters only cast shadows on the ground, not on themselves.
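The thresholded cel shade with its two biases, as described above, could look roughly like this in Python. The multiply-then-threshold combination is my assumption; the article only says both biases push pixels toward shadow:

```python
def cel_shade(n_dot_l, vertex_r, ilm_g, threshold=0.5):
    """Return True if the surface point is lit, False if shadowed.

    n_dot_l  : incident light term (normal . light direction), 0..1
    vertex_r : R channel of vertex color, the artist-painted shadow
               bias (like AO: lower values fall into shadow sooner)
    ilm_g    : G channel of the ILM texture; 0 forces an area into
               shadow, faking self-shadowing
    """
    return n_dot_l * vertex_r * ilm_g >= threshold
```

With both biases at 1.0 this degenerates to the classic "lit if incident light is over a threshold" cel shader.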

Vertex normals are adjusted to simplify the light response and eliminate small-scale polygonal shadows. Apparently the export from Softimage to UE3 automatically regenerated normals, so they had to rewrite it. Faces were adjusted by hand, other parts by transferring normals from proxy objects, with the examples of Sol's pants or character hair in general.

Specular highlights are controlled by the R and B channels of the ILM texture. R is specular intensity, B is specular size. See this close-up of I-no where the left side and right side of her top have different specular size

Shadow color is controlled by the SSS texture. Basically on a lit surface the response will be something like (light color + ambient color) * texture color, while in shadow the response is ambient color * texture color * SSS color, so you can tint the shadows. Don't read too much into the name since it doesn't actually do any scattering.
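The two lighting formulas in that paragraph, written out directly (RGB tuples in linear space, component-wise products):

```python
def mul(a, b):
    """Component-wise product of two RGB tuples."""
    return tuple(x * y for x, y in zip(a, b))

def add(a, b):
    """Component-wise sum of two RGB tuples."""
    return tuple(x + y for x, y in zip(a, b))

def surface_color(lit, light, ambient, base_tex, sss_tex):
    """Lit:    (light color + ambient color) * texture color
    Shadow: ambient color * texture color * SSS tint.
    Despite the name, the SSS texture does no scattering; it only
    tints the shadowed regions."""
    if lit:
        return mul(add(light, ambient), base_tex)
    return mul(mul(ambient, base_tex), sss_tex)
```

A reddish SSS texture, for instance, gives the warm shadow tint typical of anime skin shading.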

There's no post effect at all for the outlines. They're generated by drawing a dark, expanded version of the model with front face culling instead of back face culling then drawing the normal model over it, a classic method, however the way most use it isn't expressive enough. So, the Alpha channel of the vertex color controls outline thickness (Sol without line thickness and with it, Venom without and with), but not just that, the green channel controls how much it scales with distance from the camera, and the blue channel is a Z-offset. This offset will push the vertex "back" into the z-buffer, effectively making the outline invisible when in front of close objects. It's used to simplify the outlines of hair and small features, see Chipp's hair without z-offset and with it.
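The vertex expansion for the inverted-hull outline pass, with the vertex-color controls described above, might be sketched like this; the exact way the distance-scaling term enters the formula is my guess:

```python
def outline_vertex(pos, normal, vcol_a, vcol_g, cam_dist, base_width=0.01):
    """Expanded position for the dark, front-face-culled outline mesh.

    vcol_a   : vertex alpha, artist-painted outline thickness
    vcol_g   : green channel, how much width scales with camera distance
    cam_dist : distance from the camera
    (The blue-channel z-offset is applied after projection, not here.)
    """
    width = base_width * vcol_a * (1.0 + vcol_g * cam_dist)
    return tuple(p + n * width for p, n in zip(pos, normal))
```

Drawing this expanded mesh first, then the normal mesh over it, leaves only the silhouette of the expansion visible as the outline.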

That takes care of the outlines, which occur on edges, but not inner lines. Just drawing them into the texture would show texel jaggies or blurry areas, so instead they used a trick they called "Motomura lines" (Motomura being the lead modeler and tech artist): all the lines are drawn strictly axis-aligned in textures, and instead you use the texture coordinates to control direction and thickness, including the start and end of lines. That way they stay sharp no matter the distance, but you'll even benefit from the texture filtering. Outlines are drawn in the alpha channel of the lighting control texture.

There'll be a second part covering animation but that's all for now.

http://www.4gamer.net/games/216/G021678/20140714079/
Quote from: Chev;2110010
Random thoughts and comments after all that:

The stuff they do with the z-offset and modified perspective is both simple (well, for a programmer) and crazy. You can effectively use as many layers as you want as long as you're careful with z-buffer depth. I thought they'd be rendering to offscreen surfaces but this is way more efficient. On the other hand the z-offsets mean you can't do deferred lighting because the depth buffer's all wrong. You'd have to maintain a second depth buffer with non-offset values and need some tweaking. But of course since they never have more than one light per object this is fine.

Without the z stuff which appeals to me as a programmer Motomura lines would easily be the best trick they've shared as far as I'm concerned.

This is the kind of process that can really only be born when you have 2d artists, 3d artists and programmers/tech artists sitting together. It's cool how cross-discipline it is.

Let it be forever known that UE3 can do way more than just brown space marines.

Final vertex color use is R: ambient occlusion, G: outline scaling, B: outline Z-offset, A: outline width.

Final texture population is:
-Base (RGB, diffuse colors)
-SSS (RGB, shadow multiplier)
-ILM (RGBA, lighting control, R: specular intensity, G:shadow bias, B:specular size, A: inner lines)
-decals (greyscale)

Now I think of it, I wonder where the illumination info for glowy bits is.

There are some secondary textures you can see in screenshots (notably the pool ball textures for Venom's instant kill) but they aren't as important. Note that the base and SSS textures are the only ones that really contain color information so they're the ones you need to change to make different character palettes.

For such a workflow to really be efficient I think a modeling/animation program that can use custom pixel shaders directly in the viewport must be absolutely necessary, since the lighting formula and the outline stuff are very nonstandard. As a Blender user I'm very jealous of that function of Softimage.

There's really a lot of work that went into this but the results are there.. I'm curious to see how fast they'll iterate, ie add characters. The one they've added since release is the previously unplayable boss, and the next one already had his model present in the game in cutscenes.

Thanks for the thanks (and beer offers)! It was time consuming but fun to do! Of course the real persons to thank are the Xrd team and the article author Zenji Nishikawa, especially since I get the impression japanese devs are usually less likely to share such information.





other stuff
http://www.cgsociety.org/index.php/CGSFeatures/CGSFeatureSpecial/the_making_of_bet_shean
« Last Edit: August 06, 2014, 07:05:53 AM by Gimym JIMBERT »

gimymblert
« Reply #56 on: August 10, 2014, 09:37:32 AM »

Model giveaways; not exactly a trick, but hey
http://www.polycount.com/forum/showthread.php?t=138341
gimymblert
« Reply #57 on: August 12, 2014, 06:55:07 AM »

Guilty Gear follow-up on animation

Quote from: Chev;2107579
And there we go. Same caveat as last time, there may be omissions or errors but hopefully not too many. If you notice any just tell me (and if you don't have a Polycount account bug me on Shoryuken or SA or wherever else I may be, same nick).

Secrets of anime-like 3D graphics, part 2

Opening statement: you can slap toon shading on something but it takes a conjunction of other things to make it feel 2D.

In a fighting game using 2D stages, a background would be a series of 2D layers using parallax movement, ie as the action moves left and right different parts of the background will be visited but layers that are supposed to represent far away objects will only move at a fraction of the camera speed to simulate the effect of perspective. Perspective comes naturally with 3D graphics but in Xrd's case it's actually massively faked: far away elements are in fact much closer than they appear but have been significantly scaled down and deformed to trick the eye (hotel view 1, hotel view 2, in situ). That's because they still want the perspective and movement to be art-directable and outside of the confines of the standard perspective transform. Notably, big objects would have to be placed far away, so far that they'd appear almost static to the viewer, but you actually want that left-right background movement in a fighting game, so they're brought closer so they can move better.

practical result featuring I-no's stage:
GUILTY GEAR Xrd -SIGN- [garbled Japanese title] - YouTube

In 2D games, you only need to design and draw the part of the stage that is behind the fighters, since the camera is always pointing the same way, but for certain cinematic moves and scenes the camera in Xrd will orbit the action or otherwise change angles so they've actually modeled 360° backgrounds for the stages. Forget that for the story cutscenes that take place out of stages though, the backgrounds in those are billboards or very simple geometry.

Background characters close to the fight are 3D models but distant characters and crowds also are billboards, for obvious performance reasons.
Quote
3D game character animation is usually almost entirely bone-based plus maybe shape keys, but that places limits to the kind of movement you can animate, so in Xrd characters have swappable parts to allow them to actually change mesh topology. Example 1, Faust's burst, with his base mesh on the left and the alternate torso on the right. Milia and her multiple living hair shapes in the next three pictures.

Additionally, the bone themselves have a higher degree of freedom than usual, so the characters can fake 2D flaws. For example facial features can move around the face to better match drawings (Bedman in his instant kill cinematic as he'd appear without deformation, the suggested 2D corrections and the final result). It's not enough for the more symbolic anime faces (like round eyes or May's >_< faces) so for those there are swappable heads (and about now you should be realizing the "wide shot heads" we saw in part 1 are a feature of that same system).

Characters also needed to fake or exaggerate perspective in ways changing the camera's field of view couldn't achieve so this was done through allowing bone scaling on all three axes. UE3's animation system can't do that out of the box, they had to code it themselves and cite it as a great advantage to having access to the source code. And once they had that in, that meant they'd also opened the door to all kinds of muscle motions, cartoony squash'n'stretch animations and deformations. A big punch will get a big fist and so on. Bone scaling is now a feature in UE4 and they like to think it's because of Xrd.

May's victory pose and the actual deformation
Playing around with Ky's skeleton
Slayer flexing those arms
Sol in Softimage with his bone scale settings
Quote
Since the camera will orbit the action during finishing moves, they needed special effects like smoke and flashes to be plausible in 3D. They tried billboards but it didn't look convincing enough, so after unsuccessfully trying methods that'd save them time, like multiple spheres for smoke, they just went with brute force. That is to say, each frame of the smoke effect is its own mesh. In the same spirit, a character like Zato, who can basically melt into a shadow, has a completely separate animated intermediate model for the melting effect. Most of those effects are simple and don't have outlines, and they don't cast light on surrounding objects, so as to keep the fight clear and readable.
Quote
Now, toon shading isn't exactly a new technique and players are familiar with it, yet when footage of Xrd was made public there was considerable debate as to whether it was full 3D or not. What "betrays" 3D is not the visual appearance but the movement. Usually when a game uses toon shading it'll still animate at 30 or 60 fps, while TV animation is usually around 12 or 8 fps. This can even be a problem with 3D series: take the well-known example of a popular anime show that uses toon-shaded models of the characters doing mocapped dance routines during the ending credits (just click on one of those), and it feels kind of off, or at the very least certainly not like 2D.

Guilty Gear is basically animated on a basis of 15 fps, not 12 like a TV show, but it's actually a bit more complex than that. In the TV industry, 24 fps is considered full animation, the kind Disney uses; 8-12 fps is limited animation. Guilty Gear, due to its videogame nature, uses storyboards with a 60 fps basis for the numbering, and for each animation frame you specify how many game frames it should be held. You're not simply lowering the general framerate, you're banishing interpolation altogether! If you take an f-curve-based animation and just lower the framerate, it doesn't look like 2D animation, it just looks like a crappy engine spitting out a crappy framerate. So you're really animating pose to pose all the time.

Sol with f-curve animation at 60 FPS for the first half and then the same at 15fps. Neither feels 2d:

[garbled Japanese title] - YouTube

Part of May's storyboard:
http://www.4gamer.net/games/216/G021678/20140714079/screenshot.html?num=034
http://www.4gamer.net/games/216/G021678/20140714079/screenshot.html?num=035 (that one has the timeline with frame count for that sequence)

and the resulting poses:
[garbled Japanese title] - YouTube

Do note the whole game logic is still 60fps, so even though your jump may only have 4 or 5 animation frames its trajectory will still be a perfect 60fps parabola. Basically your poses are handled like sprites would have been in a 2D game.
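The storyboard timing described here, each pose held for a count of 60 fps game frames with no interpolation, is easy to sketch. The poses and hold counts below are made up for illustration:

```python
def build_frame_table(storyboard):
    """Expand [(pose, hold_frames), ...] into one pose per 60 fps game
    frame. A pose is held, then snaps to the next: no interpolation."""
    frames = []
    for pose, hold in storyboard:
        frames.extend([pose] * hold)
    return frames

# A 16-game-frame jump animated pose to pose (4 poses held 4 frames each,
# the ~15 fps feel) while game logic still runs every frame at 60 fps.
anim = build_frame_table([("crouch", 4), ("launch", 4), ("apex", 4), ("land", 4)])
```

The game samples `anim[frame]` each tick, so the character snaps between poses exactly like a sprite sheet would, while the physics trajectory underneath stays a smooth 60 fps curve.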
Quote
As we've seen in the previous part the light is adjusted on a frame by frame basis but really, any little tweak will be. In particular, in cinematic moves where the camera moves around even the character's nose may be moved around to look perfect on each frame.

May's instant kill cinematic move. You have to speed it up 2x to get the actual arcade speed:

[garbled Japanese title] - YouTube
Quote
A small primer on fighting game internals: in a 2D fighting game the gameplay is usually really just 2D axis-aligned boxes moving around and colliding. A character is really represented by defensive hurtboxes and offensive hitboxes, themselves animated pose to pose, and when a character's hitbox overlaps an opponent's hurtbox it means an attack has landed. No need for more complex collision volumes; boxes are simple and easily tweakable (as they essentially determine which move works against which, balancing a game means moving them around). Hitboxes in red, hurtboxes in blue.

Problem is, gameplay happens on a 2D plane but the models representing it are volumes. That means that because of the perspective transform they get distorted (well, more or less sheared, mathematically speaking) the further they are from the vanishing point, which in turn means they don't match their hitboxes (which are on a plane and thus don't get sheared) the way they do in the middle of the screen. Specifically, they visually get wider. Since it's important for players to have their characters accurately represent the boxes, a transformation is applied to compensate for the effect. The most direct way would be to use a parallel projection (i.e. no perspective for the characters), but they went for a compromise and sought a balance: the characters' projection is a hybrid projection, 30% perspective and 70% parallel. So you still feel the perspective change, but the effect is considerably lessened.

100% perspective
100% parallel
30%/70% hybrid

That technique was pioneered by Street Fighter 4. The correction is strictly horizontal though, the vertical component of the projection is 100% perspective as gameplay still seemed to work fine (notably, the screen area is wider than it is high, so the characters can't distort as much).
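One plausible way to express the 30%/70% hybrid in Python: lerp the horizontal screen coordinate between a full perspective divide and a parallel projection. The reference depth `ref_z` at which the two agree is my assumption, not from the article:

```python
def project_x(x, z, perspective_amount=0.3, ref_z=10.0):
    """Horizontal screen coordinate under a hybrid projection.

    perspective_amount=1.0 is full perspective (divide by the point's
    own depth); 0.0 is parallel (divide by a fixed reference depth).
    """
    persp = x / z          # true perspective: width varies with depth
    parallel = x / ref_z   # parallel: depth has no horizontal effect
    return perspective_amount * persp + (1.0 - perspective_amount) * parallel
```

A point at the reference depth projects identically either way; points closer to the camera widen only 30% as much as under full perspective, which is exactly the lessened-but-present effect described.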
Quote
Sprite graphics are flat and subject to the painter's algorithm. They behave as layers, everything you draw will just be drawn on top of what was previously there and nothing will ever intersect. 3D models on the other hand operate as volumes and if two characters overlap, they will clip into each other thanks to the z-buffer. You can't just move one character behind the other because the perspective will give it away. What you can do, though, is use a z-offset. Rather than pushing back the character in the world itself, you change his depth value after the perspective transform, pushing him back into the z-buffer (you may remember a z-offset was already used to hide some outlines in the first part). That makes it possible to have two characters conceptually occupy the same space but still not intersect and instead layer like sprites (normal view, top view, as you can see, same space but no intersection). So usually the attacking character is about one meter in front of the opponent. The z-offset goes back to zero for moves like throws that do need characters to overlap. They had fears about whether this'd look alright for big characters, but it does.
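The post-projection z-offset in miniature: bias the depth-buffer value, not the world position, so two characters can occupy the same space yet still layer like sprites. A simple linear depth mapping stands in for the real projection here, and the numbers are illustrative:

```python
def layered_depth(world_depth, z_offset, near=0.1, far=100.0):
    """Depth-buffer value with a bias applied after projection.
    The model never moves in world space; only its z-test value does."""
    ndc = (world_depth - near) / (far - near)   # 0.0 at near, 1.0 at far
    return min(1.0, max(0.0, ndc + z_offset))

# Two fighters at the same world depth: the defender gets a small offset,
# so the attacker always wins the z-test, like a sprite drawn on top.
attacker = layered_depth(10.0, 0.0)
defender = layered_depth(10.0, 0.05)
```

Setting the offset back to zero restores normal z-testing for moves like throws where the models genuinely need to interleave.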

In the standard fight view jumping characters move in front of the top gauges, while during cinematic moves all the gauges are on the top layer. Pretty standard fighting game convention there.
Quote
To handle fighters facing different directions 2D games usually just flip the sprites. Same for Xrd. Unlike a game like SF4 that remaps poses in the other direction to preserve asymmetrical features, for the sake of emulating the 2D look Guilty Gear's approach is really just to flip the model along the X axis (and change the culling order). However, this time around characters have various things written on their clothes and accessories. All the texts are decals and on their own texture, and when the character is flipped the texture coordinates of that texture are flipped as well, so the text still appears the right way round. The decal texture is greyscale, where neutral gray means no change and going towards white or black will respectively lighten or darken the model, so it's yet another instance of painting with the light level.
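The grayscale decal behavior described there (neutral gray leaves the base alone, toward white lightens, toward black darkens) matches a standard 2x-multiply overlay; the sketch below is one plausible reading, not confirmed by the article:

```python
def apply_decal(base_rgb, decal_gray):
    """Modulate a base color by a grayscale decal texel.
    0.5 is neutral, values above lighten, values below darken."""
    return tuple(min(1.0, c * decal_gray * 2.0) for c in base_rgb)
```

Because it only scales the light level, the same decal texture works unchanged across every character palette.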
Quote
Everything's rendered in a 64 bit color buffer (16 bits per channel), and three post-effects are applied. FXAA (without, with), bloom for anything that has luminance over 1.0 (without, with), and a diffusion filter to soften things up a bit (without, with).

Without and with post-effects.
Quote
For the arcade version ease of play was the priority so they went with a purely 2D view during fights, but they may include full 3D camera modes in home versions.

They've acquired a lot of stylized rendering know-how on this project, and note that to further extend these 3D techniques it is absolutely necessary to analyse what makes 2D anime work. They think in the future these techniques can and will be applied to other games and genres with even more surprising results.

---

And that's it! I can only hope future 4gamer technical articles will be as informative!
http://www.polycount.com/forum/showthread.php?t=121144&page=10

other



http://www.polycount.com/forum/showthread.php?t=121144&page=11
« Last Edit: August 12, 2014, 07:02:27 AM by Gimym JIMBERT »

unsilentwill
Level 9
O, the things left unsaid!
« Reply #58 on: August 14, 2014, 04:14:06 PM »

This thread is kind of a mess for how useful all your links are.

I don't really know what any of this means, but it seems to be the right place to put this: http://www.smallupbp.com/
Quote
SmallUPBP is a small physically based renderer that implements the unified points, beams and paths algorithm described in the paper

I do love physically based rendering, and it seems to be getting faster?
gimymblert
« Reply #59 on: August 14, 2014, 04:52:30 PM »

PBR doesn't get "faster"; it's just a better understanding of how light reacts with materials. It lets you unify parameters, and therefore unify materials into a single shader (mostly). It will always behave consistently under any lighting setup.

So even if you don't have a true PBR pipeline, you can take inspiration from the idea to simplify your shader pipeline (most notably the concept of "energy conservation", which helps with consistency). Given the principles of PBR, you can write PBR-inspired shaders that aren't accurate but have similar benefits even on very low-end hardware, so power and speed aren't really part of the equation. Low-end hardware can also benefit greatly from lookup textures to simulate it at low cost (see lookup-texture-based BRDF shaders).

It allows faster production precisely because there is less case-by-case switching and less trial-and-error tweaking. It makes no distinction between metal, plastic, concrete, etc., because they can all be expressed by the same set of parameters that define the material. You can also feed in actual measured parameters, sampled from simple photography of real materials.

In practice you still need more shaders for special cases (skin, translucency, alpha cutout, etc.), but that's about managing overall scene shader complexity (so not all objects run the most expensive shader) or sorting issues. It also ensures that all those shader permutations react consistently to lighting conditions together.

So in the end, PBR just simplifies lighting and shading greatly at the material level.
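"Energy conserving" above just means a surface can't reflect more light than it receives. Two tiny illustrations in Python: deriving the diffuse weight from the specular weight, and the standard (n + 8)/(8π) Blinn-Phong normalization that keeps highlight energy roughly constant as shininess changes:

```python
import math

def energy_conserving_split(specular):
    """Diffuse weight derived from the specular weight so that
    kd + ks <= 1: reflected light never exceeds incoming light."""
    ks = min(1.0, max(0.0, specular))
    return 1.0 - ks, ks

def blinn_phong_normalization(shininess):
    """(n + 8) / (8*pi): the common normalization factor that makes a
    Blinn-Phong lobe approximately energy-conserving. Sharper highlights
    get brighter and broader ones dimmer, so total energy stays put."""
    return (shininess + 8.0) / (8.0 * math.pi)
```

This is the kind of constraint that lets one shader cover metal, plastic, and concrete consistently: the parameters vary, but the light budget never breaks.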