I’m glad photorealism/PBR and the latest and greatest GPU features are of no interest to me.
Aw, I have some ideas to make cheap GI for weak hardware, who am I going to sell the idea to?
More seriously, photorealism isn't interesting in itself, but the techniques behind it actually help a lot for any type of rendering.
For example, energy conservation is a concept that benefits any type of rendering. WHY? Because the idea of not having more light out than in actually makes authoring any light setup easier, even non-PBR ones! It's less fiddly. And understanding the various effects actually helps you understand how to make the more artistic ones. For example the painting effect in Skyward Sword is inspired by depth of field.
But the shader compilation explosion has nothing to do with PBR stuff anyway; it's more an artefact of artistic shading complexity.
In fact PBR helped a bit in making stuff LESS complex by bringing common parameters (more well-defined materials). But artists will always fill the void with extra stuff anyway: make things simpler and they will make things even more intricate!
Okay, I'll sell my cheap GI solution because I'm a NERD, I always get excited to talk about the ideas I have. I'm sorry for you all...
I'm preparing for an open world game, the end goal being to make a "no man's sky + spore + skyrim" lite. There were a number of features I really wanted, like time of day "with shadows" (YAGNI, just change the ambient)... the target being a Mali 400 MP GPU (weakass sauce, somewhere between a PS2 and an N64, but with OpenGL ES 2.0). I was going for axis-aligned cube primitives, raycast by sampling the cube parameters stored in a precomputed map. And since I also wanted volumetric curly hair with a YAGNI technique of casting rays in modulo space, I was reading a lot about raycasting.
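As an aside, raycasting an axis-aligned cube boils down to the classic slab test. A minimal sketch (function and variable names are mine, not from any engine):

```python
def ray_aabb(origin, direction, box_min, box_max):
    """Slab test: return (t_near, t_far) of the hit interval, or None on a miss."""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if o < lo or o > hi:        # parallel to this slab and outside it
                return None
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:              # slabs don't overlap: miss
            return None
    if t_far < 0:                       # box entirely behind the ray
        return None
    return t_near, t_far
```

Three divides and a few compares per axis, which is about as cheap as ray intersection gets.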
Putting the two together, I realized I could use a GI solution that is super cheap, async, and basically just texture sampling (i.e. it works on GLES 2.0).
- The first thing was to realize I would have a very low poly environment with blurry textures; since GI is low frequency, it works well even at 1m per pixel (Enlighten does it), so there are no high-frequency details breaking the result too much. Also updates can be slow, as (time of day) GI is generally slow moving. And I'm on a freaking Mali 400 MP, I can accept some artefacts and less realism in exchange for dynamism and artistic rendering.
Idea 1:
+ Box projected cubemap, reprojected onto the lightmap (LM) recursively
- works only in convex areas
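The box projection in idea 1 is the usual parallax-corrected cubemap trick: intersect the sample direction with the proxy box, then look up the cubemap toward that intersection point relative to the probe position. A minimal sketch under that assumption (names are illustrative):

```python
def box_project(pos, direction, box_min, box_max, probe_pos):
    """Parallax-correct a cubemap lookup against an axis-aligned proxy box.
    `pos` is assumed to be inside the box; returns the corrected direction."""
    t = float("inf")
    # Distance along `direction` to the nearest far slab plane on each axis.
    for p, d, lo, hi in zip(pos, direction, box_min, box_max):
        if d > 1e-9:
            t = min(t, (hi - p) / d)
        elif d < -1e-9:
            t = min(t, (lo - p) / d)
    hit = tuple(p + t * d for p, d in zip(pos, direction))
    # Corrected lookup: from the probe center toward the box intersection.
    return tuple(h - c for h, c in zip(hit, probe_pos))
```

That proxy-box intersection is also exactly why it only behaves in convex areas: the box stands in for the real geometry.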
Idea 2, Inspired by this:
https://copypastepixel.blogspot.com/2017/04/real-time-global-illumination.html
+ What if I take the surfel point data and store it in a random texture (nearest neighbor, because the data is unsorted), store cluster data in mipmaps, and use that as a G-buffer. In another texture, bake the address and weight (UV into the surfel map) of all the surfels visible from each point of the LM (found with a raycaster): basically a tile of samples, indexed by a hash of the LM point position. The tilemap is bidirectional: you can use the light accumulated in the LM to reflect back onto the surfels (they see each other); however, since many LM points see a given surfel, you have to do it point by point.
- Three textures -> surfel G-buffer, lightmap accumulation, tilemap. The lightmap is limited by the hardware max texture size and the tile size; on Mali that's 2048² textures, so tiles of 64 samples (8x8) limit the LM to 256².
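The per-texel gather in idea 2 can be sketched like this. The dicts stand in for the three textures; the structure (one baked tile of surfel UVs and weights per LM texel) follows the description above, but everything else is a stand-in of mine:

```python
def gather_lightmap(tilemap, surfel_radiance):
    """One GI update pass.
    tilemap:         {lm_texel: [(surfel_uv, weight), ...]}  (baked offline)
    surfel_radiance: {surfel_uv: (r, g, b)}                  (current frame)
    """
    lightmap = {}
    for texel, tile in tilemap.items():
        acc = [0.0, 0.0, 0.0]
        for uv, w in tile:
            r, g, b = surfel_radiance[uv]   # just a texture fetch at runtime
            acc[0] += w * r
            acc[1] += w * g
            acc[2] += w * b
        lightmap[texel] = tuple(acc)
    return lightmap
```

The point being that once the tilemap is baked, an update is nothing but weighted texture fetches, no geometry touched at all.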
A. Realization 1: wait a minute, surfels don't compress well because the data is mostly random, but if we pack them relative to the surface... it's a LIGHTMAP! So basically you use the tilemap to map back into the same texture. Most G-buffer data is friendly to linear interpolation, so we can sample between pixels too (the reason: surfaces on a lightmap can be spatially coherent without being visible from the same point).
B. Thinking about implementation: what if I do away with the weight data by simply not accumulating rays that fall on the same surfel, and store them in casting order... wait a moment, that looks like a sphere map, it's the hemisphere visibility of a point. OH, that's half a cubemap in lat-long format!!! Makes sense: GI is effectively integrating over the hemisphere above a point...
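For reference, the lat-long mapping in B is just spherical coordinates remapped to the unit square; the upper hemisphere occupies half of it. A tiny sketch (z-up convention is my assumption):

```python
import math

def dir_to_latlong(d):
    """Map a unit direction to lat-long (u, v) in [0,1]².
    Azimuth drives u, polar angle drives v; v <= 0.5 is the upper hemisphere."""
    x, y, z = d
    u = math.atan2(y, x) / (2.0 * math.pi) + 0.5
    v = math.acos(max(-1.0, min(1.0, z))) / math.pi
    return u, v
```

So "half a cubemap in lat-long format" is literally the v in [0, 0.5] strip of this mapping.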
Idea 3: why not both?
+ Do away with the tilemap; render the scene into a UV cubemap that is box projected. For every point of the lightmap, sample the box projection of the cubemap, and use that to find another point of the LM to accumulate as lighting. Store the cubemaps in an atlas, and store the index of the seen cubemap inside the LM to know which cubemap to sample. It also has the nice property that dynamic objects can sample the LM lighting using cubemap box projection, and therefore have all light changes directly reflected on them (though they don't contribute). You can probably have dynamic objects contribute to lighting by rendering them into another cubemap and merging the sampling of the two cubemaps during the GI update. Cubemaps are stored in octahedron mapping, which allows a full-sphere representation in a square format.
This has a lot of advantages despite the limitations. Box projection is a crude approximation of the scene geometry, but probably good enough for weakass hardware. Also it basically does the whole lighting pass in texture space for the environment, decoupled from geometry complexity. And you don't have to do it every frame. It uses only sampling, so it runs on any machine that does UV sampling.
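The octahedron mapping mentioned above folds the sphere onto a square. A sketch of the standard encode/decode pair:

```python
import math

def oct_encode(d):
    """Unit direction -> octahedral (u, v) in [-1,1]²."""
    x, y, z = d
    s = abs(x) + abs(y) + abs(z)
    x, y, z = x / s, y / s, z / s          # project onto the octahedron
    if z < 0.0:                             # fold the lower hemisphere outward
        x, y = ((1 - abs(y)) * (1 if x >= 0 else -1),
                (1 - abs(x)) * (1 if y >= 0 else -1))
    return x, y

def oct_decode(u, v):
    """Octahedral (u, v) in [-1,1]² -> unit direction."""
    z = 1.0 - abs(u) - abs(v)
    if z < 0.0:                             # unfold the lower hemisphere
        u, v = ((1 - abs(v)) * (1 if u >= 0 else -1),
                (1 - abs(u)) * (1 if v >= 0 else -1))
    n = math.sqrt(u * u + v * v + z * z)
    return u / n, v / n, z / n
```

Handy here because a square tile atlases trivially, unlike six cube faces, and both directions are branch-light enough to live in a shader.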
Although I have conducted some tests to check feasibility, I haven't fully implemented it because I'm doing an internship in web design right now. Stay tuned for results and actual limitations, with a benchmark.
I'm also thinking about idea 4:
- Trying to find a way to remove the cubemaps; they duplicate data, as many cubemaps can see the same point address. If only I could just store "square faces" on the surface of the box projection volume, and find which face a single point can see, but it seems there is no easy CHEAP way to do it without a complex structure and a few costly jumps.
- Given we only need 2 pieces of data per cubemap, we can store 2 cubemaps on top of each other and sample them both in a single read. But that's useless on its own, so instead store a hemisphere in a single tile, each on 2 channels.
- Using SH inside pixels to compensate a bit for the box projection approximation and recover some more angular data.
EDIT:
Also it turns out I have to experiment with Zonal Harmonics (ZH) for my shadow needs. ZH are linear in complexity, so a ZH9 only has 9 coefficients instead of 81 like SH; that can probably be made good enough for real time... Which means that if we store the ZH in 3 RGB textures, we can bake time-of-day shadows (indexed by time) with fine angular resolution, and use that as a mask on the direct lighting inside the LM G-buffer...
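To illustrate why ZH is so cheap: an order-n ZH is just n coefficients on the Legendre polynomials along one axis (hence 9 floats for ZH9, three RGB texels). A minimal evaluation sketch using the Legendre recurrence; I've dropped the SH normalization constants for brevity, so treat the coefficient scaling as an assumption:

```python
def eval_zh(coeffs, cos_theta):
    """Evaluate sum_l coeffs[l] * P_l(cos_theta) via the Legendre recurrence
    (l * P_l = (2l-1) * x * P_{l-1} - (l-1) * P_{l-2})."""
    p_prev, p = 1.0, cos_theta           # P_0 and P_1
    total = coeffs[0] * p_prev
    if len(coeffs) > 1:
        total += coeffs[1] * p
    for l in range(2, len(coeffs)):
        p_prev, p = p, ((2 * l - 1) * cos_theta * p - (l - 1) * p_prev) / l
        total += coeffs[l] * p
    return total
```

Nine multiply-adds per sample, and since the function only depends on cos(theta), indexing a stack of baked coefficient sets by time of day gives the shadow mask directly.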