And now, a little bit of tech update. These past couple of weeks I've been working on some 'tools' (you might already know we are basically using Python for most of our processes).
After the prefetch method explained in my previous post, we needed a way to cut big images into smaller tiles so they can be prefetched. That is not a difficult task (although it has boring details, like storing a file format, loading, etc.), but a couple of details must be taken into account.
Let's take as an example this image, marked with pink squares of what would be 128 x 128 pixels (the images are actually scaled down); I've marked in green what would be the size of our viewport:
We use 512 x 512 textures. (We don't use bigger ones so we keep a finer granularity to load / unload textures.)
1. Maintain locality in the texture / atlas

The naive approach would be to cut the image at the same size as the texture, in this case 512 x 512; we would end up with something like this:
This would be wasting texture space for nothing.
Another way would be to cut into small tiles, for example 128 x 128, and then put one after the other, like this:
From my point of view, this has a drawback: if the image is very big (for example a 4000 x 500 pixel background), we know for sure that the tiles of each row end up in different textures: a minimum of 4 textures would be needed at any viewport position (and it could be worse).
So we decided to place the tiles by first cutting the image into rectangles of viewport size (rounded up to a multiple of the tile size), and then cutting those rectangles into the actual tiles and "pushing" them onto the textures, ending up with something like this:
This way the tiles stay closer together.
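The two-pass cut described above can be sketched roughly like this. The function names, sizes, and return shapes are assumptions for illustration, not the actual tool's code:

```python
# Sketch of the two-pass tiling: viewport-sized rectangles first,
# then TILE x TILE tiles inside each rectangle.
TILE = 128      # tile size in pixels (as in the post)
TEXTURE = 512   # atlas texture size in pixels

def round_up(value, multiple):
    """Round value up to the next multiple of `multiple`."""
    return ((value + multiple - 1) // multiple) * multiple

def cut_into_rects(image_w, image_h, viewport_w, viewport_h):
    """First pass: cut the image into viewport-sized rectangles,
    with the rectangle size rounded up to a multiple of the tile size.
    Returns (x, y, w, h) tuples; edge rectangles are clipped to the image."""
    rect_w = round_up(viewport_w, TILE)
    rect_h = round_up(viewport_h, TILE)
    rects = []
    for y in range(0, image_h, rect_h):
        for x in range(0, image_w, rect_w):
            rects.append((x, y,
                          min(rect_w, image_w - x),
                          min(rect_h, image_h - y)))
    return rects

def cut_rect_into_tiles(rect):
    """Second pass: cut one rectangle into TILE x TILE tiles,
    which are then pushed one after another onto the atlas textures."""
    x0, y0, w, h = rect
    tiles = []
    for y in range(y0, y0 + h, TILE):
        for x in range(x0, x0 + w, TILE):
            tiles.append((x, y,
                          min(TILE, x0 + w - x),
                          min(TILE, y0 + h - y)))
    return tiles
```

With the 4000 x 500 background example and a hypothetical 800 x 600 viewport, the first pass produces 896 x 500 rectangles (800 rounded up to 896, height clipped to the image), so the tiles that are visible together also end up packed together.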
2. Offset the image coordinates

Last, but not least, we must offset the UV coordinates a little bit inside each of our tiles.
For example, for a tile containing the cat:
We don't want to put the coordinate exactly on the limit between one tile and the next (the red dot), because with "nearest" sampling we could get the neighbor's pixel. We don't want it in the center of the first pixel either (the yellow dot), because we would get artifacts when rendering at 2x or 3x the base resolution (a real pixel would fall on the limit between pixels). So we move it in by a small percentage of the pixel (the green dot; we are using 5%, so artifacts could start to appear at 20x the base resolution).
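The inset can be sketched as below. The 5% is the value from the post; the function name, the 512 texture size, and the example tile position are made-up for illustration:

```python
# UV coordinates for a tile, moved a fraction of a pixel inward from the
# tile edge so "nearest" sampling never picks a neighboring tile's pixel.
TEXTURE = 512   # atlas texture size in pixels
INSET = 0.05    # fraction of a pixel to move inward (the 5% from the post)

def tile_uvs(tile_x, tile_y, tile_w, tile_h, texture_size=TEXTURE):
    """Return (u0, v0, u1, v1) for a tile placed at (tile_x, tile_y)
    inside the atlas texture, with the edges inset by INSET of a pixel."""
    u0 = (tile_x + INSET) / texture_size
    v0 = (tile_y + INSET) / texture_size
    u1 = (tile_x + tile_w - INSET) / texture_size
    v1 = (tile_y + tile_h - INSET) / texture_size
    return u0, v0, u1, v1
```

At 20x the base resolution a real pixel is 1/20 = 5% of a texture pixel wide, which is exactly the inset, so that is where the yellow-dot style artifacts could start to reappear.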
So, the basic unit of the Prefetch 2D for images is a rectangle of viewport size.
I should also point out that we feed several images into the same run of the tool so that no texture is left with empty tiles.
For now we will stay with this very basic approach, but there are some improvements / optimizations that would be cool to implement to save texture space:

- Remove completely transparent tiles: if the image has a tile that is completely transparent, that quad can be removed.
- Try to detect big areas of a single color and make the four UV coordinates point to one single pixel in the texture.
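The first of those optimizations is cheap to detect at cut time. A hedged sketch, assuming the pixel data is available as a flat RGBA bytes-like object (this is not the actual tool's code):

```python
# Detect fully transparent tiles so their quads can be dropped.
# `pixels` is assumed to be flat RGBA data, 4 bytes per pixel, row-major.
def tile_is_transparent(pixels, image_w, tile_x, tile_y, tile_w, tile_h):
    """True if every alpha byte in the tile region is zero."""
    for y in range(tile_y, tile_y + tile_h):
        row_start = (y * image_w + tile_x) * 4
        for x in range(tile_w):
            alpha = pixels[row_start + x * 4 + 3]  # RGBA: alpha is byte 3
            if alpha != 0:
                return False
    return True
```

Any tile for which this returns True simply never gets a quad or a slot in the atlas.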
However, at this stage this is good enough, and we will move on to other stuff, only coming back to this if we find we are really eating too much texture memory.
I'm gonna have a