Author Topic: game art tricks  (Read 195031 times)
gimymblert
« Reply #600 on: May 16, 2021, 05:37:27 PM »

https://joyrok.com/What-Are-SDFs-Anyway
Tech Art Chronicles: What Are SDFs Anyway?
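The core idea, for quick reference (a minimal sketch with standard formulas, not code from the article): a signed distance function returns the distance from a point to a shape's surface, negative inside and positive outside, which makes shapes easy to combine and shade analytically.
Code:
// 2D signed distance functions: negative inside, positive outside.
float sdCircle(vec2 p, float r) {
    return length(p) - r;
}

float sdBox(vec2 p, vec2 halfSize) {
    vec2 d = abs(p) - halfSize;
    return length(max(d, 0.0)) + min(max(d.x, d.y), 0.0);
}

// The union of two SDFs is simply the closer of the two surfaces.
float opUnion(float d1, float d2) {
    return min(d1, d2);
}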

gimymblert
« Reply #601 on: May 18, 2021, 06:24:44 PM »

Not exactly game art, but I feel like these should be archived and used as reference

https://anime-backgrounds.tumblr.com/
Anime backgrounds

http://animationbackgrounds.blogspot.com/
Animation backgrounds

bbennyk
« Reply #602 on: June 17, 2021, 10:23:07 AM »

Here is an interesting breakdown of how 2D animation was achieved in Dead Cells from low-poly 3D models. I apologise if this has been posted here already. Thank you for the fantastic resource!
https://www.gamasutra.com/view/news/313026/Art_Design_Deep_Dive_Using_a_3D_pipeline_for_2D_animation_in_Dead_Cells.php
gimymblert
« Reply #603 on: June 29, 2021, 07:33:38 AM »




Fluffy stylized trees tutorial, using a quad-mesh-to-billboards shader in Unity (video)
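The usual move behind this kind of quad-to-billboard foliage shader (a generic sketch, not the tutorial's exact code; the vertex attributes here are assumptions): collapse each quad to its stored center in the vertex shader, then re-expand the corners in view space so every quad faces the camera.
Code:
// Camera-facing billboard expansion in a vertex shader (GLSL sketch).
// Assumes the mesh stores each quad's center in aCenter and the corner
// offset (e.g. (+/-0.5, +/-0.5)) in aCorner -- hypothetical attributes.
attribute vec3 aCenter;   // quad center in object space
attribute vec2 aCorner;   // corner offset within the quad
uniform mat4 uModelView;
uniform mat4 uProjection;
uniform float uSize;      // billboard half-size

void main() {
    // Move the center to view space, then offset in the view plane so the
    // quad always faces the camera regardless of its original orientation.
    vec4 centerVS = uModelView * vec4(aCenter, 1.0);
    centerVS.xy += aCorner * uSize;
    gl_Position = uProjection * centerVS;
}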

gimymblert
« Reply #604 on: September 23, 2021, 08:06:21 PM »

https://bottosson.github.io/posts/colorpicker/
Okhsv and Okhsl
Two new color spaces for color picking

Quote
Despite color picking playing a big role in a lot of applications, the design of color pickers isn't a particularly well researched topic. While some variation exists in the widgets themselves, the choice of HSL or HSV is mostly taken for granted, with only a few exceptions.

Is their dominance well deserved or would it be possible to create better alternatives? I at least think that this question deserves to be explored and that color picker design should be an active research topic. With this post I hope to contribute to the exploration of what a better color picker could and should be, and hopefully inspire others to do the same!

The main focus here will be on the choice of color space, rather than the design of the UI widget used for navigating the color space.
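The gist of such perceptual pickers in code (a simplified sketch; the full Okhsl/Okhsv mappings additionally remap lightness and chroma against the sRGB gamut): take a perceptual space like Oklab and navigate its a/b plane in polar hue/chroma coordinates.
Code:
// Oklab (L, a, b) <-> cylindrical LCh, the hue/chroma/lightness
// decomposition that HSL-style pickers let users navigate.
vec3 oklabToLCh(vec3 lab) {
    float C = length(lab.yz);        // chroma: distance from the gray axis
    float h = atan(lab.z, lab.y);    // hue angle in radians
    return vec3(lab.x, C, h);
}

vec3 lchToOklab(vec3 lch) {
    return vec3(lch.x, lch.y * cos(lch.z), lch.y * sin(lch.z));
}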

« Last Edit: September 23, 2021, 08:25:25 PM by gimymblert »

gimymblert
« Reply #605 on: September 23, 2021, 08:32:18 PM »

https://tech.preferred.jp/en/blog/first-release-of-pynif3d/
Quote
We are excited to announce the release of PyNIF3D – an open-source PyTorch-based library for research on neural implicit functions (NIF)-based 3D geometry representation. PyNIF3D aims to accelerate research by providing a modular design that allows for easy extension and combination of NIF-related components, as well as readily available paper implementations and dataset loaders.

The project can be found at https://github.com/pfnet/pynif3d. Please follow the installation steps described on the main page or feel free to contact us for further information.


Features
PyNIF3D provides a modular design which can be categorized into three main components: sampling, decoding and aggregation. Scene sampling refers to any method that samples pixels from an input image, rays that are cast from a 2D camera plane to a 3D environment or feature maps. Decoding refers to any NIF-based architecture that transforms the sampled data into some predictions, such as pixel values or occupancies. Aggregation refers to any method that aggregates those predictions in order to output the final values corresponding to the rendered image.

Quote
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x,y,z) and viewing direction (θ,ϕ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
https://arxiv.org/abs/2003.08934
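The "classic volume rendering techniques" in that abstract boil down to front-to-back alpha compositing along each camera ray; a minimal GLSL-style sketch of the paper's discrete quadrature, with sceneDensity and sceneRadiance as hypothetical stand-ins for the trained network:
Code:
// March one ray, querying density sigma and color c per sample, and
// composite with the accumulated transmittance T.
vec3 renderRay(vec3 ro, vec3 rd, float tNear, float tFar, int numSteps) {
    float dt = (tFar - tNear) / float(numSteps);
    vec3 color = vec3(0.0);
    float T = 1.0;                        // transmittance so far
    for (int i = 0; i < numSteps; i++) {
        vec3 p = ro + rd * (tNear + (float(i) + 0.5) * dt);
        float sigma = sceneDensity(p);    // hypothetical stand-in for the MLP
        vec3 c = sceneRadiance(p, rd);    // hypothetical view-dependent color
        float alpha = 1.0 - exp(-sigma * dt);
        color += T * alpha * c;           // sample weight = T * alpha
        T *= 1.0 - alpha;
        if (T < 0.001) break;             // early out once nearly opaque
    }
    return color;
}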




https://gandissect.csail.mit.edu/
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
Quote
Why Painting with a GAN is Interesting
A computer could draw a scene in two ways:

It could compose the scene out of objects it knows.
Or it could memorize an image and replay one just like it.
In recent years, innovative Generative Adversarial Networks (GANs, I. Goodfellow, et al, 2014) have demonstrated a remarkable ability to create nearly photorealistic images. However, it has been unknown whether these networks learn composition or if they operate purely through memorization of pixel patterns.

Our GAN Paint demo and our GAN Dissection method provide evidence that the networks have learned some aspects of composition.
Quote
One surprising finding is that the same neurons control a specific object class in a variety of contexts, even if the final appearance of the object varies widely. The same neurons can switch on the concept of a "door" even if a big stone wall requires a big heavy door facing to the left, or a little hut requires a small curtain door facing to the right.

The network also understands when it can and cannot compose objects. For example, turning on neurons for a door in the proper location of a building will add a door. But doing the same in the sky or on a tree will typically have no effect. This structure can be quantified.
« Last Edit: September 23, 2021, 08:42:13 PM by gimymblert »

gimymblert
« Reply #606 on: September 23, 2021, 09:12:41 PM »

https://twitter.com/pointinpolygon/status/1384861492252758016


https://www.shadertoy.com/view/fsXXzX
Quote
// English Lane by Jerome Liard, April 2021
// License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
// https://www.shadertoy.com/view/fsXXzX
//
// You are walking and flying through an infinite English countryside.
// Chill out and use the mouse to look around.
// A single walk->fly cycle is about 50s.
//
// Shadertoy compilation time seems to be about 15s, thanks for your patience.

// This is the start lane index. At each walk-flight cycle we switch to the next lane midair.
// You can set any reasonable integer value (negative ok too) to walk along other paths.

#define FIRST_LANE_INDEX 10.0
//#define FIRST_LANE_INDEX (-80.0+mod(iDate.x*365.+iDate.y*31.+iDate.z,160.)) // one different lane every day (no fade when day changes)

// If the reprojection is janky please press the button that resets time to zero.
//
// I wanted to make a navigable countryside with paths inspired by paintings from Richard Thorn (see his book "Down an English Lane"),
// and a little bit by Hiroshi Nagai and Tezuka Osamu's Jumping short anime (both lifelong inspirations).
//
// Creation of the initial patchwork and parameterized paths network:
//
//   - 2 perpendicular sets of regularly spaced parallel 1d lanes are used.
//   - Each 1d lane has an id. The amplitude of each 1d lane must be such that they don't cross the previous or next 1d lane.
//   - The horizontal set of parallel lanes has constant vertical center spacing.
//   - The twist: the vertical set of parallel lanes can have their spacing set more freely based on which stab we are in the horizontal set.
//     This helps generating complex branching patterns.
//   - For each set of parallel lanes we simply use its local x coordinate as a parameter (used for garden brick wall and camera).
//   - The intersections of lane stabs give us a cellular base for country patches, and for each patch we get an id, a distance to boundary, and parameterized borders.
//
// Trees and houses placement:
//
//   - Patches ids is used to decide what combination of things goes on the patch (trees, bushes, farms, walls, lawn...)
//   - There are 3 layers of cellular placement for trees, bushes, and farms.
//     - Bushes are too close to each other and must be soft blended, but a 3x3 search is a no-no, so we do a "4 or 5" neighbours search (we only consider checkerboard black cells).
//     - For farms and trees we use a randomly decimated jittered grid and actually only consider the current cell we are in, and hack marching to death to compensate.
//   - Modeling:
//     - Tree leaf volumes have a base shape made of 2 soft-blended spheres, then distorted by 2 layers of packed 3D sphere tiling to blobify the leaf volume, and then some fine noise distortion on the surface.
//       The use of densely packed sphere tiling is inspired by @Shane's Cellular Tiling https://www.shadertoy.com/view/4scXz2
//     - Farms are randomized with gable and hipped roof, chimneys and colors very vaguely inspired by pictures of Devon.
//
// Marching:
//
//   - For patches, marching uses ghost steps near patch boundaries so that we don't check neighbouring patches' objects, only the patch we are in.
//   - For trees and farms too, we force the raymarch to take ghost steps along their cell borders for x1 sdf eval.
//     - This ghost point machinery is hacky and not perfect (esp on patches boundary where we don't have clean intersections) but still helps.
//   - Because of all the cellular evals going on, to save height evals we use a Taylor expansion of the heightfield on the local neighborhood.
//   - Despite the above efforts I had to resort to reprojection, and perf still isn't great.
//     Blurring the noise with reprojection also helps hide the general noisy lameness and gives better colors.
//
// Clouds are volumetric but baked in a spheremap at first frame and assumed distant.
// Also had to turn view trace/shadow trace/scene gradient/cellular evals into loops to help compile time on the website, sometimes at the expense of runtime performance.
// As always some code, techniques, ideas from @iq, @Dave_Hoskins, @Shane, @FabriceNeyret2 are used in various places,
// this shader also uses some spherical gaussian code from Matt Pettineo
// (see comment for links to references).
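The "randomly decimated jittered grid" used above for tree and farm placement is a classic pattern; a minimal standalone GLSL sketch of the idea (not code from this shader), using Dave Hoskins-style hashing:
Code:
// Cheap 2D hash into [0,1)^2 (Dave Hoskins style).
vec2 hash22(vec2 p) {
    vec3 q = fract(vec3(p.xyx) * vec3(0.1031, 0.1030, 0.0973));
    q += dot(q, q.yzx + 33.33);
    return fract((q.xx + q.yz) * q.zy);
}

// Returns the jittered object position for the grid cell containing p,
// or a far-away sentinel when the cell is randomly decimated (kept empty).
vec2 placeInCell(vec2 p, float cellSize, float keepRatio) {
    vec2 cell = floor(p / cellSize);
    if (hash22(cell).x > keepRatio) return vec2(1e6);  // decimated: no object
    vec2 jitter = hash22(cell + 17.0) - 0.5;           // independent jitter hash
    return (cell + 0.5 + 0.8 * jitter) * cellSize;     // stay inside the cell
}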
« Last Edit: September 23, 2021, 09:31:29 PM by gimymblert »

gimymblert
« Reply #607 on: September 23, 2021, 09:51:30 PM »

https://twitter.com/MuRo_CG/status/1432884724582674433
Quote
I tried changing the perspective in the shader according to the distance to the camera. It seems you can make a good picture depending on how you use it.


https://twitter.com/EmilMeiton/status/1428458847623057408
Quote
1. Add an array of sphere center positions to the shader
2. Find the closest neighbour in the shader
3. Find the second-closest neighbour (for corners)
4. Construct normals by blending between these points
5. Soft cuddly graphics!
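A minimal GLSL sketch of those steps (the function name and uniform array are assumptions, not the original shader):
Code:
// Blend the normal at point p between the directions from its two closest
// sphere centers -- soft "cuddly" normals across a pack of spheres.
const int NUM_SPHERES = 16;
uniform vec3 uCenters[NUM_SPHERES];  // hypothetical array of center positions

vec3 cuddlyNormal(vec3 p) {
    float d1 = 1e9, d2 = 1e9;        // distances to the two closest centers
    vec3 c1 = vec3(0.0), c2 = vec3(0.0);
    for (int i = 0; i < NUM_SPHERES; i++) {
        float d = distance(p, uCenters[i]);
        if (d < d1)      { d2 = d1; c2 = c1; d1 = d; c1 = uCenters[i]; }
        else if (d < d2) { d2 = d;  c2 = uCenters[i]; }
    }
    // Weight approaches 0.5 at corners, where both centers are equidistant.
    float w = d1 / max(d1 + d2, 1e-6);
    return normalize(mix(p - c1, p - c2, w));
}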

https://twitter.com/chiba_akihito/status/1428015207440293891
Quote
1. Copy the object and, in edit mode, stretch the part where you want to emit long smoke.
2. Apply a Displace modifier and assign your favorite procedural texture. If you set the "Coordinates" field to "Object", you can control the texture transform with that object.
3. Set the blend mode to alpha clip in the material settings.
4. Use Fresnel in the material editor to control the alpha.

« Last Edit: September 23, 2021, 10:13:44 PM by gimymblert »

gimymblert
« Reply #608 on: October 23, 2021, 08:19:02 PM »




GPU-driven effects in The Last of Us Part II (video)





The technical art of The Last of Us Part II (video)

https://youtu.be/lo5VN2nOL98
Volumetric fog of The Last of Us Part II

gimymblert
« Reply #609 on: November 11, 2021, 12:06:14 PM »

https://youtube.com/channel/UC9V4KS8ggGQe_Hfeg1OQrWw
SIGGRAPH Real-Time Rendering 2021

gimymblert
« Reply #610 on: November 13, 2021, 09:30:19 AM »







Advances in Real-Time Rendering 2019 (video)


gimymblert
« Reply #611 on: November 16, 2021, 03:31:05 PM »




Good overview of hair and skin shaders in CG (video)

gimymblert
« Reply #612 on: November 30, 2021, 01:49:34 PM »

Advances in Neural Rendering (SIGGRAPH 2021 Course)

Part 1 (video)



Part 2 (video)




Least squares for programmers (SIGGRAPH 2021 course)
https://www.youtube.com/watch?v=ZDh3v8OAEIA

gimymblert
« Reply #613 on: December 27, 2021, 10:47:49 PM »

More interior mapping (cubemap and SimCity 5 single-texture methods)
https://forum.unity.com/threads/interior-mapping.424676/#post-2751518
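The core of the cubemap variant (a generic sketch of the technique, not code from that thread; the sampler and function names are assumptions): intersect the view ray with the walls of a virtual room behind the window and use the hit point as the cubemap lookup direction.
Code:
// Cubemap interior mapping, fragment-shader sketch. The room is the unit
// box [-1,1]^3 in tangent space, with the window on the z = +1 face.
uniform samplerCube uRoomCube;           // hypothetical pre-rendered room cubemap

vec3 interiorColor(vec2 uv, vec3 viewDirTS) {
    vec3 pos = vec3(uv * 2.0 - 1.0, 1.0);  // fragment position on the window face
    vec3 dir = normalize(viewDirTS);       // view direction, pointing into the room
    // Slab intersection against the box walls at +/-1 on each axis.
    vec3 invDir = 1.0 / dir;
    vec3 t = (sign(dir) - pos) * invDir;   // per-axis distance to the wall ahead
    float tMin = min(min(t.x, t.y), t.z);  // first wall the ray actually hits
    vec3 hit = pos + dir * tMin;           // point on the room's inner surface
    return texture(uRoomCube, hit).rgb;    // sample the room cubemap
}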

gimymblert
« Reply #614 on: December 31, 2021, 04:39:32 AM »

https://benedikt-bitterli.me/resources/
Quote
Rendering Resources
This page offers 32 different 3D scenes that you can use for free in your rendering research, publications and classes. They range in complexity from small test setups all the way to complex interior scenes with difficult indirect lighting. A number of hair models are also included. All scenes come with explicit licenses attached and have few restrictions: The majority allow commercial use, and many don't even require attribution.

https://devblogs.microsoft.com/directx/announcing-hlsl-2021/
Announcing HLSL 2021

gimymblert
« Reply #615 on: January 08, 2022, 06:31:11 PM »




Pixcap - Easy Animation Software - Basic Intro (video)

Includes lightweight motion capture from video.

gimymblert
« Reply #616 on: January 12, 2022, 05:19:40 PM »




Create A Game Ready Animated 3D Model In Less Than 10 Minutes -- Obscenely Easy, No Skill Required! (video)
Quote
The stars of the show are a pair of programs, VRoid Studio (free!) and DeepMotion (free tier!). The first one creates fully rigged and textured anime characters, in a process very similar to creating a character in a video game. DeepMotion is able to import VRoid projects, to which it can then add animations created by uploading a video clip of the animation you want. You can check out a text-based version of this tutorial in the link below.
Machine learning animation transfer

gimymblert
« Reply #617 on: January 29, 2022, 10:15:50 AM »

http://www.science-and-fiction.org/rendering/noise.html
From random number to texture - GLSL noise functions
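The building block those pages start from, as a quick GLSL sketch (a standard hash-based 2D value noise, not copied from the site):
Code:
// Cheap hash: vec2 -> [0,1).
float hash21(vec2 p) {
    p = fract(p * vec2(123.34, 456.21));
    p += dot(p, p + 45.32);
    return fract(p.x * p.y);
}

// 2D value noise: hash the four cell corners and blend bilinearly
// with a smoothstep fade curve.
float valueNoise(vec2 p) {
    vec2 i = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);
    float a = hash21(i);
    float b = hash21(i + vec2(1.0, 0.0));
    float c = hash21(i + vec2(0.0, 1.0));
    float d = hash21(i + vec2(1.0, 1.0));
    return mix(mix(a, b, u.x), mix(c, d, u.x), u.y);
}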

gimymblert
« Reply #618 on: January 31, 2022, 09:36:40 PM »

https://twitter.com/zozuar/status/1461524656532471811
click for shader

https://twitter.com/zozuar

gimymblert
« Reply #619 on: March 24, 2022, 03:14:43 PM »

https://github.com/NVIDIAGameWorks/RTXGI
RTX Global Illumination
