TIGSource Forums › Developer › Art (Moderator: JWK5) › game art tricks
Pages: 1 ... 29 30 [31]
Author Topic: game art tricks  (Read 123990 times)
gimymblert
Level 10

The archivest master, leader of all documents
« Reply #600 on: May 16, 2021, 05:37:27 PM »

https://joyrok.com/What-Are-SDFs-Anyway
Tech Art Chronicles: What Are SDFs Anyway?

gimymblert
« Reply #601 on: May 18, 2021, 06:24:44 PM »

Not exactly game art, but I feel this should be archived and used as reference.

https://anime-backgrounds.tumblr.com/
Anime backgrounds

http://animationbackgrounds.blogspot.com/
Animation backgrounds

bbennyk
TIGBaby
« Reply #602 on: June 17, 2021, 10:23:07 AM »

Here is an interesting breakdown of how the 2D animation in Dead Cells was achieved from low-poly 3D models. I apologise if this has been posted here already. Thank you for the fantastic resource!
https://www.gamasutra.com/view/news/313026/Art_Design_Deep_Dive_Using_a_3D_pipeline_for_2D_animation_in_Dead_Cells.php
gimymblert
« Reply #604 on: June 29, 2021, 07:33:38 AM »

[embedded video] Fluffy stylized trees tutorial, using a quadmesh-to-billboards shader in Unity

gimymblert
« Reply #604 on: Today at 08:06:21 PM »

https://bottosson.github.io/posts/colorpicker/
Okhsv and Okhsl
Two new color spaces for color picking

Quote
Despite color picking playing a big role in a lot of applications, the design of color pickers isn't a particularly well researched topic. While some variation exists in the widgets themselves, the choice of HSL or HSV is mostly taken for granted, with only a few exceptions.

Is their dominance well deserved or would it be possible to create better alternatives? I at least think that this question deserves to be explored and that color picker design should be an active research topic. With this post I hope to contribute to the exploration of what a better color picker could and should be, and hopefully inspire others to do the same!

The main focus here will be on the choice of color space, rather than the design of the UI widget used for navigating the color space.
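Okhsv and Okhsl are both built on top of the author's Oklab color space. As a companion to the quote, here is a minimal Python sketch of the sRGB → Oklab conversion; the matrices are transcribed from Ottosson's Oklab blog post, so treat the exact coefficients as an assumption to double-check against the post itself:

```python
def srgb_to_oklab(r, g, b):
    """sRGB (gamma-encoded, 0..1) -> Oklab (L, a, b).

    Coefficients transcribed from Bjorn Ottosson's Oklab post;
    Okhsv/Okhsl are defined on top of this space.
    """
    def to_linear(u):
        # undo the sRGB transfer function
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4

    r, g, b = to_linear(r), to_linear(g), to_linear(b)
    # linear sRGB -> approximate cone (LMS) responses
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # perceptual nonlinearity: cube root
    l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
    # nonlinear LMS -> lightness + two opponent axes
    L = 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_
    a = 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_
    b = 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_
    return L, a, b
```

A quick sanity check on the matrices: white should map to roughly (1, 0, 0) and black to (0, 0, 0).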

« Last Edit: Today at 08:25:25 PM by gimymblert »

gimymblert
« Reply #605 on: Today at 08:32:18 PM »

https://tech.preferred.jp/en/blog/first-release-of-pynif3d/
Quote
We are excited to announce the release of PyNIF3D – an open-source PyTorch-based library for research on neural implicit functions (NIF)-based 3D geometry representation. PyNIF3D aims to accelerate research by providing a modular design that allows for easy extension and combination of NIF-related components, as well as readily available paper implementations and dataset loaders.

The project can be found at https://github.com/pfnet/pynif3d. Please follow the installation steps described on the main page or feel free to contact us for further information.


Features
PyNIF3D provides a modular design which can be categorized into three main components: sampling, decoding and aggregation. Scene sampling refers to any method that samples pixels from an input image, rays that are cast from a 2D camera plane to a 3D environment or feature maps. Decoding refers to any NIF-based architecture that transforms the sampled data into some predictions, such as pixel values or occupancies. Aggregation refers to any method that aggregates those predictions in order to output the final values corresponding to the rendered image.

Quote
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x,y,z) and viewing direction (θ,ϕ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
https://arxiv.org/abs/2003.08934




https://gandissect.csail.mit.edu/
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
Quote
Why Painting with a GAN is Interesting
A computer could draw a scene in two ways:

It could compose the scene out of objects it knows.
Or it could memorize an image and replay one just like it.
In recent years, innovative Generative Adversarial Networks (GANs, I. Goodfellow, et al, 2014) have demonstrated a remarkable ability to create nearly photorealistic images. However, it has been unknown whether these networks learn composition or if they operate purely through memorization of pixel patterns.

Our GAN Paint demo and our GAN Dissection method provide evidence that the networks have learned some aspects of composition.
Quote
One surprising finding is that the same neurons control a specific object class in a variety of contexts, even if the final appearance of the object varies widely. The same neurons can switch on the concept of a "door" even if a big stone wall requires a big heavy door facing to the left, or a little hut requires a small curtain door facing to the right.

The network also understands when it can and cannot compose objects. For example, turning on neurons for a door in the proper location of a building will add a door. But doing the same in the sky or on a tree will typically have no effect. This structure can be quantified.
« Last Edit: Today at 08:42:13 PM by gimymblert »

gimymblert
« Reply #606 on: Today at 09:12:41 PM »

https://twitter.com/pointinpolygon/status/1384861492252758016


https://www.shadertoy.com/view/fsXXzX
Quote
// English Lane by Jerome Liard, April 2021
// License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
// https://www.shadertoy.com/view/fsXXzX
//
// You are walking and flying through an infinite English countryside.
// Chill out and use the mouse to look around.
// A single walk->fly cycle is about 50s.
//
// Shadertoy compilation time seems to be about 15s, thanks for your patience.

// This is the start lane index. At each walk-flight cycle we switch to the next lane midair.
// You can set any reasonable integer value (negative ok too) to walk along other paths.

#define FIRST_LANE_INDEX 10.0
//#define FIRST_LANE_INDEX (-80.0+mod(iDate.x*365.+iDate.y*31.+iDate.z,160.)) // one different lane every day (no fade when day changes)

// If the reprojection is janky please press the button that resets time to zero.
//
// I wanted to make a navigable countryside with paths inspired by paintings from Richard Thorn (see his book "Down an English Lane"),
// and a little bit by Hiroshi Nagai and Tezuka Osamu's Jumping short anime (both lifelong inspirations).
//
// Creation of the initial patchwork and parameterized paths network:
//
//   - 2 perpendicular sets of regularly spaced parallel 1d lanes are used.
//   - Each 1d lane has an id. The amplitude of each 1d lane must be such that they don't cross the previous or next 1d lane.
//   - The horizontal set of parallel lanes has constant vertical center spacing.
//   - The twist: the vertical set of parallel lanes can have their spacing set more freely based on which stab we are in the horizontal set.
//     This helps generating complex branching patterns.
//   - For each set of parallel lanes we simply use its local x coordinate as a parameter (used for garden brick wall and camera).
//   - The intersections of lane stabs give us a cellular base for country patches, and for each patch we get an id, a distance to boundary, and parameterized borders.
//
// Trees and houses placement:
//
//   - Patch ids are used to decide what combination of things goes on the patch (trees, bushes, farms, walls, lawn...)
//   - There are 3 layers of cellular placement for trees, bushes, and farms.
//     - Bushes are too close to each other and must be soft blended, but a 3x3 search is a no-no so we do a "4 or 5" neighbours search (we only consider checkerboard black cells).
//     - For farms and trees we use a randomly decimated jittered grid and actually only consider the current cell we are in, and hack marching to death to compensate.
//   - Modeling:
//     - Tree leaf volumes have a base shape made of 2 soft-blended spheres, then distorted by 2 layers of packed 3d sphere tiling to blobify the leaves volume, and then some fine noise distortion on the surface.
//       The use of densely packed sphere tiling is inspired by @Shane's Cellular Tiling https://www.shadertoy.com/view/4scXz2
//     - Farms are randomized with gable and hipped roofs, chimneys and colors very vaguely inspired by pictures of Devon.
//
// Marching:
//
//   - For patches, marching uses ghost steps near patch boundaries so that we don't check neighbour patches' objects, only the patch we are in.
//   - For trees and farms too, we force the raymarch to take ghost steps along their cell borders for x1 sdf eval.
//     - This ghost point machinery is hacky and not perfect (esp. on patch boundaries where we don't have clean intersections) but still helps.
//   - Because of all the cellular evals going on, to save height evals we use a Taylor expansion of the heightfield on a local neighborhood.
//   - Despite the above efforts I had to resort to reprojection, and still perf isn't great.
//     Blurring the noise with reprojection also helps hide the general noisy lameness and gives better colors.
//
// Clouds are volumetric but baked in a spheremap at first frame and assumed distant.
// Also had to turn view trace/shadow trace/scene gradient/cellular evals into loops to help compile time on the website, sometimes at the expense of runtime perf.
// As always some code, techniques, ideas from @iq, @Dave_Hoskins, @Shane, @FabriceNeyret2 are used in various places,
// this shader also uses some spherical gaussian code from Matt Pettineo
// (see comment for links to references).
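Of the placement tricks described in those comments, the "randomly decimated jittered grid" is easy to prototype outside a shader. A hedged Python sketch (the hash constants are arbitrary, not the shader's actual ones): each integer cell deterministically either contains one tree, jittered inside the cell, or nothing.

```python
def hash01(ix, iy, seed=0):
    # Cheap deterministic integer hash mapped to [0, 1).
    # Multipliers are arbitrary odd constants, not those used by the shader.
    h = (ix * 374761393 + iy * 668265263 + seed * 2246822519) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2 ** 32

def tree_in_cell(ix, iy, keep_chance=0.6):
    """Randomly decimated jittered grid: at most one tree per cell.

    Decimation: some cells are simply empty.
    Jitter: the tree sits at a hashed offset inside its own cell, so a
    marcher only ever has to test the single cell it is currently in.
    """
    if hash01(ix, iy, seed=1) > keep_chance:
        return None  # decimated cell, no tree here
    return (ix + hash01(ix, iy, seed=2), iy + hash01(ix, iy, seed=3))
```

Because placement is a pure function of the cell id, the shader can re-derive the same tree position on every frame and every ray without storing anything.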
« Last Edit: Today at 09:31:29 PM by gimymblert »

gimymblert
« Reply #607 on: Today at 09:51:30 PM »

https://twitter.com/MuRo_CG/status/1432884724582674433
Quote
I tried to change the perspective with the shader according to the distance to the camera. It seems you can make a good picture depending on how you use it.


https://twitter.com/EmilMeiton/status/1428458847623057408
Quote
1. Add an array of center positions of the spheres to the shader
2. Find the closest neighbour in the shader
3. Find the second closest neighbour (for corners)
4. Construct normals blending between these points
5. Soft cuddly graphics!
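The five steps can be prototyped on the CPU. A Python sketch, assuming inverse-distance weights for the blend in step 4 (the tweet doesn't specify the exact weighting):

```python
import math

def blended_normal(p, centers):
    """Steps 2-4: find the two closest sphere centers to point p and
    blend the outward directions, weighting the closer sphere more."""
    ranked = sorted(((math.dist(p, c), c) for c in centers), key=lambda t: t[0])
    (d1, c1), (d2, c2) = ranked[0], ranked[1]
    w1 = d2 / (d1 + d2)  # closer sphere -> larger weight
    w2 = d1 / (d1 + d2)
    n = [w1 * (p[k] - c1[k]) + w2 * (p[k] - c2[k]) for k in range(3)]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]
```

In a real shader the centers would arrive as a uniform array (step 1) and this would run per fragment; the point is that the blended normal varies smoothly where two spheres meet, which is what reads as "soft and cuddly".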

https://twitter.com/chiba_akihito/status/1428015207440293891
Quote
1. Copy the object and stretch the part where you want to emit long smoke in edit mode.
2. Apply displace modifier. Assign your favorite procedural texture. At this time, if you specify the "Coordinates" field as "Object", you can control the texture transform with the object.
3. Set the blend mode to alpha clip in the material settings.
4. Use Fresnel in the material editor to control the alpha.

« Last Edit: Today at 10:13:44 PM by gimymblert »
