  Show Posts
201  Developer / Technical / Re: Procedural resource dump on: January 29, 2022, 10:15:43 AM
http://www.science-and-fiction.org/rendering/noise.html
From random number to texture - GLSL noise functions
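The article builds textures up from a hash function; as a quick illustration of the idea, here is the well-known `fract(sin(dot(p, k)) * 43758.5453)` hash and 2D value noise translated to Python (the magic constants are GLSL folklore, not taken from the article):

```python
import math

def hash01(ix: int, iy: int) -> float:
    """Lattice hash in [0, 1): the classic fract(sin(...)) GLSL idiom."""
    h = math.sin(ix * 12.9898 + iy * 78.233) * 43758.5453
    return h - math.floor(h)

def fade(t: float) -> float:
    """Smoothstep fade curve used to soften the bilinear blend."""
    return t * t * (3.0 - 2.0 * t)

def value_noise(x: float, y: float) -> float:
    """2D value noise: hash the four surrounding lattice corners and
    bilinearly interpolate between them with the fade curve."""
    ix, iy = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - ix, y - iy
    a = hash01(ix, iy)
    b = hash01(ix + 1, iy)
    c = hash01(ix, iy + 1)
    d = hash01(ix + 1, iy + 1)
    ux, uy = fade(fx), fade(fy)
    top = a + (b - a) * ux
    bottom = c + (d - c) * ux
    return top + (bottom - top) * uy
```

Summing octaves of this at doubling frequencies gives the usual fBm look.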
202  Developer / Technical / Re: Procedural resource dump on: January 15, 2022, 04:37:54 PM
Dialog System - Hyperbolica Devlog #7
203  Developer / Art / Re: game art tricks on: January 12, 2022, 05:19:40 PM
Create A Game Ready Animated 3D Model In Less Than 10 Minutes -- Obscenely Easy, No Skill Required!
Quote
The stars of the show are a pair of programs, VRoid Studio (free!) and DeepMotion (free tier!). The first one creates fully rigged and textured anime characters, in a process very similar to creating a character in a video game. DeepMotion can import VRoid projects and then add animations to them, created by uploading a video clip of the animation you want. You can check out a text-based version of this tutorial in the link below.
Machine learning animation transfer
204  Developer / Technical / Re: Procedural resource dump on: January 10, 2022, 07:58:09 AM
GraphBLAS: Building a C++ Matrix API for Graph Algorithms
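GraphBLAS recasts graph traversal as linear algebra over semirings. A dependency-free sketch of the core idea (not the library's API): BFS as repeated vector-matrix products over the boolean (OR, AND) semiring, masked by unvisited vertices.

```python
def bfs_levels(adj, source):
    """BFS expressed as repeated vector-matrix products over the boolean
    (OR, AND) semiring with an 'unvisited' mask -- the core GraphBLAS
    formulation. adj[i][j] is True when there is an edge i -> j."""
    n = len(adj)
    frontier = [i == source for i in range(n)]
    level = [-1] * n
    level[source] = 0
    depth = 0
    while any(frontier):
        depth += 1
        nxt = [False] * n
        for i in range(n):                # nxt = frontier (OR.AND) adj
            if frontier[i]:
                for j in range(n):
                    if adj[i][j] and level[j] == -1:   # unvisited mask
                        nxt[j] = True
                        level[j] = depth
        frontier = nxt
    return level

# 0 -> 1, 0 -> 2, 1 -> 2
adj = [[False, True, True],
       [False, False, True],
       [False, False, False]]
```

In the real library the two inner loops collapse into a single masked matrix-vector multiply over the chosen semiring, which is where the performance comes from.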
205  Developer / Art / Re: game art tricks on: January 08, 2022, 06:31:11 PM
Pixcap - Easy Animation Software - Basic Intro

Has lightweight motion capture from video
206  Developer / Art / Re: game art tricks on: December 31, 2021, 04:39:32 AM
https://benedikt-bitterli.me/resources/
Quote
Rendering Resources
This page offers 32 different 3D scenes that you can use for free in your rendering research, publications and classes. They range in complexity from small test setups all the way to complex interior scenes with difficult indirect lighting. A number of hair models are also included. All scenes come with explicit licenses attached and have few restrictions: The majority allow commercial use, and many don't even require attribution.

https://devblogs.microsoft.com/directx/announcing-hlsl-2021/
Announcing HLSL 2021
207  Developer / Art / Re: game art tricks on: December 27, 2021, 10:47:49 PM
More interior mapping (cubemap and SimCity 5 single-texture methods)
https://forum.unity.com/threads/interior-mapping.424676/#post-2751518
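The heart of interior mapping is a tiny ray/plane intersection done per fragment: in wall space, find which virtual floor, ceiling, or side-wall plane the eye ray hits first. A sketch of that general math (not the shader code from the thread):

```python
import math

def interior_hit(origin, direction, room_size=1.0):
    """Core of interior mapping: intersect the eye ray with the nearest
    virtual room planes spaced room_size apart on each axis, returning
    (distance, axis) of the closest hit in front of the ray."""
    best_t, best_axis = math.inf, -1
    for axis in range(3):
        d = direction[axis]
        if abs(d) < 1e-9:
            continue                      # ray parallel to this plane set
        o = origin[axis]
        # the next plane boundary along the ray direction on this axis
        plane = (math.floor(o / room_size) + (1 if d > 0 else 0)) * room_size
        t = (plane - o) / d
        if 0.0 < t < best_t:
            best_t, best_axis = t, axis
    return best_t, best_axis
```

The hit point `origin + t * direction` then indexes a room cubemap; the SimCity-style variant instead packs walls, floor, and ceiling into one atlas texture.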
208  Developer / Technical / Re: Procedural resource dump on: December 04, 2021, 06:39:56 PM
https://www.gamedeveloper.com/programming/ai-madness-using-ai-to-bring-open-city-racing-to-life
AI Madness: Using AI to Bring Open-City Racing to Life
Quote
The role of artificial intelligence is to make the behaviors of high-level entities convincing and immersive. Here, Joe Adzima discusses the autonomous architecture used by high-level entities in Midtown Madness 2 for PC and Midnight Club.

Unity ECS for mobile: Metropolis Traffic Simulation - Unite Copenhagen

How Traffic Works in Cities: Skylines | AI and Games
https://www.gamedeveloper.com/disciplines/how-traffic-works-in-cities-skylines

https://www.youtube.com/watch?v=fIV6P1W-wuo
BIG PROJECT 2-in-1! Top Down City Based Car Crime Game #2
Quote
Whew! A 2-in-1 feature length video this one, but it covers a lot to bring this project up to date. First I discuss the project structure and designing for modification. Then I implement Lua to handle external scripting and asset management. Next I completely rebuild the engine making it more modular, and implement basic city elements. Then I discuss a strategy for various types of automata, such as pedestrians, vehicles and police, and implement them, whilst obeying the rules of the road.
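The automata described above (vehicles obeying the rules of the road) boil down to simple local update rules. A toy ring-road automaton in Python, purely illustrative and not the video's Lua/engine code:

```python
def road_step(road):
    """One tick of a rule-184-style road automaton: each car (True cell)
    advances one cell when the cell ahead is currently free, otherwise
    it waits. Simultaneous update, ring road (wraps around)."""
    n = len(road)
    nxt = [False] * n
    for i in range(n):
        if road[i]:
            ahead = (i + 1) % n
            if not road[ahead]:
                nxt[ahead] = True   # free road: move forward
            else:
                nxt[i] = True       # blocked: stay put
    return nxt
```

Even this minimal rule reproduces traffic-jam shockwaves that travel backwards along the road; real pedestrian/police automata layer richer state machines on the same idea.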
209  Developer / Art / Re: game art tricks on: November 30, 2021, 01:49:34 PM
Advances in Neural Rendering (SIGGRAPH 2021 Course)

part 1

part 2

Least squares for programmers (SIGGRAPH 2021 course)
https://www.youtube.com/watch?v=ZDh3v8OAEIA
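As a tiny worked instance of the course topic, here is a line fit by the normal equations, solved by hand for the two-parameter case (an illustration, not the course's code):

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b via the normal equations.
    points is a sequence of (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx       # zero only when all x are identical
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b
```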
210  Developer / Art / Re: game art tricks on: November 16, 2021, 03:31:05 PM
Good overview of hair and skin shaders in CG
211  Developer / Art / Re: game art tricks on: November 13, 2021, 09:30:19 AM
Advances in Real-Time Rendering 2019

212  Developer / Art / Re: game art tricks on: November 11, 2021, 12:06:14 PM
https://youtube.com/channel/UC9V4KS8ggGQe_Hfeg1OQrWw
SIGGRAPH Real-Time Rendering 2021
213  Developer / Art / Re: game art tricks on: October 23, 2021, 08:19:02 PM
GPU-driven effects in The Last of Us Part II

The Technical Art of The Last of Us Part II

https://youtu.be/lo5VN2nOL98
The Volumetric Fog of The Last of Us Part II
214  Developer / Art / Re: game art tricks on: September 23, 2021, 09:51:30 PM
https://twitter.com/MuRo_CG/status/1432884724582674433
Quote
I tried changing the perspective in the shader according to the distance to the camera. It seems you can make a good picture depending on how you use it.


https://twitter.com/EmilMeiton/status/1428458847623057408
Quote
1. Add an array of sphere center positions to the shader
2. Find the closest neighbour in the shader
3. Find the second-closest neighbour (for corners)
4. Construct normals blending between these points
5. Soft cuddly graphics!
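The steps in the tweet can be sketched as follows; the exact blend-weight formula is my assumption, since the tweet gives no code:

```python
import math

def soft_normal(p, centers, smoothness=0.5):
    """Find the closest and second-closest sphere centers to surface
    point p and blend their sphere normals by how close the two
    distances are, giving soft shading across the seams."""
    by_dist = sorted((math.dist(p, c), c) for c in centers)
    (d1, c1), (d2, c2) = by_dist[0], by_dist[1]
    n1 = [(a - b) / d1 for a, b in zip(p, c1)]
    n2 = [(a - b) / d2 for a, b in zip(p, c2)]
    # w -> 1 when the closest sphere clearly wins, -> 0.5 at the seam
    w = max(0.0, min(1.0, 0.5 + (d2 - d1) / (2.0 * smoothness)))
    n = [w * u + (1.0 - w) * v for u, v in zip(n1, n2)]
    length = math.sqrt(sum(u * u for u in n))
    return [u / length for u in n]
```

In a shader this would run per fragment with the center array in a uniform buffer; `smoothness` controls how wide the cuddly transition band is.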

https://twitter.com/chiba_akihito/status/1428015207440293891
Quote
1. Copy the object and, in edit mode, stretch the part where you want to emit long smoke.
2. Apply a displace modifier and assign your favorite procedural texture. If you set the "Coordinates" field to "Object", you can control the texture transform with that object.
3. Set the blend mode to alpha clip in the material settings.
4. Use Fresnel in the material editor to control the alpha.

215  Developer / Art / Re: game art tricks on: September 23, 2021, 09:12:41 PM
https://twitter.com/pointinpolygon/status/1384861492252758016


https://www.shadertoy.com/view/fsXXzX
Quote
// English Lane by Jerome Liard, April 2021
// License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
// https://www.shadertoy.com/view/fsXXzX
//
// You are walking and flying through an infinite English countryside.
// Chill out and use the mouse to look around.
// A single walk->fly cycle is about 50s.
//
// Shadertoy compilation time seems to be about 15s, thanks for your patience.

// This is the start lane index. At each walk-flight cycle we switch to the next lane midair.
// You can set any reasonable integer value (negative ok too) to walk along other paths.

#define FIRST_LANE_INDEX 10.0
//#define FIRST_LANE_INDEX (-80.0+mod(iDate.x*365.+iDate.y*31.+iDate.z,160.)) // one different lane every day (no fade when day changes)

// If the reprojection is janky please press the button that resets time to zero.
//
// I wanted to make a navigable countryside with paths inspired by paintings from Richard Thorn (see his book "Down an English Lane"),
// and a little bit by Hiroshi Nagai and Tezuka Osamu's Jumping short anime (both life long inspirations).
//
// Creation of the initial patchwork and parameterized paths network:
//
//   - 2 perpendicular sets of regularly spaced parallel 1d lanes are used.
//   - Each 1d lane has an id. The amplitude of each 1d lane must be such that they don't cross the previous or next 1d lane.
//   - The horizontal set of parallel lanes have constant vertical center spacing.
//   - The twist: the vertical set of parallel lanes can have their spacing set more freely based on which stab we are in the horizontal set.
//     This helps generating complex branching patterns.
//   - For each set of parallel lanes we simply use its local x coordinate as a parameter (used for garden brick wall and camera).
//   - The intersections of lane stabs give us a cellular base for country patches, and for each patch we get an id, a distance to boundary, and parameterized borders.
//
// Trees and houses placement:
//
//   - Patches ids is used to decide what combination of things goes on the patch (trees, bushes, farms, walls, lawn...)
//   - There are 3 layers of cellular placement for trees, bushes, and farms.
//     - Bushes are too close to each other and must be soft blended, but 3x3 search is a no-no so we do a "4 or 5" neighbours search (we only consider checkerboard black cells).
//     - For farms and trees we use randomly decimated jittered grid and actually only consider the current cell we are in, and hack marching to death to compensate.
//   - Modeling:
//     - Tree leaf volumes have a base shape made of 2 soft-blended spheres, then distorted by 2 layers of packed 3d sphere tiling to blobify the leaves volume, and then some fine noise distortion on the surface.
//       The use of densely packed sphere tiling is inspired by @Shane's Cellular Tiling https://www.shadertoy.com/view/4scXz2
//     - Farms are randomized with gable and hipped roof, chimneys and colors very vaguely inspired by pictures of Devon.
//
// Marching:
//
//   - For patches, marching uses ghost steps nearby patch boundaries so that we don't check neighbour patches objects, only the patch we are in.
//   - For trees and farms too, we force the raymarch to take ghost steps along their cell borders for x1 sdf eval.
//     - This ghost point machinery is hacky and not perfect (esp on patches boundary where we don't have clean intersections) but still helps.
//   - Because of all the cellular evals going on, to save height evals we use Taylor expansion of the heightfield on the local neighborhood.
//   - Despite the above efforts I had to resort to reprojection, and perf still isn't great.
//     Blurring the noise with reprojection also helps hide the general noisy lameness and gives better colors.
//
// Clouds are volumetric but baked in a spheremap at first frame and assumed distant.
// Also had to turn view trace/shadow trace/scene gradient/cellular evals into loops to help compile time on the website, sometimes at the expense of runtime perfs.
// As always some code, techniques, ideas from @iq, @Dave_Hoskins, @Shane, @FabriceNeyret2 are used in various places,
// this shader also uses some spherical gaussian code from Matt Pettineo
// (see comment for links to references).
216  Developer / Art / Re: game art tricks on: September 23, 2021, 08:32:18 PM
https://tech.preferred.jp/en/blog/first-release-of-pynif3d/
Quote
We are excited to announce the release of PyNIF3D – an open-source PyTorch-based library for research on neural implicit functions (NIF)-based 3D geometry representation. PyNIF3D aims to accelerate research by providing a modular design that allows for easy extension and combination of NIF-related components, as well as readily available paper implementations and dataset loaders.

The project can be found at https://github.com/pfnet/pynif3d. Please follow the installation steps described on the main page or feel free to contact us for further information.


Features
PyNIF3D provides a modular design which can be categorized into three main components: sampling, decoding and aggregation. Scene sampling refers to any method that samples pixels from an input image, rays that are cast from a 2D camera plane to a 3D environment or feature maps. Decoding refers to any NIF-based architecture that transforms the sampled data into some predictions, such as pixel values or occupancies. Aggregation refers to any method that aggregates those predictions in order to output the final values corresponding to the rendered image.

Quote
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x,y,z) and viewing direction (θ,ϕ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
https://arxiv.org/abs/2003.08934
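The "classic volume rendering techniques" in the abstract reduce to alpha-compositing density samples along each ray. A minimal sketch of that quadrature (sample values here are illustrative, not from the paper's datasets):

```python
import math

def composite_ray(sigmas, colors, deltas):
    """The volume-rendering quadrature NeRF uses: each sample contributes
    weight T_i * (1 - exp(-sigma_i * delta_i)), where T_i is the
    transmittance accumulated from the camera up to sample i."""
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for sigma, rgb, delta in zip(sigmas, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)   # opacity of this segment
        weight = transmittance * alpha
        out = [acc + weight * c for acc, c in zip(out, rgb)]
        transmittance *= 1.0 - alpha             # light left for later samples
    return out, transmittance
```

Because every operation here is differentiable, gradients flow from rendered pixels back into whatever network predicts `sigmas` and `colors`, which is why posed images alone suffice for training.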

https://gandissect.csail.mit.edu/
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
Quote
Why Painting with a GAN is Interesting
A computer could draw a scene in two ways:

It could compose the scene out of objects it knows.
Or it could memorize an image and replay one just like it.
In recent years, innovative Generative Adversarial Networks (GANs, I. Goodfellow, et al, 2014) have demonstrated a remarkable ability to create nearly photorealistic images. However, it has been unknown whether these networks learn composition or if they operate purely through memorization of pixel patterns.

Our GAN Paint demo and our GAN Dissection method provide evidence that the networks have learned some aspects of composition.
Quote
One surprising finding is that the same neurons control a specific object class in a variety of contexts, even if the final appearance of the object varies widely. The same neurons can switch on the concept of a "door" even if a big stone wall requires a big heavy door facing to the left, or a little hut requires a small curtain door facing to the right.

The network also understands when it can and cannot compose objects. For example, turning on neurons for a door in the proper location of a building will add a door. But doing the same in the sky or on a tree will typically have no effect. This structure can be quantified.
217  Developer / Art / Re: game art tricks on: September 23, 2021, 08:06:21 PM
https://bottosson.github.io/posts/colorpicker/
Okhsv and Okhsl
Two new color spaces for color picking

Quote
Despite color picking playing a big role in a lot of applications, the design of color pickers isn’t a particularly well researched topic. While some variation exists in the widgets themselves, the choice of HSL or HSV is mostly taken for granted, with only a few exceptions.

Is their dominance well deserved or would it be possible to create better alternatives? I at least think that this question deserves to be explored and that color picker design should be an active research topic. With this post I hope to contribute to the exploration of what a better color picker could and should be, and hopefully inspire others to do the same!

The main focus here will be on the choice of color space, rather than the design of the UI widget used for navigating the color space.

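One way to see the post's complaint that HSL's lightness axis is not perceptual: two colors with identical HSL L can differ wildly in relative luminance. A sketch using Python's stdlib `colorsys` (the WCAG luminance formula is a standard choice of mine, not from the post):

```python
import colorsys

def relative_luminance(rgb):
    """Relative luminance of a gamma-encoded sRGB triple (WCAG formula)."""
    def linearize(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Identical HSL lightness (L = 0.5, S = 1), wildly different brightness:
blue = colorsys.hls_to_rgb(240 / 360, 0.5, 1.0)    # pure blue
yellow = colorsys.hls_to_rgb(60 / 360, 0.5, 1.0)   # pure yellow
```

Pure yellow comes out over ten times as luminous as pure blue despite equal HSL "lightness"; removing exactly this mismatch is what Okhsl's lightness axis is designed for.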
218  Developer / Technical / Re: Procedural resource dump on: September 22, 2021, 12:21:42 PM
Exploring AI Generated Music
219  Developer / Technical / Re: Procedural resource dump on: June 29, 2021, 07:38:07 AM
Worth it; it's one of the games that most influenced me in my quest for procedural narrative. It has timestamps so you can watch it episodically.
220  Developer / Art / Re: game art tricks on: June 29, 2021, 07:33:38 AM
Fluffy stylized trees tutorial, using a quad-mesh-to-billboards shader in Unity