Author Topic: The grumpy old programmer room  (Read 738251 times)
Maximillian
Level 0
« Reply #5560 on: December 21, 2017, 12:37:18 PM »

Quote
I agree, you're right, Minecraft is an example. I thought at first that Maximillian was very interested in the low-level side, reading about his implementation in QBasic.

I am interested, but not to such an extent that I stubbornly oppose any form of higher-level programming. I'm a huge fan of functional programming, for example, and if I could I'd write all of my programs in a functional or otherwise declarative manner.

The reason I am interested in QBasic is that it's the programming language I started with, on my 386, without any kind of help. There was no documentation, no manual, no tutorials, no books, and no one who could help me. I had to learn everything on my own, entirely through trial and error. One of the things I never figured out is how to draw things on the screen, which is why I am returning to it.

I am also into assembly. I'd love to write a simple game for the NES, or some other old-school system, for no other reason than to challenge myself.

Quote
Mmm, IMHO I am not even sure I would consider that HTML5 + JS example an engine... When I think about a game engine from scratch, I think about all or most of the implementation talking directly to the hardware through a library: rendering talking directly to the GPU using DirectX/OpenGL, input using XInput, etc. If you are making a game in HTML + JS, you have to use a library that wraps some framework, which is essentially the same as using an existing architecture (aka a game engine).

It's a relative thing, isn't it? Someone else would say that DirectX and OpenGL do not allow you to talk directly to the GPU; if you want to talk directly to the GPU, they will say, you'd have to talk to its registers. Not to mention that using GPUs you didn't create yourself is already a form of dependency: you're taking advantage of hardware created by other people. If you did not build the electronic circuits you are using, and especially if you don't know how electronic circuits work, then you're still dependent. Furthermore, if it is true that modern GPUs perform all sorts of graphical computations on your behalf, then the dependency is even greater. I do not know the details of how GPUs work, but I believe they do many things on their own, such as vertex processing, culling, clipping, rasterization, shading and so on. Basically, I think that most of the conversion from 3D space to 2D space is done by the GPU. If that's true, then you're dependent to a considerable degree.

Though I am not creating anything physical, and though I am not interacting with ready-made graphics processing units through APIs such as DirectX and OpenGL, I am still doing all of the conversion from 3D space to 2D space on my own. I do all of the computations myself, and I make sure that I thoroughly understand all of the mathematical functions I use in the process; in other words, I make sure that I could come up with these functions on my own. I think that's still some kind of "doing things from scratch".
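To give a concrete flavour of what that 3D-to-2D conversion boils down to, here is a minimal sketch (my own toy illustration, with made-up helper names, not anyone's production code) that perspective-projects one camera-space point onto a pixel grid using nothing but similar triangles:

Code:
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* Perspective-project a camera-space point (z > 0, camera looking
   down +z) onto a width*height pixel grid. d is the distance from
   the eye to the projection plane, which spans [-1, 1] in x and y. */
static void project(Vec3 p, float d, int width, int height,
                    float *sx, float *sy)
{
    float plane_x = d * p.x / p.z;                 /* similar triangles */
    float plane_y = d * p.y / p.z;
    *sx = (plane_x + 1.0f) * 0.5f * (float)width;  /* [-1,1] -> pixels */
    *sy = (1.0f - plane_y) * 0.5f * (float)height; /* pixel y grows down */
}

int main(void)
{
    Vec3 p = { 1.0f, 0.5f, 4.0f };
    float sx, sy;
    project(p, 1.0f, 640, 480, &sx, &sy);
    printf("projected to pixel (%.1f, %.1f)\n", sx, sy);
    return 0;
}

Everything else (matrices, clipping, rasterization) is elaboration on top of that divide-by-z.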
« Last Edit: December 21, 2017, 12:46:27 PM by Maximillian » Logged
Garthy
Level 9
Quack, verily
« Reply #5561 on: December 21, 2017, 02:29:31 PM »

Quote
[citation needed]. Stuff might be less concrete, but on the other hand there aren't a billion interacting states, nor a dozen ways of doing a given thing (which was pretty nasty in GL2), and fewer gotchas that follow from arbitrary design decisions. If you know linear algebra and anything about the typical graphics pipeline, they might be easier to get into.

Quote
Right, I meant from the perspective of someone who doesn't know those things already. I'm mostly thinking of the difference between setting up a VBO and a couple of shaders, and drawing stuff with immediate mode and the fixed-function pipeline. The new system is definitely better once you get a foothold, but for a new programmer trying to make that initial step to just get a triangle to draw, it's a much more complicated process with way more points of failure. Using some sort of pre-written template or tutorial could bridge the gap.

Just to add to this, having worked with both the FFP and modern GL: the FFP is far easier to start out with. Your first plain triangle, lit triangle, textured triangle, and first models are much easier to create with the FFP; it is very clear and straightforward. With modern GL you have to research how to glue all of these things together using shaders and generic constructs. The result will be code cobbled together from a variety of sources, of variable quality, and at this stage you won't know which code is good and which is poor. Once you've got that up and running you can start tweaking, and hope to gain a better understanding of what you just accomplished.
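To make that concrete, here is roughly what a complete FFP "first triangle" looks like (a sketch using GLFW for the window; something like cc tri.c -lglfw -lGL to build, error handling kept to a minimum):

Code:
/* Immediate-mode triangle: no buffers, no shaders, no setup beyond
   a window and a context. */
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit()) return 1;
    GLFWwindow *win = glfwCreateWindow(640, 480, "FFP triangle", NULL, NULL);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);            /* one call per vertex attribute */
        glColor3f(1, 0, 0); glVertex2f(-0.6f, -0.5f);
        glColor3f(0, 1, 0); glVertex2f( 0.6f, -0.5f);
        glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.6f);
        glEnd();
        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}

The modern-GL equivalent of this needs a VBO, a VAO, two shaders, and a compile-and-link step before a single pixel appears.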

The FFP starts to lose ground when you're working with models that aren't driven by blocks of primitives between glBegin and glEnd. When you start adding effects much more complex than alpha blending, the struggle begins. You have a number of tools that you can bash into shape with some effort and experimentation, and eventually, with some compromises, you'll get there. It won't render the same on every card, though, especially not across vendors. As your needs grow, the difficulty climbs steeply; everything must be bent into shape to conform to the fixed model. The training wheels you once knew are now heavy shackles. Around this point modern GL screams past the FFP with a whoosh, leaving it in the dust. The generic, shader-heavy focus of modern GL absolutely shines at this point.

If there is one thing the modern GL transition got really wrong, it was creating a barrier to entry that was not there before. The end result is far superior to the FFP once you are going, but it became much harder to get to that point. This could have been avoided by formally specifying *exactly* how to replicate parts of the FFP, with specific examples, giving a definitive source on how to do it right. Personally, I would have gone with specific examples from a first triangle up to rendering two textures on two triangles with two simple lights *at* *the* *same* *time*. As formal, authoritative examples, you'd know that basing your code on them was the right way to begin, since each vendor would have had input into their creation and would be expected to support them fully.
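For illustration, the sort of formally blessed starting point I have in mind would begin with a shader pair like this (my own sketch, assuming a 3.3 core context and an extension loader such as glad are already set up; compile/link status checks omitted for brevity):

Code:
/* Minimal shaders replicating the basic FFP path: transform by one
   matrix, interpolate a per-vertex colour. */
static const char *vs_src =
    "#version 330 core\n"
    "uniform mat4 mvp;\n"
    "layout(location = 0) in vec3 pos;\n"
    "layout(location = 1) in vec3 col;\n"
    "out vec3 vcol;\n"
    "void main() { vcol = col; gl_Position = mvp * vec4(pos, 1.0); }\n";

static const char *fs_src =
    "#version 330 core\n"
    "in vec3 vcol;\n"
    "out vec4 frag;\n"
    "void main() { frag = vec4(vcol, 1.0); }\n";

static GLuint make_program(void)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);              /* check GL_COMPILE_STATUS in real code */
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);              /* check GL_LINK_STATUS in real code */
    glDeleteShader(vs);
    glDeleteShader(fs);
    return prog;
}

Had examples like this shipped with the spec, "my screen is black" threads would be half as long.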

All IMHO.
Logged
Garthy
Level 9
Quack, verily
« Reply #5562 on: December 21, 2017, 02:56:28 PM »


Quote
I am also into assembly. I'd love to write a simple game for the NES, or some other old-school system, for no other reason than to challenge myself.

Check out the Commodore 64. A phenomenal number of games were created for it, and some people still write and sell games for it, even though it is over thirty years old. There are multiple emulators out there, so you can get started quickly. If you want to get into it seriously, track down the "Commodore 64 Programmer's Reference Guide". I still have my copy, with faded, yellowing pages.

https://en.wikipedia.org/wiki/Commodore_64

Too high-level? Check out the Atari 2600, and see what was possible on a device with 128 bytes of RAM and a maximum cartridge size of 32 kB. Emulators everywhere.

https://en.wikipedia.org/wiki/Atari_2600

Have fun.


Logged
ProgramGamer
Administrator, Level 10
aka Mireille
« Reply #5563 on: December 21, 2017, 03:09:25 PM »

I recently had to research assembly for the NES for college; it's actually not too hard to get something basic working. You do have to download an assembler that you use through the command line, though. The NESDEV forum is a community dedicated to creating NES games that is still active to this day, if you want to check it out.
Logged

Garthy
Level 9
Quack, verily
« Reply #5564 on: December 21, 2017, 03:20:44 PM »

Quote
I recently had to research assembly for the NES for college; it's actually not too hard to get something basic working. You do have to download an assembler that you use through the command line, though. The NESDEV forum is a community dedicated to creating NES games that is still active to this day, if you want to check it out.

I've never written anything for the NES myself and know little about its architecture. What was your experience with it like? Given the parent company, are there any hassles with hardware protection that you have to worry about as a hobbyist?
Logged
ProgramGamer
Administrator, Level 10
aka Mireille
« Reply #5565 on: December 21, 2017, 03:50:16 PM »

The thing is that I used an emulator, so I never ran into any of that.
Logged

HDSanctum
Guest
« Reply #5566 on: December 21, 2017, 10:33:03 PM »

Quote
It's a relative thing, isn't it? [...] I think that's still some kind of "doing things from scratch".

I agree, it is relative.

Just give it a go and start on it. You'll figure out pretty quickly whether or not it's the path you want to take. If not, there are plenty of great engines and frameworks to use, or at least to learn from so you can improve on them later. It's not that difficult to get something working if you make use of the abundant learning resources and libraries out there. Of course it'll take a bit more time than using an existing solution, but you'll learn more from it.
Logged
Ordnas
Level 10
« Reply #5567 on: December 22, 2017, 01:16:14 AM »

Quote
It's a relative thing, isn't it? [...] I think that's still some kind of "doing things from scratch".

I don't know. It is relative to the individual, but the majority might see it another way. Probably, if you say in a job interview that you created a 3D engine from scratch, they assume you created it using DirectX/OpenGL.  Undecided
Logged


JWki
Level 4
« Reply #5568 on: December 22, 2017, 03:10:46 AM »

Quote
It's a relative thing, isn't it? [...] I think that's still some kind of "doing things from scratch".

Quote
I don't know. It is relative to the individual, but the majority might see it another way. Probably, if you say in a job interview that you created a 3D engine from scratch, they assume you created it using DirectX/OpenGL.  Undecided

IDK why people put so much emphasis on DX/OpenGL when it comes to engines; there are so many systems other than gfx, like runtime object models, serialization, audio, asset pipelines, gameplay foundations, etc.
It's really evident when looking at open-source "game engines": 80 percent of them are basically just rendering engines with a rudimentary entity system tacked on.
Logged
powly
Level 4
« Reply #5569 on: December 22, 2017, 03:38:56 AM »

Quote
[citation needed]. Stuff might be less concrete, but on the other hand there aren't a billion interacting states [...]

Quote
Right, I meant from the perspective of someone who doesn't know those things already. [...] Using some sort of pre-written template or tutorial could bridge the gap.

True that. Modern GL prolongs the initial “black screen” problem, where you write more and more code to get something visible and have no idea where the problem lies. Then again, after you become familiar with it you get an idea of what might be wrong and find better debugging tools, but there is definitely a problem with bootstrapping that knowledge.
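On the debugging-tools point, the one feature I'd push on every beginner is the debug callback, which turns the silent black screen into readable driver messages. A sketch, assuming a 4.3+ context (it also exists as the KHR_debug extension) and loaded GL function pointers; enable_gl_debugging and on_gl_message are my own names:

Code:
#include <stdio.h>

static void GLAPIENTRY on_gl_message(GLenum source, GLenum type, GLuint id,
                                     GLenum severity, GLsizei length,
                                     const GLchar *message, const void *user)
{
    (void)source; (void)id; (void)severity; (void)length; (void)user;
    fprintf(stderr, "GL %s: %s\n",
            type == GL_DEBUG_TYPE_ERROR ? "ERROR" : "debug", message);
}

void enable_gl_debugging(void)   /* call once, right after context creation */
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); /* fire inside the offending call */
    glDebugMessageCallback(on_gl_message, NULL);
}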

Garthy: mmm, good point; there are a bunch of tutorials from different perspectives, but an ARB-made guideline might have been a good idea. Then again, replicating the fixed-function pipeline is maybe not that interesting, dunno. The fixed function is nowadays only good for going flat-shaded or for that late-90s/early-2000s aesthetic.

This kind of makes me want to write a GL tutorial myself, on how it should really be used, in my opinion. Who, Me?
Logged
JWki
Level 4
« Reply #5570 on: December 22, 2017, 04:41:03 AM »

How should it really be used in your opinion? You made me curious now.
Logged
powly
Level 4
« Reply #5571 on: December 22, 2017, 06:53:21 AM »

Quote
How should it really be used in your opinion? You made me curious now.
Core 4.5, everything bindless, no VBOs or global uniforms, automatic binds by variable names (not a single “layout location”), and doing almost everything (everything that doesn’t strictly require the rasterizer) in compute shaders. And with the debugging features on, preferably with a GLSL printf. I’m considering going for bindless textures too, now that AMD also has support, but we’ll see.

It really makes it nice to write: minimal boring setup stuff and maximal cool shader stuff.

Note that this will require a non-ancient GPU and a non-hip computer (Apple doesn’t care about gfx) and is thus often frowned upon.
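If that sounds abstract, the flavour is roughly this (my own sketch of the idea, not my actual code; assumes a GL 4.5 context and set_mat4/vs_pull are made-up names): uniforms are located by variable name instead of hard-coded layout locations, and vertex data is pulled from an SSBO with gl_VertexID instead of going through VBO attribute setup.

Code:
/* Set a mat4 uniform by name, so shaders never need layout(location=...). */
static void set_mat4(GLuint prog, const char *name, const float *m)
{
    GLint loc = glGetUniformLocation(prog, name);   /* lookup by name */
    if (loc >= 0)
        glProgramUniformMatrix4fv(prog, loc, 1, GL_FALSE, m);
}

/* Vertex pulling: no VBO or glVertexAttribPointer anywhere; the vertex
   shader indexes a shader storage buffer directly. */
static const char *vs_pull =
    "#version 450\n"
    "layout(std430, binding = 0) buffer Verts { vec4 verts[]; };\n"
    "uniform mat4 mvp;\n"
    "void main() { gl_Position = mvp * verts[gl_VertexID]; }\n";

At draw time it's just glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo) and a plain glDrawArrays, plus one empty VAO bound to keep the core profile happy.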
Logged
JWki
Level 4
« Reply #5572 on: December 22, 2017, 09:14:15 AM »

Quote
Core 4.5, everything bindless, no VBOs or global uniforms, automatic binds by variable names (not a single “layout location”), and doing almost everything in compute shaders. [...]

I like that direction. However, "Apple not caring about gfx" isn't really a position you can hold, given that Metal is probably the best API out there right now.
Logged
powly
Level 4
« Reply #5573 on: December 22, 2017, 09:51:42 AM »

Quote
I like that direction. However, "Apple not caring about gfx" isn't really a position you can hold, given that Metal is probably the best API out there right now.
But they tend to supply only older hardware, and driver updates are scarce, or at least that’s my understanding of the situation. No idea about Metal, is it really that nice?

Hm, I’ll think about writing some examples and explanations; maybe it’d be worthwhile to help people see the light!
Logged
Maximillian
Level 0
« Reply #5574 on: December 22, 2017, 11:23:36 AM »

Quote
I don't know. It is relative to the individual, but the majority might see it another way. Probably, if you say in a job interview that you created a 3D engine from scratch, they assume you created it using DirectX/OpenGL.  Undecided

Probably. But here's the thing: if you want to create a GPU on your own, you must first understand the process of conversion from 3D space to 2D space. You need to understand how to transform a set of 3D models into a framebuffer that can be displayed on the screen; you need to understand the graphics pipeline so well that you could program it yourself. And that's what I am trying to do. Without that sort of knowledge you can't create your own GPU; you can use other people's GPUs, and even then only indirectly, through APIs such as DirectX and OpenGL. Truth be told, I don't know how to create a GPU either, but at least I know, or want to know, how to convert a 3D space to a 2D space, and that's a step forward. With that knowledge, all that remains is to understand enough about hardware to create a GPU. (I guess doing it completely on my own is out of the question, since it would probably take too much time, maybe more than a lifetime, but I suppose I could hire companies that specialize in that sort of thing to create a piece of hardware according to my specification.) Interestingly, I don't have to understand anything about DirectX and/or OpenGL; on the other hand, if all you know is how to use these APIs, then you're too many steps away from creating your own GPU. All in all, though my approach is abstract, it is only so in order to manage complexity, i.e. to break complex tasks into the simplest possible sub-tasks so that I can solve them sequentially, one by one, rather than all at once. The first thing to do, I guess, is to understand the mathematics behind 3D-to-2D rendering; only once you understand that is it wise to proceed to learn about hardware.
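For what it's worth, the mathematical heart of that pipeline is small enough to show in full. A sketch of the standard chain (projection multiply, perspective divide, viewport transform), with my own helper names and a GL-style column-major layout:

Code:
#include <math.h>
#include <stdio.h>

typedef struct { float m[16]; } Mat4;            /* column-major */
typedef struct { float x, y, z, w; } Vec4;

static Vec4 mat4_mul_vec4(Mat4 a, Vec4 v)
{
    return (Vec4){
        a.m[0]*v.x + a.m[4]*v.y + a.m[8]*v.z  + a.m[12]*v.w,
        a.m[1]*v.x + a.m[5]*v.y + a.m[9]*v.z  + a.m[13]*v.w,
        a.m[2]*v.x + a.m[6]*v.y + a.m[10]*v.z + a.m[14]*v.w,
        a.m[3]*v.x + a.m[7]*v.y + a.m[11]*v.z + a.m[15]*v.w,
    };
}

/* Right-handed perspective projection, camera looking down -z. */
static Mat4 perspective(float fovy, float aspect, float near, float far)
{
    float f = 1.0f / tanf(fovy * 0.5f);
    Mat4 p = {{0}};
    p.m[0]  = f / aspect;
    p.m[5]  = f;
    p.m[10] = (far + near) / (near - far);
    p.m[11] = -1.0f;                    /* copies -z_eye into clip w */
    p.m[14] = 2.0f * far * near / (near - far);
    return p;
}

int main(void)
{
    Mat4 proj = perspective(1.0f, 640.0f / 480.0f, 0.1f, 100.0f);
    Vec4 eye  = { 1.0f, 0.5f, -4.0f, 1.0f };    /* point in camera space */

    Vec4 clip = mat4_mul_vec4(proj, eye);       /* -> clip space */
    float ndc_x = clip.x / clip.w;              /* perspective divide */
    float ndc_y = clip.y / clip.w;              /* -> normalized device coords */

    float sx = (ndc_x + 1.0f) * 0.5f * 640.0f;  /* viewport transform */
    float sy = (1.0f - ndc_y) * 0.5f * 480.0f;  /* -> pixels, y down */
    printf("pixel (%.1f, %.1f)\n", sx, sy);
    return 0;
}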
Logged
ferreiradaselva
Level 3
« Reply #5575 on: December 22, 2017, 12:31:27 PM »

Quote
Probably. But here's the thing: if you want to create a GPU on your own, you must first understand the process of conversion from 3D space to 2D space. [...]

Don't you mean a renderer? A lot of the stuff you said falls in the "renderer implementation" category, not necessarily related to the GPU, which the manufacturers provide a driver with their own implementation for. Like this one, https://github.com/ssloy/tinyrenderer, which is unrelated to the GPU and uses only software rendering.
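The core trick a software renderer like that is built on is tiny. A sketch of half-space (edge-function) rasterization, essentially the same coverage test GPUs implement in hardware, printing a filled triangle as ASCII (my own toy example):

Code:
#include <stdio.h>

/* Signed area of the parallelogram spanned by (a->b) and (a->p).
   With these vertices ordered so the inside is positive, a point is
   inside the triangle iff all three edge functions are >= 0. */
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

int main(void)
{
    /* triangle in pixel coordinates */
    float x0 = 2, y0 = 1, x1 = 17, y1 = 4, x2 = 8, y2 = 11;
    for (int y = 0; y < 12; y++) {
        for (int x = 0; x < 20; x++) {
            float px = x + 0.5f, py = y + 0.5f;  /* sample pixel centers */
            int inside = edge(x0, y0, x1, y1, px, py) >= 0 &&
                         edge(x1, y1, x2, y2, px, py) >= 0 &&
                         edge(x2, y2, x0, y0, px, py) >= 0;
            putchar(inside ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}

The same edge values, normalized, give you the barycentric weights used to interpolate depth, colour, and texture coordinates.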
Logged

Maximillian
Level 0
« Reply #5576 on: December 22, 2017, 12:55:09 PM »

I don't know; I am not familiar with how GPUs work. I assume that the way they work is that you give them a 3D scene, which they then convert into a 2D image to be displayed on the screen. I understand that there are programmable aspects, such as shaders, which means the process isn't completely fixed. Still, the way I understand it, most of the process is handled by the GPU and not by some sort of software.
Logged
powly
Level 4
« Reply #5577 on: December 22, 2017, 01:48:39 PM »

Quote
I don't know; I am not familiar with how GPUs work. [...] Still, the way I understand it, most of the process is handled by the GPU and not by some sort of software.
That’s the point. GPUs can run arbitrary software nowadays: you can launch a set of threads that can do any arithmetic and have random read/write access to main memory, and there are no longer hard limitations on control flow, etc. The fixed-function parts still present do rasterization, specific texture filtering and tessellation (subdividing geometry on the fly), but you don’t have to use any of them if you don’t want to.

So yes, you give them a 3D scene, but you define the format and do all the necessary transforms, shading, etc. manually; the part that’s given is determining which part of the geometry is visible under each fragment. And that just makes sense, since even the fastest software rasterizers (running on the GPU) are significantly slower than the fixed-function hardware.
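A minimal taste of that "arbitrary software" side (a sketch assuming a 4.3+ context, with status checks omitted; run_double and the ssbo holding 1024 floats are my own hypothetical names): a compute shader that doubles every element of a buffer, no rasterizer involved.

Code:
static const char *cs_src =
    "#version 430\n"
    "layout(local_size_x = 64) in;\n"
    "layout(std430, binding = 0) buffer Data { float v[]; };\n"
    "void main() { v[gl_GlobalInvocationID.x] *= 2.0; }\n";

void run_double(GLuint ssbo)
{
    GLuint cs = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(cs, 1, &cs_src, NULL);
    glCompileShader(cs);                /* check GL_COMPILE_STATUS in real code */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, cs);
    glLinkProgram(prog);
    glDeleteShader(cs);

    glUseProgram(prog);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
    glDispatchCompute(1024 / 64, 1, 1); /* 1024 elements, 64 per workgroup */
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
}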
Logged
JWki
Level 4
« Reply #5578 on: December 23, 2017, 01:34:31 AM »

Quote
But they tend to supply only older hardware, and driver updates are scarce, or at least that’s my understanding of the situation. No idea about Metal, is it really that nice? [...]

Yeah, they don't care about OpenGL, all right, but tbh I can understand that. They probably had Metal in the pipeline for a while already, which would explain their neglect of OpenGL.
And tbh I haven't used Metal yet, because I don't own an Apple device, but I've been told it sits in a nice spot between the complexity of Vulkan / D3D12 and the usability of D3D11, so it more or less combines the best of both extremes.
Logged
oahda
Level 10
« Reply #5579 on: December 23, 2017, 02:37:46 AM »

Is there a C/C++ API for Metal or is it all Objective-C/Swift?
Logged
