Author Topic: Ouya - New Game Console?  (Read 175831 times)
J-Snake
« Reply #660 on: August 20, 2012, 06:09:18 AM »

How many of you actually did GPGPU stuff on the GPU?
How many of you actually learned about GPU architecture?
I took a look at OpenCL and designed a parallel algorithm for a certain geometric problem: determining cut-out shapes. The performance boost was astounding. Instead of waiting 10 seconds for the end result, it was practically instant, with no noticeable delay.

Since many of my games won't be heavy on the visual side, it would be cool to use the free resources for game logic. However, I wish to have a significant performance gain in TrapThem's physics (I haven't tried it yet):




In that case I could apply them in large metroid-sized maps.
They are really hard to design perfectly parallel, but one major part of them is marking the stones to identify isolated shapes. That part can take good advantage of parallel processing. But there is a twist to it: it is only about checking if-statements. I might check out OpenCL or something similar for that when the time is right.
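The marking step described above maps naturally onto connected-component labeling. Here is a minimal sketch of the data-parallel formulation (plain Python, not OpenCL, and not anyone's actual code): every solid cell repeatedly adopts the smallest label among its neighbors, the same tiny rule a GPU kernel would apply to all cells at once, repeated until nothing changes.

```python
def label_components(grid):
    """grid: 2D list of 0/1; returns a grid where each solid (1) cell ends up
    with the minimum flattened index of its 4-connected component."""
    h, w = len(grid), len(grid[0])
    # Seed every solid cell with a unique label (its own flattened index).
    labels = [[y * w + x if grid[y][x] else None for x in range(w)]
              for y in range(h)]
    changed = True
    while changed:  # each pass plays the role of one "kernel launch"
        changed = False
        new = [row[:] for row in labels]  # Jacobi-style: read old, write new
        for y in range(h):
            for x in range(w):
                if labels[y][x] is None:
                    continue
                best = labels[y][x]
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] is not None:
                        best = min(best, labels[ny][nx])
                if best != labels[y][x]:
                    new[y][x] = best
                    changed = True
        labels = new
    return labels

grid = [[1, 1, 0, 1],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
out = label_components(grid)
# Two shapes: the left L-shape converges to label 0, the right column to label 3.
```

Each cell's update is independent within a pass, which is what makes the formulation parallel-friendly; the cost is that labels only spread one cell per pass, so convergence takes as many passes as the longest shape is wide.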
Independent game developer with an elaborate focus on interesting gameplay, rewarding depth of play and technical quality. Trap Them: http://store.steampowered.com/app/375930
Graham-
« Reply #661 on: August 20, 2012, 08:01:20 AM »

Physics libraries should let you off-load work onto the GPU, within constraints that you set, by default.

I was wondering if I could take something like UDK/Nvidia-PhysX and do that, maybe with some hard work. I have no idea. I've never used it before. I'm just planning to.


edit:

@J Nice on the DKC soundtrack.
Richard Kain
« Reply #662 on: August 20, 2012, 08:46:17 AM »

Fiddling with technical details in order to squeeze a little bit more performance out of hardware isn't the objective of game design. The benefit of technological advancement is not having to worry about details like that.
Graham-
« Reply #663 on: August 20, 2012, 08:57:48 AM »

Oh?

The objective of game design is to make a good game. If you can double physics processing then you can have twice as many actors doing things "at standard" levels of computation. You can make the moves twice as nuanced, adding fidelity to the translation between input and execution by a factor of 2.

HD doubled the resolution of previous TVs. That started a big thing.

You can make a computer that's 3 years old run a game that's new, if it otherwise couldn't. You can push frame-rate up from 40 to 60. You can double the lights.

On and on...

We design games in such a way so that huge sections of memory, opportunities to read from disk, the processor's time, the processor's idle cores, and the GPU aren't used maximally.

If you can double computation you can off-load network usage into prediction, upgrading the reliability and speed of online play, and the number of players.

We're not talking inches here. As some people were saying, some computations on the GPU can be several orders of magnitude more efficient than on the CPU.

But it's a worthy discussion to have. Needless optimizations are always a big threat too, tempting the unwary. We must wear helmets, with lights.
Richard Kain
« Reply #664 on: August 20, 2012, 09:49:53 AM »

Quote from: Graham-: "The objective of game design is to make a good game. If you can double physics processing then you can have twice as many actors doing things 'at standard' levels of computation. [...]"

All of what you say is certainly useful from a technical perspective. But is any of that really the job of a game designer? It seems to me that such optimizations fall under the purview of software engineers and game engine developers. If a game designer is spending most of their time optimizing the back-end, they aren't actually designing games. What they are designing is game engines.

Don't get me wrong, designing solid, quality game engines is a very worthy endeavor, and the industry is much better off for having good engine developers. But it is not the same as game design. Engine development helps to empower designers, and has become a very important aspect of game development. But it cannot be equated with game design.
Logged
PompiPompi
« Reply #665 on: August 20, 2012, 10:48:31 AM »

If you don't intend to do optimizations, then don't do your own GPGPU stuff. Let someone else, or some ready-made library, do it for you.
I don't think we are going to see, in the near future, a technology that lets you just describe anything you want and have the computer automatically produce high-performance GPU code.
Performance-intensive games need optimizations; as I said, things don't get accelerated magically.
Master of all trades.
PompiPompi
« Reply #666 on: August 20, 2012, 10:51:07 AM »

Quote from: J-Snake: "How many of you actually did GPGPU stuff on the GPU? [...] They are really hard to design perfectly parallel, but one major part of them is marking the stones to identify isolated shapes. That part can take advantage of parallel processing well. But there is a twist to it. It is only about checking if-statements. I might check OpenCL or something similar out on that when the time is right."
I think a CPU can deal with this task pretty easily.
You just need to implement something similar to an A*; you can also make it parallel.
Unless you are talking about maps larger than 10000x10000 that need to be updated every frame.

Edit: this is an example of code that might be really bad for the GPU, no matter how much the GPU becomes "OMG MORE GENERAL PROCESSOR THAT CAN DO EVERYTHING!"
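The plain CPU approach suggested above can be sketched as a BFS flood fill (a hypothetical illustration, not anyone's actual code): it marks one isolated shape in O(cells) time with ordinary branching, which a CPU handles trivially unless the map is enormous and updated every frame.

```python
from collections import deque

def flood_fill(grid, start, mark):
    """Mark every 4-connected solid (1) cell reachable from start."""
    h, w = len(grid), len(grid[0])
    marks = {}
    q = deque([start])
    while q:
        y, x = q.popleft()
        # Skip already-marked cells, out-of-bounds cells, and empty cells.
        if (y, x) in marks or not (0 <= y < h and 0 <= x < w) or not grid[y][x]:
            continue
        marks[(y, x)] = mark
        q.extend(((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)))
    return marks

grid = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 1]]
shape = flood_fill(grid, (0, 0), mark=7)
# The connected L-shape has 3 cells; the lone cell at (2, 2) is untouched.
```

Running one fill per unmarked solid cell labels every isolated shape; the branch-heavy inner loop is exactly the kind of code that suits a CPU better than a GPU.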
J-Snake
« Reply #667 on: August 20, 2012, 02:52:22 PM »

Yeah, unfortunately GPUs suck at if-statements and are not really general.
They are specialized units, just less specialized than pure fixed-function pipelines.
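The if-statement problem comes from branch divergence: when lanes of a warp take different paths, the paths execute serially. A common workaround is rewriting branches as arithmetic selects. A toy illustration of the transformation in plain Python (not actual GPU code):

```python
def clamp_branchy(x, lo, hi):
    # Data-dependent branches: divergent lanes would serialize on a GPU.
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def clamp_branchless(x, lo, hi):
    # Same clamp expressed as arithmetic selects: every "lane" runs the
    # identical instruction stream regardless of the data.
    below = int(x < lo)
    above = int(x > hi)
    keep = 1 - below - above        # 1 only when lo <= x <= hi
    return below * lo + above * hi + keep * x

for x in (-5, 3, 42):
    assert clamp_branchy(x, 0, 10) == clamp_branchless(x, 0, 10)
```

Real shading languages provide select/mix/step intrinsics for exactly this pattern, which is why small divergent ifs can often be eliminated entirely.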
Graham-
« Reply #668 on: August 20, 2012, 08:13:07 PM »

Quote from: Richard Kain: "All of what you say is certainly useful from a technical perspective. But is any of that really the job of a game designer? [...] Engine development helps to empower designers, and has become a very important aspect of game development. But it cannot be equated with game design."

I don't see what difference it makes. Games are games. If I can do something that takes 40 hours and makes it a lot better then I'll do it. If I can take a different 40 to make it a little better, why would I do that instead, if I can only pick one?

"Optimizations" sell the idea short, as a word. HL2 has loading times. They are terrible, awful intrusions into the experience. They could have been removed. Bad design, bad tech. No engine can fix that problem, only the coders on the game.

I have a lot of tech experience. I know I can leverage it to compete with fidelity of modern games, then outpace them in other areas. My AIs can be richer. I can have online play that's faster, more complex, with more people. ...

Game design is not just creating rules on a page. It's taking whatever resources you have and leveraging them as best as you can to improve the player's experience. If you have artists, you should use them. If you have tech knowledge, you should use that. Any improvement is an improvement.


edit:

I could, for example, spend a lot of time designing a rich combat system. Easy to learn, forever to master; that kind of thing. That's what I'm doing, because that's one of my strong suits.

I could wait for someone to create a system then copy them. Or I could do the work myself and make the game better. Same thing with tech.

Also, good tech design, as with the HL2 example, the kind you get the most mileage out of, is intricately linked with the design itself. Certain operations run better on the GPU. If you understand that, and what sort of code works well with it, then try to make a game whose calculations fall heavily in that area, you could do some incredible things. You could make a high-end game run on a mid-range computer, increasing your player base.

The designer's job is to provide experiences to the player, through games. However he does that is totally irrelevant. All there is is the player. If you make decisions that create the best experience for him, you've won. Anytime you don't do that you've made a mistake. That which falls into the purview of designers, and that which doesn't, is game- and team-dependent.

I don't understand where the hate for tech comes from. It's one more piece of the puzzle. It's not like caring about tech marginalizes the other areas of design.

Each to his own, and all that.
« Last Edit: August 20, 2012, 08:42:59 PM by toast_trip »
Danmark
« Reply #669 on: August 20, 2012, 11:35:33 PM »

Quote: "Programmable shaders are useful for graphics, period. If you are doing non-graphics stuff with programmable shaders you don't know what you are doing. However, if you want to do non-graphics stuff with the GPU, you can with things like OpenCL, CUDA, DirectCompute, etc."

"Fragment shader" is just a synonym for "fragment program". It doesn't denote that you're literally shading pixels or texels. GPGPU predates specialized GPGPU languages, and the meat of programs written in those languages still runs in shaders. Besides, I doubt OpenCL is used for GPGPU in consumer games, since support from both ATI and NVidia is spotty, whereas HLSL version support is broader & more straightforward.


Quote: "The CPU will always be better than the GPU on certain tasks, and vice versa. If I am not mistaken, the CPU is a lot faster when it comes to accessing memory (because of the cache), branching, and out-of-order execution. Some algorithms will NEVER be accelerated on GPUs, and you'd better do them on the CPU. Things like iterative algorithms and algorithms that require a lot of random access to memory. And many others."


True. Already said something to this effect. It's also true that the GPU will take over many tasks traditionally given to the CPU, because it has so much more processing power. Even for graphics, all the hoops that must be jumped through to produce an image on the screen are getting more numerous & sophisticated; GPGPU and graphics are converging. Please read this paper on real-time GI. Generating a sparse voxel octree & tracing cones through it isn't the kind of thing we associate with GPUs, yet the technique would be infeasible on the CPU.

AFAIK progress in CPU design has been based on things like (as you say) caching, out-of-order execution, branch prediction, dispatching more instructions per cycle, that kind of thing. Clock rates haven't advanced in ages. Absent a paradigm shift, throughput is becoming far more important than speed, and CPUs will never have as much throughput as GPUs, despite GPUs not sharing in these particular innovations. Where CPUs will always have GPUs beat (excepting trivial graphics stuff) is in convenience: you can write a serial algorithm that works without too much difficulty.
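The throughput-versus-latency point can be made concrete with the tree-reduction pattern. A serial sum of N values needs N-1 dependent additions; pairwise reduction needs only ceil(log2 N) rounds, and every addition within a round is independent of the others, which is exactly what wide hardware exploits. A small Python sketch that counts the rounds (an illustration of the pattern, not GPU code):

```python
def tree_reduce_sum(values):
    """Sum by pairwise reduction, returning (total, number of rounds)."""
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        # Every pair-sum in this round is independent: on a GPU, each would
        # be handled by a separate thread in the same kernel launch.
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds

total, rounds = tree_reduce_sum(range(1, 9))   # the values 1..8
# total == 36 after 3 rounds (log2 of 8), versus 7 dependent serial additions.
```

The same tree shape underlies GPU implementations of sum, min/max, and prefix-scan primitives.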

But you can't say anything interesting by speaking on the level of algorithms, since algorithms are used to perform tasks*. If you benefit from performing the same task using different hardware running a different algorithm, you'll do so. The painful jump to multicore CPUs already required a new suite of algorithms.

Heavy processing of AI, physics, sound, and (obviously) graphics- together comprising the things that games burn serious frame time on- is all amenable to the GPU. As more things drift into the GPU's realm in practice, the GPU will slowly adopt more CPU features. To put this another way, GPU development won't stagnate after the point where they're so powerful we can do whatever we damn well please to produce images with the machines. They'll be doing lots of other things as well.


Quote from: J-Snake: "How many of you actually did GPGPU stuff on the GPU?"

Can't say I have. I plan to soon, though it seems tremendously awkward at this point (again, we need better tools & a broader grounding in knowledge). What have you done? How did the performance compare to your CPU implementation?


*there are algorithms your shitbox could run faster than the most powerful supercomputer in the world, a point neither interesting nor pertinent
Superb Joe
« Reply #670 on: August 21, 2012, 01:09:06 PM »

why aren't there any games about proctology
J-Snake
« Reply #671 on: August 21, 2012, 03:27:13 PM »

Because you don't make them
richardjames13
« Reply #672 on: August 21, 2012, 03:32:56 PM »

Quote from: Superb Joe: "why aren't there any games about proctology"

Because they are a pain in the butt to program.
moi
« Reply #673 on: August 21, 2012, 07:32:55 PM »

What about Sewer Shark?

Also: battlebeard has a game called "log dungeon"
« Last Edit: August 21, 2012, 07:44:25 PM by moi »

subsystems   subsystems   subsystems
Graham-
« Reply #674 on: August 24, 2012, 11:25:53 AM »

tl;dr the Jak series leverages tech in harmony with its design for powerful results

Quote from: Richard Kain: "But is any of that really the job of a game designer? It seems to me that such optimizations fall under the purview of software engineers and game engine developers. [...] Engine development helps to empower designers [...] But it cannot be equated with game design."

I was thinking about this, and your point, and now I think I understand what you are saying a little better, and I have some time, so I'll tell you my new thoughts.

Optimizations that are independent are definitely more outside of the designer's scope than most things. Though sometimes they have to weigh in on how valuable something might be. For example they may agree that a month of a programmer's time optimizing something may be worth the results relative to a combat feature that will get cut instead. That's a designer's call, because he can understand the benefits/costs of each choice.

But that is a weak relationship. There are however very strong relationships, all over the place. In fact, the very best optimizations, and the very best designs, often rely on a harmonious synchronicity between design and code (and all other aspects), because it's in the cooperation of those separate areas that the best results can be achieved.

I'll give you an example.

Jak 2 and Jak 3 (and the original Jak and Daxter) blew me away for a lot of reasons:
  1. The worlds were huge.
  2. The worlds were interesting.
  3. The act of exploration was neatly tied to the action.
  4. There were no loading times.
Actually, the loading times were there, but they were so well paced and hidden that I almost couldn't remember if there were any at all after a play session, until I figured it out. This is back when I was much younger. These are old games.

Ok, I'm going to show you why the handling of loading times had an enormous impact on the quality of the experience, and how their handling required deep design decisions to be made in-tandem with the code base's construction. Then I'll show you the kind of experiences that suffered as a result of not doing these things.

Jak worlds are divided into sections. In Jak 2 for example there were 3 hub areas. Each area was massive. You could travel anywhere within it without encountering a loading screen. Each hub connected to a few play zones, or mission-givers. There were loading screens in the transfer between them.

When you wanted to change sections you had to travel to the correct barrier and walk through it. Only in the rarest of circumstances would the screen go blank. Normally something would happen instead. You would see a door open and lock behind you, then another open in front, revealing the new area. In that example the narrative justification was: "There are dangerous poisons outside. No one except the hardy and well-equipped can go out there. The city is a fortress for this reason. Transferring from inside out or outside in requires a double-door system that keeps the citizens of the city safe." With this explanation the game bought itself two uses for the doors. You had to wait in between them, and you couldn't cross them until you'd received the correct permission and abilities.

The loading screens between these two areas were hidden well. You barely noticed them. And not only that, they reinforced the narrative, and often provided a break in pacing that the player very likely needed. In order to support this design, all of the missions had to be designed around it. Missions never crossed boundaries midway, except in special cases, and in those cases the breaks were used to great effect. Players also had to cross barriers relatively infrequently, so as not to be annoyed, so it was common to spend a good amount of time in one area before having to cross into another.

This kind of structure was everywhere, and each time the loading screen was unique, justified by the narrative, and woven into the entire world. Loading screens evaporated, and the feeling that you had this massive world to explore was hammered into you. You could feel it, and it felt you.

This kind of design required the following insights by the designer:
  1. If we program more robustly we can increase the size of a level a little bit.
  2. We can use this size to layer missions in single areas, making longer stays within them more enjoyable than they otherwise would be.
  3. We can build narrative constructs to justify the waits when crossing barriers, then pace our content so that such a wait is not only acceptable but valuable to the player's experience.

Every mission, level design, and the evolution of the plot had to take these rules into consideration, and none of it could have happened without understanding what boundaries could have been crossed with the technology. The design team saw a way to take small boosts in loading speed and size and turn them into an aspect of the experience that was fundamental to the quality of the game, and probably its legacy.
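The airlock trick described above generalizes: start streaming the next zone the moment the first door closes, and gate the second door on load completion. A hypothetical sketch of the pattern (names like `load_zone` and `Airlock` are illustrative, not from any real engine):

```python
import threading
import time

def load_zone(name, done_event, out):
    """Stand-in for disk reads / decompression of the next zone's assets."""
    time.sleep(0.05)
    out[name] = f"{name} loaded"
    done_event.set()

class Airlock:
    def __init__(self, next_zone):
        self.loaded = threading.Event()
        self.zones = {}
        # Door one closes: begin loading immediately, off the main thread.
        threading.Thread(target=load_zone,
                         args=(next_zone, self.loaded, self.zones),
                         daemon=True).start()

    def open_inner_door(self, timeout=2.0):
        # The door animation, ambient audio, and narrative beat play here;
        # this wait *is* the load time the player never sees as a screen.
        return self.loaded.wait(timeout)

lock = Airlock("wasteland")
assert lock.open_inner_door()        # inner door opens only once loading is done
```

The design consequence is the one described above: the transition animation must be long enough, and crossings infrequent enough, that the wait reads as narrative rather than as loading.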


This kind of pattern is everywhere in design. Understanding what can be leveraged in tech can enhance the design, and if the designers are willing to accept the resulting constraints, the synergy between both can create something far greater than what would have been if either had been developed in isolation of the other.

If I realize that I can calculate something on the GPU 10 times as fast as I can somewhere else, maybe I can redesign my game in such a way to take advantage of that, and create an experience that blasts its competitors in some particular way. They might look at my game and think, "how is that even possible?" And the answer would be, not because of our tech and its team, but our ability to cut what wasn't necessary, to take advantage of an opportunity that so many others pass over, because it doesn't provide anything immediately experience changing if used in the standard way.

In the same way that modern physics engines can give weight to certain kinds of actions that couldn't be given before, designing around what is possible can create a leap in the quality of the possible experience, without having to wait for the tech to develop; and then when it does the game gets even better, and so do you at leveraging what's there.


p.s. HL2, and so many other games, are fucking hammered by poorly placed loading times. The loads in HL2 always hit right when you're getting into a groove. For a tech-heavy company, Valve showed its weaknesses on that one. It was almost unbelievable given the power of its physics and AI at the time.


p.p.s. One of the critical reasons for Jak's success was the size of its world. It stood head and shoulders above most games in "giving you a big world to do lots of things in."
« Last Edit: August 24, 2012, 02:34:29 PM by toast_trip »
ஒழுக்கின்மை (Paul Eres)
« Reply #675 on: August 24, 2012, 12:05:18 PM »

i think like gilbert timmy, toast_trip doesn't seem to know what tl;dr means: it's supposed to contain a summary of what follows, not just a title sentence or something. such that if you read the summary you do not need to read what follows cause it contains the same information. but their tl;dr's don't contain the same information at all
Graham-
« Reply #676 on: August 24, 2012, 12:13:33 PM »

Yeah, I see what you're saying.

I prefer it this way, when I'm reading, but I can understand why other people might prefer it your way. I'll change it.
MyBeefCakes
« Reply #677 on: August 31, 2012, 08:21:53 AM »

I understand your view that you can just use your PC, since the Ouya offers nothing a PC doesn't.

But not everyone who plays and enjoys indie games fully understands how they are developed, and not everyone has a PC that can even run a basic 3D game.

Most people keep their computers until they break.

I think the OUYA will work for younger/older generations, or even casual gamers. No casual gamer wants to pay extensive amounts for a new laptop or PC when they can pay £100 to play the indie games they love.

I have a 6-year-old cousin who adores Cave Story, Minecraft, and LIMBO, but a PC isn't really an option for him; my nan also loves to play a bunch of WiiWare on her Wii.

I think this has its positives, and supporting it might not be a bad thing. But then again it's just my shitty unknown opinion :')
Meep Meep.
Masakari
« Reply #678 on: August 31, 2012, 08:41:16 AM »

OUYA doesn't have the budget to penetrate to the casual market (aka general commercial availability across the major markets in major stores).

I've said before that it's a pointless device, and I stick by that. The people who will use it the most are, curiously, the same people who need it least: hobbyists and established gamers.

IMO OUYA turned into some sort of "Indie Cred Special Bling", all the hype about it is based on the fact it's "sooooo indie, sooooo open".
Richard Kain
« Reply #679 on: August 31, 2012, 10:02:25 AM »

Quote from: Masakari: "IMO OUYA turned into some sort of 'Indie Cred Special Bling', all the hype about it is based on the fact it's 'sooooo indie, sooooo open'."

You might end up being correct. At the same time, this only makes the OUYA that much more valuable to video game collectors. The system itself, as well as whatever games and accessories get produced for it, are going to be rare by their very nature. If the OUYA never gets much farther than its initial production run, then there won't be much more than 60,000 of them in existence.

If it tanks completely, the hardware will be worth considerably more than $100 in a few years. It will be one of the more obscure systems ever released.