Author Topic: Ouya - New Game Console?  (Read 175860 times)
Manuel Magalhães
« Reply #640 on: August 16, 2012, 01:10:22 PM »

Bob from Bob's Game should make a Kickstarter for the nD.

I'm joking btw.

richardjames13
« Reply #641 on: August 16, 2012, 03:21:25 PM »

I have registered and am still waiting for any kind of info, which has been zero in the three months since I sent them my email address. All they sent me was their Twitter address, which is full of links to videos of Streets of Rage and Sonic. Facepalm

Cry So much for that.
Manuel Magalhães
« Reply #642 on: August 16, 2012, 03:48:01 PM »

Yeah. Sad Someone with money/talent should create a new cheap, portable console with buttons, aimed at hobbyist game development and play, that for once isn't aimed at emulating games. OUYA-like hype to gather players and devs + a console like that = awesome.

J-Snake
« Reply #643 on: August 17, 2012, 05:40:21 AM »

I would be interested in making one with SNES-like controls when I have a better opportunity and more money. Maybe in 10 years.

Danmark
« Reply #644 on: August 17, 2012, 05:55:48 PM »

But I still feel you focus far too much on equating production quality and its relationship to technology with overall quality. This is a dire mistake that the industry has been guilty of for many years. It is also one of the least meaningful applications of the technology that you seem so enamored of. More than anything else, modern technological developments have been used to improve the visuals of games, while most other aspects of game design have remained unexplored.

Not at all- that's what I alluded to:

Tech dev drives the industry- just not always in the right direction IMO.


I'm not personally enamored of tech applications today, particularly in AAA games. It's all dictated by market appeal and industry culture, the latter of which is hideously broken as it stands. I'm more interested in things like AI, procedural generation, large-scale simulation, (actual) non-linearity, real-time sound synthesis...

Anyway, the emphasis on graphics can't last. Real-time graphics exploded because it was low-hanging fruit. With the advent of programmable shaders, good aesthetic sensibilities and good shaders/materials outweigh the benefits of sheer detail many times over, even (I think) in the eyes of both casual and hardcore gamers. Artist team sizes have plateaued- nobody can afford to hire more artists even if they wanted to- so proc gen will be necessary to take up the slack & maintain the march of detail in AAA projects. Thus indie & AAA visuals will converge. In any case, indies have fought the visuals war asymmetrically since they were a thing, and shaders make a great weapon. Write something once, and it profoundly affects visuals throughout your game. No mountains of content needed.
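(To make the "write something once" point concrete, here is a minimal sketch of procedural texture generation on the GPU, written as a CUDA kernel; the hash function, names, and parameters are made up for illustration and not taken from any particular engine.)

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Cheap integer hash mapped to [0,1); purely illustrative.
__device__ float hash2(int x, int y) {
    unsigned int h = (unsigned int)x * 374761393u + (unsigned int)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFFFFu) / 16777216.0f;
}

// One thread per texel: bilinearly interpolated value noise. Write it once,
// and every surface that samples the result picks up the look.
__global__ void valueNoise(float* out, int w, int h, float freq) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float fx = x * freq, fy = y * freq;
    int ix = (int)fx, iy = (int)fy;
    float tx = fx - ix, ty = fy - iy;

    float a = hash2(ix, iy),     b = hash2(ix + 1, iy);
    float c = hash2(ix, iy + 1), d = hash2(ix + 1, iy + 1);
    float top = a + (b - a) * tx;
    float bot = c + (d - c) * tx;
    out[y * w + x] = top + (bot - top) * ty;
}

int main() {
    const int w = 256, h = 256;
    float* d_tex;
    cudaMalloc(&d_tex, w * h * sizeof(float));
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    valueNoise<<<grid, block>>>(d_tex, w, h, 0.05f);
    cudaDeviceSynchronize();
    cudaFree(d_tex);
    printf("generated a %dx%d noise texture on the GPU\n", w, h);
    return 0;
}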

I'd argue that players already care mostly about gameplay selling points. The visual arts being so advanced compared to games, it's getting harder and harder for a game to distinguish itself on the merits of its visuals, however their quality was achieved. One positive development here is the proliferation of gameplay trailers. You learn a lot more about how a game plays from seeing it in motion than from studying screenshots or reading a bunch of bullet points. In this case, the sense of wonder comes not so much from the visuals as the symbolic interactions they represent. The viewer gets the full video game experience sans control (for which intersubjectivity fills in). Put another way, I've been disappointed by games I've only read about or seen screenshots of, but never by a game that looked good in trailers.

As for the selling points themselves, we're gonna see lots of novelty and experimentation, necessary to keep people buying games. We already see some, even in AAA games. It's compelled to expand though. Perhaps not back to what it was in the 80s and 90s, but 'good enough'.

tl;dr triumphalism, we'll see some cool tech, just give it time
Graham-
« Reply #645 on: August 19, 2012, 12:10:52 AM »

Proc-gen is the future.
eld
« Reply #646 on: August 19, 2012, 02:23:55 AM »

Anyway, the emphasis on graphics can't last. Real-time graphics exploded because it was low-hanging fruit. With the advent of programmable shaders, good aesthetic sensibilities and good shaders/materials outweigh the benefits of sheer detail many times over, even (I think) in the eyes of both casual and hardcore gamers. Artist team sizes have plateaued- nobody can afford to hire more artists even if they wanted to- so proc gen will be necessary to take up the slack & maintain the march of detail in AAA projects. Thus indie & AAA visuals will converge. In any case, indies have fought the visuals war asymmetrically since they were a thing, and shaders make a great weapon.

Much of the art made today is about scaling down, so we still have a great distance to go without any additional cost. First-generation games on the next generation of consoles will be handled much like fancy PC ports are now; much like how Skyrim got a high-resolution texture pack because its assets had been scaled down to fit the memory on the consoles.

And yes, the closer the GPU comes to being a general processor, the more stuff we can do with it, much of which isn't even graphical.


I think gameplay will get the most out of the next generation of consoles though: no more cheating with the scenes, no more baking stuff down, no more static environments just to get the best-looking ones.

We could have actual worlds with stuff in them, instead of everything being like a movie set where going beyond the bounds would break the illusion.

Graham-
« Reply #647 on: August 19, 2012, 03:30:31 AM »

Yeah, this is true. If quality doesn't go up it can go sideways.

The GPU can take the extra load; we can make dynamic environments. I forgot to think about this.

There's a lot of stuff you can do, like swapping shaders around, or changing lights, or generating textures, to warp the scene, even if subtly, to suit the situation. We're not used to thinking this way. We still think of scenery like movie sets. But scenes can transition.
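(As a rough illustration of "generating textures to suit the situation": a minimal sketch that re-tints a texture from a single per-frame parameter. A CUDA compute kernel stands in for a pixel shader here, and all names and numbers are invented.)

Code:
#include <cstdio>
#include <cuda_runtime.h>

// One thread per texel: blend a base texture toward a target tint by a
// single "mood" parameter the game could change every frame.
__global__ void tintTexture(const float3* base, float3* out, int n,
                            float3 tint, float mood) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    out[i].x = base[i].x + (tint.x - base[i].x) * mood;
    out[i].y = base[i].y + (tint.y - base[i].y) * mood;
    out[i].z = base[i].z + (tint.z - base[i].z) * mood;
}

int main() {
    const int n = 512 * 512;
    float3 *dBase, *dOut;
    cudaMalloc(&dBase, n * sizeof(float3));
    cudaMalloc(&dOut,  n * sizeof(float3));
    cudaMemset(dBase, 0, n * sizeof(float3));   // stands in for the scene's base texture

    float3 duskTint = {0.8f, 0.4f, 0.3f};       // "situation" changes -> new tint, one launch
    tintTexture<<<(n + 255) / 256, 256>>>(dBase, dOut, n, duskTint, 0.5f);
    cudaDeviceSynchronize();

    cudaFree(dBase);
    cudaFree(dOut);
    printf("re-tinted %d texels\n", n);
    return 0;
}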
PompiPompi
« Reply #648 on: August 19, 2012, 02:52:59 PM »

And yes, the closer the GPU comes to being a general processor, the more stuff we can do with it, much of which isn't even graphical.

I am sorry, I had to comment on this. This is just a dumb thing to say. Why do you feel like saying things about things you have no understanding of?
The GPU is efficient at parallel computation because IT'S DIFFERENT FROM A CPU.
It sucks big time at the things the CPU can do. GPUs aren't magically faster; there is a reason. They are structured differently from CPUs, but this is why they are also bad at what the CPU is good at, and the CPU is bad at what the GPU is good at.

Please STOP THE BULLSHIT, SAVE THE CHILDREN!

Edit: And if you meant GPGPU, then use the right term. I am not even sure what "general processor" means.
GPGPU already exists today, so I don't see why you are saying that IT WILL; it is already being done.
Graham-
« Reply #649 on: August 19, 2012, 03:14:28 PM »

He's saying that instead of using the GPU for things that improve the graphics in irrelevant ways, we can use it in more useful ways, if we want.
PompiPompi
« Reply #650 on: August 19, 2012, 03:22:22 PM »

Yes, it's called GPGPU, and it has been going on for a few years now. What's the point exactly?

The GPU is not a magic processor that can accelerate anything.
sodap
« Reply #651 on: August 19, 2012, 03:32:56 PM »

What's the point in using a doctor to design an engine?

GPUs are good for graphics; if you don't need that, just use more powerful CPUs.

Nix
« Reply #652 on: August 19, 2012, 04:52:45 PM »

The GPU is not a magic processor that can accelerate anything.

It can accelerate anything where parallel computation makes sense, and that encompasses a whole lot of tasks.
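(For example, a minimal, purely illustrative CUDA sketch of a non-graphical data-parallel task; the agent struct and numbers are made up.)

Code:
#include <cstdio>
#include <cuda_runtime.h>

// One thread per agent: damped velocity integration for a large crowd,
// the kind of simulation work that maps cleanly onto a GPU.
struct Agent { float x, y, vx, vy; };

__global__ void stepAgents(Agent* a, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    a[i].vx *= 0.99f;           // simple damping
    a[i].vy *= 0.99f;
    a[i].x  += a[i].vx * dt;    // integrate position
    a[i].y  += a[i].vy * dt;
}

int main() {
    const int n = 100000;       // 100k agents in a single launch
    Agent* d_agents;
    cudaMalloc(&d_agents, n * sizeof(Agent));
    cudaMemset(d_agents, 0, n * sizeof(Agent));
    stepAgents<<<(n + 255) / 256, 256>>>(d_agents, n, 0.016f);
    cudaDeviceSynchronize();
    cudaFree(d_agents);
    printf("stepped %d agents on the GPU\n", n);
    return 0;
}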
Danmark
« Reply #653 on: August 19, 2012, 05:39:00 PM »

Much of the art made today is about scaling down, so we still have a great distance to go without any additional cost. First-generation games on the next generation of consoles will be handled much like fancy PC ports are now; much like how Skyrim got a high-resolution texture pack because its assets had been scaled down to fit the memory on the consoles.

Good point.


I think gameplay will get the most out of the next generation of consoles though: no more cheating with the scenes, no more baking stuff down, no more static environments just to get the best-looking ones.

We could have actual worlds with stuff in them, instead of everything being like a movie set where going beyond the bounds would break the illusion.

This is what I'm looking forward to as well.


I am sorry, I had to comment on this. This is just a dumb thing to say. Why do you feel like saying things about things you have no understanding of?
The GPU is efficient at parallel computation because IT'S DIFFERENT FROM A CPU.
It sucks big time at the things the CPU can do. GPUs aren't magically faster; there is a reason. They are structured differently from CPUs, but this is why they are also bad at what the CPU is good at, and the CPU is bad at what the GPU is good at.

Calm down dude. GPU capabilities have been inching closer to those of CPUs for some time (not to mention an abortive coup). Hell, there was even a time when they weren't programmable at all. The sheer processing power of GPUs left CPUs in the dust around 8 years ago, and the gap's only been widening since then, in spite of CPU core counts creeping up. The top-end consumer GPU is ~20x as powerful as its CPU counterpart.

You're right that GPUs are already versatile enough for GPGPU to take off. But that actually supports the notion that they'll be designed to be more like general-purpose co-processors for massively parallel tasks, graphical or otherwise. Like this: greater versatility -> exploitation of enormous processing power in applications -> real-world GPU performance constrained by ability to run such applications -> design of GPU architectures around such applications.

I contend this is already happening. Shaders have been getting more and more complex, logical, random-accessey, etc. and this trend will come to a head with the next revolution in computer graphics: real-time global illumination.

Folks say GPUs are fundamentally limited by their pipelined nature. While true, the irony is that in order to exploit multicore CPUs & achieve nearly linear speedup against the number of cores, you need to more or less lay your own pipelines anyway. There are parallel/high-latency tasks and there are serial/low-latency tasks. GPUs will dominate the former in time, while CPUs will always be better at the latter. Biggest thing holding back GPGPU today is the software tools.


GPUs are good for graphics; if you don't need that, just use more powerful CPUs.

Nah. Cost per unit of processing power for GPUs is minuscule. A cheapo low-end GPU costing $100 is several times as powerful as a top-of-the-line CPU costing $1000. Again, the tasks that comprise graphics are getting broader, so any meaningful distinction between "good for graphics" and "good for not-graphics" is eroding.
Garthy
« Reply #654 on: August 19, 2012, 06:57:30 PM »

IMHO, and to see how far I can stretch an analogy:

The CPU and GPU comparison is like comparing a multi-tool and a screwdriver. You can do anything with a multi-tool, no sweat. However, if you want to tighten and loosen screws, you're going to be able to do it much more efficiently with a screwdriver.

It turns out that as time has gone on, people have found that the screwdriver can be used to punch holes in things, and be flipped around to whack in the occasional nail. That's fair enough. But what we're also seeing is the screwdriver becoming so precise, so efficient, and so amazingly fast, that if you can reduce your construction project to a problem which involves tightening and loosening a lot of screws, the extra setup hassle might justify the gain in total job time.

What we're seeing here is a small compartment being added to the screwdriver handle to take care of the odd remaining job. For certain jobs, namely the ones with a great number of screws that need to be tightened, the screwdriver is turning out to be vastly superior to the multi-tool, by several orders of magnitude. Sure, most jobs can be performed quite efficiently with the multi-tool, and a good number of these would plain out suck to do with the screwdriver.

The multi-tool's selling point is flexibility. It just does everything, and it does it reasonably well. However, we've also got our ultra-efficient screwdriver, which is painful to use for the majority of tasks and is generally outperformed by the multi-tool. Yet when applied to a small subset of tasks, it absolutely *annihilates* the performance of the multi-tool.

Another interesting thing that we've seen is that over time, our toolboxes have evolved to generally house just one multi-tool- toolboxes that support multiple multi-tools are disproportionately more expensive. However, we're able to easily drop in a growing number of screwdrivers. Sure, some toolboxes have an integrated screwdriver, but many will allow you to add in a few extra screwdrivers. The nice thing about this arrangement from a screwdriver-enthusiast point-of-view is that you can use the multi-tool to handle the more mundane tasks, basically everything that the screwdriver isn't good at handling, and organise the workload to play on the screwdriver's strengths.
eld
« Reply #655 on: August 19, 2012, 11:12:42 PM »

Sad Sad Sad

As I said: closer towards it.

My point was that most people associate shaders with something fancy and flashy happening on the screen every time someone writes about them, while in reality a ton of other great things happen that weren't possible before.

The actual graphics themselves should be entirely attributed to artists, if we're to be as technical as you wanted to be.

PompiPompi
« Reply #656 on: August 19, 2012, 11:57:14 PM »

Programmable shaders are useful for graphics, period.
If you are doing non-graphics stuff with programmable shaders, you don't know what you are doing.
However, if you want to do non-graphics stuff with the GPU, you can, with things like OpenCL, CUDA, DirectCompute, etc.
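(For reference, a minimal CUDA sketch of that non-graphics path: plain data-parallel arithmetic, no graphics API involved. Purely illustrative, not from any particular project.)

Code:
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Classic GPGPU "hello world": element-wise vector addition.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    float *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dc, n * sizeof(float));
    cudaMemcpy(da, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(c.data(), dc, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("c[0] = %f (expected 3.0)\n", c[0]);
    cudaFree(da);
    cudaFree(db);
    cudaFree(dc);
    return 0;
}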

I think you guys just don't have a lot of knowledge about how GPUs and CPUs work behind the scenes and just say things you don't understand.

The CPU will always be better than the GPU at certain tasks, and vice versa.
If I am not mistaken, the CPU is a lot faster when it comes to accessing memory (because of the cache), branching, and out-of-order execution.
Some algorithms will NEVER be accelerated on GPUs, and you had better do them on the CPU.
Things like iterative algorithms and algorithms that require a lot of random access to memory. And many others.

How many of you actually did GPGPU stuff on the GPU?
How many of you actually learned about GPU architecture?
eld
« Reply #657 on: August 20, 2012, 01:33:08 AM »

No one ever suggested that GPUs would ever replace CPUs. What we were talking about was the ever-expanding functionality of GPUs, when back in the day they would basically just render textured triangles for you, and having a GPU that could do the transform work in hardware was a big thing.

PompiPompi
« Reply #658 on: August 20, 2012, 01:40:25 AM »

No one ever suggested that GPUs would ever replace CPUs. What we were talking about was the ever-expanding functionality of GPUs, when back in the day they would basically just render textured triangles for you, and having a GPU that could do the transform work in hardware was a big thing.

Doing only non-graphics stuff on the GPU is even harder and even more work than doing graphics.
I am sorry, but what exactly was the point? Graphics are evil? Graphics are not important? Graphics take a lot of time?
Graham-
« Reply #659 on: August 20, 2012, 02:39:28 AM »

The point was: if you feel the extra fidelity isn't worth the cost (for example, good aesthetic design may be more important than detail), you take the extra processing power and scale up your AI, simulate a larger part of the world, or whatever.

I'm reading http://web.cs.mun.ca/~banzhaf/papers/CEC_2008.pdf. Genetic programming on the GPU in the Xbox.
