TIGSource Forums > Developer > Technical (Moderator: ThemsAllTook) > The grumpy old programmer room
Author Topic: The grumpy old programmer room  (Read 733395 times)
ferreiradaselva
« Reply #5300 on: May 23, 2017, 01:32:39 AM »

I think enet might do that already for me. But there's still a distinct point in the frame where I pull the packets from that data structure and queue new ones to send and if state updates are received at the wrong point in the frame that adds a frame of latency. 
I don't think ENet does that. You are probably just receiving the packets really fast because you are using localhost.

JWki
« Reply #5301 on: May 27, 2017, 02:07:51 AM »

Okay so I feel totally fine hijacking this thread with non-grumpy content (not exactly grumpy anyways):

I've got a pretty solid understanding of how proper client/server networking with snapshot-based replication should work. After all, it's not that hard to grasp once you understand that at any point in time, the client's perception of the world is actually a mixture of different timelines: the player sees his own actions in his "present", sees other players and potentially server-controlled entities in the past, and the server holds the ground truth, which none of the clients actually see.

Now, as I'm solidifying my understanding of this, I wanna get a good basis going to implement all of it step by step. So far I've just done the most straightforward thing, which is to split client and server into two completely separate executables with some shared code. However, I obviously want players to be able to host a game - with the current setup, having a client host a game would basically mean launching server.exe in the background on their machine. That's fine I guess, but I've looked into how other games do it, and it seems that at least Source engine games don't split client and server that way - instead, there's the regular game executable (which was hl2.exe for all Source games for a long time) and then a Client.dll and Server.dll containing game-specific client/server code. So the game actually interleaves client and server code in a single process, with the .dlls containing only game-specific handler logic afaik. Then there's Source Dedicated Server, which is equivalent to hl2.exe in the sense that it works for all Source games but obviously only cares about server code.
However, keeping it all separated actually seems cleaner to me in a way. Sure, with client and server living in the same process you get the potential performance advantage of just copying packets in memory instead of sending them halfway down and up the network stack. But on the other hand, if you keep the two separate (so there's only a dedicated client and a dedicated server), you don't need to maintain the dedicated server alongside the main executable - the dedicated server is probably just a stripped-down version of the main .exe anyways - it's just always up to date, and there are no issues with differences between client/server and dedicated-server gameplay, because they're literally the same.

Am I missing any obvious practical advantages of interleaving client and server like that?
Schrompf
C++ professional, game dev sparetime
« Reply #5302 on: May 27, 2017, 02:35:24 AM »

Am I missing any obvious practical advantages of interleaving client and server like that?

Single Player. Or non-dedicated server, if that matters to you. Player starts as SinglePlayer but allows other players to join.

Snake World, multiplayer worm eats stuff and grows DevLog
JWki
« Reply #5303 on: May 27, 2017, 02:47:16 AM »

Am I missing any obvious practical advantages of interleaving client and server like that?

Single Player. Or non-dedicated server, if that matters to you. Player starts as SinglePlayer but allows other players to join.


You could just launch both processes though when deciding to host a game or play (joinable) single player.
qMopey
« Reply #5304 on: May 28, 2017, 10:52:07 PM »

Hey JWki, figured you'd like this sort of discussion since you're working on something related: https://www.gamedev.net/topic/688975-berkeley-sockets-and-iocp/
Schrompf
« Reply #5305 on: May 28, 2017, 11:39:59 PM »

You could just launch both processes though when deciding to host a game or play (joinable) single player.

That works, too, if your data set fits into memory twice. Having server and client logic operate on the same data set also allows you to cut some corners, e.g. on physical simulations. The local player simply uses the outcome of the server side simulation and only adds the bells and whistles, the other players have to simulate the whole set and adapt to server-reported changes.

Some of it is for historical reasons, I assume. Ever noticed the in-game console log "client connected via loopback device" in the old Doom? It's *that* old. Some OSes back then lacked processes.

JWki
« Reply #5306 on: May 29, 2017, 02:21:03 AM »

You could just launch both processes though when deciding to host a game or play (joinable) single player.

That works, too, if your data set fits into memory twice. Having server and client logic operate on the same data set also allows you to cut some corners, e.g. on physical simulations. The local player simply uses the outcome of the server side simulation and only adds the bells and whistles, the other players have to simulate the whole set and adapt to server-reported changes.

Some of it is for historical reasons, I assume. Ever noticed the in-game console log "client connected via loopback device" in the old Doom? It's *that* old. Some OSes back then lacked processes.


I'd argue that cutting corners is something you specifically want to avoid to achieve a fair gameplay experience for everyone.
Anyways, I switched my application to a single process for client and server (so I'm lacking a dedicated server rn), and I'm probably going to keep going with this, because I found it made some other things simpler - cross-client-server debugging, for example. And so far, cleanly splitting the two hasn't been an issue at all.

Just to try it out, I also wrote a very hasty'n'hacky masterserver implementation in like two hours (did I mention it's really hacky?), and this actually makes me happy. But distributing a topic across like five threads every time is pretty daunting, so take this, grumpy programmers!



Important features missing: removal and update of existing server listings; atm it just appends to the server list until it runs out of memory.
Nice thing tho is that I can actually connect via the server list.
JWki
« Reply #5307 on: June 02, 2017, 12:07:02 AM »

Hey JWki, figured you'd like this sort of discussion since you're working on something related: https://www.gamedev.net/topic/688975-berkeley-sockets-and-iocp/

Completely missed that post, but I saw the discussion already. I'm not working quite that low-level rn; atm I'm using ENet for the low-level parts because it is super easy to integrate and reasonably sized as a dependency (it sits in its own .dll anyways, so I never have to recompile it). That'd still leave the issue discussed in the thread, I guess, and my solution to that atm is to have a dedicated thread per peer - I don't exactly know whether that corresponds to a socket or whether ENet reuses a single UDP socket. In the future I will probably switch to a worker-based approach too tho.
qMopey
« Reply #5308 on: June 02, 2017, 08:47:00 AM »

For now I'm using a single worker thread - pretty much the pseudocode in this post. My worker thread prioritizes pulling packets out of the socket buffer and into my own custom buffer. I do this because I'm paranoid about dropping packets by filling the UDP buffer, and also to get a really accurate timestamp on the packet receive time.

If no packets are received, the thread goes and processes all queued packets. Right now processing is decrypt, then decompress. The packet is not removed from the queue; it just sits there in a processed state. The game can remove processed packets from the queue.

So it's a circular queue buffer with 3 pointers: one for pushing unprocessed packets, one for processing packets, and one for popping packets to the game.

I just lock the entire queue on any operation for simplicity. But I imagine a lockless implementation might be pretty easy... Meh, too much work.

Pretty sure this will be efficient enough for my server purposes, and I also just throw it onto clients as well Smiley The nice thing is clients can have a larger sleep time, while the server can have a variable sleep time depending on load! Sleep time is nicely decoupled.

Anyways, even if you're using Enet, I'm still interested in hearing your thoughts about any of this net stuff. It's all new to me too so discussion helps and is fun!
gimymblert
The archivest master, leader of all documents
« Reply #5309 on: June 06, 2017, 05:53:11 PM »

For years I have been making observations about programming, and I felt I couldn't handle the whole way people architect code; it simply didn't make sense at all, especially OOP or even components. The problem being: what is responsible for what, how do you model interaction in a way that upholds the philosophy shared by these paradigms, and how do you change things without everything collapsing or becoming an intractable mess? Today I found a video that exposes the same reasoning as mine.




Or, in terms of object interaction: how does your cat bite a dog?
« Last Edit: June 06, 2017, 06:07:03 PM by gimymblert »

JWki
« Reply #5310 on: June 06, 2017, 11:33:12 PM »

For years I have been making observations about programming, and I felt I couldn't handle the whole way people architect code; it simply didn't make sense at all, especially OOP or even components. The problem being: what is responsible for what, how do you model interaction in a way that upholds the philosophy shared by these paradigms, and how do you change things without everything collapsing or becoming an intractable mess? Today I found a video that exposes the same reasoning as mine.




Or, in terms of object interaction: how does your cat bite a dog?


Yeah that video resonates with me as well.
InfiniteStateMachine
« Reply #5311 on: June 07, 2017, 04:28:27 PM »

I'm only going by the title of that video but object oriented programming isn't bad. It's just often used to solve problems it's not good at solving. Really the only truly bad programming pattern is dogma.

GUI programming would be an example of a place where object oriented programming works well.


... I can't currently think of any other examples :D

Garthy
Quack, verily
« Reply #5312 on: June 07, 2017, 05:48:37 PM »

This is a bit of a rushed post, please excuse errors. All IMHO.

I'm also guilty of just going by the video title and like count, but in my defense the video title already destroyed its credibility.

Core aspects of OO and a few reasons they are useful:

Encapsulation: Collects related concepts into a single place (or small number of places) rather than scattering them throughout code. Bugs tend to be found together and earlier rather than finding the same problem over the whole project multiple times over time. Assists greatly in testing, including white/black box, dedicated unit testing, etc. Allows damage control from interface weirdisms by working fixes into a single place rather than throughout the whole codebase. Allows global-scale changes to be made in a single place (eg. allowing normal operation but virtualising costly operations). Allows practical modular testing. Enables practical development of massive codebases where no developer exists or could ever exist who understands the entirety of the codebase.

Inheritance: Assists where you have separate entities that share a degree of common behaviour, eg. GUI widgets, game objects. Reduces code duplication and the associated cost of maintaining copy-pasted code.

Polymorphism: Allows an abstract interface to be defined with multiple switchable implementations. Useful to switch between real and simulated components, recording/playback of data, insertion of controlled errors to test recovery, insertion of timing delays, use of virtualised or dummy components. Eases development of cross-anything, ie. platform, toolkit, etc.

OO is a very useful tool to have at your disposal. Like all tools though, there will be times where it is useful, and times where it will be a hindrance.

(Simple) case in point: I have been developing a software test suite for an electronics project. It started with almost no real OO. This was fine; I didn't really need it. As the test suite grew, I added more and more styles of tests - far more were needed than I had anticipated. As this occurred, I realised that there was a lot of common functionality (processing and collecting results, reporting them, displaying them, accepting test passes) between the different types of test. For each type of test, functionality was scattered everywhere through the code. It was a mess. I started collapsing the different types of tests back into classes based on a single base class, and moved the scattered functionality into these classes. The code became considerably neater, easier to understand, and far easier to add new styles of test to. And this was just from a single class with a single level of single-inheritance subclasses and very few methods.

Personally: I use OO extensively. However, for anything small, simple, or throw-away, I tend to avoid it as it does have overhead. For something that used to be small, simple, or throw-away that ends up growing into something larger, I frequently refactor it into something OO-based, because as it gets larger, the overhead of it *not* being OO-based begins to overwhelm.

There's my braindump for the day. I hope it was useful. :}
gimymblert
« Reply #5313 on: June 07, 2017, 09:33:57 PM »

Well, the video addresses those points very thoroughly and convincingly lol; it's a very easy-to-follow video. It matches my experience at least, and gave me comfort about being bad at programming, because it always felt like putting a round peg in a square hole.

He does propose solution.

IMHO OOP happened because of the problems of the time when it was invented; the principles it raises matter, but the principles aren't the same as the dogma itself.

Personally, it has made it harder for me to think in terms of interactions, because you always need some form of coupling that is just better hidden. For a dog to bite a cat, you have to implement some sort of coupling where you put some of the state of the dog into the cat. In general you end up with a monstrous parent class to make everything interact with everything (MonoBehaviour?) that actually hides a lot of the properties you need, obfuscating the code: looking at just the cat doesn't give the full picture, and it's hard to read when you have to hunt through multiple classes in multiple files due to inheritance; let's not talk about exceptions. And most of the time you end up with inheritance like "a tiger is a kind of bigger cat", which is kind of weird. Also, it does not promote performance when you make a game, because data is always jumping around in memory.

JWki
« Reply #5314 on: June 07, 2017, 11:03:25 PM »

I'm only going by the title of that video but object oriented programming isn't bad. It's just often used to solve problems it's not good at solving. Really the only truly bad programming pattern is dogma.

GUI programming would be an example of a place where object oriented programming works well.


... I can't currently think of any other examples :D

Yes, dogma is the issue - the problem with object-oriented programming is that the ORIENTED part of it promotes dogma. Trying to shoehorn everything into a collection of objects that somehow work together usually leads to code that is harder to follow, harder to change, harder to parallelize, and harder to make faster in general, because of how fragmented everything becomes.
BUT it's all very subjective - for some people it just clicks and they're productive with it, some aren't. I'd still argue that OOP code is harder to read and understand in retrospect than linear, procedural code doing the same thing, but I'd also argue productivity is an absolute priority, so everybody should just use what makes them most productive.
Garthy
« Reply #5315 on: June 08, 2017, 02:24:26 AM »


gimymblert, it is possible that, for the programming problems you typically encounter, the non-OO approach you take is actually the best way to solve them.

I (and many others) have found OO methodology to be extremely useful. I'd definitely recommend keeping an eye on it in case it ends up being useful to you in the future.

Trying to shoehorn everything into a collection of objects that...

IMHO, forcing a particular methodology onto a problem that it doesn't really fit seems like a very poor use of a methodology. There are exceptions, of course. For example: When teaching the basic principles of OO you typically use very small codebases, and they are frequently the ones that gain the least from an OO approach.
pelle
« Reply #5316 on: June 08, 2017, 03:28:17 AM »

I like the "Java was simple" part. It is an observation I made when going back to C++ from Java. Back when I was young and less grumpy, I switched from C++ and other languages to Java because it (in version 1.1) was so small and simple and powerful for what it did. I would almost say elegant. A minimalistic little language (even if the virtual machine was already huge). You could get the entire language and library spec in a thin book and learn everything in a couple of afternoons. It made perfect sense that everyone was switching to Java. But look at what a monster Java is now. Using modern C++, and avoiding a lot of the old crap that still exists in the language for backwards compatibility, I really do not think Java has more than a marginal edge in simplicity, and still no other benefits.

Have not seen the video yet, but this needs to be said more often.

EDIT: And when I say "still no other benefits" I conveniently forget about Java Applets. I guess that was also an important part of making people want to use Java in the early years. But not terribly relevant this century.
InfiniteStateMachine
« Reply #5317 on: June 08, 2017, 05:55:58 PM »

I'm only going by the title of that video but object oriented programming isn't bad. It's just often used to solve problems it's not good at solving. Really the only truly bad programming pattern is dogma.

GUI programming would be an example of a place where object oriented programming works well.


... I can't currently think of any other examples :D

Yes, dogma is the issue - the problem with object-oriented programming is that the ORIENTED part of it promotes dogma. Trying to shoehorn everything into a collection of objects that somehow work together usually leads to code that is harder to follow, harder to change, harder to parallelize, and harder to make faster in general, because of how fragmented everything becomes.
BUT it's all very subjective - for some people it just clicks and they're productive with it, some aren't. I'd still argue that OOP code is harder to read and understand in retrospect than linear, procedural code doing the same thing, but I'd also argue productivity is an absolute priority, so everybody should just use what makes them most productive.

Yeah, I agree there. I guess the homogeneous collection of objects is kind of a bad habit that might come more naturally in an OO environment.

I realize I'm reiterating my last post, but I just want to make some specific points. In the case of the GUI example I was speaking about earlier, OOP works. UI is something that people have tried to parallelize for decades, and the general consensus is that it's not worth it: the complexity outweighs the benefits, or it just doesn't work. Also, with a base visual element there are, shockingly, a lot of common traits which bubble up through an inheritance hierarchy without conflict. This is one of the few cases where it works.

As I can't come up with any other good example offhand for OO outside of robust GUI frameworks, I wonder: if, in a different world, OO programming were considered a niche, edge-case design philosophy for specific classes of programming problems, maybe it wouldn't be suffering such a backlash.

My general programming philosophy today (I have no idea how this will change over time as I learn more Smiley ) is that separation of behavior and data is a very good thing, and OOP makes that difficult to do naturally. I suppose this might be why I tend to gravitate to C and F# for hobby projects.

I may have rambled a bit there, apologies. This kind of stuff I find interesting to dissect.

BorisTheBrave
« Reply #5318 on: June 09, 2017, 01:01:51 PM »

Does no one else feel that the challenges of game programming are such that many of the usual pros and cons of OO don't really apply?

Like, games have a lot of objects that really represent physical objects. And we have a lot more "cat bites dog"  problems  (i.e. functions that involve functionally unrelated objects) than in other domains.

I'm reminded of the Flixel framework. When I first encountered it, the idea of having objects that simultaneously represented a graphic as well as an enemy was abhorrent - it doesn't separate concerns at all. But now I realize, for games, it doesn't matter. You don't write for re-use, you write that which expresses a single thing as effectively and flexibly as possible. You could call Flixel OO if you like, but it's not objects like I'd see anywhere else.
JWki
« Reply #5319 on: June 09, 2017, 01:24:38 PM »

Does no one else feel that the challenges of game programming are such that many of the usual pros and cons of OO don't really apply?

Like, games have a lot of objects that really represent physical objects. And we have a lot more "cat bites dog"  problems  (i.e. functions that involve functionally unrelated objects) than in other domains.

I'm reminded of the Flixel framework. When I first encountered it, the idea of having objects that simultaneously represented a graphic as well as an enemy was abhorrent - it doesn't separate concerns at all. But now I realize, for games, it doesn't matter. You don't write for re-use, you write that which expresses a single thing as effectively and flexibly as possible. You could call Flixel OO if you like, but it's not objects like I'd see anywhere else.

Modeling real world physical objects with a 1:1 mapping to code objects is the main thing that turns me off oop for games.