Author Topic: The grumpy old programmer room  (Read 738695 times)
InfiniteStateMachine
« Reply #5780 on: August 26, 2019, 05:18:38 PM »

> What about using an open source engine?

> They all use big dependencies, and it’s mostly back to square one for me at that point.

Ah gotcha.

Schrompf
C++ professional, game dev sparetime
« Reply #5781 on: August 26, 2019, 10:56:22 PM »

My job system is now up to the point where I can trivially distribute interdependent jobs to all cores. I've been using this to simulate a few tens of thousands of particles being pushed around by entities to give the impression of muddy water.

I disabled the fancy job stuff yesterday and let everything run in sequence on the main thread to track down a strange crash on someone's machine. And it got faster. Now I'm grumpy. I haven't had time to profile yet, but there's surely something to be learned from this.

Snake World, multiplayer worm eats stuff and grows DevLog
qMopey
« Reply #5782 on: August 27, 2019, 12:06:21 AM »

Hopefully all you learn is that race conditions are fast! Edit: oh, I misunderstood. That does sound very odd. Do post details after you investigate the speedup. My best guess is you have false sharing going on.
Schrompf
« Reply #5783 on: August 27, 2019, 10:53:06 PM »

No false sharing; this isn't my first time doing this. The source data is read-only, writes happen in a tightly confined area local to each job, and the computations are really simple.

Because Intel throttled the VTune download after their registration form practically begged to be fed nonsense, I didn't profile properly yesterday. But I varied the parameters a bit, and it turned out that my jobs were too small.

           16x16    32x32
serial     0.4ms    1.0ms
parallel   0.7ms    0.7ms

(A nice simple table this forum has.) Parallel is 8 threads on a Core i7, but you probably know that 4 of them are children's cores with roughly 30% of the power of a real core. It's only a rough estimate which was NOT properly benchmarked, but I think it's still obvious that the job size was too small. I always underestimate today's hardware when it comes to powering through a linear block of math. The overhead of setting up and distributing a few hundred fibers was way larger than what the parallel execution gained.

There's one option in Boost.Context which I was told would speed up fiber switching x4 or something. Gotta try it some time. For now I leave it singlethreaded.

Oh, and the strange crash on someone's computer? Turned out that through some experiments in the past I had AVX2 enabled for enet in Release. Which conveniently crashed with an ILLEGAL_INSTRUCTION only after successfully connecting while transmitting the first packet.

qMopey
« Reply #5784 on: August 28, 2019, 12:53:34 PM »

Looks like you nailed it. Interesting find! Thanks for sharing. :)
fluffrabbit
« Reply #5785 on: August 28, 2019, 01:42:39 PM »

Reminds me of graphics programming where making different components talk to each other is the slowest part. Now where are those promised 8 GHz AMD cores so we can run everything in a single thread?
gimymblert
The archivest master, leader of all documents
« Reply #5786 on: August 29, 2019, 05:50:24 PM »

Heat-death AMD cores, you mean.

ProgramGamer (aka Mireille)
« Reply #5787 on: August 30, 2019, 03:49:39 AM »

> [...] Now where are those promised 8 GHz AMD cores so we can run everything in a single thread?

Here:

Daid
« Reply #5788 on: August 30, 2019, 05:09:30 AM »

> What about using an open source engine?

> They all use big dependencies, and it’s mostly back to square one for me at that point.

What would you call big dependencies? My game engine only depends on SDL2, and is open source. (You'll most likely not like it, as it's C++, and you have expressed your ... love ... for that a few times)

Software engineer by trade. Game development by hobby.
The Tribute Of Legends Devlog Co-op zelda.
EmptyEpsilon Free Co-op multiplayer spaceship simulator
qMopey
« Reply #5789 on: August 30, 2019, 05:39:58 PM »

> > What about using an open source engine?
> >
> > They all use big dependencies, and it’s mostly back to square one for me at that point.
>
> What would you call big dependencies? My game engine only depends on SDL2, and is open source. (You'll most likely not like it, as it's C++, and you have expressed your ... love ... for that a few times)

Sounds pretty good! Though it's RAII, templates, and pretty much most features beyond C++11 (including rvalues) that I don't appreciate (rvalues get the boot since RAII is booted).

I would consider libpng a big dependency. Here is my reasoning.

1. Their API is terrible. It takes about 30 lines of code to load a PNG, compared to 1 with stb_image.
2. It depends on zlib, with a nearly identically horrible API.
3. It's tons of code, absolutely tons of code. Going in and making modifications is absolutely not a possibility. And yes, I've needed to make modifications before, hence me writing my own png loader/saver.

If I go poking around in, for example, Love2D, I can see all their dependencies. I actually almost used Love2D for my own game. Here are their internal dependencies: https://github.com/love2d/love/tree/master/src/libraries. Not too bad. However, once you open up the source there are more internal dependencies tucked away, with their source included directly. This is a fairly big problem, since those internal dependencies can be "hidden" behind a Love wrapper in C++, thus hiding how big or small they are. This has a few different kinds of costs.

1. API erosion. As one API wraps another the one underneath is inevitably eroded. Little pieces of error messages are truncated or ignored. Optional parameters aren't properly exposed. Not all cases are handled. The list goes on. Erosion is a problem for code maturation over a long period of time.
2. Difficult to optimize or profile. Optimizing and interpreting profiler results requires an understanding of the dependencies. Understanding them is difficult when they are all in varying sizes from different authors with different styles and different paradigms.
3. Modifying, extending, or fixing bugs becomes a risk for each dependency due to difficulty in understanding the source.

So in the end it's mostly about risk management and flexibility.

Of course, most people don't care about any of this stuff, so big dependencies don't really bother them. That's fine, but I personally really enjoy the freedom in engineering anything I need at any time without dependencies blocking my possibilities.

So in the end it's all just a preference.
Daid
« Reply #5790 on: August 30, 2019, 11:38:34 PM »

Well, I do cheat by including a bunch of libraries directly in my source tree:
https://github.com/daid/SeriousProton2

There is no documentation. It depends on SDL2 and libz, but the libz dependency can be removed by leaving out the ability to read resources from zip archives.
It dynamically links with openssl when an SSL socket is requested, but does not require openssl to run.

The same goes for a lot of things. The default is to compile everything, but a lot of things can be left out without breaking the rest, like the 2D/3D collision handling (which uses box2d or bullet3d) or the whole GUI subsystem.

It's an engine I developed for myself, so there are a few strange odds and ends. But it runs on Windows, Linux, Android, and on the Raspberry Pi.

oahda
« Reply #5791 on: August 31, 2019, 06:47:54 AM »

I dunno. I have a couple of big-ish libraries slapped onto my engine (SDL2, Bullet and more), but in my release build the actual executable itself on macOS, for example, is about 13MB, which is just negligible when all the assets bump the size up anyway. Some of my gameplay GIF recordings are larger. :P I've noticed even my Unity jam games aren't a lot bigger than about 30MB either.

qMopey
« Reply #5792 on: August 31, 2019, 07:28:51 AM »

Nice looking engine, Daid! Thanks for sharing. :) I took a peek, and will read more soon.

So far I use physfs for virtual path abstraction and zip mounting, libsodium for encryption and authentication, SDL2 for OS abstraction, stb_vorbis for loading ogg files, stb_textedit for the debug UI and some gameplay involving typing, and glad for loading GL function pointers on Windows.

SDL2 is great. It’s my favorite library.
gimymblert
« Reply #5793 on: September 01, 2019, 05:52:02 PM »

Unity is one I'd like to get below the MB limit, though, for an experiment, but I haven't compiled anything in a while ...

I'm waiting for Unity's Tiny to implement 3D WebGL anyway. It strips the engine down to the KB range.

JobLeonard
« Reply #5794 on: September 14, 2019, 07:26:33 AM »

Every time I try to learn WebGL I get lost... it feels like the programming equivalent of doing taxes, really. And this time the idea is so simple I feel extra stupid for not managing it:

- open the webcam (done)
- initialize a gl.TEXTURE_3D at width x height x the number of frames I wish to store (done)
- use WebGL2RenderingContext.texSubImage3D to update one XY plane of the 3D texture at a time for each frame (done... I think; I can't really verify whether it works because...)
- render the 3D texture to a 2D canvas in a simple (X, Y, Z=X) fashion to create a slit-scanning effect on the GPU

I just can't wrap my head around all the verbose things I need to do to set up a simple vertex and fragment shader to do the last bit! :(
JobLeonard
« Reply #5795 on: September 14, 2019, 09:10:28 AM »

Ok, using this StackOverflow example I got something working with gl.POINTS, but that's not a real solution. Argh!
ThemsAllTook
« Reply #5796 on: September 14, 2019, 03:07:18 PM »

Here's an excellent article I just came across that details what it takes to distribute software on a Mac these days: http://www.molleindustria.org/blog/notarizing-your-flashair-applications-for-macos/

Apple has been on this destructive path for a while now. I've had a Mac for pretty much my entire life, but it's become clear that I'm not in any way welcome on their platform anymore. It's really sad. I keep hoping they'll get some new leadership and stop trying so hard to destroy everything they've built, but at this point I'm mostly just ready to abandon them for good. Although my job still requires me to write code for Apple platforms, I'm seriously doubting that I'll ever release a game with Mac support again.

qMopey
« Reply #5797 on: September 14, 2019, 03:40:26 PM »

I'm not going to release my game on Apple machines.
fluffrabbit
« Reply #5798 on: September 14, 2019, 07:07:35 PM »

@JobLeonard - Can't you use a 3D sampler and change the Z coordinate to get a different slice?
JobLeonard
« Reply #5799 on: September 15, 2019, 11:24:03 AM »

@fluffrabbit:

I'll figure it out eventually, I guess.