Author Topic: The grumpy old programmer room  (Read 738674 times)
gimymblert
« Reply #5480 on: September 15, 2017, 01:55:43 PM »

10 - New paradigm invented to solve current problem,
20 - becomes the new hammer and everything looks like a nail,
30 - turns out generalizing outside of the problem it solved was not fair,
40 - okay, how do I mix all these paradigms together and when do I apply them? I'm a competent programmer who knows his sh*t, I'll let you know on forums
41 - oh f****, let's over-engineer with one single paradigm, it's better and faster than overthinking a supposed elegance and knowing when to apply everything, it doesn't work for this specific new problem, wisdom!
50 - ? ? ?
60 - goto 10

qMopey
« Reply #5481 on: September 15, 2017, 02:08:34 PM »

Hello sir, would you like to learn more about our lord and savior Entity Component Systems?
gimymblert
« Reply #5482 on: September 15, 2017, 02:37:46 PM »

Hello sir, would you like to learn more about our lord and savior Object Oriented Programming?

InfiniteStateMachine
« Reply #5483 on: September 15, 2017, 07:15:12 PM »

spot on gimmy :D


JWki
« Reply #5484 on: September 16, 2017, 12:08:03 AM »

The problem with something like ECS is exactly what qMopey has pointed out somewhere else - people think "oh, an entity component system is just what I need for this" and then go and ignore *this* completely and write some generic framework that they think will now solve their problem. The same is true for any other paradigm.

See, I like a component based approach for object models for games because it maps well to how game objects should be modeled at the low level - but I don't go and implement some generic ECS framework first and then try to squeeze everything in there, because that's just all sorts of awful.
The worst thing is that most of them miss the actual point of splitting up your data which is to be able to process large batches sequentially and quickly - most of them still iterate over entities instead of doing tight loops over component data, picking out the entities that have the components the system cares about. This makes me scream.
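As a concrete illustration of the tight-loop point, a minimal sketch (all names and types invented here, not from any particular engine) might look like:

```cpp
#include <cstddef>
#include <vector>

// Component data lives in parallel arrays; index i is one entity's data in
// both. The "system" is just one tight sequential loop over that data,
// instead of iterating entities and chasing per-component pointers.
struct Position { float x, y; };
struct Velocity { float dx, dy; };

void integrate(std::vector<Position>& pos, const std::vector<Velocity>& vel, float dt)
{
    for (std::size_t i = 0; i < pos.size(); ++i) {
        pos[i].x += vel[i].dx * dt;
        pos[i].y += vel[i].dy * dt;
    }
}
```

Nothing here knows what an "entity" is; the batch is just contiguous arrays processed front to back.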

My approach is to just consider all subsystems in isolation and model their view of the game world - take a renderer for example. It cares about instances of meshes and materials to render, reflection probes, lights, etc. So there's data structures for all that and they're organized in a way that the renderer can work well and efficiently with them, and they don't give a fuck that they should maybe be components in some generic ECS framework because fuck that.
Now obviously at some point you want to associate them with some sort of entity because you actually want to be able to, you know, place stuff or whatever. So what I do is every "object" in my render world is associated with an integer key.
That's it.
Entities index their associated rendering related things with their ID as key, and there's sync points in the frame to copy over data like transformation matrices and the like from the entity world to the render world.
This approach works well, is nicely decoupled, allows for internal system optimizations and you can even slap a generic OOP ECS or whatever on top of it if you want that for the end user.
It may still end up looking like an ECS or something, but it has been designed around the actual process that needs to happen, the actual problem.
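A minimal sketch of that integer-key arrangement, with every name invented for illustration, might look something like:

```cpp
#include <cstdint>
#include <vector>

// The render world owns its own tightly packed data; entities refer to it
// only through an integer key, and a sync point copies transforms across
// once per frame. Illustrative names, not from any particular engine.
struct Transform { float x, y, z; };

struct RenderWorld {
    std::vector<Transform> transforms;  // packed render-side copies

    std::uint32_t add_instance(const Transform& t) {
        transforms.push_back(t);
        return static_cast<std::uint32_t>(transforms.size() - 1);
    }
};

struct Entity {
    std::uint32_t render_key;  // index into RenderWorld::transforms
    Transform transform;       // gameplay-side authority
};

// Sync point: copy entity transforms into the render world.
void sync_transforms(const std::vector<Entity>& entities, RenderWorld& rw)
{
    for (const Entity& e : entities)
        rw.transforms[e.render_key] = e.transform;
}
```

The renderer never sees entities; the entity side never touches render internals beyond the key.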
qMopey
« Reply #5485 on: September 16, 2017, 12:19:39 PM »

Quote
My approach is to just consider all subsystems in isolation and model their view of the game world - take a renderer for example. It cares about instances of meshes and materials to render, reflection probes, lights, etc. So there's data structures for all that and they're organized in a way that the renderer can work well and efficiently with them, and they don't give a fuck that they should maybe be components in some generic ECS framework because fuck that.

I think this kind of thinking is really helpful. I used to be the guy that tried to write an engine with low level pieces intended to be used "everywhere". I would write a string class for the "engine", and a file wrapper for the "engine", so everyone everywhere would use the same underlying system.

This is ultimate folly.

Stuff like these are always shitty clunky implementations that don't actually solve *any* problem effectively, because they are busy solving too many different problems at once. In the end the few problems they end up actually solving, they do so in a mediocre manner.

Instead, having different services, each as isolated as possible, seems to be the winner. In my physics code I embed my own math library, as an example. It has a specific notation that is very effective for writing large amounts of low level linear algebra. The naive programmer will come along and say:

Quote
But that math library is REDUNDANT, *egad*, you can't possibly be OK with having duplicated code!?!?!? Don't you know anything about writing GOOD CODE

Duplication is good. Redundancy is good. It means dependencies are being cut. It doesn't matter if your mesh vertices or animation data is completely duplicated across two systems. It just doesn't matter; memory is cheap nowadays. Physics has a copy of terrain data, graphics has a different copy of terrain data, AI has yet another different copy of terrain data. Who cares. As long as these systems don't know about one another then dependencies can be kept low.

The same principle needs to apply to our Lord and Savior Entity Component Systems. An ECS can be used to solve a problem. Not to solve 10 or 20 problems. One problem, one solution. Currently at work we are suffering from this kind of naive over-engineering, where a low level systems engineer will try to write some functionality intended to be used ubiquitously without any forethought on dependency management, resulting in unnecessary complication and over-generality.

---

The most important software engineering lesson I ever learned was when I worked at thatgamecompany in 2014. I was working on some vertex welding code very briefly. I realized in the codebase they were copy + pasting a little bit of vertex welding code. It looked roughly like this:

Code:
struct Welder
{
    struct VertWrapper
    {
        Vert vert;

        bool operator<(const VertWrapper& rhs) const { return CompareVertsLT(vert, rhs.vert); }
        bool operator==(const VertWrapper& rhs) const { return CompareVertsEQ(vert, rhs.vert); }
    };

    Welder(Vert* verts, int count)
    {
        VertWrapper* wrapped = (VertWrapper*)verts;
        std::sort(wrapped, wrapped + count);  // std::sort takes a [first, last) range
        RemoveDuplicates(wrapped, count);     // in-house helper: removes adjacent duplicates after the sort
    }
};

It was a little struct to wrap and overload the less than operator, and call std::sort. Really simple code. It had been copy pasted into like 5-7 different areas of the code, each with slightly different kinds of verts and slightly different CompareVerts functions.

I decided to try and write a generic vert welder using C++ templates. I finished writing one. It was complicated and a complete mess. It had to make use of strides and some other annoying template features, making it completely unattractive to use in comparison to the above code.

The lead engineer at the time explained to me that in this code duplication is welcome! The snippet I presented is really easy to understand, so it's unlikely to have bugs. It's local to the area that needs it, so finding it is easy. It's not actually harming anyone to have the duplication, since each vertex weld that needed to happen involved different vertices, so they are essentially "different problems with different solutions". The different areas that needed vertex welding now had no dependencies between each other, so each welder could be custom tailored to each need. No excess functionality, or strides, or complicated templates.

---

Something I've noticed is this lesson is nearly impossible for an engineer to learn if they are only good at C++ without any other skills. If an engineer has another skill, say they are pretty good at writing shaders and doing linear algebra, then they can use these skills to write a service/API. When they write this API they learn this kind of lesson naturally. But an engineer only good at C++ doesn't usually write features or an API ever, so they never learn this lesson. They get stuck thinking everyone in the project should use the same allocator for everything, or the same file wrapper, and so on. They don't understand the importance of lowering dependencies or simplification of APIs. They live by untested principles or methodologies and "C++ wisdom".

Writing code to solve one problem is really hard. Writing code to solve one problem and be used by another engineer is 10x harder. Writing code to solve all the problems and be used by many engineers is impossible. Stop trying.
« Last Edit: September 16, 2017, 12:36:21 PM by qMopey »
gimymblert
« Reply #5486 on: September 16, 2017, 03:50:04 PM »

Quote
Duplication is good. Redundancy is good. It means dependencies are being cut. It doesn't matter if your mesh vertices or animation data is completely duplicated across two systems. It just doesn't matter; memory is cheap nowadays. Physics has a copy of terrain data, graphics has a different copy of terrain data, AI has yet another different copy of terrain data. Who cares.

Depends on your platform.

Garthy
« Reply #5487 on: September 16, 2017, 10:14:07 PM »


Quote
The STM32 HAL simplifies things a lot IMHO, it's not *that* bad...

*twitch*

Let me elaborate:

You use the STM32Cube graphical configuration to generate the libraries that you ultimately use, plus a template to insert your code into. The libraries generated include only the code for the modules you've selected. The libraries then need further hand customisation, based on the parameters you need to supply and your own discovery of bugs within their library and your own painful experience. Let that sink in- you need to customise the library they provide you with directly. Change the *library* headers, not your own. Search through their headers and make changes directly.

Need a change in the original configuration? Generate the libraries again. Make the same customisations again, hook your code into it again, finding where to make them each time. Want to automate it? Good luck guessing what the next set of changes will include and writing something resilient enough to handle them. Want to modularise it into a library? Tough. Building a separate library will fail. Some files must be in your own project. Want to tidy the code up? That's cool. Have fun next time you need to generate the code again. Library bugfix? Generate the code again. Fix up the code again.

And let's get into the library itself. Default USB HID data size? Two. That doesn't even make sense! And that tiny buffer will overflow easily. Hope you're ready to debug microcontroller code buffer overflows, possibly the nastiest bugs to track down. Want to up that to 64? Cool. Make sure you also know to go in and change some other definitions by hand inside the library (documented nowhere) because that second value is still two. More overruns. The call that checks if the USB connection is good that returns an error code on failure? It calls another function, discards the result, and *always* returns success. And this bug has been there for years. It's just "a known thing" that everyone fixes when they discover it. So many bugfixes and workarounds required. And you have to reapply them the next time you realise you needed to move or enable a peripheral and go back to STM32Cube to spit out your new version of the library.

Schematic examples? A plain LED blinker example? Useful documentation? Good luck.

And this was on a pre-assembled board, so I could stick to software, which is basically my strength.

Apparently they're also deeply flawed chips as well, with numerous software workarounds to cover up the limitations of the chips. And there is practically no low-level documentation. So you need the libraries, and you will hate the libraries.

I lost track of the hours I spent on it, but I remember well the infuriation and sense of helplessness and futility. I almost gave up on ARM development entirely over it all.

*Phew*. I've been holding onto that rant for a while. But yeah, fuck STM32.

After this I made a mostly impulse-driven purchase of an Atmel ARM chip. 1000+-page datasheets with full descriptions of peripheral support and low-level registers used. Extensive library documentation. Buildable code examples for pretty much every peripheral they support. Linux Makefiles that build executables without any changes. Example schematics and electronic design documentation. In one weekend I learnt enough to rig up the MCU on a breadboard, use the libraries, learn the tools, make a USB-controlled LED blinker, and flash a working image to it from a *Pi* using only jumper wires. From nothing to USB-driven blinker in one weekend.

Don't get me wrong. The Atmel libraries have their issues: the non-buildable examples are almost always out of date; the libraries are board-focused and require poorly-explained defines to control their functionality; the library is modular but byzantine, especially in macro use. The contrast to STM32, though, is night and day. But at the end of the day, it's a microcontroller library, and so compares poorly to most other libraries out there, because the speciality of the authors isn't software, it's hardware, and it shows.

Quote
Otherwise, I also find microcontrollers really fun, and actually enjoy the old school nature of all the tools and code, since you don't expect things to be outdated (or out-fashioned) in a month or so like in the web, mobile, games realms... Simple, universal little machines Smiley

They're great fun. Smiley It's amazing what they can pack into such a small device, and how capable the devices are. It's also amazing that you can purchase a device forty times faster than a Commodore 64, with approximately the same available RAM and inbuilt flash storage many times larger than that (fast enough to run code directly from), yet small enough to fit six of them on your fingernail, for the cost of half a coffee.


Thankyou. Smiley It's been a lot of work and a lot of learning.

I'm working on the next revision of the board now. It should have a heap of improvements over the current design, mostly around power efficiency and ease of use.

Quote
(Just out of curiosity, have you made this all in KiCAD?)

Yep, all KiCad. Smiley All the way from schematic to PCB layout to Gerbers. It's an odd little tool which has improved a lot over recent years.

Crimsontide
« Reply #5488 on: September 17, 2017, 02:08:37 PM »

I started a reply, then got sidetracked, then forgot I didn't send it, then like 4 days later I look back and I'm way behind...  So here we go... kinda Smiley

Quote
Oh yes, the ESP8266 is pretty nice, are you using it with the full toolchain from Espressif? - I've went through that hassle as well a while ago, and switched over to Lua (NodeLua) immediately afterwards, it's sooo much easier to use Smiley. Just out of curiosity, are you controlling some underwater vehicle or flow or fish interaction in the Aquarium? Sounds interesting...

I'm actually using the Arduino tool chain, as I prefer C++ but didn't want to mess with the normal Espressif toolchain.  Granted their RTOS is pretty cool...

It's controlling all the pumps and lights (turning them off at night, on during the day), there's an automated water change system, as well as temperature readings/control.  I made a smaller controller with an Arduino (since I'm an electronics noob) which worked well, so now I thought I'd step it up a notch: control everything, and have it all accessible via the web.  That way I could leave it for a week or two and know it's working fine.

The ESP8266 seemed like a good choice as it works with the Arduino toolchain (which I'm familiar with) but also has built-in wifi, etc...  I gotta admit it's a neat little chip, and it's surprising what you can do with a few kB of RAM and a few MHz when you don't have an OS or a driver stack to slow you down Smiley

Quote
Not sure what the compiler will do there with the function argument (uint32_t counter) on the stack... How significant are the delay differences? Just guessing, but the difference might stem just from the fact that it will read the passed counter argument first from the stack, store it in a register and then use it for the counter loop in CounterTest(), and in the templated CounterTest2 function it puts the counter value in as a literal at compile time... Every memory read operation might cause cache issues, but I'm just guessing, you might be able to tweak it out with different compiler optimization settings. Also, what does ICACHE_RAM_ATTR really stand for? - It most often helps to generate assembly output with -S for gcc, to see the exact differences.

On the ESP8266, code can either be stored directly in ram (for fast execution) or run off the flash (a lot slower).  When the function is in ram you have no cache latency and you can pretty much know exactly how many cycles each instruction will take.  Or at least in theory...

As far as the stack/templated variable... my understanding was that the volatile after asm prevented a lot of assembly transformations, so between the counter reads there should be no way GCC can alter that code.  But apparently it is?  Clearly I don't understand asm/volatile as well as I thought.  It's gotta be doing something...  It's weird: the 1st function (non-template) takes (5*i) - 3 cycles to execute, dead on every time.  1 iteration = 3 cycles, 10 = 47, 1000 iterations = 4997 cycles, etc...  The templated one will vary all over the place with small/tiny changes to code not even related to it.  Sometimes it's faster at ~4 cycles per iteration, sometimes much slower.

You are correct in that I should check the disassembly.  At this point though I think I'm ready to move on.  I didn't really need the code, it was just me playing around and trying to understand how things worked.  It seems if I want nanosecond precision timing I'm going to need to hand-code the assembly myself (ie. not use inline and hope GCC doesn't mess me up).
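For what it's worth, a portable sketch of the two loop shapes being compared (the real code is Xtensa inline asm, which this deliberately avoids; names are invented here) could look like:

```cpp
#include <cstdint>

// A runtime-counted busy loop: the bound arrives as a function argument,
// so the compiler must load it before the loop can run.
std::uint32_t counter_test(std::uint32_t n)
{
    volatile std::uint32_t i = 0;  // volatile keeps the loop in the output
    while (i < n)
        i = i + 1;
    return i;
}

// The templated version: N is a compile-time literal, so the compiler can
// emit the bound as an immediate, which can change the generated loop and
// hence the measured cycle count.
template <std::uint32_t N>
std::uint32_t counter_test2()
{
    volatile std::uint32_t i = 0;
    while (i < N)
        i = i + 1;
    return i;
}
```

Comparing `-S` output of the two on the target compiler is the reliable way to see where the extra loads or immediates end up.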

Quote
Welcome to hell! :D

By way of explanation of your situation. The people who create microcontrollers are brilliant, brilliant electrical engineers, who happen to be terrible software developers, who delegate to inexperienced interns in a critically underfunded area to write the software libraries.

Be thankful you aren't doing development on STM32 ARM code, which comes with quite possibly the worst libraries in the industry.

I take it you have past experience with it?  It's funny, with ARM being as big as they are, I'm surprised they have such terrible software...  I mean 1 full time C++ geek could handle it all Smiley

Your project looks pretty cool, and far beyond my noobish electronic capabilities.  I wish I had more to comment apart from... wow neat Smiley  Clearly a lot of time and effort invested.
Garthy
« Reply #5489 on: September 17, 2017, 03:52:14 PM »

Quote
I take it you have past experience with it?

Yeah. My rant in the post above yours touches on that. :/ Originally I was going to use one of the STM32 chips for my board. On the surface they seemed to have the better support. They were priced well. My employer at the time also favoured them, and I wanted to build my skills with them outside of work.

Quote
It's funny, with ARM being as big as they are, I'm surprised they have such terrible software...

ARM license cores, and the manufacturers use those cores to build MCUs. If we took a PC as an analogy: ARM supply the CPU, and the manufacturers build a whole PC, plus keyboard, mice, and other gizmos... and then they write their own operating system.

To create a MCU, the manufacturers add their own sets of peripherals and the software to run them. It turns out that they're all pretty terrible at the software side. ARM kept out of that area initially, saw it go to hell, and have tried to step in a bit, but the horse had already bolted. There are lots of ARM chips out there. You don't really write to ARM, you write to the set of libraries provided by a particular manufacturer, because the chips have peripherals that are complex and manufacturer-specific, and even the ARM core can be tricky. For example: STM32 is a line of STMicro ARM chips and you'll be using STM32Cube. SAM is a line of Atmel ARM chips and you'll be using ASF.

I've touched on some of the reasons why things are a mess, but one thing I've left out is: the market is very competitive (or it *was*, before the various companies started acquiring each other), as you've got multiple manufacturers essentially selling their versions of ARM chips. Hobbyists aren't bringing them in real money- the people making products needing a hundred thousand chips at a time are. Once the chip is made, and you can demo that chip to a non-technical type who signs off on the purchase, or a technical type who glosses over the surface stuff and gives a thumbs up, it is no longer their problem. Everything after that gets neglected, and it becomes the developer's job to clean up the mess.

Different manufacturers handle the post-sale work in different ways. I had started writing a bit of an analysis of this, but it was getting a bit long, so I'll just leave it at that for now.

Quote
I mean 1 full time C++ geek could handle it all Smiley

It's a big job.

There are a lot of different ARM chips out there. A lot. There are massive numbers of variants available as each manufacturer tries to make the perfect chip for a particular set of applications, to ensure they're the one that gets the order for a hundred thousand of them because their option was 10c cheaper. For example, let's take Atmel's SAM line:

http://www.atmel.com/products/microcontrollers/arm/default.aspx

That column down the left with about twenty entries? Families within that line. Click on one. There'll be a few options. Often the options in each family are fairly similar and often just differ in pin count or have peripherals removed from the top device in a family- but not always. There are a lot of chips to support.

So, to create a good library, you'll need good software developers, and a fair bit of time to create something that covers all the existing functionality. You then need to persuade people to actually use it (difficult if an existing product is tied to the old library/environment). It can be done, but all of this costs money, and this money comes from chip sales, which means the prices go up, and the competitors move in.


Quote
Your project looks pretty cool, and far beyond my noobish electronic capabilities.  I wish I had more to comment apart from... wow neat Smiley  Clearly a lot of time and effort invested.

Thankyou. Smiley Yes, considerable time and effort has gone into putting it all together. I'm proud of the result.
ferreiradaselva
« Reply #5490 on: October 02, 2017, 05:47:22 PM »

Here I'm about to describe the most important feature that will be added to C and C++ in the year 2459:

Code:
#include <stdint.h>
#include <stdlib.h>
#resource "texture_atlas.png" texture_atlas texture_atlas_len

int main(int argc, char **args)
{
    int32_t width;
    int32_t height;
    uint8_t *pixels = read_png_pixels(texture_atlas, texture_atlas_len, &width, &height);
    /* OpenGL shit */
    glTexImage2D(GL_TEXTURE_2D,
                0,
                GL_RGBA,
                width,
                height,
                0,
                GL_RGBA,
                GL_UNSIGNED_BYTE,
                pixels);
    /* Drop mic */
    return 0;
}

Meanwhile, I'm in 2017 writing scripts to automate the conversion of assets into C headers.
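A minimal sketch of the kind of conversion script mentioned above (illustrative names, similar in spirit to `xxd -i`) might be:

```cpp
#include <cstddef>
#include <cstdio>
#include <sstream>
#include <string>
#include <vector>

// Turn a blob of bytes into a C header snippet declaring an array and a
// length constant, so assets can be compiled straight into the binary.
std::string bytes_to_header(const std::string& name, const std::vector<unsigned char>& data)
{
    std::ostringstream out;
    out << "static const unsigned char " << name << "[] = {";
    for (std::size_t i = 0; i < data.size(); ++i) {
        if (i % 12 == 0) out << "\n    ";   // wrap lines every 12 bytes
        char buf[8];
        std::snprintf(buf, sizeof buf, "0x%02x, ", data[i]);
        out << buf;
    }
    out << "\n};\n";
    out << "static const unsigned int " << name << "_len = " << data.size() << ";\n";
    return out.str();
}
```

Feed it the bytes of `texture_atlas.png` and write the result to a `.h` file, and you get roughly what the fantasy `#resource` directive above would produce.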
« Last Edit: October 02, 2017, 07:30:43 PM by ferreiradaselva »

oahda
« Reply #5491 on: October 03, 2017, 02:11:43 AM »

Watch out for VS's limit on string length. Cheesy

qMopey
« Reply #5492 on: October 03, 2017, 10:34:10 AM »

Quote
Watch out for VS's limit on string length. Cheesy

Can also use an integer array!
Photon
« Reply #5493 on: October 03, 2017, 10:39:19 AM »

Quote
The worst thing is that most of them miss the actual point of splitting up your data which is to be able to process large batches sequentially and quickly - most of them still iterate over entities instead of doing tight loops over component data, picking out the entities that have the components the system cares about. This makes me scream.
And you know what the funny thing is? How many things can you reasonably batch in the first place? I have found a lot of times that I'd rather just call functionality when I need the data and not have to queue it all up for a big bang system run later. It gets even worse when data has to rebound back and forth between different slices of code. I'm sure there are plenty of great examples where ECS can batch a lot of things, but it's just an example I thought of where you can end up using a hammer when you needed a wrench.

The main advantages I see for utilizing systems are (1) batching relatively simple tasks and (2) managing the relationships between different entities (ex: collisions), since a system can efficiently intercept all the agents in said interactions.

I'm coming more into the "use what I need when I need it" approach for paradigms and stuff. I'm kind of using a butchered form of ECS right now, and I will admit I'm using it more in the puzzle piece manner you described above, but this is on purpose. Utilizing the component side of ECS does still give you the benefits of data-logic separation, for instance. There's a bit more as to why I'm going this route, but I'm not sure there's a very concise explanation to be had. Tongue

Finally, my grumpy remark for this post:

Haxe does not allow you to use Floats as map keys. Why?
qMopey
« Reply #5494 on: October 03, 2017, 12:09:08 PM »

Haxe does not allow you to use Floats as map keys. Why?

Just a wild guess (I don't know haxe): the language semantics don't allow such a thing? Typically the bits of a float are considered as they are and hashed like any other integer. But maybe Haxe can't do that low level of a thing?
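A sketch of the bits-as-integer idea (a hypothetical helper, not Haxe code): reinterpret the float's 32 bits as an integer (via memcpy, to stay well-defined in C++) and hash that like any other integer.

```cpp
#include <cstdint>
#include <cstring>

// Hash a float by its raw bit pattern. Caveats worth noting: +0.0f and -0.0f
// compare equal but have different bits, and NaN never compares equal to
// itself, so float map keys are fragile no matter how you hash them.
std::uint32_t float_key_bits(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // well-defined type pun
    return bits;
}
```

Those equality caveats are a plausible reason a language might simply forbid float keys outright.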
ferreiradaselva
« Reply #5495 on: October 03, 2017, 12:26:50 PM »

If I had to guess, I would say it's to prevent different keys on different architectures if you do something like this:
Code:
map[33.f / 7.f] = "blabla"
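A hedged illustration of that hazard (the exact values are just for demonstration): two expressions that are equal in real arithmetic need not round to the same floating-point value, so a key recomputed a slightly different way can miss the entry entirely.

```cpp
#include <map>
#include <string>

// Insert under a computed key, then look up with a "mathematically equal"
// literal. The two round to different doubles, so the lookup misses.
std::string lookup_demo()
{
    std::map<double, std::string> m;
    m[0.1 + 0.2] = "blabla";  // key is the double nearest 0.30000000000000004
    auto it = m.find(0.3);    // 0.3 rounds to a different double
    return it == m.end() ? "miss" : it->second;
}
```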

Ordnas
« Reply #5496 on: October 04, 2017, 12:16:08 AM »

Quote
Duplication is good. Redundancy is good. It means dependencies are being cut. It doesn't matter if your mesh vertices or animation data is completely duplicated across two systems. It just doesn't matter; memory is cheap nowadays. Physics has a copy of terrain data, graphics has a different copy of terrain data, AI has yet another different copy of terrain data. Who cares.

Depend on your platform.

Instead I agree with qMopey when he says "memory is cheap": even on the crappiest gaming system nowadays you have plenty of memory (it's a different story when you are trying to make a game on a display refrigerator, but that is a different problem). Different problems arise when you need to get a lot of data to the CPU; with topics like cache misses, branch prediction, and an entire open world loading 50,000 meshes of just rocks, "memory is cheap, I do not care" sounds different of course.

About duplication and redundancy being good, I am not sure. Personally, if I write duplicated code it is because I am in a hurry, but I know there is the chance that if there is a bug in that duplicated snippet, I have duplicated bugs. Basically it depends on what you are doing and what the context of the problem is.

When people say "you should not write that" or "you should not do this", it should be a suggestion to avoid bugs, because experienced people found that there is more of a chance of encountering problems during development. That's all. The important thing is the game. If you sold 200,000 copies of your game with all global functions and duplicated code, who cares. If you spend 2 years doing the best code design and over-engineering, and then you never release your game, then for sure I do not agree with your methods.
« Last Edit: October 04, 2017, 12:28:37 AM by Ordnas »

powly
« Reply #5497 on: October 11, 2017, 01:47:53 AM »

GLSL is crazy with memory barriers: the shared memory barrier is seemingly never required (and an execution barrier is what you really want anyway), and the other ones do exactly nothing, even with buffers marked coherent as they should be and the barrier spammed on every other line of the shader. Atomic operations do work in this sense, but they're not always ideal.
quantumpotato
« Reply #5498 on: October 12, 2017, 07:31:21 AM »

Quote
Duplication is good. Redundancy is good. It means dependencies are being cut. It doesn't matter if your mesh vertices or animation data is completely duplicated across two systems. It just doesn't matter; memory is cheap nowadays. Physics has a copy of terrain data, graphics has a different copy of terrain data, AI has yet another different copy of terrain data. Who cares.

Depends on your platform.

Instead I agree with qMopey when he says "memory is cheap": even on the crappiest gaming system nowadays you have plenty of memory (it's a different story when you are trying to make a game on a display refrigerator, but that is a different problem). Different problems arise when you need to get a lot of data to the CPU; with topics like cache misses, branch prediction, and an entire open world loading 50,000 meshes of just rocks, "memory is cheap, I do not care" sounds different of course.

About duplication and redundancy being good, I am not sure. Personally, if I write duplicated code it is because I am in a hurry, but I know there is the chance that if there is a bug in that duplicated snippet, I have duplicated bugs. Basically it depends on what you are doing and what the context of the problem is.

When people say "you should not write that" or "you should not do this", it should be a suggestion to avoid bugs, because experienced people found that there is more of a chance of encountering problems during development. That's all. The important thing is the game. If you sold 200,000 copies of your game with all global functions and duplicated code, who cares. If you spend 2 years doing the best code design and over-engineering, and then you never release your game, then for sure I do not agree with your methods.

Agree.

However, I think spending a little time up front engineering & a little time refactoring as you write = time saved exponentially = faster release.

At least for smaller scale projects, but I think this would scale to bigger projects too.

To repeat, I agree that Shipping is everything

powly
« Reply #5499 on: October 12, 2017, 03:11:56 PM »
