Author Topic: The grumpy old programmer room  (Read 738613 times)
BorisTheBrave
« Reply #5660 on: October 09, 2018, 01:40:15 PM »

Quote
Poorly written math is like poorly written code, indecipherable to nearly anyone but the author.
This is true, as is your observation that academics sometimes deliberately write obscurely. But my comments are about well written things. Well written code aims to be as clear as possible to a reader with little background. Well written maths aims to communicate as efficiently as possible to a reader with a great deal of background and shared context. Those are not the same goal, so it's hardly surprising they end up with different techniques.

I can understand that it is frustrating to read something that is not pitched at you. But that's no reason to assume the writer is being deliberately obscure, or suffering from a superiority complex. To be brutally honest, you are assuming that if you cannot understand something, it must be the communicator's fault. Perhaps it is you with the complex :D

On a forum, you'd expect answerers to alter their style for better communication, but it's a difficult habit to break, and (as I stated earlier) it's hard to figure out what background knowledge an asker already has.

Perhaps some of the confusion is that math *papers* are expected to be hard to understand - if there weren't some new idea to be communicated, something difficult that needs explaining even to experts, then why was the paper written at all? Compare with computer programs: a program that does absolutely nothing novel can still be an incredibly useful program. Think of all the CRUD applications the world has written. It's entirely reasonable to read a new program and expect 90% of the code to be variations on concepts you have already seen.
qMopey
« Reply #5661 on: October 09, 2018, 01:40:50 PM »

On that note, I've run into a lot of old programmers in my career who expect young up-and-comers (myself, in this story) to go through the same "rite of passage" they once did. For example, one guy was counting my work hours and expected me to hit 60 hours a week, because I was "new and should be having fun learning a lot", and he worked 60 hours a week when he was my age simply because he was having fun. That is verbatim feedback I got directly from this person. Most of the others expect me to take anywhere from a 20-40% pay cut because "when they were young they didn't make much, and work was a labor of passion". It's ridiculous. I don't even interview at game studios anymore after receiving lowball after lowball.

Math guys aren't the only ones who overvalue their own subjective experiences to the point of suppressing others who are even just slightly different. It's a common thing.
J-Snake
« Reply #5662 on: October 09, 2018, 02:16:44 PM »

Quote
They think that because it's hard for them, they must make it hard for others, and hence get all wrapped up in making things look as complicated as possible.
I doubt that this is the case, though, at least not for somewhat credible mathematicians. They just don't know any better. A mathematician is typically conditioned to solve a problem by reducing it to an already known one, for which a solution or proof already exists. This can be a very powerful approach, but also a cumbersome and unnecessary one (this is where good mathematicians separate themselves from bad ones: having good intuition for how to approach a problem). Without getting too abstract, let me make up an anecdote which, in the relevant sense, isn't exaggerated at all:

A mathematician is presented with some eggs on the kitchen table. How does he boil them? He first puts the eggs into the fridge, boils a pot of water, and then takes the eggs out of the fridge again and puts them into the boiling water. Why the redundant steps? Up to this point he has only ever boiled eggs that he took out of the fridge. He already knows that this works. So he is reducing the new problem (eggs on the kitchen table) to an already known and solved one (eggs in the fridge).

This might look ridiculous to you. But that is what typical mathematicians tend to do to solve a mathematical problem. The redundant information flow is just hidden from the naked eye behind abstract formalisms. If it were translated into a visual format a human can instantly reason about, you would often see the "boiling eggs" example play out in front of your eyes. Note that many paths might lead to the same result in the end, but the derivation of a solution can be more or less cumbersome. Cumbersome solutions often arise from a lack of problem-domain understanding, and no profession is a magical cure for that; you have to keep an open mind.
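To put the anecdote into code (a made-up illustration, not anything from a paper): reducing "find the maximum" to the already-solved problem "take the last element of a sorted array" is correct but cumbersome, while solving the problem on its own terms is one pass.

Code:
#include <algorithm>
#include <vector>

// "Eggs into the fridge first": reduce max-finding to sorting, a problem
// we already know how to solve. Correct, but far more work than needed.
int max_via_reduction(std::vector<int> v) // assumes v is non-empty
{
    std::sort(v.begin(), v.end());
    return v.back();
}

// Solving the problem on its own terms: one O(n) pass.
int max_direct(const std::vector<int>& v) // assumes v is non-empty
{
    int best = v[0];
    for (int x : v) best = std::max(best, x);
    return best;
}

Both give the same answer; only the derivation differs in how cumbersome it is.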

J-Snake
« Reply #5663 on: October 09, 2018, 02:51:32 PM »

Quote from: BorisTheBrave
But my comments are about well written things. Well written code aims to be as clear as possible to a reader with little background. Well written maths aims to communicate as efficiently as possible to a reader with a great deal of background and shared context.
This is no different with code. Code that requires more background is harder to make sense of as well, but it can effectively communicate its workings to those who share the necessary background (on forums you just stumble more often over concrete lower-level problems, that's all). I can refer to this paper again as an example:

http://www.antlr.org/papers/allstar-techreport.pdf

The high level pseudo code of the algorithm in this paper won't make much sense to you as long as you lack the necessary background, involving concepts such as "graph structured stacks" etc. But if you have that background, it efficiently communicates the workings of a sophisticated parser in only a few snippets: functions 1-9, starting on page 8. Once I had retrieved the necessary background and context, the code quickly showed me how it is supposed to work.
(Delivering background and context information is actually where the paper is weak, and you have to spend more time than necessary searching and connecting the dots yourself; but the pseudo code is fine, even if sloppy at times.)

So this right here is a perfect example of well written pseudo code that nonetheless isn't clear to a reader with little background. Just as with more involved mathematics, it has to assume the respective background first in order to communicate the algorithm effectively.
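For anyone who wants a head start on that background: a graph-structured stack is, roughly, a call stack whose nodes may share tails, so one structure can represent many simultaneous parser stacks at once. A sketch of the data shape (my own simplification, not the paper's exact definition):

Code:
#include <memory>
#include <vector>

// A graph-structured stack merges stacks that share a common suffix: a node
// has a set of parent edges instead of a single parent pointer.
struct GssNode
{
    int state;                                     // e.g. an ATN/parser state
    std::vector<std::shared_ptr<GssNode>> parents; // merged stack tails
};

// "Pushing" reuses an existing frontier node for the same state, if any,
// and just adds one more parent edge instead of copying an entire stack.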


Crimsontide
« Reply #5664 on: October 09, 2018, 04:19:19 PM »

Quote from: BorisTheBrave
Quote
Poorly written math is like poorly written code, indecipherable to nearly anyone but the author.
This is true, as is your observation that academics sometimes deliberately write obscurely. But my comments are about well written things. Well written code aims to be as clear as possible to a reader with little background. Well written maths aims to communicate as efficiently as possible to a reader with a great deal of background and shared context. Those are not the same goal, so it's hardly surprising they end up with different techniques.

I can understand that it is frustrating to read something that is not pitched at you. But that's no reason to assume the writer is being deliberately obscure, or suffering from a superiority complex. To be brutally honest, you are assuming that if you cannot understand something, it must be the communicator's fault. Perhaps it is you with the complex :D

On a forum, you'd expect answerers to alter their style for better communication, but it's a difficult habit to break, and (as I stated earlier) it's hard to figure out what background knowledge an asker already has.

Perhaps some of the confusion is that math *papers* are expected to be hard to understand - if there weren't some new idea to be communicated, something difficult that needs explaining even to experts, then why was the paper written at all? Compare with computer programs: a program that does absolutely nothing novel can still be an incredibly useful program. Think of all the CRUD applications the world has written. It's entirely reasonable to read a new program and expect 90% of the code to be variations on concepts you have already seen.

Nothing against you personally, but I've been through the academic wringer as well; I have the useless piece of paper to show for it. There is so much nonsense that goes on in higher education it's ridiculous. Easily 90% of the academic papers written aren't worth the paper they're printed on. Just desperate wanna-be PhDs and/or profs looking to pad their portfolios, intentionally obfuscating simple ideas, because if they shortened it to one or two clear and concise paragraphs no one would publish it. And so we end up with 18 pages of BS that can be distilled down to half a page.

Case in point: in my 4th year I took a 5th year course (as an option) about distributed databases. This was supposed to be cutting-edge stuff. The entire course was just 50 or so journal articles hand-picked by our prof, which we were supposed to read, understand, explain, and critique... No tests or quizzes or labs; at the end of the course we handed in literally a binder full of critiques of each paper. It was a brutal amount of work.

50% were just factually wrong. I was able to write proofs and/or show examples of how their algorithms straight-up failed or produced incorrect results. It wasn't even about inefficiency at that point; they were just provably wrong.

25% were pessimizations. Even though they would run, if the algorithms/'optimizations' in the paper were applied, whatever it was they were trying to optimize (be it IO cycles, network bandwidth, latency, memory usage, etc.) would end up slower or less efficient.

10% were a wash. No gains, no losses, no point to the paper, but at least they didn't make things worse...

10% were decent papers, with minor but provable gains. Though nothing at all groundbreaking or worth writing a paper on, IMO.

And 2 or 3 out of all 50 were actually at a level I thought was worth publishing: good, solid papers with ideas I hadn't encountered before that would yield positive results.

I got an A+ in the class; the prof was very impressed with my work. Perhaps I had no idea what I was doing... but I find it hard to believe I'd have gotten a good mark if what I put down was factually wrong. More likely, academic papers are, for the most part, complete and utter fcking BS.

I don't even like databases, and had almost no background in them (graphics, compression, and programming languages were my biggest interests), and yet there were maybe 3 papers out of nearly 50 with something worth the time reading? Gimme a break... utterly abysmal.

The one thing the crappy papers had in common: they were all filled with obscure jargon and hundred-dollar words. They constantly 'forgot' to define key terms or constants, glossing over them in the hope (I guess) that no one would notice it was all BS. The main reason the course had such an enormous workload was that it took so long to figure out what these authors were actually trying to say amid all their errors, ambiguous jargon, diagrams, and charts. The amount of half-assed set and graph theory I had to wade through has permanently scarred me...

This is not limited to computer science... oh no... in math IT'S WORSE. Comp sci is a young field; there are lots of places to introduce new ideas/concepts/algorithms that don't require you to be a genius. Math, on the other hand, is old... the low-hanging fruit was picked centuries ago. So what's a young, up-and-coming, relatively bright mathematician to do? What everyone does in school... bullshit their way through.

It's not because it's 'too difficult' or 'too complex'; it's because mathematicians are people too, and if they want their PhD and they aren't the next von Neumann, they obfuscate. And hence we have the mess that is mathematical jargon.

That, and it's seemingly impossible to convince them that state is an integral part of problem solving.
Crimsontide
« Reply #5665 on: October 09, 2018, 04:29:36 PM »

Quote from: J-Snake
This is no different with code. Code that requires more background is harder to make sense of as well, but it can effectively communicate its workings to those who share the necessary background. I can refer to this paper again as an example:

http://www.antlr.org/papers/allstar-techreport.pdf

The high level pseudo code of the algorithm in this paper won't make much sense to you as long as you lack the necessary background, involving concepts such as "graph structured stacks" etc. But if you have that background, it efficiently communicates the workings of a sophisticated parser in only a few snippets: functions 1-9, starting on page 8.

So this right here is a perfect example of well written pseudo code that nonetheless isn't clear to a reader with little background. Just as with more involved mathematics, it has to assume the respective background first in order to communicate the algorithm effectively.

It's not a bad paper; it's one of the better ones. But check out Figure 6. If you didn't already know what those rules are, that mess of set-theory symbols would be indecipherable. It isn't enlightening; you already needed to know what a closure is (for example) before looking at it and thinking 'OK, I see what they're doing there'. Apart from looking 'mathematical', it's useless. You can describe what is going on much more succinctly in a few sentences of English.
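To be fair, the generic core of a closure is just a small fixed-point loop; Figure 6 layers the grammar-specific rules on top of that. A sketch of the generic idea (not the paper's exact rules):

Code:
#include <set>
#include <stack>
#include <vector>

// Generic epsilon-closure: from a set of automaton states, keep following
// epsilon edges until no new state appears.
std::set<int> closure(const std::vector<std::vector<int>>& eps_edges,
                      const std::set<int>& start)
{
    std::set<int> seen(start);
    std::stack<int> work;
    for (int s : start) work.push(s);
    while (!work.empty())
    {
        int s = work.top(); work.pop();
        for (int t : eps_edges[s])     // every epsilon successor of s
            if (seen.insert(t).second) // newly discovered?
                work.push(t);
    }
    return seen;
}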

Granted, the paper is decent overall, and I've saved it to read another time. That said, I'm partial to CYK parsers when it comes to parsing ambiguous grammars, but maybe that's because I've implemented one in CUDA :)
J-Snake
« Reply #5666 on: October 09, 2018, 05:03:34 PM »

Quote from: Crimsontide
It's not a bad paper; it's one of the better ones.
You judge too quickly. You must previously have stumbled on papers that could hardly be taken seriously, to the point of questioning their academic credibility. It can always get worse, I guess.

Quote from: Crimsontide
That said, I'm partial to CYK parsers when it comes to parsing ambiguous grammars, but maybe that's because I've implemented one in CUDA :)
If you are satisfied with the performance, that's probably the easiest way you can go ;)
Crimsontide
« Reply #5667 on: October 09, 2018, 05:08:36 PM »

Quote from: J-Snake
You judge too quickly.

:D Fair enough, I just skimmed it...

Quote
If you are satisfied with the performance, that's probably the easiest way you can go ;)

See, that's the thing: everyone talks down on it like it has poor performance. Except it can have the best performance of them all. You can truly parse in parallel. Not just a few threads; you can throw tens of thousands of cores at it and it'll scale up nearly linearly. So while its single-threaded performance is poor, its multithreaded performance is unmatched.
J-Snake
« Reply #5668 on: October 09, 2018, 05:38:37 PM »

Quote from: Crimsontide
See, that's the thing: everyone talks down on it like it has poor performance. Except it can have the best performance of them all. You can truly parse in parallel. Not just a few threads; you can throw tens of thousands of cores at it and it'll scale up nearly linearly. So while its single-threaded performance is poor, its multithreaded performance is unmatched.
That's cool. Keep one thing in mind though: a solution with lower time complexity, running on a single core and given sufficiently large input, will eventually outperform a higher-complexity solution regardless of how many cores you throw at it. So you have to be well aware of problem size limits, the number of actually available cores, etc. to judge its effective performance.
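Back-of-the-envelope version of that point, ignoring constants and communication costs: an O(n^3) algorithm spread perfectly over p cores takes about n^3 / p steps, while the single-core O(n^2) algorithm takes about n^2. The parallel one only wins while n^3 / p < n^2, i.e. while n < p. So even with 10000 cores, the single-core n^2 solution pulls ahead once inputs grow past roughly 10000 elements.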
Crimsontide
« Reply #5669 on: October 09, 2018, 05:44:07 PM »

Quote from: J-Snake
That's cool. Keep one thing in mind though: a solution with lower time complexity, running on a single core and given sufficiently large input, will eventually outperform a higher-complexity solution regardless of how many cores you throw at it. So you have to be well aware of problem size limits etc.

From my understanding, the algorithmic complexity (big-O) of all the state-of-the-art parsers is pretty much the same. All sit between n^2 and n^3 time complexity, and depend on the grammar/input more than anything.

The only exception I've seen to that rule is using boolean matrix multiplication (you can express parsing an ambiguous grammar as boolean matrix multiplication), which has a lower bound of... (https://arxiv.org/abs/1801.05202)? From my understanding that is the lower bound for all true ambiguous context-free parsers; and while it has the best theoretical complexity, it performs terribly in anything remotely resembling real use.

There are ways of getting around that by limiting your options (PEG parsers, for example), but AFAIK true CFGs cannot be parsed in less than, well, the time in the link above. Which is in that usual n^2 to n^3 range.

If you have a grammar with only a small amount of ambiguity, then GLR or GLL or whatever is fine and will be plenty fast. But if you have a lot of ambiguity, the LR/LL derivations spiral out of control into a mess of interconnected data structures and just grind to a halt. CYK is fast across the board, trivial to implement on specialized hardware (there's no way I'd want to implement a Tomita parser on a GPU), and can handle pretty much anything you throw at it. I've never understood why it gets such a bad rap...
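For reference, the whole recognizer is short enough to sketch here. This is a from-scratch illustration for a grammar already in Chomsky normal form (the rule encoding is made up for the example): the three nested span loops are where the n^3 worst case comes from, and every cell within a given span length is independent of the others, which is exactly why it maps so well onto thousands of cores.

Code:
#include <string>
#include <vector>

struct BinaryRule { int lhs, left, right; };  // A -> B C
struct UnaryRule  { int lhs; char terminal; }; // A -> a

// Nonterminals are numbered 0..num_nonterminals-1; 0 is the start symbol.
bool cyk_recognize(const std::string& w,
                   const std::vector<BinaryRule>& bin,
                   const std::vector<UnaryRule>& uni,
                   int num_nonterminals)
{
    int n = (int)w.size();
    if (n == 0) return false;
    // t[i][len][A] == true iff A derives w[i .. i+len-1]
    std::vector<std::vector<std::vector<bool>>> t(
        n, std::vector<std::vector<bool>>(
               n + 1, std::vector<bool>(num_nonterminals, false)));

    for (int i = 0; i < n; i++)          // length-1 spans: A -> a
        for (const UnaryRule& r : uni)
            if (r.terminal == w[i]) t[i][1][r.lhs] = true;

    for (int len = 2; len <= n; len++)   // longer spans: A -> B C
        for (int i = 0; i + len <= n; i++)
            for (int split = 1; split < len; split++)
                for (const BinaryRule& r : bin)
                    if (t[i][split][r.left] &&
                        t[i + split][len - split][r.right])
                        t[i][len][r.lhs] = true;

    return t[0][n][0];                   // does the start symbol derive w?
}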
J-Snake
« Reply #5670 on: October 09, 2018, 07:06:23 PM »

Quote from: Crimsontide
From my understanding, the algorithmic complexity (big-O) of all the state-of-the-art parsers is pretty much the same. All sit between n^2 and n^3 time complexity, and depend on the grammar/input more than anything.
Here is the interesting thing: ALL(*) parsing is actually O(n^4) in theory, but pretty much guaranteed to be linear in practice. This actually makes sense when you investigate how it works. What it effectively does is dynamically update a lookahead DFA during the parsing process, which, in the same run, is used to match recurring input that has already been "learned" (this matching is how the linear complexity is achieved). And it is done in a way such that the majority of ambiguous situations are resolved without penalty. Since it is typical for code to contain repeated syntactic patterns, the majority of them are learned relatively early in the parsing process. Thus, the further the parsing process advances, the less is left to learn. So most of the time the input is matched directly by the lookahead DFA.

So in short, learning the syntactic patterns is the intensive part. But once they are learned, the parser runs through the code in linear time. So if the number of distinct syntactic patterns is low compared to the code size, which is usually the case, it works fast. That's the gist of it.
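Stripped of all the ATN machinery, the control flow looks something like the sketch below. This is a heavy simplification of the idea as I described it, not ANTLR's actual code; full_prediction stands in for the expensive full analysis.

Code:
#include <map>
#include <vector>

struct LookaheadDfa
{
    std::map<std::pair<int, int>, int> edge; // (state, token) -> next state
    std::map<int, int> decision;             // accepting state -> alternative
    int states = 1;                          // state 0 is the start state
};

int adaptive_predict(LookaheadDfa& dfa, const std::vector<int>& lookahead,
                     int (*full_prediction)(const std::vector<int>&))
{
    int s = 0;
    for (int i = 0; i < (int)lookahead.size(); i++)
    {
        auto d = dfa.decision.find(s);
        if (d != dfa.decision.end()) return d->second; // fast path: cached
        auto e = dfa.edge.find({s, lookahead[i]});
        if (e == dfa.edge.end())
        {
            // Slow path ("learning"): run the full prediction once, then
            // record the rest of this lookahead sequence as a DFA path.
            int alt = full_prediction(lookahead);
            for (int j = i; j < (int)lookahead.size(); j++)
            {
                dfa.edge[{s, lookahead[j]}] = dfa.states;
                s = dfa.states++;
            }
            dfa.decision[s] = alt;
            return alt;
        }
        s = e->second;
    }
    auto d = dfa.decision.find(s);
    return d != dfa.decision.end() ? d->second : full_prediction(lookahead);
}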


Crimsontide
« Reply #5671 on: October 09, 2018, 10:36:24 PM »

I'm not sure I'd call a DFA and/or an LL/LR stack 'learning'; it's not like there's an AI in there or any sort of adaptation, it's all computed beforehand. Unless you're directly referring to the paper (which I haven't read)...

All parsing certainly isn't n^4; it really depends on the grammar. Simple grammars (regular expressions) are all linear, as are LL and LR grammars. Context-free grammars have a worst-case lower bound of whatever that link gave for boolean matrix multiplication, which is certainly less than n^4 (around n^2.5). CYK, because of its memoization, trades memory for speed and gives a worst case of n^3, best case n^2. Generalized LL/LR only run in linear time on unambiguous grammars, but that's just because unambiguous grammars can be parsed in linear time; I'm not sure of their worst case, but I don't think it's more than n^3. Context-sensitive grammars are harder still (recognition is PSPACE-complete), and recursively enumerable grammars are undecidable in general.
J-Snake
« Reply #5672 on: October 10, 2018, 12:15:21 AM »

Yes, I am referring to the paper, the ALL(*) parsing algorithm. Just providing some relevant information in case you're interested in investigating how it works.
ProgramGamer
« Reply #5673 on: October 10, 2018, 03:55:48 AM »

The ALL(*) parsing algorithm invites Smash Mouth to your house to help you decipher arcane programming-language syntax.
Crimsontide
« Reply #5674 on: October 10, 2018, 07:31:39 AM »

Quote from: J-Snake
Yes, I am referring to the paper, the ALL(*) parsing algorithm. Just providing some relevant information in case you're interested in investigating how it works.

Ahh, my mistake. I'll have to take another look at it then; it sounds interesting.
Daid
« Reply #5675 on: November 08, 2018, 01:29:05 PM »

WTF. I know the API is deprecated, but the Win32 DirectShow video capture API is a pain in the ass.

I mean, you just need to set up a capture graph builder with a graph builder, attach three filters to that, set up the proper rendering pin, grab the media control, and a bunch of other things. Obvious, right? Just because you want to grab a single frame from a camera...
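For a sense of the ceremony, the shape of it looks roughly like this. Error checking and device enumeration (ICreateDevEnum) are elided, and every one of these calls can fail separately; treat it as a sketch of the steps above, not working code.

Code:
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

bool build_capture_graph(IBaseFilter* camera) // obtained via ICreateDevEnum
{
    ICaptureGraphBuilder2* capture = nullptr;
    IGraphBuilder* graph = nullptr;
    IMediaControl* control = nullptr;

    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoCreateInstance(CLSID_CaptureGraphBuilder2, nullptr, CLSCTX_INPROC_SERVER,
                     IID_ICaptureGraphBuilder2, (void**)&capture);
    CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void**)&graph);
    capture->SetFiltergraph(graph);  // marry the two builders
    graph->AddFilter(camera, L"camera");
    // ... insert a sample-grabber filter and a null renderer here, then:
    capture->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                          camera, nullptr, nullptr);
    graph->QueryInterface(IID_IMediaControl, (void**)&control);
    control->Run();                  // frames start flowing to the callback
    return true;
}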

Also, MSDN has removed about half of their documentation for these APIs; or at least, Google links come up as 404s.


Well, at least the v4l implementation was easy.
SLiV
« Reply #5676 on: November 09, 2018, 03:06:34 AM »

Quote from: Daid
Also, MSDN has removed about half of their documentation for these APIs; or at least, Google links come up as 404s.
Does webarchive give any respite?
Daid
« Reply #5677 on: November 09, 2018, 06:08:30 AM »

Quote from: Daid
Also, MSDN has removed about half of their documentation for these APIs; or at least, Google links come up as 404s.
Quote from: SLiV
Does webarchive give any respite?
Google's cache showed a few of them, but most of the descriptions there were "TBD". I'm also getting a return code from one of the functions that isn't in the documentation. But if you are ever crazy enough to want to capture camera images on Windows yourself, this implementation seems to work right now:
https://github.com/daid/SeriousProton2/blob/master/src/io/cameraCapture.cpp#L241

It lacks a lot of error checking.
gimymblert
« Reply #5678 on: November 25, 2018, 03:08:23 PM »

I'm still working on the relief shader. I've moved on to trying to find the intercept of a point (the center of the grid for now, arbitrary later) in a regular infinite grid, reduced down to a wrapping square...

So I'm spending time trying to boot my brain into translating into math something I have already solved visually, which should be obvious, but the brain says nuh-uh...

Then I realized that, visually, I can just offset the center (or any arbitrary point) to the corner of the cell, project the ray accordingly, and solve it as a corner intersection... for which I think there is more documentation on the internet, all those ray equations in discrete space.

I don't know if I should be happy or grumpy.
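For reference, the "offset to the cell corner and step" idea is basically the classic grid traversal of Amanatides & Woo. A 2D sketch, assuming the ray direction has no zero components:

Code:
#include <cmath>
#include <cstdio>

// Classic grid DDA: visits every unit cell a 2D ray passes through.
// tMaxX/tMaxY are ray parameters at the next vertical/horizontal grid line.
void walk_grid(float ox, float oy, float dx, float dy, int steps)
{
    int cx = (int)std::floor(ox), cy = (int)std::floor(oy);
    int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1;
    float nextX = (float)cx + (dx > 0 ? 1.0f : 0.0f); // next x grid line
    float nextY = (float)cy + (dy > 0 ? 1.0f : 0.0f); // next y grid line
    float tMaxX = (nextX - ox) / dx, tMaxY = (nextY - oy) / dy;
    float tDeltaX = std::fabs(1.0f / dx), tDeltaY = std::fabs(1.0f / dy);

    for (int i = 0; i < steps; i++)
    {
        std::printf("cell (%d, %d)\n", cx, cy);
        if (tMaxX < tMaxY) { cx += stepX; tMaxX += tDeltaX; }
        else               { cy += stepY; tMaxY += tDeltaY; }
    }
}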

qMopey
« Reply #5679 on: January 25, 2019, 04:34:28 PM »

Word wrapping routines are the absolute worst. They are typically 100% intermingled with rendering code, which makes them a terrible candidate for code reuse. That totally sucks, since I hate writing them and all their edge cases. Ideally a good word wrapping routine takes a bounding box to display the text within and does CPU-side clipping. The point of CPU-side clipping is to very easily batch all the text geometry into a single draw call, as opposed to setting multiple scissor boxes, which requires a different draw call for each scissor box (scissor boxes are GPU-side clipping, typically applied before pixel shaders run, making them very efficient).

Something like this:
Code:
void render_text_to_buffer(font_t* font, vertex_buffer_t* buf, v2 pos, rect_t clip_rect, const char* text, const char* text_end, float wrap_width);

The idea is to render the text at pos (top left corner), clip against clip_rect, and wrap at wrap_width. pos can be adjusted independently of clip_rect to easily implement scrolling, which means wrap_width should be a separate float from clip_rect's width (so word-wrapped text can still be scrolled around inside the clip rect).
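A greedy render-time wrap loop might look something like this. glyph_advance is a hypothetical stand-in for whatever per-glyph x-advance the font actually provides; a real version also has to handle kerning, tabs, and words longer than wrap_width.

Code:
float glyph_advance(char c); // hypothetical: x-advance of one glyph

// Finds one wrapped line: sets *line_end to the end of the current line
// and returns where the next line starts.
const char* layout_line(const char* text, const char* text_end,
                        float wrap_width, const char** line_end)
{
    float x = 0.0f;
    const char* last_space = 0;
    const char* p = text;
    while (p != text_end && *p != '\n')
    {
        if (*p == ' ') last_space = p;
        x += glyph_advance(*p);
        if (x > wrap_width && last_space) // overflow: break at the last
        {                                 // word boundary we saw
            *line_end = last_space;
            return last_space + 1;
        }
        p++;
    }
    *line_end = p;
    return (p != text_end) ? p + 1 : text_end; // skip the '\n', if any
}

Each quad generated for a line can then be clipped against clip_rect on the CPU before being appended to the vertex buffer, so everything still lands in one draw call.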

I have a pretty good raster-font implementation up on GitHub that has a render-to-buffer function, but it does *not* do render-time word wrapping or CPU-side clipping. That means the text would need to be modified in memory to insert newlines before rendering (inefficient and difficult code to write), and each different clip box would require a GPU-side scissor box, meaning more draw calls. More draw calls defeats the purpose of using a raster font in the first place!

So now I get to implement word wrapping and clipping. Great. This will be a day-long task, at least. *facepalm*