I kind of wrote a whole essay for this somehow.
Although this discussion was prefaced by a focus specifically on "layout paradigms" and not "UI" or "internal vs. external" or "code vs. data," I find that UI is actually at the core of making an optimal editor; all those things matter up front and roll back into the layout methods, and you can't keep them out of the discussion. If UI didn't matter, the layout paradigm wouldn't matter, and you would code your levels inline in the source with whatever functioned for the gameplay. As it is, what we are doing with the "tiles/points/nodes/fields/polys" metaphors is mapping data structures to problems via the UI we can present for them. If you want to explore those mappings, reviewing CS literature is probably the strongest way to do so. And after trying to make an editor that does everything, and wasting about a month on it, my guiding principle has become: get the most optimal UI possible with the smallest amount of UI programming.
What I learned from trying to build an editor the other way - explicitly aiming for something refined and reusable for many purposes, with a browsable, WIMP paradigm - is that it ends up mediocre everywhere and bloated with mountains of potential configuration options. If you are building a tile editor, for example: is it a tile editor for materials like solid/liquid/air/conveyor/etc., where the gameplay info comes first and some procedural magic creates the final graphics, or is it a tile editor where the art comes first and the gameplay is done secondarily or separately, with the metadata on the tileset or on a separate layer? A puzzle game might want the former, while a cinematic platformer might want the latter.
In the former case, it can all be done in-game, and you can even make procedural algorithms that respond in real time; but if you write a generic tool, it either bolts on some clumsy script/plugin mechanism to handle the procedural generation, or it doesn't even try for an accurate representation and you have to play "guess-and-check."
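A common form of that "procedural magic" is autotiling: the editor stores only the material per cell, and the final graphic for each cell is derived from which neighbors share its material. A minimal sketch of the idea - the material names and the 4-bit tile-index scheme here are my own illustration, not anything from a specific engine:

```python
# 4-bit autotile sketch: each solid cell gets a tile index 0-15 encoding
# which of its four neighbors are also solid (N=1, E=2, S=4, W=8).
# The gameplay grid is the source of truth; graphics are derived from it.

def autotile(grid):
    """grid: list of rows of material strings; returns a grid of tile
    indices for solid cells, None for everything else."""
    h, w = len(grid), len(grid[0])

    def solid(x, y):
        return 0 <= x < w and 0 <= y < h and grid[y][x] == "solid"

    out = []
    for y in range(h):
        row = []
        for x in range(w):
            if not solid(x, y):
                row.append(None)  # air/liquid/etc. handled elsewhere
                continue
            mask = (solid(x, y - 1) * 1 + solid(x + 1, y) * 2 +
                    solid(x, y + 1) * 4 + solid(x - 1, y) * 8)
            row.append(mask)
        out.append(row)
    return out

grid = [["air",   "solid", "air"],
        ["solid", "solid", "solid"],
        ["air",   "solid", "air"]]
tiles = autotile(grid)  # center cell has all four neighbors: index 15
```

Because the mapping runs straight off the gameplay grid, it can be re-run every edit and shown live in-game, which is exactly what a generic external tool struggles to fake.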
For the latter case, I started doing art in Pro Motion's tilemap tool, which lets you do full-canvas pixel-pushing, auto-reduce to a tileset, and then refine the tiles from there; the combination of art features and tile features is very well focused and gives a faster workflow than a pure tile tool leaning on an external art package. The gameplay data I can then handle with an in-game thing banged together in a day or less. If I want to go gameplay-first in this system, I can use the gameplay data to export an image from the game, import it into Pro Motion, and start drawing on it.
But the main point of the example is that you can't actually "optimize for both," because adding that configuration option hurts the overall UI. I looked at Ogmo Editor's tutorial and it begins, "edit an XML file..." - an immediate (but understandable) UI failure; the requirement of massive flexibility has already strained the app outside of WIMP and into writing code. If it didn't do that, it would become something far more horrific: a travesty of GUI programming that lets you assign generic properties much as you would in code, only less conveniently.
Returning to the layout discussion, I think we should consider "editor libraries," "APIs," or "frameworks" with which to quickly manipulate our vocabulary of tiles, entities, nodes, polys, fields, etc., adding exactly the amount of UI necessary to facilitate this and scripting procedural assistants as needed. An app can only optimize itself one way.
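To make the "editor library" idea concrete, here is one possible shape for it: instead of a generic GUI, a tiny registry that maps shortcut keys to edit actions, so each game wires up exactly the UI it needs and nothing more. All the names here are hypothetical - a sketch of the pattern, not any particular library:

```python
# Minimal "editor library" sketch: a registry binding shortcut keys to
# edit actions. The game supplies the actions; the library supplies only
# dispatch and a help listing. Everything here is illustrative.

class EditorBindings:
    def __init__(self):
        self.actions = {}  # key -> (description, function)

    def bind(self, key, description):
        def register(fn):
            self.actions[key] = (description, fn)
            return fn
        return register

    def press(self, key, *args):
        """Dispatch a key press (plus e.g. click coordinates) to its action."""
        if key in self.actions:
            return self.actions[key][1](*args)

    def help_text(self):
        return "\n".join(f"{k}: {d}" for k, (d, _) in sorted(self.actions.items()))

bindings = EditorBindings()
tilemap = {}

@bindings.bind("t", "place a tile at the clicked cell")
def place_tile(x, y, tile_id=1):
    tilemap[(x, y)] = tile_id
    return tile_id

bindings.press("t", 3, 5)  # simulate: key "t" with a click at (3, 5)
```

The point of the shape is that the per-game cost of a new edit operation is one decorated function, not a new dialog box.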
Of course, the question then is: what exactly do level editors need to facilitate their development? And I keep concluding, "it's all in the hands of the game engine, really." That is, the whole engine should be constructed with editor-building and "scaffolding work" in mind. Pursuing this line of thought is how I've been proceeding on Fireball - designing the entire engine around facilitating the iterative process, reducing debugging time, and so on.
The result: the things that most need interactivity live in-game, accessed via shortcut keys and clicks in the view. The things that need more involved data manipulation are in JSON, XML, or similar human-readable formats; I edit them in a text editor and have the game update from them live. The things that are really time-consuming to lay out, I write procedural code for. I don't want a single button, toggle, slider, input field, or dialog box devoted to editing; those things only aid in browsing, and you are probably going to be the only one using your editor. Console input is my last resort, for when single-key shortcuts and clicks aren't detailed enough and text editors aren't interactive enough. The console is good for pulling up arbitrary queries and reports, or for prefacing interactive commands with more detail; I haven't needed to write code for it in Fireball yet, but I'm sure the need will arise soon.
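The "edit in a text editor, game updates live" part needs surprisingly little machinery: poll the file's modification time once per frame and reload only when it changes. A minimal sketch, with a hypothetical settings file standing in for real game data:

```python
# Live-reload sketch: re-read a JSON data file whenever its mtime changes.
# Call poll() once per frame from the game loop. File name and data shape
# are made up for the demo.

import json
import os
import tempfile

class LiveData:
    def __init__(self, path):
        self.path = path
        self.mtime = None
        self.data = {}
        self.poll()  # initial load

    def poll(self):
        """Reload the file if it changed on disk; returns True on reload."""
        try:
            mtime = os.path.getmtime(self.path)
        except OSError:
            return False  # file missing or mid-save; keep old data
        if mtime != self.mtime:
            with open(self.path) as f:
                self.data = json.load(f)
            self.mtime = mtime
            return True
        return False

# Demo: create a settings file and load it live.
fd, path = tempfile.mkstemp(suffix=".json")
os.close(fd)
with open(path, "w") as f:
    json.dump({"gravity": 9.8}, f)
settings = LiveData(path)
```

One real-world caveat: editors often save in two steps (truncate, then write), so a production version would also catch `json.JSONDecodeError` in `poll()` and keep the last good data, the same way the `OSError` case does.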
But I don't see any reason to step outside these paradigms - you can remix them over and over. For example: have a tilemap-and-point system that maps coordinates and metadata into an XML table of script parameters; that script then procedurally generates landscapes and structures for a 3D world; 3D geometry can hint the script toward particular details and customize each locale in further depth (size and scale of buildings, road delineations, etc.); and within those locales, subset systems can describe rooms, terrain markings, and so on. Everything customizable, in layers, using the most powerful metaphors at each level, but with a very minimal UI for each part.
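The layering above can be sketched in a few lines: a coarse layout layer plus per-locale metadata gets flattened into one parameter record per locale, which the generator script consumes. Plain dicts stand in for the XML table here, and every name, field, and parameter is hypothetical:

```python
# Layered procedural sketch: coarse tilemap + point metadata -> parameter
# table -> generated structures. Dicts stand in for the XML table; all
# names and fields are illustrative.

tilemap = {(0, 0): "town", (1, 0): "forest"}   # coarse layout layer
points = {(0, 0): {"scale": 2.0, "roads": 3}}  # per-locale metadata layer

def build_params(tilemap, points):
    """Merge the layers into one parameter record per locale."""
    params = []
    for coord, kind in sorted(tilemap.items()):
        record = {"coord": coord, "kind": kind}
        record.update(points.get(coord, {}))  # metadata overrides/extends
        params.append(record)
    return params

def generate(params):
    """Stand-in for the procedural script: one 'structure' per locale."""
    return [f"{p['kind']}@{p['coord']} x{p.get('scale', 1.0)}"
            for p in params]

world = generate(build_params(tilemap, points))
```

Each layer only knows about the record format, not about the other layers' UI, which is what keeps every individual editing surface minimal.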