Reconsidering specialization, part the second

Volume 13, Issue 34; 13 Oct 2010

Most customizations aren't really amenable to “blind interchange” anyway, or are they?

[This essay is the second in a series about specialization and extensibility. You might want to read part one first.]

I balk at the notion of specialization and blind interchange partly because the examples that I think of first are things like the EBNF example, which is pretty obviously not something that any amount of fallback processing could make useful. (What do DITA folks do in this case, I wonder?)

But is that really the common case? A few things occur to me.

  1. There are more inline elements in DocBook than you have fingers and toes. By a wide margin, unless you're, uh, from somewhere else. But almost without exception, they are rendered in one of a few ways: italic, boldface, a monospace font, or some combination of those (and many aren't rendered in any special way at all).

    I think it follows that you could make a specialization rule for inlines that would be practical: every new inline nominates the standard inline that should provide its default processing if it's not recognized by the processor. (A sketch of what that fallback might look like follows this list.)

    So if my new banana inline nominates emphasis for default processing, then you just treat all bananas like emphasis.

  2. The very first thing we suggest when users ask about customization is “add a role attribute”. That is the very essence of backwards-compatible processing. Any system that doesn't expect a role value, or the particular role value you chose, will ignore the role and do the normal thing for elements of that type. (There's an example of that after the list too.)

  3. There actually is a lot of similarity across many elements. This point was driven home while I was working on assembly processing. Almost all of the “hierarchy” elements, from set, book, and part, through their components and article, down to the smallest subsection, have basically the same structure: bag of info, bag of stuff.

    It's not exactly that simple; there are variations: parts can contain chapters but not sections, and chapters can contain sections but not parts. But there's a lot of similarity.

    It's not hard to imagine that simple rules could be used to promote elements up and down this hierarchy; in fact, that's exactly what the assembly process does. That's not too dissimilar from what specialization does. (A couple of fragments after the list show the shared shape.)

  4. To a lesser extent, there's some similarity across block elements. Some, like the admonitions, share essentially the same structure. The lists share some similarities. All of the formal objects (things with titles) have at least the titles in common.

    Of course, by the same token, there are a lot of elements that are absolutely unique; they were invented because nothing else could be stretched to do the job.
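
To make the first point a little more concrete, here's a minimal XSLT 2.0 sketch of what that fallback might look like. It's a sketch under two assumptions, both mine: that the nomination travels with the document in a hypothetical fallback attribute (DocBook defines no such attribute, and the nomination could just as easily live in the schema or a mapping file), and that the template is dropped into a stylesheet that already knows how to process the standard inlines.

```xml
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:db="http://docbook.org/ns/docbook"
                exclude-result-prefixes="db"
                version="2.0">

  <!-- Last-resort rule for DocBook-namespace elements this stylesheet
       doesn't otherwise recognize. The low priority keeps it from
       shadowing any "real" template. The fallback attribute is
       hypothetical, purely for illustration. -->
  <xsl:template match="db:*[@fallback]" priority="-10">
    <!-- Rebuild the unknown element as the inline it nominated,
         then process that instead. -->
    <xsl:variable name="substitute" as="element()">
      <xsl:element name="{@fallback}" namespace="http://docbook.org/ns/docbook">
        <xsl:copy-of select="@* except @fallback, node()"/>
      </xsl:element>
    </xsl:variable>
    <xsl:apply-templates select="$substitute"/>
  </xsl:template>

</xsl:stylesheet>
```

Feed that a banana that nominates emphasis and you get emphasis processing; a customization layer that does know about bananas simply wins on priority and this rule never fires.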
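
The second point barely needs an example, but for completeness, here's what that advice looks like in a document. The role value is made up; a stylesheet customized to recognize it can do something special, and every other DocBook toolchain ignores the role and formats this as ordinary emphasis.

```xml
<para xmlns="http://docbook.org/ns/docbook">
  <!-- role="banana" is invented for this example -->
  Peel the <emphasis role="banana">yellow one</emphasis> first.
</para>
```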
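
And for the third point, two invented fragments that show the “bag of info, bag of stuff” shape. The content is meaningless; what matters is that a chapter and an article (like most of the hierarchy elements) share essentially the same skeleton, which is what makes promoting elements up and down the hierarchy plausible.

```xml
<!-- Invented content; only the shared shape matters:
     an info "bag" followed by blocks and nested divisions. -->
<chapter xmlns="http://docbook.org/ns/docbook" version="5.0">
  <info><title>A chapter</title></info>
  <para>Some stuff.</para>
  <section>
    <info><title>A section</title></info>
    <para>More stuff.</para>
  </section>
</chapter>

<article xmlns="http://docbook.org/ns/docbook" version="5.0">
  <info><title>An article</title></info>
  <para>Some stuff.</para>
  <section>
    <info><title>A section</title></info>
    <para>More stuff.</para>
  </section>
</article>
```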

What does this all tell us? I'm not exactly sure. It might just be that the notion of fallback processing (specialization) is not as outlandish as I first thought.

If blind interchange is something that might in principle be useful, and something that might be possible, one question remains: is it practical? That's next.