The same feeling has been nagging me pretty much ever since I started writing software. On the one hand, there is the magic, almost literally:
we write some text (spells) and the machine does things in the real world. On the other hand, it seems just way too much work to make the machine
do anything more complex than:
10 PRINT "Hello"
20 GOTO 10
Almost like threading a needle with boxing gloves. And that's even if we are careful, if we avoid unnecessary complexity.

And the numbers appear to back that up: Alan Kay mentions Microsoft Office at several hundred million lines of code. From my personal experience, the Wunderlist iOS client was not quite 200 KLOC. For the latter, I can attest to the attention given by the team to not introducing unnecessary bloat, and even to actively reducing it. (For example, we cut our core code by around 30 KLOC thanks to some of the architectural mechanisms such as Storage Combinators.) I am fairly sure I am not the only one with this experience.
So why so much code? After all, Wunderlist was just a To Do List, albeit a really nice one. I can't really say much about Office, I don't think anyone can, because 400 MLOC is just way too much code to comprehend. I think the answer is hinted at by this tweet:

"hexagonal architecture has enabled me to extract the business logic in the product i'm building and currently it's less than 5% of all code"
— 3. life out of balance (@infinitary), November 25, 2017
Glue Code.
It's the unglamorous, invisible code that connects two pieces of software, makes sure that data that's in location A reaches location B unscathed (from the database to the UI, from the UI to the model, from the model to the backend, and so on...). And like Dark Matter, it is invisible and massive.
Why do I say it is "invisible"? After all, the code is right there, isn't it? As far as I can tell, there are several related reasons:
- Glue code is deemed not important. It's just a couple of lines here, and another couple of lines over there ... and soon enough you're talking real MLOCs!
- We cannot directly express glue code. Most of our languages are what I call "DSLs for Algorithms" (see ALGOL, the ALGOrithmic Language), so glue cannot be expressed intentionally, but only by describing algorithms for implementing the glue.
- Glue is quadratic. If you have N features that interact with each other, you have O(N²) pieces of glue to get them to talk to each other.
This last point was illustrated quite nicely by Kevin Greer in a video comparing Multics and Unix development, with the crucial insight being that you need to "program the perimeter, not the area".
For him, the key difference is that Unix had the pipe, and I would agree. The pipe is one-character glue: "|". This is absolutely crucial.
If you have to write even a little custom code every time you connect two modules, you will be in quadratic complexity, meaning that as your features grow your glue code will overwhelm the core functionality. And you will only notice this when it's far too late to do anything about it, because the initial growth rate will be low.
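To make that concrete, here is a minimal sketch in Python (all of the component names are invented for illustration): with bespoke interfaces every pair of components needs its own adapter, while a single shared representation needs only one piece of glue per component and a generic way to compose them.

# Bespoke interfaces: one adapter per pair of components, O(N^2) glue.
def contacts_to_ui(contacts): ...        # Contacts -> UI
def contacts_to_backend(contacts): ...   # Contacts -> Backend
def backend_to_ui(records): ...          # Backend -> UI
# ...and so on; ten components already need dozens of these.

# Shared representation: each component adapts once to a common format,
# and composition itself is generic, the way "|" is for Unix filters.
def compose(*stages):
    """Chain stages that each map an iterable to an iterable."""
    def pipeline(items):
        for stage in stages:
            items = stage(items)
        return items
    return pipeline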
So what can we do about it? I think we need to make glue first class so we can actually write down the glue itself, and not the algorithms that implement the glue. Once we have that, we can and hopefully will create better kinds of glue, ones like the Unix pipe in that they can connect components generically, without requiring custom glue per component pair.
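As a rough illustration of what "writing down the glue itself" might mean (this is plain Python, not Objective-S, and the Pipe class is purely hypothetical): the connector becomes a value you can name, compose and reuse, instead of a pattern of calls repeated at every component boundary.

class Pipe:
    """A generic connector joining any iterable source to any callable sink."""
    def __init__(self, source, *filters):
        self.source = source
        self.filters = filters

    def __or__(self, stage):
        # the glue is spelled as: pipe | stage
        return Pipe(self.source, *self.filters, stage)

    def run_into(self, sink):
        items = self.source
        for f in self.filters:
            items = map(f, items)
        for item in items:
            sink(item)

# The only glue between source, filter and sink is the connector itself:
(Pipe(["hello", "world"]) | str.upper).run_into(print)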
UPDATE
There were some questions as to what to do about this. Well, I am working on it, with Objective-S, and I write fairly frequently on this blog (and occasionally submit my writing to scientific conferences). One post that is immediately relevant: Why Architecture Oriented Programming Matters.
I also don't see Unix Pipes and Filters as The Answer™, they just demonstrate the concept of minimized and constant glue. Expanding on this, and as I wrote in Why Architecture Oriented Programming Matters, I also don't see any one single connector as "the" solution. We need different kinds of connectors, and we need to write them down, to abstract over them and use them natively. Not simulate everything by calling procedures, methods or functions. See also Foxes vs. Hedgehogs.
14 comments:
There are ideas like executable choreographies (swarm communication) intended to be glue code at the integration level of systems. Unfortunately, systems start small and don't need choreographies, and when they become big it is too late...
sounds promising.
I have thought a lot about this issue, and come to believe that a core aspect of the problem is the need for "impedance matching", which so often is the complex part of the glue code. If there are multiple ways of encoding a thing, then glue code is often used to convert between them. But if the components themselves were constructed from common representations/implementations of common concepts, then the logic of conversion would be quite small or non-existent.
@oblinger: Yes, absolutely. This is generally referred to as Architectural Mismatch (paper from 1994). In 2009, they penned a follow-up: Architectural Mismatch: Why Reuse is Still So Hard. Essentially nothing had changed, and I would say the same is true now.
One particularly pernicious form of mismatch is the one between the systems we want/need to build and the languages we build them with. For GUI systems, this was described in-depth by Chatty: Programs = Data + Algorithms + Architecture: consequences for interactive software engineering
More generally, I call it The Gentle Tyranny of Call/Return
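A tiny Python sketch of the impedance matching described in the comment above (the formats and function names are invented): the same concept, a date, is encoded differently by the UI, the model and the backend, so conversion glue appears at every boundary.

from datetime import date

def ui_date_to_model(s: str) -> date:        # "25.11.2017" -> date object
    day, month, year = (int(part) for part in s.split("."))
    return date(year, month, day)

def model_date_to_backend(d: date) -> str:   # date object -> "2017-11-25"
    return d.isoformat()

# If UI, model and backend agreed on one representation of "date",
# both of these functions (and their tests and bugs) would disappear.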
I think I've created the modern equivalent of the pipe: https://www.youtube.com/watch?v=S4LbUv5FsGQ
Are you aware of Lisp?
In 2002, as part of my IEEE Internet Computing column called "Toward Integration", which was generally about distributed systems and distributed middleware glue, I published a column about a very similar notion of software dark matter:
https://steve.vinoski.net/pdf/IEEE-Middleware_Dark_Matter.pdf
and again in 2004:
http://steve.vinoski.net/pdf/IEEE-Dark_Matter_Revisited.pdf
I agree with you that "dark matter" is a great term for this sort of code.
@Kevin Greer:
Hi Kevin,
thanks for stopping by, big fan!
FOAM is neat, but I don't actually think that it, or "modeling" in general, is either (a) equivalent to a connector or (b) a long-term solution.
Explaining why this is the case will probably take at least another blog post (which I do have lined up somewhere), but briefly: the benefit of modelling, having a representation that you can act on intelligently (the "model" is data), is just an artefact of current practice that isn't necessary and can be overcome.
After all, for the compiler the program syntax is also just data. And with forms of metaprogramming other than code generation, you can achieve the same kind of benefits, without the significant drawbacks of code generation.
Practice bears this out. I've encountered some systems that were modelled to a significant degree. They all suffered tremendously from it.
But there are obviously some good and important ideas there. Longer discussion :-)
@beders:
Yes.
Longer answer: you might have noticed the domain of this blog, "metaobject.com". That's actually a homage to the book The Art of the Metaobject Protocol. So no, I don't think LISP fundamentally solves these problems, despite having some useful features.
@beders:
LISP was also my first thought. After all, Marcel did say the issue was making glue code a 'first class' object.
And we all know that Lisp makes all code first class objects.
But this does not get to (my understanding of) the issue he is aiming at:
The languages we presently have are not designed specifically for the construction of glue, yet (it seems to me) glue is a particular kind of code (and there is a lot of it), thus it could be amenable to designing a language that makes such code very compact.
@Steven:
Thanks for pointing me to your articles, I hadn't seen them before... and certainly would have referenced them had I been aware. Very interesting, and straight up my alley, if more specific to distributed programming. I guess my point would be that many of the same issues actually apply to local programming as well.
I really like the following quote from one of your other articles: "We have a general-purpose imperative programming-language hammer, so we treat distributed computing as just another nail to bend to fit the programming models."
That, in spades. And extended to all call/return languages. Erlang's "Pid ! Msg" syntax for async messaging is certainly one escape from the Gentle Tyranny of Call/Return, but I think we need to be able to define our own. Which we typically can with procedures/functions, but not with non-procedural mechanisms, when those are even provided.
@marcel:
Thanks for the pointer. I read the referenced article, "arch mismatch is still hard". I agree.
I see value (in the abstract) in developing a language that is adapted toward being 'glue code'.
I wonder if there is a universal language for such, or if it will be glue code for domain X and a different one for domain Y?
I have a complementary approach. If all components within an ecosystem were built from components so simple that they could not be simpler, then they would tend towards compatibility, since they would use the same sub-components when doing similar things.
My thought is to have a collection of paradigms in a dependency DAG of paradigms. This way each new API within the ecosystem naturally ends up reusing many common sub-paradigms.
It is analogous to the way that JSON became so ubiquitous: it was just a very parsimonious way of encoding structured data.
It seems one could build a web (DAG) of components, each of which is as simple as possible given the rest of the DAG.
Hard to articulate in a comment... but it seems quite relevant to your quest of simplifying glue code.
Sounds interesting, would love to hear more! Do you have a link?
And yes, connectors and components are very much complementary. Of course the tricky part is the "could not be simpler". :-)
And yes, this conversation is probably a bit tricky to have in comments...you can reach me at marcel (at) metaobject.com, or Twitter (link is in the sidebar), LinkedIn, etc.
I think the only way to solve this is via standardisation (which happened 100 years ago during the industrial revolution); we are probably at the same kind of period as back then, when every mechanical manufacturer had their own nuts and bolts, sockets, etc.
What we need to do now is to standardise semantic data, especially data that keeps recurring in almost any field, for example authentication/authorisation related data.
And slowly, the more we standardise, the less glue code (back then: adapters) we will need to build.
Nice article about "glue code", comparing it to dark matter.
For me, it is not a necessity to avoid this "glue code". I don't even think it is good to search for an abstract definition language.
What is important is to always distinguish between business/algorithm code and glue code.
Glue code should be easy to understand.
It should be possible to throw away glue code without losing business rules.
If the same or similar glue code is used over and over, then it is necessary to think about abstracting it. But this does not have to happen immediately at the third occurrence, as is usual for duplicated code. If I have an interface class with, for example, 50 methods, each containing 20 lines of code, then we have 1000 lines of glue code, and that is OK as long as it is code that is simple to understand.
Arne
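A rough Python sketch (all names invented) of the kind of interface method described above: it only validates, converts and forwards, so the whole class carries no business rules and could be thrown away or regenerated without losing any.

class TaskServiceFacade:
    """Hypothetical glue: one of ~50 methods that merely forward to a backend."""
    def __init__(self, backend):
        self._backend = backend

    def rename_task(self, task_id: str, new_title: str) -> bool:
        # validate, convert, forward: no business logic lives here
        if not task_id or not new_title.strip():
            return False
        payload = {"id": task_id, "title": new_title.strip()}
        response = self._backend.send("task.rename", payload)
        return bool(response.get("ok", False))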