A while ago, the guys from the "Accidental Tech Podcast" had an episode about the goto fail; disaster and seemed to be struggling a bit with why software is hard/complex, "the most complex man-made thing". Although the fact that it's all built is an important aspect, I think that is a fairly small part of the story.
Probably the biggest problem is the state-space. Software is highly non-linear and discontinuous, unlike for example a bridge, or most other physical objects. If you change or remove a single bolt from a bridge, it is still the same bridge and its characteristics are largely the same. You need to remove quite a number of bolts
for that to change, and the effects become noticeable before that (though they do
get catastrophically non-linear at some point!). If you change one bit in a piece of software, the behavior is completely unpredictable. It could be the same, it could just crash, it could quietly corrupt data. That's why all those corner cases in the layers matter so much. Again, coming back to the bridge, if one beam has steel that has a slightly odd edge-case, it doesn't matter so much, you don't have to know everything about every beam, as long as they are within rough tolerances. And there are tolerances, and you can improve your odds by making things with
tighter tolerances than required. Again, with software this isn't really the case: discrete problems are much harder than continuous ones.
You can see this at work in optimization problems. As long as you have linear equations over real-valued variables, there are efficient algorithms for solving such an optimization problem (the simplex method typically runs in roughly linear time in practice, and interior point methods are polynomial even in the worst case). Intuitively, restricting the variables to take only integer values should be easier/quicker, but the reverse is true, and in a big way: once you have integer programming or mixed-integer programming, everything becomes NP-hard.
In fact, I just saw this in action during Joel Spolsky's talk "You suck at Excel":
he turned on goal-seeking (essentially a solver), and it diverged dramatically.
The problem is that he was rounding the results. Once he turned rounding off,
the solver converged to a solution fairly quickly.
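The effect is easy to reproduce with any iterative solver, not just Excel's goal-seek: here is a minimal sketch (plain C, Newton's method for √2, not related to the spreadsheet in the talk) in which rounding each intermediate value keeps the iteration from ever reaching a solution.

    #include <math.h>
    #include <stdio.h>

    // Newton's method for f(x) = x*x - 2, i.e. solving for sqrt(2).
    // With continuous values it converges in a handful of steps; with each
    // intermediate result rounded, it gets stuck at x = 2 and never converges.
    int main(void) {
        for (int rounded = 0; rounded <= 1; rounded++) {
            double x = 1.0;
            for (int i = 0; i < 10; i++) {
                x = x - (x * x - 2.0) / (2.0 * x);   // Newton step
                if (rounded) x = round(x);           // the "rounding" switch
            }
            printf("%s: x = %f, f(x) = %f\n",
                   rounded ? "rounded" : "continuous", x, x * x - 2.0);
        }
        return 0;
    }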
The second part that they touched upon is that it is all abstract, which I think is what they were getting at with the idea that it is 100% built. Software being abstract means that we have no intuitions from physical objects to guide us. When building a house, everyone has an idea of how difficult it will be to build a single room vs. the whole house, how much material it will take, etc. With software, not so much: this one seemingly little sub-function can potentially be more complex than the entire rest of the program. Even when navigating a hierarchical file-system, there is no indication of how much is hidden behind each directory entry at a particular level.
The last part is related to the second, in that there are no physical or geometric constraints to the architecture and connection complexity. Again, in a physical system we know that something in one corner has very limited ways of influencing something in a different corner, and whatever effect there is will be attenuated by distance in a very predictable way. Again, in software we cannot generally know this. Good software architecture tries to impose artificial constraints to make construction and understanding tractable.
A while ago, I presented as a crazy thought experiment the idea of using Montecito's transistor budget
for creating a chip with tens of thousands of ARM cores. Well, it seems the idea wasn't so crazy after
all: The SpiNNaker project is trying to build a system with a million ARM CPUs, and it is designing
a custom chip with lots of ARM cores on it.
Of course they only have 1/6th the die area of the Montecito and are using a conservative 130nm process rather
than the 90nm of the Montecito or the 14nm that is state of the art, so they have a much lower
transistor budget. They also use the later ARM 9 core and add 54 SRAM banks with 32KB each (from the die
picture, 3 per core), so in the
end they "only" put 18 cores on the chip, rather than many thousands. Using a state of the art
14nm process would mean roughly 100 times more transistors, a Montecito-sized die another factor
of six. At that point, we would be at 10000 cores per chip, rather than 18.
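Back of the envelope, using the rough factors above (and assuming transistor density scales with the square of the feature size):

    18 cores × ~100 (130nm → 14nm transistor budget) × 6 (Montecito-sized die) ≈ 10,000 cores per chip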
One of the many interesting features of the SpiNNaker project is that "the micro-architecture assumes
that processors are ‘free’: the real cost of computing is energy." This has interesting consequences
for potentially simplifying object- or actor-oriented programming. Alan Kay's original idea of
objects was to scale down the concept of "computer", so every object is essentially a self-contained
computer with CPU and storage, communicating with its peers via messages. (Erlang is probably the
closest implementation of this concept).
In our core-scarce computing environments, this had to
be simulated by multiplexing all (or most) of the objects onto a single von Neumann computer, usually
with a shared address space. If cores are free and we have them in the tens of thousands, we can
start entertaining the idea of no longer simulating object-oriented computing, but rather of
implementing it directly by giving each object its own core and attached memory. Yes, utilization
of these cores would probably be abysmal, but with free cores low utilization doesn't matter, and
low utilization (hopefully) means low power consumption.
Even at 1% utilization, 10000 cores would still mean throughput
equivalent to 100 ARM9 cores running full tilt, and I am guessing pretty low power consumption
if the transistors not being used are actually off. More important than 100 core-equivalents running is
probably the equivalent of 100 bus interfaces running at full tilt. The aggregate on-chip memory
bandwidth would be staggering.
You could probably also run the whole
thing at lower clock frequencies, further reducing power. With each object having around 96KB
of private memory to itself, we would probably be looking at coarser-grained objects, with pure
data being passed between the objects (Objective-C or Erlang style) and possibly APL-like
array extensions (see OOPAL).
Overall, that would lead to a de-emphasis of expression-oriented programming models, and a more
architectural focus.
This sort of idea isn't new; the Transputer got there in
the late 80ies, but it was conceived when Moore's law didn't just increase transistor counts, but also
clock-frequencies, and so Intel could always bulldoze away more intelligent architectures with better
fabs. This has stopped: clock-frequencies have been stagnant for a while and even geometries are starting
to stutter. So maybe now the time for intelligent CPU architectures has finally come, and
with it the impetus for examining our assumptions about programming models.
As always, comments welcome here or on Hacker News.
UPDATE: The kilo-cores are here:
KiloCore: 1000 processors,
1.78 trillion ops/sec, and at 1.78 pJ/op super power-efficient, so at 150 GOps/s it only uses 0.7 watts. On a 32nm process, so
not yet maxed out.
Crusty here. I just saw that my young friend Dave Abrahams gave a talk that was based on a little keyboard session we had just a short while ago. Really sharp
fellow, you know, I am sure he'll go far someday, but that's the problem with young folk these days: they
go rushing out telling everyone what they've learned when the lesson is only one third of the way through.
You see, I was trying to impart some wisdom to the fellow using the old Hegelian dialectic: thesis, antithesis,
synthesis. And yes, I admit I wasn't completely honest with him, but I swear it was just a little white lie
for a good educational cause. You see, I presented ADT (Abstract Data Type) programming to him and called
it OOP. It's a little ruse I use from time to time, and decades of Java, C++ and C# have gone a long way
to making it an easy one.
Thesis
So the thesis was simple: we don't need all that fancy shmancy OOP stuff, we can just use old fashioned
structs 90% of the time. In fact, I was going to show him how easy things look in MC68K assembly
language, with a few macros for dispatch, but then thought better of it, because he might have seen
through my little educational ploy.
Of course, a lot of what I told him was nonsense. For example, OOP isn't at all about subclassing; the guy who coined the term, Alan I think, wrote: "So I decided to leave out inheritance as a built-in feature until I understood it better." So not only isn't inheritance the defining feature of OOP as I let on, it actually
wasn't even in the original conception of the thing that was first called "object-oriented programming".
Absolute reliance on inheritance and therefore structural relationships is, in fact, a defining feature
of ADT-oriented programming, particularly when strong type systems are involved. But more on that later.
In fact, OOP best practices have always (since the late 80ies and early 90ies) called for composition
to be used for known axes of customization, with inheritance used for refinement, when a component needs
to be adapted in a more ad-hoc fashion. If that knowledge had filtered down to young turks writing
their master's thesis back in, what, 1997,
you can rest assured that the distinction was well known and not exactly rocket science.
Anyway, I kept all that from Dave in order to really get him excited about the idea I was peddling to
him, and it looks like I succeeded. Well, a bit too well, maybe.
Antithesis
Because the idea was really to first get him all excited about not needing OOP, and then turn around
and show him that all the things I had just shown him in fact were OOP. And still are,
as a matter of fact. Always have been. It's that sort of confusion of conflicting, truth-seeming
ideas that gets the gray matter going. You know, "sound of one hand clapping" kind of stuff.
The reason I worked with him on a little graphics context example was, of course, that I had written
a graphics context wrapper on top of CoreGraphics a good three years ago. In Objective-C. With a protocol
defining the, er, protocol. It's called MPWDrawingContext
and lives on GitHub, but I also wrote about it, showing how protocols combine with blocks to make CoreGraphics patterns
easy and intuitive to use and how to combine this type of drawing context with a more advanced
OO language to make live coding/drawing possible.
And of course this is real live programming, not the "not-quite-instant replay" programming that
is all that Swift playgrounds can provide.
The simple fact is that actual Object Oriented Programming is Protocol Oriented Programming,
where Protocol means a set of messages that an object understands. In a true and pure object
oriented language like Smalltalk, it is all that can be, because the only way to interact with an
object is to send messages. Even if you do simple metaprogramming like checking the class, you are
still sending a message. Checking for object identity? Sending a message. Doing more intrusive
metaprogramming like "directly" accessing instance variables? Message. Control structures like
if and while? Message. Creating ranges? Message. Iterating? Message.
Comparing object hierarchies? I think you get the drift.
So all interacting is via messages, and the set of messages is a protocol. What does that make
OO? Say it together: Protocol Oriented Programming.
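To make that concrete in Objective-C terms (a hypothetical protocol, loosely in the spirit of MPWDrawingContext, not the actual one): the protocol is just the set of messages, and client code programs against it without caring which class is on the other end.

    #import <Foundation/Foundation.h>

    // The protocol is the set of messages a drawing context understands.
    @protocol DrawingContext <NSObject>
    - (void)moveToX:(double)x y:(double)y;
    - (void)lineToX:(double)x y:(double)y;
    - (void)stroke;
    @end

    // Any object that understands these messages will do, whether it wraps
    // CoreGraphics, generates PDF, or just records the calls for testing.
    static void drawBox(id <DrawingContext> context,
                        double x, double y, double w, double h) {
        [context moveToX:x y:y];
        [context lineToX:x + w y:y];
        [context lineToX:x + w y:y + h];
        [context lineToX:x y:y + h];
        [context lineToX:x y:y];
        [context stroke];
    }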
Synthesis
So we don't need objects when we have POP, but at the same time POP is OOP. Confused? Well,
that's kind of the point of a good dialectic argument.
One possible solution to the conflict could be that we don't need any of this stuff. C, FORTRAN
and assembly were good enough for me, they should be good enough for you. And that's true to
a large extent. Excellent software was written using these tools (and ones that are much, much
worse!), and tooling is not the biggest factor determining success or failure of software projects.
On the other hand, if you want to look beyond what OOP has to offer, statically typed ADT programming
is not the answer. It is the question that OOP answered. And statically typed ADT programming
is not Protocol Oriented Programming, OOP is POP. Repeat after me: OOP is POP, POP is OOP.
To go beyond OOP, we actually need to go beyond it, not step back in time to the early 90ies, forget
all we learned in the meantime and declare victory. My personal take is that our biggest challenges
are in "the big", meaning programming in the large. How to connect components together in a meaningful,
tractable and understandable fashion. Programming the components is, by and large, a solved problem,
making it a tiny bit better may make us feel better, but it won't move the needle on productivity.
Making architecture malleable, user-definable and thus a first class citizen of our programming
notation, now that is a worthwhile goal and challenge.
I like bindings. I also like Key Value Observing. What they do is undeniably cool: you do some initial setup, and presto: magic! You change a value over here, and another
value over there changes as well. Action at a distance. Power.
What they do is also undeniably valuable. I'd venture that nobody actually
likes writing state
maintenance and update code such as the following: when the user clicks this button, or finishes entering
text in that textfield, take the value and put it over here. If the underlying
value changes, update the textfield. If I modify this value, notify
these clients that the value has changed so they can update themselves accordingly.
That's boring. There is no glory in state maintenance code, just potential for
failure when you screw up something this simple.
Finally, their implementation is also undeniably cool: observing an attribute
of a generic
object creates a private subclass for that object (who says we can't do
prototype-based programming in Objective-C?), swizzles the object's
class pointer to that private subclass and then replaces the attribute's
(KVO-compliant) accessor methods with new ones that hook into the
KVO system.
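A small sketch of that machinery in action (hypothetical Person and Logger classes): after addObserver:forKeyPath:options:context: the runtime class of the observed object changes, while -class keeps reporting the original.

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    @interface Person : NSObject
    @property (nonatomic, copy) NSString *name;
    @end
    @implementation Person
    @end

    @interface Logger : NSObject
    @end
    @implementation Logger
    - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                            change:(NSDictionary *)change context:(void *)context {
        NSLog(@"%@ changed to %@", keyPath, change[NSKeyValueChangeNewKey]);
    }
    @end

    int main(void) {
        @autoreleasepool {
            Person *person = [Person new];
            Logger *logger = [Logger new];
            NSLog(@"%s", class_getName(object_getClass(person)));  // Person
            [person addObserver:logger forKeyPath:@"name"
                        options:NSKeyValueObservingOptionNew context:NULL];
            NSLog(@"%s", class_getName(object_getClass(person)));  // NSKVONotifying_Person
            NSLog(@"%@", [person class]);                          // still claims Person
            person.name = @"Alice";                                // triggers the observer
            [person removeObserver:logger forKeyPath:@"name"];
        }
        return 0;
    }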
Despite these positives, I have actively removed bindings code from
projects I have worked on, don't use either KVO or bindings myself and
generally recommend staying away from them. Why on earth would I
do that?
Excursion: Constraint Solvers
Before I can answer that question, I have to go back a little and talk about
constraint solvers.
The idea of setting up relationships once and then having the system maintain them
without manually shoveling values back and forth is not exactly new; the first variant
I am aware of was Sketchpad,
Ivan Sutherland's PhD Thesis from 1961/63 (here with narration by Alan Kay):
I still love Ivan's answer to the question as to how he could invent computer graphics,
object orientation and constraint solving in one fell swoop: "I didn't know it was hard".
The first system I am aware of that integrated constraint solving with an object-oriented
programming language was ThingLab, implemented on top of Smalltalk by Alan Borning at Xerox PARC around 1978 (where else...):
While the definition
of a path is simple, the idea behind it has proved quite powerful and has been essential
in allowing constraint- and object-oriented metaphors to be integrated. [..] The notion
of a path helps strengthen [the distinction between inside and outside of an object] by
providing a protected way for an object to provide external reference to its parts and
subparts.
Yes, that's a better version of KVC. From 1981.
Alan Borning's group at the University of Washington continued working on constraint solvers
for many years, with the final result being the Cassowary linear constraint solver (based on the simplex
algorithm) that was picked up by Apple for Autolayout. The papers on Cassowary and
constraint hierarchies should help with understanding why Autolayout does what it does.
A simpler form of constraint is the one-way dataflow constraint.
A one-way, dataflow constraint is an equation of the form y = f(x1,...,xn) in which the formula on the right side
is automatically re-evaluated and assigned to the variable y whenever any variable xi changes.
If y is modified from
outside the constraint, the equation is left temporarily unsatisfied, hence the attribute “one-way”. Dataflow constraints are recognized as a powerful programming methodology in a variety of contexts because of their versatility and simplicity. The most widespread application of dataflow constraints is perhaps embodied by spreadsheets.
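A minimal sketch of the idea (a hypothetical Cell class, not a real constraint system, and ignoring retain cycles for brevity): the formula is re-run and its result assigned to y whenever one of its inputs changes, exactly like a spreadsheet cell.

    #import <Foundation/Foundation.h>

    @interface Cell : NSObject
    @property (nonatomic, strong) NSNumber *value;
    @property (nonatomic, copy) void (^onChange)(void);
    @end
    @implementation Cell
    - (void)setValue:(NSNumber *)value {
        _value = value;
        if (_onChange) _onChange();
    }
    @end

    int main(void) {
        @autoreleasepool {
            Cell *x1 = [Cell new], *x2 = [Cell new], *y = [Cell new];
            x1.value = @0; x2.value = @0;
            // y = x1 + x2: re-evaluated whenever x1 or x2 changes.
            void (^constraint)(void) = ^{
                y.value = @(x1.value.doubleValue + x2.value.doubleValue);
            };
            x1.onChange = constraint;
            x2.onChange = constraint;
            x1.value = @3;    // y becomes 3
            x2.value = @4;    // y becomes 7
            y.value  = @100;  // set from outside: constraint temporarily unsatisfied
            x1.value = @5;    // re-evaluated: y becomes 9
            NSLog(@"y = %@", y.value);
        }
        return 0;
    }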
The most important lessons they found were the following:
constraints should be allowed to contain arbitrary code that is written in the underlying toolkit language and does not require any annotations, such as parameter declarations
constraints are difficult to debug and better debugging tools are needed
programmers will readily use one-way constraints to specify the graphical layout of an application, but must be carefully and time-consumingly trained to use them for other purposes.
However, these really are just the headlines, and particularly for Cocoa programmers
the actual reports are well worth reading as they contain many useful pieces of
information that aren't included in the summaries.
Back to KVO and Cocoa Bindings
So what does this history lesson about constraint programming have to do with KVO
and Bindings? You probably already figured it out: bindings are one-way
dataflow constraints, specifically with the equation limited to y = x1.
More complex equations can be obtained by using NSValueTransformers. KVO
is more of an implicit invocation
mechanism that is used primarily to build ad-hoc dataflow constraints.
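Set up in code rather than Interface Builder, such a binding looks roughly like this (textField, model and the transformer name are hypothetical; the transformer would be an NSValueTransformer subclass registered with +setValueTransformer:forName: and supplies the f in y = f(x)):

    #import <Cocoa/Cocoa.h>

    // Keeps textField's value equal to f(model.celsius); without the
    // transformer option, the binding is just y = x1.
    [textField bind:NSValueBinding
           toObject:model
        withKeyPath:@"celsius"
            options:@{ NSValueTransformerNameBindingOption : @"CelsiusToFahrenheit" }];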
The specific problems of the API and the implementation have been documented
elsewhere, for example by Soroush Khanlou and Mike Ash, who not only suggested and
implemented improvements back in 2008, but even followed up on them in 2012. All
these problems and workarounds
demonstrate that KVO and Bindings are very sophisticated, complex and error prone
technologies for solving what is a simple and straightforward task: keeping
data in sync.
To these implementation problems, I would add performance: even
just adding the willChangeValueForKey: and didChangeValueForKey:
message sends in your setter (these are usually added automagically for you) without triggering any notifications makes that setter 30 times slower (from 5ns to
150ns on my computer) than a simple setter that just sets and retains the object.
Actually having that access trigger a notification takes the penalty to a factor of over 100
(5ns vs. over 540ns), even when there is only a single observer. I am pretty sure
it gets worse when there are lots of observers (there used to be an O(n^3)
algorithm in there, that was fortunately fixed a while ago). While 500ns may
not seem a lot when dealing with UI code, KVO tends to be implemented at
the model layer in such a way that a significant number of model data accesses
incur at least the base penalties. For example KVO notifications were one of the primary
reasons for NSOperationQueue's somewhat anemic performance back when
we measured it for the Leopard release.
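A sketch of the kind of comparison behind those numbers (hypothetical Model class; the absolute figures will of course vary by machine):

    #import <Foundation/Foundation.h>

    @interface Model : NSObject { id _plain; id _observed; }
    @end

    @implementation Model
    // Baseline: just set (and, under ARC, retain) the object.
    - (void)setPlain:(id)anObject { _plain = anObject; }
    // The same store, bracketed by the KVO change notifications.
    - (void)setObserved:(id)anObject {
        [self willChangeValueForKey:@"observed"];
        _observed = anObject;
        [self didChangeValueForKey:@"observed"];
    }
    @end

    // Timing a tight loop over each setter (e.g. mach_absolute_time around a few
    // million calls) shows the difference; registering an observer with
    // addObserver:forKeyPath:options:context: widens it further.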
Not only is the constraint graph not available at run time, there is also no
direct representation at coding time. All there is is either code or IB settings
that construct such a graph indirectly, so the programmer has to infer the
graph from what is there and keep it in her head. There are also no formulae; the best
we can do is ValueTransformers and
keyPathsForValuesAffectingValueForKey:.
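The latter is the closest thing to declaring a formula: a class can state which other keys a derived value depends on, so observers of the derived key are notified when any of them changes. A minimal sketch, assuming a class with firstName and lastName properties:

    // Observers of fullName are notified whenever firstName or lastName changes.
    + (NSSet *)keyPathsForValuesAffectingFullName {
        return [NSSet setWithObjects:@"firstName", @"lastName", nil];
    }
    - (NSString *)fullName {
        return [NSString stringWithFormat:@"%@ %@", self.firstName, self.lastName];
    }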
As best as I can tell, the reason for this state of affairs is that there simply
wasn't any awareness of the decades of
research and practical experience with constraint solvers at the time (How
do I know? I asked; the answer was "Huh?").
Anyway, when you add it all up, my conclusion is that while I would really,
really, really like a good constraint solving system (at least for spreadsheet
constraints), KVO and Bindings are not it. They are too simplistic, too
fragile and solve too little of the actual problem to be worth the trouble.
It is easier to just write that damn state maintenance code, and infinitely
easier to debug it.
I think one of the main communication problems between advocates for and
critics of KVO/Bindings is that the advocates are advocating more for
the concept of constraint solving, whereas critics are critical of the
implementation. How can these critics not see that despite a few flaws,
this approach is obviously
The Right Thing™? How can the advocates not see the
obvious flaws?
Functional Reactive Programming
As far as I can tell, Functional Reactive Programming (FRP) in general and Reactive
Cocoa in particular are another way of scratching the same itch.
[..] is an integration of declarative [..] and imperative object-oriented programming. The primary goal of this integration is to use constraints to express relations among objects explicitly -- relations that were implicit in the code in previous languages.
Sounds like FRP, right? Well, the first "[..]" part is actually "Constraint Imperative Programming" and the second is "constraints",
from the abstract of a 1994 paper. Similarly, I've seen it stated that FRP is like a spreadsheet.
The connection between functional programming and constraint programming is also well
known and documented in the literature, for example the experience report above states the
following:
Since constraints are simply functional programming dressed up with syntactic sugar, it should not be surprising that 1) programmers do not think of using constraints for most programming tasks and, 2) programmers require extensive training to overcome their procedural instincts so that they will use constraints.
However, you wouldn't be able to tell that there's a relationship there from reading
the FRP literature, which focuses exclusively on the connection to functional
programming via functional reactive animations and Microsoft's Rx extensions. Explaining and particularly motivating FRP this way has the
fundamental problem that whereas functional programming, which is by definition
static/timeless/non-reactive, really needs something to become interactive,
reactivity is already inherent in OO. In fact, reactivity is the quintessence of
objects: all computation is modeled as objects reacting to messages.
So adding reactivity to an object-oriented language is, at first blush, nonsensical
and certainly causes confusion when explained this way.
I was certainly confused, because until I found this one
paper on reactive imperative programming,
which adds constraints to C++ in a very cool and general way,
none of the documentation, references or papers made the connection that seemed so
blindingly obvious to me. I was starting to question my own sanity.
Architecture
Additionally, one-way dataflow constraints creating relationships between program variables
can, as far as I can tell, always be replaced by a formulation where the dependent
variable is simply replaced by a method that computes the value on-demand. So
instead of setting up a constraint between point1.x and point2.x,
you implement point2.x as a method that uses point1.x to
compute its value and never stores that value. Although this may evaluate more
often than necessary rather than memoizing the value and computing just once, the
additional cost of managing constraint evaluation is such that the two probably
balance.
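A minimal sketch of that formulation (hypothetical classes): instead of a constraint pushing point1.x into point2.x, the dependent x is derived on demand and never stored.

    #import <Foundation/Foundation.h>

    @interface Point1 : NSObject
    @property (nonatomic) double x;
    @end
    @implementation Point1
    @end

    @interface Point2 : NSObject
    @property (nonatomic, strong) Point1 *point1;
    @property (nonatomic) double offset;
    - (double)x;
    @end
    @implementation Point2
    // No stored x and no update code: the value is computed each time it is asked for.
    - (double)x { return self.point1.x + self.offset; }
    @end

Note that Point2 now hard-codes its dependence on Point1, which is exactly the coupling discussed next.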
However, such an implementation creates permanent coupling and requires dedicated
classes for each relationship. Constraints thus become more of an architectural
feature, allowing existing, usually stateful components to be used together without
having to adapt each component for each individual ensemble it is a part of.
Panta Rhei
Everything flows, so they say. As far as I can tell, two different
communities, the F(R)P people and the OO people, came up with very similar
solutions based on data flow. The FP people wanted to become more reactive/interactive,
and achieved this by modeling time as sequence numbers in streams of values, sort
of like Lucid or other dataflow languages.
The OO people wanted to be able to specify relationships declaratively and have
their system figure out the best way to satisfy those constraints, with
a large and useful subset of those constraints falling into the category of
the one-way dataflow constraints that, at least to my eye, are equivalent
to FRP. In fact, this sort of state maintenance and update-propagation
pops up in lots of different places, for example makefiles or other
build systems, web-server generators, publication workflows etc. ("this
OmniGraffle diagram embedded as a PDF into this LaTeX document that
in turn becomes a PDF document" -> the final PDF should update
automatically when I change the diagram, instead of me having to
save the diagram, export it to PDF and then re-run LaTeX).
What's kind of funny is that these two groups seem to have converged
in essentially the same space, but they seem to not be aware of
each other; maybe they are phase-shifted with respect to each other?
Part of that phase-shift is, again, communication. The FP guys
couch everything in "must destroy all humans, er, state" rhetoric,
which doesn't do much to convince OO guys who know that for most
of their programs, state isn't an implementation detail but fundamental
to their applications. Also practical experience does not support the
idea that the FP approach is obvious:
Unfortunately, given the considerable amount of time required to train students to use constraints in a non-graphical manner, it does not seem reasonable to expect that constraints will ever be widely used for purposes other than graphical layout. In retrospect this result should not have been surprising. Business people readily use constraints in spreadsheets because constraints match their mental model of the world. Similarly, we have found that students readily use constraints for graphical layout since constraints match their mental model of the world, both because they use constraints, such as left align or center, to align objects in drawing editors, and because they use constraints to specify the layout of objects in precision paper sketches, such as blueprints. However, in their everyday lives, students are much more accustomed to accomplishing tasks using an imperative set of actions rather than using a declarative set of actions.
Of course there are other groups hanging out in this convergence zone, for example the
Unix folk with their pipes and filters. That is also not too surprising if
you look at the history:
So, we were all ready. Because it was so easy to compose processes with shell scripts. We were already doing that. But, when you have to decorate or invent the name of intermediate files and every function has to say put your file there. And the next one say get your input from there. The clarity of composition of function, which you perceived in your mind when you wrote the program, is lost in the program. Whereas the piping symbol keeps it. It's the old thing about notations are important.
I think the familiarity with Unix pipes also increases the itch: why can't I have
that sort of thing in my general purpose programming language? Especially when
it can lead to very concise programs, such as the Quartz-like graphics subsystem
Gezira written in
under 400 lines of code using the Nile dataflow language.
Moving Forward
I too have heard the siren sing.
I also think that a more spreadsheet-like programming model would not just make my life
as a developer easier, it might also make software more approachable for end-user adaptation and tinkering,
contributing to a more meaningful version of open source. But how do we get there?
Apart from a reasonable implementation and better debugging support, a new system would need much tighter
language integration. Preferably there would be a direct syntax for expressing constraints
such as that available in constraint imperative programming languages or constraint extensions to existing
languages like
Ruby or JavaScript.
This language support should be unified as much as
possible between different constraint systems, not one mechanism for Autolayout and a
completely different one for Bindings.
Supporting constraint programming has always been one of the goals of my Objective-Smalltalk project, and so far that has informed the
PolymorphicIdentifiers that support a uniform interface for data backed by different types of
stores, including one or more constraint stores supporting cooperating solvers, filesystems or web-sites. More needs
to be done, such as extending the data-flow connector hierarchy to conceptually integrate
constraints. The idea is to create a language that does not actually include constraints
in its core, but rather provides sufficient conceptual, expressive and implementation
flexibility to allow users to add such a facility in a non-ad-hoc way so that it is
fully integrated into the language once added. I am not there yet, but all the results
so far are very promising. The architectural focus of Objective-Smalltalk also ties
in well with the architectural interpretation of constraints.
There is a lot to do, but on the other hand I think the payback is huge, and there is
also a large body of existing theoretical,
practical and empirical groundwork to fall back on, so I think the task is doable.
Your feedback, help and pull requests would be very much appreciated!
DHH recently made the claim that Dependency Injection
is not entirely virtuous, and got support from Tim Bray. While both make good points, they are both wrong.
Having hard-coded class-names like in the example Time.now is effectively the same as communicating
via global variables. DHH's suggestion of stubbing out the Time class's now is selling us mutable
global variables as the solution to global variables. Or more precisely: passing an argument to a method
by modifying a global variable that the method reads out and restoring the state of the global variable afterward.
If that's "better", I don't really want to see "worse", and not wanting that sort of thing has nothing to do with being a Java drone limited by the language. And my experience with time-dependent systems tells
me that you really want to pass the time into such a system generally, not just for unit testing.
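Translated into Objective-C terms (hypothetical Report classes), the difference looks like this: the first version reaches for the ambient [NSDate date], the equivalent of Time.now, while the second has the time passed in, for tests and everything else.

    #import <Foundation/Foundation.h>

    // Hard-coded global: the Objective-C equivalent of calling Time.now.
    @interface GlobalClockReport : NSObject
    - (NSString *)header;
    @end
    @implementation GlobalClockReport
    - (NSString *)header {
        return [NSString stringWithFormat:@"Generated %@", [NSDate date]];
    }
    @end

    // Time passed in: callers (and tests) control it, and no global state is touched.
    @interface InjectedClockReport : NSObject
    - (NSString *)headerForDate:(NSDate *)date;
    @end
    @implementation InjectedClockReport
    - (NSString *)headerForDate:(NSDate *)date {
        return [NSString stringWithFormat:@"Generated %@", date];
    }
    @end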
Of course, having n-levels of AbstractFactoryFactory indirection could actually be argued
as being worse, as Tim convincingly does, but that's one implementation of DI that's hobbled by Java's limitations. For a DI solution
that's actually simple and elegant, check out Newspeak's module system (PDF): there is no global namespace, modules are parametrized and
all names are dynamically resolved.
If you want synthetic time for a module, just instantiate that module with your own Time class.