A while ago, I presented as a crazy thought experiment the idea of using Montecito's transistor budget
for creating a chip with tens of thousands of ARM cores. Well, it seems the idea wasn't so crazy after
all: The SpiNNaker project is trying to build a system with a million ARM CPUs, and it is designing
a custom chip with lots of ARM cores on it.
Of course, they only have 1/6th the die area of the Montecito and are using a conservative 135nm process rather
than the 90nm of the Montecito or the 14nm that is state of the art, so they have a much lower
transistor budget. They also use the later ARM9 core and add 54 SRAM banks of 32KB each (three per
core, judging from the die picture), so in the
end they "only" put 18 cores on the chip, rather than many thousands. Using a state of the art
14nm process would mean roughly 100 times more transistors, a Montecito-sized die another factor
of six. At that point, we would be at 10000 cores per chip, rather than 18.
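The arithmetic behind that estimate is easy to check; here is a quick back-of-the-envelope sketch (Python, using the numbers from above):

```python
# Back-of-the-envelope check of the scaling estimate above.
# Transistor density scales roughly with the inverse square of the feature size.
cores_now = 18        # cores on the actual SpiNNaker chip
process_now = 135     # nm, the conservative process mentioned above
process_soa = 14      # nm, state of the art
die_factor = 6        # Montecito-sized die vs. the SpiNNaker die

density_gain = (process_now / process_soa) ** 2
print(round(density_gain))                           # 93, i.e. "roughly 100x"
print(round(cores_now * density_gain * die_factor))  # 10042, i.e. ~10,000 cores
```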
One of the many interesting features of the SpiNNaker project is that "the micro-architecture assumes
that processors are ‘free’: the real cost of computing is energy." This has interesting consequences
for potentially simplifying object- or actor-oriented programming. Alan Kay's original idea of
objects was to scale down the concept of "computer", so every object is essentially a self-contained
computer with CPU and storage, communicating with its peers via messages. (Erlang is probably the
closest implementation of this concept).
In our core-scarce computing environments, this had to
be simulated by multiplexing all (or most) of the objects onto a single von Neumann computer, usually
with a shared address space. If cores are free and we have them in the tens of thousands, we can
start entertaining the idea of no longer simulating object-oriented computing, but rather of
implementing it directly by giving each object its own core and attached memory. Yes, utilization
of these cores would probably be abysmal, but with free cores low utilization doesn't matter, and
low utilization (hopefully) means low power consumption.
Even at 1% utilization, 10000 cores would still mean throughput
equivalent to 100 ARM 9 cores running full tilt, and I am guessing pretty low power consumption
if the transistors not being used are actually off. More important than 100 core-equivalents running is
probably the equivalent of 100 bus interfaces running at full tilt. The aggregate on-chip memory
bandwidth would be staggering.
You could probably also run the whole
thing at lower clock frequencies, further reducing power. With each object having around 96KB
of private memory to itself, we would probably be looking at coarser-grained objects, with pure
data being passed between the objects (Objective-C or Erlang style) and possibly APL-like
array extensions (see OOPAL).
Overall, that would lead to a de-emphasis of expression-oriented programming models, and a more
architectural focus.
This sort of idea isn't new; the Transputer got there in
the late '80s, but it was conceived when Moore's law didn't just increase transistor counts but also
clock frequencies, and so Intel could always bulldoze away more intelligent architectures with better
fabs. That has stopped: clock frequencies have been stagnant for a while and even process geometries are starting
to stutter. So maybe now the time for intelligent CPU architectures has finally come, and
with it the impetus for examining our assumptions about programming models.
As always, comments welcome here or on Hacker News.
UPDATE: The kilo-cores are here:
KiloCore: 1000 processors, 1.78 trillion ops/sec, and at 1.78 pJ/op super power-efficient, so at
150 GOps/s it uses only 0.7 watts. And it's on a 32nm process, so
not yet maxed out.
Crusty here, I just saw that my young friend Dave Abrahams gave a talk that was based on a little keyboard session we had just a short while ago. Really sharp
fellow, you know, I am sure he'll go far someday, but that's the problem with young folk these days: they
go rushing out telling everyone what they've learned when the lesson is only one third of the way through.
You see, I was trying to impart some wisdom on the fellow using the old Hegelian dialectic: thesis, antithesis,
synthesis. And yes, I admit I wasn't completely honest with him, but I swear it was just a little white lie
for a good educational cause. You see, I presented ADT (Abstract Data Type) programming to him and called
it OOP. It's a little ruse I use from time to time, and decades of Java, C++ and C# have gone a long way
to making it an easy one.
Thesis
So the thesis was simple: we don't need all that fancy shmancy OOP stuff, we can just use old fashioned
structs 90% of the time. In fact, I was going to show him how easy things look in MC68K assembly
language, with a few macros for dispatch, but then thought better of it, because he might have seen
through my little educational ploy.
Of course, a lot of what I told him was nonsense; for example, OOP isn't at all about subclassing. The
guy who coined the term, Alan I think, wrote: "So I decided to leave out inheritance as a built-in feature until I understood it better." So not only is inheritance not the defining feature of OOP as I let on, it actually
wasn't even in the original conception of the thing that was first called "object-oriented programming".
Absolute reliance on inheritance and therefore structural relationships is, in fact, a defining feature
of ADT-oriented programming, particularly when strong type systems are involved. But more on that later.
In fact, OOP best practices have always (since the late '80s and early '90s) called for composition
to be used for known axes of customization, with inheritance used for refinement, when a component needs
to be adapted in a more ad-hoc fashion. If that knowledge had filtered down to young turks writing
their master's theses back in, what, 1997,
you can rest assured that the distinction was well known and not exactly rocket science.
Anyway, I kept all that from Dave in order to really get him excited about the idea I was peddling to
him, and it looks like I succeeded. Well, a bit too well, maybe.
Antithesis
Because the idea was really to first get him all excited about not needing OOP, and then turn around
and show him that all the things I had just shown him in fact were OOP. And still are,
as a matter of fact. Always have been. It's that sort of confusion of conflicting truth-seeming
ideas that gets the gray matter going. You know, "sound of one hand clapping" kind of stuff.
The reason I worked with him on a little graphics context example was, of course, that I had written
a graphics context wrapper on top of CoreGraphics a good three years ago. In Objective-C. With a protocol
defining the, er, protocol. It's called MPWDrawingContext
and lives on GitHub, but I also wrote about it, showed how protocols combine with blocks to make CoreGraphics patterns
easy and intuitive to use and how to combine this type of drawing context with a more advanced
OO language to make live coding/drawing possible.
And of course this is real live programming, not the "not-quite-instant replay" programming that
is all that Swift playgrounds can provide.
The simple fact is that actual Object Oriented Programming is Protocol Oriented Programming,
where Protocol means a set of messages that an object understands. In a true and pure object
oriented language like Smalltalk, it is all that can be, because the only way to interact with an
object is to send messages. Even if you do simple metaprogramming like checking the class, you are
still sending a message. Checking for object identity? Sending a message. Doing more intrusive
metaprogramming like "directly" accessing instance variables? Message. Control structures like
if and while? Message. Creating ranges? Message. Iterating? Message.
Comparing object hierarchies? I think you get the drift.
So all interacting is via messages, and the set of messages is a protocol. What does that make
OO? Say it together: Protocol Oriented Programming.
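To make "a protocol is the set of messages an object understands" concrete, here is a small illustrative sketch (Python, with made-up names; Python's typing.Protocol is structural, so conformance means exactly "understands these messages", with no inheritance involved):

```python
from typing import Protocol, runtime_checkable

# A protocol is nothing more than a set of messages.
@runtime_checkable
class Drawable(Protocol):
    def moveto(self, x: float, y: float) -> None: ...
    def lineto(self, x: float, y: float) -> None: ...

# This class never mentions Drawable; it merely understands the messages.
class LoggingContext:
    def __init__(self):
        self.log = []
    def moveto(self, x, y):
        self.log.append(f"moveto {x} {y}")
    def lineto(self, x, y):
        self.log.append(f"lineto {x} {y}")

ctx = LoggingContext()
ctx.moveto(0, 0)
ctx.lineto(100, 100)
print(isinstance(ctx, Drawable))  # True: conformance is purely structural
```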
Synthesis
So we don't need objects when we have POP, but at the same time POP is OOP. Confused? Well,
that's kind of the point of a good dialectic argument.
One possible solution to the conflict could be that we don't need any of this stuff. C, FORTRAN
and assembly were good enough for me, they should be good enough for you. And that's true to
a large extent. Excellent software was written using these tools (and ones that are much, much
worse!), and tooling is not the biggest factor determining success or failure of software projects.
On the other hand, if you want to look beyond what OOP has to offer, statically typed ADT programming
is not the answer. It is the question that OOP answered. And statically typed ADT programming
is not Protocol Oriented Programming, OOP is POP. Repeat after me: OOP is POP, POP is OOP.
To go beyond OOP, we actually need to go beyond it, not step back in time to the early '90s, forget
all we learned in the meantime and declare victory. My personal take is that our biggest challenges
are in "the big", meaning programming in the large. How to connect components together in a meaningful,
tractable and understandable fashion. Programming the components is, by and large, a solved problem,
making it a tiny bit better may make us feel better, but it won't move the needle on productivity.
Making architecture malleable, user-definable and thus a first class citizen of our programming
notation, now that is a worthwhile goal and challenge.
One of the many things that's been puzzling me for a long time is why operator overloading
appears to be at the same time problematic and attractive in languages such as C++ and
now Swift. I know I certainly feel the same way, it's somehow very cool to massage the
language that way, but at the same time the thought of having everything redefined underneath
me fills me with horror, and what little I've seen and heard of C++ with heavy overloading
confirms that horror, except for very limited domains. What's really puzzling is that
binary messages in Smalltalk, which are effectively the same feature (special characters like *, + etc. can be used as message names taking
a single argument), do not seem to have
either of these effects: they are neither particularly attractive to Smalltalk programmers,
nor are their effects particularly worrisome. Odd.
Of course we simply don't have that problem in C or Objective-C: operators are built-in
parts of the language, and neither the C part nor the Objective part has a comparable
facility, which is a large part of the reason we don't have a useful number/magnitude
hierarchy in Objective-C and numeric/array libraries aren't that popular: writing
[number1 multipliedBy:number2] is just too painful.
Some recent articles and talks that dealt with operator overloading in Apple's new
Swift language just heightened my confusion. But as is often the case, that
heightened confusion seems to have been the last bit of resistance that pushed through
an insight.
Anyway, here is an example from NSHipster Matt Thompson's excellent post on Swift Operators,
an operator for exponentiation wrapping the pow() function:
This is introduced as "the arithmetic operator found in many programming languages, but missing in Swift [is **]".
Here is an example of the difference:
pow( left, right )
left ** right
pow( 2, 3 )
2 ** 3
Why is this seen as an improvement (and to me it is)? There are two candidates for what the difference
might be: the fact that the operation is now written in infix notation, and the fact that it uses
special characters. Do these two factors contribute evenly, or is one more important than
the other? Let's look at the same example in Smalltalk syntax, first with a normal keyword
message and then with a binary message (Smalltalk uses raisedTo:, but let's stick
with pow: here to make the comparison similar):
left pow: right.
left ** right.
2 pow: 3.
2 ** 3.
To my eyes at least, the binary-message version is no improvement over the keyword message,
in fact it seems somewhat worse to me. So the attractiveness of infix notation appears to
be a strong candidate for why operator overloading is desirable. Of course, having to use
operator overloading to get infix notation is problematic, because special characters generally
do not convey the meaning of the operation nearly as well as names, conventional arithmetic
aside.
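To see the two factors separately, here is an illustrative sketch in Python, which happens to offer both spellings: overloading __pow__ is what buys the infix notation, while a named method provides the same operation without it (the class and method names are made up):

```python
class Num:
    def __init__(self, value):
        self.value = value
    # Overloading __pow__ makes the infix ** notation available.
    def __pow__(self, other):
        return Num(self.value ** other.value)
    # The same operation as a named message (cf. Smalltalk's raisedTo:).
    def raised_to(self, other):
        return Num(self.value ** other.value)

two, three = Num(2), Num(3)
print((two ** three).value)        # 8 -- infix, special characters
print(two.raised_to(three).value)  # 8 -- named, no special characters
```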
Note that dot notation for message sends/method calls does not really seem to have the same effect, even though it could technically also be considered
an infix notation:
left.pow( right)
left ** right
2.pow( 3 )
2 ** 3
There is more anecdotal evidence. In Chris Eidhof's talk on functional swift, scrub to around the 10 minute mark. There you'll find the
following code with some nested and curried function calls:
let result = colorOverlay( overlayColor)(blur(blurRadius)(image))
"This does not look too nice [..] it gets a bit unreadable, it's hard to see what's going on" is the quote.
Having a special compose function doesn't actually make it better
let myFilter = composeFilters(blur(blurRadius),colorOverlay(overlayColor))
let result = myFilter(image)
Infix to the rescue! Using the |> operator:
let myFilter = blur(blurRadius) |> colorOverlay(overlayColor)
let result = myFilter(image)
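The composition operator itself is just operator overloading again; here is a hedged sketch of the same idea in Python (which has no |>, so this borrows >>; the filter names are made up to mirror the example):

```python
class Filter:
    """Wraps an image -> image function so filters compose with >>."""
    def __init__(self, fn):
        self.fn = fn
    def __rshift__(self, other):
        # f >> g : apply f first, then g -- like |> in the talk.
        return Filter(lambda image: other.fn(self.fn(image)))
    def __call__(self, image):
        return self.fn(image)

def blur(radius):
    return Filter(lambda image: f"blur({radius}, {image})")

def color_overlay(color):
    return Filter(lambda image: f"overlay({color}, {image})")

my_filter = blur(4) >> color_overlay("red")
print(my_filter("image"))  # overlay(red, blur(4, image))
```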
Chris is very fair-minded about this, he mentions that due to the special characters involved,
you can't really infer what |> means from looking at the code, you have to know, and having
many of these sorts of operators makes code effectively incomprehensible. Or as one Twitter
user put it:
Every time I pick up Scala I think I'm being trolled. How many different possible meanings of _=>.+->_ can one language have??
Like most things
in engineering, it's a trade-off, though my guess is the trade-off would shift if we had
infix without requiring nonsensical characters.
Built in
I do believe that there is another factor involved, one that is more psychologically subtle
having to do with the idea of language as a (pre-defined) thing vs. a mechanism for building
your own abstractions that I mentioned in my previous post on Swift performance.
In that post, I mentioned BASIC as the primary example of the former, a language as a
collection of built-in features, with C and Pascal as (early) examples of the latter,
languages as generic mechanisms for building your own features. However, those
latter languages don't treat all constructs equally. Specifically, all the operators
are built-in, not user-definable or -overridable. They also correspond closely
to those operations that are built into the underlying hardware and map to
single instructions in assembly language. In short: even in languages with
a strong "user-defined" component, there is a hard line between "user-defined"
and "built-in", and that line just happens to map almost 1:1 to the operator/function
boundary.
Hackers don't like boundaries. Or rather: they love boundaries, the overcoming of.
I'd say that overloaded operators are particularly attractive (to hacker mentalities,
but that's probably most of us) in languages where this boundary between user-defined
and built-in stuff exists, and therefore those overloaded operators let you cross
that boundary and do things normally reserved for language implementors.
If you think this idea is too crazy, listen to John Siracusa, Guy English and Rene Ritchie
discussing Swift language features and operator overloading on Debug Podcast Number 49, Siracusa Round 2, starting at 45:45. I've transcribed a bit
below, but I really recommend you listen to the podcast, it's very good:
45:45 Swift is a damning comment on C++ [benefits without the craziness]
46:06 You can't do what Swift did [putting basic types in the standard library]
without operator overloading. [That's actually not true, because in Swift the operators are just syntax -> but it is exactly the idea I talked about earlier]
47:50 If you're going to add something like regular expressions to the language ...
they should have some operators of their own. That's a perfect opportunity for
operator overloading
48:07 If you're going to add features to the language, like regular expressions or so [..]
there is well-established syntax for this from other languages.
48:40 ...or range operators. Lots of languages have range operators these days.
Really it's just a function call with two different operands. [..]
You're not trying to be clever
All you're trying
to do is make it natural to use features that exist in many of other languages.
The thing about Swift is you don't have to add syntax to the language to do it.
Because it's so malleable.
If you're not adding a feature, like I'm adding regular expressions to the language.
If you're not doing that, don't try to get clever. Consider the features as existing
for the benefit of the expansion of the language, so that future features look natural
in it
and not bolted on even though technically everything is in a library. Don't think of
it as in my end user code I'm going to come up with symbols that combine my types in
novel ways, because what are you even doing there?
50:17 if you have a language like this, you need new syntax and new behavior to
make it feel natural. [new behavior strings array] and it has the whole struct thing. The
basics of the language, the most basic things you can do, have to be different,
look different and behave different for a modern language.
51:52 "using operator overloading to add features to the language" [again, not actually true]
The interesting thing about this idea of a boundary between "language things" and "user things" is
that it does not align with the split between "operators" and "named functions" in Swift, but apparently it still
feels like it does, so "extending the language" is seen as roughly equivalent to "adding
some operators", with all the caveats that apply.
In fact, going back to Matt Thompson's article from above, it is kind of odd that he talks
about the exponentiation operator as missing from the language, when in fact the operation is
available in the language. So if the operation crosses the boundary from function to
operator, then and only then does it become part of the language.
In Smalltalk, on the other hand, the boundary has disappeared from view. It still exists in the
form of primitives, but those are well hidden all over the class hierarchy and not something
that is visible to the developer. So in addition to having infix notation available for
named operations, Smalltalk doesn't have the notion of something being "part of the language"
rather than "just the library" just because it uses nonsensical characters. Everything
is part of the library, the library is the language and you can use names or special
characters as appropriate, not because of asymmetries in the language.
And that's why operator overloading is a thing even in languages like Swift, whereas it
is a non-event in Smalltalk.
I recently stumbled on Rob Napier's explanation of the map
function in Swift. So I am reading along yadda yadda when suddenly I wake up and
my eyes do a double take:
After years of begging for a map function in Cocoa [...]
Huh? I rub my eyes, probably just a slip up, but no, he continues:
In a generic language like Swift, “pattern” means there’s probably a function hiding in there, so let’s pull out the part that doesn’t change and call it map:
Not sure what he means by a "generic language", but here's how we would implement a map function in Objective-C.
#import <Foundation/Foundation.h>

typedef id (*mappingfun)( id arg );

static id makeurl( NSString *domain ) {
    return [[[NSURL alloc] initWithScheme:@"http" host:domain path:@"/"] autorelease];
}

NSArray *map( NSArray *array, mappingfun theFun )
{
    NSMutableArray *result=[NSMutableArray array];
    for ( id object in array ) {
        id objresult=theFun( object );
        if ( objresult ) {
            [result addObject:objresult];
        }
    }
    return result;
}

int main(int argc, char *argv[]) {
    NSArray *source=@[ @"apple.com", @"objective.st", @"metaobject.com" ];
    NSLog(@"%@",map(source, makeurl ));
}
This is less than 7 non-empty lines of code for the mapping function, and took me less
than 10 minutes to write in its entirety, including a trip to the kitchen for an
extra cookie, recompiling 3 times and looking at the qsort(3) manpage
because I just can't remember C function pointer declaration syntax (though it took
me less time than usual, maybe I am learning?). So really, years of "begging" for
something any mildly competent coder could whip up between bathroom breaks or
during a lull in their twitter feed?
Or maybe we want a version with blocks instead? Another 2 minutes, because I am a klutz:
#import <Foundation/Foundation.h>

typedef id (^mappingblock)( id arg );

NSArray *map( NSArray *array, mappingblock theBlock )
{
    NSMutableArray *result=[NSMutableArray array];
    for ( id object in array ) {
        id objresult=theBlock( object );
        if ( objresult ) {
            [result addObject:objresult];
        }
    }
    return result;
}

int main(int argc, char *argv[]) {
    NSArray *source=@[ @"apple.com", @"objective.st", @"metaobject.com" ];
    NSLog(@"%@",map(source, ^id ( id domain ) {
        return [[[NSURL alloc] initWithScheme:@"http" host:domain path:@"/"] autorelease];
    }));
}
Of course, we've also had collect for a good decade or so, which turns the client code into the following,
much more readable version (Objective-Smalltalk syntax):
NSURL collect URLWithScheme:'http' host:#('objective.st' 'metaobject.com') each path:'/'.
As I wrote in my previous post, we seem to be regressing to a mindset about computer
languages that harkens back to the days of BASIC, where everything was baked into the
language, and things not baked into the language or provided by the language vendor do not exist.
Rob goes on to write "The mapping could be performed in parallel [..]", for example with parcollect. And then: "This is the heart of good functional programming." No. This is the heart of good programming.
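And indeed, the parallel version is just as mechanical to write; a sketch of the idea (Python with a thread pool, not the actual parcollect implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(fn, items, workers=4):
    # Same contract as map, but fn runs concurrently;
    # pool.map returns results in the original order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

urls = parallel_map(lambda d: f"http://{d}/",
                    ["apple.com", "objective.st", "metaobject.com"])
print(urls)  # ['http://apple.com/', 'http://objective.st/', 'http://metaobject.com/']
```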
Having processed that shock, I fly over a discussion of filter (select) and stumble over
the next whopper:
It’s all about the types
Again...huh?? Our map implementation certainly didn't need (static) types for the list, and
all the Smalltalkers and LISPers that have been gleefully using higher order
techniques for 40-50 years without static types must also not have gotten the memo.
We [..] started to think about the power of functions to separate intent from implementation. [..] Soon we’ll explore some more of these transforming functions and see what they can do for us. Until then, stop mutating. Evolve.
All modern programming separates intent from implementation. Functions are a
fairly limited and primitive way of doing so. Limiting power in this fashion can be
useful, but please don't confuse the power of higher order programming with the
limitations of functional programming, they are quite distinct.
Objective-Smalltalk is now getting into a very nice virtuous cycle of
being more useful, therefore being used more and therefore motivating changes
to make it even more useful. One of the recent additions was autocomplete,
for both the tty-based and the GUI based REPLs.
I modeled the autocomplete after the one in bash and other Unix shells:
it will insert partial completions without asking, up to the point where they
become ambiguous. If there is no unambiguous partial completion, it
displays the alternatives. So a usual sequence is: <TAB> -> something
is inserted, <TAB> again -> the list is displayed, type one character to disambiguate, <TAB> again, and so on. I find that I get to my
desired result much quicker and with fewer backtracks than with the
mechanism Xcode uses.
Fortunately, I was able to wrestle NSTextView's
completion mechanism (in a ShellView borrowed from
the excellent F-Script) into providing these semantics rather than the
built-in ones.
Another cool thing about the autocomplete is that it is very precise,
unlike for example F-Script's, which as far as I can tell just offers all
possible symbols.
How can this be, when Objective-Smalltalk is (currently) dynamically
typed and we all know that good autocomplete requires static types?
The reason is simply that there is one thing that's even better
than having the static types available: having the actual objects
themselves available!
The two REPLs aren't just syntax-aware, they also evaluate the
expression as much as needed and possible to figure out what
a good completion might be. So instead of having to figure
out the type of the object, we can just ask the object what
messages it understands. This was very easy to implement,
almost comically trivial compared to a full blown static type-system.
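The mechanism can be sketched in a few lines (illustrative Python, not the Objective-Smalltalk implementation): evaluate the receiver expression to get a live object, then ask the object itself what it responds to.

```python
def complete(receiver_expr, prefix, env=None):
    # Evaluate the receiver expression to get a live object...
    obj = eval(receiver_expr, env or {})
    # ...then ask the object which messages it understands,
    # instead of trying to infer its type statically.
    return sorted(name for name in dir(obj)
                  if name.startswith(prefix) and not name.startswith('_'))

print(complete('"hello"', 'up'))    # ['upper']
print(complete('[1, 2, 3]', 'in'))  # ['index', 'insert']
```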
So while static types are good for this purpose, live objects are
even better! The Self team made a similar discovery when they
were working on their optimizing compiler, trying both static
type inference and dynamic type feedback. Type feedback was
both simpler and performed vastly better and is currently used
even for optimizing statically typed languages such as Java.
Finally, autocomplete also works with Polymorphic Identifiers, for
example file:./a<TAB> will autocomplete files
in the current directory starting with the letter 'a' (and just
fi<TAB> will autocomplete to the file:
scheme). Completion is scheme-specific, so any schemes you add
can provide their own completion logic.
Like all of Objective-Smalltalk, this is still a work in progress:
not all syntactic constructs support completions, for example
Polymorphic Identifiers don't support complex paths and there
is no bracket matching. However, just like Objective-Smalltalk itself,
what is there is quite useful, and in some small areas already better than what else
is out there.
I like bindings. I also like Key Value Observing. What they do is undeniably cool: you do some initial setup, and presto: magic! You change a value over here, and another
value over there changes as well. Action at a distance. Power.
What they do is also undeniably valuable. I'd venture that nobody actually
likes writing state
maintenance and update code such as the following: when the user clicks this button, or finishes entering
text in that textfield, take the value and put it over here. If the underlying
value changes, update the textfield. If I modify this value, notify
these clients that the value has changed so they can update themselves accordingly.
That's boring. There is no glory in state maintenance code, just potential for
failure when you screw up something this simple.
Finally, their implementation is also undeniably cool: observing an attribute
of a generic
object creates a private subclass for that object (who says we can't do
prototype-based programming in Objective-C?), swizzles the object's
class pointer to that private subclass and then replaces the attribute's
(KVO-compliant) accessor methods with new ones that hook into the
KVO system.
Despite these positives, I have actively removed bindings code from
projects I have worked on, don't use either KVO or bindings myself and
generally recommend staying away from them. Why on earth would I
do that?
Excursion: Constraint Solvers
Before I can answer that question, I have to go back a little and talk about
constraint solvers.
The idea of setting up relationships once and then having the system maintain them
without manually shoveling values back and forth is not exactly new, the first variant
I am aware of was Sketchpad,
Ivan Sutherland's PhD Thesis from 1961/63 (here with narration by Alan Kay):
I still love Ivan's answer to the question as to how he could invent computer graphics,
object orientation and constraint solving in one fell swoop: "I didn't know it was hard".
The first system I am aware of that integrated constraint solving with an object-oriented
programming language was ThingLab, implemented on top of Smalltalk by Alan Borning at Xerox PARC around 1978 (where else...):
While the definition
of a path is simple, the idea behind it has proved quite powerful and has been essential
in allowing constraint- and object-oriented metaphors to be integrated. [..] The notion
of a path helps strengthen [the distinction between inside and outside of an object] by
providing a protected way for an object to provide external reference to its parts and
subparts.
Yes, that's a better version of KVC. From 1981.
Alan Borning's group at the University of Washington continued working on constraint solvers
for many years, with the final result being the Cassowary linear constraint solver (based on the simplex
algorithm) that was picked up by Apple for Autolayout. The papers on Cassowary and
constraint hierarchies should help with understanding why Autolayout does what it does.
A simpler form of constraints are one-way dataflow constraints.
A one-way, dataflow constraint is an equation of the form y = f(x1,...,xn) in which the formula on the right side
is automatically re-evaluated and assigned to the variable y whenever any variable xi changes.
If y is modified from
outside the constraint, the equation is left temporarily unsatisfied, hence the attribute “one-way”. Dataflow constraints are recognized as a powerful programming methodology in a variety of contexts because of their versatility and simplicity. The most widespread application of dataflow constraints is perhaps embodied by spreadsheets.
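A one-way dataflow constraint in this sense is simple enough to sketch (illustrative Python; real solvers add dependency analysis, lazy evaluation and cycle handling):

```python
class Cell:
    def __init__(self, value=None):
        self.value = value
        self._observers = []
    def set(self, value):
        self.value = value
        for update in self._observers:
            update()

def constrain(y, formula, *xs):
    # y = f(x1, ..., xn): re-evaluate whenever any xi changes.
    def update():
        y.value = formula(*(x.value for x in xs))
    for x in xs:
        x._observers.append(update)
    update()

celsius = Cell(100)
fahrenheit = Cell()
constrain(fahrenheit, lambda c: c * 9 / 5 + 32, celsius)
print(fahrenheit.value)  # 212.0
celsius.set(0)
print(fahrenheit.value)  # 32.0
fahrenheit.set(-40)      # modified from outside: constraint left unsatisfied
print(celsius.value)     # 0 -- one-way, nothing flows back
```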
The most important lessons they found were the following:
constraints should be allowed to contain arbitrary code that is written in the underlying toolkit language and does not require any annotations, such as parameter declarations
constraints are difficult to debug and better debugging tools are needed
programmers will readily use one-way constraints to specify the graphical layout of an application, but must be carefully and time-consumingly trained to use them for other purposes.
However, these really are just the headlines, and particularly for Cocoa programmers
the actual reports are well worth reading as they contain many useful pieces of
information that aren't included in the summaries.
Back to KVO and Cocoa Bindings
So what does this history lesson about constraint programming have to do with KVO
and Bindings? You probably already figured it out: bindings are one-way
dataflow constraints, specifically with the equation limited to y = x1;
more complex equations can be obtained by using NSValueTransformers. KVO
is more of an implicit invocation
mechanism that is used primarily to build ad-hoc dataflow constraints.
The specific problems of the API and the implementation have been documented
elsewhere, for example by Soroush Khanlou and Mike Ash, who not only suggested and
implemented improvements back in 2008, but even followed up on them in 2012. All
these problems and workarounds
demonstrate that KVO and Bindings are very sophisticated, complex and error prone
technologies for solving what is a simple and straightforward task: keeping
data in sync.
To these implementation problems, I would add performance: even
just adding the willChangeValueForKey: and didChangeValueForKey:
message sends in your setter (these are usually added automagically for you) without triggering any notifications makes that setter 30 times slower (from 5ns to
150ns on my computer) than a simple setter that just sets and retains the object.
Actually having that access trigger a notification takes the penalty to a factor of over 100
( 5ns vs over 540ns), even when there is only a single observer. I am pretty sure
it gets worse when there are lots of observers (there used to be an O(n^3)
algorithm in there, that was fortunately fixed a while ago). While 500ns may
not seem a lot when dealing with UI code, KVO tends to be implemented at
the model layer in such a way that a significant number of model data accesses
incur at least the base penalties. For example KVO notifications were one of the primary
reasons for NSOperationQueue's somewhat anemic performance back when
we measured it for the Leopard release.
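The numbers above came from micro-benchmarks along roughly these lines (a sketch, not the original harness; exact timings will vary by machine and runtime version):

```objc
#import <Foundation/Foundation.h>

@interface Holder : NSObject
@property (nonatomic, strong) id value;  // plain synthesized setter
@end
@implementation Holder
@end

// Time a million sets. Run once with no observers registered, and
// once after addObserver:forKeyPath:options:context: to see the
// overhead of the KVO machinery on every setter invocation.
static void timeSetter(Holder *holder, id object, NSString *label)
{
    NSDate *start = [NSDate date];
    for (int i = 0; i < 1000000; i++) {
        holder.value = object;
    }
    NSTimeInterval elapsed = -[start timeIntervalSinceNow];
    NSLog(@"%@: %.0f ns/set", label, elapsed * 1e9 / 1000000.0);
}
```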
Not only is the constraint graph not available at run time, there is also no
direct representation at coding time. All there is is code or IB settings
that construct such a graph indirectly, so the programmer has to infer the
graph from what is there and keep it in her head. There are also no formulae, the best
we can do are ValueTransformers and
keyPathsForValuesAffectingValueForKey.
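For reference, the per-key convenience variant of keyPathsForValuesAffectingValueForKey: is about as close as KVO gets to a formula: it declares only which inputs a derived key depends on, while the actual computation stays buried in an ordinary getter. A minimal example:

```objc
@interface Person : NSObject
@property (nonatomic, copy) NSString *firstName, *lastName;
@end

@implementation Person

// The "formula" is ordinary imperative code in the getter...
- (NSString *)fullName
{
    return [NSString stringWithFormat:@"%@ %@",
            self.firstName, self.lastName];
}

// ...while the dependency declaration lives separately, as strings.
+ (NSSet *)keyPathsForValuesAffectingFullName
{
    return [NSSet setWithObjects:@"firstName", @"lastName", nil];
}

@end
```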
As best as I can tell, the reason for this state of affairs is that there simply
wasn't any awareness of the decades of
research and practical experience with constraint solvers at the time (How
do I know? I asked, the answer was "Huh?").
Anyway, when you add it all up, my conclusion is that while I would really,
really, really like a good constraint solving system (at least for spreadsheet
constraints), KVO and Bindings are not it. They are too simplistic, too
fragile and solve too little of the actual problem to be worth the trouble.
It is easier to just write that damn state maintenance code, and infinitely
easier to debug it.
I think one of the main communication problems between advocates for and
critics of KVO/Bindings is that the advocates are advocating more for
the concept of constraint solving, whereas critics are critical of the
implementation. How can these critics not see that despite a few flaws,
this approach is obviously
The Right Thing™? How can the advocates not see the
obvious flaws?
Functional Reactive Programming
As far as I can tell, Functional Reactive Programming (FRP) in general and Reactive
Cocoa in particular are another way of scratching the same itch.
[..] is an integration of declarative [..] and imperative object-oriented programming. The primary goal of this integration is to use constraints to express relations among objects explicitly -- relations that were implicit in the code in previous languages.
Sounds like FRP, right? Well, the first "[..]" part is actually "Constraint Imperative Programming" and the second is "constraints",
from the abstract of a 1994 paper. Similarly, I've seen it stated that FRP is like a spreadsheet.
The connection between functional programming and constraint programming is also well
known and documented in the literature, for example the experience report above states the
following:
Since constraints are simply functional programming dressed up with syntactic sugar, it should not be surprising that 1) programmers do not think of using constraints for most programming tasks and, 2) programmers require extensive training to overcome their procedural instincts so that they will use constraints.
However, you wouldn't be able to tell that there's a relationship there from reading
the FRP literature, which focuses exclusively on the connection to functional
programming via functional reactive animations and Microsoft's Rx extensions. Explaining and particularly motivating FRP this way has the
fundamental problem that whereas functional programming, which is by definition
static/timeless/non-reactive, really needs something to become interactive,
reactivity is already inherent in OO. In fact, reactivity is the quintessence of
objects: all computation is modeled as objects reacting to messages.
So adding reactivity to an object-oriented language is, at first blush, nonsensical,
and certainly causes confusion when explained this way.
I was certainly confused: until I found this one
paper on reactive imperative programming,
which adds constraints to C++ in a very cool and general way,
none of the documentation, references or papers made the connection that seemed so
blindingly obvious to me. I was starting to question my own sanity.
Architecture
Additionally, one-way dataflow constraints creating relationships between program variables
can, as far as I can tell, always be replaced by a formulation in which the dependent
variable becomes a method that computes the value on-demand. So
instead of setting up a constraint between point1.x and point2.x,
you implement point2.x as a method that uses point1.x to
compute its value and never stores that value. Although this may evaluate more
often than necessary rather than memoizing the value and computing just once, the
additional cost of managing constraint evaluation is such that the two probably
balance.
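In code, the on-demand formulation might look like this (Point and DependentPoint are made-up classes for illustration):

```objc
#import <Foundation/Foundation.h>

@interface Point : NSObject
@property (nonatomic) double x;
@end
@implementation Point
@end

@interface DependentPoint : NSObject
@property (nonatomic, strong) Point *base;
@end

@implementation DependentPoint
// No stored x, no change notifications, no constraint machinery:
// the value is recomputed on every access and is therefore always
// current by construction.
- (double)x
{
    return self.base.x;
}
@end
```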
However, such an implementation creates permanent coupling and requires dedicated
classes for each relationship. Constraints thus become more of an architectural
feature, allowing existing, usually stateful components to be used together without
having to adapt each component for each individual ensemble it is a part of.
Panta Rhei
Everything flows, so they say. As far as I can tell, two different
communities, the F(R)P people and the OO people, came up with very similar
solutions based on data flow. The FP people wanted to become more reactive/interactive,
and achieved this by modeling time as sequence numbers in streams of values, sort
of like Lucid or other dataflow languages.
The OO people wanted to be able to specify relationships declaratively and have
their system figure out the best way to satisfy those constraints, with
a large and useful subset of those constraints falling into the category of
the one-way dataflow constraints that, at least to my eye, are equivalent
to FRP. In fact, this sort of state maintenance and update-propagation
pops up in lots of different places, for example makefiles or other
build systems, web-server generators, publication workflows etc. ("this
OmniGraffle diagram embedded as a PDF into this LaTeX document that
in turn becomes a PDF document" -> the final PDF should update
automatically when I change the diagram, instead of me having to
save the diagram, export it to PDF and then re-run LaTeX).
What's kind of funny is that these two groups seem to have converged
in essentially the same space, but they seem to not be aware of
each other, maybe they are phase-shifted with respect to each other?
Part of that phase-shift is, again, communication. The FP guys
couch everything in must-destroy-all-humans, er, state rhetoric,
which doesn't do much to convince OO guys who know that for most
of their programs, state isn't an implementation detail but fundamental
to their applications. Also practical experience does not support the
idea that the FP approach is obvious:
Unfortunately, given the considerable amount of time required to train students to use constraints in a non-graphical manner, it does not seem reasonable to expect that constraints will ever be widely used for purposes other than graphical layout. In retrospect this result should not have been surprising. Business people readily use constraints in spreadsheets because constraints match their mental model of the world. Similarly, we have found that students readily use constraints for graphical layout since constraints match their mental model of the world, both because they use constraints, such as left align or center, to align objects in drawing editors, and because they use constraints to specify the layout of objects in precision paper sketches, such as blueprints. However, in their everyday lives, students are much more accustomed to accomplishing tasks using an imperative set of actions rather than using a declarative set of actions.
Of course there are other groups hanging out in this convergence zone, for example the
Unix folk with their pipes and filters. That is also not too surprising if
you look at the history:
So, we were all ready. Because it was so easy to compose processes with shell scripts. We were already doing that. But, when you have to decorate or invent the name of intermediate files and every function has to say put your file there. And the next one say get your input from there. The clarity of composition of function, which you perceived in your mind when you wrote the program, is lost in the program. Whereas the piping symbol keeps it. It's the old thing about notations are important.
I think the familiarity with Unix pipes also increases the itch: why can't I have
that sort of thing in my general purpose programming language? Especially when
it can lead to very concise programs, such as the Quartz-like graphics subsystem
Gezira written in
under 400 lines of code using the Nile dataflow language.
Moving Forward
I too have heard the siren sing.
I also think that a more spreadsheet-like programming model would not just make my life
as a developer easier, it might also make software more approachable for end-user adaptation and tinkering,
contributing to a more meaningful version of open source. But how do we get there?
Apart from a reasonable implementation and better debugging support, a new system would need much tighter
language integration. Preferably there would be a direct syntax for expressing constraints
such as that available in constraint imperative programming languages or constraint extensions to existing
languages like
Ruby or JavaScript.
This language support should be unified as much as
possible between different constraint systems, not one mechanism for Autolayout and a
completely different one for Bindings.
Supporting constraint programming has always been one of the goals of my Objective-Smalltalk project, and so far that has informed the
PolymorphicIdentifiers that support a uniform interface for data backed by different types of
stores, including one or more constraint stores supporting cooperating solvers, filesystems or web-sites. More needs
to be done, such as extending the data-flow connector hierarchy to conceptually integrate
constraints. The idea is to create a language that does not actually include constraints
in its core, but rather provides sufficient conceptual, expressive and implementation
flexibility to allow users to add such a facility in a non-ad-hoc way so that it is
fully integrated into the language once added. I am not there yet, but all the results
so far are very promising. The architectural focus of Objective-Smalltalk also ties
in well with the architectural interpretation of constraints.
There is a lot to do, but on the other hand I think the payback is huge, and there is
also a large body of existing theoretical,
practical and empirical groundwork to fall back on, so I think the task is doable.
Your feedback, help and pull requests would be very much appreciated!
The feedback was, effectively: "This code is incorrect, it is missing a return type". Of course, the code isn't incorrect in the least: the return type is id, because that is the default type, and in fact you will see this style both in Brad Cox's book and in the early NeXTStep documentation.
Having a default type for objects isn't entirely surprising, because at that time id was not just the default type, it was the only type available for objects; the optional static typing for objects wasn't introduced into Objective-C until later. In addition, the template for Objective-C's object system was Smalltalk, which doesn't use static types: you just use variable names.
Cargo-cult typing
So while it is possible (and apparently common) to write -(id)objectAtIndex:(NSUInteger)anIndex, it certainly isn't any more correct. In fact, it's
worse, because it is just syntactic noise [1][2], arguably even worse than what Fowler describes because it isn't actually mandated by
the language: the noise is inflicted needlessly.
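Side by side, the two spellings declare exactly the same method:

```objc
// Classic style: id is the default, so no return type is written.
- objectAtIndex:(NSUInteger)anIndex;

// With the redundant annotation: same method, same semantics.
- (id)objectAtIndex:(NSUInteger)anIndex;
```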
And while we could debate as to whether it is better or not to write things that are redundant
syntactic noise, we could also not, as that was settled almost 800 years ago: entia non sunt multiplicanda praeter necessitatem. You could also say KISS or "when in doubt, leave it out", all of which just
say that the burden of proof is on whoever wants to add the redundant pieces.
What's really odd about this phenomenon is that we really don't gain anything from typing
out these explicit types, the code certainly doesn't become more readable. It's as if
we think that by following the ritual of explicitly typing out a type, we made the
proper sacrifice to the gods of type-safety and they will reward us with correctness.
But just like those Pacific islanders that built wooden planes, radios and control
towers, the ritual is empty, because it conveys no information to the type system,
or the reader.
The id subset
Now, I personally don't really care whether you put in a redundant (id)
or not, I certainly have been reading over it (and not even really noticing) for
my last two decades of Objective-C coding. However, the mistaken belief that it
has to be there, rather than being a personal choice you make, does worry me.
I think the problem goes a little deeper than just slightly odd coding styles, because it seems to be part and parcel of a drive towards making Objective-C look like an explicitly statically typed language along the lines of C++ or maybe Java,
with one of the types being id. That's not the case: Objective-C
is an optionally statically typed language. This means that you
may specify type information if you want to, but you generally
don't have to. I also want to emphasize that you can at best get Objective-C
to look like such a language, the holes in the type system are way too big for
this to actually gain much safety.
Properties started this trend, and now the ARC variant of the language needlessly turns what used to be warnings about unknown selectors into hard compiler errors.
Of course, there are some who plausibly argue that this always should have been an error,
or actually, that it always was an error, we just didn't know about it.
That's hogwash, of course. There is a subset of the language, which I'd like
to call the id subset, where all the arguments and returns are object
pointers, and for this it was always safe to not have additional type information,
to the point where the compiler didn't actually have that additional type information.
You could also call it the Smalltalk subset.
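A method written entirely within the id subset looks like this (a made-up example): every argument and return value is an object pointer, so the default type covers everything.

```objc
// No type annotations anywhere: arguments and the return value
// all default to id, and the compiler never needed more.
- lookup:key in:dictionary
{
    id result = [dictionary objectForKey:key];
    return result ? result : [NSNull null];
}
```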
Another thing that's odd about this move to rigidify Objective-C in the face of
success of more dynamic languages is that we actually have been moving into the
right direction at the language base-level (disregarding the type-system) and in general programming style: with new syntax support
for object literals and subscripting, and SmallInteger-style NSNumbers, modern
Objective-C consists much more of pure objects than was traditionally the case.
And as long as we are dealing with pure objects, we are in the id subset.
A dynamic language
What's great about the id subset is that it makes incremental, explorative
programming very easy and lots of fun, much like other dynamic languages
such as Smalltalk, Python or Ruby.
(Not entirely like them, due to the need to compile to native code, but compilers are fast these
days and there are possible fixes such as Objective-Smalltalk.)
The newly enforced rigidity is starting to make explorative programming in Objective-C much
harder, and a lot less fun. In fact, it feels much more like C++ or Java and much less
like the dynamic language that it is, and in my opinion is the wrong direction: we should
be making our language more dynamic, and of course that's what I've been doing. So while I wouldn't agree with that tradeoff even if
it were true, the fact is that we aren't actually
getting static type safety, we are just getting a wooden prop that will not fly.
Discussion on Hacker News.
UPDATE: Inserted a little clarification that I don't care about bike-shedding your code
with regard to (id). The problem is that people's mistaken belief both that and why it has to be there is symptomatic of that deeper trend I wrote about.
… from my (Smalltalk) experience, the block passed to #collect: is often not a single message send, but rather a small adhoc expression, for which it does not really make sense to define a named method. Or you might need both the element and its key/index… how does HOM deal with that?
These are certainly valid observations, and were some of the reasons
that I didn't really think that much of HOM for the first couple of
years after coming up with it back in 1997 or so. Since then, I've
become less and less convinced that the problems raised are a big concern, for a number of reasons.
Inline vs. Named
One reason is that I actually looked at usage of blocks in the Squeak
image, and found that the majority of blocks with at least one argument
(so not ifTrue:, whileTrue: and other control structures) actually did
contain just a single message send, and so could be immediately expressed
as HOMs. Second, I noticed that there were a lot of fairly large (3+ LOC)
blocks that should have been separate methods but weren't.
That's when I discovered that the presence of blocks actually
encourages bad code, and the 'limitation' of HOMs actually was
encouraging better(-factored) code.
Of course, I wasn't particularly convinced by that line of reasoning,
because it smelled too much like "that's not a bug, that's a feature".
Until that is, I saw others with less vested interest reporting the same
observation:
But are these really limitations? After using higher order messages for a while I've come to think that they are not. The first limitation encourages you move logic that belongs to an object into that object's implementation instead of in the implementation of methods of other objects. The second limitation encourages you to represent application concepts as objects rather than procedural code. Both limitations have the surprising effect of guiding the code away from a procedural style towards better object-oriented design.
My experience has been that Nat is right: having a mechanism that
pushes you towards factoring and naming is better for your code
than one that pushes you towards inlining and anonymizing.
Objective-C I
In fact, the Cocoa example that Apple gives for blocks illustrates this idea
very well. They implement a "Finder like" sorting mechanism using blocks:
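Apple's example, reconstructed approximately from memory (details may differ from the original documentation):

```objc
// The comparator is too big to stay inline, so it gets pulled
// out and named, despite blocks being "anonymous" by design.
NSComparator finderSortBlock = ^(id string1, id string2) {
    NSRange string1Range = NSMakeRange(0, [string1 length]);
    return [string1 compare:string2
                    options:NSCaseInsensitiveSearch | NSNumericSearch
                      range:string1Range
                     locale:[NSLocale currentLocale]];
};

NSArray *sorted = [array sortedArrayUsingComparator:finderSortBlock];
```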
The block syntax is so verbose that there is no hope of actually defining the block inline, the supposed raison d'être of blocks. So we actually need to take the
block out-of-line and name it, making it look suspiciously like an
equivalent implementation using functions:
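For comparison, here is a sketch of what the function-based version looks like:

```objc
// The same comparison as a plain C function; the sort call uses
// NSArray's function-based API instead of the block-based one.
static NSInteger finderSortFunction(id string1, id string2, void *context)
{
    return [string1 compare:string2
                    options:NSCaseInsensitiveSearch | NSNumericSearch
                      range:NSMakeRange(0, [string1 length])
                     locale:[NSLocale currentLocale]];
}

NSArray *sorted = [array sortedArrayUsingFunction:finderSortFunction
                                          context:NULL];
```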
Of course, something as useful as a Finder-like comparison sort
really deserves to be exposed and made available for reuse, rather
than hidden inside one specific sort. Objective-C categories are
just the mechanism for this sort of thing:
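A sketch of such a category (not necessarily the code from the original post):

```objc
// Expose the Finder-like comparison as a reusable NSString method.
@interface NSString (FinderCompare)
- (NSComparisonResult)finderCompare:(NSString *)otherString;
@end

@implementation NSString (FinderCompare)
- (NSComparisonResult)finderCompare:(NSString *)otherString
{
    return [self compare:otherString
                 options:NSCaseInsensitiveSearch | NSNumericSearch
                   range:NSMakeRange(0, [self length])
                  locale:[NSLocale currentLocale]];
}
@end

// The sort now reads declaratively, and the comparison is reusable:
// NSArray *sorted =
//     [array sortedArrayUsingSelector:@selector(finderCompare:)];
```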
Note that some of these criticisms are specific to Apple's implementation of blocks, they do not apply in the same way to
Smalltalk blocks, which are a lot less noisy.
Objective-C II
Objective-C has at least one other pertinent difference from
Smalltalk, which is that it already contains control structures
in the basic language, without blocks. (Of course, those control
structures can also take blocks as arguments, but these are a
different kind of block, delimited by curly braces, that
cannot be passed around as first-class objects).
This means that in Objective-C, we already have the ability to
do all the iterating we need, mechanisms such as blocks and
HOM are mostly conveniences, not required building blocks. If
we need indices, use a for loop. If we require keys, use a
key-enumerator and iterate over that.
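For instance, plain Foundation idioms cover both cases without any block machinery:

```objc
// Element plus index: a plain C for loop.
for (NSUInteger i = 0; i < [array count]; i++) {
    id element = [array objectAtIndex:i];
    NSLog(@"%lu: %@", (unsigned long)i, element);
}

// Key plus value: fast enumeration over the dictionary's keys.
for (id key in dictionary) {
    id value = [dictionary objectForKey:key];
    NSLog(@"%@ -> %@", key, value);
}
```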
In fact, I remember when my then-colleagues started working
with enum-filters, a HOM precursor that's strikingly similar
to the Google Toolbox's GTMSEnumerator+Filter.m. They really took to
the elegance, but then also wanted to use it for various special
cases. They laughed when they realized that those special-cases
were actually already handled better by existing C control structures
such as for-loops.
FP, HANDs and Aggregate Operations
While my dislike of blocks is easy to discount by the usual
inventor's pride (your child must be ugly for mine to be pretty),
that interpretation actually reverses the causation: I came
up with HOM because I was never very fond of blocks. In fact,
when I first encountered Smalltalk during my university
years I was enthralled until I saw the iteration methods.
That's not to say that do:, collect: and friends were not light-years
ahead of Algol-type control structures, they most definitely were
and still are. Having some sort of higher-order mechanism is
vastly superior to not having a higher-order mechanism.
I do wish that "higher order mechanism" and "blocks" weren't
used as synonyms quite as much, because they are not, in fact,
synonymous.
When I first encountered Smalltalk blocks, I had just previously been
exposed to Backus's FP, and that was just so much prettier! In
FP functions are composed using functionals without ever talking
about actual data, and certainly without talking about individual
elements. I have always been on the lookout for higher levels
of expression, and this was such a higher level. Now taking
things down to "here's another element, what do you want to
do with that" was definitely a step back, and quite frankly
a bit of a let-down.
The fundamental difference I see is that in Smalltalk there
is still an iteration, even if it is encapsulated: we iterate
over some collection and then execute some code for each element.
In FP, and in HOM, there is instead an aggregate operation: we
take an existing operation and lift it up as applying to an entire collection.
This difference might seem contrived, but the research done with
the HANDS system demonstrates that it is very real:
After creating HANDS, I conducted another user study to examine the effectiveness of three features of HANDS: queries, aggregate operations, and data visibility. HANDS was compared with a limited version that lacked these features. In the limited version, programmers were able to achieve the desired results but had to use more traditional programming techniques. Children using the full-featured HANDS system performed significantly better than their peers who used the limited version.
I also find this difference to be very real.
The difference between iterating with blocks and lifting operations
to be aggregate operations also shows up in the fact that the lifting can be done on any
combination of the involved parameters, whereas you tend to only
iterate over one collection at a time, because the collection and
the iteration are in focus.
Symmetry
Finally, the comparison to functional languages shows a couple of
interesting asymmetries: in a functional language, higher order
functions can be applied both to named functions and to anonymous
functions. In essence, the higher order mechanism just takes
functions and doesn't care whether they are named or not. Also,
the higher order mechanism uses the same mechanism (functions)
as the base system.
With block-based higher order mechanisms, on the other hand,
we must make the argument an anonymous function (that's what
a block is), and we cannot use a named function, bringing
us back to the conundrum mentioned at the start that this
mechanism encourages bad code. Not only that, it also turns
out that the base mechanism (messages and methods) is different
from the higher order mechanism, which requires anonymous functions,
rather than methods.
HOM currently solves only the latter part of this asymmetry, making
the higher order mechanism the same as the base mechanism, that
mechanism being messaging in both cases. However, it currently
cannot solve the other asymmetry: where blocks support unnamed,
inline code and not named code, HOM supports named but not unnamed
code. While I think that this is the better choice in the larger
number of cases, it would be nice to actually support both.
One solution to this problem might be to simply support both blocks
and Higher Order Messaging, but it seems to me that the more
elegant solution would be to support inline definition of more-or-less
anonymous methods that could then be integrated into the Higher Order
Messaging framework.