Wednesday, October 7, 2015

Jitterdämmerung

So, Windows 10 has just been released, and with it the Ahead Of Time (AOT) compilation feature .NET Native. Google also recently introduced ART for Android, and I just discovered that Oracle is planning an AOT compiler for mainstream Java.

With Apple doggedly sticking to Ahead of Time compilation for Objective-C and now their new Swift, JavaScript is pretty much the last mainstream hold-out for JIT technology. And even in JavaScript, the state of the art for achieving maximum performance appears to be asm.js, which largely eschews JIT techniques: it acts as a kind of object code, represented in JavaScript, that other languages are AOT-compiled into and the browser then executes.

I think this shift away from JITs is not a fluke but was inevitable; in fact, the big question is why it has taken so long (probably industry inertia). The benefits were always less than advertised, the costs higher than anticipated. More importantly, though, the inherent performance characteristics of JIT compilers don't match up well with most real-world systems, and the shift to mobile has only made that discrepancy worse. Although JITs are not going to go away completely, they are fading into the sunset of a well-deserved retirement.

Advantages of JITs less than promised

I remember reading the issue of the IBM Systems Journal on Java Technology back in 2000, I think. It had a bunch of research articles describing super amazing VM technology with world-beating performance numbers. It also had a single real-world report from IBM's San Francisco project. In the real world, it turned out, performance was a bit more "mixed", as they say. In other words: it was terrible, and they had to do an incredible amount of work for the system to be even remotely usable.

There was also the experience of the New Typesetting System (NTS), a rewrite of TeX in Java. Performance was atrocious; the team took it with humor and chose a snail as their logo.

One of the reasons for this less than stellar performance was that JITs were invented for highly dynamic languages such as Smalltalk and Self. In fact, the Java HotSpot VM can be traced in a direct line back to Self via the Strongtalk system, whose creator, Animorphic Systems, was purchased by Sun in order to acquire the VM technology.

However, it turns out that one of the biggest benefits of JIT compilers in dynamic languages is figuring out the actual types of variables. This is a problem that is theoretically intractable (equivalent to the halting problem) and practically fiendishly difficult to do at compile time for a dynamic language. It is trivial to do at runtime: all you need to do is record the actual types as they fly by. If you are doing Polymorphic Inline Caching, just look at the contents of the caches after a while. It is also largely trivial to do for a statically typed language at compile time, because the types are right there in the source code!
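To make the type-feedback point concrete, here is a minimal sketch in plain C (the cache layout and all names are made up for illustration, not how any particular VM does it): a call site simply records the receiver classes it actually sees, and the optimizer later just inspects the cache instead of solving an intractable inference problem.

#include <stdio.h>

/* Toy stand-in for a polymorphic inline cache at one call site:
   it records which "classes" actually show up at runtime. */
#define PIC_SLOTS 4

typedef struct {
    const char *seen_class[PIC_SLOTS]; /* classes observed at this call site */
    int         hits[PIC_SLOTS];       /* how often each one was seen */
    int         used;                  /* number of distinct classes so far */
} InlineCache;

static void ic_record(InlineCache *ic, const char *cls) {
    for (int i = 0; i < ic->used; i++) {
        /* pointer identity stands in for comparing class pointers */
        if (ic->seen_class[i] == cls) { ic->hits[i]++; return; }
    }
    if (ic->used < PIC_SLOTS) {        /* megamorphic sites just overflow */
        ic->seen_class[ic->used] = cls;
        ic->hits[ic->used] = 1;
        ic->used++;
    }
}

int main(void) {
    static const char *SmallInt = "SmallInteger", *Float = "Float";
    InlineCache ic = {0};

    /* Simulate the messages "flying by" at one call site. */
    for (int i = 0; i < 1000; i++) ic_record(&ic, SmallInt);
    for (int i = 0; i < 3; i++)    ic_record(&ic, Float);

    /* The optimizer later just reads off what was observed. */
    for (int i = 0; i < ic.used; i++)
        printf("%s seen %d times\n", ic.seen_class[i], ic.hits[i]);
    return 0;
}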

So gathering information at runtime simply isn't as much of a benefit for languages such as C# and Java as it was for Self and Smalltalk.

Significant Costs

The runtime costs of a JIT are significant. The obvious cost is that the compiler has to be run alongside the program to be executed, so time spent compiling is not available for executing. Apart from this direct cost, it also means that your compiler is limited in the types of analyses and optimizations it can do. The impact is particularly severe on startup, so short-lived programs such as TeX/NTS are severely impacted and can often run slower overall than interpreted byte-code.

In order to mitigate this, you end up having multiple compilers and heuristics for deciding when to use which compiler. In other words: complexity increases dramatically, and you have only mitigated the problem somewhat, not solved it.

A less obvious cost is an increase in virtual memory pressure, because the code pages created by the JIT are "dirty", whereas executables paged in from disk are clean. Dirty pages have to be written to disk when memory is required; clean pages can simply be unmapped. On devices without a swap file, like most smartphones, dirty vs. clean can mean the difference between a few unmapped pages that can be paged in again later and a process getting killed by the OS.
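A rough illustration of that difference, as a sketch in plain C using POSIX mmap (error handling omitted, the file path is just an example): AOT-compiled code is mapped read-only and executable straight from a file on disk, so the kernel can discard those pages and read them back later, while JIT output lives in anonymous writable pages with no backing file.

#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    size_t size = 4096;

    /* "Clean" pages: code mapped directly from an executable file on disk.
       Under memory pressure the kernel can simply drop these pages and
       page them back in from the file later. */
    int fd = open("/usr/bin/true", O_RDONLY);
    void *aot_code = mmap(NULL, size, PROT_READ | PROT_EXEC,
                          MAP_PRIVATE, fd, 0);

    /* "Dirty" pages: anonymous memory the JIT writes generated code into.
       There is no file to fall back on, so these pages must stay in RAM or
       go to swap; on swapless devices they can't be evicted at all.
       (Writable+executable mappings like this are also what iOS forbids.) */
    void *jit_code = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    printf("file-backed: %p  anonymous: %p\n", aot_code, jit_code);

    munmap(aot_code, size);
    munmap(jit_code, size);
    close(fd);
    return 0;
}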

VM and cache pressure is generally considered a much more severe performance problem than a little extra CPU use, and often even than a lot of extra CPU use. Most CPUs today can multiply numbers in a single cycle, yet a single main memory access has the CPU stalled for a hundred cycles or more.

In fact, it could very well be that keeping non-performance-critical code as compact interpreted byte-code may actually be better than turning it into native code, as long as the code-density is higher.

Security risks

Having memory that is both writable and executable is a security risk. And forbidden on iOS, for example. The only exception is Apple's own JavaScript engine, so on iOS you simply can't run your own JITs.

Machines got faster

On the low-end of performance, machines have gotten so fast that pure interpreters are often fast enough for many tasks. Python is used for many tasks as is and PyPy isn't really taking the Python world by storm. Why? I am guessing it's because on today's machines, plain old interpreted Python is often fast enough. Same goes for Ruby: it's almost comically slow (in my measurements, serving http via Sinatra was almost 100 times slower than using libµhttp), yet even that is still 400 requests per second, exceeding the needs of the vast majority of web-sites including my own blog, which until recently didn't see 400 visitors per day.

The first JIT I am aware of was Peter Deutsch's PS (Portable Smalltalk), but only about a decade later Smalltalk was fine doing multi-media with just a byte-code interpreter. And native primitives.

Successful hybrids

The technique used by Squeak, an interpreter plus C primitives for the heavy lifting (for example multi-media or cryptography), has been applied successfully in many different cases. This hybrid approach was described in detail by John Ousterhout in Scripting: Higher-Level Programming for the 21st Century: high-level "scripting" languages are used to glue together high-performance code written in "systems" languages. Examples include NumPy, but the ones I found most impressive were "computational steering" systems apparently used in supercomputing facilities such as Oak Ridge National Laboratory. Written in Tcl.
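As a minimal sketch of that shape in plain C (the bytecodes and the primitive are made up): a dumb interpreter loop provides the glue, and the heavy lifting is delegated to a native primitive, so the interpreter's slowness barely matters.

#include <stdio.h>

/* Made-up bytecodes for a toy stack machine. */
enum { OP_PUSH, OP_ADD, OP_PRIM_SUM_SQUARES, OP_PRINT, OP_HALT };

/* The "systems language" part: a native primitive doing the heavy lifting. */
static long prim_sum_squares(long n) {
    long sum = 0;
    for (long i = 1; i <= n; i++) sum += i * i;
    return sum;
}

/* The "scripting" part: a trivial interpreter gluing things together. */
static void interpret(const long *code) {
    long stack[16]; int sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];                    break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp];              break;
        case OP_PRIM_SUM_SQUARES:              /* call out to native code */
                       stack[sp-1] = prim_sum_squares(stack[sp-1]); break;
        case OP_PRINT: printf("%ld\n", stack[--sp]);                break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* "Script": compute the sum of squares up to 1000, add 1, print. */
    const long program[] = {
        OP_PUSH, 1000, OP_PRIM_SUM_SQUARES, OP_PUSH, 1, OP_ADD,
        OP_PRINT, OP_HALT
    };
    interpret(program);
    return 0;
}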

What's interesting with these hybrids is that JITs are being squeezed out at both ends: at the "scripting" level they are superfluous, at the "systems" level they are not sufficient. And I don't believe that this idea is only applicable to specialized domains, though there it is most noticeable. In fact, it seems to be an almost direct manifestation of the observations in Knuth's famous(ly misquoted) quip about "Premature Optimization":

Experience has shown (see [46], [51]) that most of the running time in non-IO-bound programs is concentrated in about 3 % of the source text.

[..] The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12 % improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3 %. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail. After working with such tools for seven years, I've become convinced that all compilers written from now on should be designed to provide all programmers with feedback indicating what parts of their programs are costing the most; indeed, this feedback should be supplied automatically unless it has been specifically turned off.

[..]

(Most programs are probably only run once; and I suppose in such cases we needn't be too fussy about even the structure, much less the efficiency, as long as we are happy with the answers.) When efficiencies do matter, however, the good news is that usually only a very small fraction of the code is significantly involved.

For the 97%, a scripting language is often sufficient, whereas the critical 3% are both critical enough as well as small and isolated enough that hand-tuning is possible and worthwhile.

I agree with Ousterhout's critics who say that the split into scripting languages and systems languages is arbitrary. Objective-C, for example, combines that approach in a single language, though one that is very much a hybrid itself. The "Objective" part is very similar to a scripting language, despite being compiled ahead of time, in both performance and ease/speed of development, while the C part does the heavy lifting of a systems language. Alas, Apple has worked continuously and fairly successfully at destroying both of these aspects and turning the language into a bad caricature of Java. However, although the split is arbitrary, the competing and diverging requirements are real; see Erlang's split into a functional language in the small and an object-oriented language in the large.

Unpredictable performance model

The biggest problem I have with JITs is that their performance model is extremely unpredictable. First, you don't know when optimizations are going to kick in, or when extra compilation is going to make you slower. Second, predicting which bits of code will actually be optimized well is also hard and a moving target. Combine these two factors, and you get a performance model that is somewhere between unpredictable and intractable, and therefore at best statistical: on average, your code will be faster. Probably.

While there may be domains where this is acceptable, most of the domains where performance matters at all are not of this kind; they tend to be (soft) real time. In real time systems average performance matters not at all, predictably meeting your deadline does. As an example, delivering 80 frames in 1 ms each and 20 frames in 20 ms each (480 ms total) means failure: you missed your 60 fps target 20% of the time. Delivering 100 frames in 10 ms each (1000 ms total) means success: you met your 60 fps target 100% of the time, despite the fact that the first scenario is more than twice as fast on average.

I really learned this in the 90ies, when I was doing pre-press work and delivering highly optimized RIP and Postscript processing software. I was stunned when I heard about daily newspapers switching to pre-rendered, pre-screened bitmap images for their workflows. This is the most inefficient format imaginable for pre-press work, with each page typically taking around 140 MB of storage uncompressed, whereas the Postscript source would typically be between 1/10th and 1/1000th of that size. (And at the time, 140 MB was a lot even for disk storage, never mind RAM or network capacity.)

The advantage of pre-rendered bitmaps is that your average case is also your worst case. Once you have provisioned your infrastructure to handle this case, you know that your tech stack will be able to deliver your newspaper on time, no matter what the content. With Postscript (and later PDF) workflows, your average case is much better (and your best case ridiculously so), but you simply don't get any bonus points for delivering your newspaper early. You just get problems if it's late, and you are not allowed to average the two.

Eve could survive and be useful even if it were never faster than, say, Excel. The Eve IDE, on the other hand, can't afford to miss a frame paint. That means Imp must be not just fast but predictable - the nemesis of the SufficientlySmartCompiler.

I also saw this effect in play with Objective-C and C++ projects: despite the fact that Objective-C's primitive operations are generally more expensive, projects written in Objective-C often had better performance than comparable C++ projects, because Objective-C's performance model was so much more simple, obvious and predictable.

When Apple was still pushing the Java bridge, Sun engineers did a stint at WWDC to explain how to optimize Java code for the HotSpot JIT. It was comical. In order to write fast Java code, you effectively had to think of the assembler code that you wanted to get, then write the Java code that you thought might net that particular bit of machine code, taking into account the various limitations of the JIT. At that point, it is a lot easier to just write the damn assembly code. And vastly more predictable: what you write is what you get.

Modern JITs are capable of much more sophisticated transformations, but what the creators of these advanced optimizers don't realize is that they are making the problem worse rather than better. The more they do, the less predictable the code becomes.

The same, incidentally, applies to SufficientlySmart AOT compilers such as the one for the Swift language, though the problem is not quite as severe as with JITs because you don't have the dynamic component. All these things are well-intentioned but all-in-all counter-productive.

Conclusion

Although the idea of Just in Time Compilers was very good, their area of applicability, which was always smaller than imagined and/or claimed, has shrunk ever further due to advances in technology, changing performance requirements and the realization that for most performance-critical tasks, predictability is more important than average speed. They are therefore slowly being phased out in favor of simpler, generally faster and more predictable AOT compilers. Although they are unlikely to go away completely, their significance will be drastically diminished.

Alas, the idea of writing high-level code without any concessions to performance (often justified by misinterpreting or simply misquoting Knuth) and then letting a sufficiently smart compiler fix it lives on. I don't think this approach to performance is viable; more predictability is needed, and a language with a hybrid nature and the ability for the programmer to specify behavior-preserving transformations that alter the performance characteristics of code is probably the way to go for high-performance, high-productivity systems. More on that another time.

What do you think? Are JITs on the way out or am I on crack? Should we have a more manual way of influencing performance without completely rewriting code or just trusting the SmartCompiler?

Update: Nov. 13th 2017

The Mono Project has just announced that they are adding a byte-code interpreter: "We found that certain programs can run faster by being interpreted than being executed with the JIT engine."

Update: Mar. 25th 2021

Facebook has created an AOT compiler for JavaScript called Hermes. They report significant performance improvements and reductions in memory consumption (with of course some trade-offs for specific benchmarks).

Speaking of JavaScript, Google launched Ignition in 2017, a JS bytecode interpreter for faster startup and better performance on low-memory devices. It also did much better in terms of pure performance than they anticipated.

So in 2019, Google introduced their JIT-Less JS engine. Not even an AOT compiler, just a pure interpreter. While seeing slowdowns in the 20-80% range on synthetic benchmarks, they report real-world performance only a few percent lower than their high-end, full-throttle TurboFan JIT.

Oh, how could I forget WebAssembly? "One of the reasons applications target WebAssembly is to execute on the web with predictable high performance. ".

I've also been told by People Who Know™ that the super-amazing Graal JIT VM for Java is primarily used for its AOT capabilities by customers. This is ironic, because the aptly named Graal pretty much is the Sufficiently Smart Compiler that was always more of a joke than a real goal...but they achieved it nonetheless, and as a JIT no less! And now that we have it, it turns out customers mostly don't care. Instead, they care about predictability, start-up performance and memory consumption.

Last but not least, Apple's Rosetta 2 translator for running Intel binaries on ARM is an AOT binary-to-binary translator. And it is much faster, relatively speaking, than the original Rosetta or comparable JIT-based binary translators, with translated Intel binaries clocking in at around 80% of the speed of native ARM binaries. Combined with the amazing performance of the M1, this makes M1 Macs frequently faster than Intel Macs at running Intel Mac binaries. (Of course it also includes a (much slower) JIT component as a fallback for when the AOT compiler can't figure things out. Like when the target program is itself a JIT...)

Update: Oct. 3rd 2021

It appears that 45% of CVEs for V8 and more than half of "in the wild" Chrome exploits abused a JIT bug, which is why Microsoft is experimenting with what they call SDSM (Super Duper Secure Mode). What's SDSM? JavaScript without the JIT.

Unsurprisingly at this point, performance is affected far less than one might have guessed, even for the benchmarks, and even less in real-world tasks: "Anecdotally, we find that users with JIT disabled rarely notice a difference in their daily browsing.".

Update: Mar. 30th 2023

To my eyes, this looks like startup costs, much of which will be JIT startup costs.

HN user jigawatts appears to confirm my suspicion:

Teams is an entire web server written in an interpreted language compiled on the fly using a "Just in Time" optimiser (node.js + V8) -- coupled to a web browser that is basically an entire operating system. You're not "starting Teams.exe", you are deploying a server and booting an operating system. Seen in those terms, 9 seconds is actually pretty good. Seen in more... sane terms, showing 1 KB of text after 9 seconds is about a billion CPU instructions needed per character.
It also looks to be the root problem, well, one of the root problems, with Visual Studio's launch time.

Update: Apr. 1st 2023

"One of the most annoying features of Julia is its latency: The laggy unresponsiveness of Julia after starting up and loading packages." -- Julia's latency: Past, present and future

I find it interesting that the term "JIT" is never mentioned in the post; I guess it's just taken as a given.

I am pretty sure I've missed some developments.

Sunday, October 4, 2015

Are Objects Already Reactive?


TL;DR: Yes, obviously.

My post from last year titled The Siren Call of KVO and Cocoa Bindings has been one of my most consequential so far. Apart from being widely circulated and discussed, it has also been a focal point of my ongoing work related to Objective-Smalltalk. The ideas presented there have been central to my talks on software architecture, and I have finally been able to present some early results I find very promising.

Alas, with the good always comes the bad, and some of the reactions (sic) have not been quite so positive. For example, consider the following, which I wrote:

[..] Adding reactivity to an object-oriented language is, at first blush, non-sensical and certainly causes confusion [because] whereas functional programming, which is per definition static/timeless/non-reactive, really needs something to become interactive, reactivity is already inherent in OO. In fact, reactivity is the quintessence of objects: all computation is modeled as objects reacting to messages.
This seemed quite innocuous, obvious, and completely uncontroversial to me, but apparently caused a bit of a stir with at least some of the creators of ReactiveCocoa:

Ouch! Of course I never wrote that "nobody" needs FRP: Functional Programming definitely needs FRP or something like it, because it isn't already reactive the way objects are. Second, what I wrote is that this is non-sensical "at first blush" (so "on first impression"). Idiomatically, this phrase usually sets up a "...but on closer examination", and lo and behold, almost the entire rest of the post talks about how the related concepts of dataflow and dataflow constraints are highly desirable.

The point was and is (obviously?) a terminological one, because the existing term "reactivity" is being overloaded so much that it confuses more than it clarifies. And quite frankly, the idea of objects being "reactive" is (a) so self-evident (you send a message, the object reacts by executing a method, which usually sends more messages) and (b) so deeply ingrained and basic that I didn't really think about it much at all. So obviously, it could very well be that I was wrong and that this was "common sense" to me in the Einsteinian sense.

I will explore the terminological confusion more in later posts, but suffice it to say that Conal Elliott contacted the ReactiveCocoa guys to tell them (politely) that whatever ReactiveCocoa was, it certainly wasn't FRP:

I'm hoping to better understand how the term "Functional Reactive Programming" gets applied to systems that are so far from the original definition and principles (continuous time with precise & simple mathematical meaning)
He also wrote/talked more about this confusion in his 2015 talk "Essence and Origins of FRP":
The term has been used incorrectly to describe systems like Elm, Bacon, and Reactive Extensions.
Finally, he seems to agree with me that the term "reactive" wasn't really well chosen for the concepts he was going after:

What is Functional Reactive Programming:  Something of a misnomer.  Perhaps Functional temporal programming

So having established that the term "reactive" is confusing when applied to whatever it is that ReactiveCocoa is or was trying to be, let's have a look at how and whether it is applicable to objects. The Communications of the ACM "Special issue on object-oriented experiences and future trends" from 1995 has the following to say:

A group of leading experts from industry and academia came together last fall at the invitation of IBM and ACM to ponder the primary areas of future needs in software support for object-based applications.

[..]

In the future, as you talk about having an economy based on these entities (whether we call them “objects” or we call them something else), they’re going to have to be more proactive. Whether they’re intelligent agents or subjective objects, you enable them with some responsibility and they get something done for you. That’s a different view than we have currently where objects are reactive; you send it a message and it does something and sends something back.

But lol, that's only a group of leading researchers invited by IBM and the ACM writing in arguably one of the most prestigious computing publications, so what do they know? Let's see what the Blue Book from 1983 has to say when defining what objects are:

The set of messages to which an object can respond is called its interface with the rest of the system. The only way to interact with an object is through its interface. A crucial property of an object is that its private memory can be manipulated only by its own operations. A crucial property of messages is that they are the only way to invoke an object's operations. These properties insure that the implementation of one object cannot depend on the internal details of other objects, only on the messages to which they respond.
So the crucial definition of objects, according to the creators of Smalltalk, is that they respond to messages. And of course if you check a dictionary or thesaurus, you will find that respond and react are synonyms. So the fundamental definition of objects is that they react to messages. Hmm...that sounds familiar somehow.

While those are seemingly pretty influential definitions, maybe they are uncommon? No. A simple google search reveals that this definition is extremely common, and has been around for at least the last 30-40 years:

A conventional statement of this principle is that a program should never declare that a given object is a SmallInteger or a LargeInteger, but only that it responds to integer protocol.
But lol, what do Adele Goldberg, David Robson or Dan Ingalls know about Object Oriented Programming? After all, we have one of the creators of ReactiveCocoa here! (Funny aside: LinkedIn once asked me "Does Dan Ingalls know about Object Oriented Programming?" Alas there wasn't a "Are you kidding me?" button, so I lamely clicked "Yes").

Or maybe it's only those crazy dynamic typing folks that no-one takes seriously these days? No.

So the only relevant thing for typing purposes is how an object reacts to messages.
Here's a section from the Haiku/BeOS documentation:
A BHandler object responds to messages that are handed to it by a BLooper. The BLooper tells the BHandler about a message by invoking the BHandler's MessageReceived() function.
A book on OO graphics:

The draw object reacts to messages from the panel, thereby creating an IT to cover the canvas.
CS lecture on OO:
Properties implemented as "fields" or "instance variables"
  • constitute the "state" of the object
  • affect how object reacts to messages
Heck, even the Apple Cocoa/Objective-C docs speak of "objects responding to messages", it's almost like a conspiracy.
By separating the message (the requested behavior) from the receiver (the owner of a method that can respond to the request), the messaging metaphor perfectly captures the idea that behaviors can be abstracted away from their particular implementations.
Book on OO Analysis and Design:
As the object structures are identified and modeled, basic processing requirements for each object can be identified. How each object responds to messages from other objects needs to be defined.
An object's behavior is defined by its message-handlers(handlers). A message-handler for an object responds to messages and performs the required actions.
CLIPS - object-oriented programming

Or maybe this is an old definition from the 80ies and early 90ies that has fallen out of use? No.

The behavior of related collections of objects is often defined by a class, which specifies the state variables of an objects (its instance variables) and how an object responds to messages (its instance methods).
Methods: Code blocks that define how an object responds to messages. Optionally, methods can take parameters and generate return values.
Cocoa, by Richard Wentk, 2010

The main difference between the State Machine and the immutable is the way the object reacts to messages being sent (via methods invoked on the public interface). Whereas the State Machine changes its own state, the Immutable creates a new object of its own class that has the new state and returns it.
So to sum up: classic OOP is definitely reactive. FRP is not, at least according to the guy who invented it. And what exactly things like ReactiveCocoa and Elm etc. are, I don't think anyone really knows, except that they are not even FRP, which wasn't, in the end, reactive.

Tune in for "What the Heck is Reactive Programming, Anyway?"

As always, comments welcome here or on HN

Saturday, September 26, 2015

Very Simple Dataflow Constraints with Objective-Smalltalk

Early last year, I wrote a lengthy piece on the connection between Apple technologies such as Key Value Observing (KVO) and Bindings and general Computer Science concepts such as constraint solving, particularly dataflow constraints (aka. Spreadsheet Constraints).

I also wrote that I was working on something, and despite being somewhat distracted with becoming a father, joining a startup and being acquired, I now have working code.

The sample application contains two examples, one a classic temperature converter that I will cover later, the other an implementation of the ReactiveCocoa password validation example. The basic idea is super-simple, we want to enable a login button when the password field and the confirmation field contain the same value, expressed as follows in Objective-C:


loginButton.enabled = [password.stringValue isEqual:passwordConfirm.stringValue];

Again, this is super simple to write down, but it's not the entire story, because we want to evaluate this statement continuously as the password field and the passwordConfirm field change. The mess of callbacks required to keep those states in sync vastly exceeds the one-time evaluation, as explained in a very good article on ReactiveCocoa by NSHipster. That article uses a slightly more elaborate example; the one in the ReactiveCocoa documentation is the following:


RAC(self, createEnabled) = [RACSignal 
    combineLatest:@[ RACObserve(self, password), RACObserve(self, passwordConfirmation) ] 
    reduce:^(NSString *password, NSString *passwordConfirm) {
        return @([passwordConfirm isEqualToString:password]);
    }];

What's noticeable, apart from the macros that are necessary, is the semantic noise apparently inherent in this solution. Instead of focusing on what we want to accomplish (hidden inside the last return), the focus is on generic RAC classes such as RACSignal and methods such as combineLatest: and reduce:. I didn't really want to combine and reduce, I just wanted to keep some different states in sync, and with Objective-Smalltalk, I can do just that.

Let's first recast the original Objective-C expression into Objective-Smalltalk. Since Smalltalk is not burdened by the syntactic legacy of C, we can lose the square brackets. Because we have binary selectors (a bit like operators) and use ':=' for assignment, we can use '=' to check for equality instead of having to write out 'isEqual:'. The dots get replaced by slashes because Polymorphic Identifiers use URI syntax, and finally we use periods instead of semicolons at the end of sentences, er, statements.


loginButton/enabled := password/stringValue = passwordConfirm/stringValue.

Again, this is semantically the same statement as the original Objective-C, it assigns the right hand side to the left hand side. This can be viewed as a one-way constraint: the left hand side should be the same as the right hand side. The constraint has a fundamental flaw, though, because it is only maintained instantaneously as the line of code is executed. After that, left hand side and right hand side can diverge again. What we want is for that constraint to be maintained indefinitely: whenever the right hand side changes, the left hand side should be updated. In Objective-Smalltalk, you can now express this by replacing the ":=" assignment operator (technically: connector) with the "|=" constraint connector:
loginButton/enabled |= password/stringValue = passwordConfirm/stringValue.

A single character change, so no syntactic and no semantic noise.
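For comparison, here is roughly the manual glue one would otherwise write to keep this state in sync by hand, a sketch assuming an AppKit controller that acts as the delegate of both text fields (the class and outlet names are illustrative, not from the sample application):

#import <Cocoa/Cocoa.h>

// Hypothetical hand-rolled version: both text fields use this controller
// as their delegate, and every edit re-derives the button state.
@interface LoginController : NSObject <NSTextFieldDelegate>
@property (strong) IBOutlet NSTextField *password;
@property (strong) IBOutlet NSTextField *passwordConfirm;
@property (strong) IBOutlet NSButton *loginButton;
@end

@implementation LoginController

- (void)updateLoginButton
{
    self.loginButton.enabled =
        [self.password.stringValue isEqual:self.passwordConfirm.stringValue];
}

// NSTextFieldDelegate callback, invoked whenever either field is edited.
- (void)controlTextDidChange:(NSNotification *)notification
{
    [self updateLoginButton];
}

@end

The one-line constraint and the one-line Objective-C expression say what should hold; everything in this sketch is machinery for when it should be re-established.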

Sunday, September 13, 2015

Why Software Is Hard

A while ago, the guys from the "Accidental Tech Podcast" had an episode about the goto fail; disaster and seemed to be struggling a bit with why software is hard / complex, "the most complex man made thing". Although the fact that it's all built is an important aspect, I think that that is a fairly small part.

Probably the biggest problem is the state-space. Software is highly non-linear and discontinuous, unlike for example a bridge, or most other physical objects. If you change or remove a single bolt from a bridge, it is still the same bridge and its characteristics are largely the same. You need to remove quite a number of bolts for that to change, and the effects become noticeable before that (though they do get catastrophically non-linear at some point!). If you change one bit in a piece of software, the behavior is completely unpredictable. It could be the same, it could just crash, it could quietly corrupt data. That's why all those corner cases in the layers matter so much. Again, coming back to the bridge, if one beam has steel that has a slightly odd edge-case, it doesn't matter so much, you don't have to know everything about every beam, as long as they are within rough tolerances. And there are tolerances, and you can improve your odds by making things with tighter tolerances than required. Again, with software it isn't really the case, discrete problems are much harder than continuous ones.

You can see this at work in optimization problems. As long as you have linear equations over real-valued variables, there are efficient algorithms for solving such an optimization problem (the simplex method is usually fast in practice, and interior point methods run in polynomial time). Intuitively, restricting the variables to take only integer values should be easier/quicker, but the reverse is true, and in a big way: once you have integer programming or mixed-integer programming, everything becomes NP-hard.

In fact, I just saw this in action during Joel Spolsky's talk "You suck at Excel": he turned on goal-seeking (essentially a solver), and it diverged dramatically. The problem was that he was rounding the results. Once he turned rounding off, the solver converged to a solution fairly quickly.

The second part that they touched upon is that it is all abstract, which I think is what they were getting at with the idea that it is 100% built. Software being abstract means that we have no intuitions from physical objects to guide us. When building a house, everyone has an idea of how difficult it will be to build a single room vs. the whole house, how much material it will take, etc. With software, not so much: this one seemingly little sub-function can potentially be more complex than the entire rest of the program. Even when navigating a hierarchical file-system, there is no indication of how much is hidden behind each directory entry at a particular level.

The last part is related to the second, in that there are no physical or geometric constraints to the architecture and connection complexity. Again, in a physical system we know that something in one corner has very limited ways of influencing something in a different corner, and whatever effect there is will be attenuated by distance in a very predictable way. Again, in software we cannot generally know this. Good software architecture tries to impose artificial constraints to make construction and understanding tractable.

Wednesday, August 26, 2015

What Happens to OO When Processors Are Free?

A while ago, I presented as a crazy thought experiment the idea of using Montecito's transistor budget for creating a chip with tens of thousands of ARM cores. Well, it seems the idea wasn't so crazy after all: the SpiNNaker project is trying to build a system with a million ARM CPUs, and it is designing a custom chip with lots of ARM cores on it.

Of course they only have 1/6th the die area of the Montecito and are using a conservative 130nm process rather than the 90nm of the Montecito or the 14nm that is state of the art, so they have a much lower transistor budget. They also use the later ARM9 core and add 54 SRAM banks of 32KB each (from the die picture, 3 per core), so in the end they "only" put 18 cores on the chip, rather than many thousands. Using a state of the art 14nm process would mean roughly 100 times more transistors, and a Montecito-sized die another factor of six. At that point, we would be at 10000 cores per chip, rather than 18.

One of the many interesting features of the SpiNNaker project is that "the micro-architecture assumes that processors are ‘free’: the real cost of computing is energy." This has interesting consequences for potentially simplifying object- or actor-oriented programming. Alan Kay's original idea of objects was to scale down the concept of "computer", so every object is essentially a self-contained computer with CPU and storage, communicating with its peers via messages. (Erlang is probably the closest implementation of this concept).

In our core-scarce computing environments, this had to be simulated by multiplexing all (or most) of the objects onto a single von Neumann computer, usually with a shared address space. If cores are free and we have them in the tens of thousands, we can start entertaining the idea of no longer simulating object-oriented computing, but rather of implementing it directly by giving each object its own core and attached memory. Yes, utilization of these cores would probably be abysmal, but with free cores low utilization doesn't matter, and low utilization (hopefully) means low power consumption.

Even at 1% utilization, 10000 cores would still mean throughput equivalent to 100 ARM 9 cores running full tilt, and I am guessing pretty low power consumption if the transistors not being used are actually off. More important than 100 core-equivalents running is probably the equivalent of 100 bus interfaces running at full tilt. The aggregate on-chip memory bandwidth would be staggering.

You could probably also run the whole thing at lower clock frequencies, further reducing power. With each object having around 96KB of private memory to itself, we would probably be looking at coarser-grained objects, with pure data being passed between the objects (Objective-C or Erlang style) and possibly APL-like array extensions (see OOPAL). Overall, that would lead to a de-emphasis of expression-oriented programming models, and a more architectural focus.
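As a minimal sketch of that programming model in plain C (everything here is made up for illustration): each "object" owns its private memory and a mailbox, and the only way to interact with it is to deposit a message. On hardware with free cores, each such loop would run on its own core; here a trivial scheduler multiplexes them the way we do today.

#include <stdio.h>
#include <string.h>

#define MBOX_SIZE 8

typedef struct Object Object;
typedef struct { const char *selector; long argument; Object *sender; } Message;

struct Object {
    const char *name;
    long        state;               /* the object's private memory */
    Message     mailbox[MBOX_SIZE];  /* messages deposited by peers */
    int         head, tail;
};

static void send(Object *receiver, const char *selector, long arg, Object *sender) {
    receiver->mailbox[receiver->tail++ % MBOX_SIZE] =
        (Message){ selector, arg, sender };
}

/* One "core tick": the object reacts to at most one pending message. */
static void step(Object *self) {
    if (self->head == self->tail) return;          /* nothing to react to */
    Message m = self->mailbox[self->head++ % MBOX_SIZE];
    if (strcmp(m.selector, "add:") == 0) {
        self->state += m.argument;
        send(m.sender, "result:", self->state, self);
    } else if (strcmp(m.selector, "result:") == 0) {
        printf("%s got result: %ld from %s\n", self->name, m.argument, m.sender->name);
    }
}

int main(void) {
    Object counter = { "counter", 0 };
    Object client  = { "client",  0 };

    send(&counter, "add:", 3, &client);
    send(&counter, "add:", 4, &client);

    for (int tick = 0; tick < 8; tick++) {         /* poor man's scheduler */
        step(&counter);
        step(&client);
    }
    return 0;
}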

This sort of idea isn't new, the Transputer got there in the late 80ies, but it was conceived when Moore's law didn't just increase transistor counts, but also clock-frequencies, and so Intel could always bulldozer away more intelligent architectures with better fabs. This has stopped, clock-frequencies have been stagnant for a while and even geometries are starting to stutter. So maybe now the time for intelligent CPU architectures has finally come, and with it the impetus for examining our assumptions about programming models.

As always, comments welcome here or on Hacker News.

UPDATE: The kilo-cores are here:

  • Kilocore: 1000 processors, 1.78 Trillion ops/sec, and at 1.78pJ/Op super power-efficient, so at 150 GOps/s only uses 0.7 watts. On a 32nm process, so not yet maxed out.
  • GRVI Phalanx joins The Kilocore Club: 1680 cores.
No reports of any of them running actors, but ensembles might work :-)

Tuesday, July 7, 2015

Greek Choices: A German's Perspective

My 6Wunderkinder Microsoft colleague James Duncan Davidson penned some beautiful notes on the Impossible Greek Choices that, effectively during his wedding and honeymoon (congratulations!!), happened to the people he has grown to know and, in some cases, love. It's a wonderful piece that I highly recommend. There were some calls for a German perspective, which is by necessity going to be less personal, but I hope it can provide some additional background.

The Referendum

The Greek Referendum was an odd one, because it asked voters whether to accept or not accept an offer by the 18 other Euro states and the IMF that was, in fact, no longer on the table. So that's at least a little weird, as there really wasn't a choice to make, and from here it looked like pure grandstanding / political theater.

I don't know how it was perceived in Greece, but from here it looked like the government presented the choices as "bow to German dictates" vs. "we can do without them". And the capital controls imposed by the Greek government because the banks were effectively insolvent were described as "terrorism" perpetrated by the EU. That's a bit rich when you take into account that the only thing keeping the country afloat was EU funds and the money that is still available is coming from EU emergency funds. If "terrorists" are people who give you money, I want some more terrorists around me.

Even weirder is the fact that Tsipras seems to think that the No vote will strengthen his position in future negotiations, that now he will be able to negotiate a better deal. I have a feeling that this will go down as one of the biggest political miscalculations in history, but I may be wrong and he will now get everything he wanted.

The "German" View

The German perspective on the Greek debt crisis is really quite simple: you can't live beyond your means indefinitely. You can do it temporarily by taking on debt, but at some point that debt has to be repaid. At that point you have to not only revert to living within your original means, but significantly below because you have the debt to repay.

I think this is completely obvious and non-controversial, as it relies on no more than basic arithmetic. But then again, I am German.

This doesn't mean you can never take on debt. For example, it makes sense to time-shift availability of things. Or to invest in things that increase your earning potential. Or one-time events, especially with a strong time component. But never to lift your general standard of living. There is no way that can work.

The Keynesian View

Keynesians say that governments are different from private individuals, and the debt they take on is different from normal household debt. Important is the role of government debt in a recession: the government should take on more debt to minimize economic contraction and spur growth. The worst thing a government can do in a recession is start saving and imposing "austerity": the economy will contract even more, and apart from hurting citizens, this actually tends to make the debt problem worse. The reason is that government debt is generally measured relative to GDP, because GDP/economic output is a pretty direct indicator of a country's ability to pay back its debt.

While not quite as uncontroversial, I think this is also largely trivially true.

Although these two views are often described as contradictory, for example Krugman criticizes the "Austerians" by showing how Keynesian predictions turn out to be true, I don't think they are. After all, you can pick up debt in a crisis and then pay back the debt when the economy is doing well, and IIRC, that is exactly what Keynes said governments should do, behave anti-cyclically.

Of course, this requires discipline: why should I do something uncomfortable when things are going well? After all, the fact that things are going well proves that what I am doing is right, right? Empirically, governments do not seem to pay back debt in any significant way, shape or form, for reasons I do not fully understand. Part appears to be simple expediency: why cut spending (= happy voters) when you don't absolutely have to? Part might be that our pension systems tend to depend on having "safe" government debt to invest in. Financial markets also appear to be fine with a small-ish permanent fiscal deficit.

I personally think this is hugely problematic and undermines democratic principles, because governments sell out their sovereign (the voters) to the financiers for a little bit of extra spending money. Panem et circenses.

A Third Way (I)

Of course, there is a way of living beyond your means without having to suffer the consequences, which is to have someone else pick up the tab, for example by not repaying your debt. The most obvious way is a default, but if you are a country and the debt is in your own currency, you get more subtle options such as devaluation, where you just lessen the value of the debt (and of all the money in circulation, but hey...) or high inflation, which has the same effect but stretches it out over time so it is less noticeable.

In the past, at least Italy and Greece used to do this all the time, but of course this means that people are much less willing to lend you money, and they will demand much higher interest payments, pricing in actual or potential inflation and the risk of devaluation or default. In the worst case, people will stop lending you money altogether.

Without control over a currency and attached printing press for said currency, inflating or devaluing your way out of trouble is no longer an option, which is why joining the Euro area was contingent on meeting "convergence criteria" on inflation and public debt.

Whereas Italy made a real effort to meet the criteria, Greece never really did. Even the official figures were at best marginal, and even those had been doctored by US money house and vampire squid Goldman Sachs.

However, private money lenders were unaware (possibly willfully) of this, and lent Greece money at Euro-group rates, way, way below previously attainable rates. Greece, freed from the difficulty and expense of obtaining debt, went on a debt-fueled spending spree: GDP skyrocketed, from Euro introduction in 2001 to the start of the financial crisis in 2008 by a factor of 2.3!

Did the Greek economy become super-competitive in this time, did exports soar, did tourism? As far as I know, the answer to these questions is "No", the rise in GDP was largely fueled by cheap debt. The Euro area in general did well during this time, but for example Germany's GDP in 2008 was only 50% higher than the high point in the mid 90ies.

Greece piled on public debt in this period at >10% of the budget.

The Greek Problem

When the financial crisis hit, the toxicity of all the debt held by pretty much everyone became evident, but for some reason, Greece was particularly hard hit, despite the fact that their overall debt levels were not that much worse than everyone else's. However, how bad existing debt is is very much influenced by your creditworthiness, because debt is constantly refinanced and a bad credit rating means that what used to be sustainable levels of debt can become unsustainable, a self-fulfilling prophecy.

Well, our favorite vampire-squid Goldman Sachs announced that something was "fishy" with Greece. How did they know this? Easy, as we saw above they were the ones who had helped Greece cook the books in the first place! Suddenly Greece's credit load was unsustainable, because rates skyrocketed. Well, Greece's rates reverted to pre-Euro levels, because it became known that the information that had been used to justify low Euro-area rates (meeting the "convergence criteria") had been falsified.

Without those falsifications, Greece's borrowing costs would have been much higher, and those high borrowing costs would have prohibited taking on as much debt as had been taken on to fuel Greece's GDP bubble. Added factors were that Greece no longer had the option of devaluing creditors' assets by devaluing or inflating debt away, but at the core is trustworthiness: do I, as a creditor, believe that my debtor will pay back their debt to me in full? If I am 100% certain, rates are low, with every bit of doubt rates rise until they are infinite, i.e. you can't borrow.

Greece simply was never trustworthy, and for most of history this was well known, except for the period between 2001 (Euro introduction) and 2008 (financial crisis). Of course, it could have been known if the banks lending Greece money had bothered to look. But they didn't look, which was reckless, beyond the lie that Goldman Sachs had helped Greece sell, which was criminal.

So what I think should have happened (aside from the debt never accumulating in the first place because of people and institutions not lying and doing their jobs properly) is that Greece should have defaulted on those debts, the banks that recklessly lent that money took the hit they deserved to take and then sued Goldman Sachs and the Greek government(s) and the officials for the damages.

What happened instead was the typical and sickening socialization of private losses, while profits from those same transactions continued and continue to be privatized.

At this point, we could have taken the economically correct Keynesian approach of helping the Greek economy recover with the help of possibly even more debt, but debt that becomes sustainable because GDP rises faster than debt and thus the all important debt/GDP ratio falls.

Instead we got the mess we're in now, with GDP down 25% from peak, though still significantly higher than before the Euro. Why? As far as I can tell, a large part of the reason is that no one in the EU trusts pretty much any Greek government with a scheme that entails "you get stuff now, and you will do the right things later". No one.

The reason for this is that, in the context of the EU, successive Greek governments have flouted every rule, broken every agreement, ignored every regulation on the books. Consistently. Over the last 30 plus years. All while receiving billions in EU funds that are contingent on compliance with those rules. Rules that every other EU country had to abide by, and when non-compliance was detected, funds were withheld. This includes the old countries as well as the newest. Except Greece.

For example, one of the largest pieces of the budget is the Common Agriculture Policy (CAP), a system of farm subsidies. Every country is required to have specific electronic reporting systems in order to receive CAP funds. The new states all had to install them, even the poorest, and when they did not, CAP funds were withheld. Except Greece, which to this day does not have such a system.

The plastic olive trees used to obtain additional farm subsidies are legendary, but sadly not mythical. Having better controls would make this sort of creative farming (and accounting) much more difficult.

There are also requirements for good governance and tax collection standards, for example you need to have a land registry in order to collect property tax (and protect property rights). Greece does not have such a registry, which is pretty novel for an ostensibly developed nation.

They were required to have one. Didn't bother successive Greek governments. At one point they got significant EU funds to build such a system (€100m IIRC UPDATED: I initially recalled incorrectly that it was €400m, the correct figure is €100m). After a couple of years, an EU official was sent to check. At first, the Greek authorities didn't know what he was talking about. Then they remembered. Of course, not a thing had been done to create a registry.

After being reminded that they had received (and accepted) the funds specifically dedicated to the purpose of building such a registry, they offered to "refund" half the amount. The mind boggles.

Of course, tax collection is a huge domestic problem in Greece, with often huge wealth and income taxed at best theoretically. Real estate plays a huge role in this, and this is why a land registry is not popular with the elites. When the tax collectors come to these quite fabulous houses, nobody there knows who the houses belong to.

I still really like Volker Pispers's idea of tax collection via bulldozer: knock at the first house on the block. If an owner can't be found, knock it down. I am pretty certain tax compliance will improve markedly.

Anyway, as far as I can see, Greek governments see flouting EU rules as their birthright, as something that cannot possibly have consequences, a view that had never been challenged until now. So I think their indignation at actually being held accountable is quite real, and they do see enforcement of rules as cruel and certainly unusual punishment, because for them it is just that: unusual, unexpected, unfamiliar.

And of course, EU institutions that let them get away with, well, everything over so many years certainly share some of the blame. But only some, not nearly as much as Greek politicians would like everyone to believe.

When Syriza was voted into power, I was hopeful that things would improve, but they almost immediately started placing their relatives in well-paid positions, and apparently their proposals are just as vague about finally taxing the actual significant wealth that exists, and just as concrete about squeezing lower incomes to enrage public opinion and effectively use these people as human shields to protect the wealthy. I think I've used the word "sickening" before.

A Third Way (II)

Duncan writes:
What should be on the table is a decision by Europe to strengthen the economic union by sharing the eurozone’s debt. While the particulars of the Greek situation sent them over the edge first in the financial meltdown of 2008, sharing a currency between states without sharing debt is unsustainable in the long term for the entire eurozone. This isn’t news.
I confirmed with him that he meant "debt + fiscal" union. Yes. It is the only way a monetary union can work, in the long run. This has been pointed out many times, by many people. Alas, he adds:

The only surprising thing at this point is that states like Germany insist on keeping Greece under the weight of a debt that can’t ever be repaid.
I take exception to that on several levels:
  1. Nobody forced Greece to take on loans (from private banks) that it knew it couldn't repay
  2. Nobody forced Greece to cheat in order to obtain those loans, which it would never have had access to without cheating.
  3. Nobody forced Greece to avoid certain default, with all the awful consequences that we are now only beginning to see, by accepting a bailout using public/taxpayer money. Which came with certain conditions. If you don't like the conditions, don't take the money.
That said, I don't think anyone in the remaining Eurozone governments has any illusions about Greece avoiding having to default on at least part of that debt. After all, said governments already negotiated a partial default, with both public and private lenders taking a hit.

However, those governments have a justified interest in the same thing not happening again and again. The easiest way to do this would be to have a fiscal union, with member states giving up some chunk of their sovereignty over spending. However, national governments currently balk at this, most vehemently the Greek government with their referendum and "national pride" demagoguery.

Given that fiscal union is not on the table, the other alternative is stringent reforms. And given the simple fact that Greek governments have not been the least bit trustworthy, including this one, the only way to get those reforms is "under the gun". This is unfortunate, and economically stupid, but there does not appear to be any alternative other than letting Greece continue to spend as it wishes with others picking up the tab from time to time. And that is also not an option.

So yeah, it's complicated, and life sometimes sucks, especially for all the people that get to suffer from the consequences of their governments' actions.

The Seldon Crisis You Ordered Is Ready, Where Would You Like It Delivered?

Coming back to Euro, my view of the Euro project was always that it was never really about monetary union. The people who instigated it were dedicated Europeans, a view I share. They wanted political union. They knew that political union was not possible at the time. So they created monetary union, which was politically possible, knowing that over time it would create a situation that would force political union as the only reasonable alternative. A classic Seldon crisis. The only weird thing is that the crisis is here and yet it is not being used to create the fiscal and political union as one would expect. There were plans, it seems, but these were not acted upon. Is it because our current crop of leaders are too fearful, too used to just waiting it out, whatever "it" is? Are they too timid to take the bold leap required to create something great? Or maybe the crisis just isn't deep enough yet? I truly don't know, and am completely flabbergasted.

Of course, things are always a bit more complicated; for example, the Euro was also a precondition for German unification, demanded by various European "partners", particularly France, in order to limit German power post-unification. The fact that the Euro is now described as an evil German master plan to subjugate the proud southern European nations makes this somewhat ironic, or more likely demonstrates how off-base and populist those types of claims are.

The US angle

You've probably noticed the central role of US investment bank and favorite vampire squid (sorry for the repetition, I just can't resist writing that...) Goldman Sachs in this saga: not only were they absolutely instrumental in creating the problem, they then blabbed about it at the worst possible moment, in the middle of the worst financial crisis the world has seen in the last 80 years or so.

While I am not much into conspiracy theories, this is just too much of a coincidence to be... a coincidence, especially when you keep in mind the revolving door between GS and various parts of the US government and the fact that the US is not very fond of the EU, and absolutely hates the Euro.

Taken together, the EU is the world's largest economic power, with more people than the US and a greater GDP. If and when it gets its political act together, it will also be a significant political power, with two seats on the Security Council, a nuclear arsenal, unparalleled economic might and much less hatred directed at it around the world than at the US. Having the EU get its political act together is not in the US's national interest, and as the NSA affair amply demonstrated (not that it was ever in dispute), there are no friendships at this level, just national interests that may coincide from time to time.

While the EU as such is a problem for US interests, the Euro presents an actual real danger. Not only is it a central piece of getting to political union (not just, but especially if my idea that it was intended to hasten the process is true), it has started to become an alternative to the dollar as a petro-, trade- and reserve-currency.

Why is this problematic? One of the mysteries of the US economy is how it has been able to have huge trade deficits, expand the money supply ("print money") and do all sorts of other things, without ever being punished by the usual consequences: high inflation and high interest rates.

While part of the answer is the "confidence fairy" that Professor Krugman ridicules in other contexts (people just think that the US will continue to be a good investment), a huge part is the use of dollars by other countries, which soaks up the dollars you just printed and spent.

An alternative reserve/trading currency means not just that those dollars will no longer be soaked up; even worse, existing reserves will likely be released, so all those dollars flood the market and cause the effects that were previously avoided. Which could be financially disastrous.

Friday, July 3, 2015

When Is My Unit Test Coverage Adequate?

  1. When you are not afraid of changing any of the code.
  2. When you are comfortable with releasing as soon as the tests are green (i.e. always).
  3. Tertium non datur :-)

Why unit tests and not integration test?

Integrated Tests Are a Scam [vimeo]

EOM

Friday, June 26, 2015

Guys, "guys" is perfectly fine for addressing diverse groups

With the Political Correctness police gaining momentum again after being laughed out of the '80s, the word "guys" has apparently come under attack as being "non-inclusive". After discussing the topic a bit on Twitter, I saw Peter Hosey's post declaring the following:

"when you’re addressing a mixed-gender or unknown-gender group, you should not use the word 'guys'."
As evidence, he references a post by Julia Evans purportedly showing that for most uses, people perceive "guys" to be gender-specific. Here is the graph of what she found:

What I find interesting is that the data show exactly the opposite of Peter's claim. Yes, most of the usage patterns are perceived as gender-specific by more people than not, but all of those are third person. The one case that is second person plural, the case of addressing a group of people, is overwhelmingly perceived as being gender neutral, with women perceiving it as gender neutral slightly more than men, but both groups at over 90%.

This matches with my intuition, or to be more precise, I find it somewhat comforting that my intuition about this appears to still match reality. I find "hey guys" neutral (2nd person plural), whereas "two guys walked into a store" is male.

Prescription vs. Description

Of course, they could have just checked their friendly local dictionary, for example Webster's online:
guy (noun)
Definition of GUY
  1. often capitalized : a grotesque effigy of Guy Fawkes traditionally displayed and burned in England on Guy Fawkes Day
  2. chiefly British : a person of grotesque appearance
  3. a : man, fellow
    b : person —used in plural to refer to the members of a group regardless of sex
  4. individual, creature <the other dogs pale in comparison to this little guy>
So there we go: "used in plural to refer to the members of a group regardless of sex". It is important to note that unlike continental dictionaries (German, French), which prescribe correct usage, the Anglo-Saxon tradition is descriptive, meaning actual use is documented. In addition, my recollection is that definitions are listed chronologically, with the older ones first and the newer ones last. So the word's meaning is shifting to be more gender neutral. This is called progress.

What I found interesting is that pointing out the dictionary definition was perceived as prescriptive, that I was trying to force an out-of-touch dictionary definition on a public that perceives the word differently. Of course, the opposite is the case: a few people are trying to force their perception based on outdated definitions of the word on a public and a language that has moved on.

Language evolution and the futility of PC

Speaking of Anglo-Saxons and language evolution: does anyone feel the oppression when ordering beef or pork? Well, you should. These words for the meat of certain animals were introduced to English in 1066 with the conquering Normans. The French words for the animals were now used to describe the food the upper class got served, whereas the Anglo-Saxon words shifted to denote just the animals that the peasants herded. Yeah, and medieval oppression was actually real, unlike some other "oppression" I can think of.

Of course, we don't know about that today, and the words don't have those associations anymore, because language just shifts to adapt to and reflect reality. Never the other way around, which is why the PC brigade's attempts to affect reality by policing language are so misguided.

Take the long history of euphemisms for "person with disability". It started out as "cripple", but that word was seen as stigmatizing, so it was replaced with "handicapped", because it wasn't a defect of the person, but a handicap they had. Then that word got to be stigmatized and we switched to "disabled". Then "person with disabilities", "special", "challenged", "differently abled". And so on and so forth. The problem is that it never works: the stigma moves to the new word that was chosen because it was so far stigma-free, so nowadays calling someone "special" is no longer positive. And calling homeless people "the temporarily underhoused" because "home is wherever you are" also never helped.

So leave language be and focus on changing the underlying reality instead. All of this does not mean that you can't be polite: if someone feels offended by being addressed in a certain way, by all means accommodate them and/or come to some understanding.

Let the Hunting begin :-))

Wednesday, June 17, 2015

Protocol-Oriented Programming is Object-Oriented Programming

Crusty here, I just saw that my young friend Dave Abrahams gave a talk that was based on a little keyboard session we had just a short while ago. Really sharp fellow, you know, I am sure he'll go far someday, but that's the problem with young folk these days: they go rushing out telling everyone what they've learned when the lesson is only one third of the way through.

You see, I was trying to impart some wisdom to the fellow using the old Hegelian dialectic: thesis, antithesis, synthesis. And yes, I admit I wasn't completely honest with him, but I swear it was just a little white lie for a good educational cause. You see, I presented ADT (Abstract Data Type) programming to him and called it OOP. It's a little ruse I use from time to time, and decades of Java, C++ and C# have gone a long way toward making it an easy one.

Thesis

So the thesis was simple: we don't need all that fancy shmancy OOP stuff, we can just use old fashioned structs 90% of the time. In fact, I was going to show him how easy things look in MC68K assembly language, with a few macros for dispatch, but then thought better of it, because he might have seen through my little educational ploy.

Of course, a lot of what I told him was nonsense. For example, OOP isn't at all about subclassing; the guy who coined the term, Alan I think, wrote: "So I decided to leave out inheritance as a built-in feature until I understood it better." So not only is inheritance not the defining feature of OOP as I let on, it actually wasn't even in the original conception of the thing that was first called "object-oriented programming".

Absolute reliance on inheritance and therefore structural relationships is, in fact, a defining feature of ADT-oriented programming, particularly when strong type systems are involved. But more on that later. In fact, OOP best practices have always (since the late '80s and early '90s) called for composition to be used for known axes of customization, with inheritance used for refinement, when a component needs to be adapted in a more ad-hoc fashion. Seeing as that knowledge had filtered down to young turks writing their master's theses back in, what, 1997, you can rest assured that the distinction was well known and not exactly rocket science.

Anyway, I kept all that from Dave in order to really get him excited about the idea I was peddling to him, and it looks like I succeeded. Well, a bit too well, maybe.

Antithesis

Because the idea was really to first get him all excited about not needing OOP, and then turn around and show him that all the things I had just shown him in fact were OOP. And still are, as a matter of fact. Always have been. It's that sort of confusion of conflicting, true-seeming ideas that gets the gray matter going. You know, "sound of one hand clapping" kind of stuff.

The reason I worked with him on a little graphics context example was, of course, that I had written a graphics context wrapper on top of CoreGraphics a good three years ago. In Objective-C. With a protocol defining the, er, protocol. It's called MPWDrawingContext and lives on GitHub, but I also wrote about it, showing how protocols combine with blocks to make CoreGraphics patterns easy and intuitive to use, and how to combine this type of drawing context with a more advanced OO language to make live coding/drawing possible. And of course this is real live programming, not the "not-quite-instant replay" programming that is all that Swift playgrounds can provide.

The simple fact is that actual Object Oriented Programming is Protocol Oriented Programming, where Protocol means a set of messages that an object understands. In a true and pure object oriented language like Smalltalk, that is all there is, because the only way to interact with an object is to send it messages. Even if you do simple metaprogramming like checking the class, you are still sending a message. Checking for object identity? Sending a message. Doing more intrusive metaprogramming like "directly" accessing instance variables? Message. Control structures like if and while? Message. Creating ranges? Message. Iterating? Message. Comparing object hierarchies? I think you get the drift.

So all interacting is via messages, and the set of messages is a protocol. What does that make OO? Say it together: Protocol Oriented Programming.
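To make that concrete, here is a minimal sketch in Swift (a hypothetical protocol and types, deliberately not MPWDrawingContext's actual API): the protocol is nothing more than the set of messages the drawing code is allowed to send, and anything that answers those messages can play the part.

protocol DrawingContext {
    func moveTo(_ x: Double, _ y: Double)
    func lineTo(_ x: Double, _ y: Double)
    func stroke()
}

// No base class in sight: any type that answers these messages is a drawing context.
struct LoggingContext: DrawingContext {
    func moveTo(_ x: Double, _ y: Double) { print("moveto \(x) \(y)") }
    func lineTo(_ x: Double, _ y: Double) { print("lineto \(x) \(y)") }
    func stroke()                         { print("stroke") }
}

// Drawing code knows only the protocol, i.e. the messages it may send.
func drawTriangle(on context: DrawingContext) {
    context.moveTo(0, 0)
    context.lineTo(1, 0)
    context.lineTo(0, 1)
    context.stroke()
}

Whether the context logs, renders with CoreGraphics or records a PDF is invisible to drawTriangle; all it ever does is send the three messages in the protocol.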

Synthesis

So we don't need objects when we have POP, but at the same time POP is OOP. Confused? Well, that's kind of the point of a good dialectic argument.

One possible solution to the conflict could be that we don't need any of this stuff. C, FORTRAN and assembly were good enough for me, they should be good enough for you. And that's true to a large extent. Excellent software was written using these tools (and ones that are much, much worse!), and tooling is not the biggest factor determining success or failure of software projects.

On the other hand, if you want to look beyond what OOP has to offer, statically typed ADT programming is not the answer. It is the question that OOP answered. And statically typed ADT programming is not Protocol Oriented Programming, OOP is POP. Repeat after me: OOP is POP, POP is OOP.

To go beyond OOP, we actually need to go beyond it, not step back in time to the early '90s, forget all we learned in the meantime and declare victory. My personal take is that our biggest challenges are in "the big", meaning programming in the large. How to connect components together in a meaningful, tractable and understandable fashion. Programming the components is, by and large, a solved problem; making it a tiny bit better may make us feel better, but it won't move the needle on productivity.

Making architecture malleable, user-definable and thus a first class citizen of our programming notation, now that is a worthwhile goal and challenge.

Crusty out.

As always, comments welcome here and on HN.

Sunday, June 7, 2015

Steve Jobs on Swift

No, there is no actual evidence of Steve commenting on Swift. However, he did say something about the road to sophisticated simplicity.

In short, at first you think the problem is easy because you don't understand it. Then you begin to understand the problem and everything becomes terribly complicated. Most people stop there, and Apple used to make fun of the ones that do.

To me this is the perfect visual illustration of the crescendo of special cases that is Swift.

The answer to this, according to Steve, is "[..] a few people keep burning the midnight oil and finally understand the underlying principles of the problem and come up with an elegantly simple solution for it. But very few people go the distance to get there."

Apple used to be very much about going that distance, and I don't think Swift lives up to that standard. That doesn't mean it's all bad or that it's completely irredeemable, there are good elements. But they stopped at sophisticated complexity. And "well, it's not all bad" is not exactly what Apple stands for or what we as Apple customers expect and, quite frankly, deserve. And had there been a Steve in Dev Tools, he would have said: do it again, this is not good enough.

As always, comments welcome here or on HN

Saturday, May 23, 2015

I am jealous of Swift

Really, I am. They get to do everything there is to do wrong in language design, and yet the results get fawned upon and the obvious flaws are not just overlooked but turned into their opposite.

Language Design

What do I mean? Well, primarily this:
Swift is a crescendo of special cases stopping just short of the general; the result is complexity in the semantics, complexity in the behaviour (i.e. bugs), and complexity in use (i.e. workarounds).
The list Rob compiled is impressively well-researched. Although "special cases stopping just short of the general" is enough for me (it is THE cardinal sin of language design), I would add "needlessly replacing the keyword message syntax at exactly the point where it was no longer an issue, and adding it back as an abomination of accidental complexity the world has never seen before". Let's see what Gilad Bracha, an actual programming language designer, has to say about keyword syntax:
This notation makes it impossible to have an arity error when calling a method. In a dynamically typed language, this is a huge advantage.

I am keenly aware that this syntax is unfamiliar to most programmers, and is a potential barrier to adoption. However, it improves usability massively. Furthermore, a growing number of programmers are learning this notation because of its use in Objective-C (e.g., the iOS APIs).

Abandoning keyword syntax at this point in time takes "snatching defeat from the jaws of victory" to a whole new and exciting level!
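To see what "adding it back" looks like in practice, here is a hedged little illustration using the Swift 2-era Array API (the Objective-C line is the rough NSMutableArray equivalent):

var greeting = ["hello", "world"]
greeting.insert("there", atIndex: 1)   // the keyword survives, demoted to an argument label
// Objective-C keyword message, where the arity is part of the selector itself:
//   [greeting insertObject:@"there" atIndex:1];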

Or the whole idea of having every arithmetic operation be a potential crash point, despite the fact that proper numeric towers have been around for many decades and can be decently optimized (certainly no slower than unoptimized Swift).
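A hedged illustration of what I mean, using nothing but standard Swift integer arithmetic: the default operators trap on overflow, and you have to remember to reach for the wrapping variants to avoid the crash.

var n: Int8 = 126
n = n + 1       // fine: n is now 127
// n = n + 1    // would trap at runtime: overflow
n = n &+ 1      // wrapping addition: silently becomes -128, but does not crash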

And yet, Rob for example writes that the main culprit for Swift's complexity is Objective-C, which I find somewhat mind-boggling. After all, the requirement for Objective-C interoperability couldn't exactly have come as a last minute surprise foisted on an existing language. Folks: if we're designing a replacement language for Apple's Cocoa frameworks, Objective-C compatibility needs to be designed in from the beginning and not added in as an afterthought. And if you don't design your language to be at odds with the frameworks you will be supporting, you will discover that you can get a much cleaner design.

Performance

The situation is even more bizarre when it comes to performance. For example, here's a talk titled How Swift is Swift. The opening paragraph declares that "Swift is designed to be fast, very fast", yet a few paragraphs (or slides) down, we learn that debug builds are often 100 times slower than optimized builds (which themselves don't really rival C).

Sorry, that's not the sign of a language that's "designed to be fast". Those are the characteristics of a language design that is inherently super, super slow, and that requires leaning heavily on the optimizer to get performance to an acceptable level.

And of course, the details bear that out: copy semantics are usually expensive, they need the optimizer to elide those copies in the majority of cases. Same with ARC, which is built in and also requires the optimizer to be effectively clairvoyant (and: on) in order not to suffer 30x regressions.
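As a minimal sketch (hypothetical type, nothing beyond plain value semantics) of what the optimizer is up against: every assignment or argument pass is conceptually a copy, and it is the compiler's job to prove that most of those copies can be elided.

struct Document {
    var pages: [String]
}

let original = Document(pages: ["title", "body", "index"])
var draft = original             // conceptually a full copy of the struct
draft.pages.append("appendix")   // copy-on-write: the pages storage really is duplicated here
// Whether any of this ends up cheap depends entirely on what the optimizer can prove.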

Apart from the individual issues, the overriding one is that Swift's performance model is extremely opaque (100x for turning the optimizer on). Having the optimizer do a heroic job of optimizing code that we don't care about is of no use if we can't figure out why the code we do care about is slow or how to make it go fast.

Jealousy

So what makes little amateur language designer me jealous is that I really do try to get these things right and make sure the design is parsimonious, while these guys joyfully ignore every rule in the book, then trample on said book in ways that should get the language designers' guild to revoke their license, and yet there is almost universal fawnage.

Whoever said life was fair?

As always, comments welcome here or on HN

Sunday, April 5, 2015

React.native isn't

While we're on the subject of terminological disasters, Facebook's react.native seems to be doing a good job of muddying the waters.

While some parts make use of native infrastructure, a lot do not:

  1. it uses views as drawing results, rather than as drawing sources,
  2. it has a parallel component hierarchy,
  3. its ListView isn't UITableView (and from what I read, can't be),
  4. even buttons aren't UIButton instances,
  5. it doesn't use the responder chain, but implements something "similar", and finally,
  6. oh yes, JavaScript

None of this is necessarily bad, but whatever it is, it sure ain't "native".

What's more, the rationale given for React and the Components framework that was also just released echoes the misunderstandings Apple shows about the MVC pattern:

[Image: MVC data/event flow diagram from Facebook's Components framework announcement]

Just as a reminder: what's shown here with controllers pushing data to view at any time is not MVC, unless you use that to mean "Massive View Controller".

In Components and react.native, this "pushing of mutable state to the UI" is supposed to be replaced by "a (pure) function of the model". That's what a View (UIView or NSView) is, and what drawRect: does. So next time you are annoyed by pushing data to views, instead of creating a whole new framework, just drag a Custom View from the palette into your UI and implement drawRect:. Creating views as a result of drawing (and/or turning components into view state mutations) is more stateful than drawRect:, not less.
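Here is a minimal sketch of that idea (hypothetical view, Swift 2-era UIKit names): the view is told only that something changed, and pulls everything it needs from its model inside drawRect:.

import UIKit

class BarChartView: UIView {
    var values: [Double] = [] {
        didSet { setNeedsDisplay() }            // "something changed" -- just schedule a redraw
    }

    override func drawRect(rect: CGRect) {      // Swift 2-era signature; draw(_:) in later Swift
        guard !values.isEmpty else { return }
        let maxValue = values.maxElement() ?? 1 // Swift 2-era name; max() in later Swift
        let scale = maxValue > 0 ? rect.height / CGFloat(maxValue) : 0
        let barWidth = rect.width / CGFloat(values.count)
        UIColor.blueColor().setFill()           // Swift 2-era name
        for (i, value) in values.enumerate() {  // Swift 2-era name; enumerated() in later Swift
            let height = CGFloat(value) * scale
            UIRectFill(CGRect(x: CGFloat(i) * barWidth,
                              y: rect.height - height,
                              width: barWidth,
                              height: height))
        }
    }
}

The drawing is a function of the values; nothing outside the view ever pokes at subviews or pushes pixels.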

Again, that doesn't mean it's bad or useless, it just means it isn't what it says on the tin. And that's a problem. From what I've heard so far, the most enthusiastic response to react.native has come from web developers who can finally code "native" apps without learning Objective-C/Swift or Java. That may or may not be useful (past experience suggests not), but it's something completely different from what the claims are.

Oh and finally, the "react" part seems to refer to "one-way reactive data flow", an even bigger terminological disaster that I will examine in a future post.

As always, comments welcome here or at HN

Friday, April 3, 2015

Model Widget Controller (MWC) aka: Apple "MVC" is not MVC

I probably should have taken more notice that time when, after my question about why a specific piece of the UI code had been structured in a particular way, one of my colleagues at 6Wunderkinder informed me that Model View Controller meant the View must not talk to the Model, and that instead the Controller is responsible for mediating all interaction between the View and the Model. It certainly didn't match the definition of MVC that I knew, so I checked the Wikipedia page on MVC just in case I had gone completely senile, but it checked out with what I remembered:
  1. the controller updates the model,
  2. the model notifies the view that it has changed, and finally
  3. the view updates itself by talking to the model
(The labeling on the graphic on the Wikipedia page is a bit misleading, as it suggests that the model updates the view, but the text is correct.)
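A minimal sketch of those three steps, with hypothetical types (a plain closure stands in for whatever change-notification mechanism you prefer):

class CounterModel {
    var count = 0 {
        didSet { changed?() }                 // 2. the model notifies the view that it has changed
    }
    var changed: (() -> Void)?
}

class CounterView {
    let model: CounterModel
    init(model: CounterModel) {
        self.model = model
        model.changed = { [weak self] in self?.render() }
    }
    func render() {
        print("count is now \(model.count)")  // 3. the view updates itself by asking the model
    }
}

class CounterController {
    let model: CounterModel
    init(model: CounterModel) { self.model = model }
    func increment() {
        model.count += 1                      // 1. the controller updates the model
    }
}

Wire up one of each over the same model, and calling increment() on the controller ends with the view re-rendering itself from the model; at no point does the controller push anything at the view.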

What I should have done, of course, is keep asking "Why?", but I didn't, my excuse being that we were under pressure to get the Wunderlist 3.0 release out the door. Anyway, I later followed up on some of my confusion about both React.native and ReactiveCocoa (more on those in a later post) and found the following incorrect diagram in a Ray Wenderlich tutorial on ReactiveCocoa and MVVM.

Hmm...that's the same confusion that my colleague had. The plot thickens as I re-check Wikipedia just to be sure. Then I had a look at the original MVC papers by Trygve Reenskaug, and yes:

A view is a (visual) representation of its model. It would ordinarily highlight certain attributes of the model and suppress others. It is thus acting as a presentation filter. A view is attached to its model (or model part) and gets the data necessary for the presentation from the model by asking questions.

The 1988 JOOP article "MVC Cookbook" also confirms:

[Image: MVC interaction diagram from Krasner's 1988 article]

So where is this incorrect version of MVC coming from? It turns out, it's in the Apple documentation, in the overview section!

[Image: Model-View-Controller diagram from Apple's documentation]

I have to admit that I hadn't looked at this in a while, maybe ever, so you can imagine my surprise and shock when I stumbled upon it. As far as I can tell, this architectural style comes from having self-contained widgets that encapsulate very small pieces of information such as simple strings, booleans or numbers. The MVC architecture was not intended for these kinds of small widgets:

MVC was conceived as a general solution to the problem of users controlling a large and complex data set.
If you look at the examples, the views are large both in size and in scope, and they talk to a complex model. With a widget, there is no complex model and no filtering being done by the view. The widget contains its own data, for example a string or a number. An advantage of widgets is that you can meaningfully assemble them in a tool like Interface Builder; with a more MVC-like large view, all you have in IB is a large blank space labeled 'Custom View'. On the other hand, I've had very good experiences with "real" (large view) MVC in creating high performance, highly responsive user interfaces.

Model Widget Controller (MWC) as I like to call it, is more tuned for forms and database programming, and has problems with more reactive scenarios. As Josh Abernathy wrote:

Right now we write UIs by poking at them, manually mutating their properties when something changes, adding and removing views, etc. This is fragile and error-prone. Some tools exist to lessen the pain, but they can only go so far. UIs are big, messy, mutable, stateful bags of sadness.

To me, this sadness is almost entirely a result of using MWC rather than MVC. In MVC, the "V" is essentially a function of the model, you don't push or poke at it, you just tell it "something changed" and it redraws itself.

And so the question looms: is react.native just a result of (Apple's) misunderstanding (of) MVC?

As always, your comments are welcome here or on HN.

Thursday, March 19, 2015

Why overload operators?

One of the many things that's been puzzling me for a long time is why operator overloading appears to be at the same time problematic and attractive in languages such as C++ and now Swift. I know I certainly feel the same way: it's somehow very cool to massage the language that way, but at the same time the thought of having everything redefined underneath me fills me with horror, and what little I've seen and heard of C++ with heavy overloading confirms that horror, except for very limited domains. What's really puzzling is that binary messages in Smalltalk, which are effectively the same feature (special characters like *, + etc. can be used as message names taking a single argument), do not seem to have either of these effects: they are neither particularly attractive to Smalltalk programmers, nor are their effects particularly worrisome. Odd.

Of course we simply don't have that problem in C or Objective-C: operators are built-in parts of the language, and neither the C part nor the Objective part has a comparable facility, which is a large part of the reason we don't have a useful number/magnitude hierarchy in Objective-C and numeric/array libraries aren't that popular: writing [number1 multipliedBy:number2] is just too painful.

Some recent articles and talks that dealt with operator overloading in Apple's new Swift language just heightened my confusion. But as is often the case, that heightened confusion seems to have been the last bit of resistance that pushed through an insight.

Anyway, here is an example from NSHipster Matt Thompson's excellent post on Swift Operators, an operator for exponentiation wrapping the pow() function:

import Foundation  // for pow()
infix operator ** { associativity left precedence 160 }  // the operator must be declared before it can be defined (Swift 2-era syntax)
func ** (left: Double, right: Double) -> Double {
    return pow(left, right)
}
This is introduced as "the arithmetic operator found in many programming languages, but missing in Swift [is **]". Here is an example of the difference:
pow( left, right )
left ** right
pow( 2, 3 )
2 ** 3
How come this is seen as an improvement (and to me it is)? There are two candidates for what the difference might be: the fact that the operation is now written in infix notation, and that it's using special characters. Do these two factors contribute evenly, or is one more important than the other? Let's look at the same example in Smalltalk syntax, first with a normal keyword message and then with a binary message (Smalltalk uses raisedTo:, but let's stick with pow: here to make the comparison similar):
left pow: right.
left ** right.
2 pow: 3.
2 ** 3.
To my eyes at least, the binary-message version is no improvement over the keyword message, in fact it seems somewhat worse to me. So the attractiveness of infix notation appears to be a strong candidate for why operator overloading is desirable. Of course, having to use operator overloading to get infix notation is problematic, because special characters generally do not convey the meaning of the operation nearly as well as names, conventional arithmetic aside.

Note that dot notation for message sends/method calls does not really seem to have the same effect, even though it could technically also be considered an infix notation:

left.pow( right)
left ** right
2.pow( 3 )
2 ** 3
There is more anecdotal evidence. In Chris Eidhof's talk on functional Swift, scrub to around the 10-minute mark. There you'll find the following code with some nested and curried function calls:
let result = colorOverlay( overlayColor)(blur(blurRadius)(image))
"This does not look to nice [..] it gets a bit unreadable, it's hard to see what's going on" is the quote.
Having a special compose function doesn't actually make it better
let myFilter = composeFilters(blur(blurRadius),colorOverlay(overlayColor))
let result = myFilter(image)
Infix to the rescue! Using the |> operator:
let myFilter = blur(blurRadius) |> colorOverlay(overlayColor)
let result = myFilter(image)
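For reference, a hedged guess at what such an operator could look like, written as plain forward function composition (Swift 2-era operator-declaration syntax; Chris's actual definition may well differ):

infix operator |> { associativity left }

func |> <A, B, C>(f: A -> B, g: B -> C) -> A -> C {
    return { x in g(f(x)) }
}

With filters being functions from image to image, blur(blurRadius) |> colorOverlay(overlayColor) is then itself just another filter, ready to be applied to an image.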
Chris is very fair-minded about this: he mentions that due to the special characters involved, you can't really infer what |> means from looking at the code, you have to know, and having many of these sorts of operators makes code effectively incomprehensible. Or as one Twitter user put it: Like most things in engineering, it's a trade-off, though my guess is the trade-off would shift if we had infix without requiring nonsensical characters.

Built in
I do believe that there is another factor involved, one that is more psychologically subtle, having to do with the idea of language as a (pre-defined) thing vs. a mechanism for building your own abstractions, which I mentioned in my previous post on Swift performance.

In that post, I mentioned BASIC as the primary example of the former, a language as a collection of built-in features, with C and Pascal as (early) examples of the latter, languages as generic mechanisms for building your own features. However, those latter languages don't treat all constructs equally. Specifically, all the operators are built-in, not user-definable or -overridable. They also correspond closely to those operations that are built into the underlying hardware and map to single instructions in assembly language. In short: even in languages with a strong "user-defined" component, there is a hard line between "user-defined" and "built-in", and that line just happens to map almost 1:1 to the operator/function boundary.

Hackers don't like boundaries. Or rather: they love boundaries, the overcoming of. I'd say that overloaded operators are particularly attractive (to hacker mentalities, but that's probably most of us) in languages where this boundary between user-defined and built-in stuff exists, and therefore those overloaded operators let you cross that boundary and do things normally reserved for language implementors.

If you think this idea is too crazy, listen to John Siracusa, Guy English and Rene Ritchie discussing Swift language features and operator overloading on Debug Podcast Number 49, Siracusa Round 2, starting at 45:45. I've transcribed a bit below, but I really recommend you listen to the podcast, it's very good:

  • 45:45 Swift is a damning comment on C++ [benefits without the craziness]
  • 46:06 You can't do what Swift did [putting basic types in the standard library] without operator overloading. [That's actually not true, because in Swift the operators are just syntax -> but it is exactly the idea I talked about earlier]
  • 47:50 If you're going to add something like regular expressions to the language ... they should have some operators of their own. That's a perfect opportunity for operator overloading
  • 48:07 If you're going to add features to the language, like regular expressions or so [..] there is well-established syntax for this from other languages.
  • 48:40 ...or range operators. Lots of languages have range operators these days. Really it's just a function call with two different operands. [..] You're not trying to be clever. All you're trying to do is make it natural to use features that exist in many other languages. The thing about Swift is you don't have to add syntax to the language to do it. Because it's so malleable. If you're not adding a feature, like I'm adding regular expressions to the language. If you're not doing that, don't try to get clever. Consider the features as existing for the benefit of the expansion of the language, so that future features look natural in it and not bolted on even though technically everything is in a library. Don't think of it as in my end user code I'm going to come up with symbols that combine my types in novel ways, because what are you even doing there?
  • 50:17 if you have a language like this, you need new syntax and new behavior to make it feel natural. [new behavior strings array] and it has the whole struct thing. The basics of the language, the most basic things you can do, have to be different, look different and behave different for a modern language.
  • 51:52 "using operator overloading to add features to the language" [again, not actually true]
The interesting thing about this idea of a boundary between "language things" and "user things" is that it does not align with the "operators" and "named operators" in Swift, but apparently it still feels like it does, so "extending the language" is seen as roughly equivalent to "adding some operators", with all the sound caveats that apply.

In fact, going back to Matt Thompson's article from above, it is kind of odd that he talks about the exponentiation operator as missing from the language, when in fact the operation is available in the language. So if the operation crosses the boundary from function to operator, then and only then does it become part of the language.

In Smalltalk, on the other hand, the boundary has disappeared from view. It still exists in the form of primitives, but those are well hidden all over the class hierarchy and not something that is visible to the developer. So in addition to having infix notation available for named operations, Smalltalk doesn't have the notion of something being "part of the language" rather than "just the library" just because it uses nonsensical characters. Everything is part of the library, the library is the language, and you can use names or special characters as appropriate, not because of asymmetries in the language.

And that's why operator overloading is a thing even in languages like Swift, whereas it is a non-event in Smalltalk.