Friday, July 3, 2015

When Is My Unit Test Coverage Adequate?

  1. When you are not afraid of changing any of the code.
  2. When you are comfortable with releasing as soon as the tests are green (i.e. always).
  3. Tertium non datur :-)

Why unit tests and not integration tests?

Integrated Tests Are a Scam [vimeo]

EOM

Friday, June 26, 2015

Guys, "guys" is perfectly fine for addressing diverse groups

With the Political Correctness police gaining momentum again after being laughed out of the 80ies, the word "guys" has apparently come under attack as being "non-inclusive". After discussing the topic a bit on Twitter, I saw Peter Hosey's post declaring the following:

"when you’re addressing a mixed-gender or unknown-gender group, you should not use the word 'guys'."
As evidence, he references a post by Julia Evans purportedly showing that for most uses, people perceive "guys" to be gender-specific. Here is the graph of what she found:

What I find interesting is that the data show exactly the opposite of Peter's claim. Yes, most of the usage patterns are perceived as gender-specific by more people than not, but all of those are third person. The one case that is second person plural, the case of addressing a group of people, is overwhelmingly perceived as gender neutral, with women perceiving it as gender neutral slightly more often than men, but both groups at over 90%.

This matches with my intuition, or to be more precise, I find it somewhat comforting that my intuition about this appears to still match reality. I find "hey guys" neutral (2nd person plural), whereas "two guys walked into a store" is male.

Prescription vs. Description

Of course, they could have just checked their friendly local dictionary, for example Webster's online:
guy (noun)
Definition of GUY
  1. often capitalized : a grotesque effigy of Guy Fawkes traditionally displayed and burned in England on Guy Fawkes Day
  2. chiefly British : a person of grotesque appearance
  3. a : man, fellow
    b : person —used in plural to refer to the members of a group regardless of sex
  4. individual, creature <the other dogs pale in comparison to this little guy>
So there we go: "used in plural to refer to the members of a group regardless of sex". It is important to note that unlike continental dictionaries (German, French), which prescribe correct usage, the Anglo-Saxon tradition is descriptive, meaning actual use is documented. In addition, my recollection is that definitions are listed in historical order, with the older senses first and the newer ones last. So the word's meaning is shifting to become more gender neutral. This is called progress.

What I found interesting is that pointing out the dictionary definition was perceived as prescriptive, as if I were trying to force an out-of-touch dictionary definition on a public that perceives the word differently. Of course, the opposite is the case: a few people are trying to force their perception, based on outdated definitions of the word, on a public and a language that have moved on.

Language evolution and the futility of PC

Speaking of Anglo-Saxons and language evolution: does anyone feel the oppression when ordering beef or pork? Well, you should. These words for the meat of certain animals came into English with the conquering Normans of 1066. The French words for the animals were now used to describe the food the upper class got served, whereas the Anglo-Saxon words shifted to denote just the animals that the peasants herded. Yeah, and medieval oppression was actually real, unlike some other "oppression" I can think of.

Of course, we don't know about that today, and the words don't have those associations anymore, because language just shifts to adapt to and reflect reality. Never the other way around, which is why the PC brigade's attempts to affect reality by policing language are so misguided.

Take the long history of euphemisms for "person with disability". It started out as "cripple", but that word was seen as stigmatizing, so it was replaced with "handicapped", because it wasn't a defect in the person, just a handicap they had. Then that word became stigmatized in turn, and we switched to "disabled". Then "person with disabilities", "special", "challenged", "differently abled". And so on and so forth. The problem is that it never works: the stigma moves to the new word, which was chosen precisely because it was still stigma-free, so nowadays calling someone "special" is no longer positive. And calling homeless people "the temporarily underhoused" because "home is wherever you are" never helped either.

So leave language be and focus on changing the underlying reality instead. All of this does not mean that you can't be polite: if someone feels offended by being addressed in a certain way, by all means accommodate them and/or come to some understanding.

Let the Hunting begin :-))

Wednesday, June 17, 2015

Protocol-Oriented Programming is Object-Oriented Programming

Crusty here, I just saw that my young friend Dave Abrahams gave a talk that was based on a little keyboard session we had just a short while ago. Really sharp fellow, you know, I am sure he'll go far someday, but that's the problem with young folk these days: they go rushing out telling everyone what they've learned when the lesson is only one third of the way through.

You see, I was trying to impart some wisdom to the fellow using the old Hegelian dialectic: thesis, antithesis, synthesis. And yes, I admit I wasn't completely honest with him, but I swear it was just a little white lie for a good educational cause. You see, I presented ADT (Abstract Data Type) programming to him and called it OOP. It's a little ruse I use from time to time, and decades of Java, C++ and C# have gone a long way toward making it an easy one.

Thesis

So the thesis was simple: we don't need all that fancy shmancy OOP stuff, we can just use old fashioned structs 90% of the time. In fact, I was going to show him how easy things look in MC68K assembly language, with a few macros for dispatch, but then thought better of it, because he might have seen through my little educational ploy.

Of course, a lot of what I told him was nonsense. For example, OOP isn't at all about subclassing; the guy who coined the term, Alan I think, wrote: "So I decided to leave out inheritance as a built-in feature until I understood it better." So not only isn't inheritance the defining feature of OOP, as I let on, it actually wasn't even in the original conception of the thing that was first called "object-oriented programming".

Absolute reliance on inheritance, and therefore on structural relationships, is in fact a defining feature of ADT-oriented programming, particularly when strong type systems are involved. But more on that later. OOP best practices have always (since the late 80ies and early 90ies) called for composition to be used for known axes of customization, with inheritance used for refinement, when a component needs to be adapted in a more ad-hoc fashion. If that knowledge had filtered down even to young turks writing their master's theses back in, what, 1997, you can rest assured that the distinction was well known and not exactly rocket science.

Anyway, I kept all that from Dave in order to really get him excited about the idea I was peddling to him, and it looks like I succeeded. Well, a bit too well, maybe.

Antithesis

Because the idea was really to first get him all excited about not needing OOP, and then turn around and show him that all the things I had just shown him in fact were OOP. And still are, as a matter of fact. Always have been. It's that sort of confusion of conflicting, true-seeming ideas that gets the gray matter going. You know, "sound of one hand clapping" kind of stuff.

The reason I worked with him on a little graphics context example was, of course, that I had written a graphics context wrapper on top of CoreGraphics a good three years ago. In Objective-C. With a protocol defining the, er, protocol. It's called MPWDrawingContext and lives on GitHub, but I also wrote about it, showed how protocols combine with blocks to make CoreGraphics patterns easy and intuitive to use, and how to combine this type of drawing context with a more advanced OO language to make live coding/drawing possible. And of course this is real live programming, not the "not-quite-instant replay" programming that is all that Swift playgrounds can provide.
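
For flavor, here is a rough Objective-C sketch of that protocols-plus-blocks idea. To be clear, this is not MPWDrawingContext's actual API, just an illustration of its general shape: a protocol declares the drawing messages, a CoreGraphics-backed object adopts them, and a block-taking message wraps the usual save/restore dance.

```objc
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

// Illustrative only -- names are made up, this is not MPWDrawingContext's API.
@protocol SimpleDrawingContext <NSObject>
- (void)setFillRed:(CGFloat)r green:(CGFloat)g blue:(CGFloat)b alpha:(CGFloat)a;
- (void)fillRect:(CGRect)rect;
// The block does its drawing inside a saved/restored graphics state.
- (void)drawPreservingState:(void (^)(id<SimpleDrawingContext> ctx))drawing;
@end

@interface CGBackedDrawingContext : NSObject <SimpleDrawingContext>
- (instancetype)initWithCGContext:(CGContextRef)context;
@end

@implementation CGBackedDrawingContext {
    CGContextRef _context;
}
- (instancetype)initWithCGContext:(CGContextRef)context {
    if ((self = [super init])) {
        _context = CGContextRetain(context);
    }
    return self;
}
- (void)dealloc {
    CGContextRelease(_context);
}
- (void)setFillRed:(CGFloat)r green:(CGFloat)g blue:(CGFloat)b alpha:(CGFloat)a {
    CGContextSetRGBFillColor(_context, r, g, b, a);
}
- (void)fillRect:(CGRect)rect {
    CGContextFillRect(_context, rect);
}
- (void)drawPreservingState:(void (^)(id<SimpleDrawingContext>))drawing {
    CGContextSaveGState(_context);   // clients never see the save/restore pair
    drawing(self);
    CGContextRestoreGState(_context);
}
@end
```

The point being that clients program against the protocol, not against any particular class: a PDF-backed context, a test double that merely records the messages it receives, or a live-coding context can all adopt the same protocol.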

The simple fact is that actual Object Oriented Programming is Protocol Oriented Programming, where Protocol means a set of messages that an object understands. In a true and pure object oriented language like Smalltalk, it is all that can be, because the only way to interact with an object is to send messages. Even if you do simple metaprogramming like checking the class, you are still sending a message. Checking for object identity? Sending a message. Doing more intrusive metaprogramming like "directly" accessing instance variables? Message. Control structures like if and while? Message. Creating ranges? Message. Iterating? Message. Comparing object hierarchies? I think you get the drift.

So all interacting is via messages, and the set of messages is a protocol. What does that make OO? Say it together: Protocol Oriented Programming.
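
Objective-C, being a Smalltalk descendant, keeps much of this character: the interactions in the small sketch below are all plain message sends (pure Smalltalk goes further still and makes even control flow and identity comparison messages).

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        id object = @"Hello, Crusty";

        // Checking the class? A message.
        BOOL isString = [object isKindOfClass:[NSString class]];

        // Metaprogramming, i.e. asking about capabilities? A message.
        BOOL canAppend = [object respondsToSelector:@selector(stringByAppendingString:)];

        // "Directly" accessing state via KVC? A message.
        NSNumber *length = [object valueForKey:@"length"];

        // Comparing? A message.
        BOOL equal = [object isEqual:@"Hello, Crusty"];

        NSLog(@"%d %d %@ %d", isString, canAppend, length, equal);
    }
    return 0;
}
```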

Synthesis

So we don't need objects when we have POP, but at the same time POP is OOP. Confused? Well, that's kind of the point of a good dialectic argument.

One possible solution to the conflict could be that we don't need any of this stuff. C, FORTRAN and assembly were good enough for me, they should be good enough for you. And that's true to a large extent. Excellent software was written using these tools (and ones that are much, much worse!), and tooling is not the biggest factor determining success or failure of software projects.

On the other hand, if you want to look beyond what OOP has to offer, statically typed ADT programming is not the answer. It is the question that OOP answered. And statically typed ADT programming is not Protocol Oriented Programming, OOP is POP. Repeat after me: OOP is POP, POP is OOP.

To go beyond OOP, we actually need to go beyond it, not step back in time to the early 90ies, forget all we learned in the meantime and declare victory. My personal take is that our biggest challenges are in "the big", meaning programming in the large. How to connect components together in a meaningful, tractable and understandable fashion. Programming the components is, by and large, a solved problem, making it a tiny bit better may make us feel better, but it won't move the needle on productivity.

Making architecture malleable, user-definable and thus a first class citizen of our programming notation, now that is a worthwhile goal and challenge.

Crusty out.

As always, comments welcome here and on HN.

Sunday, June 7, 2015

Steve Jobs on Swift

No, there is no actual evidence of Steve commenting on Swift. However, he did say something about the road to sophisticated simplicity.

In short, at first you think the problem is easy because you don't understand it. Then you begin to understand the problem and everything becomes terribly complicated. Most people stop there, and Apple used to make fun of the ones that do.

To me this is the perfect illustration of the crescendo of special cases that is Swift.

The answer to this, according to Steve, is "[..] a few people keep burning the midnight oil and finally understand the underlying principles of the problem and come up with an elegantly simple solution for it. But very few people go the distance to get there."

Apple used to be very much about going that distance, and I don't think Swift lives up to that standard. That doesn't mean it's all bad or that it's completely irredeemable, there are good elements. But they stopped at sophisticated complexity. And "well, it's not all bad" is not exactly what Apple stands for or what we as Apple customers expect and, quite frankly, deserve. And had there been a Steve in Dev Tools, he would have said: do it again, this is not good enough.

As always, comments welcome here or on HN

Saturday, May 23, 2015

I am jealous of Swift

Really, I am. They get to do everything there is to do wrong in language design, and yet the results get fawned upon and the obvious flaws are not just overlooked but turned into their opposite.

Language Design

What do I mean? Well, primarily this:
Swift is a crescendo of special cases stopping just short of the general; the result is complexity in the semantics, complexity in the behaviour (i.e. bugs), and complexity in use (i.e. workarounds).
The list Rob compiled is impressively well-researched. Although "special cases stopping just short of the general" would be enough for me on its own, because it is THE cardinal sin of language design, I would add "needlessly replacing the keyword message syntax at exactly the point where it was no longer an issue, and adding it back as an abomination of accidental complexity the world has never seen before". Let's see what Gilad Bracha, an actual programming language designer, has to say on keyword syntax:
This notation makes it impossible to have an arity error when calling a method. In a dynamically typed language, this is a huge advantage.

I am keenly aware that this syntax is unfamiliar to most programmers, and is a potential barrier to adoption. However, it improves usability massively. Furthermore, a growing number of programmers are learning this notation because of its use in Objective-C (e.g., the iOS APIs).

Abandoning keyword syntax at this point in time takes "snatching defeat from the jaws of victory" to a whole new and exciting level!
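
To make the arity point concrete with a plain Objective-C (i.e. keyword-syntax) example, using a stock NSString method: since every argument slot is part of the selector, leaving an argument out or swapping two of them doesn't produce a subtly wrong call, it produces a different selector that simply doesn't exist.

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSString *s = @"snatching defeat from the jaws of victory";

        // The selector is rangeOfString:options:range: -- its name encodes
        // exactly three labeled arguments, so an arity mistake is impossible.
        NSRange r = [s rangeOfString:@"victory"
                             options:NSCaseInsensitiveSearch
                               range:NSMakeRange(0, s.length)];

        NSLog(@"found at index %lu", (unsigned long)r.location);
    }
    return 0;
}
```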

Or the whole idea of having every arithmetic operation be a potential crash point, despite the fact that proper numeric towers have been around for many decades and have been decently optimized (certainly no slower than unoptimized Swift).

And yet, Rob for example writes that the main culprit for Swift's complexity is Objective-C, which I find somewhat mind-boggling. After all, the requirement for Objective-C interoperability couldn't exactly have come as a last minute surprise foisted on an existing language. Folks: if we're designing a replacement language for Apple's Cocoa frameworks, Objective-C compatibility needs to be designed in from the beginning and not added in as an afterthought. And if you don't design your language to be at odds with the frameworks you will be supporting, you will discover that you can get a much cleaner design.

Performance

The situation is even more bizarre when it comes to performance. For example, here's a talk titled How Swift is Swift. The opening paragraph declares that "Swift is designed to be fast, very fast", yet a few paragraphs (or slides) down, we learn that debug builds are often 100 times slower than optimized builds (which themselves don't really rival C).

Sorry, that's not the sign of a language that's "designed to be fast". Those are the characteristics of a language design that is inherently super, super slow, and that requires leaning heavily on the optimizer to get performance to an acceptable level.

And of course, the details bear that out: copy semantics are usually expensive, so they need the optimizer to elide those copies in the majority of cases. Same with ARC, which is built in and also requires the optimizer to be effectively clairvoyant (and: on) in order not to suffer 30x regressions.

Apart from the individual issues, the overriding one is that Swift's performance model is extremely opaque (a 100x difference just from turning the optimizer on). Having the optimizer do a heroic job of optimizing code that we don't care about is of no use if we can't figure out why the code we do care about is slow, or how to make it go fast.

Jealousy

So what makes little amateur language designer me jealous is that I really do try to get these things right and make sure the design is parsimonious, while these guys just joyfully ignore every rule in the book, then trample on said book in ways that should get the language designers' guild to revoke their license, and yet there is almost universal fawnage.

Whoever said life was fair?

As always, comments welcome here or on HN

Sunday, April 5, 2015

React.native isn't

While we're on the subject of terminological disasters, Facebook's react.native seems to be doing a good job of muddying the waters.

While some parts make use of native infrastructure, a lot do not:

  1. uses views as drawing results, rather than as drawing sources,
  2. has a parallel component hierarchy,
  3. ListView isn't UITableView (and from what I read, can't be),
  4. even buttons aren't UIButton instances,
  5. doesn't use the responder chain, but implements something "similar", and finally,
  6. oh yes, JavaScript

None of this is necessarily bad, but whatever it is, it sure ain't "native".

What's more, the rationale given for React and the Components framework that was also just released echoes the misunderstandings Apple shows about the MVC pattern:

[Diagram: MVC data/event flow, from Facebook's Components framework]

Just as a reminder: what's shown here, with controllers pushing data to views at any time, is not MVC, unless you use that to mean "Massive View Controller".

In Components and react.native, this "pushing of mutable state to the UI" is supposed to be replaced by "a (pure) function of the model". That's exactly what a View (UIView or NSView) is, and what drawRect: does. So next time you are annoyed by pushing data to views, instead of creating a whole new framework, just drag a Custom View from the palette into your UI and implement its drawRect:. Creating views as a result of drawing (and/or turning components into view state mutations) is more stateful than drawRect:, not less.
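
To spell that out with a minimal sketch (hypothetical class, with an array of numbers standing in for a real domain model): the view below derives all of its drawing from its model inside drawRect:. When the model changes, nothing is pushed into the view; it is merely told to redisplay and asks the model again.

```objc
#import <Cocoa/Cocoa.h>

// Hypothetical example: the view's drawing is a function of its model.
@interface GraphView : NSView
@property (nonatomic, copy) NSArray<NSNumber *> *model;  // stand-in for a real model
@end

@implementation GraphView
- (void)drawRect:(NSRect)dirtyRect {
    [[NSColor whiteColor] setFill];
    NSRectFill(self.bounds);
    if (self.model.count < 2) {
        return;
    }
    // Derive the entire drawing by asking the model -- no pushed view state.
    NSBezierPath *curve = [NSBezierPath bezierPath];
    [curve moveToPoint:NSMakePoint(0, self.model[0].doubleValue)];
    for (NSUInteger i = 1; i < self.model.count; i++) {
        [curve lineToPoint:NSMakePoint(i * 10.0, self.model[i].doubleValue)];
    }
    [[NSColor blackColor] setStroke];
    [curve stroke];
}

- (void)modelDidChange {
    [self setNeedsDisplay:YES];  // "something changed" -> redraw by re-asking the model
}
@end
```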

Again, that doesn't mean it's bad or useless, it just means it isn't what it says on the tin. And that's a problem. From what I've heard so far, the most enthusiastic response to react.native has come from web developers who can finally code "native" apps without learning Objective-C/Swift or Java. That may or may not be useful (past experience suggests not), but it's something completely different from what the claims are.

Oh and finally, the "react" part seems to refer to "one-way reactive data flow", an even bigger terminological disaster that I will examine in a future post.

As always, comments welcome here or at HN

Friday, April 3, 2015

Model Widget Controller (MWC) aka: Apple "MVC" is not MVC

I probably should have taken more notice that time when, after I asked why a specific piece of UI code had been structured in a particular way, one of my colleagues at 6Wunderkinder informed me that Model View Controller meant the View must not talk to the Model, and that instead the Controller is responsible for mediating all interaction between the View and the Model. It certainly didn't match the definition of MVC that I knew, so I checked the Wikipedia page on MVC just in case I had gone completely senile, but it checked out with what I remembered:
  1. the controller updates the model,
  2. the model notifies the view that it has changed, and finally
  3. the view updates itself by talking to the model
(The labeling of the graphic on the Wikipedia page is a bit misleading, as it suggests that the model updates the view, but the text is correct.)
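
Here is a minimal Objective-C sketch of that interaction, with Foundation notifications standing in for the "model notifies the view" step (all of the names here, CounterModel, CounterView, CounterController and the notification, are made up for illustration):

```objc
#import <Foundation/Foundation.h>

static NSString * const CounterDidChangeNotification = @"CounterDidChangeNotification";

@interface CounterModel : NSObject
@property (nonatomic, readonly) NSInteger count;
- (void)increment;
@end

@implementation CounterModel
- (void)increment {
    _count++;
    // 2. the model notifies its observers that it has changed (no data attached)
    [[NSNotificationCenter defaultCenter] postNotificationName:CounterDidChangeNotification
                                                        object:self];
}
@end

@interface CounterView : NSObject
- (instancetype)initWithModel:(CounterModel *)model;
@end

@implementation CounterView {
    CounterModel *_model;
}
- (instancetype)initWithModel:(CounterModel *)model {
    if ((self = [super init])) {
        _model = model;
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(modelDidChange:)
                                                     name:CounterDidChangeNotification
                                                   object:model];
    }
    return self;
}
- (void)modelDidChange:(NSNotification *)note {
    // 3. the view updates itself by asking the model, not by being handed data
    NSLog(@"view redraws itself: count is now %ld", (long)_model.count);
}
@end

@interface CounterController : NSObject
- (void)incrementClickedForModel:(CounterModel *)model;
@end

@implementation CounterController
- (void)incrementClickedForModel:(CounterModel *)model {
    // 1. the controller interprets user input and updates the model
    [model increment];
}
@end

int main(void) {
    @autoreleasepool {
        CounterModel *model = [CounterModel new];
        CounterView *view = [[CounterView alloc] initWithModel:model];
        [[CounterController new] incrementClickedForModel:model];  // logs: count is now 1
        (void)view;
    }
    return 0;
}
```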

What I should have done, of course, is keep asking "Why?", but I didn't, my excuse being that we were under pressure to get the Wunderlist 3.0 release out the door. Anyway, I later followed up on some of my confusion about both React.native and ReactiveCocoa (more on those in a later post) and found the following incorrect diagram in a Ray Wenderlich tutorial on ReactiveCocoa and MVVM.

Hmm... that's the same confusion my colleague had. The plot thickens. I re-checked Wikipedia just to be sure, then had a look at the original MVC papers by Trygve Reenskaug, and yes:

A view is a (visual) representation of its model. It would ordinarily highlight certain attributes of the model and suppress others. It is thus acting as a presentation filter. A view is attached to its model (or model part) and gets the data necessary for the presentation from the model by asking questions.

The 1988 JOOP article "MVC Cookbook" also confirms:

[Diagram: MVC interaction, from Krasner & Pope's 1988 article]

So where is this incorrect version of MVC coming from? It turns out, it's in the Apple documentation, in the overview section!

[Diagram: Apple's version of Model-View-Controller, from its documentation]

I have to admit that I hadn't looked at this in a while, maybe ever, so you can imagine my surprise and shock when I stumbled upon it. As far as I can tell, this architectural style comes from having self-contained widgets that encapsulate very small pieces of information such as simple strings, booleans or numbers. The MVC architecture was not intended for these kinds of small widgets:

MVC was conceived as a general solution to the problem of users controlling a large and complex data set.
If you look at the examples, the views are large both in size and in scope, and they talk to a complex model. With a widget, there is no complex model and no filtering being done by the view; the widget contains its own data, for example a string or a number. An advantage of widgets is that you can meaningfully assemble them in a tool like Interface Builder; with a more MVC-like large view, all you have in IB is a large blank space labeled 'Custom View'. On the other hand, I've had very good experiences with "real" (large view) MVC in creating high performance, highly responsive user interfaces.

Model Widget Controller (MWC), as I like to call it, is more tuned for forms and database programming, and has problems with more reactive scenarios. As Josh Abernathy wrote:

Right now we write UIs by poking at them, manually mutating their properties when something changes, adding and removing views, etc. This is fragile and error-prone. Some tools exist to lessen the pain, but they can only go so far. UIs are big, messy, mutable, stateful bags of sadness.

To me, this sadness is almost entirely a result of using MWC rather than MVC. In MVC, the "V" is essentially a function of the model: you don't push at it or poke at it, you just tell it "something changed" and it redraws itself.

And so the question looms: is react.native just a result of (Apple's) misunderstanding (of) MVC?

As always, your comments are welcome here or on HN.