Sunday, June 7, 2015

Steve Jobs on Swift

No, there is no actual evidence of Steve commenting on Swift. However, he did say something about the road to sophisticated simplicity.

In short: at first you think the problem is easy because you don't understand it. Then you begin to understand the problem, and everything becomes terribly complicated. Most people stop there, and Apple used to make fun of those who do.

To me this is the perfect visual illustration of the crescendo of special cases that is Swift.

The answer to this, according to Steve, is "[..] a few people keep burning the midnight oil and finally understand the underlying principles of the problem and come up with an elegantly simple solution for it. But very few people go the distance to get there."

Apple used to be very much about going that distance, and I don't think Swift lives up to that standard. That doesn't mean it's all bad or that it's completely irredeemable; there are good elements. But they stopped at sophisticated complexity. And "well, it's not all bad" is not exactly what Apple stands for, or what we as Apple customers expect and, quite frankly, deserve. Had there been a Steve in Dev Tools, he would have said: do it again, this is not good enough.

As always, comments welcome here or on HN

Saturday, May 23, 2015

I am jealous of Swift

Really, I am. They get to do everything wrong that there is in language design, and yet the results get fawned upon and the obvious flaws are not just overlooked but turned into their opposite.

Language Design

What do I mean? Well, primarily this:
Swift is a crescendo of special cases stopping just short of the general; the result is complexity in the semantics, complexity in the behaviour (i.e. bugs), and complexity in use (i.e. workarounds).
The list Rob compiled is impressively well-researched. "Special cases stopping just short of the general" is enough for me; it is THE cardinal sin of language design. But I would add "needlessly replacing the keyword message syntax at exactly the point where it was no longer an issue and adding it back as an abomination of accidental complexity the world has never seen before". Let's see what Gilad Bracha, an actual programming language designer, has to say about keyword syntax:
This notation makes it impossible to have an arity error when calling a method. In a dynamically typed language, this is a huge advantage.

I am keenly aware that this syntax is unfamiliar to most programmers, and is a potential barrier to adoption. However, it improves usability massively. Furthermore, a growing number of programmers are learning this notation because of its use in Objective-C (e.g., the iOS APIs).

Abandoning keyword syntax at this point in time takes "snatching defeat from the jaws of victory" to a whole new and exciting level!

Or the whole idea of having every arithmetic operation be a potential crash point, despite the fact that proper numeric towers have been around for many decades, decently optimized (and certainly no slower than unoptimized Swift).

And yet Rob, for example, writes that the main culprit for Swift's complexity is Objective-C, which I find somewhat mind-boggling. After all, the requirement for Objective-C interoperability couldn't exactly have come as a last-minute surprise foisted on an existing language. Folks: if you're designing a replacement language for Apple's Cocoa frameworks, Objective-C compatibility needs to be designed in from the beginning, not added as an afterthought. And if you don't design your language to be at odds with the frameworks it will be supporting, you will discover that you can get a much cleaner design.

Performance

The situation is even more bizarre when it comes to performance. For example, here's a talk titled How Swift is Swift. The opening paragraph declares that "Swift is designed to be fast, very fast", yet a few paragraphs (or slides) down, we learn that debug builds are often 100 times slower than optimized builds (which themselves don't really rival C).

Sorry, that's not the sign of a language that's "designed to be fast". Those are the characteristics of a language design that is inherently super, super slow, and that requires leaning heavily on the optimizer to get performance to an acceptable level.

And of course, the details bear that out: copy semantics are usually expensive, so the optimizer has to elide those copies in the majority of cases. The same goes for ARC, which is built in and likewise requires the optimizer to be effectively clairvoyant (and: on) in order not to suffer 30x regressions.

Apart from the individual issues, the overriding one is that Swift's performance model is extremely opaque (100x just for turning the optimizer on). Having the optimizer do a heroic job of optimizing code that we don't care about is of no use if we can't figure out why the code we do care about is slow, or how to make it go fast.

Jealousy

So what makes little amateur language designer me jealous is that I really do try to get these things right and make sure the design is parsimonious, while these guys joyfully ignore every rule in the book, then trample on said book in ways that should get the language designers' guild to revoke their license, and yet there is almost universal fawnage.

Whoever said life was fair?

As always, comments welcome here or on HN

Sunday, April 5, 2015

React.native isn't

While we're on the subject of terminological disasters, Facebook's react.native seems to be doing a good job of muddying the waters.

While some parts make use of native infrastructure, a lot do not:

  1. it uses views as drawing results, rather than as drawing sources,
  2. it has a parallel component hierarchy,
  3. ListView isn't UITableView (and from what I read, can't be),
  4. even buttons aren't UIButton instances,
  5. it doesn't use the responder chain, but implements something "similar", and finally,
  6. oh yes, JavaScript

None of this is necessarily bad, but whatever it is, it sure ain't "native".

What's more, the rationale given for React and for the Components framework that was also just released echoes the misunderstandings Apple shows about the MVC pattern:

[Figure: Facebook's Components diagram of MVC data/event flow]

Just as a reminder: what's shown here, with controllers pushing data to views at any time, is not MVC, unless you use that to mean "Massive View Controller".

In Components and react.native, this "pushing of mutable state to the UI" is supposed to be replaced by "a (pure) function of the model". But that is exactly what a View (UIView or NSView) is, and what drawRect: does. So next time you are annoyed by pushing data to views, instead of creating a whole new framework, just drag a Custom View from the palette into your UI and implement its drawRect:. Creating views as a result of drawing (and/or turning components into view state mutations) is more stateful than drawRect:, not less.

Again, that doesn't mean it's bad or useless; it just means it isn't what it says on the tin. And that's a problem. From what I've heard so far, the most enthusiastic response to react.native has come from web developers who can finally code "native" apps without learning Objective-C/Swift or Java. That may or may not be useful (past experience suggests not), but it's something completely different from what the claims are.

Oh and finally, the "react" part seems to refer to "one-way reactive data flow", an even bigger terminological disaster that I will examine in a future post.

As always, comments welcome here or on HN

Friday, April 3, 2015

Model Widget Controller (MWC) aka: Apple "MVC" is not MVC

I probably should have taken more notice that time when, after I asked why a specific piece of UI code had been structured in a particular way, one of my colleagues at 6Wunderkinder informed me that Model View Controller meant the View must not talk to the Model, and that instead the Controller is responsible for mediating all interaction between the View and the Model. It certainly didn't match the definition of MVC that I knew, so I checked the Wikipedia page on MVC just in case I had gone completely senile, but it checked out with what I remembered:
  1. the controller updates the model,
  2. the model notifies the view that it has changed, and finally
  3. the view updates itself by talking to the model
(The labeling on the graphic on the Wikipedia page is a bit misleading, as it suggests that the model updates the view, but the text is correct; a minimal sketch of this flow follows below.)

What I should have done, of course, is keep asking "Why?", but I didn't, my excuse being that we were under pressure to get the Wunderlist 3.0 release out the door. Anyway, I later followed up on some of my confusion about both React.native and ReactiveCocoa (more on those in a later post) and found the following incorrect diagram in a Ray Wenderlich tutorial on ReactiveCocoa and MVVM.

Hmm...that's the same confusion that my colleague had. The plot thickens as I re-check Wikipedia just to be sure. Then I had a look at the original MVC papers by Trygve Reenskaug, and yes:

A view is a (visual) representation of its model. It would ordinarily highlight certain attributes of the model and suppress others. It is thus acting as a presentation filter. A view is attached to its model (or model part) and gets the data necessary for the presentation from the model by asking questions.

The 1988 JOOP article "MVC Cookbook" also confirms:

[Figure: MVC interaction diagram from Krasner & Pope's 1988 JOOP article]

So where is this incorrect version of MVC coming from? It turns out, it's in the Apple documentation, in the overview section!

[Figure: Apple's Model-View-Controller diagram from its documentation]

I have to admit that I hadn't looked at this in a while, maybe ever, so you can imagine my surprise and shock when I stumbled upon it. As far as I can tell, this architectural style comes from having self-contained widgets that encapsulate very small pieces of information such as simple strings, booleans or numbers. The MVC architecture was not intended for these kinds of small widgets:

MVC was conceived as a general solution to the problem of users controlling a large and complex data set.
If you look at the examples, the views are large both in size and in scope, and they talk to a complex model. With a widget, there is no complex model and no filtering being done by the view; the widget contains its own data, for example a string or a number. An advantage of widgets is that you can meaningfully assemble them in a tool like Interface Builder; with a more MVC-like large view, all you have in IB is a large blank space labeled 'Custom View'. On the other hand, I've had very good experiences with "real" (large view) MVC in creating high performance, highly responsive user interfaces.

Model Widget Controller (MWC), as I like to call it, is more tuned for forms and database programming, and has problems with more reactive scenarios. As Josh Abernathy wrote:

Right now we write UIs by poking at them, manually mutating their properties when something changes, adding and removing views, etc. This is fragile and error-prone. Some tools exist to lessen the pain, but they can only go so far. UIs are big, messy, mutable, stateful bags of sadness.

To me, this sadness is almost entirely a result of using MWC rather than MVC. In MVC, the "V" is essentially a function of the model: you don't push or poke at it, you just tell it "something changed" and it redraws itself.

And so the question looms: is react.native just a result of (Apple's) misunderstanding (of) MVC?

As always, your comments are welcome here or on HN.

Thursday, March 19, 2015

Why overload operators?

One of the many things that's been puzzling me for a long time is why operator overloading appears to be at the same time problematic and attractive in languages such as C++ and now Swift. I know I certainly feel the same way: it's somehow very cool to massage the language that way, but at the same time the thought of having everything redefined underneath me fills me with horror, and what little I've seen and heard of C++ with heavy overloading confirms that horror, except for very limited domains. What's really puzzling is that binary messages in Smalltalk, which are effectively the same feature (special characters like *, + etc. can be used as message names taking a single argument), do not seem to have either of these effects: they are neither particularly attractive to Smalltalk programmers, nor are their effects particularly worrisome. Odd.

Of course we simply don't have that problem in C or Objective-C: operators are built-in parts of the language, and neither the C part nor the Objective part has a comparable facility. This is a large part of the reason we don't have a useful number/magnitude hierarchy in Objective-C, and numeric/array libraries aren't that popular: writing [number1 multipliedBy:number2] is just too painful.

Some recent articles and talks that dealt with operator overloading in Apple's new Swift language just heightened my confusion. But as is often the case, that heightened confusion seems to have been the last bit of resistance that pushed through an insight.

Anyway, here is an example from NSHipster Matt Thompson's excellent post on Swift Operators, an operator for exponentiation wrapping the pow() function:

import Foundation

infix operator ** { associativity left precedence 160 } // the operator must be declared before it can be defined (Swift 1/2 syntax)

func ** (left: Double, right: Double) -> Double {
    return pow(left, right)
}
This is introduced as "the arithmetic operator found in many programming languages, but missing in Swift". Here is an example of the difference:
pow( left, right )
left ** right
pow( 2, 3 )
2 ** 3
Why is this seen as an improvement (and to my eyes it is)? There are two candidates for what the difference might be: the fact that the operation is now written in infix notation, and the fact that it's using special characters. Do these two factors contribute evenly, or is one more important than the other? Let's look at the same example in Smalltalk syntax, first with a normal keyword message and then with a binary message (Smalltalk uses raisedTo:, but let's stick with pow: here to make the comparison similar):
left pow: right.
left ** right.
2 pow: 3.
2 ** 3.
To my eyes at least, the binary-message version is no improvement over the keyword message; in fact, it seems somewhat worse. So the attractiveness of infix notation appears to be a strong candidate for why operator overloading is desirable. Of course, having to use operator overloading to get infix notation is problematic, because special characters generally do not convey the meaning of the operation nearly as well as names, conventional arithmetic aside.

Note that dot notation for message sends/method calls does not really seem to have the same effect, even though it could technically also be considered an infix notation:

left.pow( right )
left ** right
2.pow( 3 )
2 ** 3
There is more anecdotal evidence. In Chris Eidhof's talk on functional Swift, scrub to around the 10-minute mark. There you'll find the following code with some nested and curried function calls:
let result = colorOverlay( overlayColor)(blur(blurRadius)(image))
"This does not look to nice [..] it gets a bit unreadable, it's hard to see what's going on" is the quote.
Having a special compose function doesn't actually make it better
let myFilter = composeFilters(blur(blurRadius),colorOverlay(overlayColor))
let result = myFilter(image)
Infix to the rescue! Using the |> operator:
let myFilter = blur(blurRadius) |> colorOverlay(overlayColor)
let result = myFilter(image)
Chris is very fair-minded about this: he mentions that due to the special characters involved, you can't really infer what |> means from looking at the code, you have to know, and that having many of these sorts of operators makes code effectively incomprehensible. Or as one Twitter user put it: "Like most things in engineering, it's a trade-off." My guess, though, is that the trade-off would shift if we had infix without requiring nonsensical characters.

Built in
I do believe that there is another factor involved, one that is more psychologically subtle: the idea of language as a (pre-defined) thing vs. a mechanism for building your own abstractions, which I mentioned in my previous post on Swift performance.

In that post, I mentioned BASIC as the primary example of the former, a language as a collection of built-in features, with C and Pascal as (early) examples of the latter, languages as generic mechanisms for building your own features. However, those latter languages don't treat all constructs equally. Specifically, all the operators are built-in, not user-definable or -overridable. They also correspond closely to those operations that are built into the underlying hardware and map to single instructions in assembly language. In short: even in languages with a strong "user-defined" component, there is a hard line between "user-defined" and "built-in", and that line just happens to map almost 1:1 to the operator/function boundary.

Hackers don't like boundaries. Or rather: they love boundaries, the overcoming of. I'd say that overloaded operators are particularly attractive (to hacker mentalities, but that's probably most of us) in languages where this boundary between user-defined and built-in stuff exists, precisely because overloaded operators let you cross that boundary and do things normally reserved for language implementors.

If you think this idea is too crazy, listen to John Siracusa, Guy English and Rene Ritchie discussing Swift language features and operator overloading on Debug Podcast Number 49, Siracusa Round 2, starting at 45:45. I've transcribed a bit below, but I really recommend you listen to the podcast, it's very good:

  • 45:45 Swift is a damning comment on C++ [benefits without the craziness]
  • 46:06 You can't do what Swift did [putting basic types in the standard library] without operator overloading. [That's actually not true, because in Swift the operators are just syntax -> but it is exactly the idea I talked about earlier]
  • 47:50 If you're going to add something like regular expressions to the language ... they should have some operators of their own. That's a perfect opportunity for operator overloading
  • 48:07 If you're going to add features to the language, like regular expressions or so [..] there is well-established syntax for this from other languages.
  • 48:40 ...or range operators. Lots of languages have range operators these days. Really it's just a function call with two different operands. [..] You're not trying to be clever. All you're trying to do is make it natural to use features that exist in many other languages. The thing about Swift is you don't have to add syntax to the language to do it, because it's so malleable. If you're not adding a feature, like adding regular expressions to the language; if you're not doing that, don't try to get clever. Consider the features as existing for the benefit of the expansion of the language, so that future features look natural in it and not bolted on, even though technically everything is in a library. Don't think of it as: in my end-user code I'm going to come up with symbols that combine my types in novel ways, because what are you even doing there?
  • 50:17 if you have a language like this, you need new syntax and new behavior to make it feel natural. [new behavior strings array] and it has the whole struct thing. The basics of the language, the most basic things you can do, have to be different, look different and behave different for a modern language.
  • 51:52 "using operator overloading to add features to the language" [again, not actually true]
The interesting thing about this idea of a boundary between "language things" and "user things" is that it does not actually align with the boundary between operators and named functions in Swift, but apparently it still feels like it does, so "extending the language" is seen as roughly equivalent to "adding some operators", with all the sound caveats that apply.

In fact, going back to Matt Thompson's article from above, it is kind of odd that he talks about the exponentiation operator as missing from the language, when in fact the operation is available in the language. So if the operation crosses the boundary from function to operator, then and only then does it become part of the language.

In Smalltalk, on the other hand, the boundary has disappeared from view. It still exists in the form of primitives, but those are well hidden all over the class hierarchy and not something that is visible to the developer. So in addition to having infix notation available for named operations, Smalltalk doesn't have the notion of something being "part of the language" rather than "just the library" merely because it uses nonsensical characters. Everything is part of the library, the library is the language, and you can use names or special characters as appropriate, not because of asymmetries in the language.

And that's why operator overloading is a thing even in languages like Swift, whereas it is a non-event in Smalltalk.

Thursday, September 11, 2014

iPhone 6 Plus and The End of Pixels

It's been a long time coming. NeXTStep in 1989 featured Display PostScript, and therefore a device-independent imaging model that meant you did not specify graphics in pixels, but rather in physical units. The default was a variant of the printer's point at 1/72nd of an inch, which happened to be close to the typical pixel resolution of displays at the time. However, 1 point never meant 1 pixel; it meant 1/72nd of an inch, and the combination of floating point coordinates and transformation matrices meant you could use pretty much any unit you wanted. When NeXT bought Apple, it brought this imaging model with it, although with some modifications due to Adobe intransigence about licensing, and with the addition of anti-aliasing.

However, despite the device-independent APIs, we still have pixel-based content and "pixel-accurate" graphics. This has made less and less sense over time, with retina displays making pixel-accuracy moot (no more screen fonts!), scaled modes making it impossible, and both iOS 7 and OS X 10.10 going for a more geometric look. Still, the design community has resisted, talking about @3x pixel art etc.

No more.

The iPhone 6 Plus has a 1920x1080 panel, but the simulator renders at 3x. These two resolutions don't match and so the pixels will need to be downsampled to the display resolution. Whether that is accomplished by downsampling pixel art (which happens automagically with Quartz and the proper device transform set) or as a separate step that downsamples the entire rendered framebuffer doesn't matter (much). Either way, there are no more "pixel perfect" pre-rendered designs.

Device-independent graphics, here we come at last. We're only a quarter century late.

Update: "Its 401 PPI display is the first display I’ve ever used on which, no matter how close I hold it to my eyes, I can’t perceive the pixels. " - John Gruber (emphasis mine)

Wednesday, September 10, 2014

collect is what for does

I recently stumbled on Rob Napier's explanation of the map function in Swift. So I am reading along yadda yadda when suddenly I wake up and my eyes do a double take:
After years of begging for a map function in Cocoa [...]
Huh? I rub my eyes, probably just a slip up, but no, he continues:
In a generic language like Swift, "pattern" means there's probably a function hiding in there, so let's pull out the part that doesn't change and call it map:
Not sure what he means by a "generic language", but here's how we would implement a map function in Objective-C.
#import <Foundation/Foundation.h>

// uses -autorelease, so compile without ARC (-fno-objc-arc)
typedef id (*mappingfun)( id arg );

static id makeurl( NSString *domain ) {
  return [[[NSURL alloc] initWithScheme:@"http" host:domain path:@"/"] autorelease];
}

NSArray *map( NSArray *array, mappingfun theFun )
{
  NSMutableArray *result=[NSMutableArray array];
  for ( id object in array ) {
    id objresult=theFun( object );
    if ( objresult ) {
       [result addObject:objresult];
    }
  }
  return result;
}

int main(int argc, char *argv[]) {
  NSArray *source=@[ @"apple.com", @"objective.st", @"metaobject.com" ];
  NSLog(@"%@",map(source, makeurl ));
}

This is less than 7 non-empty lines of code for the mapping function, and took me less than 10 minutes to write in its entirety, including a trip to the kitchen for an extra cookie, recompiling 3 times and looking at the qsort(3) manpage because I just can't remember C function pointer declaration syntax (though it took me less time than usual, maybe I am learning?). So really, years of "begging" for something any mildly competent coder could whip up between bathroom breaks or during a lull in their Twitter feed?

Or maybe we want a version with blocks instead? Another 2 minutes, because I am a klutz:


#import <Foundation/Foundation.h>

typedef id (^mappingblock)( id arg );

NSArray *map( NSArray *array, mappingblock theBlock )
{
  NSMutableArray *result=[NSMutableArray array];
  for ( id object in array ) {
    id objresult=theBlock( object );
    if ( objresult ) {
       [result addObject:objresult];
    }
  }
  return result;
}

int main(int argc, char *argv[]) {
  NSArray *source=@[ @"apple.com", @"objective.st", @"metaobject.com" ];
  NSLog(@"%@",map(source, ^id ( id domain ) {
    return [[[NSURL alloc] initWithScheme:@"http" host:domain path:@"/"] autorelease];
        }));
}

Of course, we've also had collect for a good decade or so, which turns the client code into the following, much more readable version (Objective-Smalltalk syntax):
NSURL collect URLWithScheme:'http' host:#('objective.st' 'metaobject.com') each path:'/'.

As I wrote in my previous post, we seem to be regressing to a mindset about computer languages that harkens back to the days of BASIC, where everything was baked into the language, and things not baked into the language or provided by the language vendor do not exist.

Rob goes on to write "The mapping could be performed in parallel [..]", for example like parcollect? And then: "This is the heart of good functional programming." No. This is the heart of good programming.

Having processed that shock, I fly over a discussion of filter (select) and stumble over the next whopper:

It’s all about the types

Again...huh?? Our map implementation certainly didn't need (static) types for the list, and all the Smalltalkers and LISPers who have been gleefully using higher-order techniques for 40-50 years without static types must also not have gotten the memo.

We [..] started to think about the power of functions to separate intent from implementation. [..] Soon we’ll explore some more of these transforming functions and see what they can do for us. Until then, stop mutating. Evolve.
All modern programming separates intent from implementation. Functions are a fairly limited and primitive way of doing so. Limiting power in this fashion can be useful, but please don't confuse the power of higher-order programming with the limitations of functional programming; they are quite distinct.