Thursday, November 14, 2019

Presenting (in) Objective-Smalltalk

2019 has been the year that I have started really talking about Objective-Smalltalk in earnest, because enough of the original vision is now in place.

My first talk was at the European Smalltalk User Group's (ESUG) annual conference in my old hometown of Cologne: (pdf)

This year's ESUG was my first since Essen in 2001, and it almost seemed like a bit of a timewarp. Although more than half the talks were about Pharo, the subjects seemed mostly the same as back then: a bit of TDD, a bit of trying to deal with native threads (exactly the same issues I struggled with when I was doing the CocoaSqueak VM), a bit of 3D graphics that weren't any better than 3D graphics in other environments, but in Smalltalk.

One big topic was getting large (and very profitable) Smalltalk code-bases running on mobile devices such as iPhones. The most popular approach was transpiling to JavaScript; another was translating the VM code to JavaScript and then having that run off-the-shelf images. Objective-Smalltalk can also be put in this class, with its mix of interpretation and native compilation.

My second talk was at Germany's oldest Mac conference, Macoun in Frankfurt. The videos from there usually take a while to appear, but here is one reaction:

"Anyone who wants a glimpse at the future should have watched @mpweiher's talk"

Aww, shucks, thanks, but I'll take it. :-)

I also had two papers accepted at SPLASH '19: Standard Object Out: Streaming Objects with Polymorphic Write Streams at the Dynamic Languages Symposium, and Storage Combinators at Onward!.

Anyway, one aspect of those talks that I didn't dwell on is that the presentations themselves were implemented in Objective-Smalltalk; in fact, the definitions were Objective-Smalltalk expressions, complex object literals to be precise.

What follows is an abridged version of the ESUG presentation:


controller := #ASCPresentationViewController{
    #Name : 'ESUG Demo'.
    #Slides : #(

      #ASCChapterSlide { 
               #text : 'Objective-SmallTalk'.
               #subtitle : 'Marcel Weiher (@mpweiher)'
         }  ,

        #ASCBulletSlide{ 
             #title : 'Objective-SmallTalk'.
             #bullets : #( 
                'Embeddable SmallTalk language (Mac, iOS, Linux, Windows)',
                'Objective-C framework (peer/interop)',
                'Generalizes Objects+Messages to Components+Connectors',
                'Enable composition by solving Architectural Mismatch',
             )
        } ,
      #ASCBulletSlide{ 
             #title : 'The Gentle Tyranny of Call/Return'.
             #bullets : #( 
                'Feynman: we name everything just a little wrong',
                'Multiparadigm: Procedural, OO and FP!',
                "Guy Steele: it's no longer about completion",
                "Oscar Nierstrasz: we were told we could just model the domain",
                "Andrew Black: good OO students anthropomorphise the objects",
             )
        } ,

         #ProgramVsSystem { 
              #lightIntensities : #( 0.2 , 0.7 )
              
         }  ,


       #ASCSlideWithFigure{ 
             #delayInSeconds : 5.0.
             #title : 'Objects and Messages'.
             #bullets : #( 
                'Objective-C compatible semantics',
                'Interpreted and native-compiled',
                '"C" using type annotations',
                'Higher Order Messaging',
                'Framework-oriented development',
                'Full platform integration',
             )
        } ,
  

       #ASCBulletSlide{ 
             #title : 'Pipes and Filters'.
             #bullets : #( 
                'Polymorphic Write Streams (DLS ''19)',
                '#writeObject:anObject',
                'Triple Dispatch + Message chaining',
                'Asynchrony-agnostic',
                'Streaming / de-materialized objects',
                'Serialisation, PDF/PS (Squeak), Wunderlist, MS To Do',
                'Outlook: filters generalise methods?',
            )
        } ,
 
       #ASCBulletSlide{ 
             #title : 'In-Process REST'.
             #bullets : #( 
                'What real large-scale networks use',
                'Polymorphic Identifiers',
                'Stores',
                'Storage Combinators',
                'Used in a number of applications',
             )
        } ,


       #ASCBulletSlide{ 
             #title : 'Polymorphic Identifiers'.
             #bullets : #( 
                'All identifiers are URIs',
                "var:hello := 'World!'",
                'file:{env:HOME}/Downloads/site := http://objective.st',
                'slider setValueHolder: ref:var:celsius',
             )
        } ,

       #ASCBulletSlide{ 
             #title : 'Storage Combinators'.
             #bullets : #( 
                'Onward! ''19',
                'Combinator exposes + consumes REST interfaces',
                'Uniform interface (REST) enables pluggability',
                'Narrow, semantically tight interface enables intermediaries',
                '10x productivity/code improvements',
             )
        } ,


      #ImageSlide{ 
               #text : 'Simple Composed Store'.
               #imageURL : '/Users/marcel/Documents/Writing/Dissertation/Papers/StorageCombinators/disk-cache-json-aligned.png'.
               #xOffset : 2.0 .
               #imageScale : 0.8
         }  , 
      #ASCBulletSlide{ 
             #title : 'Outlook'.
             #bullets : #( 
                'Port Stores and Polymorphic Write Streams',
                'Documentation / Sample Code',
                'Improve native compiler',
                'Tooling (Debugger)',
                'You! (http://objective.st)',
             )
        }  ,


      #ASCChapterSlide { 
               #text : 'Q&A   http://objective.st'.
               #subtitle : 'Marcel Weiher (@mpweiher)'
         }  ,
      )
}. 


There are a number of things going on here:
  • Complex object literals
  • A 3D presentation framework
  • Custom behavior via custom classes
  • Framework-oriented programming
Let's look at these in turn.

Complex object literals

Objective-Smalltalk has literals for arrays (really: ordered collections) and dictionaries, like many other languages now. Array literals are taken from Smalltalk, with a hash and parentheses: #(). Unlike other Smalltalks, entries are separated by commas, so #( 1,2,3 ) rather than #( 1 2 3 ). For dictionaries, I borrowed the curly braces from Objective-C, so #{}.

This gives us the ability to specify complex property lists directly in code. A common idiom in Mac/iOS development circles is to initialize objects from property lists, so something like the following:


presentation = [[MyPresentation alloc] initWithDictionary:aDictionary];

All complex object literals really do is add a little bit of syntactic support for this idiom, by noticing that the two respective characters at the start of array and dictionary literals give us a space to put a name, a class name, between those two characters:


presentation := #MyPresentation{ ... };

This will parse the text between the curly brackets as a dictionary and then initialize a MyPresentation object with that dictionary using the exact -initWithDictionary: message given above. This may seem like a very minor convenience, and it is, but it actually makes it possible to simply write down objects, rather than having to write code that constructs objects. The difference is subtle but significant.
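
For illustration, here is a minimal Objective-C sketch of what such a dictionary-initializable class could look like on the receiving end, using plain KVC. It is an assumption for illustration only, not the actual Objective-Smalltalk machinery; MyPresentation's properties and the key-to-property mapping are made up:


#import <Foundation/Foundation.h>

@interface MyPresentation : NSObject
@property (nonatomic, copy) NSString *name;
@property (nonatomic, copy) NSArray  *slides;
- (instancetype)initWithDictionary:(NSDictionary *)aDictionary;
@end

@implementation MyPresentation
- (instancetype)initWithDictionary:(NSDictionary *)aDictionary
{
    if ((self = [super init])) {
        // Map each dictionary entry onto the property with the same name.
        [self setValuesForKeysWithDictionary:aDictionary];
    }
    return self;
}
@end

However the receiving class chooses to interpret the keys, the literal syntax only has to hand it the parsed dictionary.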

The benefit becomes more obvious once you have nested structures. A normal plist contains no specific class information, just arrays, dictionaries, numbers and strings; in the Objective-C example, that class information is provided externally, by passing the generic plist to an instance of a specific class.

(JSON has a similar problem, which is why I still prefer XML for object encoding.)

So either that knowledge must also be provided externally, for example by the implicit knowledge that all substructure is uniform, or custom mechanisms must be devised to encode that information inside the dictionaries or arrays. Ad hoc. Every single time.

Complex object literals provide a common mechanism for this: each sub-dictionary or sub-array can be tagged with the class of the object to create, and there is a convenient and distinct syntax for doing so.

A 3D presentation framework

One of the really cool wow! effects of Alan Kay's Squeak demos is always when he breaks through the expected boundaries of a presentation with slides and starts live programming and interactive sketching on the slide. The effect is very similar to when characters break the "fourth wall", and tends to be strongest on the very jaded, who were previously dismissive of the whole presentation.

Alas, a drawback is that those presentations in Squeak tend to look a bit amateurish and cartoonish, not at all polished.

Along came the Apple SceneKit Team's presentations, which were done as Cocoa/SceneKit applications. Which is totally amazing, as it allows arbitrary programmability and integration with custom code, just like Alan's demos, but with a lot more polish.

Of course, an application like that isn't reusable, the effort is pretty high, and interactivity is low.

I wonder what we could do about that?

First: turn the presentation application into a framework (Slides3D). Second: drive that framework interactively with Objective-Smalltalk from my Workspace-like "Smalltalk" application: presentation.txt. After a bit of setup, such as loading the framework (framework:Slides3D load.) and defining a few custom slide classes, it goes on to define the presentation using the literal shown above and then starts it by telling the presentation controller to display itself in a window.


framework:Slides3D load.     
class ProgramVsSystem : ASCSlide {
   var code.
   var system.
   ...
}.
class ImageSlide : ASCSlide { 
     var text.
     var image.
     ...
}. 

controller := #ASCPresentationViewController{
    #Name : 'ESUG Demo'.
    #Slides : #(

      #ASCChapterSlide { 
               #text : 'Objective-SmallTalk'.
               #subtitle : 'Marcel Weiher (@mpweiher)'
         }  ,

       ...
      )
}. 
     
controller view openInWindow:'Objective-SmallTalk (ESUG 2019)'. 

Voilà: highly polished, programmatically driven presentations that I can edit interactively, in a somewhat convenient format. Of course, this is not a one-off for presentations: the same mechanism can be used to define other object hierarchies, including but not limited to interactive GUIs.

Framework-oriented programming

Which brings us to the method behind all this madness: the concept I call framework-oriented programming.

The concept is worth at least another article or two, but at its most basic it boils down to: for goodness' sake, put the bulk of your code in frameworks, not in an application. Even if all you are building is an application. One app that does this right is Xcode. On my machine, the entire app bundle is close to 10GB. But the actual Xcode binary in /Applications/Xcode.app/Contents/MacOS? 41KB. Yes, kilobytes. And most of that is bookkeeping and boilerplate; it really just contains a C main() function, which I presume largely matches the one that Xcode generates.

Why?

Simple: an Apple framework (i.e.: a .framework bundle) is at least superficially composable, but a .app bundle is not. You can compose frameworks into bigger frameworks, and you can take a framework and use it in a different app. This is difficult to impossible with apps (and no, kludged-together AppleScript concoctions don't count).

And doing it is completely trivial: after you create an app project, just create a framework target alongside the app target, add that framework to the app and then add all code and resources to the framework target instead of to the app target. Except for the main() function. If you already have an app, just move the code to the framework target, making adjustments to bundle loading code (the relevant bundle is now the framework and no longer the app/main bundle). This is what I did to derive Slides3D from the WWDC 2013 SceneKit App.
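
To make the "adjustments to bundle loading code" concrete, here is a small Objective-C sketch of the typical change; the SlideResources class and the resource name are made up for illustration, the only real point is NSBundle's bundleForClass: versus mainBundle:


#import <Foundation/Foundation.h>

// Inside any class that now ships in the framework, look resources up in
// the framework's own bundle rather than the app's main bundle.
@interface SlideResources : NSObject
+ (NSURL *)imageURLNamed:(NSString *)name;
@end

@implementation SlideResources
+ (NSURL *)imageURLNamed:(NSString *)name
{
    // Before the move to a framework target, this would have been
    // [NSBundle mainBundle].
    NSBundle *frameworkBundle = [NSBundle bundleForClass:self];
    return [frameworkBundle URLForResource:name withExtension:@"png"];
}
@end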

What I've described so far is just code packaging. If you also organize the actual code as an object-oriented framework, you will notice that over time it evolves into a black-box framework, with objects that are created, configured and composed. This is somewhat tedious to do in the base language (see: creating Views programmatically), so the final evolutionary step is usually a DSL (Hello, SwiftUI!). However, most of such a DSL tends to be just creating, configuring and connecting objects. In other words: complex object literals.

Monday, November 11, 2019

What Alan Kay Got Wrong About Objects

One of the anonymous reviewers of my recently published Storage Combinators paper (pdf) complained that hiding disk-based, remote, and local abstractions behind a common interface was a bad idea, citing Jim Waldo's A Note on Distributed Computing.

Having read both this and the related 8 Fallacies of Distributed Computing a while back, I didn't see how this would apply, and re-reading confirmed my vague recollections: these are about the problems of scaling things up from the local case to the distributed case, whereas Storage Combinators and In-Process REST are about scaling things down from the distributed case to the local case. The Waldo paper in particular is also very specifically about objects and messages; REST is a different beast.

And of course scaling things down happens to be a time-honored tradition with a pretty good track record:

In computer terms, Smalltalk is a recursion on the notion of computer itself. Instead of dividing "computer stuff" into things each less strong than the whole—like data structures, procedures, and functions which are the usual paraphernalia of programming languages—each Smalltalk object is a recursion on the entire possibilities of the computer. Thus its semantics are a bit like having thousands and thousands of computers all hooked together by a very fast network.
Mind you, I think this is absolutely brilliant: in order to get something that will scale up, you simply start with something large and then scale it down!

But of course, this is not what actually happened. As we all experienced, scaling local objects and messaging up to the distributed case did not (CORBA, SOAP, ...), and, as Waldo explains, cannot, in fact, work. What gives?

My guess is that the method described wasn't actually used: when Alan came up with his version of objects, there were no networks with thousands of computers. And so Alan could not actually look at how they communicated; he had to imagine it, it was a Gedankenexperiment. And thus objects and messages were not a scaled-down version of an actual larger thing, they were a scaled-down version of an imagined larger thing.

Today, we do have a large network of computers, with not just thousands but billions of nodes. And they communicate via HTTP using the REST architectural style, not via distributed objects and messages.

So maybe if we took that communication model and scaled it down, we might be able to do even better than objects and messages, which already did pretty brilliantly. Hence In-Process REST, Polymorphic Identifiers and Storage Combinators, and yes, the results look pretty good so far!

The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas.

So of course Alan is right after all, just not about objects and messages, which are too specific: "ma", or "interstitialness" or "connector" is the big idea, messaging is just one incarnation of that idea.

Thursday, November 7, 2019

Instant Builds

One of the goals I am aiming for in Objective-Smalltalk is instant builds and effective live programming.

A month ago, I got a package from an old school friend: my old Apple ][+, which I thought I had given as a gift, but which he insisted had been a long-term loan. That machine featured 48KB of DRAM and a 1 MHz, 8-bit 6502 processor that took multiple cycles for even the simplest instructions, had no multiply instruction and almost no registers. Yet when I turn it on, it becomes interactive faster than the CRT warms up, and the programming experience remains fully interactive after that. I type something in, it executes. I change the program, type "RUN" and off it goes.

Of course, you can also get that experience with more complex systems, Smalltalk comes to mind. But the point is that it doesn't take the most advanced technology or heroic effort to make systems interactive; what it takes is making it a priority.


But here we are indeed.

Now Swift is only one example of this, it's a current trend, and of course these systems do claim to provide benefits that are worth the wait, from optimizations to static type-checking with type inference, so that "once it compiles, it works". This is deemed to be (a) 100% worthwhile, despite the fact that there is no scientific evidence backing up these claims (a paper which claimed to have that evidence was just shredded at this year's OOPSLA), and (b) essentially cost-free. But of course it isn't cost-free:

So when everyone zigs, I zag, it's my contrarian nature. Where Swift's message was, essentially "there is too much Smalltalk in Objective-C", my contention is that there is too little Smalltalk in Objective-C (and also that there is too little "Objective" in Smalltalk, but that's a different topic).

Smalltalk was perfectly interactive in its own environment on high-end late-70s and early-80s hardware. With today's monsters of computation, there is no good reason, or excuse for that matter, not to be interactive even when taken into the slightly more demanding Unix/macOS/iOS development world. That doesn't mean there aren't loads of reasons; they're just not any good.

So Objective-Smalltalk will be fast, it will be live or near-live at all times, and it will have instant builds. This isn't going to be rocket science, mostly, the ingredients are as follows:

  1. An interpreter
  2. Late binding
  3. Separate compilation
  4. A fast and simple native compiler
Let's look at these in detail.

An interpreter

The basic implementation of Objective-Smalltalk is an AST-walking interpreter. No JIT, not even a simple bytecode interpreter. Which is about as pessimal as possible, but our machines are so incredibly fast, and a lot of our tasks are simple enough, or are more about computational steering than raw computation, that it actually does a decent job for many of those tasks. (For more on this dynamic, see The Death of Optimizing Compilers by Daniel J. Bernstein)
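
To make "AST-walking" concrete, here is a toy Objective-C evaluator that walks a dictionary-encoded expression tree directly. It is only a sketch of the general approach; the node encoding is made up and this has nothing to do with the actual Objective-Smalltalk interpreter:


#import <Foundation/Foundation.h>

// Toy AST walker: nodes are dictionaries, evaluation is plain recursion.
// No bytecode, no JIT -- just walk the tree and do the work at each node.
static NSNumber *evalNode(NSDictionary *node)
{
    NSString *op = node[@"op"];
    if ([op isEqualToString:@"num"]) {
        return node[@"value"];
    }
    double left  = [evalNode(node[@"left"])  doubleValue];
    double right = [evalNode(node[@"right"]) doubleValue];
    if ([op isEqualToString:@"+"]) { return @(left + right); }
    if ([op isEqualToString:@"*"]) { return @(left * right); }
    return nil; // unknown operator
}

int main(void)
{
    @autoreleasepool {
        // (2 + 3) * 4
        NSDictionary *sum = @{ @"op": @"+",
                               @"left":  @{ @"op": @"num", @"value": @2 },
                               @"right": @{ @"op": @"num", @"value": @3 } };
        NSDictionary *ast = @{ @"op": @"*",
                               @"left":  sum,
                               @"right": @{ @"op": @"num", @"value": @4 } };
        NSLog(@"%@", evalNode(ast)); // prints 20
    }
    return 0;
}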

And because it is just an interpreter, it has no problems doing its thing on iOS:

(Yes, this is in the simulator, but it works the same on an actual device)

Late Binding

Late binding nicely decouples the parts of our software. This means that the compiler has very little information about what happens and can't help a lot in terms of optimization or checking, something that always drove the compiler folks a little nuts ("but we want to help and there's so much we could do"). It enables strong modularity and separate compilation. Objective-Smalltalk is as late-bound in its messaging as Objective-C or Smalltalk are, but goes beyond them by also late-binding identifiers, storage and dataflow with Polymorphic Identifiers (ACM, pdf), Storage Combinators (ACM, pdf) and Polymorphic Write Streams (ACM, pdf).
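
As a reminder of just how little the compiler gets to know in such a system, here is a tiny Objective-C example in which the selector is only resolved at send time; the receiver class and selector are arbitrary illustrations:


#import <Foundation/Foundation.h>

int main(void)
{
    @autoreleasepool {
        // The compiler sees only "some object" and "some selector";
        // the binding happens when the message is actually sent.
        id receiver = [NSMutableString string];
        SEL sel = NSSelectorFromString(@"appendString:");
        if ([receiver respondsToSelector:sel]) {
            [receiver performSelector:sel withObject:@"bound at runtime"];
        }
        NSLog(@"%@", receiver);
    }
    return 0;
}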

Allowing this level of flexibility while still not requiring a Graal-level Helden-JIT to burn away all the abstractions at runtime will require careful design of the meta-level boundaries, but I think the technically desirable boundaries align very well with the conceptually desirable boundaries: use meta-level facilities to define the language you want to program in, then write your program.

Not making these boundaries clear, and freely mixing meta-level and base-level programming, is what gets us not just into conceptual trouble, but also into the kinds of technical trouble that the Heldencompilers and Helden-JITs have to bail us out of.

Separate Compilation

When you have good module boundaries, you can get separate compilation, meaning a change to one file (or other code-containing entity, if you don't like files) does not require recompiling other files. Smalltalk had this. Unix-style C programming had this, along with the concept of binary libraries (generalized to frameworks on macOS etc.). For some reason, this has taken more and more of a back seat in macOS and iOS development, with full source inclusion and full builds becoming the norm in the community (see CocoaPods), and for a long time being enforced by Apple by not allowing user-defined dynamic libraries on iOS.

While Swift allows separate compilation, this can have such severe negative effects on both performance and compile times that compiling everything on any change has become a "best practice". In fact, we now have a build option "whole module optimization with optimizations turned off" for debugging. I kid you not.

Objective-Smalltalk is designed to enable "Framework-oriented-programming", so separate compilation is and will remain a top priority.

A fast and simple native compiler

However, even with an interpreter for interactive adjustments and separate compilation thanks to good modularity and late binding, you sometimes want to do a full build, or need to rebuild a large part of the codebase.

Even that shouldn't take forever, and in fact it doesn't need to. I am totally with Jonathan Blow on this subject when he says that compiling a medium-size project really shouldn't take more than a second or so.

My current approach for getting there is to use TinyCC's backend as the starting point of the backend for Objective-Smalltalk. After all, the semantics are (mostly) Objective-C's, and Objective-C's semantics are just C's. What I really like about tcc is that it goes so brutally directly to outputting CPU opcodes as binary bytes.


static void gcall_or_jmp(int is_jmp)
{
    int r;
    if ((vtop->r & (VT_VALMASK | VT_LVAL)) == VT_CONST &&
	((vtop->r & VT_SYM) && (vtop->c.i-4) == (int)(vtop->c.i-4))) {
        /* constant symbolic case -> simple relocation */
        greloca(cur_text_section, vtop->sym, ind + 1, R_X86_64_PLT32, (int)(vtop->c.i-4));
        oad(0xe8 + is_jmp, 0); /* call/jmp im */
    } else {
        /* otherwise, indirect call */
        r = TREG_R11;
        load(r, vtop);
        o(0x41); /* REX */
        o(0xff); /* call/jmp *r */
        o(0xd0 + REG_VALUE(r) + (is_jmp << 4));
    }
}

No layers of malloc()ed intermediate representations here! This aligns very nicely with the streaming/messaging approach to high performance that I've taken elsewhere with Polymorphic Write Streams (see above), so I am pretty confident I can make this (a) work and (b) simple/elegant while keeping it (c) fast.

How fast? I obviously don't know yet, but tcc is a fantastic starting point. The following is the current (=wrong) ObjectiveTcc code to drive tcc to build a function that sends a single message:


-(void)generateMessageSendTestFunctionWithName:(char*)name
{
    SEL flagMsg=@selector(setMsgFlag);
    [self functionOnlyWithName:name returnType:VT_INT argTypes:"" body:^{
        [self pushFunctionPointer:objc_msgSend];
        [self pushObject:self];
        [self pushPointer:flagMsg];
        [self call:2];
    }];
}

How often can I do this in one second? On my 2018 high-spec but 13" MBP: 300,000 times. That includes in-memory linking (though not much of that is happening in this example), but not Mach-O generation, as that isn't implemented yet, or writing the whole shebang to disk. I don't anticipate either of those taking appreciably more time.
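
For reference, a number like that can be obtained with a harness along the following lines. This is purely hypothetical code, not part of ObjectiveTcc; the protocol merely mirrors the method shown above so that the sketch compiles on its own:


#import <Foundation/Foundation.h>

// Assumed interface, mirroring the ObjectiveTcc method shown above.
@protocol STCodeGenerating
- (void)generateMessageSendTestFunctionWithName:(char *)name;
@end

static void timeCodeGeneration(id <STCodeGenerating> compiler)
{
    int count = 300000;
    NSDate *start = [NSDate date];
    for (int i = 0; i < count; i++) {
        [compiler generateMessageSendTestFunctionWithName:"msgSendTest"];
    }
    NSTimeInterval elapsed = -[start timeIntervalSinceNow];
    NSLog(@"%d generated functions in %.3f s (%.0f per second)",
          count, elapsed, count / elapsed);
}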

If we consider this to be 2 "lines" of code, one for the function/method header and one for the message, then we can generate binary for 600 KLOC/s. So having a medium-size program compile and link in about a second or so seems eminently doable, even if I manage to slow the raw tcc performance down by about an order of magnitude.

(For comparison: the Swift code base that motivated the Rome caching system for Carthage was clocking in at around 60 lines per second with the Swift compiler of the time. So even with an anticipated order-of-magnitude slowdown we'd still be 1000x faster. 1000x is good enough; it's the difference between 3 seconds and an hour.)

What's the downside? Tcc doesn't do a lot of optimization. But that's OK, as (a) the sorts of optimizations C compilers and backends like LLVM do aren't much use for highly polymorphic and late-bound code, (b) the basics get you around 80% of the way, (c) most code doesn't need that much optimization (see above), and (d) machines have become really fast.

And it helps that we aren't doing crazy things like initially allocating function-local variables on the heap or doing function argument copying via vtables, things that require leaning on the optimizer to get adequate performance (as in: not 100x slower...).

Defense in Depth

While any of these techniques might be adequate some of the time, it's the combination that I think will make the Objective-Smalltalk tooling a refreshing, pleasant and highly productive alternative to existing toolchains, because it will be reliably fast under all circumstances.

And it doesn't really take (much) rocket science, just a willingness to make this aspect a priority.