
Monday, June 28, 2021

Generating ARM Assembly: First Steps

Finally took the plunge to start generating ARM64 assembly. As expected, the actual coding was much easier than overcoming the barrier to just start doing it.

The following snippet generates a program that prints a message to stdout, so a classic "Hello World":


#!env stsh
#-gen:msg

messageLabel ← 'message'.
main ← '_main'.

framework:ObjSTNative load.
arm := MPWARMAssemblyGenerator stream.
arm global:main;
    align:2;
    label:main;
    mov:0 value:1;
    adr:1 address:messageLabel;
    mov:2 value: msg length;
    mov:16 value:4;
    svc:128;
    mov:0 value:0;
    ret;
    label:messageLabel;
    asciiz:msg.

file:hello-main.s := arm target.

One little twist is that the message to print gets passed to the generator script as the msg parameter declared in the header. I like how Smalltalk's keyword syntax keeps the code uncluttered, and often pretty close to the actual assembly that will be generated.

Of particular help here is message cascading using the semicolon. This means I don't have to repeat the receiver of the message, but can just keep sending it messages. Cascading works well together with streams, because there are no return values to contend with, we just keep appending to the stream.
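
For comparison, without cascading every send would have to name the receiver again, roughly like this (a sketch of the equivalent non-cascaded beginning):


arm global:main.
arm align:2.
arm label:main.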

When invoked using ./genhello-main.st 'Hi Marcel, finish that blog post and get on your bike!', the generated code is as follows:


.global	_main
.align	2
_main:
	mov	X0, #1
	adr	X1, message
	mov	X2, #54
	mov	X16, #4
	svc	#128
	mov	X0, #0
	ret
message:
	.asciz	"Hi Marcel, finish that blog post and get on your bike!"

And now I am going to do what it says :-)

Tuesday, June 15, 2021

if let it be

One of the funkier aspects of Swift syntax is the if let statement. As far as I can tell, it exists pretty much exclusively to check that an optional variable actually does contain a value and, if it does, work with a no-longer-optional version of that variable.

Swift packages this functionality in a combination if statement and let declaration:


if let value = value {
   print("value is \(value)")
}

This has a bunch of problems that are explained nicely in a Swift Evolution thread (via Michael Tsai), together with some proposals to fix it. One of the issues is the idiomatic repetition of the variable name, because typically you do want the same variable, just with less optionality. Alas, code-completion apparently doesn't handle this well, so the temptation is to pick a non-descriptive variable name.

In my previous post (Asynchronous Sequences and Polymorphic Streams) I noted how the fact that iteration in Smalltalk and Objective-S is done via messages and blocks means that there is no separate concept of a "loop variable"; that is just an argument to the block.

Conditionals are handled the same way, with blocks and messages, but normally don't pass arguments to their argument blocks, because in normal conditionals those arguments would always just be the constants true or false. Not very interesting.
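
So an ordinary conditional in Objective-S reads like this (a sketch; as noted, the argument blocks take no arguments):


x > 0 ifTrue: { stdout println:'positive'. } ifFalse: { stdout println:'not positive'. }.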

When I added ifNotNil: some time ago, I used the same logic, but in this case the argument object actually is potentially interesting. So ifNotNil: now passes the now-known-to-be-non-nil value to the block, and it can be used as follows:


value ifNotNil:{ :value |
    stdout println:value.
}

This doesn't eliminate the duplication, but does avoid the issue of having the newly introduced variable name precede the original variable. Well, that and the whole weird if let in the first place.

With anonymous block arguments, we actually don't have to name the parameter at all:


value ifNotNil:{ stdout println:$0. }

Alternatively, we can just take advantage of some conveniences and use a HOM instead:


value ifNotNil printOn:stdout.

Of course, Objective-S currently doesn't care about optionality, and with the current nil-eating behavior the ifNotNil is not strictly necessary; you could just write it as follows:


value printOn:stdout.

I haven't really done much thinking about it yet, but the whole idea of optionality probably shouldn't be handled in the space of values, but in the space of references, which are first-class objects in Objective-S.

So you don't ask a value if it is nil or not, you ask the variable if it contains a value:


ref:value ifBound:{ :value | ... }

To me that makes a lot more sense than having every type be accompanied by an optional type.

So if we were to care about optionality in the future, we have the tools to create a sensible solution. And we can let if let just be.

Thursday, May 20, 2021

A far too simple (hardcoded) tasks backend in Objective-S

Recently there was a question as to what one should use to create a backend for an iOS/macOS app these days. I couldn't resist mentioning Objective-S, and just to check for myself whether that's feasible, I quickly jotted down the following tiny backend that returns a hardcoded list of tasks via HTTP:

#!env stsh
framework:ObjectiveHTTPD load.

class Task {
   var <bool> done.
   var title.
   -description { "Task: {this:title} done: {this:done}". }
}

taskList ← #( #Task{ #title: 'Clean my room', #done: false }, #Task{ #title: 'Check twitter feed', #done: true } ).

scheme todo {
   var taskList.
   /tasks { 
      |= { 
         this:taskList.
      }
   }
}.

todo := #todo{ #taskList: taskList }.
server := #MPWSchemeHttpServer{ #scheme: todo, #port: 8082 }.
server start.
shell runInteractiveLoop.

After loading the HTTP framework, we define a Task and a list of two example tasks. Then we define a scheme with a single path, just /tasks, which returns said tasks list. We then instantiate the scheme and serve it via HTTP on port 8082. Since this is a shell script and starting the server does not block, we finally start up the REPL.
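
Since the scheme is an ordinary object, it can presumably also be exercised in-process from that REPL; a sketch, assuming registration works the same way as for the sqlite scheme handler in the post below:


scheme:todo := todo.
stdout println: todo:tasks.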

Details such as coding the tasks as JSON, accessing a single task and modifying tasks are left as exercises for the reader.
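
As a starting point for one of those exercises, a property path for accessing a single task might look roughly like this (an untested sketch, following the /:table/:index pattern from the sqlite scheme handler below):


/tasks/:index {
   |= {
      this:taskList at:index.
   }
}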

Sunday, June 14, 2020

The Curious Case of Swift's Adoption of Smalltalk Keyword Syntax

I was really surprised to learn that Swift recently adopted Smalltalk keyword syntax: [Accepted] SE-0279: Multiple Trailing Closures. That is: a keyword terminated by a colon, followed by an argument and without any surrounding braces.

The mind boggles.

A little.

Of course, Swift wouldn't be Swift if this weren't a special case of a special case, specifically the case of multiple trailing closures, which is a special case of trailing closures, which are weird and special-casey enough by themselves. Below is an example:


UIView.animate(withDuration: 0.3) {
  self.view.alpha = 0
} completion: { _ in
  self.view.removeFromSuperview()
}

Note how the arguments to animate() would seem to terminate at the closing parenthesis, but that's actually not the case. The curly braces after the closing paren start a closure that is actually also an argument to the method, a so-called trailing closure. I have a little bit of sympathy for this construct, because closures inside of the parentheses look really, really awkward. (Of course, all params apart from a sole x inside f(x) look awkward, but let's not quibble. For now.)

Another thing this enables is methods that reasonably resemble control structures, which I heard is a really great idea.

The problem is that sometimes you have more than one closure argument, and then just stacking them up behind what appears to be end of the function/method call gets really, really awkward, and you can't tell which block is which argument, because the trailing closure doesn't get a keyword.

Well, now it does. And we now have 4 different method syntaxes in one!

  1. Traditional C/Pascal/C++/Java function call syntax x.f()
  2. The already weird-ish addition of Smalltalk/Objective-C keywords inside the f(x) syntax: f(arg:x)
  3. Original trailing-closure syntax, which is just its own thing, for the first closure
  4. Smalltalk non-bracketed keyword syntax for the 2nd and subsequent closures.
That is impressive, in a scary kind of way.
Swift is a crescendo of special cases stopping just short of the general; the result is complexity in the semantics, complexity in the behaviour (i.e. bugs), and complexity in use (i.e. workarounds).
I understand that this proposal was quite controversial, with heated discussion between opponents and proponents. I understand and sympathize with both sides. On the one hand, this is markedly better than the alternatives. On the other hand, it is a special case of a special case that is difficult to justify on top of all that is already there.

Special cases beget special cases beget special cases.

Of course the answer was always there: Smalltalk keyword syntax is not just the only reasonable solution in this case, it also solves all the other cases. It is the general solution. Here's how this could look in Objective-Smalltalk (which uses curly braces for closures instead of Smalltalk-80's square brackets):


UIView animate:{ self.view.alpha ← 0. } withDuration:0.3 completion:{ self view removeFromSuperview. }.

No special cases, every argument is labeled, no syntax mush of brackets inside parentheses etc. And yes, this also handles user-defined control structures, to:do: is just a method on NSNumber:


1 to:10 do:{:i | stdout println:"I will not introduce {i} special cases willy nilly.".}.

And since keywords naturally go between their arguments, there is no need for "operators" as a very different and special syntax form. You just allow some "binary" keywords to look a little different, so instead of 2 multiply:3 you can write 2 * 3. And when you have 2 raisedTo:3 instead of pow(2,3) (with the signature: func pow(_ x: Decimal, _ y: Int) -> Decimal), do you really need to go to the trouble of defining an "operator"?

Or Swift's a as b, another special kind of syntax. How about a as:b? (Yes I know there are details, but those are ... details.). And so on and so forth.

But of course, it's too late now. When I chose Smalltalk as the base syntax for the language that has turned into Objective-Smalltalk, it wasn't just because I like it or have gotten used to it via Objective-C. Smalltalk's syntax is surprisingly flexible and general; Smalltalk APIs look a lot like DSLs, without any of the tooling or other overheads.

And that's the frustrating part: this stuff was and is available and well-known. At least if you bother to look and/or ask. But instead, we just choose these things willy-nilly and everybody has to suffer the consequences.

UPDATE:

I guess what I am trying to get at is that if you'd thought things through just a little bit, you could have had almost the entire syntax of your language for the cost (complexity, implementation size and brittleness, cognitive load, etc.) of this one special case of a special case. And it would have been overall better to boot.

Saturday, December 14, 2019

The Four Stages of Objective-Smalltalk

One of the features that can be confusing about Objective-Smalltalk is that it actually has several parts that are each significant on their own, so people frequently focus on just one of these (which is fine!) without realising that the other parts also exist. That is unfortunate, as they are all valuable and complement each other. In fact, they can be described as stages that are (logically) built on top of each other.

1. WebScript 2 / "Shasta"

Objective-C has always had great integration with other languages, particularly with a plethora of scripting languages, from Tcl to Python and Ruby to Lisp and Scheme and their variants. This is due not just to the fact that the runtime is dynamic, but also that it is simple and C-based: not just implemented in C, but a peer to C.

However, all of these suffer from being two somewhat disparate languages, with competing object models, runtimes, storage strategies etc. One language that did not have these issues was WebScript, part of WebObjects and essentially Objective-C-Script. The language was interpreted, yet a true peer: you could even implement categories on existing Objective-C objects, and it was so syntactically compatible that often you could just copy-paste code between the two. So it was close to the ideal scripting language for that environment.

However, the fact that Objective-C is already a hybrid with some ugly compromises means that these compromises often no longer make sense at all in the WebScript environment. For example, Objective-C strings need an added "@" character because plain double quotes are already taken by C strings, but there are no C strings in WebScript. Primitive types like int can be declared, but are really objects; the declaration is a dummy, a NOP. Square brackets for message sends are needed in Objective-C to distinguish messages from the rest of the C syntax, but that's also irrelevant in WebScript. And so on.

So the first stage of Objective-Smalltalk was/is to have all the good aspects of WebScript, but without the syntactic weirdness that WebScript needed in order to match Objective-C, weirdness that Objective-C only has because it was jammed into C. I am not the only one who figured out the obvious fact that such a language is, essentially, a variant of Smalltalk, and I do believe this pretty much matches what Brent Simmons called Shasta.

Implementation-wise, this works very similarly to WebScript in that everything in the language is an object and gets converted to/from primitives when sending or receiving messages as needed.

This is great for a much more interactive programming model than what we have/had (and the one we have seems to be deteriorating as we speak).

And not just for isolated fragments, but for interacting with and tweaking full applications as they are running.

2. Objective-C without the C

Of course, getting rid of the (syntactic) weirdnesses of Objective-C in our scripting language means that it is no longer (syntactically) compatible with Objective-C. Which is a shame.

It is a shame because this syntactic equivalence between Objective-C and WebScript meant that you could easily move code between them. Have a script that has become stable and you want to reuse it? Copy and paste that code into an Objective-C file and you're good to go. Need it faster? Same. Have some Objective-C code that you want to explore, create variants of etc? Paste it into WebScript. Such a smooth integration between scripting and "programming" is rare and valuable.

The "obvious" solution is to have a native AOT-compiled version of this scripting language and use it to replace Objective-C. Many if not all other scripting languages have struggled mightily with becoming a compiled language, either not getting there at all or requiring JIT compilers of enormous size, complexity, engineering effort and attack surface.

Since the semantic model of our scripting language is just Objective-C, we know that we can AOT-compile this language with a fairly straightforward compiler, probably a lot simpler than even the C/Objective-C compilers currently used, and plugging into the existing toolchain. Which is nice.

The idea seems so obvious, but apparently it wasn't.

Everything so far would, taken together, make for a really nice replacement for Objective-C with a much more productive and, let's face it, fun developer experience. However, even given the advantages of a simpler language, smoothly integrated scripting/programming and instant builds, it's not really clear that yet another OO language is sufficient: for example, the Etoilé project and the eero language never went anywhere, despite both being very nice.

3. Beyond just Objects: Architecture Oriented Programming

Ever since my Diplomarbeit, Approaches to Composition and Refinement in Object-Oriented Design back in 1997, I've been interested in Software Architecture and Architecture Description Languages (ADLs) as a way of overcoming the problems we have when constructing larger pieces of software.

One thing I noticed very early is that the elements of an ADL closely match up with and generalise the elements of a programming language, for example an object-oriented language: object generalises to component, message to connector. So it seemed that any specific programming language is just a specialisation or instantiation of a more general "architecture language".

To explore this idea, I needed a language that was amenable to experimentation, by being both malleable enough as to allow a metasystem that can abstract away from objects and messages and simple/small enough to make experimentation feasible. A simple variant of Smalltalk would do the trick. More mature variants tend to push you towards building with what is there, rather than abstracting from it, they "...eat their young" (Alan Kay).

So Objective-Smalltalk fits the bill perfectly as a substrate for architecture-oriented programming. In fact, its being built on/with Objective-C, which came into being largely to connect the C/Unix world with the Smalltalk world, means it is already off to a good start.

What to build? How about not reinventing the wheel and simply picking the (arguably) 3 most successful/popular architectural styles:

  • OO (subsuming the other call/return styles)
  • Unix Pipes and Filters
  • REST
Again, surprisingly, at least to me, even these specific styles appear to align reasonably well with the elements we have in a programming language. OO is already well-developed in (Objective-)Smalltalk, dataflow maps to Smalltalk's assignment operator, which needed to be made polymorphic anyway, and REST at least partially maps to non-message identifiers, which also are not polymorphic in Smalltalk.

Having now built all of these abstractions into Objective-Smalltalk, I have to admit again to my surprise how well they work and work together. Yes, it was my thesis, and yes, I can now see confirmation bias everywhere, but it was also a bit of a long-shot.

4. Architecture Oriented Metaprogramming

The architectural styles described above are implemented in frameworks, and their interfaces are hard-coded into the language implementation. However, with three examples, it should now be feasible to create linguistic support for defining the architectural styles in the language itself, allowing users to define and refine their own architectural styles. This is ongoing work.

What now?

One of the key takeaways from this is that each stage is already quite useful, and probably a worthy project all by itself; it just gets Even Better™ with the addition of later stages. Another is that I need to get back to getting stage 2 ready, as it wasn't actually needed for stage 3, at least not initially.

Monday, November 11, 2019

What Alan Kay Got Wrong About Objects

One of the anonymous reviewers of my recently published Storage Combinators paper (pdf) complained that hiding disk-based, remote, and local abstractions behind a common interface was a bad idea, citing Jim Waldo's A Note on Distributed Computing.

Having read both this and the related 8 Fallacies of Distributed Computing a while back, I didn't see how this would apply, and re-reading confirmed my vague recollections: these are about the problems of scaling things up from the local case to the distributed case, whereas Storage Combinators and In-Process REST are about scaling things down from the distributed case to the local case. Particularly the Waldo paper is also very specifically about objects and messages, REST is a different beast.

And of course scaling things down happens to be a time-honored tradition with a pretty good track record:

In computer terms, Smalltalk is a recursion on the notion of computer itself. Instead of dividing "computer stuff" into things each less strong than the whole—like data structures, procedures, and functions which are the usual paraphernalia of programming languages—each Smalltalk object is a recursion on the entire possibilities of the computer. Thus its semantics are a bit like having thousands and thousands of computers all hooked together by a very fast network.
Mind you, I think this is absolutely brilliant: in order to get something that will scale up, you simply start with something large and then scale it down!

But of course, this actually did not happen. As we all experienced, scaling local objects and messaging up to the distributed case did not work (CORBA, SOAP, ...), and, as Waldo explains, cannot, in fact, work. What gives?

My guess is that the method described wasn't actually used: when Alan came up with his version of objects, there were no networks with thousands of computers. And so Alan could not actually look at how they communicated, he had to imagine it, it was a Gedankenexperiment. And thus objects and messages were not a scaled-down version of an actual larger thing, they were a scaled down version of an imagined larger thing.

Today, we do have a large network of computers, with not just thousands but billions of nodes. And they communicate via HTTP using the REST architectural style, not via distributed objects and messages.

So maybe if we took that communication model and scaled it down, we might be able to do even better than objects and messages, which already did pretty brilliantly. Hence In-Process REST, Polymorphic Identifiers and Storage Combinators, and yes, the results look pretty good so far!

The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas.

So of course Alan is right after all, just not about objects and messages, which are too specific: "ma", or "interstitialness", or "connector" is the big idea; messaging is just one incarnation of that idea.

Thursday, November 7, 2019

Instant Builds

One of the goals I am aiming for in Objective-Smalltalk is instant builds and effective live programming.

A month ago, I got a package from an old school friend: my old Apple ][+, which I thought I had given as a gift, but he insisted had been a long-term loan. That machine featured 48KB of DRAM and a 1 MHz, 8 bit 6502 processor that took multiple cycles for even the simplest instructions, had no multiply instructions and almost no registers. Yet, when I turn it on it becomes interactive faster than the CRT warms up, and the programming experience remains fully interactive after that. I type something in, it executes. I change the program, type "RUN" and off it goes.

Of course, you can also get that experience with more complex systems, Smalltalk comes to mind, but the point is that it doesn't take the most advanced technology or heroic effort to make systems interactive, what it takes is making it a priority.


But here we are indeed.

Now Swift is only one example of this, it's a current trend, and of course these systems do claim that they provide benefits that are worth the wait, from optimizations to static type-checking with type-inference, so that "once it compiles, it works". This is deemed to be (a) 100% worthwhile, despite the fact that there is no scientific evidence backing up these claims (a paper which claimed that it had the evidence was just shredded at this year's OOPSLA), and (b) essentially cost-free. But of course it isn't cost-free.

So when everyone zigs, I zag, it's my contrarian nature. Where Swift's message was, essentially "there is too much Smalltalk in Objective-C", my contention is that there is too little Smalltalk in Objective-C (and also that there is too little "Objective" in Smalltalk, but that's a different topic).

Smalltalk was perfectly interactive in its own environment on high end late 70s and early 80s hardware. With today's monsters of computation, there is no good reason, or excuse for that matter, to not be interactive even when taken into the slightly more demanding Unix/macOS/iOS development world. That doesn't mean there aren't loads of reasons, they're just not any good.

So Objective-Smalltalk will be fast, it will be live or near-live at all times, and it will have instant builds. This isn't going to be rocket science, mostly, the ingredients are as follows:

  1. An interpreter
  2. Late binding
  3. Separate compilation
  4. A fast and simple native compiler
Let's look at these in detail.

An interpreter

The basic implementation of Objective-Smalltalk is an AST-walking interpreter. No JIT, not even a simple bytecode interpreter. Which is about as pessimal as possible, but our machines are so incredibly fast, and a lot of our tasks are either simple enough or close enough to computational steering, that it actually does a decent job for many of those tasks. (For more on this dynamic, see The Death of Optimizing Compilers by Daniel J. Bernstein.)

And because it is just an interpreter, it has no problems doing its thing on iOS:

(Yes, this is in the simulator, but it works the same on an actual device)

Late Binding

Late binding nicely decouples the parts of our software. This means that the compiler has very little information about what happens and can't help a lot in terms of optimization or checking, something that always drove the compiler folks a little nuts ("but we want to help and there's so much we could do"). It enables strong modularity and separate compilation. Objective-Smalltalk is as late-bound in its messaging as Objective-C or Smalltalk are, but goes beyond them by also late-binding identifiers, storage and dataflow with Polymorphic Identifiers (ACM, pdf), Storage Combinators (ACM, pdf) and Polymorphic Write Streams (ACM, pdf).

Allowing this level of flexibility while still not requiring a Graal-level Helden-JIT to burn away all the abstractions at runtime will require careful design of the meta-level boundaries, but I think the technically desirable boundaries align very well with the conceptually desirable boundaries: use meta-level facilities to define the language you want to program in, then write your program.

It is failing to make these boundaries clear, and freely mixing meta-level and base-level programming, that gets us not just into conceptual trouble, but also into the kinds of technical trouble that the Heldencompilers and Helden-JITs have to bail us out of.

Separate Compilation

When you have good module boundaries, you can get separate compilation, meaning a change in one file (or other code-containing entity, if you don't like files) does not require recompiling other files. Smalltalk had this. Unix-style C programming had this, along with the concept of binary libraries (generalised to frameworks on macOS etc.). For some reason, this has taken more and more of a back-seat in macOS and iOS development, with full source inclusion and full builds becoming the norm in the community (see CocoaPods) and for a long time being enforced by Apple by not allowing user-defined dynamic libraries on iOS.

While Swift allows separate compilation, this can have such severe negative effects on both performance and compile times that compiling everything on any change has become a "best practice". In fact, we now have a build option "whole module optimization with optimizations turned off" for debugging. I kid you not.

Objective-Smalltalk is designed to enable "Framework-oriented-programming", so separate compilation is and will remain a top priority.

A fast and simple native compiler

However, even with an interpreter for interactive adjustments, and separate compilation due to good modularity and late binding, you sometimes want to do a full build, or need to rebuild a large part of the codebase.

Even that shouldn't take forever, and in fact it doesn't need to. I am totally with Jonathan Blow on this subject when he says that compiling a medium size project shouldn't really take more than a second or so.

My current approach for getting there is using TinyCC's backend as the starting point of the backend for Objective-Smalltalk. After all, the semantics are (mostly) Objective-C, and Objective-C's semantics are just C. What I really like about tcc is that it goes so brutally directly to outputting CPU opcodes as binary bytes.


static void gcall_or_jmp(int is_jmp)
{
    int r;
    if ((vtop->r & (VT_VALMASK | VT_LVAL)) == VT_CONST &&
	((vtop->r & VT_SYM) && (vtop->c.i-4) == (int)(vtop->c.i-4))) {
        /* constant symbolic case -> simple relocation */
        greloca(cur_text_section, vtop->sym, ind + 1, R_X86_64_PLT32, (int)(vtop->c.i-4));
        oad(0xe8 + is_jmp, 0); /* call/jmp im */
    } else {
        /* otherwise, indirect call */
        r = TREG_R11;
        load(r, vtop);
        o(0x41); /* REX */
        o(0xff); /* call/jmp *r */
        o(0xd0 + REG_VALUE(r) + (is_jmp << 4));
    }
}

No layers of malloc()ed intermediate representations here! This aligns very nicely with the streaming/messaging approach to high performance I've taken elsewhere with Polymorphic Write Streams (see above), so I am pretty confident I can make this (a) work and (b) simple/elegant while keeping it (c) fast.

How fast? I obviously don't know yet, but tcc is a fantastic starting point. The following is the current (=wrong) ObjectiveTcc code to drive tcc to build a function that sends a single message:


-(void)generateMessageSendTestFunctionWithName:(char*)name
{
    SEL flagMsg=@selector(setMsgFlag);
    [self functionOnlyWithName:name returnType:VT_INT argTypes:"" body:^{
        [self pushFunctionPointer:objc_msgSend];
        [self pushObject:self];
        [self pushPointer:flagMsg];
        [self call:2];
    }];
}

How often can I do this in one second? On my 2018 high spec but 13" MBP: 300,000 times. That includes in-memory linking (though not much of that is happening in this example); it does not include Mach-O generation, as that's not implemented yet, or writing the whole shebang to disk. I don't anticipate either of these taking appreciably additional time.

If we consider this 2 "lines" of code, one for the function/method header and one for the message, then we can generate binary for 600KLOC/s. So having a medium size program compile and link in about a second or so seems eminently doable, even if I manage to slow the raw Tcc performance down by about an order of magnitude.

(For comparison: the Swift code base that motivated the Rome caching system for Carthage was clocking in at around 60 lines per second with the then Swift compiler. So even with an anticipated order of magnitude slowdown we'd still be 1000x faster. 1000x is good enough, it's the difference between 3 seconds and an hour.)

What's the downside? Tcc doesn't do a lot of optimization. But that's OK, as (a) the sorts of optimizations C compilers and backends like LLVM do aren't much use for highly polymorphic and late-bound code, (b) the basics get you around 80% of the way, (c) most code doesn't need that much optimization (see above), and (d) machines have become really fast.

And it helps that we aren't doing crazy things like initially allocating function-local variables on the heap or doing function argument copying via vtables, things that require leaning on the optimizer to get adequate performance (as in: not 100x slower...).

Defense in Depth

While any of these techniques might be adequate some of the time, it's the combination that I think will make the Objective-Smalltalk tooling a refreshing, pleasant and highly productive alternative to existing toolchains, because it will be reliably fast under all circumstances.

And it doesn't really take (much) rocket science, just a willingness to make this aspect a priority.

Wednesday, April 3, 2019

Accessors Have Message Obsession

Just came across an older post by Nat Pryce on Message Obsession, which he describes as the opposite end of a spectrum from Primitive Obsession.

The example is a set of commands for moving a robot:


-moveNorth.
-moveSouth.
-moveWest.
-moveEast.

Although the duplication is annoying, the bigger problem is that there are two things, the verb "move" and a direction argument, mushed together into the message name. And that can cause further problems down the road:
"It’s awkward to work with this kind of interface because you can’t pass around, store or perform calculations on the direction at all."

He argues, convincingly IMHO, that the combined messages should be replaced by a single move: message with a separate direction argument. The current fashion would be to make direction an enum, but he (wisely, IMHO) turns it into a class that can encode different directions:
-move:direction.

class Direction {
  ...
}

So far so good. However...

...we have this message obsession at a massively larger scale with accessors.


-attribute.
-setAttribute:newValue.

Every single attribute of every single class gets its own accessor or accessor pair, again with the action (get/set) mushed together with the name of the attribute to work on. The solution is the same as for the directions in Nat's example: there are only two actual messages, with reified identifiers.
-get:identifier.
-set:identifier to:value.

These, of course, correspond to the GET and PUT HTTP verbs. Properties, now available in a number of mainstream languages, are supposed to address this issue, but they only really address the 2:1 problem (getter and setter for an attribute). The much bigger N:2 problem (a method pair for every attribute) remains unaddressed, and in particular you still cannot pass around, store or perform calculations on the identifier.

And it turns out that passing those identifiers around and performing calculations on them is tremendously powerful, even if you don't have language support. Without language support, the interface between the world of reified identifiers and objects can be a bit awkward.
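
In Objective-Smalltalk itself, Polymorphic Identifiers provide exactly that language support: ref:task/done reifies the identifier, after which it can be stored, passed around and computed with. A sketch, where the task/done path and the value/setValue: messages on the reference are assumptions for illustration, not confirmed API:


doneRef ← ref:task/done.
doneRef value.
doneRef setValue:true.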

Friday, February 8, 2019

A small (and objective) taste of Objective-Smalltalk

I've been making good progress on Objective-Smalltalk recently. Apart from the port to GNUstep that allowed me to run the official site on it (shrugging off the HN hug of death in the process), I've also been concentrating on not just pushing the concepts further, but also on adding some of the more mundane bits that are just needed for a programming language.

And so my programs have been getting longer and more useful, and I am starting to actually see the effect of "I'd rather write this in Objective-Smalltalk than Objective-C". And with that, I thought I'd share one of these slightly larger examples, show how it works, what's cool and possibly a bit weird, and where work is still needed (lots!).

The code I am showing is a script that implements a generic scheme handler for sqlite databases and then uses that scheme handler to access the database given on the command line. When you run it, in this case with the sample database Chinook.db, it allows you to interact with the database via URIs in the db scheme. For example, db:. lists the available tables:


> db:. 
( "albums","sqlite_sequence","artists","customers","employees","genres","invoices",
 "invoice_items","media_types","playlists","playlist_track","tracks","sqlite_stat1") 

You can then access entire tables, for example db:albums would show all the albums, or you can access a specific album:
> db:albums/3
{
    "AlbumId" = 4;
    "Title" = "Let There Be Rock";
    "ArtistId" = 1;
}

With that short intro and without much further ado, here's the code:


#!/usr/local/bin/stsh
#-sqlite:<ref>dbref

framework:FMDB load.


class ColumnInfo {
  var name.
  var type.
  -description {
      "Column: {var:self/name} type: {var:self/type}".
  }
}

class TableInfo  {
  var name.
  var columns.
  -description {
    cd := self columns description.
    "Table {var:self/name} columns: {cd}".
  }
}

class SQLiteScheme : MPWScheme {
  var db.

  -initWithPath: dbPath {
     self setDb:(FMDatabase databaseWithPath:dbPath).
     self db open.
     self.
  }

  -dictionariesForResultSet:resultSet
  {
    results := NSMutableArray array.
    { resultSet next } whileTrue: { results addObject:resultSet resultDictionary. }.
    results.
  }

  -dictionariesForQuery:query {
     self dictionariesForResultSet:(self db executeQuery:query).
  }

  /. { 
     |= {
       resultSet := self dictionariesForQuery: 'select name from sqlite_master where [type] = "table" '.
       resultSet collect at:'name'.
     }
  }

  /:table/count { 
     |= { self dictionariesForQuery: "select count(*) from {table}" | firstObject | at:'count(*)'. }
  }

  /:table/:index { 
     |= { self dictionariesForQuery: "select * from {table}" | at: index. }
  }

  /:table { 
     |= { self dictionariesForQuery: "select * from {table}". }
  }

  /:table/:column/:index { 
     |= { self dictionariesForQuery: "select * from {table}" | at: index.  }
  }

  /:table/where/:column/:value { 
     |= { self dictionariesForQuery: "select * from {table} where {column} = {value}".  }
  }

  /:table/column/:column { 
     |= { self dictionariesForQuery: "select {column} from {table}"| collect | at:column. } 
  }

  /schema/:table {
     |= {
        resultSet := self dictionariesForQuery: "PRAGMA table_info({table})".
	    columns := resultSet collect: { :colDict | 
            #ColumnInfo{
				#'name': (colDict at:'name') ,
				#'type': (colDict at:'type')
			}.
        }.
        #TableInfo{ #'name': table, #'columns': columns }.
     }
  } 

  -tables {
	 db:. collect: { :table| db:schema/{table}. }.
  }
  -<void>logTables {
     stdout do println: scheme:db tables each.	
  }
}

extension NSObject {
  -initWithDictionary:aDict {
    aDict allKeys do:{ :key |
      self setValue: (aDict at:key) forKey:key.
    }.
    self.
  }
}


scheme:db := SQLiteScheme alloc initWithPath: dbref path.
stdout println: db:schema/artists.
shell runInteractiveLoop.


Let's walk through the code in detail, starting with the header:
#!/usr/local/bin/stsh
#-sqlite:<ref>dbref

This is a normal Unix shell script invoking stsh, the Smalltalk Shell. The Smalltalk Shell is a bigger topic for another day, but for now let's focus on the second line, which looks like a method declaration, and that's exactly what it is! In order to ease the transition between small scripts and larger systems (successful scripts tend to get larger, and successful large systems evolve from successful small systems), scripts have a dual nature, being at the same time callable from the Unix command line and also usable as a method (or filter) from a program.

Since this script is interactive, that part is not actually that important, but a nice side effect is that the declaration of a parameter gets us automatic command-line parameter parsing, conversion, and error checking. Specifically, stsh knows that the script takes a single parameter of type <ref> (a reference, so a filename or URL) and will put that in the dbref variable as a reference. If the script is invoked without that parameter, it will exit with an error message, all without any further work by the script author. These declarations are optional, without them parameters will go into an args array without further interpretation.

Next up, we load a dependency, Gus Mueller's wonderful FMDB wrapper for SQLite.


framework:FMDB load.

The framework scheme looks for frameworks on the default framework path, and the load message is sent to the NSBundle that is returned.
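
In Foundation terms this is roughly equivalent to the following sketch (the actual search along the framework path happens inside the scheme handler, and the path shown here is purely illustrative):


(NSBundle bundleWithPath:'/Library/Frameworks/FMDB.framework') load.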

The next bit is fairly straightforward, defining the ColumnInfo class with two instance variables, name and type, and a -description method.


class ColumnInfo {
  var name.
  var type.
  -description {
      "Column: {var:self/name} type: {var:self/type}".
  }
}

Again, this is very straightforward, with maybe the missing superclass specification being slightly unusual. Different constructs may have different implicit superclasses, for class it is assumed to be NSObject. The description method, introduced by "-" just like in Objective-C, uses string interpolation with curly braces. (I currently need to use fully qualified names like var:self/name to access instance variables, that should be fixed in the near future). It also doesn't have a return statement or the like, a method return can be specified by just writing out the return value.

To me, this has the great effect of putting the focus on the essential "this is the description" rather than on the incidental, procedural "this is how you build the description". It is obviously only a very small instance of this shift, but I think even this small example speaks to what that shift can look like in the large.

The way instance variables are defined is far from being done, but for now the var syntax does the job. The TableInfo class follows the same pattern as ColumnInfo, and of course these two classes are used to represent the metadata of the database.

So on to the main attraction, the scheme-handler itself, which is just a plain old class inheriting from MPWScheme, with an instance variable and an initialisation method:


class SQLiteScheme : MPWScheme {
  var db.

  -initWithPath: dbPath {
     self setDb:(FMDatabase databaseWithPath:dbPath).
     self db open.
     self.
  }

Having advanced language features largely defined as/by plain old classes goes back to the need for a stable starting point. However, it has turned out to be a little bit more than that, because the mapping to classes is not just the trivial one of "since this is written in an OO language, obviously the implementation of features is somehow done with classes". Instead, the classes map onto the language features very much in an Open Implementation kind of way, except that in this case it is Open Language Implementation.

That means that unlike a typical MOP, the classes actually make sense outside the specific language implementation, making their features usable from other languages, albeit less conveniently. Being easily accessible from other languages is important for an architectural language.

With this mapping, a very narrow set of syntactic language mechanisms can be used to map a large and extensible (thus infinite) set of semantic features into the language. This is of course similar to language features like procedures, methods and classes, but is expanded to things that usually haven't been as extensible.

The next two methods handle the interface to FMDB, they are mostly straightforward and, I think, understandable to both Smalltalk and Objective-C programmers without much explanation.


  -dictionariesForResultSet:resultSet
  {
    results := NSMutableArray array.
    { resultSet next } whileTrue: { results addObject:resultSet resultDictionary. }.
    results.
  }

  -dictionariesForQuery:query {
     self dictionariesForResultSet:(self db executeQuery:query).
  }

Smalltalk programmers may balk a little at the use of curly braces rather than square brackets to denote blocks. To me, this is a very easy concession to "the mainstream"; I have bigger fish to fry. To Objective-C programmers, the fact that the condition of the while-loop is implemented as a message sent to a block rather than as syntax might take a little getting used to, but I don't think it presents any fundamental difficulties.

Next up we have some property path definitions, the meat of the scheme handler. Each property path lets you define code that will run for a specific subset of the scheme's namespace, with the subset defined by the property path's URI pattern. As the name implies, property paths can be regarded as a generalisation of Objective-C properties, extended to handle both entire sets of properties, sub-paths and the combination of both.


  /. { 
     |= {
       resultSet := self dictionariesForQuery: 'select name from sqlite_master where [type] = "table" '.
       resultSet collect at:'name'.
     }
  }

The first property path definition is fairly straightforward as it only applies to a single path, the period (so the db:. example from above). Property path definitions start with the forward slash ("/"), similar to the way that instance methods start with "-" and class methods with "+" in Objective-C (and Objective-Smalltalk). The slash seemed natural to indicate the definition of paths/URIs.

Like C# or Swift property definitions, you need to be able to deal with (at least) "get" and/or "set" access to a property. I really dislike having random keywords like "get" or "set" for program structure; I prefer to see names reserved for things that have domain meaning. So instead of keywords, I am using constraint connector syntax: "|=" means the left hand side is constrained to be the same as the right hand side (aka "get"). "=|" means the right hand side is constrained to be the same as the left hand side (aka "set"). The idea is that the "left hand side" in this case is the interface, the outside of the object/scheme handler, whereas the "right hand side" is the inside of the object, with properties mediating between the outside and the inside of the object.

Like most everything here, this is currently experimental, but so far I like it more than I expected to, and again, it shifts us away from being action oriented to describing relationships. For example, delegating both get and set to some other object could then be described by using the bidirectional constraint connector: /myProperty =|= var:delegate/otherProperty.

Getting the result set is a straightforward message-send with the SQL query as a constant, non-interpolated string (single quotes; double quotes are for interpolation). We then need to extract the name of the table from the returned dictionaries, which we do via the collect HOM and the Smalltalk-y -at: message, which in this case maps to Foundation's -objectForKey:. (The collect HOM makes resultSet collect at:'name' shorthand for the block-based resultSet collect: { :row | row at:'name'. }.)

The next property paths map URIs to queries on the tables. Unlike the previous example, which had a constant, single-element path and so was largely equivalent to a classic property, these all have variable path elements, multiple path segments or both.


  /:table/count { 
     |= { self dictionariesForQuery: "select count(*) from {table}" | firstObject | at:'count(*)'. }
  }

  /:table/:index { 
     |= { self dictionariesForQuery: "select * from {table}" | at: index. }
  }

  /:table { 
     |= { self dictionariesForQuery: "select * from {table}". }
  }

Starting at the back, /:table returns the data from the entire table specified in the URI using the parameter :table. The leading colon means that this path segment is a parameter that will match any single string and deliver it to the method under the parameter name used, in this case "table". Wildcards are also possible.

Yes, the SQL query is performed using simple string interpolation without any sanitisation. DON'T DO THAT. At least not in production code, where you'd want placeholder-based queries (FMDB has executeQuery:withArgumentsInArray:) for values, plus a whitelist for table names. For experimenting in an isolated setting it's OK.

The second query retrieves a specific row of the table specified. The pipe "operator" is for method chaining with keyword syntax without having to bracket like crazy:


self dictionariesForQuery: "select count(*) from {table}" | firstObject | at:'count(*)'
((self dictionariesForQuery: "select count(*) from {table}") firstObject) at:'count(*)'

I find the "pipe" version to be much easier to both just visually scan and to understand, because it replaces nested (recursive) evaluation with linear piping. And yes, it is at least partly a by-product of integrating pipes/filters, which is a part of the larger goal of smoothly integrating multiple architectural styles. That this integration would lead to an improvement in the procedural part was an unexpected but welcome side effect.

The first property path, /:table/count, returns the size of the given table, using the optimised SQL code select count(*). This shows an interesting effect of property paths. In a standard ORM, getting the count of a table might look something like this: db.artists.count. Implemented naively, this code retrieves the entire "artists" table, converts that to an array and then counts the array, which is incredibly inefficient. And of course, this was/is a real problem of many ORMs, not just a made-up example.

The reason it is a real problem is that it isn't trivial to solve, due to the fact that OOPLs are not structurally polymorphic. If I have something like db.artists.count, there has to be some intermediate object returned by artists so I can send it the count message. The natural thing for that is the artists table, but that is inefficient. I can of course solve this by returning some clever proxy that doesn't actually materialise the table unless it has to, or I can have count handled completely separately, but neither of these solutions are straightforward, which is why this has traditionally been a problem.

With property paths, the problem just goes away, because any scheme handler (or object) has control over its sub-structure to an arbitrary depth.

Queries are handled in a similar manner, so db:albums/where/ArtistId/6 retrieves the two albums by the band Apocalyptica. This is obviously very manual; for any specific database you'd probably want to specialise this generic scheme handler to give you named relationships and also to return actual objects, rather than just dictionaries. A step in that direction is the /schema/:table property path:


  /schema/:table {
     |= {
        resultSet := self dictionariesForQuery: "PRAGMA table_info({table})".
	    columns := resultSet collect: { :colDict | 
            #ColumnInfo{
				#'name': (colDict at:'name') ,
				#'type': (colDict at:'type')
			}.
        }.
        #TableInfo{ #'name': table, #'columns': columns }.
     }
  } 

This property path returns the SQL schema in terms of the objects we defined at the top. First is a simple query of the SQLite table that holds the schema information, returning an array of dictionaries. These individual dictionaries are then converted to ColumnInfo objects using object literals.

Similar to defining the -description method above as simply the parametrized string literal instead of as instructions to build the result string, object literals allow us to simply write down general objects instead of constructing them. The example inside the collect defines a ColumnInfo object literal with the name and type attributes set from the column dictionary retrieved from the database.

Similarly, the final TableInfo is defined by its name and the column info objects. Object literals are a fairly trivial extension of Objective-Smalltalk dictionary literals, #{ #'key': value }, with a class-name specified between the "#" and the opening curly brace. Being able to just write down objects is, I think, one of the better and under-appreciated features of WebObjects .wod files (though it's not 100% the same thing), as well as QML and I think also part of what makes React "declarative".
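
For comparison, here is a plain dictionary literal next to the corresponding object literal, reusing the ColumnInfo class from above (a sketch; the values are illustrative):


#{ #'name': 'Title', #'type': 'NVARCHAR(160)' }.
#ColumnInfo{ #'name': 'Title', #'type': 'NVARCHAR(160)' }.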

Not entirely coincidentally, the "configurations" of architectural description languages can also be viewed as literal object definitions.

With that information in hand, and with the Objective-Smalltalk runtime providing class definition objects that can be used to install objects with their methods in the runtime, we now have enough to create some classes straight from the SQL table definitions, without going through the intermediate steps of generating source code, adding that to a project and compiling it.

That isn't implemented, and it's also something that you don't really want, but it's a stepping stone towards creating a general mechanism for orthogonal modular persistence. The final two utility methods are not really all that noteworthy, except that they do show how expressive and yet straightforward even ordinary Objective-Smalltalk code is.


  -tables {
	 db:. collect: { :table| db:schema/{table}. }.
  }
  -<void>logTables {
     stdout do println: scheme:db tables each.	
  }

The -tables method just gets all the schema information for all the tables. The -logTables method prints all the tables to stdout, but individually, not as an array. Finally, there is a class extension to NSObject that supports the literal syntax on all objects, and the script code that actually initialises the scheme with a database and starts an interactive session. This last feature has also been useful in Smalltalk scripting: creating specialized shells that configure themselves and then run the normal interactive prompt.

So that's it!

It's not a huge revelation, yet, but I do hope this example gives at least a tiny glimpse of what Objective-Smalltalk already is and of what it is poised to become. There is a lot that I couldn't cover here, for example that scheme-handlers aren't primarily for interactive exploration, but for composition. I also only mentioned pipes-and-filters in passing, and of course there "is" a lot more that just "isn't" there, quite yet.

As always, but even more than usual, I would love to get feedback! And of course the code is on github.

Wednesday, February 6, 2019

Why Objective-Smalltalk is (not) a Smalltalk

One of the main goals of Objective-Smalltalk is to make it possible to express programs in mixtures of diverse architectural styles, and variations of those styles, naturally and comprehensibly. That needs a mechanism that abstracts from any specific style, while at the same time accommodating all, or at least a wide variety of styles.

I don't know how to do that.

Yet.

A Smalltalk

Since I don't know how to do that yet, I can't build what I need, and since I can't build what I need I can't experiment with it and learn how to do what I need to do.

Instead of waiting for divine inspiration, designing something from thin air (that will obviously be perfect) or just throwing my hands up in despair, I need to pick some starting point from which I can start iterating. In terms of styles, there are a few to choose from, for example call-return, pipes-and-filters and REST. Call-return seems like an obvious choice, because it is something that is familiar, has widespread support and is very capable of implementing other styles.

Objective-C would have been a pleasant choice: I am very familiar with it, and it is very suitable for building architectural and language adapters. This suitability is not an accident either: connectivity was the primary design goal, and the designers succeeded admirably given the constraints.

However, Objective-C also has significant drawbacks as a starting point: having already merged two languages, Smalltalk and C, it is rather large and unwieldy, with weird overlaps and other oddities. Being a superset of C, it also pretty much demands being compiled, whereas I want at least the option of interpretation.

WebScript would be an alternative, but it is not just dead, but also proprietary. It also very closely mimics the somewhat baroque Objective-C syntax, but without the actual reasons for those syntactic compromises or the benefits that this sacrifice brings in Objective-C.

These and most other existing language choices would mean extending an existing language, grammar and implementation, meaning that the architectural features of the language would almost certainly be 2nd class citizens, tacked on wherever they can fit. That's not a good starting point (see: Objective-C).

So it really had to be a brand new language, one whose syntax and semantics would be under full control. That still left the need for a syntactic starting point for the procedural part of the language, and for this I don't know of a better choice than the Smalltalk (keyword) message syntax. The syntax itself is tiny, with almost no reserved words or special characters claimed by the language, and almost all "language" features implemented as objects and messages without special syntax.

Having the procedural base as trivial as possible to implement is important, because I don't want to spend too much effort on it. I have (much) bigger fish to fry. Smalltalk also integrates well conceptually and syntactically with Cocoa and Objective-C, my preferred environment, a point not lost on the plethora of Smalltalk-based scripting languages available for Cocoa and GNUstep. And having a rich environment to integrate with is important for a language intended to connect existing pieces, which is what an architectural language does, or should do.

Not a Smalltalk

So given all the arguments for Smalltalk, surely Objective-Smalltalk is based on one of the existing Smalltalks, such as Squeak, enjoying the great development environment, malleable infrastructure etc.

Not so fast.

Although Smalltalk fits very well conceptually and syntactically, it doesn't fit at all well technically. It generally runs in its own world, the image, requires a complex and sophisticated VM, with all the integration headaches that entails (FFI, JNI, etc.), and comes with and requires a GC. Integrating multiple tracing GCs is essentially an unsolved problem, so that's a bit of a downer for a language that wants to be able to glue existing pieces together.

The philosophy of current Smalltalks is the exact opposite: it wants to be the entire world. This is understandable: when Smalltalk was created there simply wasn't a "rest of the world" to talk to, and there certainly wasn't room for it on the same machine, an Alto with 128-512KB of RAM and a 2.5MB removable hard disk.

A current Smalltalk has a lot of code, and communities that think it's just the greatest thing on earth, so resistance to change is significant and reasonable, both to smaller changes, because they just aren't worth it in that context, and to larger changes because they make things just too different. However, I want to make both large and small changes and not be hobbled by linguistic backwards compatibility.

[..] when ST hit the larger world, it was pretty much taken as "something just to be learned", as though it were Pascal or Algol. Smalltalk-80 never really was mutated into the next better versions of OOP. Given the current low state of programming in general, I think this is a real mistake.
Alan Kay: prototypes vs classes was: Re: Sun's HotSpot

So Objective-Smalltalk takes cues from Smalltalk where this is helpful, but it is not really a new version of Smalltalk. It's not a "better old thing" (>45 years), but a (probably worse) "new thing", and for that reason it has to strike out on its own.

Wednesday, October 7, 2015

Jitterdämmerung

So, Windows 10 has just been released, and with it the Ahead Of Time (AOT) compilation feature .NET Native. Google also recently introduced ART for Android, and I just discovered that Oracle is planning an AOT compiler for mainstream Java.

With Apple doggedly sticking to Ahead of Time compilation for Objective-C and now their new Swift, JavaScript is pretty much the last mainstream hold-out for JIT technology. And even in JavaScript, the state of the art for achieving maximum performance appears to be asm.js, which largely eschews JIT techniques: it acts as object code in the browser, represented in JavaScript, for other languages to be AOT-compiled into.

I think this shift away from JITs is not a fluke but was inevitable; in fact, the big question is why it has taken so long (probably industry inertia). The benefits were always less than advertised, the costs higher than anticipated. More importantly, the inherent performance characteristics of JIT compilers don't match up well with most real-world systems, and the shift to mobile has only made that discrepancy worse. Although JITs are not going to go away completely, they are fading into the sunset of a well-deserved retirement.

Advantages of JITs less than promised

I remember reading the issue of the IBM Systems Journal on Java Technology, back in 2000 I think. It had a bunch of research articles describing super amazing VM technology with world-beating performance numbers. It also had a single real-world report, from IBM's San Francisco project. In the real world, it turned out, performance was a bit more "mixed", as they say. In other words: it was terrible, and they had to do an incredible amount of work for the system to be even remotely usable.

There was also the experience of the New Typesetting System (NTS), a rewrite of TeX in Java. Performance was atrocious, the team took it with humor and chose a snail as their logo.

One of the reasons for this less-than-stellar performance was that JITs were invented for highly dynamic languages such as Smalltalk and Self. In fact, the Java HotSpot VM can be traced in a direct line back to Self via the Strongtalk system, whose creator, Animorphic Systems, was purchased by Sun in order to acquire the VM technology.

However, it turns out that one of the biggest benefits of JIT compilers in dynamic languages is figuring out the actual types of variables. This is theoretically intractable (equivalent to the halting problem) and practically fiendishly difficult to do at compile time for a dynamic language. It is trivial to do at runtime: all you need to do is record the actual types as they fly by. If you are doing Polymorphic Inline Caching, just look at the contents of the caches after a while. And it is largely trivial for a statically typed language at compile time, because the types are right there in the source code!

So gathering information at runtime simply isn't as much of a benefit for languages such as C# and Java as it was for Self and Smalltalk.
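
To make the runtime half of that concrete, here is a minimal sketch of a polymorphic inline cache in C. The names and the 4-entry cache size are mine, and class_of/lookup stand in for whatever hooks an actual VM provides; the point is that the cache the dispatcher fills for speed doubles as a free type profile for the JIT to read later.

typedef struct Class Class;
typedef void (*Method)(void *receiver);

/* One cache per call site: remembers which receiver classes were
   actually seen, and which method each one resolved to. */
typedef struct {
    Class  *seen[4];
    Method  target[4];
    int     entries;
} CallSite;

Class  *class_of(void *receiver);              /* assumed runtime hook */
Method  lookup(Class *cls, const char *sel);   /* assumed full lookup  */

void send(CallSite *site, void *receiver, const char *selector)
{
    Class *cls = class_of(receiver);

    /* Fast path: receiver class already in the cache. */
    for (int i = 0; i < site->entries; i++) {
        if (site->seen[i] == cls) {
            site->target[i](receiver);
            return;
        }
    }

    /* Slow path: full lookup, then record what we saw. The recorded
       classes are the type profile: one entry means monomorphic,
       a few mean polymorphic, a full cache means megamorphic. */
    Method m = lookup(cls, selector);
    if (site->entries < 4) {
        site->seen[site->entries]   = cls;
        site->target[site->entries] = m;
        site->entries++;
    }
    m(receiver);
}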

Significant Costs

The runtime costs of a JIT are significant. The obvious cost is that the compiler has to run alongside the program being executed, so time spent compiling is not available for executing. Apart from that direct cost, it also means the compiler is limited in the types of analyses and optimizations it can do. The impact is particularly severe on startup, so short-lived programs like the TeX-rewrite NTS are hit hardest and can often run slower overall than interpreted byte-code.

In order to mitigate this, you end up needing multiple compilers, plus heuristics for deciding when to use which. In other words: complexity increases dramatically, and the problem is only mitigated somewhat, not solved.
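
A sketch of the kind of tiering policy this leads to; the thresholds are invented for illustration and baseline_compile/optimize are hypothetical hooks:

typedef enum { INTERPRETED, BASELINE, OPTIMIZED } Tier;

typedef struct {
    Tier tier;
    long invocations;
} MethodInfo;

void baseline_compile(MethodInfo *m);   /* cheap compile, modest code   */
void optimize(MethodInfo *m);           /* expensive compile, fast code */

/* Called on every invocation of a method. */
void maybe_tier_up(MethodInfo *m)
{
    m->invocations++;
    if (m->tier == INTERPRETED && m->invocations > 100)
        baseline_compile(m);            /* warm: worth a quick compile  */
    else if (m->tier == BASELINE && m->invocations > 10000)
        optimize(m);                    /* hot: worth the expensive one */
}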

A less obvious cost is increased virtual memory pressure, because the code pages created by the JIT are "dirty", whereas executable pages mapped in from disk are clean. Dirty pages have to be written to disk when memory is needed; clean pages can simply be unmapped and re-read later. On devices without a swap file, like most smartphones, dirty vs. clean can mean the difference between a few pages getting unmapped and paged back in later, and the process getting killed by the OS.
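
The distinction is visible right at the mmap level. A POSIX sketch, error handling omitted, where emit_code is a hypothetical code emitter:

#include <stddef.h>
#include <sys/mman.h>

void emit_code(void *buf, size_t len);   /* hypothetical JIT emitter */

void map_code(int binary_fd, size_t len)
{
    /* AOT: file-backed, read/execute only. The pager can discard
       these pages at any time and re-read them from the binary:
       they stay clean. */
    void *aot = mmap(NULL, len, PROT_READ | PROT_EXEC,
                     MAP_PRIVATE, binary_fd, 0);

    /* JIT: anonymous memory that generated code is written into.
       There is no backing file, so once written the pages are dirty:
       under memory pressure they must be swapped out, or the process
       dies. */
    void *jit = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANON, -1, 0);
    emit_code(jit, len);
    mprotect(jit, len, PROT_READ | PROT_EXEC); /* W^X flip; still dirty */

    (void)aot;
}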

Virtual memory and cache pressure are generally considered a much more severe performance problem than a little extra CPU use, and often even than a lot of extra CPU use. Most CPUs today can multiply two numbers in a single cycle, yet a single main-memory access stalls the CPU for a hundred cycles or more.

In fact, it could very well be that keeping non-performance-critical code as compact interpreted byte-code is actually better than turning it into native code, as long as the code density is higher.

Security risks

Having memory that is both writable and executable is a security risk. And forbidden on iOS, for example. The only exception is Apple's own JavaScript engine, so on iOS you simply can't run your own JITs.

Machines got faster

On the low end of performance, machines have gotten so fast that pure interpreters are often fast enough for many tasks. Python is used for many tasks as-is, and PyPy isn't really taking the Python world by storm. Why? I am guessing it's because on today's machines, plain old interpreted Python is often fast enough. Same goes for Ruby: it's almost comically slow (in my measurements, serving HTTP via Sinatra was almost 100 times slower than using libµhttp), yet even that is still 400 requests per second, exceeding the needs of the vast majority of web sites, including my own blog, which until recently didn't see 400 visitors per day.

The first JIT I am aware of was Peter Deutsch's PS (Portable Smalltalk), but only about a decade later Smalltalk was fine doing multi-media with just a byte-code interpreter. And native primitives.

Successful hybrids

The technique used by Squeak (an interpreter plus C primitives for the heavy lifting, for example multi-media or cryptography) has been applied successfully in many different cases. This hybrid approach was described in detail by John Ousterhout in Scripting: Higher-Level Programming for the 21st Century: high-level "scripting" languages are used to glue together high-performance code written in "systems" languages. Examples include NumPy, but the ones I found most impressive were the "computational steering" systems apparently used in supercomputing facilities such as Oak Ridge National Laboratory. Written in Tcl.
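
The shape of the pattern, reduced to a toy: the hot inner loop is ordinary AOT-compiled C, and the scripting language just calls into it through its FFI. The file names and the ctypes glue in the comment are illustrative, not prescriptive:

/* dot.c -- the critical "3%": an ordinary AOT-compiled hot loop.
 *
 *   cc -O2 -shared -fPIC -o libdot.so dot.c
 *
 * The "97%" stays in the scripting language, e.g. Python via ctypes:
 *
 *   lib = ctypes.CDLL("./libdot.so")
 *   lib.dot.restype  = ctypes.c_double
 *   lib.dot.argtypes = [ctypes.POINTER(ctypes.c_double),
 *                       ctypes.POINTER(ctypes.c_double),
 *                       ctypes.c_long]
 */
double dot(const double *a, const double *b, long n)
{
    double s = 0.0;
    for (long i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}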

What's interesting with these hybrids is that JITs are being squeezed out at both ends: at the "scripting" level they are superfluous, at the "systems" level they are not sufficient. And I don't believe that this idea is only applicable to specialized domains, though there it is most noticeable. In fact, it seems to be an almost direct manifestation of the observations in Knuth's famous(ly misquoted) quip about "Premature Optimization":

Experience has shown (see [46], [51]) that most of the running time in non-IO-bound programs is concentrated in about 3 % of the source text.

[..] The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12 % improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3 %. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail. After working with such tools for seven years, I've become convinced that all compilers written from now on should be designed to provide all programmers with feedback indicating what parts of their programs are costing the most; indeed, this feedback should be supplied automatically unless it has been specifically turned off.

[..]

(Most programs are probably only run once; and I suppose in such cases we needn't be too fussy about even the structure, much less the efficiency, as long as we are happy with the answers.) When efficiencies do matter, however, the good news is that usually only a very small fraction of the code is significantly involved.

For the 97%, a scripting language is often sufficient, whereas the critical 3% are critical enough, and at the same time small and isolated enough, that hand-tuning is possible and worthwhile.

I agree with Ousterhout's critics who say that the split into scripting languages and systems languages is arbitrary. Objective-C, for example, combines both approaches in a single language, though one that is very much a hybrid itself. The "Objective" part is very similar to a scripting language in both performance and ease/speed of development, despite being compiled ahead of time; the C part does the heavy lifting of a systems language. Alas, Apple has worked continuously and fairly successfully at destroying both of these aspects and turning the language into a bad caricature of Java. However, although the split is arbitrary, the competing and diverging requirements are real; see Erlang's split into a functional language in the small and an object-oriented language in the large.

Unpredictable performance model

The biggest problem I have with JITs is that their performance model is extremely unpredictable. First, you don't know when optimizations are going to kick in, or when extra compilation is going to make you slower. Second, predicting which bits of code will actually be optimized well is also hard and a moving target. Combine these two factors, and you get a performance model that is somewhere between unpredictable and intractable, and therefore at best statistical: on average, your code will be faster. Probably.

While there may be domains where this is acceptable, most of the domains where performance matters at all are not of this kind; they tend to be (soft) real time. In real-time systems, average performance matters not at all; predictably meeting your deadline does. As an example, delivering 80 frames in 1 ms each and 20 frames in 20 ms each (480 ms total) means failure: at 60 fps each frame must be done in about 16.7 ms, and you missed that deadline 20% of the time. Delivering 100 frames in 10 ms each (1000 ms total) means success: you met the deadline 100% of the time, despite the fact that the first scenario is more than twice as fast on average.

I really learned this in the '90s, when I was doing pre-press work and delivering highly optimized RIP and PostScript processing software. I was stunned when I heard about daily newspapers switching to pre-rendered, pre-screened bitmap images for their workflows. This is the most inefficient format imaginable for pre-press work, with each page typically taking around 140 MB of storage uncompressed, whereas the PostScript source would typically be between 1/10th and 1/1000th of that size. (And at the time, 140 MB was a lot even for disk storage, never mind RAM or network capacity.)

The advantage of pre-rendered bitmaps is that your average case is also your worst case. Once you have provisioned your infrastructure to handle this case, you know that your tech stack will be able to deliver your newspaper on time, no matter what the content. With PostScript (and later PDF) workflows, your average case is much better (and your best case ridiculously so), but you simply don't get any bonus points for delivering your newspaper early. You just get problems if it's late, and you are not allowed to average the two.

Eve could survive and be useful even if it were never faster than, say, Excel. The Eve IDE, on the other hand, can't afford to miss a frame paint. That means Imp must be not just fast but predictable - the nemesis of the SufficientlySmartCompiler.
I also saw this effect in play with Objective-C and C++ projects: despite the fact that Objective-C's primitive operations are generally more expensive, projects written in Objective-C often had better performance than comparable C++ projects, because Objective-C's performance model was so much simpler, more obvious and more predictable.

When Apple was still pushing the Java bridge, Sun engineers did a stint at WWDC to explain how to optimize Java code for the HotSpot JIT. It was comical. In order to write fast Java code, you effectively had to think of the assembly code you wanted to get, then write the Java code that you thought might net that particular bit of machine code, taking into account the various limitations of the JIT. At that point, it is a lot easier to just write the damn assembly code. And vastly more predictable: what you write is what you get.

Modern JITs are capable of much more sophisticated transformations, but what the creators of these advanced optimizers don't realize is that they are making the problem worse rather than better. The more they do, the less predictable the code becomes.

The same, incidentally, applies to SufficientlySmart AOT compilers such as the one for the Swift language, though the problem is not quite as severe as with JITs because you don't have the dynamic component. All these things are well-intentioned but, all in all, counter-productive.

Conclusion

Although the idea of Just in Time Compilers was very good, their area of applicability, which was always smaller than imagined and/or claimed, has shrunk ever further due to advances in technology, changing performance requirements and the realization that for most performance-critical tasks, predictability is more important than average speed. They are therefore slowly being phased out in favor of simpler, generally faster and more predictable AOT compilers. Although they are unlikely to go away completely, their significance will be drastically diminished.

Alas, the idea that writing high-level code without any concessions to performance (often justified by misinterpreting or simply just misquoting Knuth) and then letting a sufficiently smart compiler fix it lives on. I don't think this approach to performance is viable, more predictability is needed and a language with a hybrid nature and the ability for the programmer to specify behavior-preserving transformations that alter the performance characteristics of code is probably the way to go for high-performance, high-productivity systems. More on that another time.

What do you think? Are JITs on the way out or am I on crack? Should we have a more manual way of influencing performance without completely rewriting code or just trusting the SmartCompiler?

Update: Nov. 13th 2017

The Mono Project has just announced that they are adding a byte-code interpreter: "We found that certain programs can run faster by being interpreted than being executed with the JIT engine."

Update: Mar. 25th 2021

Facebook has created an AOT compiler for JavaScript called Hermes. They report significant performance improvements and reductions in memory consumption (with of course some trade-offs for specific benchmarks).

Speaking of JavaScript, Google launched Ignition in 2017, a JS bytecode interpreter aimed at faster startup and better performance on low-memory devices. It also did much better in terms of pure performance than they anticipated.

So in 2019, Google introduced their JIT-Less JS engine. Not even an AOT compiler, just a pure interpreter. While seeing slowdowns in the 20-80% range on synthetic benchmarks, they report real-world performance only a few percent lower than their high-end, full-throttle TurboFan JIT.

Oh, how could I forget WebAssembly? "One of the reasons applications target WebAssembly is to execute on the web with predictable high performance."

I've also been told by People Who Know™ that the super-amazing Graal JIT VM for Java is primarily used for its AOT capabilities by customers. This is ironic, because the aptly named Graal pretty much is the Sufficiently Smart Compiler that was always more of a joke than a real goal...but they achieved it nonetheless, and as a JIT no less! And now that we have it, it turns out customers mostly don't care. Instead, they care about predictability, start-up performance and memory consumption.

Last but not least, Apple's Rosetta 2 translator for running Intel binaries on ARM is an AOT binary-to-binary translator. And it is much faster, relatively, than the original Rosetta or comparable JIT-based binary translators, with translated Intel binaries clocking in at around 80% of the speed of native ARM binaries. Combined with the amazing performance of the M1, this makes M1 Macs frequently faster at running Intel Mac binaries than Intel Macs. (Of course it also includes a (much slower) JIT component as a fallback for when the AOT compiler can't figure things out. Like when the target program is itself a JIT...)

Update: Oct. 3rd 2021

It appears that 45% of CVEs for V8 and more than half of "in the wild" Chrome exploits abused a JIT bug, which is why Microsoft is experimenting with what they call SDSM (Super Duper Secure Mode). What's SDSM? JavaScript without the JIT.

Unsurprisingly at this point, performance is affected far less than one might have guessed, even on benchmarks, and even less in real-world tasks: "Anecdotally, we find that users with JIT disabled rarely notice a difference in their daily browsing."

Update: Mar. 30th 2023

To my eyes, this looks like startup costs, much of which will be JIT startup costs.

HN user jigawatts appears to confirm my suspicion:

Teams is an entire web server written in an interpreted language compiled on the fly using a "Just in Time" optimiser (node.js + V8) -- coupled to a web browser that is basically an entire operating system. You're not "starting Teams.exe", you are deploying a server and booting an operating system. Seen in those terms, 9 seconds is actually pretty good. Seen in more... sane terms, showing 1 KB of text after 9 seconds is about a billion CPU instructions needed per character.
It also looks to be the root problem, or at least one of the root problems, with Visual Studio's launch time.

Update: Apr. 1st 2023

"One of the most annoying features of Julia is its latency: The laggy unresponsiveness of Julia after starting up and loading packages." -- Julia's latency: Past, present and future

I find it interesting that the term "JIT" is never mentioned in the post; I guess it's just taken as a given.

I am pretty sure I've missed some developments.

Sunday, October 4, 2015

Are Objects Already Reactive?


TL;DR: Yes, obviously.

My post from last year titled The Siren Call of KVO and Cocoa Bindings has been one of my most consequential so far. Apart from being widely circulated and discussed, it has also been a focal point of my ongoing work related to Objective-Smalltalk. The ideas presented there have been central to my talks on software architecture, and I have finally been able to present some early results I find very promising.

Alas, with the good always comes the bad, and some of the reactions (sic) have not been quite so positive. For example, consider the following passage I wrote:

[..] Adding reactivity to an object-oriented language is, at first blush, non-sensical and certainly causes confusion [because] whereas functional programming, which is per definition static/timeless/non-reactive, really needs something to become interactive, reactivity is already inherent in OO. In fact, reactivity is the quintessence of objects: all computation is modeled as objects reacting to messages.
This seemed quite innocuous, obvious, and completely uncontroversial to me, but apparently caused a bit of a stir with at least some of the creators of ReactiveCocoa:

Ouch! Of course I never wrote that "nobody" needs FRP: Functional Programming definitely needs FRP or something like it, because it isn't already reactive the way objects are. Second, what I wrote is that this is non-sensical "at first blush" (so "on first impression"). Idiomatically, this phrase usually sets up a "...but on closer examination", and lo and behold, almost the entire rest of the post talks about how the related concepts of dataflow and dataflow-constraints are highly desirable.

The point was and is (obviously?) a terminological one, because the existing term "reactivity" is being overloaded so much that it confuses more than it clarifies. And quite frankly, the idea of objects being "reactive" is (a) so self-evident (you send a message, the object reacts by executing a method, which usually sends more messages) and (b) so deeply ingrained and basic that I didn't really think about it much at all. So obviously, it could very well be that I was wrong and that this was "common sense" to me in the Einsteinian sense.

I will explore the terminological confusion more in later posts, but suffice it to say that Conal Elliott contacted the ReactiveCocoa guys to tell them (politely) that whatever ReactiveCocoa was, it certainly wasn't FRP:

I'm hoping to better understand how the term "Functional Reactive Programming" gets applied to systems that are so far from the original definition and principles (continuous time with precise & simple mathematical meaning)
He also wrote/talked more about this confusion in his 2015 talk "Essence and Origins of FRP":
The term has been used incorrectly to describe systems like Elm, Bacon, and Reactive Extensions.
Finally, he seems to agree with me that the term "reactive" wasn't really well chosen for the concepts he was going after:

What is Functional Reactive Programming:  Something of a misnomer.  Perhaps Functional temporal programming

So having established that the term "reactive" is confusing when applied to whatever it is that ReactiveCocoa is or was trying to be, let's have a look at how and whether it is applicable to objects. The Communications of the ACM "Special issue on object-oriented experiences and future trends" from 1995 has the following to say:

A group of leading experts from industry and academia came together last fall at the invitation of IBM and ACM to ponder the primary areas of future needs in software support for object-based applications.

[..]

In the future, as you talk about having an economy based on these entities (whether we call them “objects” or we call them something else), they’re going to have to be more proactive. Whether they’re intelligent agents or subjective objects, you enable them with some responsibility and they get something done for you. That’s a different view than we have currently where objects are reactive; you send it a message and it does something and sends something back.

But lol, that's only a group of leading researchers invited by IBM and the ACM writing in arguably one of the most prestigious computing publications, so what do they know? Let's see what the Blue Book from 1983 has to say when defining what objects are:

The set of messages to which an object can respond is called its interface with the rest of the system. The only way to interact with an object is through its interface. A crucial property of an object is that its private memory can be manipulated only by its own operations. A crucial property of messages is that they are the only way to invoke an object's operations. These properties insure that the implementation of one object cannot depend on the internal details of other objects, only on the messages to which they respond.
So the crucial definition of objects, according to the creators of Smalltalk, is that they respond to messages. And of course, if you check a dictionary or thesaurus, you will find that respond and react are synonyms. So the fundamental definition of objects is that they react to messages. Hmm...that sounds familiar somehow.

While those are seemingly pretty influential definitions, maybe they are uncommon? No. A simple google search reveals that this definition is extremely common, and has been around for at least the last 30-40 years:

A conventional statement of this principle is that a program should never declare that a given object is a SmallInteger or a LargeInteger, but only that it responds to integer protocol.
But lol, what do Adele Goldberg, David Robson or Dan Ingalls know about Object Oriented Programming? After all, we have one of the creators of ReactiveCocoa here! (Funny aside: LinkedIn once asked me "Does Dan Ingalls know about Object Oriented Programming?" Alas there wasn't a "Are you kidding me?" button, so I lamely clicked "Yes").

Or maybe it's only those crazy dynamic typing folks that no-one takes seriously these days? No.

So the only relevant thing for typing purposes is how an object reacts to messages.
Here's a section from the Haiku/BeOS documentation:
A BHandler object responds to messages that are handed to it by a BLooper. The BLooper tells the BHandler about a message by invoking the BHandler's MessageReceived() function.
A book on OO graphics:

The draw object reacts to messages from the panel, thereby creating an IT to cover the canvas.
CS lecture on OO:
Properties implemented as "fields" or "instance variables"
  • constitute the "state" of the object
  • affect how object reacts to messages
Heck, even the Apple Cocoa/Objective-C docs speak of "objects responding to messages"; it's almost like a conspiracy.
By separating the message (the requested behavior) from the receiver (the owner of a method that can respond to the request), the messaging metaphor perfectly captures the idea that behaviors can be abstracted away from their particular implementations.
Book on OO Analysis and Design:
As the object structures are identified and modeled, basic processing requirements for each object can be identified. How each object responds to messages from other objects needs to be defined.
An object's behavior is defined by its message-handlers(handlers). A message-handler for an object responds to messages and performs the required actions.
CLIPS - object-oriented programming

Or maybe this is an old definition from the 80ies and early 90ies that has fallen out of use? No.

The behavior of related collections of objects is often defined by a class, which specifies the state variables of an objects (its instance variables) and how an object responds to messages (its instance methods).
Methods: Code blocks that define how an object responds to messages. Optionally, methods can take parameters and generate return values.
Cocoa, by Richard Wentk, 2010

The main difference between the State Machine and the immutable is the way the object reacts to messages being sent (via methods invoked on the public interface). Whereas the State Machine changes its own state, the Immutable creates a new object of its own class that has the new state and returns it.
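
Translated into a C sketch, with the "message" reduced to a plain function call, the two ways of reacting look like this:

typedef struct { int x, y; } Point;

/* State Machine style: reacts to the message by mutating its own state. */
void point_translate(Point *p, int dx, int dy)
{
    p->x += dx;
    p->y += dy;
}

/* Immutable style: reacts by returning a new value carrying the new
   state; the receiver itself never changes. */
Point point_translated(Point p, int dx, int dy)
{
    p.x += dx;    /* modifies only the local copy */
    p.y += dy;
    return p;
}
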
So to sum up: classic OOP is definitely reactive. FRP is not, at least according to the guy who invented it. And what exactly things like ReactiveCocoa and Elm are, I don't think anyone really knows, except that they are not even FRP, which wasn't, in the end, reactive.

Tune in for "What the Heck is Reactive Programming, Anyway?"

As always, comments welcome here or on HN