Sunday, February 10, 2019

Why Architecture Oriented Programming Matters

On re-reading John Hughes' influential Why Functional Programming Matters, two things stood out for me.

The first was the claim that, "...a functional programmer is an order of magnitude more productive than his or her conventional counterpart, because functional programs are an order of magnitude shorter." That's a bold claim, though he cleverly attributes it not to himself but to unspecified others: "Functional programmers argue that...".

If there is evidence for this 10x claim, I'd like to see it. The only somewhat systematic assessment of "language productivity" is the Capers Jones language-level evaluation, which is computed from the lines of code needed to implement a function point's worth of software. In that evaluation, Haskell achieved a level of 7.5, whereas Smalltalk was rated 15, making Smalltalk twice as productive by this measure. While I don't see this as conclusive, it certainly doesn't support a claim of vastly superior productivity.

With that little niggle out of the way, I think he does make some very insightful and important points in the second section. Talking about structured programming, he concludes that:

With the benefit of hindsight, it’s clear that these properties of structured programs, although helpful, do not go to the heart of the matter. The most important difference between structured and unstructured programs is that structured programs are designed in a modular way. Modular design brings with it great productivity improvements. First of all, small modules can be coded quickly and easily. Second, general-purpose modules can be reused, leading to faster development of subsequent programs. Third, the modules of a program can be tested independently, helping to reduce the time spent debugging.
He goes on:
However, there is a very important point that is often missed. When writing a modular program to solve a problem, one first divides the problem into subproblems, then solves the subproblems, and finally combines the solutions.
And then comes the zinger:
The ways in which one can divide up the original problem depend directly on the ways in which one can glue solutions together. Therefore, to increase one’s ability to modularize a problem conceptually, one must provide new kinds of glue in the programming language.
Yes, I put that in bold. I'd also put a <blink>-tag on it, but fortunately for everyone involved that tag is obsolete. Anyway, he then makes some partly good, partly debatable points about the benefits of the two kinds of glue he says FP provides: function composition and lazy evaluation.
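
To make those two kinds of glue a little more concrete, here is a minimal sketch in Swift (the compose helper and the example values are mine, not Hughes's): function composition glues small functions into bigger ones, and lazy evaluation glues a producer to a consumer without materialising intermediate results.

func compose<A, B, C>(_ f: @escaping (A) -> B, _ g: @escaping (B) -> C) -> (A) -> C {
    return { g(f($0)) }
}

// Glue #1: function composition
let doubleThenDescribe = compose({ (n: Int) in n * 2 }, { "value: \($0)" })
print(doubleThenDescribe(21))        // "value: 42"

// Glue #2: lazy evaluation -- only as many squares are computed as are consumed
let firstSquares = (1...).lazy.map { $0 * $0 }.prefix(5)
print(Array(firstSquares))           // [1, 4, 9, 16, 25]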

For me, the key point here is not those specific benefits, but the number 2. He has just made the important point that "one must provide new kinds of glue", and then the number of "new kinds of glue" turns out to be the smallest number that actually justifies using the plural. That seems a bit on the low side, particularly because the number also seems to be fixed.

I'd venture to say that we need a lot more different kinds of glue, and we need our kinds of glue to be extensible, to be user-defined. But first, what is this "glue"? Do we have some other term for it? Maybe Alan Kay can help?

The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas.
Alan Kay: prototypes vs classes was: Re: Sun's HotSpot

OK, that's an additional insight along the same lines, but doesn't really help us. Fortunately the Software Architecture community has something for us, the idea of a Connector.

Connectors are the locus of relations among components. They mediate interactions but are not “things” to be hooked up (they are, rather, the hookers-up).
Mary Shaw: Procedure Calls Are the Assembly Language of Software Interconnection: Connectors Deserve First-Class Status

The subtitle of that paper by Mary Shaw is the solution: connectors deserve first-class status. Connectors are the "ma" that goes in-between, the glue that we need "lots more" of ("lots" > 2); and when we give them first-class status, we can create more, put them in libraries, and adapt them to our needs.

That's why I think Architecture Oriented Programming matters, and why I am creating Objective-Smalltalk.

UPDATE (2023)

Something I didn't mention in the original article was that the second kind of glue FP supposedly provides, lazy evaluation, was always a bit iffy, because the evidence for its benefits was rather hand-wavy. Well, it turns out that an analysis of R code shows that lazy evaluation is essentially unused, at least in package code.

So that means the number of kinds of glue that FP provides is...one.

Friday, February 8, 2019

A small (and objective) taste of Objective-Smalltalk

I've been making good progress on Objective-Smalltalk recently. Apart from the port to GNUstep that allowed me to run the official site on it (shrugging off the HN hug of death in the process), I've also been concentrating on not just pushing the concepts further, but also on adding some of the more mundane bits that are just needed for a programming language.

And so my programs have been getting longer and more useful, and I am starting to actually see the effect of "I'd rather write this in Objective-Smalltalk than Objective-C". And with that, I thought I'd share one of these slightly larger examples, show how it works, what's cool and possibly a bit weird, and where work is still needed (lots!).

The code I am showing is a script that implements a generic scheme handler for SQLite databases and then uses that scheme handler to access the database given on the command line. When you run it, in this case with the sample database Chinook.db, it allows you to interact with the database via URIs in the db scheme. For example, db:. lists the available tables:


> db:. 
( "albums","sqlite_sequence","artists","customers","employees","genres","invoices",
 "invoice_items","media_types","playlists","playlist_track","tracks","sqlite_stat1") 

You can then access entire tables, for example db:albums would show all the albums, or you can access a specific album:
> db:albums/3
{ "AlbumId" = 4;
"Title" = "Let There Be Rock";
"ArtistId" = 1;
} 

With that short intro and without much further ado, here's the code:


#!/usr/local/bin/stsh
#-sqlite:<ref>dbref

framework:FMDB load.


class ColumnInfo {
  var name.
  var type.
  -description {
      "Column: {var:self/name} type: {var:self/type}".
  }
}

class TableInfo  {
  var name.
  var columns.
  -description {
    cd := self columns description.
    "Table {var:self/name} columns: {cd}".
  }
}

class SQLiteScheme : MPWScheme {
  var db.

  -initWithPath: dbPath {
     self setDb:(FMDatabase databaseWithPath:dbPath).
     self db open.
     self.
  }

  -dictionariesForResultSet:resultSet
  {
    results := NSMutableArray array.
    { resultSet next } whileTrue: { results addObject:resultSet resultDictionary. }.
    results.
  }

  -dictionariesForQuery:query {
     self dictionariesForResultSet:(self db executeQuery:query).
  }

  /. { 
     |= {
       resultSet := self dictionariesForQuery: 'select name from sqlite_master where [type] = "table" '.
       resultSet collect at:'name'.
     }
  }

  /:table/count { 
     |= { self dictionariesForQuery: "select count(*) from {table}" | firstObject | at:'count(*)'. }
  }

  /:table/:index { 
     |= { self dictionariesForQuery: "select * from {table}" | at: index. }
  }

  /:table { 
     |= { self dictionariesForQuery: "select * from {table}". }
  }

  /:table/:column/:index { 
     |= { self dictionariesForQuery: "select * from {table}" | at: index.  }
  }

  /:table/where/:column/:value { 
     |= { self dictionariesForQuery: "select * from {table} where {column} = {value}".  }
  }

  /:table/column/:column { 
     |= { self dictionariesForQuery: "select {column} from {table}"| collect | at:column. } 
  }

  /schema/:table {
     |= {
        resultSet := self dictionariesForQuery: "PRAGMA table_info({table})".
	    columns := resultSet collect: { :colDict | 
            #ColumnInfo{
				#'name': (colDict at:'name') ,
				#'type': (colDict at:'type')
			}.
        }.
        #TableInfo{ #'name': table, #'columns': columns }.
     }
  } 

  -tables {
	 db:. collect: { :table| db:schema/{table}. }.
  }
  -<void>logTables {
     stdout do println: scheme:db tables each.	
  }
}

extension NSObject {
  -initWithDictionary:aDict {
    aDict allKeys do:{ :key |
      self setValue: (aDict at:key) forKey:key.
    }.
    self.
  }
}


scheme:db := SQLiteScheme alloc initWithPath: dbref path.
stdout println: db:schema/artists
shell runInteractiveLoop.


Let's walk through the code in detail, starting with the header:
#!/usr/local/bin/stsh
#-sqlite:<ref>dbref

This is a normal Unix shell script invoking stsh, the Smalltalk Shell. The Smalltalk Shell is a bigger topic for another day, but for now let's focus on the second line, which looks like a method declaration, and that's exactly what it is! In order to ease the transition between small scripts and larger systems (successful scripts tend to get larger, and successful large systems evolve from successful small systems), scripts have a dual nature, being at the same time callable from the Unix command line and also usable as a method (or filter) from a program.

Since this script is interactive, that part is not actually that important, but a nice side effect is that the declaration of a parameter gets us automatic command-line parameter parsing, conversion, and error checking. Specifically, stsh knows that the script takes a single parameter of type <ref> (a reference, so a filename or URL) and will put that in the dbref variable as a reference. If the script is invoked without that parameter, it will exit with an error message, all without any further work by the script author. These declarations are optional; without them, parameters go into an args array without further interpretation.

Next up, we load a dependency, Gus Mueller's wonderful FMDB wrapper for SQLite.


framework:FMDB load.

The framework scheme looks for frameworks on the default framework path, and the load message is sent to the NSBundle that is returned.

The next bit is fairly straightforward, defining the ColumnInfo class with two instance variables, name and type, and a -description method.


class ColumnInfo {
  var name.
  var type.
  -description {
      "Column: {var:self/name} type: {var:self/type}".
  }
}

Again, this is very straightforward, with maybe the missing superclass specification being slightly unusual. Different constructs may have different implicit superclasses; for class it is assumed to be NSObject. The description method, introduced by "-" just like in Objective-C, uses string interpolation with curly braces. (I currently need to use fully qualified names like var:self/name to access instance variables; that should be fixed in the near future.) It also doesn't have a return statement or the like: a method's return value can be specified by just writing it out.

To me, this has the great effect of putting the focus on the essential "this is the description" rather than on the incidental, procedural "this is how you build the description". It is obviously only a very small instance of this shift, but I think even this small example speaks to what that shift can look like in the large.

The way instance variables are defined is far from being done, but for now the var syntax does the job. The TableInfo class follows the same pattern as ColumnInfo, and of course these two classes are used to represent the metadata of the database.

So on to the main attraction, the scheme-handler itself, which is just a plain old class inheriting from MPWScheme, with an instance variable and an initialisation method:


class SQLiteScheme : MPWScheme {
  var db.

  -initWithPath: dbPath {
     self setDb:(FMDatabase databaseWithPath:dbPath).
     self db open.
     self.
  }

Having advanced language features largely defined as/by plain old classes goes back to the need for a stable starting point. However, it has turned out to be a little bit more than that, because the mapping to classes is not just the trivial one of "since this is written in an OO language, obviously the implementation of features is somehow done with classes". Instead, the classes map onto the language features very much in an Open Implementation kind of way, except that in this case it is Open Language Implementation.

That means that unlike a typical MOP, the classes actually make sense outside the specific language implementation, making their features usable from other languages, albeit less conveniently. Being easily accessible from other languages is important for an architectural language.

With this mapping, a very narrow set of syntactic language mechanisms can be used to map a large and extensible (thus effectively infinite) set of semantic features into the language. This is of course similar to language features like procedures, methods and classes, but is expanded to things that usually haven't been as extensible.

The next two methods handle the interface to FMDB; they are mostly straightforward and, I think, understandable to both Smalltalk and Objective-C programmers without much explanation.


  -dictionariesForResultSet:resultSet
  {
    results := NSMutableArray array.
    { resultSet next } whileTrue: { results addObject:resultSet resultDictionary. }.
    results.
  }

  -dictionariesForQuery:query {
     self dictionariesForResultSet:(self db executeQuery:query).
  }

Smalltalk programmers may balk a little at the use of curly braces rather than square brackets to denote blocks. To me, this is a very easy concession to "the mainstream"; I have bigger fish to fry. To Objective-C programmers, the fact that the condition of the while-loop is implemented as a message sent to a block rather than as syntax might take a little getting used to, but I don't think it presents any fundamental difficulties.

Next up we have some property path definitions, the meat of the scheme handler. Each property path lets you define code that will run for a specific subset of the scheme's namespace, with the subset defined by the property path's URI pattern. As the name implies, property paths can be regarded as a generalisation of Objective-C properties, extended to handle entire sets of properties, sub-paths, and combinations of the two.


  /. { 
     |= {
       resultSet := self dictionariesForQuery: 'select name from sqlite_master where [type] = "table" '.
       resultSet collect at:'name'.
     }
  }

The first property path definition is fairly straightforward as it only applies to a single path, the period (so the db:. example from above). Property path definitions start with a forward slash ("/"), similar to the way that instance methods start with "-" and class methods with "+" in Objective-C (and Objective-Smalltalk). The slash seemed natural to indicate the definition of paths/URIs.

As with C# or Swift property definitions, you need to be able to deal with (at least) "get" and/or "set" access to a property. I really dislike having random keywords like "get" or "set" for program structure; I prefer to see names reserved for things that have domain meaning. So instead of keywords, I am using constraint connector syntax: "|=" means the left hand side is constrained to be the same as the right hand side (aka "get"). "=|" means the right hand side is constrained to be the same as the left hand side (aka "set"). The idea is that the "left hand side" in this case is the interface, the outside of the object/scheme handler, whereas the "right hand side" is the inside of the object, with properties mediating between the outside and the inside of the object.

Like most everything else, this is currently experimental, but so far I like it more than I expected to, and again, it shifts us away from being action oriented to describing relationships. For example, delegating both get and set to some other object could then be described by using the bidirectional constraint connector: /myProperty =|= var:delegate/otherProperty.
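
For comparison with the Swift property definitions mentioned above, here is roughly how the get/set keywords look there (a hypothetical Counter type, purely for illustration); the get branch plays the role of "|=", the set branch the role of "=|":

class Counter {
    private var storage = 0
    var count: Int {
        get { return storage }        // the outside reads the inside: roughly "|="
        set { storage = newValue }    // the outside writes the inside: roughly "=|"
    }
}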

Getting the result set is a straightforward message-send with the SQL query as a constant, non-interpolated string (single quotes; double quotes are for interpolation). We then need to extract the name of the table from the returned dictionaries, which we do via the collect HOM and the Smalltalk-y -at: message, which in this case maps to Foundation's -objectForKey:. The next property paths map URIs to queries on the tables. Unlike the previous example, which had a constant, single-element path and so was largely equivalent to a classic property, these all have variable path elements, multiple path segments, or both.


  /:table/count { 
     |= { self dictionariesForQuery: "select count(*) from {table}" | firstObject | at:'count(*)'. }
  }

  /:table/:index { 
     |= { self dictionariesForQuery: "select * from {table}" | at: index. }
  }

  /:table { 
     |= { self dictionariesForQuery: "select * from {table}". }
  }

Starting at the back, /:table returns the data from the entire table specified in the URI using the parameter :table. The leading colon means that this path segment is a parameter that will match any single string and deliver it to the method under the parameter name used, in this case "table". Wildcards are also possible.

Yes, the SQL query is performed using simple string interpolation without any sanitisation. DON'T DO THAT. At least not in production code. For experimenting in an isolated setting it's OK.

The second query retrieves a specific row of the table specified. The pipe "operator" is for method chaining with keyword syntax without having to bracket like crazy:


self dictionariesForQuery: "select count(*) from {table}" | firstObject | at:'count(*)'
((self dictionariesForQuery: "select count(*) from {table}") firstObject) at:'count(*)'

I find the "pipe" version to be much easier to both just visually scan and to understand, because it replaces nested (recursive) evaluation with linear piping. And yes, it is at least partly a by-product of integrating pipes/filters, which is a part of the larger goal of smoothly integrating multiple architectural styles. That this integration would lead to an improvement in the procedural part was an unexpected but welcome side effect.

The first property path, /:table/count, returns the size of the given table, using the optimised SQL code select count(*). This shows an interesting effect of property paths. In a standard ORM, getting the count of a table might look something like this: db.artists.count. Implemented naively, this code retrieves the entire "artists" table, converts that to an array and then counts the array, which is incredibly inefficient. And of course, this was/is a real problem of many ORMs, not just a made-up example.

The reason it is a real problem is that it isn't trivial to solve, due to the fact that OOPLs are not structurally polymorphic. If I have something like db.artists.count, there has to be some intermediate object returned by artists so I can send it the count message. The natural thing for that is the artists table, but that is inefficient. I can of course solve this by returning some clever proxy that doesn't actually materialise the table unless it has to, or I can have count handled completely separately, but neither of these solutions is straightforward, which is why this has traditionally been a problem.
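
A minimal Swift sketch of that trap, with a hypothetical ArtistsTable type rather than any real ORM: the naive count has to materialise the whole table, and the efficient version ends up as a separate, special-cased code path.

// Hypothetical, illustrative type -- not any real ORM API.
struct ArtistsTable {
    let runQuery: (String) -> [[String: Any]]

    // Naive: to make something like db.artists.count work, 'rows' has to
    // return something countable, so it materialises the whole table.
    var rows: [[String: Any]] { runQuery("select * from artists") }
    var naiveCount: Int { rows.count }

    // Workaround: count handled separately, pushing the work into the database.
    var count: Int {
        (runQuery("select count(*) from artists").first?["count(*)"] as? Int) ?? 0
    }
}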

With property paths, the problem just goes away, because any scheme handler (or object) has control over its sub-structure to an arbitrary depth.

Queries are handled in a similar manner, so db:albums/where/ArtistId/6 retrieves the two albums by the band Apocalyptica. This is obviously very manual; for any specific database you'd probably want to specialise this generic scheme handler to give you named relationships and also to return actual objects, rather than just dictionaries. A step in that direction is the /schema/:table property path:


  /schema/:table {
     |= {
        resultSet := self dictionariesForQuery: "PRAGMA table_info({table})".
	    columns := resultSet collect: { :colDict | 
            #ColumnInfo{
				#'name': (colDict at:'name') ,
				#'type': (colDict at:'type')
			}.
        }.
        #TableInfo{ #'name': table, #'columns': columns }.
     }
  } 

This property path returns the SQL schema in terms of the objects we defined at the top. First is a simple query of the SQLite table that holds the schema information, returning an array of dictionaries. These individual dictionaries are then converted to ColumnInfo objects using object literals.

Similar to defining the -description method above as simply the parametrized string literal instead of as instructions to build the result string, object literals allow us to simply write down general objects instead of constructing them. The example inside the collect defines a ColumnInfo object literal with the name and type attributes set from the column dictionary retrieved from the database.

Similarly, the final TableInfo is defined by its name and the column info objects. Object literals are a fairly trivial extension of Objective-Smalltalk dictionary literals, #{ #'key': value }, with a class name specified between the "#" and the opening curly brace. Being able to just write down objects is, I think, one of the better and under-appreciated features of WebObjects .wod files (though it's not 100% the same thing), as well as of QML, and I think it is also part of what makes React "declarative".
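
The closest everyday analogy I can offer in Swift (purely for comparison; the column types shown are made up) is writing down the object graph with memberwise initialisers instead of building it step by step:

struct ColumnInfo { let name: String; let type: String }
struct TableInfo  { let name: String; let columns: [ColumnInfo] }

// "Writing down" the object graph rather than constructing it imperatively.
let albums = TableInfo(name: "albums", columns: [
    ColumnInfo(name: "AlbumId",  type: "INTEGER"),
    ColumnInfo(name: "Title",    type: "TEXT"),
    ColumnInfo(name: "ArtistId", type: "INTEGER"),
])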

Not entirely coincidentally, the "configurations" of architectural description languages can also be viewed as literal object definitions.

With that information in hand, and with the Objective-Smalltalk runtime providing class definition objects that can be used to install objects with their methods in the runtime, we now have enough information to create some classes straight from the SQL table definitions, without going through the intermediate steps of generating source code, adding that to a project and compiling it.

That isn't implemented, and it's also something that you don't really want, but it's a stepping stone towards creating a general mechanism for orthogonal modular persistence. The final two utility methods are not really all that noteworthy, except that they do show how expressive and yet straightforward even ordinary Objective-Smalltalk code is.


  -tables {
	 db:. collect: { :table| db:schema/{table}. }.
  }
  -<void>logTables {
     stdout do println: scheme:db tables each.	
  }

The -tables method just gets the schema information for all the tables. The -logTables method prints all the tables to stdout, but individually, not as an array. Finally, there is a class extension to NSObject that supports the literal syntax on all objects, and the script code that actually initialises the scheme with a database and starts an interactive session. This last feature has also been useful in Smalltalk scripting: creating specialized shells that configure themselves and then run the normal interactive prompt.

So that's it!

It's not a huge revelation, yet, but I do hope this example gives at least a tiny glimpse of what Objective-Smalltalk already is and of what it is poised to become. There is a lot that I couldn't cover here, for example that scheme-handlers aren't primarily for interactive exploration, but for composition. I also only mentioned pipes-and-filters in passing, and of course there "is" a lot more that just "isn't" there, quite yet.

As always, but even more than usual, I would love to get feedback! And of course the code is on GitHub.

Wednesday, February 6, 2019

Why Objective-Smalltalk is (not) a Smalltalk

One of the main goals of Objective-Smalltalk is to make it possible to express programs in mixtures of diverse architectural styles, and variations of those styles, naturally and comprehensibly. That needs a mechanism that abstracts from any specific style, while at the same time accommodating all, or at least a wide variety of styles.

I don't know how to do that.

Yet.

A Smalltalk

Since I don't know how to do that yet, I can't build what I need, and since I can't build what I need I can't experiment with it and learn how to do what I need to do.

Instead of waiting for divine inspiration, designing something from thin air (that will obviously be perfect) or just throwing my hands up in despair, I need to pick some starting point from which I can start iterating. In terms of styles, there are a few to choose from, for example call-return, pipes-and-filters and REST. Call-return seems like an obvious choice, because it is something that is familiar, has widespread support and is very capable of implementing other styles.

Objective-C would have been a pleasant choice: I am very familiar with it and it is very suitable for building architectural and language adapters. This suitability is not an accident either, connectivity was the primary design goal, and the designers succeeded admirably given the constraints.

However, Objective-C also has significant drawbacks as a starting point: having already merged two languages, Smalltalk and C, it is rather large and unwieldy, with weird overlaps and other oddities. Being a superset of C, it also pretty much demands being compiled, whereas I want at least the option of interpretation.

WebScript would be an alternative, but it is not just dead, but also proprietary. It also very closely mimics the somewhat baroque Objective-C syntax, but without the actual reasons for those syntactic compromises or the benefits that this sacrifice brings in Objective-C.

These and most other existing language choices would mean extending an existing language, grammar and implementation, meaning that the architectural features of the language would almost certainly be second-class citizens, tacked on wherever they can fit. That's not a good starting point (see: Objective-C).

So it really had to be a brand new language, one whose syntax and semantics would be under full control, which still left the question of a syntactic starting point for the procedural part of that language. For this, I don't know of a better choice than the Smalltalk (keyword) message syntax. The syntax itself is tiny, with almost no reserved words or special characters claimed by the language, and almost all "language" features implemented as objects and messages without special syntax.

Having the procedural base as trivial as possible to implement is important, because I don't want to spend too much effort on it. I have (much) bigger fish to fry. Smalltalk also integrates well conceptually and syntactically with Cocoa and Objective-C, my preferred environment, a point not lost on the plethora of Smalltalk-based scripting languages available for Cocoa and GNUstep. And having a rich environment to integrate with is important for a language intended to connect existing pieces, which is what an architectural language does, or should do.

Not a Smalltalk

So given all the arguments for Smalltalk, surely Objective-Smalltalk is based on one of the existing Smalltalks, such as Squeak, enjoying the great development environment, malleable infrastructure etc.

Not so fast.

Although Smalltalk fits very well conceptually and syntactically, it doesn't fit at all well technically. It generally runs in its own world, the image, requires a complex and sophisticated VM, with all the integration headaches that entails (FFI, JNI, etc.), and comes with and requires a GC. Integrating multiple tracing GCs is essentially an unsolved problem, so that's a bit of a downer for a language that wants to be able to glue existing pieces together.

The philosophy of current Smalltalks is the exact opposite: they want to be the entire world. This is understandable: when Smalltalk was created there simply wasn't a "rest of the world" to talk to, and there certainly wasn't room for it on the same machine, an Alto with 128-512KB of RAM and a 2.5MB removable hard disk.

A current Smalltalk has a lot of code, and communities that think it's just the greatest thing on earth, so resistance to change is significant and reasonable, both to smaller changes, because they just aren't worth it in that context, and to larger changes because they make things just too different. However, I want to make both large and small changes and not be hobbled by linguistic backwards compatibility.

[..] when ST hit the larger world, it was pretty much taken as "something just to be learned", as though it were Pascal or Algol. Smalltalk-80 never really was mutated into the next better versions of OOP. Given the current low state of programming in general, I think this is a real mistake.
Alan Kay: prototypes vs classes was: Re: Sun's HotSpot

So Objective-Smalltalk takes cues from Smalltalk where this is helpful, but it is not really a new version of Smalltalk. It's not a "better old thing" (>45 years), but a (probably worse) "new thing", and for that reason it has to strike out on its own.

Thursday, January 10, 2019

Objective-Smalltalk on Linux via GNUstep and Docker

Although Objective-Smalltalk's architectural orientation should be an ideal fit for The Cloud™, this has been largely theoretical in recent times as Objective-Smalltalk only ran on macOS and iOS.

Having used both GNUstep and Cocotron to port, for example, MusselWind to Linux in the past, I knew this wasn't an insurmountable probl...er opportunity. However, dealing with VMs, OSes, and configurations to get everything to compile was always painful, and usually not reproducible.

Enter Docker for Mac. I had obviously heard quite a bit about Docker, but so far never had an opportunity to actually try it out in anger. Well, maybe not "anger", more "mild chagrin", because Docker is not really meant for running Linux binaries on your Mac. However, it does that job really, really well. Instead of futzing around with VMs, dealing with special guest OS tools in order to get window sizes to match, Retina not working, displays not resizing etc. you just get a login onto a Linux "box" right in your Terminal window, no muss no fuss. Nice. And mounting directories from the Mac is also trivial. Niiiice.

The other part that makes this different is the whole idea of reproducibility via Dockerfiles. Because a container is at least conceptually ephemeral (practically you can keep it around for quite a bit), you don't capture your knowledge in the state of the VM you are working on. Instead, you accumulate the knowledge in the Dockerfile(s) used to construct your image.

The Dockerfile for creating a GNUstep image/container that is capable of compiling + running Objective-Smalltalk can be found here:

https://github.com/mpw/MPWFoundation/tree/master/GNUstep/gnustep-combined
The main Dockerfile first loads a bunch of packages via apt-get:
FROM ubuntu

ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV TZ 'Europe/Berlin'
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone


RUN apt-get update && apt-get install -y \
    git \
    make \
    ssh \
    sudo \
    curl \
    inetutils-ping \
    vim build-essential  \
    clang llvm libblocksruntime-dev libkqueue-dev libpthread-workqueue-dev libxml2-dev cmake \
    libffi-dev \
    libreadline6-dev \
    libedit-dev \
    libmicrohttpd-dev \
    gnutls-dev 


RUN useradd -ms /bin/bash gnustep

COPY install-gnustep-clang /home/gnustep/
RUN chmod u+x /home/gnustep/install-gnustep-clang
RUN /home/gnustep/install-gnustep-clang  

COPY bashrc /home/gnustep/.bashrc
COPY profile /home/gnustep/.profile
COPY bashrc /root/.bashrc
COPY profile /root/.profile
COPY bashrc /.bashrc
COPY profile /.profile

COPY build-gnustep-clang /home/gnustep/build-gnustep-clang
RUN  mkdir -p /home/gnustep/patches/libobjc2-1.8.1/
COPY patches/libobjc2-1.8.1/objcxx_eh.cc /home/gnustep/patches/libobjc2-1.8.1/objcxx_eh.cc

RUN chmod u+x /home/gnustep/build-gnustep-clang
RUN /home/gnustep/build-gnustep-clang  



CMD ["bash"]

It adds a user "gnustep", then downloads specific versions of the gnustep and libobjc2 sources using one script, patches one of those sources which wouldn't compile for me, and finally builds and installs the whole thing using another script, both scripts adapted from Tobias Lensing's post.


#!/bin/bash
#


cd /home/gnustep/

echo Installing gnustep-make
export CC=clang
echo compiler is $CC

tar zxf gnustep-make-2.7.0.tar.gz
cd gnustep-make-2.7.0
./configure
make install
cd ..

echo 
echo 
echo ======================
echo Installing libobjc2


tar zxf libobjc2-1.8.1.tar.gz
cp patches/libobjc2-1.8.1/* libobjc2-1.8.1/
cd libobjc2-1.8.1
mkdir Build
cd Build
# cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang -DTESTS=OFF -DLLVM_OPTS=OFF  ..
cmake -DTESTS=OFF .. 
make install
cd ../..


cd gnustep-make-2.7.0
make clean
./configure --with-library-combo=ng-gnu-gnu
make install
cd ..

source /usr/local/share/GNUstep/Makefiles/GNUstep.sh
tar zxf gnustep-base-1.25.1.tar.gz 
cd gnustep-base-1.25.1
./configure
make install
cd ..

echo Installation script finished successfully

One of the tricky bits about this is that you need to install the gnustep-make package twice, first to get libobjc2 to install, the second time with libobjc2 installed so all the other packages find the right runtime. The apt-get packages did not work for me, because I needed newer compiler/runtime features.

Once you have the image, you can start it up and log into it. I then su -l to the gnustep account and work with the MPWFoundation, ObjectiveSmalltalk and ObjectiveHTTPD mounted from my local filesystem, rather than having separate copies inside the container.

Not everything is ported, but basic expressions work, as does messaging back and forth between Objective-Smalltalk and Objective-C, Higher Order Messaging, etc. Considering some of the runtime trickery involved, I was surprised how straightforward it was to get this far.

In the future, I expect to also build a runtime image that has the libraries and executable pre-built.

Friday, January 4, 2019

Swifter Sieving of Primes with Software ICs

I've been working much more on Objective-Smalltalk again, which is great, but that also made me think more intently on the motivations, because a lot of what is there is probably not quite obvious without knowing why and how it got there.

One of the obvious motivations has been Objective-C, and in particular the notion of Software-ICs, components that are connected via dynamic messages. This is actually an instance of the Scripted Components (pdf) pattern, with the interesting twist that unlike most of the instances of this pattern, there is only a single programming language for both the scripts and the components, that language being Objective-C.

In a sense, Objective-C is not really a language, it is middleware with some language features. And therein lies much confusion, because people try to use it as "just" a language. In particular, they try to use the "Objective" parts as an algorithmic programming language, something it is eminently unsuited for. This is what the "C" part is for, which is somewhere around 90% of the actual language.

To back up for a bit, a Scripted Component system is one where you have components that are written in one language, often C or C++, that are then glued together using a more dynamic scripting language, for example the Unix shell or Visual Basic. One of the more extreme examples of this pattern is computational steering of supercomputer applications using Tcl(!). Which brings up the point of performance: one of the interesting aspects of the Scripted Components pattern is that it combines the excellent performance of the component language with the flexibility and interactivity of the scripting language.

Which brings me back to Objective-C, almost. No one at the supercomputer centers that used Tcl for steering would have had the idea of implementing their actual computation in Tcl. No, implementing the components is what you use C or C++ or maybe FORTRAN for. You then combine, parametrise and maybe invoke those components from the scripting language.

Yet when it comes to Objective-C, that is exactly what people do: they implement the algorithmic parts in the scripting/middleware part of Objective-C and then wonder why it comes out looking like the following (taken from Vincent's post):


NSArray<NSNumber *> *sieve(NSArray<NSNumber *> *sorted) {
  if ([sorted count] == 0) { return [NSArray new]; }
  NSUInteger head = sorted[0].unsignedIntegerValue;
  NSArray *tail = [sorted subarrayWithRange:NSMakeRange(1, sorted.count - 1)];
  NSIndexSet *indexes = [tail indexesOfObjectsPassingTest:^BOOL(NSNumber *element, NSUInteger idx, BOOL *stop) {
    return element.unsignedIntegerValue % head > 0;
  }];
  NSArray *sieved = [tail objectsAtIndexes:indexes];
  return [@[@(head)] arrayByAddingObjectsFromArray:sieve(sieved)];
}

NSMutableArray *numbers = [NSMutableArray new];
for (NSUInteger i = 2; i <= 100; i++) {
  [numbers addObject: @(i)];
}
NSArray *primes = sieve(numbers);
NSLog(@"%@ -> %@", numbers, primes);

Yes, this code doesn't make sense. Or it makes about as much sense as implementing your computational fluid dynamics simulation in Tcl and then scripting it using C. And yes, the Swift code definitely looks a bit nicer:


func sieve(_ sorted: [Int]) -> [Int] {
	guard !sorted.isEmpty else { return [] }
	let (head, tail) = (sorted[0], sorted[1..<sorted.count])
	return [head] + sieve(tail.filter { $0 % head > 0 })
}

let numbers = Array(2...100)
let primes = sieve(numbers)
print("\(numbers) -> \(primes)")

The first thing I noticed was that I couldn't even tell what this was doing, it took me a bit of close reading of the code to realise that my confusion was due to the fact that although this does compute primes, it isn't actually the Sieve of Eratosthenes!

The hallmark of the Sieve is that it computes primes while avoiding expensive division (and multiplication by anything other than 2, a simple shift left) operations. It does this by allocating an array with the candidate numbers (it starts from 2, because all numbers are multiples of 1) and then eliminating any known non-primes (multiples of some other number) by stepping through the array at a certain step-size. To eliminate all multiples of 2, step through the array with a step size of 2. To eliminate all multiples of 3, step through the array with a step size of 3, and so on. This was great on small microcomputers that did not have multiplication or division operations in hardware.

The algorithm presented here just checks all remaining numbers if they are divisible by the current number using a modulo operation.

But the fact that this is the wrong algorithm wasn't really what caught my eye; rather, it was the unidiomatic and, to me, quite frankly rather bizarre way of using Objective-C: this example uses Objective-C Foundation objects to implement algorithms on integers.

It is not surprising that this code looks weird, because it is. Objective-C is a hybrid language intended for the construction and connection of Software-ICs. You implement the algorithms inside the Software-IC using C and then package it up (and connect) with the dynamic messaging provided by the "Objective" part.

So let's do that! First, we grab a C implementation of the (real) Sieve of Eratosthenes algorithm, for example from the first Duck Duck Go search result here:


#include <stdio.h>

#define LIMIT 1500000 /*size of integers array*/
#define PRIMES 100000 /*size of primes array*/

int main(){
    int i,j,numbers[LIMIT];
    int primes[PRIMES];

    /*fill the array with natural numbers*/
    for (i=0;i<LIMIT;i++){
        numbers[i]=i+2;
    }

    /*sieve the non-primes*/
    for (i=0;i<LIMIT;i++){
        if (numbers[i]!=-1){
            for (j=2*numbers[i]-2;j<LIMIT;j+=numbers[i])
                numbers[j]=-1;
        }
    }

    /*transfer the primes to their own array*/
    j = 0;
    for (i=0;i<LIMIT&&j<PRIMES;i++)
        if (numbers[i]!=-1)
            primes[j++] = numbers[i];

    /*print*/
    for (i=0;i<PRIMES;i++)
        printf("%d\n",primes[i]);

return 0;
}

This is somewhat longer than the previous examples, but it does have the distinct advantage of actually being a Sieve of Eratosthenes :-). The way it works is that it first initialises an array of size N with the integers starting at 2. The Sieve then knocks out the multiples of each remaining number, the "knocking out" being performed by setting the corresponding array elements to -1. Finally, all the entries that weren't knocked out are retrieved from the array. These are the primes, which are then printed.

Since Objective-C is a strict superset of C, we could simply use this code as-is. However, it does lack some conveniences: the array sizes are fixed, the code isn't reusable, and printing is very manual. So using the idea of the Software-IC, we notice that an array of integers is the central data structure here. Apple's Foundation doesn't provide such a beast, but that isn't a big deal, we can easily write one ourselves, or more precisely reuse the one provided in MPWFoundation.

As this is going to be running all over the array, we probably want to break open the encapsulation boundary of the Software-IC and implement the core of the algorithm as an extension, er, category of MPWIntArray.

Another slight improvement is breaking the Sieve algorithm up into smaller, named pieces that tell you what is going on, and using conveniences such as -select: to filter the results. Printing of the contents is also something that is provided.


@import MPWFoundation;

@implementation MPWIntArray(primes)

-(void)knockoutMultiplesOf:(int)n
{
    int *numbers=data;
    if (n > 0){
       for (int j=2*n-2,max=[self count];j<max;j+=n) {
           numbers[j]=-1;
       }
    }
}

-(void)sievePrimes
{
    for (int i=0,max=[self count];i<max;i++){
        [self knockoutMultiplesOf:data[i]];
    }
}

@end

int main(int argc, char *argv[])
{
    int n=argc > 1 ? atoi(argv[1]) : 100000;
    MPWIntArray *numbers=[[@(2) to:@(n-2)] asArray];
    [numbers sievePrimes];

    MPWIntArray *primes = [numbers select:^(int potentialPrime) {
        return (BOOL)(potentialPrime > 0);
    }];
    printf("primes: %s\n",[[primes description] UTF8String]);
    printf("number of primes: %d\n",[primes count]);
}

One of the benefits of the Software-IC or Scripted Components style is performance, otherwise you wouldn't want to use it in supercomputer settings. In this particular case I ran the Swift version against the Software-IC version (I didn't bother with the original Objective-C version). The performance difference is so remarkable that it was actually difficult to find values for the N parameter that gave at least somewhat reasonable/comparable results for both variants:
Sieve times (in seconds):

      N    Swift Sieve    Software-IC Sieve    Ratio
 100000       0.501            0.006            83.5
 200000       1.78             0.007           254.3
 300000       3.78             0.008           472.5
 400000       6.43             0.009           714.4
 500000       9.74             0.010           974.0
I couldn't go much higher than N=500,000 with the Swift version because at N=500,000 it was already using around 8GB of memory and rising quickly. At that point you start to see paging activity on a reasonably busy machine, particularly one with Xcode open, and I didn't want to bother getting my machine "benchmark-clean". At those same N values, the Software-IC version isn't really getting started yet, the execution time being dominated by startup costs and random fluctuations. In fact, launching it with N=1 also takes 6-8ms, the same as for N=500,000.

So yes, using Swift to do computations is certainly a better choice than abusing the middleware part of Objective-C to do those computations. On the other hand, used properly for constructing a Software-IC and then providing an interface to it, Objective-C does quite well. Which doesn't mean we couldn't do better for both parts: yes, somewhat better algorithmics but more importantly better middleware and less muddle about which is which. It shouldn't come as a complete shock that that is the (rough) idea behind Objective-Smalltalk, but more on that later.

UPDATE:
My little post triggered a small twitter discussion that was really much better than the post itself. Although my point was really mostly about the scripted components pattern and how it relates to Objective-C and its Software-ICs, I focused a bit too much on the performance aspect (occupational hazard). It also comes off a bit too much as "Swift vs. Objective-C", so to make that clear: the Objective-C code initially presented is probably slowest, and I am reasonably certain that you can write a C-like Sieve using Swift with roughly comparable performance.
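
For what it's worth, here is a minimal sketch of what a C-like Sieve in Swift could look like (not benchmarked, just to show the shape: a flat array and knock-outs by striding, no division in the loops):

func sievePrimes(upTo n: Int) -> [Int] {
    guard n >= 2 else { return [] }
    var isPrime = [Bool](repeating: true, count: n + 1)
    isPrime[0] = false
    isPrime[1] = false
    for i in 2...n where isPrime[i] {
        // knock out the multiples of i by stepping, just like the C version
        var j = i + i
        while j <= n {
            isPrime[j] = false
            j += i
        }
    }
    return (2...n).filter { isPrime[$0] }
}

print(sievePrimes(upTo: 100))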

For me, the bigger point was that this code was a good illustration of the misunderstanding and misapplication of Objective-C. I am also decidedly not saying that this misapplication is due to ignorance of the people doing the misapplying; I think it has to do with the fact that the tension between scripted components and unified language is something we haven't figured out yet.

Anyway, Helge managed to put the point very succinctly:

Also insightful as always, Joe Groff:

And a little bit of "hey, I didn't know that":

And last but not least: apologies again to Vincent for picking on him, just a little bit :-)

Sunday, December 30, 2018

UIs Are Not Pure Functions of the Model - React.js and Cocoa Side by Side

When I first saw React.js, I had a quick glance and thought that it was cool, they finally figured out how to do a Cocoa-like MVC UI framework in JavaScript.

Of course, they could have just used Cappuccino, but "details". Some more details were that their version of drawRect: was called render() and returned HTML instead of drawing into a graphics context, but that's just the reality of living inside a browser. And of course there was react-canvas.

Overall, though, the use of a true MVC UI framework seemed like a great step forward compared to directly manipulating the DOM on user input, at least for actual applications.

Imagine my surprise when I learned about React.native! It looked like they took what I had assumed to be an implementation detail as the main feature. Looking a little more closely confirmed my suspicions.

Fortunately, the React.js team was so kind as to put their basic ideas in writing: React - Basic Theoretical Concepts (also discussed on HN). So I had a look and, after a bit of reading, decided it would be useful to do a side-by-side comparison with equivalents of those concepts in Cocoa, as far as I understand them.


Transformation (React)

The core premise for React is that UIs are simply a projection of data into a different form of data. The same input gives the same output. A simple pure function.

function NameBox(name) {
  return { fontWeight: 
          'bold', labelContent: name };
}

Transformation (Cocoa)

A core premise of Cocoa, and MVC in general, is that UIs are a projection of data into a different form of data, specifically bits on a screen. The same input gives the same output. A simple method:
 -(void)drawRect:(NSRect)aRect  {
	...
}
Due to the fact that screens and the bits on them are fairly expensive, we use the screen as an accumulator instead of returning the bits from the method.

We do not make the (unwarranted) assumption that this transformation can or should be expressed as a pure function. While that would be nice, there are many reasons why this is not a good idea, some pretty obvious.
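
Sketched minimally in Swift/AppKit (NameBoxView is hypothetical, purely for illustration), the Cocoa counterpart of the NameBox transformation is a view whose draw method projects its model data onto the screen:

import AppKit

class NameBoxView: NSView {
    var name: String = "" {
        didSet { needsDisplay = true }    // mark the view as damaged
    }

    override func draw(_ dirtyRect: NSRect) {
        // project the model data (name) into bits on screen, in bold
        let attributes: [NSAttributedString.Key: Any] = [
            .font: NSFont.boldSystemFont(ofSize: NSFont.systemFontSize)
        ]
        (name as NSString).draw(in: bounds, withAttributes: attributes)
    }
}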

Abstraction (React)

You can't fit a complex UI in a single function though. It is important that UIs can be abstracted into reusable pieces that don't leak their implementation details. Such as calling one function from another.

function FancyUserBox(user) {
  return {
    borderStyle: '1px solid blue',
    childContent: [
      'Name: ',
      NameBox(user.firstName + ' ' +
user.lastName)
    ]
  };
}
{ firstName: 'Sebastian', 
   lastName: 'Markbåge' } ->
{
  borderStyle: '1px solid blue',
  childContent: [
    'Name: ',
    { fontWeight: 'bold',
    labelContent: 'Sebastian Markbåge' }
  ]
};

Abstraction (Cocoa)

Although it is possible to render an entire UI in a single view's drawRect: method, and users of NSOpenGLView tend to do that, it is generally better practice to split complex UIs into reusable pieces that don't leak their implementation details.

Fortunately we have such reusable pieces; they are called objects, and we can group them into classes. Following the MVC naming conventions, we call the objects that represent the UI "views": in Cocoa they are instances of NSView or its subclasses, in CocoaTouch the common superclass is called UIView.

Composition (React)

To achieve truly reusable features, it is not enough to simply reuse leaves and build new containers for them. You also need to be able to build abstractions from the containers that compose other abstractions. The way I think about "composition" is that they're combining two or more different abstractions into a new one.

function FancyBox(children) {
  return {
    borderStyle: '1px solid blue',
    children: children
  };
}

function UserBox(user) {
  return FancyBox([
    'Name: ',
    NameBox(user.firstName + ' ' +
    user.lastName)
  ]);
}

Composition (Cocoa)

To achieve truly reusable features, it is not enough to simply reuse leaves and build new containers for them. You also need to be able to build abstractions from the containers that compose other abstractions. The way I think about "composition" is that they're combining two or more different abstractions into a new one.

Examples of this are NSScrollView, which composes the actual scrollers (themselves composed of different parts) and an NSClipView that provides a window onto the user-provided contentView.

Other examples are NSTableViews coordinating their columns, rows, headers and the system- or user-provided Cells.
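
A small Swift sketch of this kind of composition (assuming contentView is some existing, user-provided view; the makeScrollable helper is just illustrative): the scroll view composes scrollers and a clip view around it, and the result is itself just another view.

import AppKit

func makeScrollable(_ contentView: NSView) -> NSScrollView {
    let scrollView = NSScrollView()
    scrollView.documentView = contentView      // an NSClipView sits in between
    scrollView.hasVerticalScroller = true
    scrollView.hasHorizontalScroller = true
    return scrollView
}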

State (React)

A UI is NOT simply a replication of server / business logic state. There is actually a lot of state that is specific to an exact projection and not others. For example, if you start typing in a text field. That may or may not be replicated to other tabs or to your mobile device. Scroll position is a typical example that you almost never want to replicate across multiple projections.

We tend to prefer our data model to be immutable. We thread functions through that can update state as a single atom at the top.

function FancyNameBox(user, likes,
  onClick) {
  return FancyBox([
    'Name: ', NameBox(user.firstName + ' ' +
  user.lastName),
    'Likes: ', LikeBox(likes),
    LikeButton(onClick)
  ]);
}

// Implementation Details

var likes = 0;
function addOneMoreLike() {
  likes++;
  rerender();
}

// Init

FancyNameBox(
  { firstName: 'Sebastian', 
   lastName: 'Markbåge' },
  likes,
  addOneMoreLike
);

NOTE: These examples use side-effects to update state. My actual mental model of this is that they return the next version of state during an "update" pass. It was simpler to explain without that but we'll want to change these examples in the future.

State (Cocoa)

A UI is NOT simply a replication of server / business logic state. There is actually a lot of state that is specific to an exact projection and not others. For example, if you start typing in a text field. That may or may not be replicated to other tabs or to your mobile device. Scroll position is a typical example that you almost never want to replicate across multiple projections.

Fortunately, the view objects we are using already provide exactly this UI-specific state, so yay objects.

Memoization (React)

Calling the same function over and over again is wasteful if we know that the function is pure. We can create a memoized version of a function that keeps track of the last argument and last result. That way we don't have to reexecute it if we keep using the same value.

function memoize(fn) {
  var cachedArg;
  var cachedResult;
  return function(arg) {
    if (cachedArg === arg) {
      return cachedResult;
    }
    cachedArg = arg;
    cachedResult = fn(arg);
    return cachedResult;
  };
}

var MemoizedNameBox = memoize(NameBox);

function NameAndAgeBox(user, currentTime)
 {
  return FancyBox([
    'Name: ',
    MemoizedNameBox(user.firstName +
     ' ' + user.lastName),
    'Age in milliseconds: ',
    currentTime - user.dateOfBirth
  ]);
}

Memoization (Cocoa)

Calling the same method over and over again is wasteful.

So we don't do that.

First, we did not start with the obviously incorrect premise that the UI is a simple "pure" function of the model. Except for games, UIs are actually very stable, more stable than the model. You have chrome, viewers, tools etc. What is a (somewhat) pure mapping from the model is the data that is displayed in the UI, but not the entire UI.

So if we don't make the incorrect assumption that UIs are unstable (pure functions of model), then we don't have to expend additional and fragile effort to re-create that necessary stability.

In terms of optimizing output, this is also handled within the model, rather than in opposition to it: views are stable, so we keep a note of which views have collected damage and need to be redrawn. The application event loop coalesces these damage rectangles and initiates an optimized operation: it only redraws views whose bounds intersect the damaged region, and also passes the rectangle(s) to the drawRect: method. (That's why it's called drawRect:.)
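
A minimal Swift sketch of that protocol from the view's side (DamageAwareView is hypothetical): report damage with setNeedsDisplay(_:), and restrict drawing to whatever intersects the dirty rect the framework passes in.

import AppKit

class DamageAwareView: NSView {
    var elementFrames: [NSRect] = []

    func modelChanged(at frame: NSRect) {
        setNeedsDisplay(frame)    // report damage: only this rectangle is dirty
    }

    override func draw(_ dirtyRect: NSRect) {
        // the framework hands us the coalesced damage rectangle,
        // so we only redraw the elements that intersect it
        NSColor.systemBlue.setFill()
        for frame in elementFrames where frame.intersects(dirtyRect) {
            NSBezierPath(rect: frame).fill()
        }
    }
}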

Lists (React)

Most UIs are some form of lists that then produce multiple different values for each item in the list. This creates a natural hierarchy.

To manage the state for each item in a list we can create a Map that holds the state for a particular item.

function UserList(users, likesPerUser,
 updateUserLikes) {
  return users.map(user => FancyNameBox(
    user,
    likesPerUser.get(user.id),
    () => updateUserLikes(user.id, 
likesPerUser.get(user.id) + 1)
  ));
}

var likesPerUser = new Map();
function updateUserLikes(id, likeCount) {
  likesPerUser.set(id, likeCount);
  rerender();
}

UserList(data.users, likesPerUser,
            updateUserLikes);

NOTE: We now have multiple different arguments passed to FancyNameBox. That breaks our memoization because we can only remember one value at a time. More on that below.

Lists (Cocoa)

Most UIs are some form of lists that then produce multiple different values for each item in the list. This creates a natural hierarchy.

Fortunately there is nothing to do here; the basic hierarchical view model already takes care of it. There are specific view classes for lists, but there is nothing special about them in terms of the conceptual or implementation model.
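
For completeness, a minimal Swift sketch of the Cocoa list case (ArtistListDataSource and the hard-coded names are purely illustrative): an NSTableView simply asks a data source object for its row values, and the view hierarchy handles the rest.

import AppKit

class ArtistListDataSource: NSObject, NSTableViewDataSource {
    var artists = ["AC/DC", "Accept", "Aerosmith"]

    func numberOfRows(in tableView: NSTableView) -> Int {
        return artists.count
    }

    func tableView(_ tableView: NSTableView,
                   objectValueFor tableColumn: NSTableColumn?,
                   row: Int) -> Any? {
        return artists[row]
    }
}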

Continuations (React)

Unfortunately, since there are so many lists of lists all over the place in UIs, it becomes quite a lot of boilerplate to manage that explicitly.

We can move some of this boilerplate out of our critical business logic by deferring execution of a function. For example, by using "currying" (bind in JavaScript). Then we pass the state through from outside our core functions that are now free of boilerplate.

This isn't reducing boilerplate but is at least moving it out of the critical business logic.

function FancyUserList(users) {
  return FancyBox(
    UserList.bind(null, users)
  );
}

const box = FancyUserList(data.users);
const resolvedChildren =
     box.children(likesPerUser,
        updateUserLikes);
const resolvedBox = {
  ...box,
  children: resolvedChildren
};

Continuations (Cocoa)

Fortunately, it doesn't matter how many lists of lists there are all over the place in UIs, since our composition mechanism actually works for this use case.

State Map (React)

We know from earlier that once we see repeated patterns we can use composition to avoid reimplementing the same pattern over and over again. We can move the logic of extracting and passing state to a low-level function that we reuse a lot.

function FancyBoxWithState(
  children,
  stateMap,
  updateState
) {
  return FancyBox(
    children.map(child => child.continuation(
      stateMap.get(child.key),
      updateState
    ))
  );
}

function UserList(users) {
  return users.map(user => ({
    continuation: FancyNameBox.bind(null, user),
    key: user.id
  }));
}

function FancyUserList(users) {
  return FancyBoxWithState.bind(null,
    UserList(users)
  );
}

const continuation = FancyUserList(data.users);
continuation(likesPerUser, updateUserLikes);

State Map

Don't need it.

(Just how many distinct mechanisms do we have now for re-introducing state? At what point do we revisit our initial premise that UIs are pure functions of the model??)

Memoization Map

Once we want to memoize multiple items in a list, memoization becomes much harder. You have to figure out some complex caching algorithm that balances memory usage with frequency.

Luckily, UIs tend to be fairly stable in the same position. The same position in the tree gets the same value every time. This tree position turns out to be a really useful basis for memoization.

We can use the same trick we used for state and pass a memoization cache through the composable function.

function memoize(fn) {
  return function(arg, memoizationCache) {
    if (memoizationCache.arg === arg) {
      return memoizationCache.result;
    }
    const result = fn(arg);
    memoizationCache.arg = arg;
    memoizationCache.result = result;
    return result;
  };
}

function FancyBoxWithState(
  children,
  stateMap,
  updateState,
  memoizationCache
) {
  return FancyBox(
    children.map(child =>
    child.continuation(
      stateMap.get(child.key),
      updateState,
      memoizationCache.get(child.key)
    ))
  );
}

const MemoizedFancyNameBox = memoize(FancyNameBox);

Memoization Map

Huh?

Seriously?

Algebraic Effects

It turns out that it is kind of a PITA to pass every little value you might need through several levels of abstractions. It is kind of nice to sometimes have a shortcut to pass things between two abstractions without involving the intermediates. In React we call this "context".

Sometimes the data dependencies don't neatly follow the abstraction tree. For example, in layout algorithms you need to know something about the size of your children before you can completely fulfill their position.

Now, this example is a bit "out there". I'll use Algebraic Effects as proposed for ECMAScript. If you're familiar with functional programming, they're avoiding the intermediate ceremony imposed by monads.

function ThemeBorderColorRequest() { }

function FancyBox(children) {
  const color = raise new ThemeBorderColorRequest();
  return {
    borderWidth: '1px',
    borderColor: color,
    children: children
  };
}

function BlueTheme(children) {
  return try {
    children();
  } catch effect ThemeBorderColorRequest -> [, continuation] {
    continuation('blue');
  }
}

function App(data) {
  return BlueTheme(
    FancyUserList.bind(null, data.users)
  );
}

Algebraic Effects

It turns out that it is kind of a PITA to pass every little value you might need through several levels of abstractions....

So don't do that.

This is another reason why it's advantageous to have a stable hierarchy of stateful objects representing your UI. If you need more context, you just ask around: ask your parent, ask your siblings, ask your children; they are all present. Again, no special magic needed.
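A minimal sketch of the "ask around" part, with a made-up ThemeHost protocol; superview and conformsToProtocol: are standard Cocoa:

#import <Cocoa/Cocoa.h>

// Made-up protocol for anything in the hierarchy that can answer
// theming questions.
@protocol ThemeHost
- (NSColor *)themeBorderColor;
@end

@interface FancyBoxView : NSView
@end

@implementation FancyBoxView

- (NSColor *)borderColor
{
    // Walk up the stable, always-present view hierarchy until some
    // ancestor can answer; otherwise fall back to a default.
    for (NSView *ancestor = self.superview; ancestor != nil; ancestor = ancestor.superview) {
        if ([ancestor conformsToProtocol:@protocol(ThemeHost)]) {
            return [(id<ThemeHost>)ancestor themeBorderColor];
        }
    }
    return [NSColor blackColor];
}

@end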


So that was the comparison. I have to apologise for getting somewhat less detail-oriented near the end, but the level of complexity just started to overwhelm me. Add to that, for React.native, the joy of having to duplicate the entire hierarchy of view classes, just to have the "components" generate not-quite-temporary temporary views that then generate layers that draw the actual UI. Maybe there's one too many layers. Or two.

The idea of UI being a pure function of the model seems so obviously incorrect, and leads to such a plethora of problems, that it is a bit puzzling how one could come up with it in the first place, and certainly how one would stick with it in face of the avalanche of problems that just keeps coming. A part of this is certainly the current unthinking infatuation with functional programming ideas. These ideas are broadly good, but not nearly as widely or universally applicable as some of their more starry-eyed proponents propose (I hesitate to use the word "think" in this context).

Another factor is the usefulness of immediate-mode graphics compared to retained-mode graphics. This is actually an interesting topic by itself, IMHO, one of those eternal circles where we move from a fully retained model such as the early GKS or later PHIGS to immediate drawing models such as PostScript, Quartz, and OpenGL, only to then re-invent the retained model (sort of) with things like CoreAnimation, Avalon and, of course, the DOM (and round and round we go). Cocoa's model represents a hybrid approach, where you can mix and match object-oriented and immediate-mode drawing as you see fit. But more on that later.
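As a minimal sketch of that mix (the BadgeView class is made up): the view and its optional backing layer are retained state sitting in the hierarchy, while drawRect: is plain immediate-mode drawing.

#import <Cocoa/Cocoa.h>

@interface BadgeView : NSView
@end

@implementation BadgeView

- (instancetype)initWithFrame:(NSRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        self.wantsLayer = YES;   // retained: a CALayer backs this view
    }
    return self;
}

- (void)drawRect:(NSRect)dirtyRect
{
    // immediate-mode: no retained scene graph for the circle, we just
    // issue drawing commands whenever we're asked to.
    NSBezierPath *circle =
        [NSBezierPath bezierPathWithOvalInRect:NSInsetRect(self.bounds, 4, 4)];
    [[NSColor redColor] setFill];
    [circle fill];
}

@end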

Last but not least, it's probably not entirely coincidental that this idea was hatched for Facebook and Instagram feed applications. Similar to games, these sorts of apps have displays that really are determined mostly by their "model", the stream of data coming from their feed. I am not convinced that the feed-application case generalizes well to applications in general.

Anyway, for me, this whole exercise has actually motivated me to start using react.js a little. I still think that Cappuccino is probably the better, less confused framework, but it helps to know what the quasi-mainstream is doing. I also think that, despite all the flaws, react.js and react.native are currently eating native development's lunch. And that's certainly interesting. Stay tuned!

UPDATE:
Dan Abramov responds:

I think you’re overfocusing on the “pure” wording and theoretical definitions over practical differences.
To elaborate a bit, React components aren’t always “pure” in strict FP sense of the word. They’re also not always functions (although we’re adding a stateful function API as a preferred alternative to classes soon). Support for local state and side effects is absolutely a core feature of React components and not something we avoid for “purity”.

I added a PR to remove the misleading "pure" from the concepts page.

Sunday, December 23, 2018

A Minimal Test Runner

A long time ago, when I was working on MPWTest, "The simplest Objective-C Unit Test Framework that could possibly work...", I had a brief chat with Kent Beck about it, and one of the things he said was that everyone should build their own unit test "framework".

Why the scare quotes?

If your testing framework is actually a framework, it's probably too big. I recently started porting MPWFoundation and Objective-Smalltalk to GNUstep again, in order to get it running In the Cloud™. To see how it's going, it's helpful to run the tests.

Initially, I needed to test some compiler issues with such modern amenities as keyed subscripting of dictionaries:


#import <Foundation/Foundation.h>
#import <MPWFoundation/MPWFoundation.h>  // for MPWDictStore; exact header name assumed

int main( int argc, char *argv[] ) {
  MPWDictStore *a=[MPWDictStore store];
  a[@"hi"]=@"there";
  NSLog(@"hi: %@",a[@"hi"]);
  return 0;
}

Once that was resolved with the help of Alex Denisov, I wanted to minimally run some tests, but the idea of first getting all of MPWTest to run wasn't very appealing. So instead I just did the simplest thing that could possibly work:
static void runTests()
{
  int tests=0;
  int success=0;
  int failure=0;
  NSArray *classes=@[
    @"MPWDictStore",
    @"MPWReferenceTests",
  ];

  for (NSString *className in classes ) {
    id testClass=NSClassFromString( className );
    NSArray *testNames=[testClass testSelectors];
    for ( NSString *testName in testNames ) {
      SEL testSel=NSSelectorFromString( testName );
      @try {
        tests++;
        [testClass performSelector:testSel];
        NSLog(@"%@:%@ -- success",className,testName);
        success++;
      } @catch (id error)  {
        NSLog(@"%@:%@ == failure: %@",className,testName,error);
        failure++;
      }
    }

  }
  printf("\033[91;3%dmtests: %d total, %d successes %d failures\033[0m\n",
         failure>0 ? 1:2,tests,success,failure);
}

That's it, my minimal test runner, with a hard-coded list of classes to test. In a sense, that is the entire "test framework", the rest just being conventions followed by classes that wish to be tested.
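For illustration, here is a sketch of a hypothetical class that follows those conventions: a +testSelectors class method naming class-side test methods, and test methods that raise an exception on failure. The class name and the test itself are made up; the runner above would pick it up once its name is added to the hard-coded list.

#import <Foundation/Foundation.h>
#import <MPWFoundation/MPWFoundation.h>  // for MPWDictStore; exact header name assumed

// Hypothetical test class; only the conventions matter.
@interface DictStoreTests : NSObject
@end

@implementation DictStoreTests

// The runner asks for the names of the test methods...
+ (NSArray *)testSelectors
{
    return @[ @"testCanStoreAndRetrieveValue" ];
}

// ...and a test signals failure by raising an exception.
+ (void)testCanStoreAndRetrieveValue
{
    MPWDictStore *store = [MPWDictStore store];
    store[@"hi"] = @"there";
    if (![store[@"hi"] isEqual:@"there"]) {
        [NSException raise:@"TestFailure"
                    format:@"expected 'there', got '%@'", store[@"hi"]];
    }
}

@end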

And of course MPWTest's slogan was a bit...optimistic.