
Saturday, December 14, 2019

The Four Stages of Objective-Smalltalk

One of the features that can be confusing about Objective-Smalltalk is that it actually has several parts, each significant on its own. People will frequently focus on just one of these (which is fine!) without realising that the other parts also exist, which is unfortunate, as they are all valuable and complement each other. In fact, they can be described as stages that are (logically) built on top of each other.

1. WebScript 2 / "Shasta"

Objective-C has always had great integration with other languages, particularly with a plethora of scripting languages, from Tcl to Python and Ruby to Lisp and Scheme and their variants, etc. This is due not just to the fact that the runtime is dynamic, but also to the fact that it is simple and C-based: not just implemented in C, but a peer to C.

However, all of these suffer from having two somewhat disparate languages, with competing object models, runtimes, storage strategies etc. One language that did not have these issues was WebScript, part of WebObjects and essentially Objective-C-Script. The language was interpreted, yet a true peer: you could even implement categories on existing Objective-C classes, and it was so syntactically compatible that you could often just copy and paste code between the two. Close to the ideal scripting language for that environment.

However, the fact that Objective-C is already a hybrid with some ugly compromises means that these compromises often no longer make sense at all in the WebScript environment. For example, Objective-C strings need an added "@" character because plain double quotes are already taken by C strings, but there are no C strings in WebScript. Primitive types like int can be declared, but the values are really objects; the declaration is a dummy, a NOP. Square brackets for message sends are needed in Objective-C to distinguish messages from the rest of the C syntax, but that's also irrelevant in WebScript. And so on.
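
To make the contrast concrete, here is a small illustrative snippet (mine, not lifted from WebScript documentation) of which parts of Objective-C syntax exist only so the language can coexist with C:

    // Objective-C: the extra markers exist only to carve out space next to C.
    NSString *greeting = @"Hello";          // '@' because "Hello" is already taken by C strings
    NSUInteger len = [greeting length];     // brackets keep message sends apart from C expressions
    NSNumber *answer = @42;                 // boxing, because a bare 42 is a C int

    // In a WebScript-style peer language without C underneath, none of these
    // markers earn their keep; the same statements could simply read
    // (illustrative, not exact WebScript syntax):
    //
    //   greeting := 'Hello'.
    //   len := greeting length.
    //   answer := 42.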

So the first stage of Objective-Smalltalk was/is to have all the good aspects of WebScript, but without the syntactic weirdness WebScript needed in order to match the syntactic weirdness of Objective-C, which in turn was only needed because Objective-C was jammed into C. I am not the only one who figured out the obvious fact that such a language is, essentially, a variant of Smalltalk, and I do believe this pretty much matches what Brent Simmons called Shasta.

Implementation-wise, this works very similarly to WebScript in that everything in the language is an object and gets converted to/from primitives when sending or receiving messages as needed.
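
Roughly, and glossing over a lot, the boundary can be sketched with NSInvocation: the interpreter only ever holds objects, and unboxes or re-boxes them according to the type encoding of the method being called. The helper below is an invented illustration that only handles int, not WebScript's or Objective-Smalltalk's actual implementation:

    #import <Foundation/Foundation.h>

    // Invented sketch: the interpreter holds only objects (e.g. an NSNumber) and
    // converts to/from the primitive types the target method actually declares.
    static id sendWithBoxedArgument(id target, SEL selector, id boxedArg)
    {
        NSMethodSignature *sig = [target methodSignatureForSelector:selector];
        NSInvocation *inv = [NSInvocation invocationWithMethodSignature:sig];
        [inv setTarget:target];
        [inv setSelector:selector];

        const char *argType = [sig getArgumentTypeAtIndex:2];  // 0 = self, 1 = _cmd
        if (argType[0] == 'i') {                                // method wants a C int
            int unboxed = [boxedArg intValue];
            [inv setArgument:&unboxed atIndex:2];
        } else {                                                // method takes an object as-is
            [inv setArgument:&boxedArg atIndex:2];
        }
        [inv invoke];

        if (sig.methodReturnLength == 0) {                      // void return
            return nil;
        }
        if (sig.methodReturnType[0] == 'i') {                   // re-box a primitive result
            int result = 0;
            [inv getReturnValue:&result];
            return @(result);
        }
        __unsafe_unretained id objectResult = nil;
        [inv getReturnValue:&objectResult];
        return objectResult;
    }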

This is great for a much more interactive programming model than what we have/had (and the one we have seems to be deteriorating as we speak).

And not just for isolated fragments, but for interacting with and tweaking full applications as they are running.

2. Objective-C without the C

Of course, getting rid of the (syntactic) weirdnesses of Objective-C in our scripting language means that it is no longer (syntactically) compatible with Objective-C. Which is a shame.

It is a shame because this syntactic equivalence between Objective-C and WebScript meant that you could easily move code between them. Have a script that has become stable and you want to reuse it? Copy and paste that code into an Objective-C file and you're good to go. Need it faster? Same. Have some Objective-C code that you want to explore, create variants of etc? Paste it into WebScript. Such a smooth integration between scripting and "programming" is rare and valuable.

The "obvious" solution is to have a native AOT-compiled version of this scripting language and use it to replace Objective-C. Many if not all other scripting languages have struggled mightily with becoming a compiled language, either not getting there at all or requiring JIT compilers of enormous size, complexity, engineering effort and attack surface.

Since the semantic model of our scripting language is just Objective-C, we know that we can AOT-compile this language with a fairly straightforward compiler, probably a lot simpler than even the C/Objective-C compilers currently used, and plugging into the existing toolchain. Which is nice.

The idea seems so obvious, but apparently it wasn't.

Everything so far would, taken together, make for a really nice replacement for Objective-C with a much more productive and, let's face it, fun developer experience. However, even given the advantages of a simpler language, smoothly integrated scripting/programming and instant builds, it's not really clear that yet another OO language is sufficient; the Étoilé project and the eero language, for example, never went anywhere, despite both being very nice.

3. Beyond just Objects: Architecture Oriented Programming

Ever since my Diplomarbeit (master's thesis), Approaches to Composition and Refinement in Object-Oriented Design, back in 1997, I've been interested in Software Architecture and Architecture Description Languages (ADLs) as a way of overcoming the problems we have when constructing larger pieces of software.

One thing I noticed very early is that the elements of an ADL closely match up with, and generalise, the elements of a programming language, for example an object-oriented language: object generalises to component, message to connector. So it seemed that any specific programming language is just a specialisation or instantiation of a more general "architecture language".
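
A purely hypothetical Objective-C sketch of that generalisation (the names are mine, invented for illustration, and not Objective-Smalltalk's actual metamodel): a plain message send becomes just one kind of connector among many.

    #import <Foundation/Foundation.h>

    // Hypothetical illustration only: a "connector" abstraction of which an
    // ordinary message send is just one concrete instance.
    @protocol Connector <NSObject>
    - (id)transfer:(id)value toTarget:(id)target;
    @end

    // The familiar call/return case: the connector performs a message send.
    // Other implementations could instead write to a pipe, a file, or a URL.
    @interface MessageSendConnector : NSObject <Connector>
    @property (nonatomic, assign) SEL selector;
    @end

    @implementation MessageSendConnector
    - (id)transfer:(id)value toTarget:(id)target
    {
        return [target performSelector:self.selector withObject:value];
    }
    @end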

To explore this idea, I needed a language that was amenable to experimentation, by being both malleable enough to allow a metasystem that can abstract away from objects and messages, and simple/small enough to make experimentation feasible. A simple variant of Smalltalk would do the trick. More mature variants tend to push you towards building with what is there, rather than abstracting from it; they "...eat their young" (Alan Kay).

So Objective-Smalltalk fits the bill perfectly as a substrate for architecture-oriented programming. In fact, its being built on/with Objective-C, which came into being largely to connect the C/Unix world with the Smalltalk world, means it is already off to a good start.

What to build? How about not reinventing the wheel and simply picking the (arguably) 3 most successful/popular architectural styles:

  • OO (subsuming the other call/return styles)
  • Unix Pipes and Filters
  • REST

Again, surprisingly, at least to me, even these specific styles align reasonably well with the elements we already have in a programming language. OO is already well developed in (Objective-)Smalltalk; dataflow (Pipes and Filters) maps to Smalltalk's assignment operator, which needed to be made polymorphic anyway; and REST at least partially maps to non-message identifiers, which are also not polymorphic in Smalltalk.

Having now built all of these abstractions into Objective-Smalltalk, I have to admit, again to my surprise, how well they work and how well they work together. Yes, it was my thesis, and yes, I can now see confirmation bias everywhere, but it was also a bit of a long shot.

4. Architecture Oriented Metaprogramming

The architectural styles described above are implemented in frameworks, and their interfaces are hard-coded into the language implementation. However, with three examples, it should now be feasible to create linguistic support for defining architectural styles in the language itself, allowing users to define and refine their own architectural styles. This is ongoing work.

What now?

One of the key takeaways from this is that each stage is already quite useful, and probably a worthy project all by itself; it just gets Even Better™ with the addition of later stages. Another is that I need to get back to getting stage 2 ready, as it wasn't actually needed for stage 3, at least not initially.

Wednesday, October 7, 2015

Jitterdämmerung

So, Windows 10 has just been released, and with it the Ahead Of Time (AOT) compilation feature .NET Native. Google also recently introduced ART for Android, and I just discovered that Oracle is planning an AOT compiler for mainstream Java.

With Apple doggedly sticking to Ahead of Time compilation for Objective-C and now their new Swift, JavaScript is pretty much the last mainstream hold-out for JIT technology. And even in JavaScript, the state of the art for achieving maximum performance appears to be asm.js, which largely eschews JIT techniques: it acts as object code in the browser, represented in JavaScript, for other languages to be AOT-compiled into.

I think this shift away from JITs is not a fluke but was inevitable, in fact the big question is why it has taken so long (probably industry inertia). The benefits were always less than advertised, the costs higher than anticipated. More importantly though, the inherent performance characteristics of JIT compilers don't match up well with most real world systems, and the shift to mobile has only made that discrepancy worse. Although JITs are not going to go away completely, they are fading into the sunset of a well-deserved retirement.

Advantages of JITs less than promised

I remember reading the issue of the IBM Systems Journal on Java Technology, back in 2000 I think. It had a bunch of research articles describing super amazing VM technology with world-beating performance numbers. It also had a single real-world report, from IBM's San Francisco project. In the real world, it turned out, performance was a bit more "mixed", as they say. In other words: it was terrible, and they had to do an incredible amount of work for the system to be even remotely usable.

There was also the experience of the New Typesetting System (NTS), a rewrite of TeX in Java. Performance was atrocious, the team took it with humor and chose a snail as their logo.

[Image: NTS at full speed]

One of the reasons for this less than stellar performance was that JITs were invented for highly dynamic languages such as Smalltalk and Self. In fact, the Java HotSpot VM can be traced in a direct line back to Self via the Strongtalk system, whose creator, Animorphic Systems, was purchased by Sun in order to acquire the VM technology.

However, it turns out that one of the biggest benefits of JIT compilers in dynamic languages is figuring out the actual types of variables. This is a problem that is theoretically intractable (equivalent to the halting problem) and practically fiendishly difficult to do at compile time for a dynamic language. It is trivial to do at runtime: all you need to do is record the actual types as they fly by. If you are doing Polymorphic Inline Caching, just look at the contents of the caches after a while. It is also largely trivial to do for a statically typed language at compile time, because the types are right there in the source code!
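
A sketch of what "recording the types as they fly by" amounts to, written here against the Objective-C runtime; the struct and function are invented for illustration and are much simpler than a real polymorphic inline cache, which would also cache the looked-up method:

    #import <objc/runtime.h>

    // Invented sketch: per call site, remember which receiver classes actually
    // showed up at runtime. After a warm-up period this tells the compiler,
    // with no static analysis at all, which concrete types the site receives.
    typedef struct {
        Class    seen[4];   // receiver classes observed at this call site
        unsigned hits[4];   // how often each one occurred
        unsigned used;
    } SendSiteProfile;

    static void recordReceiver(SendSiteProfile *site, id receiver)
    {
        Class cls = object_getClass(receiver);
        for (unsigned i = 0; i < site->used; i++) {
            if (site->seen[i] == cls) {
                site->hits[i]++;
                return;
            }
        }
        if (site->used < 4) {              // a handful of slots is usually enough
            site->seen[site->used] = cls;
            site->hits[site->used] = 1;
            site->used++;
        }
    }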

So gathering information at runtime simply isn't as much of a benefit for languages such as C# and Java as it was for Self and Smalltalk.

Significant Costs

The runtime costs of a JIT are significant. The obvious cost is that the compiler has to be run alongside the program being executed, so time spent compiling is not available for executing. Apart from this direct cost, it also means that your compiler is limited in the types of analyses and optimizations it can do. The impact is particularly severe on startup, so short-lived programs such as the TeX/NTS example above are severely impacted and can often run slower overall than interpreted byte-code.

In order to mitigate this, you start having to have multiple compilers and heuristics for when to use which compilers. In other words: complexity increases dramatically, and you have only mitigated the problem somewhat, not solved it.

A less obvious cost is an increase in virtual memory pressure, because the code pages created by the JIT are "dirty", whereas executable pages mapped in from disk are clean. Dirty pages have to be written to disk when memory is needed; clean pages can simply be unmapped and paged back in later. On devices without a swap file, like most smartphones, dirty vs. clean can mean the difference between a few pages being dropped and re-read on demand and a process getting killed by the OS.
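
The difference can be sketched in terms of the underlying mmap() calls; the path is hypothetical and error handling is omitted:

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    void mapCodeExample(void)
    {
        size_t size = 1 << 20;

        // AOT case: code is mapped read-only/executable straight from a file on
        // disk. Under memory pressure the kernel can simply drop these clean
        // pages and re-read them from the file later.
        int fd = open("/usr/lib/libSomething.dylib", O_RDONLY);    // hypothetical path
        void *clean = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_PRIVATE, fd, 0);

        // JIT case: the runtime allocates anonymous memory and writes machine
        // code into it. These pages have no backing file, so they are dirty:
        // reclaiming them means writing them to swap (or, with no swap file,
        // killing the process).
        void *dirty = mmap(NULL, size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANON, -1, 0);
        // ... emit code into `dirty`, then mprotect() it to PROT_READ | PROT_EXEC ...

        munmap(clean, size);
        munmap(dirty, size);
        close(fd);
    }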

VM and cache pressure is generally considered a much more severe performance problem than a little extra CPU use, and often even than a lot of extra CPU use. Most CPUs today can multiply numbers in a single cycle, yet a single main memory access has the CPU stalled for a hundred cycles or more.

In fact, it could very well be that keeping non-performance-critical code as compact interpreted byte-code is actually better than turning it into native code, as long as the code density is higher.

Security risks

Having memory that is both writable and executable is a security risk. And forbidden on iOS, for example. The only exception is Apple's own JavaScript engine, so on iOS you simply can't run your own JITs.

Machines got faster

On the low end of performance, machines have gotten so fast that pure interpreters are often fast enough for many tasks. Python is used for many tasks as-is, and PyPy isn't really taking the Python world by storm. Why? I am guessing it's because on today's machines, plain old interpreted Python is often fast enough. The same goes for Ruby: it's almost comically slow (in my measurements, serving HTTP via Sinatra was almost 100 times slower than using libµhttp), yet even that is still 400 requests per second, exceeding the needs of the vast majority of web-sites, including my own blog, which until recently didn't see 400 visitors per day.

The first JIT I am aware of was Peter Deutsch's PS (Portable Smalltalk), but only about a decade later Smalltalk was fine doing multi-media with just a byte-code interpreter. And native primitives.

Successful hybrids

The technique used by Squeak (an interpreter plus C primitives for the heavy lifting, for example for multi-media or cryptography) has been applied successfully in many different cases. This hybrid approach was described in detail by John Ousterhout in Scripting: Higher-Level Programming for the 21st Century: high-level "scripting" languages are used to glue together high-performance code written in "systems" languages. Examples include Numpy, but the ones I found most impressive were the "computational steering" systems apparently used in supercomputing facilities such as Oak Ridge National Laboratory. Written in Tcl.

What's interesting with these hybrids is that JITs are being squeezed out at both ends: at the "scripting" level they are superfluous, at the "systems" level they are not sufficient. And I don't believe that this idea is only applicable to specialized domains, though there it is most noticeable. In fact, it seems to be an almost direct manifestation of the observations in Knuth's famous(ly misquoted) quip about "Premature Optimization":

Experience has shown (see [46], [51]) that most of the running time in non-IO-bound programs is concentrated in about 3 % of the source text.

[..] The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12 % improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3 %. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail. After working with such tools for seven years, I've become convinced that all compilers written from now on should be designed to provide all programmers with feedback indicating what parts of their programs are costing the most; indeed, this feedback should be supplied automatically unless it has been specifically turned off.

[..]

(Most programs are probably only run once; and I suppose in such cases we needn't be too fussy about even the structure, much less the efficiency, as long as we are happy with the answers.) When efficiencies do matter, however, the good news is that usually only a very small fraction of the code is significantly involved.

For the 97%, a scripting language is often sufficient, whereas the critical 3% are both critical enough as well as small and isolated enough that hand-tuning is possible and worthwhile.

I agree with Ousterhout's critics who say that the split into scripting languages and systems languages is arbitrary; Objective-C, for example, combines the two approaches in a single language, though one that is very much a hybrid itself. The "Objective" part is very similar to a scripting language in both performance and ease/speed of development, despite the fact that it is compiled ahead of time; the C part does the heavy lifting of a systems language. Alas, Apple has worked continuously and fairly successfully at destroying both of these aspects and turning the language into a bad caricature of Java. However, although the split is arbitrary, the competing and diverging requirements are real; see Erlang's split into a functional language in the small and an object-oriented language in the large.
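
That division of labour fits into a single method. The class below is invented for illustration: the message-sending surface reads like a scripting language, while the hot inner loop is plain C working on raw memory.

    #import <Foundation/Foundation.h>

    // Invented example of the Objective-C hybrid: objects and messages as the
    // convenient "scripting" surface, a plain C loop for the heavy lifting.
    @interface ByteCounter : NSObject
    - (NSUInteger)countOccurrencesOf:(uint8_t)value inData:(NSData *)data;
    @end

    @implementation ByteCounter
    - (NSUInteger)countOccurrencesOf:(uint8_t)value inData:(NSData *)data
    {
        const uint8_t *bytes = data.bytes;          // cross the object boundary once...
        NSUInteger length = data.length;
        NSUInteger count = 0;
        for (NSUInteger i = 0; i < length; i++) {   // ...then stay in C for the hot loop
            if (bytes[i] == value) {
                count++;
            }
        }
        return count;
    }
    @end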

Unpredictable performance model

The biggest problem I have with JITs is that their performance model is extremely unpredictable. First, you don't know when optimizations are going to kick in, or when extra compilation is going to make you slower. Second, predicting which bits of code will actually be optimized well is also hard and a moving target. Combine these two factors, and you get a performance model that is somewhere between unpredictable and intractable, and therefore at best statistical: on average, your code will be faster. Probably.

While there may be domains where this is acceptable, most of the domains where performance matters at all are not of this kind; they tend to be (soft) real time. In real time systems, average performance matters not at all; predictably meeting your deadline does. As an example, delivering 80 frames in 1 ms each and 20 frames in 20 ms each (480 ms total) means failure: you missed your 60 fps target 20% of the time. Delivering 100 frames in 10 ms each (1000 ms total) means success: you met your 60 fps target 100% of the time. And this despite the fact that the first scenario is more than twice as fast on average.

I really learned this in the 90s, when I was doing pre-press work and delivering highly optimized RIP and PostScript processing software. I was stunned when I heard about daily newspapers switching to pre-rendered, pre-screened bitmap images for their workflows. This is the most inefficient format imaginable for pre-press work, with each page typically taking around 140 MB of storage uncompressed, whereas the PostScript source would typically be between 1/10th and 1/1000th of that size. (And at the time, 140 MB was a lot even for disk storage, never mind RAM or network capacity.)

The advantage of pre-rendered bitmaps is that your average case is also your worst case. Once you have provisioned your infrastructure to handle this case, you know that your tech stack will be able to deliver your newspaper on time, no matter what the content. With PostScript (and later PDF) workflows, your average case is much better (and your best case ridiculously so), but you simply don't get any bonus points for delivering your newspaper early. You just get problems if it's late, and you are not allowed to average the two.

Eve could survive and be useful even if it were never faster than, say, Excel. The Eve IDE, on the other hand, can't afford to miss a frame paint. That means Imp must be not just fast but predictable - the nemesis of the SufficientlySmartCompiler.

I also saw this effect in play with Objective-C and C++ projects: despite the fact that Objective-C's primitive operations are generally more expensive, projects written in Objective-C often had better performance than comparable C++ projects, because Objective-C's performance model was so much simpler, more obvious and more predictable.

When Apple was still pushing the Java bridge, Sun engineers did a stint at WWDC to explain how to optimize Java code for the HotSpot JIT. It was comical. In order to write fast Java code, you effectively had to think of the assembler code that you wanted to get, then write the Java code that you thought might net that particular bit of machine code, taking into account the various limitations of the JIT. At that point, it is a lot easier to just write the damn assembly code. And vastly more predictable: what you write is what you get.

Modern JITs are capable of much more sophisticated transformations, but what the creators of these advanced optimizers don't realize is that they are making the problem worse rather than better. The more they do, the less predictable the code becomes.

The same, incidentally, applies to SufficientlySmart AOT compilers such as the one for the Swift language, though the problem is not quite as severe as with JITs because you don't have the dynamic component. All these things are well-intentioned but, all in all, counter-productive.

Conclusion

Although the idea of Just in Time Compilers was very good, their area of applicability, which was always smaller than imagined and/or claimed, has shrunk ever further due to advances in technology, changing performance requirements and the realization that for most performance-critical tasks, predictability is more important than average speed. They are therefore slowly being phased out in favor of simpler, generally faster and more predictable AOT compilers. Although they are unlikely to go away completely, their significance will be drastically diminished.

Alas, the idea of writing high-level code without any concessions to performance (often justified by misinterpreting or simply misquoting Knuth) and then letting a sufficiently smart compiler fix it lives on. I don't think this approach to performance is viable; more predictability is needed, and a language with a hybrid nature and the ability for the programmer to specify behavior-preserving transformations that alter the performance characteristics of code is probably the way to go for high-performance, high-productivity systems. More on that another time.

What do you think? Are JITs on the way out or am I on crack? Should we have a more manual way of influencing performance without completely rewriting code or just trusting the SmartCompiler?

Update: Nov. 13th 2017

The Mono Project has just announced that they are adding a byte-code interpreter: "We found that certain programs can run faster by being interpreted than being executed with the JIT engine."

Update: Mar. 25th 2021

Facebook has created an AOT compiler for JavaScript called Hermes. They report significant performance improvements and reductions in memory consumption (with of course some trade-offs for specific benchmarks).

Speaking of JavaScript, Google launched Ignition in 2017, a JS bytecode interpreter for faster startup and better performance on low-memory devices. It also did much better in terms of pure performance than they anticipated.

So in 2019, Google introduced their JIT-Less JS engine. Not even an AOT compiler, just a pure interpreter. While seeing slowdowns in the 20-80% range on synthetic benchmarks, they report real-world performance only a few percent lower than their high-end, full-throttle TurboFan JIT.

Oh, how could I forget WebAssembly? "One of the reasons applications target WebAssembly is to execute on the web with predictable high performance."

I've also been told by People Who Know™ that the super-amazing Graal JIT VM for Java is primarily used for its AOT capabilities by customers. This is ironic, because the aptly named Graal pretty much is the Sufficiently Smart Compiler that was always more of a joke than a real goal...but they achieved it nonetheless, and as a JIT no less! And now that we have it, it turns out customers mostly don't care. Instead, they care about predictability, start-up performance and memory consumption.

Last but not least, Apple's Rosetta 2 translator for running Intel binaries on ARM is an AOT binary-to-binary translator. And it is much faster, relatively, than the original Rosetta or comparable JIT-based binary translators, with translated Intel binaries clocking in at around 80% of the speed of native ARM binaries. Combined with the amazing performance of the M1, this makes M1 Macs frequently faster than Intel Macs at running Intel Mac binaries. (Of course it also includes a (much slower) JIT component as a fallback for when the AOT compiler can't figure things out. Like when the target program is itself a JIT...)

Update: Oct. 3rd 2021

It appears that 45% of CVEs for V8 and more than half of "in the wild" Chrome exploits abused a JIT bug, which is why Microsoft is experimenting with what they call SDSM (Super Duper Secure Mode). What's SDSM? JavaScript without the JIT.

Unsurprisingly at this point, performance is affected far less than one might have guessed, even on the benchmarks, and even less in real-world tasks: "Anecdotally, we find that users with JIT disabled rarely notice a difference in their daily browsing."

Update: Mar. 30th 2023

To my eyes, this looks like startup costs, much of which will be JIT startup costs.

HN user jigawatts appears to confirm my suspicion:

Teams is an entire web server written in an interpreted language compiled on the fly using a "Just in Time" optimiser (node.js + V8) -- coupled to a web browser that is basically an entire operating system. You're not "starting Teams.exe", you are deploying a server and booting an operating system. Seen in those terms, 9 seconds is actually pretty good. Seen in more... sane terms, showing 1 KB of text after 9 seconds is about a billion CPU instructions needed per character.
It also looks to be the root problem, well, one of the root problems, with Visual Studio's launch time.

Update: Apr. 1st 2023

"One of the most annoying features of Julia is its latency: The laggy unresponsiveness of Julia after starting up and loading packages." -- Julia's latency: Past, present and future

I find it interesting that the term "JIT" is never mentioned in the post; I guess it's just taken as a given.

I am pretty sure I've missed some developments.

Wednesday, March 12, 2014

Looking at a scripting language...and imagining the consequences

After thinking about the id subset and being pointed to WebScript, Brent Simmons imagines a scripting language. I have to admit I have been imagining pretty much the same language...and at some point decided to stop imagining and start building Objective-Smalltalk:

  • Peer of Objective-C: objects are Objective-C objects, methods are Objective-C methods, added to the runtime and indistinguishable from the outside. "You can subclass UIViewController, or write a category on it."
    <void>alertView:alertView clickedButtonAtIndex:<int>buttonIndex
       self newGame.
       self view setNeedsDisplay.
    
    The example is from the site; it was copied from an actual program. As you can see, interoperability with the C parts of Objective-C is still necessary, but not bothersome.
  • It has blocks:
    <void>deleteFile:filename
       thumbs := self thumbsView subviews.
       viewsToRemove := thumbs selectWhereValueForKey:'filename' isEqual:filename.
       aView := viewsToRemove firstObject.
    
       UIView animateWithDuration:0.4
              animations: [ aView setAlpha: 0.0. ]
              completion: [ aView removeFromSuperview. 
                            UIView animateWithDuration: 0.2
                                   animations: [ self thumbsView layoutSubviews. ]
                                   completion: [ 3  ]. 
                          ].
       url := self urlForFile:filename.
       NSFileManager defaultManager removeItemAtURL:url  error:nil.
       (self thumbsView afterDelay:0.4) setNeedsLayout.
    
    This example was also copied from an actual small educational game that was ported over from Flash.

    You also get Higher Order Messaging, Polymorphic Identifiers etc.

  • Works with the toolchain: this is a little more tricky, but I've made some progress...part of that is an LLVM-based native compiler, part is tooling that enables some level of integration with Xcode, and part is a separate toolset that has comparable or better capabilities.

While Objective-Smalltalk would not require shipping source code with your applications, due to the native compiler, it would certainly allow it, and in fact my own BookLightning imposition program has been shipping with part of its Objective-Smalltalk source hidden in its Resources folder for about a decade or so. Go ahead, download it, crack it open and have a look! I'll wait here while you do.

Did you have a look? The part that is in Smalltalk is the distilled (but very simple) imposition algorithm shown below.


	drawOutputPage:<int>pageNo
		isBackPage := (( pageNo / 2 ) intValue ) isEqual: (pageNo / 2 ).

		pages:=self pageMap objectAtIndex:pageNo.
		page1:=pages integerAtIndex:0.
		page2:=pages integerAtIndex:1.

		self drawPage:page1 andPage:page2 flipped:(self shouldFlipPage:pageNo).

	drawPage:<int>page1 andPage:<int>page2 flipped:<int>flipped
		drawingStream := self drawingStream.
		base := MPWPSMatrix matrixRotate:-90.

		drawingStream saveGraphicsState.
		flipped ifTrue: [ drawingStream concat:self flipMatrix ].
		width := self inRect x.

		self drawPage:page1 transformedBy:(base matrixTranslatedBy: (width * -2) y:0). 
		self drawPage:page2 transformedBy:(base matrixTranslatedBy: (width * -1) y:0). 
		
		drawingStream restoreGraphicsState.
Page imposition code

What this means is that any user of BookLightning could adapt it to suit their needs, though I am pretty sure that none have done so to this date. This is partly due to the fact that this imposition algorithm is too limited to allow for much variation, and partly due to the fact that the feature is well hidden and completely unexpected.

There are two ideas behind this:

  1. Open Source should be more about being able to tinker with well-made apps in useful ways, rather than downloading and compiling gargantuan and incomprehensible tarballs of C/C++ code.
  2. There is no hard distinction between programming and scripting. A higher level scripting/programming language would not just make developer's jobs easier, it could also enable the sort of tinkering and adaptation that Open Source should be about.

I don't think the code samples shown above are quite at the level needed to really enable tinkering, but maybe they can be a useful contribution to the discussion.