Saturday, March 4, 2017

So I wrote a book about performance...

...specifically iOS and macOS Performance Tuning: Cocoa, Cocoa Touch, Objective-C, and Swift. Despite, or maybe because of, this truly being a labor of love (and an immense amount of time: Addison-Wesley first approached me about it almost ten years ago), I truly expected it to remain just that: a labor of love sitting in its little niche. So imagine my surprise to see the little badge today:

[Image: Amazon "#1 New Release" badge for the book]

Wow! #1 new release in Apple Programming (my understanding is that this link will change over time). And yes, I checked: it wasn't the only release in Apple books for the period, there were a good dozen. In fact, iOS and macOS Performance Tuning took both the #1 and the #4 spots in Apple new releases. Oh, and it's also taken #13 overall in Apple programming books.

So a big THANK YOU to everyone who helped make this happen: the people I was allowed to learn from, Chuck, who instigated the project, and Trina, who saw it through despite me.

Anyway, now that the book is wrapped up, I can publish more performance-related information on this blog. Oh, and the source code for the book is on GitHub.

UPDATE (March 5th, 2017): Now taking both the #1 and #2 spots in Apple new releases, and the print edition is in the top 10 for Apple, with the Kindle edition in the top 20. Second update: now at #5 and #21 overall in Apple, and #1/#3 in new releases. Getting more amazing all the time. I should probably take a break...

Concept Shadowing and Capture in MVC and its Successors

In a previous post, I noted that Apple's definition of MVC does not actually match the original definition; it is more like Taligent's Model View Presenter (MVP), or what I like to call Model Widget Controller. Matt Gallagher's look at Model-View-Controller in Cocoa makes a very similar point.

So who cares? After all, a rose by any other name is just as thorny, and the question of how the 500-pound gorilla gets to name things is also moot: however it damn well pleases.

The problem with using the same name is shadowing: since the names are the same, accessing the original definition is now hard. Again, this wouldn't really be a problem if it weren't for the fact that the old MVC solved exactly the kinds of problems that plague the new MVC.

However, having to say "the problems of MVC are solved by MVC" is less than ideal, because, well, you sound a bit like a lunatic. And that is a problem, because it means that MVC is not considered when trying to solve the problems of MVP/MVC. And that, in turn, is a shame, because it solves them quite nicely, IMHO much more nicely than a lot of the other suggested patterns.

It turns out that MVC is, just like Algol, an improvement on most of its successors.

Sunday, February 12, 2017

mkfile(8) is severely syscall limited on OS X

When I got my brand-new MacBook Pro (late 2016), I was interested in testing out the phenomenal SSD performance I had been reading about, reportedly around 2GB/s. Unix geek that I am, I opened a Terminal window and tapped out the following:
mkfile 8G /tmp/testfile

To my great surprise and consternation, both the time command and an iostat 1 running in another window showed a measly 250MB/s of throughput. That's not much faster than a spinning rust disk, and certainly much, much slower than previous SSDs, never mind the speed demon that the MBP 2016's SSD is supposed to be.

So what was going on? Were the other reports false? At first, my suspicions fell on FileVault, which I was now using for the first time. That didn't make any sense, because everything I had heard said that FileVault has only a minimal performance impact, whereas this was roughly a factor-8 slowdown.

Alas, turning FileVault off and waiting for the disk to be decrypted did not change anything. Still 250MB/s. Had I purchased a lemon? Could I return the machine because the SSD didn't perform as well as advertised? Except, of course, that the speed wasn't actually advertised anywhere.

It never occurred to me that the problem could be with mkfile(8). Of course, that's exactly where the problem was. If you check the mkfile source code, you will see that it writes to disk in 512-byte chunks. That doesn't actually affect the I/O path, which will coalesce those writes; however, you are spending one syscall per 512 bytes, and that turns out to be the limiting factor. Upping the buffer size increases throughput until it hits 2GB/s at a 512KB buffer size. After that, throughput stays flat.
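For the curious, here is a minimal sketch of this kind of measurement (my own toy, not mkfile's actual source): it writes a file of a given size using a given buffer size and reports the throughput. Varying the last argument reproduces the curve below.

    /* writetest.c -- toy throughput test (not mkfile itself).
       Compile: cc -O2 -o writetest writetest.c
       Usage:   ./writetest /tmp/testfile 8 512   (file, size in GB, buffer in KB) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        if (argc < 4) { fprintf(stderr, "usage: %s file GB bufKB\n", argv[0]); return 1; }
        long long total = atoll(argv[2]) * 1024LL * 1024LL * 1024LL;
        size_t bufsize = (size_t)atoll(argv[3]) * 1024;
        char *buf = calloc(1, bufsize);                  /* zero-filled, like mkfile */
        int fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0 || !buf) { perror("setup"); return 1; }
        struct timeval start, end;
        gettimeofday(&start, NULL);
        for (long long written = 0; written < total; written += bufsize)
            if (write(fd, buf, bufsize) != (ssize_t)bufsize) { perror("write"); return 1; }
        gettimeofday(&end, NULL);
        double secs = (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
        printf("%.0f MB/s\n", total / (1024.0 * 1024.0) / secs);
        close(fd);
        return 0;
    }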

[Chart: mkfile SSD throughput vs. buffer size] The X-axis is the buffer size in KB; note that it is logarithmic. The original 512-byte size isn't on there because it would be 0.5KB, or the entire axis would need to be in bytes, which would also be awkward at the larger sizes.

Radar filed: 30482489. I did not check on other operating systems, but my guess is that the results would be similar.

UPDATE: In the HN discussion, a number of people interpreted this as saying that syscall speed is slow on OS X. AFAIK that is no longer the case, and it is in any case not the point. The point is that the hardware has changed so dramatically that even seemingly extremely safe and uncontroversial assumptions no longer hold. Heck, 250MB/s would be perfectly fine if we still had spinning rust, but SSDs in general, and particularly the scorchingly fast ones Apple has put in these laptops, have changed the equation so much that something that used to not factor into it at all, such as syscall performance, can now be the deciding factor.

In the I/O tests I did for my iOS/macOS performance book (see sidebar), I found that CPU time nowadays generally dominates over actual hardware I/O time. This was a case where I just wouldn't have expected that to be true, and it took me over a day to find the culprit, because the CPU should have been doing virtually nothing. But once again, assumptions just got trampled by hardware advancements.

So check your assumptions.

Saturday, May 21, 2016

What's Missing in the Discussion about Dynamic Swift

There have been some great posts recently on the need for dynamic features in Swift.

I think Gus Mueller really nails it with his description of adding an "Add Layer Mask" menu item to Acorn that talks directly to the addLayerMask: implementation:

With that in place, I can add a new menu item named "Add Layer Mask" with an action of addLayerMask: and a target of nil. Then in the layer class (or subclass if it's only specific to a bitmap or shape layer) I add a single method named addLayerMask:, and I'm pretty much done after writing the code to do the real work.

[..]

What I didn't add was a giant switch statement like we did in the bad old days of classic Mac OS programming. What I didn't add was glue code in various locations setting up targets and actions at runtime that would have to be massaged and updated whenever I changed something. I'm not checking a list of selectors and casting classes around to make the right calls.

The last part is the important bit, I think: no need to add boilerplate glue code. IMHO, glue code is what is currently killing us, productivity-wise. It is sort of like software's dark matter: largely invisible, but making up 90% of the mass of our software universe.
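For readers who haven't used nil-targeted actions, here is roughly what the mechanism Gus describes looks like in Cocoa. This is a sketch of mine with hypothetical class names, not Acorn's actual code:

    #import <Cocoa/Cocoa.h>

    // A layer that can handle the action. It subclasses NSResponder so it can
    // sit in the responder chain. (Class name is hypothetical, not Acorn's.)
    @interface Layer : NSResponder
    @end

    @implementation Layer
    // The only code needed: implement the action. No registration, no glue.
    - (void)addLayerMask:(id)sender {
        NSLog(@"adding a layer mask");
    }
    @end

    // Somewhere in menu setup: wire the item to the selector, with a nil target.
    // At runtime, Cocoa walks the responder chain until it finds an object that
    // responds to -addLayerMask:, and the item even enables/disables itself.
    static NSMenuItem *makeAddLayerMaskItem(void) {
        NSMenuItem *item = [[NSMenuItem alloc] initWithTitle:@"Add Layer Mask"
                                                      action:@selector(addLayerMask:)
                                               keyEquivalent:@""];
        item.target = nil;  // nil target = "first responder that can handle it"
        return item;
    }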

After giving some great examples, Brent Simmons spells it out:

In case it’s not clear: with recent and future articles I’m documenting problems that Mac and iOS developers solve using the dynamic features of the Objective-C runtime. My point isn’t to argue that Swift itself should have similar features — I think it should, but that’s not the point. The point is that these problems will need solving in a possible future world without the Objective-C runtime. These kinds of problems should be considered as that world is being designed. The answers don’t have to be the same answers as Objective-C — but they need to be good answers, better than (for instance) writing a script to generate some code.
Again, that's a really important point: it's not that we old Objective-C hands are saying "we must have dynamic features"; it's that there are problems that need solving, problems that are solved really, really well with dynamic features, and really, really poorly in languages lacking them.

However, many of these dynamic features are definitely hacks, with various issues, some of which I talk about in The Siren Call of KVO and (Cocoa) Bindings. I think everyone would like to have these sorts of features implemented in a way that is not hacky, and that the compiler can help us with, somehow.

I am not aware of any technology or technique that makes this possible, i.e. that gives us the power of a dynamic runtime when it comes to building these types of generic architectural adapters in a statically type-safe way. And the direction that Objective-C has been going, and that Swift follows with a vengeance, is to remove those dynamic features in favor of static ones.

So that's a big worry. However, an even bigger worry, at least for me, is that Apple will take Brent's concern extremely literally and provide static solutions for exactly the specific problems outlined (and maybe a few others they can think of). There are some early indicators that this will be the case, for example that you can use CoreData from Swift, but you couldn't actually build it in Swift.

And that would be missing the point of dynamic languages almost completely.

The truly amazing thing about KVC, CoreData, bindings, HOM, NSUndoManager and so on is that none of them were known when Objective-C was designed, and none of them needed specific language/compiler support to implement.

Instead, the language was and is sufficiently malleable that its users can think these things up and then go and implement them. So instead of being an afterthought, a legacy concern, or a feature grudgingly and minimally implemented, the metasystem should be the primary focus of new language development. (And unsurprisingly, that's the case in Objective-Smalltalk.)
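To illustrate that malleability, here is a toy sketch of the core trick behind NSUndoManager's invocation recording, built from nothing but the runtime's ordinary forwarding hooks. This is my own simplification, emphatically not Apple's implementation:

    #import <Foundation/Foundation.h>

    // Toy invocation recorder. Any message sent to it is captured as an
    // NSInvocation instead of being performed, using only standard forwarding.
    @interface InvocationRecorder : NSProxy {
        id _target;
        NSMutableArray<NSInvocation *> *_recorded;
    }
    - (instancetype)initWithTarget:(id)target;
    - (void)replay;
    @end

    @implementation InvocationRecorder
    - (instancetype)initWithTarget:(id)target {
        _target = target;                    // NSProxy has no -init to call
        _recorded = [NSMutableArray array];
        return self;
    }
    // Forwarding first asks for the signature of the unknown message...
    - (NSMethodSignature *)methodSignatureForSelector:(SEL)sel {
        return [_target methodSignatureForSelector:sel];
    }
    // ...and instead of dispatching the invocation, we store it for later.
    - (void)forwardInvocation:(NSInvocation *)invocation {
        [invocation setTarget:_target];
        [invocation retainArguments];
        [_recorded addObject:invocation];
    }
    - (void)replay {
        for (NSInvocation *inv in _recorded) [inv invoke];
        [_recorded removeAllObjects];
    }
    @end

Sending the recorder any message its target understands stores the invocation instead of performing it; -replay performs the stored invocations later. That is the essence of undo registration, and none of it needed compiler support.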

To quote Alan Kay:

If you focus on just messaging - and realize that a good metasystem can late bind the various 2nd level architectures used in objects - then much of the language-, UI-, and OS based discussions on this thread are really quite moot.

[..]

I think I recall also pointing out that it is vitally important not just to have a complete metasystem, but to have fences that help guard the crossing of metaboundaries. [..assignment..] I would say that a system that allowed other metathings to be done in the ordinary course of programming (like changing what inheritance means, or what is an instance) is a bad design. (I believe that systems should allow these things, but the design should be such that there are clear fences that have to be crossed when serious extensions are made.)

[..]

I would suggest that more progress could be made if the smart and talented Squeak list would think more about what the next step in metaprogramming should be - how can we get great power, parsimony, AND security of meaning?

Note that Objective-C's metasystem does allow changing meta-things in the normal course of programming, and it is rather ad-hoc about which things it allows us to change and which it does not. It is a bad design, as far as metasystems are concerned. However, even a bad metasystem is better than no metasystem. Or to quote Guy Steele (again):
This is the nub of what I want to say. A language design can no longer be a thing. It must be a pattern—a pattern for growth—a pattern for growing the pattern for defining the patterns that programmers can use for their real work and their main goal.
The solution to bad metasystems is not to ban metasystems, it is to design better metasystems that allow these things in a more disciplined and more flexible way.

As usual, discussion here or on HN.

Wednesday, March 23, 2016

Feindenpeinlichkeit

Sam Harris asks:

I think the word should be Feindenpeinlichkeit.

Tuesday, October 6, 2015

Jitterdämmerung

So, Windows 10 has just been released, and with it the Ahead-Of-Time (AOT) compilation feature .NET Native. Google also recently introduced ART for Android, and I just discovered that Oracle is planning an AOT compiler for mainstream Java.

With Apple doggedly sticking to Ahead-of-Time compilation for Objective-C, and now for their new Swift, JavaScript is pretty much the last mainstream hold-out for JIT technology. And even in JavaScript, the state of the art for achieving maximum performance appears to be asm.js, which largely eschews JIT techniques by acting as object code for the browser, represented in JavaScript, into which other languages are AOT-compiled.

I think this shift away from JITs is not a fluke but was inevitable, in fact the big question is why it has taken so long (probably industry inertia). The benefits were always less than advertised, the costs higher than anticipated. More importantly though, the inherent performance characteristics of JIT compilers don't match up well with most real world systems, and the shift to mobile has only made that discrepancy worse. Although JITs are not going to go away completely, they are fading into the sunset of a well-deserved retirement.

Advantages of JITs less than promised

I remember reading the issue of the IBM Systems Journal on Java Technology back in 2000, I think. It had a bunch of research articles describing super amazing VM technology with world-beating performance numbers. It also had a single real-world report, from IBM's San Francisco project. In the real world, it turned out, performance was a bit more "mixed", as they say. In other words: it was terrible, and they had to do an incredible amount of work for the system to be even remotely usable.

There was also the experience of the New Typesetting System (NTS), a rewrite of TeX in Java. Performance was atrocious, the team took it with humor and chose a snail as their logo.

[Image: NTS logo, a snail, captioned "NTS at full speed"]

One of the reasons for this less than stellar performance was that JITs were invented for highly dynamic languages such as Smalltalk and Self. In fact, the Java HotSpot VM can be traced in a direct line to Self via the Strongtalk system, whose creator, Animorphic Systems, was purchased by Sun in order to acquire the VM technology.

However, it turns out that one of the biggest benefits of JIT compilers for dynamic languages is figuring out the actual types of variables. This is theoretically intractable (equivalent to the halting problem) and practically fiendishly difficult to do at compile time for a dynamic language. It is trivial to do at runtime: all you need to do is record the actual types as they fly by. If you are doing Polymorphic Inline Caching, just look at the contents of the caches after a while. It is also largely trivial to do for a statically typed language at compile time, because the types are right there in the source code!
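For concreteness, here is a rough sketch of the idea behind an inline cache, written in C against the Objective-C runtime. It is a simplification of my own: real inline caches live in generated machine code at the call site, and real PICs cache several classes, not just one.

    #include <objc/runtime.h>

    // A monomorphic inline cache: remember the class last seen at this call
    // site. As a side effect, the cache *is* a record of the types that
    // actually flew by at runtime -- exactly the information a JIT wants.
    typedef struct {
        Class cachedClass;
        IMP   cachedImp;
    } CallSiteCache;

    id cached_send(CallSiteCache *site, id receiver, SEL sel) {
        Class cls = object_getClass(receiver);
        if (site->cachedClass != cls) {      // cache miss: do the full lookup...
            site->cachedClass = cls;         // ...and record the observed class
            site->cachedImp = class_getMethodImplementation(cls, sel);
        }
        return ((id (*)(id, SEL))site->cachedImp)(receiver, sel);
    }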

So gathering information at runtime simply isn't as much of a benefit for languages such as C# and Java as it was for Self and Smalltalk.

Significant Costs

The runtime costs of a JIT are significant. The obvious cost is that the compiler has to run alongside the program it compiles, so time spent compiling is not available for executing. Apart from the direct costs, this also means that the compiler is limited in the kinds of analyses and optimizations it can do. The impact is particularly severe on startup, so short-lived programs like TeX/NTS are hit hardest and can often run slower overall than interpreted byte-code.

In order to mitigate this, you start having to have multiple compilers and heuristics for when to use which compilers. In other words: complexity increases dramatically, and you have only mitigated the problem somewhat, not solved it.

A less obvious cost is an increase in virtual memory pressure, because the code pages created by the JIT are "dirty", whereas executables paged in from disk are clean. Dirty pages have to be written to disk when memory is required; clean pages can simply be unmapped. On devices without a swap file, like most smartphones, dirty vs. clean can mean the difference between a few unmapped pages that can be paged back in later and a process getting killed by the OS.

VM and cache pressure is generally considered a much more severe performance problem than a little extra CPU use, and often even than a lot of extra CPU use. Most CPUs today can multiply numbers in a single cycle, yet a single main memory access has the CPU stalled for a hundred cycles or more.

In fact, it could very well be that keeping non-performance-critical code as compact interpreted byte-code is actually better than turning it into native code, as long as the code density is higher.

Security risks

Having memory that is both writable and executable is a security risk, and forbidden on iOS, for example. The only exception is Apple's own JavaScript engine, so on iOS you simply can't run your own JITs.

Machines got faster

On the low end of performance, machines have gotten so fast that pure interpreters are often fast enough for many tasks. Python is widely used as is, and PyPy isn't really taking the Python world by storm. Why? I am guessing it's because on today's machines, plain old interpreted Python is often fast enough. The same goes for Ruby: it's almost comically slow (in my measurements, serving HTTP via Sinatra was almost 100 times slower than using libµhttp), yet even that is still 400 requests per second, exceeding the needs of the vast majority of websites, including my own blog, which until recently didn't see 400 visitors per day.

The first JIT I am aware of was Peter Deutsch's PS (Portable Smalltalk), but only about a decade later Smalltalk was fine doing multi-media with just a byte-code interpreter. And native primitives.

Successful hybrids

The technique used by Squeak, an interpreter plus C primitives for the heavy lifting (for example multi-media or cryptography), has been applied successfully in many different cases. This hybrid approach was described in detail by John Ousterhout in Scripting: Higher-Level Programming for the 21st Century: high-level "scripting" languages glue together high-performance code written in "systems" languages. Examples include NumPy, but the ones I found most impressive were the "computational steering" systems apparently used in supercomputing facilities such as Oak Ridge National Laboratory. Written in Tcl.

What's interesting with these hybrids is that JITs are being squeezed out at both ends: at the "scripting" level they are superfluous, at the "systems" level they are not sufficient. And I don't believe that this idea is only applicable to specialized domains, though there it is most noticeable. In fact, it seems to be an almost direct manifestation of the observations in Knuth's famous(ly misquoted) quip about "Premature Optimization":

Experience has shown (see [46], [51]) that most of the running time in non-IO-bound programs is concentrated in about 3 % of the source text.

[..] The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12 % improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3 %. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail. After working with such tools for seven years, I've become convinced that all compilers written from now on should be designed to provide all programmers with feedback indicating what parts of their programs are costing the most; indeed, this feedback should be supplied automatically unless it has been specifically turned off.

[..]

(Most programs are probably only run once; and I suppose in such cases we needn't be too fussy about even the structure, much less the efficiency, as long as we are happy with the answers.) When efficiencies do matter, however, the good news is that usually only a very small fraction of the code is significantly involved.

For the 97%, a scripting language is often sufficient, whereas the critical 3% are both critical enough as well as small and isolated enough that hand-tuning is possible and worthwhile.

I agree with Ousterhout's critics who say that the split into scripting languages and systems languages is arbitrary. Objective-C, for example, combines both approaches in a single language, though one that is very much a hybrid itself: the "Objective" part is very similar to a scripting language, despite being compiled ahead of time, in both performance and ease/speed of development, while the C part does the heavy lifting of a systems language. Alas, Apple has worked continuously, and fairly successfully, at destroying both of these aspects and turning the language into a bad caricature of Java. However, although the split is arbitrary, the competing and diverging requirements are real; see Erlang's split into a functional language in the small and an object-oriented language in the large.

Unpredictable performance model

The biggest problem I have with JITs is that their performance model is extremely unpredictable. First, you don't know when optimizations are going to kick in, or when extra compilation is going to make you slower. Second, predicting which bits of code will actually be optimized well is also hard and a moving target. Combine these two factors, and you get a performance model that is somewhere between unpredictable and intractable, and therefore at best statistical: on average, your code will be faster. Probably.

While there may be domains where this is acceptable, most of the domains where performance matters at all are not of this kind; they tend to be (soft) real time. In real time systems, average performance matters not at all; predictably meeting your deadline does. As an example, delivering 80 frames in 1 ms each and 20 frames in 20 ms each (480 ms total) means failure: you missed your 60 fps target 20% of the time. Delivering 100 frames in 10 ms each (1000 ms total) means success: you met your 60 fps target 100% of the time. And that despite the fact that the first scenario is more than twice as fast on average.

I really learned this in the 90ies, when I was doing pre-press work and delivering highly optimized RIP and Postscript processing software. I was stunned when I heard about daily newspapers switching to pre-rendered, pre-screened bitmap images for their workflows. This is the most inefficient format imaginable for pre-press work, with each page typically taking around 140 MB of storage uncompressed, whereas the Postscript source would typically be between 1/10th and 1/1000th of that size. (And at the time, 140 MB was a lot even for disk storage, never mind RAM or network capacity.)

The advantage of pre-rendered bitmaps is that your average case is also your worst case. Once you have provisioned your infrastructure to handle this case, you know that your tech stack will be able to deliver your newspaper on time, no matter what the content. With Postscript (and later PDF) workflows, your average case is much better (and your best case ridiculously so), but you simply don't get any bonus points for delivering your newspaper early. You just get problems if it's late, and you are not allowed to average the two.

A similar observation from the Eve project:

Eve could survive and be useful even if it were never faster than, say, Excel. The Eve IDE, on the other hand, can't afford to miss a frame paint. That means Imp must be not just fast but predictable - the nemesis of the SufficientlySmartCompiler.
I also saw this effect in play with Objective-C and C++ projects: despite the fact that Objective-C's primitive operations are generally more expensive, projects written in Objective-C often had better performance than comparable C++ projects, because Objective-C's performance model was so much simpler, more obvious, and more predictable.

When Apple was still pushing the Java bridge, Sun engineers did a stint at WWDC to explain how to optimize Java code for the HotSpot JIT. It was comical. In order to write fast Java code, you effectively had to think of the assembly code that you wanted to get, then write the Java code that you thought might net that particular bit of machine code, taking into account the various limitations of the JIT. At that point, it is a lot easier to just write the damn assembly code yourself. And vastly more predictable: what you write is what you get.

Modern JITs are capable of much more sophisticated transformations, but what the creators of these advanced optimizers don't realize is that they are making the problem worse rather than better. The more they do, the less predictable the code becomes.

The same, incidentally, applies to SufficientlySmart AOT compilers such as the one for the Swift language, though the problem is not quite as severe as with JITs because you don't have the dynamic component. All these things are well-intentioned but all-in-all counter-productive.

Conclusion

Although the idea of Just-in-Time Compilers was very good, their area of applicability, which was always smaller than imagined and/or claimed, has shrunk ever further due to advances in technology, changing performance requirements, and the realization that for most performance-critical tasks, predictability is more important than average speed. They are therefore slowly being phased out in favor of simpler, generally faster, and more predictable AOT compilers. Although they are unlikely to go away completely, their significance will be drastically diminished.

Alas, the idea of writing high-level code without any concessions to performance (often justified by misinterpreting or simply misquoting Knuth) and then letting a sufficiently smart compiler fix it lives on. I don't think this approach to performance is viable. More predictability is needed, and a language with a hybrid nature and the ability for the programmer to specify behavior-preserving transformations that alter the performance characteristics of code is probably the way to go for high-performance, high-productivity systems. More on that another time.

What do you think? Are JITs on the way out or am I on crack? Should we have a more manual way of influencing performance without completely rewriting code or just trusting the SmartCompiler?

Sunday, October 4, 2015

Are Objects Already Reactive?


TL;DR: Yes, obviously.

My post from last year titled The Siren Call of KVO and Cocoa Bindings has been one of my most consequential so far. Apart from being widely circulated and discussed, it has also been a focal point of my ongoing work related to Objective-Smalltalk. The ideas presented there have been central to my talks on software architecture, and I have finally been able to present some early results I find very promising.

Alas, with the good always comes the bad, and some of the reactions (sic) have not been quite so positive. For example, consider the following I wrote:

[..] Adding reactivity to an object-oriented language is, at first blush, non-sensical and certainly causes confusion [because] whereas functional programming, which is per definition static/timeless/non-reactive, really needs something to become interactive, reactivity is already inherent in OO. In fact, reactivity is the quintessence of objects: all computation is modeled as objects reacting to messages.
This seemed quite innocuous, obvious, and completely uncontroversial to me, but apparently caused a bit of a stir with at least some of the creators of ReactiveCocoa:

Ouch! Of course I never wrote that "nobody" needs FRP: Functional Programming definitely needs FRP or something like it, because it isn't already reactive the way objects are. Second, what I wrote is that this is non-sensical "at first blush" (so "on first impression"). Idiomatically, this phrase usually sets up a "...but on closer examination", and lo and behold, almost the entire rest of the post talks about how the related concepts of dataflow and dataflow constraints are highly desirable.

The point was and is (obviously?) a terminological one, because the existing term "reactivity" is being overloaded so much that it confuses more than it clarifies. And quite frankly, the idea of objects being "reactive" is (a) so self-evident (you send a message, the object reacts by executing a method, which usually sends more messages) and (b) so deeply ingrained and basic that I didn't really think about it much at all. So obviously, it could very well be that I was wrong and that this was "common sense" to me in the Einsteinian sense.

I will explore the terminological confusion more in later posts, but suffice it to say that Conal Elliott contacted the ReactiveCocoa guys to tell them (politely) that whatever ReactiveCocoa was, it certainly wasn't FRP:

I'm hoping to better understand how the term "Functional Reactive Programming" gets applied to systems that are so far from the original definition and principles (continuous time with precise & simple mathematical meaning)
He also wrote/talked more about this confusion in his 2015 talk "Essence and Origins of FRP":
The term has been used incorrectly to describe systems like Elm, Bacon, and Reactive Extensions.
Finally, he seems to agree with me that the term "reactive" wasn't really well chosen for the concepts he was going after:

What is Functional Reactive Programming:  Something of a misnomer.  Perhaps Functional temporal programming

So, having established that the term "reactive" is confusing when applied to whatever it is that ReactiveCocoa is or was trying to be, let's have a look at how and whether it is applicable to objects. The Communications of the ACM "Special issue on object-oriented experiences and future trends" from 1995 has the following to say:

A group of leading experts from industry and academia came together last fall at the invitation of IBM and ACM to ponder the primary areas of future needs in software support for object-based applications.

[..]

In the future, as you talk about having an economy based on these entities (whether we call them “objects” or we call them something else), they’re going to have to be more proactive. Whether they’re intelligent agents or subjective objects, you enable them with some responsibility and they get something done for you. That’s a different view than we have currently where objects are reactive; you send it a message and it does something and sends something back.

But lol, that's only a group of leading researchers invited by IBM and the ACM writing in arguably one of the most prestigious computing publications, so what do they know? Let's see what the Blue Book from 1983 has to say when defining what objects are:

The set of messages to which an object can respond is called its interface with the rest of the system. The only way to interact with an object is through its interface. A crucial property of an object is that its private memory can be manipulated only by its own operations. A crucial property of messages is that they are the only way to invoke an object's operations. These properties insure that the implementation of one object cannot depend on the internal details of other objects, only on the messages to which they respond.
So the crucial property of objects, according to the creators of Smalltalk, is that they respond to messages. And of course, if you check a dictionary or thesaurus, you will find that respond and react are synonyms. So the fundamental definition of objects is that they react to messages. Hmm... that sounds familiar somehow.

While those are seemingly pretty influential definitions, maybe they are uncommon? No. A simple Google search reveals that this definition is extremely common and has been around for at least the last 30-40 years:

A conventional statement of this principle is that a program should never declare that a given object is a SmallInteger or a LargeInteger, but only that it responds to integer protocol.
But lol, what do Adele Goldberg, David Robson or Dan Ingalls know about Object Oriented Programming? After all, we have one of the creators of ReactiveCocoa here! (Funny aside: LinkedIn once asked me "Does Dan Ingalls know about Object Oriented Programming?" Alas there wasn't a "Are you kidding me?" button, so I lamely clicked "Yes").

Or maybe it's only those crazy dynamic typing folks that no-one takes seriously these days? No.

So the only relevant thing for typing purposes is how an object reacts to messages.
Here's a section from the Haiku/BeOS documentation:
A BHandler object responds to messages that are handed to it by a BLooper. The BLooper tells the BHandler about a message by invoking the BHandler's MessageReceived() function.
A book on OO graphics:

The draw object reacts to messages from the panel, thereby creating an IT to cover the canvas.
CS lecture on OO:
Properties implemented as "fields" or "instance variables"
  • constitute the "state" of the object
  • affect how object reacts to messages
Heck, even Apple's Cocoa/Objective-C docs speak of "objects responding to messages"; it's almost like a conspiracy.
By separating the message (the requested behavior) from the receiver (the owner of a method that can respond to the request), the messaging metaphor perfectly captures the idea that behaviors can be abstracted away from their particular implementations.
Book on OO Analysis and Design:
As the object structures are identified and modeled, basic processing requirements for each object can be identified. How each object responds to messages from other objects needs to be defined.
An object's behavior is defined by its message-handlers (handlers). A message-handler for an object responds to messages and performs the required actions.
CLIPS - object-oriented programming

Or maybe this is an old definition from the 80ies and early 90ies that has fallen out of use? No.

The behavior of related collections of objects is often defined by a class, which specifies the state variables of an objects (its instance variables) and how an object responds to messages (its instance methods).
Methods: Code blocks that define how an object responds to messages. Optionally, methods can take parameters and generate return values.
Cocoa, by Richard Wentk, 2010

The main difference between the State Machine and the immutable is the way the object reacts to messages being sent (via methods invoked on the public interface). Whereas the State Machine changes its own state, the Immutable creates a new object of its own class that has the new state and returns it.
So to sum up: classic OOP is definitely reactive. FRP is not, at least according to the guy who invented it. And what exactly things like ReactiveCocoa and Elm etc. are, I don't think anyone really knows, except that they are not even FRP, which wasn't, in the end, reactive.

Tune in for "What the Heck is Reactive Programming, Anyway?"

As always, comments welcome here or on HN