Wednesday, September 26, 2007

Objective-C future(s)

Via LtU, I got alerted to the fact that the Etoile project now has an implementation of futures. Cool.

However, their implementation has specific objects reacting asynchronously to messages, making it more similar to the actor model, which as they mention is also very much Alan Kay's original conceptual model for Smalltalk:

Bob Barton, the main designer of the B5000 and a professor at Utah had said in one of his talks a few days earlier: "The basic principle of recursive design is to make the parts have the same power as the whole." For the first time I thought of the whole as the entire computer and wondered why anyone would want to divide it up into weaker things called data structures and procedures. Why not divide it up into little computers, as time sharing was starting to? But not in dozens. Why not thousands of them, each simulating a useful structure? [Emphasis mine]
Actors are inherently asynchronous: each actor runs in a separate process/thread, and messages are also asynchronous, with the sender neither waiting for the message to be delivered nor ever getting a return value. Of course, the actor model also makes all objects active, so the Etoile model, which only makes objects of specific classes active, is somewhere in between.

Futures, on the other hand, as introduced in MULTILISP (pdf), try to integrate asynchronous execution into a traditional call/return control- and data-flow. So messages (or functions in MULTILISP) appear to have normal synchronous semantics and immediately yield a return value, but when annotated with the future keyword, computation of that return value happens in a background thread and the immediate return value is just a proxy for the value that is still being computed.

In the HOM paper (pdf) presented at OOPSLA 2005, I also describe a Future implementation based on Higher Order Messaging that comes very close to the way it was done in MULTILISP. A -future HOM is all that is needed to indicate that you would like a result computed in a background thread:

  result = [anObject lengthyOperation:parameter];           //  synchronous
  result = [[anObject future] lengthyOperation:parameter];  //  asynchronous with future
I am probably biased, but this seems about as easy to use as possible, with all the nasty machinery (worker queues, lockless FIFOs, etc.) hidden behind a single -future message.
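
For illustration, here is a minimal sketch of what the proxy part of that machinery could look like, built on NSProxy; the class name, the use of GCD, and the blocking strategy are my own assumptions, not the Etoile or MPWFoundation implementation:

#import <Foundation/Foundation.h>

// Hypothetical future proxy: the computation runs on a background
// queue; any message sent to the proxy blocks until the real result
// is available and is then forwarded to it.
@interface FutureProxy : NSProxy {
    id                   result;
    dispatch_semaphore_t done;
}
- (id)initWithBlock:(id (^)(void))block;
@end

@implementation FutureProxy

- (id)initWithBlock:(id (^)(void))block
{
    // NSProxy has no -init, so there is no [super init] to call
    done = dispatch_semaphore_create(0);
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        result = block();                   // compute in the background
        dispatch_semaphore_signal(done);
    });
    return self;
}

- (id)resolvedResult
{
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    dispatch_semaphore_signal(done);        // stay signaled for later callers
    return result;
}

// forward every message to the (possibly not yet computed) result
- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel
{
    return [[self resolvedResult] methodSignatureForSelector:sel];
}

- (void)forwardInvocation:(NSInvocation *)invocation
{
    [invocation invokeWithTarget:[self resolvedResult]];
}

@end

// usage (hypothetical):
//   id result = [[FutureProxy alloc] initWithBlock:^id{
//       return [anObject lengthyOperation:parameter];
//   }];
//   ...do other work; the first message sent to result blocks until ready

The caller only blocks when it actually uses the result; until then it is free to do other work, which is exactly the call/return-compatible behavior described above.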

Sunday, September 23, 2007

Message-oriented persistence

The good folks at Omni posted an interesting discussion of their persistence strategy for OmniFocus. In short, they found that using a database, specifically a CoreData data store, was not exactly ideal for their primary public data format.

Instead, they appear to be using a pattern that Martin Fowler calls EventPoster. After reading David Reed's thesis, I think I prefer to call it message-oriented persistence.

I first stumbled on this pattern when designing a replacement for a feed processor at the BBC. The basic task was to process a feed of information snippets encoded as XML and generate/update web and interactive TV (Ceefax) pages.

Like a good little enterprise architect, and similar to the existing system, I had originally planned to use a central SQL database for storage, though designing a data model for that was proving difficult due to the highly irregular nature of the feed data. As an auditing/logging measure, I also wanted to keep a copy of the incoming feed data, so when the time came to do the first implementation spikes, I decided we would implement the display, update and XML feed processing logic, but not the datastore. Instead, we would just re-play the feed data from the log we had kept.

This worked so well that we never got around to implementing the central database.
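
The core of the pattern is tiny. Here is a minimal sketch of the log-and-replay idea, assuming one JSON-encoded message per line; the names are hypothetical and this is not the actual BBC code:

#import <Foundation/Foundation.h>

// Hypothetical sketch (illustrative names, ARC assumed). The durable
// representation is the append-only message log itself; the in-memory
// model is just a cache that can be rebuilt by replaying the log.
@protocol MessageHandler
- (void)handleMessage:(NSDictionary *)message;
@end

@interface MessageLog : NSObject {
    NSString     *path;
    NSFileHandle *logFile;
}
- (id)initWithPath:(NSString *)aPath;
- (void)append:(NSDictionary *)message applyTo:(id <MessageHandler>)model;
- (void)replayInto:(id <MessageHandler>)model;
@end

@implementation MessageLog

- (id)initWithPath:(NSString *)aPath
{
    if ( (self = [super init]) ) {
        path = [aPath copy];
        if ( ![[NSFileManager defaultManager] fileExistsAtPath:path] ) {
            [[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];
        }
        logFile = [NSFileHandle fileHandleForWritingAtPath:path];
        [logFile seekToEndOfFile];
    }
    return self;
}

// write the durable record first, then update the in-memory model
- (void)append:(NSDictionary *)message applyTo:(id <MessageHandler>)model
{
    NSMutableData *line = [[NSJSONSerialization dataWithJSONObject:message options:0 error:NULL] mutableCopy];
    [line appendData:[@"\n" dataUsingEncoding:NSUTF8StringEncoding]];
    [logFile writeData:line];
    [logFile synchronizeFile];
    [model handleMessage:message];
}

// rebuild state by re-applying every logged message, in order
- (void)replayInto:(id <MessageHandler>)model
{
    NSString *contents = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:NULL];
    for ( NSString *line in [contents componentsSeparatedByString:@"\n"] ) {
        if ( [line length] == 0 ) continue;
        NSData *data = [line dataUsingEncoding:NSUTF8StringEncoding];
        [model handleMessage:[NSJSONSerialization JSONObjectWithData:data options:0 error:NULL]];
    }
}

@end

The key property is that the log write happens before the in-memory update, so the in-memory model can always be treated as a disposable cache of the log.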

Leaving out the database vastly simplified both our code-base and the deployed system, which I could run in its entirety on my 12" AlBook, whereas the system we were replacing ran on around a dozen networked machines. Apart from making us popular with our sysadmin team, both in terms of reliability and deployment/maintenance complexity (essentially a jar and a working directory were all it needed), a fringe benefit was being able to work on the system on said AlBook while disconnected from the network, working from home or from a sunny patch of grass outside the office.

In addition to personal happiness, system performance was also positively affected: since we kept our working state completely in memory, the aforementioned AlBook outperformed the original cluster by 2-3 orders of magnitude, producing hundreds of pages per second versus taking from several seconds to several minutes to produce a single page.

Performance and simplicity are exactly the benefits claimed for Prevayler, a persistence layer for Java based on the same principles.

TeaTime, the theoretical foundation and actual engine working underneath Croquet, takes this idea to a whole different level: objects do not have state, but are simply names for message histories. This is truly "objects [and messages] all the way down". Turtles need not apply.

Saturday, September 1, 2007

More on MPWObjectCache

Now that I've motivated why an MPWObjectCache might be useful, let's go into some more detail as to how it actually works. To follow along, or if you'd rather just read the source code than my ramblings, MPWObjectCache is part of MPWFoundation, which can be downloaded here: http://www.metaobject.com/downloads/Objective-C/.

As I mentioned before, the algorithm for MPWObjectCache is quite simple: there is a circular buffer of object slots, and we try to satisfy allocation requests from this buffer if possible. If we find an object in the cache and it is available for reuse, we simply return it, saving the cost of an allocation. Two things can prevent this happy state of affairs: (1) the slot doesn't contain an object yet, or (2) the object cannot be reused because it is still in use. In both cases we need to allocate a new object, but in the second case we also remove the old object from its cache slot.
#if  SLOW_SAMPLE_IMPLEMENTATION_WANTED
-getObject
{
    id obj;
    // advance the circular index, wrapping around at the end of the buffer
    objIndex++;
    if ( objIndex >= cacheSize ) {
        objIndex=0;
    }
    obj=objs[objIndex];
    // allocate a fresh object if the slot is empty or if the old object
    // is still in use (a retain count > 1 means someone other than the
    // cache is holding on to it)
    if ( obj==nil ||  [obj retainCount] > 1 ) {
        if ( obj!=nil ) {
            // evict the still-in-use object from its slot
            [obj release];
        }
        obj = [[objClass alloc] init];
        objs[objIndex]=obj;
    }
    // protect the returned object until the next autorelease pool drain
    return [[obj retain] autorelease];
}
#else

This is what a naive implementation looks like. A couple of notes on the code:

  • objects must be reinitialized by the client (and reinitializable in the first place)
  • only one attempt is made to find an object
  • the retain/autorelease will prevent the cache from working unless a fairly tight autorelease pool regime is maintained
  • there are quite a few message sends
  • it's not what is used in production
The effectiveness of the cache obviously depends on your allocation patterns and on the size of the object cache. A larger cache takes longer to fill before it starts wrapping around and reusing objects, while a smaller cache is more likely to wrap around while objects are still in use. The actual implementation is very similar to the one presented above, except that it does a little more probing and uses IMP caching (sketched below) for all the messages sent on the critical path. These optimizations ensure that object caches are no slower than normal allocation even in worst-case situations such as every allocated object being retained. In addition, the cache can also be set to skip the retain/autorelease, which is safe when you are pushing objects and have control over the cache:
-doSomething:target
{
    // cache is an ivar
    id obj=GETOBJECT(cache);
    // target does not have access to cache
    [target doSomethingWithObject:obj];
    // obj now either has an extra retain or can be reused
}
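As for the IMP caching mentioned above, it works roughly like this; this is a sketch of the general technique, not the actual MPWObjectCache source:

#import <Foundation/Foundation.h>

// Sketch of IMP caching (illustrative, not the production code):
// resolve a selector used on the critical path to a C function
// pointer once, then call through the pointer, saving a dynamic
// method lookup per allocation.
typedef NSUInteger (*RetainCountIMP)(id, SEL);

static SEL            retainCountSel;
static RetainCountIMP retainCountImp;

static void cacheRetainCountIMP( Class objClass )
{
    retainCountSel = @selector(retainCount);
    retainCountImp = (RetainCountIMP)[objClass instanceMethodForSelector:retainCountSel];
}

// on the critical path, instead of [obj retainCount]:
//     if ( retainCountImp( obj, retainCountSel ) > 1 ) { ... }

This is safe here because all objects in a given cache are instances of the same class, so the cached implementation cannot change between calls.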
The ability to safely hand out objects without the protective retain/autorelease is a pleasant side effect of the decision to turn the object cache into an object that can be instantiated and placed in an instance variable, rather than the typical object pool implemented as class methods. A class method that maintains such a pool usually has no information about the lifetime of the objects it hands out, so to be safe it always has to protect them, negating much of the advantage of caching. Similar caveats apply to multi-threading and locking. Those caveats notwithstanding, MPWObjectCache also provides the CACHING_ALLOC macro for creating class-side allocation methods backed by an object cache, which is used in the HOM implementation to reduce the cost of allocating trampolines:
 CACHING_ALLOC( quickTrampoline, 5, YES )
This creates a +quickTrampoline method backed by an object cache with 5 entries. The YES flag allows objects to be returned from the cache without the retain/autorelease, even though this isn't one of the safe "push" patterns described above. It is safe nevertheless, because the trampoline is used only temporarily to catch and forward the message, all of which is code controlled by the implementation; the trampoline is no longer needed by the time any client code implementing the actual HOM runs. So, this is how and why object caches can make your (temporary) object allocations much, much faster.