Sunday, April 5, 2020

Why any Fundamental Improvement in Software has to be a Generalisation

A dynamic I see playing out again and again when it comes to software is the tension between incrementalism and radical change. On the one hand, there is a justified sense, backed by a lot of experience, that merely tweaking what we have really doesn't cut it, that it's just rearranging the deck chairs on the Titanic. We obviously need radical change.

On the other hand, radical change that assumes we need to throw away what we (think we) know doesn't really cut it either. And the problem posed by all that existing software, and by the techniques and technology we used to create it, isn't just the pragmatics of the situation, with huge investments in code and know-how. The fact that we are actually capable of creating all this software means that the radical position of "throw it all away, it's wrong" isn't really tenable. Yes, there is something wrong with it, but it cannot actually be completely wrong.

So we are faced with a dilemma: incremental change and radical change are both obviously right and both obviously wrong. And so we get a lot of shouting at each other, a lot of "change", but not a whole lot of progress.

The only way out I see is for change to be radical while still including the status quo, and the only way I can see of achieving that is if it is a generalisation, sort of like how quantum mechanics generalised classical mechanics, superseding it but still including it as a special case. (Or how circles were generalised to ellipses, etc.)
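
As a toy sketch of that kind of generalisation (using a hypothetical Ellipse type in Haskell, purely for illustration), the old concept survives as a special case of the new one rather than being thrown away:

-- A generalisation that keeps the old case: Ellipse supersedes Circle,
-- but every circle is still expressible as the case where both axes agree.
data Ellipse = Ellipse { semiMajor :: Double, semiMinor :: Double }

-- The old concept is recovered, not discarded.
circle :: Double -> Ellipse
circle r = Ellipse r r

-- Anything defined for the generalisation automatically covers the
-- special case, e.g. area (circle r) == pi * r * r.
area :: Ellipse -> Double
area (Ellipse a b) = pi * a * b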

5 comments:

dmbarbour said...

As I understand it, your argument is that a new way of doing things must capture or encompass the old way of doing things - the old way isn't "wrong" by virtue of having been successful within its limited scope.

By analogy, stacking ladders to reach a gutter has been performed successfully, therefore it isn't wrong.

Perhaps success is not the only measure of rightness?

I think if we include other measures - scalability, security, update, hard real-time guarantees, etc. - we'll find plenty of problems in the foundation, and that the ability to simulate the current ways of doing things might be worse than useless, because the current ways of doing things are actually very awkward or awful from a better perspective.

I think a fundamental improvement in software won't generalize what we do today; it will be a new way of approaching problems that addresses several issues that aren't measured by mere 'success'.

Marcel Weiher said...

Hi David,

thanks for your comment!

The "not-wrongness" argument you are presenting would be more of an argument for the "extension of the existing" faction, which is also wrong, IMHO, or even for the "status quo is good enough" camp.

But if the status quo, "stacking ladders", were good enough, we wouldn't need something new.

What I am saying is more along the lines of

(a) if your theory says that stacking ladders cannot possibly work, then it is not a good theory, because no matter how fragile and questionable it is, stacking can be made to work, somehow.

(b) more importantly, if your theory takes the fragility of stacked ladders and concludes that ladders in general don't work, and that you must, for example, use cranes everywhere, then it is not just a poor theory, it is clearly and obviously wrong.

Give me a theory that includes, for starters, both cranes and ladders, with both easily accessible, so that choosing one over the other can be based on what is needed. We currently don't have that: we have ladders everywhere, and if you want a crane, you have to build it out of ladders.

And the only alternative we seem to have is the "cranes everywhere" folks, who claim that ladders cannot possibly work when they clearly do, as long as you don't stack them...

Both are incorrect.

Anonymous said...

"scalability, security, update, hard real-time guarantees"

If those are the goals we are aiming for, reforming software alone (to any degree) isn't going to cut it. We need new hardware that has powerful features for achieving those ends.

Dave

dmbarbour said...

There are a few different concerns to pick apart here:

* whether a theory (PL) supports a model for ladders, e.g. either by axiom (primitive) or theorem (library)
* whether building with ladders is a good idea even if you can model them (e.g. relatively high accident rates, systematic discrimination against penguins or people in wheelchairs, etc.)
* whether weaknesses of ladders or the fragility of stacking ladders are obvious in the model and can be controlled for in a larger context (i.e. safety, the assumption that every client is a fully functioning human)

I think the latter points are perhaps very relevant for 'not wrongness'. A fundamental improvement in software might still permit you to model ladders, much as you could model any Turing tarpit if you truly insist. But that does not imply encouraging use of ladders or making them readily accessible is a good idea.

Certainly, making ladders easily accessible would make introduction to the theory/language easier for people already familiar with construction from ladders, but then they're likely to just repeat the fallacies of ladder-based construction without really thinking about it, instead of learning the 'better' foundation.

dmbarbour said...

Anon wrote: "reforming software alone (to any degree) isn't going to cut it" (referring to SW security, RT guarantees, etc.)

There are changes in hardware that could simplify these problems. However, reforming software alone is sufficient. This does assume that the 'OS' is part of the reformed software (perhaps by means of no-OS, unikernel, etc.) and that we under-provision hardware for any hard real-time properties (e.g. static allocations of memory and threads).

A primary issue is that software today is distributed as opaque, black-box, insecure binaries. If we instead distributed software in a higher-level language, even something like the .NET CLR (which a trusted proxy could cache-compile to), we could make some useful guarantees based on the types. If we design a distribution language with sufficiently advanced types, then almost any guarantee could be made, allowing apps to be understood as safe plugins to an OS.
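
As a rough sketch of what a guarantee "based on the types" could look like (hypothetical capability types in Haskell, not any existing distribution format): a plugin's type names every capability the host must grant it, so whatever is absent from the type is something the plugin provably cannot do.

{-# LANGUAGE RankNTypes #-}
-- Hypothetical capabilities a host might grant to a plugin.
newtype ReadFileCap m = ReadFileCap (FilePath -> m String)
newtype NetworkCap  m = NetworkCap  (String -> m ())

-- Because the plugin is polymorphic in the monad m, it cannot perform
-- arbitrary IO of its own; it can only use what the host hands it.
-- No NetworkCap in the signature means: provably no network access.
type ReadOnlyPlugin = forall m. Monad m => ReadFileCap m -> m String

-- Host side: grant exactly the capability the type asks for, nothing more.
runReadOnly :: ReadOnlyPlugin -> IO String
runReadOnly plugin = plugin (ReadFileCap readFile)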

I think that hardware won't be significantly holding us back until after these problems are well handled at the software layer.