Note: a draft of this essay was accepted as a talk to JSConf EU 2019, and you can watch a video of it.
Can you think of a program that lets you draw, perhaps freehand, and resize your shapes at will? Does this program let you set constraints on these shapes, such as "centered", "parallel to", or "evenly spaced"? Can you make copies of your shapes? Do those copies update automatically when you update the original?
If you were thinking of Sketch, you'd be partially correct. I was describing Sketch...Pad, a program developed by Dr. Ivan Sutherland back in 1963, more than 50 years ago. Let's look at SketchPad...
Ivan Sutherland developed SketchPad as part of his PhD thesis, and when Alan Kay, who studied with him, asked how he had managed to create the first object-oriented system and the first graphical interface system, all within one year, Ivan replied that he just didn't know it was hard! 1
Ivan really set a high bar and a direction for the likes of Douglas Engelbart, who created the first computer mouse. Engelbart and his team at the Stanford Research Institute spent 10 years developing a system so incredible that I have a hard time grasping that it existed.
The oN-Line System was first demoed publicly in 1968, and it's been dubbed "The Mother of All Demos". Let's look at why:
Here we can see Douglas essentially doing a live-stream of his computer, where he modifies this content-based layout to fit in a video stream from a different computer altogether. That's video conferencing happening in real time in 1968.
They discuss some matters of the presentation, and they proceed to link the workspaces. They are then essentially doing live-coding. Seriously, this presentation is epochal.
With this contextual background we can move on to the last piece of historical context we need: Xerox PARC.
PARC was the place where Alan Kay, Adele Goldberg, and Dan Ingalls created Smalltalk. This language was designed to be a vehicle for human-computer symbiosis2. It's a language designed to fade quickly into the background and let you express yourself, just like a musical instrument would. It offers programmers two core capabilities:
Something worth noting is that Alan Kay has repeatedly said that when they invented object-oriented programming, the important part was not the objects themselves, but the messages3. It's about the communication.
Our world is concurrent: everything around us happens in parallel with us, and we communicate with everything around us by sending messages.
So what made this approach revolutionary? I believe it was 2 things. The first one is the computation model behind these applications. The second one is the philosophy behind them.
When we model an application in Smalltalk, we model it in terms of objects that interact with each other via message passing. Each one of these objects performs hopefully one very small task, and has a number of messages that it understands and that it sends to other objects. It may have some internal state that can change over time as it receives and processes some of these messages.
A Smalltalk application will normally have millions of objects, and you may even think of each of those tiny objects as a computer of its own.
The underlying computational principles by which Smalltalk works are equivalent to the Actor Model.
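To make the model concrete for a JavaScript audience, here is a minimal sketch of an object that only interacts with the world through messages it understands. The names (`makeCounter`, the selectors) are illustrative, not taken from Smalltalk itself:

```javascript
// A "Smalltalk-flavored" object: private state, plus a single entry point
// that dispatches on a message selector, like an object responding to the
// messages it understands.
function makeCounter() {
  let count = 0; // internal state, changed only by processing messages
  return function receive(selector, ...args) {
    switch (selector) {
      case 'increment': count += args[0] ?? 1; return count;
      case 'value':     return count;
      default:          throw new Error(`does not understand: ${selector}`);
    }
  };
}

const counter = makeCounter();
counter('increment');
counter('increment', 4);
console.log(counter('value')); // → 5
```

The point is not the closure trick itself, but that nothing outside the object can touch `count` directly; all interaction goes through messages.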
In the actor model we have several actors, essentially independent entities, being executed in complete isolation. They solve problems by collaboration, and they collaborate via message passing, just like I send a text to someone to open the door when I've arrived.
Modeling systems with this approach has 3 interesting properties:
It provides a degree of failure isolation that is just a necessity for applications that must continue to run uninterrupted
It is asynchronous by default, which more naturally models how we interact with computers
It can make better use of hardware by means of parallelization
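The shape of the model can be sketched in a few lines of JavaScript. This is not a real actor runtime, just an illustration: each actor has a private mailbox, processes one message at a time, and knows about its peers only well enough to send them messages:

```javascript
// A minimal, illustrative actor: a private mailbox and sequential,
// asynchronous message processing.
class Actor {
  constructor(behavior) {
    this.mailbox = [];
    this.behavior = behavior;
    this.processing = false;
  }
  send(message) {
    this.mailbox.push(message); // sending never blocks the sender
    if (!this.processing) this.drain();
  }
  async drain() {
    this.processing = true;
    while (this.mailbox.length > 0) {
      const message = this.mailbox.shift();
      await this.behavior(message); // one message at a time, in order
    }
    this.processing = false;
  }
}

// Actors collaborate by messaging, like texting someone to open the door.
const door = new Actor(async (msg) => {
  if (msg.type === 'open') console.log('door opened for', msg.from);
});
const guest = new Actor(async (msg) => {
  if (msg.type === 'arrived') door.send({ type: 'open', from: 'guest' });
});
guest.send({ type: 'arrived' });
```

Note that `guest` holds a reference to `door` only to address messages to it; it never reaches into `door`'s state.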
Have you ever been on a website that was completely unusable because some unhandled exception blew up the entire application stack? I think we all have. If our applications are normally composed of dozens of independent collaborating systems, why do we treat them as a single monolith that we have to program incredibly carefully?
We know that our applications are scaling massively in complexity, and preserving good experiences for the people that use them is, for some of us, practically the definition of our day job. So why are we risking so much?
In the actor model, an actor failing does not take down the entire program. In fact, you can dedicate actors to watching for failures and restarting the parts of the system that fail.
Of course we could decide that certain failures should crash the entire application, but it is now a conscious decision to degrade the experience to an irrecoverable crash.
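Here is a hedged sketch of that supervision idea, loosely inspired by Erlang's supervisors. All the names are hypothetical; the point is only that a failure is contained and answered with a restart rather than a crash:

```javascript
// An illustrative supervisor: it runs a child behavior, and if processing a
// message throws, it restarts the child with fresh state instead of letting
// the whole program crash.
function supervise(makeChild) {
  let child = makeChild(); // child is a (message) => void with its own state
  return function send(message) {
    try {
      child(message);
    } catch (err) {
      console.log('child failed, restarting:', err.message);
      child = makeChild(); // restart with a clean slate
    }
  };
}

const flaky = supervise(() => {
  let seen = 0;
  return (msg) => {
    if (msg === 'boom') throw new Error('unhandled!');
    seen += 1;
    console.log(`processed ${seen} message(s) since last restart`);
  };
});

flaky('hello'); // processed 1 message(s) since last restart
flaky('boom');  // child failed, restarting: unhandled!
flaky('hello'); // processed 1 message(s) since last restart
```

Deciding that `'boom'` should instead crash everything is still possible, but now it is an explicit policy in one place rather than the default behavior of the whole program.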
When it comes to the nature of the applications that we build, we've had to go to great lengths to create applications that can react to users instantaneously. Any actor system can choose to be as reactive (or proactive) as it needs to be, and the choice comes down to what to do when we receive a message.
Lastly, we struggle to parallelize our stack-based applications because we have a single stack, running on a single thread, and any coordination between multiple threads has to be done manually.
When thinking in terms of actors, we could potentially run every actor in parallel. This is because each actor is essentially its own tiny computer that knows nothing about the rest, other than the names it needs in order to send them messages.
It is impressive to see how these 3 properties work together in practice. Most notably, Erlang is built on the same principles (although it uses a different terminology, calling actors processes), and it is these ideas that allow for massively concurrent systems.
I will grant you that we don't have 2 million users per browser, but knowing that your application can crash and recover itself on the fly is incredibly powerful for raising the quality bar of any experience that we design and develop.
The last point I'd like to make today is that of the philosophy behind these graphical systems of the past. Smalltalk applications, and those built on modern derivatives such as Pharo, embody 2 key ideas:
The idea of Liveness of an application, and
The idea of Directness of an application.
Liveness is the ability to ALWAYS respond to a user's actions. This means that whatever you do on the system, and whatever the system is doing, there is never a complete stop, a gap in the feedback loop. Naturally, as the workload of the system increases and computing resources get scarce, the system will respond more slowly, but it never stops responding.
Now remember the last time you were on a website that just didn't do anything for some time? Maybe because it was loading something, or because an unhandled exception made it to the main event loop. Failure isolation is foundational to Liveness.
Directness, on the other hand, means that whatever the user is seeing, the graphical representation of the system, is a point at which the user can begin to explore the complete system. They can inspect a button, change its attributes, restructure the behavior, reconstruct a user flow to suit their needs better, persist those changes, all by modifying _the thing which they already see_.
It is likely hard to agree with me right now that we need to rethink the foundations on which we build user applications from the ground up. I get it. With all these fancy frameworks out there, backed by huge corporations, how could they be wrong?
I invite you to take a first step into this world by trying out Pharo Smalltalk. Pharo is a dialect of Smalltalk that is quite alive and has a growing community, and it should be more than enough to showcase why some of these attributes are paramount to increasing the quality of the experiences we ship.
You'll find that these same properties, alongside a highly reflective language, allow for a development experience quite unlike what we have seen in other mainstream languages.
But most importantly, what can we do to start Building WebApps like it's 1972?
My best answer is to start by learning about them. In the meantime, at SRC we are working as hard as we can on a formally specified abstract actor machine that we can use to bring this computation model to every conceivable platform in a consistent, transparent, and delightful way.
It will be some time before we have something worth demoing, but stay tuned!