Time to Challenge the Dogma of Programming

December 19, 2014 by Alex Thompson

While continuing to enhance Primoca's feature set, I feel I need to take on a larger problem. Programming today is a mess: an endless stream of bugs and confusion. New languages and frameworks come along promising to ease your troubles, but ultimately they bring their own problems and do little to improve the overall experience of programming.

To make a significant leap we need to dig deeper and try some more radical approaches. In doing so we'll have to challenge many of the assumptions of the past that have become dogma. I don't have all the answers yet but I do have some hunches and observations in this area.

Moving Beyond Plain Text

On a standard English keyboard there are 32 symbol characters:

` ~ ! @ # $ % ^ & * ( ) - _ = + { } [ ] \ / | ? ; : ' " , . < >

These characters make up the syntax of nearly every programming language. But there are far more programming constructs than symbols, so each character gets reused across multiple scenarios. One side effect is that a compiler can't give you a good error message for a syntax error: with so many possible meanings for each symbol, it can't guess what you were trying to do.
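To make that concrete, here's a quick sketch in Python (my own example; any mainstream language would show the same thing) of a single symbol wearing several unrelated hats:

product = 3 * 4             # multiplication
squared = 3 ** 2            # exponentiation

def total(*numbers):        # collect positional arguments
    return sum(numbers)

first, *rest = [1, 2, 3]    # sequence unpacking
combined = [*rest, 9]       # list splatting

print(product, squared, total(1, 2, 3), first, rest, combined)

Each of these is a different construct that the parser must disambiguate from context, which is exactly why a misplaced * gets you a message about the symbol rather than about your intent.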

Also, I would argue that this small set of symbols is limiting our imagination of what a programming language can be. How did we end up with this particular set? Legacy compatibility with QWERTY keyboards, going all the way back to typewriters more than 100 years ago.

  • Why must our programming languages be so heavily influenced by the punctuation needs of typewriter users in the late 1800s?
  • Are these symbols ($#%) even appropriate for conveying programming constructs?

The editing of code in plain text files also seems like an odd paradigm. In most software situations it is considered good practice to separate presentation from storage, yet for code we insist on editing the serialization directly, one byte at a time. When we edit an image we don't do it in a hex editor, so why is byte-by-byte manipulation ideal for code?

Editing the serialization also saddles us with strange things like code formatting standards. If one person likes curly braces like this:

if(i==1){

}

and another like this:

if(i==1)
{

}

Who cares? The logic is identical. The view of the code should adapt to the reader's preferences. All these issues make me think that plain text is not the final coding medium for all time.
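If code were stored as structure instead of characters, formatting would be nothing more than a rendering preference. Python's ast module already hints at what that would look like; here's a small sketch (ast.unparse needs Python 3.9 or later):

import ast

# Two differently formatted sources with identical logic.
src_a = "if i==1:\n    pass"
src_b = "if i == 1 :\n        pass"

# Parsed into trees, the formatting differences vanish.
tree_a, tree_b = ast.parse(src_a), ast.parse(src_b)
print(ast.dump(tree_a) == ast.dump(tree_b))   # True

# The tree can then be rendered back out in any house style.
print(ast.unparse(tree_a))                    # if i == 1: ...

Store the tree, render to taste, and the brace wars simply dissolve.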

See What You're Doing

Somewhat inspired by Bret Victor's principles about reducing friction between ideas and output, I think there's definitely a wide frontier of ways to improve IDEs. Or to put it another way, the debugger needs to do a hell of a lot more than it does now.

As of now, code is just symbols, and the execution flow has to be imagined. Furthermore, any logic buried in abstractions must be imagined without even seeing the symbols. The skill of a programmer ends up being measured by how many abstractions you can hold in your head simultaneously, and anyone who can't juggle a high number is considered "not cut out" for software.

But programming is mostly just putting many simple operations together, operations that on their own or in small groups can be understood by anyone. The tools should let everyone work at the limit of their ability, whatever that limit is. Often, if you can just see what's happening, the solution is obvious.
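As a toy example of what "just seeing" might mean, Python's sys.settrace can surface the flow and state we normally only imagine. This is a crude sketch of something a richer debugger would show continuously, not a proposal in itself:

import sys

def show(frame, event, arg):
    # Print each executed line and the local variables at that moment.
    if event == "line":
        print(f"line {frame.f_lineno}: locals = {frame.f_locals}")
    return show

def demo():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(show)   # trace every frame from here on
demo()
sys.settrace(None)   # stop tracing

Even this crude printout turns an imagined loop into something you can read line by line; a real tool would make that view live, visual, and always on.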

So what is the answer to making programming an order of magnitude easier and better? A hard question indeed, but some thoughts: a language that's designed to be an IDE itself. A visual and spatial modeler for code execution. A runtime with tighter constraints that, in exchange, alleviates even greater complexities. It's the wild west, time to start building...

Tags: programming