I recently read a few posts suggesting that any computer language that requires an IDE is inherently flawed. If I understood the argument correctly, the point was that all of the extra tools typically found in IDEs for languages like C++ and Java are really crutches that help the developer cope with the language’s failings.
On the one hand, I suppose I can see that point. If it weren’t for all of the extra nudging and prompting provided by these tools, coding a Java application of any complexity would become much more difficult. The same could be said for serious C++ applications, and certainly for any application with mixed environments and multiple developers.
On the other hand, these languages are at the core of most of the heavy lifting in software development, and the feature lists of the most popular IDEs continue to grow. There must be a reason for that. The languages that can be easily managed with an ordinary editor (perhaps one with good syntax highlighting) are typically not a good fit for large-scale projects, and if they were pressed into that kind of service, a more powerful environment would quickly become a must for other reasons.
This got me thinking that perhaps all of this extra complexity is part of the ongoing evolution of software development. Perhaps the complexity we are observing now is a temporary evil that will eventually give way to some truly profound advancements. Languages with simpler constructs and syntax are more likely throwbacks to an earlier paradigm, while the more complex languages are straining against the edges of what is currently possible.
The programming languages we use today are still rooted in the early days of computing, when we would literally hand-wire our systems to perform a particular task. In fact, the term “bug” goes all the way back to actual insects, most famously a moth pulled from a relay of the Harvard Mark II, that would occasionally get into the circuitry of these machines and cause them to malfunction. Once upon a time debugging really did mean what it sounds like!
As the hardware of computing became more powerful, we were able to replace physical wiring with machine code that could virtually rewire the computing hardware on the fly. This is still at the heart of computing. Even the most sophisticated software in use today eventually breaks down into a handful of bits that flip switches and cause one logic circuit to connect to another in some useful sequence.
In spite of the basic task remaining the same, software development has improved significantly over time. Machine code was better than wires, but it too was very complicated and hardware-specific. Remembering op codes and their numeric translations is challenging for wetware (brains), and in any case that knowledge isn’t portable from one type of machine to another. So machine code eventually evolved into assembly language, which allowed programmers to use more familiar verbs and register names to describe what they wanted to do. For example, you can probably guess that “add ax, bx” instructs the hardware to add a couple of numbers together and that “ax” and “bx” are where those numbers can be found. Even better, assembly language offered a measure of portability from one chunk of compatible hardware to another, because the assembler (a hardware-specific chunk of software) kept track of the specific op codes so that software developers could more easily reuse and share chunks of code.
From there we evolved to languages like C that were just barely more sophisticated than assembly language. In the beginning, C was little more than a handy syntax that could be expanded into assembly language in an almost cut-and-paste fashion. It was not uncommon to embed assembly language inside of C programs when you wanted to do something specific with your hardware and you didn’t have a ready-made library for it.
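To make that concrete, here is a minimal sketch of that old mixing of levels, assuming GCC (or Clang) on an x86 machine: the C code sets up the values, an inline “add” instruction (the 32-bit cousin of the “add ax, bx” above) does the arithmetic, and the assembler fills in the exact op codes.

```c
#include <stdio.h>

int main(void) {
    int a = 2, b = 3, sum;

    /* GCC extended inline assembly: we write the mnemonic, and the
       assembler translates it into the hardware's numeric op codes. */
    __asm__("addl %%ebx, %%eax"   /* eax = eax + ebx */
            : "=a"(sum)           /* output: sum is read back from eax */
            : "a"(a), "b"(b));    /* inputs: a goes into eax, b into ebx */

    printf("%d + %d = %d\n", a, b, sum);
    return 0;
}
```

Today, of course, the compiler would emit this for a plain `a + b` without any help from us, which is rather the point.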
That said, the C language and others like it did give us more distance from the hardware and allowed us to think about software more abstractly. We were better able to concentrate on algorithms and concepts once we loosened our grip on the wiring under the covers.
Modern languages have come a long way from those days, but essentially the same kind of translation is happening. It’s just that far more of it happens automatically, which means far more of the decisions are being made for us: by other people, through software tools and libraries, or by the machinery itself, through memory managers, signal processors, and other specialized devices.
This advancement has given us the ability to create software that is profoundly complex, sometimes unintentionally! Our languages and development tools have become more sophisticated to help us cope with this complexity while we chase the lure of creating ever more powerful software.
Still, fundamentally, we are stuck in the dark ages of software development. We’re still working from a paradigm where we tell the machine what to do and the machine does it. On some level we are still hand-wiring our machines. We hope that we can get the instructions right and that those instructions will accomplish what we have in mind, but we really don’t have a lot of help with those tasks. We write code, we give it to the machine, we watch what the machine does, we make adjustments, and then we start again. The basic cycle has sped up quite a bit, but the process of software development is still a very one-way endeavor.
What we are seeing now in complex IDEs could be a foreshadowing of the next revolution in software development, where the machines will participate on a more equal footing in the process. The future is coming, but our past is holding us back. Right now we make educated guesses about what the machine will do with our software, and our IDEs try to point out obvious errors and give us hints that help our memory along the way. In fact, they are straining at the edges of what is currently possible to do this, and the result is a kind of information overload.
The problem has become so bad that switching from one IDE to another is a lot like changing countries. Even if the underlying language is the same, everything about how that language is used can be different. It is almost as if we’ve ended up back in the machine-code days, when platform-specific knowledge was a requirement. The difference is that instead of knowing how to rewire a chunk of hardware, we must know how to rewire our tool stack.
So what would happen if we took the next step forward and let go of the previous paradigm completely? Instead of holding on to the idea that we’re rewiring the computer to do our bidding, and that we are therefore completely responsible for all of the associated details, we could collaborate with the computer in a way that brings our relative strengths together and achieves a superior result.
Wetware is good at creativity, abstraction, and the kind of fuzzy thinking that goes into solving new problems and exploring new possibilities. Hardware is good at doing arithmetic, keeping track of huge amounts of data, and working very quickly. This seems like the makings of a great team, because each partner brings something the other is lacking. The trick is to create an environment where the two can collaborate efficiently.
Working with a collaborative IDE would be more like having a conversation than editing code. The developer would describe what they are trying to do using whatever syntax they understand best for that task, and the machine would provide a real-time simulation of the result. Along the way the machine would offer recommendations through syntax highlighting and co-editing, hints about known algorithms that might be useful, and simulations of candidate solutions.
The new paradigm takes the auto-complete, refactoring, and object-browser features built into current IDEs and extends that model to reach beyond the code base of any given project. If the machine understands that you are building a particular kind of algorithm, it might suggest a working solution from the current state of the art. This suggestion would be custom-fitted to the code you are describing and presented as a complete simulation along with an analysis (if you want it) of the benefits. If the machine is unsure of what you are trying to accomplish, it would ask you questions about the project using a combination of natural language and the syntax of the code you are using. It would be very much like working side by side with an expert developer who has the entire world of computer science at the top of their mind.
The end result of this kind of interaction would be intelligent, self-documenting software that understands itself on a very deep level. Each part of the code base would carry with it a complete simulation of how the code should operate, so that it can be tested automatically on various target platforms and so that new modifications can be regression-tested during the development process.
The software would be _almost_ completely proven by the time it was written, because unit tests would have been performed in real time as the various chunks of code were developed. I say _almost_ because there are always limits to how completely any system can be tested, and because there are always unknowns and unintended consequences when new software is deployed.
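For scale, today’s nearest analogue is the humble hand-written unit test. A minimal sketch in C, where the `add` function is just a hypothetical stand-in for code under development:

```c
#include <assert.h>
#include <stdio.h>

/* A hypothetical function under development. */
static int add(int a, int b) {
    return a + b;
}

int main(void) {
    /* The kind of checks an intelligent environment would generate
       and run automatically as the code is being written. */
    assert(add(2, 3) == 5);
    assert(add(-1, 1) == 0);

    puts("all tests passed");
    return 0;
}
```

The difference in the proposed environment is that checks like these would be generated and run continuously against a live simulation, instead of being written and run by hand after the fact.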
Intelligent software like this would be able to explain the design choices that were made along the way so that new developers could quickly get a full understanding of the intentions of the previous developers without having to hunt them down, embark on deep research efforts, or make wild guesses.
Intelligent software could also update and test itself as improved algorithms become available, port itself to new platforms automatically as needed, and provide well documented solutions to new projects when parts of the code base are applicable.
So, are strong IDEs a sign of weak languages? I think not. Instead, they are a sign that our current software development paradigm is straining at the edges as we reach toward the next revolution in computing: Intelligent Development Environments.