
Development and obsolescence of programs – the programmer’s challenge and nightmare

Saša Divjak

I belong to the older generation of programmers, with my first programming experience dating back to 1967 and the legendary Zuse Z32 computer. The beginning of the 1970s was marked by punched cards and perforated tape. Micro- and sometimes minicomputers were programmed in an interesting way back then. Computers were equipped with a teletype, a peripheral unit that allowed typing, printing, paper tape reading and punching. Software was prepared and run in this order:

1. First, load the text editor, stored as binary code on its own tape.
2. Write the program and punch the source code (often in assembly language) onto a new tape.
3. Read in the assembler, again coded in binary form on a separate tape.
4. Read in “your own” program in source code. As far as I can remember, this was done in two steps as well, because the assembler needed at least two passes to complete the process.
5. Finally, if everything went (fairly) well, punch a new tape with your own program in binary code.
6. Read in the new binary program and execute it.

What now takes a fraction of a second used to take quite a few minutes, and you could only hope you hadn’t made a mistake, or the whole process had to be repeated.

When programming microcomputers, your own machine code program was usually “burned” into EPROM (an integrated circuit which formed part of the microcomputer’s memory). This is how we developed various microcomputer-supported automation protocols. But programmers tend to get things wrong. Because the whole cycle took a while to complete, we often (whenever possible) made corrections directly in the machine code, thus skipping the time-consuming punching process. As a result, the program worked correctly in EPROM, but the source code did not match anymore. Which is very wrong, of course.

Mentally jumping back and forth between assembly language and the machine code of the program was nothing special. After all, back then we often entered the bootstrap loader using the switches on the computer console. It became a habit, and it’s perhaps no surprise that we knew sequences of dozens of commands at the machine level by heart. This is the kind of experience I had with the first generation of Digital PDP-11 computers in the 1970s.

Before I continue, I would like to point out that at the Faculty of Computer and Information Science I lectured on programming, systems software and operating systems, and, what I loved most, computer graphics. This is reflected in some of the memories I mention further on.

There are now over 9,000 registered programming languages in the world, and the way we program computers has changed significantly. In the 1980s, we introduced the C programming language into the computer and information science study course at the then Faculty of Electrical Engineering and Computer Science; to this day it serves as our “Latin” and a solid foundation for many other programming languages. In 1997 we introduced the Java programming language. As the main lecturer, I always worried about the constant evolution of this young language, which indeed went through some major transformations in the following years. We used these languages for various projects.

On the other hand, in the late 1990s we were already using JavaScript to program web applications, and it remains just as popular today. Later, object-oriented programming was joined by other approaches as they emerged. Among them was component-oriented programming, in which new applications are built by combining one’s own source code with function modules offered by various problem-oriented libraries. Why reinvent the wheel when solutions are already available, for instance for supporting 2D and 3D graphics, or for running various more or less complex, tested and effective routines? This approach requires knowledge of APIs (Application Programming Interfaces), but it speeds up development quite substantially. The downside is that the rapid release of new versions sooner or later leaves the different components of our applications incompatible with each other, making the software obsolete. Particularly dangerous is the “mixing” of technologies from different developers, each following their own standards and guidelines.

One example would be the now-forgotten Virtual Reality Modeling Language (VRML), which emerged alongside JavaScript and at the time enabled quite decent 3D visualization and animation of 3D scenes. Combining VRML and JavaScript made it possible to create very attractive 3D visualizations and interactive simulations of natural phenomena. These examples are now completely obsolete and can no longer be displayed (truth be told, modern 3D graphics are something completely different).

You may remember Java applets, which enabled various applications (including 3D visualizations) to run in our browsers. Then it transpired that the technology had too many security flaws, because it could work outside of the advertised, supposedly safe “sandbox”. One browser after another disabled applets in its upgraded versions. Today they can only be seen on computers with purposely installed obsolete operating systems and browsers, often set up as virtual machines.

Developers urgently needed replacements and were eager to find them in similar technologies. Many therefore switched to the once-popular Flash and its programming language, ActionScript; in some cases that helped preserve up to 80% of their code. Later it transpired we were hopelessly wrong: Flash, and ActionScript along with it, are now extinct. JavaScript and jQuery turned out to be the way to go, thanks to their high programming efficiency, and once again various libraries came in handy for the effective and uniform development of user interfaces and similar systems.
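To give a flavour of the component approach, here is a minimal sketch of my own (not taken from any particular project): in the jQuery era, a couple of library calls replaced pages of hand-written code for selecting elements, handling events and animating. The element IDs are hypothetical, and the snippet assumes the jQuery library is loaded on the page.

    // A minimal sketch of library reuse in the jQuery era:
    // the library supplies selection, event handling and animation;
    // "our own" code shrinks to the application logic itself.
    // (Element IDs are hypothetical, for illustration only.)
    $(function () {                     // run once the DOM is ready
      $('#toggle').on('click', function () {
        $('#panel').slideToggle(400);   // animated show/hide in one call
      });
    });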

Nowadays we are surrounded by numerous mobile devices: smartphones, tablets, and large or small laptops and desktop computers. Because screens vary so much, responsive design was developed to enable a similar user experience across all devices, regardless of screen size and resolution. What helps in developing such applications? The jQuery Mobile library seems like a logical step, because it enables the design of graphical user interfaces that suit various devices. But there are other options, such as the popular Bootstrap. This was all well and good, but only for a short while: jQuery was upgraded to the next versions (version 3 at the time of writing this paper), and development tools suddenly start sending the developer warnings that parts of the code are obsolete or deprecated. This is unpleasant at the very least, and we should worry about code that is ageing and will probably become unusable over time. Well, we can migrate the code to comply with the new rules. This task, although time-consuming and painfully tedious because it requires a systematic approach, is necessary to keep up with progress. But there is a trap or two around the corner. The development of jQuery Mobile, for instance, stopped, and it no longer followed new versions of jQuery. We could do yet another migration, or abandon the code that depends on such an obsolete library.
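What such a migration looks like in practice can be shown with a tiny, hedged example: jQuery 3 deprecated the old .bind() event API in favour of .on(), so code written against earlier versions has to be rewritten call by call. The selector and handler below are hypothetical.

    // A hypothetical handler, just to make the example self-contained.
    function handleSave() {
      console.log('saving…');
    }

    // Before: event binding as written for early versions of jQuery
    // (.bind() has been deprecated since jQuery 3.0 and is flagged by tools).
    $('#save').bind('click', handleSave);

    // After: the same behaviour, migrated to the current API.
    $('#save').on('click', handleSave);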

Today we talk about extinct languages, and a programmer’s work is far from finished after an attractive application is complete. Its maintenance over time requires extra effort.

One of the questions a programmer may ask is whom to trust and follow so that his or her efforts do not go to waste too quickly. Certain global giants could offer some answers, but even this is not risk-free. Returning to 3D graphics, I remember Microsoft’s Silverlight technology, which offered an array of beautiful 3D worlds and their animation. The trap this time was that it was a Microsoft product. Would others follow? Other operating systems exist besides Microsoft Windows. And so it happened that Silverlight did not catch on, and it is now abandoned.

The dilemmas developers face nowadays are no different. The development of Android and iOS applications is very attractive due to the popularity of mobile devices. There are quite a few developer platforms available on the internet. But which ones do we think will last at least a little bit longer? And which will die out quickly? Is it React, or perhaps Flutter, which is based on a brand-new language called Dart? Why is this even necessary? We read forums and shape our opinions in the hope that our direction is the right one.

If we start a project from scratch, the first thing to do is to analyze the prevailing trends. Web applications, and increasingly cloud computing, are the most popular. For a while now, HTML5, CSS and AJAX/JSON have been the principal technologies to use. JavaScript (or better yet jQuery) has superseded Flash. Applets are long gone.
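To make this stack concrete, here is a minimal sketch (my illustration; the URL and element ID are hypothetical placeholders) of the typical AJAX/JSON pattern: fetch JSON from the server asynchronously and update the page with the result.

    // A minimal AJAX/JSON sketch in the jQuery style mentioned above.
    // The URL and the element ID are hypothetical.
    $.getJSON('/api/status', function (data) {
      // 'data' is the parsed JSON object returned by the server.
      $('#status').text(data.message);
    }).fail(function () {
      $('#status').text('Request failed');
    });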

And program development itself has changed. Rapid, incremental development is gaining ground over the sequential approach. Within a single project, programmers sometimes use several languages and have to know different APIs. Programs are increasingly complex.

One can observe a polarization of programming. On the one hand, we use high-level programming languages that boost programmers’ productivity, support parallelization and enable work in cloud environments. On the other hand, code efficiency, execution speed and asymmetric calculations (not least due to multi-core systems) can sometimes matter just as much. Then there is “democratic” computing, meaning that even a less knowledgeable (but motivated) user can develop a small segment, or at least tailor it to their needs. And we must not forget “dangerous” computing: if something gets too complex, a new framework can be designed on top of the previous one. The resulting stack looks a lot like a pile of dirty dishes, with inefficient code or security vulnerabilities hidden somewhere within.

We may wonder what lies ahead. Let us not forget Moore’s famous law, which says that the number of transistors (i.e. the density of integrated circuits) doubles every two years. Then there are the four laws postulated in 1997 by Nathan Myhrvold, formerly Chief Technology Officer at Microsoft, which describe what is happening to software. His laws of software invite an interesting comparison with Newton’s laws. Let’s take a look.

Nathan’s 1st law: Software is a gas.

It always expands to fit whatever container it is stored in (i.e. the computer’s capacity). Such expansion can be observed in numerous new versions of operating systems, such as Windows and Linux, and in the ever-growing length of browsers’ code.

Nathan’s 2nd law: Software grows until it becomes limited by Moore’s law.

Software initially grows rapidly, like an expanding gas, but is inevitably limited by the rate at which hardware speed increases. Sooner or later, software brings every processor to its knees, and this usually happens just before new models see the light of day.

Nathan’s 3rd law: Software growth makes Moore’s law possible.

People buy new hardware because the software requires it. Integrated circuits are faster than ever, but the price of computers remains more or less the same. We get better value for our money. This phenomenon goes on and on, because new programs emerge all the time.

Nathan’s 4th law: Software is limited only by human ambition and expectation.

We never get enough. New applications keep us busy, along with new ideas about what is popular.

Programs and programming are always in a state of crisis. Whatever we achieve rarely meets the expectations of the users. The bar of expectation is constantly rising.

Programming is challenging even for experienced programmers. We constantly switch from one mental model to another and translate various solutions into code and back. Programs are, in fact, abstractions, and we often use concrete examples to understand them.

Also helpful are approaches such as object-oriented programming, and the use of high-level programming languages. Another common practice is the copy/paste approach, which is based on using code snippets.

It is a well-known fact that even children understand something better with a visual example. We can easily understand what we feel, but words have to be analyzed first to get at their meaning. Short sentences are not a problem; long texts, however, are more time-consuming and exhausting. Program code is no different. Some source programs are so difficult to understand that it is sometimes easier to develop them from scratch.

It is true that programming languages are intended for people, not computers, and that we are still at the dawn of the history of programming. We took a leap from punched cards and tapes to interactive work in front of computer screens. The next shift will have to take into account that in a few decades we expect computers to reach human-level intelligence. How will we program such computers? Will they learn the skills by themselves? Will they come up with new standards? The future will be exciting. And perhaps we should be worried.