The application is the instrument

Remember the A:>? That’s the way we used to interact with computers back in 1982. Things will be different in 2030, as Dave Wilson explains.

‘Biologically the species is the accumulation of the experiments of all its successful individuals since the beginning.’ – H. G. Wells.

Remember the A:>? That’s the way we used to interact with computers back in 1982. Can you believe it? Those old machines didn’t do a darned thing until you gave them some idea of what to do. And most of the time, you had to write a program yourself in some really long-winded assembly code and then ask the computer to execute it just to get an application to run. What a waste of time.

Thank goodness things changed in 1995. Back then, you could actually write programs in the C language to create applications without needing to understand the underlying hardware of the computer itself. Those high-level languages replaced the need to get down and dirty with machine code and ran rather well, albeit on the antiquated von Neumann machines they were tasked to execute on.
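To give you a flavour of what that bought us, here is an illustrative few lines of C, my own invented example rather than anything from a real 1995 application, that average some sensor readings. The point is that the compiler, not the programmer, worries about registers and memory locations:

    #include <stdio.h>

    /* Illustrative only: average a buffer of sensor readings.
     * The compiler, not the programmer, decides which registers
     * and memory locations the calculation actually uses. */
    double average(const double *samples, int count)
    {
        double sum = 0.0;
        int i;
        for (i = 0; i < count; i++)
            sum += samples[i];
        return count > 0 ? sum / count : 0.0;
    }

    int main(void)
    {
        double readings[] = { 4.8, 5.1, 4.9, 5.0 };
        printf("mean = %.2f\n", average(readings, 4));
        return 0;
    }

Try writing that in 8086 assembly and you’ll see why nobody shed a tear for the old way.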

Fast forward to 2005. In those days, you could use graphical programming to link together a bunch of icons on a screen, each one a visual representation of a software algorithm that defined a specific function of the program you wanted your computer to run. And I can remember vividly that all you needed to do was download that virtual representation of the software onto your computer to execute the program. And if it didn’t perform well enough on the old von Neumann architecture? Well, then you could blow an FPGA to ensure that your very own customised data-flow architecture would handle all the critical timing paths of your application for you.
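If you never saw one of those diagrams, here is a rough textual stand-in, again in C and again invented purely for illustration, for the idea the icons expressed: each function plays the part of one icon, and the wiring between them is just data flowing from one output to the next input. (The acquire, filter and display names are mine; the real thing was drawn, not typed.)

    #include <stdio.h>

    /* A crude textual stand-in for a graphical data-flow diagram:
     * each function below plays the role of one on-screen icon. */
    static double acquire(void)     { return 5.02; }       /* acquisition icon */
    static double filter(double x)  { return x * 0.98; }   /* filter icon      */
    static void   display(double x) { printf("out = %.2f\n", x); } /* indicator */

    int main(void)
    {
        /* The nesting here is the 'wire' between the icons. */
        display(filter(acquire()));
        return 0;
    }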

Jump forward again to 2020 and, thank goodness, we no longer had to go down that route ourselves. By then, if we were designing a control system, it was the very nature of the application that defined the algorithms that controlled it. And once those were defined, it was the design software that created the program to be executed. All we had to do was take the measurements from the system that we wanted to control.

And thank goodness we didn’t have to worry any more about whether that program was being executed on a general-purpose CPU, a network of CPUs or a dedicated data-flow silicon architecture that the software had generated automatically for our benefit (or even a combination of all three!). The software could sort all that out for us, based on the performance requirements of the algorithms that it had created itself.

Can you believe it? What a waste of time! That sure seems like a long time ago. In 2030, we don’t really think about processing power or the architecture of the computer system itself. Or about programming tools, either. It’s a given that we have enough processing power, memory and cool development tools to develop any control and acquisition system we want to.

And it’s so easy to develop a control or data acquisition system that now anyone can do it. With speech recognition, you simply instruct a software automaton to ‘investigate’ and then ‘optimise’ a specific process that you are interested in customising for your own personal use. The voice-activated system automatically specifies the acquisition tools for the job, acquires the data and then writes the program to optimise the system for whatever performance you require. The result is then automatically downloaded to whatever processing architecture is powerful and inexpensive enough to control the operation of the device. And the system is shipped to customers the same day. Job done in hours, not months, like it used to take the old-timers.

Dave Wilson would like to thank Ian Bell of National Instruments for his input into this week’s editorial.