Rant: More Specific Power = Less General
3D cards are out now for your Intel-compatible PC, with lots of processing power and more graphics memory. The same can be said of your new video card and the other cards that slot into your PC: they are all gaining processing power to improve the performance of today's multimedia products and, let's face it, games. However, by placing all this nice hardware on the video cards and making it specific to shifting video information, we're losing the generality that made the PC so good in the first place. What we should be doing is plugging together boards carrying main processors and cache that can talk to each other across a fast bus system. For our games and the like, some of these general-purpose processors could be given over exclusively to handling 3D graphics and block transfers; when not playing games, however, we would have the power of all these processors to put to good use, not just the one in the middle of the system.
Sacrificing the custom hardware for something more general will of course reduce performance to some extent. Custom hardware runs like the clappers (or should do) because it's designed to perform just a few actions very well, while our Pentiums do many different types of actions at an average speed. But if we wanted super-graphics, why don't we just buy the latest 64-, 128- or 256-bit games console and be done with it? Because our humble PC lets us do many different things apart from gaming. That's the strength of the PC: it's versatile, with many different applications and environments available to suit different activities.
Now, if instead of placing all that custom hardware in our PC we could easily connect multiple processors together, we would have a platform that could dedicate itself to many different tasks again, while still having the power to process large amounts of data for games and multimedia. What we're talking about here is a massive increase in the power of the home computer: let's take the graphics processors off the graphics cards and plug in a couple more P-IIs, or whatever the latest processor is. We now have the same computing power as before, but everything can use it.
However, in order to make full use of this extra power, our applications and operating systems will need to be designed to operate efficiently with many processors at the same time, scheduling threads of code to run on the different processors. One OS that seems to be heading this way is BeOS: it calls itself a multimedia OS, and from its stated goals we can see that it aims to make good use of any extra processing power out there.
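To see what "scheduling work across the different processors" looks like from the application's side, here is a rough sketch in modern Python. This is purely illustrative (it is not BeOS code, and the job itself is a made-up stand-in for real work like decoding a frame): the point is that the program asks how many processors are present and spreads independent jobs across all of them.

```python
# Sketch: split a batch of independent jobs across every processor the
# system reports, instead of queueing them all on a single chip.
# render_chunk is a hypothetical stand-in for one unit of multimedia work.
from concurrent.futures import ProcessPoolExecutor
import os

def render_chunk(n):
    """Stand-in for one unit of work (here: sum of squares below n)."""
    return sum(i * i for i in range(n))

def run_on_all_processors(jobs):
    # Use however many CPUs are plugged into the system.
    workers = os.cpu_count() or 1
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_chunk, jobs))

if __name__ == "__main__":
    print(run_on_all_processors([10, 100, 1000]))
```

With one processor this degenerates gracefully to running the jobs in turn; plug in more boards and the same program simply finishes sooner, which is exactly the property we want from the OS and its applications.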
Now, today you can get multi-processor motherboards, but they are usually designed to take just a second, or maybe four, processors, not to let us simply plug many processors into a card slot the way we do with our other devices. The other point about these is that they cost! As they're not the standard type of motherboard, they don't get quite as much mass-market competition to pull down their prices.
Now, if we go to multi-processor systems, you won't necessarily need to get the new 100+ MHz chip, because you could buy a cheaper chip with the same performance as your current ones and gain extra power at a lower cost.
In order to make effective use of multiple-processor systems, our current PC hardware must be redesigned so that we can have plug-and-play processors, meaning we end up with a processor on a board with local cache memory and a bus interface. These boards can then be plugged into a multi-processor bus the same way we plug in sound cards. The system can then recognise which processors are plugged in and take appropriate action.
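The "recognise which processors are plugged in" step is essentially an enumeration pass at boot. A minimal sketch of the idea (every name here is hypothetical; real plug-and-play buses use device IDs read from the hardware):

```python
# Sketch: each processor board answers the bus probe with its slot,
# instruction set and cache size; the system builds a processor table
# from the answers. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class ProcessorBoard:
    slot: int
    instruction_set: str  # e.g. "x86", "ppc"
    cache_kb: int

def enumerate_bus(boards):
    """Group the boards found on the bus by instruction set."""
    table = {}
    for board in boards:
        table.setdefault(board.instruction_set, []).append(board.slot)
    return table

bus = [ProcessorBoard(0, "x86", 512),
       ProcessorBoard(1, "x86", 512),
       ProcessorBoard(2, "ppc", 1024)]
print(enumerate_bus(bus))  # prints {'x86': [0, 1], 'ppc': [2]}
```

Grouping by instruction set matters because, as the next paragraph argues, the scheduler has to know which code can run on which board.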
Say we manage to get a plug-and-play processor system: what restrictions will it have in terms of software and hardware? Well, in order to run any application, each processor must have the same instruction set and the same interface to the rest of the system. So we can't combine Intel's latest wonder with Motorola's new chip? Well, maybe we can. If on the hardware side we have a standard for how each processor board accesses the memory and devices in the system, we could certainly plug these chips into the same system. The trouble is that the operating system now has to manage different versions of programs for the different chips.

But lately interest seems to have revived in an old idea: why not write a small interpreter for an imaginary chip and have all our existing chips emulate it? That's what Java and Inferno do now, and what languages like Smalltalk, and at one stage Pascal, have done for ages. So we could define a small, efficient virtual machine to run our applications; the operating system then only has to manage the virtual machines for the different processors and dispatch the right code to the right one, and we have a load of parallel virtual machines. The VM could even be placed in an EEPROM on the processor card, so that the system is running our VM from the start, and our operating system only has to manage applications in one language for one type of processor: our VM.
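To make the "small interpreter for an imaginary chip" concrete, here is a toy stack-based virtual machine. The instruction set is invented for illustration; the point is that the same bytecode would run unchanged on any processor board that carries an interpreter (or emulator) for it.

```python
# A toy stack-based VM in the spirit of the "imaginary chip" idea:
# one bytecode format that any host processor could interpret.
# The opcodes (push/add/mul) are made up for this sketch.
def run_vm(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack.pop()

# Bytecode for (2 + 3) * 4; identical on every processor board.
prog = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
print(run_vm(prog))  # prints 20
```

Java's JVM and Inferno's Dis work on this principle at a much larger scale; the EEPROM suggestion above just moves the interpreter onto the processor card itself.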
So if we went for such a system, we would have a great boost in the average processing power available to our applications, while being able to pick and choose whose hardware we placed in our system. Such a system would of course not please those companies who currently dominate the market, as they stand to lose out; so do those companies who keep pushing us to use faster and faster chips, since we could get better performance from many slower chips all working together. The trouble is that if a nice elegant system was designed, then very soon each manufacturer would be offering us extra features that break the system unless you use their chips, software, etc. So a firm hand and a set of standards would be needed to implement the personal supercomputer.
Last modified: Thu, 10 Jul 2003 21:25:52 UTC