There’s always a temptation to try to predict the course of future events, and the world of computing is no exception. With everything seemingly becoming bigger, better and faster year on year, there’s an insatiable appetite for predictions, and some individuals seem duty-bound to meet that demand.
Today these people like to call themselves futurologists, and while this might give the practice an air of scientific respectability, in many cases it has to be said with hindsight that using a crystal ball would have been just as accurate.
Intrigued? Then join us as we examine seven prophecies that fell short of the mark.
“It would appear that we have reached the limits of what it’s possible to achieve with computer technology, although one should be careful with such statements, as they tend to sound pretty silly in five years.”
John von Neumann, 1949
Though the second part of this statement is certainly right, the first bit is unbelievable. Von Neumann was an eminent scientist and mathematician, and developed the computer architecture we still use today. Not a person you’d expect to make such a rash statement.
The fact that he couldn’t think of any possible new applications for computers suggests a serious lack of imagination, especially given how little computers had actually been used for by 1949.
In that year, you could count the number of operational computers in the world on your fingers. They had all been developed in universities and were deployed only for scientific purposes. It would be another two years before J Lyons and Company launched LEO (Lyons Electronic Office computer), the first computer designed specifically for business applications. So that’s one more potential application, for starters.
Von Neumann’s contemporaries weren’t as blinkered, though. As he was uttering these immortal words, Claude Shannon – now regarded as the father of information theory – was working on some truly groundbreaking applications.
In 1950, he took one of the first steps in the development of artificial intelligence by demonstrating an electromechanical mouse that could find its way around a maze. The same year, he published a paper detailing how computers could be used to play chess. So much for von Neumann’s prophecy!
“I think there is a world market for maybe five computers.”
Thomas J Watson, President of IBM, 1943
Computer historians dispute the validity of this quotation, but even if Watson himself didn’t utter those words, there’s plenty of evidence that computer experts were expressing much the same sentiment as late as the early ’50s. And the idea wasn’t as daft as it sounds.
COLOSSAL MIS-JUDGEMENT: The first electronic computer had yet to be built when Thomas J Watson predicted a market for five of them
Back in 1943, the world’s first fully electronic computer of any sort – the code-breaking Colossus at Bletchley Park – was just in the process of being commissioned. It would be another five years before the first ever computer as we now understand the word (the Manchester Baby) was built, a further eight years before the first commercial computer (the Ferranti Mark I) went on sale, and 10 years before Watson’s own company, IBM, launched its very first computer (the 701).
Of course, we all know that this prophecy turned out to be absolute rubbish, but the vast scale of the under-estimation might still be an eye-opener. Forget PCs (over a billion of them) and think of microcontrollers. They outnumber the world’s population many times over, and each one is vastly more powerful than anything Thomas Watson might have envisaged.
“Computers in the future will weigh no more than 1.5 tons.”
Popular Mechanics, 1949
Before you dismiss this prediction as coming from an unlikely source, we should tell you that Popular Mechanics has been one of America’s leading science and technology magazines for over 100 years. And as you’d expect from such an august publication, the prediction was, for the most part, spot-on – the vast majority of today’s computers do indeed weigh in at less than 1.5 tons. Not all of them, though – not by a long way.
Jaguar, the world’s fastest supercomputer, is housed at the Oak Ridge National Laboratory in Tennessee and weighs in at almost 200 tons. That doesn’t even include the massive air conditioning units that are needed to get rid of the heat that’s generated by almost a quarter of a million processor cores, which consume 10 megawatts of power between them.
FAT-CAT: Even today, some computers weigh more than 1.5 tons – this one considerably more
To be fair, though, at 1.75 petaflops, Jaguar is about two thousand billion times faster than 1949’s latest and greatest.
“There is no reason for any individual to have a computer in his home.”
Ken Olsen, co-founder of Digital Equipment Corporation, 1977
He really ought to have known better. After all, the company Ken Olsen founded was responsible for the first of two important milestones in the history of home computing.
Prior to the early ’60s, a computer was one thing and one thing only – a mainframe. It would be priced in hundreds of thousands of pounds, if not millions, occupy a whole room and require a full-time staff to operate and maintain it.
In 1964 DEC launched the PDP-8, which is generally considered the first commercially successful minicomputer. It was the size of a refrigerator, cost $18,000 and over 50,000 were sold – more than any other computer before it. For the first time, a computer could be owned by a single department, not a huge organisation, and it could be operated by people who weren’t scientists.
Computers were starting to pass from a select few to the many. Even more surprising, though, is the fact that Olsen made this statement after the second of those two milestones had passed. That was in 1975, when the MITS Altair 8800 became the first personal computer to sell more than a handful of units.
“640kB should be enough for anyone.”
Bill Gates, 1981
He later denied it, but this was allegedly Bill Gates’ take on the maximum amount of memory a computer would need. Even if he didn’t actually say it, we can be pretty sure he believed it, as it seems fairly realistic in context.
Previous personal computers were based on 8-bit processors, which meant they couldn’t address more than 64kB of memory. But even this would have been the stuff of dreams for most home computer users of the day.
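That 64kB ceiling wasn’t arbitrary: 8-bit processors of the era (the Z80 and 6502, for instance) used a 16-bit address bus, so they could distinguish at most 2^16 one-byte memory locations. A quick sketch of the arithmetic:

```python
# An 8-bit micro of the late '70s put 16 bits on its address bus,
# so it could address 2**16 distinct one-byte locations.
address_bits = 16
addressable_bytes = 2 ** address_bits
print(addressable_bytes)          # 65536 bytes in total
print(addressable_bytes // 1024)  # 64 (kB) - the ceiling mentioned above
```

Gates’ alleged 640kB figure thus represented a tenfold increase on what the 8-bit machines could manage.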
Perhaps the best known British home computer that year was the Sinclair ZX81, which had just 1kB of memory.
To put this in context, let’s bring it up to date. If you were offered a PC today with 2.56TB of memory, wouldn’t you think it was enough for anyone – at least for a few more years?
“I have travelled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won’t last out the year.”
Editor in charge of business books, Prentice Hall, 1957
The computer revolution might already have been almost 10 years old by this point, but computers were still pretty thin on the ground. With an estimated 100 of them in use in 1953 and 250 in 1955, this new technology wasn’t exactly taking the world by storm.
What’s more, the phrase ‘data processing’ refers to business applications, which were lagging well behind technical computing. Lyons, of teashop fame, launched LEO, the first ever business computer, in 1951. But by 1957, only one was in operation – and that was used by Lyons itself for valuation jobs and payroll processing. Even Big Blue was slow to make an impact on business computing.
Its first offering, the IBM 702 Electronic Data Processing Machine, was only in production from 1953 to 1954. Its replacement, the 705, broke new ground by being the first commercial computer to use magnetic core memory, but the number sold isn’t on record. What we do know, though, is that back in the ’50s, IBM was overshadowed by a company now long forgotten: Remington Rand, later known as Sperry Rand.
Its earliest computer, the UNIVAC I, first shipped in 1952 and was designed from the outset for business and administrative use. It did well, but success was relative back in the ’50s. By the time it was replaced by the UNIVAC II in 1958, a grand total of 46 machines had been sold.
Given that such machines cost between $1.25 million and $1.5 million (around $10 million today), this gloomy prophecy wasn’t too surprising. We bet he thought differently in another five years, though.
“Transmission of documents via telephone wires is possible in principle, but the apparatus required is so expensive that it will never become a practical proposition.”
Dennis Gabor, 1962
Dennis Gabor wasn’t your average scientist – he was a Nobel Prize winner. That award was for his invention of holography, but he also applied his considerable talents to the theory of data communication. So he really ought to have known what he was talking about, but it turned out he didn’t – at least not on this particular subject.
It wasn’t long before his error was exposed. Later that same year, AT&T launched the Bell 103, which was the first commercially successful modem. It was now possible to transmit data at 300 bits per second across an ordinary telephone line. In fairness to Gabor, this technology was still too slow and too expensive to be used for anything other than mainframe communication.
It wasn’t until the early ’80s that the proliferation of bulletin boards heralded the era of low-cost data communication that was available to Joe Public. Just a year after making this spectacularly inaccurate prediction, Gabor had a change of heart on the subject of forecasting the future.
In his 1963 book, Inventing the Future, he wisely stated that “the future cannot be predicted, but futures can be invented”. This is surely a fitting place to conclude our investigation of computing’s most unreliable and inaccurate prophecies.