Darkness is our natural state. Of the 200,000 years of human existence, our species has known electric light for only 0.06% of that time. Yes, there were candles and torches, but the thought of simply flipping a switch to turn on all the lights in a room was inconceivable. Today we recognize the shapes of cities by the light they shine at our satellites.
Here in Europe electricity is like oxygen. It is everywhere. We would never even consider paying for electricity at an airport or cafe to charge the batteries of our computers and mobile phones. But what is essential to remember is that this year, 2009, exactly 130 years after the invention of the electric light bulb, the technology still hasn’t spread to all corners of the globe.
These students are studying under lamps at their city’s airport because it is the only place with stable electricity. Major African cities, such as Monrovia in Liberia, are powered entirely by generators.
The Centralization of Electricity
Thomas Edison’s electric light bulb was an ingenious invention, but from a business point of view it had a problem: you couldn’t use it without electricity. And so, to sell his electric lamps, Edison realized he would need to distribute electricity to the homes, offices, and warehouses that wanted electric light.
Pearl Street Station was Edison’s first power generating station. To the right you can see the small neighborhood in Manhattan that it was able to provide electricity to.
Edison chose direct current to transmit electricity from power stations to nearby businesses and homes. The problem with direct current is that it is very inefficient over long distances; the direct current model would require a power station in every neighborhood. Still, Edison thought he had come up with a master plan to provide electricity and electric light to the whole world. After all, who could challenge America’s greatest inventor?
Nikola Tesla is one of Europe’s and one of science’s most intriguing characters. A Serbian, Tesla was born in the tiny village of Smiljan, which today still has a population of less than 500 and is now part of Croatia. (As Danica Radovanovic recently reminded me on Facebook, Tesla was one of many brilliant Serbian scientists to leave his country for elsewhere.) Tesla questioned Edison’s use of direct current to transmit electricity and instead proposed alternating current, which is far more efficient as it travels over long distances.
What followed was the war of currents: Tesla and Edison went head to head. Europe’s greatest inventor of the time versus America’s greatest inventor of the time. And who lost? This poor elephant named Topsy.
Tesla’s system of alternating current can be stepped up to very high voltages for efficient transmission across long distances, then stepped back down for use. Edison electrocuted Topsy as a scare tactic, to show the public what would happen if they touched a high-voltage cable. But what sealed the deal was the Niagara Falls hydroelectric power project, the biggest power generator of its time. In 1893, the Niagara Falls Power Company awarded the contract to George Westinghouse, whose system was built on Tesla’s alternating-current patents, to generate electricity at the falls and transmit it to Buffalo, New York.
Had alternating current not won, projects like China’s Three Gorges Dam, which will generate ten times as much electricity as Niagara Falls, would never exist. Alternating current led to the centralization of electricity. One of Edison’s arguments against alternating current was that a few major power generators are much more vulnerable than many small ones spread all around the world. That remains true today, but the economics of centralized energy production and the allure of cheap energy won out in the end.
The Centralization of Computing
I wanted to briefly go over the history of electrification because its development so closely parallels what we are seeing today in the computing industry. This is not my own observation: it was first made in a book by Nicholas Carr called The Big Switch and the analogy is now commonly used by many when they explain the concept of cloud computing.
Phase 1: Mainframe Computing
This is the computer that the Internal Revenue Service used to process tax returns in the 1960s. Mainframe computers at the time typically cost between $500,000 and $1 million. They were available only to programmers and researchers, who had to wait hours if not days to use them because there was so much computing to be done and so few computers to do it.
Phase 2: Personal Computing
The second phase of computing began in 1977 with the Apple II computer, which brought computing into the home and personal office for the first time. Data was stored on audio cassette tapes. The first Apple II cost $1,300 with 4 KB of RAM and $2,600 with 48 KB. (Today you can buy a gigabyte of RAM for $30 and a terabyte hard drive for $70.) It had a one-megahertz processor, and the above ad from a 1977 issue of Scientific American touted its high-resolution graphics, by which it meant a 280-by-192-pixel display with four colors: black, white, violet, and green.
This is Gary. From the tags on the Flickr picture I assume he works in the IT department of some startup company. In many ways Gary’s position exemplifies the era of personal computing. Gary is in charge of maintaining a network of computers for just one company. He goes around to each computer and installs new versions of software. He creates email accounts for new employees. He answers questions as they come up. And he backs up all the data to make sure it isn’t lost in case of a hard drive failure. (Increasingly, Gary is able to nap while online services do that work for him.)
Most of us still operate in the era of personal computing. We create content on our laptop and desktop computers. We store our information on our individual hard drives. And when we share documents, it is usually by email.
Phase 3: Cloud Computing
The greatest evidence that we are entering the third chapter of computing is that laptop computers are becoming less powerful, not more. This is because today you don’t need a powerful computer; all you need is an internet connection. Google is even creating a free and open source operating system specifically targeted for cheap netbooks like the one above. In fact, increasingly we are leaving our laptops at home because smart phones are proving sufficient for most of our daily tasks.
This week the big talk of the town is a rumored Apple tablet, which might be announced tomorrow, though much more likely sometime during the first half of 2010. There have, in fact, been rumors of an Apple tablet computer for years, but only now does it make sense to release a product with so few features. A tablet PC does little except connect to the internet. But today a keyboard, screen, camera, and microphone are all you need to connect to the cloud. Word processing, image editing, spreadsheets, video editing, audio recording: all the applications are available online.
So, what do we mean when we say “the cloud”? In many ways I think that the term is an unhelpful abstraction which masks the actual shift in infrastructure that is taking place. Increasingly computing and data storage are not taking place on our own computers, but rather at massive data centers like this one:
The data center in the above video has about 40,000 computers. Here is just one:
Machines like this process and store our emails, the photos we publish online, our blog posts, the videos we upload to YouTube, and even the voicemails we listen to on our mobile phones. DataCenterMap.org has a map-based directory of major data centers around the world. When we speak of “the cloud” what we’re really referring to are these massive data centers, the thousands of computers they contain, and the countless software applications they make available to us through our browsers. And this visualization from New Scientist shows how those applications reach our laptops, cell phones, and tablets from the massive data centers that make up the cloud.
2009 is the 40th anniversary of the Internet, the 20th anniversary of the World Wide Web, and the 5th anniversary of what we call Web 2.0. The change is accelerating. Just look at these maps of internet users from 2002, 2004, 2006, and 2008. (Teddy points out that African internet users aren’t included on any of the maps.) Last year, for the first time ever, the United States was not the nation with the largest number of internet users. That title now belongs to China. And, unless something drastic happens, China will continue to have the largest online presence for the rest of our lives.
Internet World Statistics estimates that there are over 1.5 billion internet users. Last year Google announced that it had indexed its trillionth web page. Researchers at Microsoft estimate that “if you spent just one minute reading every website in existence, you’d be kept busy for 31,000 years. Without any sleep.” (“That explains a lot,” says Georgia.) Some researchers estimate that global internet usage already makes up five percent of the world’s energy consumption.
Though this video has stirred up lots of debate about the statistics it cites (check out the 130+ comments on the post), its basic premise – that the internet has had a tremendous impact on human society – cannot be denied.
The Centralization of Intelligence
We have gone over the history of electrification and computing. I’d like to conclude with a brief history of collective intelligence. In the 17th century Thomas Hobbes was a controversial figure in part because he believed that intelligence came not from an all-powerful god but from each individual, and that if we could somehow bring each individual’s intelligence together to create a collective intelligence, then we could shape society for the better. In 1938, H. G. Wells published World Brain, a collection of essays on the future organization of knowledge and education. Two years earlier the American Library Association had endorsed microfilm as a way to archive and store books, newspapers, manuscripts, and periodicals. Wells, inspired by advances in microfilm, imagined “a mental clearing house for the mind, a depot where knowledge and ideas are received, sorted, summarized, digested, clarified and compared.”
Even before Wells published World Brain, Belgian author and peace activist Paul Otlet was already envisioning his own precursor to Google Books:
So how has collective intelligence changed in the era of cloud computing? What do we mean when we say “cloud intelligence”? For one thing, our relationship with software has become much more symbiotic. We depend on cloud software to make sense of the information around us; and cloud software depends on us to help it make sense of the ever-increasing amount of information we upload to the internet. Take Google Flu Trends, for instance, which can detect flu outbreaks faster than the Centers for Disease Control and Prevention by monitoring searches for symptoms. Another example of cloud intelligence is reCAPTCHA, which has enlisted an army of millions of unknowing volunteers to help digitize books and the complete archive of the New York Times:
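The idea behind Flu Trends can be illustrated with a toy sketch. This is not Google’s actual model; the symptom terms, data, and threshold below are all made up for illustration. The principle is simply to count symptom-related searches per week and flag weeks where volume spikes well above a baseline:

```python
# Toy illustration of the Flu Trends idea: count symptom-related
# searches per week and flag unusual spikes. Hypothetical data.
FLU_TERMS = {"fever", "cough", "flu", "chills", "sore throat"}

def weekly_flu_signal(queries_by_week):
    """Map each week to the number of flu-related queries it contains.

    queries_by_week: dict of week label -> list of search query strings.
    """
    signal = {}
    for week, queries in queries_by_week.items():
        signal[week] = sum(
            1 for q in queries
            if any(term in q.lower() for term in FLU_TERMS)
        )
    return signal

def flag_outbreak_weeks(signal, baseline, factor=2.0):
    """Flag weeks whose query volume exceeds factor * baseline."""
    return [week for week, count in signal.items() if count > factor * baseline]

queries = {
    "week 1": ["weather", "flu shot near me", "recipes"],
    "week 2": ["fever and chills", "flu symptoms", "cough medicine", "flu duration"],
}
signal = weekly_flu_signal(queries)
print(flag_outbreak_weeks(signal, baseline=1))  # flags the spike in week 2
```

The symbiosis is the point: the signal only exists because millions of people happen to search for their symptoms, and in aggregate those searches become an early-warning system none of them intended to build.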
(If you want to leave a comment on this post you’re forced to join Von Ahn‘s mission.)
Another example of cloud intelligence is found in the active Astrometry group on Flickr. There are over 1,000 amateur astronomers in the group who help scientists keep an eye on the galaxies around us by regularly posting the photos they take through their telescopes. When members of the group publish a photograph of the night sky, an automated computer application scans it for recognizable stars, planets, and nebulae and labels them using Flickr’s notes function. Each photographer gains more information about the photograph he or she took, and Astrometry.net gets a new image of the night sky to add to its ever-growing database. In an interview on the Flickr Developer Blog, project leader Christopher Stumm says that Astrometry.net is currently “using images from around the web to calculate the path comet Holmes took through the sky.”
Chris Messina from San Francisco brings us another example of cloud intelligence from a recent shopping trip to OfficeMax where he and his girlfriend were hoping to buy some dry erase boards for their home office.
The shopping trip wasn’t a success. Most of the boards were of poor quality, and when they did eventually find a product that suited their needs, it was damaged. Rather than moving on to the next office supply store, Chris pulled out his iPhone and used the Amazon iPhone application to take a picture of the dry erase board they wanted to buy. The picture was uploaded to one of Amazon’s massive data centers, where it was posted on Mechanical Turk, a website that lists “human intelligence tasks” paying anywhere from one penny to five dollars. Jeff, for example, will pay you one penny for every sermon time you find listed on a church website. Amazon was also willing to pay one penny to anyone who would look at the photo Chris uploaded from his iPhone and identify that same product on the Amazon website. Chris will never know who did the work for him, but within minutes he received a message from Amazon with a link to the product he was looking for.
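The pattern behind Mechanical Turk is simple enough to sketch. The code below is a minimal, hypothetical marketplace, not Amazon’s real API: a requester posts a question a machine cannot easily answer, a worker (a human, in reality) claims it and supplies an answer, and the requester collects the result. All names here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A 'human intelligence task': a question a machine cannot easily answer."""
    question: str
    reward_cents: int
    answer: str = ""

class TaskQueue:
    """Minimal marketplace: requesters post tasks, workers claim and answer them."""
    def __init__(self):
        self.open = []
        self.done = []

    def post(self, question, reward_cents=1):
        task = Task(question, reward_cents)
        self.open.append(task)
        return task

    def work(self, answer_fn):
        """A worker answers every open task; answer_fn stands in for a human."""
        while self.open:
            task = self.open.pop(0)
            task.answer = answer_fn(task.question)
            self.done.append(task)

queue = TaskQueue()
queue.post("Which product on the site matches this photo of a dry erase board?")
# In reality an anonymous worker answers; here a stand-in function does.
queue.work(lambda q: "36x24 dry erase board")
print(queue.done[0].answer)
```

The design choice worth noticing is the indirection: neither side knows the other, which is exactly why Chris will never know who found his dry erase board.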
I would imagine that just about every tourist who has been to London has taken a picture of Big Ben. It must be the most photographed clock tower in the world. Ten years ago we put those photographs in a photo album to share with our family and friends. More recently we’ve become accustomed to sharing them on Picasa, Flickr, or Twitter. But now the tourist snapshots we take are helping create a 3D model of the world:
If the cloud is the world brain, then the cameras and microphones on our mobile phones and laptops are its eyes and ears. How many times have you been to a cafe or bar and heard a song that you liked but didn’t recognize? Today, if you help the cloud listen, it will provide you with the information. Shazam identifies the song you are listening to when your mobile phone shares a small audio sample with its servers. In turn, Shazam is able to track the most popular songs as they come out.
Google’s crowdsourced traffic application also highlights the symbiotic relationship between human intelligence and cloud intelligence. When you start the application, Google shows you traffic conditions on many of the streets around you. In turn, you agree to let Google track your speed as you travel, updating its maps with more real-time traffic data.
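The aggregation step can be sketched in a few lines. This is a hypothetical simplification, not Google’s implementation: each phone anonymously reports (road segment, speed) pairs, and the server averages them per segment to estimate congestion. The segment names and thresholds below are invented:

```python
from collections import defaultdict

def aggregate_speeds(reports):
    """Average anonymized speed reports per road segment.

    reports: iterable of (segment_id, speed_kmh) tuples sent by phones.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for segment, speed in reports:
        sums[segment][0] += speed
        sums[segment][1] += 1
    return {seg: total / n for seg, (total, n) in sums.items()}

def congestion(avg_speeds, free_flow_kmh=60):
    """Label each segment by how far its speed falls below free flow."""
    return {
        seg: "heavy" if v < 0.4 * free_flow_kmh else
             "slow" if v < 0.8 * free_flow_kmh else "clear"
        for seg, v in avg_speeds.items()
    }

reports = [("A1", 55), ("A1", 62), ("B2", 18), ("B2", 22), ("C3", 40)]
avg = aggregate_speeds(reports)
print(congestion(avg))
```

Every driver consuming the map is simultaneously a sensor improving it, which is the symbiosis the paragraph above describes.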
Just five years ago it would have cost at least hundreds of thousands of dollars to produce a video with actors on location all around the world. Also, until very recently there was only one man in the world who could attract millions of views by dancing the moonwalk. Today we all can. All we have to do is upload a video to Eternal Moonwalk. The cloud is enabling new possibilities for creative collaboration.
What is the future of collective intelligence? All that we can be sure of is that all intelligence is always collective, whether it is the collective intelligence of the nucleotides that make up our DNA or the billions of neurons in our brain as they produce the thoughts in our heads right now. Above is a visualization of the internet made by the Opte Project. It is a map of us: the internet would not exist were it not for us, and we would each be very different without the internet. Just as each individual neuron in our brain is unaware that it is collectively part of a larger self, it is all too easy for each of us to forget that, as internet users, we too form parts of a collective whole. And that collective whole is much greater, and different, than the sum of its parts.
In many ways the cloud is leading to the centralization of power as it lures us into trusting just a few major corporations with our personal information. On the other hand it creates an architecture of participation that is enabling a larger percentage of the world population to become active citizens rather than just passive consumers.
This podium I am speaking at right now was once the sole symbol of power in the room. Today a cloud of conversation floats all around us that no one person can control. It is up to you if you would like to participate or not.
In the end, no matter how actively or passively we spend our time online, what we can all be sure of is that one day sooner or later our brain will stop functioning and our stay here on planet Earth will conclude. We will remain, of course, in the memories of our friends and family, and also in the bits and bytes of digital footprints that we leave in the cloud for the generations that follow. What they do with the information we leave behind – or, indeed, what the cloud itself does with the information – will depend on a new type of networked evolution that values sharing and community over proprietary protection.