Watson

A lot seems to get lost in the minutiae when it comes to science fiction and fantasy: we were supposed to have flying cars, disregarding that most people can't drive well when limited to two dimensions; space exploration will be our salvation, despite the fact that our odds of reproducing and successfully colonizing anything beyond Earth are extremely slim at the moment; and when robots and computer systems can best a human, it's supposedly the beginning of the end of humanity.

I've been thinking about this since I read Ted Chiang's novella, The Lifecycle of Software Objects, which deals with the education and development of a viable A.I., and the complexities that arise in putting together such a thing. Chiang rejects the notion that a rigidly programmed computer will automatically produce a superior being: rather, intelligence is far more complex, and anything that is truly intelligent in the same way that we are might have to come about in much the same way that people do.

Why all the fuss about Watson? It's an interesting programming trick, to be sure, but hardly the end of humanity as we know it. One of the books that I've been fascinated by over the last couple of years is P.W. Singer's Wired for War, which examines the development of robotic combat systems, a science fiction concept in and of itself. Every time a new type of robot is deployed, the internet inevitably shits itself predicting that Terminator is right around the corner, and that it's time to start stocking up on canned goods for when the robotic rebellion comes barreling down on our squishy, organic heads.

The future is rarely as predicted: look at the accuracy of weathermen. Within science fiction, we take far too much that's designed for entertainment as gospel, and very rarely does a science fiction film make realistic predictions. 2001: A Space Odyssey missed by a long shot (we were supposed to have habitable moon bases), Terminator predicted our demise years ago, and Minority Report, one of the only films that seems to have gotten a lot of things right, overestimated its timeline: we've already got our motion-controlled computers, except that they're in gaming consoles.

The pace of technology doesn't live or die by our expectations of entertainment and wonder: indeed, the truly visionary science fiction films understand that our surroundings are built around how we will use things. Moon's director, Duncan Jones, noted that when building the sets and equipment for his film, he wanted to make sure the designers built them with practicality in mind. Steven Spielberg gathered a group of technology experts into a think tank for the world-building of Minority Report.

To date, we don't have servant androids, daily moon flights or Zeppelins, for the simple reason that there's either no practical daily use for such things, or there's a better alternative. True, robots do exist in people's homes, in the form of iRobot's Roomba, but these aren't multi-purpose devices: they exist to fill a single function. I wouldn't trust one (as much as I'd like to) to cook me dinner, fetch the mail or put away things that are out of place, simply because I can do those things myself, and at far less cost than such a machine would run me. Commonplace spaceflight, while on the brink of actually happening for the general (if wealthy) public, is not for business or industry but for entertainment, simply because we haven't found any other way to make it profitable for investors. Airplanes, meanwhile, can do everything an airship can, faster and cheaper, because the infrastructure and needs of the economic world are already in place for them.

At the same time, the most science fictional things in our world go almost unnoticed, largely because they aren't dramatic in any particular way: the number of computers and electronics in an automobile, for example, or the tablet computer that I'm typing this on. Take anything from the modern day and transplant it into the golden age of science fiction, and it would most likely shock the world. Even things like Amazon.com, Facebook and Twitter fall into this category, developed in response to how people interact with and use the internet: technology has an almost organic development, changing in response to other, prior changes that pave the way for its own existence. Twitter, for example, most likely never would have existed but for the happy coincidence of the prevalence of text messaging and Facebook's introduction of status updates. Amazon.com was an outgrowth of businesses' ability to consolidate and the introduction of widespread internet use. When looking at the future, it's often the really little things, rather than the dramatic, that define our lives. There are exceptions to this: major events such as the September 11th attacks radically changed things in a lot of ways, reshaping technologies and political and business environments, much as the industrial boom sparked by the Second World War changed America's standing amongst the other nations of the world.

Singer's book comes to mind in all of this because of the way that robotics have developed for the battlefield throughout the War on Terror in Iraq and Afghanistan. He rejected the idea that we'll ever have a 'Three Laws of Robotics', partly because such laws would get in the way of a combat robot, but also because we don't have robotics in the form we thought we might: instead, we have automated responses and programming, with a person in the loop to direct how a machine carries out its mission. Robots, rather than being multi-purpose, are task-specific, much as Watson is on Jeopardy. He might be able to pull together a lot of information and connect the dots, but that's exactly what he's designed to do: world domination most likely won't occur to him, and even if it did, I doubt it could be carried out. Now, should he gain the ability to learn, and to apply that fact-connecting skill to a desire of his own in a highly complicated and sophisticated manner, we could have a very different story.

We were supposed to have robots in the future. Instead, we have iPads, surveillance cameras, global positioning systems and quite a lot more, because of needs that weren't predicted back in the middle of the last century, predictions influenced by the optimism that only a wealthy nation full of technology could bring.

Were these predictions bad? No. Unfounded? Nope. 2001: A Space Odyssey came out in the midst of the Space Race, when nobody was sure what would happen. In 1969, we went to the Moon, discovered a magnificent but empty expanse, and have hardly been back since. Blade Runner saw a future that was more grounded: a lived-in world, mired in the same human problems that have been constant throughout our history.

When it comes to predicting the future, we might very well still get a lot of these things: we're making early steps towards civilian spaceflight, environmental costs might precipitate the elimination of airplanes, and household robotics will likely grow more sophisticated. However, the steps in this direction will always rest on the requirements of the people using them. We simply don't need a supercomputer to take over the world: we needed one for entertainment.

 

History in 140 Characters

A couple of weeks ago, the Library of Congress announced that it had acquired the entire backlog of tweets from Twitter, adding millions of bits of information to its archives. While each tweet is no more than 140 characters, the collection will likely be one of the larger ones the Library holds, preserving quite a lot of information for the foreseeable future. It also sounds like one of the more ridiculous holdings in the Library, for that mess of information contains abbreviations, thousands of individual statements and other 'useless' bits that, in and of themselves, have little practical value. I was incredulous at the news at first, simply wondering why anyone would be interested in such a thing. However, it's since occurred to me that the massive amount of information that Twitter has accumulated is a unique look into a society.

Twitter in and of itself is a communications tool. It's one that I'd long thought of as a ridiculous waste of time, not good for much, as have my friends and co-workers, who see its 140-character limit as not only limiting, but a form of preening communication. I've since come to use it myself, and while there are certainly a lot of people out there who'll say things for no reason at all, I've found that it's a fairly good, informal way to keep in touch with people and businesses, and to distribute information.

I follow several news sources, such as the BBC and the New York Times, which update constantly with the news of the moment. But I also follow a number of celebrities, websites and public figures, and find that at points it's an interesting way to get an inside look at what's going on at their end. Craig Engler, the SVP & GM of Digital for the SyFy channel, regularly gives fans inside information on how the television business works, as does David Blue, who stars in SyFy's Stargate Universe. There are numerous other people beyond that list, and it's also an interesting way to keep in touch with friends I don't normally speak with through other channels.

Beyond that, the Library of Congress's acquisition of the Twitter archive makes a bit of sense when you look at it as a whole. Millions of people have signed up for the service, and a large number of them use it actively. While it's not necessarily representative of the population as a whole, it is a highly visible, written record of people talking to each other, about everything. With that in mind, consider that a recent study found that by analysing tweets across the world, by keyword and frequency, researchers could accurately predict a film's opening weekend take at the box office. While that's a somewhat frivolous thing to study, imagine applying the same approach to the reaction to the Haitian, Chilean or Tibetan earthquakes that just occurred, or to a major presidential election, social events, major news stories and so on. People talk about these things, and by placing that conversation in the Library of Congress, it essentially becomes historical record, for future historians to read and study long after we're gone. Individually, the tweets probably can't tell you much, but as a whole, there's a lot that can probably be learned, especially when you look at social media.
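That box-office study boils down to a simple idea: treat the volume of chatter about a subject as a signal, and relate that signal to a real-world outcome. Here's a minimal sketch of that general idea in Python; this is not the actual study's method, and the film title, the tweets and the historical figures below are all made up for illustration.

```python
# Toy illustration of keyword-and-frequency analysis: count how often a film
# is mentioned in a batch of tweets, then fit a least-squares line relating
# mention rate to opening-weekend gross. All data below is invented.

# Hypothetical history: (mentions per 1,000 tweets, opening gross in $ millions)
history = [(2.1, 18.0), (5.4, 45.0), (9.8, 80.0), (3.3, 27.0), (7.2, 61.0)]

def fit_line(points):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def mention_rate(tweets, keyword):
    """Mentions of `keyword` per 1,000 tweets."""
    hits = sum(1 for t in tweets if keyword.lower() in t.lower())
    return 1000.0 * hits / len(tweets)

# A made-up sample: 1,000 tweets, six of which mention a hypothetical film.
sample_tweets = ["nothing much happening today"] * 994 + \
                ["Moon Base looks amazing, midnight showing this weekend"] * 6

slope, intercept = fit_line(history)
rate = mention_rate(sample_tweets, "moon base")
print(f"Mention rate: {rate:.1f} per 1,000 tweets")
print(f"Predicted opening gross: ${slope * rate + intercept:.1f}M")
```

The real research obviously works at a vastly larger scale and with far more careful statistics, but the underlying principle, aggregate chatter as a measurable signal, is exactly what makes the archive valuable as a historical record.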

Beyond websites such as Twitter, there are numerous blogs, wikis and other, more substantial websites that offer much more than 140 characters. With my own site, I try to work on analysis, review and research, often presenting an argument that I back up with supporting information. Authors of numerous genres and backgrounds publish a great variety of information, and in all likelihood, internet authorship will rival print media in the near future.

As the internet becomes more ingrained in everyday life, it will be important to find ways to preserve what is said, whether on social media sites such as Facebook, Myspace, Flickr and Wordpress, or even in e-mail. In the past, historians relied on what they could find relating to a source: photographs, letters, journals, which offer only a small and limited window into any given event. The diary of a Civil War soldier might tell much about him, but not about the war as a whole. With hundreds of journals together, a much different picture emerges. The same goes for websites like Twitter. Historians of the future will likely have far greater resources for looking at our time; their challenge will be sorting through it all.

BTW, you can follow me here.