I think it was Niels Bohr who said, “It’s hard to predict, especially the future”. But, driven by the number of my friends working in the interactive media industry, who complain that things have got very boring, I thought I’d venture a few predictions.
The first is that we should still expect a lot of disruptive, technological surprises to come.
The second is that network thinking, or what George Nelson called the “connections game”, is going to become a key ability in life and in business.
And the third is that analogue interfaces to digital media are going to be a hot area of development over the next few years.
Beginning with my first prediction, I seem to remember Alan Kay remarking something to the effect that you knew a technology had really arrived when its use became banal. He supported this remark by saying that when he was with Xerox he discovered the most common use for office copiers had become printing invitations to barbecues. With much of the web and other interactive media we seem to have reached a premature maturity, and a lot of what we see is not dissimilar to barbecue invitations. Hence my friends’ sense of ennui.
But before we get lulled into a false sense of security, we should remember Marshall McLuhan’s maxim that the early content of a new medium is old media, or Stafford Beer’s complaint that the way Information Technology was used was mainly “new lamps for old”. So one of the problems is that we are still largely using new technology to do things in the same old way. As Alan Kay has put it, “The commercial computer is now about 50 years old and is still imitating the paper culture that came before it, just as the printing press did with the manuscript culture it gradually replaced.”
However, he goes on to say, “No media revolution can be said to have happened without a general establishment of “literacy”: fluent “reading” and “writing” at the highest level of ideas that the medium can represent. With computers, we are so far from that fluent literacy — or even understanding what that literacy should resemble — that we could claim that the computer revolution hasn’t even started.”
I think he’s right, and perhaps in an even wider sense than he was talking about. We are still tottering on the edge of what fifty-year-old technologies can do. I am reminded of the early stages of the industrial revolution, when John Smeaton established the principles of designing the most efficient kind of waterwheel. This was followed by a flurry of innovations in waterwheel design. The result, as Andrew Tylecote puts it, was that “Quite suddenly, British industry had at its disposal a power source much cheaper than any it had known, and …in adequate supply: it could now provide its large mechanised factories with power, while (with the new roads and waterways) it moved their products out, and their inputs in.” So, like the early days of the industrial revolution, we are still in the very early stages of finding out and improving what we can do with what we have invented.
As Philip Agre points out, we have now reached a stage where, “As platforms multiply, it becomes possible to do amazing things in an afternoon, in a garage. The platform provides more and more of the functionality. That, again, is the ideal. Some people use the term “platform” to refer to programmable hardware, but the deeper meaning is: something that you can build things on top of. Platforms nest, with new ones built on old ones. As the platforms stack up, the newer ones come closer to the experience and concepts of ordinary people.”
It is this human use of technology that is likely to be the source of many technological surprises, rather than purely commercial or technology driven research. Tim Berners-Lee wanted to provide the means for people at CERN using different computer systems to share their knowledge easily. Shawn Fanning wanted to be able to share music with his friends. Neither of them expected their inventions to shake up the world in the way they have done.
So I feel fairly confident with this first prediction. Someone, somewhere, is right now doing something “neat for their friends”, that will seem to spring out of nowhere and shake up the way we do things. And there is likely to be a lot of “someones” out there over the next couple of decades.
Moving on to my second prediction: network thinking – perceiving the world as sets of interacting elements – seems to be a very natural way of thinking and acting in the world for knowledge-based societies. Most hunter-gatherer societies we know about appear to adopt this mode. Despite what seems to be an innate propensity for humans to create patterns of connections, once we moved away from those early forms of life we adopted simpler, in some ways more effective, notions of cause and effect. In doing so we created a separation between the observer and the observed, between humans and the world. But we also created a limited way of looking at the world, one which may have reached the end of its usefulness.
One of the gifts of the World Wide Web was that the experience of moving through networks of nodes and connections began to re-connect us to this older mode of thinking. Robert Andreas Fischer, in somewhat tortuous academic prose, made the connection between Australian Aboriginal thinking and emerging digital communication networks some years ago. More recently Erik Davis, discussing the distinction he draws between holism and what he calls network thinking, made the following point:
“The reason I like to contrast holism with network thinking, even though many think they’re the same thing, is that I think it’s a different thing to live your life recognizing that you are deeply embedded inside a huge set of networks, than it is to live your life in the shadow of the whole because it’s always a hard thing.”
He later continues, after discussing some of the problems of holism and the desire to feel that we are connected to the cosmos:
“But, it’s not that we’re interconnected immediately with the entirety of the cosmos. Maybe on some mystical level that’s true, if you meditate long enough you can see that that’s true, but in the world that you and I share where we’re different people and we’re living in this world of finite resources, it’s more important to see the way that you’re involved with immediate networks – local networks, genetic networks, brain networks, communication networks.”
So, at a fairly profound level, what the re-emergence of network thinking seems to promise is a sense of being at home in the world.
But this does not mean a flight into the irrational or magical thinking – though, as I have written elsewhere, there is always the danger of lapsing into apophenia, the perception of connections and meaningfulness in unrelated things. Albert-László Barabási and his colleagues, who began by studying the World Wide Web, have begun to identify some general principles that underlie how networks work, and how that understanding can give us insights into a wider range of phenomena, from how ideas spread to the interactions of the DNA in our cells.
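To make one of those principles concrete, here is a minimal sketch of the mechanism Barabási’s group is best known for describing: preferential attachment, in which new nodes link by preference to already well-connected ones. The network size and parameters below are arbitrary illustrations, not figures from their studies:

```python
import random

def barabasi_albert(n, m, seed=42):
    """Grow a network by preferential attachment: each new node links to
    m existing nodes, chosen with probability proportional to degree."""
    rng = random.Random(seed)
    # Start with a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'targets' lists each node once per edge endpoint, so sampling
    # uniformly from it samples nodes in proportion to their degree.
    targets = [node for edge in edges for node in edge]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = barabasi_albert(500, 2)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

avg = sum(degree.values()) / len(degree)
hub = max(degree.values())
print(f"average degree: {avg:.1f}, busiest hub: {hub}")
```

On a typical run the busiest hub ends up with many times more connections than the average node – the “rich get richer” signature that Barabási found in the web and in many other networks.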
Once we begin to think in terms of networks we can see how this impacts on almost every aspect of our lives. One immediate impact, which is likely to be strongly resisted, is on how we see business and marketing. Once we start to think in this way, the macho military metaphors of targets and campaigns will seem increasingly anachronistic; instead the questions become where we fit, and where we can connect to the networks that make up the lives of the people we want to reach.
One phenomenon that almost any business would do well to study is the TV show “Big Brother”. Whether by accident or design or a mixture of both, this is a great exemplar of network thinking. Calling it a TV show is a little misleading. What it is, in Peter Bazalgette’s words, is an “entertainment idea”. The TV show is an important node, but what is going on can be accessed in a variety of different ways: broadcast TV, digital TV, interactive TV, telephone, mobile phone and web. What you have is a cluster of different ways of experiencing and interacting with the “entertainment idea” and, from a commercial point of view, a variety of different revenue streams.
Big Brother probably deserves an entry of its own, but for the moment I just want to make two points.
First, unlike the mass media model, where the size of the audience is the critical factor (though one must not forget that the demographics of that audience are also important), with something like Big Brother every member of the audience becomes a node, and it is the frequency and nature of their interactions that may be commercially significant.
Secondly, the Big Brother network extends way beyond what the production company and Channel 4, the broadcaster, can control. The tabloid press, unofficial fan websites and peer-to-peer communication between nodes contribute in an important way to the success of the entertainment idea. But, equally important, as multinationals such as Shell and McDonald’s have found in the past, “unofficial” nodes in the network can, from a company’s point of view, have a powerful negative impact.
The shift to network thinking is likely to be a slow process. Our addiction to simple cause-and-effect models of how the world works is one that many will be reluctant or unable to give up. This will create opportunities for those who can make the move. But it is not going to be easy. Institutional barriers also stand in the way – our conceptual models are deeply buried in our economic and legal structures. So for a while much of the development of network thinking will be largely invisible, unless you look very hard. There will be exceptions that rise to the surface, like Big Brother, or Cemex, the construction materials company that rose from being a regional Mexican firm to become one of the world’s largest and most profitable companies in its industry by adopting something that looks very like network thinking.
So, despite the difficulties involved in making such a major conceptual and practical shift, I stick by this second prediction. It may take several decades for this mode of thinking and action to become commonplace. In the meantime the strongest advice I could give to any individual or business is to become sensitive to where you fit in your networks, learn to think in terms of nodes and connections and the complex interactions and feedback between them, and be conscious of the dynamics of your patterns of connection. Whether you are aware of it or not, your success or failure is going to be bound up in how well you identify, create and navigate your networks.
My third prediction, that analogue interfaces to digital media are going to be a hot area of development over the next few years, is closely related to network thinking. We are embodied creatures, and the world of physical objects that we can touch, feel, and move plays a greater role in who we are than we often recognise. One of the understandable errors of much of the early thinking about the digital realm – cyberspace – was to think of it as a domain detached from the physical world. Nicholas Negroponte urged us to think “bits not atoms”. What I think many of us missed was the paradox that this largely invisible collection of media and technologies has, in a curious way, made many of the links between our networks of real, tangible objects “visible”, by providing the means for them (and us) to communicate and interact with each other in ways that have not been possible before.
To go back to our hunter-gatherers for a moment: they survive because they have an intimate knowledge of their environment. We, on the other hand, have created a world where there is much we don’t understand. When our world was mechanical, or even later when it was electro-mechanical, we could at a pinch make some guess about how things worked – we pressed a key on the typewriter and could see the arm carrying the character hit the paper and make an impression.
Now, when I hit a key on my keyboard and the character appears on the screen, I have no real idea of what is happening. This isn’t a problem, because I know what a keyboard is and what it does, because I know about typewriters. For my son, who has grown up with computers and has probably never used a typewriter, it is also no problem, because a computer is a familiar object. But he, like me, has little idea of what is really going on when he writes an essay for school, sends an email, has a conversation using instant messaging, or does all the other things he does on his computer.
Now, in one sense, this may not matter. You can have an excellent mechanic who is a lousy driver, and an excellent driver who couldn’t change the spark plugs in their car, or even know where to find them. Knowing how a car works in detail and knowing how to drive well are different domains. Where it does become an issue is when the implementation of a technology either gets in the way of something we are trying to do, leaves us with a sense of incompetence because we can’t understand it well enough to achieve what we want, or takes a disproportionate amount of attention to accomplish what would otherwise be a simple task.
To take a very simple example: when digital watches first appeared they displayed the time in numbers, perhaps as a sign that they were new and digital. The digital watch I am wearing now has gone back to displaying the time on a traditional clock face. This, I suspect, is because it makes the task of telling the time simpler and requires less attention than interpreting a set of numbers. The difference between the two modes is probably trivial. But the cumulative effect of having to pay attention to things that should be “invisible” is much greater.
The late Mark Weiser, of Xerox PARC, laid out an attractive future for digital technology when he said, “There is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods.”
We may be closer to this vision than you might imagine. Chips are getting cheap, to the point where some people are talking of chips replacing the bar codes on everyday items like soap powder. Many companies and researchers are exploring the possibilities of networked homes, where the appliances can communicate with one another. Now, for those like myself who classically cannot programme their video recorder – or indeed their heating system, oven, washing machine or any of the other domestic items whose many capabilities remain unused, because switching them on and off or adjusting the temperature is all we can understand how to do – the thought of a networked home may be a nightmare.
More appropriate analogue interfaces look like the answer. In some cases it’s easy. If you go to an amusement arcade you will find lots of them: driving games with controls just like a car’s, shooting games with what look like real weapons. Since 1968, when Douglas Engelbart demonstrated, among many other innovations, the mouse, which enabled users to point at items on a monitor, and the work at Xerox PARC in the Seventies, which gave us screens that mimicked paper and desktops, we have been working towards analogue interfaces that more or less match the activity, with greater and lesser success. This can take us only so far, though for some activities it may be the best approach.
But there is another approach, which is less of a direct matching – I think of the fountain whose flow is governed by the number of hits on a server: when they are low there is a gentle gurgle, when high a cascade of water. Perhaps the paradigm case of what I would call the symbolic analogue approach is Durrell Bishop’s ‘Marble Answering Machine’, which he devised as a student on the Royal College of Art’s Computer Related Design course in the early 90s. The concept was of an answering machine where the messages were represented by marbles. You could easily see how many messages you had got, play them back by placing a marble in one slot, and return the call by placing it in another.
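The fountain’s logic can be sketched in a few lines; every number below is an invented illustration of the idea, not a description of any real installation:

```python
def flow_rate(hits_per_minute, max_hits=1000, min_flow=0.05, max_flow=1.0):
    """Map a server's hit rate onto a fountain pump level between 0 and 1.
    All thresholds here are made-up illustrative values."""
    fraction = min(hits_per_minute / max_hits, 1.0)
    # Keep a gentle gurgle even when the site is nearly silent.
    return min_flow + fraction * (max_flow - min_flow)

print(flow_rate(10))    # a quiet site: barely above the minimum gurgle
print(flow_rate(5000))  # a busy site: the pump runs flat out
```

The point is the shape of the mapping rather than the numbers: a continuous stream of data is translated into a continuous physical behaviour, so the state of the network can be read peripherally, without a screen.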
Work of both kinds, matching and symbolic, and hybrids of the two, is being actively explored in colleges and research centres around the world, all broadly addressing the issue posed by Hiroshi Ishii and Brygg Ullmer of the MIT Media Laboratory’s Tangible Media Group:
“Although we have developed various skills and work practices for processing information through haptic interactions with physical objects (e.g., scribbling messages on Post-It™ notes and spatially manipulating them on a wall) as well as peripheral senses (e.g., being aware of a change in weather through ambient light), most of these practices are neglected in current HCI design because of the lack of diversity of input/output media, and too much bias towards graphical output at the expense of input from the real world.”
They go on to conclude:
“Current GUI-based HCI displays all information as “painted bits” on rectangular screens in the foreground, thus restricting itself to very limited communication channels. GUIs fall short of embracing the richness of human senses and skills people have developed through a lifetime of interaction with the physical world.
Our attempt is to change “painted bits” into “tangible bits” by taking advantage of multiple senses and the multi-modality of human interactions with the real world. We believe the use of graspable objects and ambient media will lead us to a much richer multi-sensory experience of digital information.”
Now, since none of this work is visible in the marketplace – you have to go to research centres or student shows to see any of it – why am I so confident about this third prediction? There are four reasons.
The first is that it makes sense. The move from “painted bits” to “tangible bits” has the same feel of appropriateness that the move from command-line interfaces to “painted bits” did.
The second is that there is now a substantial body of work in this area.
The third is that the technology to do it is now cheap enough to make it a realistic possibility.
And the fourth is that the market for domestic appliances and consumer electronics is relatively mature and needs something to get it growing again.
So my advice to the young and ambitious is to reverse Negroponte’s maxim and to “think atoms not bits”.
Quite where we will first see “tangible bits” surface in the marketplace is more difficult to predict. My best guess is that it will be in toys for kids. The success of LeapFrog’s matching analogue interface, “paper-based multimedia”, and the competitive stimulus this poses to the big toy manufacturers, support this guess. But we live in an age of surprises, so the breakthrough could occur almost anywhere. What should be less of a surprise, to anyone paying attention, is that the breakthrough will come.
I began by talking about my friends in the interactive industry complaining that things were boring. After the heady days of the last years of the 20th century, it certainly feels like it. Money and confidence are short. There is a general movement towards what seems safe and secure in what is perceived as an insecure, uncertain world. But for once I feel it is appropriate to use the cliché “the quiet before the storm”.
We are suffering from the aftermath of what is sometimes mistakenly called the dotcom bubble. What we need to remember is that what actually took place, and is still taking place in some areas, like housing in the UK and senior executive remuneration in both the US and the UK, is a series of financial bubbles. The digital technologies that have developed over the last fifty years are like the water-based technologies that drove the first Industrial Revolution. There were financial bubbles then too. But money madness is a temporary, if cyclic, phenomenon. Just as the innovations in water technology created the foundations for an immense leap forward in productive capability that transformed the nature of society and human identity, we will see the same with our digital technologies. And, perhaps, sooner than you think.