Jan 23 2009
My Twitter life has been exploding thanks to the RWW semantic web twitterers post, and I am now jacked in to a lot of very interesting feeds.
My recent experiments with robotics and sensors have been really eye-opening. Almost everything that computers do is limited by the available means of interaction. For the most part, output to the user is constrained to a few million scintillating points of light, and user input to a grid of 100 square tiles and a fake rodent. These provide sufficient bandwidth only for written and visual communication directly between the computer and the user.
A notable recent trend has been the expansion of user input mechanisms, particularly in gaming, where the intent of a three-dimensional, mobile, interacting user has to pass through a communications channel of minuscule capacity (e.g. a joystick pad + fire and jump buttons) to instruct an increasingly three-dimensional, mobile, interacting game character. So Nintendo and others have brought us the analog joystick, the vibrating joystick, force feedback, and the Wii controller. Apple understood that a touch surface is not just a way to swap a mouse for a pen (different physical input mechanism – same bandwidth), but a way to increase bandwidth (multi-touch). Microsoft have done something similar with the Surface (as far as I can tell, a ton of people would buy one at a price ~ iPhone’s $400 – Microsoft’s problem seems to be manufacturing).
Voice input has not yet broken through, although Google’s iPhone app is quite compelling (except for an unfortunate problem with British accents). A limitation there is the compute power needed to do speech recognition, something which Google smartly got around by sending the data to their servers for processing.
Another important kind of input and output is provided by the computer’s network connection, which admits information into the computer faster than a keyboard, but provides slower output than a visual display unit. The network connection does not usually provide data which is immediately relevant to the user’s situation: it does not provide data relating to the user at a small scale, and does not provide information which is actionable at that small scale. By “small scale”, I mean the scale of things we can ourselves see, touch, taste, move, like, love. This is important, because most of what we experience, think, and do is carried out at this small scale.
Your phone might let you talk and browse the web. Your PC might be able to play you a movie or control the lights in your house. Your city might have a computer which monitors the traffic and adjusts the traffic lights. Your country might have a computer which predicts the weather or models the spread of disease, or which computes stock prices. The larger the area from which the computer’s inputs are drawn, the more the computed outputs are relevant to people in the aggregate, and the less they are relevant to people as individuals.
There is huge scope, and, I think, a lot of benefit, in making computation much more local and therefore personal. A natural conclusion (but in no way a limit) is provided by endowing every significant object with computation, sensing, and networking. I cannot put my finger on a single killer benefit from doing this… but I think that even small benefits, when multiplied by every object you own or interact with, would become huge and game-changing. You could use a search engine to find your lost keys, have your car schedule its own service and drive itself to the garage while you were out, recall everything you had touched or seen in a day. Pills would know when they needed to be taken, food would know when it was bad.