Monthly Archives: January 2009

Sensing and Computation: Go Local

My Twitter life has been exploding thanks to the RWW semantic web twitterers post, and I am now jacked in to a lot of very interesting feeds.

whuffie mentioned cognitively endowed objects as something big coming down the pipe. I couldn’t agree more.

My recent experiments with robotics and sensors have been really eye-opening. Almost everything that computers do is limited by the available means of interaction. For the most part, output to the user is constrained to a few million scintillating points of light, and user input to a grid of 100 square tiles and a fake rodent. These provide sufficient bandwidth only for written and visual communication directly between the computer and the user.

A notable recent trend has been the expansion of user input mechanisms, particularly in gaming, where the intent of a three-dimensional, mobile, interacting user has to pass through a communications channel of minuscule capacity (e.g. a joystick pad plus fire and jump buttons) to instruct an increasingly three-dimensional, mobile, interacting game character. So Nintendo and others have brought us the analog joystick, the vibrating joystick, force feedback, and the Wii controller. Apple understood that a touch surface is not just a way to swap a mouse for a pen (different physical input mechanism, same bandwidth), but a way to increase bandwidth (multi-touch). Microsoft have done something similar with the Surface (as far as I can tell, a ton of people would buy one at a price around the iPhone’s $400; Microsoft’s problem seems to be manufacturing).

Voice input has not yet broken through, although Google’s iPhone app is quite compelling (except for an unfortunate problem with British accents). A limitation there is the compute power needed to do speech recognition, something which Google smartly got around by sending the data to their servers for processing.

Another important kind of input and output is provided by the computer’s network connection, which admits information into the computer faster than a keyboard, but provides slower output than a visual display unit. The network connection does not usually provide data which is relevant to the user’s immediate situation: it does not provide data relating to the user at a small scale, nor information which is actionable at that small scale. By “small scale”, I mean the scale of things we can ourselves see, touch, taste, move, like, love. This is important, because most of what we experience, think, and do is carried out at this small scale.

Your phone might let you talk and browse the web. Your PC might be able to play you a movie or control the lights in your house. Your city might have a computer which monitors the traffic and adjusts the traffic lights. Your country might have a computer which predicts the weather or models the spread of disease, or which computes stock prices. The larger the area from which the computer’s inputs are drawn, the more the computed outputs are relevant to people in the aggregate, and the less relevant they are to people as individuals.

There is huge scope, and, I think, a lot of benefit, in making computation much more local and therefore personal. A natural conclusion (but in no way a limit) is provided by endowing every significant object with computation, sensing, and networking. I cannot put my finger on a single killer benefit from doing this… but I think that even small benefits, when multiplied by every object you own or interact with, would become huge and game-changing. You could use a search engine to find your lost keys, have your car schedule its own service and drive itself to the garage while you were out, or recall everything you had touched or seen in a day. Pills would know when they needed to be taken, and food would know when it had gone bad.

Experimental 3D Sonar Map

Using the EZ-4 sonar on my robot, I attempted to get a depth map of my view of the opposite wall in our office at home. The Arduino was taking sonar readings at a rate of 20 Hz and sending them down the serial port to my Mac. I pointed at the wall and then scanned across at an even speed clockwise, taking about 6 seconds to cover 30 degrees of arc, then put my hand in front of the sonar to get some easily recognisable low readings to mark as end-of-row, tilted the Arduino up 5 degrees and scanned right to left at about the same rate. I repeated this backwards-and-forwards scanning, increasing the angle from the floor at the end of each row, until I had 9 rows.
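
For reference, here is a minimal sketch of the kind of Arduino loop that would produce that stream of readings, assuming the EZ-4’s analog output is wired to analog pin 0 and the usual LV-MaxSonar scaling of roughly 2 ADC counts per inch at 5 V; the pin and scaling are placeholders rather than my exact wiring:

    // Minimal Arduino-style sonar reader: one range reading per line at ~20 Hz.
    // Assumes the EZ-4 analog output on pin 0 and ~2 ADC counts per inch at 5 V.
    const int SONAR_PIN = 0;

    void setup() {
      Serial.begin(9600);              // serial link down to the Mac
    }

    void loop() {
      int raw = analogRead(SONAR_PIN); // 10-bit reading, 0..1023
      int inches = raw / 2;            // approximate conversion for the EZ series
      Serial.println(inches);          // one reading per line
      delay(50);                       // roughly 20 readings per second
    }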

I normalized the data by assuming that each sweep (left-right or right-left) covered an equal angle but possibly at a different speed, resulting in a different number of readings for different rows; the average was about 120 readings per row (6 seconds at 20 Hz).
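
A sketch of that normalization step, assuming each sweep covered the same 30-degree arc; the containers and the 30-degree span are assumptions, since the real span was only roughly controlled by hand:

    #include <vector>
    #include <cstddef>

    // Assign each reading in a row an angle, assuming every sweep covered the
    // same arc at a possibly different speed. Odd rows are reversed because
    // they were scanned right to left.
    std::vector<std::vector<double> > assignSweepAngles(
        const std::vector<std::vector<int> >& rows, double sweepDegrees = 30.0) {
      std::vector<std::vector<double> > phis(rows.size());
      for (std::size_t row = 0; row < rows.size(); ++row) {
        std::size_t n = rows[row].size();
        for (std::size_t i = 0; i < n; ++i) {
          double frac = (n > 1) ? double(i) / double(n - 1) : 0.0;
          if (row % 2 == 1) frac = 1.0 - frac;       // alternate sweep direction
          phis[row].push_back(frac * sweepDegrees);  // degrees across the sweep
        }
      }
      return phis;
    }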

Here is the raw data.

The readings were now, with a little work, in 3D polar form with variables phi, the angle the sonar beam made to the center of my point of view when projected onto a plane parallel with the floor; theta, the angle the beam was pointing above the floor (0 ~ parallel to the floor); and r, the sonar range in inches. With a fair amount of pain, I converted the readings into XYZ coordinates with (0,0,0) where I was sitting and plotted them in Grapher.
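
The conversion itself is just spherical-to-Cartesian trigonometry; something like the following, assuming phi and theta are in degrees and taking x along the center of view, y sideways and z up (the axis convention here is arbitrary):

    #include <cmath>

    struct Point { double x, y, z; };

    // Convert a sonar reading from (r, phi, theta) to XYZ with the viewer at
    // (0,0,0). Angles are in degrees; x is depth along the view direction,
    // y is sideways and z is height above the viewpoint.
    Point toCartesian(double r, double phiDeg, double thetaDeg) {
      const double DEG = 3.14159265358979323846 / 180.0;
      double phi = phiDeg * DEG;
      double theta = thetaDeg * DEG;
      Point p;
      p.x = r * std::cos(theta) * std::cos(phi);
      p.y = r * std::cos(theta) * std::sin(phi);
      p.z = r * std::sin(theta);
      return p;
    }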

The data is… hard to interpret. I just bought a copy of “Probabilistic Robotics” by Sebastian Thrun, Wolfram Burgard and Dieter Fox and, although I have hardly had a chance to look at it, I did notice that the sonar maps they present are very noisy. It looks like my sonar map is noisy too. The data points resolve into a few coherent blocks. Closest to the viewpoint in the picture below is a block of points where a tall filing cabinet is. A little deeper into the picture, and not extending so high (up the screen in the picture), there is a sofa. The other points are very noisy but are bounded behind by a wall. I expect that the wall is quite a good (specular) reflector of sonar, bouncing much of the beam away from the sensor, and so results in very poor readings.

Robot maps floor

Robot-derived floorplan

At the weekend I calibrated the robot. In some driving tests I found that it drives 7.5″ to the right for every 30″ it drives forwards (which is corrected by adjusting the left wheel speed to be about 80% that of the right wheel), that it drives forwards at a rate of 6″ per second, and that it turns on the spot at a rate of 96 degrees per second.
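
Roughly speaking, those numbers plug into the dead reckoning like this; the constants are from the tests above, but the pose struct and update functions are a sketch of how the robot code could use them, not the real source:

    #include <math.h>

    // Calibration constants from the driving tests.
    const float LEFT_WHEEL_SCALE = 0.80f;       // left wheel at ~80% of right to drive straight
    const float FORWARD_INCHES_PER_SEC = 6.0f;  // straight-line speed
    const float TURN_DEGREES_PER_SEC = 96.0f;   // on-the-spot turn rate

    struct Pose { float x, y, headingDeg; };

    // Advance the position estimate after driving straight for dt seconds.
    void driveForward(Pose& p, float dtSeconds) {
      float d = FORWARD_INCHES_PER_SEC * dtSeconds;
      float h = p.headingDeg * 3.1415926f / 180.0f;
      p.x += d * cosf(h);
      p.y += d * sinf(h);
    }

    // Advance the heading estimate after turning on the spot for dt seconds.
    void turnOnSpot(Pose& p, float dtSeconds, int direction) {  // direction: +1 or -1
      p.headingDeg += direction * TURN_DEGREES_PER_SEC * dtSeconds;
    }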

I made some changes to the software to get my robot to use the sonar range data to produce a map of the terrain it traverses. Initially the robot was saving the map to onboard RAM, but I found that plugging the Arduino into the Mac to read the map out over the serial port would reboot the Arduino and erase the data. The next version saved the map to on-board EEPROM, and that properly survived the reboot and could be read out. At first the results were very obscure and disappointing, until I realized that degrees != radians and fixed the trig appropriately. The map is a little hard to interpret (I plan to make changes to the software to help that) but, considering that the robot has been only roughly calibrated, the results are quite impressive (to me – I built the robot and wrote the software so I might be a little biased).
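
The shape of the fix, and of the EEPROM-backed map, was roughly this; the grid size, the one-byte-per-cell encoding and the 512-byte EEPROM are illustrative assumptions, not the robot’s actual map format:

    #include <EEPROM.h>
    #include <math.h>

    // Record an obstacle cell in on-board EEPROM so the map survives a reboot.
    const int GRID_WIDTH = 16;     // assumed map width in cells
    const int EEPROM_SIZE = 512;   // assumed EEPROM capacity in bytes

    void recordObstacle(float robotX, float robotY, float headingDeg, float rangeInches) {
      float h = headingDeg * M_PI / 180.0;  // the fix: convert to radians before the trig
      int cellX = (int)((robotX + rangeInches * cos(h)) / 12.0);  // assumed 12" cells
      int cellY = (int)((robotY + rangeInches * sin(h)) / 12.0);
      if (cellX < 0 || cellX >= GRID_WIDTH || cellY < 0) return;  // off the map
      int addr = cellY * GRID_WIDTH + cellX;
      if (addr < EEPROM_SIZE) {
        EEPROM.write(addr, 1);              // mark the cell as occupied
      }
    }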

The picture is of my robot-derived floorplan. The height of the bumps is proportional to the time from the start of the drive at which the robot added that map point. Note that new map points could overwrite old ones.

I think that the ridge of high bumps which crosses the middle of the picture parallel to the X axis corresponds to the same part of the hallway as the longer ridge which was found much earlier in the run and which appears below it in the picture. The high ridge comes from the robot’s second tour around the hallway.

Experimenting with the Google AJAX API, I uploaded a graph which progressively shows the obstacles the robot detected as it drove. Note, these are the obstacles detected, and not the robot’s estimated path. This gives some idea of how the robot was moving around, and also of the accumulating inaccuracies in its estimation of where it and the obstacles were.

TRS-80

It’s an appealing title, right…

I don’t often watch music videos – when I am browsing music I am usually looking for stuff to download with a very limited time budget. I hooked myself up with this one on emusic.com after downloading a TRS-80 album. Anyway, I love the way the video suggests interpretations of the music.