Category Archives: robotics

Hard to get machine-readable weather data

Today, I interfaced an Arduino to a stepper motor – the hardest bit in the end was figuring out which pins of the unipolar stepper motor do what. The motor, available as Jameco 171601, has six wires – yellow, red, orange, black, green, brown – which come out in a connector. The most useful reference I found was Tom Igoe’s Stepper Motor Control page. In the color sequence above, the wires are numbered 1, 5, 2, 3, 6, 4 in Igoe’s diagram.
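For reference, here is roughly the kind of sketch I mean, using the standard Arduino Stepper library. The pin numbers and the steps-per-revolution figure are assumptions for illustration – check your own wiring and the motor’s datasheet:

```cpp
// Minimal sketch for spinning a unipolar stepper like the Jameco 171601
// with the standard Arduino Stepper library. Pin numbers and the
// steps-per-revolution figure below are assumptions -- check the datasheet.
#include <Stepper.h>

const int STEPS_PER_REV = 48;   // assumed; depends on the motor

// The four coil-end wires go to digital pins 8-11 via a driver
// (e.g. a ULN2003 or H-bridge); the two center-tap wires go to motor power.
Stepper motor(STEPS_PER_REV, 8, 9, 10, 11);

void setup() {
  motor.setSpeed(30);           // RPM
}

void loop() {
  motor.step(STEPS_PER_REV);    // one revolution one way...
  delay(500);
  motor.step(-STEPS_PER_REV);   // ...and one revolution back
  delay(500);
}
```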

As a demonstration I wanted to turn it into a little weather toy, but it proved very difficult to find machine-readable real-time weather data on the web. The NOAA site is chaos. Eventually, I settled on this:

NOAA CA weather data.

It was easy to grep and sed the data I wanted out of its text and tables. God forbid that whoever produces this should take a more literary turn – then I would need NLP! 🙂

Sensing and Computation: Go Local

My Twitter life has been exploding thanks to the RWW semantic web twitterers post, and I am now jacked in to a lot of very interesting feeds.

whuffie mentioned cognitively endowed objects as something big coming down the pipe. I couldn’t agree more.

My recent experiments with robotics and sensors have been really eye opening. Almost everything that computers do is limited by available means of interaction. For the most part, output to the user is constrained to a few million scintillating points of light, and user input to a grid of 100 square tiles and a fake rodent. These provide sufficient bandwidth only for written and visual communication directly between the computer and the user.

A notable recent trend has been the expansion of user input mechanisms, particularly in gaming, where the intent of a three dimensional, mobile, interacting user has to pass through a communications channel of minuscule capacity (e.g. a joystick pad + fire and jump buttons) to instruct an increasingly three dimensional, mobile, interacting game character. So, Nintendo and others have brought us the analog joystick, the vibrating joystick, force feedback, the Wii controller. Apple understood that a touch surface is not just a way to swap a mouse for a pen (different physical input mechanism – same bandwidth), but a way to increase bandwidth (multi-touch). Microsoft have done something similar with the Surface (as far as I can tell, a ton of people would buy one at a price ~ iPhone’s $400 – Microsoft’s problem seems to be manufacturing).

Voice input has not yet broken through, although Google’s iPhone app is quite compelling (except for an unfortunate problem with British accents). A limitation there is the compute power needed to do speech recognition, something which Google smartly got around by sending the data to their servers for processing.

Another important kind of input and output is provided by the computer’s network connection, which admits information into the computer faster than a keyboard, but provides slower output than a visual display unit. The network connection does not usually provide data which is directly relevant to the user’s immediate situation: it does not provide data relating to the user at a small scale, and does not provide information which is actionable at that small scale. By “small scale”, I mean the scale of things we can ourselves see, touch, taste, move, like, love. This is important, because most of what we experience, think, and do is carried out at this small scale.

Your phone might let you talk and browse the web. Your PC might be able to play you a movie or control the lights in your house. Your city might have a computer which monitors the traffic and adjusts the traffic lights. Your country might have a computer which predicts the weather or models the spread of disease, or which computes stock prices. The larger the area from which the computer’s inputs are drawn, the more the computed outputs are relevant to people in the aggregate, and the less they are relevant to people as individuals.

There is huge scope, and, I think, a lot of benefit, in making computation much more local and therefore personal. A natural conclusion (but in no way a limit) is provided by endowing every significant object with computation, sensing, and networking. I cannot put my finger on a single killer benefit from doing this… but I think that even small benefits, when multiplied by every object you own or interact with, would become huge and game-changing. You could use a search engine to find your lost keys, have your car schedule its own service and drive itself to the garage while you were out, recall everything you had touched or seen in a day. Pills would know when they need to be taken, food would know when it was bad.

Robot maps floor

Robot-derived floorplan

At the weekend I calibrated the robot. In some driving tests I found that it veers 7.5″ to the right for every 30″ it drives forwards (corrected by adjusting the left wheel speed to about 80% of the right wheel’s), that it drives forwards at a rate of 6″ per second, and that it turns on the spot at a rate of 96 degrees per second.
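For illustration, here is a minimal sketch of how those measured constants might be encoded for timed dead reckoning. The pin numbers, the helper names, and the use of analogWrite for speed are assumptions, not my actual code:

```cpp
// How the measured calibration might be encoded and used for timed
// dead reckoning. Pin numbers, helper names, and the use of analogWrite
// are assumptions for illustration.
const int LEFT_PWM_PIN  = 5;      // assumed wiring
const int RIGHT_PWM_PIN = 6;

const float LEFT_TRIM    = 0.80;  // left wheel at ~80% of right to go straight
const float FORWARD_RATE = 6.0;   // inches per second when driving straight
const float TURN_RATE    = 96.0;  // degrees per second turning on the spot

void setMotorSpeeds(int left, int right) {
  analogWrite(LEFT_PWM_PIN, left);
  analogWrite(RIGHT_PWM_PIN, right);
}

// Drive straight for a given distance by timing the motion.
void driveInches(float inches) {
  setMotorSpeeds((int)(255 * LEFT_TRIM), 255);
  delay((unsigned long)(inches / FORWARD_RATE * 1000.0));
  setMotorSpeeds(0, 0);
}

// Turn on the spot through a given angle, again by timing.
// (Direction pins are omitted; real code would reverse one motor.)
void turnDegrees(float degrees) {
  setMotorSpeeds((int)(255 * LEFT_TRIM), 255);
  delay((unsigned long)(degrees / TURN_RATE * 1000.0));
  setMotorSpeeds(0, 0);
}

void setup() {
  driveInches(30.0);   // should now track straight over 30 inches
  turnDegrees(90.0);
}

void loop() {}
```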

I made some changes to the software to get my robot to use the sonar range data to produce a map of the terrain it traverses. Initially the robot was saving the map to onboard RAM, but I found that plugging the Arduino into the Mac to read the map out over the serial port would reboot the Arduino and erase the data. The next version saved the map to on-board EEPROM, and that properly survived the reboot and could be read out. At first the results were very obscure and disappointing – until I realized that degrees != radians and fixed the trig appropriately. The map is a little hard to interpret – I plan to make changes to the software to help with that – but considering that the robot has been only roughly calibrated, the results are quite impressive (to me – I built the robot and wrote the software, so I might be a little biased).
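To illustrate the fix, here is a rough sketch of the mapping step: convert the heading from degrees to radians before doing the trig, and write each map point to EEPROM so it survives a reboot. The names and the one-byte-per-coordinate format are assumptions for illustration:

```cpp
// Sketch of the mapping step that bit me: headings tracked in degrees
// must be converted to radians before the trig. Names and the packed
// one-byte-per-coordinate EEPROM format are assumptions.
#include <EEPROM.h>
#include <math.h>

int eepromAddr = 0;

void recordObstacle(float robotX, float robotY, float headingDegrees,
                    float sonarRangeInches) {
  float headingRadians = headingDegrees * M_PI / 180.0;  // the crucial fix
  float obstacleX = robotX + sonarRangeInches * cos(headingRadians);
  float obstacleY = robotY + sonarRangeInches * sin(headingRadians);

  // EEPROM, unlike RAM, survives the reboot caused by reconnecting the
  // Arduino, so the map can be read out over serial afterwards.
  if (eepromAddr + 1 < EEPROM.length()) {
    EEPROM.write(eepromAddr++, (byte)constrain(obstacleX, 0, 255));
    EEPROM.write(eepromAddr++, (byte)constrain(obstacleY, 0, 255));
  }
}

void setup() {
  // Example: an obstacle seen 20" away, heading 90 degrees from start.
  recordObstacle(0.0, 0.0, 90.0, 20.0);
}

void loop() {}
```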

The picture is of my robot-derived floorplan. The height of the bumps is proportional to the time, from the start of the drive, at which the robot added that map point. Note that new map points could overwrite old ones.

I think that the ridge of high bumps which crosses the middle of the picture parallel to the X axis corresponds to the same part of the hallway as the longer ridge which was found much earlier in the run and which appears below it in the picture. The high ridge comes from the robot’s second tour around the hallway.

Experimenting with the Google AJAX API, I uploaded a graph which progressively shows the obstacles the robot detected as it drove. Note that these are the obstacles detected, not the robot’s estimated path. This gives some idea of how the robot was moving around, and also of the accumulating inaccuracies in its estimates of where it and the obstacles were.

Little Robot Moves Around

The next thing little robot needed was the ability to sense its environment. I ordered a MaxBotix EZ4 sonar from Sparkfun for less than $30, and with a little nervous soldering today I got it connected up to the robot. The obstacle avoidance code is very simple – if the robot detects an obstacle sufficiently close, it turns left a little. An earlier algorithm had both left and right turning, but it needs some work to prevent excessive oscillations.
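For a flavor of it, here is a minimal version of that loop. The pin assignments, the distance threshold, and the motor handling are assumptions, though the LV-EZ series does read out at roughly two ADC counts per inch at 5V:

```cpp
// Minimal avoidance loop: read the EZ4's analog output and nudge left
// when something is too close. Pins, threshold, and motor handling are
// assumptions for illustration.
const int SONAR_PIN = 0;    // analog pin wired to the EZ4's AN output
const int LEFT_FWD  = 5;    // assumed motor PWM pins, via the H-bridge
const int RIGHT_FWD = 6;
const int TOO_CLOSE = 12;   // inches; assumed threshold

void driveForward() {
  analogWrite(LEFT_FWD, 200);    // left trimmed to track straight
  analogWrite(RIGHT_FWD, 255);
}

void turnLeftALittle() {
  analogWrite(LEFT_FWD, 0);      // stop the left wheel to arc left
  analogWrite(RIGHT_FWD, 255);
  delay(150);
}

void setup() {}

void loop() {
  // The LV-EZ series gives roughly two ADC counts per inch at 5 V.
  int inches = analogRead(SONAR_PIN) / 2;
  if (inches < TOO_CLOSE) turnLeftALittle();
  else driveForward();
  delay(50);                     // the sensor updates at about 20 Hz
}
```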

Below is a little video of the robot navigating around the hallway and avoiding a moving beslippered object (me).

(Video: robot081231)

And here is an interactive visualization of the obstacles the robot sensed.

Little Baby Bot Lives!

Today at around 4pm, a little baby bot was born. Oliver helped me with some of the design decisions, and with measuring and calibrating the little guy. He weighs about 2lb (including batteries), and is 14″ long. At this stage, he is blind. He does, however, diligently follow instructions fed to him as a program down a USB cable. After each programming, we remove the USB cable and he is reborn, newly obedient.

One of the lessons I learned with this robot was that the electronics and programming are at most half the challenge. What caused the most problems was the mechanical part. In the end, I settled on an old iPhone box for the body. We carefully cut holes in the sides for the motors, making the motors fit as snugly as possible to minimize wobble. Even so, a couple of folded pieces of paper and an earplug are used to pack the motors more tightly.

My first little Robot

Little Robot Steps

It works! I built the beginnings of a little robot today. I have an Arduino Diecimila connected to a circuit built on a breadboard around a TI SN754410 Quadruple Half-H Driver, connected to two Pololu Gearmotors. I followed recommendations from various places on the net to reduce motor noise by wiring each motor with three capacitors – fiddly work with such small motors – and followed a very clear example I found in a course on the web to hook up the Arduino via the H-Bridge to a motor.

I wrote some code to drive each motor forwards or backwards at a chosen speed, and on top of this wrote routines to drive the robot forwards, backwards, clockwise or anticlockwise. After a little debugging it all works. The only problem now is… the robot has no body, just guts.
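Here is a rough sketch along those lines – per-motor speed and direction through the SN754410, with the robot-level moves layered on top. All the pin assignments are assumptions for illustration:

```cpp
// Per-motor speed/direction through the SN754410 H-bridge, with
// robot-level moves built on top. All pin assignments are assumptions.
const int L_IN1 = 2, L_IN2 = 4, L_EN = 5;   // left: direction pair + PWM enable
const int R_IN1 = 7, R_IN2 = 8, R_EN = 6;   // right motor

// speed in -255..255; the sign selects direction via the H-bridge inputs
void setMotor(int in1, int in2, int en, int speed) {
  digitalWrite(in1, speed >= 0 ? HIGH : LOW);
  digitalWrite(in2, speed >= 0 ? LOW : HIGH);
  analogWrite(en, abs(speed));
}

void drive(int left, int right) {
  setMotor(L_IN1, L_IN2, L_EN, left);
  setMotor(R_IN1, R_IN2, R_EN, right);
}

void forwards(int s)      { drive(s, s); }
void backwards(int s)     { drive(-s, -s); }
void clockwise(int s)     { drive(s, -s); }
void anticlockwise(int s) { drive(-s, s); }

void setup() {
  int pins[] = {L_IN1, L_IN2, R_IN1, R_IN2};
  for (int i = 0; i < 4; i++) pinMode(pins[i], OUTPUT);
}

void loop() {
  forwards(200);  delay(1000);
  clockwise(200); delay(500);
}
```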

I had in mind a Ferrero Rocher box for a transparent, light, appropriately sized body, but the particular – formerly ubiquitous – size which I wanted is nowhere to be found. I may have to cannibalize some tupperware.