models and machines
Thursday, June 2, 2011
On the recent drive to and from Albany in the Subaru, the brakes had developed an unpleasant (and new) behavior: a rough pulsing that could be felt up through the brake pedal when the car was moving slowly. It made me suspect either that some brake pads had failed catastrophically or that I had performed the reassembly of the right rear brake mechanism incorrectly (when I took it apart some weeks ago). So today I looked at all the brake shoes and disassembled and reassembled the brake pad assemblies for three of the wheels. Nothing in the braking system looked amiss, though I noticed that a boot for the front left CV axle had ruptured, spraying grease everywhere. I didn't have any replacement shoes for the front brakes (which are a different, larger size), but I had replacements for the back shoes (which were, in any case, badly worn), so I replaced them.
Car work is messy, unpleasant work, but it's a nice opportunity to catch up on my podcasts. Today I was listening to the latest precious full-hour episode of Radiolab (these only seem to come out once a month at the most). The episode was entitled "Talking to Machines": a look at where we are in terms of making computers seem lifelike (either emotionally, as in the case of Furby, or conversationally, as in Eliza and various chatbots). This has always been an interesting subject to me. I myself have written and deployed crude chatbots (in a Flash-based instant messaging system I built for Bathtubgirl's website) and have delved into the basics of English language parsing. I've also imagined how a system could organically develop language skills over time with experience. So I was interested to learn from Radiolab about a chatbot called Cleverbot that has been gradually learning how to converse by example.
Tonight I had a little chat with Cleverbot to see how clever it actually was. It's definitely better than Eliza, but not much better. Part of the problem with Cleverbot is that it doesn't seem to build up any logical model of the person it is conversing with; it lives in a perpetual present, working from whatever was just typed, looking through its vast database of past conversations to find the most appropriate response. But it doesn't seem to learn anything specifically about a person during the course of a chat. And what it says about itself is completely chaotic, since it has no model of itself either.
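To make the "perpetual present" problem concrete, here's a minimal sketch in Python of that kind of retrieval-only chatbot. This is not Cleverbot's actual code; the PAST_EXCHANGES corpus and respond() function are hypothetical stand-ins, just enough to show a bot that answers each message by matching it against stored exchanges while keeping no memory of the person or of itself.

```python
# A sketch (not Cleverbot's real implementation) of retrieval-only chat:
# every reply is chosen by matching ONLY the latest input against a corpus
# of past exchanges, so nothing said earlier in the conversation matters.
import difflib

# Hypothetical stand-in for a huge database of past conversations:
# (something a human once typed, what the other party said next).
PAST_EXCHANGES = [
    ("hello", "Hi there. How are you?"),
    ("what is your name", "My name is whatever you want it to be."),
    ("do you like music", "I love music. Do you play an instrument?"),
    ("where do you live", "I live on the internet."),
]

def respond(latest_input: str) -> str:
    """Pick the reply whose recorded prompt most resembles the latest input.

    Note what is missing: no user profile, no memory of earlier turns,
    no model of self -- exactly the limitation described above.
    """
    prompts = [p for p, _ in PAST_EXCHANGES]
    best = difflib.get_close_matches(latest_input.lower(), prompts, n=1, cutoff=0.0)[0]
    return dict(PAST_EXCHANGES)[best]

if __name__ == "__main__":
    # Each turn is answered in isolation; telling it your name changes nothing.
    for line in ["Hello!", "My name is Gus, what is my name?", "Do you like music?"]:
        print("you:", line)
        print("bot:", respond(line))
```

Fixing the limitation would mean carrying state across calls to respond(): some structure recording facts the user has stated (and facts the bot has claimed about itself), consulted before falling back to the retrieval step.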
For linking purposes this article's URL is: http://asecular.com/blog.php?110602