Banazîr the Jedi Hobbit (banazir) wrote,
Day 12: Westering Home

UK Trip 2005: A Tronkie Travellogue
Day 12: Edinburgh, Scotland to London, England (IJCAI Conference)

09:00 - 15:05: The fourth day of technical presentations at the 19th Biennial International Joint Conference on Artificial Intelligence (IJCAI-2005).

The opening talk of the morning was an invited talk by Stephen Jacobsen of the University of Utah and Sarcos, Inc. An expert in telerobotics and bionics, he designed the Utah Artificial Arm, the Utah/MIT Dexterous Hand, many of the playback "animatronic" figures at Walt Disney World and EPCOT Center, and the famous dancing water fountains at the Bellagio in Las Vegas. His talk contained a smattering of technical detail, but consisted almost entirely of videos and design summaries of the new humaniform robots and human augmentation systems.

One of the robotic hands can pick up an egg without breaking it and then pick up a 300-pound anvil and throw it! My favorite video was the robot hand that deftly cracked an egg into a mixing bowl, then inserted its own fingers into the bowl and spun them to scramble it.

Currently, it takes up to three soldiers to rescue a single wounded combatant in the field, each of whom may be under fire or otherwise at risk. The Sarcos exoskeleton is a wearable robot that would allow one combatant to run with another on his back, and would even allow the injured soldier to "fight while hurt" if he were able to use a weapon. So far only the legs have actually been built, but already they are very reminiscent of the powered suits that Robert A. Heinlein described in Starship Troopers and which were later depicted in Mobile Suit Gundam.

KAELBLING: Lots of cool things that AI people would love to work with. What's the next thing that AI could help you with?
JACOBSEN: Control, calibration and correction - an organism supervisor that observes, analyzes kinematics and corrects sensors.
banazir: What do you think the prospects are for scaling up to multi-agent coordination among, say, autonomous robots?
JACOBSEN: I think we should first walk, then run, but once we have single robots figured out, it shouldn't be too hard. Can you see any reason why it shouldn't?
banazir: Just if they are mixed-initiative.

Later questions covered prosthetic applications and hydraulics.

I said hello to Ryszard Michalski, and the stage was set up for a half-hour Sony QRIO demo. At 53 cm (about 21 inches) tall, this humaniform robot dances, walks up stairs, and crawls under tables. It would have hit the wall going under the table in the obstacle course, though, and it tried to get up too early, so they had to help it out a little. More impressive was its ability to get up when pushed over and avoid falling when slightly tripped. The QRIO also orients toward hand-clapping and will walk up to a human, look at them with its two camera eyes (doing some basic 3-D face recognition in binocular stereo), greet the person, and bow.

Here are some video clips of the QRIO performing typical tasks, such as balancing on a moving surface and throwing a ball.

Of the morning presentations, I attended only three, in two different sessions. The tracks that seemed interesting to me were Learning 2 (Sidlaw), Learning in Music and the Web (Tinto), Text Categorization (Moorfoot), and Decision Theory in Adaptive Systems (Kilsyth). The ones I attended were:

  • Zhou and Li - Learning 2

  • Kapanci and Pfeffer - Learning in Music and the Web (musical transcription)

  • McCallum, Corrada-Emmanuel and Wang - Learning in Music and the Web (link analysis from e-mail headers)

At the start of the lunch hour, I talked with Hwee-Tou Ng of yodge's university, the National University of Singapore (NUS), and a former professor of mine in chambana, Dan Roth, who introduced me to his colleague Shaul Markovitch.

During lunch, I went to eat at a dim sum place with the Banafolks. That was probably the best Chinese food we had here, and surprisingly inexpensive even for lunch.

Coming back, I ran into Yolanda Gil, who told me to say hello to my old friend and classmate zurich31, who was later one of her research programmers and co-authors. The afternoon's talk was by Yolanda's husband, Kevin Knight: "What's New in Statistical Machine Translation". Kevin spent the first half hour running down the progress from the late nineties through 2003, and the second half of the talk on new advances in the last 18 months. It was quite a good talk and garnered a few good questions:

  • Q1: Suppose your translation technique translates Chinese into English suboptimally and a human is tweaking it into the correct one. Are there any tools that assist with this tweaking?
    KNIGHT: Yes, good question. There are CMU's tools for languages where there isn't enough data, and work at Edinburgh by Chris Kelson on active learning.

  • Q2: How much would it help if word senses were disambiguated?
    KNIGHT: Disambiguation hasn't made it into "what's new" because nobody has been able to do automatic sense evaluation yet; manual sense tags may help.

  • Q3: One evaluation metric is building translation pairs (A→B, B→A) and looking at how well translations commute. Has anyone tried this?
    KNIGHT: The problem with this approach is that in statistical NLP it is often the case that the dumber the system is, the better it works. I call this the "Porque in el mundo?" factor ("Porque in el mundo" doesn't mean anything in Spanish, even though it results in perfect backtranslation.)

  • Q4 (SLOMAN): I suspect that for MT to be successful, understanding plays a role. Do you see machine understanding as fundamental to the end goals of MT?
    KNIGHT: That's the question; we have made progress side-by-side but there is still a lot of work to be done.
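Knight's "Porque in el mundo?" objection to the round-trip metric from Q3 can be illustrated with a toy sketch. Everything here is hypothetical: the word-for-word "translators" are dummy lookup tables standing in for a real MT system, purely to show why perfect backtranslation can reward a dumb system.

```python
# Toy round-trip (backtranslation) evaluation: translate A -> B -> A
# and measure how much of the original survives. The tables below are
# hypothetical word-for-word glosses, not any real MT system.

EN_TO_ES = {"why": "porque", "in": "in", "the": "el", "world": "mundo"}
ES_TO_EN = {v: k for k, v in EN_TO_ES.items()}

def translate(sentence, table):
    # Word-for-word gloss; unknown words pass through unchanged.
    return " ".join(table.get(w, w) for w in sentence.split())

def round_trip_overlap(sentence):
    # Fraction of original tokens recovered after A -> B -> A.
    back = translate(translate(sentence, EN_TO_ES), ES_TO_EN)
    orig = sentence.split()
    matches = sum(a == b for a, b in zip(orig, back.split()))
    return matches / len(orig)

print(round_trip_overlap("why in the world"))  # 1.0: a perfect round trip,
# even though the intermediate "porque in el mundo" is not grammatical Spanish.
```

The word-for-word gloss scores a perfect round trip precisely because it throws away no information, while a better translator that actually restructured the sentence could score worse, which is the flaw Knight was pointing at.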

15:05 - 21:30: Then it was off to Edinburgh International Airport, where we spent about five hours waiting for a flight to London Gatwick. I spent two of these hours (16:15 - 18:15) watching The Hunchback of Notre Dame.

Left: The view from the top deck of a double-decker bus, which I finally got to see.
Right: Edinburgh International Airport (EDI).

21:30 - 22:30: A quick flight, and we were on the ground at Gatwick.
22:15 - 23:15: Unfortunately, I failed to understand how the airport is laid out, and we ended up looking for an hour for the cab that Banamum had booked. Drivers get charged for coming up to the upper level, so he was parked a ways back from the arrivals lounge.
23:15 - 00:30: And it's off to Heathrow for the plane home!

Tags: animatronics, autonomous robots, bnj, conferences, disney, edinburgh, engineering, human augmentation, ijcai, machine learning, mechanical engineering, planning, qrio, research, robotics, scotland, sony, teaching, teleoperation, travel, travellogue, uk
