Sunday, September 20, 2009

Wherein I Compare AI Development To Global Warming

Instapundit highlights this little article on Artificial Intelligence where J. Storrs Hall writes the following:

If you’re OK with calling a robot human equivalent if it can, say, do everything a janitor is supposed to, it’s likely by 2025; if it has to be able to create art and literature and do science and wheel and deal in the political and economic world and be a productive entrepreneur, you may have to wait a little bit longer.

Insty quotes this, and it's misleading. Hall believes we'll have an AI capable of janitorial work, not really an AI that can "do everything a janitor is supposed to". What he means is that we'll have, essentially, a more advanced Roomba—perhaps humanoid, though a humanoid shape wouldn't necessarily be optimal.

And, no, this isn't human intelligence. Robot janitors will, guaranteed, be stupid. They'll clean while the building burns, or, if that's been prepared for, while the building floods. And if they're programmed for that, while the roof caves in.

To my mind, the key graf is:

What remains to be seen is whether it will be equivalent to the 2-year-old in that essential aspect that it will learn, grow, and gain in wisdom as it ages.

First of all: No, it won't. No mystery. See, that would be intelligence, versus pre-programming a set of defined tasks with a certain set of fixed parameters. I'll give him some credit for wondering, as opposed to flatly predicting we'll actually be there in 15 years. Twenty-five years ago, the people writing and speaking about AI predicted wondrous things in 5, 10, 15 years.

And we have the Roomba. And some other very cool domain-specializing tools. But nothing like intelligence.

But the idea that a two-year-old ranks below a janitor, and a janitor below an artist, suggests to me that the field still lacks a definition of intelligence. A two-year-old has as powerful an intellect as any of us will ever meet. A janitor's intelligence isn't necessarily going to be taxed by his job very often, but sometimes it will be: knowing how to react in unexpected circumstances, like a fire, a flood, or previously unsuspected structural unsoundness.

One can argue that many janitors who face such circumstances react wrongly or inappropriately, but they react to the best of their ability. Robots will simply fail to react to things outside their parameters.

Again, not to say that there won't be useful 'bots, but this isn't intelligence.

I'm not an expert in it, but I think the singularity guys have based their theory on a combination of working AI and Moore's Law. But Moore's Law is a trend, not an actual "law", and AI doesn't seem to be any closer to realization than it ever was—it's only a massive amount of computing power that allows the meagerest appearance of less-than-animal intelligence.

Appearance, I say. It's not even intelligence, and the distinction is not something that can be remedied with quantity.

I'll go one step further: If the singularity were to come to pass, it would be a nightmare for humanity. But that's a different topic for a different rant.

8 comments:

  1. It seems like a more reasonable approach to try adding electronics to a living brain, than try making one from scratch.

    ReplyDelete
  2. The fact is, that civilisation requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralizing. On mechanical slavery, on the slavery of the machine, the future of the world depends.

    OSCAR WILDE, The Soul of Man Under Socialism

    Man created machine in his own image
    -author unknown

    ReplyDelete
  3. You make some nice points, but here is a bit of food for thought...

    Why humanoid robots? They fit into a world designed for humans and can use tools made for humans. You make a robotic vacuum cleaner and all it can do is vacuum. Make a humanoid robot and it can clean the dishes, vacuum the floor and drive the car.

    Common sense ideas about intelligence are different from what robotics people mean. We look at driving down an empty highway as essentially trivial, taking no intelligence. We take for granted things that are difficult for machines to do, like telling the road from the not-road.

    ReplyDelete
  4. CL--

    There's a book called "Beyond Civilization" that argues the same thing: Civilization requires slaves. However, since we don't have slaves any more, the author has determined we're moving into a post-Civilization era.

    ReplyDelete
  5. dbp--

    Ah, but the tools are largely trivial. The Roomba guys didn't make a machine that could hold a broom. In fact, look at it the other way: Our tools are inefficiently designed because they're designed to fit human bodies rather than domain-specializing forms.

    And, I am a programmer: I've played with AI a lot over the years. But ultimately, if AI resolves itself by changing the meaning of the "I", I won't be impressed. Heh.

    ReplyDelete
  6. Whether robots are specialized or humanoid is going to depend on how they are used: A robot that lives in a factory, welds cars, and is bolted to the floor is not going to be humanoid.

    One that is made to care for an elderly person in their home would almost have to be. Imagine a non-humanoid robot that can get some carrots from the fridge, peel them, wash them, cut them up and then steam them.

    It is well taken that changing the definition of intelligence is a serious moving of goal-posts. But (and you knew there had to be one) our sense of what constitutes intelligence is human-centric. We see driving as trivial--any moron can do it--yet it is tough for machines. Meanwhile we regard humans who can do multi-digit math in their heads or play chess at a high level as geniuses, even though computers easily out-do us in those areas.

    I think the ultimate test is the one Turing came up with. I think we are a pretty long way from this happening but I don't see any theoretical reason why it shouldn't happen eventually.

    ReplyDelete
  7. I'm with you right up to the end: Turing was of course brilliant, but the "Turing test" is one of the dumbest things I've ever heard.

    ReplyDelete
  8. And, I point out: Computers can't actually do math, they can only do what they're told.

    ReplyDelete
