No Singularity for me, mater

Charlie Stross, author of a great book on the Singularity, has a post explaining why he doesn’t in fact believe in it.

The crux of his reasoning seems to be this passage, where he suggests that we don’t want ‘real’ AIs:

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don’t want my self-driving car to argue with me about where we want to go today. I don’t want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos. And I certainly don’t want to be sued for maintenance by an abandoned software development project.

As I argued in the PCK, human-interchangeable robots are in general a fairly dumb idea.  We don’t need a human-level AI because we have humans.  We could use various subsmart appliances; we can and do use specialized robots; we could likely use supersmart apps that help run countries and corporations.  About the only real use I can think of is as generalized workers in niche environments we can’t survive in, like deep space.

(There could be a market for tutors, butlers, and sexbots.  But this runs into the paradox of automation: replace enough jobs and humans become extremely cheap.  Using robots for these tasks would be another niche, for people too fastidious or paranoid to hire humans.  And Stross’s argument applies: you don’t want your children’s tutor, or your sexbot, to be scheming against you.  You want situational cleverness, not sapience.)

Plus, all the great features of AIs… why not just add them to humans?  That’s where I think we’ll end up going, and it’s what I put in the Incatena. 

(Stross also talks about the ethics of developing AIs… before you give voting rights to AIs, think a bit about when in the development process sentience occurs.  Long before product ship, for sure – but in continuing development, are you killing a sentient being?  See also my story on this.)

Stross quotes Hans Moravec as thinking that humans just won’t be able to compete with the nimble AIs:

A human would likely fare poorly in such a cyberspace. Unlike the streamlined artificial intelligences that zip about, making discoveries and deals, reconfiguring themselves to efficiently handle the data that constitutes their interactions, a human mind would lumber about in a massively inappropriate body simulation, analogous to someone in a deep diving suit plodding along among a troupe of acrobatic dolphins. Every interaction with the data world would first have to be analogized as some recognizable quasi-physical entity … Maintaining such fictions increases the cost of doing business, as does operating the mind machinery that reduces the physical simulations into mental abstractions in the downloaded human mind.

There seem to be two main elements to Moravec’s suggestion.  One: primate brains are slow.  Now in part I think this is an illusion caused by dwelling in the land of theory: electronics are surely faster than neurons, so electronic brains must be better!  Only the comparison is quite unfair: computers are fast because their basic operations are trivial.  Your brain can still do things in an instant that megacomputers can’t, such as dealing with language and easily processing visual data.  (Not that I think AI is impossible.  I think it’s harder than is often assumed, but I’d be really surprised if we didn’t have it in a couple of centuries.)

Computers are better at, well, computerlike tasks.  Again, the obvious step here is not to hand over the keys to civilization, but to incorporate those advantages into our brains.  In AD 4901 you’ll be able to contemplate millions of chess moves, grep megabytes of data, and do vector math as fast as a computer too, since you’ll have one in your skull.

The other idea is that AIs will somehow not need the physical metaphors that we allegedly need.  Seriously?  Does Moravec really think that, say, computer programming is based on primate metaphors?  BASIC maybe, but surely not C++.

I think Moravec’s argument here is actually backwards.  He seems to have a priggish distaste for the biological… a long-running trope in science fiction, one that C.S. Lewis had a great time parodying back in That Hideous Strength, where his villains had a cringing disgust for the messiness, the fluidness, the grossness of organisms.  But that taste for dead, totalitarian order is way past its sell-by date.

To put it bluntly, I think it’s barmy to give up sex, sports, gardening, and eating, to say nothing of the aesthetics of music, dance, or the visual arts.  All that in order to do what?  Play a really good game of chess?  One you can play anyway once you have that calculation neurimplant?

Also see this post reflecting on another excellent point of Stross’s: there’s a limit to how far you can get with pure thinking.  After enough of that you have to go back to the lab and check it out anyway.

If you want to take a minor planet, perhaps Vesta, and turn it into computronium and think deep thoughts with it, fine.  But I just don’t see any convincing reason that we need more deep thinking than that.  (None of the Singularity advocates seem able to explain what superhuman AIs will do that requires ever-increasing computational power.)