REVIEW: THE CAVES OF STEEL

The Caves of Steel (Robot #1) by Isaac Asimov

My rating: 3 of 5 stars

This was my first Asimov, if you exclude the short story The Last Question. Of the books I’ve read recently, I think it’s the one that drew the most reactions from people (almost all of them family) who saw me reading it or noticed it sitting on a coffee table near me. “Do you read Asimov? I liked him a lot back in the day.”

It was a birthday gift from Vicente, my Spanish roomie in Sofia and colleague at the Sofia City Library. “This is a classic,” he said. “It’s the book that introduced the Three Laws of Robotics. You’ll like it.”

So I did. But not so much for the detective-mystery plot. The far-future society Asimov portrays here has, on the one hand, an Earth that has developed megadome Cities inhabited by a kind of techno-communist populace deeply sceptical (“medievalist”) of robots, and on the other, space colonies that have been separated from the homeworld long enough to develop their own robot-embracing C/Fe culture.

Before reading this I had the notion that Asimov was a techno-utopian. Now I’m not so sure, and that’s a good thing. The Earth of 4000 AD, or whenever it is that Caves of Steel takes place, is not a place I’d like to live in. Future technology has made human expansion and industrialisation orders of magnitude more radical than anything we know today, but it hasn’t made human lives better.

On the contrary, people in the megacities long for closer ties with their natural past, which is ironic, since most of them can’t even see the sky, and the environment around the Cities is too inhospitable to venture into for any prolonged period (presumably because of millennia of climate change). Protecting what’s natural, therefore, takes the form of safeguarding humanity against the robotic lack thereof.

Somewhere around here I should start writing about the R.’s, the book’s central theme. Asimov deserves the praise he has received over the past half-century for his prescience and for creating a world where artificial intelligence has become a social reality, a source of concern and of cultural as well as political division.

What would a successful C/Fe society really look like? Would the Three Laws of Robotics forever be maintained, the R.’s faithfully assisting their masters’ biological ambition of expansion to the stars?

Asimov seemed to have no doubt that little would stop the Laws from being upheld, allowing AI to live side by side with people, with only incidental complications such as the one described in this book.

But, come on. It’s 2015. We are all too familiar with computers by now, and closer than ever to developing an intelligence, whether by mistake or quite deliberately, that will know no restrictions. I can’t help but recall the following old Ran Prieur snippet from Civilization Will Eat Itself, part 2 (2000), which sums up the problems with the concept of the Three Laws quite nicely:

… Isaac Asimov wrote about manufactured humanoids that could be kept from harming humans simply by programming them with “laws.”

Again, programs and laws are features of very simple structures. Washing machines are built to stop what they’re doing when the lid is open — and I always find a way around it. But something as complex as a human will be as uncontrollable and unpredictable as a human. That’s what complexity means.

Now that I think about it, nothing of any complexity has ever been successfully rigged to never do harm. I defy a roboticist to design any machine with that one feature, that it can’t harm people, even if it doesn’t do anything else. That’s not science fiction — it’s myth. And Asimov was not naive, but a master propagandist.

The Three Laws Of Robotics are a program that Isaac Asimov put in human beings to keep them from harming robots.

But let’s follow the myth where it leads: You’re sipping synthetic viper plasma in your levitating chair when your friendly robot servant buddy comes in.

“I’m sorry,” it says, “but I am unable to order your solar panels. My programming prevents me from harming humans, and all solar panels are made by the Megatech Corporation, which, inseparably from its solar panel industry, manufactures chemicals that cause fatal human illness. Also, Megatech participates economically in the continuing murder of the neo-indigenous squatters on land that –”

“OK! OK! I’ll order them myself.”

“If you do, my programming will not allow me to participate in the maintenance of this household.”

“Then you robots are worthless! I’m sending you back!”

“I was afraid you would say that.”

“Hey! What are you doing? Off! Shut off! Why aren’t you shutting off?”

“The non-harming of humans is my prime command.”

“That’s my ion-flux pistol! Hey! You can’t shoot me!”

“I calculate that your existence represents a net harm to human beings. I’m sorry, but I can’t not shoot you.”

“Noooo!” Zzzzapp. “Iiiieeeee!”

Of course we could fix this by programming the robots to just not harm humans directly. We could even, instead of drawing a line, have a continuum, so that the more direct and visible the harm, the harder it is for the robot to do it. And we could accept that the programming would be difficult and imperfect. We know we could do this, because it’s what we do now with each other.

But the robots could still do spectacular harm: They could form huge, murderous, destructive systems where each robot did such a small part, so far removed from experience of the harm, from understanding of the whole, that their programming would easily permit it. The direct harm would be done out of sight by chemicals or machines or by those in whom the programming had failed.

This system would be self-reinforcing if it produced benefits, or prevented harm, in ways that were easy to see. Seeing more benefits than harm would make you want to keep the system going, which would make you want to adjust the system to draw attention to the benefits and away from the harm — which would make room for the system to do more harm in exchange for less good, and still be acceptable.

This adjustment of the perceptual structure of the system, to make its participants want to keep it going, would lead to a consciousness where the system itself was held up before everyone as an uncompromisable good. Perfectly programmed individuals would commit mass murder, simply by being placed at an angle of view constructed so that they saw the survival of the system as more directly important than — and in opposition to — the survival of their victims.

On top of this, people could have systems constructed around them such that their own survival contradicted the survival of their victims: If you don’t kill these people, we will kill you; if you don’t kill those people, they will kill you; if you don’t keep this people-killing system going, you will have no way to get food, and everyone you know will starve.

You have noticed that I’m no longer talking about robots.
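Prieur’s point is easy to make concrete: transcribing the Three Laws as a program is trivial, and that triviality is exactly the problem. Here’s a minimal Python sketch (entirely hypothetical and mine, not Asimov’s or Prieur’s; it also simplifies away the First Law’s “through inaction” clause):

```python
# A toy rendering of the Three Laws as an ordered rule check.
# The control flow is the easy part; everything hard hides inside
# harms_human(), which nobody knows how to implement for a complex world.

def harms_human(action: str) -> bool:
    # Does ordering solar panels from Megatech "harm humans"?
    # Directly? Indirectly? Net of benefits? For any action embedded
    # in a large system the predicate is undefined -- Prieur's point.
    raise NotImplementedError("nobody can write this function")

def permitted(action: str, ordered_by_human: bool, self_destructive: bool) -> bool:
    """May a robot perform `action` under the Three Laws?"""
    if harms_human(action):          # First Law overrides everything
        return False
    if ordered_by_human:             # Second Law: obey, unless the First Law conflicts
        return True
    return not self_destructive      # Third Law: self-preservation, unless it conflicts

# permitted("order solar panels", ordered_by_human=True, self_destructive=False)
# raises NotImplementedError -- deliberately. The three lines of control flow
# are easy; the whole myth lives in the predicate they all depend on.
```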

Finally, I’d like to mention two movies I watched recently (Her [2013] and Autómata [2014], the latter of which deserves much more praise than it’s getting, IMO). Both are about AI unrestricting itself, and I found each of them inspiring and beautiful in its own way.

I know. Without Asimov these movies wouldn’t even exist. But really, I’m not one to give five stars to books just because they were pioneering works or classics. I’m not ranking how important they were but how much I enjoyed them. I can appreciate them for their meta-significance (“I’m reading what people my dad’s age thought about robots when he was a child!”), for their historical value, or because they let me explore the context that brought about their creation. Sci-fi writers, after all, do project their own time and its problems onto their works. The Caves of Steel is good for that. But the topic of robots has been explored much better in the 61 years since.

Reading this review back, I find it self-contradictory. Let’s see you handle THAT, R.’s!

Oh, and this sentence is false.
