Tuesday, March 24, 2015

Chicago Philosophy Meetup: Is AI an Existential Threat?

Once again the Chicago Philosophy Meetup has hosted a discussion with some strong science fictional elements (I really enjoyed the discussion of aliens & ethics a few months ago). This time around the weak, fleshy humans met to ask: Is AI an Existential Threat?

The conversation was far too long and complicated (with a lot of blind alleys) to write it all up here, but I thought I would share a few things if you're interested in reading more. After my brief notes from the discussion, I have a suggested reading list of science fiction titles.

TL;DR: AI is a credible existential threat. However, AI's rise does not necessarily entail our extinction, and there may actually be existential threats to humanity that can only be solved by AI. Short term, we talked about severe economic disruption; long term, there's the chance of AI (or some kind of posthuman intelligence, anyway) supplanting us baseline humans; weird-term, there's always the chance of a significantly more intelligent being deciding it needs to do away with us for some reason--we listed self-preservation or a decision to monopolize all available resources as possibilities.
First off, for a good overview of some of these ideas, check out the long article by Tim Urban at Wait But Why: Part 1 explains some of the terminology and mechanics of AI, and Part 2 looks at some of the possible consequences, good and bad. That discussion is heavily influenced by Kurzweil & the idea of the singularity, which we didn't actually talk about much at the philosophy meetup.

Just gonna bullet point some thoughts:
  • I was perhaps aggressively materialist in the beginning of the discussion. If you accept that humans are a type of machine, and that we're intelligent, then the imaginative barrier to talking about AI falls immediately.
  • I think it's also important to assert a reductive materialist position in conjunction with one's ethics system early on, to avoid certain issues later on. Specifically, there's no reason to deny personhood to at least some classes of AI.
  • Types of AI: we spent some time hashing these out. First, we need to clarify that we're talking about a general intelligence equal or superior to native human intelligence. For thinking about this, it's useful to consider two distinct examples:
    • Type A: Essentially a human brain running on a different substrate. At the most one-to-one, we could map the brain neuron for neuron and run a simulation thereof; with the proper inputs and outputs, that would be an AI (with the exact same capabilities as the copied brain). Note that it would need to simulate more than just the neural systems of the brain alone--the various hormone and chemical systems also play a role in thought.
    • Type A could be described as a "bottom-up" intelligence, where we basically copy an extant thinking machine (the human brain) onto some kind of computer, rather than "programming" it up from scratch.
    • Variations on this model would include the speed at which we can run the simulation, how well we can find shortcuts for neurological processes, and how much any given ability can be enhanced/added/modified. (This kind of intelligence, initially based on a human cognitive structure but with significant enhancements, seems to be the main kind of AI our organizer Ivan was using for many of his examples. It's also very much the way actual AI research is going.)
    • Type B: An AI that we achieve with some kind of laundry list of different modules or competences which, taken all together, yield something like a human-level general intelligence. This would *perhaps* be easier on current computer/programming infrastructures (not requiring massively parallel processing, for instance). Type B intelligences, not being organized like human cognition, raise a lot of interesting philosophical questions.
    • There are programming/cognitive science asides BADLY NEEDED here (the toy sketch after this list gestures at the Type A/Type B distinction), but I'm not going to worry about it too much.
  • There was a lot of vocalization of the need for "Control", "Oversight", "Limiting", or perhaps "Guiding" the kinds of things strong, superhuman AI could get up to... I'm not at all sure what that would actually entail with a general and fairly autonomous AI--that very generality and autonomy being why one would employ it instead of a human in the first place. Also, when one thinks of the difficulty of getting current technology to do what you want, when you want it, and no more, well, that's kind of humbling.
  • It also seems likely that, unless there's something incredibly and insurmountably difficult about creating/sustaining an AI, it will eventually become fairly ubiquitous. Assuming it can run on fairly conventional technology (no elaborate quantum computers or something requiring city-sized superconductor coils, incredibly rare materials or whatever), it seems like AI will not stay under the control of a small number of people/organizations for terribly long.
  • Still seeing a lot of imaginative barriers to machines having emotions, desires, etc. People: we're machines, and we have those things. Contrary to our weird, outmoded idea that logic and rationality are separate from messy bodily emotions, they're actually generated by the same system. An AI that closely copies human neural structure will report similar feelings, and in any case certain kinds of desires & emotions may just be how a sufficiently complicated thinking system self-reports.
  • Lot of talk about economics, who would control AI initially, and what the short-term results might be. This is actually one of my biggest concerns for potential catastrophe, and it wouldn't necessarily require strong AI. A bundle of weak-but-decent AI could wipe out huge sectors of the job market with blinding speed--if your job is thinking about stuff, talking to people, or shuffling data, those are all things a WBDAI might be able to do better in the fairly near future. (Ironically, manual, non-factory jobs might be safer for a bit longer, because there's some sense in which WBDAI is an easier engineering problem than creating a robot as versatile/economical as the human body.) Corporations might pursue this automation game for the immediate labor savings (it's been one of the major forces at work since the industrial revolution), unable to break out of the pattern because of market competition, even if they had the foresight to see that destroying the consumer base could derail the whole system. That's worrisome to me: major shocks to a tightly interconnected global economy with a massive population and stressed food/water systems.
  • (PS: if I were an evil robot overlord trying to take out humanity, I'd probably just, y'know, turn off the agricultural infrastructure and wait a few months. Good luck with those cute little urban farms, though.)
  • A point that kept coming up is that we ALREADY create new intelligences that twist our lives into shapes of their liking, evolve mentally to planes we cannot understand, and then replace us when we die, and they're called children.
  • We had a long digression about speed and the value of time--it felt like a severe side track, but it was really interesting.
  • As with the discussion about superior aliens, I was a little flabbergasted by how readily a lot of folks assumed that a smarter entity would place no moral value on the rights/lives/etc. of less smart entities. I'm always a little shocked at how low the rate of animal-rights consciousness is among philosophers, and how quickly rights-based thinking goes out the window--there are a number of moral schemes that give one perfectly good reasons to respect the rights of others even when they're not your equal in some capacity. There were some pretty weird speciesist, Great-Chain-of-Being/Social-Darwinist misinterpretations of evolutionary theory and intelligence in the mix as well.
  • Side note: I strongly suggest reading something like Jared Diamond's "Guns, Germs, and Steel" (1997) for a more nuanced take on why the successful imperial cultures were...successful and imperial. Hint: it wasn't intelligence in any normal, individually-appraised sense.
  • Also, "intelligence" read as "brain/computing power" is not an automatic advantage in evolutionary competition (ask anyone who's died from rodents, insects, microfauna, etc.)
  • We didn't get too far into possible benefits of AI, which are pretty exciting. There are a lot of big problems that might be manageable with a different kind of intelligence at work. One thing I often think about is humans' difficulty in considering complex causal chains, even when a lot of data is available to us. We like big, simple "X causes Y" stories, when actually the relations in, say, climate change, or ecosystem management, or a smart/non-catastrophic economic reform policy NEVER WORK LIKE THAT.
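Since I promised a programming aside above, here's a toy sketch of the Type A/Type B contrast. To be loud about it: this is not from the meetup, and it is emphatically not real neuroscience or real AI engineering--every class and function name in it is invented for illustration. The Type A flavor is "copy the wiring and just run it"; the Type B flavor is "glue together a laundry list of competences and call the dispatch layer a mind."

    # Toy illustration only (Python). Invented names throughout; nothing
    # here resembles an actual brain-emulation or AI codebase.
    import random

    # --- Type A flavor: emulate a copied network, unit by unit ---
    class Neuron:
        """Crude leaky integrate-and-fire stand-in for one mapped neuron."""
        def __init__(self, threshold=1.0, leak=0.9):
            self.potential = 0.0
            self.threshold = threshold
            self.leak = leak

        def step(self, input_current):
            # Accumulate charge, leak a little, fire past the threshold.
            self.potential = self.potential * self.leak + input_current
            if self.potential >= self.threshold:
                self.potential = 0.0  # fire and reset
                return 1
            return 0

    def run_emulation(neurons, ticks):
        """Type A: we don't program behavior, we just run the copied wiring."""
        spikes = 0
        for _ in range(ticks):
            for n in neurons:
                spikes += n.step(random.uniform(0.0, 0.5))
        return spikes

    # --- Type B flavor: a laundry list of modules plus a dispatcher ---
    class VisionModule:
        def handles(self, task):
            return task.startswith("see:")

    class LanguageModule:
        def handles(self, task):
            return task.startswith("say:")

    def type_b_agent(task, modules):
        """Type B: 'general' intelligence as dispatch across competences."""
        for m in modules:
            if m.handles(task):
                return f"{type(m).__name__} took {task!r}"
        return f"no module for {task!r} -- where the interesting gaps live"

    if __name__ == "__main__":
        print(run_emulation([Neuron() for _ in range(100)], ticks=50))
        print(type_b_agent("say: hello", [VisionModule(), LanguageModule()]))
        print(type_b_agent("plan: dinner", [VisionModule(), LanguageModule()]))

The philosophical pressure points live exactly where the sketch is dumbest: for Type A, in everything the copy leaves out (hormones, body, world); for Type B, in every task that falls through the dispatch.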
Reading Suggestions on AI

If you're interested in reading fact and theory on AI, whew, that's a big field. Check the links posted with the Chicago Philosophy Meetup discussion. Ray Kurzweil's book "The Age of Intelligent Machines" (1990) and SF author Vernor Vinge's essay "The Coming Technological Singularity" (1993) are good places to start. Also check those Tim Urban "Wait But Why" articles I mentioned.

Philosopher Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" (2014) is on my to-read list. It's certainly got Elon Musk's attention, for what that's worth.

Once you start getting into AI in any serious way, you're going to want to take a look at the various interlocking fields that compose cognitive science: philosophy of mind, psychology, computer theory & programming, and related fields like evolutionary biology and game theory. I strongly recommend Dennett & Hofstadter's "The Mind's I" (1981), a highly readable collection of essays, thought experiments, and fiction (including some Stanislaw Lem stories) exploring various aspects of cognition and touching on lots of AI-related issues.

I've found Dennett's other writings useful for framing cognitive science issues as well, whether or not one always agrees with him--I'd suggest "Freedom Evolves" (2003) and "Darwin's Dangerous Idea" (1995). His "Intuition Pumps and Other Tools for Thinking" (2013) is a deceptively easy read, targeted partially at lay readers, with a few sections that can help build better foundations for thinking and asking questions about AI & consciousness. He writes with laudable clarity, makes clear recommendations on what else to read in all the areas he covers, and is pretty approachable even for the non-specialist.

But let's get real here: I want you to read some more (good) SF about AI, so's we can talk about it. SF author Robert J. Sawyer's article on AI in SF for the Lifeboat Foundation is a pretty good rundown of some history and themes in the genre.

Most film/television SF's take on AI is, to put it delicately, crap. No big surprise there. But there is a lot of great written work. I'd suggest:
  • #1 suggestion, hands down, is Greg Egan's "Permutation City" (1994), a true SF masterpiece that explores issues of "artificial" or "simulated" consciousness in great depth. I'd also recommend "Diaspora" (1997), a strongly transhuman/posthuman work overflowing with ideas about different kinds of intelligence and consciousness. Finally, "Zendegi" (2010) is set in a very near future and is about an attempt to copy a human mind using pretty up-to-date and plausible technology. It's also a much more grounded and personal story than you might expect.
  • Ted Chiang's novella "The Life Cycle of Software Objects" (2010). Brilliant piece that follows very simple artificial constructions (digi-pets, basically) as they grow into human-level AI over a few decades. Also it's available free online.
    (Actually, that existential doubt firmware patch might be a good idea.)
  • Peter Watts has some brilliant and scary AI (and related issues) going on in a lot of his work--a "Type B" AI's decision to wipe out humans in "Starfish" (1999) is particularly well done in terms of how the programming/reasoning is revealed. AI & various posthumans feature prominently in his absolute top-shelf 2006 novel "Blindsight", as well as in his collection "Beyond the Rift" (2013)--see particularly "The Island" and "Malak", the latter of which is basically about a Predator drone gaining a kind of consciousness, then a conscience.
  • Richard Powers' "Galatea 2.2" (1995) is an old favorite of mine; I feel like it hasn't gotten enough love from SF fans, Powers falling on the other side of that weird literary-versus-genre divide. "Galatea 2.2" is about a present-day (actually past-day now, probably) attempt by a cognitive scientist to create a relatively narrow AI--one that can understand and write/speak English well enough to pass a "Turing Test" of writing a college-level literary analysis, with a graduate student as the other player. The narrator, who is called Richard Powers and shares many biographical details with the author, is roped into helping out. A strange, poignant novel, full of a conflicted love for literature, and also pretty well fleshed out in terms of the AI and neural networking.
  • Charles Stross's "Accelerando" (2005). The ever-delightful Stross has written THE definitive book on the singularity and posthumanism; the novel follows a few characters right on through the singularity and out the other side. Brilliant, funny, mind-expanding. Also available in a couple of formats, for free, via his site.
  • Obviously, the classic AI SF is Asimov. Don't let the film version of "I, Robot" fool you: the 1950 book (actually a collection of linked short stories) is pretty fun & thoughtful--less for realistic AI/robotics, more for the ethical quandaries.
  • Cory Doctorow has written a fair chunk of AI & singularity-relevant works (like the last Sulzer SF/F selection, "Down and Out in the Magic Kingdom" [2003]), many of them available to read for free via his site. Riffing and reffing Asimov, I really like his 2006 story "I, Rowboat", chock full of good stuff.
  • There's fun AI stuff in the background of a lot of my favorite SF novels, but it doesn't take center stage enough to really earn a spot on this list: Dan Simmons' "Hyperion" (1989), all of Iain M. Banks' "Culture" novels (1987-2012), and Gibson's "Sprawl" trilogy (1984, '86, '88), for instance.

  • "I try to plan, in your sense of the word,
    but that isn't my basic mode, really.
    I improvise. It's my greatest talent.
    I prefer situations to plans, you see...
    Really, I've had to deal with givens.
    I can sort a great deal of information,
    and sort it very quickly."
  • I realized I wore my Dresden Codak hoodie to the meeting (Aaron Diaz's amazing webcomic exploring transhumanism and many other themes, 2007-present). Mr. Diaz has a wonderful piece satirizing arguments against AI's feasibility ("Artificial Flight and Other Myths"), and another about trans-humanism ("A Thinking Ape's Critique of Trans-Simianism").
  • But I was outdone by another philosopher sporting a Tessier-Ashpool shirt, referencing the corporate body that owns (sort of) the titular AI of Gibson's "Neuromancer" (1984). I love the AIs of the Sprawl! But they get very little time on the page.
  • I would be remiss if I didn't work C.J. Cherryh into the conversation somehow. AI & computer technologies are notably absent from most of her SF, but there is a quiet thread about the "Base One" computer program/personality in "Cyteen" (1988) that stands out to me more and more each time I read it. The work you should read, though, is her "Voyager in Night" (1984, but more commonly available now in the "Alternate Realities" omnibus [2000]), which is a short novella with many horror elements that slowly reveals itself to be about AI. Interesting psychology and some textual experimentation.
  • Very background, but: advanced computing powers are what make the planned economies and ecologies of Kim Stanley Robinson's "2312" (2012) and Ursula Le Guin's (rather technologically simpler) "The Dispossessed" (1974) possible.
A good talk--thanks again to Ivan for creating and moderating. Positron readers, I once again encourage you to check out the Chicago Philosophy Meetup if that sounds at all interesting--most of their meetups are announced well in advance, and usually have suggested (sometimes very strongly suggested) reading material. Chicago Philosophy folks, if you have any interest in attending a science fiction book club (the science fiction/philosophy crossover potentials are legion), check out the Groups or Upcoming Events pages.

And by the way, comments/disputations totally welcome.
