
Week 9 (5/20,22): The Singularity #9

@Gio-Choi


In the final week of the course, we consider the risks associated with building intelligent machines. Assembled readings include prophecies and utopian visions of the singularity, when machines achieve learning capacity and sentience capable of ushering in a new era of prosperity and possibility. They also contain voices of warning regarding the potential uncontrollability of such machines; their selective control by a few individuals or corporations, who could exercise unchecked power over everyone else; or their incomplete controllability, wherein they monomaniacally maximize a proximate objective (e.g., “make as many paperclips as possible”) and so unintentionally destroy the world. Still others see risks in multiple AIs competing with one another (e.g., the “flash crash” of 2010), or in AI playing greater roles in elaborate misinformation operations used to turn people against each other.

Which risks associated with AI do you find most compelling? Are these balanced by the potential benefits that AI could unleash? Are AI risks existential, that is, do they pose a risk of human extinction or a long-term reduction in prosperity? And what is the character of AI risks: one centralized, superintelligent machine, a network of AI agents, or a new, unstable structure of humans and bots together? What, if anything, should humanity do to contain or mitigate these risks, and how could such measures be implemented?

Post your response as a Comment in reply to this message.
