alignment problem

From Wiktionary, the free dictionary

English

Etymology

Popularized by the 2020 book The Alignment Problem by Brian Christian.

Proper noun

alignment problem

  1. (artificial intelligence) The problem of how to create a superintelligent artificial intelligence whose values would align with the interests of humankind.
    • 2022 March 1, Rob Toews, “7 Must-Read Books About Artificial Intelligence”, in Forbes, New York, N.Y.: Forbes Media, archived from the original on 31 August 2022:
      As [Brian] Christian notes, the alignment problem bears a real resemblance to parenting: “The story of human civilization has always been about how to instill values in strange, alien, human-level intelligences who will inevitably inherit the reins of society from us—namely, our kids.”
    • 2022 December 13, Melanie Mitchell, “What Does It Mean to Align AI With Human Values?”, in Quanta Magazine, New York, N.Y.: Simons Foundation, archived from the original on 15 March 2023:
      Properly defining and solving the alignment problem won’t be easy; it will require us to develop a broad, scientifically based theory of intelligence.
    • 2023 February 27, Derek Thompson, “The AI Disaster Scenario”, in The Atlantic, Washington, D.C.: The Atlantic Monthly Group, archived from the original on 22 March 2023:
      For years, AI ethicists have worried about this so-called alignment problem. In short: How do we ensure that the AI we build, which might very well be significantly smarter than any person who has ever lived, is aligned with the interests of its creators and of the human race? An unaligned superintelligent AI could be quite a problem.

Translations