Feds appoint “AI doomer” to run AI safety at US institute

(credit: Bill Oxford | iStock / Getty Images Plus)

The US AI Safety Institute, part of the National Institute of Standards and Technology (NIST), has finally announced its leadership team after much speculation.

Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but who is also known for predicting that “there’s a 50 percent chance AI development could end in ‘doom.’” While Christiano’s research background is impressive, some fear that by appointing a so-called “AI doomer,” NIST may risk encouraging non-scientific thinking that many critics view as sheer speculation.

There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano’s so-called “AI doomer” views, NIST staffers were “revolting.” Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing “that Christiano’s association” with effective altruism and “longtermism could compromise the institute’s objectivity and integrity.”
