OpenAI training its next major AI model, forms new safety committee

[Image: A man rolling a boulder up a hill. (credit: Getty Images)]

On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also comes as a response to a terrible two weeks in the press for the company.

Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).

Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.

