The OpenAI Board has formed a Safety and Security Committee to make recommendations on critical safety and security decisions across all OpenAI projects. Led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO), the committee will play a pivotal role in shaping the organization’s approach to safety and risk management.
The committee’s formation comes as OpenAI has begun training its next-generation AI model, which it expects to surpass the capabilities of the current GPT-4 system. While OpenAI maintains that its models lead the industry in both capability and safety, it recognizes the need for rigorous scrutiny and debate at this critical juncture.
The committee’s first task, over the next 90 days, will be to evaluate and further develop OpenAI’s existing processes and safeguards. At the end of this period, it will present its recommendations to the full Board, after which OpenAI will publicly share an update on the recommendations it adopts, in a manner consistent with safety and security.
The committee will also draw on OpenAI’s technical and policy experts, including Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist). In addition, OpenAI will retain and consult external safety, security, and technical experts, including former cybersecurity officials Rob Joyce and John Carlin, to support this work.