OpenAI has formed a Safety and Security Committee which will be led by CEO Sam Altman as it begins training its next artificial intelligence model, the AI startup said this week.

Directors Bret Taylor, Adam D’Angelo and Nicole Seligman will also lead the committee, OpenAI said on a company blog.

Former Chief Scientist Ilya Sutskever and Jan Leike, who led the Superalignment team at Microsoft-backed (MSFT.O) OpenAI, which worked to ensure AI stays aligned with its intended objectives, left the firm earlier this month.

OpenAI had disbanded the Superalignment team earlier in May, less than a year after the company created it, with some team members being reassigned to other groups, CNBC reported days after the high-profile departures.

The committee will be responsible for making recommendations to the board on safety and security decisions for OpenAI’s projects and operations.

Its first task will be to evaluate and further develop OpenAI’s existing safety practices over the next 90 days, after which it will share recommendations with the board.

After the board’s review, OpenAI will publicly share an update on adopted recommendations, the company said.

Other committee members include the company’s technical and policy experts Aleksander Madry and Lilian Weng, and head of alignment sciences John Schulman. Newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight will also serve on the committee.