{"id":1056418,"date":"2026-01-19T09:47:48","date_gmt":"2026-01-19T09:47:48","guid":{"rendered":"http:\/\/Nrq9e3hyVb94i7QGc2ageR"},"modified":"2026-01-19T09:47:48","modified_gmt":"2026-01-19T09:47:48","slug":"shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving","status":"publish","type":"post","link":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/","title":{"rendered":"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI&#8217;s safety researchers keep leaving"},"content":{"rendered":"<article>\n<p>A number of senior AI safety research personnel at OpenAI, the organisation behind ChatGPT, have left the company. This wave of resignations often cites shifts within company culture, and a lack of investment in AI safety as reasons for leaving.<\/p>\n<p>To put it another way, though the ship may not be taking on water, the safety team are departing in their own little dinghy, and that is likely cause for some concern.<\/p>\n<p>The most recent departure is Rosie Campbell, who previously led the Policy Frontiers team. 
In <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.rosiecampbell.xyz\/p\/leaving-openai\" target=\"_blank\">a post on her personal substack<\/a> (via <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tweaktown.com\/news\/102023\/openai-safety-researcher-quits-amid-concerns-about-human-level-ai\/index.html\" target=\"_blank\">Tweak Town<\/a>) Campbell shared the final message she sent to her colleagues in Slack, writing that though she has &#8220;always been strongly driven by the mission of ensuring safe and beneficial [Artificial General Intelligence],&#8221; she now believes that she &#8220;can pursue this more effectively externally.&#8221;<\/p>\n<div class=\"fancy-box\">\n<div class=\"fancy_box-title\">AI, explained<\/div>\n<div class=\"fancy_box_body\">\n<figure class=\"van-image-figure \" >\n<div class='image-full-width-wrapper'>\n<div class='image-widthsetter' >\n<p class=\"vanilla-image-block\" style=\"padding-top:56.25%;\"><img decoding=\"async\" id=\"eQ4QvnT5n24R9f4nQNq5MP\" name=\"GettyImages-1245391728.jpg\" caption=\"\" alt=\"OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen are seen in this illustration photo taken in Krakow, Poland on December 5, 2022.\" src=\"https:\/\/arcader.org\/wp-content\/uploads\/2024\/12\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg\" mos=\"\" link=\"\" align=\"\" fullscreen=\"\" width=\"\" height=\"\" attribution=\"\" endorsement=\"\" class=\"pinterest-pin-exclude\"><\/p>\n<\/div>\n<\/div><figcaption itemprop=\"caption description\" class=\"\"><span class=\"credit\" itemprop=\"copyrightHolder\">(Image credit: Jakub Porzycki\/NurPhoto via Getty Images)<\/span><\/figcaption><\/figure>\n<p class=\"fancy-box__body-text\"><a data-analytics-id=\"inline-link\" href=\"https:\/\/www.pcgamer.com\/software\/ai\/general-intelligence-explained\/\" target=\"_blank\"><strong>What is artificial general 
intelligence?<\/strong><\/a><strong>:<\/strong> We dive into the lingo of AI and what the terms actually mean.<\/p>\n<\/div>\n<\/div>\n<p>Campbell highlights &#8220;the dissolution of the AGI Readiness team&#8221; and the departure of Miles Brundage, another AI safety researcher, as specific factors that informed her decision to leave.<\/p>\n<p>Campbell and Brundage had previously worked together at OpenAI on matters of &#8220;AI governance, frontier policy issues, and AGI readiness.&#8221;<\/p>\n<p>Brundage himself also shared a few of his reasons for parting ways with OpenAI in <a data-analytics-id=\"inline-link\" href=\"https:\/\/milesbrundage.substack.com\/p\/why-im-leaving-openai-and-what-im\" target=\"_blank\">a post to his Substack back in October<\/a>. He writes, &#8220;I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so.&#8221; Previously serving as a Senior Advisor for AGI Readiness, he shares, &#8220;I think I can be more effective externally.&#8221;<\/p>\n<p>This comes mere months after Jan Leike&#8217;s resignation as co-lead of OpenAI&#8217;s Superalignment team. This team was tasked with tackling the problem of ensuring that AI systems potentially more intelligent than humans still act in accordance with human values\u2014and they were expected to solve this problem <a data-analytics-id=\"inline-link\" href=\"https:\/\/openai.com\/index\/introducing-superalignment\/\" target=\"_blank\">within the span of four years<\/a>. 
Talk about a deadline.<\/p>\n<p>While Miles Brundage has described plans to be one of the &#8220;industry-independent voices in the policy conversation,&#8221; Leike is now co-lead of the Alignment Science team at AI rival Anthropic, a startup that has recently received $4 billion of financial backing from Amazon.<\/p>\n<p>At the time of his departure from OpenAI, Leike <a data-analytics-id=\"inline-link\" href=\"https:\/\/x.com\/janleike\/status\/1791498174659715494\" target=\"_blank\">took to X<\/a> to share his thoughts on the state of the company. His comments are direct, to say the least.<\/p>\n<p>&#8220;Building smarter-than-human machines is an inherently dangerous endeavor,&#8221; he wrote, before criticising the company directly: &#8220;OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.&#8221;<\/p>\n<p>He went on to plead, &#8220;OpenAI must become a safety-first AGI company.&#8221;<\/p>\n<p><a data-analytics-id=\"inline-link\" href=\"https:\/\/openai.com\/charter\/\" target=\"_blank\">The company&#8217;s charter<\/a> details a desire to act &#8220;in the best interests of humanity&#8221; towards developing &#8220;safe and beneficial AGI.&#8221; However, OpenAI has grown significantly since its founding in late 2015, and recent corporate moves suggest its priorities may be shifting.<\/p>\n<p>Just for a start, news broke back in September that the company would be <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.pcgamer.com\/software\/ai\/openai-reportedly-plans-to-ditch-its-nonprofit-mission-with-ceo-sam-altman-said-to-be-in-line-to-make-billions\/\" target=\"_blank\">restructuring away from its not-for-profit roots<\/a>.<\/p>\n<p>For another thing, multiple major Canadian media companies are in the process of <a data-analytics-id=\"inline-link\" 
href=\"https:\/\/www.pcgamer.com\/gaming-industry\/to-stop-its-strip-mining-of-journalism-some-of-the-biggest-canadian-news-companies-are-suing-openai-to-the-tune-of-usd20-000-for-every-article-fed-to-chatgpt\/\" target=\"_blank\">suing OpenAI for feeding news articles into their Large Language Models<\/a>. Generally speaking, it&#8217;s hard to see how plagiarism at that scale could be for the good of humanity, and that&#8217;s all without more broadly getting into the <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.pcgamer.com\/software\/ai\/googles-dumb-ai-answers-increased-its-greenhouse-gas-emissions-by-nearly-50-in-the-last-5-years\/\" target=\"_blank\">far-reaching environmental implications of AI<\/a>.<\/p>\n<p>On a similar note, Future PLC, our overseers at PC Gamer, have today <a data-analytics-id=\"inline-link\" href=\"https:\/\/openai.com\/index\/openai-and-future-partner-on-specialist-content\/\" target=\"_blank\">announced a &#8216;strategic partnership&#8217; with OpenAI<\/a> which theoretically aims to bring content from the company&#8217;s brands to ChatGPT as opposed to it just being scraped without the company&#8217;s consent. 
However, the wording of the announcement is vague and full details of the partnership have not yet been published, so we still don&#8217;t know how exactly it&#8217;s going to roll out.<\/p>\n<p>With regards to the continuing development of AI and Large Language Models, I like to think significant course correction is still possible\u2014but you can also understand why I would much rather abandon the good ship AI altogether.<\/p>\n<\/article>\n<p><a 
href=\"https:\/\/www.pcgamer.com\/hardware\/why-safety-researchers-keep-leaving-openai\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A number of senior AI safety research personnel at OpenAI, the organisation behind ChatGPT, have left the company. This wave of resignations often cites shifts within company culture, and a lack of investment in AI safety as reasons for leaving. To put it another way, though the ship may not be taking on water, the safety team are departing in their own little dinghy, and that is likely cause for some concern. The most recent departure is Rosie Campbell, who previously led the Policy Frontiers team. In a post on her personal substack (via Tweak Town) Campbell shared the final message she sent to her colleagues in Slack, writing that though she has &#8220;always been strongly driven by the mission of ensuring safe and beneficial&hellip;<\/p>\n<p class=\"excerpt-more\"><a class=\"blog-excerpt button\" href=\"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":1,"featured_media":1056419,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[336],"tags":[66],"class_list":["post-1056418","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-pc-gamer","tag-hardware"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI&#039;s safety researchers keep leaving | Arcader News<\/title>\n<meta name=\"description\" content=\"A number of senior AI safety research personnel at OpenAI, the organisation behind ChatGPT, have left the company. 
This wave of resignations often cites\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI&#039;s safety researchers keep leaving | Arcader News\" \/>\n<meta property=\"og:description\" content=\"A number of senior AI safety research personnel at OpenAI, the organisation behind ChatGPT, have left the company. This wave of resignations often cites\" \/>\n<meta property=\"og:url\" content=\"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/\" \/>\n<meta property=\"og:site_name\" content=\"Arcade News\" \/>\n<meta property=\"article:published_time\" content=\"2026-01-19T09:47:48+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/arcader.org\/wp-content\/uploads\/2024\/12\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"480\" \/>\n\t<meta property=\"og:image:height\" content=\"270\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Arcade News\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Arcade News\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/\"},\"author\":{\"name\":\"Arcade News\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/#\\\/schema\\\/person\\\/8460f5e5076b52fb2369f2f7ce6f2839\"},\"headline\":\"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI&#8217;s safety researchers keep leaving\",\"datePublished\":\"2026-01-19T09:47:48+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/\"},\"wordCount\":734,\"commentCount\":0,\"image\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/arcader.org\\\/wp-content\\\/uploads\\\/2024\\\/12\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg\",\"keywords\":[\"hardware\"],\"articleSection\":[\"PC 
Gamer\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/\",\"url\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/\",\"name\":\"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI's safety researchers keep leaving | Arcader News\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/arcader.org\\\/wp-content\\\/uploads\\\/2024\\\/12\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg\",\"datePublished\":\"2026-01-19T09:47:48+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/#\\\/schema\\\/person\\\/8460f5e5076b52fb2369f2f7ce6f2839\"},\"description\":\"A number of senior AI safety research personnel at OpenAI, the organisation behind ChatGPT, have left the company. 
This wave of resignations often cites\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/#primaryimage\",\"url\":\"https:\\\/\\\/arcader.org\\\/wp-content\\\/uploads\\\/2024\\\/12\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg\",\"contentUrl\":\"https:\\\/\\\/arcader.org\\\/wp-content\\\/uploads\\\/2024\\\/12\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg\",\"width\":480,\"height\":270,\"caption\":\"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI\u2019s safety researchers keep leaving\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/arcader.org\\\/news\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI&#8217;s safety researchers keep leaving\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/#website\",\"url\":\"https:\\\/\\\/arcader.org\\\/news\\\/\",\"name\":\"Arcade News\",\"description\":\"Free Arcade 
News from the Best Online Sources\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/arcader.org\\\/news\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/#\\\/schema\\\/person\\\/8460f5e5076b52fb2369f2f7ce6f2839\",\"name\":\"Arcade News\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g\",\"caption\":\"Arcade News\"},\"sameAs\":[\"https:\\\/\\\/cricketgames.tv\"],\"url\":\"https:\\\/\\\/arcader.org\\\/news\\\/author\\\/arcade-news\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI's safety researchers keep leaving | Arcader News","description":"A number of senior AI safety research personnel at OpenAI, the organisation behind ChatGPT, have left the company. 
This wave of resignations often cites","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/","og_locale":"en_US","og_type":"article","og_title":"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI's safety researchers keep leaving | Arcader News","og_description":"A number of senior AI safety research personnel at OpenAI, the organisation behind ChatGPT, have left the company. This wave of resignations often cites","og_url":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/","og_site_name":"Arcade News","article_published_time":"2026-01-19T09:47:48+00:00","og_image":[{"width":480,"height":270,"url":"https:\/\/arcader.org\/wp-content\/uploads\/2024\/12\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg","type":"image\/jpeg"}],"author":"Arcade News","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Arcade News","Est. 
reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/#article","isPartOf":{"@id":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/"},"author":{"name":"Arcade News","@id":"https:\/\/arcader.org\/news\/#\/schema\/person\/8460f5e5076b52fb2369f2f7ce6f2839"},"headline":"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI&#8217;s safety researchers keep leaving","datePublished":"2026-01-19T09:47:48+00:00","mainEntityOfPage":{"@id":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/"},"wordCount":734,"commentCount":0,"image":{"@id":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/#primaryimage"},"thumbnailUrl":"https:\/\/arcader.org\/wp-content\/uploads\/2024\/12\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg","keywords":["hardware"],"articleSection":["PC Gamer"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/","url":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/","name":"Shifting corporate priorities, Superalignment, and 
safeguarding humanity: Why OpenAI's safety researchers keep leaving | Arcader News","isPartOf":{"@id":"https:\/\/arcader.org\/news\/#website"},"primaryImageOfPage":{"@id":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/#primaryimage"},"image":{"@id":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/#primaryimage"},"thumbnailUrl":"https:\/\/arcader.org\/wp-content\/uploads\/2024\/12\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg","datePublished":"2026-01-19T09:47:48+00:00","author":{"@id":"https:\/\/arcader.org\/news\/#\/schema\/person\/8460f5e5076b52fb2369f2f7ce6f2839"},"description":"A number of senior AI safety research personnel at OpenAI, the organisation behind ChatGPT, have left the company. This wave of resignations often 
cites","breadcrumb":{"@id":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/#primaryimage","url":"https:\/\/arcader.org\/wp-content\/uploads\/2024\/12\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg","contentUrl":"https:\/\/arcader.org\/wp-content\/uploads\/2024\/12\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving.jpg","width":480,"height":270,"caption":"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI\u2019s safety researchers keep leaving"},{"@type":"BreadcrumbList","@id":"https:\/\/arcader.org\/news\/shifting-corporate-priorities-superalignment-and-safeguarding-humanity-why-openais-safety-researchers-keep-leaving\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/arcader.org\/news\/"},{"@type":"ListItem","position":2,"name":"Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI&#8217;s safety researchers keep leaving"}]},{"@type":"WebSite","@id":"https:\/\/arcader.org\/news\/#website","url":"https:\/\/arcader.org\/news\/","name":"Arcade News","description":"Free Arcade News from the Best Online 
Sources","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/arcader.org\/news\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/arcader.org\/news\/#\/schema\/person\/8460f5e5076b52fb2369f2f7ce6f2839","name":"Arcade News","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g","caption":"Arcade News"},"sameAs":["https:\/\/cricketgames.tv"],"url":"https:\/\/arcader.org\/news\/author\/arcade-news\/"}]}},"_links":{"self":[{"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/posts\/1056418","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/comments?post=1056418"}],"version-history":[{"count":1,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/posts\/1056418\/revisions"}],"predecessor-version":[{"id":1467425,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/posts\/1056418\/revisions\/1467425"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/media\/1056419"}],"wp:attachment":[{"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/media?parent=1056418"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/arcader.org\/news\/wp
-json\/wp\/v2\/categories?post=1056418"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/tags?post=1056418"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}