{"id":1031505,"date":"2026-01-15T02:41:21","date_gmt":"2026-01-15T02:41:21","guid":{"rendered":"http:\/\/qpGW9j5fJKAvX9tAskfnPj"},"modified":"2026-01-15T02:41:21","modified_gmt":"2026-01-15T02:41:21","slug":"open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger","status":"publish","type":"post","link":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/","title":{"rendered":"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger"},"content":{"rendered":"<article>\n<p>Ilya Sutskever, co-founder of <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.pcgamer.com\/software\/ai\/please-halt-this-activity-not-so-open-openai-seems-to-have-gone-full-mob-boss-sending-threatening-emails-to-anyone-who-asks-its-latest-ai-models-probing-questions\/\" target=\"_blank\" rel=\"noopener\">OpenAI<\/a>, thinks existing approaches to scaling up large language models have plateaued. For significant future progress, AI labs will need to train smarter, not just bigger, and LLMs will need to think a little bit longer.<\/p>\n<p>Speaking to <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.reuters.com\/technology\/artificial-intelligence\/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11\/\" target=\"_blank\" rel=\"noopener\">Reuters<\/a>, Sutskever explained that the pre-training phase of scaling up large language models, such as ChatGPT, is reaching its limits. Pre-training is the initial phase that processes huge quantities of uncategorized data to build language patterns and structures within the model.<\/p>\n<p>Until recently, adding scale, in other words increasing the amount of data available for training, was enough to produce a more powerful and capable model. 
But that&#8217;s no longer the case; instead, exactly what you train the model on, and how, matters more.<\/p>\n<p>\u201cThe 2010s were the age of scaling, now we&#8217;re back in the age of wonder and discovery once again. Everyone is looking for the next thing,\u201d Sutskever reckons, &#8220;scaling the right thing matters more now than ever.\u201d<\/p>\n<p>The backdrop here is the increasingly apparent difficulty AI labs are having in making major advances on models in and around the power and performance of GPT-4.<\/p>\n<p>The short version of this narrative is that everyone now has access to the same, or at least similar, easily accessible training data through various online sources. It&#8217;s no longer possible to get an edge simply by throwing more raw data at the problem. So, in very simple terms, training smarter, not just bigger, is what will now give AI outfits an edge.<\/p>\n<p>Another enabler for LLM performance sits at the other end of the process, when the models are fully trained and accessed by users: the stage known as inferencing.<\/p>\n<p>Here, the idea is to use a multi-step approach to solving problems and queries in which the model can feed back into itself, leading to more human-like reasoning and decision-making.<\/p>\n<p>\u201cIt turned out that having a bot think for just 20 seconds in a hand of poker got the same performance boost as scaling up the model by 100,000x and training it for 100,000 times longer,\u201d Noam Brown, an OpenAI researcher who worked on the latest <a data-analytics-id=\"inline-link\" href=\"https:\/\/openai.com\/o1\/\" target=\"_blank\" rel=\"noopener\">o1 LLM<\/a>, says.<\/p>\n<div class=\"fancy-box\">\n<div class=\"fancy_box-title\">Your next upgrade<\/div>\n<div class=\"fancy_box_body\">\n<figure class=\"van-image-figure \" >\n<div class='image-full-width-wrapper'>\n<div class='image-widthsetter' >\n<p class=\"vanilla-image-block\" style=\"padding-top:56.25%;\"><img decoding=\"async\" 
id=\"tidxyoUY3P2N5A2jEhgSNK\" name=\"nvidia-rtx-4070-12.jpg\" caption=\"\" alt=\"Nvidia RTX 4070 and RTX 3080 Founders Edition graphics cards\" src=\"https:\/\/arcader.org\/wp-content\/uploads\/2024\/11\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg\" mos=\"\" link=\"\" align=\"\" fullscreen=\"\" width=\"\" height=\"\" attribution=\"\" endorsement=\"\" class=\"pinterest-pin-exclude\"><\/p>\n<\/div>\n<\/div><figcaption itemprop=\"caption description\" class=\"\"><span class=\"credit\" itemprop=\"copyrightHolder\">(Image credit: Future)<\/span><\/figcaption><\/figure>\n<p class=\"fancy-box__body-text\"><a data-analytics-id=\"inline-link\" href=\"https:\/\/www.pcgamer.com\/best-cpu-for-gaming\/\" target=\"_blank\" rel=\"noopener\"><strong>Best CPU for gaming<\/strong><\/a>: The top chips from Intel and AMD.<br \/><a data-analytics-id=\"inline-link\" href=\"https:\/\/www.pcgamer.com\/best-gaming-motherboards\/\" target=\"_blank\" rel=\"noopener\"><strong>Best gaming motherboard<\/strong><\/a>: The right boards.<br \/><a data-analytics-id=\"inline-link\" href=\"https:\/\/www.pcgamer.com\/the-best-graphics-cards\/\" target=\"_blank\" rel=\"noopener\"><strong>Best graphics card<\/strong><\/a>: Your perfect pixel-pusher awaits.<br \/><a data-analytics-id=\"inline-link\" href=\"https:\/\/www.pcgamer.com\/best-ssd-for-gaming\/\" target=\"_blank\" rel=\"noopener\"><strong>Best SSD for gaming<\/strong><\/a>: Get into the game ahead of the rest.<\/p>\n<\/div>\n<\/div>\n<p>In other words, having bots think longer rather than just spew out the first thing that comes to mind can deliver better results. If the latter proves a productive approach, the AI hardware industry could shift away from massive training clusters towards banks of GPUs focussed on improved inferencing.<\/p>\n<p>Of course, either way, Nvidia is likely to be ready to take everyone&#8217;s money. 
Rising demand for AI GPUs for inferencing is indeed something Nvidia CEO Jensen Huang has noted.<\/p>\n<p>&#8220;We&#8217;ve now discovered a second scaling law, and this is the scaling law at a time of inference. All of these factors have led to the demand for <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.pcgamer.com\/hardware\/graphics-cards\/nvidia-ceo-brings-out-a-monster-dual-gpu-blackwell-chip-at-gtc-heres-whats-it-tells-us-about-the-next-geforce-graphics-cards\/\" target=\"_blank\" rel=\"noopener\">Blackwell [Nvidia&#8217;s next-gen GPU architecture]<\/a> being incredibly high,&#8221; Huang said recently.<\/p>\n<p>How long it will take for a generation of cleverer bots to appear thanks to these methods isn&#8217;t clear. But the effort will probably show up in Nvidia&#8217;s bank balance soon enough.<\/p>\n<\/article>\n<p><a href=\"https:\/\/www.pcgamer.com\/software\/ai\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Ilya Sutskever, co-founder of OpenAI, thinks existing approaches to scaling up large language models have plateaued. For significant future progress, AI labs will need to train smarter, not just bigger, and LLMs will need to think a little bit longer. Speaking to Reuters, Sutskever explained that the pre-training phase of scaling up large language models, such as ChatGPT, is reaching its limits. Pre-training is the initial phase that processes huge quantities of uncategorized data to build language patterns and structures within the model. Until recently, adding scale, in other words increasing the amount of data available for training, was enough to produce a more powerful and capable model. 
But that&#8217;s not the case any longer, instead exactly what you train the model on and how&hellip;<\/p>\n<p class=\"excerpt-more\"><a class=\"blog-excerpt button\" href=\"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":1,"featured_media":1031506,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[336],"tags":[1997,1622],"class_list":["post-1031505","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-pc-gamer","tag-ai","tag-software"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger | Arcader News<\/title>\n<meta name=\"description\" content=\"Ilya Sutskever, co-founder of OpenAI, thinks existing approaches to scaling up large language models have plateaued. For significant future progress, AI\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger | Arcader News\" \/>\n<meta property=\"og:description\" content=\"Ilya Sutskever, co-founder of OpenAI, thinks existing approaches to scaling up large language models have plateaued. 
For significant future progress, AI\" \/>\n<meta property=\"og:url\" content=\"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/\" \/>\n<meta property=\"og:site_name\" content=\"Arcade News\" \/>\n<meta property=\"article:published_time\" content=\"2026-01-15T02:41:21+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/arcader.org\/wp-content\/uploads\/2024\/11\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"480\" \/>\n\t<meta property=\"og:image:height\" content=\"270\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Arcade News\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Arcade News\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/\"},\"author\":{\"name\":\"Arcade News\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/#\\\/schema\\\/person\\\/8460f5e5076b52fb2369f2f7ce6f2839\"},\"headline\":\"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger\",\"datePublished\":\"2026-01-15T02:41:21+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/\"},\"wordCount\":584,\"commentCount\":0,\"image\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/arcader.org\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg\",\"keywords\":[\"ai\",\"software\"],\"articleSection\":[\"PC 
Gamer\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/\",\"url\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/\",\"name\":\"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger | Arcader News\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/arcader.org\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg\",\"datePublished\":\"2026-01-15T02:41:21+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/#\\\/schema\\\/person\\\/8460f5e5076b52fb2369f2f7ce6f2839\"},\"description\":\"Ilya Sutskever, co-founder of OpenAI, thinks existing approaches to scaling up large language models have plateaued. 
For significant future progress, AI\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/#primaryimage\",\"url\":\"https:\\\/\\\/arcader.org\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg\",\"contentUrl\":\"https:\\\/\\\/arcader.org\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg\",\"width\":480,\"height\":270,\"caption\":\"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/arcader.org\\\/news\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/#website\",\"url\":\"https:\\\/\\\/arcader.org\\\/news\\\/\",\"name\":\"Arcade News\",\"description\":\"Free Arcade News from the 
Best Online Sources\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/arcader.org\\\/news\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/arcader.org\\\/news\\\/#\\\/schema\\\/person\\\/8460f5e5076b52fb2369f2f7ce6f2839\",\"name\":\"Arcade News\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g\",\"caption\":\"Arcade News\"},\"sameAs\":[\"https:\\\/\\\/cricketgames.tv\"],\"url\":\"https:\\\/\\\/arcader.org\\\/news\\\/author\\\/arcade-news\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger | Arcader News","description":"Ilya Sutskever, co-founder of OpenAI, thinks existing approaches to scaling up large language models have plateaued. 
For significant future progress, AI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/","og_locale":"en_US","og_type":"article","og_title":"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger | Arcader News","og_description":"Ilya Sutskever, co-founder of OpenAI, thinks existing approaches to scaling up large language models have plateaued. For significant future progress, AI","og_url":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/","og_site_name":"Arcade News","article_published_time":"2026-01-15T02:41:21+00:00","og_image":[{"width":480,"height":270,"url":"https:\/\/arcader.org\/wp-content\/uploads\/2024\/11\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg","type":"image\/jpeg"}],"author":"Arcade News","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Arcade News","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/#article","isPartOf":{"@id":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/"},"author":{"name":"Arcade News","@id":"https:\/\/arcader.org\/news\/#\/schema\/person\/8460f5e5076b52fb2369f2f7ce6f2839"},"headline":"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger","datePublished":"2026-01-15T02:41:21+00:00","mainEntityOfPage":{"@id":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/"},"wordCount":584,"commentCount":0,"image":{"@id":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/#primaryimage"},"thumbnailUrl":"https:\/\/arcader.org\/wp-content\/uploads\/2024\/11\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg","keywords":["ai","software"],"articleSection":["PC Gamer"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/","url":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/","name":"Open AI co-founder reckons AI training has hit 
a wall, forcing AI labs to train their models smarter not just bigger | Arcader News","isPartOf":{"@id":"https:\/\/arcader.org\/news\/#website"},"primaryImageOfPage":{"@id":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/#primaryimage"},"image":{"@id":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/#primaryimage"},"thumbnailUrl":"https:\/\/arcader.org\/wp-content\/uploads\/2024\/11\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg","datePublished":"2026-01-15T02:41:21+00:00","author":{"@id":"https:\/\/arcader.org\/news\/#\/schema\/person\/8460f5e5076b52fb2369f2f7ce6f2839"},"description":"Ilya Sutskever, co-founder of OpenAI, thinks existing approaches to scaling up large language models have plateaued. For significant future progress, 
AI","breadcrumb":{"@id":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/#primaryimage","url":"https:\/\/arcader.org\/wp-content\/uploads\/2024\/11\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg","contentUrl":"https:\/\/arcader.org\/wp-content\/uploads\/2024\/11\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger.jpg","width":480,"height":270,"caption":"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger"},{"@type":"BreadcrumbList","@id":"https:\/\/arcader.org\/news\/open-ai-co-founder-reckons-ai-training-has-hit-a-wall-forcing-ai-labs-to-train-their-models-smarter-not-just-bigger\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/arcader.org\/news\/"},{"@type":"ListItem","position":2,"name":"Open AI co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter not just bigger"}]},{"@type":"WebSite","@id":"https:\/\/arcader.org\/news\/#website","url":"https:\/\/arcader.org\/news\/","name":"Arcade News","description":"Free Arcade News from the Best Online 
Sources","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/arcader.org\/news\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/arcader.org\/news\/#\/schema\/person\/8460f5e5076b52fb2369f2f7ce6f2839","name":"Arcade News","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/3fea48a614d86edd987bc7bb25f4707c69546d4b1f78ad4aa20b26316bad1f9d?s=96&d=mm&r=g","caption":"Arcade News"},"sameAs":["https:\/\/cricketgames.tv"],"url":"https:\/\/arcader.org\/news\/author\/arcade-news\/"}]}},"_links":{"self":[{"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/posts\/1031505","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/comments?post=1031505"}],"version-history":[{"count":1,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/posts\/1031505\/revisions"}],"predecessor-version":[{"id":1462540,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/posts\/1031505\/revisions\/1462540"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/media\/1031506"}],"wp:attachment":[{"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/media?parent=1031505"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/arcader.org\/news\/wp
-json\/wp\/v2\/categories?post=1031505"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/arcader.org\/news\/wp-json\/wp\/v2\/tags?post=1031505"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}