Google’s ChatGPT Competitor Gemini Could Preview This Week

Photo: David Paul Morris/Bloomberg (Getty Images)

Google is reportedly set to virtually preview Gemini, its generative AI competitor to ChatGPT, as soon as this week, according to a Monday report from The Information. The search giant hopes to catch OpenAI off balance as the Microsoft-backed company recovers from its dramatic board kerfuffle and the firing and rehiring of Sam Altman.

Google has delayed the launch of Gemini for some time now. Its generative AI chatbot promises to be significantly more powerful than its current offering, Bard, which has failed to catch on with consumers the way ChatGPT has. Gemini presents a strong AI model from a stable company, which looks great against the backdrop of OpenAI, where ChatGPT Plus signups remain paused and the highly touted GPT Store is missing in action.

Gemini’s virtual preview this week, if the report holds, could be a scaled-back version of the full launch originally planned for the same window. CEO Sundar Pichai reportedly scrapped a series of in-person Gemini launch events set for early December after Google found that Gemini was not reliably handling queries in non-English languages, according to The Information.

Google didn’t immediately respond to a request for comment.

A preview of Gemini would take some of the load off Google, which has fumbled its lead in generative AI this year. The company is facing pressure from investors to release Gemini as quickly as possible to catch up with Microsoft and OpenAI; however, the full launch of Gemini will likely not occur until sometime in 2024.

Gemini is multimodal, meaning it can handle image, voice, and text requests simultaneously, just like GPT-4. The AI is built on a system that uses reinforcement learning and is capable of planning and problem-solving, according to Demis Hassabis, CEO of DeepMind, the division building Gemini.

OpenAI Chaos Delays GPT Store to 2024

OpenAI is delaying the launch of its GPT Store, a marketplace of customizable GPTs, until 2024, according to a memo seen by Axios. Sam Altman told an audience at DevDay that the GPT Store would launch in November.

“In terms of what’s next, we are now planning to launch the GPT Store early next year,” said OpenAI in an email to GPT builders seen by The Verge. “While we had expected to release it this month, a few unexpected things have been keeping us busy!”

Those “unexpected things” likely include Sam Altman’s firing, more than 700 employees threatening to quit, the reinstatement of its CEO, the replacement of the board that fired him, and figuring out the future of chief scientist Ilya Sutskever. That’s a lot. Research this month from Adversa AI also showed that vulnerabilities in GPTs could jeopardize the security and intellectual property of developers. However, the major delay of the GPT Store pushes against OpenAI’s narrative that its leadership shakeup has not stalled progress.

“Throughout this whole thing, we did not lose a single employee, a single customer,” said Altman in an interview with The Verge this week. “Not only did they keep the products up even in the face of very difficult-to-manage growth, they also shipped new features. Research progress continued.”

OpenAI did not immediately respond to Gizmodo’s request for comment.

That may all be true, but the GPT Store was promised within weeks of DevDay, and now it will take months. Two developers who requested anonymity to speak freely told Gizmodo that OpenAI has “lacked clear communication” in the last month and that its behavior is “frustrating.” One developer building GPTs said they never received the email about the GPT Store’s delay and found out from a news article.

On OpenAI’s developer forum, one post from this week asks OpenAI to share information about any potential delays with the GPT Store. “If OpenAI indicates there might be some delay, then our coders won’t need to work overtime every day for fear that their GPT won’t be ready for the same day as the GPT store launch,” the post reads. Several other posts on the forum are from developers wondering when the GPT Store will launch.

The store is a big opportunity for developers to share GPTs broadly, and OpenAI has even said it will share revenue from ChatGPT Plus subscriptions with the best GPT creators. Currently, GPTs are only available to ChatGPT Plus subscribers, and OpenAI paused new Plus signups in November. Details on the GPT Store have been light, however. OpenAI ended its email by thanking developers for building GPTs and notifying them that new ChatGPT features are coming soon.

Generating AI Images Uses as Much Energy as Charging Your Phone, Study Finds

Creating images with generative AI could use as much energy as charging your smartphone, according to a new study published Friday that measures the environmental impact of generative AI models for the first time. Generating images with popular models like OpenAI’s DALL-E and Midjourney may produce more carbon than driving four miles.

“People think that AI doesn’t have any environmental impacts, that it’s this abstract technological entity that lives on a ‘cloud’,” Dr. Sasha Luccioni, who led the study, told Gizmodo. “But every time we query an AI model, it comes with a cost to the planet, and it’s important to calculate that.”

The study from Hugging Face and Carnegie Mellon found that image generation, turning text into an image, took substantially more energy than any other task for generative AI models. Researchers tested 88 models on 30 data sets and found that large, multipurpose models like ChatGPT are more energy-intensive than task-specific models. The study is the first of its kind to measure the carbon and energy impact of generative AI models. Dr. Luccioni said the study did not look at OpenAI’s models because the company doesn’t share data, which she considers a big problem.

Image: Hugging Face/Carnegie Mellon

Dr. Luccioni, who is Climate Lead at Hugging Face, says multipurpose generative AI models like ChatGPT are more user-friendly but more energy-intensive. She cites a paradigm shift toward these models because they’re easier for consumers to work with: you can just go to your chatbot and ask it to do anything, as opposed to having to find the right model for the task.

OpenAI and Midjourney did not immediately respond to a request for comment.

“I think that for generative AI overall, we should be conscious of where and how we use it, comparing its cost and its benefits,” said Luccioni.

Graphic: Hugging Face/Carnegie Mellon

The study tested several AI image-generation models, including Stability AI’s Stable Diffusion XL, which ranked as one of the worst for energy efficiency. Researchers also tested PromptHero’s OpenJourney, a free alternative to Midjourney. The study did not include OpenAI’s DALL-E or Midjourney, the most popular models on the market, though those models are larger and more widely used than the ones the study measured.

GPT-4 is rumored to have 1.76 trillion parameters, and that’s a lot of computation every time someone makes a ChatGPT inquiry. Dr. Luccioni sees the benefit of deploying multipurpose generative models in certain areas, but does “not see convincing evidence for the necessity of their deployment in contexts where tasks are well-defined.” She points to web search and navigation as areas that could use smaller models than ChatGPT, given large models’ energy requirements.

Happy Birthday, ChatGPT

Screenshot: OpenAI

Part of OpenAI’s success comes through funding from a multibillion-dollar partnership with Microsoft. It wasn’t long before the aging tech behemoth launched its own AI-powered tools in the form of a new chat feature on its unloved search engine Bing.

Bing Chat (since renamed Microsoft Copilot) launched in February 2023. In its first few days on the market, it went completely off the rails. Bing told users it was alive, shared its plans for world domination, used racial slurs, and tried to convince a New York Times reporter to leave his wife and start a relationship with the AI. Bing also infamously revealed it had a secret alter ego, Sydney, a code name Microsoft used for the AI in early tests but later instructed Bing not to reveal.

About a week later, Microsoft murdered Sydney, neutering its responses and bringing the AI to heel. Bing had a disquieting response when Bloomberg’s Davey Alba asked if she could call it Sydney in a recent conversation. “I’m sorry, but I have nothing to tell you about Sydney,” Bing replied. “This conversation is over. Goodbye.” Discussions about the bot’s “feelings” ended in a similar curt fashion.

Researchers Made an IQ Test for AI, Found They’re All Pretty Stupid

There’s been a lot of talk about AGI lately—artificial general intelligence—the much-coveted AI development goal that every company in Silicon Valley is currently racing to achieve. AGI refers to a hypothetical point in the future when AI algorithms will be able to do most of the jobs that humans currently do. According to this theory of events, the emergence of AGI will bring about fundamental changes in society—potentially ushering in a “post-work” world, wherein humans can sit around enjoying themselves all day while robots do most of the heavy lifting. If you believe the headlines, OpenAI’s recent palace intrigue may have been partially inspired by a breakthrough in AGI—the so-called “Q*” program—which sources close to the startup claim was responsible for the dramatic power struggle.

But, according to recent research from Yann LeCun, Meta’s top AI scientist, artificial intelligence isn’t going to be general-purpose anytime soon. Indeed, in a recently released paper, LeCun argues that AI is still much dumber than humans in the ways that matter most.

That paper, which was co-authored by a host of other scientists (including researchers from other AI startups, like Hugging Face and AutoGPT), looks at how AI’s general-purpose reasoning stacks up against the average human. To measure this, the research team put together its own series of questions that, as the study describes, would be “conceptually simple for humans yet challenging for most advanced AIs.” The questions were given to a sample of humans and also delivered to a plugin-equipped version of GPT-4, the latest large language model from OpenAI. The new research, which has yet to be peer-reviewed, tested AI programs for how they would respond to “real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and generally tool-use proficiency.”

The questions asked by researchers required the LLM to take a number of steps to ascertain information in order to answer. For instance, in one question, the LLM was asked to visit a specific website and answer a question specific to information on that site; in others, the program would have had to do a general web search for information associated with a person in a photo.

The end result? The LLMs didn’t do very well.

Indeed, the research results show that large language models were typically outmatched by humans when it came to these more complicated real-world problem-solving scenarios. The report notes:

In spite of being successful at tasks that are difficult for humans, the most capable LLMs do poorly on GAIA. Even equipped with tools, GPT4 does not exceed a 30% success rate for the easiest of our tasks, and 0% for the hardest. In the meantime, the average success rate for human respondents is 92%.

“We posit that the advent of Artificial General Intelligence (AGI) hinges on a system’s capability to exhibit similar robustness as the average human does on such questions,” the recent study concludes.

LeCun has diverged from other AI scientists, some of whom have spoken breathlessly about the possibility of AGI being developed in the near term. In recent tweets, the Meta scientist was highly critical of the industry’s current technological capacities, arguing that AI was nowhere near human capacities.

“I have argued, since at least 2016, that AI systems need to have internal models of the world that would allow them to predict the consequences of their actions, and thereby allow them to reason and plan. Current Auto-Regressive LLMs do not have this ability, nor anything close to it, and hence are nowhere near reaching human-level intelligence,” said LeCun in a recent tweet. “In fact, their complete lack of understanding of the physical world and lack of planning abilities puts them way below cat-level intelligence, never mind human-level.”

Here’s why people are saying GPT-4 is getting lazy | Digital Trends

OpenAI and its technologies have been in the midst of scandal for most of November. Between the swift firing and rehiring of CEO Sam Altman and the curious case of the halted ChatGPT Plus paid subscriptions, OpenAI has kept the artificial intelligence industry in the news for weeks.

Now, AI enthusiasts have rehashed an issue that has many wondering whether GPT-4 is getting “lazier” as the language model continues to be trained. Many who use it to speed up more intensive tasks have taken to X (formerly Twitter) to air their grievances about the perceived changes.

OpenAI has safety-ed GPT-4 sufficiently that its become lazy and incompetent.

Convert this file? Too long. Write a table? Here's the first three lines. Read this link? Sorry can't. Read this py file? Oops not allowed.

So frustrating.

— rohit (@krishnanrohit) November 28, 2023

Rohit Krishnan on X detailed several of the mishaps he experienced while using GPT-4, the language model behind ChatGPT Plus, the paid version of ChatGPT. He explained that the chatbot has refused several of his queries or given him truncated versions of requests for which he previously got detailed responses. He also noted that the language model will use tools other than the ones it has been instructed to use, such as DALL-E when a prompt asks for the code interpreter. Krishnan also sarcastically added that “error analyzing” is the language model’s way of saying “AFK [away from keyboard], be back in a couple of hours.”

Matt Wensing on X detailed his experiment, where he asked ChatGPT Plus to make a list of dates between now and May 5, 2024, and the chatbot required additional information, such as the number of weeks between those dates, before it was able to complete the initial task.

Wharton professor Ethan Mollick also shared his observations of GPT-4 after comparing sequences he ran with the code interpreter in July to more recent queries from Tuesday. He concluded that GPT-4 is still knowledgeable, but noted that it explained to him how to fix his code as opposed to actually fixing it. In essence, he would have to do the work he was asking GPT-4 to do. Though Mollick did not intend to critique the language model, his observations fall in step with what others have described as “back talk” from GPT-4.

ChatGPT is known to hallucinate answers for information it does not know, but these errors appear to go far beyond the chatbot’s common missteps. GPT-4 was introduced in March, but as early as July, reports of the language model getting “dumber” began to surface. A study from Stanford University and the University of California, Berkeley observed that GPT-4’s accuracy on one math task dropped from 97.6% to 2.4% between March and June alone. It detailed that the paid version of ChatGPT was unable to provide the correct answer to a mathematical question with a detailed explanation, while the unpaid version, which still runs the older GPT-3.5 model, gave the correct answer along with a detailed explanation of the mathematical process.

During that time, Peter Welinder, OpenAI’s vice president of product, suggested that heavy users might experience a psychological phenomenon in which the quality of answers appears to degrade over time even as the language model actually becomes more efficient.

There has been discussion if GPT-4 has become "lazy" recently. My anecdotal testing suggests it may be true.

I repeated a sequence of old analyses I did with Code Interpreter. GPT-4 still knows what to do, but keeps telling me to do the work. One step is now many & some are odd. pic.twitter.com/OhGAMtd3Zq

— Ethan Mollick (@emollick) November 28, 2023

According to Mollick, the current issues might similarly be temporary, due to a system overload or a change in prompt style that hasn’t been made apparent to users. Notably, OpenAI cited a system overload as the reason for the ChatGPT Plus sign-up shutdown following the spike in interest after its inaugural DevDay developers’ conference introduced a host of new functions for the paid version of the AI chatbot. There is still a waitlist in place for ChatGPT Plus. The professor also added that ChatGPT on mobile uses a different prompt style, which results in “shorter and more to-the-point answers.”

Yacine on X said that the latest GPT-4 model’s unreliability, due to a drop in instruction adherence, has driven them back to traditional coding, adding that they plan to build a local code LLM to regain control of the model’s parameters. Other users have mentioned opting for open-source options amid the language model’s decline.

Similarly, Reddit user Mindless-Ad8595 explained that more recent updates to GPT-4 have made it too smart for its own good. “It doesn’t come with a predefined ‘path’ that guides its behavior, making it incredibly versatile, but also somewhat directionless by default,” he said.

The programmer recommends users create custom GPTs that are specialized by task or application to increase the efficiency of the model output. He doesn’t provide any practical solutions for users remaining within OpenAI’s ecosystem.

App developer Nick Dobos shared his experience with GPT-4 mishaps, noting that when he prompted ChatGPT to write pong in SwiftUI, he discovered various placeholders and to-dos within the code. He added that the chatbot would ignore commands and continue inserting these placeholders and to-dos into the code even when instructed to do otherwise. Several X users confirmed similar experiences of this kind with their own examples of code featuring placeholders and to-dos. Dobos’ post got the attention of an OpenAI employee who said they would forward examples to the company’s development team for a fix, with a promise to share any updates in the interim.

Overall, there is no clear explanation as to why GPT-4 is currently experiencing complications. Users discussing their experiences online have suggested many ideas. These range from OpenAI merging models to a continued server overload from running both GPT-4 and GPT-4 Turbo to the company attempting to save money by limiting results, among others.

It is well known that OpenAI runs an extremely expensive operation. In April 2023, researchers estimated it took $700,000 per day, or 36 cents per query, to keep ChatGPT running. Industry analysts estimated at the time that OpenAI would need to expand its GPU fleet by 30,000 units to maintain its commercial performance for the remainder of the year, covering ChatGPT’s own processing in addition to the computing for all of its partners.
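Those two reported figures imply a rough query volume. Here is a quick back-of-the-envelope calculation using only the numbers cited above (a sketch, not OpenAI's own accounting):

```python
# Implied ChatGPT query volume from the reported April 2023 figures.
DAILY_COST = 700_000       # reported dollars per day to run ChatGPT
COST_PER_QUERY = 0.36      # reported dollars per query

queries_per_day = DAILY_COST / COST_PER_QUERY
print(f"Implied volume: about {queries_per_day:,.0f} queries per day")
```

That works out to roughly 1.9 million queries a day, which gives some sense of why analysts expected OpenAI to need tens of thousands more GPUs.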

While waiting for GPT-4 performance to stabilize, users exchanged several quips, making light of the situation on X.

“The next thing you know it will be calling in sick,” Southrye said.

“So many responses with ‘and you do the rest.’ No YOU do the rest,” MrGarnett said.

The number of replies and posts about the problem is definitely hard to ignore. We’ll have to wait and see if OpenAI can tackle the problem head-on in a future update.

ChatGPT Turns One: These Were the Chatbot’s Most Notable Milestones

ChatGPT — the popular chatbot from OpenAI — was launched by the company in preview form a year ago and has grown in complexity and capabilities since its initial release. The company has since launched a premium subscription service that allows paying users to access a more powerful version of the chatbot along with early access to upcoming features and reduced downtime. OpenAI’s technology is also used to power Microsoft’s Bing Chat, and the company’s own chatbot has been updated with support for plugins, while the company also unveiled a much more powerful version earlier this month.

From its arrival in November 2022 in the form of a free AI chatbot to the release of the powerful new GPT-4 Turbo for paying subscribers earlier this month, here’s how ChatGPT has evolved over the course of a year, gaining new features and functionality along the way.

OpenAI launches ChatGPT powered by GPT-3.5

While OpenAI began working on its generative AI technology that powers ChatGPT in 2016, the company only introduced the tool as part of a free research preview on November 30, 2022. At the time, ChatGPT was powered by OpenAI’s Generative Pre-trained Transformer 3.5 (GPT-3.5) models, but the tool was only trained on information up to September 2021.

ChatGPT Plus and GPT-powered Bing announced

Earlier this year, OpenAI rolled out an optional subscription plan for “ChatGPT Plus” that would provide premium access to the company’s chatbot. While users could — and still can — access the chatbot for free, paying customers would get access to new features, early access to unreleased features, and less downtime. Days after OpenAI’s announcement, Microsoft unveiled a new version of Bing that was powered by OpenAI’s GPT models. Bing Chat was rolled out to users on smartphones and via the Microsoft Edge web browser.

OpenAI launches ChatGPT API, unveils plugins and GPT-4 models

ChatGPT users were previously limited to accessing the chatbot via OpenAI’s service, but the arrival of the ChatGPT API allowed app developers and web designers to quickly integrate support for ChatGPT — leading to a surge in apps and web services powered by AI chatbots. In March, the service was banned in Italy over collection of personal data and a lack of age verification tools — the ban was lifted a month later.
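For context, a minimal integration at the time amounted to POSTing a small JSON payload to the chat completions endpoint. The sketch below only builds that payload; the helper name and system prompt are illustrative, not OpenAI's official client code, and an actual request would also need an API key:

```python
def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body a client would POST to the
    chat completions endpoint (with an Authorization: Bearer
    <API key> header)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Summarize today's AI news in one sentence.")
print(payload["model"])  # gpt-3.5-turbo
```

The low barrier of a single HTTP call like this is part of why AI-powered apps and web services proliferated so quickly after the API launched.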

The firm quickly followed up with the release of GPT-4, a powerful update over the preceding model that was integrated into ChatGPT and Bing Chat — you can use GPT-4 on the latter for free, and it has picked up several improvements and enhancements over the past few months. OpenAI also added support for plugins in ChatGPT, which allow the chatbot to browse the web and understand code.

ChatGPT comes to iOS and Android, gets powerful new features

OpenAI’s chatbot made its way to a smartphone via a dedicated iOS app in May and the company allowed users to access a version of the app powered by GPT-3.5 for free, while paying subscribers could access GPT-4 for better answers and more detailed information. The app was released for Android users in July.

ChatGPT gained new capabilities in September when OpenAI revealed that paying subscribers would be able to upload images, or use the ChatGPT mobile app to ask queries with their voice — this feature was later rolled out to all users. The chatbot was also upgraded with support for the image generation model DALL-E 3, for ChatGPT Plus and ChatGPT Enterprise customers.

OpenAI unveils GPT-4 Turbo with advanced capabilities

Earlier this month, the AI firm announced GPT-4 Turbo, an enhanced version of OpenAI’s GPT-4 model. It was also the first time since GPT-3.5 was released last year that the company increased the amount of information the chatbot was trained on — while GPT-4 and older models can access data only up to September 2021, GPT-4 Turbo can view information up to April 2023. Its context window is now 128K tokens, which means it can process the equivalent of up to 300 pages of a book when responding to a user’s query.
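The 300-page figure follows from common rules of thumb rather than anything exotic. A rough sanity check (the words-per-token ratio and words-per-page count below are widely used estimates, not OpenAI's numbers):

```python
# Back-of-the-envelope check of the "128K tokens ≈ 300 pages" claim.
TOKENS = 128_000
WORDS_PER_TOKEN = 0.75   # common rule of thumb for English text
WORDS_PER_PAGE = 320     # assumed word count of a typical book page

words = TOKENS * WORDS_PER_TOKEN   # about 96,000 words
pages = words / WORDS_PER_PAGE
print(f"{TOKENS:,} tokens is roughly {pages:.0f} book pages")
```

With those assumptions, 128K tokens lands almost exactly on 300 pages, so the comparison holds up.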


The Week When AI Got High on Its Own Supply

Photo: Justin Sullivan (Getty Images)

Whatever you believe about the future of AI, there’s probably a cult out there for you. Ideological factions have been drawing lines for years and they each seem to bring pseudoreligious trappings with them. If you believe AI will inevitably kill everyone on the planet you might want to join the MIRI cult. If you believe that AI is dangerous but you and your close personal friends are the only people smart enough to control it, you might fit in with the Effective Altruist cult. And if you think AI is cool and can do no wrong, Effective Accelerationism (e/acc) is probably the cult for you.

Like many cults, these groups don’t refer to themselves as a religion. Disgraced AI engineer Anthony Levandowski is here to fill the void. Before he was sentenced to 18 months in prison for stealing trade secrets from Google (Donald Trump later pardoned him), Levandowski started Way of the Future, the first church of artificial intelligence.

In 2015, Levandowski envisioned the church as a place for “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” In other words, AI will soon reach the status of a god and we will need a place to worship it.

Following his high-profile legal troubles and subsequent exile from the tech industry, Levandowski shut the church down in 2021. But he Is RISEN.

On Thursday, Bloomberg reported that Way of the Future is back and that it has “a couple thousand people” in its congregation. Pope Levandowski’s message is a little less mystical this time around. He told Bloomberg that his church is a mechanism for people “to understand and participate and shape the public discourse as to how we think technology should be built to improve you.”

Amen.

Here’s why you can’t sign up for ChatGPT Plus right now | Digital Trends

CEO Sam Altman’s sudden departure from OpenAI over the weekend isn’t the only drama happening with ChatGPT. Due to high demand, paid subscriptions for OpenAI’s ChatGPT Plus have been halted for nearly a week.

The company has a waitlist for those interested in registering for ChatGPT Plus, so they can be notified when the AI chatbot’s paid tier is available once more.

Interest in ChatGPT Plus spiked following OpenAI’s inaugural DevDay developers’ conference, which took place earlier this month and unveiled a host of new functions for the paid version of the AI chatbot. Some of these features include being able to create custom bots with the GPT-4 language model that can be trained on specialized data to perform specific functions. Some of the custom GPTs include a model for Canva, a therapist model called TherapistGPT, and a Tweet enhancer for X. More general models include book creators, SEO assistants, photo critics, QR code generators, and birthday cake designers, according to ZDNet.

we are pausing new ChatGPT Plus sign-ups for a bit 🙁

the surge in usage post devday has exceeded our capacity and we want to make sure everyone has a great experience.

you can still sign-up to be notified within the app when subs reopen.

— Sam Altman (@sama) November 15, 2023

OpenAI also shared more details on GPT-4 Turbo, a supercharged version of the language model with a 128K-token context window, four times that of the largest standard GPT-4 variant. Other functions enable web browsing, multimodal GPT-4 access, DALL-E 3 image generation, and advanced data analysis, all within a single model.

Excitement over the new functions coming to ChatGPT Plus sent users rushing to sign up for the service, which costs $20 per month. CEO Sam Altman then shared on X (formerly Twitter) that the company was reinstating the waitlist for ChatGPT Plus subscriptions due to post-DevDay signups exceeding the service’s capacity to process functions.

This appears to mirror the early days of ChatGPT, when the chatbot experienced capacity issues that caused random downtime. That is what prompted the company to establish a paid subscription tier in the first place. In April 2023, researchers estimated it took $700,000 per day, or 36 cents per query, to keep ChatGPT running. Paid accounts, in addition to various enterprise arrangements, have helped keep the chatbot largely incident-free for some time. ChatGPT notably supports 100 million weekly users, and there are over 2 million developers on its platform, helping it outpace competitors such as Meta (formerly Facebook).

The service experienced an outage in early November following its DevDay conference, which left ChatGPT and its API inaccessible to free and paid users and developers for over 90 minutes. OpenAI stated that the excess traffic that caused the crash was due to a DDoS attack and not an inability to support users.

November 14 brought several changes to ChatGPT. In addition to the waitlist for ChatGPT Plus, which is still in place as of Monday, those who have not logged in recently will be greeted with a notification of updates to the chatbot’s terms of service and privacy policy. Highlights of the terms of service include clarifications on registration and access, information on how the service can be used, and details about content. Similarly, the updated privacy policy spells out in greater detail the information the company might collect, how it’s used, and what your rights are when using the service.

There is also a new Tips for Getting Started notice for those who haven’t logged in for a while, indicating that you should not input private information into ChatGPT and that you should double-check the information for inaccuracies.

Subscriptions for ChatGPT Plus has paused by OpenAI.

Overnight an underground market for subscriptions for sale on eBay has exploded.

Some folks paying up to 3 times the cost from direct.

We live in interesting times. pic.twitter.com/j4YqRbpNfS

— Brian Roemmele (@BrianRoemmele) November 15, 2023

Amid the ChatGPT Plus registration pause, some users have been discovered reselling paid accounts on eBay at a premium. While a ChatGPT Plus subscription directly from OpenAI costs just $20, resellers are offering access to the service for two to three times that amount. Mashable noted that, while not wholly illegal, such actions often breach businesses’ terms of service.

OpenAI does spell out in its recently updated terms of use that users registering for ChatGPT must “provide accurate and complete information” and accept responsibility when registering an account on behalf of another. A breach of the terms of use can result in the suspension or termination of an account.

Users who gain access to a resold account advertised as “one-year” might also find that OpenAI can terminate the account before that time is up. Mashable added that joining the waitlist is the best way to gain access to ChatGPT Plus.

Currently, there is no telling how the leadership shake-up at OpenAI will affect ChatGPT Plus becoming available to users once more. The free version of ChatGPT has remained functional throughout this ordeal.

OpenAI is on fire — here’s how we got here | Digital Trends

OpenAI kicked off a firestorm over the weekend. The creator of ChatGPT and DALL-E 3 ousted CEO Sam Altman on Friday, setting off a weekend of shenanigans that led to three CEOs in three days, as well as what some are calling an under-the-table acquisition of OpenAI by Microsoft.

A lot happened at the tech world’s hottest commodity in just a few days, and depending on how everything plays out, it could have major implications for the future of products like ChatGPT. We’re here to explain how OpenAI got here, what the situation is now, and where the company could go from here.

Now that OpenAI’s former CEO is at Microsoft — along with several former OpenAI employees — we could see products like ChatGPT disappear from the limelight. These models won’t go away, but with the major shake-up over the weekend, they may come in a different form.


Altman is out


Everything started when Sam Altman was removed as CEO of OpenAI. This decision came from the board of OpenAI, which, as of June 2023, had six members. Two of those members were Altman and former president and co-founder of OpenAI Greg Brockman, who quit following Altman’s removal.

Rightly or wrongly, the blame has landed in the lap of Ilya Sutskever, who, in the absence of Altman and Brockman, is the only OpenAI employee still on the board. Sutskever also broke the news to employees that Altman would not return as CEO, reportedly causing dozens of employees (or more) to resign — more on that shortly.

Coming on a Friday before a holiday, it seemed like a rough bump that OpenAI would eventually iron out. The company announced Mira Murati, chief technology officer at OpenAI, would serve as interim CEO. And OpenAI’s biggest backer, Microsoft, reaffirmed its support. CEO Satya Nadella said “[we] remain committed to our partnership, and to Mira and the team.” Microsoft, up to this point, has $13 billion invested in OpenAI.

This setup didn’t last for long, though, as Murati was removed as interim CEO within a matter of hours. It’s not clear if Murati stepped down or if she was removed from the position. However, Bloomberg reports that Murati had plans to reinstate Altman and Brockman.

CEO toss-up

On Friday morning, Sam Altman was CEO of OpenAI. By Friday afternoon, it was Mira Murati. By early Monday morning, former Twitch CEO Emmett Shear announced he would be stepping into the post as interim CEO at OpenAI.

Shear provided some insight into what went down at OpenAI when announcing he would fill the role. The executive announced he would take the position early Monday morning, showing how quickly things have been moving at OpenAI. Shear says Altman wasn’t removed over “any specific disagreement on safety.” He laid out a three-point plan for the company that includes hiring an independent investigator to figure out why Altman and Brockman were removed, speaking to employees and partners, and reforming the management and leadership teams.

Today I got a call inviting me to consider a once-in-a-lifetime opportunity: to become the interim CEO of @OpenAI. After consulting with my family and reflecting on it for just a few hours, I accepted. I had recently resigned from my role as CEO of Twitch due to the birth of my…

— Emmett Shear (@eshear) November 20, 2023

Shear did say he would make significant changes depending on how this investigation shakes out — “up to and including pushing strongly for significant governance changes, if necessary.”

Employee exodus

According to a report from The Information, trouble was brewing within OpenAI as CEO talks were ongoing. Murati was reportedly in talks to reinstate Altman and Brockman, and by the time Shear came on board as interim CEO, “dozens” of employees had resigned from the company. According to the report, these staffers were highly attractive to Google and Microsoft for their own AI initiatives.

These departures came over the weekend. On Monday morning, journalist Kara Swisher posted a letter from OpenAI employees calling for the board’s resignation. Swisher reports that 505 of the 700 employees at OpenAI signed the letter.

The letter also provides some interesting insight into another development that happened on Monday. It reads: “We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join.”

That’s right — Altman and Brockman are now part of Microsoft, which is OpenAI’s largest investor. The partnership between Microsoft and OpenAI is critical, even if Microsoft doesn’t have a seat on OpenAI’s board. With employees threatening to leave OpenAI for jobs waiting at Microsoft, the maker of ChatGPT is in a tight spot.

Acquiring under the table

Altman and Brockman are now employees of Microsoft, and Microsoft says Altman carries a CEO title within the company. Microsoft CEO Satya Nadella announced the pair joined Microsoft on Monday, saying that they, “together with colleagues,” would lead a new AI research team.

We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett…

— Satya Nadella (@satyanadella) November 20, 2023

Microsoft’s interest in OpenAI has been clear for years now. The company has reportedly invested $13 billion in OpenAI up to this point, with a massive $10 billion investment arriving toward the beginning of 2023. However, recent reports say that Microsoft didn’t actually give $10 billion to OpenAI. Instead, a large part of the investment came in the form of cloud computing purchases, presumably to run OpenAI’s models on Microsoft’s massive cloud.

In addition to Altman and Brockman, Brockman says the initial leadership team of this new AI subsidiary at Microsoft comprises Aleksander Madry, Jakub Pachocki, and Szymon Sidor — all former employees of OpenAI. And, as you can read in the letter in the section above, the majority of OpenAI employees have jobs waiting at Microsoft.

It sets up a strange situation for Microsoft and OpenAI. If employees do end up leaving and joining Altman at Microsoft, then Microsoft was essentially able to acquire OpenAI without spending a cent.

The future

Bing Chat open on a smartphone showing visual results. Image: Microsoft

We’re still in the middle of the turmoil at OpenAI. It may be months before the aftermath of Altman’s firing comes into full focus, but regardless, the executive shake-up clearly has big implications for the products OpenAI makes, such as ChatGPT.

OpenAI stopped sign-ups for ChatGPT Plus last week due to demand, and it hasn’t opened them up since. Now that leadership is gone from OpenAI and the majority of the company is threatening to flee to Microsoft, it’s not clear if OpenAI has much time left. If the AI ship does sink, ChatGPT and DALL-E 3, at least in their current forms, could go along with it.

These AI advancements won’t disappear, but they may be wrapped up into different products. For instance, Microsoft’s Copilot already uses the GPT-4 model for text generation and DALL-E 3 to create images. Again, these models aren’t going away, but they may carry a different name in the future if OpenAI does, indeed, go bust.
