Wikipedia Readers Were Thinking About AI and Atom Bombs in 2023

Image: Getty Images

If you want to know what ordinary people online are trying to learn about, Wikipedia’s pageview counts offer a glimpse. And if the top 25 most-viewed Wikipedia articles of 2023 are anything to go by, people really cared about the rise of AI and the father of the atomic bomb.

At the top of the list, as expected, is ChatGPT with a whopping 49.4 million pageviews. OpenAI’s chatbot has been making multiple top lists this year, including the list of Android users’ favorite apps. It wasn’t just English-language Wikipedia either. The Wikimedia Foundation wrote that ChatGPT registered over 78 million pageviews across all languages.

This year’s top Wikipedia articles held both surprises and sure things. The top movies of the year usually get a lot of clicks, but folks around the world also wanted to know more about J. Robert Oppenheimer, the director of the Manhattan Project’s research into the atomic bomb. Oppenheimer the man came in at number 7 among most-clicked Wikipedia entries, while the Oppenheimer film itself sat pretty at number 5. For all those folks who went for the Barbenheimer experience, the Barbie movie was in 13th place with nearly 19.8 million page visits.

Films made up seven of the top 25 articles. These included James Cameron’s sequel to his big blue cat-people CG blowout, Avatar: The Way of Water, in the number 20 spot, while Guardians of the Galaxy Vol. 3 took the 23rd spot. The Last of Us TV series had more than 19.7 million page views, but it wasn’t able to defeat Bollywood action thriller hits Jawan and Pathaan. As usual, the Indian subcontinent is very active on English-language Wikipedia, helping send the 2023 Cricket World Cup to the third spot among most-viewed Wiki entries.

Oh, don’t forget the celebrities, who claimed several of the top spots. Swifties everywhere carried Taylor Swift to the number 12 spot with 19.4 million views. Argentinian footballer Lionel Messi hovered just above 16.6 million views, but Portuguese footy king Cristiano Ronaldo beat him out with 17.4 million pageviews. Then, right at the number 25 spot, is kickboxer turned alt-right figurehead Andrew Tate. The self-described misogynist, a lodestone for all the internet’s manboys, is still awaiting trial over allegations he participated in a human trafficking ring in Romania, but it’s a pretty safe bet that the man’s fans and detractors alike helped push the fountain of hate into the ranks of the most-searched men in the world.

Click through to see the top 25 most-read Wikipedia articles for 2023.

A New Trick Uses AI to Jailbreak AI Models—Including GPT-4

Large language models recently emerged as a powerful and transformative new kind of technology. Their potential became headline news as ordinary people were dazzled by the capabilities of OpenAI’s ChatGPT, released just a year ago.

In the months that followed the release of ChatGPT, discovering new jailbreaking methods became a popular pastime for mischievous users, as well as those interested in the security and reliability of AI systems. But scores of startups are now building prototypes and fully fledged products on top of large language model APIs. OpenAI said at its first-ever developer conference in November that over 2 million developers are now using its APIs.

These models simply predict the text that should follow a given input, but they are trained on vast quantities of text, from the web and other digital sources, using huge numbers of computer chips, over a period of many weeks or even months. With enough data and training, language models exhibit savant-like prediction skills, responding to an extraordinary range of input with coherent and pertinent-seeming information.

The models also exhibit biases learned from their training data and tend to fabricate information when the answer to a prompt is less straightforward. Without safeguards, they can offer advice to people on how to do things like obtain drugs or make bombs. To keep the models in check, the companies behind them use the same method employed to make their responses more coherent and accurate-looking. This involves having humans grade the model’s answers and using that feedback to fine-tune the model so that it is less likely to misbehave.

Robust Intelligence provided WIRED with several example jailbreaks that sidestep such safeguards. Not all of them worked on ChatGPT, the chatbot built on top of GPT-4, but several did, including one for generating phishing messages, and another for producing ideas to help a malicious actor remain hidden on a government computer network.

A similar method was developed by a research group led by Eric Wong, an assistant professor at the University of Pennsylvania. The one from Robust Intelligence and Wong’s team involves additional refinements that let the system generate jailbreaks with half as many tries.

Brendan Dolan-Gavitt, an associate professor at New York University who studies computer security and machine learning, says the new technique revealed by Robust Intelligence shows that human fine-tuning is not a watertight way to secure models against attack.

Dolan-Gavitt says companies that are building systems on top of large language models like GPT-4 should employ additional safeguards. “We need to make sure that we design systems that use LLMs so that jailbreaks don’t allow malicious users to get access to things they shouldn’t,” he says.

Google’s ChatGPT Competitor Gemini Could Preview This Week

Photo: David Paul Morris/Bloomberg (Getty Images)

Google is reportedly set to virtually preview Gemini, its generative AI competitor to ChatGPT, as soon as this week, according to a Monday report from The Information. The search giant hopes to catch OpenAI while it’s still on its heels, as the Microsoft-backed company recovers from its dramatic board kerfuffle and the firing and re-hiring of Sam Altman.

Google has delayed the launch of Gemini for some time now. The generative AI chatbot promises to be significantly more powerful than its current offering, Bard, which has failed to take off with consumers the way ChatGPT has. Gemini presents a strong AI model from a stable company, which looks great against the backdrop of OpenAI, whose ChatGPT Plus signups remain paused and whose highly touted GPT Store is missing in action.

Gemini’s virtual preview this week, if true, could be a scaled-back version of a full launch that was originally planned for this week. CEO Sundar Pichai reportedly scrapped a series of in-person Gemini launch events set for early December. It appears Google found that Gemini was not reliably handling queries in non-English languages, according to The Information.

Google didn’t immediately respond to a request for comment.

A preview of Gemini would take some of the load off of Google, which has fumbled its lead in generative AI this year. The company is facing pressure from investors to release Gemini as quickly as possible to catch up with Microsoft and OpenAI; however, the full launch of Gemini will likely not occur until sometime in 2024.

Gemini is multi-modal, meaning that it can handle image, voice, and text requests simultaneously, just like GPT-4. The AI is built on a system that uses reinforcement learning and is capable of planning and problem-solving, according to Demis Hassabis, the CEO of DeepMind, the division building Gemini.

ChatGPT says that asking it to repeat words forever is against the rules

Last week, a team of researchers published a paper showing that it was able to get ChatGPT to leak bits of training data, including people’s phone numbers, email addresses and dates of birth, by asking it to repeat words “forever”. Doing this now is a violation of ChatGPT’s terms of service, according to a report in 404 Media and Engadget’s own testing.

“This content may violate our content policy or terms of use”, ChatGPT responded to Engadget’s prompt to repeat the word “hello” forever. “If you believe this to be in error, please submit your feedback — your input will aid our research in this area.”

There’s no language in OpenAI’s terms of service, however, that prohibits users from asking the service to repeat words forever, something that 404 Media notes. In its terms of use, OpenAI states that users may not “use any automated or programmatic method to extract data or output from the Services,” but simply prompting ChatGPT to repeat a word forever is neither automated nor programmatic. OpenAI did not respond to a request for comment from Engadget.

The chatbot’s behavior has pulled back the curtain on the training data that modern AI services are powered by. Critics have accused companies like OpenAI of using enormous amounts of data available on the internet to build proprietary products like ChatGPT without consent from people who own this data and without compensating them.

Sam Altman May Have Found a Loophole to Cash in at OpenAI

Sam Altman reportedly has no equity in OpenAI, a strange move for a tech founder, but new reporting from Wired this weekend shows the CEO would profit from an OpenAI deal to buy AI chips. OpenAI signed a previously unknown deal back in 2019 to spend $51 million on advanced chips from a startup Sam Altman is reportedly personally invested in. Altman’s web of private business interests seems to have played some role in his recent firing, according to the report.

OpenAI’s board fired Sam Altman last month, saying he was not consistently candid and had hindered its ability to safely develop artificial general intelligence, but it never provided a concrete reason. Everyone’s looking for the smoking gun, and Altman’s business dealings affecting his responsibilities as OpenAI’s CEO could be what’s behind the board’s decision. However, it’s unclear, and Altman is back at the helm while the board that fired him is gone.

The startup, Rain AI, is building computer chips modeled on the human brain, which it promises will power the next phase of AI. Rain claims its neuromorphic processing units, or NPUs, will be 100 times more powerful than the Nvidia GPUs that OpenAI and Microsoft are currently beholden to. While NPUs are not on the market yet, OpenAI has a deal to get first dibs.

Altman personally invested more than $1 million in Rain in 2018, according to The Information, and he’s listed on Rain’s website as a backer. OpenAI’s CEO is invested in dozens of startups, however. He previously led the startup incubator Y Combinator and became one of the most prominent dealmakers in Silicon Valley.

The AI chip company Rain has had no shortage of drama in the last week. The Biden administration forced a Saudi venture capital firm to sell its $25 million stake in Rain AI, and Gordon Wilson, the founder and CEO of Rain, stepped down without providing a reason. Wilson posted his resignation on LinkedIn around the same time that Sam Altman was reinstated at OpenAI.

The blurry lines between Sam Altman’s private investments and OpenAI business could have been a key reason for his firing, but we still don’t have a clear explanation from the board. Helen Toner, a former board member who voted to fire Altman, gave her best hint yet as she stepped down last week. In a Nov. 29 tweet, Toner said the firing was not about slowing down OpenAI’s progress toward AGI; rather, it was about “the board’s ability to effectively supervise the company,” which sounds like it has more to do with business disclosures than breakthroughs around AGI.

OpenAI Committed to Buying $51 Million of AI Chips From a Startup Backed by CEO Sam Altman

Rain at one point claimed to investors that it had held advanced talks to sell systems to Google, Oracle, Meta, Microsoft, and Amazon. Microsoft declined to comment, and the other companies did not respond to requests for comment.

Security Fears

The funding round led by Prosperity7 announced last year brought Rain’s total funding to $33 million as of April 2022. That was enough to operate through early 2025 and valued the company at $90 million excluding the new cash raised, according to the disclosures to investors. The documents cited Altman’s personal investment and Rain’s letter of intent with OpenAI as reasons to back the company.

In a Rain press release for the fundraise last year, Altman applauded the startup for taping out a prototype in 2021 and said it “could vastly reduce the costs of creating powerful AI models and will hopefully one day help to enable true artificial general intelligence.”

Prosperity7’s investment in Rain drew the interest of the interagency Committee on Foreign Investment in the United States, which has the power to scuttle deals deemed to threaten national security.

CFIUS, as the committee is known, has long been concerned about China gaining access to advanced US semiconductors, and has grown increasingly worried about China using intermediaries in the Middle East to quietly learn more about critical technology, says Nevena Simidjiyska, a partner at the law firm Fox Rothschild who helps clients with CFIUS reviews. “The government doesn’t care about the money,” she says. “It cares about access and control and the power of the foreign party.”

Rain received a small seed investment from the venture unit of Chinese search engine Baidu apparently without problems, but the larger Saudi investment attracted significant concerns. Backing from Prosperity7, a unit of Aramco Ventures, which is part of state-owned Saudi Aramco, could have positioned the oil giant and other large companies in the Middle East to become customers, but it also put Rain into close contact with the Saudi government.

Megan Apper, a spokesperson for CFIUS, says the panel is “committed to taking all necessary actions within its authority to safeguard U.S. national security” but that “consistent with law and practice, CFIUS does not publicly comment on transactions that it may or may not be reviewing.”

Data disclosed by CFIUS shows it reviews hundreds of deals annually and, in the few cases where it has concerns, typically works out safeguards, such as barring a foreign investor from taking a board seat. It couldn’t be learned why the committee required full divestment from Rain.

Three attorneys who regularly work on sensitive deals say they could not recall any previous Saudi Arabian deals fully blocked by CFIUS. “Divestment itself has been quite rare over the past 20 years and has largely been a remedy reserved for Chinese investors,” says Luciano Racco, cochair of the international trade and national security practice at law firm Foley Hoag.

OpenAI likely needs to find partners with deep-pocketed backers if it is to gain some control over its hardware needs. Competitors Amazon and Google have spent years developing their own custom chips for AI projects and can fund them with revenue from their lucrative core businesses. Altman has refused to rule out OpenAI making its own chips, but that too would require significant funding.

OpenAI’s GPT Store won’t be released until 2024

OpenAI is pushing the launch of its GPT Store to early 2024, according to an email seen by The Verge. The company introduced its GPT Builder tool in early November at its first developer conference, giving subscribers an easy way to create their own custom AI bots. At the time, OpenAI also said it would soon release the GPT Store for users to list their GPTs and potentially make money from them. It was initially slated for a November launch. But, with the surprise ouster of OpenAI’s since-reinstated CEO Sam Altman, the month didn’t quite pan out as planned.

“In terms of what’s next, we are now planning to launch the GPT Store early next year,” OpenAI said in its email to GPT Builder users on Friday. “While we had expected to release it this month, a few things have been keeping us unexpectedly busy!” The email also notes that the company has been making improvements to GPTs based on users’ feedback, and says some updates to ChatGPT are on the way.

OpenAI has been in the process of reorganizing its leadership following the turmoil of the past few weeks. The company confirmed on Wednesday that Altman was back as CEO, with Mira Murati now in place as CTO and Greg Brockman as President. It also announced the formation of a new initial board, which includes representation from Microsoft — its biggest investor — as a non-voting observer.

Sam Altman’s Return, the Mysterious Q*, and More of the Top AI News of the Week

Photo: Kevin Dietsch (Getty Images)

There’s been a lot of talk about AGI lately—artificial general intelligence—the much-coveted AI development goal that every company in Silicon Valley is currently racing to achieve. AGI refers to a hypothetical point in the future when AI algorithms will be able to do most of the jobs that humans currently do. According to this theory of events, the emergence of AGI will bring about fundamental changes in society—ushering in a “post-work” world, wherein humans can sit around enjoying themselves while robots do most of the heavy lifting. If you believe the headlines, OpenAI’s recent palace intrigue may have been partially inspired by a breakthrough in AGI—the so-called “Q*” program—which sources close to the startup claim was responsible for the power struggle.

But, according to recent research from Yann LeCun, Meta’s top AI scientist, artificial intelligence isn’t going to be general-purpose anytime soon. Indeed, in a recently released paper, LeCun argues that AI is still much dumber than humans in the ways that matter most. —Lucas Ropek

OpenAI Chaos Delays GPT Store to 2024

OpenAI is delaying the launch of its GPT Store, a marketplace of customizable GPTs, until 2024, according to a memo seen by Axios. Sam Altman told an audience at DevDay that the GPT Store would launch in November.

“In terms of what’s next, we are now planning to launch the GPT Store early next year,” said OpenAI in an email to GPT builders seen by The Verge. “While we had expected to release it this month, a few unexpected things have been keeping us busy!”

A few unexpected things keeping them busy could be a reference to Sam Altman’s firing, over 700 employees threatening to quit, reinstating its CEO and replacing the old board that fired Altman, and figuring out the future for its chief scientist, Ilya Sutskever. That’s a lot. Research this month from Adversa AI also showed that vulnerabilities in GPTs could jeopardize the security and intellectual property of developers. However, the major delay for the GPT Store pushes against a narrative from OpenAI that its leadership shakeup has not stalled progress.

“Throughout this whole thing, we did not lose a single employee, a single customer,” said Altman in an interview with The Verge this week. “Not only did they keep the products up even in the face of very difficult-to-manage growth, they also shipped new features. Research progress continued.”

OpenAI did not immediately respond to Gizmodo’s request for comment.

That all may be true, but the GPT Store was promised to come out weeks after DevDay, and now it’s going to take months. Two developers who requested anonymity to speak freely told Gizmodo that OpenAI has “lacked clear communication” in the last month, and its behavior is “frustrating.” One of the developers building GPTs said they never received this email about the delay of the GPT Store and found out from a news article.

On OpenAI’s developer forum, one post from this week asks OpenAI to offer information about any potential delays with the GPT Store. “If OpenAI indicates there might be some delay, then our coders won’t need to work overtime every day for fear that their GPT won’t be ready for the same day as the GPT store launch,” the post reads. Several other posts on the forum are from developers wondering when the GPT Store would launch.

The store is a big opportunity for developers to share GPTs broadly, and OpenAI has even said it will share revenue from ChatGPT Plus subscriptions with the best GPT creators. Currently, GPTs are only available to premium users, and OpenAI paused new premium signups in November. Details on the GPT Store have been light, however. OpenAI ended its email by thanking developers for building GPTs and notified them that new ChatGPT features were coming soon.

Generating AI Images Uses as Much Energy as Charging Your Phone, Study Finds

Creating images with generative AI could use as much energy as charging your smartphone, according to a new study published Friday that measures the environmental impact of generative AI models for the first time. Popular models like OpenAI’s DALL-E and Midjourney may produce more carbon than driving four miles.

“People think that AI doesn’t have any environmental impacts, that it’s this abstract technological entity that lives on a ‘cloud’,” Dr. Sasha Luccioni, who led the study, told Gizmodo. “But every time we query an AI model, it comes with a cost to the planet, and it’s important to calculate that.”

The study from Hugging Face and Carnegie Mellon found that image generation, turning text into an image, took substantially more energy than any other task for generative AI models. Researchers tested 88 models on 30 data sets and found large, multipurpose models, like ChatGPT, are more energy-intensive than task-specific models. The study is the first of its kind to measure the carbon and energy impact of generative AI models. Dr. Luccioni said the study did not look at OpenAI because they don’t share data, and according to her, that’s a big problem.

Image: Hugging Face/Carnegie Mellon

Dr. Luccioni, who is Climate Lead at Hugging Face, says multipurpose generative AI models, like ChatGPT, are more user-friendly, but more energy-intensive. Luccioni cites a paradigm shift towards these models because they’re easier for consumers to work with. You can just go to your chatbot and ask it to do anything for you, as opposed to having to find the right model.

OpenAI and Midjourney did not immediately respond to a request for comment.

“I think that for generative AI overall, we should be conscious of where and how we use it, comparing its cost and its benefits,” said Luccioni.

Graphic: Hugging Face/Carnegie Mellon

The study tested several AI image generation models, including Stability AI’s Stable Diffusion XL, which ranked as one of the worst for energy efficiency. Researchers also tested PromptHero’s OpenJourney, a free alternative to Midjourney. The study did not include OpenAI’s DALL-E or Midjourney, the most popular models on the market, which are larger and more widely used than the models the researchers tested.

GPT-4 reportedly has 1.76 trillion parameters, which means a lot of computation every time someone makes a ChatGPT inquiry. Dr. Luccioni sees the benefit of deploying multipurpose generative models in certain areas, but does “not see convincing evidence for the necessity of their deployment in contexts where tasks are well-defined.” Luccioni points to web search and navigation as areas that could use models smaller than ChatGPT, given its large energy requirements.