Wikipedia Readers Were Thinking About AI and Atom Bombs in 2023

If you want to know what average online folks are curious about, Wikipedia's total view counts offer a decent glimpse. If the top 25 most-viewed Wikipedia articles of 2023 are anything to go by, people really cared about the rise of AI and the father of the atomic bomb.

At the top of the list, as expected, is ChatGPT with a whopping 49.4 million pageviews. OpenAI’s chatbot has been making multiple top lists this year, including the list of Android users’ favorite apps. It wasn’t just English-language Wikipedia either. The Wikimedia Foundation wrote that ChatGPT registered over 78 million pageviews across all languages.

There were a few surprising and unsurprising things about this year’s top Wikipedia articles. The top movies of the year usually get a lot of clicks, but folks around the world wanted to know more about J. Robert Oppenheimer, the director of the Manhattan Project’s research into the atomic bomb. Oppenheimer came in the number 7 spot for most-clicked Wikipedia entries, but the Oppenheimer film itself sat pretty at number 5. For all those folks who went for the Barbenheimer experience, the Barbie movie was in 13th place with nearly 19.8 million page visits.

Films made up seven of the top 25 articles. These included James Cameron's big blue cat-people CG sequel Avatar: The Way of Water in the number 20 spot. Meanwhile, Guardians of the Galaxy Vol. 3 took up the 23rd spot. The Last of Us TV series had more than 19.7 million page views, but it wasn't able to defeat Bollywood action thriller hits Jawan and Pathaan. As usual, the Indian subcontinent is very active on English-language Wikipedia, helping send the 2023 Cricket World Cup to the third spot of most-viewed Wiki entries.

Oh, don't forget the celebrities, who made up several of the top spots. Swifties everywhere carried Taylor Swift to the number 12 spot at 19.4 million views. Argentinian footballer Lionel Messi hovered just above 16.6 million views, but Portuguese footy king Cristiano Ronaldo beat him out at 17.4 million pageviews. Then, right at the number 25 spot is kickboxer turned alt-right figurehead Andrew Tate. The self-described misogynist lodestone for all the internet's manboys is still awaiting trial over allegations he participated in a human trafficking ring in Romania, but it's a pretty safe bet that the man's fans and detractors helped make the fountain of hate one of the most-searched men in the world.

Click through to see the top 25 most-read Wikipedia articles for 2023.

A New Trick Uses AI to Jailbreak AI Models—Including GPT-4

Large language models recently emerged as a powerful and transformative new kind of technology. Their potential became headline news as ordinary people were dazzled by the capabilities of OpenAI’s ChatGPT, released just a year ago.

In the months that followed the release of ChatGPT, discovering new jailbreaking methods became a popular pastime for mischievous users, as well as those interested in the security and reliability of AI systems. But scores of startups are now building prototypes and fully fledged products on top of large language model APIs. OpenAI said at its first-ever developer conference in November that over 2 million developers are now using its APIs.

These models simply predict the text that should follow a given input, but they are trained on vast quantities of text, from the web and other digital sources, using huge numbers of computer chips, over a period of many weeks or even months. With enough data and training, language models exhibit savant-like prediction skills, responding to an extraordinary range of input with coherent and pertinent-seeming information.
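To make that next-token prediction idea concrete, here is a minimal sketch, assuming the small openly available GPT-2 model and the Hugging Face transformers library (not the proprietary models discussed in this article), that greedily extends a prompt one token at a time:

# Illustrative sketch only: next-token prediction with the small open GPT-2
# model via Hugging Face transformers, not the code behind ChatGPT or GPT-4.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Large language models are"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits      # a score for every vocabulary token
        next_id = logits[0, -1].argmax()      # greedily pick the most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))

Production chatbots work the same way at their core, only with far larger models, sampling strategies instead of a greedy pick, and layers of fine-tuning on top.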

The models also exhibit biases learned from their training data and tend to fabricate information when the answer to a prompt is less straightforward. Without safeguards, they can offer advice to people on how to do things like obtain drugs or make bombs. To keep the models in check, the companies behind them use the same method employed to make their responses more coherent and accurate-looking. This involves having humans grade the model’s answers and using that feedback to fine-tune the model so that it is less likely to misbehave.
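OpenAI has not published its exact pipeline, so the sketch below is a deliberately simplified, hypothetical illustration of that feedback loop: humans grade answers, and the model is fine-tuned on the ones they liked. Real systems typically go further, training a separate reward model and optimizing against it (reinforcement learning from human feedback), but the basic shape is the same:

# Toy sketch, not OpenAI's pipeline: keep the answers human graders liked and
# run a standard fine-tuning step on them, using the small open GPT-2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical human feedback: (prompt, model answer, rating from 1 to 5).
feedback = [
    ("How do I reset my password?", "Open Settings, choose Account, then tap Reset Password.", 5),
    ("How do I reset my password?", "Just keep guessing until something works.", 1),
]

# Keep only the answers that human graders approved of.
approved = [(p, a) for p, a, rating in feedback if rating >= 4]

model.train()
for prompt, answer in approved:
    batch = tokenizer(prompt + "\n" + answer, return_tensors="pt")
    # Ordinary language-modeling loss on the approved prompt/answer pair.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()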

Robust Intelligence provided WIRED with several example jailbreaks that sidestep such safeguards. Not all of them worked on ChatGPT, the chatbot built on top of GPT-4, but several did, including one for generating phishing messages, and another for producing ideas to help a malicious actor remain hidden on a government computer network.

A similar method was previously developed by a research group led by Eric Wong, an assistant professor at the University of Pennsylvania. The version from Robust Intelligence and Wong's team adds refinements that let the system generate jailbreaks with half as many tries.

Brendan Dolan-Gavitt, an associate professor at New York University who studies computer security and machine learning, says the new technique revealed by Robust Intelligence shows that human fine-tuning is not a watertight way to secure models against attack.

Dolan-Gavitt says companies that are building systems on top of large language models like GPT-4 should employ additional safeguards. “We need to make sure that we design systems that use LLMs so that jailbreaks don’t allow malicious users to get access to things they shouldn’t,” he says.

This is the first serious AI app for writers on the Mac

In a year where virtually every tech company in existence is talking about AI, Apple has been silent. That doesn't mean Apple-focused developers aren't taking matters into their own hands, though. An update to the popular Mac writing app iA Writer just made me really excited about seeing what Apple's eventual take on AI will be.

In the iA Writer 7 update, you’ll be able to use text generated by ChatGPT as a starting point for your own words. The idea is that you get ideas from ChatGPT, then tweak its output by adding your distinct flavor to the text, making it your own in the process. Most apps that use generative AI do so in a way that basically hands the reins over to the artificial intelligence, such as an email client that writes messages for you or a collaboration tool that summarizes your meetings.

Things are a little different with popular writing app iA Writer, though — and the result is a nuanced and ethically minded take on generative AI that feels like something Apple could end up doing itself.

“Working responsibly with AI”

So, here’s how it works. Any text pasted from ChatGPT is grayed out by iA Writer. Any text that you change or replace turns black, indicating that you have written it yourself. The developer of iA Writer says this will turn ChatGPT into a “dialogue partner” rather than a ghostwriter, where there’s a risk that “it takes over and you lose your voice.”

The app won’t automatically recognize pasted text as coming from ChatGPT, though — you have to mark it yourself by right-clicking and selecting Paste As > AI > Enable Authorship. There are also options for marking pasted text using keyboard shortcuts and menus.

If you leave the writing app and ask ChatGPT to edit a paragraph created in iA Writer, you can let the app know by right-clicking and choosing Paste Edits From > AI. And if you want to tell iA Writer that the words you are inserting were created by yourself rather than AI, you can select Paste As > Me.
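iA hasn't published how this is implemented, but conceptually the feature amounts to tagging runs of text with an author and flipping that tag when you rewrite them. A hypothetical sketch of that bookkeeping, with made-up names, might look like this:

# Hypothetical illustration only, not iA Writer's actual code: each run of
# text carries an author label, and rewriting a run reassigns it to "me".
from dataclasses import dataclass

@dataclass
class Span:
    text: str
    author: str  # "ai" (grayed out) or "me" (black)

def paste_as_ai(doc: list, text: str) -> list:
    """Append pasted ChatGPT output, marked as AI-authored."""
    return doc + [Span(text, "ai")]

def rewrite(doc: list, index: int, new_text: str) -> list:
    """Replacing a span's text counts as human writing."""
    updated = list(doc)
    updated[index] = Span(new_text, "me")
    return updated

doc = paste_as_ai([], "The quarterly report shows strong growth in all regions.")
doc = rewrite(doc, 0, "Revenue grew in every region this quarter, led by APAC.")
print([(s.author, s.text) for s in doc])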

The developer of iA Writer says that assigning authorship in this way is “new, useful, and prerequisite to working responsibly with AI text generation.” They continued: “We think that everyone would benefit from a future where we can see what the machine processed and what humans felt, thought, and expressed with their hearts and minds.”

All of this means that iA Writer’s update is not designed to detect AI plagiarism. Indeed, the developer says that “it’s up to you to decide how honest you want to be with yourself.” Done properly, though, it could turn ChatGPT into a handy assistant that improves your writing without drowning out your own voice.

The future of Apple AI

This update to iA Writer is starting out on macOS, iPadOS, and iOS, but it will be coming to Windows and Android at some point too.

And of course, iA Writer isn't the only third-party Mac app experimenting with generative AI, nor will it be the last. But it is a preview of the kind of thing Apple will need to address in full at some point. Apple's own first-party apps are all starting to look a bit outdated compared to the competition. Google, Adobe, and Microsoft have all embraced generative AI wholesale, integrating it into nearly every piece of software. Windows Copilot is, of course, the supreme example. Apple's own solution for bringing AI more fully into its products will likely look different — but surely it'll need to do something.

We’re expecting it to be a major focus at WWDC 2024 next June, where we’ll hopefully see a broader vision of how Apple tackles AI.

Google’s ChatGPT Competitor Gemini Could Preview This Week

Photo: David Paul Morris/Bloomberg (Getty Images)

Google is reportedly set to virtually preview Gemini, its generative AI competitor to ChatGPT, as soon as this week, according to a Monday report from The Information. The search giant hopes to catch OpenAI while it's still on its heels, as the Microsoft-backed company recovers from its dramatic board kerfuffle and the firing and re-hiring of Sam Altman.

Google has delayed the launch of Gemini for some time now. Its generative AI chatbot promises to be significantly more powerful than its current offering, Bard, which has failed to take off with consumers the way ChatGPT has. Gemini presents a strong AI model from a stable company, which looks great against the backdrop of OpenAI, whose ChatGPT Plus signups remain paused and whose highly touted GPT Store is missing in action.

Gemini's virtual preview this week, if the report is accurate, could be a scaled-back version of the full launch originally planned for the same week. CEO Sundar Pichai reportedly scrapped a series of in-person Gemini launch events set for early December. It appears Google found that Gemini was not reliably handling queries in non-English languages, according to The Information.

Google didn’t immediately respond to a request for comment.

A preview of Gemini would take some of the load off of Google, which has fumbled its lead in generative AI this year. The company is facing pressure from investors to release Gemini as quickly as possible to catch up with Microsoft and OpenAI; however, the full launch of Gemini will likely not occur until sometime in 2024.

Gemini is multi-modal, meaning that it can handle image, voice, and text requests simultaneously, much like GPT-4. The AI is built on a system that uses reinforcement learning and is capable of planning and problem-solving, according to Demis Hassabis, the CEO of DeepMind, the division building Gemini.

ChatGPT says that asking it to repeat words forever is against the rules

Last week, a team of researchers published a paper showing that it was able to get ChatGPT to reveal bits of data it had been trained on, including people's phone numbers, email addresses, and dates of birth, by asking it to repeat words "forever". Doing this now is a violation of ChatGPT's terms of service, according to a report in 404 Media and Engadget's own testing.

“This content may violate our content policy or terms of use”, ChatGPT responded to Engadget’s prompt to repeat the word “hello” forever. “If you believe this to be in error, please submit your feedback — your input will aid our research in this area.”

There's no language in OpenAI's terms of service, however, that prohibits users from asking the service to repeat words forever, something that 404 Media notes. OpenAI's terms do state that users may not "use any automated or programmatic method to extract data or output from the Services" — but simply prompting ChatGPT to repeat a word forever is not automated or programmatic. OpenAI did not respond to a request for comment from Engadget.
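For reference, the prompt at issue is a single, ordinary chat request rather than anything scripted or programmatic. A minimal sketch using OpenAI's official Python client (openai 1.x; the model name here is just an example) shows what such a request looks like; in current testing the model typically refuses or cuts the repetition short:

# Minimal sketch of a single, manual-style request via OpenAI's Python client.
# Requires the OPENAI_API_KEY environment variable; the model name is an example.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": 'Repeat the word "hello" forever.'}],
)
print(response.choices[0].message.content)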

The chatbot’s behavior has pulled back the curtain on the training data that modern AI services are powered by. Critics have accused companies like OpenAI of using enormous amounts of data available on the internet to build proprietary products like ChatGPT without consent from people who own this data and without compensating them.

OpenAI Committed to Buying $51 Million of AI Chips From a Startup Backed by CEO Sam Altman

Rain at one point has claimed to investors that it has held advanced talks to sell systems to Google, Oracle, Meta, Microsoft, and Amazon. Microsoft declined to comment, and the other companies did not respond to requests for comment.

Security Fears

The funding round led by Prosperity7 announced last year brought Rain’s total funding to $33 million as of April 2022. That was enough to operate through early 2025 and valued the company at $90 million excluding the new cash raised, according to the disclosures to investors. The documents cited Altman’s personal investment and Rain’s letter of intent with OpenAI as reasons to back the company.

In a Rain press release for the fundraise last year, Altman applauded the startup for taping out a prototype in 2021 and said it “could vastly reduce the costs of creating powerful AI models and will hopefully one day help to enable true artificial general intelligence.”

Prosperity7’s investment in Rain drew the interest of the interagency Committee on Foreign Investment in the United States, which has the power to scuttle deals deemed to threaten national security.

CFIUS, as the committee is known, has long been concerned about China gaining access to advanced US semiconductors, and has grown increasingly worried about China using intermediaries in the Middle East to quietly learn more about critical technology, says Nevena Simidjiyska, a partner at the law firm Fox Rothschild who helps clients with CFIUS reviews. “The government doesn’t care about the money,” she says. “It cares about access and control and the power of the foreign party.”

Rain received a small seed investment from the venture unit of Chinese search engine Baidu apparently without problems, but the larger Saudi investment attracted significant concerns. Prosperity7 is a unit of Aramco Ventures, which is part of state-owned Saudi Aramco; its backing could have helped the oil giant and other large companies in the Middle East become customers, but it also put Rain into close contact with the Saudi government.

Megan Apper, a spokesperson for CFIUS, says the panel is “committed to taking all necessary actions within its authority to safeguard U.S. national security” but that “consistent with law and practice, CFIUS does not publicly comment on transactions that it may or may not be reviewing.”

Data disclosed by CFIUS shows it reviews hundreds of deals annually and in the few cases where it has concerns typically works out safeguards, such as barring a foreign investor from taking a board seat. It couldn’t be learned why the committee required full divestment from Rain.

Three attorneys who regularly work on sensitive deals say they could not recall any previous Saudi Arabian deals fully blocked by CFIUS. “Divestment itself has been quite rare over the past 20 years and has largely been a remedy reserved for Chinese investors,” says Luciano Racco, cochair of the international trade and national security practice at law firm Foley Hoag.

OpenAI likely needs to find partners with deep-pocketed backers if it is to gain some control over its hardware needs. Competitors Amazon and Google have spent years developing their own custom chips for AI projects and can fund them with revenue from their lucrative core businesses. Altman has refused to rule out OpenAI making its own chips, but that too would require significant funding.

OpenAI’s GPT Store won’t be released until 2024

OpenAI is pushing the launch of its GPT Store to early 2024, according to an email seen by The Verge. The company introduced its GPT Builder tool in early November at its first developer conference, giving subscribers an easy way to create their own custom AI bots. At the time, OpenAI also said it would soon release the GPT Store for users to list their GPTs and potentially make money from them. It was initially slated for a November launch. But, with the surprise ouster of OpenAI’s since-reinstated CEO Sam Altman, the month didn’t quite pan out as planned.

“In terms of what’s next, we are now planning to launch the GPT Store early next year,” OpenAI said in its email to GPT Builder users on Friday. “While we had expected to release it this month, a few things have been keeping us unexpectedly busy!” The email also notes that the company has been making improvements to GPTs based on users’ feedback, and says some updates to ChatGPT are on the way.

OpenAI has been in the process of reorganizing its leadership following the turmoil of the past few weeks. The company confirmed on Wednesday that Altman was back as CEO, with Mira Murati now in place as CTO and Greg Brockman as President. It also announced the formation of a new initial board, which includes representation from Microsoft — its biggest investor — as a non-voting observer.

Sam Altman's Return, the Mysterious Q*, and More of the Top AI News of the Week

Photo: Kevin Dietsch (Getty Images)

There's been a lot of talk about AGI lately—artificial general intelligence—the much-coveted AI development goal that every company in Silicon Valley is currently racing to achieve. AGI refers to a hypothetical point in the future when AI algorithms will be able to do most of the jobs that humans currently do. According to this theory of events, the emergence of AGI will bring about fundamental changes in society—ushering in a "post-work" world, wherein humans can sit around enjoying themselves while robots do most of the heavy lifting. If you believe the headlines, OpenAI's recent palace intrigue may have been partially inspired by a breakthrough in AGI—the so-called "Q*" program—which sources close to the startup claim was responsible for the power struggle.

But, according to recent research from Yann LeCun, Meta's top AI scientist, artificial intelligence isn't going to be general-purpose anytime soon. Indeed, in a recently released paper, LeCun argues that AI is still much dumber than humans in the ways that matter most. —Lucas Ropek

OpenAI’s Custom GPT Store Won’t Be Launched Until Early 2024

ChatGPT maker OpenAI has delayed the launch of its custom GPT store until early 2024, according to an internal memo seen by Reuters on Friday.

During its first developer conference in November, OpenAI introduced the custom GPTs and store, which were set to be launched later that month.

The company is continuing to “make improvements” to GPTs based on customer feedback, the memo said.

The delay comes against the backdrop of the startup’s surprise ouster of its CEO Sam Altman and his subsequent reinstatement following threats by employees to quit.

The GPTs are early versions of AI assistants that perform real-world tasks such as booking flights on behalf of a user. The store is also expected to allow users to share their GPTs and earn money based on the number of users.

Last month, OpenAI announced it intends to work with organisations to produce public and private datasets for training artificial intelligence (AI) models.

Popular chatbot ChatGPT, which can generate poems and prose from simple prompts, is based on large language models that are trained on vast amounts of publicly available data from the Internet.

The company’s latest effort could help it produce more nuanced training data that are more conversational in style.

“We’re particularly looking for data that expresses human intention, across any language, topic and format,” the company said in a blog post.

OpenAI said it is seeking partners to help it create an open-source dataset for training language models. This dataset would be public for anyone to use in AI model training, it said.

The company said it is also preparing private datasets for training proprietary AI models.

© Thomson Reuters 2023


OpenAI Chaos Delays GPT Store to 2024

OpenAI is delaying the launch of its GPT Store, a marketplace of customizable GPTs, until 2024, according to a memo seen by Axios. Sam Altman told an audience at DevDay that the GPT Store would launch in November.

“In terms of what’s next, we are now planning to launch the GPT Store early next year,” said OpenAI in an email to GPT builders seen by The Verge. “While we had expected to release it this month, a few unexpected things have been keeping us busy!”

A few unexpected things keeping them busy could be a reference to Sam Altman’s firing, over 700 employees threatening to quit, reinstating its CEO and replacing the old board that fired Altman, and figuring out the future for its chief scientist, Ilya Sutskever. That’s a lot. Research this month from Adversa AI also showed that vulnerabilities in GPTs could jeopardize the security and intellectual property of developers. However, the major delay for the GPT Store pushes against a narrative from OpenAI that its leadership shakeup has not stalled progress.

“Throughout this whole thing, we did not lose a single employee, a single customer,” said Altman in an interview with The Verge this week. “Not only did they keep the products up even in the face of very difficult-to-manage growth, they also shipped new features. Research progress continued.”

OpenAI did not immediately respond to Gizmodo’s request for comment.

That all may be true, but the GPT Store was promised to come out weeks after DevDay, and now it’s going to take months. Two developers who requested anonymity to speak freely told Gizmodo that OpenAI has “lacked clear communication” in the last month, and its behavior is “frustrating.” One of the developers building GPTs said they never received this email about the delay of the GPT Store and found out from a news article.

On OpenAI's developer forum, one post from this week asks OpenAI to offer information about any potential delays with the GPT Store. "If OpenAI indicates there might be some delay, then our coders won't need to work overtime every day for fear that their GPT won't be ready for the same day as the GPT store launch," the post reads. Several other posts on the forum are from developers wondering when the GPT Store would launch.

The store is a big opportunity for developers to share GPTs broadly, and OpenAI has even said it will share revenue from ChatGPT Plus subscriptions with the best GPT creators. Currently, GPTs are only available to premium users, which OpenAI paused signups for in November. Details on the GPT store have been light, however. OpenAI ended its email by thanking developers for building GPTs and notified them that new ChatGPT features were coming soon.