What We're Reading
Recommended articles from our editorial team on misinformation, cognitive science, and more.
By Jillian Mock
We love reading and suspect you do too. Source: Alamy
The OpenMind editorial team is constantly digging through the latest news, investigations, and case studies related to science controversies and deceptions. We look into the ways in which technology influences belief, what journalism can do to protect democracy, the cognitive science behind conspiracies, and many other topics that help demystify an often-misunderstood problem.
We regularly share stories that have caught our attention in the weekly newsletter (subscribe if you haven’t already), but occasionally we want to offer our readers a chance to go deeper. This curated list from our editors is full of ideas and insights that we think you will find enlightening, clarifying, and in some cases even inspiring.
May 25, 2023
How to avoid falling for misinformation, fake AI images on social media, The Washington Post
This guide covers the basics of how to consume media online with a critical eye, with links to resources that go deeper on specific topics (like spotting fake videos).
Ben Smith on the death of Buzzfeed News, On The Media (podcast)
It is interesting to hear a former Buzzfeed insider reflect on the 2010s era of the internet, and how so many things did not turn out how Smith and his colleagues expected.
The debate over whether AI will destroy us is dividing Silicon Valley, The Washington Post
A previous edition of this newsletter included a piece on the “stochastic parrot” study mentioned here. If you didn’t check it out then, that New York Magazine article is still worth a read.
Verified Twitter accounts spread AI-generated hoax of Pentagon Explosion, Vice
A timely example of why we all need to be hyper-alert to the possibility that what we’re seeing online is fake, even when it looks convincingly real.
US Surgeon General says social media may be hazardous to teen health, The Verge
It would be really interesting to see more research into social media’s health effects.
April 14, 2023
Scoop: Schumer lays groundwork for Congress to regulate AI, Axios
The recent surge of popular artificial intelligence platforms means this type of legislation is urgently needed.
Three years later, Covid-19 is still a health threat. Journalism needs to reflect that, Nieman Reports
This article is a strong accompaniment to last week's OpenMind essay about Covid and sudden deaths.
What’s driving a surge in opposition to renewables?, The Carbon Copy (podcast)
Two reporters talk about disinformation campaigns that are sowing opposition to renewable energy projects in communities across the United States.
As NPR exits, it becomes increasingly clear that Elon Musk doesn’t understand Twitter, The Present Age
NPR pulling back from the platform is the latest chapter in the Elon Musk-Twitter drama.
My week with ChatGPT: can it make me a healthier, happier, more productive person?, The Guardian
It seems like ChatGPT was good at trip planning and recipe development and less helpful with work research and neck pain.
March 9, 2023
You are not a parrot, New York Magazine
This profile of linguist Emily Bender does an excellent job articulating why it is crucially important that we understand the difference between artificial intelligence and human intelligence.
The study highlighted in this article quantified the volume of false information spread through images online.
They thought loved ones were calling for help. It was an AI scam, The Washington Post
When a new technology is released, someone somewhere inevitably harnesses it for a nefarious scheme.
The people onscreen are fake. The disinformation is real, The New York Times
Deepfakes are getting better and can be weaponized to spread propaganda.
March 2, 2023
How journalists portray natural calamities can unintentionally reinforce stereotypes.
This article argues that the train wreck would likely not have gotten widespread attention without a certain social video platform.
Overblown fears and hopes surrounding artificial intelligence abound. This article discusses how the new technology intersects with an old problem—sexism.
Increased surveillance doesn’t necessarily mean increased safety, this writer argues.
February 16, 2023
This program to counter misinformation has solid scientific research behind it. We wonder: Could something like this be possible in the United States?
When publications use artificial intelligence to write stories that can impact people’s well-being, they have an obligation to make sure the AI gets things right.
ChatGPT is a blurry JPEG of the web, The New Yorker
This article offers a really helpful metaphor to explain how ChatGPT works and reframes the debate over how AI will influence writing and journalism. Our takeaway: We’ll always need human writers with original thoughts and ideas.
Extremist influencers are generating millions for Twitter, report says, The Washington Post
Expert observers make the argument that letting previously banned extremists back on the platform was a move by Elon Musk to boost revenue.
February 9, 2023
Google now wants to answer your questions without links and with AI. Where does that leave publishers?, Nieman Lab
Google is a business and, as the author of this article points out, wants to keep people on its website—instead of sending them to other sites—as long as possible. It’s hard for me to wrap my mind around a future in which search engines look like chatbot conversations instead of lists of links. But that might be where they’re heading.
How Bing’s AI reboot could shake up the search business, Axios
Complementing the Google news, this article provides a helpful, quick summary of what Microsoft’s announcement this week could mean in practical terms for the future of AI-driven search.
We’ve lost the plot, The Atlantic
We’re already living in the metaverse—when everything is entertainment and the lines between fact and fiction are blurred into nonexistence. Or so this article suggests.
How Wikipedia erases Indigenous history, Slate
Like it or not, Wikipedia is one of the most popular sources of information on the internet. The history told there shapes how readers understand the world around them.
February 2, 2023
The problems with this artificial intelligence experiment keep piling up. Futurism’s coverage points to many of the challenges the journalism industry will face as media companies try to take advantage of new AI technologies.
This article reflects on how attention from an extremely influential publication can shape the way readers perceive a particular contentious issue. The writer compares the Times’s coverage of health care for trans kids with the newspaper’s coverage of abortion and points out how journalists struggle to put tiny numbers in their proper context.
India’s approach to Chinese apps is likely not one the United States could emulate without some serious geopolitical consequences. But given India’s growing prominence on the international stage and signs of increasing authoritarianism by its government, it will be interesting to see what happens with India’s internet in the future.
January 19, 2023
This is all the gas industry’s fault, HEATED
Last week the debate over the health effects of gas stoves suddenly ignited in the mainstream media, and the Biden administration had to clarify that it had no intention of banning these appliances. Emily Atkin at HEATED discusses how we got to this point, where people can’t imagine their lives without gas stoves and fiercely defend their use.
Will your gas range make you sick? Here’s what the science says, Los Angeles Times
If you’re still confused about the health effects of gas stoves after last week’s news cycle, this article does a good job of laying out what we do and don’t know on the subject. It won’t tell you whether or not to ditch gas, however.
Are we too worried about misinformation?, Vox
In this fascinating conversation, internet security expert Alex Stamos considers the responsibility of platforms like Twitter and Facebook when it comes to dealing with misinformation and disinformation online.
Against platform rules, riots in Brasilia were plotted on social media since January 3, Rappler and Aos Fatos
The anti-government attacks in Brazil are a recent, disturbing example of the tradeoffs of new technologies discussed in the Vox article above. Should platforms be held responsible for people who misuse them?
What the Jan. 6 probe found out about social media, but didn’t report, Washington Post
The Post offers another angle on platform responsibility and action against speech and organizing online that spills over into real-world violence. It also examines how tricky it can be to hold powerful companies and politicians responsible.
As elites arrive in Davos, conspiracy theories thrive online, TIME
Not even the World Economic Forum is immune from online conspiracy theories and misinformation. There are many legitimate critiques of the meeting in Davos—and there are critiques born out of falsehoods and distorted reality.
Watching the watchmen: A look at the lies and abuses of the Polish Border Guard, Wyborcza
This report by a Polish news outlet takes stock of misconduct and disinformation by the Polish Border Guard around the ongoing humanitarian crisis on the country's border.
CNET’s article-writing AI is already publishing very dumb errors, Futurism
The artificial intelligence bot that CNET has used to write articles published on its site has made some pretty basic mistakes. The technology news organization received a lot of pushback when Futurism broke the story that the site was using AI without disclosing it to readers. This closer look illustrates some of the issues with that behavior.
Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content, The Verge
The fight over AI-generated art continues to heat up, and it looks like it might be heading to court soon. Legal rulings that emerge could help lay the groundwork for the future of the AI art tool industry.
Here come the robot doctors, Axios
ChatGPT did better than expected on all three parts of the U.S. Medical Licensing Examination—and even managed to pass it, barely. But don’t expect a robot doctor to treat you anytime soon.
January 12, 2023
The Instagram Reels gold rush, The New Yorker
This is a super-interesting story about how Instagram Reels has been incentivizing creators to make, frankly, boring content.
Local governments are trying to figure out how to reach out to their constituents and encourage them to get vaccinated in a world awash in vaccine misinformation.
Artificial intelligence, and concerns over its uses, are not going anywhere anytime soon.
AI turns its artistry to creating new human proteins, The New York Times
There have been a lot of stories about the drawbacks of new AI technologies (see above). This article describes at least one way AI can be harnessed for good.
U.S.D.A. approves first vaccine for honeybees, The New York Times
Related to last week’s essay by Oliver Milman about the decline of all insects, this news is a possible win for the bee.
Smash hits: How viral memes conquered piñata design, Rest of World
The time-honored craft of piñata making has tuned in to Internet trends as a way to thrive in the 21st century.
January 5, 2023
Covid misinformation spikes in wake of Damar Hamlin’s on-field collapse, The Washington Post
As the NFL player remains in critical condition, anti-vaxxers are already exploiting the tragedy to spread Covid vaccine misinformation.
Musk has made Twitter a right-wing safe space in Brazil, Rest of World
New year, same old Twitter drama. This original analysis suggests people with more right-wing views see the platform in a new light now that Elon Musk has taken over.
Early in 2022, the Central African Republic became the second country in the world to adopt Bitcoin as legal currency. But most of the country’s population still has no access to it.
Independent journalist Michael Thomas investigates how the group disseminated its message across Google and Facebook despite efforts by the platforms to clamp down on climate misinformation.
What TikTok told us about the economy in 2022, The New York Times
While TikTok’s future in the United States is being debated in Washington, there’s no denying that the app shaped buying habits and reflected broader economic trends last year.
December 29, 2022
Science isn’t storytelling, Science Fictions
In this issue of his newsletter, psychologist and journalist Stuart Ritchie argues that science should not be conducted with any desired conclusion in mind.
This article examines how what we see in films and on television is different from reality, and asks what responsibility entertainers have to spread important information in their work.
How TikTok became a diplomatic crisis, The New York Times Magazine
The popular app may pose a national security threat. How to deal with it is still an open question.
How will China fare with Covid? ‘Meaningless’ data clouds the picture, The New York Times
Tracing the impacts of China’s rapid about-face on its "zero Covid" policy is all the harder because the available data is limited at best. Controlling data, here, is another way of controlling the narrative no matter what is really happening on the ground.
December 22, 2022
This investigation into local politics and journalism in Alabama and Florida brings to light some of the complex forces that stymie climate action at the local level.
What would Plato say about ChatGPT?, The New York Times
This opinion piece suggests ways the new artificial intelligence chat feature could be harnessed to promote education instead of wreaking havoc in schools.
13 predictions for platforms in 2023, Platformer
Journalist Casey Newton looks back on 2022 and makes predictions for Meta, Twitter, and more platforms in 2023.
The viral AI avatar app Lensa undressed me—without my consent, MIT Technology Review
In recent weeks, people have been sharing cool AI-generated avatars of themselves on social media. The experience of this MIT Tech Review reporter illustrates the unsurprising dark side of these tools.
Misinformation around the Russian war in Ukraine continues to spread.
Elon Musk claims he’s reduced child exploitation on Twitter, and his supporters echo the sentiment. The organization tracking that material says there hasn’t been much change since Musk’s takeover.
Activists file lawsuit against Meta over murdered Ethiopian professor, The Washington Post
This sad story is another example of the importance of content moderation and what can happen when platforms don’t invest in moderation outside the United States and Western Europe.
This law contains worryingly vague language that could repress free expression online in the country.
December 15, 2022
Predictions for Journalism 2023, Nieman Lab
Every year, smart people from across the media world make predictions about what they expect to see in journalism the following year. It’ll be interesting to see how many of these prove prescient.
This news came as a nasty surprise for the publication's editorial staff, and Axios reports they are worried the revelation will call the site's past reporting into question.
Ten years ago, people would upload an entire album of 100+ photos to commemorate one night out with friends. That behavior seems unthinkably cringeworthy now, and slightly horrifying to consider as deepfakes grow more and more convincing.
Meta’s Oversight Board finally offered an opinion on this program and found, perhaps unsurprisingly, that it does not treat all users equally.
In recent years, Brazil has reduced its investment in the sciences, driving both internal and external brain drain and highlighting the need for nongovernmental sources of scientific funding.
WhatsApp is the most popular smartphone app in India, and while some protections have been put in place to reduce spam messaging, it is still a major tool exploited by the country’s politicians.
December 8, 2022
A lot of the coverage around Twitter’s lackluster content moderation under new ownership has focused on consequences for the United States—but the consequences around the world could be even more serious.
This article illustrates how a viral trend online has ripple effects in the real world.
Covering studies before they are peer reviewed has always been tricky. Since the advent of the Covid-19 pandemic, it's also become much more common.
Scholarly subterfuge, particularly around manipulated images, keeps making the news. It just goes to show that science is always subject to the human foibles of its practitioners.
Artificial intelligence continues to improve, but humans still have the edge in conversation.
If companies had to pay into a conservation fund for using animals in their marketing, they could potentially raise a lot of money. For now, however, the initiative is voluntary.
December 1, 2022
This policy change marks a major shift at Twitter under Elon Musk’s ownership. The story is evolving quickly, but the overall concern is the same: that mis- and disinformation will run amok on the platform and lead to harm in the real world.
The Science Writer Every Science Nerd Wants You to Read, The Atlantic
David Quammen is an accomplished science journalist and essayist who has been active for decades. This profile focuses on his new book about Covid but also offers a meditation on the troubling state of science journalism as an industry and a profession writ large.
Sympathy, and Job Offers, for Twitter’s Misinformation Experts, The New York Times
Despite the shift away from content moderation at Twitter, addressing mis- and disinformation is becoming a routine part of doing business for many major companies.
Electric vehicles are a net benefit for the environment, but they are not without their own hidden consequences—such as pollution in places like Indonesia.
This story does a good job of breaking down common misconceptions about how our immune systems work.
How did we get so obsessed with streaks?, Culture Study
In this piece, journalist Anne Helen Petersen interviews a games expert on the gamification of, well, everything. Exercising, learning a language, and collecting airline miles have all been gamified with streaks, badges, and rewards that trap us in stasis and bleed us dry.
The controversy around AI-generated art continues. Who bears the responsibility for ensuring that artificial intelligence is used responsibly when generating images and art? Does the burden lie with the organizations that make the software, or with its users?
Twitter grapples with Chinese spam obscuring news of protests, The Washington Post
The Chinese government is suspected of being behind efforts to suppress information about widespread protests in the country. This development is also another signal of the potential impacts of Twitter’s slashed content moderation staff.
In the United States, the supposed rise in crime rates proved to be a potent issue in the midterm elections. And anyone living in New York will recognize the author’s opening anecdote. But this article uses hard data to show that many of the stories politicians and journalists tell about increasing crime rates are just that—stories.
While this article is now a couple of weeks old, it’s a great reminder that the collapse of companies like FTX has consequences far beyond their own balance sheets.
March 15, 2023
I have to admit, the "What We’re Reading" section is my favorite part of most of the newsletters I subscribe to, including OpenMind’s. I love having smart people curate a reading list for me, and I really enjoy building the list for OpenMind each week. It forces me to read more widely than I perhaps would otherwise, and it gives me a chance to share the most fascinating things I’ve found.
This occasional column is an extension of the WWR section of our newsletter. We hope it will help our readers continue diving deep into the topics we cover and care about—even if we aren’t the ones publishing the stories.
—Jillian Mock, managing editor, OpenMind