In March, the president fired off a tweet accusing former President Barack Obama of wiretapping Trump Tower. Republican members of the Senate Intelligence Committee and the F.B.I. director, James B. Comey, dismissed the claim. But the Trump team doubled down, writing off media reports and insisting that evidence of wiretapping would soon surface. It didn’t.
We’re used to this pattern by now: The president dresses up useful lies as “alternative facts” and decries uncomfortable realities as “fake news.” Stoking conservative passion and liberal fury, Trump stirs up confusion about the veracity of settled knowledge and, through sheer assertion, elevates belief to the status of truth.
Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.
For decades, critical social scientists and humanists have chipped away at the idea of truth. We’ve deconstructed facts, insisted that knowledge is situated and denied the existence of objectivity. The bedrock claim of critical philosophy, going back to Kant, is simple: We can never have certain knowledge about the world in its entirety. Claiming to know the truth is therefore a kind of assertion of power.
These ideas animate the work of influential thinkers like Nietzsche, Foucault and Derrida, and they’ve become axiomatic for many scholars in literary studies, cultural anthropology and sociology.
From these premises, philosophers and theorists have derived a number of related insights. One is that facts are socially constructed. People who produce facts — scientists, reporters, witnesses — do so from a particular social position (maybe they’re white, male and live in America) that influences how they perceive, interpret and judge the world. They rely on non-neutral methods (microscopes, cameras, eyeballs) and use non-neutral symbols (words, numbers, images) to communicate facts to people who receive, interpret and deploy them from their own social positions.
Call it what you want: relativism, constructivism, deconstruction, postmodernism, critique. The idea is the same: Truth is not found, but made, and making truth means exercising power.
The gifts of free usage and anonymity have made WhatsApp the most popular tool to spread both outlandish stories and politically motivated rumors. On an ordinary Indian morning, messages on the app can include the rumor of a popular mango drink being laced with H.I.V.-positive blood, the United Nations Educational, Scientific and Cultural Organization’s rating of Narendra Modi as the best prime minister in the world or Julian Assange describing him as an incorruptible leader.
When science fiction writers first imagined robot invasions, the idea was that bots would become smart and powerful enough to take over the world by force, whether on their own or as directed by some evildoer. In reality, something only slightly less scary is happening. Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people.
The nation’s current post-truth moment is the ultimate expression of mind-sets that have made America exceptional throughout its history.
Each of us is on a spectrum somewhere between the poles of rational and irrational. We all have hunches we can’t prove and superstitions that make no sense. Some of my best friends are very religious, and others believe in dubious conspiracy theories. What’s problematic is going overboard—letting the subjective entirely override the objective; thinking and acting as if opinions and feelings are just as true as facts. The American experiment, the original embodiment of the great Enlightenment idea of intellectual freedom, whereby every individual is welcome to believe anything she wishes, has metastasized out of control. From the start, our ultra-individualism was attached to epic dreams, sometimes epic fantasies—every American one of God’s chosen people building a custom-made utopia, all of us free to reinvent ourselves by imagination and will. In America nowadays, those more exciting parts of the Enlightenment idea have swamped the sober, rational, empirical parts. Little by little for centuries, then more and more and faster and faster during the past half century, we Americans have given ourselves over to all kinds of magical thinking, anything-goes relativism, and belief in fanciful explanation—small and large fantasies that console or thrill or terrify us. And most of us haven’t realized how far-reaching our strange new normal has become.
Much more than the other billion or so people in the developed world, we Americans believe—really believe—in the supernatural and the miraculous, in Satan on Earth, in reports of recent trips to and from heaven, and in a story of life’s instantaneous creation several thousand years ago.
The article below is a response to the article above.
HOW AMERICA REALLY LOST ITS MIND: HINT, IT WASN’T ENTIRELY THE FAULT OF HIPPIE NEW AGERS AND POSTMODERN ACADEMICS
Kurt Andersen in the Atlantic has given us a superb think-piece on how we arrived at our post-truth irrationalism, an American “Fantasyland” dominated by conspiracy theories, paranoia, outlandish ideas, fake news and alternative facts. The new information age accelerated the relativism birthed in the 1960s, Andersen contends, and now we can all mentally furnish our own fantastic dwellings with facts and ideas we want to be true—and we can even find countless likeminded individuals on the Internet who will confirm and embellish our deepest alternate realities.
It’s a great cultural history and a nuanced, detailed explanation of how we landed in our current predicament, culminating in a Donald Trump presidency. It’s also a deeply flawed account.
Do attempts to legislate against “fake news” recall the tactics of religious censors?
Though less extreme than the censorship of yesteryear, some laws threaten the important freedom to be wrong
ARE today’s warriors against “fake news” taking a road that will eventually lead to the methods of inquisitors and religious censors? That is the view put forward by Jacob Mchangama, a Danish lawyer and founder of a think-tank which defends free speech against all comers.
Mr Mchangama does not underestimate the threat that phony news items and the “weaponisation” of information pose to the functioning of democracy. But in his view, set out this week in a pithy essay, some reactions to the challenge are unhealthy and inimical to freedom.
Knowing What to Trust: Fake News’ Global Role in Society and Political Campaigns
In collaboration with His Highness the Aga Khan Council for the Southwestern United States and the World Affairs Council of Greater Houston
With the 24-hour news cycle, it is easier than ever to learn what is happening both in our communities and halfway across the world. However, with increased access also comes the possibility for stories to be disseminated and shared much more quickly than their accuracy can be verified. Social media has added to this dilemma through its focus on sensational headlines and eye-catching imagery to attract viewers’ attention.
Recently, "fake news" has increasingly played a role in political decisions and business transactions as people look to be better informed and influence the competition’s perception of an issue. "Fake news" is not an issue specific to the United States, as Asian and European politicians have recently claimed that inaccurate information is aimed at destabilizing their countries and propping up interests from the opposition.
How might news become more influential in our everyday lives? How is the "fake news" experience similar and different in varying parts of the world? Join Asia Society for a discussion with journalist and MSNBC anchor Ali Velshi on this important and timely topic.
There is an abiding dream in the tech world that when all the planet’s people and data are connected it will be a better place. That may prove true. But getting there is turning into a nightmare — a world where billions of people are connected but without sufficient legal structures, security protections or moral muscles among companies and users to handle all these connections without abuse.
Lately, it feels as if we’re all connected but no one’s in charge.
Equifax, the credit reporting bureau, became brilliant at vacuuming up all your personal credit data — without your permission — and selling it to companies that wanted to lend you money. But it was so lax in securing that data that it failed to install simple software security fixes, leaving a hole for hackers to get the Social Security numbers and other personal information of some 146 million Americans, or nearly half the country.
But don’t worry, Equifax ousted its C.E.O., Richard Smith, with “a payday worth as much as $90 million — or roughly 63 cents for every customer whose data was potentially exposed in its recent security breach,” Fortune reported. That will teach him!
Smith and his board should be in jail. I’m with Senator Elizabeth Warren, who told CNBC, “So long as there is no personal responsibility when these big companies breach consumers’ trust, let their data get stolen, cheat their consumers … then nothing is going to change.”
Facebook, Google and Twitter are different animals in my mind. Twitter has enabled more people than ever to participate in the global conversation; Facebook has enabled more people than ever to connect and build communities; Google has enabled everyone to find things like never before.
Those are all good things. But the three companies are also businesses, and the last election suggests they’ve all connected more people than they can manage and they’ve been naïve about how many bad guys were abusing their platforms.
In the coming weeks, executives from Facebook and Twitter will appear before congressional committees to answer questions about the use of their platforms by Russian hackers and others to spread misinformation and skew elections. During the 2016 presidential campaign, Facebook sold more than $100,000 worth of ads to a Kremlin-linked company, and Google sold more than $4,500 worth to accounts thought to be connected to the Russian government.
Agents with links to the Russian government set up an endless array of fake accounts and websites and purchased a slew of advertisements on Google and Facebook, spreading dubious claims that seemed intended to sow division all along the political spectrum — “a cultural hack,” in the words of one expert.
Yet the psychology behind social media platforms — the dynamics that make them such powerful vectors of misinformation in the first place — is at least as important, experts say, especially for those who think they’re immune to being duped. For all the suspicions about social media companies’ motives and ethics, it is the interaction of the technology with our common, often subconscious psychological biases that makes so many of us vulnerable to misinformation, and this has largely escaped notice.
Skepticism of online “news” serves as a decent filter much of the time, but our innate biases allow it to be bypassed, researchers have found — especially when presented with the right kind of algorithmically selected “meme.”
Facebook, Google and Twitter were supposed to save politics as good information drove out prejudice and falsehood. Something has gone very wrong
IN 1962 a British political scientist, Bernard Crick, published “In Defence of Politics”. He argued that the art of political horse-trading, far from being shabby, lets people of different beliefs live together in a peaceful, thriving society. In a liberal democracy, nobody gets exactly what he wants, but everyone broadly has the freedom to lead the life he chooses. However, without decent information, civility and conciliation, societies resolve their differences by resorting to coercion.
How Crick would have been dismayed by the falsehood and partisanship on display in this week’s Senate committee hearings in Washington. Not long ago social media held out the promise of a more enlightened politics, as accurate information and effortless communication helped good people drive out corruption, bigotry and lies. Yet Facebook acknowledged that before and after last year’s American election, between January 2015 and August this year, 146m users may have seen Russian misinformation on its platform. Google’s YouTube admitted to 1,108 Russian-linked videos and Twitter to 36,746 accounts. Far from bringing enlightenment, social media have been spreading poison.
Facebook Is Ignoring Anti-Abortion Fake News
Facebook’s current initiatives to crack down on fake news can, theoretically, be applicable to misinformation on other issues. However, there are several human and technical barriers that prevent misinformation about reproductive rights from being identified, checked and removed at the same — already slow — rate as other misleading stories.
First, the question of what’s considered a “fake news” site is not always black and white. Facebook says it has been tackling the sources of fake news by eliminating the ability to “spoof” domains and by deleting Facebook pages linked to spam activity. For example, this year Facebook identified and deleted more than 30 pages owned by Macedonian publishers, who used them to push out fake stories about United States politics, after alarms had been sounded about sites in the country spreading misinformation about the 2016 campaign. (Facebook says some of the sites may have been taken down for other terms-of-service violations.) But anti-abortion sites are different. They do not mimic real publications, and they publish pieces on real events alongside factually incorrect or thinly sourced stories, which helps blur the line between what’s considered a news blog and “fake news.”
Second, Facebook says one of its key aims in tackling fake news is to remove the profit incentive, because it says “most false news is financially motivated.” It says it hopes to do that by making it more difficult for the people behind the fake news sites to buy ads on its platform and by detecting and deleting spam accounts, which it says are a major force behind the spread of misinformation.
However, the incentive for the people who write content for anti-abortion news sites and Facebook pages is ideological, not financial. Anti-abortion, anti-science content isn’t being written by spammers hoping to make money, but by ordinary people who are driven by religious or political beliefs. Their aim isn’t to profit from ads. It’s to convince readers of their viewpoint: that abortion is morally wrong, that autism is caused by vaccines or that climate change isn’t real.
Finally, public pressure influences where Facebook directs its attention. Facebook may be focused on fake news and the United States election now, but its efforts to prevent the spread of misinformation in the buildup to the election were practically nonexistent. It took action only after intense scrutiny.
18 outlandish conspiracy theories Donald Trump has floated on the campaign trail and in the White House
In May 2011, reporters swarmed now-President Donald Trump as he exited the Hyatt in Washington, DC, after the White House Correspondents' Dinner.
Many wanted a response from Trump, who had just watched President Obama deliver jokes that night about Trump's constant questioning of the legitimacy of Obama's birth certificate.
Over six years later, Trump is still not convinced of the legitimacy of Obama's birth certificate. But this was perhaps the first of numerous debunked or unverified conspiracy theories that Trump has entertained during his time in the political spotlight.
Throughout the 2016 campaign and while in the White House, Trump has floated theories fueled by the conspiratorial-minded corners of supermarket tabloids and the internet, something unprecedented in modern politics. He's often used them as weapons against his opponents.
Here are 18 of the most notable conspiracy theories Trump has entertained:
Pittsburgh — Of all the questions that the ascendancy of Donald Trump has raised — on the value of political experience in governing, on the fitness of business executives as government executives or the profile of the Republicans as defenders of the rich and the Democrats as the sentinels of the poor — none is as perplexing as perhaps the central question of the age:
Does the truth still matter?
It emerged again recently when reports surfaced that the president, who had previously acknowledged his presence on the “Access Hollywood” videotape, has suggested that he did not make the comments on the tape. He also resumed questioning whether Barack Obama was born in the United States — despite having said he accepted it as true last year. In a speech, Mr. Trump, contradicting almost every analysis of the tax bill, said the measure would hurt wealthy people, including himself.
For nearly a half-century in journalism, from hometown cub reporter to national political correspondent to metro daily executive editor, I’ve navigated with the aid of a newspaperman’s North Star: the conviction that there is such a thing as objective truth that can be discovered and delivered through dispassionate hard work and passionate good faith, and that the product of that effort, if thoroughly documented, would be accepted as the truth.
Mr. Trump has turned that accepted truth on its head, sowing doubts about the veracity of news reporting by promoting the notion that the mainstream media spews “fake news.” Employing an evocative, sinister phrase dating to the French Revolution and embraced by Lenin and his Soviet successors, he has declared that great portions of the press are the “enemy of the people.”
Much of the Trump rhetoric on the press, to be sure, is less statecraft than stagecraft, designed to dismiss negative stories — as if the media had never been critical of past presidents instead of being the equal-opportunity pugilists who bedeviled Bill Clinton (in the Monica Lewinsky episode) and George W. Bush (in the aftermath of the Iraq war).
Even so, Mr. Trump can be credited with prompting, however inadvertently, the most profound period of press self-assessment in decades — and it comes at a period of unusual financial peril for the mainstream media. All around are sad affirmations of the diminishing credibility of the press, disheartening reminders that at least a third of the country, and perhaps more, regards our work as meaningless, biased or untruthful. In newsrooms, as at newsstands across the country, difficult but vital questions about the methods and motives of the press are being raised, forcing newsmongers and consumers of news to question long-held assumptions.
WASHINGTON — The indictment of 13 Russians filed on Friday by Robert Mueller, the special counsel investigating Russian efforts to influence the 2016 presidential election, details the secret workings of the Internet Research Agency, an organization in St. Petersburg, Russia, that disseminates false information online. According to American intelligence officials, the Kremlin oversaw this shadowy operation, which made extensive use of social media accounts to foster conflict in the United States and erode public faith in its democracy.
But the Kremlin’s operation relied on more than just its own secrecy. It also benefited from the secrecy of social media platforms like Facebook and Twitter. Their algorithms for systematically targeting users to receive certain content are off limits to the public, and the output of these algorithms is almost impossible to monitor. The algorithms make millions of what amount to editorial decisions, pumping out content without anyone fully understanding what is happening.
The editorial decisions of a newspaper or television news program are immediately apparent (articles published, segments aired) and so can be readily analyzed for bias and effect. By contrast, the editorial decisions of social media algorithms are opaque and slow to be discovered — even to those who run the platforms. It can take days or weeks before anyone finds out what has been disseminated by social media software.
The Mueller investigation is shining a welcome light on the Kremlin’s covert activity, but there is no similar effort to shine a light on the social media algorithms that helped the Russians spread their messages. There needs to be. This effort should begin by “opening up” the results of the algorithms.
In computer-speak, this “opening up” would involve something called an open application programming interface. This is a common software technique that allows different programs to work with one another. For instance, Uber uses the open application programming interface of Google Maps to get information about a rider’s pickup point and destination. It is not Uber’s own mapping algorithm, but rather Google’s open application programming interface, that makes it possible for Uber to build its own algorithms for its distinctive functions.
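The division of labor the Uber/Google example describes can be sketched in a few lines of code. This is a minimal illustration of the idea, not Google’s or Uber’s actual interfaces: the route data, function names and fare formula below are all invented for the example. One party exposes a documented interface to its results without revealing its internal algorithm; another builds its own, distinctive algorithm on top of those results.

```python
# Sketch of the open-API idea: a provider exposes its results through a
# documented interface, and a consumer builds its own algorithm on top.
# All names and figures here are hypothetical stand-ins.

def maps_api_route(pickup, destination):
    """The provider's open interface: returns route data without exposing
    the internal routing algorithm that computed it."""
    routes = {
        ("Airport", "Downtown"): {"distance_km": 18.0, "minutes": 25},
        ("Downtown", "Airport"): {"distance_km": 18.0, "minutes": 30},
    }
    return routes[(pickup, destination)]

def ride_fare(pickup, destination, base=2.50, per_km=1.20, per_min=0.35):
    """The consumer's own algorithm (here, fare pricing), which needs only
    the open interface above, not the provider's internals."""
    route = maps_api_route(pickup, destination)
    return round(base + per_km * route["distance_km"]
                      + per_min * route["minutes"], 2)

print(ride_fare("Airport", "Downtown"))  # prints 32.85
```

Opening up a platform’s algorithmic *output* in this way is what would let outside researchers audit what the recommendation software actually disseminated, without requiring the platform to publish the algorithm itself.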
The spread of misinformation on social media is an alarming phenomenon that scientists have yet to fully understand. While the data show that false claims are increasing online, most studies have analyzed only small samples or the spread of individual fake stories.
My colleagues Soroush Vosoughi, Deb Roy and I set out to change that. We recently analyzed the diffusion of all of the major true and false stories that spread on Twitter from its inception in 2006 to 2017. Our data included approximately 126,000 Twitter “cascades” (unbroken chains of retweets with a common, singular origin) involving stories spread by three million people more than four and a half million times.
Disturbingly, we found that false stories spread significantly more than did true ones. Our findings were published on Thursday in the journal Science.
Social media and fake news
On Twitter, falsehood spreads faster than truth
A lie is halfway round the world while the truth is still putting on its shoes
ACROSS the French countryside, in the summer of 1789, rumours swirled about vengeful aristocrats bent on the destruction of peasants’ property. It was not true. The Great Fear, as it is now known, tipped France into revolution with a flurry of fact-free gossip and rumour.
Two centuries later the methods for spreading nonsense are much improved. In the first paper of its kind, published in Science on March 8th, Soroush Vosoughi and his colleagues at the Massachusetts Institute of Technology present evidence that, on Twitter at least, false stories travel faster and farther than true ones.
How to Prevent Smart People From Spreading Dumb Ideas
We have a serious problem, and it goes far beyond “fake news.” Too many Americans have no idea how to properly read a social media feed. As we’re coming to learn more and more, such ignorance seems to be plaguing almost everybody — regardless of educational attainment, economic class, age, race, political affiliation or gender.
Some very smart people are helping to spread some very dumb ideas.
We all know this is a problem. The recent federal indictment of a Russian company, the Internet Research Agency, lists the numerous ways Russian trolls and bots created phony events and leveraged social media to sow disruption throughout the 2016 presidential election. New revelations about Cambridge Analytica’s sophisticated use of Facebook data to target unsuspecting social media users remind us how complex the issue has become. Even the pope has weighed in, using his bully pulpit to warn the world of this new global evil.
But there are some remarkably easy steps that each of us, on our own, can take to address this issue. By following these three simple guidelines, we can collaborate to help solve a problem that’s befuddling the geniuses who built Facebook and Twitter.
If the problem is crowdsourced, then it seems obvious the solution will have to be crowdsourced as well. With that in mind, here are three easy steps each of us can take to help build a better civic polity. This advice will also help each of us look a little less foolish.
That such threats to democracy are now possible is due in part to the fact that our society lacks an information ethics adequate to its deepening dependence on data. Where politics is driven by data, we need an ethics to guide that data. But in our rush to deliver on the promises of Big Data, we have not sought one.
An adequate ethics of data for today would include not only regulatory policy and statutory law governing matters like personal data privacy and implicit bias in algorithms. It would also include a set of cultural expectations, fortified by extensive education in high schools and colleges, requiring us to think about data technologies as we build them, not after they have already profiled, categorized and otherwise informationalized millions of people. Students who will later turn their talents to the great challenges of data science would also be trained to consider the ethical design and use of the technologies they will someday unleash.
Clearly, we are not there. High schoolers today may aspire to be the next Mark Zuckerberg, but how many dream of designing ethical data technologies? Who would their role models even be? Executives at Facebook, Twitter and Amazon are among our celebrities today. But how many data ethics advocates can the typical social media user name?
Where Countries Are Tinderboxes and Facebook Is a Match
False rumors set Buddhist against Muslim in Sri Lanka, the most recent in a global spate of violence fanned by social media.
We came to this house to try to understand the forces of social disruption that have followed Facebook’s rapid expansion in the developing world, whose markets represent the company’s financial future. For months, we had been tracking riots and lynchings around the world linked to misinformation and hate speech on Facebook, which pushes whatever content keeps users on the site longest — a potentially damaging practice in countries with weak institutions.
Time and again, communal hatreds overrun the newsfeed — the primary portal for news and information for many users — unchecked as local media are displaced by Facebook and governments find themselves with little leverage over the company. Some users, energized by hate speech and misinformation, plot real-world attacks.
Germany Acts to Tame Facebook, Learning From Its Own History of Hate
BERLIN — Security is tight at this brick building on the western edge of Berlin. Inside, a sign warns: “Everybody without a badge is a potential spy!”
Spread over five floors, hundreds of men and women sit in rows of six scanning their computer screens. All have signed nondisclosure agreements. Four trauma specialists are at their disposal seven days a week.
They are the agents of Facebook. And they have the power to decide what is free speech and what is hate speech.
This is a deletion center, one of Facebook’s largest, with more than 1,200 content moderators. They are cleaning up content — from terrorist propaganda to Nazi symbols to child abuse — that violates the law or the company’s community standards.
Uganda imposes WhatsApp and Facebook tax 'to stop gossip'
Uganda's parliament has passed a law to impose a controversial tax on people using social media platforms.
It imposes a 200 shilling [$0.05, £0.04] daily levy on people using internet messaging platforms like Facebook, WhatsApp, Viber and Twitter.
President Yoweri Museveni had pushed for the changes, arguing that social media encouraged gossip.
The law should come into effect on 1 July but there remain doubts about how it will be implemented.
The new Excise Duty (Amendment) Bill will also impose various other taxes, including a 1% levy on the total value of mobile money transactions - which civil society groups complain will affect poorer Ugandans who rarely use banking services.
State Minister for Finance David Bahati told parliament that the tax increases were needed to help Uganda pay off its growing national debt.
Experts and at least one major internet service provider have raised doubts about how a daily tax on social media will be implemented, the BBC's Catherine Byaruhanga reports from Uganda.
The government is struggling to ensure all mobile phone SIM cards are properly registered.
And of the 23.6 million mobile phone subscribers in the country, only 17 million use the internet, Reuters reports.
It is therefore not clear how authorities will be able to identify Ugandans accessing social media sites.
50 Famous People Who Never Existed
We'd all love to be as successful as kitchen icon Betty Crocker, as prolific as Nancy Drew author Carolyn Keene, or as legendary as the great King Arthur. But even if our efforts at megastardom fall short, we've got at least one major advantage over these famous people: We're real.
That’s right: some of your favorite spokespeople, brand mascots, composers, and authors are nothing more than the invention of some very creative people. Curious to know which icons are purely fictional? Read on, because we've compiled 50 of them.
Technology has given rise to an age of misinformation. But philosophy, and a closer look at our own social behavior, could help eliminate it.
Technology spawned the problem of fake news, and it’s tempting to think that technology can solve it, that we only need to find the right algorithm and code the problem away. But this approach ignores valuable lessons from epistemology, the branch of philosophy concerned with how we acquire knowledge.
To understand how we might fix the problem of fake news, start with cocktail hour gossip. Imagine you’re out for drinks when one of your friends shocks the table with a rumor about a local politician. The story is so scandalous you’re not sure it could be right. But then, here’s your good friend, vouching for it, putting their reputation on the line. Maybe you should believe it.
This is an instance of what philosophers call testimony. It’s similar to the sort of testimony given in a courtroom, but it’s less formal and much more frequent. Testimony happens any time you believe something because someone else vouched for the information. Most of our knowledge about the world is secondhand knowledge that comes to us through testimony. After all, we can’t each do all of our own scientific research, or make our own maps of distant cities.
All of this relies upon norms of testimony. Making a factual claim in person, even if you are merely passing on some news you picked up elsewhere, means taking on the responsibility for it, and putting your epistemic reputation — that is, your credibility as a source — at risk. Part of the reason that people believe you when you share information is this: they’ve determined your credibility and can hold you accountable if you are lying or if you’re wrong. The reliability of secondhand knowledge comes from these norms.
Imagine if a doctored video of a politician appeared the day before an election. It’s everything Vladimir Putin ever dreamed of.
Now imagine the effect of deep fakes on a close election. Let’s say a video is posted of Beto O’Rourke, a Democrat running for Senate in Texas, swearing that he wants to take away every last gun in Texas, or of Senator Susan Collins of Maine saying she’s changed her mind on Brett Kavanaugh. Before the fraud can be properly refuted, the polls open. The chaos that might ensue — well, let’s just say it’s everything Vladimir Putin ever dreamed of.
There’s more: The “liar’s dividend” will now apply even to people, like Mr. Trump, who actually did say something terrible. In the era of deep fakes, it will be simple enough for a guilty party simply to deny reality. Mr. Trump, in fact, has claimed that the infamous recording of him suggesting grabbing women by their nether parts is not really him. This, after apologizing for it.
If you want to learn more about the dangers posed by deep fakes, you can read the new report by Bobby Chesney and Danielle Keats Citron at the Social Science Research Network. It’s a remarkable piece of scholarship — although I wouldn’t dive in if your primary goal is to sleep better at night.
The Poison on Facebook and Twitter Is Still Spreading
Social platforms have a responsibility to address misinformation as a systemic problem, instead of reacting to case after case.
The internet platforms will always make some mistakes, and it’s not fair to expect otherwise. And the task before Facebook, YouTube, Twitter, Instagram and others is admittedly herculean. No one can screen everything in the fire hose of content produced by users. Even if a platform makes the right call on 99 percent of its content, the remaining 1 percent can still be millions upon millions of postings. The platforms are due some forgiveness in this respect.
It’s increasingly clear, however, that at this stage of the internet’s evolution, content moderation can no longer be reduced to individual postings viewed in isolation and out of context. The problem is systemic, currently manifested in the form of coordinated campaigns both foreign and homegrown. While Facebook and Twitter have been making strides toward proactively staving off dubious influence campaigns, a tired old pattern is re-emerging: journalists and researchers find a problem, the platform reacts, and the whole cycle begins anew.
The Problem With Fixing WhatsApp? Human Nature Might Get in the Way
Should the world worry about WhatsApp? Has it become a virulent new force in global misinformation and political trickery?
Or, rather, should the world rejoice about WhatsApp? After all, hasn’t it provided a way for people everywhere to communicate securely with encrypted messages, beyond the reach of government surveillance?
These are deep and complicated questions. But the answer to all of them is simple: Yes.
In recent months, the messaging app, which is owned by Facebook and has more than 1.5 billion users worldwide, has given rise to frightening new political and social dynamics. In Brazil, which is in the midst of a bruising national election campaign, WhatsApp has become a primary vector for conspiracy theories and other political misinformation. WhatsApp played a similar role in Kenya’s election last year. In India this year, false messages about child kidnappers went viral on WhatsApp, leading to mob violence that has killed dozens of people.
One dog year equals seven human years.
A man’s maternal grandfather determines whether he will go bald.
Dietary cholesterol raises blood cholesterol.
We use only 10% of our brains.
These statements are all false. The dog-to-human age ratio has been repeated since the thirteenth century, with zero scientific basis. The gene for baldness can be inherited from either side of your family. Eating cholesterol, as Dr. Peter Attia explains, “has very little impact on the cholesterol levels in your body.” And the 10% figure about the brain “is so wrong it is almost laughable,” explains neurologist Barry Gordon. Over the course of a day, we use 100% of our brains.
These are all facts—not opinions. Yet the misconceptions surrounding them have persisted for decades, for a simple reason: We’ve been repeating them over and over.
This principle of repetition has been the key to fueling toxic political propaganda over the centuries. “A lie told once remains a lie, but a lie told a thousand times becomes the truth,” explained Joseph Goebbels, the mastermind behind the Nazi propaganda machine. Adolf Hitler agreed: “The most brilliant propagandist technique will yield no success unless one fundamental principle is borne in mind constantly: It must confine itself to a few points and repeat them over and over.”
Repetition has become even easier with social media. Myths, once reported and retweeted, become the truth. When we see a piece of false information, our instinct is to correct it. But in repeating fake news in an attempt to dispel it, we end up unwittingly spreading the virus. People remember the myths rather than the opposing arguments.
Consider the myth regarding the link between autism and the MMR vaccine. In one study, researchers sent more than 1,700 parents one of four campaigns intended to increase MMR vaccination rates. The campaigns, which were adopted nearly verbatim from those used by federal agencies in the United States, ranged from textual information refuting the vaccine-autism link to graphic images of children who had developed diseases that could have been prevented by the vaccine. The study’s goal was to determine which campaign would be the most effective in overcoming parents’ reluctance to vaccinate their children.
None of the campaigns worked.
For parents with the least favorable attitude toward vaccines, the campaigns actually backfired and made the parents less likely to vaccinate their children. The fear appeal campaign—bearing tragic images of children suffering from measles—paradoxically increased already hesitant parents’ belief that the MMR vaccine causes autism.
So resist the urge to hit that retweet button. Don’t take the bait. Stop repeating false facts (as I did at the beginning of this post) or reposting that latest quip from your favorite demagogue.
Fake news won’t disappear. But at least we can drain it of the oxygen of repetition it needs to thrive.
Something continues to nag at me about the midterm elections.
It’s the way we in the news media too often allowed ourselves to be manipulated by President Trump to heighten fears about the immigrant caravan from Central America so as to benefit Republican candidates. Obviously there were many journalists who pushed back on the president’s narrative, but on the whole I’m afraid news organizations became a channel for carefully calculated fear-mongering about refugees.
We in the media have, quite rightly, aggressively covered the failings of Facebook and other social media in circulating lies that manipulated voters. That’s justified: We should hold executives’ feet to the fire when they pursue profits in ways that undermine the integrity of our electoral system.
The problem is that too often we in the media engage in the same kind of profit chasing. The news business model is in part about attracting eyeballs, and cable television in particular has found that as long as the topic is President Trump, revenues follow. So when Trump makes false statements about America being invaded by Central American refugees, he not only gets coverage, but also manages to control the media agenda.
At a recent Trilateral Commission conference in Silicon Valley, there was discussion of the irresponsibility of internet companies in modern democracy — but also tough words about the role of the mainstream media. Nathaniel Persily, a Stanford law professor and elections expert, told me that in 2016, Russians used mainstream media to manipulate voters even more successfully than they used Facebook.
Long before Donald Trump began his political career, he explained his attitude toward truth with characteristic brazenness. In a 2004 television interview with Chris Matthews on MSNBC, he marveled at the Republicans' successful attacks on the wartime heroism of Senator John Kerry, the Democrats' presidential candidate. "[I]t's almost coming out that [George W.] Bush is a war hero and Kerry isn't," Trump said, admiringly. "I think that could be the greatest spin I've ever seen." Matthews then asked about Vice President Dick Cheney's insinuations that Kerry's election would lead to a devastating attack on the United States. "Well," replied Trump, "it's a terrible statement unless he gets away with it." With that extraordinary declaration, Trump showed himself to be an attentive student of disinformation and its operative principle: Reality is what you can get away with.
Trump's command of the basic concept of disinformation offers some insight into how he approaches the truth as president. The fact is that President Trump lies not only prolifically and shamelessly, but in a different way than previous presidents and national politicians. They may spin the truth, bend it, or break it, but they pay homage to it and regard it as a boundary. Trump's approach is entirely different. It was no coincidence that one of his first actions after taking the oath of office was to force his press secretary to tell a preposterous lie about the size of the inaugural crowd. The intention was not to deceive anyone on the particular question of crowd size. The president sought to put the press and public on notice that he intended to bully his staff, bully the media, and bully the truth.
In case anyone missed the point, Sean Spicer, Trump's press secretary, made it clear a few weeks later when he announced favorable employment statistics. In the Obama years, Trump had been fond of describing monthly jobs reports as "phony" and "totally fiction." But now? "I talked to the president prior to this and he said to quote him very clearly," Spicer said. "They may have been phony in the past, but it's very real now." The president was not saying that the Bureau of Labor Statistics had improved its methodology. He was asserting that truth and falsehood were subject to his will.
Since then, such lies have only multiplied. Fact checkers say that, if anything, the rate has increased. For the president and his enablers, the lying reflects a strategy, not merely a character flaw or pathology.
America has faced many challenges to its political culture, but this is the first time we have seen a national-level epistemic attack: a systematic attack, emanating from the very highest reaches of power, on our collective ability to distinguish truth from falsehood. "These are truly uncharted waters for the country," wrote Michael Hayden, former CIA director, in the Washington Post in April. "We have in the past argued over the values to be applied to objective reality, or occasionally over what constituted objective reality, but never the existence or relevance of objective reality itself." To make the point another way: Trump and his troll armies seek to undermine the constitution of knowledge.
THE PROBLEM OF REALITY
The attack, Hayden noted, is on "the existence or relevance of objective reality itself." But what is objective reality?
In everyday vernacular, reality often refers to the world out there: things as they really are, independent of human perception and error. Reality also often describes those things that we feel certain about, things that we believe no amount of wishful thinking could change. But, of course, humans have no direct access to an objective world independent of our minds and senses, and subjective certainty is in no way a guarantee of truth. Philosophers have wrestled with these problems for centuries, and today they have a pretty good working definition of objective reality. It is a set of propositions: propositions that have been validated in some way, and have thereby been shown to be at least conditionally true — true, that is, unless debunked. Some of these propositions reflect the world as we perceive it (e.g., "The sky is blue"). Others, like claims made by quantum physicists and abstract mathematicians, appear completely removed from the world of everyday experience.
It is worth noting, however, that the locution "validated in some way" hides a cheat. In what way? Some Americans believe Elvis Presley is alive. Should we send him a Social Security check? Many people believe that vaccines cause autism, or that Barack Obama was born in Africa, or that the murder rate has risen. Who should decide who is right? And who should decide who gets to decide?
This is the problem of social epistemology, which concerns itself with how societies come to some kind of public understanding about truth. It is a fundamental problem for every culture and country, and the attempts to resolve it go back at least to Plato, who concluded that a philosopher king (presumably someone like Plato himself) should rule over reality. Traditional tribal communities frequently use oracles to settle questions about reality. Religious communities use holy texts as interpreted by priests. Totalitarian states put the government in charge of objectivity.
There are many other ways to settle questions about reality. Most of them are terrible because they rely on authoritarianism, violence, or, usually, both. As the great American philosopher Charles Sanders Peirce said in 1877, "When complete agreement could not otherwise be reached, a general massacre of all who have not thought in a certain way has proved a very effective means of settling opinion in a country."
As Peirce implied, one way to avoid a massacre would be to attain unanimity, at least on certain core issues. No wonder we hanker for consensus. Something you often hear today is that, as Senator Ben Sasse put it in an interview on CNN, "[W]e have a risk of getting to a place where we don't have shared public facts. A republic will not work if we don't have shared facts."
But that is not quite the right answer, either. Disagreement about core issues and even core facts is inherent in human nature and essential in a free society. If unanimity on core propositions is not possible or even desirable, what is necessary to have a functional social reality? The answer is that we need an elite consensus, and hopefully also something approaching a public consensus, on the method of validating propositions. We needn't and can't all agree that the same things are true, but a critical mass needs to agree on what it is we do that distinguishes truth from falsehood, and more important, on who does it.
Who can be trusted to resolve questions about objective truth? The best answer turns out to be no one in particular. The greatest of human social networks was born centuries ago, in the wake of the chaos and creedal wars that raged across Europe after the invention of the printing press (the original disruptive information technology). In reaction, experimenters and philosophers began entertaining a radical idea. They removed reality-making from the authoritarian control of priests and princes and placed it in the hands of a decentralized, globe-spanning community of critical testers who hunt for each other's errors. In other words, they outsourced objectivity to a social network. Gradually, in the scientific revolution and the Enlightenment, the network's norms and institutions assembled themselves into a system of rules for identifying truth: a constitution of knowledge.
Why Misinformation Is About Who You Trust, Not What You Think
Two philosophers of science diagnose our age of fake news.
“I can’t see them. Therefore they’re not real.” From which century was this quote drawn? Not a medieval one. The utterance emerged on Sunday from Fox & Friends presenter Pete Hegseth, who was referring to … germs. The former Princeton University undergraduate and Afghanistan counterinsurgency instructor said, to the mirth of his co-hosts, that he hadn’t washed his hands in a decade. Naturally this germ of misinformation went viral on social media.
The next day, as serendipity would have it, the authors of The Misinformation Age: How False Beliefs Spread—philosophers of science Cailin O’Connor and James Owen Weatherall—sat down with Nautilus. In their book, O’Connor and Weatherall, both professors at the University of California, Irvine, illustrate mathematical models of how information spreads—and how consensus on truth or falsity manages or fails to take hold—in society, but particularly in social networks of scientists. The coauthors argue that “we cannot understand changes in our political situation by focusing only on individuals. We also need to understand how our networks of social interaction have changed, and why those changes have affected our ability, as a group, to form reliable beliefs.”
O’Connor and Weatherall, who are married, are deft communicators of complex ideas. Our conversation ranged from the tobacco industry’s wiles to social media’s complicity in bad data. We discussed how science is subtly manipulated and how the public should make sense of contradictory studies. The science philosophers also had a sharp tip or two for science journalists.
The internet contributed to the culture of mendacity in a fight between nuclear neighbors.
You don’t need to travel far to find such a nightmare. But distance can help clarify the picture: It’s easier to appreciate the simmering pot when you’re looking at it from the outside, rather than boiling in it.
And so I spent much of the last week watching a pot boil over on the other side of the world.
In retaliation for a terrorist attack against Indian troops last month, India conducted airstrikes against Pakistan. After I learned about them, I tried to follow the currents of misinformation in the unfolding conflict between two nuclear-armed nations on the brink of hot war.
What I found was alarming; it should terrify the world, not just Indians and Pakistanis. Whether you got your news from outlets based in India or Pakistan during the conflict, you would have struggled to find your way through a miasma of lies. The lies flitted across all media: there was lying on Facebook, Twitter and WhatsApp; there was lying on TV; there were lies from politicians; there were lies from citizens.
Besides outright lies, just about everyone, including many journalists, played fast and loose with facts. Many discussions were tinged with rumor and supposition. Pictures were doctored, doctored pictures were shared and aired, and real pictures were dismissed as doctored. Many of the lies were deliberate: not innocent slip-ups in the fog of war but efforts to discredit the enemy, to boost nationalistic pride, to shame anyone who failed to toe a jingoistic line. The lies fit a pattern, clamoring for war, and on both sides they suggested a society that had slipped the bonds of rationality and succumbed completely to the post-fact order.