The country’s political parties are spreading propaganda about their opponents to gain votes. It’s working
NEW DELHI—In the days following a suicide bombing against Indian security forces in Kashmir this year, a message began circulating in WhatsApp groups across the country. It claimed that a leader of the Congress Party, the national opposition, had promised a large sum of money to the attacker’s family, and to free other “terrorists” and “stone pelters” from prison, if the state voted for Congress in upcoming parliamentary elections.
The message was posted to dozens of WhatsApp groups that appeared to promote Prime Minister Narendra Modi’s governing Bharatiya Janata Party, and seemed aimed at painting the BJP’s main national challenger as being soft on militancy in Kashmir, which remains contested between India and Pakistan, just as the two countries seemed to be on the brink of war.
The claim, however, was fake. No member of Congress, at either a national or a state level, had made any such statement. Yet delivered in the run-up to the election, and having spread with remarkable speed, that message offered a window into a worsening problem here.
India is facing information wars of an unprecedented nature and scale. Indians are bombarded with fake news and divisive propaganda on a near-constant basis, from sources ranging from television news to global platforms like Facebook and WhatsApp. But unlike in the United States, where the focus has been on foreign-backed misinformation campaigns shaping elections and public discourse, the fake news circulating here isn’t manufactured abroad.
Many of India’s misinformation campaigns are developed and run by political parties with nationwide cyberarmies; they target not only political opponents, but also religious minorities and dissenting individuals, with propaganda rooted in domestic divisions and prejudices. The consequences of such targeted misinformation are extreme, from death threats to actual murders—in the past year, more than two dozen people have been lynched by mobs spurred by nothing more than rumors sent over WhatsApp.
Elections beginning this month will stoke those tensions, and containing fake news will be one of India’s biggest challenges. It won’t be easy.
Disinformation can be defeated by treating the crisis the way we responded to infectious diseases in the past.
Fake news is not a technological or scientific problem with a quick fix. It should be treated as a new kind of public health crisis, in all its social and human complexity. The answer might lie in looking back at how we responded to the epidemics of infectious disease in the 19th and early 20th centuries, which had similar characteristics.
In response to infectious diseases, over a period of more than a century, nations created the public health infrastructure — a combination of public and private institutions that track outbreaks, fund research, develop medicines and provide health services. We need a similar response to tackle disinformation and fake news.
Epidemics taught us that citizen education is the first and most critical step for a solution. Without the widespread knowledge that washing hands with soap can prevent infections, all other interventions would have sunk under the sheer volume of patients. No number of tweaks to the Facebook algorithm, no size of fact-checking teams, no amount of government regulations can have the same impact as a citizen who critically examines the information being circulated.
Public education might seem a soft measure compared with regulation, but informing the people is the best investment to tackle the problem. In the long term, it will be effective because content distribution will be cheaper and the political and commercial incentives to spread lies will only grow.
Deepfakes Are Coming. We Can No Longer Believe What We See.
It will soon be as easy to produce convincing fake video as it is to lie. We need to be prepared.
On June 1, 2019, the Daily Beast published a story exposing the creator of a now infamous fake video that appeared to show House Speaker Nancy Pelosi drunkenly slurring her words. The video was created by taking a genuine clip, slowing it down, and then adjusting the pitch of her voice to disguise the manipulation.
Judging by social media comments, many people initially fell for the fake, believing that Ms. Pelosi really was drunk while speaking to the media. (If that seems an absurd thing to believe, remember Pizzagate; people are happy to believe absurd things about politicians they don’t like.)
The video was made by a private citizen named Shawn Brooks, who seems to have been a freelance political operative producing a wealth of pro-Trump web content. (Mr. Brooks denies creating the video, though according to the Daily Beast, Facebook confirmed he was the first to upload it.) Some commenters quickly suggested that the Daily Beast was wrong to expose Mr. Brooks. After all, they argued, he’s only one person, not a Russian secret agent or a powerful public relations firm; and it feels like “punching down” for a major news organization to turn the spotlight on one rogue amateur. Seth Mandel, an editor at the Washington Examiner, asked, “Isn’t this like the third Daily Beast doxxing for the hell of it?”
It’s a legitimate worry, but it misses an important point. There is good reason for journalists to expose the creators of fake web content, and it’s not just the glee of watching provocateurs squirm. We live in a time when knowing the origin of an internet video is just as important as knowing what it shows.
We’ll have to battle both the disease and the fake news.
When the next pandemic strikes, we’ll be fighting it on two fronts. The first is the one you immediately think about: understanding the disease, researching a cure and inoculating the population. The second is new, and one you might not have thought much about: fighting the deluge of rumors, misinformation and flat-out lies that will appear on the internet.
The second battle will be like the Russian disinformation campaigns during the 2016 presidential election, only with the addition of a deadly health crisis and possibly without a malicious government actor. But while the two problems — misinformation affecting democracy and misinformation affecting public health — will have similar solutions, the latter is much less political. If we work to solve the pandemic disinformation problem, any solutions are likely to also be applicable to the democracy one.
Pandemics are part of our future. They might be like the 1968 Hong Kong flu, which killed a million people, or the 1918 Spanish flu, which killed over 40 million. Yes, modern medicine makes pandemics less likely and less deadly. But global travel and trade, increased population density, decreased wildlife habitats, and increased animal farming to satisfy a growing and more affluent population have made them more likely. Experts agree that it’s not a matter of if — it’s only a matter of when.
When the next pandemic strikes, accurate information will be just as important as effective treatments. We saw this in 2014, when the Nigerian government managed to contain a subcontinentwide Ebola epidemic to just 20 infections and eight fatalities. Part of that success was because of the ways officials communicated health information to all Nigerians, using government-sponsored videos, social media campaigns and international experts. Without that, the death toll in Lagos, a city of 21 million people, would have probably been greater than the 11,000 the rest of the continent experienced.
“When I open my phone, I am swamped by news,” says Matthew Stanley, a driver in Abuja, Nigeria’s capital. He scrolls through WhatsApp, a messaging service, bringing up a slick video forwarded into his church group. In a tone befitting a trailer for a horror film, the narrator falsely claims that Muhammadu Buhari, Nigeria’s Muslim president, is plotting to kill Christians. Mr Stanley squints at the tiny screen. “I think it’s fake news,” he says. “I need to check the source.”
If only everyone were so sceptical. WhatsApp, which has 1.5bn users globally, is especially influential in Africa. It is the most popular social platform in countries such as Nigeria, Ghana, Kenya and South Africa. In the West it is common for people to use multiple platforms such as Facebook and Twitter, but in African countries, where money is tighter and internet connections patchy, WhatsApp is an efficient one-stop shop. The ability to leave audio notes makes it popular among illiterate people. But WhatsApp’s ubiquity also makes it a political tool.
That much is clear from Nigerian presidential and state elections in February and March. As recent research by Nic Cheeseman, Jamie Hitchen, Jonathan Fisher and Idayat Hassan indicates, Nigerians’ use of WhatsApp both reflects and exploits the country’s social structures.
For example, Nigerians belong to much larger WhatsApp groups than Westerners do. A survey by Mr Hitchen and Ms Hassan in Kano, a northern city, found that locals are typically in groups of at least 50 people. These may be made up of school acquaintances, work colleagues or fellow worshippers. The larger the group, the more quickly information can spread. And since these groups often comprise friends and community leaders, recipients are inclined to trust what they read.
How a misleading YouTube video is stoking fears about Shariah law before the federal election
A short, grainy YouTube video circulating on social media purports to show evidence of an imam claiming that if Prime Minister Justin Trudeau is re-elected, he will institute Shariah law, the legal code of Islam, based on the Qur'an.
But the video was taken out of context, according to the man featured in it, and it was created by Sandra Solomon, known for her anti-Islam views.
The video has about 50,000 views on YouTube, a middling amount, but it has been posted on at least three different Facebook groups that are critical of Trudeau. Altogether, the groups have more than 185,000 likes, and posts of the video were shared more than 7,000 times.
The three pages get high engagement in terms of reactions, comments and shares, and they are in some of the most popular groups spreading memes and disinformation online. These groups equal or often exceed many traditional media outlets for engagement on Facebook.
The video itself includes a short section from a speech about Islam delivered by Mufti Aasim Rashid in Kamloops, B.C., in October 2017. It also features a picture of Justin Trudeau praying at a mosque and ends on a clip of Trudeau championing diversity, which is then covered up by a photo illustration of a small child wearing a "Make Canada Great Again" hat.
Computers can generate convincing representations of events that never happened
SUSAN SONTAG understood that photographs are unreliable narrators. “Despite the presumption of veracity that gives all photographs authority, interest, seductiveness,” she wrote, “the work that photographers do is no generic exception to the usually shady commerce between art and truth.” But what if even that presumption of veracity disappeared? Today, the events captured in realistic-looking or -sounding video and audio recordings need never have happened. They can instead be generated automatically, by powerful computers and machine-learning software. The catch-all term for these computational productions is “deepfakes”.
The term first appeared on Reddit, a messaging board, as the username for an account which was producing fake videos of female celebrities having sex. An entire community sprang up around the creation of these videos, writing software tools that let anyone automatically paste one person’s face onto the body of another. Reddit shut the community down, but the technology was out there. Soon it was being applied to political figures and actors. In one uncanny clip Jim Carrey’s face is melded with Jack Nicholson’s in a scene from “The Shining”.
Tools for editing media manually have existed for decades—think Photoshop. The power and peril of deepfakes is that they make fakery cheaper than ever before. Before deepfakes, a powerful computer and a good chunk of a university degree were needed to produce a realistic fake video of someone. Now some photos and an internet connection are all that is required.
Jeffrey Epstein and When to Take Conspiracies Seriously
Sometimes conspiracy theories point toward something worth investigating. A few point toward the truth.
The challenge in thinking about a case like the suspicious suicide of Jeffrey Epstein, the supposed “billionaire” who spent his life acquiring sex slaves and serving as a procurer to the ruling class, can be summed up in two sentences. Most conspiracy theories are false. But often some of the things they’re trying to explain are real.
Conspiracy theories are usually false because the people who come up with them are outsiders to power, trying to impose narrative order on a world they don’t fully understand — which leads them to imagine implausible scenarios and impossible plots, to settle on ideologically convenient villains and assume the absolute worst about their motives, and to imagine an omnicompetence among the corrupt and conniving that doesn’t actually exist.
Or they are false because the people who come up with them are insiders trying to deflect blame for their own failings, by blaming a malign enemy within or an evil-genius rival for problems that their own blunders helped create.
Or they are false because the people pushing them are cynical manipulators and attention-seekers trying to build a following who don’t care a whit about the truth.
For all these reasons serious truth-seekers are predisposed to disbelieve conspiracy theories on principle, and journalists especially are predisposed to quote Richard Hofstadter on the “paranoid style” whenever they encounter one — an instinct only sharpened by the rise of Donald Trump, the cynical conspiracist par excellence.
But this dismissiveness can itself become an intellectual mistake, a way to sneer at speculation while ignoring an underlying reality that deserves attention or investigation. Sometimes that reality is a conspiracy in full, a secret effort to pursue a shared objective or conceal something important from the public. Sometimes it’s a kind of unconscious connivance, in which institutions and actors behave in seemingly concerted ways because of shared assumptions and self-interest. But in either case, an admirable desire to reject bad or wicked theories can lead to a blindness about something important that these theories are trying to explain.
What should we really be worried about when it comes to “deepfakes”? An expert in online manipulation explains.
In the video Op-Ed above, Claire Wardle responds to growing alarm around “deepfakes” — seemingly realistic videos generated by artificial intelligence. First seen on Reddit with pornographic videos doctored to feature the faces of female celebrities, deepfakes were made popular in 2018 by a fake public service announcement featuring former President Barack Obama. Words and faces can now be almost seamlessly superimposed. The result: We can no longer trust our eyes.
In June, the House Intelligence Committee convened a hearing on the threat deepfakes pose to national security. And platforms like Facebook, YouTube and Twitter are contemplating whether, and how, to address this new disinformation format. It’s a conversation gaining urgency in the lead-up to the 2020 election.
Yet deepfakes are no more scary than their predecessors, “shallowfakes,” which use far more accessible editing tools to slow down, speed up, omit or otherwise manipulate context. The real danger of fakes — deep or shallow — is that their very existence creates a world in which almost everything can be dismissed as false.
We think we are sharing facts, but we are really expressing emotions in the outrage factory.
Given how much it’s talked about, tweeted about and worried over, you’d think we’d know a lot about fake news. And in some sense, we do. We know that false stories posing as legitimate journalism have been used to try to sway elections; we know they help spread conspiracy theories; they may even cause false memories. And yet we also know that the term “fake news” has become a trope, so widely used and abused that it no longer serves its original function.
Why is that? And why, given all our supposed knowledge of it, is fake news — the actual phenomenon — still effective? Reflection on our emotions, together with a little help from contemporary philosophy of language and neuroscience, suggests an answer to both questions.
We are often confused about the role that emotion plays in our lives. For one thing, we like to think, with Plato, that reason drives the chariot of our mind and keeps the unruly wild horses of emotion in line. But most people would probably admit that much of the time, Hume was closer to the truth when he said that reason is the slave of the passions. Moreover, we often confuse our feelings with reality itself: Something makes us feel bad, and so we say it is bad.
As a result, our everyday acts of communication can function as vehicles for emotion without our noticing it. This was a point highlighted by mid-20th century philosophers of language often called “expressivists.” Their point was that people sometimes think they are talking about facts when they are really expressing themselves emotionally. The expressivists applied this thought quite widely to all ethical communication about right or wrong, good or bad. But even if we don’t go that far, their insight says something about what is going on when we share or retweet news posts — fake or otherwise — online.
Mawlana Hazar Imam Aga Khan IV: “technologies alone will not save us – the critical variable will always lie in the disposition of human hearts and minds”
Posted by Nimira Dewji
“From the development of written language to the invention of printing, to the development of electronic and digital media – quantitative advances in communication technology have not necessarily produced qualitative progress in mutual understanding.
To be sure, each improvement in communications technology has triggered new waves of political optimism. But sadly, if information can be shared more easily as technology advances, so can misinformation and disinformation. If truth can spread more quickly and more widely, then so can error and falsehood.
Throughout history, the same tools – the printing press, the telegraph, the microphone, the television camera, the cell phone, the internet – that promised to bring us together, have also been used to drive us apart.”
Mawlana Hazar Imam
at the International New York Times Athens Democracy Forum, September 15, 2015
“We have more communication, but we also have more confrontation. Even as we exclaim about growing connectivity we seem to experience greater disconnection….technological advance does not necessarily mean human progress. Sometimes it can mean the reverse.”
Mawlana Hazar Imam
Samuel L. and Elizabeth Jodidi Lecture, Harvard University, November 12, 2015
“In the final analysis, the key to human cooperation and concord has not depended on advances in the technologies of communication, but rather on how human beings go about using – or abusing – their technological tools.”
Mawlana Hazar Imam Aga Khan IV
Stephen Odgen Lecture at Brown University, Providence, USA, March 10, 2014
“It is ironic that a sense of intensified conflict comes at a time of unprecedented breakthroughs in communication technology. At the very time that we talk more and more about global convergence, we also seem to experience more and more social divergence. The lesson it seems to me is that technologies alone will not save us– the critical variable will always be and will always lie in the disposition of human hearts and minds.”
Mawlana Hazar Imam
North-South Prize Ceremony, Lisbon, Portugal, June 12, 2014
Mawlana Hazar Imam addresses the North-South Prize Ceremony in the Senate Hall of the Portuguese Parliament as His Excellency Aníbal Cavaco Silva, President of the Republic of Portugal, and Maria Assunção Esteves, President of the Assembly of the Republic, look on. Photo: AKDN/José Manuel Boavida Caria
“Technologies, after all, are merely instruments – they can be used for good or ill. How we use them will depend – in every age and in every culture – not on what sits on our desktops, but on what is in our heads – and in our hearts.”
Mawlana Hazar Imam
The LaFontaine-Baldwin Lecture, Toronto, Canada, October 15, 2010
Women in public life are increasingly subject to sexual slander. Don’t believe it
As deepfake technology spreads, expect more bogus sex tapes of female politicians
Adulterer, pervert, traitor, murderer. In France in 1793, no woman was more relentlessly slandered than Marie Antoinette. Political pamphlets spread baseless rumours of her depravity. Some drawings showed her with multiple lovers, male and female. Others portrayed her as a harpy, a notoriously disagreeable mythical beast that was half bird-of-prey, half woman. Such mudslinging served a political purpose. The revolutionaries who had overthrown the monarchy wanted to tarnish the former queen’s reputation before they cut off her head.
She was a victim of something ancient and nasty that is becoming worryingly common: sexualised disinformation to undercut women in public life. People have always invented rumours about such women. But three things have changed. Digital technology makes it easy to disseminate libel widely and anonymously. “Deepfake” techniques (manipulating images and video using artificial intelligence) make it cheap and simple to create convincing visual evidence that people have done or said things which they have not. And powerful actors, including governments and ruling parties, have gleefully exploited these new opportunities. A report by researchers at Oxford this year found well-organised disinformation campaigns in 70 countries, up from 48 in 2018 and 28 in 2017.
Consider the case of Rana Ayyub, an Indian journalist who tirelessly reports on corruption, and who wrote a book about the massacre of Muslims in the state of Gujarat when Narendra Modi, now India’s prime minister, was in charge there. For years, critics muttered that she was unpatriotic (because she is a Muslim who criticises the ruling party) and a prostitute (because she is a woman). In April 2018 the abuse intensified. A deepfake sex video, which grafted her face over that of another woman, was published and went viral. Digital mobs threatened to rape or kill her. She was “doxxed”: someone published her home address and phone number online. It is hard to prove who was behind this campaign of intimidation, but its purpose is obvious: to silence her, and any other woman thinking of criticising the mighty.
Similar tactics are used to deter women from running for public office. In the run-up to elections in Iraq last year, two female candidates were humiliated with explicit videos, which they say were faked. One pulled out of the race. The types of image used to degrade women vary from place to place. In Myanmar, where antipathy towards Muslims is widespread, detractors of Aung San Suu Kyi, the country’s de facto leader, circulated a photo manipulated to show her wearing a hijab. By contrast in Iran, an Islamist theocracy, a woman was disqualified from taking the seat she had won when a photo, which she claims is doctored, leaked showing her without one.
High-tech sexual slander has not replaced the old-fashioned sort, which remains rife wherever politicians and their propagandists can get away with it. In Russia, female dissidents are dubbed sexual deviants in pro-Kremlin media. In the Philippines, President Rodrigo Duterte has joked about showing a pornographic video of a female opponent, which she says is a fake, to the pope. In China, mainland-based trolls have spread lewd quotes falsely attributed to Tsai Ing-wen, Taiwan’s first female president. Beijing’s state media say she is “extreme” and “emotional” as a result of being unmarried and childless.
Stamping out the problem altogether will be impossible. Anyone can make a deepfake sex video, or hire someone to do it, for a pittance, and then distribute it anonymously. Politicians will inevitably be targets. Laws against libel or invasion of privacy may deter some abuses, but they are not much use when the perpetrator is unknown. Reputable tech firms will no doubt try to remove the most egregious content, but there will always be other platforms, some of them hosted by regimes that actively sow disinformation in the West.
So the best defence against sexual lies is scepticism. People should assume that videos showing female politicians naked or having sex are probably bogus. Journalists should try harder to expose the peddlers of fake footage, rather than mindlessly linking to it. Some day, one hopes, voters may even decide that it is none of their business what public figures look like under their clothes, or which consenting adults they sleep with.
Are you still reading? Editors frequently use this space to include important contextual information about a news story
Fake news is back in the news again (thanks to Mark Zuckerberg). But did it ever really leave? For some people, legitimate news from traditional media has become unreliable, no longer to be trusted. Is this at all fair?
Keeping the news in a state of good health, in the age of social media, has become more urgent than ever. The way we talk about things, in debates over the defining issues of our time, ends up determining what we do about them. Fake news can be deliberately manipulated by those with vested interests to shape, frame and control public opinion, resulting in problematic action (and inaction) on existential issues such as climate change or human rights.
Many, like Zuckerberg, may not be motivated to see these little words on a page as a major problem. Cynics among us might point out that this is really nothing new, and newsflash, fake news is just a kind of propaganda, which has long lived on the dark side of the printed word. Zuckerberg’s strange reluctance to ban or fact-check certain paid political propaganda that employs the long, global reach of Facebook to intentionally broadcast lies to an unsuspecting public is yet another facet of how powerfully language in the information age can be weaponized by those with the means to do so.
Although the tricks of persuasion may be as old as time, that doesn’t mean we shouldn’t worry. Fake news is sometimes hard to recognize for what it is, constantly evolving to fit seamlessly into the community spaces many of us feel safe and comfortable in, those social places and platforms where we share stories and connect with people we’re inclined to trust: our friends, families, and colleagues (rather than the once widely respected gatekeepers of reliable information, the traditional press).
What is unprecedented is the speed at which massive misinformation, from deliberate propaganda and fake news to trolling to inadvertent misunderstanding, flows around the world like “digital wildfire,” thanks to social media. Hunt Allcott and Matthew Gentzkow’s recent study “Social Media and Fake News in the 2016 Election” noted three things:
“62 percent of US adults get news on social media,”
“the most popular fake news stories were more widely shared on Facebook than the most popular mainstream news stories,” and
“many people who see fake news stories report that they believe them.”
In fact, the World Economic Forum in 2016 considered digital misinformation one of the biggest threats to global society. Researcher Vivian Roese furthermore points out that while traditional media has lost credibility with readers, internet sources of news have, for some reason, actually gained in credibility. This may do lasting damage to public trust in the news, as well as to public understanding of important issues, such as when scientific or political information is repackaged and retold by the media, especially when coupled with our collectively deteriorating ability to interpret information critically and see propaganda for what it is.
“Concocting fake news to attract eyeballs is a habitual trick of America’s New York Times, and this newspaper suffered a crisis of credibility for its fakery,” the Chinese government declared after The Times broke the news this month of government documents detailing the internment of Uighurs, Kazaks and other Muslims in the northwestern region of Xinjiang.
Who would have guessed that history had such a perverse development in store for us? As the historian Timothy Snyder has written in The Times, Adolf Hitler and the Nazis came up with the slogan “Lügenpresse” — translated as “lying press” — in order to discredit independent journalism. Now the tactic has been laundered through an American president, Donald Trump, who adopted the term “fake news” as a candidate and has used it hundreds of times in office.
That is how, barely a generation after the murder of millions of Jews in Nazi death camps, the term “fake news” has come to be deployed so brazenly by another repressive regime to act against another minority, to cover up the existence of prison camps for hundreds of thousands of Muslims.
Mr. Trump surely didn’t intend this. He’s not a strategic or particularly ideological person. He tends to act instead out of personal or political interest and often on impulse, based on what he thinks his core supporters in the country or the cable television studios want from him. When he yanks troops out of Syria or pardons war criminals, it’s safe to assume he’s not thinking about the long-term balance of power in the Middle East or the reputation and morale of the American military. He is maneuvering, as ever, for some perceived immediate political advantage.
So it is with his attacks on the news media. Mr. Trump loves the press. He has catered to it and been nurtured by it since he first began inventing himself as a celebrity in the 1970s. But he has needed a way to explain to his followers why there are so many upsetting revelations about incompetent administration officials, broken campaign promises and Trump family self-dealing. He’s now tweeted out the term “fake news” more than 600 times.
When an American president attacks the independent press, despots rush to imitate his example. Dozens of officials around the world — including leaders of other democracies — have used the term since Mr. Trump legitimized it. Why bother to contend with facts when you can instead just pretend they don’t exist? That’s what the Chinese government did. It simply called the Times report fake, though it was based on the government’s own documents, and declared it “unworthy of refutation.”
Following the same Oval Office script, a senior government official in Burundi trotted out “fake news” to explain why his government was banning the BBC. In Myanmar, where the government is systematically persecuting an ethnic minority, the Rohingya, an official told The Times that the very existence of such a group is “fake news.” The Russian foreign ministry uses the image of a big red “FAKE” stamp on its website to mark news reports that it does not like.
Jordan has introduced a law allowing the government to punish those who publish “false news.” Cameroon has actually jailed journalists for publishing “fake news.” Chad banned social media access nationwide for more than a year, citing “fake news.”
How to survive the internet in 2020. (It’s not going to be easy.)
The new year is here, and online, the forecast calls for several seasons of hell. Tech giants and the media have scarcely figured out all that went wrong during the last presidential election — viral misinformation, state-sponsored propaganda, bots aplenty, all of us cleaved into our own tribal reality bubbles — yet here we go again, headlong into another experiment in digitally mediated democracy.
I’ll be honest with you: I’m terrified. I spend a lot of my time looking for edifying ways of interacting with technology. In the last year, I’ve told you to meditate, to keep a digital journal, to chat with people on the phone and to never tweet. Still, I enter the new decade with a feeling of overwhelming dread. There’s a good chance the internet will help break the world this year, and I’m not confident we have the tools to stop it.
Unless, that is, we are all really careful. As Smokey Bear might say of our smoldering online discourse: Only you can prevent dystopia!
And so: Here are a few tips for improving the digital world in 2020.
It’s likely we all know someone who has unfortunately shared inaccurate information on social media, or on a WhatsApp group.
We are living in a different world compared to just three months ago. Critical parts of our lives have been uprooted and turned upside down, which has led to a further spiral of worry and stress. We want to be helpful, so we tend to share information that comforts and reassures us; however, this doesn’t necessarily mean it’s accurate, and in fact it often contributes to the growing uncertainty.
The spread of false information during the coronavirus outbreak has been rapid, with well-meaning friends and family sharing messages on WhatsApp and Facebook warning of everything from premature government lockdowns to unusual home remedies that claim to beat the virus.
“We’re not just fighting an epidemic; we’re fighting an infodemic,” said Tedros Adhanom Ghebreyesus, Director General of the World Health Organisation. “Fake news spreads faster and more easily than this virus, and is just as dangerous.”
It’s likely we all know someone who has unfortunately shared inaccurate information about the risk of the outbreak; according to research, however, it likely wasn’t intentional.
Studies indicate that 46 percent of Internet-using adults in the UK viewed false or misleading information about the virus in the first week of the country’s lockdown. Furthermore, researchers at King’s College London questioned people about Covid-19 conspiracy theories, such as the false idea that the virus was linked to the rollout of 5G mobile networks. Those who believed these theories were less likely to believe there was a good reason for the lockdown in the UK, potentially increasing the risk to their health if they chose to ignore government instructions.
These forwarded messages may contain useless, incorrect, or even harmful information and advice, which can hamper the public health response and add to social disorder and division.
Even within our own community, the temporary closure of our Jamatkhana spaces has resulted in the appearance of “virtual Jamatkhanas,” the details of which were forwarded onto many others without a second thought. These virtual gatherings are not appropriate, as Jamatkhana may only be established by the Imam-of-the-Time, through his institutions and appointed Mukhi-Kamadias.
Additionally, a recent report from one province in Iran found that hundreds of people had died from drinking industrial-strength methanol, based on a false claim that it could protect them from contracting Covid-19.
Despite the consequences of acting on fake news, the “infodemic” will likely continue. While companies such as Instagram, Facebook, and Twitter have introduced new measures (Facebook recently introduced an Information Centre with a mix of curated information and medical advice), we must be proactive and act rationally, with prudence and sound judgment.
In East Africa in 2016, Mawlana Hazar Imam explained that digital mediums have produced a global flood of voices in the form of websites, blogs, and social media, saying that, “The result is often a wild mix of messages: good information and bad information, superficial impressions, fleeting images, and a good deal of confusion and conflict. And this is true all over the world.”
At a time of intensifying emotions and growing polarisation, this is resulting in a society in which people feel “entitled to their own facts.”
“In such a world, it is absolutely critical – more than ever – that the public should have somewhere to turn for reliable, balanced, objective, and accurate information,” Hazar Imam continued.
In a world filled with a mixture of information, what do these dramatic changes mean for the Jamat worldwide? Here are some research-backed suggestions to combat misinformation:
- Question the source: references have been made to many institutions and experts during this outbreak. Check official mainstream media to see if the story is repeated there. If it was forwarded from a “friend of a friend,” assume it’s a rumour unless proven otherwise.
- Use fact-checking websites: Websites like APFactCheck and FullFact separate out true claims from false ones. While it’s far easier to just forward a WhatsApp message that someone else sent to you, a quick search on fact-checking sites will inform you if it’s been flagged as fake news by more trusted sources.
- Be wary of over-encouragement to share: if a message urges you to forward it on, be cautious; this is how viral messaging works.
- Listen to advice from official institutions: the best places to go for health information about Covid-19 are government health websites and the World Health Organisation website.
- Rely on credible sources: at this time in particular, it is critical that we understand the risks of misinformation and miscommunication, and rely only on credible government and Jamati institutional sources, such as The Ismaili, the official website of the Ismaili community.
If we are mindful of our own online behaviour and think about the factual basis of the news we consume and forward on, we can stem the spread of misinformation while helping one another to decide what to trust for the betterment of our lives.