
Algorithms, Attention, and Democracy

How do algorithms direct our attention in problematic ways? What difficulties do new digital media pose for democratic engagement?

Algorithms: Directors of Attention

Cafés, libraries, and public squares serve as social meeting places for people. A group of students might, for example, meet at their neighbourhood café to plan the spring dance. But another group of students might meet at the same café to plan the vandalization of the school cafeteria.

 

Whether the café is used as a meeting place to plan events that benefit or burden the school community, we would typically not think that the café is responsible for what is planned under its roof. That is, we would likely not blame the café owner for the vandalization of the school, expect the owner to have prevented students from meeting there to make their plans, or demand that the owner set rules about what could or could not be discussed in the café.

Photo by Brooke Cagle on Unsplash

​

Are social media platforms like Facebook any different? Is Facebook simply an online version of the neighbourhood café, where people with various interests can gather and organize events? Should Facebook, like the café owner, be free from blame if some people gather online to organize a coup d'état instead of a school fundraiser?

 

Facebook has, in fact, been blamed for serving as a forum for the coordination of the storming of the Capitol in Washington, DC on January 6, 2021 [1, 2]. It has also been implicated in the 2017 genocide against the Rohingya people in Myanmar, where it served as an instrument for spreading hateful content that ultimately incited tragic violence [3, 4]. And it has been criticized for enabling the spread of vaccine misinformation during the COVID-19 pandemic and frustrating global efforts to stop the spread of the virus [5, 6]. (We might also note that many people think that social media has been important for positive social change as well. Twitter’s role in the Arab Spring is one such example [7].)

 

If we think that Facebook is like the neighbourhood café, we might think that the above criticisms are misplaced. But there is also reason to think that Facebook is different from the neighbourhood café. One key difference is that Facebook—like other social media sites, including X (formerly Twitter) and TikTok—uses personalized algorithms that shape how (and with whom) people gather, how information gets distributed, and what information gets shared [8]. These personalized algorithms lie at the heart of the business model that technology companies rely on, which we discussed in Lesson 2. To grab and keep our attention, these algorithms prioritize information for us and deliver whatever we are most likely to engage with. This maximizes profit for technology companies. But it also, some worry, leads to political polarization, extremism, the spread of misinformation, and ultimately to tragic outcomes like the ones listed above.

 

The worry is that Facebook isn’t a neutral meeting ground like the neighbourhood café, where some ill-intentioned plans may happen to be made; it is instead a space structured in ways that contribute to many of the negative attitudes and harmful beliefs underlying those plans, and that make it easier for people to carry them out. (The same goes for other social media sites like X and TikTok.)

 

However, Facebook has largely disputed the allegations against it, claiming instead that individual users, rather than the algorithms, are the primary drivers of polarization, extremism, and the spread of misinformation [9, 10]. Some of the difficulty in arriving at a consensus on the role such algorithms play in these effects lies in a lack of understanding of how exactly the algorithms work: Facebook and other social media platforms keep this information closely guarded [11]. Arguably, that secrecy is itself a moral problem. Given that these algorithms, as we have seen, so deeply influence us, one might argue that we have a right to know how they affect us and hence how they work. All of this could change with the introduction of new regulations (like the European Digital Services Act [12]) calling for greater transparency about how the algorithms employed by technology companies work.

 

Let’s now have a closer look at how online spaces might be different from a neutral meeting ground like the café above.

​


Filter Bubbles and Echo Chambers

Personalized algorithms deliver information that will capture the attention of users. For example, if John likes hiking, John will see ads for hiking equipment. If Nadine is skeptical of vaccines, she will be shown more content that reflects that skepticism. In both cases, the algorithms deliver the information each user is most likely to engage with. John and Nadine will see very different things online, even if they are sitting in the same café together.
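To make this concrete, here is a minimal sketch of engagement-based personalization. It is not Facebook’s actual system (those details are not public); the posts, topics, and scoring rule are invented for illustration. Each post is scored by how often a user has previously engaged with its topic, and the highest-scoring posts are shown first.

```python
from collections import Counter

# Invented example data: each post is tagged with a single topic.
posts = [
    {"id": 1, "topic": "hiking"},
    {"id": 2, "topic": "vaccine-skepticism"},
    {"id": 3, "topic": "local-news"},
    {"id": 4, "topic": "hiking"},
    {"id": 5, "topic": "vaccine-skepticism"},
]

def rank_feed(posts, engagement_history):
    """Order posts by how often the user previously engaged with each topic.

    engagement_history: topics of posts the user has clicked, liked, or shared.
    Familiar topics float to the top; everything else sinks.
    """
    topic_counts = Counter(engagement_history)
    return sorted(posts, key=lambda p: topic_counts[p["topic"]], reverse=True)

# John mostly engages with hiking; Nadine mostly with vaccine-skeptical content.
john_feed = rank_feed(posts, ["hiking", "hiking", "local-news"])
nadine_feed = rank_feed(posts, ["vaccine-skepticism", "vaccine-skepticism"])

print([p["topic"] for p in john_feed])    # hiking posts ranked first
print([p["topic"] for p in nadine_feed])  # vaccine-skeptical posts ranked first
```

Even this toy ranking gives two people sitting in the same café two very different feeds, and every new click feeds back into the next round of ranking.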

Photo by Yitzhak Rodriguez on Unsplash

​

This filtering out of information creates what is known as a filter bubble [13]. According to Eli Pariser (who introduced the term), a filter bubble is "that personal ecosystem of information that's been catered by these algorithms" [14].

 

Some people worry that filter bubbles lead to less diversity of opinions, and this increases polarization and can sometimes fuel extremism [15-17]. When individuals are only exposed to views that they already agree with, so the worry goes, any already existing division in beliefs or values across society will only be reinforced and may be made more extreme. Someone who is already committed to anti-vaccination sentiments, for example, will become more committed to these sentiments when they only encounter views like their own.  

 

People worry that online algorithms also lead to echo chambers. An echo chamber is a “bounded, enclosed media space that has the potential to both magnify the messages delivered within it and insulate them from rebuttal" [18, p.176]. Characteristic of echo chambers is a mistrust of opposing viewpoints and an active attempt to undermine them. As with filter bubbles, a common concern about echo chambers is that they increase polarization and drive people to adopt more extreme positions [19-22].

 

Although the concerns raised about them are similar, filter bubbles and echo chambers are different phenomena and arguably require different solutions [23]. When one is in a filter bubble, one is simply not exposed to views that differ from one's own. Filter bubbles, some claim, can therefore be burst by exposing individuals to a wider diversity of information. Echo chambers are arguably harder to escape, because in an echo chamber there is an active attempt to discredit views that differ from one's own [18, 23]. Echo chambers have been described as operating something like cults [23]: both isolate their members, and outsiders are actively labelled as untrustworthy [18, 23]. Thus, to escape an echo chamber it’s not enough to present different information: trust in that information needs to be repaired.

 

Filter Bubbles, Echo Chambers, and Democracy

There are some interesting philosophical questions that can be raised about how filter bubbles and echo chambers (and the algorithms that drive them) affect democracy.

 

One question is what is wrong with polarization. Let us suppose that social media platforms at least contribute to (if not cause) polarization. What’s bad about that?

 

Many people think that polarization prevents democracy from functioning well [24]. Democracy is characterized by rule by the people. Individuals are taken to be equal citizens with equal power to express their voices on matters of political relevance. On some accounts of democracy, an important part of the process is public deliberation about key issues of societal significance [25-28]. Deliberative approaches to democracy often propose citizens' assemblies, where citizens from all walks of life meet to discuss important issues [28]. This process gives legitimacy to the decisions made: they are the product of public deliberation by those who are subsequently affected by them.

 

However, this kind of engagement becomes difficult when groups are deeply polarized. Individuals may be unwilling to engage with other groups [29, 30], may have trouble relating to one another as equals [24], and, where they do engage, it may be unlikely that they can reach any sort of compromise in their decision-making [31].

 

A second and related question is to what extent the filtering of information by algorithms prevents democracy from fulfilling an important function of collective knowledge-formation [32]. On this view, one of democracy’s roles is to assemble information from a diverse population with varying levels of expertise on various topics relevant to society. Citizens have varied and diverse knowledge about certain policy issues. Democratic procedures help to collect this knowledge in a way that is useful and beneficial for society as a whole. But when individuals become trapped in separate information bubbles, each bubble will remain isolated from the knowledge the other groups could provide. Individuals inside these bubbles may also be vulnerable to manipulation from within, by actors—like propagandists—who wish to distort information for political gain [33].

​

How Widespread Are Filter Bubbles and Echo Chambers?

Despite concerns raised by policymakers, academics, and the general public, there is some disagreement about how widespread filter bubbles and echo chambers are and how harmful their effects are. Some people worry that the attention focused on filter bubbles and echo chambers takes attention away from more pressing matters, like the spread of disinformation online [34] (to which we will turn in the next section).

 

There is evidence that suggests that some of the effects of filter bubbles and echo chambers may be overstated [35, 36]. For example, some research shows that social media users have a more diverse media diet than non-users, which indicates that filter bubbles and echo chambers may not insulate users from information as extensively as some have believed [37]. There is also research suggesting that when people are exposed to opposing views, their own positions become strengthened rather than more moderate [38]. This casts doubt on the claim that a lack of cross-cutting information leads to polarization (or at least on the claim that exposure to other perspectives will reduce polarization); in fact, the opposite may sometimes be true.

 

There is also evidence that suggests echo chambers are not as widespread as many have supposed them to be, with only an estimated 4% of online news users occupying echo chambers [39, 40]. On the other hand, while the proportion of individuals who occupy echo chambers may be relatively small, such individuals are often more politically active than the rest of the population and can have a significant impact on society and, as our opening examples suggest, devastating consequences [41].
 


Reflection Exercises

  1. Suppose we are sitting in a café reading different newspapers (rather than reading the news online), which display different news stories and run different advertisements. Do you think the effects of getting different content offline will be the same as getting it online?
     

  2. There is an abundance of information online. To what extent do you think that personalized algorithms are useful tools to help us navigate this? 
     

  3. How polarized do you feel your society is? To what extent do you think that polarization is caused by isolation from other viewpoints? Is polarization always bad? Why or why not?
     

  4. What do you think are some of the most important features and functions of democracy? To what extent do filter bubbles and echo chambers threaten these?

Hate and Lies

Filter bubbles and echo chambers are concerns about the way that information is delivered to users. Where they exist, they make online spaces where people gather different from neutral meeting places like the café we mentioned at the beginning. But this is not the only way online spaces may differ from places like the café. They may also differ with respect to the specific content that is delivered to those who gather in them.

 

In particular, concerns are often raised that online algorithms tend to spread divisive and inflammatory content. There is, for example, evidence that suggests that hateful content spreads faster online than other content [42]. The worry is that to keep people more engaged, algorithms deliver more and more “attention-grabbing” content. And since people pay more attention to content that generates outrage, algorithms will continue to deliver that content [43].
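The feedback loop described above can be illustrated with a small simulation. This is a toy model with invented engagement probabilities, not a description of any platform's real ranking system: each round, a content type's share of the feed is proportional to the engagement it has accumulated, so even a modest engagement advantage for outrage-provoking material compounds over time.

```python
import random

# Toy model with invented numbers: outrage-provoking posts are engaged with
# more often than neutral ones whenever they are shown.
ENGAGE_PROB = {"outrage": 0.30, "neutral": 0.15}

def simulate_feed(rounds=6, feed_size=20, seed=1):
    """Each round, show content in proportion to accumulated engagement,
    then sample new engagement and feed it back into the next round."""
    rng = random.Random(seed)
    weights = {"outrage": 1.0, "neutral": 1.0}
    for r in range(1, rounds + 1):
        total = sum(weights.values())
        shares = {kind: w / total for kind, w in weights.items()}
        print(f"round {r}: outrage share of feed = {shares['outrage']:.2f}")
        for kind, share in shares.items():
            shown = round(share * feed_size)
            engaged = sum(rng.random() < ENGAGE_PROB[kind] for _ in range(shown))
            weights[kind] += engaged  # engagement feeds back into the ranking

simulate_feed()
```

Running the simulation shows the outrage share of the feed tending to creep upward across rounds, which is the amplification dynamic critics worry about.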

 

A further concern often raised about online algorithms is their role in spreading misinformation [44]. Facebook has faced significant criticism in this regard as well. Research suggests that people are more likely to engage with and share false stories than factual ones [45]. This, some worry, may lead algorithms to favor that kind of content, thus increasing the amount of false information that people are exposed to and potentially influenced by.

 

If false and inflammatory content is favored by the algorithms an online platform uses, then it is importantly different from a neutral meeting place. Its design would seem to be especially welcoming to those who come to spread hate and falsehood, and implicitly encourage a specific type of use for the space it provides.

 

Arguably, holding beliefs that are false is bad. If we think of beliefs as maps of reality, we want to have an accurate map, and this requires holding beliefs that are true. Belief in misinformation can also adversely affect individual choices. A person who believes in a dangerous conspiracy theory may, for example, end up harming themselves or others.

​

Falsehoods and inflammatory content also pose challenges for democratic engagement. As we saw in the case of polarization, when the information we get online is filled with hatred, it becomes difficult to engage with the other side. Moreover, when we each receive different information, or when some of it is false, we lose common ground. It then becomes difficult to democratically engage with one another about things that matter, like climate change, public health, and social justice.

 

Photo by Zoe VandeWater on Unsplash

​

Overcoming Polarization, Extremism, and Misinformation

One potential solution to some of the concerns we have raised thus far is to put restrictions on what information can be shared online. While Facebook has long advocated for freedom of expression, the company has introduced certain policies that attempt to limit the spread of such content. For example, it recently updated its hate speech policy to ban content that denies or distorts the Holocaust [46]. It has also begun flagging misleading information, removing posts, and suspending the accounts of those who spread such information [47, 48]. Governments have also taken action against the spread of misinformation online by introducing legislation to prohibit fake news [49, 50].

 

Critics say this is unjustified censorship and threatens freedom of speech [51]. One of the most well-known proponents of freedom of speech was the 19th-century philosopher John Stuart Mill [52]. Mill thought that the only justifiable restriction on individual freedom – and this included freedom of speech – was to protect other people from direct and serious harm. Speech might be offensive, but so long as it was not harmful, it ought to be permitted. An exception would be speech that incites violence. Mill thought, for example, that yelling “Mark Zuckerberg is a thief” in front of a riled-up crowd may incite violence and could justifiably be prevented, but that expressing the same opinion in writing in a newspaper could not.

​

Defending Freedom of Speech

Photo by Volodymyr Hryshchenko on Unsplash
 

Mill gave two main arguments for why freedom of speech should be widely protected. The first is that the free expression of ideas helps us arrive at true beliefs: the more ideas that are available for us to consider, the more likely we are to arrive at the truth.
 

The second argument appeals to the importance of open dialogue about beliefs and of guarding against dogmatism—that is, taking certain opinions as fact without sufficiently examining them. As Mill says:


However unwillingly a person who has a strong opinion may admit the possibility that his opinion may be false, he ought to be moved by the consideration that however true it may be, if it is not fully, frequently, and fearlessly discussed, it will be held as a dead dogma, not a living truth [52, p.29].

​

This means that even if a person has a strong opinion on a matter, they should realize that they might sometimes be wrong, and that even true opinions will benefit from a lively debate and discussion, since that will give us proper reason to believe them.
 

Mill thought that all ideas, no matter how hateful or obscene, should be allowed to be expressed, unless they caused harm to others. They might be offensive, but generally were not harmful, and could be combatted with different ideas. Indeed, part of the democratic process is the exchange of ideas. And Mill thought that insofar as the people rule, no information that is relevant to making decisions concerning policy should be kept from them.

​

We might wonder whether Mill's argument is outdated now. We are inundated with information. Online algorithms arguably select for hateful and false information and increase polarization. Not everyone is willing to engage in reasoned dialogue. We also (as we have seen) open ourselves up to distraction in the attention economy, and this limits our ability to engage in activities that require focused and sustained attention, like critical reflection and open dialogue about political ideas. Mill thought that the free exchange of ideas would allow the best ideas to win. But in the environment we are in today, there is reason to think this might not be the case. Indeed, there is evidence that suggests that arguments are not effective ways of changing people's opinions [53, 54].

​

These considerations suggest that some regulations concerning speech online may be warranted. Censoring some opinions is one potential resolution, but it is not the only option. There are also milder measures that can be taken that strike a balance between protecting people from harmful misinformation and preserving freedom of speech, like flagging misleading information or designing algorithms in ways that de-prioritize content that may pose a threat to democracy [55, pp.17-18].
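As a rough sketch of what such de-prioritization might look like (a hypothetical illustration, not any platform's actual policy), a ranking function could apply a penalty to content that independent fact-checkers have flagged, rather than removing it outright:

```python
def rank_with_downranking(posts, engagement_score, flagged_ids, penalty=0.2):
    """Rank posts by predicted engagement, but multiply the score of
    fact-checker-flagged posts by a penalty factor instead of removing them.

    posts: list of dicts, each with an "id" key.
    engagement_score: function mapping a post to a predicted engagement score.
    flagged_ids: set of post ids flagged as misleading.
    """
    def adjusted(post):
        score = engagement_score(post)
        return score * penalty if post["id"] in flagged_ids else score

    return sorted(posts, key=adjusted, reverse=True)
```

On this approach the flagged posts remain visible further down the feed, so speech is not removed, but the algorithm no longer amplifies it. Where to set the penalty, and who decides what gets flagged, are exactly the political questions raised below.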

 

Having said that, one might wonder to what extent giving the power to regulate individual expression to private companies like Facebook and X is compatible with democracy. This would give a very small number of individuals a very large amount of power. On the other hand, some argue that social media companies have special duties to regulate their online spaces, to protect citizens’ rights to accurate information and to protect against harm [56]. 

 

If a group of people in a coffee shop engaged in an activity that made it impossible for others to enjoy their coffee (or worse, engaged in activities that actively harmed other coffee-shop goers), we might think the coffee shop owner should restrict that kind of activity. Likewise, we might suggest that social media companies have special obligations to regulate their spaces to protect the rights of others who also occupy those spaces.

​


Reflection Exercises

  1. What counts as harmful speech? Do you think, for example, that denying the Holocaust counts as speech that is harmful or merely offensive?
     

  2. Do you think that it is important to have true beliefs? Why/why not? Do you agree that the more beliefs are expressed, the greater our chances of arriving at true beliefs? What about beliefs for which there is clear and compelling evidence of their falsehood, like the claimed link between vaccines and autism? Does the expression of such beliefs better ensure the attainment of true beliefs?
     

  3. If you think that some speech can be limited online, who should have the power to do that? Do you think that there is something problematic about private companies like Facebook or X being able to determine what we can or cannot say online?
     

  4. What do you think are the biggest challenges that the attention economy poses to democracy? Try to come up with some ways in which digital technology could be put to good use for the sake of democracy.

References

[1] Levine, A.S. (2021 Oct 25). Inside Facebook’s struggle to contain insurrectionists’ posts. Politico. https://www.politico.com/news/2021/10/25/facebook-jan-6-election-claims-516997/.

 

[2] Timberg, C., Dwoskin, E., and Albergotti, R. (2021 Oct 22). How Facebook played a role in the Jan. 6 Capitol riot. The Washington Post. https://www.washingtonpost.com/technology/2021/10/22/jan-6-capitol-riot-facebook/.

 

[3] Mozur, P. (2018 Oct 15). A genocide incited on Facebook, with posts from Myanmar’s military. The New York Times. https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html

 

[4] Stevenson, A. (2018 Nov 6). Facebook admits it was used to incite violence in Myanmar. The New York Times. https://www.nytimes.com/2018/11/06/technology/myanmar-facebook.html

 

[5] Broniatowski, D. A., Simons, J. R., Gu, J., Jamison, A. M., & Abroms, L. C. (2023). The efficacy of Facebook’s vaccine misinformation policies and architecture during the COVID-19 pandemic. Science Advances, 9(37), eadh2132.

 

[6] Milmo, D. (2021 Nov 2). Facebook failing to protect users from Covid misinformation, says monitor. The Guardian. https://www.theguardian.com/technology/2021/nov/02/facebook-failing-to-protect-users-from-covid-misinformation-says-monitor.

 

[7] Wikipedia contributors. (2023, October 25). Social media and the Arab Spring. In Wikipedia, The Free Encyclopedia. Retrieved 09:09, October 26, 2023, from https://en.wikipedia.org/w/index.php?title=Social_media_and_the_Arab_Spring&oldid=1181801041


[8] Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15, 209-227.
 

[9] Bond, S. (2021 April 1). Facebook disputes claims it fuels polarization and extremism. NPR. https://www.npr.org/2021/04/01/983155583/facebook-disputes-claims-it-fuels-political-polarization-and-extremism#:~:text=Jenny%20Kane%2FAP-,Facebook%20is%20stepping%20up%20its%20defenses,its%20algorithms%20favor%20inflammatory%20content.&text=Facebook%20is%20making%20changes%20to,fuels%20extremism%20and%20political%20polarization.

 

[10] Clegg, N. (2021 March 31). You and the algorithm: it takes two to tango. Medium. https://nickclegg.medium.com/you-and-the-algorithm-it-takes-two-to-tango-7722b19aa1c2
 

[11] Krass, P. (n.d.). Transparency: the first step to fixing social media. MIT Initiative on the Digital Economy. https://ide.mit.edu/insights/transparency-the-first-step-to-fixing-social-media/

 

[12] European Commission. The Digital Services Act package. https://www.europarl.europa.eu/news/en/headlines/society/20211209STO19124/eu-digital-markets-act-and-digital-services-act-explained?&at_campaign=20234-Digital&at_medium=Google_Ads&at_platform=Search&at_creation=RSA&at_goal=TR_G&at_audience=digital%20services%20act&at_topic=DMA_DSA&at_location=DE&gclid=Cj0KCQjw7JOpBhCfARIsAL3bobfQRaZqupRV5BxbcKcGjBltr1BYXb1W58ktnXtgQxdvXv5XuAoKFAUaAg4uEALw_wcB
 

[13] Pariser, E. (2011). The filter bubble: what the internet is hiding from you. London: Penguin.
 

[14] Parramore, L. (2010 October 10). "The filter bubble". The Atlantic.  https://www.theatlantic.com/daily-dish/archive/2010/10/the-filter-bubble/181427/.

 

[15] Sunstein, C. R. (1999). The law of group polarization. University of Chicago Law School, John M. Olin Law & Economics Working Paper, (91).

 

[16] Sunstein, C. R. (2001). Republic.com. Princeton University Press.

 

[17] Sunstein, C. (2018). #Republic: Divided democracy in the age of social media. Princeton University Press.
 

[18] Jamieson, K. H., & Cappella, J. N. (2008). Echo chamber: Rush Limbaugh and the conservative media establishment. Oxford University Press.

​

[19] Sunstein, C. R. (2009). Going to extremes: How like minds unite and divide. Oxford University Press.

 

[20] Wu, K.J. (2019 March 28). Radical ideas spread through social media. Are algorithms to blame? PBS. https://www.pbs.org/wgbh/nova/article/radical-ideas-social-media-algorithms/

 

[21] Stevens, T. and Neumann, P.R. (2009). Countering online radicalisation: A strategy for action. https://icsr.info/wp-content/uploads/2010/03/ICSR-Report-The-Challenge-of-Online-Radicalisation-A-Strategy-for-Action.pdf

 

[22] Von Behr, I., et al. (2013). “Radicalisation in the Digital Era.” RAND Europe. https://www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR453/RAND_RR453.pdf


[23] Nguyen, C. T. (2020). Echo chambers and epistemic bubbles. Episteme, 17(2), 141-161.

 

[24] Talisse, R. B. (2021). Sustaining democracy: what we owe to the other side. Oxford University Press.

 

[25] Elster, J. (Ed.). (1998). Deliberative democracy (Vol. 1). Cambridge University Press.
 

[26] Habermas, J. (2015). Between facts and norms: Contributions to a discourse theory of law and democracy. John Wiley & Sons.

 

[27] Cohen, J. (1996). “Procedure and substance in deliberative democracy”, in Democracy and Difference: Contesting the Boundaries of the Political, Seyla Benhabib (ed.), Princeton: Princeton University Press, 95–119.

 

[28] Landemore, H. (2020). Open democracy: Reinventing popular rule for the twenty-first century. Princeton University Press.

 

[29] Frimer, J. A., Skitka, L. J., & Motyl, M. (2017). Liberals and conservatives are similarly motivated to avoid exposure to one another’s opinions. Journal of Experimental Social Psychology, 72, 1–12.

 

[30] Mason, L. (2018). Uncivil agreement: How politics became our identity. University of Chicago Press.

​

[31] Pew Research Center. (2014 June 12). Report: Political polarization in the American public. Section 4: Political compromise and divisive policy debates. https://www.pewresearch.org/politics/2014/06/12/section-4-political-compromise-and-divisive-policy-debates/.

​

[32] Anderson, E. (2006). The epistemology of democracy. Episteme, 3(1-2), 8-22.
 

[33] Anderson, E. (2021). Epistemic bubbles and authoritarian politics. Political epistemology, 11-30.
 

[34] Garrett, R. K. (2017). The “echo chamber” distraction: Disinformation campaigns are the problem, not audience fragmentation. Journal of Applied Research in Memory and Cognition, 6(4), 370–376. https://doi.org/10.1016/j.jarmac.2017.09.011.

 

[35] Whittaker, J., Looney, S., Reed, A., & Votta, F. (2021). Recommender systems and the amplification of extremist content. Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1565.

​

[36] O'Hara, K., & Stevens, D. (2015). Echo chambers and online radicalism: Assessing the Internet's complicity in violent extremism. Policy & Internet, 7(4), 401-422.
 

[37] Dubois, E., & Blank, G. (2018). The echo chamber is overstated: the moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729-745.
 

[38] Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. F., ... & Volfovsky, A. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37), 9216-9221.
 

[39] Muise, D., Hosseinmardi, H., Howland, B., Mobius, M., Rothschild, D., & Watts, D. J. (2022). Quantifying partisan news diets in Web and TV audiences. Science Advances, 8(28), eabn0083.

 

[40] Fletcher, R., Robertson, C. T., & Nielsen, R. K. (2021). How many people live in politically partisan online news echo chambers in different countries? Journal of Quantitative Description: Digital Media. https://doi.org/10.51685/jqd.2021.020.
 

[41] Benson,T. (2023 Jan 20). The small but mighty danger of echo chamber extremism. Wired. https://www.wired.com/story/media-echo-chamber-extremism/
 

[42] Bellovary, A. K., Young, N. A., & Goldenberg, A. (2021). Left-and right-leaning news organizations use negative emotional content and elicit user engagement similarly. Affective Science, 2, 391-396.
 

[43] Rose-Stockwell, T. (2023). Outrage machine: How tech amplifies discontent, disrupts democracy—And what we can do about it. Legacy Lit.
 

[44] Brown, É. (2021). Regulating the spread of online misinformation. In M. Hannon & J. de Ridder (Eds.), The Routledge handbook of political epistemology (pp. 214-225). Routledge.
 

[45] Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
 

[46] Reuters. (2020 Oct 12). Facebook to ban content that denies or distorts the Holocaust. The Guardian.  https://www.theguardian.com/technology/2020/oct/12/facebook-to-ban-content-that-denies-or-distorts-the-holocaust
 

[47] Mosseri, A. (2017 April 7). Working to stop misinformation and false news. Meta. https://www.facebook.com/formedia/blog/working-to-stop-misinformation-and-false-news

 

[48] Holzberg, M. (2021 Mar 22). Facebook banned 1.3 billion accounts over three months to combat “fake” and “harmful” content. Forbes. https://www.forbes.com/sites/melissaholzberg/2021/03/22/facebook-banned-13-billion-accounts-over-three-months-to-combat-fake-and-harmful-content/?sh=a80b00a52150

 

[49] International Press Institute. (2020 Oct 3). Rush to pass ‘fake news’ laws during Covid-19 intensifying global media freedom challenges. https://ipi.media/rush-to-pass-fake-news-laws-during-covid-19-intensifying-global-media-freedom-challenges/

 

[50] POFMA Office. Singapore Government. (n.d.). Protection from online falsehoods and manipulation act. https://www.pofmaoffice.gov.sg/regulations/protection-from-online-falsehoods-and-manipulation-act/
 

[51] Pomeranz, J. L., & Schwid, A. R. (2021). Governmental actions to address COVID-19 misinformation. Journal of Public Health Policy, 42, 201-210.

 

[52] Mill, J. S. (2002 [1859]). On Liberty. Dover Publications.

​

[53] Gordon-Smith, E. (2019). Stop being reasonable: how we really change our minds. PublicAffairs.

​

[54] McIntyre, L. (2021). How to talk to a science denier: conversations with flat earthers, climate deniers, and others who defy reason. MIT Press.

​

[55] Sunstein, C. R. (2021). Liars: Falsehoods and free speech in an age of deception. Oxford University Press.
 

[56] Smith, L., & Niker, F. (2021). What social media facilitates, social media should regulate: Duties in the new public sphere. The Political Quarterly, 92(4), 613-620.
