Ilana Masad

HotE Review: Wendy Hui Kyong Chun’s “Cultivating Online Spaces: Social Media as Laboratory”


Image by Hugh Dyar IV

It is no secret that many of us, no matter our political affiliation, have come to rely on the connections and communities we find online. It's also no secret, by now, that online communities are ripe for manipulation, that our information is bought and sold, and that issues of privacy and security abound. Dr. Wendy Hui Kyong Chun, Canada 150 Research Chair in New Media at Simon Fraser University's School of Communication, is not one to despair in the face of all this. Her invigorating, energetic lecture at the Sheldon Museum on March 7 was a breath of fresh, hopeful air--though not, by any means, sugarcoated or naive.

Chun began by introducing the importance of big data to the question of our post-truth era, the question this year's Humanities on the Edge lecture series has been structured around. What futures does post-truth put in place, she asked, and how do post-truths construct and constrict the future? It is big data, Chun argues, that makes truth "post." We've been told that big data is changing the world, fueling a second information revolution, and that the sheer enormity of available information has the potential to destroy science and theory. Chun cited "The Rise of Big Data: How It's Changing the Way We Think About the World," in which Kenneth Cukier and Viktor Mayer-Schoenberger write that "A worldview built on the importance of causation is being challenged by a preponderance of correlations. The possession of knowledge, which once meant an understanding of the past, is coming to mean an ability to predict the future."[1]

I. Authenticating Fake News

In the first part of her lecture, Chun introduced the concept of fake news, often defined as "news articles that are intentionally and verifiably false and could mislead readers" [2]. Of course, plenty of fake news is transparently fake, but as Chun pointed out, "many articles that originate on satirical websites could be misunderstood as factual, especially when viewed in isolation on Twitter and Facebook feeds."

Fake news isn't new, and Chun pointed to the mimetic quality of this very statement, which appears in almost every article on the topic, most of them referring back to yellow journalism, hoaxes, and satire. Some would argue that fake news has been around as long as humans have lived in groups where power matters.

And so, Chun asked, how do we respond to fake news? The most common initial response is to expose what's fake and to share that knowledge. The issue, however, is that there will always be a lag, because relatively few fact-checking sites are working against huge networks of disinformation. More important still, the story that debunks reaches only a third of the audience that consumed the fake news item in the first place, and the audience fake news is aimed at is largely different from the one that will consume the debunking.

More recently, AI systems have been developed to try to solve the fake news problem, in the hope that machine learning will work faster than humans can, but Chun pointed out the limitations here. Since most of these systems operate at the level of the keyword, articles containing some combination of the words "Brad," "Angelina," and "divorce" were for a long time flagged as fake. Until, of course, the famous couple's divorce proceedings actually began.
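
To see why keyword-level detection fails this way, here is a minimal sketch in Python - my own illustration, not any real fact-checker's system - in which a headline is flagged as fake whenever it contains every word of a known hoax signature:

```python
# A minimal sketch of keyword-level fake news flagging, in the spirit of
# Chun's example. The signature list and matching rule are illustrative
# assumptions, not any real fact-checker's system.

FAKE_SIGNATURES = [
    {"brad", "angelina", "divorce"},  # long a reliable signal of hoax stories
]

def flag_as_fake(headline: str) -> bool:
    """Flag a headline if it contains every word of any known signature."""
    words = set(headline.lower().split())
    return any(signature <= words for signature in FAKE_SIGNATURES)

# Works while the story is a hoax:
print(flag_as_fake("Brad Angelina divorce shocker"))              # True

# ...and keeps firing once the divorce is real - Chun's point exactly:
print(flag_as_fake("Brad and Angelina file for divorce today"))   # True
```

The rule cannot tell a hoax from a headline about the real event, because nothing in the keywords encodes whether the claim is true.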

Humans can see what AI cannot. For instance, a story about the Boy Scouts accepting both girls and gay boys into their ranks claimed that it was this new liberal policy that caused the group to supply free condoms at large gatherings. An AI wouldn't see the issue: the Boy Scouts have in fact changed their policies around openly gay members and coed scout groups, and the Boy Scouts do provide condoms at global gatherings. The causal relationship between these two facts, however, is incorrect: condoms have been available at global gatherings since the 1990s.

The trouble is that even where we can verify the truth, verification isn't enough. Debunking can backfire, sparking more interest than the initial fake news item drew. Worse, calling something false can further convince people that they're right, because they don't trust the source: anti-vaxxers, for instance, often become more adamant than ever once they're told that the study their convictions rest on was deeply flawed and its results incorrect.

But the issue isn't just how to respond to fake news, which is disseminated through both unofficial and official channels and saturates public dialogue. The question, Chun told us, is why and how people find such news items not only compelling and true but also spreadable. The problem, she argues, isn't individual. It's a problem of clusters.

II. Cultivating Segregation

Cambridge Analytica, a British consulting firm built on data mining, data brokerage, and data analysis, was hired to use its models to influence both the Brexit referendum and the U.S. election in 2016. Chun gave us an example of the way Cambridge Analytica used proxies to determine personality attributes: she asked everyone who liked Lady Gaga to raise their hand and declared them extroverts, while those who raised their hand when asked if they liked philosophy were declared introverts.
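
As a toy rendering of that demonstration, consider the following sketch; the like-to-trait mapping is Chun's in-room example, while the function and data shapes are my illustrative assumptions:

```python
# A toy version of proxy-based trait inference, echoing Chun's
# raise-your-hand demonstration. The like-to-trait mapping is hers;
# everything else is an illustrative assumption.

PROXIES = {
    "Lady Gaga": "extrovert",
    "philosophy": "introvert",
}

def infer_traits(likes: list[str]) -> set[str]:
    """Map a user's page likes onto inferred personality labels."""
    return {PROXIES[like] for like in likes if like in PROXIES}

print(infer_traits(["Lady Gaga", "cooking"]))   # {'extrovert'}
print(infer_traits(["philosophy"]))             # {'introvert'}
```

The crudeness is the point: a proxy doesn't need to be accurate about any individual to be useful for sorting a population.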

But Chun reminded us that blaming or praising Cambridge Analytica for the election results is a cop-out, really, since we can't measure what effect the firm actually had. What was important about its ads was that they were designed as emotional experiences that either agitated or confirmed bias, taking viewers on a journey down a rabbit hole, inducing click-through after click-through, so that by the end the viewer emerged a different person. But the goal of this journey wasn't what we might think: it wasn't meant to change individuals. The goal was to create clusters - echo chambers in which social media users would share the same ads and fake news articles with one another. Cambridge Analytica didn't, Chun reminded us, predict the future. Its analysts were more like meteorologists, collecting data about the past and using models that project the past into the future. This model, Chun argued, is the most disruptive perception of the future we can possibly have, because it shuts down the future's possibilities by trying simply to perpetuate the past.

What's most interesting about these echo chamber clusters is that they weren't created simply according to what people had in common. Rather, these clusters put together people who deviated from a perceived norm in a specific way, thus creating communities of people who all "rebelled" or were similarly "alternative." Indeed, people often feel like their most authentic selves in these spaces of perceived rebellion. I can vouch for this: I was unduly pleased when Spotify's end-of-year statistics told me that in 2018 I'd listened to "non-mainstream artists 111% more than the average Spotify listener--so here's to being different."

The clustering created online, then, isn't about what people like the most, but about what they like that other people don't. Networks, in other words, presume segregation.
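
To make that distinction concrete, here is a minimal sketch - my illustration, not Chun's or any platform's actual method - that weights shared likes by their rarity, in the spirit of TF-IDF, so that one shared obscure taste outweighs several shared mainstream ones:

```python
import math

# A minimal sketch - my illustration, not any platform's actual
# algorithm - of clustering on distinctive rather than popular likes:
# each shared like is weighted by its rarity.

users = {
    "a": {"Top 40", "Lady Gaga", "obscure synthwave"},
    "b": {"Top 40", "obscure synthwave"},
    "c": {"Top 40", "Lady Gaga"},
    "d": {"Top 40", "Lady Gaga"},
}

# Count how many users share each like.
n = len(users)
popularity: dict[str, int] = {}
for likes in users.values():
    for item in likes:
        popularity[item] = popularity.get(item, 0) + 1

# Universal likes get weight ~0; rare likes get high weight.
weight = {item: math.log(n / count) for item, count in popularity.items()}

def similarity(u: str, v: str) -> float:
    """Score overlap by distinctiveness, not sheer number of shared likes."""
    return sum(weight[item] for item in users[u] & users[v])

print(similarity("a", "b"))  # ~0.69: shares the one rare like
print(similarity("a", "c"))  # ~0.29: shares only more popular likes
```

Under this scoring, what binds a cluster is shared deviation from the mainstream, exactly the "rebellious" communities described above.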

Chun shared a 2001 article titled "Birds of a Feather: Homophily in Social Networks," which tells us that "Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship" [3]. But Chun pushed back against this concept, asking what it means to treat socially constructed categories like race or ethnicity as immutable or essential. One example often used to prove homophily is U.S. residential segregation. Yet the term itself was coined in studies of U.S. residential segregation, which has never been random but was purposefully constructed - through city planning, architecture, and other methods - to create the conditions that allow for it. Moreover, the studies around such segregation were biased from the very start, asking only white liberals and non-liberals about their preferred living conditions. In other words, there was nothing random or innate about such homophily: it was constructed, and it is still being perpetuated.

This is true in online spaces as well. YouTube's "alternative influencer network" is an example of this, as scholar Rebecca Lewis showed in her report for Data & Society, where she illustrated "common techniques that these far-right influencers use to make money as they cultivate alternative social identities and use production value to increase their appeal as countercultural social underdogs" [emphasis in the original]. Our fishbowl cultures may not seem dangerous to some of us, but they are. As Chun succinctly put it: "Manipulation isn't the side effect. It's the point."

III. Beyond Verification

In the final portion of her lecture, Chun took on the thorny question of what we mean by authenticity. Social media users aren't stupid or naive, she reminded us, despite how often they're painted as such in exposés about big data's use and influence. Rather, social media users are aware that they're carefully crafting personae.

Authenticity is part of that cultivation, a type of branding that is neither correct nor incorrect. After all, authenticity has always been something literary and performative. As Lionel Trilling posited in Sincerity and Authenticity, the latter emerged as a reaction to the former.

Chun introduced the famous quote from Shakespeare’s Hamlet that reads:

“This above all: to thine own self be true,

And it must follow, as the night the day,

Thou canst not then be false to any man.” [Act I, Scene III]

Sincerity, according to Trilling, is the algorithm that follows this entire quote: one should be true to the self in order to be true to others. Authenticity, however, is based on the more often quoted section, “to thine own self be true,” which is a simpler algorithm that doesn’t face the public. It is an end, not a means.
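
Since Chun casts Trilling's distinction in explicitly algorithmic terms, a toy rendering - entirely mine, and purely illustrative - makes the difference visible: sincerity routes being-true-to-self through a public-facing step, while authenticity stops at the self:

```python
# A toy rendering - entirely my own - of the two "algorithms" as Chun
# glossed them via Trilling. The helper functions are placeholders;
# their content is the philosophical question itself, which code
# cannot settle.

def be_true_to(self_state: str) -> str:
    return self_state

def present_to_others(true_self: str) -> str:
    return f"shown to others: {true_self}"

def sincerity(self_state: str) -> str:
    """Be true to yourself *in order to* be true to others: a means."""
    true_self = be_true_to(self_state)
    return present_to_others(true_self)  # the output faces the public

def authenticity(self_state: str) -> str:
    """Be true to yourself, full stop: an end, with no public-facing step."""
    return be_true_to(self_state)

print(sincerity("my convictions"))     # shown to others: my convictions
print(authenticity("my convictions"))  # my convictions
```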

Chun, however, explained why she disagrees with Trilling. Authenticity is still public-facing: personae become authentic through social interchanges. In the social media world, that means through liking, sharing, and so on, which is how networks like YouTube's alternative influencer space gain credibility and authenticity - by creating a version of themselves as underdogs, as alternative-minded.

“American popular culture is obsessed with authenticity and awash with artificiality,” sociologist Chandra Mukerji wrote [4]. Perhaps, Chun said, but authenticity isn't easily separable from artificiality. After all, the politician deemed most authentic in the 2016 election was a reality TV star, as is one of the world's youngest billionaires, Kylie Jenner. The irony of reality TV, and of authenticity, is that it follows a set of algorithmic rules and formats. Authenticity, constructed as it may be, is valuable. As Chun said, "The imperative to be true to yourself, or more simply be true, makes our data valuable, that is recognizable, across the many media platforms we use."

Chun ended her talk by suggesting a radical change. What if, she asked, instead of using verification to combat fake news, we also tried to understand ourselves as characters in a drama called Big Data? What if we revisited the correlations that appear to predict the future, examined the underlying biases and self-perpetuating paradigms within them, and thus figured out how to explode them? And, most radically, what if we redesigned our social media platforms so that instead of being organized by similarity we were organized by difference? Or what if we used mutual indifference (for instance, the fact that all the people in the lecture hall were sitting on the same chairs without giving them a thought) as grounds for similarity? What if, in other words, we took up the challenge to treat all people as actually human, levelly human, a recognition that belies pattern connection? Once we begin to do that, Chun told us, we might be able to explode the fake news economy, the clickbait economy, and the perpetuation of the past into the post-truth future.
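
As a closing thought experiment - my extrapolation, not a design Chun presented - "organized by difference" could be as simple as inverting the usual recommender, surfacing the account whose likes overlap with yours the least:

```python
# A thought-experiment sketch - my extrapolation, not a design Chun
# presented - of a feed "organized by difference": recommend the user
# whose likes overlap with yours the least.

users = {
    "a": {"Top 40", "Lady Gaga", "obscure synthwave"},
    "b": {"Top 40", "obscure synthwave"},
    "c": {"knitting", "chess"},
}

def jaccard(u: str, v: str) -> float:
    """Standard overlap measure: shared likes over combined likes."""
    return len(users[u] & users[v]) / len(users[u] | users[v])

def recommend_by_difference(u: str) -> str:
    """Invert the usual recommender: surface the least similar user."""
    return min((v for v in users if v != u), key=lambda v: jaccard(u, v))

print(recommend_by_difference("a"))  # 'c': no shared likes at all
```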

Citations:

[1] Cukier, Kenneth, and Viktor Mayer-Schoenberger. “The Rise of Big Data: How It's Changing the Way We Think About the World.” Foreign Affairs, vol. 92, no. 3, 2013, pp. 28–40. JSTOR, www.jstor.org/stable/23526834.

[2] Allcott, Hunt, and Matthew Gentzkow. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives, vol. 31, no. 2, 2017, pp. 211–236. https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211.

[3] McPherson, Miller, Lynn Smith-Lovin, and James M. Cook. “Birds of a Feather: Homophily in Social Networks.” Annual Review of Sociology, vol. 27, 2001, pp. 415–444. http://dx.doi.org/10.1146/annurev.soc.27.1.415.

[4] Mukerji, Chandra. “A Message from the Chair.” Newsletter of the Sociology of Culture Section of the American Sociological Association, vol. 11, no. 3, Spring 2007, p. 12.

#humanitiesontheedge #posttruth #SocialMedia
