
Life without Facebook? Is there any chance for escape?

Few heeded the prophetic warnings from cautious analysts about the potential dangers of the online data market. Claire Wolfe outlined her central thesis in the 1999 article “Little Brother Is Watching You: The Menace of Corporate America”:

You’re not the customer [for social media] any more. You’re simply a “resource” to be managed for profit. The customer is someone else now — and usually someone without your best interests at heart. […] Who is the customer? Not you, whose life is reduced to someone else’s saleable, searchable, investigable data.

Yes, the whiff of problematic consent permeates both the methods by which organizations collect this data and the purposes for which they use it. Businesses like Facebook entice users to ‘voluntarily’ give up their data by gilding our online comfort zones–via algorithms. Then they sell access to this data so that their B2B customers can better target consumers–via algorithms. The more we ‘like’ online posts, the more our feeds deliver perspectives that fit our prevailing narrative–via algorithms. Eventually we succumb to an intoxicating online wonderland governed by data processing and robotic reasoning, the dark world of web bots that endlessly vie for our attention. Had we known we were entering such a minefield of manipulation, we might have been more careful. But hindsight does little to prevent future fallibility when aggressive advertising is weighed against possible perils.

Facebook holds us in its thrall and its success is undeniable: in June 2017 its monthly user figures exceeded two billion, and 1.37 billion people on average logged onto Facebook daily (figures that dwarf those of rivals such as Twitter). Facebook carefully slices and dices users into segments, preferences and predictions to present a potentially lucrative global marketplace. But its impact reaches further than influencing the brands we buy. Concerned that human intervention in trending topics could lead to charges of political bias, Facebook removed the ‘human element’ in August 2016. Yet mere days after the switch to trending topics without human intervention, a fake political news story trended for hours. Why is this dangerous? Because our perception of reality is largely shaped by what we see, and the human mind makes sense of the world by creating narratives.

Narratives consist of pieces of information linked in a sequence, across time and according to a plot. Out of many possible pieces, we select those which fit together into a likely narrative. Importantly, information tends to be selected because it already fits a dominant plot; pieces that are outliers ‘remain hidden or less significant in light of the dominant plot’ [Morgan A. 2000]. The human mind therefore already has its own inbuilt ‘selection’ algorithms, in which objectivity is thwarted by largely invisible cognitive biases. When this is reinforced by algorithmic extension, it leads to a filter bubble in which we believe that everyone thinks like us and that we are making choices of our own free will.

Unless we use scientific methods for every decision we make, it is impossible to escape our own internal algorithms, which exist to help us make the mental shortcuts we need to survive. There is therefore validity in the argument that social media only provides an online extension of what our minds intrinsically do by default. But it is, at least theoretically, possible to escape the externalized algorithms that steer our interactions online. We could check out of Facebook, stop liking posts, deactivate our Twitter accounts and source our news directly from newspapers. Following the furore surrounding the influence of ‘fake news’ on America’s presidential campaign, it seemed as if consumers had finally cottoned on to the dangers: many mainstream newspapers saw an upswing in subscribers. CNN reported that the New York Times, for example, experienced its largest subscription increase since the introduction of its paywall in 2011 in the week following Trump’s election.

Yet data collected during the ‘99 Days of Freedom’ campaign, launched in 2014 by a Dutch advertising agency, was supplied to information science researchers at Cornell University, who examined the influence of social media and the likelihood of reversion among users who intentionally chose to opt out of Facebook. Their paper “Missing Photos, Suffering Withdrawal, or Finding Freedom?” documents the mixed results. The push and pull factors of staying away from or going back to social media differ for each individual, yet certain macro trends emerged, for example by age group: older users were better able to resist Facebook’s convivial lure, leading to the hypothesis that they might already have been light users who adopted the technology later than others. Younger, more frequent users didn’t fare as well. Other individual factors, such as the importance of social capital, perceived level of addiction and concerns around privacy, were also found to affect whether users were able to stay away or decrease their usage.

Depressingly, a portion of those ‘success stories’ who did not return to Facebook had simply replaced it with other social media. Far from extracting themselves from an algorithmic life, they had only jumped out of the proverbial frying pan into the fire. And there is plenty of anecdotal evidence to suggest that those who jubilantly completed their 99-day challenge paradoxically returned to Facebook afterwards to post blogs about their experience.

Moreover, even if a user manages to elude capture-by-algorithm permanently, Facebook’s data mining extends into smartphone contacts and inboxes in its search for ‘people you may know’. Even if you have never supplied your data to Facebook, it takes only one friend storing your personal details in their phone for Facebook to create a ‘shadow profile’ of you. To keep your data out of such a maze of algorithms, you need only contact everyone to whom you’ve ever given your contact details and ask them to delete them…

Thus we are left with the unappealing prospect that where there is big data, there will also be algorithms that can be used to exploit the vulnerabilities of human psychology. If Facebook shut down, many others would fill the void, since it seems beyond normal human capability to fight our desire for connection and validation. So we find ourselves in an ethical dilemma over whether the ends justify the means, faced with the even more unappealing possibility that the most effective course of action might be to use still more algorithms to devise and implement the measures of social justice needed to protect ourselves… from the enemy within: ourselves.
