Can Computers Replace Human Thought?

William Uricchio | Media Scholar
Photo: Ishan Tankha

Think of the Internet as a vast medieval capital. Labyrinthine alleys are lined with shops selling wares from every corner of the known world. Touts, pimps, slave traders, assassins, spies lurk in every corner. You are a young farmer’s son, come to make your fortune in this great city (or, in an example closer to home, an immigrant in Delhi or Mumbai). You meet people who take you in, guide you through the utterly discombobulating experience, and offer you a modicum of stability. They help you negotiate your way around the market, get the things you need to survive, to make some sort of life. You don’t really know these people from Adam. But you trust them, either implicitly (you are a fairly trusting guy), or on the basis of evidence (they have helped you in the past), or because of some shared familiarity (they come from the same part of the world as you); more accurately, because they seem to know you, where you come from, what you desire. But for all you know, they could be spies keeping an eye on an alien, or touts taking your money little by little, or slave traders or assassins or worse. You trust them because you know you are better off with them than alone, because they make your life simpler, less lonely. But you keep an eye open.
This leap of faith is roughly the one we make with the increasing influence of Big Data in our lives. At a time when the kindness of strangers is not something that can be counted upon, it is but natural that we lean upon technology to help us negotiate the virtual and, as the two integrate more and more, the real world. Everybody who uses the Internet in a meaningful way in their daily lives knows by now that their personal data, their usage habits, are being logged somewhere, packaged and sold to someone else. Few know where, or how much. We could find out by painstakingly reading and analysing the privacy agreements we pretend to have read and understood while filling in any Internet form, but these terms and conditions are often longer than entire constitutions. We reason that the potential consequences for our privacy are outweighed by the desire to play Angry Birds sometime in the next year or so, and click the check mark and hit submit.
The underlying reason, of course, why we continue to make this trade-off on an almost daily basis is that this data about us is used to create services that pervade almost every aspect of our existence. In both the developing and developed worlds, there is an online alternative for most everyday tasks. Even our cultural lives are being driven, more than by anything else, by the Internet. The books we read, the films we watch, the music we listen to have always been curated in some way. That screening process is now being delegated from the human eye to mathematical algorithms that map our tastes and use them to recommend something new. With the development of what is called narrative science, entire articles can be generated by some version of Roald Dahl’s Great Automatic Grammatizator. Even the people we let into our lives, or into our countries, are increasingly determined by computer algorithms that analyse past behaviour and personality traits to predict future behaviour. “The cultural practices that help to construct the self, our status and friends, our texts, their meanings and our world,” says William Uricchio, “this is the stuff of reality. These are the very factors where we are seeing the emergence of the algorithmic.” It’s the stuff of every science-fiction dystopia: a condition where the individual is secondary to how technology perceives the individual.
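What does it mean for an algorithm to “map our tastes”? A minimal sketch of user-based collaborative filtering, one common family of techniques behind such recommendations, is below; all names, titles and ratings are hypothetical, and real systems are vastly more elaborate.

```python
# A toy user-based collaborative filter: score books a user has not
# rated by the similarity-weighted ratings of other users.
# All users, titles and ratings here are invented for illustration.
from math import sqrt

# rows: users; values: book -> rating (1-5); unrated books are absent
ratings = {
    "asha":  {"Dune": 5, "Emma": 1},
    "bilal": {"Dune": 4, "Emma": 2, "Kim": 5},
    "chloe": {"Dune": 1, "Emma": 5, "Kim": 2},
}

def cosine(a, b):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    common = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in common)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(user, k=1):
    """Rank unread books by the sum of (similarity x rating) over others."""
    mine = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(mine, theirs)
        for book, r in theirs.items():
            if book not in mine:
                scores[book] = scores.get(book, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("asha"))  # -> ['Kim'], driven mostly by bilal's similar tastes
```

A commercial service scales the same idea to millions of users and adds many refinements, but the principle is the same: infer what you will like from people whose recorded tastes resemble yours.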
The difference between the dystopia and our reality, according to Uricchio, Professor of Comparative Media Studies at MIT, is the difference between technological determinism and social constructivism. He is firmly of the latter school, believing that what technology does is what we make of it. Film, for instance, could have evolved in many different ways, judging by the various debates that raged in the late 19th century (the dominant view at the time saw it as some sort of proto-Skype). Algorithms, he believes, occupy a third space somewhere between the two, with “the purity of mathematics, but all the subjectivity and construction of language and categories”. Until the computer learns to learn by itself and becomes some sort of Skynet, there will always be human intervention. It is this human intervention that Uricchio studies in order to place the rise of the algorithm in the context of modern history.
The reliance on algorithms, with all the messiness that comes with a predictive system, says Uricchio, represents a break from the insistence on precision that has characterised cultural progress over the last 400 years. The elimination of the human critic, teacher, even journalist, means that the reality we encounter through technology may not necessarily be the definitive one; we simply trust that it is the most convenient one for us. That leap of faith might be far more insidious than the one made in the medieval marketplace.