Value-Neutral Technology
The CNN headline read: “Forget North Korea, AI will start World War III,” and the ensuing conversation, started by Elon Musk, revealed that many fear the unintended consequences of developing algorithms we may or may not be able to control. Once a new technology is introduced, it can’t be uninvented, as Sam Harris points out in his viral TED talk. He argues that it will be impossible to halt the pace of progress, even if humankind could collectively make such a decision.
While Bill Gates, Stephen Hawking and countless others are broadly on the same page as Musk and Harris, some of the leading thinkers in the industry recognize that AI — like any other technology — is value-neutral. Gunpowder, after all, was first used in fireworks.
Ray Kurzweil, for instance, argues that “AI will be the pivotal technology in achieving [human] progress. We have a moral imperative to realize this promise while controlling the peril.” In his view, humanity has ample time to develop ethical guidelines and regulatory standards. As the world edges toward the singularity, future technology is bound to transform the human experience in some way, and it is up to us to make sure the change is for the better.
The critics and cheerleaders of AI tend to agree on one thing: the explosion of artificial intelligence will change the world beyond recognition. When thinking about the future, I found the metaphor offered by Vernor Vinge on NPR's Invisibilia podcast especially stark: “Making computers part of us, part of our bodies, is going to change our capabilities so much that one day, we will see our current selves as goldfish.” If this is the true extent of our tech-powered evolution, then our contemporary norms and conventions go straight out the window.
Even if these predictions turn out to be duds, shouldn’t we at least attempt to apply the prism of exponential technologies to review our basic assumptions, question the fundamentals of human behavior, and scrutinize our societal organization? AI’s promise could be an apocalypse or eternal bliss, or anything in between. As we speculate about the outcome, we are also making a value judgment. It is here that we ought to recognize our susceptibility to projection bias, which compels us to apply present-day intellectual framing to the future.
By putting war and AI in the same sentence, we anthropomorphize the latter. When we worry about robots and machine intelligence causing mass unemployment, we must recognize that such anxiety is justified only if human labor remains an economic necessity. When we say that out-of-control technological progress will create more inequality, we assume that the ideas of private property, wealth, and money will survive the fourth industrial revolution.
Defining these fundamental terms is an arduous task; questioning them is harder still. But perhaps playing out a couple of scenarios could prove a useful exercise in circumventing projection bias.
Competition & Collaboration
Natural selection is, at its core, a multidimensional competition of traits and behaviors. It manifests itself in a basic competitive instinct that humans are all too familiar with. Evolutionary psychology postulates that the driver of human behavior is the need to perpetuate one’s genes. Homo sapiens, then, evolved competing for mates and fighting for the resources to feed the resulting offspring, all with the singular objective of maximizing the chances that their genes would be passed on.
On the other hand, we are, according to Edward O. Wilson, “one of only two dozen or so animal lines ever to evolve eusociality, the next major level of biological organization above the organismic. There, group members across two or more generations stay together, cooperate, care for the young, and divide labor...” In other words, we might have to attribute the stunning success of our species to the fine balance we’ve maintained between our competitive and cooperative instincts.
Whether or not general machine intelligence is imminent, or even achievable, the idea of a post-scarcity economy is gaining ground. If and when the automation of pretty much everything creates a world in which human labor is redundant, what will be the wider ramifications for our value system and societal organization? When algorithms are better at decision-making than humans, and we surrender much of our autonomy to them, how will our competitive instinct fare?
What will be the point of resource competition in a world of abundance? Is it possible that our instinct to compete will slowly lose its usefulness? Could we evolve to live without it? Unlike ants and bees, which cooperate on the basis of rigid protocols, humans are spectacularly adaptable in how we cooperate. According to Yuval Harari, that adaptability is what ultimately underpinned the rise of sapiens to dominate the Earth. Is it conceivable that the need to compete becomes an atavism as the technological transformations Kurzweil describes begin to materialize?
Economy
How can we be sure that the basic pillars of our economic thinking (private property, ownership, capital, wealth, and so on) will survive post-scarcity? A hundred years from now, will anybody care about labor productivity? How relevant will our policies encouraging employment be when all of humanity is free-riding on the “efforts” of machines? What are we left with when the basics of supply and demand have been shattered?
To a gainfully employed person today, the prospect of indefinite leisure might appear more of a curse than a blessing. Viewed through the lens of natural selection, this sentiment makes sense: in the past, economic contributions from those able to make them would have been preferable to a mass pursuit of idleness. But should we be projecting the same preference into the future? What sounds like decadence and decay to us now may be construed quite differently in a world no longer powered by the economic forces we know today.
The working assumption is that no matter what, someone will have to own the machines and pay for goods and services. Yet property and money are nothing more than social constructs. If ownership is pointless and money is no longer a useful unit of exchange, how will we collectively define status?
The questions are plentiful and the answers few. I, for one, am in no position to offer concrete proposals or defend admittedly speculative arguments. The bottom line is that we are firmly on a path to subvert the forces of evolution, which have been the main drivers of our behavior since the dawn of time. Political and religious dogmas have come and gone, but one basic economic principle has remained: human needs and wants are satisfied through human effort. That fundamental arrangement, however, is clearly threatened by the accelerating pace of technological progress, singularity or no singularity.
The ideas presented here may sound utopian and naïve. In the end, Elon Musk might be right: the invention of AI could spell the end of the human race. It is humanity’s awesome responsibility, therefore, to design proper governance for artificial intelligence, and to think it through before we take the plunge. That said, we must be cognizant of the limits of our understanding and thus make use of our imagination, a distinctly human trait. At least for the time being.