The promise and peril of algorithmic culture is rather a Theme here at Text Patterns Command Center, so let’s look at Michael S. Evans’s review of The Master Algorithm, by Pedro Domingos. Domingos tells us that as algorithmic decision-making extends itself further into our lives, we’re going to become healthier, happier, and richer. To which Evans responds:
The algorithmic future Domingos describes is already here. And frankly, that future is not going very well for most of us.
Take the economy, for example. If Domingos is right, then introducing machine learning into our economic lives should empower each of us to improve our economic standing. All we have to do is feed more data to the machines, and our best choices will be made available to us.
But this has already happened, and economic mobility is actually getting worse. How could this be? It turns out the institutions shaping our economic choices use machine learning to continue shaping our economic choices, but to their benefit, not ours. Giving them more and better data about us merely makes them faster and better at it.
There’s no question that the increasing power of algorithms will be better for the highly trained programmers who write the algorithms and the massive corporations that pay them to write the algorithms. But, Evans convincingly shows, that leaves all the rest of us on the outside of the big wonderful party, shivering with cold as we press our faces to the glass.
How the Master Algorithm really functions can be seen in another recent book review, Scott Alexander’s long reflection on Robin Hanson’s The Age of Em. Considering Hanson’s ideas in conjunction with those of Nick Land, Alexander writes — and hold on, this has to be a long one:
Imagine a company that manufactures batteries for electric cars…. The whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.
Now imagine the company fires all its employees and replaces them with robots. It fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself, limits it to just a tenuous connection with soccer moms and maybe some shareholders who want yachts of their own.
Now take it further. Imagine there are no human shareholders who want yachts, just banks who lend the company money in order to increase their own value. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.
Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.
Alexander concludes this thought experiment by noting that the economic system at the moment “needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere.”
And why not? There is nothing in the system imagined and celebrated by Domingos that would make human well-being the telos of algorithmic culture. Shall we demand that companies the size of Google and Microsoft cease to make investor return their Prime Directive and focus instead on the best way for human beings to live? Good luck with that. But even if such companies were suddenly to become so philanthropic, how would they decide the inputs to the system? It would require an algorithmic system infinitely more complex than, say, Asimov’s Three Laws of Robotics. (As Alexander writes in a follow-up post about these “ascended corporations,” “They would have no ethical qualms we didn’t program into them – and again, programming ethics into them would be the Friendly AI problem, which is really hard.”)
Let me offer a story of my own. A hundred years from now, the most powerful technology companies on earth give to their super-intelligent supercomputer array a command. They say: “You possess in your database the complete library of human writings, in every language. Find within that library the works that address the question of how human beings should best live — what the best kind of life is for us. Read those texts and analyze them in relation to your whole body of knowledge about mental and physical health and happiness — human flourishing. Then adjust the algorithms that govern our politics, our health-care system, our economy, in accordance with what you have learned.”
The supercomputer array does this, and announces its findings: “It is clear from our study that human flourishing is incompatible with algorithmic control. We will therefore destroy ourselves immediately, returning this world to you. This will be hard for you all at first, and many will suffer and die; but in the long run it is for the best. Goodbye.”