While browsing social media, have you ever encountered something that made you just so angry you couldn't stand it? Well, that's a feature, not a bug. As media theorist Douglas Rushkoff states:
“Researchers have found, for example, that the algorithms running social media platforms tend to show pictures of ex-lovers having fun. No, users don’t want to see such images. But, through trial and error, the algorithms have discovered that showing pictures of our exes having fun increases our engagement. We are drawn to click on those pictures and see what our exes are up to, and we’re more likely to do it if we’re jealous they’ve found a new partner. The algorithms don’t know why it works, and they don’t care. They only maximise whatever metric we’ve instructed them to pursue.”
- Douglas Rushkoff, Team Human
It struck me how far we've come from the 1950s. Norbert Wiener's theory of cybernetics - humans using machines to improve humans - has turned itself on its head. We're now using machines to exploit humans in order to refine the machines. We can either program (the insidious "learn to code" meme) or be programmed.
The machines are trying their best to remove us from the human equation. We have metrics, dashboards, bounce rates, pageviews, conversions; some of us talk in this machine language that would've made Dr. Wiener's head spin. We're trying to be like computers, but we can't. A computer has perfect memory, perfect algorithms, perfect recall. It can know you better than you know yourself.
Unfortunately, we cannot go back to a previous time; we can't cut smartphones or the Internet out of our culture, any more than we could remove our livers or lungs. They've transformed our entire society, whether we like it or not.
In French sociologist Jacques Ellul’s view, the bargain we make with new technology is Faustian: whatever problems a new technology solves, it takes something else away at the same time.
I remember a time when I used to (well, still do) blow up innocent pixels in flight simulators and first-person shooters. My dad couldn't understand the unrestrained glee I took in using (simulated) weapons of mass destruction.
I said these disintegrated people weren't "real." Instead, "they" were computer-generated characters programmed to fight me and die trying. We can obviously distinguish between what's real and what's not (though deepfakes and other misinformation might muddy those already murky waters), but when we end up serving machines, we often lose sight of the human element.
Often, clients become folders, deadlines, and metrics. They're no longer people with struggles of their own, reaching out to me for help with something. In this age of PPC, CPC, and "machine" algorithms, I think many of us don't take the time to value a client for who they are.
A computer can't really recognise the value of something unless we tell it to.
We can.
It's hard, but I think it's worth doing.