By Tiago Forte of Forte Labs
4. The more advanced technology becomes, the less it matters
Nearly every vision of the future I’ve come across has one glaring thing in common: it is dominated by visible technology. This kind of unthinking consensus is the first clue that it’s wrong.
In nearly all these stories, technology is something intrusive, physical, monolithic, and self-centered. Cities are represented as vast seas of endless chrome, not a tree in sight. Apartments are covered in plastic panels and bulky appliances, like a 1960s space capsule. Our bodies are intertwined with crudely designed machines, implying that transhumanism will be mostly about humans becoming more machine-like, not the other way around.
But it’s so clear to me that this is not how technology will evolve at all. These are the predictions of a civilization that has just discovered gadgets, one that is deep in the throes of neomania, an obsession with everything new, that can only imagine a future with MORE and BIGGER everything. It’s like a child who assumes adulthood will be characterized primarily by bigger and more expensive toys (although I know adults who do, in fact, still think that).
A much more realistic and compelling vision for what technology will ultimately become is The Force, from Star Wars. Think about it. It has the best interface imaginable: no interface at all. There is no delay between thought and action, no barrier between subject and object. The Jedi Knight never has to wonder if he’s upgraded to the latest version, never has to charge the battery, or remember the wifi password.
The Force is everywhere, and nowhere; it can be used for evil, but has higher potential for good; it is a spiritual force, but eminently practical. The day that I can reach out my hand and, with nothing but my thoughts, make my intentions manifest in the real world, will be the day that technology can be considered grown up. Not before.
What I think will happen is this: technology will slowly disappear. It will fade into the background, vanishing into walls and furniture and clothing, shrinking its form even as it expands its function. It will get quieter and calmer, built with the specific intention of preserving our mindfulness, not just grabbing our attention, as we realize that creative focus is the only thing that can’t be automated. Think ancient Greece instead of Blade Runner — the world will be dominated by ideas, not tools.
In the future, technology as we know it simply won’t be important, because its ultimate purpose is to work itself out of a job — to finally outgrow its need for constant maintenance and troubleshooting and allow us to decide what we really want to use it for. It will cease to be an end in itself, and become instead a means to things that are much more important.
5. Collective consciousness is both our greatest hope, and our greatest fear
The image most people have of science fiction is space opera — massive starships flying through hyperspace, lasers, exploration of alien planets. In short, Star Trek.
But as entertaining and imaginative as space opera can be, it is the stories that explore “inner space” that fascinate me the most. Science fiction has the unique capability of creating external thought experiments to explore inner states. Our minds are not good at abstraction — our thinking is much more revealing when it revolves around a concrete story that actually has some basis in reality (hence the “science” part).
Let me give you an example.
There is something very curious I’ve noticed: in story after story, the ultimate destiny of mankind is some form of collective consciousness. Whether it’s the Gaia planetary superorganism in Foundation’s Edge, or the nanodrug-enabled communion of Nexus, there is something very utopian and exalted about the idea of joining our minds in shared experience. I was shocked to hear that there’s actually serious research on the possibility of “panpsychism” — the idea that everything in the universe has, or at least has the potential to have, consciousness.
But at the same time, this terrifies us. It is striking how often the alien enemy is some sort of bug-like, collective groupmind. We seem to regard the hive or swarm as the antithesis of everything we represent as humans. In Solaris, a planetary superorganism is terrifying not because of its evil intentions, but because it doesn’t seem to have a centralized consciousness we can understand. In Ender’s Game, worker and soldier bugs are controlled remotely by a queen (in how many movies is the solution to destroy the queen or overmind, causing all the drones to suddenly drop dead?). And of course we all remember the Borg, which is scary precisely because it is made up of once sentient beings, their individuality now subsumed.
For me this tension illustrates one of the central struggles of humanity far more effectively than a million pop psychology books. We crave connection like the air we breathe, and yet vulnerability feels like a nearly existential threat. Study after study tells us exactly what we need to be happy, regardless of time, culture, age, or personality: intimate social relationships. So why is happiness so hard to find? Because relationships involve likely short-term risks and uncertain long-term rewards. Like the characters in the grandest space opera, we have to leave our comfort zone to find fulfillment, even if our "spaceship" is just a desk.
Collective consciousness is both our greatest hope, and our greatest fear. Maybe the hardest part about creating a “human-like” intelligence won’t be that we’re so smart, but that we’re so confusing.
6. Complexity and chaos, not the size of transistors, will be our obstacles
Here’s how I know we’re in a tech bubble: the idea that technology is everything, will solve every problem, and will soon eclipse every area of human endeavor is increasingly the only acceptable opinion. Anything else is met with breathless, shrill protests.
I can’t say I’ve been completely immune. Reading Ray Kurzweil’s The Singularity Is Near was an almost transcendent experience, the modern equivalent of seeing the future through a crystal ball. It’s just that the arguments are so damn compelling, so self-evident, so apparently scientific (they have graphs!). The danger of being left behind seems to be growing, while being ahead of your time is increasingly a badge of honor. The result is that we try to outdo each other with ever-sooner predictions for a given breakthrough (self-driving cars in ten years! No, five!), as if faith in the singularity were the only way to gain admittance.
At the same time, it really bothers me that the only alternative to blind faith in an imminent singularity is fundamentalist mysticism — consciousness as an ineffable mystery, the human mind a black box not subject to the laws of physics. This is exactly how we thought about the universe before Copernicus blew it open for us.
But what does science fiction have to say on the topic? Can it help us imagine plausible alternatives to a smooth, shining path to utopia, without relying on appeals to mysticism?
Here’s just one example of such a scenario:
There is the intriguing possibility that human-level consciousness cannot be simulated, not because it is too mysterious, but because of inherent characteristics of complexity. Our understanding (never mind our management) of complex systems still seems pretty dismal (see Malaysia Airlines flight 370, 2008–2009 financial crisis, and the recent missing-in-action Snowmageddon of 2015).
This is the foundation of chaos theory: that complex systems are not linear; their causes and effects are not like vector graphics that can simply be scaled to whatever size. There are tipping points — critical thresholds of reactivity and amplitude, like hitting a miniature golf ball just imperceptibly harder than your partner, sending it barely over a slope, and into a whole new maze of tunnels and obstacles.
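The miniature-golf analogy above can be made concrete with the textbook example of chaos, the logistic map. This is a minimal sketch, not anything from the original text: the starting values, parameter, and iteration count are my own arbitrary choices, picked only to show how a one-in-a-million difference in initial conditions is amplified into a completely different trajectory.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r * x * (1 - x) at r = 4.0 (the chaotic regime).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # differs by one part in a million

# Early on, the two trajectories are practically indistinguishable...
print(abs(a[1] - b[1]))    # a tiny difference

# ...but small errors roughly double each step, so after a few dozen
# iterations the trajectories bear no resemblance to each other.
print(abs(a[50] - b[50]))
```

The point is not the specific numbers but the shape of the failure: no matter how precisely you measure the starting state, the error eventually swamps the prediction, which is why long-range forecasting of such systems breaks down.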
There was an intriguing idea I remember reading about in a book on chaos theory: that there are complex systems that cannot be modeled. For example, problems that can only be solved by algorithms which run in superpolynomial time, which (very) basically means that the time required to compute them grows faster than any polynomial in the size of the input, making them impractical for all but the smallest cases.
Just imagine if human consciousness happens to be such a problem — a system that cannot be modeled does not benefit from exponential improvements in computation, or recursive self-improvement. Even if we succeed in making computers equivalent in every way to our own brains, in this scenario they would still be forced to run their operations in real time. They would be limited not by the “number of angels dancing on the head of a pin,” but by the very principles of the logic on which they run.
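To make the "impractical to use" claim tangible, here is a hedged back-of-the-envelope sketch. The functions below are hypothetical stand-ins (not from any real algorithm in the text): one models a cubic-time algorithm, the other a brute-force exponential one, and the comparison shows why exponential step counts outrun any conceivable hardware.

```python
# Comparing a polynomial-time step count with an exponential one.
# Both functions are illustrative assumptions, not real algorithms.

def polynomial_steps(n):
    return n ** 3   # e.g. a cubic-time algorithm

def exponential_steps(n):
    return 2 ** n   # e.g. brute-force search over all subsets of n items

for n in (10, 50, 100):
    print(n, polynomial_steps(n), exponential_steps(n))

# At n = 100 the cubic algorithm needs a million steps, while the
# exponential one needs 2**100 (about 1.3e30): more steps than a
# single 1 GHz processor running since the Big Bang could execute.
```

This is the sense in which faster chips don't save you: doubling computing speed buys an exponential algorithm only one more unit of input size, so exponential hardware gains never catch up with exponential problem growth.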
Oh the delicious irony!