I’ve been reading Robin Hanson’s blog, “Overcoming Bias,” on and off for a few years now. Among other topics, he likes to write about the coming Age of Brain Emulations. Basically, he argues (and I tend to agree here) that general AI will not become a reality from first principles: it will be easier, at least at some point, to simply emulate human brains on extremely powerful parallel hardware.
I think that the argument for and against the possibility of general AI is a weaker version of the argument between two people on opposite sides of Newcomb’s paradox. General-AI-by-emulation is a form of perfect cloning: but the more you think about how to perfectly clone someone, the more you realize how much you need to clone from *outside* of that individual. Depending on game-theoretic criteria (i.e., whether the predictor/general-AI mechanism is adversarial or friendly), its behavior will differ accordingly. While some kind of approximation to “intelligence” is viable, at least in certain contexts, the answer depends very much on both the designer’s intent and the limitations and trade-offs imposed by fine-tuning it into a practical device. Just as a perfect oracle is imaginable but not necessarily implementable, I cannot disprove that general AI is impossible, but neither can you prove that it is possible. My short take: it will always remain an error-prone approximation, limited in what it can be and do.
(Along with that assertion, by the way, my answer to the Newcomb paradox is to try to understand whether the oracle is adversarial or friendly. The default answer is “adversarial.” In that case, I’d always choose two boxes, because I am convinced that an adversarial oracle — unless it’s specifically tuned into my brain right before I’m to utter my answer — does not have any special insight into my game, other than estimating the likelihoods of my choices from purely game-theoretic principles that minimize its loss… However, if I determine that the oracle is a friendly one, I’d almost always choose one box before the game commences.)
Here is my longer take: although it’s impossible to prove within my lifetime, Hanson’s brain-emulation (“em”) paradigm is unlikely to materialize any time soon, certainly not in this century, if at all. Even if something like it does happen in a few hundred years or millennia, these emulations will *not* completely displace humans and their cultures. The reasons are twofold.
The first reason behind my thesis was best expressed by T.B. Lee. Basically, Lee’s argument is that computing an em is of the same, or possibly even worse, complexity as weather prediction: we can predict the weather, in some cases a few days ahead. However, the accuracy of a weather prognosis falls sharply as one demands ever finer spatial resolution and longer time horizons; anything past a week or so is close to completely meaningless. Although Hanson has tried to address this objection, I am not convinced: I think it is an extremely hard problem to solve.
The second reason is economic. Even if the above is partially surmounted and we figure out a way to make those ems productively last for a few hours, the point of those devices is to make production as cheap as possible. At some point, however, consumption needs to happen. Short of burning the products, it is difficult to imagine who would be consuming, as the masses will be unemployed and thus just a cost. This goes back to motivation: if no one is driving the devices (i.e., people die out), the devices need to form a culture of their own. That culture will need to be self-evolving and constantly expanding. Since life is an open system, completely new challenges will keep arising that require reaction and self-modification (see Gödel’s First Incompleteness Theorem and “Meta-Halakhah” by Moshe Koppel). Since even superb emulations are mere approximations (fuzzy algorithms) of human biological systems, they will still need to interface with humans and human culture, with these two forming a “vital organ,” a blueprint for the necessary “update stage.” Whether the update stage happens every few decades, every few millennia, or even somehow continuously, in a pipelined fashion, it will still need to happen…
So, no worries on my part: the apocalyptic scenario is unlikely to happen; people are not going to disappear, not even into the Matrix! However, what might happen is that certain people and certain cultures will disappear. The ones who do disappear will likely be, initially, from outside the culture that first comes to possess em technology and, later, from cultures where ems overtake humans completely, more likely due to the latter’s nihilism than to a Skynet-type scenario, thus stalling said cultures (since ems, again, are mere mathematical models, golems).
Basically, because of the nature of em devices, they will clearly shift human mental and physical activity toward more contemplative, abstract, and long-term pursuits. It is quite likely that humans will, by that time, be smarter and live much longer, thanks to our ability to repair cell tissues and organelles, like mitochondria, via genetic engineering, virotherapy, and the direct proliferation of stem cells. Quite possibly, humans in 10,000 years will be able to live longer than 200 years, maybe even thousands of years (I will post on that separately), while staying more youthful in the process. Rigorously imagining the fulfilled human and, more importantly, societal potential that comes with those kinds of outcomes merits its own post.
For now, I conjecture that humans in the next 10,000 years will mostly be mothers, fathers, mathematicians, scientists, engineers, artists, lawmakers, and higher-level executives, as these roles require more concentration and long-term thinking and acting. A typical, expensive, cooperating em, by contrast, will only be needed for one day’s worth of labor, for household needs. There will probably be different gradations of these ems. Ones that run for approximately 6 seconds until decomposition/decoherence (for example, something triggered to pick up vegetables from a counter) will be energetically the cheapest to emulate. There will be ones that run for O(6 minutes) until decoherence, for mundane tasks like cleaning a desk. And there will be ones that run for O(6 hours), for tasks like composing and analyzing a tax-return spreadsheet based on various other documents.
If emulation is as hard as weather prediction, running for O(6 hours) in a coherent fashion might be orders of magnitude more expensive than coherently running for O(6 minutes). Hence, the human (and em, as a bootstrapping-like device!) approach to em computing will probably concentrate on ruthlessly hunting down opportunities to optimize its time, space, and energy requirements. There will likely be a lot of binning of said capabilities, based on variously competing requirements and necessitated compromises.
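To make the scaling intuition concrete, here is a toy sketch. The power-law cost form and the exponent are entirely my own illustrative assumptions (nothing here comes from Hanson or Lee); the point is only that if the cost of staying coherent grows superlinearly in runtime, each 60x jump between the 6-second, 6-minute, and 6-hour bins multiplies cost by far more than 60x:

```python
# Toy cost model for em coherence. The power law and the exponent
# k = 1.5 are hypothetical assumptions chosen purely for illustration.

def em_cost(runtime_seconds: float, base: float = 1.0, k: float = 1.5) -> float:
    """Hypothetical cost of running an em coherently for runtime_seconds.

    Superlinear growth (k > 1) captures the weather-prediction analogy:
    longer coherent runs are disproportionately expensive.
    """
    return base * runtime_seconds ** k

for label, seconds in [("6 seconds", 6), ("6 minutes", 360), ("6 hours", 21600)]:
    print(f"{label:>10}: relative cost {em_cost(seconds) / em_cost(6):,.0f}x")

# With k = 1.5, each 60x jump in runtime multiplies cost by 60**1.5,
# roughly 465x, which is why binning work into the shortest viable
# runtime would pay off.
```

Under these assumed numbers, a 6-hour em costs about 465 times a 6-minute one, which in turn costs about 465 times a 6-second one; the qualitative lesson survives any choice of k > 1.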
As ems become better engineered and cheaper over time, ems that affordably cohere for longer than a few minutes to a few hours might become a reality. When I say “affordably,” I am comparing against the price of a human worker. It is possible that there will be ems that cohere for weeks, if not months. The question then becomes: will they be cheaper to build/initialize and run than comparable human workers? Is human cognition, grounded in human existence, cheaper than the em kind at a given timespan bin? That is, are there any fundamental (practical) computational/physical limits to how cheap a cohering em can get? How much demand will there be for an em that can cohere for a week but is 1000x more expensive than one that can cohere for about 6 hours? Can most human menial and low-level work be compartmentalized into 6-hour bins? These are open questions, and they need to be addressed. For now, based on T.B. Lee’s writeup and the fact that biology has already optimized the operation of cell machinery (including neurons), my bet is that it will be very expensive to run something that emulates a capable human coherently for more than a few hours or days: increasingly, per-cell metabolism, mitosis, hormones, etc., will all need to be modeled in their finest details. Couple that with the ability of humans to live coherently for many decades — and, as noted above, with the help of next-gen medicine, possibly even millennia — and we have our division of labor.
In the unlikely event that ems become a danger, human societies that fail to take control of such beasts might suffer collapse. However, that is not a finality. By the time this happens (if it happens), human cultures will have splintered into many sub-cultures, some of them very spatially removed (potentially even on different planets/worlds). This guarantees that evolution will select for cultures built around physical humans, which are the inherently open, Universe-embedded systems that viability requires. In such viable cultures, ems will still be present, as they could be hugely beneficial for production, which is also what would make those cultures sensible. However, the model of em-human interaction will be akin to a very cooperative peon-aristocracy relationship of yore, as I’ve described above. This way, golems should remain both a sensible and a viable proposition. By the way, in the very unlikely event that the destruction of humans is total, biological life will still always find a way, and this includes intelligent biological civilization, even if it originates on another planet/world. This is the nature of the Universe itself (see my penultimate two links there, if you’re impatient).
The other opportunity for brain emulation might come from interfacing directly with living humans, as a tissue augment, i.e., the cyborg/Borg model. That, however, would require another post to discuss.