Kurzweil claims that the singularity, the point at which machine intelligence will exceed our own and the lines between machines and ourselves will blur, will happen by 2045. But for the nematode C. elegans, that future is now.
Uploaded to a computer and made to learn new tricks, this nematode precursor of our future has no real way of determining where the biological tip of its tail ends and the digitally encoded version of its head begins. While it is easy enough to encode the nematode and make it do whatever we want based upon its own neurobiology, the questions this raises are legion.
The leap from the nematode C. elegans to us is conceptually massive, but from a practical point of view it looks like a matter of neural system analysis and computing power. If, for argument's sake, we suppose that we do manage to map our own neural system completely, and that we have by then computers capable of handling and manipulating all that information at lightning speed, we could end up sharing C. elegans' predicament: not knowing where our organic body ends and the Matrix version of ourselves begins.
If the nematode appears not to care whether it lives as an organic being or merely as computer code, would we, given the above suppositions, really be any different?
This is where the rabbit holes open up. If C. elegans can be perfectly described in code, and that code can be steered by behavioral controls applied to the digital version of the real thing, the supposition is that perfectly describing a living system in code renders the 'description' indistinguishable from what has been described. The virtual and the real are then separated only by a difference of perspective, and each feels to itself just as real. Even more frightening is the opposite view: neither the real nor the digital could distinguish which is which. Both look and behave in equally valid ways.
The questions raised by this experiment range widely: "Can the mind exist without the body?" "Can consciousness be downloaded?" (as in the recent Netflix series Altered Carbon). "Are we all just our connectome?" "Could we, by the same token, build machines with consciousness?"
These are really big questions whose roots lie in something far smaller and more fundamental: is machine learning the same as human learning? Could we even hope to tell the difference, when the former is so opaque and the latter has yet to be understood well enough to be universally standardized?
At the very core of all this reside two fundamental questions: "Who are we?" followed closely by "How do we improve upon who we are?"
There is an underlying complexity to the answers to these two questions, hidden only by the simplicity of the questions themselves. In truth, the questions should be: "What are your core values?" and "How do you go about achieving them?"
C. elegans, serene in its achievement of the singularity, may not be struggling with any of this, but we really need to if we are to get to the next level.
Drawn from material in: The Sniper Mind: Eliminate Fear, Deal with Uncertainty, and Make Better Decisions