In the previous article, I hypothesized that there can be no free will. The brain is just the accommodator and guide of the thread, and the thread is just a cascade of firing neurons. There is no place left for free will. But that raises the following question: if the brain is ultimately just a cascading signal, can we replace brain parts with implants that calculate and simulate that signal? This article is part of a series, and I would advise you to read the earlier articles before you read this one.
If the long-term memory (LTM) is indeed the keeper of an intricate pattern of synaptic plasticity, then it should be theoretically possible to replace it with a device that can catch and mimic the signal. This device could internally calculate what the result of the thread should be, and then send the signal back. Now, we are talking about something that might be possible in the future, but is nowhere near in sight today. The thread is such a hodgepodge of memory objects, mixed and matched in so many ways, that it would be very hard to decode. But there is a pattern in there, and that pattern is destined to change in a predisposed way when external stimuli are encountered. It might be far too hard for us to decode now, but it is not impossible.
An ethical question
This does open up an ethical question. If it is possible to read and decode the signal that someone sends, and it is possible to send a thread-intrusive signal to the neurons in the brain, then it is possible to force someone to think about, let’s say, a crime that they committed. These thoughts can then be read and decoded. Of course, this could also be used by future dictators to investigate literal thought crime with a literal thought police. So how should we, as a society, deal with the idea that our thoughts might actually be readable, and even forcible? It is a spooky idea that a nefarious agent could almost literally steal your thoughts. And with an intrusive device, someone could also erase your memories. I, sadly, don’t have a solution. It seems to be something that humans in the future might have to deal with.
Back to immortality
Let’s get back to immortality. If the LTM can be replaced by a device, then so can the working memory. This, I have speculated, also just accepts a signal and sends it back in a certain way. It can be decoded, and it can be replaced, at least theoretically. The same seems to be true for almost any part of the brain.
But what about the senses? In my article about memory objects, I wrote about how I like to think that the thread ultimately ends up on the “back end” of the sensory organs, and that it is this that creates our mental senses. This would mean that the dragon flying around a castle that I have been writing about, and that I hope is still in your head, is in fact projected onto the “back end” of your eyeballs somehow. If this is the case, then we should be able to replace the eyeballs with cameras that can also project the signal onto their “back end”. This should then also be replaceable, since it is “just” another form of decoding a signal and then passing on a new signal. I would argue that all sensory organs should be replaceable.
Replacing hormones
I have speculated that the release of hormones influences the thread in some way. Since this process must be based on certain patterns, as it is simply a system that exists in the universe, its effects should also be reproducible, I would argue.
The brain is only cells that send a signal
So, all of this would mean that the brain is just made up of cells that send a signal from A to B. The signal can be decoded, and therefore brain parts can be replaced. The sensory organs can also be replaced. The usage and effect of hormones can be simulated. This would mean that you could replace every part of your brain with a machine. Machines are easily replaceable, so “immortality” would be in reach. This is of course not real immortality, since you cannot escape the heat death of the universe, but a lifespan of trillions and trillions of years would be possible.
But is this really immortality? Let’s explore this with a thought experiment.
The resurrection problem
Suppose that, in a hundred thousand years, the human race has built up a vast collective mind. The knowledge it has gathered is astronomical. These future humans are so good at controlling and monitoring nature that they can recreate history by singling out the exact atoms that were present at a given time. Using some clever future understanding of physics, they can calculate where each atom was at any moment. And now, they want to recreate you. They calculate the positions of your particles right at the time of your death, so that they can prevent your death and let you live in their society. Let’s say that they succeed. How would you experience that? Would you die and then instantly wake up in the future? They are your atoms, and it is your plasticity. Or will you just die, and someone else wakes up who is merely a clone of you?
To answer this question, I think you have to envision two separate realities side by side, where the humans are so advanced that they can compare results between realities. In one, you wake up; in the other, you die and someone else wakes up. Let’s say that the future humans want to test this. In the first reality, they check all of your atoms, and they ask you if you just woke up. You answer affirmatively, since you did just wake up. In the second reality, they also check all of your clone’s atoms, and they ask it if it just woke up. Now, here’s the thing: it will also say that it just woke up. It has your pattern. There is no difference. It will believe that it just woke up. The future humans now compare the realities, and they find that they are the same. There is no difference between you and the “clone”. The clone will not even believe that it is not you. And why would it? In what way is it different? It also woke up. How do we even know that it is a clone?
Resurrecting you from before your death or without the exact particles
What if they resurrect you from a time when you were not dead yet? Let’s say, a year before your death. That would mean that you lived that year until your death. Then you are resurrected, but your memory and body are those of one year before your death. So the new “you” has lost that last year of its memory. Is this now you? There is no continuation of the thread. But ask the new “you” if it is the real you, and it will say yes. And for all intents and purposes, it really is.
This question becomes even more intriguing if the future humans only have a blueprint of you. They cannot reassemble all of your original particles and therefore have to settle for replacements. Ask this recreated you if it is really you, and it will say yes. But the future humans know that no particle is original. You are made up of different atoms than before. And yet, you will act and feel in exactly the same way as the “real” you.
Some people might now recognize the problem of the ship of Theseus. The difference is that the ship of Theseus is about an external object, while the resurrection problem is about you. The ship of Theseus can be explained by humans wanting to categorize things to make them easy to remember; people might therefore disagree on what is sufficient for the ship to still be the ship of Theseus. The resurrection problem is about you. When are you you? If your thinking thread continues in exactly the same way as it did before, will it then be you?
So, if you die and are then resurrected from the moment of death, you can debate whether that is still you. I would argue that since there is no difference between the “real” you and the “clone” you, it really doesn’t matter. They both seem to be you equally. But is this the real you? Let’s explore a little further.
Your thinking is the only thing that must be real
Descartes famously said: cogito ergo sum. I think, therefore I am. Your thinking is all you have that you can be certain of. Your worldview is just averages you picked up from external stimuli with your senses. Descartes reasoned that you cannot even be certain about those stimuli. Did they really happen? Do you even have a brain, or are you thinking in some other way? Is our world just an illusion? Ultimately, we don’t know. The only thing we know is that our thinking must be real.
But the fact that your thinking is real does not mean that your memories are real. You are your memories. Your memories dictate the way you respond. If someone erased your plasticity pattern, you would be gone forever. A new person would then be born in your body. New external stimuli would form new plasticity with different emotional connections. All the memories of what the body had done in its life would be gone. This would no longer be you. If your self is your pattern of the self in combination with your genetic structure, and only the genetic structure remains, then you are gone.
Two yous, both not perfect
So, keeping this in mind, let’s go to another hypothetical. What if the future humans create two yous? One with your original atoms, but from one year before your death, and one made up of different atoms, but from the moment that you died. Both will say that they are the real you. In their minds, they are a continuation of you. Is one more you than the other? The first you has your particles, but there is no continuation of the thread from the moment of death. The other does have that continuation, but doesn’t have your particles. Which is more important? I would say the continuation of the thread. If you and your pattern of the self are the same, then the continuation of the thread is dominant, not the particles.
We can now carefully try to answer these questions. If you are your self, and your self is the pattern in your LTM plus your genetic responses, then any continuation of this will be you. Sadly, this is not the whole answer.
More yous when you are still alive
What if the future humans can time travel? They travel back in time to when you were still alive. They have a blueprint of you and make more yous. Everyone will now agree that there is only one you. The others that have been created by the future humans are clearly clones. This means that it is possible to limit the number of yous to one. Apparently, created humans are never you while your own thread continues on.
The snag here, of course, is that the continuation of the thread is still ongoing in your own head. The clones might have picked up from some moment in time, but since your pattern continued to exist, they could never truly become you.
So, what does it mean to be you? The “real” you might have died last night, and you are the impostor with the “fake” memories. Maybe you will die tonight again and “someone else” will take over. Since you are your memories and therefore have to trust your memories, you cannot really know if these weren’t just created yesterday.
Let’s see if this is different with a robot artificial intelligence (AI).
The robot and the continuation of the thread
Let’s say future people develop an AI that mimics human understanding. It has mimic emotional connections, so that it can make decisions. Like a human child, it has experienced external stimuli and built up mimic plasticity in its mimic “brain”. It can talk and understands things in the same way as a human adult.
They now tell it that they are going to turn it off. They turn it off and remove parts of its mimic brain. They copy the material on these parts to other, similar parts and reconstruct the AI with those parts and the copied material. Effectively, this is the same machine, only with different, but similar, parts. They now turn the AI back on and ask it if it is still itself. How will it respond? I would argue that it will not notice any discrepancies, since the memory was copied directly. So it will answer affirmatively.
What if they do all the things that they did with you and your “clone”? What if they scrap the robot, but keep its memory stored? Later, they build a new robot from different parts, take the memory from storage, and add it to the new parts. They ask the AI if it is really the same robot. Just like the recreated you, it will answer yes. Its memory is still consistent with its past experiences, so why would it not answer affirmatively? From its perspective, it must have been as if it jumped through time: it was switched off and then instantly switched on again. The same would apply if the future humans had found the AI’s old parts and recreated it with those. In both cases, it would affirm that it has remained itself.
But what if they now keep the AI and also create a copy? The copy might believe that it really is the original AI, but seeing the original might change its mind. The original AI will no doubt claim that the other is the impostor. This is because the thread of the original AI didn’t stop, while the impostor’s thread did.
The AI and the human seem to be the same in every way.
Ship of you
The answer to the question of whether you remain you when you are somehow reconstructed seems to rely largely on the continuation of your thread. If someone in the future has a brain defect and replaces a brain part with a machine, we would all agree that this person is still the same person. What if we replace sensory organs? Will that change you into someone else? Will you no longer be you if you had some sort of mechanical eye? I would say no. So, what if we replace all brain parts? Is this still you? I will leave it up to you to decide for yourself, but I would say yes. As long as the thread continues, you will remain you, even when you replace your entire body with machines.
Immortality is probably possible, but it depends on what you see as you
Humans and machines seem to be the same. Humans, however, like to see themselves as special agents in nature. I don’t have a good answer to all the questions I raised, other than that the answer seems to be the same for the human as for our robot AI. But I do think that as long as the continuation of the thread is not broken, the individual, or the AI, will continue to be themselves. If someone copies you, but your thread continues, you will remain you, and the copy will be a clone. If the continuation of your thread gets broken, you can split up into many yous.
Because robot AI and humans will have the same perspective of self, I think that you should be able to transfer yourself into a machine. I think the thread is just a signal that goes from A to B in a loop, so “immortality” should be possible. Theoretically, you should be able to replace brain and body parts indefinitely and stay alive for many, many years, but I will leave it up to you to decide whether that is what it means to be alive and to be you.
In conclusion
Depending on your understanding of what you are, immortality is probably theoretically possible in the future. At least in some way.
On to the next article about pride, shame, and the opium memory object.