I am currently reading: Oceanic by Greg Egan. Again. Yes. Well. It is a collection of short stories; I got part way through then stopped, and now I have started again. There. I am up to the story called ‘Oracle’ (at least, I think. I had to check google. My copy is downstairs). It is cleverly written in that the two main characters are just characters, but as the story progresses it becomes clear they’re actually Alan Turing and (I think) CS Lewis. I liked this, it was clever.
I think everyone should read at least a couple of Greg Egan stories, just because they double as incredibly strong thought-experiment arguments across various fields of rationality. When his short stories are collected together into a book, they read like they’re blasting their way through a whole bunch of superstitions.
In related news:
Something I find really annoying is misuse of the word ‘theory’. Theory has two real meanings. Strictly, it means a very strongly supported idea. Less strictly, it means something you expect to happen but aren’t entirely confident about, because you might have overlooked something.
It annoys me when I read people discussing superstition and they say “well it’s all just theory”. No it’s not ‘just theory’; being theory would be a vast, vast improvement. What you have is a liberal application of “unfounded assertion” working in tandem with “talking out of your arse”. That is not ‘in theory’, and calling it that is an attempt to pass off your opinion as worthy of consideration.
And at this point a lot of people turn this into a different fight. They aren’t willing to take you on in either a logical or empirical playing field. Instead they come out with something like: “I’m entitled to my opinion just as much as you are to yours”, as if that somehow passes for a defence of their crackpot ideas. I just don’t understand the perceived equality here. I have an opinion based on a fair amount of reasoning and you have an opinion based on a fairy tale which appealed to you. Our opinions aren’t both of equal merit here.
This person also said “if you were a real sceptic you’d remain open to the idea of an afterlife”. It’s the old closed-minded accusation: all ideas are equal, even the ones with no backing.
The reasons for my lack of belief in the afterlife can be demonstrated in a simple thought experiment:
Suppose the existence of an advanced artificial intelligence, one indistinguishable from human intelligence. The AI claims to think, it claims to feel. Does it really have feelings? The naive answer is “no of course not, it’s artificial”, but after you think about it for a while, you come up against a problem: the fact that its brain is implemented in different hardware is immaterial to its operation. And human emotions aren’t an absolute part of the universe; they’re just a bunch of chemical releases which make our brain and body respond a bit differently, so software emotions aren’t necessarily any less real. To separate us from it, we must still have something that it hasn’t. What is that?
The only answer is ‘a soul’.
Now consider the primary motivations humans had in the first place for coming up with the idea of a soul, and the primary justification that we really do have one. The answer is: we have the ability to think, to feel… we are aware of our surroundings. We are separate from the inanimate world. We are separate from the rest of the animal kingdom. We are at the top. We are the only things in the known universe who can do the things we do, therefore we must be special somehow.
Despite the mind blowing arrogance of that idea, it doesn’t hold up against our piece of AI. Look at it:
The AI isn’t the same as us. Why?
Because we have a soul.
How do we know we have a soul?
Because we can think, feel emotion, interpret our surroundings; nothing else can do it as well as we can.
So what is it the AI can do again?
It can think, feel emotion, interpret its surroundings.
And how do you know it hasn’t got a soul?
Thus, if such an AI can possibly be created, we lose the motivation for supposing a soul exists. It’s no longer necessary, and to suppose its existence would be like supposing invisible pieces of string which stretch trees upward, instead of accepting that they just grow by biological means. We can’t prove those invisible strings don’t exist, but we have a better explanation which doesn’t assert the existence of anything new.
So my argument hinges on the possibility (not practicality) of creating a real thinking computer. And I fully expect before the end of my lifetime we’ll see some convincing forms of artificial consciousness. I think I’m on a lot safer ground saying we will get there eventually than if you come along and say “nope, not possible!” as if you’re somehow an expert and all those AI researchers are just idiots who should have consulted you.
Now let’s see your argument… “people have believed in an afterlife for quite a long time so there must be something in it”. Oh right. So are these the same people who believed in it before they died, or are they the ones who came back afterwards and told us what the answer was? You’ll have to forgive me if I find the notion that our opinions are both of equal merit to be slightly offensive.
So I have revised my stance. I believe that when I die I will be transported to Valhalla by a bunch of valkyries. And I insist that this is just as plausible as your Abrahamic religions. If you disagree and place any more faith in yours than in mine, you are not being open minded.