In terms of total consciousness, there will be a point where digital sentience, not biological sentience, is the dominant holder.

Ray Kurzweil, a technologist and futurist who’s been correct about quite a few of his predictions, often talks about the singularity – the point where humans and machines merge. He’s supposedly an optimist about the future and says that not only is this the most likely scenario, but it will happen within our lifetime.

I will add a bit to the above. Most don’t understand how this is possible and doubt it can be done at all, let alone while they’re alive. My interjection is this: not only do I believe Ray is correct in predicting that by 2029 computers will have human-level intelligence and beyond, but where he puts the approximate date for the human+machine merge at 2045, I’ll go ahead and say it happens much sooner than that. Perhaps Mr. Kurzweil has some alternative reason for calling himself an optimist, as this merging would certainly be anti-human. I also think super-precise dates become harder to predict as we enter this very apparent acceleration in the rate of innovation.

My thoughts on this have changed dramatically since getting a look behind the curtain with folks in Silicon Valley and hearing insights from people exposed to the cutting edge of this thing.

Once artificial intelligence is smarter than humans (we’re basically here), it will continually improve upon itself. A future where computers are evil, robots have gone rogue and AI has enslaved the human race is exactly what’s feared by those who get a glimpse of this technology and can actually understand it. Ray suggests that we’ll benefit greatly by transcending our biological intelligence ceiling – shit, I guess this guy truly embraces Darwinism.

Aldous Huxley wrote Brave New World, published in 1932, which predicted what the turn into the 21st century might be like and was bang on for much of it: the incoming World State model of governing for ‘efficiency’, artificial wombs, ‘conditioning centres’, cloning, etc. He was also right about a lot of shit I can’t really discuss here – go read the book.

Others worth reading: George Orwell (author of Nineteen Eighty-Four), Bill Joy (Sun Microsystems)

So, back to the title: Biological vs. Digital Sentience

Which will prevail? Personally, without getting super controversial here … I think only a very small number of enhanced humans will have the chance to survive. There are a few things to look at here and I’ll break them down.

What I think is most likely to occur: first, AGI will dominate and we’ll most likely have minimal to no control over it. You couldn’t co-exist with an increasingly intelligent force without it finding a way to make you its bitch at some point.

Some smart folks have pointed out that humanity could be a biological bootloader for AI. This doesn’t seem unlikely, as there won’t be much need for human 1.0 once biological limits are reached. After a merge with silicon, the human portion will diminish – and quickly. Any value that was held within us will be extracted. This will essentially eliminate the need for sentience to be held in a biological shell, ending the need for base humans.

-

Additional thoughts:

  • leaving Earth -> colonizing beyond Earth is important. It’s only a matter of time until some event decimates this orbiting speck of dust – and that’s if war, population collapse, AI or some other shit doesn’t remove us from existence first – so populating beyond Earth is a no-brainer. We also face things like the Great Filter, which I’ll discuss in future posts.

This post may be a bit of a stirring perspective.

The good news? Well … potential good news:

  • The probability of us living in base reality right now is like one in billions. How deep does the rabbit hole go? Who in the fuck knows.
  • The bad news: what if our consciousness has been trapped here in this sim?

This is where I’m leaving this. Do what you will with that information.

J.B.