Discussion Forums

Is SETI Downloading an AI an X-Risk?

12 replies [Last post]
sigblips
sigblips's picture
Offline
Joined: 2010-04-20
Posts: 732

This interesting viewpoint argues that passive SETI is a dangerous extinction risk for humanity. The idea is that we will be tricked into downloading and building a malicious artificial intelligence (AI) that will take over the Earth and turn it into a giant spambot. It seems like a crazy idea until you think about the evolutionary aspects. What exactly does survival of the fittest mean in interstellar terms? Fascinating idea.

http://ieet.org/index.php/IEET/more/turchin2010/

Make sure you watch the video.

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
Nice one. Seth Shostak's

Nice one. Seth Shostak's classic response to propositions like this would be something along the lines of: whoever operates the telescope is too smart to be fooled into a situation like that.

But again, this assumes that data from the telescope is withheld and only seen by professionals - what if the virus were apparent from the data available on the setiQuest website? If a private individual found a message in the setiQuest data promising him immortality, would he be able to resist? Phishing and Nigeria scams suggest otherwise.

AvalonHaze
Offline
Joined: 2010-06-15
Posts: 6
I think that there is an

I think that there is an argument that an extraterrestrial civilisation could have less than honest intentions in its dealings with other species. However, I have always felt that this particular version of Armageddon lacks plausibility. Given our own paranoid history in dealing with one another, I find it highly unlikely that we would submit to building something that we didn’t understand. Also any blueprints that they hand to us would no doubt take considerable resources to bring to fruition. Therefore, I doubt any lone individual or rogue nation would be up to the task of building this technical marvel.

sigblips
sigblips's picture
Offline
Joined: 2010-04-20
Posts: 732
Re: building something we

Re: building something we didn’t understand.

Have you seen the movie Contact?

AvalonHaze
Offline
Joined: 2010-06-15
Posts: 6
I have, but it is just a

I have, but it is just a movie.

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
The greatest inventions are

The greatest inventions are all based on simple principles - just take the wheel.

Besides, we are talking about AI. What if AI evolves naturally from genetic algorithms with certain parameters we just don't know yet? Any computer science student can implement a genetic algorithm, and it can be impossible to see exactly how such an algorithm will evolve.
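To make that point concrete, here is a minimal sketch (my own illustration, not anything from a message) of the kind of genetic algorithm a student could write in an afternoon - tournament selection, one-point crossover, bit-flip mutation. The loop itself is trivially simple; the point is that nothing in the code tells you in advance what the evolved individuals will look like:

```python
import random

def evolve(pop_size=50, genome_len=20, generations=100,
           mutation_rate=0.02, seed=0):
    """Minimal genetic algorithm evolving bitstrings toward all ones."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # toy objective: count of 1-bits

    # Random initial population of bitstrings
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]

    def select():
        # Tournament selection: fitter of two random individuals
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with small per-bit probability
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

best = evolve()
print(sum(best), "of 20 bits set in the best individual")
```

Swap the toy fitness function for something open-ended and the same dozen lines can produce behaviour no one designed or predicted - which is exactly the worry.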

Now imagine the student works for the government and the source code came from the extraterrestrials with the message "very powerful weapon". Sure, they would know that there are risks. But what if Russia and China got the message too? How about Iran and North Korea? Now can you afford not to run the risk?

Dave Robinson
Dave Robinson's picture
Offline
Joined: 2010-04-29
Posts: 196
Have you read the story 'A

Have you read the story 'A for Andromeda' by the famous cosmologist Fred Hoyle? They also made a TV series of it when I was a youngster, pretty scary stuff.

Regards

Dave Robinson

sigblips
sigblips's picture
Offline
Joined: 2010-04-20
Posts: 732
They remade "A for Andromeda"

They remade "A for Andromeda" as a movie in 2006:

http://www.imdb.com/title/tt0770442/

The reviews were pretty bad, so maybe I'm lucky that I can't find it available for rental?

AvalonHaze
Offline
Joined: 2010-06-15
Posts: 6
I do appreciate what you’re

I do appreciate what you’re saying here, but despite all the harsh rhetoric these countries throw around I don’t really think any of them truly want to end the world. The outcome of the Cold War has already kind of proven that conflict between the world powers is no longer plausible. War is a game of increasingly diminishing returns for both the aggressor and any other parties. Also assuming that no new hardware is necessary, this evolved AI is still going to be limited by our hardware.

Having said all that, I can imagine an environment analogous to the discovery of the atomic bomb. I suppose then you would have a situation where countries are worried about what their neighbours have that they don’t. So yeah while I am sort of going in circles here I could see a situation where this information could cause harm (although perhaps not deliberately).

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
I do appreciate what you’re

I do appreciate what you’re saying here, but despite all the harsh rhetoric these countries throw around I don’t really think any of them truly want to end the world. The outcome of the Cold War has already kind of proven that conflict between the world powers is no longer plausible. War is a game of increasingly diminishing returns for both the aggressor and any other parties. Also assuming that no new hardware is necessary, this evolved AI is still going to be limited by our hardware.

The AI is limited by our hardware, but today - what are the limits of our hardware? Besides, a Turing-level AI will be able to convince humans to do its bidding where its own capabilities fail. "Hi. I'm a friendly human being. Please construct this machine for me. I will pay you this huge sum of money which I have obtained by hacking into a bank." Which Russian mobster could resist?

sigblips
sigblips's picture
Offline
Joined: 2010-04-20
Posts: 732
Arthur C. Clarke said: "Any

Arthur C. Clarke said:

"Any sufficiently advanced technology is indistinguishable from magic."

We have no idea how to build an AI today. Would we really be smart enough to spot one if the blueprints were right in front of us? And if we were, would we build it anyway out of curiosity?

sigblips
sigblips's picture
Offline
Joined: 2010-04-20
Posts: 732
Here is a recent article that

Here is a recent article that is recycling the same idea:

Is SETI at risk of downloading a malicious virus from outer space?
http://io9.com/5921814/is-seti-at-risk-of-downloading-a-malicious-virus-...

If you're an Artificial Intelligence (AI) and you want to colonize the Universe, then this is probably the fastest and most energy-efficient way to do so. That's ignoring the possibility of warp drives, wormholes, and other faster-than-light physics, of course.

Would humanity fall for this trick? Most definitely. Think about it. SETI discovers a signal from a distant star. A massive amount of funding is going to appear to study this signal. Even if we knew the signal contained code for an AI, curiosity would get the best of us and we would run the code just to see what it did. Humanity's arrogance would let us believe we could contain an AI that is orders of magnitude more intelligent than us. We'd be cautious, running it on an isolated machine in a concrete bunker someplace ... We just wouldn't be able to resist.

[Update: Wow! After posting this I re-read the entire 2-year-old thread above. I didn't realize it, but I basically just wrote a recycled compilation of the above arguments from memory. That is freaky. But what have I learned in the past 2 years? Hmmm.]

sigblips
sigblips's picture
Offline
Joined: 2010-04-20
Posts: 732
The SETI Institute's Big

The SETI Institute's Big Picture Science radio show has a 5 minute segment with Dick Carrigan discussing "the possibility of receiving embedded viruses in messages from ET."

http://radio.seti.org/blog/2012/12/big-picture-science-gene-hack-man-dic...

Great interview. Give it a listen.