AI risk is stupid

argumate:

oliwhail:

argumate:

People seem to think that software spontaneously self-modifies once it gets above a certain level of cleverness. It doesn’t; if you want it to do this, you need to go to a great deal of effort to explicitly program it to do that.

So why worry about it happening? Just don’t do it!

Ah, but what if you wrote a program that could do anything, including rewriting itself? Well, don’t do that either! It’s not difficult to make a program that can’t rewrite itself; that is in fact the default state of nature.
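(To make the point concrete, here is a minimal sketch in Python of what “explicitly programming it to do that” would even look like. Everything here, the `rewrite_self` function, the banner argument, is made up for illustration; the point is that the self-modification step has to be written out deliberately, it never appears on its own.)

```python
# Purely illustrative sketch: a program only "rewrites itself" if a human
# deliberately writes code that opens its own source file and writes new
# contents back. None of these steps happen spontaneously.
import sys

def rewrite_self(new_banner: str) -> None:
    """Overwrite this script's own source, prepending a comment line."""
    path = sys.argv[0]  # path to the currently running script
    with open(path, "r", encoding="utf-8") as f:
        source = f.read()
    with open(path, "w", encoding="utf-8") as f:
        # The explicit self-modification step -- omit it and the program
        # is, by default, incapable of changing itself.
        f.write(f"# {new_banner}\n" + source)

if __name__ == "__main__":
    rewrite_self("modified by rewrite_self()")
```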

The only reason to be obsessed with self-modifying programs is if you think that’s the only way to write software that can ascend to godhood. It isn’t.

Imagine someone writes an intelligent proof assistant, basically a much smarter version of Eurisko, capable of noticing patterns, generalising, suggesting and testing hypotheses, crafting and explaining abstractions.

That would be fucking epic, I really cannot overstate this fact.

And it would have precisely 0% (zero percent, zilch, zip, nada, jack squat) chance of spontaneously deciding to rewrite itself and take over the world.

The biggest AI risk we face to date is people driving off cliffs after being instructed to do so by their GPS; that’s a risk with an actual body count.

No one needs to argue against AI risk, in the same way that no one needs to argue against Russell’s teapot, but I will continue to do so anyway because I like my arguments easy and my fish in barrels.

“Just don’t do it!”? Congratulations, you’ve managed to produce an abstinence-only approach to AI safety.

Do you propose an international task force to police everyone who *might* write a self-modifying program? Because if you don’t enforce it, it only takes one committed group of people ignoring your suggestion to mess everything up for the rest of us.

And there won’t just be one group. @shlevy stated that people already produce self-modifying software. Also, consider that companies and military organizations have strong incentives to have the fastest/best/etc. software, which means leveraging the best software engineers available. As soon as they can write a program better at writing programs than the rest of their employees, they’re incentivized to do so and have it write the next version of itself.

Then we’re back to hoping whoever wrote the first version didn’t read this post, see that there was a “0% (zero percent, zilch, zip, nada, jack squat) chance” of bad shit happening (obligatory “zero and one are not probabilities”), and then program the initial software to maximize something stupid.

And at that point it’s too late to do AI safety research.

You seem to be saying that bad people with a lust for power will be constrained by the actions of MIRI. That’s cute.

That… that is the actual goal of MIRI.
