Why we should be worried about an artificial intelligence explosion

Does “general intelligence” exist?

Would we necessarily be separating intelligence from context when creating AIs?

Does high intelligence even matter that much?

What form would superhuman AI take?

Does civilization grant humanity protection?

Is recursive improvement possible?

Should we expect scientific progress to continue linearly?

Will a lack of low-hanging fruit stop an AI from improving quickly?

Is communication overhead a big problem for an AI?

AI culture vs. human culture

Conclusion
