Why we should be worried about an artificial intelligence explosion

Does “general intelligence” exist?

Would we necessarily be separating intelligence from context when creating AIs?

Does high intelligence even matter that much?

What form would superhuman AI take?

Does civilization grant humanity protection?

Is recursive improvement possible?

Should we expect scientific progress to continue linearly?

Will a lack of low-hanging fruit stop an AI from improving quickly?

Is communication overhead a big problem for an AI?

AI culture vs. human culture

Conclusion

