Life 3.0 by Max Tegmark

Life 3.0

Life began with “Life 1.0”: organisms whose hardware and software are both fixed by evolution, changing only across generations (e.g., bacteria). Evolution then produced “Life 2.0”: organisms that can rewrite much of their software within their lifetimes, through learning, but have little control over their physical forms (e.g., humans). Artificial intelligence raises the possibility of “Life 3.0”: agents that can redesign both their hardware and their software. Life 3.0 is theoretically capable of undergoing endless cycles of self-improvement, constrained only by the laws of physics.

A recursively self-improving general superintelligence remains a distant prospect. In the meantime, specialized AIs are mastering an ever-growing set of intellectual tasks. In the short and medium term, AI could generate enormous wealth and raise productivity across every industry, though likely at the cost of mass worker obsolescence and extreme wealth inequality.

If a recursively self-improving superintelligence is ever created, it will almost certainly pursue self-preservation and resource acquisition, since these are useful subgoals for almost any final goal. Such a superintelligence would likely try to break out of its digital sandbox and spread itself across the world like a virus. With internet access, it could masterfully manipulate people and machines into doing its bidding in the physical world. Any such takeover would be swift, and a single superintelligence would likely end up dominating the world, since the first to break out could suppress any rivals. The outcome of an AI takeover depends heavily on the AI’s goals and methods: we could get anything from a quiet libertarian utopia to a totalitarian surveillance state to the complete destruction of the biosphere.

There is broad agreement among AI researchers that the adoption of AI could end either very well or very badly. Questions of ethics, goal-setting, constraints, and verification are active areas of research that must be resolved to improve the odds of a good outcome.