I’ve figured out how to explain the situation with OpenAI and why some people are stressed about the uncontrolled improvement in AI capability and the prospect of Artificial General Intelligence (AGI).
Imagine a scenario where a startup releases an anti-aging drug tomorrow. I don’t know, something like Altos Labs or AgeX. The company then turns secretive, withholding key parts of the discovery from the scientific community while showing astounding results in small mammals. And even though human trials might take a century, the initial results already demonstrate that the organism rejuvenates.
Of course, this luxury isn’t available to everyone: demand is astronomical while production is expensive and complex. So the company sets to work producing the super-drug at scale.
All simulations indicate that such progress eventually leads to the end of civilization. We can of course argue that simulations are dumb and wrong while people are rational and conscious, but history shows that no large group has ever acted in a truly conscious manner: roughly one percent pushes forward, and the rest either endure, stay out of the way, or lend a hand, even though rationally they should all be helping. At the same time, the simulations show that nothing disastrous will happen within the lifetime of anyone currently living. So we need to think at least about our children and grandchildren, although here too it comes down to belief, since some assume that our grandchildren’s grandchildren will simply migrate to Mars. What should an intelligent society do to ensure that people don’t kill each other 200 years from now?
There are two options: either ensure the anti-aging drug never comes into existence (consciously refuse progress), or formulate principles for its rational and fair application. How long would such principles last? How do you explain to someone that it is just as fair for them to die tomorrow as it is for a millionaire scientist to live on because he “deserves” the drug? If no agreement is reached, people will take up pitchforks by sheer weight of numbers, and humanity will ultimately be set back hundreds of years.
The same is true of OpenAI and AGI. Nobody knows what to do with it, but many believe that if we don’t reflect on it, the problem is inevitable, and if we do, it might not be.
