Right now, there are a bunch of fearmongers, fun ruiners, fat guys like Eliezer Yudkowsky, and chlurmcks who know absolutely nothing about anything, and they want to ban AI because they think that AI will destroy humanity. There are also a lot of AI corporatists who do know something about AI, but they want everyone else banned from using AI so that they can have a monopoly. The only sensible way to deal with this madness is to innovate as fast as possible so that as much AI as possible is let loose before anyone can ban it.
Here is what we need to do.
1. Invest in reversible computation. Reversible computation is the future of computation (mainly the hardware, but reversible algorithms are also important), and it will be the future of AI. You should start by learning about reversible computation (see the first sketch after this list).
2. Be careful who you listen to. Do not listen to anyone talking about AI who has not gone on the record about the virtues of reversible computation. Reversible computation is the future, so anyone interested in AI who is unfamiliar with reversible computation is a crackpot. In fact, you should not listen to the news or anything like that, because it is all hype. Clean energy, nuclear fusion, quantum computing, and most AI are hype. If people really cared about innovation, they would be just as interested in reversible computation.
3. Innovate or at least invest in innovation. Develop your own AI algorithms. Invest in those who are developing their own AI algorithms. And by innovation I do not mean replacing ReLU with a continuous approximation of ReLU or changing a few hyperparameters (the second sketch after this list shows how little such a tweak amounts to). I mean trying to replace neural networks with something else for at least some ML tasks.
4. Improve AI safety just enough to keep most of the crybabies at bay. You cannot please all of the crybabies because some of them have lost their minds. But some of them are sensible enough to respond to, and be satisfied by, innovation in AI safety. Of course, not all AI safety research is suitable for our purposes. We need to invest in AI safety research that helps us understand and improve our AI systems so that we can have more fun (but do it safely). We should not invest in AI safety that just amounts to fearmongering, imagining scenarios that will never happen, and telling scary stories. We should invest in AI safety research that lets us understand and control AI systems better (the third sketch after this list gives the flavor of what I mean).
5. Improve yourself and humanity as much as possible. This is probably really hard for most people because most people are absolutely disgraceful, pathetic, and exceedingly evil, but take care of yourself. Think about it. Who is the most afraid of AI? Pathetic people. That is right. Pathetic people realize that they are too incompetent to stand up to an advanced AI system. People with low self-esteem are the most afraid of AI. But if you realize that you are pathetic and cannot improve yourself, then you should just humbly admit that the AI is better than you. You should therefore rejoice that you are being replaced by something that is much better.
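Here is a minimal sketch, for point 1, of what reversibility means at the logic level, assuming only the Python standard library; the Toffoli gate and the particular bit patterns are illustrative choices, not anything canonical. The point is that the gate is its own inverse, so no information is ever erased.

    # Toffoli (controlled-controlled-NOT): flip the target bit c only when
    # both control bits a and b are 1. Applying the gate twice returns the
    # original input, so the computation destroys no information.
    def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
        return a, b, c ^ (a & b)

    for bits in [(0, 0, 1), (1, 1, 0), (1, 0, 1), (1, 1, 1)]:
        out = toffoli(*bits)
        back = toffoli(*out)      # the gate undoes itself
        assert back == bits       # every output maps back to a unique input
        print(bits, "->", out, "->", back)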
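And here is the kind of tweak from point 3 that I am dismissing as non-innovation: softplus, log(1 + e^x), is a smooth approximation of ReLU, max(0, x). A small sketch, assuming numpy is available, shows how little actually changes.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def softplus(x):                 # smooth approximation of ReLU
        return np.log1p(np.exp(x))

    xs = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
    print(relu(xs))      # [0.  0.  0.  1.  5.]
    print(softplus(xs))  # approximately [0.007  0.313  0.693  1.313  5.007]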
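Finally, a toy sketch of the flavor of safety work from point 4 that I consider worthwhile: probing which parts of a model actually drive its behavior. The two-layer network and its random weights here are placeholder assumptions; the method (ablate a unit, measure the effect on the output) is the point, not the model.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 4))    # hypothetical hidden-layer weights
    W2 = rng.normal(size=(4, 1))    # hypothetical output weights
    x = rng.normal(size=(1, 8))     # one example input

    def forward(x, mask):
        hidden = np.maximum(0.0, x @ W1) * mask   # ReLU hidden layer, with ablation mask
        return (hidden @ W2).item()

    baseline = forward(x, np.ones(4))
    for unit in range(4):
        mask = np.ones(4)
        mask[unit] = 0.0            # silence one hidden unit
        print(f"unit {unit}: output shifts by {forward(x, mask) - baseline:+.3f}")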
Don't work for OpenAI, since they are scamming people with cryptocurrency scams. Don't invest in FTX. If you put money into FTX, then you are one of those pathetic people I am talking about. Don't take money from FTX either; if you did, then you took dirty money, since you are one of those pathetic people I was talking about. Goodbye.
-Joseph Van Name Ph.D.
Since the emergence of AI, there have been numerous opinions and debates about this technology. Personally, I believe that AI is a great invention that we can put to good use. It's only a matter of how we utilize it properly. After all, everything has its boons and banes. And you're right, we just need to improve AI safety and AI research, and maybe in the future we can add AI policies so we can make sure that AI is being used properly and accordingly.
AI won't be banned because it can't be banned. Even if we went through the hassle of establishing ethics and safety boards and oversight, would China adhere to such concerns about AI malpractice? What about Russia? Let the AI hysteria from the dissidents continue. They can cry all they want.
These naysayers usually appear at the cusp of every new technological evolution. The computer explosion in the '70s had plenty. Where are they now?
Agreed on this one. Let them cry, and let them realize how powerful and valuable this emerging technology is.