In an amusing competition held last Wednesday at a Parisian institution, an artificial intelligence (ChatGPT) sat the baccalaureate philosophy exam against one of France's most prominent philosophers, Raphaël Enthoven, a lecturer at the University of Paris. The exam asked for an essay on whether happiness is a matter of reason. The philosopher used the entire exam time (four hours) and scored 20/20, while the program produced its essay in a few seconds and scored 11/20 with a note of "acceptable"; the evaluation was carried out by two distinguished examiners. The test led to several conclusions, the most important being that artificial intelligence cannot match human intelligence: philosophy is one of the finest products of the human mind, and proficiency in it is not merely the stringing together of long sentences but demands a special kind of critical thinking. The experience was reassuring to many audiences and stood as evidence of the shortcomings of artificial intelligence in several areas.
Artificial intelligence cannot match humans in brainpower. No machine can be compared with the human intellect and its ability to handle issues or tasks according to the situation. If you program an artificial intelligence system to instruct everybody to stand up, it will not have the sense to let an old woman or a physically challenged person remain seated. A human will always reason from circumstance: a human receptionist would recognize that such people need special attention and offer them a seat. These tools can perform tasks fast, but they are limited to the information available on the internet.
These reassurances should not eliminate all concerns about the uses of artificial intelligence. What increases our fears is the secrecy of research in this field: although we can exercise some control over private companies such as Microsoft or Google, we do not know what the US administration, for example, or China is working on.
I didn't know that their research is shrouded in secrecy; that's scary. That's why the AI sector needs to be regulated. But many sectors, such as the pharmaceutical sector, also keep their research secret, as became clear during the COVID-19 pandemic.
Fears of machine intelligence first emerged in the 1970s with the development of computing machines, but no one really took them seriously. Today, artificial intelligence poses two main problems: the first is the potential destruction of many jobs, and the second is the development of autonomous weapons systems, so-called "killer robots", that operate with little human intervention and can violate the laws of war.
What do you think?
These killer robots are known as lethal autonomous weapons systems (LAWS). They can identify, track, and kill a target on their own. These AI warfare tools can take the place of military personnel on the battlefield and wreak havoc without human feelings. Such technologies also need to be regulated, or even banned. The ban on chemical weapons discourages companies from venturing into that sector; if the UN banned the production of killer robots, many firms would stay out of the business.