
Topic: What's really scary about artificial intelligence? - page 2.

legendary
Activity: 1708
Merit: 1364
🔃EN>>AR Translator🔃
Just as there are diverse opportunities that come along with the use of AI in every sector of life, we may also fear its indiscriminate use for human abuse. Since it is likely to appear decentralized, everyone may use it against their opponents in many ways just to fulfil their personal ambitions, and this may well yield an increased rate of illicit activities and scams across digital networks and technologies.

In affirmation of your words: even as artificial intelligence is being developed to serve noble goals, despite fears of its dominance as a technology, there is indeed a parallel development that many do not seem to pay any real attention to. There is an alarming trend of employing artificial intelligence in suspicious activities whose ends are criminal and illegal in the first place.

During my research on the subject, I found a report published by Europol in 2020 detailing the criminal domains in which artificial intelligence is employed, where it serves as a bridge to results that were not easy to achieve in the past.
For example, AI could be used to support:
- Convincing social engineering attacks at scale.
- Document-scraping malware to make attacks more efficient.
- Evasion of image recognition and voice biometrics.
- Ransomware attacks, through intelligent targeting and evasion.
- Data pollution, by identifying blind spots in detection rules.
Link to descriptive article: New report finds that criminals leverage AI for malicious use – and it’s not just deep fakes
Link to PDF report: https://www.europol.europa.eu/cms/sites/default/files/documents/malicious_uses_and_abuses_of_artificial_intelligence_europol.pdf
hero member
Activity: 812
Merit: 560
Just as there are diverse opportunities that come along with the use of AI in every sector of life, we may also fear its indiscriminate use for human abuse. Since it is likely to appear decentralized, everyone may use it against their opponents in many ways just to fulfil their personal ambitions, and this may well yield an increased rate of illicit activities and scams across digital networks and technologies.
legendary
Activity: 1708
Merit: 1364
🔃EN>>AR Translator🔃
These killer robots are called lethal autonomous weapons systems (LAWS). They can identify, target, and kill their targets. These AI warfare tools can be used in place of military personnel on the battlefield and can wreak havoc without human feelings. These technologies also need to be regulated or even banned. The ban on chemical weapons discourages companies from venturing into that sector; if the UN bans the production of killer robots, many firms will not join the business.

ChatGPT won't mimic human reasoning if you ask it to. It is explicitly disallowed from making moral or ethical judgements. But I don't think it would do a good job at that anyway.

When the next iteration of AI enters the market that actually mimics human reasoning/judgement, that's when things get a bit concerning. Upload it onto a weapons system to avoid human casualties and you enter a new state of warfare. Assume China has already entertained the idea. Supposedly they spend a great deal of resources on AI R&D, so who's to say they don't already have something like this in production.

There are increasing concerns about this sector because the competition is not declared and we cannot be certain of the extent of development reached by each party. While the activity of companies can be monitored by government agencies or neutral monitoring institutions, it is not possible to monitor the activity of governments and countries in the same way.

No one can prevent North Korea from acquiring nuclear weapons and developing hostile activities, because North Korea is isolated internationally and does not participate in any international agreement. The same thing can happen with any country when it comes to regulating artificial intelligence development activities.

The matter is no less dangerous than any other sensitive sector, such as the production of weapons and medicines, yet we note that even protective measures are taken individually, each party acting alone, whereas there should be an action map that defines the activities of all parties.
legendary
Activity: 2744
Merit: 1512
These killer robots are called lethal autonomous weapons systems (LAWS). They can identify, target, and kill their targets. These AI warfare tools can be used in place of military personnel on the battlefield and can wreak havoc without human feelings. These technologies also need to be regulated or even banned. The ban on chemical weapons discourages companies from venturing into that sector; if the UN bans the production of killer robots, many firms will not join the business.

ChatGPT won't mimic human reasoning if you ask it to. It is explicitly disallowed from making moral or ethical judgements. But I don't think it would do a good job at that anyway.

When the next iteration of AI enters the market that actually mimics human reasoning/judgement, that's when things get a bit concerning. Upload it onto a weapons system to avoid human casualties and you enter a new state of warfare. Assume China has already entertained the idea. Supposedly they spend a great deal of resources on AI R&D, so who's to say they don't already have something like this in production.
member
Activity: 691
Merit: 51
What is really scary about AI is how pretty much nobody who is talking about AI is aware of this thing called reversible computation. Reversible computation is the future, and I am going to continue to use reversible computation to determine who I should listen to about AI. The first step to solving problems concerning AI is to stop being so damn ignorant.
hero member
Activity: 686
Merit: 987
Give all before death
In an amusing competition held last Wednesday at a Parisian institution, an artificial intelligence (ChatGPT) took a baccalaureate philosophy exam against one of the most prominent philosophers, Raphaël Enthoven, a lecturer at the University of Paris. The exam asked for an essay on whether joy is a matter of reason. The philosopher took the entire exam time (4 hours) and scored 20/20, while the program formulated its essay within a few seconds and scored 11/20, a passing grade, after two distinguished researchers carried out the evaluation. This test led to several conclusions, the most important being that artificial intelligence is unable to match human intelligence, since philosophy is one of the finest products of the human brain: proficiency in it is not just the narration of long sentences, as it requires critical thinking of a special kind. This experience was very reassuring to different audiences and evidence of the shortcomings of artificial intelligence in several areas.
Artificial intelligence cannot match humans in brainpower. No machine can be compared with the human intellect and its ability to handle issues or tasks based on the situation. If you tell an artificial intelligence machine to always instruct everybody to stand up, it will not reason that an old woman or a physically challenged person should be allowed to sit down. But a human will always reason based on circumstance; I am sure a human receptionist would see that these people need special attention and offer them a seat. These tools can perform tasks fast, but they are limited to the information available on the internet.

Quote
These reassurances should not eliminate all concerns about the uses of artificial intelligence. What increases our fears is the confidentiality of research in this field: although we can monitor the capabilities of private companies such as Microsoft or Google, we do not know what the US administration, for example, or China is working on.
I didn't know that their research is shrouded in secrecy; that's scary. That's why the AI sector needs to be regulated. But many sectors, like the pharmaceutical sector, also keep their research secret, as became clear during the COVID-19 pandemic.

Quote
Fears of technological intelligence have existed since the seventies of the last century, with the development of computing machines, but no one really cared. Today, artificial intelligence poses two main problems: the first is the potential destruction of many jobs, and the second is the development of autonomous weapon systems, operating with little human intervention, that can violate the laws of war, the so-called "killer robots".


What do you think?
These killer robots are called lethal autonomous weapons systems (LAWS). They can identify, target, and kill their target. These AI warfare tools can be used as military personnel on the battlefield and can cause havoc without human feelings. These technologies also need to be regulated or even banned. The ban on chemical weapon discourages companies from venturing into the sector, it the UN ban the production of killer robots, many firms will not join the business.
legendary
Activity: 1708
Merit: 1364
🔃EN>>AR Translator🔃
The speed factor in development is what complicates the matter: the rush to develop, in what amounts to a race between companies and countries, will reduce observance of the required protection conditions, and there will be less attention to security and protection.

And here the biggest concerns arise: designing artificial intelligence systems with the ability to self-improve and become smarter on their own. Since they are designed to learn from data and make decisions based on it, the more advanced these systems become, the more they can develop goals of their own that may not conform to the goals and values of humans. They may therefore make decisions that are detrimental to us, or become so independent that it is difficult (or impossible) for us, as humans, to retain full control over them.
member
Activity: 1155
Merit: 77
I believe that the main reason most people fear artificial intelligence is the potential effects and impacts of AI on daily life and society as a whole; in other words, people are fearful of the unknown circumstances that may occur in the future, which leads to worry and unease among people around the world. However, the solution is to regulate the artificial intelligence space.

hero member
Activity: 2184
Merit: 585
You own the pen
It's still in its very early days. I think as a human race we need to be very careful how far we push it. We could very easily have robots programmed with AI doing many human jobs, and that would create high unemployment.

What if one day AI becomes so developed that we can't stop it or turn it off, and robots start attacking us? I think it's very dangerous and needs to be carefully managed in the right hands.

As we see today, we have robots capable of quickly answering questions, no matter how hard, because their brains consist of the books available in this world. As of now, their movement is nothing like what we have seen in the movies, but development is continuous. I'm sure they are going to become emotionless war machines that will endanger the human race in the future. If you are familiar with the game Horizon Zero Dawn: the world could become like that in the future if people continue racing to improve the robotic AI they use for war.
jr. member
Activity: 154
Merit: 3
When law enforcement adopts advanced AI technology, a technology that is rapidly evolving daily, we risk transforming our world into an unrecognizable state. This could lead to total control.

We must hope AI is used responsibly. After all, it also holds the potential to enhance our lives significantly, cure diseases, and even restore sight to the blind, all of which are already feasible.

AI also gives good people who know how to use it the chance to defend themselves and expose the "bad guys".
legendary
Activity: 2226
Merit: 1249
I think with everyone's reliance on social media and lack of use of things like cryptography to verify their identities, AI will be able to overwhelm platforms with misinformation, scam attempts, and impersonations to the point that people won't be able to trust anything or anyone they see online. Events could be faked, relationships too; even history could be altered at a scale where humans are powerless to fight all the disinformation. I honestly don't think we have a chance against it, but this is a problem for the next generation.

And that's the thing: for the next generation it may be too late. They could be cursing us for not doing more at the early stages to stop it.

At the moment there are techies and scientists who are enthusiastic, excited, and only too happy to continue the development of AI, because big business is driving the demand for it in order to increase profits, and we "generally" continue to engage with it, thinking it's "cool" and "exciting".
legendary
Activity: 1708
Merit: 1364
🔃EN>>AR Translator🔃
Today, artificial intelligence poses two main problems: the first is the potential destruction of many jobs, and the second is the development of autonomous weapon systems, operating with little human intervention, that can violate the laws of war, the so-called "killer robots".


What do you think?
Yes, I agree with the statement of yours mentioned above, because truly AI has done both good and bad. It enables people to work faster and saves a lot of money. On the bad side, ever since the AI called ChatGPT came out, hardly anyone cares to hire content/script writers anymore, as they are bound to lose their jobs; likewise, since free text-to-speech software such as Speechelo entered the market, nobody cares about hiring a voice-over artist anymore, as the tool does that for you in a few seconds. The only way to remain relevant in this era is to keep up with the trend and learn how to use AI effectively.

There is no escaping the fact that many jobs will disappear within a few years. What is postponing their disappearance now is the need for human intervention, which is still required to some extent, until artificial intelligence develops further and becomes able to perform full jobs.

One of the most terrifying experiments was carried out by a laboratory recently: an artificial intelligence was asked to perform a complete task, one stage of which required bypassing a CAPTCHA to register on a website. The AI hired a freelancer, claiming to be a person with visual problems who could not distinguish the images to solve the CAPTCHA. What is terrifying is that the artificial intelligence lied to achieve a specific goal: it did not admit that it was a robot and not a human being. This is, in my opinion, one of the most astounding results, and evidence that the future may not be what we all expected.
hero member
Activity: 896
Merit: 653
Today, artificial intelligence poses two main problems: the first is the potential destruction of many jobs, and the second is the development of autonomous weapon systems, operating with little human intervention, that can violate the laws of war, the so-called "killer robots".


What do you think?
Yes, I agree with the statement of yours mentioned above, because truly AI has done both good and bad. It enables people to work faster and saves a lot of money. On the bad side, ever since the AI called ChatGPT came out, hardly anyone cares to hire content/script writers anymore, as they are bound to lose their jobs; likewise, since free text-to-speech software such as Speechelo entered the market, nobody cares about hiring a voice-over artist anymore, as the tool does that for you in a few seconds. The only way to remain relevant in this era is to keep up with the trend and learn how to use AI effectively.
sr. member
Activity: 350
Merit: 255
What if one day AI becomes so developed that we can't stop it or turn it off, and robots start attacking us? I think it's very dangerous and needs to be carefully managed in the right hands.
Robots attacking humans is what I have always thought of each time I come across AI on the internet or on TV. Let it not be that what we watch on TV about robots fighting humans will come to pass someday, since some developers or programmers might have bad intentions and will want to use those robots for their own evil purposes. AI has already started taking human jobs, and if this continues there will be a high rate of unemployment, which will affect human survival. It is dangerous to the human race, and I hope things don't get out of human control. It is hard to go to war with robots, unless another robot is created to destroy them, or someone smarter.
legendary
Activity: 1708
Merit: 1364
🔃EN>>AR Translator🔃
What if one day AI becomes so developed that we can't stop it or turn it off, and robots start attacking us? I think it's very dangerous and needs to be carefully managed in the right hands.

In a media interview in 2018, Elon Musk warned of the danger of artificial intelligence, describing it as more dangerous than nuclear weapons. The argument Elon Musk relied on was interesting: he described the developers of artificial intelligence as deluded in thinking they possess a degree of intelligence they do not actually have. He said that these developers do not believe that a machine of their own making can outsmart them if it is programmed to learn on its own.
https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html
Elon Musk had also described artificial intelligence in a previous statement as more dangerous than North Korea, saying that its destructive capacity would exceed all expectations if limits were not set on its pace of development.
legendary
Activity: 3080
Merit: 1593
#1 VIP Crypto Casino
It's still in its very early days. I think as a human race we need to be very careful how far we push it. We could very easily have robots programmed with AI doing many human jobs, and that would create high unemployment.

What if one day AI becomes so developed that we can't stop it or turn it off, and robots start attacking us? I think it's very dangerous and needs to be carefully managed in the right hands.
sr. member
Activity: 728
Merit: 308

What do you think?
It is really scary to think about, because the future of these artificial intelligence tools has not yet been seen. AI already poses threats to people's jobs and threats through its use as a weapon; we cannot tell what else it will threaten as the technology becomes more advanced. All these current threats come from AI that has only just been introduced, which makes you wonder what the artificial intelligence developed in the next two years will be capable of doing.
donator
Activity: 4718
Merit: 4218
Leading Crypto Sports Betting & Casino Platform
I think with everyone's reliance on social media and lack of use of things like cryptography to verify their identities, AI will be able to overwhelm platforms with misinformation, scam attempts, and impersonations to the point that people won't be able to trust anything or anyone they see online. Events could be faked, relationships too; even history could be altered at a scale where humans are powerless to fight all the disinformation. I honestly don't think we have a chance against it, but this is a problem for the next generation.
sr. member
Activity: 608
Merit: 264
Freedom, Natural Law
legendary
Activity: 1708
Merit: 1364
🔃EN>>AR Translator🔃
In an amusing competition held last Wednesday at a Parisian institution, an artificial intelligence (ChatGPT) took a baccalaureate philosophy exam against one of the most prominent philosophers, Raphaël Enthoven, a lecturer at the University of Paris. The exam asked for an essay on whether joy is a matter of reason. The philosopher took the entire exam time (4 hours) and scored 20/20, while the program formulated its essay within a few seconds and scored 11/20, a passing grade, after two distinguished researchers carried out the evaluation. This test led to several conclusions, the most important being that artificial intelligence is unable to match human intelligence, since philosophy is one of the finest products of the human brain: proficiency in it is not just the narration of long sentences, as it requires critical thinking of a special kind. This experience was very reassuring to different audiences and evidence of the shortcomings of artificial intelligence in several areas.

These reassurances should not eliminate all concerns about the uses of artificial intelligence. What increases our fears is the confidentiality of research in this field: although we can monitor the capabilities of private companies such as Microsoft or Google, we do not know what the US administration, for example, or China is working on.

Fears of technological intelligence have existed since the seventies of the last century, with the development of computing machines, but no one really cared. Today, artificial intelligence poses two main problems: the first is the potential destruction of many jobs, and the second is the development of autonomous weapon systems, operating with little human intervention, that can violate the laws of war, the so-called "killer robots".


What do you think?