Author

Topic: What's really scary about artificial intelligence? (Read 496 times)

legendary
Activity: 1778
Merit: 1474
🔃EN>>AR Translator🔃
In a curious competition held last Wednesday at a Parisian institution, an artificial intelligence (ChatGPT) sat a baccalaureate philosophy exam against one of France's most prominent philosophers, Raphaël Enthoven, a lecturer at the University of Paris. The exam asked for an essay on whether joy is a rational matter. The philosopher used the entire exam time (4 hours) and scored 20/20, while the program produced its essay within a few seconds and scored 11/20 with an acceptable note, after two distinguished researchers carried out the evaluation. The test led to several conclusions, the most important being that artificial intelligence cannot yet match human intelligence: philosophy is one of the finest products of the human mind, and proficiency in it is not just the stringing together of long sentences but requires critical thinking of a special kind. The experiment was reassuring to many audiences and was taken as evidence of the shortcomings of artificial intelligence in several areas.

These reassurances should not eliminate all concerns about the uses of artificial intelligence. What increases our fears is the secrecy of research in this field: although we can oversee the capabilities of private companies such as Microsoft or Google, we do not know what the US administration, for example, or China is working on.

Fears of machine intelligence have existed since the 1970s, with the development of computing machines, but few people really cared. Today, artificial intelligence poses two main problems: first, the potential destruction of many jobs; second, the development of autonomous weapon systems, so-called "killer robots", that operate with little human intervention and can violate the laws of war.


What do you think?

I know some individuals are concerned that opening up their AI research would put them at a disadvantage in the marketplace. But I believe the advantages of openness will exceed the hazards. After all, if consumers don't trust AI technology, they are less likely to use it, which is terrible for the corporations that create it. And even if there are some risks to being open with one another, I think they're surpassed by the risks of creating AI in secret, because secrecy makes it difficult for the general public to know what's really going on. If we want to guarantee that AI is developed in a way that's both secure and legal, we need a greater degree of openness about how it's being created as well as how it's being used.

As an example of one of the most dangerous points we have reached in the artificial intelligence experiment: it has been revealed that Israel is using artificial intelligence techniques to develop its offensive systems in the war on Gaza. Today, the United Nations Secretary-General, António Guterres, expressed his concern about Israel's use of artificial intelligence in the Gaza war. The Israeli army confirmed that its forces were operating "in parallel above and below the ground," using weapons that had not been deployed in previous wars.
"Every soldier is a sniper": for the first time, the army used AI-enhanced aiming-scope technology mounted on weapons such as rifles and machine guns. It helps soldiers intercept drones, which Hamas uses in large numbers, effectively turning every soldier, even one with poor aim, into a sniper.
Another technology helps the army launch drones that can throw nets at other drones to disrupt their operation inside tunnels; these drones can track humans and operate underground.

The danger in this information is that Israel launched this development without monitoring or supervision from any regulatory body, and no one knows what is being worked on in secret.
hero member
Activity: 1162
Merit: 643
BTC, a coin of today and tomorrow.
" What's really scary about artificial intelligence?"

That it will "think"  and perhaps behave like us humans.
Not just that. Think of it this way: you are unable to differentiate between a human emotion and a machine emotion. Think about it the OgNasty way, where you will be deceived into falling in love with a series of machine code, thinking it's a human, because she (the AI) will be able to chat, place a phone call and perhaps engage in video calls. Many people are just scared that AI will replace their jobs, but they have not considered that AI could be used to scam humans seamlessly.
To me the scariest thing about AI is that its evolution is not predictable. It might evolve into something so disastrous and damaging that its disadvantages overwhelm its advantages.
sr. member
Activity: 1274
Merit: 457
The present age is the digital age, and humans are increasingly governed by artificial intelligence. AI has made people's lives easier and more comfortable and has made work much simpler, so people now strive to keep pace with the most advanced technology. But while it has made life easier, it is slowly eroding our own intelligence: humans have become thoroughly mechanized, which immobilizes their thinking. Many people can no longer imagine even a single day of their lives without AI.
member
Activity: 691
Merit: 51
If the people here are really afraid of artificial intelligence, maybe they should read this (unless they already know this) in order to prepare.

https://tutorial.math.lamar.edu/Classes/CalcI/CalcI.aspx

But don't enroll in a college or university or any other academic institution because those institutions promote violence.
legendary
Activity: 1162
Merit: 2025
Leading Crypto Sports Betting & Casino Platform
I don't find AI to be particularly frightening because we are developing it to enhance human capabilities rather than to replace people.


That would be true at first, sure. We are already witnessing how artificial intelligence is being used to help doctors, content creators and other professionals.
But what is going to happen when AI becomes better than a human being at working or completing tasks?
Corporations and big companies won't be able to resist the temptation of replacing people with intelligent machines, which do not form unions, do not complain, do not eat, and do not need bathroom breaks. In the end, the advance of technology and the greed of those in charge could lead to very serious societal problems.
hero member
Activity: 912
Merit: 661
Do due diligence
" What's really scary about artificial intelligence?"

That it will "think"  and perhaps behave like us humans.
legendary
Activity: 3990
Merit: 1385
I've heard that self-driving cars that use a form of AI don't get out of the way for emergency vehicles. I guess AI is just stubborn, like a little child.

Cool
member
Activity: 71
Merit: 21
I don't find AI to be particularly frightening because we are developing it to enhance human capabilities rather than to replace people.
member
Activity: 691
Merit: 51
Ultegra134-It is easy for a fucking chatbot to pose as one's girlfriend when 99.999% of people have absolutely no social skills at all. Maybe we should just let the AI win. People do not have meaningful relationships because most people are fucked up pieces of shit. The truth hurts.

-Joseph Van Name Ph.D.
hero member
Activity: 1778
Merit: 907
I'm not really sure about AI-controlled weapons of mass destruction; I believe this is quite unrealistic and something that came out of a conspiracy theory. I'd be more worried about the potential loss of jobs that AI would replace in the blink of an eye. Moreover, there's an increasing trend of AI chat companions so advanced that they are even capable of posing as your girlfriend and holding full conversations, including voice calls and adult messaging. This, perhaps, is the worst of all. People have become a lot more introverted and lonely, and now, with AI, it'll be even easier to lean on a virtual companion rather than real relationships in order to feel less lonely.
member
Activity: 691
Merit: 51
Maybe people should invest more in AI safety. In my cryptocurrency research (that you all fucking hate me for doing because you are horrible evil people), I have been working on machine learning algorithms that can perform some but not all AI tasks and which are pseudodeterministic, in the sense that if one runs the gradient ascent multiple times to train the model, one will often end up with the exact same result (this is called pseudodeterminism). Hooray for pseudodeterminism. Pseudodeterminism means the final model will not contain much random information, and it also means we have a better shot at interpreting and understanding the AI model, since it will not be cluttered with randomness or pseudorandomness. And if we can interpret AI better, then we can control it better. Never trust AI that you cannot control and whose inner workings you cannot understand.

On the other hand, if AI replaces humanity, then that would be a good thing because humans (especially the chlurcmklets on this site) are incredibly dysfunctional. We should therefore race towards as much AI as possible because humans are incredibly deplorable entities.

-Joseph Van Name Ph.D.
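The pseudodeterminism claim above can be illustrated with a toy sketch (my own illustration, not the poster's actual algorithm): for a strictly concave objective, gradient ascent started from very different random initializations converges to the same maximizer, so the trained "model" retains essentially no trace of the random starting point.

```python
import random

def grad_ascent(start, lr=0.1, steps=200):
    """Plain gradient ascent on the concave toy objective f(x) = -(x - 3)^2.
    Its gradient is f'(x) = -2 * (x - 3), so the unique maximum is at x = 3."""
    x = start
    for _ in range(steps):
        x += lr * (-2.0 * (x - 3.0))
    return x

# Two runs from very different random starting points.
random.seed(0)
run_a = grad_ascent(random.uniform(-100, 100))
run_b = grad_ascent(random.uniform(-100, 100))

# For this objective the outcome is pseudodeterministic: both runs land on
# (essentially) the same model, x = 3, regardless of initialization.
print(abs(run_a - run_b) < 1e-6)  # True
```

Real neural-network losses are non-concave, so this property is far from automatic there; the sketch only shows what "same result from repeated training runs" means in the simplest possible setting.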
legendary
Activity: 1162
Merit: 2025
Leading Crypto Sports Betting & Casino Platform
To me, the scary thing about artificial intelligence is mostly the uncanny effectiveness with which it can perform certain activities, which could lead to the unemployment of millions in the future.

Also, we cannot ignore how abuse of this technology could enable wrongdoing by people who only wish to commit crimes with it.

It is one thing to have technology that automatically produces images, but a completely different issue to have AIs capable of coding themselves, or of analyzing code and finding vulnerabilities faster than any human being could...
full member
Activity: 548
Merit: 168
Play Bitcoin PVP Prediction Game
Prehistoric humans already used knives to skin their prey. They may also have thought, "this knife could also hurt me." But thoughts like that no longer stop people from using knives. AI can likewise be called a double-edged sword: on the one hand it speeds up work, on the other it kills professions.

There is a misconception about AI because of how quickly word of it spreads through the internet, social media, messengers, etc. People worry that these AIs are going to replace jobs, which could be true, but they overlook the fact that AI can also generate new opportunities and even new jobs. What's scary are those movies in which robots someday take over the world and enslave us. Those fantasies have always been there, along with the belief that everything is possible with AI. What people have to recognize is that this technology is going to make tasks easier and make whoever uses it more productive.
People think that one day AI could control humans. That opinion is indeed correct. To prevent this, humans are expected to keep control of AI by programming it so that it doesn't go too far and by designing appropriate algorithms, because AI is governed by human programs. So it depends on the human. Do you want to make a robot that can contemplate bombing? You certainly can. It depends on who is programming.
legendary
Activity: 1778
Merit: 1474
🔃EN>>AR Translator🔃
There are many voices calling for the dangers of artificial intelligence to be addressed, and new theories are appearing about the extent of the danger it may cause if it continues at the pace it is moving today.

During 2023 alone, many countries began taking measures on the issue, which shows how seriously the challenges it poses are being treated:
China is investing in artificial intelligence, but it is also issuing regulations to govern it.
India is focusing on developing the skills needed for the AI economy.
A presidential executive order from Biden calls for the development of ethical guidelines to guide the development and use of artificial intelligence.
The European Parliament has also issued directives on artificial intelligence aimed at making it more transparent.

Many governments have begun to take the matter seriously, and the conviction has grown that there are a number of challenges that must be dealt with, whether you intend to use artificial intelligence or to ignore it.
hero member
Activity: 3066
Merit: 629
20BET - Premium Casino & Sportsbook
There is a misconception about AI because of how quickly in spreads the word through the internet, social media, messengers and etc. People do also worry that these AIs are going to replace jobs which could be true but they don't look at the part that it can also generate new opportunities and even jobs. What's scary are those movies in which people think that someday these will come robots and they'll take over the world and then they'll make us a slave. Those fantasies and disbelief have been always there thinking that everything is possible with AI. What people have to analyze and recognize is that this technology is going to make tasks easier and will make someone who uses it more productive.
legendary
Activity: 1778
Merit: 1474
🔃EN>>AR Translator🔃
Google's conversational, generative artificial intelligence platform Bard is now available in Arabic in its latest expansion, in competition with ChatGPT. The service can now understand questions in 16 Arabic dialects, but will provide answers in Standard Arabic: https://ar.cointelegraph.com/news/googles-bard-ai-now-available-in-arabic-in-latest-expansion

I tried it using my local dialect and the result was completely satisfactory. I asked the same question in another Arabic dialect, and it gave me almost the same result, with only very slight differences.
It seems that Google's Bard is in some respects more capable than its ChatGPT counterpart, since it is not limited the way ChatGPT is, whose knowledge stops at 2021, while Bard supports many languages and is now learning to understand different dialects.

Developing artificial intelligence applications that handle local dialects will make things much easier for users with limited experience, or for those who struggle with writing or proper linguistic expression. Employing artificial intelligence is now much easier than anyone might think.
Currently this is the trial version [completely free], which will go through a trial period in which it can learn more and develop its knowledge, after which paid versions will be issued.
legendary
Activity: 3990
Merit: 1385
I think the scariest thing about Artificial Intelligence is that PEOPLE are the AI.

Just ask God.

Cool
sr. member
Activity: 322
Merit: 227
Playbet.io - Crypto Casino and Sportsbook
Fears of machine intelligence have existed since the 1970s, with the development of computing machines, but few people really cared. Today, artificial intelligence poses two main problems: first, the potential destruction of many jobs; second, the development of autonomous weapon systems, so-called "killer robots", that operate with little human intervention and can violate the laws of war.

The fear of artificial intelligence is just another manufactured fear; we don't need to be scared of it. I don't believe computers can take over humanity. The jobs that AI is taking away aren't a problem, because with the advancement of AI there will be new jobs created. The new jobs aren't those that compete with AI but those that work hand in hand with it, like maintenance.

For the robots being built, we'll need people to service and operate them, and those are the new jobs we get. We have to update what we consider jobs as the world evolves and things change. While we're here complaining about AI taking our jobs, others are learning new skills that will make them hirable for the other kinds of employment that come from having these robots in our lives. AI can be good and bad: if some people create autonomous weapons, we can also have good AI that creates protection against those weapons. Movies make us believe so many things; we forget that movies are just imagination, and most of what we watch will never happen in real life, like aliens invading Earth or robots dominating humanity.
legendary
Activity: 3066
Merit: 1169
Leading Crypto Sports Betting & Casino Platform
What if one day AI becomes so developed we can’t stop it or turn it off & robots start attacking us. I think it’s very dangerous & needs to be carefully managed in the right hands.
Yeah, we are not even heading in a direction where that would be possible. That sounds like a movie plot written by someone who doesn't understand AI. Scary, and pseudoscience.

In a media interview in 2018, Elon Musk warned of the danger of artificial intelligence, describing it as more dangerous than nuclear weapons. Musk's argument was interesting: he described AI developers as deluded about possessing a degree of intelligence they do not actually have, saying these developers do not believe that a machine of their own making could outsmart them if it is programmed to learn on its own.
https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html
Elon Musk had also described artificial intelligence in an earlier statement as more dangerous than North Korea, saying its destructive capacity would exceed all expectations if limits were not set on its pace of development.
I am assuming this article came right after he left OpenAI? I am not sure what happened, but right after that Musk began his crusade against AI. Most likely because he missed the boat. Maybe he got voted out.

Once again, Elon isn't as smart as he acts; he acts like it because he enjoys the admiration. I mean, he bought the whole of Twitter for it.

And he doesn't go into any actual specifics about how it is more dangerous than nuclear weapons. He is just afraid. Not sure of what; self-awareness? Musk hasn't read enough, or isn't self-aware enough, to understand how self-awareness works. The only argument I found in that article was:

Quote
“This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”
Now this is just stupid and sounds like biblical-level nonsense. AI is so far from being self-aware, or from having needs (because we are not creating it with them), that it amazes me he doesn't even follow the projects he funded. I don't disagree that at some point it could totally manipulate humans, but we are not even going in that direction yet, and I can't see any reason to go there.
legendary
Activity: 1778
Merit: 1474
🔃EN>>AR Translator🔃
Just as there are diverse opportunities that come with the use of AI in every sector of life, we may also fear its indiscriminate use for human abuse. Since it is likely to appear in decentralized form, everyone may use it against their opponents in many ways just to fulfil personal ambitions, which could well yield an increased rate of illicit activities and scams on digital networks and technologies.

In affirmation of your words: at the same time that artificial intelligence is being developed to serve noble goals, despite fears of its dominance as a technology, there is indeed a parallel development that many do not seem to pay real attention to. There is an alarming growth in the employment of artificial intelligence in suspicious activities whose ends are criminal and illegal in the first place.

During my research on the subject, I found a report published by Europol in 2020 detailing the criminal domains in which artificial intelligence is employed, serving as a bridge to results that were not easy to achieve in the past.
For example, AI could be used to support:
- Convincing social engineering attacks at scale.
- Document-scraping malware to make attacks more efficient.
- Evasion of image recognition and voice biometrics.
- Ransomware attacks, through intelligent targeting and evasion.
- Data pollution, by identifying blind spots in detection rules.
Link to descriptive article: New report finds that criminals leverage AI for malicious use – and it’s not just deep fakes
Link to PDF report: https://www.europol.europa.eu/cms/sites/default/files/documents/malicious_uses_and_abuses_of_artificial_intelligence_europol.pdf
hero member
Activity: 812
Merit: 560
Just as there are diverse opportunities that come with the use of AI in every sector of life, we may also fear its indiscriminate use for human abuse. Since it is likely to appear in decentralized form, everyone may use it against their opponents in many ways just to fulfil personal ambitions, which could well yield an increased rate of illicit activities and scams on digital networks and technologies.
legendary
Activity: 1778
Merit: 1474
🔃EN>>AR Translator🔃
These killer robots are called lethal autonomous weapon systems (LAWS). They can identify, target, and kill on their own. These AI warfare tools can act as military personnel on the battlefield and can wreak havoc without human feelings. These technologies need to be regulated or even banned. The ban on chemical weapons discourages companies from venturing into that sector; if the UN bans the production of killer robots, many firms will not join the business.

ChatGPT won't mimic human reasoning even if you ask it to; it is explicitly disallowed from making moral or ethical judgements. But I don't think it would do a good job at that anyway.

When the next iteration of AI enters the market that actually mimics human reasoning/judgement, that's when things get a bit concerning. Upload it onto a weapons system to avoid human casualties and you enter a new state of warfare. Assume China has already entertained the idea. Supposedly they spend a great deal of resources on AI R&D, so who's to say they don't already have something like this in production.

There are increasing concerns about this sector because the competition is undeclared and we cannot be certain of the extent of development each party has reached. While the activity of companies can be monitored by government agencies or neutral monitoring institutions, the activity of governments and countries cannot be monitored in the same way.

No one could prevent North Korea from acquiring nuclear weapons and developing hostile activities, because Korea is internationally isolated and does not participate in any international agreement. The same could happen with any country when it comes to regulating artificial intelligence development.

The matter is no less dangerous than any other sensitive sector, such as the production of weapons or medicines, yet even the protective measures are being taken individually, each party acting alone, when there should be a shared road map defining the activities of all parties.
legendary
Activity: 2828
Merit: 1515
These killer robots are called lethal autonomous weapon systems (LAWS). They can identify, target, and kill on their own. These AI warfare tools can act as military personnel on the battlefield and can wreak havoc without human feelings. These technologies need to be regulated or even banned. The ban on chemical weapons discourages companies from venturing into that sector; if the UN bans the production of killer robots, many firms will not join the business.

ChatGPT won't mimic human reasoning even if you ask it to; it is explicitly disallowed from making moral or ethical judgements. But I don't think it would do a good job at that anyway.

When the next iteration of AI enters the market that actually mimics human reasoning/judgement, that's when things get a bit concerning. Upload it onto a weapons system to avoid human casualties and you enter a new state of warfare. Assume China has already entertained the idea. Supposedly they spend a great deal of resources on AI R&D, so who's to say they don't already have something like this in production.
member
Activity: 691
Merit: 51
What is really scary about AI is how pretty much nobody talking about AI is aware of this thing called reversible computation. Reversible computation is the future, and I am going to keep using reversible computation to judge who I should listen to about AI. The first step to solving problems concerning AI is to stop being so damn ignorant.
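For readers who have not met the term: reversible computation means every step of a computation is invertible, so the inputs can always be recovered from the outputs and no information is erased. As a minimal sketch (my own illustrative example, not the poster's work), the classic CNOT gate is a bijection on pairs of bits and is its own inverse:

```python
def cnot(a, b):
    """Controlled-NOT gate: flips the target bit b when the control bit a is 1.
    Unlike an ordinary AND or OR gate, it destroys no information."""
    return a, b ^ a

def cnot_inverse(a, b):
    """CNOT is its own inverse: applying it twice recovers the original input."""
    return cnot(a, b)

# Every input pair maps to a unique output pair (the gate is a bijection),
# so the original input can always be recovered from the output.
for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert cnot_inverse(*cnot(*bits)) == bits
print("all inputs recovered")
```

Irreversible gates like AND map two different inputs to the same output, which is exactly the information loss that reversible designs avoid.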
hero member
Activity: 686
Merit: 987
Give all before death
In a curious competition held last Wednesday at a Parisian institution, an artificial intelligence (ChatGPT) sat a baccalaureate philosophy exam against one of France's most prominent philosophers, Raphaël Enthoven, a lecturer at the University of Paris. The exam asked for an essay on whether joy is a rational matter. The philosopher used the entire exam time (4 hours) and scored 20/20, while the program produced its essay within a few seconds and scored 11/20 with an acceptable note, after two distinguished researchers carried out the evaluation. The test led to several conclusions, the most important being that artificial intelligence cannot yet match human intelligence: philosophy is one of the finest products of the human mind, and proficiency in it is not just the stringing together of long sentences but requires critical thinking of a special kind. The experiment was reassuring to many audiences and was taken as evidence of the shortcomings of artificial intelligence in several areas.
Artificial intelligence cannot match humans in brainpower. No machine can be compared with the human intellect and its ability to handle issues or tasks according to the situation. If you tell an AI machine to always instruct everybody to stand up, it will not be reasonable enough to tell an old woman or a physically challenged person to sit down. But a human will always reason from circumstance; I am sure a human receptionist would realize that these people need special attention and offer them a seat. These tools can perform tasks fast, but they are limited to the information available on the internet.

Quote
These reassurances should not eliminate all concerns about the uses of artificial intelligence. What increases our fears is the secrecy of research in this field: although we can oversee the capabilities of private companies such as Microsoft or Google, we do not know what the US administration, for example, or China is working on.
I didn't know that their research is shrouded in secrecy; that's scary. That's why the AI sector needs to be regulated. But many sectors, like the pharmaceutical sector, also keep their research secret, as became clear during the COVID-19 pandemic.

Quote
Fears of machine intelligence have existed since the 1970s, with the development of computing machines, but few people really cared. Today, artificial intelligence poses two main problems: first, the potential destruction of many jobs; second, the development of autonomous weapon systems, so-called "killer robots", that operate with little human intervention and can violate the laws of war.


What do you think?
These killer robots are called lethal autonomous weapon systems (LAWS). They can identify, target, and kill on their own. These AI warfare tools can act as military personnel on the battlefield and can wreak havoc without human feelings. These technologies need to be regulated or even banned. The ban on chemical weapons discourages companies from venturing into that sector; if the UN bans the production of killer robots, many firms will not join the business.
legendary
Activity: 1778
Merit: 1474
🔃EN>>AR Translator🔃
The speed factor complicates the matter, because the rush to develop, in what amounts to a race between companies and countries, will reduce compliance with the required safeguards, and less attention will be paid to security and protection.

And here the biggest concerns arise: artificial intelligence systems are being designed with the ability to self-improve and become smarter on their own, since they learn from data and make decisions based on it. The more advanced these systems become, the more they can develop goals of their own that may not conform to human goals and values, so they may make decisions detrimental to us, or become so independent that it is difficult (or impossible) for us, as humans, to retain full control over them.
member
Activity: 1191
Merit: 78
I believe the main reason most people fear artificial intelligence is its potential effect on daily life and society as a whole; in other words, people fear the unknown circumstances the future may bring, which leads to worry and unease for many around the world. The solution, however, is to regulate the artificial intelligence space.

hero member
Activity: 2268
Merit: 588
You own the pen
It’s still in its very early days. I think as a human race we need to be very careful how far we push it. We could very easily have robots programmed with AI doing many human jobs, it would create high unemployment.

What if one day AI becomes so developed we can’t stop it or turn it off & robots start attacking us. I think it’s very dangerous & needs to be carefully managed in the right hands.

As we see today, we have robots capable of quickly answering questions no matter how hard they are, because their "brain" consists of the books available in this world. As of now, their movement is not like what we have seen in the movies, but with continuous development I'm sure they will become emotionless war machines that will endanger the human race in the future. If you are familiar with the game Horizon Zero Dawn: in the future, the world could become like that if people continue racing to improve the robotic AI they use for war.
jr. member
Activity: 154
Merit: 3
When law enforcement adopts advanced AI technology, a technology that is rapidly evolving daily, we risk transforming our world into an unrecognizable state. This could lead to total control.

We must hope AI is used responsibly. After all, it also holds the potential to enhance our lives significantly, cure diseases, and even restore sight to the blind, all of which are already feasible.

AI also gives good people who know how to use it a chance to defend themselves and expose the "Bad Guys".
legendary
Activity: 2464
Merit: 1387
I think with everyone’s reliance on social media and lack of using things like cryptography to ensure their identities, AI will be able to overwhelm platforms with misinformation, scam attempts, and impersonations to the point that people won’t be able to trust anything or anyone they see online. Events could be faked, relationships, even history could be altered at a scale where humans are powerless to fight all the disinformation. I honestly don’t think we have a chance against it, but this is a problem for the next generation.

And that's the thing: for the next generation it may be too late. They could be cursing us for not doing more in the early stages to stop it.

At the moment there are techies and scientists who are enthusiastic, excited and only too happy to continue developing AI, because big business is driving demand for it in order to increase profits, and we generally continue to engage with it, thinking it's "cool" and "exciting".
legendary
Activity: 1778
Merit: 1474
🔃EN>>AR Translator🔃
Today, artificial intelligence poses two main problems: the first is the potential destruction of many jobs, and the second is the development of autonomous weapons systems that operate with little human intervention and can violate the laws of war, so-called "killer robots".


What do you think?
Yes, I agree with the statement of yours mentioned above, because A.I. has truly done both good and bad. It enables people to work faster and saves a lot of money. On the bad side, ever since the A.I. called ChatGPT came out, hardly anyone cares to hire content/script writers anymore, as they are bound to lose their jobs; likewise, since free "text to speech" software such as Speechelo entered the market, nobody cares about hiring a voice-over artist anymore, as the tool does it for you in a few seconds. The only way to remain relevant in this era is to keep up with the trend and learn how to use A.I. effectively.

There is no escaping the fact that many jobs will disappear within a few years. What is postponing their disappearance for now is the need for human intervention, which is still required to some extent until artificial intelligence develops further and can do entire jobs on its own.

One of the most terrifying experiments was carried out by a laboratory recently: an artificial intelligence was asked to perform a complete task, one stage of which required bypassing a captcha to register on a website. The AI hired a freelancer to solve the captcha, claiming to be a person with visual problems who could not make out the image. What is terrifying is that the artificial intelligence lied to achieve a specific goal: it did not admit that it was a robot and not a human being. This is, in my opinion, one of the most astounding results, and it provides evidence that the future may not be what we all expected.
hero member
Activity: 1176
Merit: 785
Today, artificial intelligence poses two main problems: the first is the potential destruction of many jobs, and the second is the development of autonomous weapons systems that operate with little human intervention and can violate the laws of war, so-called "killer robots".


What do you think?
Yes, I agree with the statement of yours mentioned above, because A.I. has truly done both good and bad. It enables people to work faster and saves a lot of money. On the bad side, ever since the A.I. called ChatGPT came out, hardly anyone cares to hire content/script writers anymore, as they are bound to lose their jobs; likewise, since free "text to speech" software such as Speechelo entered the market, nobody cares about hiring a voice-over artist anymore, as the tool does it for you in a few seconds. The only way to remain relevant in this era is to keep up with the trend and learn how to use A.I. effectively.
sr. member
Activity: 392
Merit: 262
Lohamor Family
What if one day AI becomes so developed we can’t stop it or turn it off & robots start attacking us. I think it’s very dangerous & needs to be carefully managed in the right hands.
Robots attacking humans is what I have always thought of each time I come across AI on the internet or on TV. Let it not be that what we watch on TV about robots fighting humans will come to pass someday. Some developers or programmers might have bad intentions and will want to use those robots for their own evil purposes. AI has already started taking human jobs, and if this continues there will be a high rate of unemployment, which will affect human survival. It is dangerous to the human race, and I hope things don't get out of human control. It is hard to go to war with robots, unless another robot is created to destroy them, or someone smarter.
legendary
Activity: 1778
Merit: 1474
🔃EN>>AR Translator🔃
What if one day AI becomes so developed we can’t stop it or turn it off & robots start attacking us. I think it’s very dangerous & needs to be carefully managed in the right hands.

In a media interview that took place in 2018, Elon Musk warned of the danger of artificial intelligence, describing it as more dangerous than nuclear weapons. The argument Musk relied on was interesting: he described the developers of artificial intelligence as deluded into believing they possess a degree of intelligence they do not actually have. He said these developers do not believe that a machine of their own making can outsmart them if it is programmed to learn on its own.
https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html
Elon Musk had also described artificial intelligence in an earlier statement as more dangerous than North Korea, saying its destructive capacity would exceed all expectations if limits were not set on its pace of development.
legendary
Activity: 3304
Merit: 1617
#1 VIP Crypto Casino
It’s still in its very early days. I think as a human race we need to be very careful how far we push it. We could very easily have robots programmed with AI doing many human jobs, it would create high unemployment.

What if one day AI becomes so developed we can’t stop it or turn it off & robots start attacking us. I think it’s very dangerous & needs to be carefully managed in the right hands.
sr. member
Activity: 924
Merit: 329
Hire Bitcointalk Camp. Manager @ r7promotions.com

What do you think?
It is really scary to think about, because the future uses of this artificial intelligence have not yet been seen. AI already poses threats to people's jobs, and threats through its use in weapons; we cannot tell what else it will threaten as the technology becomes more advanced. All these current threats come from AIs that have only just been introduced, which makes you wonder what the artificial intelligence developed in the next two years will be capable of doing.
donator
Activity: 4760
Merit: 4323
Leading Crypto Sports Betting & Casino Platform
I think with everyone’s reliance on social media and lack of using things like cryptography to ensure their identities, AI will be able to overwhelm platforms with misinformation, scam attempts, and impersonations to the point that people won’t be able to trust anything or anyone they see online. Events could be faked, relationships, even history could be altered at a scale where humans are powerless to fight all the disinformation. I honestly don’t think we have a chance against it, but this is a problem for the next generation.
sr. member
Activity: 608
Merit: 264
Freedom, Natural Law
legendary
Activity: 1778
Merit: 1474
🔃EN>>AR Translator🔃
In an amusing competition held last Wednesday at a Parisian institution, an artificial intelligence (ChatGPT) took a baccalaureate philosophy exam against one of the most prominent philosophers, Mr Raphaël Enthoven, a lecturer at the University of Paris. The exam question asked for an essay on whether happiness is a matter of reason. The philosopher used the entire exam time (4 hours) and earned a score of 20/20, while the program formulated its essay within a few seconds and earned a score of 11/20 with an acceptable note, after two distinguished examiners carried out the evaluation. This test led to several conclusions, the most important being that artificial intelligence is still unable to match human intelligence, since philosophy is one of the finest products of the human mind: proficiency in it is not just the narration of long sentences but requires critical thinking of a special kind. This experience was very reassuring to many audiences and evidence of the shortcomings of artificial intelligence in several areas.

These reassurances should not eliminate all concerns about the uses of artificial intelligence. What increases our fears is the confidentiality of research in this field: although we can monitor the capabilities of private companies such as Microsoft or Google, we do not know what the US administration, for example, or China is working on.

Fears of machine intelligence have existed since the seventies of the last century, with the development of computing machines, but no one really cared. Today, artificial intelligence poses two main problems: the first is the potential destruction of many jobs, and the second is the development of autonomous weapons systems that operate with little human intervention and can violate the laws of war, so-called "killer robots".


What do you think?