
Topic: What's really scary about artificial intelligence? (Read 432 times)

legendary
Activity: 1708
Merit: 1364
🔃EN>>AR Translator🔃
In an amusing competition held last Wednesday at a Parisian institution, an artificial intelligence (ChatGPT) sat a baccalaureate philosophy exam against one of the most prominent philosophers, Raphaël Enthoven, a lecturer at the University of Paris. The exam asked for an essay on whether happiness is a matter of reason. The philosopher used the entire exam time (four hours) and scored 20/20, while the program produced its essay within a few seconds and scored 11/20 with a passable mark; two distinguished researchers carried out the evaluation. The test suggested several conclusions, the most important being that artificial intelligence cannot yet match human intelligence: philosophy is among the finest products of the human mind, and proficiency in it is not merely the stringing together of long sentences but requires critical thinking of a special kind. The result was very reassuring to many audiences and was taken as evidence of artificial intelligence's shortcomings in several areas.

These reassurances should not dispel all concerns about the uses of artificial intelligence. What heightens our fears is the secrecy of research in this field: while we can regulate the capabilities of private companies such as Microsoft or Google, we do not know what the US administration, for example, or China is working on.

Fears of machine intelligence have existed since the 1970s, with the development of computing machines, but few took them seriously. Today, artificial intelligence poses two main problems: first, the potential destruction of many jobs; second, the development of autonomous weapon systems, so-called "killer robots", which operate with little human intervention and can violate the laws of war.


What do you think?

I know some individuals are concerned that opening up their AI research would put them at a disadvantage in the marketplace. But I believe the advantages of openness will exceed the hazards. After all, if consumers don't trust AI technology, they are less likely to use it, which is bad for the corporations that create it. And even if there are some risks to being open with one another, I think they are outweighed by the risks of developing AI in secret, which makes it difficult for the general public to know what is really going on. If we want to guarantee that AI is developed in a way that is both safe and legal, we need a greater degree of openness about how it is being created as well as how it is being used.

As an example of one of the most dangerous points the artificial intelligence experiment has reached: it has emerged that Israel is using artificial intelligence techniques to develop its offensive systems in the war on Gaza. The UN Secretary-General, António Guterres, has expressed his concern about Israel's use of artificial intelligence in the Gaza war. The Israeli army confirmed that its forces were operating "in parallel above and below the ground", using weapons that had not appeared in previous wars.
"Every soldier is a sniper": for the first time, the army used AI-enhanced aiming-scope technology fitted to weapons such as rifles and machine guns. It helps soldiers intercept drones, which Hamas uses in large numbers, turning every soldier, even a poor shot, into a sniper.
Another technology helps the army launch drones capable of throwing nets at other drones to disrupt them inside tunnels; these drones can monitor humans and operate underground.

The danger in this information is that Israel launched this development without monitoring or supervision by any regulatory body, and no one knows what is being worked on in secret.
hero member
Activity: 896
Merit: 584
BTC, a coin of today and tomorrow.
" What's really scary about artificial intelligence?"

That it will "think"  and perhaps behave like us humans.
Not just that. Think of it this way: you are unable to differentiate between a human emotion and a machine emotion. Think about it the OgNasty way, where you could be deceived into falling in love with a series of machine codes, thinking it's a human, because she (the AI) will be able to chat, place a phone call, and perhaps engage in video calls. Many people are merely scared that AI will replace their jobs, but they have not considered that AI could be used to scam humans seamlessly.
To me, the scariest thing about AI is that its evolution is not predictable. It might evolve into something so disastrous and damaging that its disadvantages overwhelm the advantages.
sr. member
Activity: 1106
Merit: 421
The present age is the digital age, and artificial intelligence now shapes much of human life. AI has made people's lives easier and more comfortable, and has made human work much simpler, so people now try to keep pace with the most advanced technology. But even as it eases life, it is slowly eroding our own intelligence: humans have become thoroughly mechanized, leaving their own thinking idle. Many people can no longer imagine even a single day of their lives without AI.
member
Activity: 691
Merit: 51
If the people here are really afraid of artificial intelligence, maybe they should read this (unless they already know this) in order to prepare.

https://tutorial.math.lamar.edu/Classes/CalcI/CalcI.aspx

But don't enroll in a college or university or any other academic institution because those institutions promote violence.
legendary
Activity: 1162
Merit: 2025
Leading Crypto Sports Betting & Casino Platform
I don't find AI to be particularly frightening because we are developing it to enhance human capabilities rather than to replace people.


That would be true at first, sure. We are already witnessing how artificial intelligence is being used to help doctors, content creators, and other professionals.
But what is going to happen when AI becomes better than a human being at working or completing tasks?
Corporations and big companies won't be able to resist the temptation of firing people in favor of intelligent machines that do not form unions, do not complain, do not eat, do not need to go to the bathroom, etc. In the end, the advance of technology and the greed of those in charge could lead to very serious societal problems.
hero member
Activity: 902
Merit: 655
Do due diligence
" What's really scary about artificial intelligence?"

That it will "think"  and perhaps behave like us humans.
legendary
Activity: 3766
Merit: 1368
I've heard that self-driving cars that use a form of AI don't get out of the way for emergency vehicles. I guess AI is just stubborn, like a little child.

Cool
member
Activity: 71
Merit: 21
I don't find AI to be particularly frightening because we are developing it to enhance human capabilities rather than to replace people.
member
Activity: 691
Merit: 51
Ultegra134: It is easy for a fucking chatbot to pose as one's girlfriend when 99.999% of people have absolutely no social skills at all. Maybe we should just let the AI win. People do not have meaningful relationships because most people are fucked up pieces of shit. The truth hurts.

-Joseph Van Name Ph.D.
hero member
Activity: 1540
Merit: 744
I'm not really sure about AI-controlled weapons of mass destruction; I believe this is quite unrealistic, something that came out of a conspiracy theory. I'd be more worried about the potential loss of jobs that AI could replace in the blink of an eye. Moreover, there's an increasing trend of AI chat companions so advanced that they are even capable of posing as your girlfriend and holding full conversations, including voice calls and adult messaging. This, perhaps, is the worst of all. People have become a lot more introverted and lonely, and now with AI it'll be even easier to keep a virtual companion, rather than real relationships, in order to feel less lonely.
member
Activity: 691
Merit: 51
Maybe people should invest more into AI safety. In my cryptocurrency research (that you all fucking hate me for doing because you are horrible evil people), I have been working on machine learning algorithms that can perform some but not all AI tasks but which are pseudodeterministic in the sense that if one runs the gradient ascent multiple times to train the AI model, one will often end up with the exact same result (this is called pseudodeterminism). Hooray for pseudodeterminism. Pseudodeterminism means that the final model will not have very much random information in it, and it also means that we have a better shot at interpreting and understanding the AI model (since it will not be cluttered with randomness or pseudorandomness). And if we can interpret AI better, then we can control it better. Never trust AI that you cannot control and that you cannot understand the inner workings of.
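The pseudodeterminism described above (independent training runs converging to the same final model) can be sketched with a toy example. This is an illustrative assumption, not the poster's actual algorithm: when the objective has a single optimum and gradient ascent is a contraction, runs started from different random seeds wash out their initial randomness and land on the same parameters.

```python
import numpy as np

def train(seed: int, steps: int = 200, lr: float = 0.1) -> np.ndarray:
    """Gradient ascent on a toy concave objective f(w) = -(w - 3)^2,
    starting from a seeded random initialisation."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=1)          # random start, fixed by the seed
    for _ in range(steps):
        grad = -2.0 * (w - 3.0)     # df/dw
        w = w + lr * grad           # ascent step
    return w

# Two runs from different random starts converge to the same optimum,
# so the final model retains essentially no trace of the initial randomness.
w_a, w_b = train(seed=0), train(seed=12345)
print(np.allclose(w_a, w_b, atol=1e-6))  # True: both end at w ≈ 3
```

Each update here contracts the distance to the optimum by a factor of 0.8, so after 200 steps any plausible initialisation difference is far below the tolerance; real, non-convex models would only approximate this behaviour.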

On the other hand, if AI replaces humanity, then that would be a good thing because humans (especially the chlurcmklets on this site) are incredibly dysfunctional. We should therefore race towards as much AI as possible because humans are incredibly deplorable entities.

-Joseph Van Name Ph.D.
legendary
Activity: 1162
Merit: 2025
Leading Crypto Sports Betting & Casino Platform
To me, the scary thing about artificial intelligence mostly has to do with its uncanny effectiveness at performing certain activities, which could lead to the unemployment of millions in the future.

Also, we cannot ignore how the abuse of such technology could enable people who only wish to commit crimes through its use.

It is one thing to have technology that automatically produces images, but it is a completely different issue to have AIs capable of coding themselves, or of analyzing code and finding vulnerabilities faster than any human being could...
full member
Activity: 548
Merit: 167
Play Bitcoin PVP Prediction Game
Prehistoric humans already used knives to skin their prey. They may also have thought, "this knife could hurt me too." But such thoughts no longer stop people from using knives. The existence of AI can likewise be called a double-edged sword: on the one hand it is useful for speeding up a work process; on the other, it kills off professions.

There is a misconception about AI because of how quickly it spreads the word through the internet, social media, messengers, etc. People also worry that these AIs are going to replace jobs, which could be true, but they don't look at the part where AI can also generate new opportunities and even jobs. What's scary are those movies that make people think robots will someday come, take over the world, and enslave us. Those fantasies and that disbelief have always been there, the notion that everything is possible with AI. What people have to analyze and recognize is that this technology is going to make tasks easier and make whoever uses it more productive.
People think that one day AI could control humans, and that worry is plausible. To prevent this, humans must keep control of AI by programming it so that it doesn't go too far and by creating appropriate algorithms, because AI is governed by human programs. So it depends on the human. Do you want to make a robot that can think about bombing? It certainly can be done. It depends on who is programming.
legendary
Activity: 1708
Merit: 1364
🔃EN>>AR Translator🔃
There are many voices calling for the dangers of artificial intelligence to be confronted, and new theories are beginning to appear about the extent of the harm artificial intelligence may cause if it continues at the pace it is moving today.

During 2023 alone, many countries began taking measures to address the issue, proving how important the challenges it poses have become:
China is investing in artificial intelligence, but it is also issuing regulations to govern it.
India is focusing on developing the skills needed for the AI economy.
A presidential executive order from Biden calls for the development of ethical guidelines to steer the development and use of artificial intelligence.
The European Parliament has also issued directives aimed at making artificial intelligence more transparent.

Governments in many countries have begun to take the matter seriously, and the conviction has grown that there are a number of challenges that must be dealt with, whether you intend to use artificial intelligence or to ignore it.
hero member
Activity: 2660
Merit: 608
Vave.com - Crypto Casino
There is a misconception about AI because of how quickly it spreads the word through the internet, social media, messengers, etc. People also worry that these AIs are going to replace jobs, which could be true, but they don't look at the part where AI can also generate new opportunities and even jobs. What's scary are those movies that make people think robots will someday come, take over the world, and enslave us. Those fantasies and that disbelief have always been there, the notion that everything is possible with AI. What people have to analyze and recognize is that this technology is going to make tasks easier and make whoever uses it more productive.
legendary
Activity: 1708
Merit: 1364
🔃EN>>AR Translator🔃
Google's conversational, generative artificial intelligence platform, Bard, is now available in Arabic in its latest expansion, in competition with ChatGPT. The AI service can now understand questions in 16 Arabic dialects, but it will provide answers in Standard Arabic: https://ar.cointelegraph.com/news/googles-bard-ai-now-available-in-arabic-in-latest-expansion

I tried it using my local dialect and the result was completely satisfactory. I asked the same question in another Arabic dialect, and it gave me almost the same result, with only very slight differences.
It seems that Google's Bard is more capable here than its ChatGPT counterpart, since it is not limited the way ChatGPT is, whose knowledge stops at the year 2021, while Bard supports many languages and is now learning to understand different dialects.

Developing artificial intelligence applications in line with local dialects will make things much easier for users with limited experience, or even those who struggle with writing or proper linguistic expression. Employing artificial intelligence is now much easier than anyone might think.
Currently this is the trial version [completely free], which will go through a trial period during which it can learn more and develop its knowledge, after which paid versions will be issued.
legendary
Activity: 3766
Merit: 1368
I think the scariest thing about Artificial Intelligence is that PEOPLE are the AI.

Just ask God.

Cool
full member
Activity: 24
Merit: 0
Play Bitcoin PVP Prediction Game
Fears of technological intelligence had emerged since the seventies of the last century with the development of computing machines, but no one really cared. Today, artificial intelligence poses two main problems, the first is the potential destruction of many jobs, and the second is the development of autonomous weapon systems with little human intervention that can violate the laws of war, called "killer robots".

The fear of artificial intelligence is just another manufactured fear; we don't need to be scared of it. I don't believe computers can take over humanity. The jobs that AI is taking away aren't a problem, because with the advancement of AI there will be creation of new jobs. The new jobs aren't those that compete with AI but those that work hand in hand with it, like maintenance.

For the robots being built, we'll need people to service and operate them, and those are the new jobs we get. We have to update what we consider a job as the world evolves and things change. While we're here complaining about AI taking our jobs, others are learning new skills that will make them hirable for the other kinds of employment that come with having these robots in our lives. AI can be good and bad: if some are creating autonomous weapons, we can also have good AI that creates protection against those weapons. Movies make us believe so many things. We forget that movies are just imagination, and most of what we watch would never happen in real life, like aliens invading Earth or robots dominating humanity.
legendary
Activity: 2800
Merit: 1128
Leading Crypto Sports Betting & Casino Platform
What if one day AI becomes so developed we can’t stop it or turn it off & robots start attacking us. I think it’s very dangerous & needs to be carefully managed in the right hands.
Yeah, we are not even heading a direction where that would be possible. That sounds like a movie plot done by a writer who doesn't understand AI. Scary and pseudoscience.

In a media interview in 2018, Elon Musk warned of the danger of artificial intelligence, describing it as more dangerous than nuclear weapons. The argument Elon Musk relied on was interesting: he described the developers of artificial intelligence as deluded about possessing a degree of intelligence they do not actually have. He said these developers do not believe that a machine of their own making could outsmart them if it were programmed to learn on its own.
https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html
Elon Musk had also described artificial intelligence in a previous statement as more dangerous than North Korea, saying its destructive capacity would exceed all expectations if limits were not set on its pace of development.
I am assuming this article came right after he left OpenAI? I am not sure what happened, but right after that Musk began his crusade against AI, most likely because he missed the boat. Maybe he got voted out.

Once again, Elon isn't smart; he acts like he is because he enjoys the admiration. I mean, he bought the whole of Twitter for it.

And he doesn't go into any actual specifics when describing how it is more dangerous than nuclear weapons. He is just afraid. Not sure of what. Self-awareness? Musk hasn't read enough, or isn't self-aware enough, to understand how self-awareness works. The only argument I found in that article was:

Quote
“This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”
Now this is just stupid and sounds like biblical-level nonsense. AI is so far from being self-aware, or from having needs (because we are not building them into it), that it amazes me he doesn't even follow the projects he funded. I don't disagree that at some point it could totally manipulate humans, but we are not even going in that direction yet, and I can't see any reason to go there.