
Topic: Machines and money - page 3. (Read 12759 times)

legendary
Activity: 1666
Merit: 1000
October 25, 2015, 12:47:05 PM
Can a computer program be more intelligent than its creator? No, of course not, because then the program would overtake its creator. At that point mankind would have lost its meaning of life completely, and it would die. The machines too.

This could indeed happen, but in a different way than you think. It is not that machines will become as intelligent as humans; the opposite is true: mankind is becoming as stupid as computers. It's like Albert Einstein told us: "Everything is relative".
sr. member
Activity: 322
Merit: 250
October 25, 2015, 05:44:56 AM
All machines exist to serve man. A machine may be intelligent at some things yet defective for its actual purpose, and in that case it must be fixed so that it can do, intelligently and as engineered, the work it was made for. Not robots, of course.
legendary
Activity: 1134
Merit: 1000
October 25, 2015, 05:42:12 AM
For the first part in bold: development always brings new workplaces, so it is up to people to adapt to the new reality. There can never be fewer workplaces when society moves forward.

For the second part in bold: it will still be only a machine. It will always have a key in its "mind", well protected and not accessible to the machine itself, which will shut everything down the moment the machine might become dangerous to mankind.

The automation of mechanical work/jobs currently done by people could render them jobless. Companies would certainly prefer machines to do the job rather than paying humans: it's less costly, because you don't have to pay people constantly, only the maintenance costs for each machine.

There is a possibility that an AI could be sentient; given that a computer is programmable, the AI could probably program itself to do what humans do and can do.

Those jobless people could find new workplaces created by the very development that left them jobless. But they need to adapt (to learn). If they do so, they will find another workplace better than the first, precisely because development brings better kinds of workplaces over time. Before the computer was invented, most people worked in factories or in agriculture with their hands. When the computer was invented, it created millions of new workplaces for programmers, developers and auxiliary staff, who work in much more comfortable conditions than their parents did before them.

If the jobless do not adapt, they will remain jobless forever. But that is not the fault of development or of the "automation of mechanical work/jobs". That expression covers only one part of development and does not represent it as a whole; it arrived only after many other developments in other fields, developments which created the new kinds of workplaces mentioned in the first paragraph of this post.

The second part of the quoted post is questionable (in my view). Whether such a quality can be created in a machine, I don't know; but if it can, it will be a big success. Having feelings is a prerogative of human beings, so it is very difficult to create that quality (the real thing, not a substitute for it) within a machine. But everything may become possible with technological development.

But if this achievement is understood as a risk for human beings, my answer is: never. There is not, there was not, and there never will be any kind of thing invented, created and materialized by mankind, whatever it may be, however it is built, and whatever kind of intelligence the human mind gives it, that is or ever will be able to destroy or even put at risk the existence of human beings. Not AI, and not AI multiplied by "n" even when "n" takes the value "infinite". No new qualitative thing can be outside the control of the human brain; the whole history of the brain's development testifies to this, and it cannot be otherwise in the future. Every new qualitative thing created by the human brain will always carry an "off" switch within it, controlled by people, who will always be able to shut down anything and any kind of existence they have introduced into "life", if it ever becomes necessary to do so.
legendary
Activity: 3542
Merit: 1352
Cashback 15%
October 25, 2015, 04:24:23 AM
For the first part in bold: development always brings new workplaces, so it is up to people to adapt to the new reality. There can never be fewer workplaces when society moves forward.

For the second part in bold: it will still be only a machine. It will always have a key in its "mind", well protected and not accessible to the machine itself, which will shut everything down the moment the machine might become dangerous to mankind.

The automation of mechanical work/jobs currently done by people could render them jobless. Companies would certainly prefer machines to do the job rather than paying humans: it's less costly, because you don't have to pay people constantly, only the maintenance costs for each machine.

There is a possibility that an AI could be sentient; given that a computer is programmable, the AI could probably program itself to do what humans do and can do.
legendary
Activity: 3206
Merit: 1069
October 25, 2015, 03:41:28 AM
The world is developing great AI now. AI that has the ability to develop itself sounds dangerous to me.

It will happen at some point, if the singularity thing turns out to be true; after all, humans really do think they will be able to teleport and travel back in time one day.

Having a machine that builds other, more advanced machines is not even that hi-tech in comparison.
legendary
Activity: 1134
Merit: 1000
October 25, 2015, 03:15:05 AM
When the time comes, we will manage the balance between robots and humans.
For now, our focus should be on developing artificial intelligence.
The scenario you mentioned belongs in science fiction for now.

The thing is, once AI technology has been fully developed and is capable of replacing humans in certain kinds of jobs, companies will surely choose machines for efficiency and cost. This could render human workers unnecessary. :/


This sounds very much like the positronic man will happen one day, although I believe that day is still very, very far away. Furthermore, I doubt that humans will still be needed. Intelligent machines are one thing, but will they also have the same sleight of hand or motor abilities where those are desperately needed?!



With the current advancement of technology and the determination of humans, fully functional AIs are not that far from becoming a reality. Even the slightest human motion could be replicated by science and implemented in robots. What I'm more worried about is whether these robots could become sentient and start to think on their own.

For the first part in bold: development always brings new workplaces, so it is up to people to adapt to the new reality. There can never be fewer workplaces when society moves forward.

For the second part in bold: it will still be only a machine. It will always have a key in its "mind", well protected and not accessible to the machine itself, which will shut everything down the moment the machine might become dangerous to mankind.
legendary
Activity: 3542
Merit: 1352
Cashback 15%
October 25, 2015, 01:04:33 AM
When the time comes, we will manage the balance between robots and humans.
For now, our focus should be on developing artificial intelligence.
The scenario you mentioned belongs in science fiction for now.

The thing is, once AI technology has been fully developed and is capable of replacing humans in certain kinds of jobs, companies will surely choose machines for efficiency and cost. This could render human workers unnecessary. :/


This sounds very much like the positronic man will happen one day, although I believe that day is still very, very far away. Furthermore, I doubt that humans will still be needed. Intelligent machines are one thing, but will they also have the same sleight of hand or motor abilities where those are desperately needed?!



With the current advancement of technology and the determination of humans, fully functional AIs are not that far from becoming a reality. Even the slightest human motion could be replicated by science and implemented in robots. What I'm more worried about is whether these robots could become sentient and start to think on their own.
full member
Activity: 158
Merit: 100
October 25, 2015, 12:41:07 AM
It was a bluebird day in Midtown Manhattan on May 6th, 2010. At 2.40pm, I can imagine that most Wall Street traders were almost ready to start packing up and heading home for the day, or at least had grabbed another coffee to get them through the afternoon slump. Then something happened that woke them the hell up.

At 2.42pm, the Dow Jones started dropping. It dropped 600 points in five long, terrifying, and confusing minutes. For five minutes, everyone panicked. By 2.47pm, the Dow had dived to an almost 1,000-point loss on the day – the second largest point swing in Dow Jones history – until someone literally pulled the plug on the market and trading stopped.

When trading opened again a few minutes later at 3.07pm, the market had regained most of that 600-point drop.

What happened?

This was the 2010 Flash Crash. In order to understand the Flash Crash, the first thing I needed to understand was just how outdated my idea of the stock market actually was; I pictured Wall Street, v.1 – lots of white guys in suits shouting BUY and SELL and cursing on the phone to other brokers all around the world. These days, and for the last twenty years or so, over 70% of all stock market trades are executed by supercomputers that trade tens of thousands of stocks in milliseconds – we've gotten rid of sluggish human beings completely. I needed to picture a gigantic room full of computers making a high-pitched whine instead.

During that five-minute period, the stock market – and in turn, the economy – lost billions of real $$ money. No one knew what had actually happened. The SEC tasked an unlucky committee to immediately figure it out. That report, which took five months to research and compile, came to the conclusion that it was one bad computer algorithm that sent the market into a spiral. More importantly, however, that report documented:

The joint report “portrayed a market so fragmented and fragile that a single large trade could send stocks into a sudden spiral,” and detailed how a large mutual fund firm selling an unusually large number of E-Mini S&P 500 contracts first exhausted available buyers, and then how high-frequency traders started aggressively selling, accelerating the effect of the mutual fund’s selling and contributing to the sharp price declines that day.
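
To make that feedback loop concrete, here is a toy Python sketch of the dynamic the report describes: a large automated sell program exhausts the resting buy orders, and once the drop becomes visible, momentum sellers pile on and accelerate it. Every number, name and rule below is invented for illustration; this is nothing like a real matching engine or the actual E-Mini order book.

Code:
# Toy sketch of the liquidity-exhaustion feedback loop described above.
# All parameters are made up for illustration; real order books, tick
# sizes and HFT behaviour are far more complicated than this.

def run_cascade(start_price=1160.0, resting_bids=2000, big_sell=3000,
                hft_trigger=0.02, hft_extra=1500):
    """Simulate a large sell program sweeping a thin book of resting bids.

    start_price  -- index level before the sell program starts (invented)
    resting_bids -- contracts of genuine buy interest resting in the book
    big_sell     -- size of the automated sell program
    hft_trigger  -- fractional drop after which momentum sellers pile in
    hft_extra    -- additional contracts sold once that trigger is hit
    """
    price = start_price
    to_sell = big_sell
    hft_joined = False
    ticks = 0

    while to_sell > 0:
        ticks += 1
        filled = min(to_sell, 100)  # sell in fixed 100-contract slices
        if resting_bids > 0:
            # Real buyers absorb part of the flow; price slips only a little.
            resting_bids -= filled
            price *= 0.999
        else:
            # Buy side exhausted: the same selling now moves price much more.
            price *= 0.99
        to_sell -= filled

        # Once the drop is visible, momentum/HFT strategies start selling too,
        # which is the "accelerating" effect the joint report describes.
        if not hft_joined and price < start_price * (1 - hft_trigger):
            to_sell += hft_extra
            hft_joined = True

    return price, ticks

if __name__ == "__main__":
    final_price, ticks = run_cascade()
    print(f"price fell from 1160.0 to {final_price:.1f} in {ticks} simulated ticks")

Running it produces a collapse far steeper than anything real, but the shape of the process is the point: a thin book, one big programmatic seller, and momentum sellers joining once the move is under way.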

There were many critics of the SEC's report, and they included much-deserved criticism of how, even though the SEC employed the highest-tech IT museum in its research – five PCs, a Bloomberg, a printer, a fax, and three TVs – it still took nearly five months to analyze the Flash Crash. Specifically:

A better measure of the inadequacy of the current mélange of IT antiquities is that the SEC/CFTC report on the May 6 crash was released on September 30, 2010. Taking nearly five months to analyze the wildest ever five minutes of market data is unacceptable. CFTC Chair Gensler specifically blamed the delay on the “enormous” effort to collect and analyze data. What an enormous mess it is.

So: What does it mean when our machines make a split-second mistake that costs us real billions, but takes humans months to understand what actually happened?
legendary
Activity: 1442
Merit: 1014
October 24, 2015, 04:58:18 PM
When the time comes, we will manage the balance between robots and humans.
For now, our focus should be on developing artificial intelligence.
The scenario you mentioned belongs in science fiction for now.

The thing is, once AI technology has been fully developed and is capable of replacing humans in certain kinds of jobs, companies will surely choose machines for efficiency and cost. This could render human workers unnecessary. :/


This sounds very much like the positronic man will happen one day, although I believe that day is still very, very far away. Furthermore, I doubt that humans will still be needed. Intelligent machines are one thing, but will they also have the same sleight of hand or motor abilities where those are desperately needed?!
legendary
Activity: 1666
Merit: 1000
October 24, 2015, 02:42:49 PM
The human robot is already here. It's known as "homo oeconomicus". He is all the more stupid, the more rational he believes himself to be.

This sort of human is no more enduring than a machine made of steel. The most dehumanised human being has the shortest lifespan of all in war.

See you in hell.
hero member
Activity: 784
Merit: 500
DeFixy.com - The future of Decentralization
October 24, 2015, 01:50:27 PM
The world is developing great AI now. AI that has the ability to develop itself sounds dangerous to me.
legendary
Activity: 3542
Merit: 1352
Cashback 15%
October 24, 2015, 01:49:31 PM
When the time comes, we will manage the balance between robots and humans.
For now, our focus should be on developing artificial intelligence.
The scenario you mentioned belongs in science fiction for now.

The thing is, once AI technology has been fully developed and is capable of replacing humans in certain kinds of jobs, companies will surely choose machines for efficiency and cost. This could render human workers unnecessary. :/
legendary
Activity: 1134
Merit: 1000
October 24, 2015, 11:46:51 AM
Artificial intelligence and the fridge
http://on.ft.com/1zSz2tw

Quote
In science fiction, this scenario — called “singularity” or “transcendence” — usually leads to robot versus human war and a contest for world domination.
But what if, rather than a physical battle, it was an economic one, with robots siphoning off our money or destroying the global economy with out-of-control algorithmic trading programmes? Perhaps it will not make for a great movie, but it seems the more likely outcome.

With Bitcoin, it's hard to see the downside. DACs (decentralized autonomous companies) are inevitable. This article is another vestige of irrational fear about money.

About the part in bold, I would say rather "the more unlikely outcome". Robots will always be a product of the mind and the hands of human beings: a very secure product, with many rules to follow, the first of which is to serve the human being and to destroy itself before doing anything bad to him or refusing to execute his orders.

Human beings have survived and developed for thousands of years. Whoever thinks that this survival was easy is badly mistaken: it came through big and hard "wars", physical and mental. This kind of development has "engraved" on our DNA the instinct of self-defence against everything. So even if the human mind could conceive something wrong (very improbable), it is this instinct that would stop him from materializing such a product.

Mankind will never produce a robot that can damage him in any way, or damage his life and existence. Robots will always be obedient servants of mankind, and every step of their development will make them even more so. So the situations quoted in the OP's post are pure fantasy and poor imagination. This kind of fear is the result of bad dreams, and its destiny, in the best of cases, is to become a very good movie (if a very good movie maker takes care of it).

There will never be any kind of product, created by the mind and hands of human beings, that destroys their world, least of all robots. For sure there may be other things that could disturb human life and equilibrium much more than robots. I can name one: the uncontrolled biological "weapons" created in various secret laboratories. If, for whatever reason, they escape those laboratories while no antidote for them yet exists, that would be a much higher risk for humanity than the most developed robot. But even in that case I think mankind would survive, as it has done throughout its history.
full member
Activity: 126
Merit: 100
October 23, 2015, 05:32:58 AM
When the time comes, we will manage the balance between robots and humans.
For now, our focus should be on developing artificial intelligence.
The scenario you mentioned belongs in science fiction for now.
Yes, I agree with you. We can't control this in the near future.
hero member
Activity: 770
Merit: 629
March 27, 2015, 03:33:49 AM
If it were so, I could just as well say that a sledgehammer is also intelligent (to a degree). It uses the force of gravity, and thereby it has intelligence.

A sledgehammer solves a problem too, but it was implicitly understood that the problem had to be "conceptual" and not physical, of course, for the tool that can solve it to be called "intelligent".  However, your example is not devoid of analogy.  Inasmuch as a tool can be intelligent (solving a conceptual problem), another tool can be "strong" (solving a physical problem).

Okay, if someone owes you money, could a sledgehammer help you solve a "conceptual" problem of that guy not paying you back? A problem which is purely subjective?

 Grin
hero member
Activity: 742
Merit: 526
March 26, 2015, 10:28:26 AM
If it were so, I could just as well say that a sledgehammer is also intelligent (to a degree). It uses the force of gravity, and thereby it has intelligence.

A sledgehammer solves a problem too, but it was implicitly understood that the problem had to be "conceptual" and not physical, of course, for the tool that can solve it to be called "intelligent".  However, your example is not devoid of analogy.  Inasmuch as a tool can be intelligent (solving a conceptual problem), another tool can be "strong" (solving a physical problem).

Okay, if someone owes you money, could a sledgehammer help you solve a "conceptual" problem of that guy not paying you back? A problem which is purely subjective?
hero member
Activity: 770
Merit: 629
March 26, 2015, 08:39:24 AM
If it were so, I could just as well say that a sledgehammer is also intelligent (to a degree). It uses the force of gravity, and thereby it has intelligence.

A sledgehammer solves a problem too, but it was implicitly understood that the problem had to be "conceptual" and not physical, of course, for the tool that can solve it to be called "intelligent".  However, your example is not devoid of analogy.  Inasmuch as a tool can be intelligent (solving a conceptual problem), another tool can be "strong" (solving a physical problem).

hero member
Activity: 742
Merit: 526
March 25, 2015, 01:01:09 PM
I don't believe you; you meant quite the opposite, namely that intelligence doesn't need consciousness at all.

Because I think that a calculator has a certain intelligence, and I don't think - although I cannot know - that a calculator is really conscious.  An AND gate also has some intelligence, but less so.  A modern-day computer has way more intelligence than a calculator.  Whether a modern-day computer is conscious or not, I don't know, but I would be tempted to say no (although it is an unsolvable issue).

However, to SAY what intelligence is requires a conscious being, because one needs to fix a purpose, namely a problem to be solved.  Without a problem to be solved, there is no intelligence possible that can solve it, right.

If it were so, I could just as well say that a sledgehammer is also intelligent (to a degree). It uses the force of gravity, and thereby it has intelligence.
hero member
Activity: 770
Merit: 629
March 25, 2015, 07:56:32 AM
I don't believe you; you meant quite the opposite, namely that intelligence doesn't need consciousness at all.

Because I think that a calculator has a certain intelligence, and I don't think - although I cannot know - that a calculator is really conscious.  An AND gate also has some intelligence, but less so.  A modern-day computer has way more intelligence than a calculator.  Whether a modern-day computer is conscious or not, I don't know, but I would be tempted to say no (although it is an unsolvable issue).

However, to SAY what intelligence is requires a conscious being, because one needs to fix a purpose, namely a problem to be solved.  Without a problem to be solved, there is no intelligence possible that can solve it, right.

Compare it to music, for instance.  Music as such is objective: it is a data file if you like, or air pressure as a function of time.  There's no discussion about that.  However, to decide whether a certain sound is "music" needs a sentient being that can appreciate (enjoy) those sounds, and there can even be discussion amongst sentient beings about whether some sound should be considered music or not.  But "music itself", as a sound, doesn't need a consciousness.

In the same way, to define something as a problem, and hence what constitutes a solution to that problem, needs a purpose and hence some form of sentient being.  But once the problem is defined, a system that can solve that kind of problem is therefore intelligent, and that is objective.
Deciding that "addition of two numbers" is an interesting problem probably takes a sentient being.  But a thing that can do additions, and hence can solve a problem and hence is intelligent, doesn't need to be sentient.

To appreciate intelligence is a sentient action.  To be intelligent, not necessarily.

newbie
Activity: 42
Merit: 0
March 25, 2015, 03:25:55 AM
When the time comes, we will manage the balance between robots and humans.
For now, our focus should be on developing artificial intelligence.
The scenario you mentioned belongs in science fiction for now.