
Topic: A Resource Based Economy - page 60. (Read 288376 times)

hero member
Activity: 840
Merit: 1000
November 08, 2012, 11:43:16 AM
Well, I don't think you want creativity in this case.
The whole idea of an RBE is that the decisions the AI makes are more scientifically sound than anything humans could come up with. So it needs to be based on facts, not creativity.
Autonomy is not a problem per se; your computer does lots and lots of autonomous things.
The problem is rather that we might not like the cold, hard decisions such a system would make without our personal consent, and with no human emotions to fall back on.

Creativity is necessary for autonomy (because autonomy means you can adapt to unexpected situations, and to do so you need creativity).

And autonomy is necessary if you want a system where no human labor is necessary (which is the main objective of RBE proponents, iirc).

Nonsense.
An amoeba can adapt to its surroundings, but I don't see it being 'creative'.
My computer can draw graphics autonomously (without me telling it what to do), and yet it has never shown any creativity (unless you mean the artifacts from overheating ;) ).

You are projecting way too much human stuff onto life and other machinery.
We are not general examples of life. In fact, we are pretty amazingly specific examples of life. It's just silly to think humans are the default and to expect intelligence to be human-like in nature.
hero member
Activity: 840
Merit: 1000
November 08, 2012, 11:38:56 AM
Yes, Zeitgeist is communism in another skin. Adding computers and robots does not make a planned economy work.
Actually, the idea of "machines doing all the work" wouldn't be too alien an idea to the people leading the industrial revolution. The machines are indeed doing almost all of it now, if you take into account what "work" meant back then.
It's not "robots do all the work" that's the issue here. It's the "AI plans the economy" part that worries me.
I think you are confused about what an AI is.
AI doesn't imply self-awareness or consciousness.

Nothing I said required or even implied that the AI that "got it into its head" that humans were getting in the way would be conscious. A computer system designed to run an economy efficiently would almost by definition see humans as inefficiencies and act to remove them.

You're oversimplifying the problem.
An AI that saw humans as inefficient would never be deployed, because it would fail at its role of regulating human affairs. It would fail from the beginning.
Or do you think we would invent an AI and hook it up to control the whole of humanity just to see if it would work?
Of course not, that would be useless.
We would engineer it to function in a certain way, and so we would engineer it so that its task is to keep humans alive.
Such an AI would simply never consider damaging humans.
The only escape would be if such an AI somehow acquired self-consciousness, so that it could sidestep its designated purpose and start acting in a way that cannot be stopped by humanity.
But if you look at how AI works in practice, you can see that this is not a real threat unless we specifically design an AI to behave that way.
legendary
Activity: 1288
Merit: 1080
November 08, 2012, 11:36:31 AM
Why wouldn't it be possible?  Why would your brain be so different from a machine?
As far as I can tell, I'm the only consciousness in existence, and "everything else" is just a product of my imagination in my little universe. At least with people, there is empirical evidence suggesting that they are capable of mirroring my feelings, sense of mercy or justice and many other human concepts.

Empirical evidence is not the appropriate tool for inferring the possibility of things that do not exist in the present.
kjj
legendary
Activity: 1302
Merit: 1026
November 08, 2012, 11:34:12 AM

Now I'm confused.  You posted a link to a state that existed from 930 to 1262 as evidence of the viability of a system invented by a guy born in 1819.  Did Gustave invent anarcho-capitalism, or the time machine?

At any rate, Iceland as a whole has never had more population than a small city, so it is hard to draw conclusions that would scale up to the size of the Roman, Venetian or American republics.
hero member
Activity: 840
Merit: 1000
November 08, 2012, 11:30:35 AM
I think you are confused about what an AI is.
AI doesn't imply self-awareness or consciousness.
You can make an AI that just regulates stuff without ever asking questions.

Yeah I mentioned that earlier.  You need intelligence so that your machines can be autonomous and creative in their decision-making.

But you want to avoid self-awareness because you want to make sure they will obey.

Consciousness is only desirable if you want to create a computational replica of your mind.  If you want immortality or something.

Well, I don't think you want creativity in this case.
The whole idea of an RBE is that the decisions the AI makes are more scientifically sound than anything humans could come up with. So it needs to be based on facts, not creativity.
Autonomy is not a problem per se; your computer does lots and lots of autonomous things.
The problem is rather that we might not like the cold, hard decisions such a system would make without our personal consent, and with no human emotions to fall back on.


That's exactly why I would prefer the new intelligence to be sentient and/or endowed with consciousness...
But wait... What do we mean when we say endowed with consciousness?  Do we mean merely self-aware, or also emotionally aware of other living things?
The latter is what I am calling for.

Yeah, well, there is a problem with that.
Emotions are what make humans unpredictable.
Emotions are what make humans evil.
Greed is an emotion.
And that's setting aside the fact that emotion is even more specific than human intelligence.
So any emotion we build into an AI will be fake. If you want real emotion, you would need to evolve it, and that means presenting the developing mechanism with the same kind of environment that made us develop these qualia we call emotions.
In other words, we have no way of creating emotional machines.
hero member
Activity: 775
Merit: 1000
November 08, 2012, 11:28:26 AM
Artificial Intelligence is a cybernetically impossible transformation. It's just not possible to create it, by definition.

Machines can be arbitrarily complex, but they are defined in such a way that they depend on Man to control them. In computer science, AI is used as a weasel word for mechanisms that attempt to solve problems using mathematical concepts which should, in theory, let the machine compute solutions to problems it wouldn't otherwise have the computational strength to crack.
In transhumanism it refers to self-improving machines, which again cannot be constructed, by definition. Every machine will still be constrained by the parameters it is programmed with, even if it is able to construct copies of itself and use stochastic processes to fine-tune those parameters.
+1 I couldn't have put it better myself.

As for 'the singularity', I call bullshit on that one too. It can't be done. Someone show me a compelling argument that it's theoretically possible for machines to have consciousness, and I will eat my words.

Why wouldn't it be possible?  Why would your brain be so different from a machine?
As far as I can tell, I'm the only consciousness in existence, and "everything else" is just a product of my imagination in my little universe. At least with people, there is empirical evidence suggesting that they are capable of mirroring my feelings, sense of mercy or justice and many other human concepts.

Your definition of intelligence is too specific.
What you're probably talking about is human intelligence.
And sure enough, human intelligence is so specific that we would need to recreate most structures of the brain to create such an intelligence.
But intelligence is a much broader concept.
Intelligence is best understood as an information system for dealing with the environment.
In that view, even DNA molecules contain intelligence, because they lead to specific manipulations of the environment.
Everything that manipulates the environment in a deliberate manner (acting on information) can be said to possess intelligence.
Human intelligence is just a very, very specific case of intelligence.
In the case of an AI controlling society, there is nothing that requires that AI to be conscious or anything like that.

Well, for the purposes of a central authority to run 'our' lives, why toy around with machines that are far simpler than humans? Why not use the best there is, i.e. actual humans? Some might argue that it's a complex, rewarding job. :D
hero member
Activity: 532
Merit: 500
FIAT LIBERTAS RVAT CAELVM
November 08, 2012, 11:26:48 AM
But wait... What do we mean when we say endowed with consciousness?  Do we mean merely self-aware, or also emotionally aware of other living things?
The latter is what I am calling for.

lol... Yeah, pipe dream. Take another hit, man, 'cause that is never happening. Even assuming machines could develop consciousness, it would be an entirely alien consciousness that, at best, viewed us as ants. At worst, well... you've seen the Terminator movies, right?

Terminator 2 is one of my favorite movies of all time.


And yet, you still desire AI...

If you're suicidal, there are hotlines for that. And there's no need to take the rest of us with you.
hero member
Activity: 686
Merit: 500
Shame on everything; regret nothing.
November 08, 2012, 11:23:09 AM
But wait... What do we mean when we say endowed with consciousness?  Do we mean merely self-aware, or also emotionally aware of other living things?
The latter is what I am calling for.

lol... Yeah, pipe dream. Take another hit, man, 'cause that is never happening. Even assuming machines could develop consciousness, it would be an entirely alien consciousness that, at best, viewed us as ants. At worst, well... you've seen the Terminator movies, right?

Terminator 2 is one of my favorite movies of all time.
hero member
Activity: 938
Merit: 1002
November 08, 2012, 11:22:41 AM
Yes, Zeitgeist is communism in another skin. Adding computers and robots does not make a planned economy work.

Actually, the idea of "machines doing all the work" wouldn't be too alien an idea to the people leading the industrial revolution. The machines are indeed doing almost all of it now, if you take into account what "work" meant back then.
The problem is that no technology can take away human drives. Notice that since the industrial revolution people don't work that much less; they just work on things other than machine work.

Yeah, that's pretty much what I've been saying.

No, it doesn't work that simple.



But most people don't care about reaching this kind of enlightenment and just want to work for food so they can have kids and support their family.

And why is that?

I think enlightenment is one of the primitive components of human existence. I think we naturally strive to find answers to metaphysical questions about our condition, which lead to all sorts of interesting constructs like science and religion. I think we destroy people's natural curiosity in order to create a more robust machine. Humans are born thinkers.

Having said that, the overall production output would drastically decrease, not because people would go dumb. Quite the opposite. But then we'd be taken over by a society that trains specialized, well-behaved professionals (and has probably produced a shitload of guns in the meantime).

LOL you mean to tell me you can say with a straight face that a detached and delusional political class knows better than human beings who have to work for a living what skills are in demand in the current market? Don't make me mock you, because I will.

In your system those hard-working skilled workers would never have acquired the skills needed, because when they enter the market they are clueless. As a child, when the time is optimal to get skilled, they would have absolutely no idea of what the market requires.
So yes, the deluded politicians still have a better idea about these things than a kid that needs to decide a future for themselves.
If you left that choice to the people you would get a dysfunctional society, because no one wants to do what needs to be done.

I'll call this an argument from lack of imagination. I learned to program in C when I was 14 (around '91) without even having read a book about programming, because I had friends who were also interested in it. This wasn't possible because I was somehow smarter or naturally more curious; it was possible because programming could be learned without institutional support. Most skills are out of reach of the uninitiated. You have to be oriented towards them, you have to prove to society that you are worthy of the privileged position of being introduced to a subject. You have to let yourself be indoctrinated with the specific school of thought that dominates a given discipline.

I'm mostly imagining a society where there is no distinction between a teacher and a student, or a place of work and a place of study.

And "no one wants to do what needs to be done" doesn't prove much. If no one wants to be a janitor, you pay more to janitors. Done. Why is this even a problem? Having specialized man-machines is a useful thing of course, but a society formed by flexible individuals capable of thinking would have other advantages.
legendary
Activity: 1288
Merit: 1080
November 08, 2012, 11:20:47 AM
Well, I don't think you want creativity in this case.
The whole idea of an RBE is that the decisions the AI makes are more scientifically sound than anything humans could come up with. So it needs to be based on facts, not creativity.
Autonomy is not a problem per se; your computer does lots and lots of autonomous things.
The problem is rather that we might not like the cold, hard decisions such a system would make without our personal consent, and with no human emotions to fall back on.

Creativity is necessary for autonomy (because autonomy means you can adapt to unexpected situations, and to do so you need creativity).

And autonomy is necessary if you want a system where no human labor is necessary (which is the main objective of RBE proponents, iirc).
hero member
Activity: 686
Merit: 500
Shame on everything; regret nothing.
November 08, 2012, 11:20:03 AM
What the ZG and Venus Project movements really need then is probably something like a new ruling class, that is made up of a new life form that is more powerful and intelligent and able in every way than humans (including emotionally -- they would be benevolent).  Of course, there will be archangels so to speak that will attempt to destroy the new civilization ...

Have I sufficiently caricatured this neo-communism yet? :D

Really though, the transhumanists' ideas are most compelling...
legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
November 08, 2012, 11:19:35 AM
In transhumanism it refers to self-improving machines, which again cannot be constructed, by definition. Every machine will still be constrained by the parameters it is programmed with, even if it is able to construct copies of itself and use stochastic processes to fine-tune those parameters.

I'm no expert, but it seems to me that you're talking about twentieth-century-style AI. Nowadays AI engineers use genetic algorithms, artificial neural networks and things like that. They don't program the behavior of the machine. Moreover, your brain also has "parameters": the maximum number of neurons, the physical laws they obey, and so on. A computer might actually have more degrees of freedom than your brain can ever have.

Again, a machine is by definition lifeless. Discussing a hypothetical scenario where we could enable life makes no sense, since we have no idea how it could be accomplished.
Genetic algorithms and artificial neural networks are exactly what falls under "AI" in computer science; they are exactly the mathematical processes I was referring to.
hero member
Activity: 532
Merit: 500
FIAT LIBERTAS RVAT CAELVM
November 08, 2012, 11:19:05 AM
But wait... What do we mean when we say endowed with consciousness?  Do we mean merely self-aware, or also emotionally aware of other living things?
The latter is what I am calling for.

lol... Yeah, pipe dream. Take another hit, man, 'cause that is never happening. Even assuming machines could develop consciousness, it would be an entirely alien consciousness that, at best, viewed us as ants. At worst, well... you've seen the Terminator movies, right?
hero member
Activity: 686
Merit: 500
Shame on everything; regret nothing.
November 08, 2012, 11:14:30 AM
I think you are confused about what an AI is.
AI doesn't imply self-awareness or consciousness.
You can make an AI that just regulates stuff without ever asking questions.

Yeah I mentioned that earlier.  You need intelligence so that your machines can be autonomous and creative in their decision-making.

But you want to avoid self-awareness because you want to make sure they will obey.

Consciousness is only desirable if you want to create a computational replica of your mind.  If you want immortality or something.

Well, I don't think you want creativity in this case.
The whole idea of an RBE is that the decisions the AI makes are more scientifically sound than anything humans could come up with. So it needs to be based on facts, not creativity.
Autonomy is not a problem per se; your computer does lots and lots of autonomous things.
The problem is rather that we might not like the cold, hard decisions such a system would make without our personal consent, and with no human emotions to fall back on.


That's exactly why I would prefer the new intelligence to be sentient and/or endowed with consciousness...
But wait... What do we mean when we say endowed with consciousness?  Do we mean merely self-aware, or also emotionally aware of other living things?
The latter is what I am calling for.
hero member
Activity: 532
Merit: 500
FIAT LIBERTAS RVAT CAELVM
November 08, 2012, 11:14:11 AM
If you left that choice to the people you would get a dysfunctional society, because no one wants to do what needs to be done.

Yup. No kid I've ever asked "what do you want to be when you grow up?" has ever replied "I wanna be a janitor!" (Here, though, is where robots could potentially help... these menial and dirty or dangerous jobs could easily be handled by mechanical workers, and you yourself said that decision-making capability does not require consciousness.)

They still thought it was the best option...

And about 70-odd years later, Gustave de Molinari came up with a better one.

Meh.  Untested conjecture.

*ahem*... https://en.wikipedia.org/wiki/Icelandic_Commonwealth
hero member
Activity: 840
Merit: 1000
November 08, 2012, 11:12:59 AM
I think you are confused about what an AI is.
AI doesn't imply self-awareness or consciousness.
You can make an AI that just regulates stuff without ever asking questions.

Yeah I mentioned that earlier.  You need intelligence so that your machines can be autonomous and creative in their decision-making.

But you want to avoid self-awareness because you want to make sure they will obey.

Consciousness is only desirable if you want to create a computational replica of your mind.  If you want immortality or something.

What I'm trying to say is that most probably you would have to actually design the AI to have something like a consciousness for it to have one.
Our brains are pretty specific.
Unfortunately, our consciousness feels so 'right' to us that we tend to think consciousness is a general thing. It is not. It's amazingly specific.
legendary
Activity: 1288
Merit: 1080
November 08, 2012, 11:10:59 AM
In transhumanism it refers to self-improving machines, which again cannot be constructed, by definition. Every machine will still be constrained by the parameters it is programmed with, even if it is able to construct copies of itself and use stochastic processes to fine-tune those parameters.

I'm no expert, but it seems to me that you're talking about twentieth-century-style AI. Nowadays AI engineers use genetic algorithms, artificial neural networks and things like that. They don't program the behavior of the machine. Not with "if then else" lines, anyway. Moreover, your brain also has "parameters": the maximum number of neurons, the physical laws they obey, and so on. A computer might actually have more degrees of freedom than your brain can ever have. At least because your brain is trapped in your skull and can't grow bigger than that.
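A genetic algorithm of the kind mentioned here can be sketched in a few lines of Python. This is a toy illustration only (the bit-string target and all parameter values are invented for the example): the matching behavior emerges from selection, crossover, and mutation, not from hand-written "if then else" rules.

```python
import random

def evolve(target, pop_size=50, mutation_rate=0.05, generations=500):
    """Toy genetic algorithm: evolve a bit string toward `target`.

    Nothing here spells out *how* to reach the target; behavior
    emerges from selection pressure on random variation.
    """
    n = len(target)
    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))  # matching bits

    # random initial population of bit strings
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:          # perfect match found
            break
        parents = pop[: pop_size // 2]    # truncation selection
        children = [pop[0][:], pop[1][:]] # elitism: keep the two best
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)  # single-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with probability mutation_rate
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

random.seed(42)
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
best = evolve(target)
```

After a few hundred generations the best individual typically matches the target, even though no line of code ever compares strategies or plans ahead.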
hero member
Activity: 840
Merit: 1000
November 08, 2012, 11:10:45 AM
Imagine a world where both a Binding Contract and Artificial Intelligence are considered the product of fanatic nutjobs' imagination, as a general consensus in society.

The world would be so much better. :)

Electricity is arguably a product of "fanatic nutjobs' imagination" ...  If you have any problems with it, like the nice man from the electric monopoly company said when he came to turn my power off one time,
"There are other alternatives"

...
I just rolled my eyes at him and flipped him off as hard as I could.

Artificial Intelligence is a cybernetically impossible transformation. It's just not possible to create it, by definition.

Machines can be arbitrarily complex, but they are defined in such a way that they depend on Man to control them. In computer science, AI is used as a weasel word for mechanisms that attempt to solve problems using mathematical concepts which should, in theory, let the machine compute solutions to problems it wouldn't otherwise have the computational strength to crack.
In transhumanism it refers to self-improving machines, which again cannot be constructed, by definition. Every machine will still be constrained by the parameters it is programmed with, even if it is able to construct copies of itself and use stochastic processes to fine-tune those parameters.

Cybernetics is sooo 1930's...
If a system is designed along cybernetic lines of thought then sure, it will have the properties you mention.
If not, then you're left with a much cleaner informational system.
I agree that these RBE ideas reek of cybernetics, but by now cybernetics is not a limiting factor anymore. We understand a lot more about dynamical information systems.
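The stochastic parameter fine-tuning mentioned in the quoted post can be illustrated with a random hill climb. This is a toy sketch (the objective function, step size, and iteration count are invented for the example); note that the search only ever moves within the parameter space it was handed, which is exactly the constraint the quoted post describes.

```python
import random

def fine_tune(score, params, step=0.1, iters=1000, seed=0):
    """Stochastic hill climbing: randomly perturb the parameters and
    keep any change that improves the score.  The machine never leaves
    the parameter space it was given; it cannot invent new degrees of
    freedom, only adjust the knobs it already has.
    """
    rng = random.Random(seed)         # seeded for reproducibility
    best = list(params)
    best_score = score(best)
    for _ in range(iters):
        candidate = [p + rng.gauss(0, step) for p in best]
        s = score(candidate)
        if s > best_score:            # greedy: keep only improvements
            best, best_score = candidate, s
    return best

# toy objective: maximize -(x-3)^2 - (y+1)^2, optimum at (3, -1)
best = fine_tune(lambda p: -(p[0] - 3) ** 2 - (p[1] + 1) ** 2, [0.0, 0.0])
```

With enough iterations the parameters drift close to the optimum, yet the program's overall behavior, maximizing the given score, was fixed by its designer from the start.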
kjj
legendary
Activity: 1302
Merit: 1026
November 08, 2012, 11:09:46 AM
They still thought it was the best option...

And about 70-odd years later, Gustave de Molinari came up with a better one.

Meh.  Untested conjecture.

I'm not opposed to the idea of anarcho-capitalism, and if it worked, I'd probably even prefer it.

But I do know that a Republic works, for as long as the people value it, generally at least a few hundred years, and it has a track record of not failing in the direction of apocalypse.
legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
November 08, 2012, 11:09:37 AM

Nothing is the only thing that is impossible, and I (and I would argue no one else) still do not KNOW this.

Except when something is impossible by definition.
Could it be possible to enable mechanical life some day? Who knows, but then it wouldn't be artificial, nor a machine.