Topic: Yet another analyst :) - page 128. (Read 269580 times)

legendary
Activity: 938
Merit: 1000
chaos is fun...…damental :)
December 27, 2012, 01:57:23 PM
That wave 2 I don't consider to be so simple; it looks more like a complex correction in flat + three waves + triangle format. So where your 2 ends is a W, then three waves up to X, then a triangle a-b-c-d-e where e is the Y on the May 29 hammer.

Your count does not explain the move from the end of wave 2 to sub-wave 1 (of wave 3). From your 4 to 5 you get nothing, even if there was a correction.

legendary
Activity: 938
Merit: 1000
chaos is fun...…damental :)
December 27, 2012, 08:39:40 AM
Ok, we will just wait and see..... Cool
I would like to see a sub-count of waves 2 and 3 on your chart.
sr. member
Activity: 462
Merit: 250
Clown prophet
December 27, 2012, 08:02:46 AM
What exactly do you call the same? http://www.btcwallet.org/wp-content/uploads/2012/12/2012-2013-Bitcoin-Price-Analysis-end-of-one-year-circle1.png

Your waves C, 4, 5 do not fit any known EW count variant - they are truncated. But thanks for the effort.

In my count we are still in B of your 4. In my opinion, we have seen neither B nor 4 so far.

We are tracing out B, and C/4 is somewhere around 5.
Of course, C around 5 ruins vssa's upper count - it will not be a 4, and 1540 was not a 3.
full member
Activity: 207
Merit: 100
December 27, 2012, 07:52:38 AM
Ok, we will just wait and see..... Cool
sr. member
Activity: 462
Merit: 250
Clown prophet
December 27, 2012, 07:50:12 AM
What exactly do you call the same? http://www.btcwallet.org/wp-content/uploads/2012/12/2012-2013-Bitcoin-Price-Analysis-end-of-one-year-circle1.png

Your waves C, 4, 5 do not fit any known EW count variant - they are truncated. But thanks for the effort.

In my count we are still in B of your 4. In my opinion, we have seen neither B nor 4 so far.

We are tracing out B, and C/4 is somewhere around 5.
full member
Activity: 207
Merit: 100
December 27, 2012, 07:41:57 AM
I published this very same analysis a day before you, on the 15th of December, on my website:
https://bitcointalksearch.org/topic/2012-2013-bitcoin-price-analysis-end-of-one-year-circle-130935

very interesting  Shocked Shocked
sr. member
Activity: 350
Merit: 251
Dolphie Selfie
December 27, 2012, 06:27:06 AM
Even a solid state drive can't handle the database this curve points to, and I'm not even talking about CPU and RAM speed. So we have a situation where a regular participant can't be a full member of the decentralized network. So we are running toward an unpredictable point in the future with such a database.

About unspent transactions: sure, the client does not have to verify them, just keep them. But that way it will not know whether a transaction is valid or not.

The current Bitcoin client has DoS protection, which bans a remote node if it relayed an invalid TX one or several times (I tried to craft an invalid TX and relay it, and my node was banned).

So if a client removes TX verification, it will be vulnerable to DoS. The fork trap again: kill your HDD or be vulnerable to DoS.

Of course, Gavin's magic alert key to all doors will save our asses when we are all in a long position. But isn't that too much power for one person?

Oh dear, you thought Bitcoin was decentralized? Sorry to disappoint Wink

I expected the down-move, too. But this looks like FUD.
With the current implementation of "Ultraprune", the pruned copy of the blockchain is about 100 MiB (vs. 3-4 GiB unpruned). The size of this pruned copy does not grow as fast as the blockchain itself, because addresses that currently do not hold any coins are removed. In an extreme case, the size of the pruned copy can even go down (this has already happened, as you can see here: https://bitcointalksearch.org/topic/m.1257750).

However, the block history has to be kept somewhere to bootstrap new nodes from zero. You can imagine the pruned copy as the current state of the system; to recreate this state, the block history has to be replayed. But this also means that the block history only needs to be stored on some large drive, and there is no need for highly performant IO throughput (well, except that the faster the storage, the faster a new node can be bootstrapped from zero).
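As a rough sketch of that state-replay idea (illustrative only; the dict-based block and transaction layout below is a simplified stand-in, not Bitcoin's actual data structures):

```python
# Minimal sketch of "pruned state = replay of the block history".
# Illustrative only: the block/tx fields are simplified stand-ins,
# not the actual Bitcoin data model.

def replay_chain(blocks):
    """Rebuild the pruned state (set of unspent outputs) from the history."""
    utxo = {}  # (txid, output_index) -> amount; this is the "pruned copy"
    for block in blocks:
        for tx in block["txs"]:
            # Spending an output removes it from the pruned state...
            for spent_txid, idx in tx["inputs"]:
                utxo.pop((spent_txid, idx), None)
            # ...and every new output adds one entry.
            for idx, amount in enumerate(tx["outputs"]):
                utxo[(tx["txid"], idx)] = amount
    # When spends outnumber new outputs, the state shrinks -- which is
    # why the pruned copy can even get smaller while the chain grows.
    return utxo
```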
sr. member
Activity: 462
Merit: 250
Clown prophet
December 27, 2012, 05:52:01 AM
But fuck these technicalities. We've got a situation here. The short-term triangle off the 1374 high isn't broken so far. And volume didn't confirm it - it is still low. I'd say it is an abnormal triangle, as a regular one has constantly decreasing volume on its way to the breakout, which should be bold.

So what I pointed to as "e" of "B" now looks like "c" of "B". So we may have some more time of stagnation while awaiting d and e. If this is a triangle at all - it has an outstanding volume stick in the middle. That is abnormal. So it may break in an irregular way at any time.

Daily RSI and MACD are still hinting at a down move and have already drawn a Head and Shoulders figure (for slowpokes).

That is why I prefer the break down.
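For reference, a quick sketch of how those two indicator readings are computed from a list of daily closes (the 14-period RSI and 12/26/9 MACD settings are the conventional defaults - an assumption, since the post doesn't say which were used):

```python
# Sketch of the two indicators referenced above, computed over a plain
# list of daily closing prices. Settings (14-period RSI, 12/26/9 MACD)
# are the conventional defaults -- assumed, not stated in the post.

def ema(values, period):
    """Exponential moving average, seeded with the first value."""
    k = 2.0 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(v * k + out[-1] * (1 - k))
    return out

def rsi(closes, period=14):
    """Simple-average variant of Wilder's RSI over the last `period` changes."""
    changes = [b - a for a, b in zip(closes, closes[1:])]
    gains = [max(c, 0) for c in changes[-period:]]
    losses = [max(-c, 0) for c in changes[-period:]]
    avg_loss = sum(losses) / period
    if avg_loss == 0:
        return 100.0  # no down days in the window
    rs = (sum(gains) / period) / avg_loss
    return 100 - 100 / (1 + rs)

def macd(closes, fast=12, slow=26, signal=9):
    """Latest MACD line and signal line values."""
    line = [f - s for f, s in zip(ema(closes, fast), ema(closes, slow))]
    return line[-1], ema(line, signal)[-1]
```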

sr. member
Activity: 462
Merit: 250
Clown prophet
December 27, 2012, 05:31:56 AM
pent, I pose the problem - you try to find the solution. That was the deal, no? My part is complete. Go ahead with your round. But not here )


jr. member
Activity: 42
Merit: 1000
December 27, 2012, 05:09:00 AM
So, if you are right, ask your alter ego to design a new Bitcoin 3.0 with much less disk I/O.

You both will be swimming in a huge pile of donated money (do you still accept BTC?).
A la Scrooge McDuck )
hero member
Activity: 490
Merit: 500
December 27, 2012, 05:11:34 AM
So, if you are right, ask your alter ego to design a new Bitcoin 3.0 with much less disk I/O.
I plan to work on a distributed chain database within the DIANNA project. But still no programmers here ))

Lucif, you're making people shit their pants, as always. Bravo.
sr. member
Activity: 462
Merit: 250
Clown prophet
December 27, 2012, 04:44:18 AM
Even a solid state drive can't handle the database this curve points to, and I'm not even talking about CPU and RAM speed. So we have a situation where a regular participant can't be a full member of the decentralized network. So we are running toward an unpredictable point in the future with such a database.

About unspent transactions: sure, the client does not have to verify them, just keep them. But that way it will not know whether a transaction is valid or not.

The current Bitcoin client has DoS protection, which bans a remote node if it relayed an invalid TX one or several times (I tried to craft an invalid TX and relay it, and my node was banned).

So if a client removes TX verification, it will be vulnerable to DoS. The fork trap again: kill your HDD or be vulnerable to DoS.

Of course, Gavin's magic alert key to all doors will save our asses when we are all in a long position. But isn't that too much power for one person?

Oh dear, you thought Bitcoin was decentralized? Sorry to disappoint Wink
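A toy model of that ban-on-invalid-relay behaviour (a sketch of the idea only, not the actual Satoshi client code; the penalty and threshold values are assumptions):

```python
# Toy model of the DoS protection described above: a peer that relays
# invalid transactions accumulates a misbehaviour score and gets banned.
# Sketch only -- not the actual Bitcoin client code; the threshold and
# penalty values here are assumed.

BAN_THRESHOLD = 100

class Peer:
    def __init__(self, addr):
        self.addr = addr
        self.score = 0
        self.banned = False

def handle_relayed_tx(peer, tx, verify):
    """Process a TX relayed by `peer`, penalising it on failed verification."""
    if peer.banned:
        return False            # ignore banned nodes entirely
    if not verify(tx):
        peer.score += 100       # invalid TX: heavy penalty
        if peer.score >= BAN_THRESHOLD:
            peer.banned = True  # disconnect, as the poster observed
        return False
    return True                 # valid: accept and relay further
```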
legendary
Activity: 938
Merit: 1000
chaos is fun...…damental :)
December 27, 2012, 04:01:06 AM
An HDD has two parameters: space and bus throughput.

People still use HDDs?  Cheesy
nah, most people use these: http://www.ramsan.com/products/rackmount-flash-storage/ramsan-810
legendary
Activity: 1904
Merit: 1002
December 27, 2012, 01:55:23 AM
Just imagine the Bitcoin network serving 100m customers.

A new transaction arrives every second. The software has to verify its inputs by querying a huge database. And every client does this job for every transaction. Every transaction will cause millions of nodes to look up a bunch of keys in a 100G database....

Also, every client must keep this transaction buffer.

Well, let's wait for those times.

It's called an unspent transaction cache.  You don't have to query every transaction, just keep a running list of how much each nonempty address holds.

Yes, there are scalability issues to be addressed before we get to 100 GB, but they are being addressed and the software is improving constantly.
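As a rough illustration of that running list (simplified - the real client tracks unspent transaction outputs rather than per-address balances):

```python
# Rough sketch of the "running list of how much each nonempty address
# holds" mentioned above. Simplified: the real client tracks unspent
# outputs (UTXOs), not per-address balances.

from collections import defaultdict

balances = defaultdict(int)  # address -> amount; only nonempty entries kept

def apply_tx(inputs, outputs):
    """Apply one confirmed TX given (address, amount) pairs."""
    for addr, amount in inputs:
        balances[addr] -= amount
        if balances[addr] == 0:
            del balances[addr]  # emptied addresses drop out of the cache
    for addr, amount in outputs:
        balances[addr] += amount
```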
legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
December 26, 2012, 11:54:38 PM

I see 3 linear curves. If the overall thing tends to become exponential, it has to be weighed against advances in hard-drive capacity and communication bandwidth, both of which increase in an exponential fashion.
The second part of my person is an experienced Unix sysadmin.

I saw many crying clients, who cried that there was enough space on the HDD to keep their video content, but the server hung because of heavy IO.

An HDD has two parameters: space and bus throughput.

Home users don't take the second one into account. Never.

You know what a database double or triple the size of RAM is under read/write load? It is death for the IO scheduler if there is no good array of 15k rpm disks.

And even with an array... a 100G database is real hell for a home PC, even with a single 100TB HDD.

So I ask one more time: where does this curve lead?

Triple redundant parity RAID for now, later even higher redundancy. But that is true for any database with an exponentially growing size. The immediate solution is "ultraprune": simply that clients do not store the full database but rely on others to confirm transactions.
This issue is not unique to Bitcoin but is a known challenge for all of IT. I don't know the answer, but I think that eventually we are going to move away from hard disks altogether. Flash memory will in the short run achieve enough bandwidth to do away with data degradation over time. In the long run there will be processor arrays which have their own non-volatile memory attached to each node.
I would expect Bitcoin to be obsolete in its current form by that time, which again will take about 20 years, I think.

But that doesn't mean the blockchain couldn't be stored across multiple nodes communicating with error-correcting codes. That's as far as I can imagine; everything after that is uncertain.
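As a toy stand-in for that multi-node storage idea (plain XOR parity instead of a real erasure code such as Reed-Solomon, purely to illustrate the principle):

```python
# Toy stand-in for "blockchain stored across multiple nodes with
# error-correcting codes": split the data into n shards plus one XOR
# parity shard, so any single lost shard can be rebuilt. Real systems
# would use proper erasure codes (e.g. Reed-Solomon), not plain parity.

def split_with_parity(data: bytes, n: int):
    """Split `data` into n equal shards plus an XOR parity shard."""
    size = -(-len(data) // n)  # ceiling division
    shards = [bytearray(data[i * size:(i + 1) * size].ljust(size, b"\0"))
              for i in range(n)]
    parity = bytearray(size)
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return shards, parity

def recover_shard(shards, parity, lost):
    """Rebuild the shard at index `lost` from the others plus parity."""
    rebuilt = bytearray(parity)
    for j, shard in enumerate(shards):
        if j != lost:
            for i, b in enumerate(shard):
                rebuilt[i] ^= b
    return rebuilt
```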
sr. member
Activity: 462
Merit: 250
Clown prophet
December 26, 2012, 10:54:50 PM
I remember the times when blockexplorer replied instantly to any query. Now it denies service on about 30% of my queries. And that is the fate of every regular client in the future if things do not change. And this future is approaching at exponential speed.

Disk space isn't the problem - Satoshi was right about that. But disk IO is a problem. A big problem.

What do I want to say with all of this? Bitcoin is beta and officially not recommended for serious investment by Gavin, if I am not mistaken. Too sad if I am. Too sad that the press release on halving day from the Bitcoin Foundation contained only squeals about the Bitcoin price rise.

A big disappointment awaits the fanatic believers if they lose their connection with reality. See my sig.

legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
December 26, 2012, 10:15:26 PM
Just imagine the Bitcoin network serving 100m customers.

A new transaction arrives every second. The software has to verify its inputs by querying a huge database. And every client does this job for every transaction. Every transaction will cause millions of nodes to look up a bunch of keys in a 100G database....

Also, every client must keep this transaction buffer.

Well, let's wait for those times.

"millions of nodes"  Cheesy

buy buy buy, millions of nodes! Millions of nodes!!!
sr. member
Activity: 462
Merit: 250
Clown prophet
December 26, 2012, 10:00:45 PM
Just imagine the Bitcoin network serving 100m customers.

A new transaction arrives every second. The software has to verify its inputs by querying a huge database. And every client does this job for every transaction. Every transaction will cause millions of nodes to look up a bunch of keys in a 100G database....

Also, every client must keep this transaction buffer.

Well, let's wait for those times.
sr. member
Activity: 462
Merit: 250
Clown prophet
December 26, 2012, 09:47:12 PM

I see 3 linear curves. If the overall thing tends to become exponential, it has to be weighed against advances in hard-drive capacity and communication bandwidth, both of which increase in an exponential fashion.
The second part of my person is an experienced Unix sysadmin.

I saw many crying clients, who cried that there was enough space on the HDD to keep their video content, but the server hung because of heavy IO.

An HDD has two parameters: space and bus throughput.

Home users don't take the second one into account. Never.

You know what a database double or triple the size of RAM is under read/write load? It is death for the IO scheduler if there is no good array of 15k rpm disks.

And even with an array... a 100G database is real hell for a home PC, even with a single 100TB HDD.

So I ask one more time: where does this curve lead?