
Topic: The 2.0 throwdown thread

legendary
Activity: 3136
Merit: 1116
October 23, 2015, 07:45:15 AM
legendary
Activity: 2968
Merit: 1198
October 23, 2015, 07:33:24 AM
no MAID

MAID trades, right? I'm not familiar with it though; is that just some placeholder token with no features?
legendary
Activity: 1470
Merit: 1010
October 23, 2015, 06:20:11 AM
"No Lightning Network, no eMunie, no MAID, or anything else that is practically a foregone conclusion or most likely vaporware"

Heard of eMunie ... for like 2 years ... where does it trade?
Everyone has heard of MAID ... where's this platform it hyped so well?
Lightning Network ... is that an actual thing?

" -it has to be released publicly (no alpha/beta/testnet/bullshit youtube videos) to be considered here- "

Agree.  The 2.0 space has reached the point where you either already have product in the field, or you get out.
That means at least a working platform with real users. And we are very near the point where that platform also needs to contain assets of value, not just fluff that no one wants.
legendary
Activity: 2044
Merit: 1005
October 22, 2015, 11:10:37 PM
Here's me giving BTS grief on the TPS issue:
https://bitcointalksearch.org/topic/m.12620412

Some useful information contained there.... Looks to me like BTS are throwing security and stability out the window to get close to the 100,000 TPS they've been hyping, though it turns out that the 100,000 TPS figure is based on running a lab benchmark, and will only be achievable in the real world on next-generation hardware and infrastructure.

@rdnkjdi: Yep, that's eMunie, which has been in development for a very, very long time.

Relevant:
https://twitter.com/eMunie_Currency

eMunie achieved a load of 200-300 tx/s for over an hour tonight. No true crypto has gone so fast for so long!

"The future is NOW"  ...and that's with only 1 partition! 

Multiply that by 1,024 partitions and you can see it's easily 256,000 tx/s with commodity (off-the-shelf) hardware.

That doesn't mean anything. I want a third party to test it with open-source code.
legendary
Activity: 3136
Merit: 1116
October 22, 2015, 10:37:40 PM
No Lightning Network, no eMunie, no MAID, or anything else that is practically a foregone conclusion or most likely vaporware - it has to be released publicly (no alpha/beta/testnet/bullshit youtube videos) to be considered here.
full member
Activity: 179
Merit: 100
October 22, 2015, 09:54:33 PM
Here's me giving BTS grief on the TPS issue:
https://bitcointalksearch.org/topic/m.12620412

Some useful information contained there.... Looks to me like BTS are throwing security and stability out the window to get close to the 100,000 TPS they've been hyping, though it turns out that the 100,000 TPS figure is based on running a lab benchmark, and will only be achievable in the real world on next-generation hardware and infrastructure.

@rdnkjdi: Yep, that's eMunie, which has been in development for a very, very long time.

Relevant:
https://twitter.com/eMunie_Currency

eMunie achieved a load of 200-300 tx/s for over an hour tonight. No true crypto has gone so fast for so long!

"The future is NOW"  ...and that's with only 1 partition! 

Multiply that by 1,024 partitions and you can see it's easily 256,000 tx/s with commodity (off-the-shelf) hardware.
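
A quick sanity check on that arithmetic, as a minimal Python sketch. It assumes perfectly linear scaling across partitions, which cross-partition transactions and coordination overhead would erode in practice:

```python
# Back-of-the-envelope check of the partition claim above, assuming
# perfectly linear scaling across partitions (a strong assumption:
# cross-partition traffic and coordination overhead would reduce
# real throughput).

per_partition_tps = 250   # midpoint of the observed 200-300 tx/s
partitions = 1024

total_tps = per_partition_tps * partitions
print(f"{total_tps:,} tx/s")  # 256,000 tx/s -- the figure claimed above
```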
legendary
Activity: 2044
Merit: 1005
October 22, 2015, 06:52:27 PM
sidhujag, you are not disproving that their measurements all assume the network has scaled to the throughput of the CPU, and that they never mention DoS attacks at all.

Right, it was a controlled environment like I said earlier, and their tests showed more than 100k on standard hardware, much more on better hardware, and much, much more with optimized source code. I just said there would be some leeway for DDoS: although a full DDoS attack on delegates would cause standby delegates to be called upon, I'm sure a lag indicator (which I'm sure is already enforced at some time limit) can be used if delegates are not responding within a certain time, causing blocks to be delegated to another witness. It would be short-lived, because other (standby) witnesses can be used and the list of witnesses can be increased dynamically to withstand an attack over an extended period of time.

I'd prefer some official technical document detailing the DDoS algorithm and real-world tests or modeling results.

Hiccups every 10 s with intermittent DDoS might not be squelched, and there are many other possible scenarios. We can't piecemeal this; we need actual engineering-level inspection and test data.

That is a good suggestion, and I believe the best way to get them to share this information, or perhaps conduct these tests (which I'm sure they already have), is to post on their forum over at bitsharestalk.org. Create an account and post a thread; I'm sure Daniel Larimer reads it daily.

It would be good to reflect back here on the results of those tests or responses from the core devs.

If you have a hunger for information, it is a good place to look for a solution in a time-sensitive manner.
sr. member
Activity: 420
Merit: 262
October 22, 2015, 06:49:27 PM
sidhujag, you are not disproving that their measurements all assume the network has scaled to the throughput of the CPU, and that they never mention DoS attacks at all.

Right, it was a controlled environment like I said earlier, and their tests showed more than 100k on standard hardware, much more on better hardware, and much, much more with optimized source code. I just said there would be some leeway for DDoS: although a full DDoS attack on delegates would cause standby delegates to be called upon, I'm sure a lag indicator (which I'm sure is already enforced at some time limit) can be used if delegates are not responding within a certain time, causing blocks to be delegated to another witness. It would be short-lived, because other (standby) witnesses can be used and the list of witnesses can be increased dynamically to withstand an attack over an extended period of time.

I'd prefer some official technical document detailing the DDoS algorithm and real-world tests or modeling results.

Hiccups every 10 s with intermittent DDoS might not be squelched, and there are many other possible scenarios. We can't piecemeal this; we need actual engineering-level inspection and test data.
legendary
Activity: 2044
Merit: 1005
October 22, 2015, 06:43:24 PM
sidhujag, you are not disproving that their measurements all assume the network has scaled to the throughput of the CPU, and that they never mention DoS attacks at all.

Right, it was a controlled environment like I said earlier, and their tests showed more than 100k on standard hardware, much more on better hardware, and much, much more with optimized source code. I just said there would be some leeway for DDoS: although a full DDoS attack on delegates would cause standby delegates to be called upon, I'm sure a lag indicator (which I'm sure is already enforced at some time limit) can be used if delegates are not responding within a certain time, causing blocks to be delegated to another witness. It would be short-lived, because other (standby) witnesses can be used and the list of witnesses can be increased dynamically to withstand an attack over an extended period of time.

Remember that the next witness is random and cannot be predicted by attackers. So if they keep trying to attack witnesses, new ones can easily replace them from the standby pool, or new ones can be voted in during a long attack. I'm pretty sure ComeFromBeyond already tried this attack on Bitshares 1.0 and left without giving a proper response, although he was eager to learn and to find attack vectors. When he realized that a DDoS attack was really the only attack on the network available to him, he tried it (many people were complaining about delegate lag at the time, but the important note is that the system resolved itself feasibly). In fact, I think he said he would attack the testnet to see if it was a viable attack without hurting mainnet, but I personally believe his intention was to try to break it (from a competitor's angle, for NXT). He didn't really comment on his results, leaving the thread: https://bitsharestalk.org/index.php/topic,13921.0.html
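
The recovery mechanism described in these posts, unpredictable selection of the next producer plus promotion from a standby pool on a lag timeout, can be sketched roughly as follows. This is a minimal Python illustration with hypothetical names, not BitShares' actual scheduling code:

```python
import random

# Minimal sketch of the failover idea described above; all names are
# hypothetical, and this is not BitShares' actual scheduler. The next
# producer is drawn unpredictably, and if it misses its slot (e.g. it
# is being DDoSed), a standby witness is promoted into the active set.

SLOT_TIMEOUT = 1.0  # seconds a witness has to sign its block (the "lag indicator")

class Witness:
    def __init__(self, name, responsive=True):
        self.name = name
        self.responsive = responsive  # False simulates a node knocked out by DDoS

    def produce_block(self, timeout):
        # Stand-in for "did this witness sign within the lag limit?"
        return f"block signed by {self.name}" if self.responsive else None

def next_block(active, standby):
    witness = random.choice(active)            # attackers can't predict this
    block = witness.produce_block(SLOT_TIMEOUT)
    if block is None and standby:
        replacement = standby.pop(0)           # promote a standby witness
        active[active.index(witness)] = replacement
        block = replacement.produce_block(SLOT_TIMEOUT)
    return block

active = [Witness("w1"), Witness("w2", responsive=False), Witness("w3")]
standby = [Witness("s1"), Witness("s2")]
print(next_block(active, standby))
```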
sr. member
Activity: 420
Merit: 262
October 22, 2015, 05:21:34 PM
The proposed Lightning Network for Bitcoin is not on the table in the OP. I finally dug into some research on this today, and I was reasonably shocked to find out what a mess LN could end up being if it is enabled by some block chain changes.

  • I haven't found any mention of what happens with chain reorganization, but I am assuming it could create an irrevocable mess. Attack strategies could probably steal coins:

    Quote from: myself from pvt msg
    A reorg that removes all the payment transactions and refunds everything to himself. The payment transactions will have a timeout, but the refunds can be made up to 40 days later. Let's say you can buy something of value with microtransactions that can be aggregated and exchanged back to crypto coin, such as game tokens.

    Quote from: myself from pvt msg
    Quote from: anonymous
    This seems like a massive 51% attack. You could do the same with confirmed on-chain transactions then.

    Not in my design. I have 51% attack immunity to double-spends in my design. Comparing designs for microtransactions.
  • Extra block chain load (and not even in the context of DoS attacks).

    Quote from: myself from pvt msg
    To make this work they will need hubs with reputation that establish all the intermediate channels. So then you as a user open only one channel to pay and get paid with. That can work and eliminate the DoS problem and also the latency problem.

    But there is another way to DoS it which is to create more and more Bitcoin addresses so the block chain gets flooded with new channels. There is no way to stop a DoS attack against a resource which has no cost of creation. Well I guess you force the recipient to have a payment channel open so the cost is the minimum bond of a payment channel. But if they make that value too high you limit participation (as you said onramp cost). And you can't stop users from closing a payment channel (by sending the refund or the first spend TX) and immediately creating another Bitcoin channel with a new address. Well I guess you have the Bitcoin block chain TX fee to contend with.
  • Kills anonymity. Neither ring sigs nor value hiding is possible in this structure. Trusted corporations must run the servers for this to work against DoS and latency issues. This is a tracking system.
  • Requires big spikes in block chain headroom, due to many people closing their channels at the same time, or if a server gets hacked.

    Quote from: myself from pvt msg
    At the end of the video they admit payment channels are likely to make block chain scaling worse, because there will be a proliferation of competing payment channel networks with different features, and thus users will open multiple channels on the block chain.

    This appears very, very messy, with too many risks. I would endeavor to make sure payment channel networks could not function if I created a coin.

    On-chain type end-to-end anonymity should be seamless and available for microtransactions when you want it.

    They can sort of make this LN stuff work with lots of band-aids and duct tape. And then...kaboom.

    Quote from: myself from pvt msg
    Appears Lightning Network is going to require orders of magnitude more block chain space than my block chain scaling design. Also the increased latency (unless corporate spying servers with reputation dominate) and the on-ramp delay (waiting for block confirmation to open a channel). Also, I bet LN will end up having effectively more than one signature to the block chain to settle up (when DoS-attacked by unfriendly nodes). DoS attacks are going to be nearly impossible to deal with given any sort of anonymity mixing such as CoinShuffle.

    I calculate the bandwidth cost per TX in my design will be in the neighborhood of $0.000001 per TX. I haven't computed CPU costs yet. Bitshares is claiming 100,000 TPS on commodity hardware in one thread, so it looks like amortized hardware costs will be extremely low also.
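
For what it's worth, a per-TX bandwidth cost of that order can be reproduced with a simple formula. Every parameter below (transaction size, relay fan-out, bandwidth price) is an assumption chosen for illustration, and the result is order-of-magnitude only:

```python
# Rough sanity check of the per-transaction bandwidth cost quoted above.
# Every number here is an assumption chosen for illustration; the result
# is an order-of-magnitude estimate only.

TX_BYTES = 256          # assumed size of one transaction on the wire
RELAY_FANOUT = 20       # assumed number of times each tx is forwarded across the network
PRICE_PER_GB = 0.09     # assumed bandwidth cost in USD (typical 2015 cloud egress)

bytes_moved = TX_BYTES * RELAY_FANOUT
cost_per_tx = bytes_moved / 1e9 * PRICE_PER_GB
print(f"~${cost_per_tx:.7f} per tx")  # ~$0.0000005, same ballpark as $0.000001
```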
sr. member
Activity: 420
Merit: 262
October 22, 2015, 04:53:00 PM
sidhujag, you are not disproving that their measurements all assume the network has scaled to the throughput of the CPU, and that they never mention DoS attacks at all.
legendary
Activity: 2044
Merit: 1005
October 22, 2015, 04:02:39 PM
In a controlled environment it's already demonstrated 100k tps. The bottleneck is network capability rather than the core code, which is what the test demonstrates. Once network capabilities scale up naturally, 100k becomes possible as demand sees fit.

You are obscuring that what you just wrote effectively means, "this is how many TX/s a fast server CPU can process and we eliminated any sense of a real network from the test".

Once network capabilities scale up naturally, 100k becomes possible as demand sees fit.

This is like saying it will get faster once we redesign our coin (or monopolize the DPOS witnesses with our own corporate-funded super hardware, because DPOS is really just a way to obscure that we are paying dividends to ourselves, analogous to how masternodes in Dash were ostensibly a way of obscuring that those huge interest fees were going to those who owned the most coins). Let's rather talk about what a coin can do now, today, in the real world of decentralized witnesses of varying capabilities.

Obscuring instamines and other means (cheap pre-sales of ProtoShares that were converted to Bitshares?) of having control over a large percentage of the coins, and then setting up a paradigm where coins can be parked to earn dividends. Hmmm. How dumb are we? Hey, more power to them if investors are gullible enough to buy it. But it all starts to fit together when analyzing why they would even think they could have uniform capability across all witnesses.

Your assumption about which witness is next to sign a block, which led to a dozen more, is incorrect, thus your derived assumptions are also incorrect. So you really have no claim about Bitshares and its TPS without fully understanding the concepts behind the tests and the feature itself.

If you would be kind enough, you are welcome to cite a reference document. I was working from the official description of DPOS at the official website. As I wrote, I will edit my post for corrections if anyone provides information. You have not yet provided any information; so far I have read only your (perhaps incorrect) spin on the matter.

Why not read the code?

Again, the bottleneck is in the consensus code, which has been optimized so that it is possible to do more than 100k tps; a Bitcoin controlled environment can't do this because of a bottleneck outside of network constraints. By leveraging LMAX technology and applying it to blockchains, they were able to increase efficiency in validating and signing blocks. Propagation is always an issue, which is where scaling up network parameters helps and is totally feasible, which multiple markets are betting on and will benefit from. Because there is no mining it is possible off the bat, and now it is optimized to create more tps. DPOS allows them to maximize decentralization while remaining anonymous, and even so, with Bitshares following regulatory rules there is less incentive for a regulation attack than with Bitcoin.

With fiberoptic internet would bitcoin be able to do 100k tps? No.

LMAX does 100k at 1 ms latency: http://www.infoq.com/presentations/LMAX

On the use of LMAX in BTS: https://bitshares.org/technology/industrial-performance-and-scalability/

Increasing network params will only help Bitcoin with the regulation attack, not scale up its tps as efficiently. Today BTC is restricted to 7 tps at 1 MB, so it's orders of magnitude off, and I'd argue that DPOS is still more decentralized than using LN to increase tps and using Bitcoin as a settlement platform.
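
The ~7 tps figure for Bitcoin falls out of simple arithmetic, assuming an average transaction size of around 250 bytes (an assumption; real sizes vary):

```python
# Where Bitcoin's oft-quoted ~7 tx/s ceiling comes from: a 1 MB block
# every ~10 minutes, divided by a typical transaction size.

BLOCK_BYTES = 1_000_000    # 1 MB block size limit
BLOCK_INTERVAL_S = 600     # one block every ~10 minutes
AVG_TX_BYTES = 250         # assumed average transaction size

tps = BLOCK_BYTES / AVG_TX_BYTES / BLOCK_INTERVAL_S
print(f"{tps:.1f} tx/s")   # ~6.7 tx/s
```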

As I wrote from the start of this, Bitshares 2.0 has optimized the witness code so the CPU can scale to 100,000 TX/s, but not only are they apparently requiring on the order of LMAX's 1 ms network latency to achieve it, I also haven't read where they've modeled DoS attacks on the transaction propagation network at such high TX/s. Real-time systems are not only about average throughput but also about CIS (guaranteed reliability and continuous throughput). If you are sending your real-time payment through and the next 10 witnesses queued in the chosen order are DoS-attacked so they are unable to receive the transactions, then they can't complete their function. That is a fundamental problem that arises from using PoS as the mining method if you claim such high TX/s across the variable hardware and network capabilities of nodes (PoS designs claiming more conservative TX/s and block times are thus less likely to bump into these issues external to the speed of updating the ledger in the client code). They can adopt countermeasures, but it is going to impact the maximum TX/s rates to the downside, perhaps significantly.

I am not even confident they can maintain 100 TX/s under a DDoS attack on a real-world network today composed of a myriad of witness capabilities. Someone needs to do some modeling.

LMAX is able to push 6M TPS, but it's not on a blockchain, so it is apparently not requiring that kind of latency at all. "BitShares is able to process 100,000 transactions per second without any significant effort devoted to optimization" means that with optimization they can pull a lot more and deal with DDoS or whatnot.

"The real bottleneck is not the memory requirements, but the bandwidth requirements. At 1 million transactions per second and 256 bytes per transaction, the network would require 256 megabytes per second (1 Gbit/sec). This kind of bandwidth is not widely available to the average desktop; however, this level of bandwidth is a fraction of the 100 Gbit/s that Internet 2 furnishes to more than 210 U.S. educational institutions, 70 corporations, and 45 non-profit and government agencies."


"The NASDAQ claims that orders are acknowledged in 1 ms and then executed in just 1 ms. This claim has the built in assumption that the machines doing the trading are on fiber optic connections to the exchange and located within 50 miles. This is due to the fact that light can only travel 186 miles in a millisecond in space and half of that speed on a fiber optic cable. The time it takes for an order to travel 50 miles and back is a full millisecond even if the NASDAQ had no overhead of its own.

If a user in China were to do trading on the NASDAQ then they would expect that order acknowledgement would be at least 0.3 seconds.

If BitShares were configured with 1 second block intervals, then on average orders are acknowledged and/or executed within 0.5 seconds. In other words, the performance of BitShares is on par with a centralized exchange processing orders submitted from around the world. This is the best that can be achieved with a decentralized exchange because it puts everyone on equal footing regardless of their geographical location. In theory, traders could locate their machines within 50 miles of the elected block producers and trade with millisecond confirmations. Unfortunately, block producers are randomly selected from locations all around the world which means that at least some of the time a trader would have higher latency."
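
The latency arithmetic in that quote checks out roughly as follows. The China distance is an assumed great-circle figure; real fiber routes are longer and equipment adds delay, which pushes the result toward the quoted ~0.3 s:

```python
# Working through the latency arithmetic in the quote above. Light
# travels ~186 miles/ms in a vacuum and roughly half that in fiber
# (per the quote). Distances below are assumptions.

FIBER_MILES_PER_MS = 186 / 2   # ~93 miles per millisecond in fiber

def round_trip_ms(one_way_miles):
    return 2 * one_way_miles / FIBER_MILES_PER_MS

print(f"50 mi away:   {round_trip_ms(50):.2f} ms")    # ~1.08 ms, as quoted
print(f"China-NASDAQ: {round_trip_ms(7000):.0f} ms")  # ~151 ms in the ideal case;
# real routing and equipment overhead push this toward the ~300 ms in the quote
```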

"We setup a test blockchain where we created 200,000 accounts and then made 2 transfers and 1 asset issuance to each account. This is involved a total of 1 million operations. After creating the blockchain we timed how long it took to “reindex” or “replay” without signature verification. On a two year old 3.4 Ghz Intel i5 CPU this could be performed at over 180,000 operations per second. On newer hardware single threaded performance is 25% faster.

Based upon these numbers we have concluded that claiming 100,000 transactions per second is well within the capability of the software."

"When measuring performance we make the assumption that the network is capable of streaming all of the transaction data and that disks are capable of recording this stream. We make the assumption that signature verification has been done in parallel using as many computers as necessary to minimize the latency. A single core of a 2.6 Ghz i7 is able to validate 10,000 signatures per second. Todays high-end servers with 36 cores (72 with hyper-threading) could easily validate 100,000 transactions per second. All of these steps have been designed to be embarrassingly parallel and to be independent of blockchain state."


Read https://bitshares.org/blog/2015/06/08/measuring-performance/
sr. member
Activity: 420
Merit: 262
October 22, 2015, 11:22:31 AM
In a controlled environment it's already demonstrated 100k tps. The bottleneck is network capability rather than the core code, which is what the test demonstrates. Once network capabilities scale up naturally, 100k becomes possible as demand sees fit.

You are obscuring that what you just wrote effectively means, "this is how many TX/s a fast server CPU can process and we eliminated any sense of a real network from the test".

Once network capabilities scale up naturally, 100k becomes possible as demand sees fit.

This is like saying it will get faster once we redesign our coin (or monopolize the DPOS witnesses with our own corporate-funded super hardware, because DPOS is really just a way to obscure that we are paying dividends to ourselves, analogous to how masternodes in Dash were ostensibly a way of obscuring that those huge interest fees were going to those who owned the most coins). Let's rather talk about what a coin can do now, today, in the real world of decentralized witnesses of varying capabilities.

Obscuring instamines and other means (cheap pre-sales of ProtoShares that were converted to Bitshares?) of having control over a large percentage of the coins, and then setting up a paradigm where coins can be parked to earn dividends. Hmmm. How dumb are we? Hey, more power to them if investors are gullible enough to buy it. But it all starts to fit together when analyzing why they would even think they could have uniform capability across all witnesses.

Your assumption about which witness is next to sign a block, which led to a dozen more, is incorrect, thus your derived assumptions are also incorrect. So you really have no claim about Bitshares and its TPS without fully understanding the concepts behind the tests and the feature itself.

If you would be kind enough, you are welcome to cite a reference document. I was working from the official description of DPOS at the official website. As I wrote, I will edit my post for corrections if anyone provides information. You have not yet provided any information; so far I have read only your (perhaps incorrect) spin on the matter.

Why not read the code?

Again, the bottleneck is in the consensus code, which has been optimized so that it is possible to do more than 100k tps; a Bitcoin controlled environment can't do this because of a bottleneck outside of network constraints. By leveraging LMAX technology and applying it to blockchains, they were able to increase efficiency in validating and signing blocks. Propagation is always an issue, which is where scaling up network parameters helps and is totally feasible, which multiple markets are betting on and will benefit from. Because there is no mining it is possible off the bat, and now it is optimized to create more tps. DPOS allows them to maximize decentralization while remaining anonymous, and even so, with Bitshares following regulatory rules there is less incentive for a regulation attack than with Bitcoin.

With fiberoptic internet would bitcoin be able to do 100k tps? No.

LMAX does 100k at 1 ms latency: http://www.infoq.com/presentations/LMAX

On the use of LMAX in BTS: https://bitshares.org/technology/industrial-performance-and-scalability/

Increasing network params will only help Bitcoin with the regulation attack, not scale up its tps as efficiently. Today BTC is restricted to 7 tps at 1 MB, so it's orders of magnitude off, and I'd argue that DPOS is still more decentralized than using LN to increase tps and using Bitcoin as a settlement platform.

As I wrote from the start of this, Bitshares 2.0 has optimized the witness code so the CPU can scale to 100,000 TX/s, but not only are they apparently requiring on the order of LMAX's 1 ms network latency to achieve it, I also haven't read where they've modeled DoS attacks on the transaction propagation network at such high TX/s. Real-time systems are not only about average throughput but also about CIS (guaranteed reliability and continuous throughput). If you are sending your real-time payment through and the next 10 witnesses queued in the chosen order are DoS-attacked so they are unable to receive the transactions, then they can't complete their function. That is a fundamental problem that arises from using PoS as the mining method if you claim such high TX/s across the variable hardware and network capabilities of nodes (PoS designs claiming more conservative TX/s and block times are thus less likely to bump into these issues external to the speed of updating the ledger in the client code). They can adopt countermeasures, but it is going to impact the maximum TX/s rates to the downside, perhaps significantly.

I am not even confident they can maintain 100 TX/s under a DDoS attack on a real-world network today composed of a myriad of witness capabilities. Someone needs to do some modeling.
legendary
Activity: 3136
Merit: 1116
October 22, 2015, 10:39:12 AM
OK, bumped BTS down to 100 tx/s and left a caveat. If someone has a link to better info, I'll be glad to check it out.
legendary
Activity: 2044
Merit: 1005
October 22, 2015, 09:53:10 AM
In a controlled environment it's already demonstrated 100k tps. The bottleneck is network capability rather than the core code, which is what the test demonstrates. Once network capabilities scale up naturally, 100k becomes possible as demand sees fit.

You are obscuring that what you just wrote effectively means, "this is how many TX/s a fast server CPU can process and we eliminated any sense of a real network from the test".

Once network capabilities scale up naturally, 100k becomes possible as demand sees fit.

This is like saying it will get faster once we redesign our coin (or monopolize the DPOS witnesses with our own corporate-funded super hardware, because DPOS is really just a way to obscure that we are paying dividends to ourselves, analogous to how masternodes in Dash were ostensibly a way of obscuring that those huge interest fees were going to those who owned the most coins). Let's rather talk about what a coin can do now, today, in the real world of decentralized witnesses of varying capabilities.

Obscuring instamines and other means (cheap pre-sales of ProtoShares that were converted to Bitshares?) of having control over a large percentage of the coins, and then setting up a paradigm where coins can be parked to earn dividends. Hmmm. How dumb are we? Hey, more power to them if investors are gullible enough to buy it. But it all starts to fit together when analyzing why they would even think they could have uniform capability across all witnesses.

Your assumption about which witness is next to sign a block, which led to a dozen more, is incorrect, thus your derived assumptions are also incorrect. So you really have no claim about Bitshares and its TPS without fully understanding the concepts behind the tests and the feature itself.

If you would be kind enough, you are welcome to cite a reference document. I was working from the official description of DPOS at the official website. As I wrote, I will edit my post for corrections if anyone provides information. You have not yet provided any information; so far I have read only your (perhaps incorrect) spin on the matter.

Why not read the code?

Again, the bottleneck is in the consensus code, which has been optimized so that it is possible to do more than 100k tps; a Bitcoin controlled environment can't do this because of a bottleneck outside of network constraints. By leveraging LMAX technology and applying it to blockchains, they were able to increase efficiency in validating and signing blocks. Propagation is always an issue, which is where scaling up network parameters helps and is totally feasible, which multiple markets are betting on and will benefit from. Because there is no mining it is possible off the bat, and now it is optimized to create more tps. DPOS allows them to maximize decentralization while remaining anonymous, and even so, with Bitshares following regulatory rules there is less incentive for a regulation attack than with Bitcoin.

With fiberoptic internet would bitcoin be able to do 100k tps? No.

LMAX does 100k at 1 ms latency: http://www.infoq.com/presentations/LMAX

On the use of LMAX in BTS: https://bitshares.org/technology/industrial-performance-and-scalability/

Increasing network params will only help Bitcoin with the regulation attack, not scale up its tps as efficiently. Today BTC is restricted to 7 tps at 1 MB, so it's orders of magnitude off, and I'd argue that DPOS is still more decentralized than using LN to increase tps and using Bitcoin as a settlement platform.
sr. member
Activity: 420
Merit: 262
October 22, 2015, 07:02:05 AM
In a controlled environment it's already demonstrated 100k tps. The bottleneck is network capability rather than the core code, which is what the test demonstrates. Once network capabilities scale up naturally, 100k becomes possible as demand sees fit.

You are obscuring that what you just wrote effectively means, "this is how many TX/s a fast server CPU can process and we eliminated any sense of a real network from the test".

Once network capabilities scale up naturally, 100k becomes possible as demand sees fit.

This is like saying it will get faster once we redesign our coin (or monopolize the DPOS witnesses with our own corporate-funded super hardware, because DPOS is really just a way to obscure that we are paying dividends to ourselves, analogous to how masternodes in Dash were ostensibly a way of obscuring that those huge interest fees were going to those who owned the most coins). Let's rather talk about what a coin can do now, today, in the real world of decentralized witnesses of varying capabilities.

Obscuring instamines and other means (cheap pre-sales of ProtoShares that were converted to Bitshares?) of having control over a large percentage of the coins, and then setting up a paradigm where coins can be parked to earn dividends. Hmmm. How dumb are we? Hey, more power to them if investors are gullible enough to buy it. But it all starts to fit together when analyzing why they would even think they could have uniform capability across all witnesses.

Your assumption about which witness is next to sign a block, which led to a dozen more, is incorrect, thus your derived assumptions are also incorrect. So you really have no claim about Bitshares and its TPS without fully understanding the concepts behind the tests and the feature itself.

If you would be kind enough, you are welcome to cite a reference document. I was working from the official description of DPOS at the official website. As I wrote, I will edit my post for corrections if anyone provides information. You have not yet provided any information; so far I have read only your (perhaps incorrect) spin on the matter.
full member
Activity: 140
Merit: 100
October 22, 2015, 01:38:32 AM
That article seems to be big on press releases and very light on detail. I think the network effect of Ethereum and the Ethereum DAPP ecosystem will render BTC and current Altcoins redundant.

You don't NEED SIDECHAINS in Ethereum, so that Counterparty press release appears to be complete bollocks.

BURST and QORA both have Turing-complete ATs (automated transactions), capable of doing ANYTHING which ETH can do.

You really need to go and DYOR....

No, they don't. Why are you lying?

Go away, you're the only one lying. Not gonna talk to you anymore. JWinterm, we need a moderated thread.

If you're not lying, give me one shred of evidence. No, actually, you'll probably cook up one of your fake, meaningless infographics.

Yes, I thought it was moderated.
legendary
Activity: 2044
Merit: 1005
October 22, 2015, 12:51:29 AM
legendary
Activity: 2044
Merit: 1005
October 22, 2015, 12:46:20 AM
Where's MAID?

I think it deserves to be in this conversation...

It's not released, AFAIK. I am leaning towards removing both Syscoin (most advanced features not released) and Crypti (closed-source) as well. I'm skeptical MAID will ever see the light of day, honestly. Hasn't it been under development since, like, 2006?

SYS ... I don't get it ... what is it? What does it do? Show me, don't tell me; it's been a long time, dan-0.
MAID ... same ... what is it? I get some large-cap 'asset' that trades like wildfire, but why? And if Omni/Master are shutting down their exchange, well, that just doesn't bode well in the greater picture.
CRYPTI ... some asset called 'Sia' seems popular ... so at least they've created an asset within that platform ... but why has the wallet on Cryptsy been locked for about 15 months?

Point is ... enough talking ... get to creating a working exchange/platform with assets of real value trading on it.

CP is the only one walking the walk, AFAIK. NXT is a step behind, and it serves their interests to put some pressure on SYS.

The rest is mostly vapors in the ethere-sphere ... dan-0!
Check out the Shade release of Syscoin. You get a built-in decentralized exchange in the wallet.