
Topic: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? - page 19. (Read 95256 times)

sr. member
Activity: 420
Merit: 262
I think this is relevant to my project:

I have great respect for Martin Armstrong's work.

However, when it comes to the topic of Bitcoin, he seems to be a little too pessimistic, saying the government can shut it down anytime it wants.

I wrote him an email asking how he thinks the government could do this, but have received no reply to date.

Anyone with more info?

He says very little about Bitcoin, but he sees the matter from a pragmatic viewpoint: crypto currencies - which allow hiding money from the tax man - will be stopped by the government. Government will collude with the Bitcoin foundation, or simply direct them to do it, and the protocol will be modified to implement features that enable more efficient tracking of the money (but even in its current form BTC is a god-given gift for the tax man, as by definition everything is in the blockchain and traceable). The miners will cooperate too. In case the developers and miners don't comply - which is extremely unlikely - measures will be imposed at different levels such as the ISP and the OS (in the case of proprietary OSes like Windows and iOS) to crack down on Bitcoin. (Of course all naive libertarian attempts like Dash and Zerocash that try to implement privacy will be cracked down on too.)
Please note, there is no reason for government to intervene at this moment in time. Bitcoin is minuscule and insignificant in terms of market capitalization and volume, used by very few users (the 1 million active users are far fewer than a mediocre porn site has), so the government will step in only if for some reason it actually becomes popular, and at that time the government will act quickly and forcefully.

MA is pessimistic about private money surviving long term; it usually appears during a crisis and withers away, or ends up being taxed anyway. There's nothing new under the sun, as the saying goes.

He doesn't believe bitcoin will be any different.

"The idea that Bitcoin can circumvent government currencies and taxes I just do not buy. I suspect these people are licking their chops to rush in. That is my only concern. You cannot withdraw $3,000 in cash without them freaking out. They will an underground economy exist? I seriously doubt it."

Agreed with your and trollercoaster's responses. And btw, this is why I suggested to Zcash that they hedge their bets and also focus some of their efforts on the corporate applications of zk-snarks to privacy for public block chains with an optional viewkey for the government. This is absolutely needed by corporations if they are going to use block chain technology to interoperate more efficiently. Zcash is sitting on a gold mine of technology if they get some clarity into their strategy. I think perhaps they should have hired me. Anyway, I am off on other project/work now.

Let me add to both of your points that the global government is also taking form, but is not entirely organized yet. For example, the G20 has just recently pledged to share information and work together on tax cheating. It takes a while to get all these 100+ nations to cooperate, but realize the leverage the G20 has over international banking (e.g. they threatened to shut off the Philippines' OFW remittances if the Philippines didn't comply with the wishes of the USA, such as ending the bank secrecy law in the Philippines) and over the internet trunk lines (undersea) and other geopolitical leverage.

The government will be able to shut down Bitcoin for the reasons AltcoinUK has explained. Bitcoin is becoming centralized and controlled by a few people. For example, the Chinese mining cartel controls 67% of the hashrate and recently did a 51% attack on Bitcoin.

I have pretty much given up on the anarchistic, ideological aspect of crypto currency. I am focused on a smarter "anarchistic" ideology of making crypto currency popular and enabling netizens to express themselves more freely. Ultimately the battle will be political, so I am better off using my technological expertise to enable more popular freedoms in decentralized social networking than to focus on some impossible dream.

I have also recently shown that no decentralized crypto currency can ever exist. Sorry, there are lots of uninformed soundbite-mastery "experts" who persistently say otherwise, but they are wrong.



Governments would be pro Bitcoin because everything is being recorded in the blockchain

Correct. That's why Bitcoin is a God-given gift to the government and the tax man. That's why there are conspiracy theories - which I don't subscribe to - that Satoshi is an NSA creation.

Non sequitur. (plz don't be offended)

If Bitcoin is a God-given gift to the government, then it does not logically follow that Satoshi wouldn't be a DEEP STATE (global elite/NSA/CIA) operation.

On the other hand, I don't agree with the conspiracy theories that government will make Bitcoin big just to have a centralized place for tax collection, i.e. intentionally make Bitcoin very big as some theories advocate. Governments don't have to do that - they are implementing the electronic FIAT system anyway, very quickly in Europe, so they don't need Bitcoin; the electronic FIAT system is their central store of information. That's why I said that if for some miraculous reason BTC gets bigger then the government will step in, but not before.

Years ago I figured out why the global elite planted Bitcoin.

It undermines the nation-states, central banks, and existing banking systems. They could not accomplish that in a hidden way if they did it top-down as you claim.

(yet another reason that Zcash should take my advice and make sure they focus also on corporate applications of their technology)

The plan all along has been to discredit the existing nation-state system to move towards a world government system.

On the note of electronic FIAT - and it indicates IMHO how intelligent Armstrong is and how he sees the big picture - the other day he said that yes, all nations will implement the electronic FIAT system, but the US, because of the role of the US dollar in the world economy, won't be able to do that. Therefore, it will be a very messy situation. Now, that is very interesting. Electronic FIAT will be a big change around the world, with consequences for society and the economy. The sheep of Europe - like we are in the UK and all the others in the EU - will move to electronic FIAT. In the meantime the physical US dollar will still be in place. How does that work; will everyone just buy US dollars as Armstrong predicts (please note he predicts it mainly for different reasons)? But Armstrong also says that after 2020 the money flows to Asia. Why would it flow there if the US dollar is still the safe haven? Indeed a complicated issue.

He also stated the USA has the least chance of cancelling the dollar (not just cash) because the Bible Belt might secede from the USA, whereas Europe, Japan, etc. have done it very often.

What Armstrong is saying is the same as I have long stated, which is that the rest of the world will implode and then blame the strong dollar. It is this contagion of blame that will lead to a new Bretton Woods where a global government can be initiated. The point of Bitcoin and the sovereign debt crisis is to force the nation-states to their knees to beg for a globalized monetary union.



And btw, this is why I suggested to Zcash that they hedge their bets and also focus some of their efforts on the corporate applications of zk-snarks to privacy for public block chains with an optional viewkey for the government. This is absolutely needed by corporations if they are going to use block chain technology to interoperate more efficiently. Zcash is sitting on a gold mine of technology if they get some clarity into their strategy. I think perhaps they should have hired me. Anyway, I am off on other project/work now.

That was the reason I started to support the Gadgetcoin developers: they identified that businesses need a private implementation of the blockchain, because businesses simply can't operate with public blockchains. A public blockchain exposes all their trading and transaction information to the public, which is simply not an option for business operations (now the GDC devs are unhappy with me because of my IOTA fight, but that is another matter). I think you are quite right if you see there is a business case and your project can be successful in that area.

I am saddened to hear that the GDC affiliation turned slightly sour because of your questioning the ethics of IOTA. I would guess what they are upset about is being associated with any strife, because it might turn off corporations. I don't think you can do anything about speculation in coins. You have nothing to gain from fighting such a war. Instead be laser-focused on investing correctly, and on trying to find any serious and capable development that you want to invest in.

One of the problems that GDC and I face is economies-of-scale. We simply don't have enough full-time developers, and I am also ill. But I am going to redouble my efforts on both my health treatment and on my resolve to not post in forums and to stay focused on coding. The key in my mind is reaching a point where others invest in the ecosystem; then economies-of-scale become easy. So I am just trying to devise a clever way to jumpstart to that pole position with the resources I have (i.e. myself and enough cash to support my living expenses for several more months).

Let me add to both of your points that the global government is also taking form, but is not entirely organized yet. For example, the G20 has just recently pledged to share information and work together on tax cheating. It takes a while to get all these 100+ nations to cooperate, but realize the leverage the G20 has over international banking (e.g. they threatened to shut off the Philippines' OFW remittances if the Philippines didn't comply with the wishes of the USA, such as ending the bank secrecy law in the Philippines) and over the internet trunk lines (undersea) and other geopolitical leverage.

It always amazes me how powerless we are when it comes to money, and how quickly government and the money lobby can act. You remember, not long ago the Greeks voted and said no to austerity. The vote was on a Sunday. The following week their prime minister sat down with the Troika, and by Friday the prime minister had implemented more severe measures than those the voters had rejected just a few days before.
I think the Chinese, Europe and all the others will implement those money control measures in a heartbeat when they feel the Status Quo needs to be defended.

That is why I want to work on the problem insidiously. The people should taste freedom in the things they do normally on the internet, e.g. social networking. From there, the people are astute at finding all the opportunities and fighting for them. Enable the masses to get what they REALLY want. That is the key.



I'm only noticing that the "unlimited altcoin cloning" days are over and people are moving on to (future) projects that try to change or disrupt existing markets.

BOOM you hit it right on the head friend. sounds like you understand the level the game is at... come Build a BUBBLE
https://bitcointalksearch.org/topic/m.13739933

Very astute.

Btw, Bubble is sort of headed in the right direction, but I am thinking you lack marketing experience. I mean you lack the experience to be able to analyze the difference between an idea and a market (unless your marketing is to speculators and not to an adoption market, in which case I see the cleverness of your approach).

If you want to succeed in adoption markets, you might consider partnering with someone like me who has that experience. But then again, I am not that interested in partnering. I am more interested in interoperating, where each developer owns their own project in a mutual ecosystem. I don't like being responsible for someone else's coding/code. I prefer modules that are economically separated as well. Finding a well matched partner for co-founding a project is very difficult for me, because I am not socializing in real life with coders. I am far away on an island.
sr. member
Activity: 420
Merit: 262
Is there a rough timeline of your project?

I'll try to respond to this approximately next week. The first step is to stop posting on the forum, so I can get my head back into coding.

I need to do some analysis of the implementation issues for the broader scope that the project has transitioned to. I am no longer just creating a crypto coin; I am also creating a social networking site. So this appears to be an immense amount of implementation effort.

Note I did code up a more-than-rudimentary dating site (with roughly a few thousand LOC of client-side JavaScript) in roughly 2 months in March/April 2015, which I shut down due to not having a microtransactions funding model available to make the site scale economically. I am not going back in the dating site direction, but that should provide some indication that when I am feeling healthy my productivity is quite high for one coder.

The other problem I have been dealing with (besides the recent addiction to posting too much in the forum, which has to stop!) is that from roughly July forward my chronic illness worsened to the point where I became, let's say, almost entirely unproductive as a coder. I think this was due to overeating meat (since I originally thought a carb-free diet might be the solution to my 3+ year chronic illness), which in hindsight was the worst thing I could have done, because I am now speculating that my illness is some issue with the pancreas, gall bladder, and/or colon (which then fucks up the entire body, causing horrible symptoms that mimic Multiple Sclerosis or a neurological autoimmune disease). So as of the past 3 weeks, I have been taking 30+ grams of curcumin extract+piperine (in coconut oil/milk) daily. And just this past week, I have decided to eat broccoli with every meal (because I read it is necessary together with the curcumin to be most effective) and to greatly reduce my meat consumption as well.

I have had some better days where I felt very alert and mostly healthy, but I have also had some days where I still feel horrible and can't code effectively. The result is thus far inconclusive. I have noticed my jogging (although still very painful in my gut, and my legs often ache and feel weak) has improved such that I can regularly run twice daily (2 x 2.5 km) and make it to the end of the run without my abdomen hurting so badly that I have to stop. I have noticed that it appears I'm experiencing more constant pain now in my abdomen, but the pain has moved from near the bottom of my rib cage to a lower position, and it feels less like something I can't understand/describe and more like an injury. And I notice now when I run I can fight the pain (and that physical fighting with my body is also giving me a tiger-like, gruff/ornery attitude). My hope is this is a sign of healing. I have seen clinical studies where huge pancreatic cancer masses were eliminated with high-dose curcumin treatment. So I am hoping that what is going on inside my body now is that there are open wounds inside as a tumor is being shrunk.

Anyway, I really have no solid clinical diagnosis.

So my point is that if my health isn't normal, my productivity isn't normal. The reason is that when we code like we did when we were age 20, 30, and 40, we feel a rush of inspiration and we get very deeply engrossed in what we are doing. But when one has a headache, a body that wants to collapse into bed, a yucky feeling all over, and can't concentrate, then there is no inspiration and there is no way to get engrossed. I can try to explain it with an analogy. Imagine you were faced with the opportunity of fucking a Penthouse model but her vagina was lined with coarse-grit sandpaper. Or for those with a less vivid imagination, imagine you had to code during the worst flu you ever had, where you had a raging fever and were vomiting or sitting on the toilet every 15 minutes with diarrhoea. You simply were too distracted by your gut and throbbing head to accomplish any productive work.

I'll get back to you with a response next week or so if I can get my daily work regimen organized/disciplined and if my health (latest treatment regimen) continues on more or less an upturn.

Thanks.
sr. member
Activity: 420
Merit: 262
My private message:

plz kindly be aware of some facts:

https://bitcointalksearch.org/topic/m.13841923

And this should be relevant to me why?

I just want you to know that I am not spamming Ethereum, and that I am explaining technological information that many people do not know. Since I am the only person who is making this effort (at great cost to myself in terms of time lost from more productive activities that might more directly benefit me), it is necessary that I post more than others, because there are so many others posting who do not understand the technological issues.

Please do not misconstrue my effort as selfish, harmful, or against forum policy.

I also wanted you to know that my past posting, which you might have construed as offensive and spamming to the community (large red font, consternated tone, etc.), was factual. I have no idea really whether the Ethereum folks are sincere or corrupt. I just know they are not open/honest about the reality of the technological constraints. They talk technobabble but never really come out and state, in terms that people can grasp, what they have solved and not solved. They misdirect the community with hype.

Apologies to all of you for the noise level. I am also frustrated with the level of communication that has been necessary to convey important points to the community and to make those points accepted rather than dismissed as "loony".

One thought that sticks in my mind is what my former boss (a co-founder) told me in 1995: "if you are so smart, then why are you working here and not for your own company?" (This was the first and last time I tried to express my thoughts about priorities for new features for Painter, even if he was probably correct that my knowledge was inferior to the founders'.)

I left in 1995 and proceeded to make CoolPage by 1998 (while working from a nipa hut in squalor because my savings were depleted), which attained approximately 0.33% market share of the entire internet.

I think the same applies now. I just wanted you to understand what I tried to do and why, which was to promote the level of awareness of the facts.
legendary
Activity: 1098
Merit: 1000
Angel investor.
Is there a rough timeline of your project?
sr. member
Activity: 420
Merit: 262
Congratulations to the Monero team on your successful Hydrogen Helix release. I primarily believe in Bitcoin and strongly believe most altcoins are doomed to failure. Monero has a good chance of being an important exception.

https://github.com/monero-project/bitmonero/releases/tag/v0.9.0

Monero scales to 0 block size  Shocked

It is dangerous when you hand chess masters the keys to software design.
sr. member
Activity: 420
Merit: 262
...but it's really just bitcoin for me.  All the rest are wannabees IMO.  But if something comes along that can challenge bitcoin--legitimately--then more power to whatever that is.  If it's ethereum, well then it's got its work cut out for itself.  It's not going to be easy to dethrone btc.

The only coin that could dethrone Bitcoin would be one that had higher levels of adoption.

None of the altcoins have generated any adoption. Maybe Doge generated a few thousand actual users, but that is nothing compared to the tens of thousands of serious users of Bitcoin.

Estimates of millions of users for Bitcoin are delusional (they are based on wallet counts, for example from Coinbase, which don't account for wallets that were abandoned after originally being driven by affiliate payouts; Coinbase is burning up IPO money, going to the well three times, because its business model has failed). No coin has that yet.

You'll know it when you become aware of the Bitcoin killer. It will be so damn obvious due to the millions of users adopting it.
sr. member
Activity: 420
Merit: 262
The truth about Ethereum, i.e. borderline scam or at least technological incompetence:

Ignore the fud and the hype.  I'm long term excited about this project.

[...]

Even though I think eth will destroy the need and market cap of bitcoin (wait and see).  It won't happen today or tomorrow.  You are buying into a pump.  Wait until things are boring and then don't put off your purchase.

Eth is already better than bitcoin.  People just don't know it yet.  And the ecosystem is coming down the line at an insane rate.  Most popular alts are scams.

Why are you displaying your ignorance about technological issues that you are apparently incapable of comprehending? Even though you were in my Reddit thread where I explained it, you somehow are still under the delusion that Ethereum is not incompetent.  Huh

Ethereum is not better than any other scam. And it is a borderline scam or at least incompetence masked by technobabble from some young nerds who know some math and programming, but have limited capitulation to reality.

So why are you constantly making threads spreading pointless FUD about ethereum?

Yeah why are you doing that stoat?

And you are lying about my identity and have refused to retract your slander, when I am clearly not the two users you accused me of being and I have even shared my LinkedIn photo and identity.

Why can't you admit that Ethereum's developers suck and after $millions wasted, they still have not solved the most fundamental issue that must be solved in order to make scripting on a block chain work?

The technological challenge with a long-running script on a block chain is verification. The gas (and txn fees) are paid to the winner of the PoW block, not to all miners, but all miners (full nodes) have to endure the SAME cost of verification. Yet not all miners have the same hashrate, thus not all miners have the same expected income per block. Thus some miners recoup less of their verification costs than other miners. As I explained in greater detail, this forces mining to become 100% centralized in one miner with 100% hashrate.
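
To make the economics concrete, here is a toy numeric sketch (the numbers and the dropout rule are entirely my own illustrative assumptions, not anything specified by Ethereum): every full node pays the same per-block verification cost, while expected gas/fee income scales with hashrate share, so each round the smallest unprofitable miner gives up and the survivors' shares renormalize. With the assumed verification cost above half of the per-block fees, the process can only stop when a single miner remains.

Code:
/* Toy model (illustrative assumptions, not Ethereum's specification): each
 * full node pays the same per-block verification cost, while expected fee/gas
 * income is proportional to hashrate share. Each round the smallest
 * unprofitable miner drops out and the survivors' shares renormalize. */
#include <stdio.h>

#define N 5

int main(void) {
    double share[N] = {0.40, 0.25, 0.15, 0.12, 0.08}; /* assumed hashrate shares */
    int alive[N]    = {1, 1, 1, 1, 1};
    const double fees = 100.0;  /* total gas + txn fees per block (arbitrary units) */
    const double cost = 55.0;   /* verification cost per block, same for every node */

    for (;;) {
        double total = 0.0;
        for (int i = 0; i < N; i++) if (alive[i]) total += share[i];

        int worst = -1;
        for (int i = 0; i < N; i++) {
            if (!alive[i]) continue;
            double margin = (share[i] / total) * fees - cost;
            printf("miner %d: share %5.1f%%  margin %+7.2f\n",
                   i, 100.0 * share[i] / total, margin);
            if (margin < 0.0 && (worst < 0 || share[i] < share[worst]))
                worst = i;                 /* the smallest losing miner */
        }
        puts("---");
        if (worst < 0) break;              /* everyone remaining is profitable */
        alive[worst] = 0;                  /* the losing miner gives up */
    }
    return 0;
}

The endpoint obviously depends on the assumed cost-to-fee ratio, but the direction is the same whenever the smallest miners cannot cover the common verification cost.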

Ethereum is off on another tangent named Casper, with shards, consensus-by-betting, etc, which is another hopeless and futile attempt to solve a problem that CAN NOT BE SOLVED BECAUSE OF THE INVIOLABLE CAP THEOREM!

Ethereum will never solve this problem and remain decentralized. Never. Thus all the scripts and products being built on top of Ethereum are headed for failure when Ethereum fails to solve the scaling problem of verification in a decentralized manner, because centralized scripting is meaningless; we always had that already.

I have solved the problem because I realized verification MUST be centralized (due to the inviolable CAP theorem and the correct understanding that a 100% decentralized system can not solve the Byzantine Generals' Problem), and thus I instead designed a way to control the centralization of verification with decentralized PoW miners (because each user submits a PoW share with their txn, and because PoW mining is rendered UNprofitable for all parties).

So who will be the winner of everything? Me. Not Ethereum. Not to mention that my marketing plan is light years ahead of any altcoin's, because I will market directly to the millions of the masses and achieve millions of adoptions (and be the first coin to do so).

Look, I was there at the beginning, telling Charles (one of the guys who founded and organized the creation of Ethereum) on Skype that Vitalik's PoW algorithm could be parallelized and thus was not CPU-only, telling him that they could not solve the fundamental problem above, and telling him that they were going to raise too much $ with too many mouths to feed and still wouldn't solve the fundamental problems. Originally Charles was recruiting me to form this company, not Vitalik. But I balked and said I didn't want to raise all that money, and I didn't want to start something until I was sure I had solved all the fundamental issues. If you don't believe me, go ask Charles.

All the gory details about Ethereum's technical incompetence are here:

https://www.reddit.com/r/ethtrader/comments/42rvm3/truth_about_ethereum_is_being_banned_at/

Enjoy the Ethereum pump while it is hot and while people are ignorant of the truth about the technical incompetence of the Ethereum developers. Eventually the truth will come out and especially when my white papers and coin are released.



Ethereum is off on another tangent named Casper, with shards, consensus-by-betting, etc, which is another hopeless and futile attempt to solve a problem that CAN NOT BE SOLVED BECAUSE OF THE INVIOLABLE CAP THEOREM!

Casper adds Game theory into the mix. You can fool lightweight nodes, but you will lose a lot of money after that. Security deposits solve a lot of problems in real world.

Proof-of-cheating (or by any other name) backed by deposits is a game theory that leads to centralization. Think it out. It is not really that hard to show. It has flaws analogous to those of PoS.

Edit: for n00bs, note that if proof-of-cheating with deposits in Casper (along with consensus-by-betting) relies on some limited number of deposit-holding nodes to verify the block chain for us, then those masternodes have the same economic problem that I explained in my prior post, and thus the one with the most income takes all, i.e. 100% centralization. There is simply no way to avoid centralization of verification. The solution is the one I have designed and explained, and which Ethereum is not implementing.



Let's start this by making myself a little less popular, saying: the altcoin scene will die in 2016

It is quite possible that the altcoin market will be devastated by a potential decline of Bitcoin to < $150 and perhaps well below $100, because we have an unexpected global contagion coming that will be worse than 2008 when everything crashed. As we all know, when Bitcoin's price catches a flu, altcoins' prices go into comas.


Are there any exceptions?


Yes, there are!

Coins that offer unique decentralized services, "Blockchain 2.0" or other disrupting technology will be the buy-and-holds for 2016.

Some examples?


- Ethereum | Decentralized software & smart contracts on the blockchain
- Factom | Honesty to record-keeping
- Voxelus | Virtual Reality without coding
- Radium | Decentralized Services on the smartchain
- MaidSafe | Crowd-sourced internet
- Synereo (AMP) | Decentralized social network
- Sia | Decentralized storage

All those coins are doomed to fail due to insoluble fundamental technological issues, which render those coins entirely useless:

https://bitcointalksearch.org/topic/m.13842262
https://bitcointalksearch.org/topic/m.13833591
https://bitcointalksearch.org/topic/m.13043602 (I will be responding to AlanX's post soon)



So far I see the ethereum blockchain and consensus protocol working fine.

It hasn't been scaled yet. Bitcoin's scalepocalypse will pale in comparison to Ethereum's doom in the wild. Essentially what Ethereum is designing with Casper is a technobabble wrapper around centralized verification, because either they know that verification can't be decentralized (as I have explained), or they are determined to delude themselves otherwise (with the result being that the centralization occurs anyway).

With centralization, Ethereum can scale, except note that this will be viewed as a failure by the market, unless the verification centralization can be hidden behind Sybil attacks on the verification nodes (meaning no one can prove that 1000 nodes aren't controlled by the same entity). I have a strong suspicion that is why Ethereum is being funded by Peter Thiel and other banksters, because they understand Ethereum is a way for them to control without being detected. Satoshi prevented this outcome in Bitcoin by setting the maximum block size to 1MB, which thus restricted verification from centralizing entirely (yet it will still be impossible to prevent Bitcoin Classic from centralizing due to the other economics of profitable PoW mining).

But centralization always leads to failure. So ultimately this will fail sooner or later.

And what I see from you is just a load of wild claims.

That is because you are a n00b and you can't understand the technological arguments. The points I have made are not wild at all. Do you realize I was probably the first person to predict Bitcoin's scalepocalypse, in 2013, as ArticMine graciously admits today:

I introduced this concept in 2013 in my thread Spiraling Transaction Fees and I nailed the block size as the fundamental issue in my last post in that 2013 thread.

Seems, stoat, you had no clue how long I have been on this forum doing serious technological research.
sr. member
Activity: 420
Merit: 262
The first draft specification for my Shazam encryption function is complete and will be deployed in my upcoming (much improved over Scrypt and my prior design from 2013) sequential memory-hard PoW hash function (note Shazam is a simple tweak of ChaCha, so it isn't like I invented much for Shazam, but I did need to analyze and assimilate several issues as stated in the specification ... also included is my formerly unpublished security analysis of ARX from 2013):

Code:
/*
Shazam is a fast 1024-bit (128B) encryption function. Shazam has the following
attributes— which are required[1] for use in constructing sequential memory-hard
functions¹; and which are not sufficient to make Shazam a secure stream cipher
nor a cryptographic hash function[2]:

  1. The outputs are uniformly distributed.
  2. Per instance computation can’t be significantly accelerated by more
     internal parallelism than can be exploited on CPUs nor by precomputation of
     a limited-space data structure.
  3. Computation of any segment of the output can’t be significantly faster
     than computing the entire output.
  4. Maximizes the ratio of the output length to the execution speed.

For security against the structure around the matrix diagonal which passes
through[3] the Salsa20 and ChaCha block function, the first use of Shazam in a
chain of hashes should employ input constants[4].

ChaCha[5] seems the best fit to these requirements; and compared to Salsa20 (and its
impressive visualized diffusion[8]), ChaCha has 50% greater bit diffusion[6],
updating each 32-bit doubleword twice per quarter-round, equivalent security yet
requiring one less round[9], and equivalent or faster per-round execution speed.
Same as for Salsa20, each ChaCha quarter-round is a confusion-and-diffusion[10]
block function that employs (48 per round, i.e. 16 each of) 32-bit
add-rotate-xor (ARX) operations.

Salsa20 alternates row and column rounds which required a slow matrix transpose
between rounds (i.e. swapping rows and columns across the diagonal) for naive
SIMD vector implementations. ChaCha incorporates an optimization[7] first
discovered for Salsa20[11] that instead rotates each column (except the first)
by multiples of the 32-bit doublewords; which is a faster operation on SIMD
registers than swapping. The slow inverse mapping to generate the final
output[11] is eliminated in ChaCha[7].

Rotate operations on SIMD registers are typically only fast for multiples of
8-bits (one byte). Although ChaCha has one each of (multiple of a byte) 16-bit
and 8-bit rotate operations that Salsa20 didn’t, the other two rotate operations
per quarter-round are (12-bit and 7-bit which are) not multiples of a byte. To
maximize the ratio in requirement #4 and enable all rotate operations to be a
multiple of a byte, Shazam widens ChaCha from 512-bit to 1024-bit by increasing
the 32-bit doublewords to 64-bit quadwords. Thus the chosen rotate operations
are 32-bit, 24-bit, 16-bit, and 8-bit per quarter-round. Most recent mobile ARM
processors have the NEON SIMD feature which provides sixteen 128-bit SIMD
registers comprising two 64-bit quadwords. Thus two instances of Shazam per core
(or per hyperthread for Intel CPUs) can be executed in parallel. NEON provides
the VTBL instruction[12] for (multiple of a byte) rotates on 64-bit quadwords.
ARM’s Advanced SIMD doubles the number of 128-bit SIMD registers and Intel’s
AVX2 doubles the SIMD registers’ width to 256-bit; and thus can execute in
parallel four instances of Shazam per core (or per hyperthread).

The widening to 256-bit registers can be implemented on AVX2 for column rotates
(of the four 64-bit quadwords) with a single instruction per column. On ARM NEON
the maximum register size is 128-bit so each column rotate can’t be accomplished
with one instruction because the VTBL instruction has only a 64-bit quadword
output. The third column rotate can be accomplished with a single VSWP
instruction, and the second and fourth columns each with two VSWP instructions.
Note the definition of the terms ‘quadword’ and ‘doubleword’ are different for
AVX2 and NEON; and this document adopts the AVX2 definition where quadwords are
four 16-bit words and doublewords are two 16-bit words.

________________________________________________________________________________
¹ A sequential memory-hard function is typically one-way and has uniformly
  distributed, constant length outputs; and is a non-invertible hash function
  only if the input is variable-length because variable-length input is the
  definition of a hash function.

ARX Security

ARX operations are employed in some block encryption algorithms because they
are relatively fast in software on general-purpose CPUs, offer reasonable
performance on dedicated hardware circuits, and also run in constant time and
are therefore immune to timing attacks.

Rotational cryptanalysis attempts to attack encryption functions that employ
ARX operations. Salsa20 and ChaCha employ input constants to defeat such
attacks[3].

Addition and multiplication modulo (2ⁿ-1) diffuse through high bits but set low
bits to 0. Without shuffles or rotation permutation to diffuse changes from high
to low bits, addition and multiplication modulo (2ⁿ-1) can be broken with low
complexity working from the low to the high bits[13].

The overflow carry bit, i.e. addition modulo ∞ minus addition modulo (2ⁿ-1),
obtains the value 0 or 1 with equal probability, thus addition modulo (2ⁿ-1) is
discontinuous—i.e. defeats linearity over the ring Z/2ⁿ[17]—because the carry
is 1 in half of the instances[14] and defeats linearity over the ring Z/2[16]
because the low bit of both operands is 1 in one-fourth of the instances.

The number of overflow high bits in multiplication modulo ∞ minus multiplication
modulo (2ⁿ-1) depends on the highest set bits of the operands, thus
multiplication modulo (2ⁿ-1) defeats linearity over the range of rings Z/2 to
Z/2ⁿ.

Logical exclusive-or defeats linearity over the ring Z/2ⁿ always[16] because it
is not a linear function operator.

Each multiplication modulo ∞ amplifies the amount of diffusion and confusion
provided by each addition. For example, multiplying any number by 23 is
equivalent to the number multiplied by 16 added to the number multiplied by 4
added to the number multiplied by 2 added to the number. This is recursive since
multiplying the number by 4 is equivalent to the number multiplied by 2 added to
the number multiplied by 2. Addition of a number with itself is equivalent to a
1-bit left shift or multiplication by 2. Multiplying any variable number by
another variable number creates additional confusion.

Multiplication defeats rotational cryptanalysis[15] because unlike for
addition, rotation of the multiplication of two operands never distributes over
the operands, i.e. is not equal to the multiplication of the rotated operands.
A proof is that rotation is equivalent to the exclusive-or of left and right
shifts. Left and right shifts are equivalent to multiplication and division by a
factor of 2, which don’t distribute over multiplication, e.g.
(8 × 8) × 2 ≠ (8 × 2) × (8 × 2) and (8 × 8) ÷ 2 ≠ (8 ÷ 2) × (8 ÷ 2). Addition
modulo ∞ is always distributive over rotation[17] because addition distributes
over multiplication and division e.g. (8 + 8) ÷ 2 = (8 ÷ 2) + (8 ÷ 2). Due to
the aforementioned non-linearity over Z/2ⁿ due to carry, addition modulo (2ⁿ-1)
is only distributive over rotation with a probability 1/4 up to 3/8 depending on
the relative number of bits of rotation[15][18].

However, multiplication modulo (2ⁿ-1) sets all low bits to 0,
orders-of-magnitude more frequently than addition modulo (2ⁿ-1)— a degenerate
result that squashes diffusion and confusion.

References

[1] C. Percival, “Stronger Key Derivation Via Sequential Memory-Hard Functions”,
    pg. 9: http://www.tarsnap.com/scrypt/scrypt.pdf#page=9

[2] Stream cipher security considerations for Salsa20 aren’t required, c.f.
    D. Bernstein, “Salsa20 security”: https://cr.yp.to/snuffle/security.pdf

    Cryptographic hash function security considerations aren’t required[1].

[3] D. Bernstein, “Salsa20 security”, §4 Notes on the diagonal constants, pg. 4:
    https://cr.yp.to/snuffle/security.pdf#page=4

    J. Hernandez-Castro et al, “On the Salsa20 Core Function”, §4 Conclusions,
    pg. 6: https://www.iacr.org/archive/fse2008/50860470/50860470.pdf#page=6

[4] Y. Nir et al, “ChaCha20 and Poly1305 for IETF Protocols”, IRTF RFC 7539,
    §2.3. The ChaCha20 Block Function, pg. 7:
    https://tools.ietf.org/html/rfc7539#page-7

[5] D. Bernstein, “ChaCha, a variant of Salsa20”:
    http://cr.yp.to/chacha/chacha-20080128.pdf

[6] D. Bernstein, “ChaCha, a variant of Salsa20”, pg. 3:
    http://cr.yp.to/chacha/chacha-20080128.pdf#page=3

[7] D. Bernstein, “ChaCha, a variant of Salsa20”, pg. 5:
    http://cr.yp.to/chacha/chacha-20080128.pdf#page=5

[8] https://cr.yp.to/snuffle/diffusion.html

[9] https://en.wikipedia.org/w/index.php?title=Salsa20&oldid=703900109#ChaCha_variant

[10] http://en.wikipedia.org/wiki/Confusion_and_diffusion

     http://www.theamazingking.com/crypto-block.php

     H. Feistel, “Cryptography and Computer Privacy”, Scientific American,
     Vol. 228, No. 5, 1973:
     http://apprendre-en-ligne.net/crypto/bibliotheque/feistel/

[11] P. Mabin Joseph et al, “Exploiting SIMD Instructions in Modern
     Microprocessors to Optimize the Performance of Stream Ciphers”, IJCNIS
     Vol. 5, No. 6, pp. 56-66, §C. Salsa 20/12, pg. 61:
     http://www.mecs-press.org/ijcnis/ijcnis-v5-n6/IJCNIS-V5-N6-8.pdf#page=6

[12] “ARM Compiler toolchain Assembler Reference”, §4.4.9 VTBL, VTBX, pg. 224:
     http://infocenter.arm.com/help/topic/com.arm.doc.dui0489f/DUI0489F_arm_assembler_reference.pdf#page=224

     T. Terriberry, “SIMD Assembly Tutorial: ARM NEON”,
     §Byte Permute Instructions, pg. 53:
     http://people.xiph.org/~tterribe/daala/neon_tutorial.pdf#page=53

[13] D. Khovratovich et al, “Rotational Cryptanalysis of ARX”,
     §2 Related Work, pg. 2:
     https://www.iacr.org/archive/fse2010/61470339/61470339.pdf#page=2

[14] D. Khovratovich et al, “Rotational Cryptanalysis of ARX”,
     §6 Cryptanalysis of generic AR systems, pg. 10:
     https://www.iacr.org/archive/fse2010/61470339/61470339.pdf#page=10

[15] D. Khovratovich et al, “Rotational Cryptanalysis of ARX”,
     §3 Review of Rotational Cryptanalysis, pg. 3:
     https://www.iacr.org/archive/fse2010/61470339/61470339.pdf#page=3

[16] D. Bernstein, “Salsa20 design”, §2 Operations:
     https://cr.yp.to/snuffle/design.pdf

[17] M. Daum, “Cryptanalysis of Hash Functions of the MD4-Family”,
     §4.1 Links between Different Kinds of Operations, pg. 40:
     www-brs.ub.ruhr-uni-bochum.de/netahtml/HSS/Diss/DaumMagnus/diss.pdf#page=48

[18] M. Daum, “Cryptanalysis of Hash Functions of the MD4-Family”,
     §4.1.3 Modular Additions and Bit Rotations, Corollary 4.12, pg. 47:
     www-brs.ub.ruhr-uni-bochum.de/netahtml/HSS/Diss/DaumMagnus/diss.pdf#page=55
*/
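
For readers who want to see the widening described above in concrete form, here is a minimal sketch of one quarter-round per that description: ChaCha's quarter-round lifted from 32-bit doublewords to 64-bit quadwords, with the rotation amounts changed to the byte multiples 32, 24, 16, and 8 so that SIMD byte-permute instructions (e.g. NEON VTBL) can implement them. The function name and the exact placement of the rotation amounts are my assumptions based on the ChaCha structure; this is not the released Shazam code.

Code:
/* Sketch (assumption) of a Shazam-style quarter-round: ChaCha's quarter-round
 * widened to 64-bit quadwords, with rotations restricted to byte multiples. */
#include <stdint.h>

static inline uint64_t rotl64(uint64_t x, unsigned r) {
    return (x << r) | (x >> (64 - r));        /* r is always 8..32 here, never 0 or 64 */
}

static inline void quarter_round64(uint64_t *a, uint64_t *b,
                                   uint64_t *c, uint64_t *d) {
    *a += *b; *d ^= *a; *d = rotl64(*d, 32);  /* doubles ChaCha's 16-bit rotate */
    *c += *d; *b ^= *c; *b = rotl64(*b, 24);  /* doubles ChaCha's 12-bit rotate */
    *a += *b; *d ^= *a; *d = rotl64(*d, 16);  /* doubles ChaCha's 8-bit rotate */
    *c += *d; *b ^= *c; *b = rotl64(*b, 8);   /* replaces ChaCha's 7-bit rotate with a byte multiple */
}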
legendary
Activity: 1428
Merit: 1030
I suspect
Benthach and spoetnik are both tptbneedwar alt accounts.  Just 3 different personalities of the same unhinged loon

This is slander against my reputation. If you don't retract this, I will put negative trust on your profile.

I am absolutely not either of those accounts. I demand a retraction from you.

I have pointed out the engineering reasons that Ethereum is fundamentally flawed. I have nothing to do with the weak arguments of those two clowns.

Yeah Anonymint is merely grumpy, disagreeable and egomaniacal, but he makes good technical points to make up for it. Spoetnik is a clown, agreed, don't know about the other one.
sr. member
Activity: 420
Merit: 262
I suspect
Benthach and spoetnik are both tptbneedwar alt accounts.  Just 3 different personalities of the same unhinged loon

This is slander against my reputation. If you don't retract this, I will put negative trust on your profile.

I am absolutely not either of those accounts. I demand a retraction from you.

I have pointed out the engineering reasons that Ethereum is fundamentally flawed. I have nothing to do with the weak arguments of those two clowns.
legendary
Activity: 990
Merit: 1108
I do hope you deduced that by 'memory space' I mean the size of the memory allocated to the random access data structure of the PoW algorithm.

Oops; I didn't realize you were talking about a tiny Cuckoo Cycle instance. I normally think of sizes in the dozens or hundreds of MB. Apologies for the misunderstanding.

In that case you are right that most memory accesses will coalesce into the same row/page.
But when you start running multiple instances they may occupy different pages in the same bank
and conflict with each other.

The 2^-14 applies when most of your physical memory is used for Cuckoo Cycle,
as might be tried in an ASIC setup.

Quote
I had PM'ed you to collaborate on PoW algorithms and alerted you to my new post thinking that in the past you've always been amicable and a person to collaborate productively with.

I'm just keen to correct what I perceive as misrepresentation or false claims about my work.
sr. member
Activity: 420
Merit: 262
Example, for a 128KB memory space with 32 KB memory banks

Hard to argue with someone who either confuses terms or whose numbers are way off.
You have at best a few hundred memory banks.

Quoting from http://www.futurechips.org/chip-design-for-all/what-every-programmer-should-know-about-the-memory-system.html

Banks

To reduce access latency, memory is split into multiple equal-sized units called banks. Most DRAM chips today have 8 to 16 banks.

...

A memory bank can only service one request at a time. Any other accesses to the same bank must wait for the previous access to complete, known as a bank-conflict. In contrast, memory access to different banks can proceed in parallel (known as bank-level parallelism).

Row-Buffer

Each DRAM bank has one row-buffer, a structure which provides access to the page which is open at the bank. Before a memory location can be read, the entire page containing that memory location is opened and read into the row buffer. The page stays in the row buffer until it is explicitly closed. If an access to the open page arrives at the bank, it can be serviced immediately from the row buffer within a single memory cycle. This scenario is called a row-buffer hit (typically less than ten processor cycles). However, if an access to another row arrives, the current row must be closed and the new row must be opened before the request can be serviced. This is called a row-buffer conflict. A row-buffer conflict incurs substantial delay in DRAM (typically 70+ processor cycles).

I have already explained to you that the page size is 4KB to 16KB according to one source, and I made the assumption (just for a hypothetical example) that maybe it could be as high as 32KB in a specially designed memory setup for an ASIC. And I stated that I don't know what the implications are of making the size larger. I did use the word 'bank' instead of 'page', but I clarified for you in the prior post that I meant 'page' (see quote below), and that should have been evident from the link I had provided (in the post you are quoting above), which discussed memory pages as the unit of relevance to latency (which I guess you apparently didn't bother to read).

Thus again, what I wrote before was correct: if the memory space is 128KB and the page size is 32KB, then the probability is not 2^-14. Sheesh.

What number is far off? Even if we take the page size to be 4KB, that is not going to be anywhere near your 2^-14 nonsense.

The number of memory banks is irrelevant to the probability of coalescing multiple accesses into one scheduled latency window. What is relevant is the ratio of the page size to the memory space (and the rate of accesses relative to the latency window). Duh!
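
To put numbers on that ratio, here is a back-of-envelope sketch (my own simplifying assumptions: accesses are uniformly random and the memory space is an exact multiple of the page size): the chance that a second random access lands in the currently open page is simply page_size / memory_space, so 32KB/128KB gives roughly 1/4 and 4KB/128KB roughly 1/32, while a figure near 2^-14 only appears once the random-access data structure spans on the order of a hundred MB.

Code:
/* Back-of-envelope sketch (assumed: uniformly random accesses, memory space an
 * exact multiple of the page size): probability that a random access falls in
 * the currently open page is page_size / memory_space. */
#include <stdio.h>

static double same_page_probability(double page_bytes, double space_bytes) {
    return page_bytes / space_bytes;
}

int main(void) {
    printf("32KB page, 128KB space: %g\n",
           same_page_probability(32 * 1024.0, 128 * 1024.0));        /* 0.25   */
    printf(" 4KB page, 128KB space: %g\n",
           same_page_probability(4 * 1024.0, 128 * 1024.0));         /* ~0.031 */
    printf(" 8KB page, 128MB space: %g\n",
           same_page_probability(8 * 1024.0, 128.0 * 1024 * 1024));  /* ~2^-14, illustrative large-instance parameters */
    return 0;
}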

I do hope you deduced that by 'memory space' I mean the size of the memory allocated to the random access data structure of the PoW algorithm.

The page size and the row buffer size are equivalent. And the fact that only one page (row) per bank can be accessed synchronously is irrelevant!

Now what is that you are slobbering about?

(next time before you start to think you deserve to act like a pompous condescending asshole, at least make sure you have your logic correct)

I had PM'ed you to collaborate on PoW algorithms and alerted you to my new post, thinking that in the past you've always been amicable and a person to collaborate productively with. I don't know wtf happened to your attitude lately. It seems that ever since I stated upthread some alternatives to your Cuckoo PoW, you've decided you need to hate on me. What is up with that? Did you really think you were going to get rich or gain massive fame from a PoW algorithm? Geez man, we have bigger issues to deal with. That is just one cog in the wheel. It isn't worth destroying friendships over. I thought of you as friendly, but not any more.
sr. member
Activity: 420
Merit: 262
If you think about what gives a currency its value independent of any FX exchange, it is that the level of production offered for sale in that currency increases, so that via competition more is offered at a lower currency price, and thus the value of the currency has increased. Thus our goal is to get more users offering more things for sale in the currency.
sr. member
Activity: 420
Merit: 262
I have come to the conclusion that we will all stab each other to death when faced with the choice between that and applauding/cheering each other (working together).

It is the nature of men. Find leverage. Seek. Destroy. Pretend to be part of a team while it serves one's interests but only while it does.

Men are damn competitive.
sr. member
Activity: 420
Merit: 262
This make no sense to me. When all your memory banks are already busy switching rows on every
(random) memory access, then every additional PoW instance you run will just slow things down.

The bolded statement is not correct in any case. Threads are cheap on the GPU. It is memory bandwidth that is the bound. Adding more instances and/or more per-instance parallelism (if the PoW proving function exhibits per-instance parallelism) are both valid means to increase throughput until the memory bandwidth bound is reached. Adding instances doesn't slow down the performance of each instance unless the memory bandwidth bound has been reached (regardless of whether the memory spaces of separate instances are interleaved or not).
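
A minimal sketch of that saturation point, with made-up numbers (both rates below are assumptions chosen for illustration, not measurements of any particular GPU): total throughput grows linearly with the number of instances until the memory-bandwidth ceiling is reached, and only beyond that point do additional instances dilute per-instance speed.

Code:
/* Sketch with assumed numbers: total random-access throughput scales with the
 * number of PoW instances until the memory-bandwidth ceiling is reached. */
#include <stdio.h>

int main(void) {
    const double per_instance = 15e6;  /* accesses/sec one instance can issue (assumed)      */
    const double ceiling      = 120e6; /* accesses/sec the memory system can serve (assumed) */

    for (int n = 1; n <= 12; n++) {
        double demanded = n * per_instance;
        double served   = demanded < ceiling ? demanded : ceiling;  /* bandwidth bound */
        printf("%2d instances: total %6.1fM acc/s, per instance %5.1fM acc/s\n",
               n, served / 1e6, served / n / 1e6);
    }
    return 0;
}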
sr. member
Activity: 420
Merit: 262
If the odds are great enough then I agree, and that is why I said increasing the size of memory space helps. Example, for a 128KB memory space with 32 KB memory banks then the odds will only be roughly 1/4 (actually the computation is more complex than that), not 2^-14.

No, no, no. Banks operate independently of each other.

Why do you say 'no' when I also wrote that the alternative possibility is that banks are independent:

(or can schedule accesses to more than one memory bank simultaneously? ... I read DRAM gets faster because of increasing parallelism)



But each bank can only have one of its 2^14=16384 rows active at any time.

My point remains that if there is parallelism in the memory access (whether it be coalescing accesses to the same bank/row, or for example 32K simultaneous accesses to 32K independent banks), then by employing the huge number of threads in the GPU (ditto an ASIC), the effective latency of the memory due to parallelism (not the latency as seen per thread) drops until the memory bandwidth bound is reached.

However, there might be an important distinction, in terms of electricity consumption, between accesses that are coalesced versus accesses spread across simultaneously accessed (and thus more than one energized) memory banks (rows of the banks). Yet I think the DRAM power consumption is always much less than the computation's, so as I said, unless the computation portion (e.g. the hash function employed) can be made insignificant, electricity consumption will be lower on the ASIC. Still waiting to see what you find out when you measure Cuckoo with a Kill-A-Watt meter.

Why did you claim that memory latency is not very high on the GPU? Did you not see the references I cited? By not replying to my point on that, do you mean you agree with what I wrote about you confusing latency per sequential access with latency under parallelism?



Edit: I was conflating 'bank' with 'page'. I meant page, since I think I mentioned 4KB and it was also mentioned in the link I provided:

http://www.chipestimate.com/techtalk.php?d=2011-11-22

I hope I didn't make another error in this corrected statement. It is late and I am rushing.

Quote from that link:

DDR DRAM requires a delay of tRCD between activating a page in DRAM and the first access to that page. At a minimum, the controller should store enough transactions so that a new transaction entering the queue would issue it's activate command immediately and then be delayed by execution of previously accepted transactions by at least tRCD of the DRAM.

And note:

The size of a typical page is between 4K to 16K. In theory, this size is independent of the OS pages which are typically 4KB each.

Thus again, what I wrote before was correct: if the memory space is 128KB and the page size is 32KB, then the probability is not 2^-14. Sheesh.
legendary
Activity: 990
Merit: 1108
If the odds are great enough then I agree, and that is why I said increasing the size of memory space helps. Example, for a 128KB memory space with 32 KB memory banks then the odds will only be roughly 1/4 (actually the computation is more complex than that), not 2^-14.

No, no, no. Banks operate independently of each other.
But each bank can only have one of its 2^14=16384 rows active at any time.
sr. member
Activity: 420
Merit: 262
However, what a GPU (which starts with 4 - 10X worse main memory latency than CPUs)

Where do you get those numbers? What I can measure is that a GPU has a 5x higher throughput
of random memory accesses. I don't know to what extent that is due to more memory banks in the GPU
but that makes it hard to believe your numbers.

From my old rough draft:

and especially an ASIC will do to get better DRAM amortization (if not also lower electricity consumption due to less latency) is run dozens or hundreds of instances of the proving algorithm with the memory spaces interleaved such that the latencies are combined and amortized over all instances, so that the effective latency drops (because reading from the same memory bank of DRAM is latency free if multiple accesses within the same bank are combined into the same transaction).

This make no sense to me. When all your memory banks are already busy switching rows on every
(random) memory access, then every additional PoW instance you run will just slow things down.
You cannot combine multiple random accesses because the odds of them being in the same row
is around 2^-14 (number of rows).

If the odds are great enough then I agree, and that is why I said increasing the size of memory space helps. Example, for a 128KB memory space with 32 KB memory banks then the odds will only be roughly 1/4 (actually the computation is more complex than that), not 2^-14.

I am not an expert on the size of memory banks or the implications of increasing them.
legendary
Activity: 990
Merit: 1108
However, what a GPU (which starts with 4 - 10X worse main memory latency than CPUs)

Where do you get those numbers? What I can measure is that a GPU has a 5x higher throughput
of random memory accesses. I don't know to what extent that is due to more memory banks in the GPU
but that makes it hard to believe your numbers.

Quote
and especially an ASIC will do to get better DRAM amortization (if not also lower electricity consumption due to less latency) is run dozens or hundreds of instances of the proving algorithm with the memory spaces interleaved such that the latencies are combined and amortized over all instances, so that the effective latency drops (because reading from the same memory bank of DRAM is latency free if multiple accesses within the same bank are combined into the same transaction).

This make no sense to me. When all your memory banks are already busy switching rows on every
(random) memory access, then every additional PoW instance you run will just slow things down.
You cannot combine multiple random accesses because the odds of them being in the same row
is around 2^-14 (number of rows).
sr. member
Activity: 420
Merit: 262
My (age 17 in May) high-IQ daughter has confirmed that my plan is correct, and that if I can implement it then I do have a shot at replacing Facebook for her generation.

She is very excited to promote my new site to her 10,000+ Facebook friends.

My daughter has Fb friends in every country of the world.