Topic: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | Community Takeover - page 14

newbie
Activity: 85
Merit: 0
Pre-release fatigue is starting to get to me, so I am setting aside less important things in order to get the core release done.

As it happens, today my early work on the kopach GPU miner led me to find and eliminate a bunch of problems relating to mining work RPC and processing, so in a way the decision made itself: the core is getting very close to usable, and I think people will be happy enough, for a start, to have the upcoming hardfork and protocol improvements running. Miners won't mind so much if they have to use the podctl CLI client to send their coins to exchanges, so we can slate finishing this along with getting the GPU miner done. I'll put together some simple instructions on how to do basic operations using the CLI.

So, I am completing the final parts as follows, in this order:

  • Finish changing the RPC (mainly responses) to incorporate the new protocol features
  • Test mining with ccminer/sgminer/bfgminer on all possible algorithms; ensure correct blocks are submitted and all that
  • Make sure the wallet is working; change any RPC features related to protocol changes (if any; I don't think there are)
  • Package releases for easy, friendly installation - they already auto-configure. There will be binary installers for Arch, debs and rpms; I will try to find the simplest way to make a Windows installer too, and Mac is a little simpler
  • Update parallelcoin.info with the new branding, the new protocol features and a roadmap, and maybe a draft white paper
  • Finalise the hard fork height
  • Create a new ANN thread with info about the protocol changes, where to get the new clients, and when users have to upgrade in order to stay on the main chain


There are a few other small things to note. The simple Whirlpool 256-bit hash and the GOST Stribog 256-bit hash have no existing GPU or ASIC miners. So of course we will be filling the office with every piece of hardware capable of hashing, even my partly broken old mobile phones - I have already confirmed that it is possible to run the node inside Termux on Android, so this is a possibility for anyone. Also, because it is not consensus related, I can (*after the release*) add multi-algo switching to the built-in CPU miner. Tonight I changed it so it can switch algorithms while running, and it probably won't take much more: add a benchmark function and use it to select the algorithm with the highest odds of finding a solution. Not a big priority, but a relatively easy task to complete in more relaxed conditions.

It is pretty rare for CPU mining to be at all practical. But due to the design of the system, every algorithm is set to win an equal share of blocks, so 2/9ths of the blocks will go to those mining these two CPU-only algorithms. I suppose this may induce someone to build kernels, at least for ccminer/sgminer - I know both algorithms have kernels already, just not put into place.
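To put a number on that claim (my own back-of-envelope arithmetic, not anything from the consensus code): with each algorithm's difficulty independently regulated toward an equal share of blocks,

```latex
% n = 9 algorithms in total, k = 2 of them CPU-only (Whirlpool, GOST)
\[
  \text{share}_{\text{CPU-only}} = \frac{k}{n} = \frac{2}{9} \approx 22\%\ \text{of blocks}
\]
```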

The other note I have to make is that inevitably, in the first few days, block times will be shorter as the network adjusts to the new algorithms. So, for us here, living in the second world, responsible for updating this thing, obviously we are going to do our derndest to grab as many of those early easy blocks as we can - not that we will be alone in this, but we should manage to make maybe a few score DUO out of it. Obviously a clever person might be able to get GPUs mining it, with a few weeks' warning. Whatever, hype is good; people are always happy about a chance to share in the rewards.
newbie
Activity: 85
Merit: 0
Well, the whole market... if you look at the last 5 years of bitcoin price also... I don't think we are quite at the bottom yet.

I know this process is taking some time, but I am living poor as a student so that I can do this. I love crypto and p2p tech, and I'm sure that once the shiny new GUI wallet is finished and everything is proven in testing, we will have a freshly updated chain to build on and there will be increased interest. In the process of building, I have inevitably ended up studying code and architectures from maybe close to a hundred other projects.

It is now a big problem for any project to gain and hold a userbase; too few things really stand out.

The important thing is that we are all involved in building the future of how humans communicate and trade. We are up against several massive adversaries, governments and large multinational corporate cartels. They don't want to give up their control. Every way they can slow us down, they are trying.

I remember 2014. That was when this project originally started. Everyone was saying crypto was over because of the MtGox crash and hack. People are once again saying that crypto is over.

I am sure that we will find the bottom by summer time.
newbie
Activity: 85
Merit: 0
Ok, another significant milestone, so I need to make a short report.

I got all the getwork stuff working properly now. Initially I was going to work from Decred's gominer GPU miner, but after spending a couple of days fighting with it and my previously incomplete work RPC stuff, I am writing a new, clean miner app based on the very neat and tidy config/version/etc frameworks used in these btcsuite apps.

The miner already automates a lot of things, and I simplified a lot of stuff. No more strings of complicated flags: just one parser for a credentialed URL, an option to mine on only that one port, and/or to automatically scan for the other 8 ports counting up from the first one, or optionally a custom list. I probably should think about fallbacks, but since it's currently being made for solo mining first (for me, the developer!), I am getting that side of it done first. First up I will implement the simplest getwork protocol, and then I have to implement the rpcclient part for getblocktemplate, as it is strangely missing from the RPC library (I guess they had nobody wanting it - bitcoin and all - and obviously nobody is really using btcd for mining, even on the back-end of a pool).
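To illustrate the URL-plus-port-scan idea, here is a minimal sketch in Go under my own naming assumptions (endpointsFromBase is hypothetical, not the actual miner code):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// endpointsFromBase parses a credentialed URL like
// "http://user:pass@127.0.0.1:11048" and, when scan is true, returns it
// plus the next 8 ports counting up from it, one per algorithm.
func endpointsFromBase(base string, scan bool) ([]string, error) {
	u, err := url.Parse(base)
	if err != nil {
		return nil, err
	}
	port, err := strconv.Atoi(u.Port())
	if err != nil {
		return nil, fmt.Errorf("base URL needs an explicit port: %v", err)
	}
	n := 1
	if scan {
		n = 9 // one RPC port per algorithm
	}
	out := make([]string, 0, n)
	for i := 0; i < n; i++ {
		v := *u // shallow copy keeps the user:pass credentials
		v.Host = fmt.Sprintf("%s:%d", u.Hostname(), port+i)
		out = append(out, v.String())
	}
	return out, nil
}

func main() {
	eps, err := endpointsFromBase("http://user:pass@127.0.0.1:11048", true)
	if err != nil {
		panic(err)
	}
	for _, e := range eps {
		fmt.Println(e)
	}
}
```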

I did a lot of reading of OpenCL code (and a bit of Cuda) and came to the conclusion that I am not going to target Cuda at all. I am only one man, I can only do so much, and OpenCL covers the broadest number of hardware platforms; of course I would love it if we can add Cuda later, but it's not so important at this stage. The stream compute kernels I saw are written in all different ways, with different structures, some full of preprocessor macros, and none of them with any kind of standard protocol for invoking, loading and getting the responses back from the processor. I will design a simple and easy to understand framework for writing the 9 kernels required. Also, for those who might just want to 'play' with mining, the kernels will be designed to run for short times, probably aiming for no more than the time of a single 60Hz video frame, so when 'interactive' mode is enabled you can work as normal on your machine without excessive latency in video updates. This is one area where Nvidia's platform is superior to OpenCL, but it's just a matter of priorities for the programmers. Most CL mining is done with headless AMD rigs. Mining with Nvidia can be a real pain in the ass because Cuda doesn't run unless the X server is also running.
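As a rough sketch of how that frame-budget idea could work (my own hypothetical helper, assuming a benchmarked hashrate is available - not the actual kernel scheduler):

```go
package main

import (
	"fmt"
	"time"
)

// workPerDispatch sizes a kernel launch from the benchmarked hashrate so
// that it finishes within the chosen time budget: one 60 Hz video frame
// (~16.7 ms) in interactive mode, longer batches when headless.
func workPerDispatch(hashesPerSec float64, interactive bool) uint64 {
	budget := 100 * time.Millisecond // headless: bigger batches, less overhead
	if interactive {
		budget = time.Second / 60 // keep desktop video updates smooth
	}
	return uint64(hashesPerSec * budget.Seconds())
}

func main() {
	// e.g. a GPU benchmarked at 500 MH/s on some algorithm
	fmt.Println(workPerDispatch(500e6, true))  // nonces per launch, interactive
	fmt.Println(workPerDispatch(500e6, false)) // nonces per launch, headless
}
```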

The main thing at this stage is for the kernels to be decently fast - fast enough that, between all the users I already know who will be doing a bit of mining, it will be a substantial boost to the baseline hashrate.

The miner will have a default automatic mode in which it selects the algorithm with the lowest difficulty-to-solution-odds ratio based on a benchmark; combined with pools, the lower margin of profit will be compensated for by chasing the easiest blocks, and this will also help ensure that GPU miners are not unduly excluded from mining. In the future, if big pools and ASIC miners get to be an irritation despite all the measures to mitigate their influence, the nuclear option is constructing arbitrary, not-implemented-in-custom-silicon algorithms to bolster the lower end of hashrate capabilities and further insure against centralisation of mining power.
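A minimal sketch of that selection rule (names are mine; the real miner would benchmark each algorithm and read current difficulty from the node): expected solutions per second are proportional to hashrate divided by difficulty, so pick the algorithm where that ratio is highest.

```go
package main

import "fmt"

// bestAlgo returns the algorithm with the best odds per unit time:
// expected solutions/sec is proportional to hashrate / difficulty.
func bestAlgo(hashrate, difficulty map[string]float64) string {
	best, bestScore := "", 0.0
	for algo, hps := range hashrate {
		d, ok := difficulty[algo]
		if !ok || d <= 0 {
			continue // no current difficulty for this algorithm
		}
		if score := hps / d; score > bestScore {
			best, bestScore = algo, score
		}
	}
	return best
}

func main() {
	// made-up benchmark and difficulty figures, purely illustrative
	hr := map[string]float64{"scrypt": 1.2e6, "whirlpool": 900e3, "keccak": 250e6}
	diff := map[string]float64{"scrypt": 1500, "whirlpool": 0.9, "keccak": 320000}
	fmt.Println("mining:", bestAlgo(hr, diff)) // whirlpool: 1e6 vs 800 vs ~781
}
```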

The current master branch at https://github.com/parallelcointeam/pod is already working perfectly on mainnet. I have set back the hard-fork height maybe another week into January; when it is ready, it will be finalised and announced.
newbie
Activity: 85
Merit: 0
I have now extended the hard fork logic to cover everything; pre-hardfork mainnet information queries don't show the new algorithms.

The extra algorithm ports are all working. They open before the hardfork, but return an error saying they are not active until the block height passes the hard fork starting height (so when it gets close to the hard fork you can just start the miner on them and it will start mining the moment the block arrives). The testnet always runs the latest hardfork by default - currently there is only one, but the next one is just a matter of adding sections to cover rule changes; all the slots are there to fill.
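The gating described above might look something like this (a sketch with invented names - hardForkHeight and the algorithm set are placeholders, not the real consensus code):

```go
package main

import "fmt"

const hardForkHeight = 200000 // placeholder; the real height is set at release

// The two original algorithms keep working before the fork.
var legacyAlgos = map[string]bool{"sha256d": true, "scrypt": true}

// checkAlgoActive mirrors the behaviour described above: the extra
// algorithm ports accept connections early, but return an error until
// the chain passes the hard fork starting height.
func checkAlgoActive(algo string, height int32) error {
	if legacyAlgos[algo] || height >= hardForkHeight {
		return nil
	}
	return fmt.Errorf("algorithm %q not active before block %d", algo, hardForkHeight)
}

func main() {
	fmt.Println(checkAlgoActive("keccak", 199999)) // error: not yet active
	fmt.Println(checkAlgoActive("keccak", 200000)) // <nil>: mining can begin
}
```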

I have also reinstated getwork, which allows the Decred gominer GPU miner to work with it directly. I figure that, unlike with an ASIC, the wire to the miner is PCI-Express, so the time spent fetching new work is a much smaller percentage of potential hashing time than for an ASIC - most ASICs are connected by serial links far slower than PCI-Express.

I should have all available kernels at least for OpenCL; it may be harder to find Cuda kernels for some of the algorithms. I will be automating the benchmarking function for use with an automatic lowest-difficulty algorithm switcher, to maximise profits for GPU miners and to help keep the floor from rising too fast before the price catches up to it.

There definitely is the potential to add more algorithms, but in my opinion it would get a bit silly much past 20. It just needs to represent a broad enough spread of hardware types, so that the better efficiency ratios are, geographically and ownership-wise, less likely to give one miner an edge - especially as I definitely agree that getblocktemplate ensures pools can't so easily abuse their users to attack coins. We will be building a pool that works this way too, and probably also need to make a proxy that users can configure to bridge getwork-only hardware to gbt, should there be a need for it.
legendary
Activity: 1124
Merit: 1013
ParalleCoin's ruler from the shadow
Why Go?

Cross-Platform and Binary
Go can build native binaries for almost every operating system and CPU anyone runs, which greatly improves the use of resources compared to interpreted or virtual-machine execution

Garbage Collection
When processing large amounts of data, it is critical that anything no longer in use is returned to the pool so it can be used for something else - in Go it is automatic, and most of the time, optimal. No more memory leaks!

Great Debugging and Profiling tools
Go lets you inspect every aspect of your software's performance in great detail, and the support for debugging is nearly perfect

Easy to Read
Which means it is easy to maintain and easy to change, and with its standard formatting, no longer do you have to think about where you can break a line and where you can continue, it always looks good!

Strict on Security
It is much simpler in Go to ensure that data is not going where it shouldn't - and most security problems come from bugs

Concurrent
Whether your process is just a simple pipeline, or multiple processes handling different stages and input and outputs, Go makes it easy to keep everything synchronized and working to maximum capacity, as well as being able to scale up quickly at peak times
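For example, a trivial two-stage pipeline stays synchronized with nothing more than a channel:

```go
package main

import "fmt"

func main() {
	jobs := make(chan int)
	done := make(chan struct{})
	go func() { // producer stage
		for i := 0; i < 5; i++ {
			jobs <- i
		}
		close(jobs) // signals the consumer there is no more work
	}()
	go func() { // consumer stage
		for j := range jobs {
			fmt.Println("processed", j)
		}
		close(done)
	}()
	<-done // wait for the pipeline to drain
}
```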

Libraries
Go has a huge and growing number of libraries for almost everything. Because it is easy to interface with C, it should not be long before a lot of system software is written in it. You can use it equally well for a website as for any kind of desktop or mobile application
legendary
Activity: 1124
Merit: 1013
ParalleCoin's ruler from the shadow
I was advised by my boss, @marcetin

I am the leader, not a boss, but yes, OK, I will take all responsibility Smiley

Anyhow, it has been a long time since my last login here, but that is just because I had no time, as Loki and I were developing the new ParallelCoin platform, which is 95% ready for release. As some people have gathered from Loki's writings on Discord - how much we have made, where we are, and that we will do the fork in mid January - interest has risen, and the Cryptopia market shows it.

My hope is to settle everything by the end of next month, and that everything from step 1 will be done and released. I know this has all taken so long, but we finally have new coin code, all in Go, with a whole system: com-http for indexing coins and bitnodes for coin node hosting. Bitnodes will go live in the next couple of days. Things have changed from my initial plan: as you know, I worked with mblados (Rod Zisso), who acted like a jerk and stopped responding to me after months of telling me he was working on the coin. Don't ever work with that guy if he is still here, and if he sees this: FUCK YOU ROD. I am still mad at him, as he left me hanging like a fool and stopped answering my messages.

On the new site you will get all the explanations, and we will create a new thread when the fork happens.

So stay tuned, and know that this is our life story and there is no turning back; it is a true story. We don't ask for money - we are just combining all accessible and suitable technologies to expand the cryptocoin tech story.
As we say here these days, for us it is "ParallelCoin or death", because for the last 6 months we have really been living this: sleep, eat, programming. I have even learned to program in golang, so all my new work is in Go, which people still don't fully appreciate; it is the language of the internet of things, servers and so on, built by Google to solve Google's problems, which are the greatest server problems in the world. Also, three legendary guys were involved in creating it, each of whom had been involved in developing at least three programming languages before.

So, as I was mentioned as the one in charge, I announce with this post that our "PLAN 9" goes live from this moment!!!

Cheers!
newbie
Activity: 85
Merit: 0
I was advised by my boss, @marcetin, to try to put more of my rambling thoughts into a more accessible medium, as, rightly, the possibilities and designs I am working on have great potential, especially for us parallelcoiners, but also more broadly for proof-of-work altcoins in general: a good model for dealing with the circumstances of the milieu, decreasing instability and insecurity, is good for everyone and brings more money to the table for all.

So, as I am now almost finished adding an effective hardfork switch system, and most of the remaining work is on the mining RPC API, I have written an article that I have posted on the Steem-fork forum system run, very protectively, by my good friend Bilal Haider - a project I will become more involved with once we hit the milestone and have the full suite ready for the parallelcoin network to upgrade.

It's about how Proof of Work is the only real way to protect networks, but that it needs a radical revamp in order to survive:

https://fast.bearshares.com/cryptocurrency/@loki/pow-s-not-dead

More generally, regarding the progress, as I said, the core network functionality changes are mostly complete, and will now enter the intensive alpha test phase once I have the GetWork functions working on all algorithms.

I found a suitable base for an official multi-algorithm GPU miner in the Decred miner. It is written in Go, and currently contains the OpenCL and Cuda kernels for Blake14lr, one of the algorithms Parallelcoin will have post-fork. It should not be excessively difficult to expand it to at least 8 of the 9 algorithms, as GPU miners exist for most of them: Lyra2REv2, Skein, GOST (as part of x11-gost), X11, Keccak, and of course sha256d and scrypt. The repository has been forked and I will start working on it immediately afterwards.

Once the getwork functionality and multi-algo RPC ports are all fully functional, the new miner will be available to use. One of the modes I plan for it is algorithm-hopping, so that it automatically switches to the lowest-difficulty algorithm (based on benchmarks) currently on the chain - a feature that will help keep the chain robust against attacks, as when some algos are hit with high hashpower, the others will lower in proportion to the rise, as part of ensuring the clock is maintained within as tight a range as possible.

So, anyway, back to work, gotta wrangle these miner RPCs!
newbie
Activity: 85
Merit: 0
Not gonna get *too* excited just yet but it's a nice splash of cold water in the face Smiley I'm now building the proper hardfork framework so I can merge the alpha development branch back easily... It's been a pretty rough month so far.

I'm basically not going to get paid anything unless the market likes what I am doing, and this - even if it's a P&D for the moment, or just another community member playing silly buggers - still woke me up.

Oh, of course, nothing is 100% confirmed, but I will be scheduling the hard fork to fall somewhere around 18 January; this should be plenty of time. The difficulty adjustment seems robust in testing so far. I only need to tidy up the mining RPC functions and then I'll be able to test semi-realistic increases in difficulty and see how it responds. I won't really be over the moon until I see it consistently recover fast and stay on the clock even with a big drop in network hashrate.
member
Activity: 732
Merit: 18
New exchange generation
newbie
Activity: 85
Merit: 0
My test console; going from the top left downwards and then right, each terminal is a node mining one of the following algorithms:

sha256d
blake14lr
whirlpool
lyra2rev2
skein
x11
gost
keccak
scrypt

https://i.postimg.cc/hX71nLRD/Screenshot-from-2018-12-11-14-24-40.png

In the terminals that have found blocks recently, you can see which algorithm found them and how far their block times are diverging according to the adjustment windows; they essentially act independently, and randomly.
newbie
Activity: 85
Merit: 0
Just a short note this time...

So, we had several options open, adding more PoW algorithms, and Merged Mining.

After some research on the latter, it became clear that it only expands the attack surface for malicious miners. Only Doge gained anything from its introduction; it has killed Namecoin pretty much dead, and Myriad has avoided being destroyed by it only because just its scrypt and sha256d algorithms are merge-mined. But Myriad doesn't seem to have generally benefited either; like DUO and many small coins, it hovers around a very small market value, with low volumes, unable to build momentum.

I added 7 new hashes: Lyra2REv2, Skein, X11, X13, Blake2b, Blake14lr, and Keccak. After some testing, it was found that the X13 implementation haemorrhaged memory, consuming all of my 32GB in about an hour. The Blake2b hash appeared not to function at all, though strangely it seemed to produce solutions now and then - no idea how, when the system monitor reported zero CPU activity.

So I had to discard these two, and since both were from the same family of functions anyway, I thought this would be a good opportunity to make the function set more diverse. I searched for a while and came up with GOST, the one used by Sibcoin in addition to X11, and I also selected Whirlpool, as it is of an entirely different construction from most of the others.

In actual fact, Lyra2REv2 and scrypt are key derivation functions. Lyra is based on Blake, I think? Maybe something else? It uses a scrambling construction called a sponge. Anyway, KDFs are mostly used for turning passwords into symmetric keys for encrypting data, such as wallet data, usually using AES-256 or GCM.

TL;DR

Anyway, I'll just get to the point. What I discovered was that, inadvertently, my design has a useful property: the averaging window - the time between the first and last of a number of sequential blocks - is found simply by walking back through the chain to the most recent previous instance of a particular version.

Because of the inherently random nature of finding solutions for blocks, the distance backwards for each algorithm's averaging window is also randomised, and, the most important thing is:

each algorithm is independent but the chain rate is bound together.

What this basically means is that in order to mount a hashrate attack on such a chain, you first have to be looking for solutions with more than half the algorithms. Inherently, the extreme rise caused by a large pool showing up to grab a few blocks during a low-diff or high-price period only immediately affects the algorithm being mined.

As such an event leads to a string of same-algorithm blocks, the resulting shortening of the average block time does not immediately change the difficulty target of all the other algorithms.

What this means is that any attempt to 51% attack the chain will require a quite high logistical burden:

- one has to mine to more than half of the algorithms to have a sustained effect on the block times,

- all of the other algorithms not being targeted are likely to prevent a long delay before the next block arrives

- there is no rhythm or timing that can be used to amplify the effect with resonance, as the system is inherently stochastic, and the randomness avoids repeating numbers or common factors between numbers

I was finding that with this many algorithms, the time to convergence given a flat hashrate was excessively slow using the parabolic filter, so I am now testing a plain linear adjustment (target/actual * difficulty), and it appears that only a small amount of bit-flipping is required to jiggle the difficulty value a little and reduce its response to aliasing distortion.
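A sketch of that linear adjustment with the walk-back window and two-bit dither, using invented types and names (the real code lives in the pod repository and differs in detail):

```go
package adjust

type block struct {
	version   int32 // the block version doubles as the algorithm identifier
	timestamp int64 // unix seconds
}

// linearRetarget: walk back to the most recent block of this algorithm,
// take the time elapsed over a fixed window of blocks behind it
// (regardless of algorithm), scale the target by actual/expected time,
// then flip the two lowest bits to dither away aliasing resonance.
func linearRetarget(chain []block, algoVer int32, window int, targetSecs, oldTarget int64) int64 {
	i := len(chain) - 1
	for i >= 0 && chain[i].version != algoVer {
		i-- // skip blocks of other algorithms
	}
	if i < window {
		return oldTarget // not enough history yet
	}
	actual := chain[i].timestamp - chain[i-window].timestamp
	// blocks too fast => actual < expected => target shrinks (difficulty rises)
	newTarget := oldTarget * actual / (targetSecs * int64(window))
	return newTarget ^ 0x3 // the two-bit dither
}
```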

As far as I can tell from my research, nobody has done 9 algorithms for one coin before; I think Myriad has the most, with 5. Most multi-algo coins use complex filters over the averaging window - the original Parallelcoin method, for example, finds the time between each of the last 10 scrypt or sha256d blocks and its previous block, and works from that average. I believe others also try to isolate the algorithms from each other.

When I told @marcetin about my idea of giving this hardfork the codename 'Plan 9 from Crypto Space', and that two of the extra 7 algorithms had serious implementation bugs, he said "There must be 9, that name is awesome", or something like that. So I looked for replacement algorithms and came up with GOST and Whirlpool.

Whirlpool has not found itself in any PoW algorithm to date, which is strange, because it's a particularly good one; it was most notably used by Truecrypt. GOST only appears in a couple of coins, and not by itself.

SHA256D, Scrypt, Keccak, Skein, X11, Lyra2REv2, and Blake14lr are all used in numerous coins exactly as I have implemented them, so given the right getwork configuration one will be able to use ASICs and GPUs to mine these, depending on what you have available or sitting idle collecting dust.

On the other hand, the simple GOST and Whirlpool both have no existing ASIC or GPU miners, though a modification would likely allow an X11-GOST miner to deploy the GOST function by itself; Whirlpool will be constrained to CPU-only for some time. Eventually someone will probably implement them for OpenCL and CUDA - in fact this announcement may cause someone to spend a few weeks writing them in preparation. But possibly at first, only CPU mining will be possible with these two.

If you followed the explanation above of how the indiscriminate averaging window (the only constraint being that it ends with the algo in question) causes stochastic behaviour and latency in the difficulty adjustment, what this means essentially is that no matter which algo you mine, each algo cannot, for long, take more than 1/9th of the blocks - and the only miner for Whirlpool and GOST will initially be the new pod non-wallet full node.

Sure, maybe this enables botnets, but even then only 2/9ths of the blocks could be affected this way.

Also, I should note that because so many options are available, and because of the stochastic latency of the averaging windows, it doesn't matter if ASICs mine the coin, as would-be attackers bear at least 5x the logistical burden for an attack attempt. I am not against ASICs; I just want to make sure miners don't wake up one day and suddenly find they are minnows up against a whale of hashpower.

The multi-algorithm-based security of the chain hashrate is definitely an innovation, although it is only a simple logical step beyond existing multi-algo chains. And if arranging so many algorithms gets easier, we can probably add more to further dilute the effect.

After the Plan 9 from Crypto Space hardfork, Parallelcoin should be the most hashrate-attack-resistant small coin in the space.
newbie
Activity: 85
Merit: 0
It's nice to know I'm not talking to myself  Grin

So, more news, and general commentary. I got through a lot of tasks today and I need to debrief a bit.

There are 7 new algorithms being added: blake14lr (as used in Decred), blake2b (Sia), Lyra2REv2 (Vertcoin), skein (one of Myriad's 5), keccak, x13 and x11. These were selected because all of the coins they are associated with have the standard bitcoin mining RPC API, and none of them require any changes to the block header structure (equihash requires an additional 1134 bytes; monero's difficulty field is double the length, at 64 bits). Basically most of it is tedious administrative work, going through all the places where I already added support for scrypt to the btcd base - now there will be 9 algorithms in total. I am most of the way through the initial implementation.

The difficulty adjustment deserves a little more discussion. KGW and Dark Gravity Wave are the better-known continuous difficulty adjustment algorithms, and as people will know, the Verge attack was due to flaws in DGW. I spent a lot of the last week staring at logs of the new node mining - well, only one algorithm, but this doesn't substantially matter, because all of them are poisson point processes anyway, so if designed right it should behave not too differently with more. It might be a day or two before I have them all fully working, and then I can see how my algorithm copes with the multiple algorithms.

So, the existing difficulty adjustment is terrible; I don't know how long the original authors spent testing it, but I suspect not very long. It bases its calculations on the time between each block of an algorithm and the previous block of that algorithm, with a mere 10 samples used to generate the average upon which it adjusts linearly.

Firstly, yes, this new algorithm uses exponents - not squares, as many of the others do, but a cubic curve, shifted by taking one from the divergence ratio (target:actual), cubing it, then adding one to the result. This is a slightly rough description:

https://github.com/parallelcointeam/pod/blob/master/docs/parabolic_filter_difficulty_adjustment.md
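In code, the curve as just described reads something like this (a sketch of the formula in the prose; the linked document has the authoritative version):

```go
package adjust

// parabolicFilter implements the cubic curve as described: take the
// divergence ratio (target time : actual time), subtract one, cube it,
// and add one back, so the curve crosses (1,1), barely reacts near the
// target, and kicks hard at the extremes.
func parabolicFilter(targetTime, actualTime float64) float64 {
	d := targetTime/actualTime - 1
	return d*d*d + 1
}

// Example: blocks coming in at half the target time give a ratio of 2,
// so the multiplier is (2-1)^3 + 1 = 2; blocks only 10% fast give
// (1.111-1)^3 + 1 ~= 1.0014, a barely perceptible nudge.
```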

One subject I didn't fully address, because it wasn't on my mind so much while I was simply testing to see how it works, was the matter of how the different algorithms cooperate with each other. The many schemes I have seen, monero's and others, do a LOT of processing and slicing and weighting, and I think if this can be avoided, it is better, as this is a process that takes place during replay, and nothing is more irritating than waiting for sync because of bandwidth or disk access problems. It can be worse depending on the database, too; leveldb is not bad, but I would prefer, down the track, to switch it all to the Badger database, which separates the key and value fields and (optionally) keeps all of the keys in memory.

So, I basically (lazily) implemented it, without too much thinking beforehand, such that for each algorithm it always steps back to the previous block of that algorithm, then walks backwards some number of blocks (probably 288, but I'm not completely sure yet), and simply takes these two timestamps, subtracts the older from the newer, and adjusts from that.

What I did not realise immediately was how this would work, especially with this many algorithms. As I discussed in the document above, after getting that average block time the difficulty is dithered by a tiny amount - just the last two bits - and then fed into that curve filter. This is a unique feature for difficulty adjustments; I thought about more 'random' ways of doing it, but the fact is that when you are dithering, you should not shuffle things too much. The bit-flipping may not be sufficient to smooth out the bumpy, pointy timestamp resolution, but it probably goes a long way towards eliminating resonances caused by common factors between numbers.

So, in every case, the difficulty adjustment is computed based on what will often be some time in the past - maybe 1 block, maybe hundreds. The adjustment based on this will influence only the one algorithm's blocks, but it also combines with the other blocks mingled between them. The most important effect, which in my mind was just a side effect I hadn't modelled yet, is that no matter how long the gap is while an algorithm isn't being used, it is as though no time has passed; so each of the different algorithms will bring new, complex 'echoes' into the feedback loop that computes difficulty, which I figure will somewhat resemble reverb in sound or radiosity in light. In other words, the possibly different inherent distributions of solutions, which are already independent from each other, will combine a lot of different numbers together.

Pure gaussian noise is by its nature soft and fluffy, the sound is like the whooshing of breath, and visually it is blurring, density of dots, for example, is used for most forms of print matter, and the best looking images are ones that have the natural chaotic patterning as you see in film. Even many new developments in camera technology, and displays, are increasingly using multiple exposures, and highly pattern-free algorithms like Floyd-Steinberg and later, more fluffy types of ditherers.

So, to explain it simply, the only thing that really differentiates my implementation is that I am making maximum use of the properties of poisson point processes and gaussian distributions, with intentional noise. Too much would overwhelm it, but between the bit flips and the time-travelling block difficulty - which always ignores the immediately previous block, and maybe many more, back to the previous time the algorithm found a block - these blocks will appear randomly, and the echoes will sometimes be short, nearly not an echo at all, and sometimes reach quite a ways back. These will definitely add more randomness to the edges of the adjustments, and this is very important, in my opinion, as the reality is that difficulty adjustment regimes are really not up to date compared with other fields of science where control systems are implemented.

You may have heard of Fuzzy Logic, and probably don't realise that almost all cars, washing machines, and many, many other devices now use fuzzy logic systems to progressively adapt via feedback loops and keep random or undesirably shaped response patterns from appearing in the behaviour of the device. These unwanted patterns tend to cause inefficiency and catastrophic changes if they are not mitigated.

The new stochastic parabolic filter difficulty adjustment scheme should hopefully fix almost all of the issues relating to blockchain difficulty attacks - even the most devastatingly efficient of them, the time warp attack, whose first live instance hit Verge earlier this year, where the attacker succeeded in freezing the clock and issuing arbitrary tokens (limited by block height and reward size, of course, but not in sheer number).

The only remaining attack vector is the pure and simple 51%: overwhelming the chain for long enough to be able to rewrite it.

Just some brief comments about the timing behaviour I have observed so far in tests. With the dithering added, particularly, the block times started to swing nicely back and forth in an almost alternating pattern - probably actually a 4-part pattern, because there are 4 different ways 2 bits can be set. Firstly, the filter does not react strongly to blocks that fall within the inner 20% of divergence, between 80% and 125%, so even with quite dramatic changes in hashpower it stays pretty close. But when a block time randomly falls at the further-out edges, below 50% or above 200%, the algorithm kicks it hard. So in the event of a sharp rise in hashpower, the difficulty rapidly increases, *but* not in a linear fashion: it is dithered in the smaller harmonics while remaining smooth like a mirror (made smoother by the dithering), so it homes in on the correct adjustment quite quickly.

From what I have seen so far, a sharp rise sees the difficulty accelerate upwards, but not smoothly. And when a big miner stops mining, yes, there may be longer gaps between blocks, but as I mentioned, the block times seem to oscillate in roughly 1:4 and 1:2 ratios, and usually within 3 blocks the difficulty will come back down the way it went up.

Most algorithms don't try to address the issue of aliasing distortion in the signal, and it is this that makes difficulty adjustment algorithms get stuck or oscillate excessively.

Anyway, I will have all of the bits changed to fit the 7 new algorithms in the next days, and then we will be able to see more concrete data about the additional effect of running 9 different algorithms, as well as of the rule that ignores the newest block when it is the same algorithm - meaning that, due to the random distribution of the algorithms, the averaging window itself is dithered as a product of the poisson process of finding solutions.

Blockchains basically take a random source and attempt to modulate the process with difficulty to make it more regular. Regular blocks are very important because the time between blocks is somewhat long; in fact blockchains can't really do much better than 2.5 minutes at best, and in my opinion 5 minutes is a good, long-enough interval. Between 10-minute blocks there are 600 seconds. You would not be happy if there were only 600 pixels across your screen (these days), and I think comparing an analogous phenomenon helps people understand what is going on.

Blockchains are highly dependent on clocks, and because of the many random factors between them, this is also the most difficult thing to really get right. Many of the older coins have terrible difficulty adjustments; bitcoin only adjusts every 2 weeks, and the only reason it can get away with that is that it has so much loyal hashpower that the volatility is small. But the more ASICs and GPUs that are available, and the more variance there can be between hashpower and profitability on various chains, the more of a problem this becomes for more chains.

I have thought about other strategies, and maybe they will be added later, but I am not sure yet whether it is even necessary. The most challenging thing with difficulty adjustment - and the main type of attack carried out on pretty much a daily basis, even if mostly unintentionally - is that blockchains can be caused to pause for indeterminate amounts of time when difficulty adjusts up due to high hashpower; when that hashpower goes away, difficulty cannot go back down until a new block hits the old target. Min-diff blocks, as used in test networks, do not work in the real world, because it is not possible to come to a clear consensus about time; the network can be highly divided about whether the necessary amount of time has elapsed or not, and this gets even worse when you add in people altering their nodes to feed fake or divergent times as part of attempts to game the system.

I think the solution is basically just noise - in case you hadn't figured, it's all about that - using noise to eliminate distortion. Even if the chain has dramatic jumps in difficulty, maybe even as high as 20x, with the added wiggle the difficulty does not get caught, either slowed or sped up, on its way up; it wiggles all the way to the top, and at the top, though there will be a gap in time, the 'time travelling' averaging I have devised will likely counter this further.

There will naturally be clumpiness in the distribution, even given a long stretch of all algorithms at a stable hashrate. Especially with so many algorithms - now there are 9 - this means that when difficulty goes up a lot, an algorithm that has gone through a long dry patch may already have a lower (or higher) target, and thus when an algo with few recent solutions finds a solution, or another giant pool shows up, there are far more opportunities for a block to come in on target sooner than it would have otherwise.

The real results, of course, will be seen with more testing and when the hardfork activates. I hold that Proof of Work is still, and will probably remain, the best strategy for protecting the security of blockchains, as I have seen first-hand how easily proof-of-stake blockchains become extremely badly distributed. Proof of Stake is also very cheap to acquire in the beginning, and after a short time only the early people, or someone with a lot of money, can have any influence. That is conducive neither to adoption nor to security.
jr. member
Activity: 171
Merit: 3
I need a break!
Thanks for the update lokiverloren, very informative as usual Smiley
I hope others appreciate this great work as much as I do Smiley
I'm a long-time DUO holder, and can see a very good future Smiley
newbie
Activity: 85
Merit: 0
Just a short little note to update on progress. For the last week, in between the many distractions that have come all at once, I have been developing an improved difficulty adjustment algorithm. The existing one is very prone to oscillation, a problem further compounded by the volatility of hashpower being pointed at the chain.

The new algorithm uses a fairly simple cubic curve, calibrated to cross at (1,1), applied to the variance measured over a fixed number of past blocks. The old algorithm attempted to separate the algorithms, walking backwards through the chain to gather the last 10 blocks per algorithm, and then used a flat linear adjustment - basically y=x; in other words, if the average time is under by 20%, it reduces the target by 20%. This is a terrible way to do it, as it increasingly encounters aliasing distortion close to the target, caused by the 1-second granularity of the block timestamps.

https://github.com/parallelcointeam/pod/blob/master/docs/parabolic_filter_difficulty_adjustment.md

Two main strategies are used, as described in the document above. One is the cubic curve; the other is that after the adjustment is made against the curve, the last two bits of the 'Bits' field are flipped, which makes it unlikely that the target falls into a resonant cascade and gets stuck oscillating between long and short blocks over - in this case - typically 4-12 hour periods. This kind of extreme variance is very problematic for users, because they literally cannot know when even the first confirmation will come in, better than 'maybe about half a day'. Intentionally adding noise to reduce these kinds of interference and distortion is well proven, especially in radio and digital signalling technology in general; it is used by audio devices to reduce unintentional frequencies caused by the sample rate, and in most modern IPS and better LCD displays (they used to just do a checkerboard; the fuzz is much nicer on the eyes).

Oh, and one last thing: it does not filter algorithms out of the computation. The minimum target for scrypt blocks on the parallelcoin network is currently the easiest possible value, 7 at the front and then lots of f's. Because there aren't many miners using this algorithm, at times the difficulty *will* drop to the floor, and that *will* mean 5-10 blocks can suddenly be spewed forth. Given that mining is supposed to be about the transactions, and a reward for doing that work, the incentives are all upside down. So the new algorithm only looks for the most recent block of the algorithm in question, and then uses all of the previous blocks no matter which algorithm - the past difficulty adjustments are irrelevant; what matters is the timestamps. By not distinguishing between algorithms in this computation, the block rhythm should be better regulated.

I am basically satisfied with the workings of the new adjustment now, and as you can probably imagine, this was more about staring and watching grass grow than actually coding, so I'm glad I drew a line in the sand about how long I would fiddle with it. There are probably significant improvements possible, but they would be pushing the boundary of diminishing returns. As it stands, the variance from the target is typically held within 20%, and the dithering helps ensure that strings of short blocks happen far less frequently; now it appears that more often they come alternating, or sometimes 2, and less frequently 3 blocks at a time. Also, the very wide variance caused when pool miners dive-bomb the chain will be greatly diminished, as such short blocks trigger very big difficulty adjustments; instead of the current spates of 5-10 blocks with under 10 seconds between them, it is unlikely that more than 3 will usually happen so quickly, and hopefully the miners hang around long enough to get a few more, at which point the timing will be more normal.

Now it will get a *little* more interesting for me. I have to get the scrypt CPU mining working properly again; I think the two RPC ports properly produce block templates, but this will be checked and fixed if they do not. Then I will be adding more proof of work algorithms. The main thing that will determine what makes the cut for the first hard fork is whether the hash function is available in a Go package. Most of them are covered. I am unsure which at this point; I recall looking at Ethash and thinking it would be a huge pain, but Equihash, Cryptonote 7, possibly Cuckoo, X11, and maybe some others like Myriad's, Keccak, Skunk and similar are candidates. I will aim to ensure that miners with idle GPU capacity can help secure the network against hashpower attacks. Adding merged mining is a considerable undertaking, and if it is going to be done, it will be in the second hard fork, if it seems necessary at all.
newbie
Activity: 85
Merit: 0
Just a minor update regarding planned hardfork changes...

Parallelcoin suffers from a problem with its difficulty adjustment because of how it is computed. It bases the difficulty adjustment on the measurement of the time between the latest and the 9 previous blocks of the same algorithm. This is absurdly short, typically ranging between 50 and 100 minutes, and as such it is highly vulnerable to what I call a 'rhythmic hashpower attack'. For this, I have two main strategies (see the sketch after this list):

  • A long window of blocks, 4032, approximately 2 weeks at 5-minute blocks: 50% of them are randomly selected, the top and bottom 12.5% are removed, and the average is computed from the rest, eliminating sequences and outliers that cause the mean to misrepresent block times. This ensures that no sequence or rhythm can control the adjustment; the much longer timescale also means it does not fluctuate wildly up and down chasing the impossible regularity of a poisson process, and short periods of high hashpower do not move the mean above the actual mean.
  • The weights of the 4032 past blocks are summed to find the mean block weight, and difficulty is further adjusted based on a new block's proportion to this average. At or below this value, difficulty is nominal; above it, the nominal difficulty is multiplied by the factor above the average, squared. This does not affect the nominal difficulty; its purpose is to discourage processing all of the pending transactions in one block when they bank up, meaning more miners get a share of the block fees.
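A sketch of both strategies under my reading of the description (names invented; real consensus code would additionally need the random selection to be deterministic across nodes):

```go
package adjust

import (
	"math/rand"
	"sort"
)

// trimmedRandomMean: from the last 4032 block times, randomly sample
// half, drop the top and bottom 12.5%, and average the rest, so neither
// outliers nor any rhythmic pattern of timestamps can steer the mean.
func trimmedRandomMean(blockTimes []int64) float64 {
	n := len(blockTimes)
	sample := make([]int64, 0, n/2)
	for _, i := range rand.Perm(n)[:n/2] {
		sample = append(sample, blockTimes[i])
	}
	sort.Slice(sample, func(a, b int) bool { return sample[a] < sample[b] })
	trim := len(sample) / 8 // 12.5% off each end
	kept := sample[trim : len(sample)-trim]
	var sum int64
	for _, t := range kept {
		sum += t
	}
	return float64(sum) / float64(len(kept))
}

// weightPenalty: blocks at or below the mean weight pay nominal
// difficulty; heavier blocks pay nominal * (weight/meanWeight)^2.
func weightPenalty(nominal, weight, meanWeight float64) float64 {
	if weight <= meanWeight {
		return nominal
	}
	f := weight / meanWeight
	return nominal * f * f
}
```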

The second part is based on the method used by Freshcoin, which mines most of its supply early. This coin doesn't do that, but it has the problem instead that most blocks have no transactions in them, and they very often are spewed out 10 at a time within a minute. By raising the required difficulty for over-average numbers of transactions in a block, it ensures that high-powered pools are not able to deny the loyal, smaller mining pools and solo miners their share of blocks. I may possibly also consider scaling the difficulty downwards below the mean block weight, meaning that maybe even solo miners have a decent chance of being able to clear one transaction.
newbie
Activity: 85
Merit: 0
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

As you can all see from @traxor's updates, I have been very very busy.

This morning I finished getting the wallet daemon working (mod - because it's modular, and that's a nice short name). The new full node and wallet are based on btcsuite's btcd and btcwallet, with all the necessary changes for them to conform to the existing consensus.

They can be found here: https://github.com/parallelcointeam/pod and https://github.com/parallelcointeam/mod - I will be making a full release with installation packages for Windows, Mac, Debian/Ubuntu, Redhat/Suse, and I have an account at the Arch Linux AUR and will make a PKGBUILD too, so it will be available through the AUR (I'll make a binary, version-pinned build and a rolling master branch build).

The new full node runs two separate RPC ports for miners and pools to use, defaulting to 11048 for sha256d and 11049 for scrypt. The full node, CLI controller and wallet daemon all autoconfigure for localhost access and integrate the full node and wallet automatically; on first run with no configuration, both `podctl` and `mod` copy over the RPC credentials so you can get up and running quick and easy. The wallet daemon listens by default on port 11046.
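Since pod is a btcd fork, the standard btcsuite rpcclient should be able to talk to these ports; a minimal sketch (untested against pod itself; the credentials are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/btcsuite/btcd/rpcclient"
)

func main() {
	// Connect to the sha256d mining RPC port described above.
	client, err := rpcclient.New(&rpcclient.ConnConfig{
		Host:         "127.0.0.1:11048", // 11049 for the scrypt port
		User:         "user",            // auto-copied on first run
		Pass:         "pass",
		HTTPPostMode: true, // pod, like btcd, serves JSON-RPC over HTTP POST
		DisableTLS:   true, // localhost only
	}, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Shutdown()

	height, err := client.GetBlockCount()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("block height:", height)
}
```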

It got quite exciting a few weeks back as somebody with access to a lot of mining power started to put out blocks with different version numbers, which it turns out the old client treats as sha256d blocks. Then they started using 'Big S' signatures, which are part of the hypothetical signature malleability double spend attack - and then sure enough, logs started showing blocks coming in that were rejected because the outputs were already spent. The latest seems to be that they are trying to mine a side chain, but so far it looks like the rest of the miners have enough hash power to keep the head at the front.

Our secret shopper, whoever they are, or maybe they are just a legitimate Wink would-be-blockchain-robber, has ensured that it will be necessary to upgrade the protocol significantly. These are the changes I have in mind:

- - The new client will automatically set the equivalent of a checkpoint about 288 blocks (2 days) back, so that it blocks attempts to create side chains and push them ahead with a 51% attack.

- - The difficulty adjustment needs to work from a longer timespan than 50 minutes. I will set it to 4032 blocks, approximately 2 weeks. This is to address the problem that short-term bursts of greatly increased hash power are leading to miners printing 10 blocks within a minute and then nothing for hours, which is entirely unsuitable for a real-world transactional currency. Extending the averaging window will mean that the difficulty adjusts more slowly and can't be pushed up artificially by timed bursts.

- - Block versions have been messed up by the secret shopper, but after the prescribed hardfork block height they will again be properly enforced, and I will design an extensible protocol similar to bitcoin's, allowing us to upgrade the protocol in the future using BIP9 softforks.

- - The peer to peer and RPC protocols will be enhanced with a MessagePack binary format protocol, and will use the OTR/Diffie-Hellman Perfect Forward Secrecy encryption protocol, as it is patently absurd to use OpenSSL/GNUTLS, which is based on a web-of-trust model - meaning you have to faff about with certificate authorities just to get connected. The mechanism will be extended to enable users to make black- or whitelists. Web of trust protocols are not identity-secure, as you sign public keys which the signed keyholder will afterwards advertise as your approval of them. There might be some purpose to this, but I can't think of it. This encryption will also be added to the peer to peer protocol because, in my opinion, even though it's public information, the propagation pattern should be protected at least a little, as locations enable attacks.

- - I am planning an SPV wallet, which will have native compiled GUI interface available. The SPV wallet is the testbed and initial showcase/prototype for the peer to peer network extensions I have planned.

- - The first application of this will be the implementation of transaction proxy relaying, like the outbound connection component of the Tor network. The nodes will all have a unique identifying EC keypair (ed25519, natch) and will construct a wrapper around each transaction broadcast with three layers designating the 3 peers that will relay the message. Each node will only know where it came from and where it's going, but not whether it is the first, second or last hop before the destination node.

- - From there on, wrappers, sockets and interfacing libraries will be built that other applications can use to connect to the network and find specific subprotocol peers, enabling other types of distributed applications: centralised/federated applications (e.g. Diaspora, paxos/raft/sporedb and other federated WoT database protocols), trust-computed ones (for example the Steem reputation system), and trustless (yes, potentially allowing multiple cryptocurrencies to interface directly through the network) peer to peer protocols. In other words, I am aiming to make the parallelcoin network the wires connecting a whole ecosystem of applications.

When I have completed the releases, with all the installers made for all of the above-mentioned platforms (and any specific requests) - and already you can see on the newest release that @trax0r posted, there are literally binaries for almost every platform in existence - I will make a new ANN thread for discussion and feedback.
-----BEGIN PGP SIGNATURE-----

iHUEARYIAB0WIQThc/kXLToA5xCfIuOCA/USO9KcBAUCW/nWowAKCRCCA/USO9Kc
BJdXAQD/+BxK5stddGlA1InaDDrF6GI74Dfqz62E2Qu5E7mqCgEApeF9r1SRvS4c
rveVObyPNZ5DJNMIkMVlpsRT0rZIhA0=
=EmXt
-----END PGP SIGNATURE-----
full member
Activity: 375
Merit: 103
Coinz-Universe
Good news today Cheesy :

The Core blockchain client for the Parallelcoin network >>> Beta release <<< is now ready for testing:

https://github.com/parallelcointeam/pod/releases/tag/v0.1.2-beta

Please let loki know about any bugs you find.
copper member
Activity: 42
Merit: 0
full member
Activity: 375
Merit: 103
Coinz-Universe
The DUO market on Cryptopia has been paused today at our (the DUO team's) request.

This was necessary as there are ongoing attacks on the DUO blockchain.

Our dev is looking into the issue and we will solve this problem within the next few days.

Stay tuned for more info.

You can also join DUO on discord for the latest news. Follow this link: https://discord.gg/eEunhn

full member
Activity: 375
Merit: 103
Coinz-Universe
Again good news: Today an update of the suite of core apps was released.  Cool

It is the suite of core apps for the Parallelcoin cryptocurrency network - parallelcointeam/pod

https://github.com/parallelcointeam/pod/releases/tag/v0.1.1-alpha

The full node and the CLI client are both fully functional and seemingly bug-free.  Wink