Author

Topic: Anti ASIC/GPU/FPGA POW-algorithm. New (2019). (Read 1230 times)

member
Activity: 264
Merit: 13
The official good news ...
Today we concluded an agreement with foreign investors on the implementation of our first project, which was called VenusGEET. It will now be realized under a different working name, but the essence remains the same.

Firstly, it is a cryptocurrency for everyone, in which coin mining is protected from specialized equipment such as ASICs, GPUs and FPGAs. It is essentially a CPU-only cryptocurrency. This protection will be based on an innovative POW algorithm I developed, based on RBF (ring bit functions).

Secondly, it will be the world's first cryptocurrency of the RawCoin type, that is, a cryptocurrency whose value is tied to the raw materials most used by mankind.

Thirdly, it will be paired with a decentralized messenger that offers the possibility of complete anonymity for users on the network, backed by strong cryptography.

Well, there are still other interesting things that I just do not want to reveal yet, for the sake of intrigue.

In any case, this is a good and interesting project that not only the crypto community but also many users of information technology around the world have long been awaiting. And I hope that it will help our company take a step forward.
full member
Activity: 322
Merit: 151
They're tactical


Bitcoin mining pools are centralized. What you posted are not real solutions for a decentralized blockchain.


In the current state of things, it's already the case that mining is mostly pooled, but this system would at least give small miners without expensive specialized equipment a chance, and limit the electricity bills. So in itself it would still be less centralized than the current situation.

But even purely decentralized bitcoin solo mining would still fail with 51% bad nodes anyway.

This system would probably be less expensive to game, and the huge power cost of bitcoin is the number one argument against bitcoin and crypto, so if a solution can be found it is still worth a try. Even if the model is different and doesn't have the same characteristics and requirements, we need to see the pros and cons and the bigger picture of the economics involved to see if it can keep consistency across all nodes, which is what interests me; I'm not extremely concerned with privacy.
newbie
Activity: 23
Merit: 6
I didn't see a point you made that is not solvable with similar characteristics to a bitcoin mining pool.

Bitcoin mining pools are centralized. What you posted are not real solutions for a decentralized blockchain.

Wanting to obtain funding is not incompatible with having a working solution; what kind of logic is that? lol. It makes you sceptical, but that doesn't mean everyone "should be" Smiley

It's common sense to be sceptical when someone is asking for money on the internet.

it's ironic for someone advocating decentralisation to make so many arguments from authority; maybe there is room for skepticism about your opinion as well Smiley

Yes, I'm advocating for decentralization, that's why I don't see your centralized pool-protocol as a solution to anything.

And please look up what argument from authority means. I didn't make any. I simply pointed out the holes in this concept and expressed my opinion that they cannot be fixed. If someone posts an actual working solution, I'll be more than happy to admit that I was wrong.
full member
Activity: 322
Merit: 151
They're tactical
bitcoin solo mining that is used by 0.1% of bitcoin miners today.

It's not just for solo mining. Different pools also need to reach consensus among themselves. Your argument would be valid only if 99.9% of bitcoin hashrate came from a single pool.

Your protocol fails even in the very likely case that two pools mine two blocks with different merkle roots.

I don't see why this would be more problematic than the bitcoin network with mining pools.

Anyone can emit some work, as a block header + work circuit, then miners decide which work they want to mine, like they would choose a mining pool. If the work is invalid then the block is not going to be accepted by the network and they lose their work, so they need a source of valid work with valid transactions and a circuit that matches the protocol and will be accepted by other nodes.

The bitcoin protocol doesn't rule on this at all; in theory nothing would prevent the hash rate being evenly distributed between 1 million pools and the blockchain being in constant conflict. Miners choose large trusted pools because it maximizes their reward and decreases the risk; the same logic should apply with this system.

It still requires a form of miner id to distribute the work, and this miner distribution should be the same for the whole network, but then there can be different block header propositions with different work circuits. As long as the final proof contains enough information to confirm that the work was done and conforms to the protocol, I don't see why this is more of a problem than the current configuration with pooled mining.

Miners should have some sort of work-seed that they agree to work on, and all miners can check that the transactions are valid and match the merkle root, and that the work circuit conforms to the protocol specification.

If the miner id is made of address/IP pairs, then OK, one obvious problem is avoiding IP/address spamming. But if there is IP spamming, do all the IPs lead to the same physical machine? In that case it can be detected, and it doesn't even need a strong consensus because everyone can check this; if IPs are found to be clones of the same machine, they should be excluded from the miner pools. If the attack needs a different physical machine or system for each IP, it's already not the same attack cost. And that would be the only guarantee the system gives: you need a unique IP and address to be able to mine, and anyone who can provide that has an equal chance to earn a reward, no matter how many cores or how much computational power they have.

If the decentralization barrier becomes spamming addresses and IPs at $1 per billion, it's still no worse than $10,000 of mining equipment and a hydroelectric dam.

The same principle still applies: if miners provide an invalid block or proof of work, or the circuit cannot be proved to fit the protocol, then the block is rejected and the miners have wasted their work.

I'm not saying there is no problem with this approach, but I don't see a "fatal flaw" in it either.

You seem to have the stance that proof of work = the bitcoin protocol, and that if a solution doesn't follow that model then it's flawed, but that's what I would call dogmatism. Yes, it's not the same model; it doesn't have the same characteristics and requirements, and it needs a good system to distribute the work. Additionally to the pow, the proof for a valid block needs to contain the miner circuit as well as the work itself. But is it really unsolvable? I don't know.

I'm just doing a surface study for the moment; maybe there is something I don't see. I just take the problems one by one and see if there is really a brick wall among them, or in the end how resistant to byzantine faults it is, knowing that a decentralized distributed system cannot keep consistency with 51% bad nodes. That's the same for bitcoin and any decentralized distributed system.
full member
Activity: 322
Merit: 151
They're tactical
He said afterwards that it solves the pow problem in itself with the properties he listed, which I already checked and they work.

There is no "pow problem".

Proof of work is a solution to the problem of decentralized consensus. The PoW posted here doesn't solve it.

he says he has another solution to solve work distribution.

I don't see any solution posted anywhere. It looks like he is using this argument to obtain funding, which everyone should be very sceptical about.


Yes, for the moment there is no full solution, but that doesn't mean one cannot be found.

I didn't see a point you made that is not solvable with similar characteristics to a bitcoin mining pool.

Wanting to obtain funding is not incompatible with having a working solution; what kind of logic is that? lol. It makes you sceptical, but that doesn't mean everyone "should be" Smiley It's ironic for someone advocating decentralisation to make so many arguments from authority; maybe there is room for skepticism about your opinion as well Smiley
newbie
Activity: 23
Merit: 6
He said afterwards that it solves the pow problem in itself with the properties he listed, which I already checked and they work.

There is no "pow problem".

Proof of work is a solution to the problem of decentralized consensus. The PoW posted here doesn't solve it.

he says he has another solution to solve work distribution.

I don't see any solution posted anywhere. It looks like he is using this argument to obtain funding, which everyone should be very sceptical about.
full member
Activity: 322
Merit: 151
They're tactical
nobody claimed this is a full solution to solve all problems of current blockchain protocols.

The claims made by OP are certainly strong:


What to do with it?

You can implement this algorithm in any cryptocurrency and it will be the best POW algorithm you have ever known.

I was just pointing out that this is not something usable at the moment and probably never will be. The things I mentioned above are not just minor issues but fundamental flaws.

He said afterwards that it solves the pow problem in itself with the properties he listed, which I already checked and they work.

The pool part is the third part, where he says it's not a full solution for all blockchain problems in the following posts.

The problem you talk about was also raised in the first posts, where he says he has another solution to solve work distribution.

I didn't see any point you made that is not solvable; if you only have arguments from authority it's not going to have a lot of impact on me.
newbie
Activity: 23
Merit: 6
nobody claimed this is a full solution to solve all problems of current blockchain protocols.

The claims made by OP are certainly strong:


What to do with it?

You can implement this algorithm in any cryptocurrency and it will be the best POW algorithm you have ever known.

I was just pointing out that this is not something usable at the moment and probably never will be. The things I mentioned above are not just minor issues but fundamental flaws.
full member
Activity: 322
Merit: 151
They're tactical
bitcoin solo mining that is used by 0.1% of bitcoin miners today.

It's not just for solo mining. Different pools also need to reach consensus among themselves. Your argument would be valid only if 99.9% of bitcoin hashrate came from a single pool.

Your protocol fails even in the very likely case that two pools mine two blocks with different merkle roots.

It's not my protocol lol

I just find the principle interesting, and I think it can be made to work with certain advantages; I'm just studying the system Smiley If it can't work then too bad, I just discovered this a few days ago like everyone else, but I'm really not so sure it can't be made to work.

I read your arguments and I understand them, but I don't see much of anything other than bold statements of authority, and no brick wall here, if you can think outside the box of the bitcoin pow protocol; nobody claimed this is a full solution to solve all the problems of current blockchain protocols.

I'm just extrapolating possible solution paths; I would need to think the whole thing through a bit more thoroughly, but I'm also waiting for more information from the OP, as he said he has a solution, so I need to see his side as well. I would need to lay all the problems out flat on paper and look at the properties and how they can be solved or not. I wouldn't be so categorical so far.
newbie
Activity: 23
Merit: 6
bitcoin solo mining that is used by 0.1% of bitcoin miners today.

It's not just for solo mining. Different pools also need to reach consensus among themselves. Your argument would be valid only if 99.9% of bitcoin hashrate came from a single pool.

Your protocol fails even in the very likely case that two pools mine two blocks with different merkle roots.
full member
Activity: 322
Merit: 151
They're tactical
It could still be less centralized than pooled mining in certain aspects; for example, currently nothing prevents a mining pool from cheating on the reward/share, and they already take a % of the profits. Here at least this aspect is more transparent.

So basically you want to fight ASIC mining centralization with even more centralization.

The problem of different versions of the blockchain is transposed to establishing consensus on a "mining route" that starts with a merkle root and breaks the work up among different miners selected evenly from the pool.

And if two nodes have a different merkle root?

It seems that you have missed the main point of proof of work and the Nakamoto consensus.

The proof of work can prove that a certain number of nodes, ideally selected evenly from the pool, have agreed on the merkle root that they have mined. Pooled mining cannot provide more than this; all nodes need to work on the same block.

This protocol is different from bitcoin; I'm not saying it's the same consensus method or equivalent to bitcoin pow. It needs another mechanism added to it, which is still not clearly defined, to make it as decentralized as bitcoin solo mining, which is used by 0.1% of bitcoin miners today.
newbie
Activity: 23
Merit: 6
It could still be less centralized than pooled mining in certain aspects; for example, currently nothing prevents a mining pool from cheating on the reward/share, and they already take a % of the profits. Here at least this aspect is more transparent.

So basically you want to fight ASIC mining centralization with even more centralization.

The problem of different versions of the blockchain is transposed to establishing consensus on a "mining route" that starts with a merkle root and breaks the work up among different miners selected evenly from the pool.

And if two nodes have a different merkle root?

It seems that you have missed the main point of proof of work and the Nakamoto consensus.
newbie
Activity: 61
Merit: 0
For the third rule, the two-step signature: I agree with the two stages you mentioned. But I still want to ask, do we need a third or a fourth stage?  Cheesy
full member
Activity: 322
Merit: 151
They're tactical
And modelling a decentralized network based on pooled mining is flawed since pooled mining is centralized.

It could still be less centralized than pooled mining in certain aspects; for example, currently nothing prevents a mining pool from cheating on the reward/share, and they already take a % of the profits. Here at least this aspect is more transparent.

The problem of different versions of the blockchain is transposed to establishing consensus on a "mining route" that starts with a merkle root and breaks the work up among different miners selected evenly from the pool.

But pooled mining is necessary for this system to work; it cannot work if every node solo mines its own block.
newbie
Activity: 62
Merit: 0
I will follow the principles that you have summarized. I really like that the first rule is periodic calculation. Forced hashing is what is needed, as the main algorithm for block signing. Replacing it with the Ring Bit Function (RBF), as you propose, is great.
newbie
Activity: 62
Merit: 0
Hi. You created a POW algorithm that is stable not only against ASIC devices but also against GPUs, which is awesome. We hope you will develop it further and make it more popular.  Smiley
newbie
Activity: 23
Merit: 6
Everyone can still check the transactions in the merkle root; the pow consensus shows that all nodes agree on this merkle root. If not, they will not mine the block. Enough nodes need to be honest, like in any byzantine fault tolerant system.

So you are basically saying "all nodes must agree otherwise they will not agree". Then you will have at least several chain splits per day and the whole network will malfunction. You simply cannot hope that all nodes will always have the same set of transactions.


In monero they take more care about privacy, but in itself the bitcoin protocol doesn't specially protect against ip/address association. All nodes that receive a newly mined block know the IP of the mining node and the address used for the reward.

Mining pools already know your IP and your mining address, and it wouldn't be really hard for an attacker to connect the two.

If there was a way to link bitcoin addresses to IP addresses at the protocol level, it would be a huge issue for bitcoin. Although bitcoin is already pretty much a surveillance coin due to its linkable transactions, this would be a whole new level of Orwellian proportions.

And modelling a decentralized network based on pooled mining is flawed since pooled mining is centralized.
full member
Activity: 322
Merit: 151
They're tactical
1. All nodes need to work on the signature starting from a hash that contains the same merkle root, like pooled mining.

Then what you are describing is not actually a consensus mechanism. You are saying that there is only one version of the truth and all nodes have to follow it. That's a centralized network. You may as well have a central bank to simplify everything. Pooled mining is also centralized.

If you want a consensus protocol, you need a way to choose which version of the blockchain to follow (and all nodes must agree on that).

The network address is already known to all nodes or the mining pool that you are connected to, and network addresses are actually propagated to the whole network to increase the number of nodes that can connect to each other. So if you connect to an open P2P network like a blockchain, your IP is already potentially shared with the whole network. https://en.bitcoin.it/wiki/Protocol_documentation#Network_address https://en.bitcoin.it/wiki/Protocol_documentation#addr

Yes, the IP addresses of nodes are known but not linkable to their coin address.

There was a recent attack on ZCash and Monero which allowed IP addresses to be linked with funds. It has already been fixed. Do you want to reintroduce this attack as a "feature"?

Everyone can still check the transactions in the merkle root; the pow consensus shows that all nodes agree on this merkle root. If not, they will not mine the block. Enough nodes need to be honest, like in any byzantine fault tolerant system. As far as I know, it's already like this on most mining pools: even if in theory the stratum protocol allows each miner to change the block, I don't think many miners really even check the merkle root when pool mining.

In monero they take more care about privacy, but in itself the bitcoin protocol doesn't specially protect against ip/address association. All nodes that receive a newly mined block know the IP of the mining node and the address used for the reward.

Mining pools already know your IP and your mining address, and it wouldn't be really hard for an attacker to connect the two.

I'm not saying there are no problems with this system, but I'm not so categorical that they can't be fixed at all, keeping it in a byzantine fault tolerance range comparable to bitcoin pool mining. Maybe, maybe not Smiley The OP also said he had a full solution Smiley
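The "everyone can check the transactions in the merkle root" step is cheap to do. A minimal sketch of recomputing a Bitcoin-style merkle root (double-SHA256, duplicating the last hash on odd levels); the transactions here are illustrative placeholders, not real ones:

```python
import hashlib

def dsha(b: bytes) -> bytes:
    # Bitcoin's double SHA-256.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Pair up txids level by level, duplicating the last hash
    when a level has an odd count, until one root remains."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [dsha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Illustrative transactions; any node can redo this and compare
# the result against the merkle-root field of the proposed header.
txids = [dsha(tx) for tx in (b"tx1", b"tx2", b"tx3")]
root = merkle_root(txids)
print(len(root))  # 32
```

So a miner handed only a header and the transaction list can verify the merkle root before contributing any work, without trusting whoever emitted the work.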
newbie
Activity: 23
Merit: 6
1. All nodes need to work on the signature starting from a hash that contains the same merkle root, like pooled mining.

Then what you are describing is not actually a consensus mechanism. You are saying that there is only one version of the truth and all nodes have to follow it. That's a centralized network. You may as well have a central bank to simplify everything. Pooled mining is also centralized.

If you want a consensus protocol, you need a way to choose which version of the blockchain to follow (and all nodes must agree on that).

The network address is already known to all nodes or the mining pool that you are connected to, and network addresses are actually propagated to the whole network to increase the number of nodes that can connect to each other. So if you connect to an open P2P network like a blockchain, your IP is already potentially shared with the whole network. https://en.bitcoin.it/wiki/Protocol_documentation#Network_address https://en.bitcoin.it/wiki/Protocol_documentation#addr

Yes, the IP addresses of nodes are known but not linkable to their coin address.

There was a recent attack on ZCash and Monero which allowed IP addresses to be linked with funds. It has already been fixed. Do you want to reintroduce this attack as a "feature"?
full member
Activity: 322
Merit: 151
They're tactical
OK, let's say with a 10-minute block you would create chunks of 10 seconds of work: first generating the total ring chain to be computed, then breaking it down into a series of sub ring chains, such that each sub chain needs to hash its address or id with the previous work.

Now let's say this miner id is not just the address, but an ip/address pair. Each time a new node appears on the network it registers itself and is put on the global list of miner ids. Each time a new block arrives, this address/IP pair is hashed with the new block signature, the miner ids are sorted on this hash, and the first 60 are selected for the next block.

The IP will be used to send the work to the miner and to pass it on to the next one, so IPs can be checked in and out, even if that wouldn't prevent 3 IPs from colluding to steal work.

It could be made stronger if all nodes do traceroutes on miners and a consensus can be reached on the topography of IPs. I tend to think it's a problem that has a degree of byzantine fault tolerance, as any node can check the traceroute of other nodes and deduce whether the traceroute sent by another node is incoherent. I think it's a classic problem of graph theory with byzantine fault tolerance, similar to "Techniques for Detection of Malicious Packet Drops in Networks", taking into account that the topography doesn't have to be 100% accurate, but at least gives a sufficient probability that two nodes are not located too close to each other, using some connectivity testing along the path with a technique similar to the link. Some 'hard' consensus could be added if there is too much conflict, above the byzantine fault tolerance of the system.

It would be a long shot, but wouldn't this guarantee a certain degree of decentralisation?

There are so many problems with this I don't even know where to start.

1. This scheme fails to provide the most important property: consensus. What happens if a node receives two different blocks, each with a correct set of 60 signatures? Which version of the blockchain is it going to choose? Note that this doesn't have to be malicious, it can be simply caused by a temporary network split.

2. You failed to explain what happens if one of the 60 selected miners doesn't respond, either maliciously or due to simply being offline.

3. Using IP addresses is a can of worms you don't want to open, trust me. Are you going to limit 1 unique address per IP address? Are you aware that sometimes thousands of people share the same external IP? Are you aware that network routing changes rapidly, sometimes several times per day? Do you know that a billion of IPv6 addresses can be rented for less than $1 per month? Have you thought about the privacy implications of linking coin addresses with physical network addresses?

1. All nodes need to work on the signature starting from a hash that contains the same merkle root, like pooled mining.

2. One solution to this could be to have several possible miners for the same work, either spreading the reward, or selected depending on network latency or another method.

3. All miners should have a unique IP. The traceroute technique is not for detecting internet-layer network routing, but an internal routing between blockchain nodes. A specific node routing could be selected for mining nodes.

The network address is already known to all nodes or the mining pool that you are connected to, and network addresses are actually propagated to the whole network to increase the number of nodes that can connect to each other. So if you connect to an open P2P network like a blockchain, your IP is already potentially shared with the whole network (https://en.bitcoin.it/wiki/Protocol_documentation#Network_address, https://en.bitcoin.it/wiki/Protocol_documentation#addr), so in theory an attacker with a certain number of spying nodes can already do this. If you mine on a pool, this connection is already made. In itself, the bitcoin protocol doesn't really prevent ip/address association.

The billion-IP case is harder to solve. Need to see if a distribution over IP ranges or locations could mitigate this.
newbie
Activity: 23
Merit: 6
OK, let's say with a 10-minute block you would create chunks of 10 seconds of work: first generating the total ring chain to be computed, then breaking it down into a series of sub ring chains, such that each sub chain needs to hash its address or id with the previous work.

Now let's say this miner id is not just the address, but an ip/address pair. Each time a new node appears on the network it registers itself and is put on the global list of miner ids. Each time a new block arrives, this address/IP pair is hashed with the new block signature, the miner ids are sorted on this hash, and the first 60 are selected for the next block.

The IP will be used to send the work to the miner and to pass it on to the next one, so IPs can be checked in and out, even if that wouldn't prevent 3 IPs from colluding to steal work.

It could be made stronger if all nodes do traceroutes on miners and a consensus can be reached on the topography of IPs. I tend to think it's a problem that has a degree of byzantine fault tolerance, as any node can check the traceroute of other nodes and deduce whether the traceroute sent by another node is incoherent. I think it's a classic problem of graph theory with byzantine fault tolerance, similar to "Techniques for Detection of Malicious Packet Drops in Networks", taking into account that the topography doesn't have to be 100% accurate, but at least gives a sufficient probability that two nodes are not located too close to each other, using some connectivity testing along the path with a technique similar to the link. Some 'hard' consensus could be added if there is too much conflict, above the byzantine fault tolerance of the system.

It would be a long shot, but wouldn't this guarantee a certain degree of decentralisation?

There are so many problems with this I don't even know where to start.

1. This scheme fails to provide the most important property: consensus. What happens if a node receives two different blocks, each with a correct set of 60 signatures? Which version of the blockchain is it going to choose? Note that this doesn't have to be malicious, it can be simply caused by a temporary network split.

2. You failed to explain what happens if one of the 60 selected miners doesn't respond, either maliciously or due to simply being offline.

3. Using IP addresses is a can of worms you don't want to open, trust me. Are you going to limit 1 unique address per IP address? Are you aware that sometimes thousands of people share the same external IP? Are you aware that network routing changes rapidly, sometimes several times per day? Do you know that a billion of IPv6 addresses can be rented for less than $1 per month? Have you thought about the privacy implications of linking coin addresses with physical network addresses?
full member
Activity: 322
Merit: 151
They're tactical
one miner gets all the reward when he finds the good nonce, and all the other work done for the block is useless.

The "useless" work is what makes the hashcash style PoW secure.

It still needs a way to spread the work such that you have this same kind of distribution of reward based on miner ID or address, except that a miner is just idle until he is given work, and the distribution depends on another algorithm that defines which address is going to do which part of the work for a given block; all other miners just idle and do no work, so they don't 'lose' anything

Then the main question is what provides the incentive for miners to cooperate (and suffer the network latency penalty) rather than mine selfishly 100% of the time.

For example, if the block interval is 10 minutes, each stage takes 1 ms to calculate and the average network delay is 49 ms, we can support up to 12000 cooperating miners on the network. However, a selfish miner can calculate 600000 stages locally (with 600000 different addresses) and win the whole block reward every time, because his blockchain contains more work than the cooperating blockchain.



OK, let's say with a 10-minute block you would create chunks of 10 seconds of work: first generating the total ring chain to be computed, then breaking it down into a series of sub ring chains, such that each sub chain needs to hash its address or id with the previous work.

Now let's say this miner id is not just the address, but an ip/address pair. Each time a new node appears on the network it registers itself and is put on the global list of miner ids. Each time a new block arrives, this address/IP pair is hashed with the new block signature, the miner ids are sorted on this hash, and the first 60 are selected for the next block.

The IP will be used to send the work to the miner and to pass it on to the next one, so IPs can be checked in and out, even if that wouldn't prevent 3 IPs from colluding to steal work.

It could be made stronger if all nodes do traceroutes on miners and a consensus can be reached on the topography of IPs. I tend to think it's a problem that has a degree of byzantine fault tolerance, as any node can check the traceroute of other nodes and deduce whether the traceroute sent by another node is incoherent. I think it's a classic problem of graph theory with byzantine fault tolerance, similar to "Techniques for Detection of Malicious Packet Drops in Networks", taking into account that the topography doesn't have to be 100% accurate, but at least gives a sufficient probability that two nodes are not located too close to each other, using some connectivity testing along the path with a technique similar to the link. Some 'hard' consensus could be added if there is too much conflict, above the byzantine fault tolerance of the system.

It would be a long shot, but wouldn't this guarantee a certain degree of decentralisation?
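The deterministic selection step can be sketched in a few lines. Everything here is an assumption for illustration: the thread does not fix the hash, the id format, or the count beyond "the first 60", so SHA-256 over the previous block signature plus the (ip, address) pair stands in:

```python
import hashlib

def select_miners(block_sig: bytes, miner_ids, n: int = 60):
    """Rank every registered (ip, address) pair by
    SHA-256(block_sig || ip || address) and keep the first n.
    Every node can reproduce the same list locally, with no
    coordinator, as long as they agree on block_sig and the registry."""
    def rank(miner):
        ip, addr = miner
        return hashlib.sha256(block_sig + ip.encode() + addr.encode()).digest()
    return sorted(miner_ids, key=rank)[:n]

# Illustrative registry of 200 miner ids.
miners = [("10.0.0.%d" % i, "addr%d" % i) for i in range(200)]
selected = select_miners(b"previous-block-signature", miners)
print(len(selected))  # 60
```

Since the ranking depends on the previous block's signature, the set of 60 reshuffles every block, which is what gives each registered id a roughly even chance over time.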
newbie
Activity: 23
Merit: 6
one miner gets all the reward when he finds the good nonce, and all the other work done for the block is useless.

The "useless" work is what makes the hashcash style PoW secure.

It still needs a way to spread the work such that you have this same kind of distribution of reward based on miner ID or address, except that a miner is just idle until he is given work, and the distribution depends on another algorithm that defines which address is going to do which part of the work for a given block; all other miners just idle and do no work, so they don't 'lose' anything

Then the main question is what provides the incentive for miners to cooperate (and suffer the network latency penalty) rather than mine selfishly 100% of the time.

For example, if the block interval is 10 minutes, each stage takes 1 ms to calculate and the average network delay is 49 ms, we can support up to 12000 cooperating miners on the network. However, a selfish miner can calculate 600000 stages locally (with 600000 different addresses) and win the whole block reward every time, because his blockchain contains more work than the cooperating blockchain.
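The arithmetic behind those 12000 and 600000 figures can be checked directly: cooperating miners pay the network delay between every stage handoff, while a selfish miner pays only the computation time.

```python
# Numbers from the example above.
block_interval_s = 600     # 10-minute block
stage_time_s = 0.001       # 1 ms of computation per stage
network_delay_s = 0.049    # 49 ms average delay between cooperating miners

# Cooperating chain: each stage costs compute time + a network hop.
cooperating_stages = block_interval_s / (stage_time_s + network_delay_s)
# Selfish miner: every stage computed locally, no hops.
selfish_stages = block_interval_s / stage_time_s

print(int(cooperating_stages))  # 12000
print(int(selfish_stages))      # 600000
```

So the selfish chain accumulates 50x more stages per block interval, which is the incentive problem being pointed out.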

full member
Activity: 322
Merit: 151
They're tactical
the distribution of time between solutions must be exponential (since it's the only memoryless distribution), i.e. the probability of finding a solution at time t < T is:

This is OK, but we need to see the bigger picture of why these properties make it an ideal proof of work.

The idea with the model you describe is that the distribution of the reward is based on an equal chance to win the reward for each unit of work done, with 99% of the work done not participating in the final solution: only one miner gets all the reward when he finds the good nonce, and all the other work done for the block is useless.

The idea of OP for reward distribution is clearly different because each unit of work done participate to the elaboration of the final proof and earn a reward. The problem is how to force the work to be shared between different miners to distribute the reward, as each unit of work has 100% chance of being rewarded, and the total amount of work is determined for a given block target time, the distribution must use another mechanism. If a miner is never allocated a "work slot", then he just idle and cost nothing, when he is allocated a "work slot", he compute the proof using the previous one from another miner and gain the reward.

It still need a way spread the work in sort that you have this same kind of distribution of reward based on miner ID or address, except that the miner is just idle until he is given the work, and the distribution depends on another algorithm that define which address is going to do which part of the work for a given block, and all other miner just idle and do no work, so they don't 'loose' anything, and you could have still same idea of the probability to given work to do in a time T will be evenly distributed between all miners, even if a single persons could spawn many miners.
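As a rough sketch of this "work slot" idea (all names here are illustrative assumptions, not an existing implementation), the schedule could be derived deterministically from the previous block hash, so every node computes the same assignment and idle miners know exactly when their slot comes up:

```python
import hashlib

# Hypothetical sketch: which address computes which stage of the block is
# derived from the previous block hash, so every registered address has an
# equal chance per stage and the schedule is the same on every node.
def select_worker(prev_block_hash: bytes, stage_index: int, miners: list) -> str:
    digest = hashlib.sha256(prev_block_hash + stage_index.to_bytes(4, "big")).digest()
    return miners[int.from_bytes(digest, "big") % len(miners)]

miners = ["addr_a", "addr_b", "addr_c"]  # illustrative registered addresses
prev_hash = hashlib.sha256(b"previous block").digest()

# All other miners simply idle until their stage index comes up.
schedule = [select_worker(prev_hash, i, miners) for i in range(6)]
print(schedule)
```

This does not solve the Sybil problem by itself (one person can still register many addresses), it only shows how the work distribution can be decoupled from the PoW computation.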

It would still end with the same sort of calculation, except the C for core becomes a miner ID, and the miner is idle until his ID is selected to mine a block.

It's not the same principle as bitcoin PoW, but I think it could be viable, or at least not completely impossible to solve; but maybe I'm missing something.
newbie
Activity: 23
Merit: 6
In case it was not clear from the conversation above, I'm posting an informal proof:

If an algorithm is progress-free (or memoryless), the distribution of time between solutions must be exponential (since it's the only memoryless distribution), i.e. the probability of finding a solution at time t < T is:

Code:
P(t < T) = 1 - exp(-C*T)

for some constant C, which equals the relative computational power.

If I have two equally powerful machines, then the probability that I find a solution at time t < T if I mine with both of them is:

Code:
P2(t < T) = 1 - (1 - P(t < T)) * (1 - P(t < T))
          = 1 - exp(-C*T) * exp(-C*T)
          = 1 - exp(-2*C*T)
which is exactly the same as the probability for a machine that is twice as powerful.

So progress-freeness implies perfect parallelizability.
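The identity used in this proof can be checked numerically (arbitrary positive values for C and T):

```python
import math

# Two machines of power C mining independently give the same success
# probability as one machine of power 2C - the algebra from the post above.
C, T = 0.7, 3.0  # arbitrary positive values

p_one = 1 - math.exp(-C * T)            # P(t < T) for one machine
p_two = 1 - (1 - p_one) * (1 - p_one)   # two independent machines
p_double = 1 - math.exp(-2 * C * T)     # one machine, twice the power

print(p_two, p_double)
```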
full member
Activity: 322
Merit: 151
They're tactical
My main point is that you cannot have a permissionless proof of work system that's not parallelizable.

Really? Do you have a link to a book or paper explaining this? Smiley
newbie
Activity: 23
Merit: 6
My main point is that you cannot have a permissionless proof of work system that's not parallelizable.
full member
Activity: 322
Merit: 151
They're tactical
Then it's not progress-free. The miner who starts first will always solve the block, which is not what you want in a decentralized cryptocurrency.


It's not progress-free, but the work can still be distributed across different addresses, selected evenly for each block. In the end it still has the same property as progress-free work for decentralization of the mining. But it needs another system than the PoW itself to distribute the work.
newbie
Activity: 23
Merit: 6
Why? If each unit of work depends on the result of the previous work, isn't it still non-parallelizable?

Then it's not progress-free. The miner who starts first will always solve the block, which is not what you want in a decentralized cryptocurrency.

Even if it gets to the point of not being able to distinguish many small miners from a big entity, they will all still mine with equal work-cost, which is already a win as I see it Smiley And it's possible to find ways to prevent it to a degree, or make it harder to do.

No, the large entity will mine more efficiently using many parallel calculations. The standard progression is: CPU -> GPU -> FPGA -> ASIC. On the blockchain, it will look like many small miners claiming the block reward, but in reality there will be no decentralization.
full member
Activity: 322
Merit: 151
They're tactical
If you make the unit of work small, you will achieve progress-freeness, but you will lose non-parallelizability.

Why? If each unit of work depends on the result of the previous work, isn't it still non-parallelizable?


"Proof of miner id" is susceptible to sybil attacks since you can't limit the generation of new addresses without some central authority.

In short, there is no way to distinguish many small miners from a single entity mining in parallel with many addresses.

I spent a fair bit of time researching this myself and I'm pretty sure it's a dead end.

Even if it gets to the point of not being able to distinguish many small miners from a big entity, they will all still mine with equal work-cost, which is already a win as I see it Smiley And it's possible to find ways to prevent it to a degree, or make it harder to do.


The principle in itself could be close to onion routing, as each node re-encrypts the previous message, except it cycles back to the original point after a certain number of rounds. It would need to establish a route through the nodes to compute all the work sequentially, each node adding its address to the computation.
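A minimal sketch of such a sequential chain, assuming SHA256 as the re-encryption step (purely illustrative, not the OP's RBF):

```python
import hashlib

# Each node folds its address into the running digest, so the final proof
# depends on every participant in order: no step can start before the
# previous one is finished, which is what makes the chain sequential.
def add_link(previous_digest: bytes, address: str) -> bytes:
    return hashlib.sha256(previous_digest + address.encode()).digest()

route = ["addr_a", "addr_b", "addr_c", "addr_a"]  # cycles back to the start
digest = hashlib.sha256(b"block seed").digest()
for address in route:
    digest = add_link(digest, address)

# Verification just replays the route in one pass; swapping any two nodes
# changes the final digest.
print(digest.hex())
```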

But maybe the OP has a solution to this Smiley  need to wait a bit for updates Smiley
newbie
Activity: 23
Merit: 6
If you make the unit of work small, you will achieve progress-freeness, but you will lose non-parallelizability. "Proof of miner id" is susceptible to sybil attacks since you can't limit the generation of new addresses without some central authority.

In short, there is no way to distinguish many small miners from a single entity mining in parallel with many addresses.

I spent a fair bit of time researching this myself and I'm pretty sure it's a dead end.
full member
Activity: 322
Merit: 151
They're tactical
they will all take the same time to compute

Then it's not progress-free, which is a fundamental requirement for decentralized proof of work. PoW that is not progress-free will not work in practice because it encourages selfish mining, i.e. each miner will mine their own chain to avoid the network latency delay.


It's a problem I pointed out before, but I'm not sure it cannot be solved. I didn't see a full solution for this, but in theory it should still be possible to force a breakdown of the work, like forcing each ring to be computed with a different address: the work can still be broken into small pieces, the proof still needs all the work to be done, and the minimal unit of work is very small, so maybe it can still be forced into small bits even including network latency.

I still don't see a perfect solution for this, but I think it can be worked around. If a way to make a Poisson-like distribution on the address or miner ID can be found, it can replace the progress-free requirement upstream by forcing a breakdown of the work; it requires zero initialisation time, and the minimal unit of work can be very small, so I think it would be equivalent. It still has the same property that any miner ID has an equal chance to get a reward even with small computational power, even if it needs another mechanism than the PoW itself to distribute the work.

It could be a system similar in some aspects to Ouroboros in Cardano, even if the difference is that Cardano works with PoS, so the reward is related to the stake of an address, while here the reward distribution would be based more on a "proof of miner id" than on the PoW itself. Then anyone with even small CPU power can participate and have an equal chance at a reward with the other miners, and it's less vulnerable than PoS to long-range attacks with the nothing-at-stake problem. Still, I can't see a good way to avoid penalising miners with high network latency without ending up with problems if a node in the mining chain stops responding or has high latency when computing its part of the work.

Another potential problem I see is that the total PoW is not going to scale with the number of miners: the amount of work needed to solve a full block stays constant for a given block target, which limits the maximum number of miners that can get a reward for a given block, so there is a lower probability for a miner to get a reward in a short time on a large network with lots of miners.

There are a certain number of issues to study well, but I don't think it's completely unsolvable, even if it would need more thinking and a few more mechanisms to work well in practice.
newbie
Activity: 23
Merit: 6
they will all take the same time to compute

Then it's not progress-free, which is a fundamental requirement for decentralized proof of work. PoW that is not progress-free will not work in practice because it encourages selfish mining, i.e. each miner will mine their own chain to avoid the network latency delay.
full member
Activity: 322
Merit: 151
They're tactical
2. Even if you remove the timestamp and nonce from the block header, any miner can generate as many wallet addresses as they want to get unlimited parallelization.


Only one address will get the reward, and they can't work in parallel on the same chain; no ring chain can advance faster with parallelisation. They can compute different ring chains in parallel with different addresses, but they will all take the same time to compute, and there is only one reward, so there is no huge benefit to that. (if I get it right Smiley)

The issue with address spamming can become a problem for distributing the work across different miners, but even with address spamming, they will still have the same cost/hash to compute the ring for each address, and everyone can spam addresses as well, so I think it's solvable. In any case it comes back closer to one CPU one vote: even if someone with many addresses could get more work and reward in pooled work with a single CPU, the number of CPUs doesn't matter.


This idea is not new. Search for the "RSA timelock puzzle" - this is the most famous non-parallelizable proof of work. However, it only works when some central authority is generating the work. It is absolutely useless for decentralized consensus in cryptocurrencies.

There are many problems that can be solved only through recursion, but they are not cyclic like this, so the solution needs to be known first and the puzzle made from it. For systems like that, where the solution is known first, there are many ways to have proof of work; but here the solution is not known first, and yet the work can still be verified in one step.
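For reference, the RSA timelock puzzle mentioned above can be sketched with toy numbers (insecure parameters, for illustration only):

```python
# RSA timelock sketch: compute x^(2^t) mod n by t sequential squarings.
# Each squaring needs the previous result, so the work is non-parallelizable;
# but whoever knows the factorization of n can reduce the exponent mod phi(n)
# and take a shortcut - which is why it needs a trusted puzzle creator.
p, q = 1009, 1013          # toy primes, far too small to be secure
n = p * q
phi = (p - 1) * (q - 1)
x, t = 5, 1000             # base and number of sequential squarings

# Slow path: t sequential modular squarings (the "work").
slow = x
for _ in range(t):
    slow = (slow * slow) % n

# Shortcut available only to whoever knows phi(n).
fast = pow(x, pow(2, t, phi), n)

print(slow)
```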
newbie
Activity: 23
Merit: 6
This idea is not new. Search for the "RSA timelock puzzle" - this is the most famous non-parallelizable proof of work. However, it only works when some central authority is generating the work. It is absolutely useless for decentralized consensus in cryptocurrencies.

Thus, if a miner immediately makes a lot of block headers, he can start doing calculations from all headers at once. For example, there are 1000 of them. I take 1000 cores on the video card and each core counts 1 version of the header. Then I have 1000 chances against 1 that I will quickly find the right pre-hash.
To avoid this possibility of parallel calculations, the header should start only from data that cannot be changed. And this is the height of the previous block, the hash of the previous block and the address of the miner's wallet. This data cannot be parallelized. This is the secret of why you cannot mine on all the cores of the CPU or GPU. Only 1 core for 1 IP address, which is associated with 1 wallet address.

1. You need timestamps to adjust the difficulty of the PoW so you can keep an approximately constant block rate.
2. Even if you remove the timestamp and nonce from the block header, any miner can generate as many wallet addresses as they want to get unlimited parallelization.
full member
Activity: 322
Merit: 151
They're tactical
I made some code to find all possible rings, and storing them in a csv file
it's all the combination i found : https://github.com/NodixBlockchain/rbf/blob/master/keylst.csv
Also made a thing to generate loops path randomly and run the forward/backward thing, it seems to work  Smiley
I did some more testing, testing more combination of rings it doesn't seem too bad Smiley But there are probably some combination that works better than others.
Why did you spend time on this? This has already been done by me and it can be seen from my video. In addition, this is not necessary, because in this code 32-bit numbers will not be used, 256-bit numbers will be used there, and completely different keys are needed for them. Smiley

Just to do some testing Smiley but it seems OK Smiley OK, I'll wait for the full code Smiley

But it's not very long to compute; it took less than a minute to scan most possible values, even doing it four times with four different numbers to make sure there are no false positives.

Tested with many different combinations of rings, up to 1024 picked randomly; seems to work Smiley
member
Activity: 264
Merit: 13
I made some code to find all possible rings, and storing them in a csv file
it's all the combination i found : https://github.com/NodixBlockchain/rbf/blob/master/keylst.csv
Also made a thing to generate loops path randomly and run the forward/backward thing, it seems to work  Smiley
I did some more testing, testing more combination of rings it doesn't seem too bad Smiley But there are probably some combination that works better than others.
Why did you spend time on this? This has already been done by me, as can be seen from my video. Besides, it is not necessary, because this code will not use 32-bit numbers; 256-bit numbers will be used, and completely different keys are needed for them. Smiley
full member
Activity: 322
Merit: 151
They're tactical
I made some code to find all possible rings, and to store them in a csv file

These are all the combinations I found: https://github.com/NodixBlockchain/rbf/blob/master/keylst.csv

Also made a thing to generate loop paths randomly and run the forward/backward thing; it seems to work  Smiley

I did some more testing, testing more combinations of rings; it doesn't seem too bad Smiley But there are probably some combinations that work better than others.
full member
Activity: 322
Merit: 151
They're tactical
Sha256 has very high entropy, zero regression or anything it has non linear components to cascade and amplify all the source entropy capped with a mod.
Good. I think I understand what you don’t understand. Let's do so - I'll finish the code today and shoot the last video. Then I will post the video and my code. You will take one ring from this code, BUT + take my hash function Mystique to it and use it to mask the starting number.
That is, your sequence should be like this:
Take a weak starting number, for example - 0x1 (256 bit, like in SHA256)
We perform its hashing through Mystique.
We calculate one ring through RBF.
You then pass this combination through your tests and see what they show you.
OK?

Oki Smiley Can do more testing adding the hash; I'm always up for cracking a good number sequence Cheesy
member
Activity: 264
Merit: 13
Sha256 has very high entropy, zero regression or anything it has non linear components to cascade and amplify all the source entropy capped with a mod.
Good. I think I understand what you don’t understand. Let's do so - I'll finish the code today and shoot the last video. Then I will post the video and my code. You will take one ring from this code, BUT + take my hash function Mystique to it and use it to mask the starting number.
That is, your sequence should be like this:
Take a weak starting number, for example - 0x1 (256 bit, like in SHA256)
We perform its hashing through Mystique.
We calculate one ring through RBF.
You then pass this combination through your tests and see what they show you.
OK?
full member
Activity: 322
Merit: 151
They're tactical
If it's to make a full coin, I can get into this, but it also needs a block explorer, wallet, etc., and existing software for this will probably need to be modified as well to take into account the block signature format.

I'm not sure how difficult it would be to adapt all the bitcore code for this algorithm; as blocks are also indexed by the block header hash, it probably needs changes in many places, but I can look into this. I already got familiar with some versions of bitcore.

SHA256 has very high entropy, zero regression or anything; it has non-linear components to cascade and amplify all the source entropy, capped with a mod.

I understand the problem with finding working keys, as the number of rounds has to be found by brute force.

I'm still putting my number-crunching neurons to work on this one to find a simple solution.

An S-box is essentially just a table lookup, so it's not going to use a lot of CPU power, but I'm not sure this can keep the cyclic property with the cipher implementation; I believe a similar technique can be used to increase entropy. With the SPARX concept of a large weak S-box, it means you can change it yourself, and any weak S-box is supposed to keep the non-linearity.

But adding entropy to break linear regression (linear as in a linear system) is not necessarily complex, and it doesn't need a lot of CPU power; the problem is keeping it cyclic, otherwise a simple algorithm can work well.

With the little research I did so far, it doesn't seem that a non-linear function can be cyclic, because a determined cycle period means it's not non-linear. Maybe it's possible to find a non-linear function that at least has a bounded cycle period, but I didn't find one so far.
member
Activity: 264
Merit: 13
Maybe using something like an onion cipher, where each round encrypts the last one exploiting the same property of bit rotation; with a certain algorithm it's possible it will still cycle back. And then you would have something with strong entropy.
Monsieur, in SHA256 hashing, which is used in Bitcoin, there is only ONE encryption operation similar to the one I use in RBF. It comes at the very end. ONLY ONE!!! And NOBODY has been able to crack this algorithm, even though it has an obvious built-in backdoor.
My algorithm uses thousands of such operations, and besides, they are all triple. You could also use quadruples, although this would not change anything. For example, rotate_right ^ rotate_left ^ shift_right ^ shift_left ... The difficulty level would be the same.
Perhaps you may ask why the level of difficulty when using the quadruple operation will be the same. I explain ... When the complexity level of the triple operation is greater than (the number of all atoms in the universe)**2 (to the second power), then the difference from the quadruple operation, whose complexity level would be (the number of all atoms in the universe)**3 (to the third power), changes nothing for any cracker. Both are equally IMPOSSIBLE to crack.
HOWEVER, this would greatly complicate the task for us! Because we, as developers, need to find all the working keys, and this is a very difficult computational task, especially because it cannot be parallelized. If we add complexity to this algorithm, it will take us more time to search for working keys. Do you understand what I'm talking about? Not all key sets provide direct ring functions. Many key sets provide deferred ring functions and cannot be used as PoW. Our task is to find working keys and add them to the array of keys so that there are as many of them as possible (not 4, as I use in my videos for simplicity).
Therefore, your proposal to make even more complex what is already IMPOSSIBLY difficult is meaningless. Absolutely.
If this idea really captivated you, then better let's think about how to implement it. We can organize a project and create a new cryptocurrency, which will be for all people in the world. But to do this, we need funding. At least a little. I don't know how much you need, but I need $500 per month so that I can work on the project. Otherwise, labor emigration to another country awaits me. I cannot survive in my country. There is almost nothing here. Only war and the political squabbles of different politicians.
Let's just join forces and think about how to do it. Moreover, I have other solutions for Bitcoin. For example, I have a simple and elegant algorithm that solves the problem of the "BIG" blockchain, which grows very quickly in any cryptocurrency. And things like that. If we implement them all in a New Bitcoin, then this will be real Crypto_2.0.
Think about what we can do. This will be the best contribution of your precious time, I assure you.
And with this algorithm, everything is absolutely transparent and obvious. The XOR operation is one-way, that is, it is not reversible if you do not have the starting numbers. That's all. There is nothing more to think about. No "weak solutions" can be found here. There are none.
full member
Activity: 322
Merit: 151
They're tactical
I need to think about this more, but it's possible there is still a hidden property in the last numbers that are going to be used for the signature, as they are only one step away from the original, and a lot of the entropy has been removed at this point.

There are ways to scramble it more, but it needs to keep the cyclic property as well, which is the more difficult part.

Maybe using something like an onion cipher, where each round encrypts the last one exploiting the same property of bit rotation; with a certain algorithm it's possible it will still cycle back. And then you would have something with strong entropy.

I'm looking into simple ciphers like this:

https://www.cryptolux.org/index.php/SPARX


SPARX is a family of ARX-based 64- and 128-bit block ciphers. Only addition modulo 2^16, 16-bit XOR and 16-bit rotations are needed to implement any version. SPARX-n/k denotes the version encrypting an n-bit block with a k-bit key.

The SPARX ciphers have been designed according to the Long Trail Strategy put forward by its authors in the same paper. It can be seen as a counterpart of the Wide-Trail Strategy suitable for algorithms built using a large and weak S-Box rather than a small strong one. This method allows the designers to bound the differential and linear trail probabilities, unlike for all other ARX-based designs. Non-linearity is provided by SPECKEY, a 32-bit block cipher identical to SPECK-32 except for its key addition. The linear layer is very different from that of, say, the AES, as it consists simply of a linear Feistel round for all versions.

The designers claim that no attack using less than 2k operations exists against SPARX-n/k in neither the single-key nor in the related-key setting. They also faithfully declare that they have not hidden any weakness in these ciphers. SPARX is free for use and its source code is available in the public domain (it can be obtained below).
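To illustrate how cheap these ARX building blocks are, here is a hedged sketch of a SPECK-32-style round (SPECKEY, mentioned above, differs only in its key addition); only 16-bit addition, XOR and rotations are used:

```python
MASK = 0xFFFF  # work on 16-bit words

def ror(v: int, r: int) -> int:
    return ((v >> r) | (v << (16 - r))) & MASK

def rol(v: int, r: int) -> int:
    return ((v << r) | (v >> (16 - r))) & MASK

def round_enc(x: int, y: int, k: int):
    """One SPECK-32-style round: rotate, add mod 2^16, XOR key, rotate, XOR."""
    x = (ror(x, 7) + y) & MASK
    x ^= k
    y = rol(y, 2) ^ x
    return x, y

def round_dec(x: int, y: int, k: int):
    """Inverse round, undoing the operations in reverse order."""
    y = ror(y ^ x, 2)
    x = rol(((x ^ k) - y) & MASK, 7)
    return x, y

# Encrypt with three toy round keys, then decrypt with them in reverse.
x, y = 0x1234, 0x5678
for k in (0x0001, 0x0002, 0x0003):
    x, y = round_enc(x, y, k)
for k in (0x0003, 0x0002, 0x0001):
    x, y = round_dec(x, y, k)

print(hex(x), hex(y))  # 0x1234 0x5678: the rounds invert cleanly
```

The round keys here are toy values; a real cipher derives them from a key schedule.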


It doesn't need to be something very strong, just to resist 10 minutes of brute-force attacks, even using all the CPU power in the world. Even 64 bits of "true entropy" out of a 256-bit space would be enough; it means even if it's 75% broken it's still OK for this purpose.


I will rename the files to remove the confusion between rbf & hash Smiley
member
Activity: 264
Merit: 13
The hash algorithm seems already more complex, but if the video is correct, it still means you can find a correlation between the hashes of each ring, and the proof of work becomes mostly about the hash.

Maybe I did something wrong in the procedure, but if the RBF can be regressed with 99%+ correlation like this, it's not going to be a very good proof of work, and it's a very simple regression from a free website, not even really tough cryptanalysis. On weak numbers there is 70%+ correlation even on simpler functions.

If the video is correct, you still need to add something to the RBF to break algebraic regression, and even weak correlation with simple regression doesn't mean there is no vulnerability.

Even if a large part of the numbers is eliminated from the brute force, it can still be OK because the attack time is short, essentially the target time between blocks, so around the 10 minutes for bitcoin; but it would still need something a bit stronger.

Entropy of numbers in my algorithm (RBF).
https://www.youtube.com/watch?v=F3D1mgMJvuw&feature=youtu.be

Video response.
And you confused the names a little. My ring function algorithm is called RBF. Mystique is the name of my hash algorithm. These are different algorithms. I just use them both in my PoW code.
full member
Activity: 322
Merit: 151
They're tactical
Ring Bit Function. 3 part.
https://www.youtube.com/watch?v=9-7NmuZXbdU&feature=youtu.be
An explanation of the new POW algorithm, which I call the Ring Bit Function (RBF), with C++ code examples. In this part we will mask our chain of RBF rings.


I will do some testing to check the sequence you get from numbers like 0x00010000 or such, numbers with low entropy, and put that into a regression test or check the distribution. If it can find a regression, it means you can compute round N in a single step. It wouldn't surprise me if with certain numbers some regression can be found on the sequence; if not, then good Smiley
Roughly speaking, if the number can be compressed a lot, it means the "entropy" is low, and the security would be related to how much you can compress it.
If you have only zeros, it can be compressed to just 0, and it doesn't matter if you have 256 zeroes or one.
I have long solved this problem. The answer is in this video ... Smiley

Even if a large part of the numbers is eliminated from the brute force, it can still be OK because the attack time is short, essentially the target time between blocks, so around the 10 minutes for bitcoin; but it would still need something a bit stronger.
member
Activity: 264
Merit: 13
Ring Bit Function. 3 part.
https://www.youtube.com/watch?v=9-7NmuZXbdU&feature=youtu.be
An explanation of the new POW algorithm, which I call the Ring Bit Function (RBF), with C++ code examples. In this part we will mask our chain of RBF rings.


I will do some testing to check the sequence you get from numbers like 0x00010000 or such, numbers with low entropy, and put that into a regression test or check the distribution. If it can find a regression, it means you can compute round N in a single step. It wouldn't surprise me if with certain numbers some regression can be found on the sequence; if not, then good Smiley
Roughly speaking, if the number can be compressed a lot, it means the "entropy" is low, and the security would be related to how much you can compress it.
If you have only zeros, it can be compressed to just 0, and it doesn't matter if you have 256 zeroes or one.
I have long solved this problem. The answer is in this video ... Smiley
full member
Activity: 322
Merit: 151
They're tactical
But it's just in case there can be too many weak numbers on a long ring; maybe it could improve things, especially since the brute force can be done with parallel cores.

Maybe even simple Huffman coding could remove some problems in case there are lots of zeros or repetitive bit sequences, to make the number "more compact" so to speak. Or maybe only as a test: if the number can be easily compressed with Huffman, it means it has low entropy and should be changed.
I just don't understand what you're talking about ... Really. What are the weak numbers? How do you want to crack them? Take any of the small rings already generated by me (in the pictures) and try to crack it in any way known to you. If you succeed, I will start to think about what to do with it. For now I think you do not understand what you are saying, because you rely on experience that cannot be compared with this case.
Show in practice what you mean by weak rings and the possibilities of direct attacks on them.

I will do some testing to check the sequence you get from numbers like 0x00010000 or such, numbers with low entropy, and put that into a regression test or check the distribution. If it can find a regression, it means you can compute round N in a single step. It wouldn't surprise me if with certain numbers some regression can be found on the sequence; if not, then good Smiley

Roughly speaking, if the number can be compressed a lot, it means the "entropy" is low, and the security would be related to how much you can compress it.

If you have only zeros, it can be compressed to just 0, and it doesn't matter if you have 256 zeroes or one.

The actual size of the key under smart brute force is related to the entropy of the key.

If you have 255 zeros and 1 one in a 256-bit key, the brute force is not over 2^256: the sequence will be predictable and the brute force will only need a small number of tests to find the signature.

In any case it shouldn't be too hard to strengthen it, just to avoid degenerate numbers that lead to a predictable sequence.

What makes the algorithm hard to reverse is the same principle as cipher algorithms, which can still have some weaknesses in their simple form (without an S-box or anything else), especially with low-entropy input.

With a hash it normally gives good entropy, but after many rings the signature could lose entropy and the rings become easy to predict.

But maybe it doesn't matter too much; we would need to be sure, and not wait for an attack to show it's broken Wink
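The compressibility test discussed above can be sketched like this, using zlib as a stand-in for the Huffman coding idea (the threshold is an arbitrary assumption):

```python
import zlib

# A 256-bit value that compresses well has low entropy and would be a
# "weak" starting number in the sense discussed above.
def looks_weak(value: bytes, threshold: int = 24) -> bool:
    # threshold chosen arbitrarily for this sketch: a 32-byte value that
    # deflates to fewer than 24 bytes is clearly very repetitive.
    return len(zlib.compress(value, 9)) < threshold

weak = bytes(31) + b"\x01"          # 255 zero bits and a single one
strong = bytes(range(7, 256, 8))    # 32 varied bytes, hypothetical "good" key

print(looks_weak(weak))    # True: compresses far below its raw size
print(looks_weak(strong))
```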
member
Activity: 264
Merit: 13
But its just in case if there can be too much weak numbers on long ring maybe it could improve, especially that the brute force can be made with // cores.

Maybe even simple huffman coding could remove some problems in case there is lot of zero or repetitive bit sequence. To make the number "more compact" so to speak. Even maybe only as a test like if the number can be easily compressed with huffman it mean if has low entropy and it should be changed.
I just don't understand what you're talking about ... Really. What are the weak numbers? How do you want to crack them? Take any of the small rings already generated by me (in the pictures) and try to crack it in any way known to you. If you succeed, I will start to think about what to do with it. For now I think you do not understand what you are saying, because you rely on experience that cannot be compared with this case.
Show in practice what you mean by weak rings and the possibilities of direct attacks on them.
full member
Activity: 322
Merit: 151
They're tactical
S-boxes are very common in many algorithms like this Smiley but yeah, if you pick one from an existing algorithm from the services there can always be a backdoor, and they are hard to design properly; maybe it's possible to find a simple one that can fit this. But it's just in case there can be too many weak numbers on a long ring; maybe it could improve things, especially since the brute force can be done with parallel cores.

But the old ones like GOST/DES have already been studied many times and their inner workings are well known now.

With Blowfish they generate the S-boxes from the data directly. But same thing: you can never be 100% sure with an already-made algorithm.

Maybe even simple Huffman coding could remove some problems in case there are lots of zeros or repetitive bit sequences, to make the number "more compact" so to speak. Or maybe only as a test: if the number can be easily compressed with Huffman, it means it has low entropy and should be changed.
member
Activity: 264
Merit: 13
S-boxes are like simple substitution tables, lookup tables that make a sequence less repetitive; you can find them in all advanced block ciphers (DES, GOST, etc.)

The cryptographic properties of the S-box play a crucial role in the security of the algorithm because they are the only source of non-linearity. They are also at the center of the security arguments given by algorithm designers. In fact, designers are expected to explain how the S-box they used was designed and why they chose the structure their S-box has. For example, the AES has an S-box which is based on the multiplicative inverse in the finite field GF(2^8). This choice is motivated by the fact that both the linearity and the differential uniformity of this permutation are the lowest known to be possible.

It's essentially to improve security when data can be predictable. If there is a sequence of zeros or something repetitive, it will change it to something less predictable, which can deter certain kinds of analysis.

Even a simple compression algorithm could reduce "blanks" or repetitive sequences that can be exploited, but on short keys it's not very efficient; I think an S-box would be more efficient.

I'm not sure if it's going to be very efficient for this, but it could improve it, I guess.

The attack: given a start number and the keys, the set of possible signatures after X rounds could be reduced and brute-forced. Even on good algorithms it's possible to divide the key size; with a simple algorithm like this, attacks on weak numbers may exist. But we need to see if it takes more time to brute force than to compute, as in this case it's a "real time" race, so I don't think there is huge risk.
OK, now it is clear. If I were you, I would forget forever about any cryptographic algorithms that the special services created and pushed governments to adopt as standards. I will not use what most likely has a built-in backdoor. I have my own simple, understandable and provably sound algorithms for encryption, keys and everything that is needed. With them I will be calm.
full member
Activity: 322
Merit: 151
They're tactical
S-boxes are simple substitution tables, lookup tables that make a sequence less repetitive; you can find them in all advanced block ciphers (DES, GOST, etc.).

https://en.m.wikipedia.org/wiki/S-box


https://who.paris.inria.fr/Leo.Perrin/pi.html

The cryptographic properties of the S-box play a crucial role in the security of the algorithm because they are the only source of non-linearity. They are also at the center of the security arguments given by algorithm designers. In fact, designers are expected to explain how the S-box they used was designed and why they chose the structure their S-box has. For example, the AES has an S-box which is based on the multiplicative inverse in the finite field GF(2^8). This choice is motivated by the fact that both the linearity and the differential uniformity of this permutation are the lowest known to be possible.

It's essentially there to improve security when the data is predictable. If there is a sequence of zeros or something repetitive, it changes it into something less predictable, which can deter certain kinds of analysis.

Even a simple compression algorithm could reduce the "blanks" or repetitive sequences that can be exploited, but on short keys it's not very efficient; I think an S-box would be more efficient.
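As a concrete illustration of such a lookup table, here is a sketch using the 4-bit S-box of the PRESENT cipher (chosen only because it is small and well known); the nibble-wise application below is my own illustration, not a proposal for this algorithm:

```python
# The 4-bit S-box of the PRESENT block cipher, a well-known example of a
# nonlinear substitution table (any invertible table would do here).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sub_nibbles(x, width_bits=256):
    # Substitute every 4-bit nibble of x through the table: runs of zeros
    # no longer stay zeros, although identical nibbles still map alike,
    # so an S-box alone does not fix repetition without a diffusion step.
    out = 0
    for i in range(width_bits // 4):
        out |= SBOX[(x >> (4 * i)) & 0xF] << (4 * i)
    return out

assert len(set(SBOX)) == 16   # bijective: the substitution is invertible
assert sub_nibbles(0) != 0    # an all-zero input does not stay all-zero
```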

I'm not sure it will be very effective here, but it could be an improvement, I guess. Normally it's supposed to make bit operations like these less predictable.

I know many programming languages (C, C++, Java, JS, PHP, assembler).

I don't have a strong background in cryptography, but I studied maths and was in cracking groups before, so I know the basics. I can never resist a grid of numbers like this; I have to try to make sense of it.

The attack: given a start number and the keys, the space of possible signatures after X rounds could be narrowed and brute-forced. Even against good algorithms it is sometimes possible to effectively halve the key size; with a simple algorithm like this one, attacks on weak numbers could exist. But one has to check whether the brute force takes more time than the honest computation; since this is a real-time race, I don't think there is a huge risk.

Not sure how I can help, but the idea is interesting.
member
Activity: 264
Merit: 13
Ah yes, it's true that it gets hashed beforehand, and that's what I was thinking: over hundreds of rings, even if there is a weaker number it shouldn't matter too much. An S-box doesn't cost a lot either, if it can improve security. If a very long ring did occur with a weak number it would still be a bit of a waste, but I don't think it's a big problem.
But one needs to check whether the brute-force attack can be run on parallel units; very weak rings could still be vulnerable then.
IadixDev, I'm sorry, but I don't understand what kind of "S-boxes" you are talking about, or how you are going to attack weak rings at all.
Today I will post a video showing the intermediate masking between the rings, and then you can describe your algorithm with examples: how you would obtain a "weak ring", how you would attack it and, most importantly, what that would give you...
Ok?

In fact, the concept of this project has already matured in my mind. I want to restart Bitcoin, but with its main shortcomings corrected. You already see one of these algorithms. The second will be about fixing the oversized blockchain. Along the way, we will also solve the problem of anonymity.
So it's better to write whether you can participate in the development and on what conditions... What programming languages do you know? I see you have experience with encryption algorithms and other cryptography... If you want, you can join and we will do it together. You will analyze my algorithms for vulnerabilities.
full member
Activity: 322
Merit: 151
They're tactical
Anyway, even if there were a problem with that, things like S-boxes can normally solve it easily. With a good S-box applied to the input number, you could even use 0 or the like as the input number and it would still be safe.

Maybe I'm wrong to think this, but if there is, say, only one bit set in the start number, then even though it's a big number, the sequence produced by the bit operations is going to be more predictable.

In a simple text cipher, if both the key and the data are "weak", that can be exploited by certain attacks. Here, since the number is both the data and the key, if the number is "weak" there may be an attack that predicts the sequence.

That's why adding some salt / an initialization vector, or an S-box rolled over the sequence of signatures at each block, could improve this without complicating the algorithm too much. But maybe it's not necessary.

Normally this is what makes block ciphers safer; since this algorithm uses the same principle, it would not cost much and would make it safer too, but I'm not 100% sure.

Maybe it's not necessary, because the cost of the attack exceeds the cost of the PoW. It matters more when cracking encryption, where the attack can run for a long time and the goal is to narrow down the candidate numbers for a brute-force attack. But maybe even with weak numbers and some analysis, the brute force will still cost more than the honest computation.
Oh, I understand what you're talking about, but this is a needless concern. All input data will be hashed, so no "weak" numbers will reach the algorithm's input. Besides, even one weak ring out of 1000 (for example) cannot in any way affect the result of the entire chain of rings.

Ah yes, it's true that it gets hashed beforehand, and that's what I was thinking: over hundreds of rings, even if there is a weaker number it shouldn't matter too much. An S-box doesn't cost a lot either, if it can improve security. If a very long ring did occur with a weak number it would still be a bit of a waste, but I don't think it's a big problem.

But one needs to check whether the brute-force attack can be run on parallel units; very weak rings could still be vulnerable then.
member
Activity: 264
Merit: 13
Anyway, even if there were a problem with that, things like S-boxes can normally solve it easily. With a good S-box applied to the input number, you could even use 0 or the like as the input number and it would still be safe.

Maybe I'm wrong to think this, but if there is, say, only one bit set in the start number, then even though it's a big number, the sequence produced by the bit operations is going to be more predictable.

In a simple text cipher, if both the key and the data are "weak", that can be exploited by certain attacks. Here, since the number is both the data and the key, if the number is "weak" there may be an attack that predicts the sequence.

That's why adding some salt / an initialization vector, or an S-box rolled over the sequence of signatures at each block, could improve this without complicating the algorithm too much. But maybe it's not necessary.

Normally this is what makes block ciphers safer; since this algorithm uses the same principle, it would not cost much and would make it safer too, but I'm not 100% sure.

Maybe it's not necessary, because the cost of the attack exceeds the cost of the PoW. It matters more when cracking encryption, where the attack can run for a long time and the goal is to narrow down the candidate numbers for a brute-force attack. But maybe even with weak numbers and some analysis, the brute force will still cost more than the honest computation.
Oh, I understand what you're talking about, but this is a needless concern. All input data will be hashed, so no "weak" numbers will reach the algorithm's input. Besides, even one weak ring out of 1000 (for example) cannot in any way affect the result of the entire chain of rings.
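The hashing step described above can be sketched as follows (my own assumptions; SHA-256 stands in for whatever hash the final design picks); the point is that degenerate raw inputs never reach the ring computation itself:

```python
import hashlib

def ring_start(raw_input: bytes) -> int:
    # Hash the raw input first, so the ring's starting number is always a
    # full-entropy 256-bit value, even when the raw input is degenerate.
    return int.from_bytes(hashlib.sha256(raw_input).digest(), "big")

weak = b"\x00" * 80            # a maximally repetitive raw input
start = ring_start(weak)
print(start.bit_length())      # close to 256: no "weak" number enters the ring
```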
full member
Activity: 322
Merit: 151
They're tactical
Anyway, even if there were a problem with that, things like S-boxes can normally solve it easily. With a good S-box applied to the input number, you could even use 0 or the like as the input number and it would still be safe.

Maybe I'm wrong to think this, but if there is, say, only one bit set in the start number, then even though it's a big number, the sequence produced by the bit operations is going to be more predictable.

In a simple text cipher, if both the key and the data are "weak", that can be exploited by certain attacks. Here, since the number is both the data and the key, if the number is "weak" there may be an attack that predicts the sequence.

That's why adding some salt / an initialization vector, or an S-box rolled over the sequence of signatures at each block, could improve this without complicating the algorithm too much. But maybe it's not necessary.

Normally this is what makes block ciphers safer; since this algorithm uses the same principle, it would not cost much and would make it safer too, but I'm not 100% sure.

Maybe it's not necessary, because the cost of the attack exceeds the cost of the PoW. It matters more when cracking encryption, where the attack can run for a long time and the goal is to narrow down the candidate numbers for a brute-force attack. But maybe even with weak numbers and some analysis, the brute force will still cost more than the honest computation.
member
Activity: 264
Merit: 13
There should be a way to calculate the number of cycles needed for a certain combination of "keys" (the numbers used in the ror/rol), no?
No, there is no such way. It is mathematically impossible. This number has to be found by brute force for each key combination.


The way I see it, it works like a simple cipher algorithm. The simplest cipher just XORs a number with the key to encrypt, and XORs it again to decrypt; here it's like using the number itself as the key, with bit rotations that cancel themselves out after a certain number of iterations, because rotation is cyclic.
Right...


So that's why I tend to think it's not easy to reverse, even if some numbers may be weaker than others; maybe a "salting" or initialization vector could be applied to the start number to make it more random. Otherwise it can have the same problem as a plaintext cipher: if the text and the key are too repetitive, the algorithm becomes easier to crack. With certain degenerate numbers the sequence will be more predictable, but on average it shouldn't matter too much.
This is absolutely impossible! With 256 bits, the number of possible combinations is huge: as a decimal number it is about 10^77, comparable to the number of atoms in the observable universe. Now multiply that by the number of possible key combinations used in the calculations. And that is only the variations of one ring. To this we must add just as many possible masks with which we mask the rings. Now think: what is the likelihood that the same combination (starting number + key + mask) will ever occur twice during the calculations? What computer could remember all the combinations already encountered? How long would it take to search that computer for a matching combination each time?
It's impossible. This is called a transcomputational problem...
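The order of magnitude of the 256-bit state space can be checked directly:

```python
import math

# The number of distinct 256-bit starting numbers, as a power of ten.
exponent = 256 * math.log10(2)
print(round(exponent, 1))   # about 77: 2^256 is roughly 10^77

# For comparison, the number of atoms in the observable universe is
# commonly estimated at around 10^80.
```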
full member
Activity: 322
Merit: 151
They're tactical
Ring Bit Function, part 2.
https://www.youtube.com/watch?v=Ir9Ptfg0Nbg&feature=youtu.be
In this part we build a chain of RBF rings.


The thing with the ring diagram is that you want to show two things at the same time.

There is the ring as the total work needed to complete it, and there is the distribution of the numbers; but the ring in the diagram shows the total amount of work to do, not the total space of possible 256-bit numbers. Each round still advances linearly through the ring of total work, even if the distribution of numbers along the steps is not linear.
Yes, you are right; I did not realize this at once. But I have a lot of work, so I explain as well as I can. Perhaps the video will make it clearer. Besides, the function is extremely simple...


As far as I can tell, functions like XOR/ROR are the bread and butter of most simple cipher algorithms, so I would think this cannot easily be simulated with a linear function. BUT the amount of work for a particular ring is still linear, so the progression around the ring, which represents the amount of work done, should still be linear.
Here I did not quite understand what you meant. Indeed, for each ring the amount of work is known in advance. However, we can change the complexity of the calculation by making the rings larger, and by complicating the task. Most importantly, we cannot predict in advance how much work is needed to calculate the signature, that is, how long the chain of rings will be.
In this sense, everything works exactly as with the usual SHA-256 algorithm.


There should be a way to calculate the number of cycles needed for a certain combination of "keys" (the numbers used in the ror/rol), no?

In any case, for a given combination of keys the amount of work is determined.

The way I see it, it works like a simple cipher algorithm. The simplest cipher just XORs a number with the key to encrypt, and XORs it again to decrypt; here it's like using the number itself as the key, with bit rotations that cancel themselves out after a certain number of iterations, because rotation is cyclic.
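The cyclic behaviour of rotation can be demonstrated on a toy 16-bit word (the step function and key below are hypothetical examples, not the actual RBF): a bare rotation has a predictable period, while XOR-ing a key first keeps the map a bijection, so it is still cyclic, but with a cycle length that has to be measured.

```python
from math import gcd

WIDTH = 16                     # toy word size instead of 256 bits
MASK = (1 << WIDTH) - 1

def ror(x, r):
    r %= WIDTH
    return ((x >> r) | (x << (WIDTH - r))) & MASK

# A bare rotation by r returns to the start after WIDTH // gcd(r, WIDTH)
# steps (for a value with no smaller rotational symmetry).
x0, r = 0xACE1, 6
x, period = ror(x0, r), 1
while x != x0:
    x, period = ror(x, r), period + 1
assert period == WIDTH // gcd(r, WIDTH)   # 16 // gcd(6, 16) = 8

# XOR-ing a key first keeps the map a bijection, so every start still
# lies on a cycle, but the cycle length now depends on the key.
def step(x, key=0x1D2B):
    return ror(x ^ key, r)

x, length = step(x0), 1
while x != x0:
    x, length = step(x), length + 1
print(length)   # some cycle length <= 2^16, found only by walking the ring
```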

So that's why I tend to think it's not easy to reverse, even if some numbers may be weaker than others; maybe a "salting" or initialization vector could be applied to the start number to make it more random. Otherwise it can have the same problem as a plaintext cipher: if the text and the key are too repetitive, the algorithm becomes easier to crack. With certain degenerate numbers the sequence will be more predictable, but on average it shouldn't matter too much.

But it's a nice idea. It looks like the Spartan approach: when outnumbered in parallel cores, force the fight one against one.
member
Activity: 264
Merit: 13
Ring Bit Function, part 2.
https://www.youtube.com/watch?v=Ir9Ptfg0Nbg&feature=youtu.be
In this part we build a chain of RBF rings.


The thing with the ring diagram is that you want to show two things at the same time.

There is the ring as the total work needed to complete it, and there is the distribution of the numbers; but the ring in the diagram shows the total amount of work to do, not the total space of possible 256-bit numbers. Each round still advances linearly through the ring of total work, even if the distribution of numbers along the steps is not linear.
Yes, you are right; I did not realize this at once. But I have a lot of work, so I explain as well as I can. Perhaps the video will make it clearer. Besides, the function is extremely simple...


As far as I can tell, functions like XOR/ROR are the bread and butter of most simple cipher algorithms, so I would think this cannot easily be simulated with a linear function. BUT the amount of work for a particular ring is still linear, so the progression around the ring, which represents the amount of work done, should still be linear.
Here I did not quite understand what you meant. Indeed, for each ring the amount of work is known in advance. However, we can change the complexity of the calculation by making the rings larger, and by complicating the task. Most importantly, we cannot predict in advance how much work is needed to calculate the signature, that is, how long the chain of rings will be.
In this sense, everything works exactly as with the usual SHA-256 algorithm.


I'm just waiting for this coin to be released; good luck with your funding.
Thank you...
full member
Activity: 233
Merit: 100
I'm just waiting for this coin to be released; good luck with your funding.
full member
Activity: 322
Merit: 151
They're tactical
If I may offer a suggestion on the way you present it, because it is confusing.

The thing with the ring diagram is that you want to show two things at the same time.

There is the ring as the total work needed to complete it, and there is the distribution of the numbers; but the ring in the diagram shows the total amount of work to do, not the total space of possible 256-bit numbers. Each round still advances linearly through the ring of total work, even if the distribution of numbers along the steps is not linear.

As far as I can tell, functions like XOR/ROR are the bread and butter of most simple cipher algorithms, so I would think this cannot easily be simulated with a linear function. BUT the amount of work for a particular ring is still linear, so the progression around the ring, which represents the amount of work done, should still be linear.

Well, that's just my two cents to make it clearer; I can try to make some diagrams to explain it better.
member
Activity: 264
Merit: 13
Ring Bit Function, part 1.
https://www.youtube.com/watch?v=yg-G6itsHpU&feature=youtu.be
An explanation of the new POW algorithm, which I call the Ring Bit Function (RBF), with C++ code examples.
full member
Activity: 322
Merit: 151
They're tactical
Maybe you should add a time scale on the vertical axis, with the amount of work done on the horizontal axis, to show that it is not going to take less time to compute with parallel units.
member
Activity: 264
Merit: 13
Since Monero is one of the coins that changes its algo to stay anti-ASIC, I've posted a link to this in their thread.
Monero users did fund useful projects (useful for Monero), but I cannot tell whether this is indeed good and indeed what Monero needs (since AFAIK Monero is CPU and GPU).
Great! Thank you. I would be glad if they respond and use my suggestions.


Edit: OK, I see that each portion of these rings must be sequential, and then they can be combined. Weird wording; maybe I'll delve into this later, I have no time today.
Yes, take another look at the opening topic; I have added a few pictures to the explanation. Maybe you will understand better. I also plan to post the first video today, where I will explain my algorithm, and along with it the source code.


Have you ever heard of RandomX? RandomX is not against anything; it just doesn't give an advantage to anything. No one can build an ASIC with double the efficiency of the top CPUs you can buy in computer stores all around the world.
RandomX is not a solution. Any algorithm that can be parallelized is not a solution to the problem. Besides, anything that can be computed on a GPU can, with sufficient investment, be implemented in an ASIC.
My algorithm (RBF) is the solution. It cannot be parallelized. It does not need a graphics card. It does not need a lot of energy. In time you will understand...


TO ALL.
Guys, today I added a few pictures to the description; maybe this will help you better understand my algorithm.
I also added a link to "My Story", where I explain who I am and what position I am in.
And a link to the video "Myself introduce", where I prove that I am who I say I am, showing my documents and old photos (from the Soviet Union, if anyone is interested).
All this is posted in the opening topic.
Now you know who I am and can better understand what is happening.

Next, I plan to post the first video, in which I will show the code of my algorithm and explain how it works. Wait a bit and I will do it; I think I will manage it today.
full member
Activity: 322
Merit: 151
They're tactical
For several years I was thinking about how to make a POW algorithm that would be resistant not only to ASIC devices, but also to GPU miners.

Have you ever heard of RandomX? RandomX is not against anything; it just doesn't give an advantage to anything. No one can build an ASIC with double the efficiency of the top CPUs you can buy in computer stores all around the world.


Since Monero is one of the coins that changes its algo to stay anti-ASIC, I've posted a link to this in their thread.

Monero should not change its mining algo anymore. If RandomX works as it is intended to, that is it.

ASICs only work because proof of work is based on each hash computation having a certain probability of earning the reward at a fixed cost, and you can compute an unlimited number of them in parallel. That is the only thing that gives ASICs an advantage.

With sequential computation like this, ASICs will be much less powerful and cost-efficient than even a smartphone, because 90% of their transistors are useless for a simple sequential computation. Any CPU, even the cheapest microcontroller, can do a XOR/ROR in one cycle, so it is only a question of clock rate, which is pretty much capped now; common hardware is already close to the maximum frequency, and ASICs do not have very high clock frequencies.

A ring algorithm like this can still make it easy to prove the work a miner has done.
legendary
Activity: 3668
Merit: 6382
Looking for campaign manager? Contact icopress!
Monero should not change its mining algo anymore. If RandomX works as it is intended to, that is it.

Maybe you're right. However, I remember the early days of Monero, when it was thought to be CPU-only. Then it evolved to the point where they changed that algo in order to fight ASICs.
I don't know the internals of RandomX. But I think it may be useful to keep an eye on projects like this, just in case history repeats itself.
legendary
Activity: 2730
Merit: 1288
For several years I was thinking about how to make a POW algorithm that would be resistant not only to ASIC devices, but also to GPU miners.

Have you ever heard of RandomX? RandomX is not against anything; it just doesn't give an advantage to anything. No one can build an ASIC with double the efficiency of the top CPUs you can buy in computer stores all around the world.


Since Monero is one of the coins that changes its algo to stay anti-ASIC, I've posted a link to this in their thread.

Monero should not change its mining algo anymore. If RandomX works as it is intended to, that is it.
full member
Activity: 322
Merit: 151
They're tactical

I hope I have clarified your doubt.


Yes, I think I get it; I still need to give it more thought, but it looks interesting.

So the rounds can still be distributed, but they still need to be computed sequentially, even if different miners share the work, and there needs to be a limit on the addresses that can be used for mining.

IP is not very good for this, because IPs can become cheap, it would require having that IP, and not everyone can check that the IP matches the address; a solution that doesn't depend on IP would be better.


But maybe we need to find a way to register addresses for mining that cannot easily be spammed, or another way to identify individual miners that would be costly to replicate; I think it's not impossible. I'm thinking: if every miner had his own ring path depending on his address, it would cost something to start mining from a new address, because of the accumulated proof of work on the path tied to the old address. Or some other way to penalize changing addresses in the PoW.



The part you bolded is contradictory, as latency would factor in to such an extent that a sequential system cannot be distributed in a timely way.

Whether or not this affects what is being attempted here is beyond me, but I figured I'd point it out.

Edit: OK, I see that each portion of these rings must be sequential, and then they can be combined. Weird wording; maybe I'll delve into this later, I have no time today.



The total workload can still be distributed, even if each round or ring is computed sequentially by different miners. The goal is not to scale the workload to the moon using parallel processing; on the contrary, it is to limit it using sequential computation.

I think the logic holds, because sequential computational power is not increasing. ASICs are not especially fast in terms of clock rate and sequential processing; even Google uses the same processor clocks you have in common computers, they just have millions of them. If the workload is limited to what can be computed sequentially, it does not give a big advantage to a group that can assemble a lot of computational power by exploiting parallel processing.

As mining is a relative game, the absolute amount of work doesn't matter; what matters is that an attacker can't beat 51% of the network's power. It seems an interesting idea in this regard to use deterministic, sequential proof of work.

The distribution is only there to spread the cost and the reward, not to increase the total workload.
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it

I hope I have clarified your doubt.


Yes, I think I get it; I still need to give it more thought, but it looks interesting.

So the rounds can still be distributed, but they still need to be computed sequentially, even if different miners share the work, and there needs to be a limit on the addresses that can be used for mining.

IP is not very good for this, because IPs can become cheap, it would require having that IP, and not everyone can check that the IP matches the address; a solution that doesn't depend on IP would be better.


But maybe we need to find a way to register addresses for mining that cannot easily be spammed, or another way to identify individual miners that would be costly to replicate; I think it's not impossible. I'm thinking: if every miner had his own ring path depending on his address, it would cost something to start mining from a new address, because of the accumulated proof of work on the path tied to the old address. Or some other way to penalize changing addresses in the PoW.



The part you bolded is contradictory, as latency would factor in to such an extent that a sequential system cannot be distributed in a timely way.

Whether or not this affects what is being attempted here is beyond me, but I figured I'd point it out.

Edit: OK, I see that each portion of these rings must be sequential, and then they can be combined. Weird wording; maybe I'll delve into this later, I have no time today.

legendary
Activity: 3668
Merit: 6382
Looking for campaign manager? Contact icopress!
Maybe I posted in the wrong thread?

Since Monero is one of the coins that changes its algo to stay anti-ASIC, I've posted a link to this in their thread.
Monero users did fund useful projects (useful for Monero), but I cannot tell whether this is indeed good and indeed what Monero needs (since AFAIK Monero is CPU and GPU).
member
Activity: 264
Merit: 13
OK, I thought a bit more about it. In fact, as long as it stays "solo mining", in the sense that every miner computes the whole chain of rings, it doesn't really matter if there are 1000 Merkle roots and addresses, because ultimately only one path gets the reward, every path takes roughly the same deterministic computation time, and work done in parallel is simply wasted.
It's only when the work becomes distributed among several miners that someone with many addresses could get a bigger share by computing several rounds.
But is that really a problem? If you have something like Ouroboros to select which addresses will share the work on the next block, creating many addresses gives more chances to get a share of the work and more reward, but anyone can also create many addresses; so as long as the selection mechanism is fair, it doesn't really matter whether every miner has a single address, no? Even someone with a lot of parallel processing units still can't get a huge advantage, no? Especially since there is supposedly a limited number of rounds at a given difficulty, spamming with more addresses will not give a parallel processor that much of an advantage anyway. Maybe I'm wrong, though.
I'm still trying to find a way to force a fair distribution of the work, but I think it's possible.
OK.
I see that you are looking for a serious solution. Then I will tell you a secret. Remember what I wrote about my first ICO project, VenusGEET?
In that project, all the problems of cryptocurrencies are solved radically. Everything there is built on completely different principles, including the economics. BUT! That project requires a lot of development time, which means it needs a lot of finance. Nobody will give me this money. Therefore, I decided for now to offer the community ONLY ONE algorithm, one that can be implemented in most cryptocurrencies and that solves a genuinely useful task.
This is necessary to solve the problem of my personal survival at the moment, and it also lets me earn a little trust from the crypto community.
Therefore, at the moment I am considering applying this POW algorithm only to the cryptocurrency architecture that already exists and is now accepted. That architecture still has a lot of unresolved problems, but I will not touch them; for now, I am focused on solving ONLY ONE of them.
If you are interested in a more global solution, it will be solved fundamentally, but only after I can find funding for my VenusGEET project.
full member
Activity: 322
Merit: 151
They're tactical
OK, I thought a bit more about it. In fact, as long as it stays "solo mining", in the sense that every miner computes the whole chain of rings, it doesn't really matter if there are 1000 Merkle roots and addresses, because ultimately only one path gets the reward, every path takes roughly the same deterministic computation time, and work done in parallel is simply wasted. Maybe a parallel processor can gain some margin, but the reward will not scale with the number of parallel units, while the cost still scales linearly.

It's only when the work becomes distributed among several miners that someone with many addresses could get a bigger share by computing several rounds.

But is that really a problem? If you have something like Ouroboros to select which addresses will share the work on the next block, creating many addresses gives more chances to get a share of the work and more reward, but anyone can also create many addresses; so as long as the selection mechanism is fair, it doesn't really matter whether every miner has a single address, no? Even someone with a lot of parallel processing units still can't get a huge advantage, no? Especially since there is supposedly a limited number of rounds at a given difficulty, spamming with more addresses will not give a parallel processor that much of an advantage anyway. Maybe I'm wrong, though.

I'm still trying to find a way to force a fair distribution of the work, but I think it's possible.
full member
Activity: 322
Merit: 151
They're tactical

I hope I clarified your doubt to you.


Yes i think i get it, still need to give it more thoughts but it looks interesting.

So the rounds can still be distributed but they still need to be computed sequentially even if its by different miners to share the work, and needs a limit on the addresses that can be used for mining.

IP is not very good for this because it can become cheap and it would require that IP and its not possible for everyone to check if the Ip match the address, would be better with a solution that doesnt depend on IP.


But maybe we need to find a way to register addresses for mining that cannot be spammed easily, or another way to identify individual miners that would be costly to replicate; I think it's not impossible though. I'm thinking that if every miner has his own different ring path that depends on his address, it would cost something to start mining from a new address, because of the accumulated proof of work on a path specific to that address. Or some other way to penalise changing addresses in the POW.
member
Activity: 264
Merit: 13
Ok, so if I understand the first part correctly, the idea is to use cyclic algorithms like rings, which have a known cycle length before coming back to the initial number, and the miner needs to provide the result at step N-1 of the final result, proving he has computed all the other values in the cycle before. And those rings can be combined to make longer proofs that are still easy to check, because only the last step has to be computed by the validator. Is this correct? Smiley And it cannot be parallelized because it's a lot of small sequential steps that each depend on the previous result.
yep
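The "provide the value at step N-1, check with one more step" idea above can be sketched like this. Note this is only an illustration with a placeholder ring step (a plain left rotation, whose ring trivially closes after at most 32 steps for a 32-bit word); the real RBF steps mix shifts and rotations and are described later in the thread.

```cpp
#include <cstdint>

// Stand-in ring step: rotate left by 1. A 32-bit word returns to itself
// after at most 32 applications. The real RBF steps are different; this
// is only a placeholder to show the proof/verification asymmetry.
uint32_t ring_step(uint32_t num) {
    return (num << 1) | (num >> 31);
}

// Miner side: walk the ring strictly sequentially and return the value
// one step before the cycle closes (the "N-1" value serving as proof).
uint32_t mine_pre_signature(uint32_t start, unsigned ring_len) {
    uint32_t v = start;
    for (unsigned i = 0; i + 1 < ring_len; ++i)
        v = ring_step(v);   // each step depends on the previous one
    return v;
}

// Validator side: a single cheap step must map the claimed N-1 value
// back to the start, which is expensive to have produced sequentially.
bool verify(uint32_t pre_signature, uint32_t start) {
    return ring_step(pre_signature) == start;
}
```

The asymmetry is the point: the miner performs N-1 dependent steps, the validator performs one.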

In the third part, is the idea to solve long-range attacks? Like there is only a single deterministic "seed" for each block height, which is also why it's pool resistant, as you explained in the other thread, because there can be only one problem to solve at each block height, so a huge mining rate doesn't give a big advantage for a long-range attack, as the only solution for this block has already been found before?

The problem I would see with this approach: if all miners have equal mining power, wouldn't they all find the block at the same time, since they all have the same amount of computation to do?
I think there is a misunderstanding.
The third principle protects against parallel computing. That is its main task.
For example, if we do not use the third principle, each miner can generate 1000 different headers for the same block. This is possible because any rearrangement of the transactions gives a different Merkle root: for 2 transactions you can get 2 different Merkle roots, for 3 transactions already 6, and so on. The same goes for the timestamp. The miner can adjust the time at which he supposedly began to mine the block, which means he can create many headers in advance from different seconds.
Thus, if a miner makes a lot of block headers up front, he can start calculating from all of them at once. Suppose there are 1000 of them. I take 1000 cores on a video card and each core computes 1 version of the header. Then I have 1000 chances against 1 that I will quickly find the right pre-hash.
To rule out this possibility of parallel calculation, the header must start only from data that cannot be changed: the height of the previous block, the hash of the previous block, and the address of the miner's wallet. This data cannot be parallelized. This is the secret of why you cannot mine on all the cores of a CPU or GPU. Only 1 core per 1 IP address, which is associated with 1 wallet address.

How this can help against long-range attacks, I do not quite understand. Long-range attacks are more characteristic of POS algorithms. However, if we assume a long-range attack against a POW algorithm, then I think that fixing the accumulated network difficulty in the chain, combined with a short difficulty-recalculation interval, would be a much better way to deal with it.
How it works...

Let's say the attacker started from the genesis block and, after a long stretch of computation, forced his chain to increase its difficulty. Then he begins making quick calculations and generates many blocks at high speed. Thus, after some time, the height of the chain the attacker generates will be greater than that of the real blockchain, and the nodes may switch to the attacker's longer chain.
What would be the best way to deal with this?
It helps if the interval between blocks until the next difficulty recalculation is short, but this alone does not save you 100%.
Accumulating network difficulty, however, is a 100% solution.

It looks simple. In each block we record the network difficulty at which it was calculated. In the next block we add the old difficulty to the new one and record the sum. In the third block we add the new block's difficulty to the old sum, and so on. Thus, the "heavy" chain of blocks will always be heavier than the light blocks of the fraudulent chain. This is a good marker by which the network can fight long-range attacks.
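The accumulation rule above can be sketched in a few lines (an illustrative sketch, not the project's code; the `Block` struct and helper names here are made up for the example):

```cpp
#include <cstdint>
#include <vector>

// Each block records the difficulty it was mined at and the running
// sum of all difficulties so far ("accumulated difficulty").
struct Block {
    uint64_t difficulty;       // difficulty this block was mined at
    uint64_t total_difficulty; // sum of difficulties up to this block
};

// Append a block, carrying the accumulated sum forward.
void append_block(std::vector<Block>& chain, uint64_t difficulty) {
    uint64_t prev_total = chain.empty() ? 0 : chain.back().total_difficulty;
    chain.push_back({difficulty, prev_total + difficulty});
}

// Fork choice by weight: the chain whose tip carries the larger
// accumulated difficulty wins, regardless of which chain is longer.
bool heavier_than(const std::vector<Block>& a, const std::vector<Block>& b) {
    uint64_t ta = a.empty() ? 0 : a.back().total_difficulty;
    uint64_t tb = b.empty() ? 0 : b.back().total_difficulty;
    return ta > tb;
}
```

With this rule, a long chain of cheap low-difficulty blocks (the attacker's) loses to a shorter but heavier honest chain, even though it has greater height.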

Now about the benefits of the pool.
A pool cannot bring any advantage in my algorithm, precisely because it is impossible to divide the range of search values between miners. Since each miner can mine only from a block header that contains his own wallet, and this value is hashed together with the hash of the previous block, the starting value is unique for each miner, BUT predetermined. The pool cannot allocate the range of values from 8 to 12 to the first miner and from 56 to 72 to another, because each miner's starting value is predetermined. Moreover, because the values are distributed randomly around the ring, the spectrum cannot be divided up sensibly either.
For example, the first miner's range of values will not run from 100 to 200, but will be a set like this: {100, 282, 13, 86, 72, 254, 989}. These are the predetermined values he must pass through. After 282 he cannot jump to 283... Do you understand?
And so for each miner.
Moreover, these values are NOT KNOWN in advance!!! That is, they are predetermined, because they are determined by the algorithm, BUT they are not known in advance.

Thus, the pool has no data that would allow it to distribute the spectrum of calculations competently.
That is why I drew it so that, when using my algorithm, each miner has the same range of values for the hash search.
However, do not make the mistake of thinking that this will lead to all miners finding the hash value at the same time. Someone, after starting the calculation of a block, may fail to find a suitable pre-hash even in 1000 years. Smiley
But this problem is also solved. The algorithm checks for a new block in the chain. As soon as a message about a new block (of greater height) appears, the algorithm immediately stops calculating the obsolete block and begins calculating the new one.
I hope I have cleared up your doubts.
full member
Activity: 322
Merit: 151
They're tactical
Ok, so if I understand the first part correctly, the idea is to use cyclic algorithms like rings, which have a known cycle length before coming back to the initial number, and the miner needs to provide the result at step N-1 of the final result, proving he has computed all the other values in the cycle before. And those rings can be combined to make longer proofs that are still easy to check, because only the last step has to be computed by the validator. Is this correct? Smiley And it cannot be parallelized because it's a lot of small sequential steps that each depend on the previous result.

In the third part, is the idea to solve long-range attacks? Like there is only a single deterministic "seed" for each block height, which is also why it's pool resistant, as you explained in the other thread, because there can be only one problem to solve at each block height, so a huge mining rate doesn't give a big advantage for a long-range attack, as the only solution for this block has already been found before?

The problem I would see with this approach: if all miners have equal mining power, wouldn't they all find the block at the same time, since they all have the same amount of computation to do?

With the original btc solo mining, all nodes have different txs in their mempool and work on different blocks with non-linear solving times, so there is a better chance that one finds a block before the others, which makes it easier to settle on the longest chain.

It would be harder to settle on a longest chain if all mining work were perfectly equal between all miners, no? The non-linear solving time of the hash function makes it easy to have winners of the longest chain even with equal hash power, and game theory incentivizes staying on the longest chain.

With your algorithm, isn't it a linear amount of computation for all miners?

Maybe using the 4th part also on the address, to select the ring algorithm, and forcing an address change on each block, could make the solving time less linear.

Is it still supposed to work with a constant block target time, with the difficulty adjusting to keep the time between blocks constant?
member
Activity: 264
Merit: 13
This looks nice and promising. But you may also wanna take this to the Altcoin Discussion section for more exposure and also for more feedback from people.

Go here

https://bitcointalk.org/index.php?board=67.0;sort=first_post;desc

I get it. Thank you. Now I will make a duplicate of the topic in the thread that you indicated.
jr. member
Activity: 108
Merit: 1
December 09, 2019, 04:12:46 AM
#9
This looks nice and promising. But you may also wanna take this to the Altcoin Discussion section for more exposure and also for more feedback from people.

Go here

https://bitcointalk.org/index.php?board=67.0;sort=first_post;desc
member
Activity: 264
Merit: 13
December 09, 2019, 02:15:49 AM
#8
I myself am Russian but I do not have such links
and can this algorithm be used, say, for Litecoin and its miners Huh?
No problem, here is the Russian thread:
https://bitcointalksearch.org/topic/asicgpufpga-pow-5208035

and can this algorithm be used, say, for Litecoin and its miners Huh?
Do you want to rewrite Litecoin using this algorithm? Nobody can stop you from doing that. Added to the source code of any cryptocurrency, it will block the use of GPU / ASIC / FPGA.
hero member
Activity: 1484
Merit: 505
December 09, 2019, 01:33:36 AM
#7
if there is no possibility of mining coins using GPU or ASIC or CPU, then how will your POW mine the coin Huh?
Why did you include the CPU in this list? I did not say anything like that. My algorithm is CPU-only.

p.s.  perhaps you have links to any other addresses related to your project?
Only in Russian...
I myself am Russian but I do not have such links
and can this algorithm be used, say, for Litecoin and its miners Huh?
member
Activity: 264
Merit: 13
December 09, 2019, 01:31:56 AM
#6
if there is no possibility of mining coins using GPU or ASIC or CPU, then how will your POW mine the coin Huh?
Why did you include the CPU in this list? I did not say anything like that. My algorithm is CPU-only.

p.s.  perhaps you have links to any other addresses related to your project?
Only in Russian...
hero member
Activity: 1484
Merit: 505
December 09, 2019, 12:53:49 AM
#5
You might as well do the calculations on a piece of paper, and that is how mining on your algorithm will go Cheesy Cheesy Grin Grin Grin
hero member
Activity: 1484
Merit: 505
December 09, 2019, 12:51:44 AM
#4
if there is no possibility of mining coins using GPU or ASIC or CPU, then how will your POW mine the coin Huh?
sr. member
Activity: 1330
Merit: 251
December 09, 2019, 12:43:46 AM
#3
Maybe I posted in the wrong thread?

 
   Hello! If you move to another place, please leave the new address here; I would like to follow your development.


 p.s.  perhaps you have links to any other addresses related to your project?
member
Activity: 264
Merit: 13
December 08, 2019, 05:36:28 PM
#2
Maybe I posted in the wrong thread?
member
Activity: 264
Merit: 13
December 08, 2019, 01:49:34 PM
#1
Anti ASIC/GPU/FPGA/CLOUD/MAINFRAME POW-algorithm
Now mining is available to everyone...

Hello to all.
For several years I have been thinking about how to make a POW algorithm that is resistant not only to ASIC devices, but also to GPU miners.
And I developed it! Smiley
Last night I successfully generated and verified the first local chain of signatures confirming a block (see screenshot below). So, starting today, I will publish a description of the algorithm and parts of the C++ code.



The main decision.

The only reason the crypto community has not solved the problem of specialized devices is that modern POW algorithms lend themselves to parallel computation.
This is where the ideal solution comes from: the calculation algorithm must be strictly sequential and resistant to any attempt to perform the calculations in parallel.

The first principle is cyclic calculations.

We must abandon hashing, which is currently the main calculation algorithm for block signing. I propose replacing it with the Ring Bit Function (RBF), because it perfectly fulfills all the requirements for a POW algorithm.

POW algorithm requirements:
1) one-way function;
2) hard to calculate;
3) easy to check;
4) only sequential calculations.

Let me give you an example of an RBF for a 32-bit number.



You see that after 8 rounds of such calculations we again get the initial number. Since the calculations go around a ring, we can use them for the POW algorithm. For example, rounds 1 through 7 are the search for the result of the calculation, round 7 yields the signature found, and round 8 is a check of the accuracy of the calculations!
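The ring property can be demonstrated with a short loop that walks the ring until it closes. This sketch uses a placeholder step (rotate left by 4, which happens to give an 8-round ring for a value whose eight nibbles are all distinct), not the actual RBF from the figure:

```cpp
#include <cstdint>

// Placeholder ring step with an 8-round cycle: rotate left by 4 bits.
// A 32-bit value whose nibble pattern has no smaller period returns to
// itself after exactly 8 applications. The real RBF mixes shifts and
// rotations; this only demonstrates the ring property itself.
uint32_t ring_step(uint32_t num) {
    return (num << 4) | (num >> 28);
}

// Walk the ring from `start`, counting rounds until the value returns
// to `start` (capped so a non-ring function still terminates).
unsigned ring_length(uint32_t start, unsigned cap) {
    uint32_t v = start;
    for (unsigned rounds = 1; rounds <= cap; ++rounds) {
        v = ring_step(v);      // strictly sequential: no step can be skipped
        if (v == start)
            return rounds;     // the ring has closed
    }
    return 0;                  // did not return within the cap
}
```

A `cap` is needed because, as noted below in the post, some shift/rotation combinations do not form rings at all.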



Moreover, this function is one-way; that is, the inverse transformation is possible only by brute force.
Thus, for this type of computation, which I have called the "Ring Bit Function", all 4 requirements for the POW algorithm are fulfilled.



If we talk about ordinary hashing, parallel computation is possible in it, which allows special devices to be used to speed up the calculations.


(Thanks to IadixDev for the diagram.)

The ring size of an RBF may vary. It depends on the combination of rotations and shifts that we apply. Some combinations do not create ring functions, so you must first check which combinations work.



I will give some working examples for 32-bit numbers:

239 rounds
NewNum = (num >> 1) ^ (num <<< 1)

55117 rounds
NewNum = (num >> 1) ^ (num <<< 1) ^ (num >>> 3)

131069 rounds
NewNum = (num >> 1) ^ (num <<< 1) ^ (num >>> 13)

There are a lot of such combinations, which allows us to vary the level of computational complexity.
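Assuming `<<<` and `>>>` denote left and right bit rotation and `>>` a plain logical shift (the post does not define the notation), the first two combinations above could be written like this. The quoted round counts (239, 55117, ...) are the author's figures and are not re-verified here:

```cpp
#include <cstdint>

// Left / right rotation of a 32-bit word (n must be in 1..31 here,
// so the complementary shift never reaches the full word width).
uint32_t rotl32(uint32_t x, unsigned n) { return (x << n) | (x >> (32 - n)); }
uint32_t rotr32(uint32_t x, unsigned n) { return (x >> n) | (x << (32 - n)); }

// First example combination, assuming "<<<" is a left rotation and
// ">>" a logical right shift:
//   NewNum = (num >> 1) ^ (num <<< 1)
uint32_t rbf_round_a(uint32_t num) {
    return (num >> 1) ^ rotl32(num, 1);
}

// Second example, additionally assuming ">>>" is a right rotation:
//   NewNum = (num >> 1) ^ (num <<< 1) ^ (num >>> 3)
uint32_t rbf_round_b(uint32_t num) {
    return (num >> 1) ^ rotl32(num, 1) ^ rotr32(num, 3);
}
```

Note that the rotation amount is kept strictly between 1 and 31: shifting a 32-bit value by 32 is undefined behavior in C++, which is an easy trap when implementing rotations by hand.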

The second principle is Olympic rings.

One ring function can give us only one signature, but in order to find a "beautiful" signature we need to calculate many ring functions. The fourth requirement of the POW algorithm (only sequential calculations) demands that all ring functions be connected. Since the calculations in a ring function are one-way, when searching for a "beautiful" signature we move along the ring in one direction, and during verification in the other. If we begin to calculate a new ring from an old signature using the same algorithm, we will just keep moving along the same ring. To create a new ring from the old signature, we must switch to a different algorithm. In this case, the chain of calculations resembles the Olympic rings (only longer: it will consist of hundreds or thousands of rings). Changing the calculation algorithm on each ring helps increase the resistance of our algorithm to ASIC devices.



To further harden the algorithm against ASIC and FPGA devices, we will apply masks at each transition from one ring to the next. This masking algorithm is two-way. I have called it Mystique.
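The chaining idea can be sketched as follows. This is illustrative only: the two step functions and the XOR mask are placeholders standing in for the real ring algorithms and the Mystique mask, which are not published in this post. The point is that each ring's output, after masking, seeds the next ring, so the whole chain must be computed strictly in sequence:

```cpp
#include <cstdint>
#include <vector>

// Two different placeholder step functions, standing in for the
// distinct ring algorithms used on consecutive rings.
uint32_t step_a(uint32_t x) { return (x << 1) | (x >> 31); }                  // rotl 1
uint32_t step_b(uint32_t x) { return ((x >> 1) | (x << 31)) ^ 0xA5A5A5A5u; }  // rotr 1, then xor

// Walk one ring: apply `step` sequentially for `rounds` rounds.
uint32_t walk_ring(uint32_t start, unsigned rounds, uint32_t (*step)(uint32_t)) {
    uint32_t v = start;
    for (unsigned i = 0; i < rounds; ++i) v = step(v);
    return v;
}

// Chain rings "Olympic style": the masked result of each ring seeds the
// next ring, which uses the other step function. No ring can start
// before the previous one has finished.
uint32_t chain_rings(uint32_t start, const std::vector<uint32_t>& masks,
                     unsigned rounds_per_ring) {
    uint32_t v = start;
    bool use_a = true;
    for (uint32_t mask : masks) {
        v = walk_ring(v, rounds_per_ring, use_a ? step_a : step_b);
        v ^= mask;        // masked transition between rings (the "Mystique" slot)
        use_a = !use_a;   // switch the ring algorithm at each transition
    }
    return v;
}
```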



The third principle is two-step signature.

However, an attacker could still increase his chances of receiving the block reward by running parallel calculations based on different Merkle roots, as well as different points in time.
To eliminate this possibility, the signature calculation must use only data that cannot be varied, while still changing from block to block. Only the signature of the previous block and the block number satisfy both conditions.
Additionally, we need to fix the miner's right to receive the reward, so I propose adding to the signature of the previous block the miner's wallet address to which the block reward will be credited. All other data can be varied within the same block, so it is not suitable for calculating the signature.
However, all the other data must still be fixed in the block signature.
To solve this problem, I suggest creating a block signature in two stages:

Stage 1.
Based on the signature of the previous block, the block number, and the miner's wallet address, a hash is computed (for example, SHA256).
The resulting hash serves as the starting point of the main calculation when searching for a "beautiful" signature.

Stage 2.
The signature found is only a pre-signature.
The pre-signature serves as the starting number for a hash that receives all the other block data (Merkle root, chain counter, timestamp, etc.) as input.
The resulting hash is the block signature.
Thus, the miner cannot calculate the block hash until he finds the correct pre-signature, and once it is found, there is no longer any point in calculating different hashes from different input data.
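The two stages can be sketched as follows. FNV-1a is used here as a compact stand-in for the real hash (the post suggests SHA256 for stage 1), and `find_pre_signature` is a dummy placeholder for the expensive sequential ring search; both are assumptions of this sketch, not the project's code:

```cpp
#include <cstdint>
#include <string>

// Stand-in hash (64-bit FNV-1a); the post suggests SHA256 here.
uint64_t fnv1a(const std::string& data) {
    uint64_t h = 1469598103934665603ull;
    for (unsigned char c : data) { h ^= c; h *= 1099511628211ull; }
    return h;
}

// Stage 1: the search may start ONLY from data the miner cannot vary:
// the previous block signature, the block number, and his wallet address.
uint64_t stage1_start(uint64_t prev_sig, uint64_t height, const std::string& wallet) {
    return fnv1a(std::to_string(prev_sig) + "|" + std::to_string(height) + "|" + wallet);
}

// Dummy placeholder for the sequential ring search that turns the start
// value into a pre-signature (the expensive, non-parallelizable part).
uint64_t find_pre_signature(uint64_t start) {
    uint64_t v = start;
    for (int i = 0; i < 1000; ++i)
        v = v * 6364136223846793005ull + 1442695040888963407ull;
    return v;
}

// Stage 2: only once the pre-signature is found are the remaining block
// fields (Merkle root, timestamp, ...) hashed in. Varying them now gains
// the miner nothing, since the hard work is already behind him.
uint64_t block_signature(uint64_t pre_sig, const std::string& merkle_root,
                         uint64_t timestamp) {
    return fnv1a(std::to_string(pre_sig) + "|" + merkle_root + "|" + std::to_string(timestamp));
}
```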

The fourth principle is self-programming.

This last principle gives the calculations maximum protection from ASIC devices. It is based on the idea that each input number is simultaneously the program by which it will be transformed. Thus, each new calculation is performed according to a unique program, which rules out using ASIC devices to speed up the calculations.
To do this, the starting number is divided into groups of several bits, and a certain sequence of transformations is attached to each possible value of a group. This is best illustrated by the powerful one-way hash algorithm Mystique, which I developed specifically to strengthen the hash operations when computing a block. It uses various number transformations, examples of which you can see in the picture below. This algorithm will be considered separately, as it is an additional solution.
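A sketch of the self-programming idea (not Mystique itself, which is not published in this post; the four transformations below are placeholders): the input word is read two bits at a time, and each 2-bit group selects which transformation is applied next, so the transformation sequence is unique to the input.

```cpp
#include <cstdint>

uint32_t rotl32(uint32_t x, unsigned n) { return (x << n) | (x >> (32 - n)); }

// Four placeholder transformations; a 2-bit opcode picks one of them.
uint32_t op0(uint32_t x) { return rotl32(x, 5); }
uint32_t op1(uint32_t x) { return x ^ 0x9E3779B9u; }
uint32_t op2(uint32_t x) { return x + 0x85EBCA6Bu; }   // wraps mod 2^32
uint32_t op3(uint32_t x) { return ~x; }

// "Self-programming": the number doubles as its own program. Walk the
// sixteen 2-bit groups of `input` and apply the transformation each
// group selects. Different inputs follow different instruction
// sequences, which is hard to bake into fixed silicon.
uint32_t self_program(uint32_t input) {
    uint32_t v = input;
    uint32_t program = input;
    for (int group = 0; group < 16; ++group) {
        switch (program & 3u) {           // read the next 2-bit opcode
            case 0: v = op0(v); break;
            case 1: v = op1(v); break;
            case 2: v = op2(v); break;
            case 3: v = op3(v); break;
        }
        program >>= 2;
    }
    return v;
}
```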



The problem is solved.

Done. The problem of protecting a POW algorithm from ASIC / GPU / FPGA and other special devices is solved. For greater strength, the algorithm will use 256-bit numbers.

What has already been done.

At the moment, the entire POW algorithm has been developed and tested. The core algorithm code is implemented locally in C++.



What to do with it?

You can implement this algorithm in any cryptocurrency, and it will be the best POW algorithm you have ever known.

Ring Bit Function. Part 1.
https://www.youtube.com/watch?v=yg-G6itsHpU&feature=youtu.be
An explanation of the new POW algorithm, which I call the Ring Bit Function (RBF), with C++ code examples.

Ring Bit Function. Part 2.
https://www.youtube.com/watch?v=Ir9Ptfg0Nbg&feature=youtu.be
In this part we make a chain of RBF rings.

Ring Bit Function. Part 3.
https://www.youtube.com/watch?v=9-7NmuZXbdU&feature=youtu.be
In this part we mask our chain of RBF rings.

Entropy of numbers in my algorithm (RBF).
https://www.youtube.com/watch?v=F3D1mgMJvuw&feature=youtu.be