Topic: New PoW method using factorization of large numbers. - page 2. (Read 916 times)

member
Activity: 98
Merit: 26
So would I ;). I would like to apologize to haltingprobability; I have been a bit harsh on him. Some people like seeing proof; I, for one, am someone who will go out and search for myself. But different strokes for different folks.

No hard feelings.

Quote
I am not the kind of meticulous person to work on proofs.  I don't have a degree in mathematics or computer science and I tend to get interested in developing concepts and ideas and implementing them.

I am a computer engineer. I have a degree in computer engineering. I'm trying to inform you that your intuitive notions about the difference between GPUs and CPUs - while good enough for basic comprehension - are not rigorous and that, in this case, that matters.

There is a theorem in computational complexity (the field of mathematics that studies how fast any given kind of formal problem can be solved), called the speed-up theorem. Basically, this theorem says that any particular solution to a problem (an "algorithm") can be sped up by a linear factor. So, suppose that you find an algorithm A that solves problem Z in time n^2, where n is the size of a problem in Z. This is called quadratic complexity - as n increases, the time required to solve the problem grows as the square of n. The speed-up theorem tells us that it is always possible to find some fixed factor, k, such that we can construct a new algorithm A' from A that solves a problem in Z in time n^2 / k. Note that this does not change the complexity class - A' still solves problems in Z in quadratic time. Intuitively, this makes sense - if I can solve a problem in time n^2 with one computer, then I should always be able to solve it in nearly n^2 / 2 with two computers. There are some very intractable problems where this becomes difficult (due to network complexity) but such problems tend to be academic, i.e. specially constructed to make parallelization difficult.

Quote
CPUs are adaptable, GPUs are not. CPUs are fast and think on their feet whereas GPUs are slow and methodical and share their work well.

This is mostly not true anymore. GPGPUs (general-purpose GPUs) can, in principle, do anything that CPUs can do, and the latest GPU cards are being released with GPGPU cores in them. GPU clock speeds are in the same class as CPU clock speeds, so they are not slow. GPUs are designed around "data parallelism" or "data flow architecture", and this is why they are well suited to graphics applications or any application where you can easily parallelize the work by simply dividing up the work surface and handing out more or less identical instructions to each work unit (rendering pipeline).

Quote
So it would be very shortsighted to think that GPUs can accelerate any given task and be better than a CPU.

For any large search problem (and any non-trivial PoW must be a large search problem), GPUs will be superior. If I have to try 2^60 possibilities (about a billion billion) in order to solve a given PoW problem, I can try these one-by-one on a CPU or I can divide the problem up and have multiple CPUs or GPUs work on it in parallel. Suppose I have 128 GPUs available to me. Instead of having to search 2^60 possibilities, each GPU now only has to search 2^53 possibilities, resulting in a 128x reduction in search time. Factoring is just a large search problem.
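
To make the partitioning argument concrete, here is a minimal Python sketch (illustrative only - the toy number, the worker count, and plain trial division are my own stand-ins, not anything from the proposal) of how a factor search speeds up when each worker gets a disjoint slice of the candidate range:

```python
from concurrent.futures import ProcessPoolExecutor

# Toy composite (999983 * 1000003); a real PoW target would be ~100 digits.
N = 999_985_999_949

def search_slice(bounds):
    """Trial-divide N by every odd candidate in [lo, hi); N is odd, so evens are skipped."""
    lo, hi = bounds
    for d in range(lo | 1, hi, 2):
        if N % d == 0:
            return d
    return None

def parallel_factor(workers: int = 8):
    limit = int(N ** 0.5) + 1
    step = limit // workers + 1
    # Disjoint slices of the candidate range, one per worker.
    slices = [(max(3, i * step), min(limit, (i + 1) * step)) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for factor in pool.map(search_slice, slices):
            if factor is not None:
                return factor
    return None

if __name__ == "__main__":
    print(parallel_factor())  # finds the factor 999983
```

Each of the eight workers scans roughly one eighth of the candidates, which is exactly the linear speedup described above; a GPU just pushes the same trick out to thousands of work units.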
member
Activity: 322
Merit: 54
Consensus is Constitution
Where do you get those 100 digit numbers in the first place? Is there some database or some other computer (server) generating those tasks or what? Sorry if that has been already mentioned but I didn't see that.

They are generated using a hashing algorithm like sha-256.  Basically the same way they currently work in bitcoin.  Great question.  Let me know if you want a more in depth answer.

Yes I would love to have a more in depth answer. I have read some descriptions about sha256 and I didn't see any reason it would produce large numbers that are either primes or semiprimes.

We don't want them to be primes.  We want them to be composite so that we can find the winning factor (the factor that meets the difficulty requirements).

Hashing algorithms take any input and produce a large, effectively random output. This output can be converted into base 10 - a normal number. Let's say it produces a 120-digit number and we want a 102-digit number: we can just truncate it to the number of digits we want.
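
Here is a minimal Python sketch of that derivation (my own illustration, not code from the proposal; the header fields and the 102-digit length are placeholders). Note that a single SHA-256 digest is only about 77 decimal digits, so this sketch concatenates two tagged digests before truncating:

```python
import hashlib

def derive_target(header: bytes, digits: int = 102) -> int:
    """Hash the header, read the result as a decimal integer, keep the first `digits` digits."""
    # One SHA-256 digest is 256 bits (~77 decimal digits), so two tagged digests
    # are concatenated to get comfortably more than 102 digits before truncating.
    d = hashlib.sha256(header + b"\x00").hexdigest() + hashlib.sha256(header + b"\x01").hexdigest()
    decimal = str(int(d, 16))
    return int(decimal[:digits])

header = b"prev_hash|merkle_root|timestamp|nonce"  # placeholder header fields
n = derive_target(header)
print(len(str(n)), n)  # 102, and the same header always gives the same number
```

Anyone hashing the same header gets the same 102-digit number, so every node can check the target independently without any server handing out tasks.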
member
Activity: 73
Merit: 10
Where do you get those 100 digit numbers in the first place? Is there some database or some other computer (server) generating those tasks or what? Sorry if that has been already mentioned but I didn't see that.

They are generated using a hashing algorithm like sha-256.  Basically the same way they currently work in bitcoin.  Great question.  Let me know if you want a more in depth answer.

Yes I would love to have a more in depth answer. I have read some descriptions about sha256 and I didn't see any reason it would produce large numbers that are either primes or semiprimes.
member
Activity: 322
Merit: 54
Consensus is Constitution
Wow, I don't know how we programmers in the past managed to do anything without proof of work, you know, counting the number of black pixels on hundreds of images just so we can keep wasting electricity and spending money on 1080 GPUs in order to have PoW in our programs.

Where did the need come from? Yes, I remember this blockchain thing, as if we never had databases or spreadsheets before, or weren't in a position to lock records or data.

That Satoshi guy created more problems than he solved in developing digital "people's money", and TPTB are none too happy about BitTorrent, but BT does not waste time and energy counting grains of sand to overcome its problems, does it now.

Tor is another nice tool (before someone got it to spread WannaCry viruses using the client software, not back doors in browsers, which kicked off the boom in the BTC price).

It seems to me that what we wanted and what we have got are nothing like each other, and people like me, or people that are smarter, need to go back to the drawing board if we are left counting grains of sand and distributing a 200GB chain to each and every node that tries to run a system in near real time.

ETH, which allows us to deploy programs (call them contracts, it keeps Wall Street happy) on distributed servers that use a type of gas to cover the running costs, is cool, but look closer: it's not offering the .onion type sites you get on the underground using Tor, and seems to be just offering a very expensive service, if not easy to use, that allows us to rent space to store a few global strings and int32 numbers at a huge cost.

We need to take concepts like Tor, BitTorrent, ETH, Tangle, BAT and what has been learned from BTC, go back to the drawing board, and look at this again from scratch, methinks.

Each node in the network should also offer a VPN or proxy type service and be paid in a type of gas, because the internet is being closed down using censorship, so we need to not only take the money back but also our freedom of speech whilst we are about it.

Good point, but even ETH uses proof of work. The problem that is unique to blockchains is that they require the information to be correct. How do we verify that Sally did or did not give Bob 100 bitcoins? Who do we trust to tell us the truth? That is where proof of work comes in. We let everyone vote on it. If we let every IP address vote, then people would just make multiple IP addresses to vote more. PoW makes it so you have to prove that you are doing work, so you can't just get a bunch of free votes.

That said, my PoW method here actually does provide value to the world. Prime numbers are valuable (they are used to create hard-to-break public-key encryption, the kind tools like Tor rely on), and everyone factoring large numbers would give us hints that some numbers that haven't been factored are possible primes, so those can be checked out later.
member
Activity: 322
Merit: 54
Consensus is Constitution
Where do you get those 100 digit numbers in the first place? Is there some database or some other computer (server) generating those tasks or what? Sorry if that has been already mentioned but I didn't see that.

They are generated using a hashing algorithm like sha-256.  Basically the same way they currently work in bitcoin.  Great question.  Let me know if you want a more in depth answer.
member
Activity: 73
Merit: 10
Where do you get those 100 digit numbers in the first place? Is there some database or some other computer (server) generating those tasks or what? Sorry if that has been already mentioned but I didn't see that.
member
Activity: 210
Merit: 26
High fees = low BTC price
Wow, I don't know how we programmers in the past managed to do anything without proof of work, you know, counting the number of black pixels on hundreds of images just so we can keep wasting electricity and spending money on 1080 GPUs in order to have PoW in our programs.

Where did the need come from? Yes, I remember this blockchain thing, as if we never had databases or spreadsheets before, or weren't in a position to lock records or data.

That Satoshi guy created more problems than he solved in developing digital "people's money", and TPTB are none too happy about BitTorrent, but BT does not waste time and energy counting grains of sand to overcome its problems, does it now.

Tor is another nice tool (before someone got it to spread WannaCry viruses using the client software, not back doors in browsers, which kicked off the boom in the BTC price).

It seems to me that what we wanted and what we have got are nothing like each other, and people like me, or people that are smarter, need to go back to the drawing board if we are left counting grains of sand and distributing a 200GB chain to each and every node that tries to run a system in near real time.

ETH, which allows us to deploy programs (call them contracts, it keeps Wall Street happy) on distributed servers that use a type of gas to cover the running costs, is cool, but look closer: it's not offering the .onion type sites you get on the underground using Tor, and seems to be just offering a very expensive service, if not easy to use, that allows us to rent space to store a few global strings and int32 numbers at a huge cost.

We need to take concepts like Tor, BitTorrent, ETH, Tangle, BAT and what has been learned from BTC, go back to the drawing board, and look at this again from scratch, methinks.

Each node in the network should also offer a VPN or proxy type service and be paid in a type of gas, because the internet is being closed down using censorship, so we need to not only take the money back but also our freedom of speech whilst we are about it.
member
Activity: 322
Merit: 54
Consensus is Constitution
Think about your target audience.  Your number 1 goal is proving this PoW algo is THE FUTURE OF BITCOIN.  This isn't a "lets bench test it in the lab and see how it works"  You are literally suggesting something that would end an entire mining industry, start another, and affect a multi-billion dollar market.  I'd start with "who here is a math minded person that can help me work out the proofs"

Fair enough. But the difference in perspective on how to pitch this comes from a difference in perspective on how to implement it. I am of the mind that we create a Bitcoin-based altcoin with this PoW and test it out, and if it really is the future, then Bitcoin will have to implement it.

Most people want to see something working and succeeding before adopting it. This is the reason people still aren't investing in bitcoin en masse; they want to see if it is still around in 10 years first.
member
Activity: 322
Merit: 54
Consensus is Constitution
You are essentially suggesting you have a solution that could/should be applied to a multi-billion-dollar industry. And you are unwilling to entertain the idea of a mathematical proof. It's likely that you're right, but in this arena, I think it's pretty much a requirement to at least ATTEMPT a rigorous proof. Otherwise your idea will forever remain an idea. Personally I'd love to see the math that goes into such a proof for the sake of learning alone.

That said, it's a very interesting concept.

So would I ;). I would like to apologize to haltingprobability; I have been a bit harsh on him. Some people like seeing proof; I, for one, am someone who will go out and search for myself. But different strokes for different folks.

I am not the kind of meticulous person to work on proofs. I don't have a degree in mathematics or computer science and I tend to get interested in developing concepts and ideas and implementing them. I am more of an entrepreneur and engineer than a mathematician. My brain works like a CPU: I take in lots of information from many sources and synthesize it into a product idea. I don't need proof, just a general understanding of how something works, and if it makes sense and passes the smell test, I go for it.

Here is how I see it. GPUs are like robots. You program them for a specific task and they can beat any human at that specialized task. But let's say something unexpected happens, like the roof caving in. The robots would get stuck; they wouldn't know how to move out of the way and take their work elsewhere to continue. The humans would then beat them because they are more adaptable. CPUs are adaptable, GPUs are not. CPUs are fast and think on their feet whereas GPUs are slow and methodical and share their work well. So it would be very shortsighted to think that GPUs can accelerate any given task and be better than a CPU. They are very different entities. If GPUs are inherently better than CPUs at some tasks (and they certainly are at SHA-256 and Scrypt), then it only makes sense that CPUs would be inherently better at other tasks. These would be tasks where lots of different and unpredictable things are happening. The problem is finding a stable, consistent task that also offers this unpredictability.

For me to attempt to prove that large-number factoring would offer this unpredictability would require that I know much more than I currently do about it. But I take the experts' word that factoring (more specifically, sieving) large numbers is a very intensive process that requires lots of unpredictable memory access and varied tasks. It's about as random a task as a non-random task can be, if that makes any sense. So that is why I feel it is the #1 contender for a CPU-tilted coin. The cool thing is that certain tasks in the sieving are predictable, so the combination of a GPU and a CPU would be best, which just so happens to be the perfect thing if we want to favor PCs. It is also good if we want to bring the GPU miners and CPU miners together to adopt this PoW.
member
Activity: 116
Merit: 101
Think about your target audience.  Your number 1 goal is proving this PoW algo is THE FUTURE OF BITCOIN.  This isn't a "lets bench test it in the lab and see how it works"  You are literally suggesting something that would end an entire mining industry, start another, and affect a multi-billion dollar market.  I'd start with "who here is a math minded person that can help me work out the proofs"
member
Activity: 322
Merit: 54
Consensus is Constitution
My number one priority is to implement this idea but I need help.  I am an inventor, engineer, and scientist but unfortunately I fail when it comes to languages and programming.  That side of my brain is stunted.

What needs to be done is a miner written and the Bitcoin code modified to allow this PoW to be used.

I think at first we should use stock Bitcoin code and get this PoW working in that. From there people can tweak it to make coins with modified block size, reward, maximum emission, block time, etc., tuned to how they like it. The more the merrier, and I know the best way for a coin to scale is for child coins to be made. But if we get it working in otherwise stock code, then that would be a good foundation to build on.

We also need a miner. This may be the hardest part. But there is msieve, a C program that may be a great fit as the foundation of a pretty well-optimized miner: https://sourceforge.net/projects/msieve/

Is there anything else I'm missing?

The PoW is the heart of a coin. I think this new heart could give coins a much longer life. If anyone else is excited about this, please, let's figure out how to get this started.
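
For anyone who wants to start prototyping, here is a rough Python sketch of the cheap half of the job - verifying a claimed solution - under my own assumptions about the rules (the factor length is set by the difficulty and the factor must divide the target); the expensive half is the sieving that finds the factor in the first place:

```python
def verify_factor_pow(target: int, factor: int, required_digits: int) -> bool:
    """Accept a block only if the claimed factor is nontrivial, has the
    difficulty-mandated digit length, and exactly divides the target number."""
    if factor <= 1 or factor >= target:
        return False                      # reject 1, the target itself, and nonsense values
    if len(str(factor)) != required_digits:
        return False                      # the difficulty here is the required factor length
    return target % factor == 0           # a single modulo: verification is cheap

# Toy example: 91 = 7 * 13 and the difficulty asks for a 2-digit factor.
assert verify_factor_pow(91, 13, 2)       # accepted
assert not verify_factor_pow(91, 7, 2)    # a real factor, but the wrong length -> rejected
```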
member
Activity: 116
Merit: 101
You are essentially suggesting you have a solution that could/should be applied to a multi-billion-dollar industry. And you are unwilling to entertain the idea of a mathematical proof. It's likely that you're right, but in this arena, I think it's pretty much a requirement to at least ATTEMPT a rigorous proof. Otherwise your idea will forever remain an idea. Personally I'd love to see the math that goes into such a proof for the sake of learning alone.

That said, it's a very interesting concept.
member
Activity: 322
Merit: 54
Consensus is Constitution
It's really exciting to me that this method could very well be the holy-grail PoW method that we have been looking for. Not only is it inefficient on GPUs (the best you could hope for is GPUs approaching CPUs in speed), but the optimal setup would be one GPU and one CPU working together. Not only does this make GPU farms and standard ASIC farms unproductive, it also makes the server farms and botnets that have plagued CPU-only coins unproductive as well. The optimum setup would be similar to a desktop computer: one CPU and one GPU.

One CPU + One GPU = 1 vote

http://www.mersenneforum.org/showthread.php?t=16687
member
Activity: 322
Merit: 54
Consensus is Constitution
Can you prove an upper bound on parallelization? In other words, can you prove that the advantage of x working units as x->oo goes to zero? Empirical evidence about how hard people have found parallelization of factoring to be is just hand-waving.

I will once you prove that SHA-256 encryption can't be broken without using brute force. Your insistence on drawing this hard line of definitive proof shows that you don't have the wisdom to understand how the world works.

You're shifting the goalposts. Your opening claim was that factorization does not benefit from GPUs (by which, I assume you mean parallelization). I nitpicked by pointing out that there's no reason to believe that factorization can't be sped up by parallelization. Claiming that factorization cannot benefit from parallelization is tantamount to claiming that there is no linear speedup for factorization. If this were true, it could be proved. There is no other way to know that it is the case than to prove it.

Yes, you are nitpicking. Like I said, if you try to prove that brute force is the only way to crack SHA-256 encryption, then I will try to prove that GPUs are inherently weaker than CPUs for the task of GNFS sieving.

I'm sorry to break it to you but mathematical proofs can't be applied to everything you do in life.  Next time you take a shit you better offer exact proof that it smells bad.

People like you nitpick and demand things from people because you want to make an excuse to yourself why you don't want to learn something new.
member
Activity: 98
Merit: 26
Can you prove an upper bound on parallelization? In other words, can you prove that the advantage of x working units as x->oo goes to zero? Empirical evidence about how hard people have found parallelization of factoring to be is just hand-waving.

I will once you prove that SHA-256 encryption can't be broken without using brute force. Your insistence on drawing this hard line of definitive proof shows that you don't have the wisdom to understand how the world works.

You're shifting the goalposts. Your opening claim was that factorization does not benefit from GPUs (by which, I assume you mean parallelization). I nitpicked by pointing out that there's no reason to believe that factorization can't be sped up by parallelization. Claiming that factorization cannot benefit from parallelization is tantamount to claiming that there is no linear speedup for factorization. If this were true, it could be proved. There is no other way to know that it is the case than to prove it.
member
Activity: 322
Merit: 54
Consensus is Constitution
So there would be no block timeout.  Since everyone is working on a different number, if one person happened to be working on a prime and is destined to never find a factor, it doesn't matter because someone else surely was working on a composite number and will find a factor.

If a miner suspects that their number is prime, or does not have many (or any) factors of the required size, they can just update the timestamp in the block header and get a new number to try getting factors from. They even get a new number for changing which transactions they want to include in their block. So, there can be a block timeout of 15 minutes, where if it has been that long without a solution, automatically update the timestamp in the header. But that's probably something that was going to be done anyways, at a much faster interval than 15 minutes. However you still cannot guarantee a block is found in 15 minutes.

Bitcoin testnet guarantees a block is found every hour by making the difficulty reset to 1 on any block whose predecessor is at least an hour older than itself. But that's probably not something we would want to do with a real currency. The low probability of happening to have long times without any blocks is just something we all have to deal with in bitcoin.

That sounds good to me. Yeah, the probability (assuming a correctly set difficulty) of a block even being 5 minutes late is extremely low, given that everyone will be working on different numbers. I wonder if anyone has analyzed the data for bitcoin and plotted the distribution of how long blocks actually take.

This new understanding of the method would mean there is less ability, as a side effect, to classify numbers as possible primes, but at least we could have a list of large composite numbers which can be used to narrow down the search for primes, since even at very large sizes primes are not all that uncommon (for 100-digit numbers, roughly one in every couple of hundred). It isn't much, but it's better than nothing I guess.

Of course the main point of a new algorithm is to resist centralization of mining power, but providing useful work is very attractive too.
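
As a small self-contained Python sketch of the refresh idea quoted above (the header fields and the digit count are placeholder assumptions on my part): because the target number is derived from the block header, bumping the timestamp or changing the transaction set gives the miner a brand-new number, so getting stuck on a prime costs nothing.

```python
import hashlib

def target_from_header(prev_hash: str, merkle_root: str, timestamp: int, digits: int = 102) -> int:
    """Derive a fixed-length decimal target from header fields (two tagged digests
    are concatenated so there are more than `digits` decimal digits to truncate)."""
    material = f"{prev_hash}|{merkle_root}|{timestamp}".encode()
    d = hashlib.sha256(material + b"\x00").hexdigest() + hashlib.sha256(material + b"\x01").hexdigest()
    return int(str(int(d, 16))[:digits])

t0 = target_from_header("00ab...", "9f3c...", 1512000000)
t1 = target_from_header("00ab...", "9f3c...", 1512000900)   # same block, timestamp bumped 15 min
print(t0 != t1)  # True: the miner now has a completely different number to sieve
```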
legendary
Activity: 1386
Merit: 1053
Please do not PM me loan requests!
So there would be no block timeout.  Since everyone is working on a different number, if one person happened to be working on a prime and is destined to never find a factor, it doesn't matter because someone else surely was working on a composite number and will find a factor.

If a miner suspects that their number is prime, or does not have many (or any) factors of the required size, they can just update the timestamp in the block header and get a new number to try getting factors from. They even get a new number for changing which transactions they want to include in their block. So, there can be a block timeout of 15 minutes, where if it has been that long without a solution, automatically update the timestamp in the header. But that's probably something that was going to be done anyways, at a much faster interval than 15 minutes. However you still cannot guarantee a block is found in 15 minutes.

Bitcoin testnet guarantees a block is found every hour by making the difficulty reset to 1 on any block whose predecessor is at least an hour older than itself. But that's probably not something we would want to do with a real currency. The low probability of happening to have long times without any blocks is just something we all have to deal with in bitcoin.
member
Activity: 322
Merit: 54
Consensus is Constitution
When you learn more about factorization you will discover that, for factoring numbers of 100 digits or more, the best way to narrow down the candidates for trial factoring is a process called GNFS sieving. This process simply cannot be done efficiently on graphics cards. Graphics cards can help with the process (step 1), but the longest part is step 2, so CPUs have had an overall advantage. Ideally a GPU would be used in tandem with a CPU... which just so happens to be a GREAT way to block botnets or server farms and give the advantage to personal computers.

Can you prove an upper bound on parallelization? In other words, can you prove that the advantage of x working units as x->oo goes to zero? Empirical evidence about how hard people have found parallelization of factoring to be is just hand-waving.

I will once you prove that SHA-256 encryption can't be broken without using brute force. Your insistence on drawing this hard line of definitive proof shows that you don't have the wisdom to understand how the world works.
member
Activity: 98
Merit: 26
When you learn more about factorization you will discover that, for factoring numbers of 100 digits or more, the best way to narrow down the candidates for trial factoring is a process called GNFS sieving. This process simply cannot be done efficiently on graphics cards. Graphics cards can help with the process (step 1), but the longest part is step 2, so CPUs have had an overall advantage. Ideally a GPU would be used in tandem with a CPU... which just so happens to be a GREAT way to block botnets or server farms and give the advantage to personal computers.

Can you prove an upper bound on parallelization? In other words, can you prove that the advantage of x working units as x->oo goes to zero? Empirical evidence about how hard people have found parallelization of factoring to be is just hand-waving.
member
Activity: 322
Merit: 54
Consensus is Constitution
Here is a whitepaper about how to define a new proof-of-work system that theoretically gives preference to CPUs forever and is immune to the vulnerabilities that encryption algorithms are prone to.
(The link is to the Wayback Machine so you can trust visiting it.)

https://web.archive.org/web/20171212224738/http://www.naturehackerproducts.com/2017/12/new-proof-of-work-pow-method-based-on.html?m=1

The take-home message is to find a single factor (of a length defined by the difficulty) of a very large number (over 100 digits).

Since it is such a large number, GPUs become very slow at attempting it.

I have a quibble with your "GPU-resistant" claims. We don't know that factorization cannot be efficiently randomized. If it can, it can also be trivially parallelized by randomly dividing the search space and sending each range to a different processing unit. In short, you can't claim a lower bound on parallelization without formal proof. Even if no one knows how to parallelize factorization (is this the case?), it doesn't prove that factorization can't be parallelized.

IMO, the effort expended on trying to democratize mining is wasted. Centralization of mining will always occur. Even factors like bandwidth and network latency are favored by centralized mining. Differences in electricity costs and so on only amplify this effect.

But there is no reason that the effort spent solving PoW puzzles has to be useless. Any problem in NP (including factoring) is a potential candidate and I recommend Circuit-SAT as an ideal candidate for PoW puzzles. Using this approach has the benefit that any real-world problem can be encoded as a circuit and then submitted for solution by the mining network.

When you learn more about factorization you will discover that, for factoring numbers of 100 digits or more, the best way to narrow down the candidates for trial factoring is a process called GNFS sieving. This process simply cannot be done efficiently on graphics cards. Graphics cards can help with the process (step 1), but the longest part is step 2, so CPUs have had an overall advantage. Ideally a GPU would be used in tandem with a CPU... which just so happens to be a GREAT way to block botnets or server farms and give the advantage to personal computers.

This is a well-known state of affairs, as these factorizations have been run for decades and people have been working for years on making CUDA variants of the software that does them.

To give you a counterexample to your proposal that my "GPU-resistant" claim needs to be definitively proven: SHA-256 has never been definitively proven to be uncrackable without brute force, yet we still use it. Experience has taught us that it is secure, and experience has shown us that factoring very large numbers cannot be efficiently randomized.

Mining will always tend towards centralization, but if we can strongly disincentivize GPUs, ASICs, server farms, and botnets, then it will stay decentralized for longer. I never claimed an infinite utopia; the Constitution in America only lasted 200 years before it broke down, and if this next coin can last 100 years before centralizing, I will consider it a success.