Topic: Help me out with variance in parallel, block size, etc
Similar to the pool-hopping debates, I've been wondering about issuing work requests in parallel versus processing them in a more serial sequence.

Using simple numbers, let's say I have one box (be it FPGA, desktop CPU, or GPU) that mines at 1 MH/s.  If I magically increased its speed to 5 MH/s, I would still be doing work requests for individual blocks and processing that work in a serial manner (that is, on a single processing source with no threads/timeslicing/multicore/multiprocessor).

If I instead had 5 × 1 MH/s boxes, I would be making five times the number of work requests in parallel.

Setting aside power and other factors... if I hear someone try to proselytize about kWh costs one more time I'm seriously going to have a breakdown... what do you think the variance of these two methods would be?

In small experiments (100 MH/s) over two months, I seem to have received more payouts with parallel computation than with a single source of equal computation.  Given that it could be pure coincidence, I was wondering if there is a justifiable explanation as to why one would be better than the other (kWh nonsense aside).
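For what it's worth, the intuition can be checked with a quick simulation. Each hash attempt is an independent trial, so the number of shares found in a given time depends only on the total hashrate; one 5 MH/s box and five 1 MH/s boxes should show the same mean and roughly the same variance in payouts. This is just my own sketch, and the numbers (`HASHES_PER_ROUND`, `P_SHARE`) are made-up scale assumptions, not real difficulty figures:

```python
import random
import statistics

random.seed(42)

HASHES_PER_ROUND = 1000  # hashes one 1 MH/s box attempts per round (made-up scale)
P_SHARE = 0.005          # chance any single hash meets the share target (made up)
ROUNDS = 300

def shares(num_hashes: int) -> int:
    """Count successful hashes: each hash is an independent Bernoulli trial."""
    return sum(1 for _ in range(num_hashes) if random.random() < P_SHARE)

# One "serial" 5 MH/s box: 5x the hashes per round from a single source.
serial = [shares(5 * HASHES_PER_ROUND) for _ in range(ROUNDS)]

# Five "parallel" 1 MH/s boxes: each does 1x the hashes, results summed.
parallel = [sum(shares(HASHES_PER_ROUND) for _ in range(5)) for _ in range(ROUNDS)]

print("serial   mean/var:", statistics.mean(serial), statistics.variance(serial))
print("parallel mean/var:", statistics.mean(parallel), statistics.variance(parallel))
```

Both configurations come out with a mean near 5 × 1000 × 0.005 = 25 shares per round and a similar variance, which suggests any difference you observed in the real experiment is either coincidence or comes from something the model ignores (getwork latency, stale shares, pool round boundaries), not from the hashing itself.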

There is the obvious benefit to the health of the Bitcoin network if the parallel sources are not co-located.  That is, if I have 100 MH/s in my bedroom, it's much better for the network if it were instead 100 × 1 MH/s nodes scattered across datacenters.  Consider me more of a philanthropist than a redbull-fueled-overclocking-profit-rabid-miner.