
Topic: Data limits becoming an issue......any thoughts? - page 2. (Read 4403 times)

legendary
Activity: 2044
Merit: 1000
This might seem pretty basic, but do both your pool and your miner support rollNtime? It drastically reduces the number of getworks you're requesting, especially at 17 GH/s.

I use CGminer in conjunction with GPUmax to manage my farm.....
legendary
Activity: 952
Merit: 1000
This might seem pretty basic, but do both your pool and your miner support rollNtime? It drastically reduces the number of getworks you're requesting, especially at 17 GH/s.
sr. member
Activity: 336
Merit: 250
Are there any major pools accepting higher-difficulty shares?

I have to use Verizon because the location is not served by normal broadband providers.
Eclipse has a higher-difficulty pool. There's also HHTT, which isn't exactly major but is nice and simple: 2% fee PPS with difficulty-32 shares:
https://bitcointalksearch.org/topic/500-ghshhtt-selected-diffstratumpplnspaid-staleshigh-availabilitytor-95378
sr. member
Activity: 336
Merit: 250
My monthly traffic is usually 2.5-3 TB. 10 GB in 20 days? Nowhere near enough for me.
Have you ever thought about downloading porn that isn't 1080p?  Tongue
legendary
Activity: 2044
Merit: 1000
Are there any major pools accepting higher-difficulty shares?

I have to use Verizon because the location is not served by normal broadband providers.
legendary
Activity: 1862
Merit: 1011
Reverse engineer from time to time
My monthly traffic is usually 2.5-3 TB. 10 GB in 20 days? Nowhere near enough for me.
donator
Activity: 1055
Merit: 1020
I had the same problem at one location where I have 14 GH/s.  I upgraded to a DSL line for $29 a month.

If that is not an option, can you add another 4G modem to your account with another 10 GB and just split up the miners?

legendary
Activity: 1036
Merit: 1002
Why are you paying mobile-phone traffic prices? Isn't there a decent connection available?

I once threw 1.2 TB over my home connection in one month, just because. And no, I did NOT pay 12k USD. Less than 100, actually. And I still live in the broadband-middle-ages. FTTH deployment is spreading... expect times when the network beats the RAID. Smiley

Anyway, that's a seriously bad connection for business stuff. Can't you do something about that rather than trying to shave off a few GB of traffic? You'll be poor if someone uses YouTube from there!
donator
Activity: 1218
Merit: 1079
Gerald Davis
Some options:
a) Solo mining.
Enough said.  All the advantages and disadvantages which come with it.

b) Higher difficulty shares.
Using a pool which supports higher-difficulty shares will reduce outgoing bandwidth.  You may also want to encourage other pools to adopt higher-difficulty shares.  For example, on a pool using difficulty-32 shares you will find (and thus transmit) 1/32 as many shares; compensation is the same, as each share is "worth" 32x as much.  Pools could support dynamic difficulty, since optimal share difficulty varies by hashing power.  20 GH/s at difficulty 1 will find (and need to report) 12,069,941 shares every month.  At ~0.5 KB per share that is ~6 GB per month.  Note that is just outbound data and assumes optimal conditions.  Using a difficulty of 100 would reduce it by 99%.  Inbound bandwidth remains the same at 1 (or more) getworks per physical processor (CPU core, GPU, FPGA chip, etc.) per block.
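The arithmetic above can be checked with a quick script. This is just a sketch of the post's own numbers; the 0.5 KB-per-share size and 30-day month are its assumptions, and 2^32 is the expected hash count per difficulty-1 share:

```python
HASHES_PER_SHARE = 2 ** 32  # expected hashes to find one difficulty-1 share


def monthly_shares(hashrate_hps, difficulty, days=30):
    """Expected shares found per month at a given hashrate and share difficulty."""
    seconds = days * 24 * 3600
    return hashrate_hps * seconds / (HASHES_PER_SHARE * difficulty)


def monthly_outbound_gb(hashrate_hps, difficulty, kb_per_share=0.5, days=30):
    """Approximate outbound traffic from submitting those shares."""
    return monthly_shares(hashrate_hps, difficulty, days) * kb_per_share / 1024 ** 2


shares_d1 = monthly_shares(20e9, 1)       # ~12.07 million shares at 20 GH/s
gb_d1 = monthly_outbound_gb(20e9, 1)      # ~5.8 GB outbound per month
gb_d100 = monthly_outbound_gb(20e9, 100)  # 1/100 of that, a 99% reduction
```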

c) Getwork splitting.
This may require more work than it is worth.  Higher-difficulty shares improve outbound bandwidth, but even with longpolling and ntime rolling you need at least 1 getwork per physical processor per block.  So a farm with 50 GPUs requests 50 block headers on every longpoll, plus one every time a share is found.  In theory a single getwork could be split across multiple GPUs/FPGAs (by giving each of them part of the nonce range).  There is still an upper limit of roughly 1 getwork per 4 GH/s per block.  That limit exists because more than 4 GH/s needs more than 1 getwork per second, and the timestamp has a precision limit of one second.
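Splitting one getwork's nonce range across devices could look like the hypothetical helper below. The 2^32 figure is the size of the block header's 32-bit nonce field, which is also where the ~4.3 GH/s-per-getwork ceiling comes from:

```python
NONCE_SPACE = 2 ** 32  # the block header's nonce is a 32-bit field


def split_nonce_range(num_devices):
    """Divide one getwork's nonce space into contiguous, disjoint sub-ranges."""
    step = NONCE_SPACE // num_devices
    ranges = [(i * step, (i + 1) * step) for i in range(num_devices)]
    # hand any remainder from the integer division to the last device
    ranges[-1] = (ranges[-1][0], NONCE_SPACE)
    return ranges


# one getwork shared by 50 GPUs instead of 50 getworks per longpoll
ranges = split_nonce_range(50)

# with 1-second timestamp precision, one getwork yields at most
# NONCE_SPACE hashes per rolled second, i.e. about 4.29 GH/s
max_hashrate_per_getwork = NONCE_SPACE / 1.0
```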

d) Local work.
Local work is likely the long-term solution given the rise of TH/s ASICs.  It would operate similarly to p2pool but still use a central "traditional" pool server.  Essentially solo mining with shared rewards/variance.   There is no real requirement that the server generate the block headers for all miners and distribute them.  The only thing the server needs to provide (and verify on submitted hashes) is the coinbase address, to ensure the pool (and all miners) are equitably paid.  It would require new mining software, but a miner could construct the entire block header locally as needed.  In theory this would reduce inbound bandwidth to a negligible amount.   Outbound bandwidth per share increases, as the miner needs to submit the merkle tree to the pool for verification.  Still, by using higher-difficulty shares the overall outbound bandwidth can be reduced.  Say you use difficulty-100 shares: even if each share requires 20x as much bandwidth, that is still an 80% reduction in overall outbound bandwidth, and incoming bandwidth is reduced to a negligible (rounding-error) amount.
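The merkle tree a miner would submit under option (d) uses Bitcoin's standard construction: pair up transaction hashes, double-SHA256 each pair (duplicating the last hash when a level has an odd count), and repeat until a single root remains. A minimal sketch:

```python
import hashlib


def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()


def merkle_root(txids: list) -> bytes:
    """Compute the merkle root the miner would submit for verification."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:              # odd count: duplicate the last hash
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

The pool would then only need to check that the coinbase transaction inside the submitted tree pays the pool's address.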


c & d require real work; however, there is no reason pools couldn't support difficulty >1 today.
sr. member
Activity: 336
Merit: 250
Where are you pointing all that hashrate? The new mining proxy on slush's pool would probably use less bandwidth, not sure if that's an option?
legendary
Activity: 2044
Merit: 1000
I hard-code pool IP addresses into the computers' hosts files to cut down on DNS lookups. Not sure how much of your traffic they actually make up, though, and I've done this more to keep the router connections down than anything else.

DNS lookups are cached locally on each system by most OSes, so this should produce very little data usage. My guess is that DNS lookups account for well under 100 MB per 10 GB; you're talking about converting "www.greenbtc.com" to "124.0.0.1", which involves very little data.


A real way to save data is to increase your share difficulty: the higher your difficulty, the fewer shares you will produce and the less data you will transfer.
Say your systems are working on difficulty-1 shares and you switch to difficulty-4 shares; this would cut the shares you produce by ~75% and save data.
You would just get paid more for each higher-difficulty share.

but both your pool and your mining software have to support this

Are you using a 4G, 3G, or WiMAX-based broadband modem?


It is a Cradlepoint router with a USB 4G modem plugged into it.
hero member
Activity: 826
Merit: 500
I hard-code pool IP addresses into the computers' hosts files to cut down on DNS lookups. Not sure how much of your traffic they actually make up, though, and I've done this more to keep the router connections down than anything else.

DNS lookups are cached locally on each system by most OSes, so this should produce very little data usage. My guess is that DNS lookups account for well under 100 MB per 10 GB; you're talking about converting "www.greenbtc.com" to "124.0.0.1", which involves very little data.


A real way to save data is to increase your share difficulty: the higher your difficulty, the fewer shares you will produce and the less data you will transfer.
Say your systems are working on difficulty-1 shares and you switch to difficulty-4 shares; this would cut the shares you produce by ~75% and save data.
You would just get paid more for each higher-difficulty share.

but both your pool and your mining software have to support this
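The ~75% figure follows directly, since expected share count (and thus share traffic) scales as 1/difficulty. A quick check:

```python
def share_reduction(old_difficulty, new_difficulty):
    """Fraction of shares (and share-submission traffic) saved by raising difficulty."""
    return 1 - old_difficulty / new_difficulty


saving = share_reduction(1, 4)  # 0.75, i.e. 75% fewer shares submitted
```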

Are you using a 4G, 3G, or WiMAX-based broadband modem?
vip
Activity: 1358
Merit: 1000
AKA: gigavps
I hard-code pool IP addresses into the computers' hosts files to cut down on DNS lookups. Not sure how much of your traffic they actually make up, though, and I've done this more to keep the router connections down than anything else.
legendary
Activity: 1372
Merit: 1007
1davout
10 GB isn't that much; your plan looks quite smallish Sad
legendary
Activity: 2044
Merit: 1000
Hey all,

One of my three locations uses Verizon broadband for internet access, and it works great.  Unfortunately, I am now consistently going over the 10 GB data limit two-thirds of the way through the billing cycle and am charged 10 bucks per GB of overage. 

The facility hosts approximately 16-17 GH/s.

Is it normal to burn through 10 GB of data in only 20 days with this type of hashing power?  Is there any way to reduce the data usage?  Or am I just F-ed?

thanks! 