
Topic: How to Avoid DDoS and Other Downtime

sr. member
Activity: 378
Merit: 250
June 20, 2011, 06:22:14 AM
#30
Maybe it's just my setup, but I've tried using Phoenix (both with and without phatk) on multiple cards, including 5850's, 5830's, and a 6950, and I can never get the same hashing rates I get from just using Poclbm (with GUIMiner on the front end).
hero member
Activity: 575
Merit: 500
The North Remembers
June 20, 2011, 06:02:01 AM
#29
Phoenix Rising makes it really easy to set up a backup pool. Set your timeout options and it will automatically switch to a backup server or even restart phoenix.exe.
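For anyone who'd rather script that behavior by hand, here's a minimal watchdog sketch in Python that does roughly the same thing. Everything in it is an assumption for illustration: the phoenix.py invocation, the pool URLs, and the restart policy are placeholders, not Phoenix Rising's actual internals.

Code:
import itertools
import subprocess
import time

# Placeholder pool URLs -- substitute your own primary/backup servers.
POOLS = [
    "http://user:pass@primary-pool.example:8332/",
    "http://user:pass@backup-pool.example:8332/",
]

# Rotate through the pools forever: whenever the miner process exits
# (crash, pool down, connection lost), fail over to the next pool.
for url in itertools.cycle(POOLS):
    # Hypothetical invocation; adjust to your miner's actual flags.
    miner = subprocess.Popen(["python", "phoenix.py", "-u", url])
    miner.wait()  # blocks until the miner dies for any reason
    print("miner on %s exited; failing over in 5 seconds" % url)
    time.sleep(5)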
legendary
Activity: 1855
Merit: 1016
June 20, 2011, 05:37:47 AM
#28
The -f flag is not the same as AGGRESSION.
If you set -f 120, you will see NO mining until the main server goes down; then the miner starts up, and the hash rate will only be some 10-20 Mhash/s lower.

But if you try it with AGGRESSION, that's completely different.
AGGRESSION=6 will make a 5870 mine at 230 Mhash/s,
while AGGRESSION=12 will make it mine at 430 Mhash/s.


The -f flag only works if you use poclbm, and phoenix with phatk gives a higher hash rate than poclbm.
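To make the distinction concrete, here's a hedged Python sketch of the two styles of invocation. Paths, URLs, and exact flags are illustrative and may differ between miner versions:

Code:
import subprocess

POOL = "http://user:pass@pool.example:8332/"  # placeholder

# poclbm-style: -f is a "frames" value, and HIGHER means LESS mining.
# -f 1 mines flat out; -f 120 yields the GPU almost entirely, so you
# see next to no mining until nothing else wants the card.
poclbm_cmd = ["python", "poclbm.py", POOL, "-d", "0", "-f", "120"]

# phoenix-style: AGGRESSION works the other way, HIGHER means MORE mining.
# Per the post above, AGGRESSION=6 gets a 5870 to ~230 Mhash/s and
# AGGRESSION=12 to ~430 Mhash/s.
phoenix_cmd = ["python", "phoenix.py", "-u", POOL,
               "-k", "phatk", "DEVICE=0", "VECTORS", "AGGRESSION=12"]

# Launch whichever miner you prefer:
subprocess.Popen(phoenix_cmd).wait()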
legendary
Activity: 1708
Merit: 1020
June 20, 2011, 03:58:27 AM
#27
When people brag about their hash rates, I wonder if they just sum up their multiple miners' hash rates or if they crank aggression all the way up to 16...
legendary
Activity: 1762
Merit: 1011
June 19, 2011, 08:06:40 PM
#26
I had a feeling that I would get more stale shares using more than one miner per GPU. Also one should only trust the hashrate measured at the pool(s).

I run 2 Phoenix clients per GPU (I have 3 x HD5870) and I run them both at the same Aggression. This splits the work between them evenly until one of the pools has a hiccup. My total hash rate is slightly higher than running one instance of Phoenix per GPU.

Why would you get more stale shares using more than one miner per GPU?

The hash rate at the pool is an estimate, so it's nowhere near as accurate as the client-reported hash rate.
I did the same thing happily for a while, but now I am not so sure any more that it is optimal.

Theoretically you get more stales because your workers are slower compared to the other workers in the pool. I get lots of stales but there might be some other reason I have yet to find.

---edit---
For my system the gain in hashrate using two miners is three to five times larger than the loss from stales. I will keep running two miners per GPU.

This is what I do.  The only thing I've noticed is that disconnection problems arise when I have two computers each running two instances (four connections total) through the same mobile broadband connection. I don't know if it is a port issue with the router or what exactly, but it is something to be aware of.
legendary
Activity: 1762
Merit: 1011
June 19, 2011, 08:01:43 PM
#25
I had a feeling that I would get more stale shares using more than one miner per GPU. Also one should only trust the hashrate measured at the pool(s).

I've actually seen the pools' stated hash rates come in a couple hundred megahash HIGHER than what I should be getting from my card model, so I don't know that you can trust the pools to be accurate, either.
member
Activity: 111
Merit: 10
★Trash&Burn [TBC/TXB]★
June 17, 2011, 01:17:55 PM
#24
I've noticed lately that many users have our pool at BitClockers.com set as a fail-over for the larger pools. We jumped to over 250 GHash during the Deepbit/Slush DDoS. Why not just mine with us 24/7? :P

I guess we'll take what we can get until people figure out that it's not advantageous to run with a giant pool. Any pool that can manage 2 blocks a day has no real drawbacks from variance. Mining isn't a race; a block found somewhere doesn't "reset" anything. Any hash has the chance to be a block. Mine with pools that care, have stability, and a rocking community.
legendary
Activity: 1708
Merit: 1020
June 16, 2011, 04:27:06 PM
#23
I had a feeling that I would get more stale shares using more than one miner per GPU. Also one should only trust the hashrate measured at the pool(s).

I run 2 Phoenix clients per GPU (I have 3 x HD5870) and I run them both at the same Aggression. This splits the work between them evenly until one of the pools has a hiccup. My total hash rate is slightly higher than running one instance of Phoenix per GPU.

Why would you get more stale shares using more than one miner per GPU?

The hash rate at the pool is an estimate, so it's nowhere near as accurate as the client-reported hash rate.
I did the same thing happily for a while, but now I am not so sure any more that it is optimal.

Theoretically you get more stales because your workers are slower compared to the other workers in the pool. I get lots of stales but there might be some other reason I have yet to find.

---edit---
For my system the gain in hashrate using two miners is three to five times larger than the loss from stales. I will keep running two miners per GPU.
full member
Activity: 120
Merit: 100
June 16, 2011, 04:43:34 AM
#22
I had a feeling that I would get more stale shares using more than one miner per GPU. Also one should only trust the hashrate measured at the pool(s).

I run 2 Phoenix clients per GPU (I have 3 x HD5870) and I run them both at the same Aggression. This splits the work between them evenly until one of the pools has a hiccup. My total hash rate is slightly higher than running one instance of Phoenix per GPU.

Why would you get more stale shares using more than one miner per GPU?

The hash rate at the pool is an estimate, so it's nowhere near as accurate as the client-reported hash rate.
legendary
Activity: 1708
Merit: 1020
June 16, 2011, 02:32:17 AM
#21
I had a feeling that I would get more stale shares using more than one miner per GPU. Also one should only trust the hashrate measured at the pool(s).
full member
Activity: 238
Merit: 100
June 13, 2011, 10:39:47 PM
#20
Please see my post here: http://forum.bitcoin.org/index.php?topic=16548.0

I'm working on an automated solution to this exact thing and will need testers soon!
full member
Activity: 227
Merit: 100
June 13, 2011, 10:30:30 PM
#19
The hash rate may not be as good with the aggression method, but it's a trade-off between bigger pools with more frequent payouts and mining on smaller pools.

sr. member
Activity: 336
Merit: 250
yung lean
June 13, 2011, 06:59:49 PM
#18
The proxy is still a single point of failure. It just moves it from the pool to the proxy server.

How many threads have you seen about people's proxies going down?  How many for pools being DDOSed?

The proxy is much more reliable than the frequently crashing/DDOSed pools.

No point arguing, besides...  I've seen the light.

This guy is completely right.  I can't believe how silly I'm being.  I mean... I use Linux and several boxes for everything, but all of that is "single points of failure."

What if someone hax0rs my kernels?  I should totally start mixing Windows and MacOS boxes into my miner pool to avoid that single point of failure.  While I'm at it, I think I'm going to order a couple of T1s that are provided by several different service providers.  And ATI cards - crap, the drivers are a single point of failure.  I'm going to have to make large NVIDIA farms just to be redundant for my boxes!

And my name servers.  I only have 2 and they're both from the same provider!  Oh shiii-!  What if that goes down?  I'm going to need to convince all of the pool operators to join an MPLS cloud.

Wait... the MPLS requires internet to be functioning....  F#@#!  I'm going to have to call Al Gore and see if we can reinvent the internet as an alternate path to my pools!


Well, I don't have two T1's, but I do have a failover connection :P I guess I just take my money-making more seriously than most.
sr. member
Activity: 313
Merit: 250
June 13, 2011, 06:27:55 PM
#17
Hello,


Don't like the Flexible Proxy Project?

I haven't used it, but it seems a bit more elegant than running 99 miners on 99 pools to dodge downtime....

I use a C++ proxy that I should eventually release.


Proxies are definitely a viable alternative, but might be a bit complicated to set up for some folks.  I run 20 different miners across 6 machines and it hasn't been a huge burden.  Once the secondary miners are set up you don't have to do anything, so it's an extra couple of minutes to get them going, but then you're set.

Some folks need to get on the ball, then! ;)

There's also a modified poclbm that has a fallback option floating around somewhere.

It's good to hear you are having luck with your improvised system, but there is clearly a need for a better solution.


I guess you mean this poclbm with backup options -> https://github.com/kylegibson/poclbm
It seems to do what it should, but I don't like poclbm; I get a lot of "miner is idle" messages with it :(

If you could release your C++ proxy, that would be great. I assume it doesn't need MySQL?
Pretty sure people would be willing to donate something to you for that.
I was thinking about this flexible miner proxy, but maybe it's not such a great idea to
run a MySQL server on a USB stick :D
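For what it's worth, a failover proxy doesn't need MySQL at all. Below is a minimal Python sketch of the idea (not the C++ proxy mentioned above, and not the Flexible Proxy Project): a local HTTP endpoint that forwards each getwork JSON-RPC request to the first pool that answers. Pool URLs are placeholders, and real pools also require HTTP basic auth, which is omitted here for brevity.

Code:
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

# Placeholder upstream pools, tried in order on every request.
POOLS = [
    "http://primary-pool.example:8332/",
    "http://backup-pool.example:8332/",
]

class FailoverProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        for pool in POOLS:  # first pool that answers wins
            try:
                req = urllib.request.Request(
                    pool, data=body,
                    headers={"Content-Type": "application/json"})
                with urllib.request.urlopen(req, timeout=10) as resp:
                    answer = resp.read()
            except OSError:
                continue  # pool unreachable; try the next one
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(answer)
            return
        self.send_error(502, "all pools unreachable")

if __name__ == "__main__":
    # Point every miner at http://localhost:8330/ instead of a pool.
    HTTPServer(("localhost", 8330), FailoverProxy).serve_forever()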
copper member
Activity: 56
Merit: 0
June 13, 2011, 06:12:15 PM
#16
The proxy is still a single point of failure. It just moves it from the pool to the proxy server.

How many threads have you seen about people's proxies going down?  How many for pools being DDOSed?

The proxy is much more reliable than the frequently crashing/DDOSed pools.

No point arguing, besides...  I've seen the light.

This guy is completely right.  I can't believe how silly I'm being.  I mean... I use Linux and several boxes for everything, but all of that is "single points of failure."

What if someone hax0rs my kernels?  I should totally start mixing Windows and MacOS boxes into my miner pool to avoid that single point of failure.  While I'm at it, I think I'm going to order a couple of T1s that are provided by several different service providers.  And ATI cards - crap, the drivers are a single point of failure.  I'm going to have to make large NVIDIA farms just to be redundant for my boxes!

And my name servers.  I only have 2 and they're both from the same provider!  Oh shiii-!  What if that goes down?  I'm going to need to convince all of the pool operators to join an MPLS cloud.

Wait... the MPLS requires internet to be functioning....  F#@#!  I'm going to have to call Al Gore and see if we can reinvent the internet as an alternate path to my pools!
full member
Activity: 154
Merit: 100
June 13, 2011, 05:14:06 PM
#15
The proxy is still a single point of failure. It just moves it from the pool to the proxy server.

How many threads have you seen about people's proxies going down?  How many for pools being DDOSed?

The proxy is much more reliable than the frequently crashing/DDOSed pools.
sr. member
Activity: 336
Merit: 250
yung lean
June 13, 2011, 04:39:42 PM
#14
The proxy is still a single point of failure. It just moves it from the pool to the proxy server.
copper member
Activity: 56
Merit: 0
June 13, 2011, 04:35:46 PM
#13

Don't like the Flexible Proxy Project?

I haven't used it, but it seems a bit more elegant than running 99 miners on 99 pools to dodge downtime....

I use a C++ proxy that I should eventually release.


Proxies are definitely a viable alternative, but might be a bit complicated to set up for some folks.  I run 20 different miners across 6 machines and it hasn't been a huge burden.  Once the secondary miners are set up you don't have to do anything, so it's an extra couple of minutes to get them going, but then you're set.

Some folks need to get on the ball, then! ;)

There's also a modified poclbm that has a fallback option floating around somewhere.

It's good to hear you are having luck with your improvised system, but there is clearly a need for a better solution.
sr. member
Activity: 378
Merit: 250
June 13, 2011, 04:29:03 PM
#12

Don't like the Flexible Proxy Project?

I haven't used it, but it seems a bit more elegant than running 99 miners on 99 pools to dodge downtime....

I use a C++ proxy that I should eventually release.


Proxies are definitely a viable alternative, but might be a bit complicated to set up for some folks.  I run 20 different miners across 6 machines and it hasn't been a huge burden.  Once the secondary miners are set up you don't have to do anything, so it's an extra couple of minutes to get them going, but then you're set.

I go from about 330 Mhash a core to 310 Mhash. Unless my miners are down for 2 hours a day, I'm losing money by doing this. I agree that this isn't really much of a solution. They should build failover control into GUIMiner or something.

I agree on the built-in failover control; that would be the best option of all.  I'm not sure I follow you on losing 20 MH/sec, though.  Are you losing that from your primary miner when the secondary is running?  Or is that the secondary going full-out when the primary is down?

I tried this with phoenix and the phatk kernel. I gave the primary miner an aggression value of 11 and the secondary miner an aggression of 1. This worked fine until I took down the primary miner. The MH/s on the secondary miner only jumped from ~1 MH/s to ~15 MH/s. It should have jumped to ~300 MH/s.

I can set the aggression the same on both the primary and secondary, and then they both run at ~150 MH/s. I don't want to do that, though, as it makes my rigs a bit less stable.

I gave up on using phoenix/phatk because I was actually getting better numbers using poclbm no matter what I tried.  Try tweaking the secondary miner a bit, though, and run it with a higher number.  You might find a balance that works.
member
Activity: 87
Merit: 10
June 13, 2011, 04:22:58 PM
#11
I tried this with phoenix and the phatk kernel. I gave the primary miner an aggression value of 11 and the secondary miner an aggression of 1. This worked fine until I took down the primary miner. The MH/s on the secondary miner only jumped from ~1 MH/s to ~15 MH/s. It should have jumped to ~300 MH/s.

I can set the aggression the same on both the primary and secondary, and then they both run at ~150 MH/s. I don't want to do that, though, as it makes my rigs a bit less stable.
sr. member
Activity: 336
Merit: 250
yung lean
June 13, 2011, 04:22:05 PM
#10
I go from about 330 Mhash a core to 310 Mhash. Unless my miners are down for 2 hours a day, I'm losing money by doing this. I agree that this isn't really much of a solution. They should build failover control into GUIMiner or something.
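As a quick sanity check on that break-even claim (assuming the 330/310 Mhash figures quoted above), the arithmetic works out to roughly 1.5 hours a day:

Code:
solo = 330.0  # Mhash/s with one miner per core
dual = 310.0  # Mhash/s with the failover pair running

# The single-miner setup wins only while solo * (1 - downtime) > dual,
# so the break-even downtime fraction is 1 - dual/solo.
downtime = 1 - dual / solo
print("break-even downtime: %.1f hours/day" % (downtime * 24))
# ~1.5 hours/day -- a bit less than the "2 hours a day" figure above,
# so the failover setup pays off slightly sooner than stated.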
copper member
Activity: 56
Merit: 0
June 13, 2011, 04:17:24 PM
#9

Don't like the Flexible Proxy Project?

I haven't used it, but it seems a bit more elegant than running 99 miners on 99 pools to dodge downtime....

I use a C++ proxy that I should eventually release.
sr. member
Activity: 378
Merit: 250
June 13, 2011, 04:11:43 PM
#8
When I do this it seems I lose about 10% total hash rate. That's the only reason I don't.

Yeah, you'll lose a bit because the secondary miner is set with a lower aggression rate, but with the -f 1 / -f 15 setup I usually don't lose more than a few MH/sec.  My 6950 drops from ~369 to ~363.  On another machine with 5830's, I drop from ~299 MH/sec to ~295 MH/sec when 'failed over' to the secondary.

I'd much rather lose a few percentage points on my secondary miner when it ramps up than lose 100% of my mining when my primary is down.

You could set both the primary and secondary miners to, say, -f 1, and it'd split the mining equally between the two of them and still give full power to one of them if the other goes down.  Of course, then you're splitting your mining, so it's a judgement call.

Experiment with two miners and different aggression rates; you can start/stop a miner to simulate one of them going down and compare what you'd get at different settings.
member
Activity: 98
Merit: 10
Testing
June 13, 2011, 04:07:37 PM
#7
When I do this it seems I lose about 10% total hash rate. That's the only reason I don't.

I've lost quite a bit of hash time due to downtime.... If I can tweak this enough to avoid downtime altogether, I think I will.
sr. member
Activity: 336
Merit: 250
yung lean
June 13, 2011, 04:06:17 PM
#6
When I do this it seems I lose about 10% total hash rate. That's the only reason I don't.
sr. member
Activity: 378
Merit: 250
June 13, 2011, 03:58:10 PM
#5
Ah, no worries.  Happy to do it.  I'd bet if everyone set up secondary miners that mined solo, DDoS attacks would probably ease up because there'd be no drop in total network power even if a pool goes down.
member
Activity: 98
Merit: 10
Testing
June 13, 2011, 03:56:11 PM
#4
No need to launch GUIMiner twice.  Sorry if I didn't clarify.

In the same GUIMiner window, just hit "File -> New OpenCL Miner" and create a secondary miner.  You can then tell that miner to use the same GPU/CPU, point it to a different pool/local server, and give it the higher -f flag.  Start both up and you're good to go...

Ahhhhh, brilliant! Thank you! I'll try to remember to tip you on my next pool cash-out. This is really awesome advice, man, thank you!
sr. member
Activity: 378
Merit: 250
June 13, 2011, 03:55:20 PM
#3
No need to launch GUIMiner twice.  Sorry if I didn't clarify.

In the same GUIMiner window, just hit "File -> New OpenCL Miner" and create a secondary miner.  You can then set that miner to use the same GPU/CPU, point it to a different pool/local server, and give it the higher -f flag.  Start both up and you're good to go...
member
Activity: 98
Merit: 10
Testing
June 13, 2011, 03:52:15 PM
#2
This is very, very helpful and I am going to try this ASAP.

Thank you :)


Do I need to make two different directories, or can I just launch the same executable twice? (Referring to GUIMiner: it saves its config in the %appdata% directory, so if I edit one GUIMiner, they all get affected for now.)

sr. member
Activity: 378
Merit: 250
June 13, 2011, 03:35:56 PM
#1
I'm seeing more and more posts from people freaking out that their favorite pool X got hit with a denial-of-service attack or was down and they lost hours of mining.  There's not a whole lot you can do about the DoS attacks, and as some pools get larger, there will be people who attempt to knock the pool's hashing power offline to give themselves a better shot at finding blocks.  Yeah, it sucks, but that's the way it is.

While the pool operators are doing their best to keep their pools up and running, there's a simple thing you can do to stop your clients from losing any mining time when the pools get attacked: set up two "miners" for every GPU/CPU.

All you need to do is set up and run a secondary miner for each GPU/CPU with a lower "aggression" and point it either to another pool (preferably a smaller one that won't be a target) or to a local bitcoin server so you can solo mine when your primary pool is offline.

For example, I use GUIMiner/Poclbm for most of my mining, and I set up two miners for each of my GPU's.  On one of my GPU's, a 6950, I have one miner pointing to the primary pool I mine for and another miner, using the same 6950, pointing to my local bitcoin client/server.  I set the primary miner with the flag '-f 1' and the secondary miner with the flag '-f 15'.  I start both and let them run at the same time.  The primary miner runs at just about full speed (~369 MH/s), while the secondary miner runs at a very low speed (~1 MH/sec).  If there are any connection problems with the primary '-f 1' miner, the secondary '-f 15' miner ramps right up and starts hammering away at full speed, solo mining.
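If you'd rather script this than click through GUIMiner, here's a hedged sketch of the same two-miner setup launched from Python. The poclbm invocation and URLs are illustrative assumptions; check your version's actual flags.

Code:
import subprocess

GPU = "0"  # device index shared by both miners

# Primary miner: -f 1 mines at essentially full speed against the pool.
primary = subprocess.Popen(
    ["python", "poclbm.py", "user:pass@pool.example:8332",
     "-d", GPU, "-f", "1"])

# Secondary miner: -f 15 idles at ~1 MH/s while the primary is busy,
# then ramps up on its own when the primary stops getting work.
# Pointing it at a local bitcoind gives you solo mining as the fallback.
secondary = subprocess.Popen(
    ["python", "poclbm.py", "user:pass@localhost:8332",
     "-d", GPU, "-f", "15"])

primary.wait()
secondary.wait()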

There's no reason you couldn't point your secondary miner to another pool, or even set up a third or fourth miner on the same GPU for more fallback options.  I know the other miners, like Phoenix and Diablo, have 'aggression' settings that do pretty much the same thing as the -f flag in GUIMiner/Poclbm, and with a bit of experimentation you can get things set up to fail over easily.

I'm pretty sure one of the reasons folks are launching DoS attacks is to kill a portion of the network's power and get better odds of finding a block.  If everyone would just set up their miners with 2 or 3 fallback options, the network power would hardly skip a beat, even if the bigger pools got knocked offline.

Seriously, spend the time to set up secondary/tertiary miners for each of your GPU/CPU's; you won't be sorry.