Topic: How to Avoid DDoS and Other Downtime (Read 5545 times)

sr. member
Activity: 378
Merit: 250
June 20, 2011, 07:22:14 AM
#30
Maybe it's just my setup, but I've tried using Phoenix (both with and without phatk) on multiple cards, including 5850s, 5830s and a 6950, and I can never get the same hash rates I get from just using Poclbm (with GUIMiner on the front end).
hero member
Activity: 575
Merit: 500
The North Remembers
June 20, 2011, 07:02:01 AM
#29
Phoenix Rising makes it really easy to set up a backup pool. Set your timeout options and it will automatically switch to a backup server or even restart phoenix.exe.
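If you'd rather roll your own, the idea behind that kind of failover is simple enough to sketch in Python: relaunch the miner against the next pool in a list whenever it exits or goes quiet. The miner command, pool URLs and the 60-second stall threshold below are all placeholders, so treat this as an illustration rather than a drop-in replacement for Phoenix Rising.
Code:
import subprocess, sys, threading, time

# Placeholder sketch: miner command, pool URLs and stall threshold are examples.
MINER = ["phoenix.exe", "-k", "phatk", "DEVICE=0", "AGGRESSION=11"]
POOLS = [
    "http://user:pass@primarypool.example:8332/",
    "http://user:pass@backuppool.example:8332/",
]
STALL_SECONDS = 60

last_output = time.time()

def pump(pipe):
    """Echo miner output and remember when we last saw any."""
    global last_output
    for raw in iter(pipe.readline, b""):
        sys.stdout.write(raw.decode(errors="replace"))
        last_output = time.time()

pool = 0
while True:
    proc = subprocess.Popen(MINER + ["-u", POOLS[pool]],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    last_output = time.time()
    threading.Thread(target=pump, args=(proc.stdout,), daemon=True).start()
    while proc.poll() is None:
        time.sleep(1)
        if time.time() - last_output > STALL_SECONDS:
            proc.kill()          # miner went quiet -- assume its pool is down
            break
    pool = (pool + 1) % len(POOLS)   # rotate to the next pool and relaunch
    time.sleep(5)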
legendary
Activity: 1855
Merit: 1016
June 20, 2011, 06:37:47 AM
#28
The -f flag is not the same as AGGRESSION.
If you put -f 120, you will see NO mining until the main server goes down; then it starts, and the hash rate will only be some 10-20 Mhash/s lower.

But if you try it with AGGRESSION, that's completely different.
AGGRESSION=6 will make a 5870 mine at 230 Mhash/s,
while AGGRESSION=12 will make it mine at 430 Mhash/s.


Using the -f flag only works if you use poclbm, and phoenix with phatk gives a higher hash rate than poclbm.
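To make the contrast concrete, here is a hedged sketch of launching a backup phoenix instance at a fixed AGGRESSION, with the poclbm -f alternative only noted in a comment. The pool URL and device number are placeholders and the flags are from memory, so double-check against the phoenix README before copying anything.
Code:
import subprocess

POOL = "http://user:pass@backuppool.example:8332/"   # placeholder backup pool

# Phoenix: AGGRESSION fixes how hard this instance mines all the time,
# whether or not another miner is sharing the GPU (per the numbers above,
# AGGRESSION=6 gives roughly half the 5870 hash rate of AGGRESSION=12).
phoenix_backup = ["python", "phoenix.py", "-u", POOL,
                  "-k", "phatk", "DEVICE=0", "VECTORS", "BFI_INT", "AGGRESSION=6"]

# poclbm: a high -f value (e.g. -f 120, as above) keeps its instance nearly
# idle while the primary owns the GPU, so it only ramps up once the primary's
# pool dies.  Its connection flags vary by version and are omitted here.

subprocess.Popen(phoenix_backup).wait()   # run the phoenix backup instance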
legendary
Activity: 1708
Merit: 1019
June 20, 2011, 04:58:27 AM
#27
When people brag about their hash rates, I wonder if they just sum up their multiple miners' hash rates or if they crank aggression all the way up to 16...
legendary
Activity: 1762
Merit: 1010
June 19, 2011, 09:06:40 PM
#26
I had a feeling that I would get more stale shares using more than one miner per GPU. Also one should only trust the hashrate measured at the pool(s).

I run 2 Phoenix clients per GPU (I have 3 x HD5870) and I run them both at the same Aggression. This splits the work between them evenly until one of the pools has a hiccup. My total hash rate is slightly higher than running one instance of Phoenix per GPU.

Why would you get more stale shares using more than one miner per GPU?

The hash rate at the pool is an estimate, so it is nowhere near as accurate as the client-reported hash rate.
I did the same thing happily for a while, but now I am not so sure any more that it is the optimum.

Theoretically you get more stales because your workers are slower compared to the other workers in the pool. I get lots of stales but there might be some other reason I have yet to find.

---edit---
For my system the gain in hashrate using two miners is three to five times larger than the loss from stales. I will keep running two miners per GPU.

This is what I do.  The only thing I've noticed is some disconnection problems when running multiple miner connections, with two computers each running two instances (four connections total) through the same mobile broadband connection. I don't know if it is a port issue with the router or what exactly, but it is something to be aware of.
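The gain-versus-stales trade-off in the edit above is easy to sanity-check with your own numbers; here is a quick sketch with made-up figures in place of real measurements.
Code:
# Back-of-the-envelope check of running two miners per GPU.
# All four numbers are placeholders -- plug in your own measurements.
single_rate  = 400.0   # Mhash/s with one miner on the GPU
double_rate  = 410.0   # Mhash/s total with two miners on the GPU
stale_single = 0.01    # fraction of shares stale with one miner
stale_double = 0.02    # fraction of shares stale with two miners

effective_single = single_rate * (1 - stale_single)
effective_double = double_rate * (1 - stale_double)

print("one miner : %.1f Mhash/s effective" % effective_single)
print("two miners: %.1f Mhash/s effective" % effective_double)
# Keep the two-miner setup only if the second line comes out higher.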
legendary
Activity: 1762
Merit: 1010
June 19, 2011, 09:01:43 PM
#25
I had a feeling that I would get more stale shares using more than one miner per GPU. Also one should only trust the hashrate measured at the pool(s).

I've actually seen a pool's stated hash rate come out a couple hundred Mhash/s HIGHER than what my card model should be capable of, so I don't know that you can trust the pools to be accurate, either.
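Worth remembering why the pool number swings: the pool never measures your hardware, it just counts accepted difficulty-1 shares and works backwards, so over a short window the estimate can easily land a couple hundred Mhash/s high or low. A sketch of the usual back-calculation:
Code:
# A pool only sees your accepted shares; each difficulty-1 share represents
# about 2**32 hashes on average, so its "hash rate" is just an estimate.
def pool_hashrate_estimate(shares, seconds):
    """Estimated hash rate in Mhash/s from accepted shares in a time window."""
    return shares * 2**32 / seconds / 1e6

# Example: 85 shares accepted in a 10-minute window.
print("%.0f Mhash/s" % pool_hashrate_estimate(85, 600))   # ~608 Mhash/s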
member
Activity: 111
Merit: 10
★Trash&Burn [TBC/TXB]★
June 17, 2011, 02:17:55 PM
#24
I've noticed lately that many users have our pool at BitClockers.com set as a failover for the larger pools. We jumped to over 250 GHash/s during the Deepbit/Slush DDoS. Why not just mine with us 24/7? :P

I guess we'll take what we can get until people figure out that it's not advantageous to run with a giant pool. Any pool that can manage 2 blocks a day has no real drawbacks from variance. Mining isn't a race; a block found somewhere doesn't "reset" anything. Any hash has the chance to be a block... Mine with pools that care, have stability, and a rocking community.
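The "2 blocks a day" rule of thumb is easy to check for a pool of any size. A rough sketch; the difficulty below is a placeholder, roughly where it sat in mid-June 2011.
Code:
# Expected blocks per day for a pool:
#   blocks/day = hash_rate * 86400 / (difficulty * 2**32)
DIFFICULTY = 877000.0   # placeholder, roughly the mid-June 2011 level

def blocks_per_day(hashrate_ghs):
    return hashrate_ghs * 1e9 * 86400 / (DIFFICULTY * 2**32)

for ghs in (10, 87, 250):
    print("%4d GHash/s pool: %.1f blocks/day" % (ghs, blocks_per_day(ghs)))
# Around 87 GHash/s already clears the "2 blocks a day" bar at this difficulty.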
legendary
Activity: 1708
Merit: 1019
June 16, 2011, 05:27:06 PM
#23
I had a feeling that I would get more stale shares using more than one miner per GPU. Also one should only trust the hashrate measured at the pool(s).

I run 2 Phoenix clients per GPU (I have 3 x HD5870) and I run them both at the same Aggression. This splits the work between them evenly until one of the pools has a hiccup. My total hash rate is slightly higher than running one instance of Phoenix per GPU.

Why would you get more stale shares using more than one miner per GPU?

The hash rate at the pool is an estimate, so it is nowhere near as accurate as the client-reported hash rate.
I did the same thing happily for a while, but now I am not so sure any more that it is the optimum.

Theoretically you get more stales because your workers are slower compared to the other workers in the pool. I get lots of stales but there might be some other reason I have yet to find.

---edit---
For my system the gain in hashrate using two miners is three to five times larger than the loss from stales. I will keep running two miners per GPU.
full member
Activity: 120
Merit: 100
June 16, 2011, 05:43:34 AM
#22
I had a feeling that I would get more stale shares using more than one miner per GPU. Also one should only trust the hashrate measured at the pool(s).

I run 2 Phoenix clients per GPU (I have 3 x HD5870) and I run them both at the same Aggression. This splits the work between them evenly until one of the pools has a hiccup. My total hash rate is slightly higher than running one instance of Phoenix per GPU.

Why would you get more stale shares using more than one miner per GPU?

The hash rate at the pool is an estimate, so it is nowhere near as accurate as the client-reported hash rate.
legendary
Activity: 1708
Merit: 1019
June 16, 2011, 03:32:17 AM
#21
I had a feeling that I would get more stale shares using more than one miner per GPU. Also one should only trust the hashrate measured at the pool(s).
full member
Activity: 238
Merit: 100
June 13, 2011, 11:39:47 PM
#20
Please see my post here, http://forum.bitcoin.org/index.php?topic=16548.0

I'm working on an automated solution to this exact thing and will need testers soon!
full member
Activity: 227
Merit: 100
June 13, 2011, 11:30:30 PM
#19
The hash rate may not be as good with the aggression method, but it's a trade-off between bigger pools with more frequent payouts vs. mining on smaller pools.

sr. member
Activity: 336
Merit: 250
yung lean
June 13, 2011, 07:59:49 PM
#18
The proxy is still a single point of failure. It just moves it from the pool to the proxy server.

How many threads have you seen about people's proxies going down?  How many for pools being DDOSed?

The proxy is much more reliable than the frequently crashing/DDOSed pools.

No point arguing, besides...  I've seen the light.

This guy is completely right.  I can't believe how silly I'm being.  I mean... I use Linux and several boxes for everything, but all of that is "single points of failure."

What if someone hax0rs my kernels?  I should totally start mixing Windows and MacOS boxes into my miner pool to avoid that single point of failure.  While I'm at it, I think I'm going to order a couple of T1s that are provided by several different service providers.  And ATI cards - crap, the drivers are a single point of failure.  I'm going to have to make large NVIDIA farms just to be redundant for my boxes!

And my name servers.  I only have 2 and they're both from the same provider!  Oh shiii-!  What if that goes down?  I'm going to need to convince all of the pool operators to join an MPLS cloud.

Wait... the MPLS requires internet to be functioning....  F#@#!  I'm going to have to call Al Gore and see if we can reinvent the internet as an alternate path to my pools!


Well, I don't have two T1s, but I do have a failover connection. :P I guess I just take my money-making more seriously than most.
sr. member
Activity: 313
Merit: 250
June 13, 2011, 07:27:55 PM
#17
Hello,


Don't like the Flexible Proxy Project?

I haven't used it, but it seems a bit more elegant than running 99 miners on 99 pools to dodge downtime....

I use a C++ proxy that I should eventually release.


Proxies are definitely a viable alternative, but might be a bit complicated to set up for some folks.  I run 20 different miners across 6 machines and it hasn't been a huge burden.  Once the secondary miners are set up you don't have to do anything, so it's an extra couple of minutes to get them going, but then you're set.

Some folks need to get on the ball, then! ;)

There's also a modified poclbm that has a fallback option floating around somewhere.

It's good to hear you are having luck with your improvised system, but there is clearly a need for a better solution.


I guess you mean this poclbm with backup options -> https://github.com/kylegibson/poclbm
Seems to do what it should, but I don't like poclbm; I get a lot of "miner is idle" messages with it :(

If you could release your C++ proxy, that would be great; I assume it doesn't need MySQL?
Pretty sure people would be willing to donate something to you for that.
I was thinking about this flexible miner proxy, but maybe it's not such a great idea to run a MySQL server on a USB stick :D
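For anyone curious what such a proxy boils down to, here is a bare-bones sketch: a local HTTP endpoint that forwards each getwork JSON-RPC request to the first pool that answers. The pool credentials and listen port are placeholders, there is no share caching or long polling, and it is not meant as a substitute for the Flexible Mining Proxy or the C++ proxy mentioned above.
Code:
import base64
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder pools (url, worker, password), tried in order for every request.
POOLS = [
    ("http://primarypool.example:8332/", "worker1", "pass"),
    ("http://backuppool.example:8332/",  "worker1", "pass"),
]
LISTEN_PORT = 8330   # point the miners at http://x:x@localhost:8330/

def forward(body):
    """Try each pool in turn and return the first successful response body."""
    last_error = None
    for url, user, password in POOLS:
        auth = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
        req = urllib.request.Request(url, data=body, headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + auth,
        })
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.read()
        except Exception as exc:          # pool down, DDoSed, or timing out
            last_error = exc
    raise last_error

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        try:
            answer = forward(body)
        except Exception:
            self.send_error(502, "all pools unreachable")
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(answer)))
        self.end_headers()
        self.wfile.write(answer)

if __name__ == "__main__":
    HTTPServer(("", LISTEN_PORT), ProxyHandler).serve_forever()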
copper member
Activity: 56
Merit: 0
June 13, 2011, 07:12:15 PM
#16
The proxy is still a single point of failure. It just moves it from the pool to the proxy server.

How many threads have you seen about people's proxies going down?  How many for pools being DDOSed?

The proxy is much more reliable than the frequently crashing/DDOSed pools.

No point arguing, besides...  I've seen the light.

This guy is completely right.  I can't believe how silly I'm being.  I mean... I use Linux and several boxes for everything, but all of that is "single points of failure."

What if someone hax0rs my kernels?  I should totally start mixing Windows and MacOS boxes into my miner pool to avoid that single point of failure.  While I'm at it, I think I'm going to order a couple of T1s that are provided by several different service providers.  And ATI cards - crap, the drivers are a single point of failure.  I'm going to have to make large NVIDIA farms just to be redundant for my boxes!

And my name servers.  I only have 2 and they're both from the same provider!  Oh shiii-!  What if that goes down?  I'm going to need to convince all of the pool operators to join an MPLS cloud.

Wait... the MPLS requires internet to be functioning....  F#@#!  I'm going to have to call Al Gore and see if we can reinvent the internet as an alternate path to my pools!
full member
Activity: 154
Merit: 100
June 13, 2011, 06:14:06 PM
#15
The proxy is still a single point of failure. It just moves it from the pool to the proxy server.

How many threads have you seen about people's proxies going down?  How many for pools being DDOSed?

The proxy is much more reliable than the frequently crashing/DDOSed pools.
sr. member
Activity: 336
Merit: 250
yung lean
June 13, 2011, 05:39:42 PM
#14
The proxy is still a single point of failure. It just moves it from the pool to the proxy server.
copper member
Activity: 56
Merit: 0
June 13, 2011, 05:35:46 PM
#13

Don't like the Flexible Proxy Project?

I haven't used it, but it seems a bit more elegant than running 99 miners on 99 pools to dodge downtime....

I use a C++ proxy that I should eventually release.


Proxies are definitely a viable alternative, but might be a bit complicated to set up for some folks.  I run 20 different miners across 6 machines and it hasn't been a huge burden.  Once the secondary miners are set up you don't have to do anything, so it's an extra couple of minutes to get them going, but then you're set.

Some folks need to get on the ball, then! ;)

There's also a modified poclbm that has a fallback option floating around somewhere.

It's good to hear you are having luck with your improvised system, but there is clearly a need for a better solution.
sr. member
Activity: 378
Merit: 250
June 13, 2011, 05:29:03 PM
#12

Don't like the Flexible Proxy Project?

I haven't used it, but it seems a bit more elegant than running 99 miners on 99 pools to dodge downtime....

I use a C++ proxy that I should eventually release.


Proxies are definitely a viable alternative, but might be a bit complicated to set up for some folks.  I run 20 different miners across 6 machines and it hasn't been a huge burden.  Once the secondary miners are set up you don't have to do anything, so it's an extra couple of minutes to get them going, but then you're set.

I go from about 330 Mhash/s a core to 310 Mhash/s. Unless my miners are down for 2 hours a day, I'm losing money by doing this. I agree that this isn't really much of a solution. They should build failover control into GUIMiner or something.

I agree on the built-in failover control, that would be the best option of all.  I'm not sure I follow you on losing 20 MH/sec, though.  Are you losing that from your primary miner while the secondary is running?  Or is that the secondary going full out when the primary is down?

I tried this with phoenix and the phatk kernel. I gave the primary miner an aggression value of 11 and the secondary miner an aggression of 1. This worked fine until I took down the primary miner. The hash rate on the secondary miner only jumped from ~1 Mhash/s to ~15 Mhash/s. It should have jumped to ~300 Mhash/s.

I can set the aggression the same on both the primary and the secondary, and then they both run at ~150 Mhash/s. I don't want to do that, though, as it makes my rigs a bit less stable.

I gave up on using phoenix/phatk because I was actually getting better numbers using poclbm no matter what I tried.  Try tweaking the secondary miner a bit, though, and run it at a higher aggression value.  You might find a balance that works.
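To put numbers on the 330-vs-310 trade-off above: a backup instance that costs hash rate around the clock only pays off if it covers enough primary-pool downtime. A quick sketch using the figures from the post, optimistically assuming the secondary really does ramp up when the primary dies:
Code:
# Is a permanent ~20 Mhash/s haircut worth it?  Only if it covers enough
# primary-pool downtime.  Rates are the ones quoted in the post above.
normal_rate = 330.0   # Mhash/s per core with a single miner
hedged_rate = 310.0   # Mhash/s per core with the backup instance running
backup_rate = 310.0   # Mhash/s assumed while the primary pool is down

diff = normal_rate - hedged_rate
break_even = diff / (backup_rate + diff)      # fraction of each day
print("break-even downtime: %.1f hours/day" % (break_even * 24))   # ~1.5 h
# If the primary pool is down less than this per day, the hedge loses money.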
member
Activity: 87
Merit: 10
June 13, 2011, 05:22:58 PM
#11
I tried this with phoenix and the phatk kernel. I gave the primary miner an aggression value of 11 and the secondary miner an aggression of 1. This worked fine until I took down the primary miner. The hash rate on the secondary miner only jumped from ~1 Mhash/s to ~15 Mhash/s. It should have jumped to ~300 Mhash/s.

I can set the aggression the same on both the primary and the secondary, and then they both run at ~150 Mhash/s. I don't want to do that, though, as it makes my rigs a bit less stable.