
Topic: [XPM] [ANN] Primecoin High Performance | HP14 released! - page 30

full member
Activity: 364
Merit: 100

Running hp10 since Monday morning, found 3 blocks on Monday with 20 cores, none today - guess it still comes down to luck Cheesy

Oh and 2 blocks over the weekend with hp9

Which instance type did you use? I found the c1.xlarge to be the most cost-effective. Running HP10, the blocks generated can cover 90% of the bill, and since I bought discounted AWS credit codes to fund the running costs, I am still at a profit with a sizable margin.

If the blocks only cover 90% of the bill, how are you "still at a profit with a sizable margin"? And the falling exchange rate certainly isn't helping...
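
For anyone puzzling over the apparent contradiction: the arithmetic works out if the credit codes were bought below face value. A minimal sketch; the 60-cents-on-the-dollar rate is a made-up figure for illustration, not something the poster stated.

Code:
# Toy numbers: blocks cover 90% of the AWS bill at face value, but the
# bill is paid with credit codes bought at a discount. The 60% rate is
# a hypothetical figure for illustration only.
bill_face_value = 100.00
block_revenue = 0.90 * bill_face_value   # blocks cover 90% of the bill
credit_cost = 0.60 * bill_face_value     # credits bought at 60 cents on the dollar

profit = block_revenue - credit_cost
print(f"${block_revenue:.2f} revenue - ${credit_cost:.2f} actual cost = ${profit:.2f} profit")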
newbie
Activity: 10
Merit: 0

Running hp10 since Monday morning, found 3 blocks on Monday with 20 cores, none today - guess it still comes down to luck Cheesy

Oh and 2 blocks over the weekend with hp9

Which instance type did you use? I found the c1.xlarge to be the most cost-effective. Running HP10, the blocks generated can cover 90% of the bill, and since I bought discounted AWS credit codes to fund the running costs, I am still at a profit with a sizable margin.
Sy
legendary
Activity: 1484
Merit: 1003
Bounty Detective
Cool, thanks for the clarification.  Just FYI, I did a major test (500-1000 machines, from 4 to 16 cores, for several hours) of both HP9 and HP10 on mainnet and for the most part found that the default settings you had selected were optimal.  When I did the HP9 test it was just about profitable enough to run for a full day and cover costs (though I had my wallet hacked/stolen along with my entire XPM collection!). HP10 was not even close to profitable, so I won't be running any more tests unless there is a likelihood of 2x performance Smiley

I think it's time to turn your attention to the GPU now, Mikael Smiley

Running hp10 since Monday morning, found 3 blocks on Monday with 20 cores, none today - guess it still comes down to luck Cheesy

Oh and 2 blocks over the weekend with hp9
full member
Activity: 322
Merit: 113
Sinbad Mixer: Mix Your BTC Quickly
I can see the (your) money burning in the coal-fired power plants that run AWS EC2 ...   Undecided  Sad
http://www.zdnet.com/greenpeace-slams-amazon-over-green-datacentre-efforts-3040155041/
Please use a different, more environmentally friendly cloud provider!
... and get your account closed in 2 days. Win-win.
hero member
Activity: 516
Merit: 500
I can see the (your) money burning in the coal-fired power plants that run AWS EC2 ...   Undecided  Sad
http://www.zdnet.com/greenpeace-slams-amazon-over-green-datacentre-efforts-3040155041/
Please use a different, more environmentally friendly cloud provider!
newbie
Activity: 56
Merit: 0
Multiple c1.xlarge

How close does that come to paying for itself these days?
That's mostly luck. You may get zero blocks, you may get several. A Northern Virginia c1.xlarge is ~$0.07 an hour. I have found about 4 blocks in 2 weeks (3 of them have been today, though). Quick rough estimate: I earned 0.25 BTC and paid $23.52. Very slim margins, though still profitable, *IF* you're lucky. I'd advise trying it out and seeing, killing the instance when the difficulty gets to 10.

I guess I don't have that luck Sad
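
Quick sanity check on those figures: $0.07/hour for 336 hours is exactly $23.52, i.e. a single c1.xlarge running the full two weeks. A rough sketch, assuming the 0.25 BTC was the total take for that one instance:

Code:
# Break-even check for one c1.xlarge over two weeks, using the
# figures quoted above.
rate_usd_per_hour = 0.07
hours = 24 * 14                       # two weeks
cost_usd = rate_usd_per_hour * hours  # $23.52

earned_btc = 0.25                     # XPM from ~4 blocks, sold for BTC
breakeven_price = cost_usd / earned_btc

print(f"Cost: ${cost_usd:.2f}")
print(f"Profitable only if BTC is worth more than ${breakeven_price:.2f}")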
full member
Activity: 322
Merit: 113
Sinbad Mixer: Mix Your BTC Quickly
Multiple c1.xlarge

How close does that come to paying for itself these days?
That's mostly luck. You may get zero blocks, you may get several. A Northern Virginia c1.xlarge is ~$0.07 an hour. I have found about 4 blocks in 2 weeks (3 of them have been today, though). Quick rough estimate: I earned 0.25 BTC and paid $23.52. Very slim margins, though still profitable, *IF* you're lucky. I'd advise trying it out and seeing, killing the instance when the difficulty gets to 10.
full member
Activity: 210
Merit: 100
I do not use any kind of messenger; beware of scammers
Incidentally, the cc2.8xlarge is not the most cost-effective for xpm, as the L1 cache appears to be a bottleneck. To prove it, try running just 16 threads instead of all 32 and you'll see little difference!! Or at least that was true for hp9. You can get more bang for your buck with 8 cores.
I noticed this as well. Even though the cc2s put out amazing numbers, the ROI just doesn't seem to be there (you can get 2-3x as many c1.xlarges instead). Would be nice if there was something in between.

How close does that come to paying for itself these days?
In my experience they do not. Variance/luck killed it for me: some days a loss, some good, but overall the profit was getting too low for my tastes. That was with 50 instances, which is a small number, so YMMV, esp. with the "luck" factor.
full member
Activity: 364
Merit: 100
Multiple c1.xlarge

How close does that come to paying for itself these days?
hero member
Activity: 552
Merit: 500
sr. member
Activity: 476
Merit: 250
Multiple c1.xlarge
full member
Activity: 168
Merit: 100
Cool, thanks for the clarification.  Just FYI, I did a major test (500-1000 machines, from 4 to 16 cores, for several hours) of both HP9 and HP10 on mainnet and for the most part found that the default settings you had selected were optimal.

Holy balls, and I thought running 20x cc2.8xlarge AWS instances was nuts.
Out of curiosity, how does one come across so much computing power?  Were you using AWS as well?
AWS with increased limits. Incidentally, the cc2.8xlarge is not the most cost-effective for xpm, as the L1 cache appears to be a bottleneck. To prove it, try running just 16 threads instead of all 32 and you'll see little difference!! Or at least that was true for hp9. You can get more bang for your buck with 8 cores.

Which would you recommend? Multiple c1.xlarge, or just using cc2.8xlarge with threads = cores/2?
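
If anyone wants to test the threads = cores/2 idea on a cc2.8xlarge, a minimal launch sketch; the binary path is an assumption, and -gen / -genproclimit are the standard bitcoind-derived options for enabling mining and capping the thread count:

Code:
# Hypothetical sketch: start primecoind with one mining thread per
# physical core instead of one per logical core.
import subprocess

physical_cores = 16    # cc2.8xlarge: 32 logical cores, 16 physical

subprocess.run([
    "./primecoind",
    "-daemon",
    "-gen",                             # enable mining
    f"-genproclimit={physical_cores}",  # cap mining threads
])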
full member
Activity: 364
Merit: 100
Cool, thanks for the clarification.  Just FYI, I did a major test (500-1000 machines, from 4 to 16 cores, for several hours) of both HP9 and HP10 on mainnet and for the most part found that the default settings you had selected were optimal.

Holy balls, and I thought running 20x cc2.8xlarge AWS instances was nuts.
Out of curiosity, how does one come across so much computing power?  Were you using AWS as well?
AWS with increased limits. Incidentally, the cc2.8xlarge is not the most cost-effective for xpm, as the L1 cache appears to be a bottleneck. To prove it, try running just 16 threads instead of all 32 and you'll see little difference!! Or at least that was true for hp9. You can get more bang for your buck with 8 cores.

Want to know something really strange? I've been running 3 VMs on Azure. I'm doing the free trial, so I'm limited to 20 cores. I had 2 VMs with 8 cores and 1 with 4. Over about 5 days, one of the 8-core VMs got 4 blocks, and the other 2 got absolutely nothing. I'm burning through the $200 credit pretty fast, so I turned off the 2 "unlucky" ones, and I'm hoping the lucky streak continues on the one that's left. But once the free trial is over, it's gone as well - it certainly won't be paying for itself at this rate.
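
To see why shutting down the unlucky VMs stretches the credit, a rough burn-rate sketch; the hourly rates below are hypothetical placeholders, not actual Azure prices:

Code:
# How long does the $200 free-trial credit last?
credit = 200.00
vms = {"8-core #1": 0.72, "8-core #2": 0.72, "4-core": 0.36}  # USD/hour, assumed

all_three = sum(vms.values())
print(f"All three VMs running: {credit / all_three / 24:.1f} days of runtime")
print(f"Only the lucky 8-core: {credit / vms['8-core #1'] / 24:.1f} days of runtime")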
newbie
Activity: 53
Merit: 0

You might need to reinstall your Visual C++ runtimes. I'm guessing either the 2010 or 2012 runtimes in this case.
member
Activity: 98
Merit: 10
Incidentally, the cc2.8xlarge is not the most cost-effective for xpm, as the L1 cache appears to be a bottleneck. To prove it, try running just 16 threads instead of all 32 and you'll see little difference!! Or at least that was true for hp9. You can get more bang for your buck with 8 cores.

The slowdown is caused by Hyper-Threading.  The cc2.8xlarge instances have 32 logical cores but only 16 physical cores.
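
A quick way to pick a sane thread count is to compare logical and physical core counts before starting the miner; a sketch using the third-party psutil package:

Code:
# Compare logical vs. physical core counts before picking a thread count.
# Requires psutil (pip install psutil).
import os
import psutil

logical = os.cpu_count()                    # 32 on a cc2.8xlarge
physical = psutil.cpu_count(logical=False)  # 16 on a cc2.8xlarge

print(f"{logical} logical / {physical} physical cores")
print(f"Suggested mining thread count: {physical}")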
hero member
Activity: 552
Merit: 500
Hmm, hp10 blows up on my Core i7 795 box...

hero member
Activity: 820
Merit: 1000
Cool, thanks for the clarification.  Just FYI, I did a major test (500-1000 machines, from 4 to 16 cores, for several hours) of both HP9 and HP10 on mainnet and for the most part found that the default settings you had selected were optimal.

Holy balls, and I thought running 20x cc2.8xlarge AWS instances was nuts.
Out of curiosity, how does one come across so much computing power?  Were you using AWS as well?
AWS with increased limits. Incidentally, the cc2.8xlarge is not the most cost-effective for xpm, as the L1 cache appears to be a bottleneck. To prove it, try running just 16 threads instead of all 32 and you'll see little difference!! Or at least that was true for hp9. You can get more bang for your buck with 8 cores.
full member
Activity: 168
Merit: 100
Cool, thanks for the clarification.  Just FYI, I did a major test (500-1000 machines, from 4 to 16 cores, for several hours) of both HP9 and HP10 on mainnet and for the most part found that the default settings you had selected were optimal.

Holy balls, and I thought running 20x cc2.8xlarge AWS instances was nuts.
Out of curiosity, how does one come across so much computing power?  Were you using AWS as well?
sr. member
Activity: 301
Merit: 250
Well, I found some small issues in the chains/day estimate with regard to the 'sieveextensions' parameter. The fix is now on GitHub. The estimate seems to have gone down by about 5%, so it's not a big issue. That also means the estimate still doesn't match the actual block rates people have been reporting.

https://github.com/mikaelh2/primecoin/commit/42496a823b15fadd1a8809298c20310686d12ce9
hero member
Activity: 820
Merit: 1000
I'm fairly certain that HP10 does NOT find blocks at 2x the rate of HP9, despite the 2x increase in the chains-per-day value.

I can confirm this.

Yup, it looks like the actual speedup doesn't match the chains/day estimate. I never did a full comparison between hp9 and hp10 myself on mainnet because it's pretty expensive to do that. So big thanks to the guys who did. I did try to adjust the chains/day estimate to account for the effects of extending the sieve. It's possible there are some bugs in that. Of course, the estimate isn't fully accurate in the first place either.
The CPD is a good reference point though.  I think the key here is that we need a stable metric that we can use to benchmark performance, whether or not it's accurate in what it is attempting to measure.  The downside now is that if you change the chains/day estimate to accurately reflect the sieve extension, when you release the next version people are going to complain that it's slower than HP10 Cheesy

With regard to the sieveextensions factor, will changing this value be reflected in the CPD?  i.e. if I change it and I see the CPD increase, can I take that as a positive thing, regardless of whether the increase % is inaccurate?

Do you have any other ideas for performance improvements in the pipeline?

I agree that it would be nice to have a stable metric. The issue is that if the metric is broken, then it's simply misleading. The 'sieveextensions' parameter is reflected in chains/day, but I have a feeling it may be broken. And if it's broken, then you can't really trust it.

There are still things on my to-do list but I'm not sure if there's going to be anything big anymore.
Cool, thanks for the clarification.  Just FYI, I did a major test (500-1000 machines, from 4 to 16 cores, for several hours) of both HP9 and HP10 on mainnet and for the most part found that the default settings you had selected were optimal.  When I did the HP9 test it was just about profitable enough to run for a full day and cover costs (though I had my wallet hacked/stolen along with my entire XPM collection!). HP10 was not even close to profitable, so I won't be running any more tests unless there is a likelihood of 2x performance Smiley

I think it's time to turn your attention to the GPU now, Mikael Smiley
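
One way to act on the caveat about a possibly broken metric is to benchmark rather than trust any single CPD reading: run the miner at a few 'sieveextensions' values and sample the estimate for each. A rough sketch; it assumes getmininginfo exposes a "chainsperday" field (the estimate the hp builds display, per the discussion above), that restarting the daemon between runs is acceptable, and the candidate values are hypothetical picks. Per the discussion, a higher CPD may still not translate into more blocks.

Code:
# Sketch: compare the chains/day estimate across sieveextensions values.
import json
import subprocess
import time

def chains_per_day():
    # Query the running daemon; "chainsperday" is assumed to be present
    # in the getmininginfo output of the hp builds.
    out = subprocess.check_output(["./primecoind", "getmininginfo"])
    return json.loads(out)["chainsperday"]

for ext in (6, 8, 10):                  # candidate values, hypothetical picks
    subprocess.run(["./primecoind", "stop"])
    time.sleep(10)                      # give the daemon time to shut down
    subprocess.run(["./primecoind", "-daemon", "-gen", f"-sieveextensions={ext}"])
    time.sleep(300)                     # let the estimate settle before sampling
    print(f"sieveextensions={ext}: {chains_per_day():.2f} chains/day")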