
Topic: GPUMAX | The Bitcoin Mining Marketplace - page 73. (Read 215554 times)

sr. member
Activity: 350
Merit: 250
No probs, thanks guys Smiley
donator
Activity: 308
Merit: 250
sr. member
Activity: 378
Merit: 250
"Yes I am a pirate, 200 years too late."
The site's still down, right (or is it just me)?

Yep still down, migration in progress.
sr. member
Activity: 350
Merit: 250
The site's still down, right (or is it just me)?
sr. member
Activity: 378
Merit: 250
"Yes I am a pirate, 200 years too late."
Is this even worth subscribing for a beta invitation? Or is there a backlog of several years anyway?
Would be nice to show that on the registration form.

Years, NO... Maybe just 13 months.  Nah, we stopped sending invites because we maxed out our system.  We should be able to get everyone on within a week.
hero member
Activity: 504
Merit: 500
FPGA Mining LLC
Is this even worth subscribing for a beta invitation? Or is there a backlog of several years anyway?
Would be nice to show that on the registration form.
sr. member
Activity: 378
Merit: 250
"Yes I am a pirate, 200 years too late."
Update

The soon has come! We'll be migrating the system to the new cluster today.  During this time we expect the system to be down for about an hour.  Because we're timing the changes with our DC, we don't have an exact time for the downtime, so please be sure to have a fail-over set for your miners so you don't get caught with your pants down.

I'll send another message with details of the update when all systems are GO.

Thanks again for your patience and support.

-pirate
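
For anyone who hasn't set up a fail-over before, a minimal sketch of the idea using cgminer's multi-pool flags (the hostnames, workers and passwords here are placeholders, not GPUMAX's actual endpoint):

cgminer -o http://PRIMARY_POOL_HOST:8332 -u WORKER -p PASS -o http://BACKUP_POOL_HOST:8332 -u BACKUP_WORKER -p BACKUP_PASS --failover-only

With --failover-only, cgminer stays on the first pool and only falls back to the second when the first stops responding, so a short outage like this one shouldn't leave cards idle.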
hero member
Activity: 504
Merit: 500
works now...thank you!

 Shocked
sr. member
Activity: 289
Merit: 250
For the guys running BAMT, what does the line in your pool config look like if you're mining with phoenix?

I had it, but for some reason I had to reimage and now I can't get it to work. It will mine on slush directly, but I can't get it to mine when I use the GPUMAX user and password. Huh

http://user:[email protected]:8332
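
If it helps, a minimal sketch of how a URL in that form is usually passed to phoenix on the command line, assuming phoenix's standard -u/-k flags (the host, worker and password below are placeholders, not GPUMAX's real endpoint):

phoenix.py -u http://WORKER_NAME:WORKER_PASS@POOL_HOST:8332/ -k poclbm DEVICE=0 VECTORS AGGRESSION=7

The worker name and password are embedded in the URL itself, so a typo in either is the usual reason a rig authenticates fine on one pool and not on another.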
hero member
Activity: 504
Merit: 500
For the guys running BAMT, what does the line in your pool config look like if you're mining with phoenix?

I had it, but for some reason I had to reimage and now I can't get it to work. It will mine on slush directly, but I can't get it to mine when I use the GPUMAX user and password. Huh
vip
Activity: 574
Merit: 500
Don't send me a pm unless you gpg encrypt it.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
The disk write/read speeds are the biggest issue with AWS.  We did some testing and it just couldn't keep up with even a small load.  They've made a lot of changes recently and it may be better now.
Good to know that you are exploring all kinds of options.
sr. member
Activity: 378
Merit: 250
"Yes I am a pirate, 200 years too late."
Guys the change is just to ease the pressure on the system while the new cluster comes up.  This is nothing more than a short term fix.  The old way will be back, better and more reliable.
Can your transaction logs be stored with NoSQL or an equivalent, or must they be in a standard relational database? Was just looking at this: http://aws.amazon.com/dynamodb/ and wondered if it could be cost effective while maintaining extremely high performance.
We use MongoDB now.  I'm not sure what you are referring to.  Amazon's service is too slow for what we need.
MongoDB is a NoSQL, non-relational DB similar to Amazon's service. However, yes, it probably would be slower if you are hosting the server in one place and the database in another. If it were all on AWS, I'd be interested in the performance.

Our company has a high performance distributed logging system that is not the same as this but somewhat similar in scale, and we are getting ready to set it up with AWS. I guess that will tell me whether the performance is anything close to a local cluster, but it is nice that there is no hardware to manage now. Grin

The disk write/read speeds are the biggest issue with AWS.  We did some testing and it just couldn't keep up with even a small load.  They've made a lot of changes recently and it may be better now.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Guys the change is just to ease the pressure on the system while the new cluster comes up.  This is nothing more than a short term fix.  The old way will be back, better and more reliable.
Can your transaction logs be stored with NoSQL or an equivalent, or must they be in a standard relational database? Was just looking at this: http://aws.amazon.com/dynamodb/ and wondered if it could be cost effective while maintaining extremely high performance.
We use MongoDB now.  I'm not sure what you are referring to.  Amazon's service is too slow for what we need.
MongoDB is a NoSQL, non-relational DB similar to Amazon's service. However, yes, it probably would be slower if you are hosting the server in one place and the database in another. If it were all on AWS, I'd be interested in the performance.

Our company has a high performance distributed logging system that is not the same as this but somewhat similar in scale, and we are getting ready to set it up with AWS. I guess that will tell me whether the performance is anything close to a local cluster, but it is nice that there is no hardware to manage now. Grin
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
Guys the change is just to ease the pressure on the system while the new cluster comes up.  This is nothing more than a short term fix.  The old way will be back, better and more reliable.
Great to hear Grin
sr. member
Activity: 378
Merit: 250
"Yes I am a pirate, 200 years too late."
Guys the change is just to ease the pressure on the system while the new cluster comes up.  This is nothing more than a short term fix.  The old way will be back, better and more reliable.
Can your transaction logs be stored with NoSQL or an equivalent, or must they be in a standard relational database? Was just looking at this: http://aws.amazon.com/dynamodb/ and wondered if it could be cost effective while maintaining extremely high performance.

We use MongoDB now.  I'm not sure what you are referring to.  Amazon's service is too slow for what we need.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Guys the change is just to ease the pressure on the system while the new cluster comes up.  This is nothing more than a short term fix.  The old way will be back, better and more reliable.
Can your transaction logs be stored with NoSQL or an equivalent, or must they be in a standard relational database? Was just looking at this: http://aws.amazon.com/dynamodb/ and wondered if it could be cost effective while maintaining extremely high performance.
sr. member
Activity: 378
Merit: 250
"Yes I am a pirate, 200 years too late."
Guys the change is just to ease the pressure on the system while the new cluster comes up.  This is nothing more than a short term fix.  The old way will be back, better and more reliable.
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
Would you rather do 50% public work or no public work?

I'm not sure why doing 50% public work is bad, but if you don't like it you can set your workers to only do private work.
Obviously, as it currently stands, it's still better than 0% all the time. I was saying it was better when it used to be 100% or 0%; now it's 50% or 0%, and the BTC/day ends up being a lot less than it was before this change.
legendary
Activity: 2646
Merit: 1137
All paid signature campaigns should be banned.
zomg! purchases are awesome!
They are fun to watch.