
Topic: Noob question about hardware: Why so ATI dominant? - page 2. (Read 4581 times)

714
member
Activity: 438
Merit: 10
https://en.bitcoin.it/wiki/Why_a_GPU_mines_faster_than_a_CPU

Decent comparison ATI vs. Nvidia. The Nvidia hardware does better on some tasks. Bitcoin is not one of them. The instruction set of the R5xxx and higher ATI chipsets makes Bitcoin very efficient.
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
It seems he thinks the mining software is biased to run better on ATI and purposely made inefficient on Nvidia, i.e. that they deliberately made it slow on Nvidia.

Too bad his trolling is an epic fail; Bitcoin mining is not a secret and it's very simple. He can try writing his own mining software, experiment with it, and discover WHY Nvidia sucks at mining.
hero member
Activity: 756
Merit: 500
If the OP can't accept that ATI is superior, he can go mine bitcoins with his Nvidia, and good luck!
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
That's why we have a newbie section: to keep trolls from trolling elsewhere and force them to troll here instead.

And, in fact, we have a troll here.

Dear troll, here is a protip: the computation behind Bitcoin mining is NOT a secret, and it is indeed very simple: SHA-256 hashing. It runs faster on ATI because, uh, guess what? ATI can compute these things FASTER than Nvidia, due to its hardware.
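To make "it's just SHA-256" concrete, here is a minimal C sketch of the mining inner loop. It is illustrative only, not code from any actual miner, and the sha256() helper is assumed rather than a real library call:

Code:
#include <stdint.h>
#include <string.h>

/* Assumed helper: writes the SHA-256 digest of `len` bytes into out[32].
   A real miner links against an actual SHA-256 implementation. */
void sha256(const uint8_t *data, size_t len, uint8_t out[32]);

/* The whole job: double-SHA-256 the 80-byte block header for each nonce
   and check the result against the target. The comparison is simplified;
   the real rule treats the hash as a 256-bit number. */
int mine(uint8_t header[80], const uint8_t target[32]) {
    uint8_t h1[32], h2[32];
    for (uint32_t nonce = 0; nonce < UINT32_MAX; nonce++) {
        memcpy(header + 76, &nonce, 4);   /* nonce occupies the last 4 header bytes */
        sha256(header, 80, h1);           /* first pass  */
        sha256(h1, 32, h2);               /* second pass */
        if (memcmp(h2, target, 32) < 0)   /* "hash below target" (simplified) */
            return 1;                     /* found a valid share/block */
    }
    return 0;
}

Every card executes exactly this arithmetic; the only question is how many times per second its hardware can grind through the two hashes.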

Yes, that's all. Nvidia just sucks for mining.

You talk about the software implementation. As I said, the software is very simple; go and try to make it run faster on Nvidia if you want. Hire someone, hire everyone. Good luck, and call me when you have code faster than a same-priced ATI.


As for PhysX... oh LOL. Do you realize that it's NVIDIA CODE? And that it's closed source? Guess what, it runs well on Nvidia? More like it runs ONLY on Nvidia. Lololol

Quote
All math is simple at the base level. The biggest determining factors are how much memory there is, how quickly the memory communicates with the processor, and how long the job is.
Protip: if card 1 does the same computation in 1 cycle while card 2 takes 10 cycles, card 1 will be 10 times faster.

Now go troll elsewhere.
hero member
Activity: 518
Merit: 500
What on earth makes you think all math is the same or that it is memory bound?
For a start, there is quite a fundamental difference between floating-point and integer math. The simple truth is that AMD cards have 2-3x more ALUs than Nvidia cards and can therefore process 2-3x as many 32-bit integer calculations per clock. Another simple truth is that SHA hashing (like many other cryptographic functions) is greatly sped up by rotate-right register operations. AMD cards have a dedicated 1-cycle hardware instruction for this; Nvidia does not and requires 3 clock cycles.
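To put a number on that rotate, here is a rough C sketch (illustrative only, not taken from any miner's kernel) of the operation SHA-256 leans on:

Code:
#include <stdint.h>

/* Rotate a 32-bit word right by n bits (n in 1..31). With a native
   rotate instruction this is one operation; without one it becomes
   the shift/shift/OR below, i.e. roughly three. */
static inline uint32_t rotr32(uint32_t x, unsigned n) {
    return (x >> n) | (x << (32 - n));
}

/* One of the SHA-256 "Sigma" expressions, to show how often rotates
   appear: three rotates and two XORs, evaluated over and over across
   the 64 rounds of every compression call. */
static inline uint32_t big_sigma0(uint32_t x) {
    return rotr32(x, 2) ^ rotr32(x, 13) ^ rotr32(x, 22);
}

A card that retires each rotate in one cycle simply finishes every round sooner than one that needs three.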

Memory bandwidth or capacity is an absolute non-issue for hashing. Miners clock the memory as low as they can to save power and even to speed up mining (!), and Bitcoin mining uses only a few megabytes of VRAM. Anything more is wasted for mining.

Please get a clue.
newbie
Activity: 11
Merit: 0
Quote
I have used both ATI and Nvidia, and the charts corroborate what I have seen. So have the rest of us. If you don't care about any of it, goodbye. You don't even need to bother asking, since you can't accept anything other than what you already think.


I didn't ask for measurements. I have Google as well. I can type in "which GPUs are best for BTC mining".


I asked a question that clearly doesn't belong in the noob section. I have no alternative place to post it.



Perhaps the entire website is not the place to post such a question.



I am asking for an answer that is based on genuine information, not speculation. I highly doubt there are very many electrical engineers on this board who can explain why the simple integer math associated with Bitcoin mining runs more efficiently (in terms of time, aka hashes/sec) on ATI boards than it does on Nvidia boards.


Realistically, I will almost certainly never get an answer here.



The answer that "simple math is done quicker because there are more cores, which were designed to compensate for the lack of complexity in those cores when dealing with the more complex math associated with graphics" is nonsensical on a number of levels.


All math is simple at the base level. The biggest determining factors are how much memory there is, how quickly the memory communicates with the processor, and how long the job is.


A short job and a short overall calculation time pretty much eliminate most of the differences in hardware.


hero member
Activity: 756
Merit: 500
Quote
Thus far the answer has been "because the charts say so."
Remember: all math, all simulations, are the same. Computational fluid dynamics looks the same as quantum mechanics at the most basic level of the code.


Integrals, derivatives, exponentials... they are all just represented as sums and differences of polynomials.

The math of Bitcoin mining is identical on both platforms (crucially, however, the way in which the math is "sent" to and "read from" the GPU may not be the same, amongst a host of other potential differences, but I digress here).

A satisfactory, logically reasoned answer, supported with proof from spec/tech sheets from reliable sources (e.g. corroborated by AnandTech, THG, and/or the manufacturers themselves, to ensure that "claims" are genuine and not just fluff), would look something like this:

1) the AMD boards have more cores
2) those cores are individually clocked higher than the Nvidia boards (and perhaps they can be threaded)
3) thus, because the math is so simple, the AMD boards can compute more calculations per second AND there are more of those cores, so those numbers literally multiply into much higher performance.
4) Memory and bus architecture are irrelevant because of the relatively small numbers involved, and the fact that the total data involved in a complete calculation is minuscule compared to the memory available.


THAT would be a sound justification of why the HARDWARE is the true source of the discrepancy, and not the software implementation.



Is that the case? I don't know. I just made those up as possible explanations that did NOT involve the software developer. POSSIBLE, I repeat this so take it to heart: POSSIBLE discrepancies in how the software was developed offer a significantly larger number of reasons for this discrepancy; that is why I mentioned it first.

Despite what people say, hardware is less of a tiebreaker than you might think.



That IS why PhysX and other SOFTWARE-BASED market norms have given Nvidia the edge. It is also a lot easier to FORCE software down the throats of an industry than it is to force a particular hardware architecture (I am referring to Nvidia here, not AMD; I am referring to Nvidia forcing their software down the throats of game developers, not the debate above about Bitcoin mining software).

My point here is simply that the fundamental difference at the heart of this discrepancy is more likely to be software-based than hardware-based.

For fuck's sake, most software developers don't have a fucking clue how the hardware actually works anyway. It's a lot harder for a software developer to max out their hardware than it is for the hardware developer to hand out tools that make it easier for developers to max out the hardware.

The adage "software lags behind hardware" comes from the fact that it's actually a lot fucking harder to write sophisticated software than it is to decrease the gate size on a silicon wafer. A graduate student, working by himself, can pull the latter off with minimal support from faculty, whereas it usually takes teams of seasoned veteran programmers to churn out high-quality software.



In terms of "fanboyism":

1) I don't play video games. I would rather not expound upon my opinions of people who attempt to justify spending lots of money to play video games at higher frame rates and to make the grass on the simulated ground look more realistic.

2) I have a 6-year-old HP laptop with a single-core Intel Core Duo (not even a Core 2 Duo), 2 GB of memory, and onboard video. I use 2 of my 4 USB ports to juggle external hard drive enclosures for a stack of equally old 3.5" internal drives ranging from 250 to 500 GB. I'm not a gamer. I also have a 1st-generation (literally the first generation of the first generation) Xbox that I won from Taco Bell. It has an Xecuter 3 mod chip, and I use it as a media center with a 1st-generation 1080p Samsung DLP (as in: the first 1080p DLP they ever sold), with audio piped through a 13-year-old Onkyo receiver.

I don't give a flying fuck about fanboyism. Except for my car. Fuck yeah, it's a Nissan 240SX (that is where my money goes, lol). But no, I don't drift. Never. Drifting is an abomination.

I have used both ATI and Nvidia, and the charts corroborate what I have seen. So have the rest of us. If you don't care about any of it, goodbye. You don't even need to bother asking, since you can't accept anything other than what you already think.
newbie
Activity: 11
Merit: 0
Thus far the answer has been "because the charts say so."
Remember: all math, all simulations, are the same. Computational fluid dynamics looks the same as quantum mechanics at the most basic level of the code.


Integrals, derivatives, exponentials... they are all just represented as sums and differences of polynomials.

The math of Bitcoin mining is identical on both platforms (crucially, however, the way in which the math is "sent" to and "read from" the GPU may not be the same, amongst a host of other potential differences, but I digress here).

A satisfactory, logically reasoned answer, supported with proof from spec/tech sheets from reliable sources (e.g. corroborated by AnandTech, THG, and/or the manufacturers themselves, to ensure that "claims" are genuine and not just fluff), would look something like this:

1) the AMD boards have more cores
2) those cores are individually clocked higher than the Nvidia boards (and perhaps they can be threaded)
3) thus, because the math is so simple, the AMD boards can compute more calculations per second AND there are more of those cores, so those numbers literally multiply into much higher performance.
4) Memory and bus architecture are irrelevant because of the relatively small numbers involved, and the fact that the total data involved in a complete calculation is minuscule compared to the memory available.


THAT would be a sound justification of why the HARDWARE is the true source of the discrepancy, and not the software implementation.
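For what it's worth, points 1) through 3) can be turned into a back-of-the-envelope formula. This C sketch uses placeholder numbers (the shader count, clock, and ops-per-hash figures below are assumptions for illustration, not spec-sheet values) just to show how the terms multiply and divide:

Code:
#include <stdio.h>

/* Crude throughput model: shader count and clock multiply, and a fixed
   cost in integer ops per double SHA-256 divides. All three inputs are
   illustrative placeholders, not measured specs. */
int main(void) {
    double shaders      = 1600.0;    /* assumed shader/ALU count              */
    double clock_ghz    = 0.85;      /* assumed core clock in GHz             */
    double ops_per_hash = 3000.0;    /* rough integer ops per double SHA-256  */

    double hps = shaders * clock_ghz * 1e9 / ops_per_hash;
    printf("~%.0f Mhash/s\n", hps / 1e6);   /* ~453 Mhash/s with these inputs */
    return 0;
}

Note that memory never appears in the formula, which lines up with point 4).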



Is that the case? I don't know. I just made those up as possible explanations that did NOT include the software's particular development as a POSSIBILITY for the discrepancy.


Despite what people say, hardware is often less of a tiebreaker than you might think.

That IS why PhysX and other SOFTWARE-BASED market norms have given Nvidia the edge (and probably Intel for that matter, and probably also ARM in the mobile space). It is also a lot easier to FORCE software down the throats of an industry than it is to force a particular hardware architecture (I am referring to Nvidia here, not AMD; I am referring to Nvidia forcing their software/SDK down the throats of game developers, not the debate above about Bitcoin mining software).

My point here is simply that the fundamental difference at the heart of this discrepancy is more likely to be software-based than hardware-based, simply because there are more ways in which the software itself can cause differences in performance.

For fuck's sake, most software developers don't have a fucking clue how the hardware actually works anyway. It's a lot harder for a software developer to max out their hardware by themselves through tinkering and testing than it is for the hardware developer to simply hand out tools that make it easier for developers to max out the hardware, which gives whichever company offers the better deal the edge.

The adage "software lags behind hardware" comes from the fact that it's actually a lot fucking harder to write sophisticated software than it is to decrease the gate size on a silicon wafer (up to a point, of course; a point that we have obviously reached). A graduate student, working by himself, can pull the latter off with minimal support from faculty, whereas it usually takes teams of seasoned veteran programmers to churn out high-quality software.

In terms of "fanboyism":

1) I don't play video games. I would rather not expound upon my opinions of people who attempt to justify spending lots of money to play video games at higher frame rates and to make the grass on the simulated ground look more realistic.

2) I have a 6-year-old HP laptop with a single-core Intel Core Duo (not even a Core 2 Duo), 2 GB of memory, and onboard video. I use 2 of my 4 USB ports to juggle external hard drive enclosures for a stack of equally old 3.5" internal drives ranging from 250 to 500 GB. I'm not a gamer. I also have a 1st-generation (literally the first generation of the first generation) Xbox that I won from Taco Bell. It has an Xecuter 3 mod chip, and I use it as a media center with a 1st-generation 1080p Samsung DLP (as in: the first 1080p DLP they ever sold), with audio piped through a 13-year-old Onkyo receiver. Point is: I don't pay attention to what is new. I don't care either. Everything I have is sufficient for my needs. When 4K TVs and boxes start coming out, I will upgrade.

I don't give a flying fuck about fanboyism. Except for my car. Fuck yeah, Nissan. Fuck all y'all Euro, 'merikan, Australian, and other alternative JDM shit. Nissan reigns supreme. Fuck rotaries (Mazda), fuck Yamaha-designed engine components (Toyota), fuck lol-crank-walk (Mitsubishi), and fuck diesel-engine-sounding, broke-transmission bulbous monstrosities (Subaru).

Haha, I'm joking. But maybe I'm not.

No, I really am. Anyone who likes cars is a friend, even if you like to drive around with a live rear axle or pushrods.
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
Nvidia fanboy spotted  Cheesy

legendary
Activity: 1428
Merit: 1001
Okey Dokey Lokey
I smell someone who Loves Nvidia cards but can't understand why they Fucking Suck at mining.
Here's the thing:

The MAJORITY (and I Seriously dare anyone to challenge me on this) of newer graphics that look "pretty" (and by newer I mean the higher-end DX9 stuff and all the DX11 stuff) are done using shader values.

AMD cards Do Not have a "shader clock". They flat-out don't need one, because they pack a lot more stream processing cores to do the "shadey work" instead.

Whereas Nvidia knows that they do not NEED nearly as many stream processor cores if VIDEO GAMES are mostly programmed around shader-style graphics.

AMD cards have more Power. They do, they are stronger; they can fit A LOT more power in that "spot" where the "shader clock" doesn't exist.
Nvidia cards have more Programming. They do. Sorry, I'm an AMD fanboy FOR LIFE, and I'm sorry to say but I feel that Nvidia has Totally rigged the market with all these "new styles of graphics rendering!" (ever since DX10.1, nearly all new graphics "polish" tech is "shadey").

Bitcoins need no graphics. They are nothing but math, and on the card that is Faster and has More Cores, you get more work done, which results in more bitcoins mined.

Here's a Perfect example.

5830:
321 Mhash/sec
1000 MHz core clock
1120 stream processing cores
Cost? About $139

While the BEST Nvidia card (correct me if I'm wrong), the GTX 590:
193.1 Mhash/sec
1215 MHz "clock"
512x2 stream processing cores
Costs about $749.99

Now clearly, the 5830 is Faster than the Best of Nvidia's cards AT CRUNCHING ONE TYPE OF NUMBER, and that One Type of number is what you need to mine bitcoins.
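As a quick sanity check on those numbers, here is a trivial C snippet computing hashes per dollar from the figures quoted above (the prices are the poster's, so treat the output as illustrative only):

Code:
#include <stdio.h>

/* Back-of-the-envelope value comparison using the numbers quoted above
   (taken from the mining hardware comparison chart; prices as posted). */
int main(void) {
    double hd5830_mhash = 321.0,  hd5830_price = 139.0;
    double gtx590_mhash = 193.1,  gtx590_price = 749.99;

    printf("HD 5830: %.2f Mhash/s per dollar\n", hd5830_mhash / hd5830_price); /* ~2.31 */
    printf("GTX 590: %.2f Mhash/s per dollar\n", gtx590_mhash / gtx590_price); /* ~0.26 */
    return 0;
}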

Could someone chime in and remind me what the name of the calculation is?
WILD GUESSES AS TO THE NAME:
Floating point...
Integer...
I don't know... What's the damn name for it?

Oh, just another note: those stats were taken off the mining hardware comparison chart.
hero member
Activity: 518
Merit: 500
Sounds like we have another Nvidia fanboy who thinks all Bitcoin miners are AMD fanboys.

It's really simple: AMD cards have a different architecture that happens to be massively better for Bitcoin mining; they have many more, but much simpler, shaders than Nvidia cards (which have fewer, more complex ones). For games these different approaches tend to be competitive; for floating-point math Nvidia is typically far ahead; but for simple integer math like Bitcoin mining, AMD is the obvious choice.
newbie
Activity: 11
Merit: 0
I am not a programmer.


But I do know that such a massive, 100% consistent discrepancy in performance between ATI and Nvidia hardware with the existing mining software is not just:

"DURRR ATI IS BETTAR BUCUZ THEY HAZ BETTER DESIGN!"

That's bullshit.

What is it? Was the software developed FOR ATI hardware? Were the developers fed up with Nvidia's monopolistic, Microsoft-esque PhysX bullshit? Is Nvidia's SDK clunky and hard to work with?

what?

There must be a REAL REASON. Because I can tell you right now that even the best programmers and computer scientists will never be able to reconcile why a particular hardware design is better than another with why their software works better on a particular hardware system.

Especially when we are talking about what amounts to the absolute simplest mathematics on the planet.

It would take at least one "fresh," very intelligent Ph.D. in EE/solid-state physics to even begin to expound upon that subject, and probably more than one.

The discrepancies I have seen between ATI and Nvidia with mining look like the discrepancies I have seen in PhysX-based benchmarks in the past. I think that is the real situation. I hope my implication there makes sense to everyone.