
Topic: Noob question about hardware: Why so ATI dominant? (Read 4596 times)

newbie
Activity: 5
Merit: 0
I'm gonna get murdered for this post.

I'm not sure if anyone truly answered your question. I am a programmer; I don't know if that helps... To the point: ATI/AMD has a single instruction that Nvidia does not have. It is called rotate-right, aka BIT_ALIGN_INT. It is the fundamental basis for optimized Bitcoin hashing. As it stands, even with the most brilliant minds working to make the Nvidia code as optimized as possible, it takes 3 operations: 2 shifts + 1 OR. There are some people working to petition for an updated set of CUDA instructions that includes rotate-right and rotate-left. Just getting it down to one operation would be an amazing improvement. The other issue, as you obviously read, is that AMD simply has a massively larger number of stream processors than Nvidia. The stream processors are what process the SHA-256 operations, be it Nvidia or AMD.

The rotate right (circular right shift) operation ROTR_n(x), where x is a w-bit word and n is an integer with 0 ≤ n < w, is defined by

ROTR_n(x) = (x >> n) ∨ (x << (w − n)).

Thus, ROTR_n(x) is equivalent to a circular shift (rotation) of x by n positions to the right.

This operation is used by the SHA-256, SHA-384, and SHA-512 algorithms.

I don't want to overcomplicate things, but I love teaching, so here it goes. This is super simplified, by the way, and I am sure someone will wince and bash me over the head for it. I apologize in advance...

SHA-256 is a hash operation that creates an effectively random number, in hexadecimal of course, based off of an input value; in the case of Bitcoin, the current block. The main goal of Bitcoin hashing is to try to randomly create a hash whose value is less than the target. For example, the current target is

0000000000000E86680000000000000000000000000000000000000000000000

getting a hash of

0000000000000E86670000000000000000000000000000000000000000000000
                 ^
                 lower value

would "win" you the block, thus awarding you 50 BTC.

Based on the current difficulty, the probability of winning with any single hash is 0.0000000000000002015752067216838860908012520667398348450

The lower the difficulty, the higher the value of the target, and vice versa for higher difficulty.

The hash also has to be verifiable as a valid hash; you obviously can't have a program just throw back "oh, I found the lower value, wink wink" lol.

This guy David Perry gives an awesome explanation of why AMD is cornering the market on integer-based calculations, and shows it's not just the Bitcoin world.

http://bitcoin.stackexchange.com/questions/1523/bitcoin-alternative-designed-for-nvidia

This SHA-256 stuff is and was mostly used for GPGPU work by hackers. I wouldn't be surprised if it was a group of hackers that originally wrote all the mining programs we use today. It is the dominant force right now in password cracking, encryption, etc.

In closing, Nvidia, in order to keep up with market demands, will eventually have to bring small-integer math back into their designs to keep up with next-generation uses like full hard-drive encryption, faster SSL handshakes, etc. Computer security is ever evolving; once we start getting into the 10/50/100 MB encryption workloads, CPU processing as it is now will never be able to keep up; hell, it can't keep up with our 2 MB encryptions lol.

I don't know if you are a programmer; I assume you must have some knowledge, else you wouldn't be seeking more knowledge. It's addicting. Here is a post showing the process of getting a target in really, really simple C.

http://pastebin.com/n8UEGA86

Thanks,
icvader
newbie
Activity: 31
Merit: 0
I think the problem is you are thinking that AMD GPUs and Nvidia GPUs are as similar as AMD CPUs and Intel CPUs. The CPUs have nearly identical instruction sets and are meant to be somewhat interchangeable, while the GPUs each have their own entirely separate instruction set.

AMD = more cores, but each core is slower than an Nvidia core
Nvidia = fewer cores, but each core is faster than an AMD core

Hashing is not really an intensive operation, so the speed of an individual core doesn't make much difference. What makes a difference is being able to do many hashes at once, so the number of cores is what increases the hash rate.

An FPGA or ASIC takes this to the extreme: many, many very dumb cores, but each just fast enough to do the hash.
legendary
Activity: 1428
Merit: 1001
Okey Dokey Lokey
Here is a nice explanation about "physx on cpu"

Quote
http://techreport.com/discussions.x/19216
PhysX hobbled on the CPU by x87 code
Quote
x87 has been deprecated for many years now, with Intel and AMD recommending the much faster SSE instructions for the last 5 years. On modern CPUs, code using SSE instructions can easily run 1.5-2X faster than similar code using x87.  By using x87, PhysX diminishes the performance of CPUs, calling into question the real benefits of PhysX on a GPU.

Quote
http://www.rage3d.com/board/showthread.php?t=33965625
PhysX - Intentionally Slow on CPUs? RealWorldTech Investigates.
Quote
PhysX uses x87 because Ageia and now Nvidia want it that way. Nvidia already has PhysX running on consoles using the AltiVec extensions for PPC, which are very similar to SSE. It would probably take about a day or two to get PhysX to emit modern packed SSE2 code, and several weeks for compatibility testing. In fact for backwards compatibility, PhysX could select at install time whether to use an SSE2 version or an x87 version – just in case the elusive gamer with a Pentium Overdrive decides to try it.

But both Ageia and Nvidia use PhysX to highlight the advantages of their hardware over the CPU for physics calculations. In Nvidia’s case, they are also using PhysX to differentiate with AMD’s GPUs. The sole purpose of PhysX is a competitive differentiator to make Nvidia’s hardware look good and sell more GPUs.

You, sir, are all my better arguments, cleaned up and slammed down.
Flawless explanation, and from my own experience, it's completely true.

Now then
Lock This Thread.
If you don't know how, go to your first post and hit Edit; near the bottom left will be a small "Lock topic" button.
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
Here is a nice explanation about "physx on cpu"

Quote
http://techreport.com/discussions.x/19216
PhysX hobbled on the CPU by x87 code
Quote
x87 has been deprecated for many years now, with Intel and AMD recommending the much faster SSE instructions for the last 5 years. On modern CPUs, code using SSE instructions can easily run 1.5-2X faster than similar code using x87.  By using x87, PhysX diminishes the performance of CPUs, calling into question the real benefits of PhysX on a GPU.

Quote
http://www.rage3d.com/board/showthread.php?t=33965625
PhysX - Intentionally Slow on CPUs? RealWorldTech Investigates.
Quote
PhysX uses x87 because Ageia and now Nvidia want it that way. Nvidia already has PhysX running on consoles using the AltiVec extensions for PPC, which are very similar to SSE. It would probably take about a day or two to get PhysX to emit modern packed SSE2 code, and several weeks for compatibility testing. In fact for backwards compatibility, PhysX could select at install time whether to use an SSE2 version or an x87 version – just in case the elusive gamer with a Pentium Overdrive decides to try it.

But both Ageia and Nvidia use PhysX to highlight the advantages of their hardware over the CPU for physics calculations. In Nvidia’s case, they are also using PhysX to differentiate with AMD’s GPUs. The sole purpose of PhysX is a competitive differentiator to make Nvidia’s hardware look good and sell more GPUs.
hero member
Activity: 518
Merit: 500
PhysX is an API, just like Bullet, Havok, etc. It can run on either CPU or GPU; it's up to the developer. It's also an open API: if AMD wanted, they could implement it, but they chose not to, as Nvidia owns the IP, and, somewhat understandably, AMD doesn't want to support that. But there are hacks around to enable PhysX on ATI cards.
legendary
Activity: 1428
Merit: 1001
Okey Dokey Lokey
I'll add a little more gasoline if I may, to celebrate my 5th hour.

Quote
1) Havok and such? CPU based? No.

http://bulletphysics.org/wordpress/

A physics engine that runs on GPU via OpenCL

The reason is that they are supposedly partnered with ATI/AMD.

You will never see this become commercially popular (and I'm surprised a project like this even exists).

PhysX is used in almost every major game, and those that don't use it use Havok (which is CPU-based).


Quote
2)
Quote
If it wasn't on the card, you would have to do the calculations on the CPU
Of course if I don't use the card I use the CPU. The problem is, PhysX is made to run well on Nvidia cards. But it's not the only engine around.

As answered above, PhysX:

A) The best
B) The most popular
C) Commercially backed, with majors like Valve.
D) GPU-based, using a PPU.

Quote
3) Please explain where I am wrong: Nvidia did not invent the physics engine in gaming. "Precompute"? Lolwut, it was possible to do the same thing PhysX does before PhysX was created.

Of course they didn't. Who said that where?

Idk if you are into game mechanics or development (it seems not), but games had to precompute physics (process it long beforehand, mainly stored in a file or memory saying what should happen) and thus were quite limited. As far as I remember, you would not have been able to drive vehicles like you can now.


Anyways, it would seem I am free.

I am going to fucking kill you if you keep thinking that GPUs couldn't "practically" and/or "publicly" and/or "commercially" and/or "residentially", etc., do non-precomputed physics on the GPU until Nvidia's PhysX was created.

What you're saying is that my old pre-PhysX-era Radeon GPU CANNOT (on its GPU) make a 3D ball fall down and randomly (fuck off, I know there is no TRUE random) bounce.

You say that if, on a pre-PhysX-era ATI Radeon GPU, perfect ball A drops, falls, and smacks a completely flawed polygon (such as a gravel path), it cannot, CANNOT, do anything unpredictable; because, by computing law, that is impossible, due to the flawless factor that all "possible" "bounceable" angles are all accounted for..... unless using your CPU or Nvidia's PhysX tech stylings.

That is just flat-out stupid.

TL;DR
Link or lies.
Lock this fucking thread.
 
newbie
Activity: 33
Merit: 0
I'll add a little more gasoline if I may, to celebrate my 5th hour.

Quote
1) Havok and such? CPU based? No.

http://bulletphysics.org/wordpress/

A physics engine that runs on GPU via OpenCL

The reason is that they are supposedly partnered with ATI/AMD.

You will never see this become commercially popular (and I'm surprised a project like this even exists).

PhysX is used in almost every major game, and those that don't use it use Havok (which is CPU-based).


Quote
2)
Quote
If it wasn't on the card, you would have to do the calculations on the CPU
Of course if I don't use the card I use the CPU. The problem is, PhysX is made to run well on Nvidia cards. But it's not the only engine around.

As answered above, PhysX:

A) The best
B) The most popular
C) Commercially backed, with majors like Valve.
D) GPU-based, using a PPU.

Quote
3) Please explain where I am wrong: Nvidia did not invent the physics engine in gaming. "Precompute"? Lolwut, it was possible to do the same thing PhysX does before PhysX was created.

Of course they didn't. Who said that where?

Idk if you are into game mechanics or development (it seems not), but games had to precompute physics (process it long beforehand, mainly stored in a file or memory saying what should happen) and thus were quite limited. As far as I remember, you would not have been able to drive vehicles like you can now.


Anyways, it would seem I am free.
legendary
Activity: 1428
Merit: 1001
Okey Dokey Lokey
1) Havok and such? CPU based? No.

http://bulletphysics.org/wordpress/

A physics engine that runs on GPU via OpenCL

2)
Quote
If it wasn't on the card, you would have to do the calculations on the CPU
Of course if I don't use the card I use the CPU. The problem is, PhysX is made to run well on Nvidia cards. But it's not the only engine around.

3) Please explain where I am wrong: Nvidia did not invent the physics engine in gaming. "Precompute"? Lolwut, it was possible to do the same thing PhysX does before PhysX was created.

THANK YOU, GABI.

Now, I think that Gabi's post should be the last (or this one, whatever the fuck).

I think the flame fest is at an even keel right now, and we need to stop it. Stop the fire, dammit.

Gabi just lit up and burned down an entire "controlled burn zone". Don't fucking walk in here and toss a jerry can of gas at him.

LOCK THIS THREAD, ALL DESIRED INFO OF THE THREAD OP'S QUESTION HAS BEEN ANSWERED
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
1) Havok and such? CPU based? No.

http://bulletphysics.org/wordpress/

A physics engine that runs on GPU via OpenCL

2)
Quote
If it wasn't on the card, you would have to do the calculations on the CPU
Of course if I don't use the card I use the CPU. The problem is, PhysX is made to run well on Nvidia cards. But it's not the only engine around.

3) Please explain where I am wrong: Nvidia did not invent the physics engine in gaming. "Precompute"? Lolwut, it was possible to do the same thing PhysX does before PhysX was created.
hero member
Activity: 756
Merit: 500
Lock this fucking thread

I second that motion, this is a subject that is well covered elsewhere.


+1
714
member
Activity: 438
Merit: 10
Lock this fucking thread

I second that motion, this is a subject that is well covered elsewhere.
legendary
Activity: 1428
Merit: 1001
Okey Dokey Lokey
Lock this fucking thread
newbie
Activity: 11
Merit: 0
I realize that I am asking a question here that is probably at the core of a lot of flame wars. I promise I did not post it with that foreknowledge, although it's pretty obvious, given that "fanboyism" is about as rampant as it gets with GPUs (probably only comparable to Ford vs. Holden in Australia, where fist fights and stabbings are an occasional result).

I'm not a troll (lol, how many times has that been said). I do not disbelieve, discount, or ignore the statistics. They are there in black and white. It's fact. AMD cards are almost universally better (I say almost because there might have been maybe one single expensive Nvidia card that was better than a super cheap AMD card, but I didn't pay too close attention).

I knew about that before I posted this thread, which is why I repeatedly referred to the "performance discrepancy." That is why the posts that referred to them are kind of irrelevant. I know. Those websites with reviews/benchmarks were the REASON I posted this thread.

Shit. I'm just asking WHY?

Why? Why is it that two types of hardware, which are designed for essentially identical tasks, namely CONSUMER-ORIENTED graphics (like video games and movies and shit), produce such wildly different results?

AMD wants to make their shit work well with games. So does Nvidia. Everyone learns physics and electrical engineering from similar or the same textbooks, and in many cases from the same PI, or from a PI who worked with a competitor's PI because those PIs worked with the same PI (I can't remember the name of the statistic that traces the PhD "tree" back up to famous people like Einstein, Laplace, de Broglie, Boltzmann, Bragg, Feynman, etc., but that is what I am talking about here).

Even taking into account the hurdles of patents and intellectual property, how is it that two products, aimed at the same market and designed for IDENTICAL tasks, produce such ABSOLUTELY WILDLY different results? That is my question.

The answers I have been given mostly sound like: "because AMD is better, duh."
dark_st3alth actually answered my question in a clear and cogent way, and I do appreciate that. Thank you.

He also alluded to what I was implying with PhysX:

Quote
It would seem that miners are not using the CUDA cores as well, but that's another topic.

My point was "targeted" development. I didn't mean to imply that the BTC mining programmers were like, "fuck you Nvidia, we're only writing for AMD, so all you Nvidia fanboys can suck a dick." I meant more along the lines of: "some guys working in their spare time, who only had access to AMD gear and AMD SDKs, developed with the tools they had, and the result is that the software works best with AMD."

Maybe they tried to get Nvidia hardware donated and Nvidia said no! Who knows! But I think it much more likely that the reason the software works best on AMD is that it was designed with a focus on AMD hardware.
Not maliciously, not angrily, not with some ill intent. Just because that was the only option.

I would love to hear someone who actually knows about the development weigh in so that I can have that question answered. It's just a question. It's posted in the noob section, for God's sake. I didn't pronounce it like a fact of God spoken from on high. I am ruminating. I am tossing around ideas.

I thought that was essentially the friggin' purpose of a "forum": a place to discuss things. That is what I am trying to do.

Instead I'm a troll, I'm a fanboy, I'm just here to start flame wars.
Apparently that must be the case.
newbie
Activity: 33
Merit: 0
Quote
3. AMD/ATI cards are cheaper. It's less costly to set up the miner than with Nvidia cards.
I posted on here that ATI/AMD cards probably have lower quality. Just google "ATI loud fan".
Should I link exploding Nvidia cards?

I think you're talking about defective cards. If that's the case, it's been identified and fixed. There's a brand-new ATI card with a VERY loud fan problem; I just can't seem to find the video I watched a month ago.


Quote
Quote
PhysX is used for PHYSICS calculations. Here's a nice block for all of you to read:
Wow so much yaddayadda
I'm giving a straight answer that's correct. Where's your answer, huh?

Quote
Before PhysX, game designers had to “precompute” how an object would behave in reaction to an event.
Bullshit.

I don't think you're knowledgeable in that area, sir/madam. It is quite well known; just take a search before you raise the alarms...

On top of that, I don't think Nvidia would lie like that. Think about it next time before you say things.

Quote
PhysX is designed specifically for hardware acceleration by powerful processors with hundreds of processing cores.
Let me rewrite: PhysX is designed specifically to run on Nvidia cards.
I wonder why; maybe because, uh, it's Nvidia proprietary code?

And lol at games using it. Yeah, to add like what, 2 particles?

No, it does make a difference. If it weren't on the card, you would have to do the calculations on the CPU, and I hope you're an expert on such things, as it seems like you're saying you are.

Take a look at games that use PhysX. Fluid dynamics is one area where you will need the GPU to do the calculations.

Quote
There are other physics engines that run on all systems instead of Nvidia only.

Ah, you're talking about Havok and such. Those are all CPU-based, but lack a lot of "realism" in what they can do. Source games are a wonderful example.
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
Quote
3. AMD/ATI cards are cheaper. It's less costly to set up the miner than with Nvidia cards.
I posted on here that ATI/AMD cards probably have lower quality. Just google "ATI loud fan".
Should I link exploding Nvidia cards?


Quote
PhysX is used for PHYSICS calculations. Here's a nice block for all of you to read:
Wow so much yaddayadda

Quote
Before PhysX, game designers had to “precompute” how an object would behave in reaction to an event.
Bullshit.

Quote
PhysX is designed specifically for hardware acceleration by powerful processors with hundreds of processing cores.
Let me rewrite: PhysX is designed specifically to run on Nvidia cards.
I wonder why; maybe because, uh, it's Nvidia proprietary code?

And lol at games using it. Yeah, to add like what, 2 particles?


There are other physics engines that run on all systems instead of Nvidia only.
714
member
Activity: 438
Merit: 10
Stick a fork in it, it's done.
full member
Activity: 235
Merit: 100
I, for one, am very happy with this situation. AMD needs every penny they can get so they can pump it into R&D, so they can compete with Intel & Nvidia, so there is healthy competition and not a monopoly, so we as customers won't get ass-raped too much. Go ATI!
newbie
Activity: 33
Merit: 0
I'll give a real answer instead of what some 10-year-olds posted above.


I'm an Nvidia lover as well, don't get that wrong. It seems the reasons are:

1. AMD cards have more "stream" processors, which can be thought of as the equivalent of CUDA cores.
It would seem that miners are not using the CUDA cores as well, but that's another topic.

2. AMD/ATI cards have a native bit-rotate instruction (the BIT_ALIGN_INT mentioned above), which speeds up the SHA-256 calculations for mining. Nvidia cards take 2 or 3 instructions to do the same thing (as of now).
Really, this could change in the near future.

3. AMD/ATI cards are cheaper. It's less costly to set up the miner than with Nvidia cards.
I posted on here that ATI/AMD cards probably have lower quality. Just google "ATI loud fan".


As the little kids are arguing about PhysX, I'll explain that as well.

PhysX is used for PHYSICS calculations. Here's a nice block for all of you to read:


Quote
Before PhysX, game designers had to “precompute” how an object would behave in reaction to an event. For example, they would draw a sequence of frames showing how a football player falls on the ground after a tackle. The disadvantage of this approach was that the gamer always saw the same “canned” animation. With PhysX, games can now accurately compute the physical behavior of bodies real time! This means that the football player will now bend and twist in all different ways depending on the specific conditions associated with the tackle – thus creating a unique visual experience every time.

PhysX technology is widely adopted by over 150 games and is used by more than 10,000 developers. With hardware-accelerated physics, the world’s leading game designers’ worlds come to life: walls can be realistically torn down, trees bend and break in the wind, and water and smoke flows and interacts with body and force, instead of just getting cut-off by neighboring objects.


And a little more:

Quote
PhysX is designed specifically for hardware acceleration by powerful processors with hundreds of processing cores. Because of this design choice, NVIDIA GeForce GPUs provide a dramatic increase in physics processing power, and take gaming to a new level delivering rich, immersive physical gaming environments with features such as:

    Explosions that create dust and collateral debris
    Characters with complex, jointed geometries, for more life-like motion and interaction
    Spectacular new weapons with incredible effects
    Cloth that drapes and tears naturally
    Dense smoke & fog that billow around objects in motion

There, wasn't so hard, was it, guys and girls? They just wanted a simple answer.
hero member
Activity: 756
Merit: 500
If you look at his other post, regarding using organizational resources to do Bitcoin mining, one would suspect that his organization uses Nvidia so much that he has made himself believe Nvidia is better.