
Topic: Xeon Phi - page 3. (Read 36483 times)

member
Activity: 66
Merit: 10
June 22, 2012, 12:34:06 PM
#80
I was also under the impression that the 7970 did have ECC, but that it wasn't used because it would cost performance. My impression is based on some articles and postings, such as the two below.

I read this some time back at: http://www.anandtech.com/Show/Index/4455?cPage=4&all=False&sort=0&page=6&slug=amds-graphics-core-next-preview-amd-architects-for-compute

Quote
Finally on the memory side, AMD is adding proper ECC support to supplement their existing EDC (Error Detection & Correction) functionality, which is used to ensure the integrity of memory transmissions across the GDDR5 memory bus. Both the SRAM and VRAM memory can be ECC protected. For the SRAM this is a free operation, while for the VRAM there will be a performance overhead. We’re assuming that AMD will be using a virtual ECC scheme like NVIDIA, where ECC data is distributed across VRAM rather than using extra memory chips/controllers.

Shamino did some LN2 overclocking when the 7970 was released; on his forum he wrote of the 7970,

Quote
actually 1800 ram is easy, i ran 2000 ram and it got the ECC correction and the score was worse.

http://kingpincooling.com/forum/showthread.php?t=1559
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 22, 2012, 09:46:37 AM
#79
Probably covers some of the stuff already in this thread, but it's an interesting read on the Xeon Phi:

http://vr-zone.com/articles/intel-xeon-family-finally-accepts-the-larrabee-in-xeon-phi-and-its-futures/16361.html


This article fails:

Quote
So, how does it stand performance wise? Its double precision FP throughput is the same as the typical AMD Radeon HD7970 card which costs one quarter of the amount but with much smaller memory, 3 GB, and no ECC.
No ECC? The 7970 has ECC  Roll Eyes

No, it doesn't. What GCN added was ECC on all internal on-die memory (caches, local stores, etc.), but the only AMD cards with ECC-protected GDDR5 are the FirePro/FireStream parts, and although those use the same GCN chips, they aren't sold under the Radeon name.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
June 22, 2012, 06:53:45 AM
#78

The 7970 has ECC  Roll Eyes
Are you sure? That's usually reserved for the expensive enterprisey cards like FirePro.
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
June 22, 2012, 06:30:19 AM
#77
Probably covers some of the stuff already in this thread, but it's an interesting read on the Xeon Phi:

http://vr-zone.com/articles/intel-xeon-family-finally-accepts-the-larrabee-in-xeon-phi-and-its-futures/16361.html


This article fails:

Quote
So, how does it stand performance wise? Its double precision FP throughput is the same as the typical AMD Radeon HD7970 card which costs one quarter of the amount but with much smaller memory, 3 GB, and no ECC.
No ECC? The 7970 has ECC  Roll Eyes
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
June 22, 2012, 06:26:23 AM
#76

Also, 1 TFLOP is not that impressive. The HD 7970 does 947 DP GFLOPs, was released back in January, and doesn't have access to Intel's 22 nm 3D tri-gate tech.
Protip: the Xeon Phi runs x86 code.

Good luck tapping the 7970's (or Nvidia's) computing power when you have to fight with OpenCL and CUDA.
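To make that concrete, here's a minimal sketch (my own, not Intel's code; the loop and sizes are made up) of the kind of plain C + OpenMP that should in principle just recompile for a many-core x86 part like Phi, whereas a GPU needs the same loop rewritten as an OpenCL/CUDA kernel plus host-side buffer management:

Code:
#include <stdio.h>

/* An ordinary parallel loop: legal scalar C, nothing GPU-shaped.
 * On a many-core x86 chip this is a recompile; on a GPU it has to
 * become a kernel with explicit buffers and a command queue. */
int main(void) {
    enum { N = 1 << 20 };
    static double a[N], b[N];

    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * b[i] + 1.0;

    printf("%f\n", a[0]);
    return 0;
}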
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 22, 2012, 02:40:37 AM
#75
I wonder how much stuff Intel removed to put 50 cores on a card.
Here's a pdf depicting the organization of Larrabee, the precursor of Phi.
http://users.ece.gatech.edu/lanterma/mpg08/Larrabee_ECE4893.pdf

I'm already well aware of how they designed that. It's more butchered than Atom. But from what I've heard, Phi isn't nearly as bad.
legendary
Activity: 1946
Merit: 1006
Bitcoin / Crypto mining Hardware.
June 22, 2012, 01:21:53 AM
#74
I wonder how much stuff Intel removed to put 50 cores on a card.
Here's a pdf depicting the organization of Larrabee, the precursor of Phi.
http://users.ece.gatech.edu/lanterma/mpg08/Larrabee_ECE4893.pdf
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 22, 2012, 12:42:50 AM
#73
Yeah, what I described is clearly Step 3 or later. Intel also seems to have finally sold a "step 3" type of device in the Phi, depending on what it actually can do.

Intel seems more interested in incorporating the CPU into the GPU, while AMD is incorporating the GPU into the CPU. Totally different mindsets/endgames/results.

They both want branch/loop-happy, highly parallel computation. The Radeon's biggest "problem" (and I'm using the term loosely) is that wavefronts are run in lockstep: both sides of a branch end up the same length, even if that requires inserting no-ops, and loops whose lengths are set at runtime (instead of fixed at compile time) are just as nasty.
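A rough illustration of that lockstep cost, sketched in plain C (my own toy model, not AMD's actual hardware behavior): on a wavefront, a divergent if/else effectively runs both sides on every lane and masks the result, so the branch always costs then-path plus else-path.

Code:
#include <stdio.h>

/* Toy model of what a lockstep SIMD wavefront does with
 *     if (x > 0) y = x * 2; else y = x - 1;
 * Both sides execute on every lane; a per-lane mask selects the
 * result, so the divergent branch costs the sum of both paths. */
float lockstep_branch(float x) {
    float then_path = x * 2.0f;  /* executed by all lanes */
    float else_path = x - 1.0f;  /* also executed by all lanes */
    return x > 0.0f ? then_path : else_path;  /* per-lane select */
}

int main(void) {
    printf("%f %f\n", lockstep_branch(3.0f), lockstep_branch(-3.0f));
    return 0;
}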

CPUs, otoh, can't do highly parallel calculations because all the hardware dedicated to dealing with branching, branch prediction, cache prediction, etc. takes up a lot of room, produces a lot of heat, and uses a lot of power. I wonder how much stuff Intel removed to put 50 cores on a card.
full member
Activity: 238
Merit: 100
★YoBit.Net★ 350+ Coins Exchange & Dice
June 21, 2012, 11:54:03 PM
#72
But if it can still perform just as well on highly branchy code, I might have a use for one of those.

That would make it crazy awesome for raytracing.

Yeah, that's why their Tesla stuff shows up in pretty much all of the new supercomputer builds these days, right?  Roll Eyes

Their high-end Teslas kick some major ass.

On the wallet maybe. Last I checked, a Tesla M2090 was north of $4000.

Also, 1 TFLOP is not that impressive. The HD 7970 does 947 DP GFLOPs, was released back in January, and doesn't have access to Intel's 22 nm 3D tri-gate tech.
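For anyone checking that number: Tahiti has 2048 ALUs at 925 MHz, so 2048 lanes × 2 ops (multiply-add) × 0.925 GHz ≈ 3.79 SP TFLOPs, and the 7970 runs double precision at 1/4 the single-precision rate, which is where the 947 DP GFLOPs figure comes from.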
legendary
Activity: 952
Merit: 1000
June 21, 2012, 11:28:05 PM
#71
Yeah, what I described is clearly Step 3 or later. Intel also seems to have finally sold a "step 3" type of device in the Phi, depending on what it actually can do.

Intel seems more interested in incorporating the CPU into the GPU, while AMD is incorporating the GPU into the CPU. Totally different mindsets/endgames/results.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 21, 2012, 11:17:24 PM
#70
Quote
And, at least behind the closed doors, both AMD and Nvidia GPUs have been shown booting Linux on their own, without requiring a CPU.

umm. wat

AMD's Fusion is the product of years of research. AMD "demoed" an all-HyperTransport Radeon about a year after they bought ATI, and they've also been showing off prototype Fusions that don't just have Radeon pipes on-die* but make them usable from the x86 side. What "usable" means is still up in the air, but if they've managed to use them as the backend for SIMD instructions (i.e., no more dedicated FPU units; instead of the x86 instruction scheduler issuing just, say, 2 ops per core in parallel, it issues as many as it can across 512 Radeon ALUs on the whole CPU), that could mean a huge goddamned increase in FP performance without needing a dedicated HAL API like OpenCL.

* On-die Fusion Radeons don't have a Radeon memory controller and natively speak HyperTransport. The upside is that they access system memory as a native processor and can pull data directly out of on-die cache: basically zero wait time to hand work to the GPU, and zero-cost cache coherency.


This is an old slide, but it gives a good view of AMD's overall goal. We are somewhere between step 2 and step 3, and it's only going to get better! AMD has one of the most creative and innovative visions for the future of consumer computing (as opposed to Intel just shrinking die sizes), and I think it's progressing quite well (just look at the success of their APU sales). I also think it's only going to get better for them as they move along with even more amazing features like what you just described.

/amdfanboyrant

Yeah, what I described is clearly Step 3 or later. Intel also seems to have finally sold a "step 3" type of device in the Phi, depending on what it actually can do.
legendary
Activity: 952
Merit: 1000
June 21, 2012, 10:52:19 PM
#69
Quote
And, at least behind the closed doors, both AMD and Nvidia GPUs have been shown booting Linux on their own, without requiring a CPU.

umm. wat

AMD's Fusion is the product of years of research. AMD "demoed" an all-HyperTransport Radeon about a year after they bought ATI, and they've also been showing off prototype Fusions that don't just have Radeon pipes on-die* but make them usable from the x86 side. What "usable" means is still up in the air, but if they've managed to use them as the backend for SIMD instructions (i.e., no more dedicated FPU units; instead of the x86 instruction scheduler issuing just, say, 2 ops per core in parallel, it issues as many as it can across 512 Radeon ALUs on the whole CPU), that could mean a huge goddamned increase in FP performance without needing a dedicated HAL API like OpenCL.

* On-die Fusion Radeons don't have a Radeon memory controller and natively speak HyperTransport. The upside is that they access system memory as a native processor and can pull data directly out of on-die cache: basically zero wait time to hand work to the GPU, and zero-cost cache coherency.


This is an old slide, but it gives a good view of AMD's overall goal. We are somewhere between step 2 and step 3, and it's only going to get better! AMD has one of the most creative and innovative visions for the future of consumer computing (as opposed to Intel just shrinking die sizes), and I think it's progressing quite well (just look at the success of their APU sales). I also think it's only going to get better for them as they move along with even more amazing features like what you just described.

/amdfanboyrant
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 21, 2012, 07:29:32 PM
#68
Quote
And, at least behind the closed doors, both AMD and Nvidia GPUs have been shown booting Linux on their own, without requiring a CPU.

umm. wat

AMD's Fusion is the product of years of research. AMD "demoed" an all-HyperTransport Radeon about a year after they bought ATI, and they've also been showing off prototype Fusions that don't just have Radeon pipes on-die* but make them usable from the x86 side. What "usable" means is still up in the air, but if they've managed to use them as the backend for SIMD instructions (i.e., no more dedicated FPU units; instead of the x86 instruction scheduler issuing just, say, 2 ops per core in parallel, it issues as many as it can across 512 Radeon ALUs on the whole CPU), that could mean a huge goddamned increase in FP performance without needing a dedicated HAL API like OpenCL.

* On-die Fusion Radeons don't have a Radeon memory controller and natively speak HyperTransport. The upside is that they access system memory as a native processor and can pull data directly out of on-die cache: basically zero wait time to hand work to the GPU, and zero-cost cache coherency.
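A toy sketch of why that matters (all names here are made up; this isn't any real GPU API): with a discrete card, every buffer takes a staging copy across PCIe before a kernel can touch it, while a cache-coherent on-die GPU could dereference the host pointer directly.

Code:
#include <stdio.h>
#include <string.h>

enum { N = 1024 };
static float vram[N];  /* stand-in for a discrete card's VRAM */

/* Discrete GPU path: data must be staged across PCIe into VRAM
 * before any kernel can run on it. */
static void copy_to_gpu(const float *src) {
    memcpy(vram, src, sizeof vram);  /* models the transfer cost */
}

int main(void) {
    static float host_buf[N];

    /* Discrete card: pay for the staging copy first. */
    copy_to_gpu(host_buf);

    /* Coherent on-die GPU: no staging step at all; it would read
     * host_buf[] straight out of the shared cache hierarchy. */
    printf("%f %f\n", vram[0], host_buf[0]);
    return 0;
}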
legendary
Activity: 952
Merit: 1000
June 21, 2012, 10:49:35 AM
#67
True story, but only in the hands of the engineers that designed them; no one else that I know of has been able to make it happen.

Ah, I never knew that was possible. I guess you could think of a GPU as a slower CPU. It must need a heavily modified kernel though.

Still. Gentoo on an APU is gonna be awesome come 2015!

Architecture:
[ ] x86
[ ] amd64
[X] opencl
[ ] arm
sr. member
Activity: 369
Merit: 250
June 21, 2012, 10:36:02 AM
#66
True story, but only in the hands of the engineers that designed them; no one else that I know of has been able to make it happen.

Ah, I never knew that was possible. I guess you could think of a GPU as a slower CPU. It must need a heavily modified kernel though.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
June 21, 2012, 10:33:53 AM
#65
Quote
And, at least behind the closed doors, both AMD and Nvidia GPUs have been shown booting Linux on their own, without requiring a CPU.

umm. wat
True story, but only in the hands of the engineers that designed them; no one else that I know of has been able to make it happen.
sr. member
Activity: 369
Merit: 250
June 21, 2012, 10:32:06 AM
#64
Quote
And, at least behind the closed doors, both AMD and Nvidia GPUs have been shown booting Linux on their own, without requiring a CPU.

umm. wat
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
June 21, 2012, 10:26:46 AM
#63
Probably covers some of the stuff already in this thread, but it's an interesting read on the Xeon Phi:

http://vr-zone.com/articles/intel-xeon-family-finally-accepts-the-larrabee-in-xeon-phi-and-its-futures/16361.html


Interesting!
Quote
The 50+ simple two-way in-order Pentium (yes, 1995 Pentium!) like cores feed the same number of 512-bit wide SIMD FP units, with the ability to deliver around 1 TFLOPs peak in double precision at around 1 GHz.
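The back-of-envelope behind that peak (my arithmetic, assuming fused multiply-add): a 512-bit SIMD unit holds 8 doubles, so 8 lanes × 2 flops (FMA) × ~1 GHz ≈ 16 DP GFLOPs per core, and 50+ cores lands in the ~1 TFLOPs neighborhood (62 cores would be 992 GFLOPs).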
member
Activity: 66
Merit: 10
June 21, 2012, 10:17:59 AM
#62
Probably covers some of the stuff already in this thread, but it's an interesting read on the Xeon Phi:

http://vr-zone.com/articles/intel-xeon-family-finally-accepts-the-larrabee-in-xeon-phi-and-its-futures/16361.html

sr. member
Activity: 369
Merit: 250
June 20, 2012, 01:19:21 PM
#61
Yeah, that's why their Tesla stuff shows up in pretty much all of the new supercomputer builds these days, right?  Roll Eyes

Their high-end Teslas kick some major ass.