
Topic: New AMD APUs... [AMD A8-Series] - page 2

hero member
Activity: 784
Merit: 500
October 24, 2012, 12:42:06 PM
#50
Not with any home operating system :)
full member
Activity: 182
Merit: 100
October 24, 2012, 10:45:56 AM
#49
A VM does not (usually) have access to the GPU... They normally present a generic emulated GPU, not the actual hardware...

A VM for production use isn't a good thing ... USB and serial pass-through are not guaranteed to work. If it works ...

You could possibly use IOMMU passthrough to get "native" access to the GPU, but not with XP as the base OS :-(
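For reference, a minimal sketch of what IOMMU passthrough looks like with QEMU/KVM on a Linux host; the PCI address 01:00.0 and the disk image name are hypothetical, and it assumes IOMMU support is enabled in the BIOS and kernel:

# Hand the GPU at host PCI address 01:00.0 (hypothetical) straight to the guest via VFIO.
qemu-system-x86_64 \
    -enable-kvm \
    -m 4096 \
    -device vfio-pci,host=01:00.0 \
    -drive file=guest.img,format=raw

The guest then sees the real GPU instead of an emulated one, but the host can no longer use it.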
sr. member
Activity: 454
Merit: 250
Technology and Women. Amazing.
October 23, 2012, 10:26:52 PM
#48
The A8 is not the newest high-end APU; the A10 is. I have a friend with an A10-5800K (integrated 7660D) and it pulls just over 100 MH/s.
hero member
Activity: 784
Merit: 500
October 23, 2012, 02:16:55 PM
#47
A VM does not (usually) have access to the GPU... They normally present a generic emulated GPU, not the actual hardware...

A VM for production use isn't a good thing ... USB and serial pass-through are not guaranteed to work. If it works ...
full member
Activity: 136
Merit: 100
October 23, 2012, 12:27:21 PM
#46
Not possible; but maybe the other way around...  :D
full member
Activity: 182
Merit: 100
October 23, 2012, 11:30:43 AM
#45

My only other guess is that XP is the problem.  You should be able to buy Windows 7 cheap now that 8 is coming out.

Unfortunately that's not possible; I use these machines for other production work and they must run XP 64 :(

No other ideas? Anyone?  ::)

Put Windows 7 or Linux on them and run XP 64 in a VM?
full member
Activity: 136
Merit: 100
October 23, 2012, 08:09:12 AM
#44

My only other guess is that XP is the problem.  You should be able to buy Windows 7 cheap now that 8 is coming out.

Unfortunately that's not possible; I use these machines for other production work and they must run XP 64 :(

No other ideas? Anyone?  ::)
legendary
Activity: 1012
Merit: 1000
October 23, 2012, 02:17:49 AM
#43

You need to install the latest Catalyst driver (cleanly).  Uninstall that SDK shit and any other AMD software first.

thanks,

Yes, that was the first thing I tried: only Catalyst, on a fresh system, latest version. But GUIMiner told me that no OpenCL device was installed, and things didn't work from the command line either; then I tried the SDK  ???
My only other guess is that XP is the problem.  You should be able to buy Windows 7 cheap now that 8 is coming out.
full member
Activity: 136
Merit: 100
October 22, 2012, 02:43:14 PM
#42

You need to install the latest Catalyst driver (cleanly).  Uninstall that SDK shit and any other AMD software first.

thanks,

Yes, that was the first thing I tried: only Catalyst, on a fresh system, latest version. But GUIMiner told me that no OpenCL device was installed, and things didn't work from the command line either; then I tried the SDK  ???
legendary
Activity: 1012
Merit: 1000
October 22, 2012, 10:19:58 AM
#41
Trying to mine with one of these (3870K) and it's just impossible;

I get 3 MH/s; obviously the GPU is doing nothing.

I think the problem is that I mine on Windows XP 64. I have installed the latest SDK available, 2.3, but with GUIMiner (OpenCL) I get only 3 MH/s, and trying to mine from the command line I get this error:


C:\guiminer>poclbm
Traceback (most recent call last):
  File "poclbm.py", line 48, in
pyopencl.LogicError: clGetPlatformIDs failed: invalid/unknown error code


All the help I can find about that error is for Linux-based systems.

Any help?

Thanks,
You need to install the latest Catalyst driver (cleanly).  Uninstall that SDK shit and any other AMD software first.
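As a quick sanity check, here is a minimal sketch using pyopencl (the same library poclbm is built on); it is hypothetical diagnostic code, not part of poclbm. If it prints no platforms, or raises the same LogicError, the OpenCL runtime from Catalyst is not registered, regardless of what the SDK installed:

# check_opencl.py - list whatever OpenCL platforms and devices are visible
import pyopencl as cl

try:
    platforms = cl.get_platforms()
except cl.LogicError as err:
    # Same failure mode poclbm hits: no OpenCL ICD/runtime registered.
    print("clGetPlatformIDs failed:", err)
else:
    for platform in platforms:
        print("Platform:", platform.name)
        for device in platform.get_devices():
            print("  Device:", device.name)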
sr. member
Activity: 437
Merit: 250
October 21, 2012, 11:54:41 PM
#40
Has anyone got an A10 to test yet? I believe on the SHA-256 benchmarks it was about three times as fast as the A8, so I was hoping it might be worth trying if I build new GPU-based systems.
hero member
Activity: 504
Merit: 500
Decent Programmer to boot!
October 21, 2012, 05:25:50 PM
#39
Are we still debating?

The A4-3400 gets about 100 MH/s. I have firsthand experience using them.
full member
Activity: 136
Merit: 100
October 21, 2012, 05:23:00 PM
#38
Trying to mine with one of these (3870K) and it's just impossible;

I get 3 MH/s; obviously the GPU is doing nothing.

I think the problem is that I mine on Windows XP 64. I have installed the latest SDK available, 2.3, but with GUIMiner (OpenCL) I get only 3 MH/s, and trying to mine from the command line I get this error:


C:\guiminer>poclbm
Traceback (most recent call last):
  File "poclbm.py", line 48, in
pyopencl.LogicError: clGetPlatformIDs failed: invalid/unknown error code


All the help I can find about that error is for Linux-based systems.

Any help?

Thanks,
legendary
Activity: 1806
Merit: 1003
October 05, 2011, 01:25:02 PM
#37
Really? Share your magic then; I have a 5570 and can't even reach 100 MH/s.

It'll be about 92% of a 5570, or about 55 MH/s. :)

I have a 5570 and can do 200+ MH/s... Unless 92% means something different now, there is a problem somewhere.
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 05, 2011, 01:15:27 PM
#36
It was IBM (and others), but those designs are at the micro scale: water cooling INSIDE the chip, in tiny channels, pumped via electrical impulses within the chip itself. We are likely some years away from that being commercially viable.

IBM does currently sell some water-cooled servers. Some of their high-performance servers use a 32-core POWER6 chip. With up to 44 chips per 44U rack (2 per 2U server), that's about 1,400 cores per standard datacenter rack. To dissipate that kind of thermal load, IBM sells an enterprise-rated water-cooling kit.

Many people don't know that most early computers were water-cooled. Most mainframes were (and some still are) liquid-cooled; it was only when power densities dropped that air cooling became viable. In datacenters, servers are getting smaller, packed more per rack, and pulling more power. Using forced air to remove that heat is horribly inefficient and noisy. As power density climbs, we will see more enterprise-grade water cooling.
donator
Activity: 2352
Merit: 1060
between a rock and a block!
October 05, 2011, 01:13:42 PM
#35
I still think it would be hard to create a water block able to absorb 3000 W of heat from a square inch.
I think you would then need micro-channels or tubes through the GPU chip itself to remove the heat.

No reason for it to be 3000 W from one square inch. Even if you could do that, the outer chip layers would act as an insulator and "cook" the inner chips.

You could simply have a chip/waterblock sandwich:

waterblock
chip
waterblock
chip
waterblock
chip
waterblock
chip
waterblock

or maybe something more like a stacked grid array (4x4 chips under a waterblock and then stacked)


I think I saw something about interleaving chips with liquid-cooling pathways... I think it was IBM?
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 05, 2011, 01:07:21 PM
#34
I still think it would be hard to create a water block able to absorb 3000 W of heat from a square inch.
I think you would then need micro-channels or tubes through the GPU chip itself to remove the heat.

No reason for it to be 3000 W from one square inch. Even if you could do that, the outer chip layers would act as an insulator and "cook" the inner chips.

You could simply have a chip/waterblock sandwich:

waterblock
chip
waterblock
chip
waterblock
chip
waterblock
chip
waterblock

or maybe something more like a stacked grid array (4x4 chips under a waterblock and then stacked)
full member
Activity: 235
Merit: 100
October 05, 2011, 12:54:54 PM
#33
If AMD would just make GPU chips stackable into arrays, that would be something.
That would be awesome:
get 10 HD 6970 chips stacked together...
It could dump 3000 watts of heat in a square inch.

Just have to figure out how to cool it  ???

Liquid cooling could handle it. It takes about 4 watt-hours to raise 1 gallon of water by 1 °C.

Thus, to keep a 3000-watt chip within 10 °C of ambient, you would need 3000 / (4 × 10) = 75 gallons per hour. Sounds like a lot, but that is a mere 1.25 gallons per minute; a good water-cooling pump has 10x that capacity.

To avoid the water itself heating up, you would need a pretty large radiator to dissipate 3 kW of heat, but it wouldn't require anything exotic.

To correct what someone said above: the temperature of the water barely matters, because water is a very effective conductor of heat. "Warm" water cools just as well as "ice cold" water if your goal is just to keep temps below, say, 60 °C.

I still think it would be hard to create a water block able to absorb 3000 W of heat from a square inch.
I think you would then need micro-channels or tubes through the GPU chip itself to remove the heat.
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 05, 2011, 12:23:29 PM
#32
If AMD would just make GPU chips stackable into arrays, that would be something.
That would be awesome:
get 10 HD 6970 chips stacked together...
It could dump 3000 watts of heat in a square inch.

Just have to figure out how to cool it  ???

Liquid cooling could handle it. It takes about 4 watt-hours to raise 1 gallon of water by 1 °C.

Thus, to keep a 3000-watt chip within 10 °C of ambient, you would need 3000 / (4 × 10) = 75 gallons per hour. Sounds like a lot, but that is a mere 1.25 gallons per minute; a good water-cooling pump has 10x that capacity.

To avoid the water itself heating up, you would need a pretty large radiator to dissipate 3 kW of heat, but it wouldn't require anything exotic.

To correct what someone said above: the temperature of the water barely matters, because water is a very effective conductor of heat. "Warm" water cools just as well as "ice cold" water if your goal is just to keep temps below, say, 60 °C.
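For the arithmetic, a quick back-of-the-envelope check (a minimal Python sketch; the constants are standard values for water, not figures from this thread). Using exact constants instead of the rounded 4 Wh figure gives roughly 68 gph, the same ballpark:

# Coolant flow needed to carry a given heat load at a given temperature rise.
SPECIFIC_HEAT = 4186.0   # J per kg per deg C, for liquid water
GALLON_MASS = 3.785      # kg of water in one US gallon

def gallons_per_hour(watts, delta_t_c):
    # Joules one gallon absorbs for each degree C of allowed rise:
    joules_per_gallon = SPECIFIC_HEAT * GALLON_MASS * delta_t_c
    # Convert watts (J/s) to joules per hour, then divide.
    return watts * 3600.0 / joules_per_gallon

print(gallons_per_hour(3000, 10))  # ~68 gallons/hour, about 1.1 gallons/minute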


donator
Activity: 2352
Merit: 1060
between a rock and a block!
October 05, 2011, 12:05:36 PM
#31
If AMD would just make GPU chips stackable into arrays, that would be something.
That would be awesome:
get 10 HD 6970 chips stacked together...
It could dump 3000 watts of heat in a square inch.

Just have to figure out how to cool it  ???
With a very cold liquid :)