
Topic: New AMD APUs... [AMD A8-Series] (Read 18187 times)

newbie
Activity: 14
Merit: 0
April 12, 2013, 05:41:07 AM
#70
Something I thought was interesting, and sorry for the necro-action... but I think this topic best fits my post.

https://lh6.googleusercontent.com/--c8SVI3WFHk/UWdEgybLKfI/AAAAAAAAAKI/vRkhLd8h0ho/s1152/GUIMiner.jpg

I am mining two workers on the same GPU. Apparently there are two different pieces of hardware on this APU besides the CPU: one is [0.0] BeaverCreek, the other is [0.1] Turks. Typically, when I'm not using my desktop, I get between 100 and 104.9 MH/s. Supposedly this APU has "dedicated graphics", in that the GPU part of the APU has a dedicated 1 GB of memory (GPU-Z shows only 512 MB), something like a daughter card usable only by the on-chip graphics.

Anyhow, when I run with the flag -f 0 I can get up to ~114.9 MH/s, but the screen lags horribly and accepted shares seem to slow down considerably. I don't know exactly what that is about, but I would guess it comes from over-stressing the GPU. You can see in the screenshot that the APU does run pretty hot, though not hot enough to bother me.

Flags:

[0.0] -v -f 80 -w64 (processor affinity set to all cores)
[0.1] -v -f 60 -w64 (processor affinity set to all cores)

Increasing the work size seems to have a detrimental effect, if anything (e.g. -w128), and removing the vectors flag decreases performance by around 10%. If I add another worker to either of the on-APU devices, I can get an additional 4-5 MH/s each, but heat goes up and the fan pretty much stays on high all the time. It seems I can only add two workers per device. Overclocking the CPU has little to no effect. (A scripted version of this setup is sketched just below.)
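For anyone who wants to script a two-worker setup like this instead of clicking through GUIMiner, here is a minimal Python sketch that launches one poclbm process per OpenCL device with the flags quoted above. The pool address, worker credentials, and the exact spelling of the device/server options (-d, --host, --port, --user, --pass) are assumptions; check poclbm.py --help on your own install before relying on them.

import subprocess

# Hypothetical pool and credentials -- replace with your own.
SERVER = ["--host", "pool.example.com", "--port", "8332",
          "--user", "worker1", "--pass", "x"]

# One poclbm process per OpenCL device, using the -v/-f/-w values from the post above.
# The -d (device index) and server flags are assumptions; verify against poclbm.py --help.
workers = [
    ["python", "poclbm.py", "-d", "0", "-v", "-f", "80", "-w", "64"] + SERVER,  # [0.0] BeaverCreek
    ["python", "poclbm.py", "-d", "1", "-v", "-f", "60", "-w", "64"] + SERVER,  # [0.1] Turks
]

procs = [subprocess.Popen(cmd) for cmd in workers]
for p in procs:
    p.wait()  # run until the miners exit (Ctrl+C to stop)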

Anyway, if anyone would be so kind to quote or link this post to the post listed above I would be grateful.

EDIT:

Obviously the image is edited; it has been cropped purely for sanitizing reasons. Do you really need to know my computer name, username, and the user information for my workers...?
sr. member
Activity: 266
Merit: 250
December 12, 2012, 01:47:57 AM
#69
http://www.amazon.com/Lenovo-59345759-Z585-15-6-Inch-Laptop/dp/B009AEPS4K/ref=sr_1_1?s=electronics&ie=UTF8&qid=1355294804&sr=1-1&keywords=z585

I have a slightly lower-spec version of this, and I can pull 80 MH/s without any tweaking. I could probably get much higher with a little clock nudging and miner optimization.
legendary
Activity: 1795
Merit: 1208
This is not OK.
December 05, 2012, 04:30:11 PM
#68
Just upgraded my A6 to an A8.
The A6 got ~30MH/s.
The A8 gets ~100 MH/s, mainly because it is unlocked and overclocked.
hero member
Activity: 700
Merit: 500
December 04, 2012, 11:50:34 PM
#67
My A10 system officially got about 4.2 MH/s with the CPU (lol) and 72.1 MH/s with the OpenCL miner at stock speeds with no flags.

EDIT: I finally found some better flags that someone else recommended, and now it's 88.2 MH/s.

Just don't mine with the CPU, please... you don't want to waste all that energy.


If anyone wants a low-water mark for the APUs, I fired up some miners on my E-350-powered laptop: 950 KH/s on the CPU portion and 11.5 MH/s on the GPU side. I didn't run it for more than a couple of minutes, for obvious reasons. At least it can push Fallout 3 on Medium settings at the screen's full resolution using Wine on Ubuntu 12.04.
sr. member
Activity: 392
Merit: 250
December 03, 2012, 12:37:29 AM
#66
That was the GPU part of the APU. The CPU part sucked, lol. And seeing as this is a customer machine that's being delivered Monday, I don't think I'll be mining with it... obviously. It would also lose money on electricity. I believe this chip has 384 cores, so my benchmark findings make sense.

By the way, according to GUIMiner, the name of the GPU section of the chip was "Devastator." Strange; that was the original codename for AMD's entire 7000 series of GPUs, though.
full member
Activity: 210
Merit: 100
Not for hire.
December 02, 2012, 02:20:27 PM
#65
Yeah. I am curious, what does the GPU component of the chip get, Desolator? I had a Llano which did about 70 MH/s, which is about the same as a Radeon 5570 card clocked all the way up.
hero member
Activity: 547
Merit: 531
First bits: 12good
December 02, 2012, 02:05:31 PM
#64
My A10 system officially got about 4.2 MH/s with the CPU (lol) and 72.1 MH/s with the OpenCL miner at stock speeds with no flags.

EDIT: I finally found some better flags that someone else recommended, and now it's 88.2 MH/s.

Just don't mine with the CPU, please... you don't want to waste all that energy.
sr. member
Activity: 392
Merit: 250
December 01, 2012, 01:12:22 PM
#63
My A10 system officially got about 4.2 MH/s with the CPU (lol) and 72.1 MH/s with the OpenCL miner at stock speeds with no flags.

EDIT: I finally found some better flags that someone else recommended, and now it's 88.2 MH/s.
sr. member
Activity: 392
Merit: 250
December 01, 2012, 11:50:27 AM
#62
Yeah, I've been there, lol. I got some really cheap 1866 from Crucial for my A6 system; it got a WEI of 5.9 and I verified its running speeds. It had really low timings too. The FSB-to-NB ratio and the stated NB ratio did not at all match what the board's and chip's specs said, though, so I think they're BSing it somehow.

The vastly superior G.Skill set I just picked up rated 7.4, but most people on Newegg reported 7.9, though only in i5 and i7 systems, so I assume they were running at 1600. This latest A10 setup had a funny FSB:NB ratio too, and we're talking about the unmodified stock XMP profile, so it's not my configuration that's to blame. For comparison, my last 1600 CL9 G.Skill setup on an Ivy Bridge i5 got a 7.8, so I'm thinking AMD is just fluffing up the numbers on their latest chipsets and there's some oddity in the real figures. This latest A10 build was on the A85X chipset, and the board specs said it took 1866 without overclocking. Regardless, I only buy high-end memory if it's on sale, because the overall performance increase is minimal.

I'll try to run a mining test today.
full member
Activity: 210
Merit: 100
Not for hire.
November 30, 2012, 07:35:02 AM
#61
I bought really expensive 1866-speed memory last year because I thought it would be good to run at max speed, but I've read some video game benchmarks showing that the gains drop off dramatically above 1333, with only a slight improvement from 1600 to 1866. Just an interesting tidbit.
sr. member
Activity: 392
Merit: 250
November 29, 2012, 11:44:58 PM
#60
I should come here more often. I have an A6 APU Black Edition with a thoroughly tested 4.0 GHz overclock as a demo system at my shop, and I just built an A10 system yesterday. The 1866 MHz memory is still on its way in the mail, but I'll stick some 1600 Ballistix in it temporarily, install Win7, and let you know what mining speed it gets.
full member
Activity: 156
Merit: 100
November 28, 2012, 12:33:09 PM
#59
I have one of the new APUs in a system I built last month; the numbers were not great. I'll see if I can find the screenshots from testing it.

That'd be great, I'm very interested in seeing some Trinity results.
member
Activity: 66
Merit: 10
November 28, 2012, 12:18:45 PM
#58
I have one of the new APUs in a system I built last month; the numbers were not great. I'll see if I can find the screenshots from testing it.
full member
Activity: 130
Merit: 100
November 14, 2012, 03:08:54 PM
#57
I've got an A6 in the laptop I'm using right now. It averages about 25 MH/s, which isn't much really, but it's fun to start out with.
full member
Activity: 210
Merit: 100
Not for hire.
November 07, 2012, 12:40:11 AM
#56
I also have a 2.9 GHz Llano A8-3850. I can get about 68 MH/s from it with the flags -v 128 -f0 in GUIMiner. It is interesting because I can then also add up to three video cards to the motherboard; I don't mine with the CPU at all.
sr. member
Activity: 333
Merit: 250
October 25, 2012, 12:48:22 PM
#55
These chips also scale linearly with system memory performance, so try to get your system memory as fast as you can for best performance.

The Bitcoin hashing algorithm isn't memory intensive at all, so I doubt it would scale much with higher memory speeds, if at all.

You know, you are absolutely right about that. They shouldn't matter. A friend of mine tried a 3870K on stock settings and got 70 MH/s. The Bitcoin wiki shows 100 MH/s, but they overclocked the CPU. Memory was only at 1667. I jumped to conclusions because of the graphics performance and its reliance on memory speed.

I amend my original statement: for everyday 3D graphics performance, these chips scale linearly with memory clocks. The same goes for scrypt mining if you are into that (LTC); BTC should not be affected.
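To put a rough number on that distinction, here is a small back-of-the-envelope sketch in Python. The scrypt parameters are Litecoin's published ones (N=1024, r=1, p=1); the SHA-256 working-set figure is an approximation, and the 64 KB cache size is the one quoted elsewhere in this thread.

# Why scrypt (LTC) scales with memory while SHA-256 (BTC) does not:
# compare per-hash working sets against a 64 KB on-die cache.

N, r, p = 1024, 1, 1
scrypt_scratchpad = 128 * r * N              # bytes per hash (ROMix scratchpad)

sha256_state = 8 * 4                         # 256-bit chaining state
sha256_schedule = 64 * 4                     # expanded message schedule
sha256_working_set = sha256_state + sha256_schedule

gpu_cache = 64 * 1024

print(f"scrypt scratchpad:   {scrypt_scratchpad // 1024} KiB per hash")
print(f"SHA-256 working set: {sha256_working_set} bytes per hash")
print(f"fits in 64 KB cache? scrypt: {scrypt_scratchpad <= gpu_cache}, SHA-256: {sha256_working_set <= gpu_cache}")

The scratchpad spills out of cache into system memory, which is why memory clocks matter for scrypt; the SHA-256 state fits comfortably on-die, which is why they don't for BTC.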

hero member
Activity: 686
Merit: 500
Bitbuy
October 25, 2012, 09:46:12 AM
#54
These chips also scale linearly with system memory performance, so try to get your system memory as fast as you can for best performance.

The Bitcoin hashing algorithm isn't memory intensive at all, so I doubt it would scale much with higher memory speeds, if at all.
sr. member
Activity: 333
Merit: 250
October 25, 2012, 03:56:52 AM
#53
These chips also scale linearly with system memory performance, so try to get your system memory as fast as you can for best performance.
full member
Activity: 182
Merit: 100
October 25, 2012, 03:51:23 AM
#52
A8 is not the newest high-end APU; A10 is. I have a friend with an A10-5800K (integrated 7660D) and it pulls just over 100 MH/s.

100 MH/s? That's barely better than my 3870.

Considering it is sharing the die and TDP with the CPU, it isn't bad.
legendary
Activity: 1012
Merit: 1000
October 25, 2012, 03:27:23 AM
#51
A8 is not the newest high-end APU; A10 is. I have a friend with an A10-5800K (integrated 7660D) and it pulls just over 100 MH/s.

100 MH/s? That's barely better than my 3870.
hero member
Activity: 784
Merit: 500
October 24, 2012, 11:42:06 AM
#50
Not with any home operating system.
full member
Activity: 182
Merit: 100
October 24, 2012, 09:45:56 AM
#49
A VM does not (usually) have access to the GPU. They used to expose a generic GPU, but not the actual hardware.

A VM for production use isn't a good thing... USB and serial passthrough are not guaranteed to work. If it works...

You could possibly use IOMMU passthrough to get "native" access to the GPU, but not with XP as the base OS :-(
sr. member
Activity: 454
Merit: 250
Technology and Women. Amazing.
October 23, 2012, 09:26:52 PM
#48
A8 is not the newest high-end APU; A10 is. I have a friend with an A10-5800K (integrated 7660D) and it pulls just over 100 MH/s.
hero member
Activity: 784
Merit: 500
October 23, 2012, 01:16:55 PM
#47
A VM does not (usually) have access to the GPU. They used to expose a generic GPU, but not the actual hardware.

A VM for production use isn't a good thing... USB and serial passthrough are not guaranteed to work. If it works...
full member
Activity: 136
Merit: 100
October 23, 2012, 11:27:21 AM
#46
Not possible, but maybe the other way around...
full member
Activity: 182
Merit: 100
October 23, 2012, 10:30:43 AM
#45

My only other guess is that XP is the problem. You should be able to buy Windows 7 cheap now that 8 is coming out.

Unfortunately that's not possible; I use these machines for other production work and they must run XP 64.

No other ideas? Anyone?

Put Windows 7 or Linux on it and run XP 64 in a VM?
full member
Activity: 136
Merit: 100
October 23, 2012, 07:09:12 AM
#44

My only other guess is that XP is the problem. You should be able to buy Windows 7 cheap now that 8 is coming out.

Unfortunately that's not possible; I use these machines for other production work and they must run XP 64.

No other ideas? Anyone?
legendary
Activity: 1012
Merit: 1000
October 23, 2012, 01:17:49 AM
#43

You need to install the latest Catalyst driver (cleanly).  Uninstall that SDK shit and any other AMD software first.

Thanks.

Yes, that was the first thing I tried: only Catalyst, on a fresh system, latest version, but GUIMiner told me that no OpenCL device was installed, and things didn't work from the command line either; then I tried the SDK.

My only other guess is that XP is the problem. You should be able to buy Windows 7 cheap now that 8 is coming out.
full member
Activity: 136
Merit: 100
October 22, 2012, 01:43:14 PM
#42

You need to install the latest Catalyst driver (cleanly).  Uninstall that SDK shit and any other AMD software first.

Thanks.

Yes, that was the first thing I tried: only Catalyst, on a fresh system, latest version, but GUIMiner told me that no OpenCL device was installed, and things didn't work from the command line either; then I tried the SDK.
legendary
Activity: 1012
Merit: 1000
October 22, 2012, 09:19:58 AM
#41
Trying to mine with one of these (a 3870K) and it's just impossible.

I get 3 MH/s; obviously the GPU is doing nothing.

The problem, I think, is that I mine on Windows XP 64. I have installed the latest SDK available (2.3), but with GUIMiner (OpenCL) I get only 3 MH/s, and trying to mine from the command line I get this error:


C:\guiminer>poclbm
Traceback (most recent call last):
  File "poclbm.py", line 48, in
pyopencl.LogicError: clGetPlatformIDs failed: invalid/unknown error code


All the help I can find about that error is for Linux-based systems.

Any help?

Thanks,
You need to install the latest Catalyst driver (cleanly).  Uninstall that SDK shit and any other AMD software first.
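As a quick sanity check that the driver stack is actually exposing an OpenCL platform (the clGetPlatformIDs failure above usually means it isn't), a short pyopencl listing like the following can help; it only uses get_platforms() and get_devices(), and pyopencl is clearly already installed given the traceback.

import pyopencl as cl

# List every OpenCL platform and device the installed driver exposes.
# If this prints nothing or raises LogicError, poclbm will fail the same way,
# and the fix is on the Catalyst/SDK side rather than in the miner.
try:
    platforms = cl.get_platforms()
except cl.LogicError as e:
    print("clGetPlatformIDs failed:", e)
    platforms = []

for platform in platforms:
    print("Platform:", platform.name, platform.version)
    for device in platform.get_devices():
        print("  Device:", device.name, "| type:", cl.device_type.to_string(device.type))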
sr. member
Activity: 437
Merit: 250
October 21, 2012, 10:54:41 PM
#40
Has anyone got an A10 yet to test? I believe on the SHA-256 bandwidth tests it was about three times as fast as the A8, so I was hoping it might be worth trying if I build new GPU-based systems.
hero member
Activity: 504
Merit: 504
Decent Programmer to boot!
October 21, 2012, 04:25:50 PM
#39
Are we still debating?

The A4-3400 gets about 100 MH/s. I have firsthand experience using them.
full member
Activity: 136
Merit: 100
October 21, 2012, 04:23:00 PM
#38
Trying to mine with one of these (a 3870K) and it's just impossible.

I get 3 MH/s; obviously the GPU is doing nothing.

The problem, I think, is that I mine on Windows XP 64. I have installed the latest SDK available (2.3), but with GUIMiner (OpenCL) I get only 3 MH/s, and trying to mine from the command line I get this error:


C:\guiminer>poclbm
Traceback (most recent call last):
  File "poclbm.py", line 48, in
pyopencl.LogicError: clGetPlatformIDs failed: invalid/unknown error code


All the help I can find about that error is for Linux-based systems.

Any help?

Thanks,
legendary
Activity: 1806
Merit: 1003
October 05, 2011, 12:25:02 PM
#37
Really? Share your magic then; I have a 5570 and can't even reach 100 MH/s.

It'll be about 92% of a 5570, or about 55 MH/s.

I have a 5570 and can do 200+ MH/s... unless 92% means something different now, there is a problem somewhere.
donator
Activity: 1218
Merit: 1080
Gerald Davis
October 05, 2011, 12:15:27 PM
#36
It was IBM (and others); however, those designs are at the micro scale: water cooling INSIDE the chip, in tiny channels, pumped via electrical impulses within the chip itself. We are likely some years away from that being commercially viable.

IBM does currently sell some water-cooled servers. Some of their high-performance servers use a 32-core Power6 chip. With up to 44 chips per 44U rack (2 per 2U server), that's about 1,400 cores per standard datacenter rack. To dissipate that kind of thermal load, IBM sells an enterprise-rated water-cooling kit.

Many people don't know that most early computers were water cooled. Most mainframes were (and some still are) liquid cooled; it was only when power densities dropped that air cooling became viable. In the datacenter, servers are getting smaller, packed more densely into racks, and pulling more power. Using forced air to remove that heat is horribly inefficient and noisy. As power density starts to climb again, we will see more enterprise-grade water cooling.
donator
Activity: 2352
Merit: 1060
between a rock and a block!
October 05, 2011, 12:13:42 PM
#35
I still think it would be hard to create a water block that would be able to absorb 3000 W of heat from a square inch. I think you would then need micro-channels or tubes through the GPU chip itself to remove the heat.

No reason for it to be 3000 W from one square inch. Even if you could do that, the outer chip layers would act as an insulator and "cook" the inner chips.

You could simply have a chip/waterblock sandwich:

waterblock
chip
waterblock
chip
waterblock
chip
waterblock
chip
waterblock

or maybe something more like a stacked grid array (4x4 chips under a waterblock, then stacked).


I think I saw something about interleaving chips with liquid cooling pathways... I think it was IBM?
donator
Activity: 1218
Merit: 1080
Gerald Davis
October 05, 2011, 12:07:21 PM
#34
I still think it would be hard to create a water block that would be able to absorb 3000 W of heat from a square inch. I think you would then need micro-channels or tubes through the GPU chip itself to remove the heat.

No reason for it to be 3000 W from one square inch. Even if you could do that, the outer chip layers would act as an insulator and "cook" the inner chips.

You could simply have a chip/waterblock sandwich:

waterblock
chip
waterblock
chip
waterblock
chip
waterblock
chip
waterblock

or maybe something more like a stacked grid array (4x4 chips under a waterblock, then stacked).
full member
Activity: 235
Merit: 100
October 05, 2011, 11:54:54 AM
#33
If AMD would just make GPU chips stackable into arrays, that would be something. That would be awesome: get 10 HD 6970 chips stacked together... it could dump 3000 watts of heat into a square inch.

Just have to figure out how to cool it.

Liquid cooling could handle it. It takes about 4 watt-hours of power to raise 1 gallon of water by 1 deg C.

Thus, to keep the temperature of a 3000-watt chip within 10 degrees of ambient, it would require on the order of 147 gallons per hour. Sounds like a lot, but that is a mere 2.5 gallons per minute or so; a good water-cooling pump has ten times that capacity.

Now, to avoid the water heating up, you would need a pretty large radiator to dissipate 3 kW of heat, but it wouldn't require anything exotic.

To correct what someone said above: the temperature of the water is negligible, because water is a very effective conductor of heat. "Warm" water cools just as well as "ice cold" water if your goal is just to keep the temps below, say, 60 C.

I still think it would be hard to create a water block that would be able to absorb 3000 W of heat from a square inch. I think you would then need micro-channels or tubes through the GPU chip itself to remove the heat.
donator
Activity: 1218
Merit: 1080
Gerald Davis
October 05, 2011, 11:23:29 AM
#32
If AMD would just make GPU chips stackable into arrays, that would be something. That would be awesome: get 10 HD 6970 chips stacked together... it could dump 3000 watts of heat into a square inch.

Just have to figure out how to cool it.

Liquid cooling could handle it. It takes about 4 watt-hours of power to raise 1 gallon of water by 1 deg C.

Thus, to keep the temperature of a 3000-watt chip within 10 degrees of ambient, it would require on the order of 147 gallons per hour. Sounds like a lot, but that is a mere 2.5 gallons per minute or so; a good water-cooling pump has ten times that capacity.

Now, to avoid the water heating up, you would need a pretty large radiator to dissipate 3 kW of heat, but it wouldn't require anything exotic.

To correct what someone said above: the temperature of the water is negligible, because water is a very effective conductor of heat. "Warm" water cools just as well as "ice cold" water if your goal is just to keep the temps below, say, 60 C.
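To make the arithmetic above explicit, here is a tiny Python restatement. The 4 Wh per gallon per deg C figure is the one used in the post (the textbook value is closer to 4.4), and the allowed water-side temperature rise is left as a parameter; either way the required flow is only a gallon or two per minute, well inside what an ordinary pump can move.

# Water flow needed to carry away a given heat load:
# gallons per hour = heat_load_w / (wh_per_gallon_per_degC * allowed_rise_degC)

def gallons_per_hour(heat_load_w, rise_c, wh_per_gal_c=4.0):
    return heat_load_w / (wh_per_gal_c * rise_c)

for rise in (5, 10):
    gph = gallons_per_hour(3000, rise)
    print(f"3000 W with a {rise} C water-side rise: {gph:.0f} gal/h ({gph / 60:.1f} gal/min)")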


donator
Activity: 2352
Merit: 1060
between a rock and a block!
October 05, 2011, 11:05:36 AM
#31
If AMD would just make GPU chips stackable into arrays, that would be something. That would be awesome: get 10 HD 6970 chips stacked together... it could dump 3000 watts of heat into a square inch.

Just have to figure out how to cool it.

With a very cold liquid.
full member
Activity: 235
Merit: 100
October 05, 2011, 11:03:51 AM
#30
If AMD would just make GPU chips stackable into arrays, that would be something. That would be awesome: get 10 HD 6970 chips stacked together... it could dump 3000 watts of heat into a square inch.

Just have to figure out how to cool it.
donator
Activity: 1218
Merit: 1080
Gerald Davis
October 05, 2011, 09:54:55 AM
#29
Thank you. I have been reading up a bit on them and it is quite confusing. You'd think that in AMD's whole fusion strategy they could allow the GPU and the CPU to access the same cache. Perhaps this is further down the pipeline.

Possibly someday. One thing is for sure: we can expect tighter and tighter integration of CPU and GPU cores. Currently, though, the GPU doesn't have access to the CPU cache.

A high-end "Fusion" part could be interesting for high-performance computing. Imagine a chip with 4 or 8 x86 cores and a much larger shader core, with its own internal high-speed cache and the ability to share the L2 caches of the "traditional" cores. In a 4-socket server that would give you pretty amazing compute density.

Of course, AMD's whole blurring of the line between CPU and GPU will eventually kill the "CPU-friendly" block chains. It is a futile endeavor.
sr. member
Activity: 252
Merit: 250
October 05, 2011, 08:58:05 AM
#28
It'll be about 92% of a 5570, or about 55 MH/s.

I have a 5570 and can do 200+ MH/s... unless 92% means something different now, there is a problem somewhere.

OT: I'm curious about your settings...
hero member
Activity: 756
Merit: 500
October 05, 2011, 08:22:33 AM
#27
The shader core is a standard GPU (called the HD 6550D); its specs are similar to the GPU on an HD 6570 video card. It has 400 shaders vs. 480, running at a higher clock, and is more efficient thanks to the 32 nm process, but it is for all intents and purposes a GPU.

It has all the advantages and disadvantages of a GPU. Namely, it only has 64 KB of cache on the GPU, which is insufficient to run scrypt efficiently. The CPU "side" of the chip is a rather pedestrian CPU; it should perform similarly to any other modern CPU.

So while you "could" use it to mine "CPU-friendly" chains, you won't get a massive boost over other CPUs, as the GPU side is "crippled" for that purpose. A higher-clocked multi-core chip (like a 6-core Phenom II) would still be superior.

Thank you. I have been reading up a bit on them and it is quite confusing. You'd think that in AMD's whole Fusion strategy they could allow the GPU and the CPU to access the same cache. Perhaps this is further down the pipeline.
donator
Activity: 1218
Merit: 1080
Gerald Davis
October 05, 2011, 07:39:15 AM
#26
The shader core is a standard GPU (called the HD 6550D); its specs are similar to the GPU on an HD 6570 video card. It has 400 shaders vs. 480, running at a higher clock, and is more efficient thanks to the 32 nm process, but it is for all intents and purposes a GPU.

It has all the advantages and disadvantages of a GPU. Namely, it only has 64 KB of cache on the GPU, which is insufficient to run scrypt efficiently. The CPU "side" of the chip is a rather pedestrian CPU; it should perform similarly to any other modern CPU.

So while you "could" use it to mine "CPU-friendly" chains, you won't get a massive boost over other CPUs, as the GPU side is "crippled" for that purpose. A higher-clocked multi-core chip (like a 6-core Phenom II) would still be superior.
hero member
Activity: 756
Merit: 500
October 05, 2011, 07:24:21 AM
#25
Could you use this with something like Tenebrix working with both the CPU and the shaders?
legendary
Activity: 2450
Merit: 1002
July 30, 2011, 03:19:33 PM
#24
I have set one of these up, and using cgminer the A8 can pull off around 68 MH/s. The OS is Win7 x64.
Not too bad!
newbie
Activity: 50
Merit: 0
July 30, 2011, 11:59:50 AM
#23
So, the 2.9 GHz will do 65 MH/s... what about the 2.6 GHz version?

The CPU speed is irrelevant. What you look at is how many shaders there are and how fast the shaders run. 400 shaders at 600 MHz is the strongest Llano you can get.

Running all 4 cores. Getting 65 MH/s.

Best to deactivate the CPU mining and only mine with the graphics shaders. Overclocking the shaders from 600 MHz should boost your hash rate far more than the CPUs ever could.

If they make a power-efficient...

It's VLIW5 at 32 nm SOI. It's far more power-efficient than any other graphics card currently in existence.
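The shaders-times-clock rule of thumb above can be made concrete with a small sketch. It is calibrated against the HD 5770 figure quoted elsewhere in the thread (about 200 MH/s from 800 shaders; the ~850 MHz clock is an assumption), and the Trinity shader count and clock are the commonly cited specs, so treat the outputs as ballpark estimates only.

# Back-of-the-envelope SHA-256 hash rate: roughly proportional to shaders * clock.
# Calibrated on an HD 5770 (~200 MH/s, 800 shaders, ~850 MHz); all figures approximate.

MHS_PER_SHADER_MHZ = 200.0 / (800 * 850)   # ~2.9e-4 MH/s per shader-MHz

def estimate_mhs(shaders, clock_mhz):
    return shaders * clock_mhz * MHS_PER_SHADER_MHZ

print(f"Llano A8 (HD 6550D, 400 sp @ 600 MHz):    ~{estimate_mhs(400, 600):.0f} MH/s")
print(f"Trinity A10 (HD 7660D, 384 sp @ 800 MHz): ~{estimate_mhs(384, 800):.0f} MH/s")

Both land close to the 65-70 MH/s and 88-100 MH/s figures reported in this thread, which is about as much accuracy as a linear rule of thumb deserves.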
donator
Activity: 2352
Merit: 1060
between a rock and a block!
July 29, 2011, 06:09:33 PM
#22
I have an A8-3850 box, running all 4 cores and getting 65 MH/s.

How are you cooling it? Stock heatsink/fan or something else?

Have you tried overclocking?
member
Activity: 99
Merit: 10
July 07, 2011, 03:37:02 PM
#21
What software are you using to mine?

GUIMiner 0701.

But is GUIMiner taking advantage of the GPU core?

Yes. You can't get 65 MH/s out of a normal CPU.
legendary
Activity: 1050
Merit: 1000
July 07, 2011, 02:28:05 PM
#20
If AMD would just make GPU chips stackable into arrays, that would be something.
member
Activity: 68
Merit: 10
High Desert Dweller-Where Space and Time Meet $
July 07, 2011, 01:54:41 PM
#19
What software are you using to mine?

GUIMiner 0701

But is GUIMiner taking advantage of the GPU core?
newbie
Activity: 21
Merit: 0
July 07, 2011, 10:22:00 AM
#18
I have an A8-3850 box, running all 4 cores and getting 65 MH/s.

Have you considered overclocking?

You should worry about the noise generated when running at full load (even on just one core). Better to spend $30 on an aftermarket heatsink and fan.
newbie
Activity: 21
Merit: 0
July 07, 2011, 10:20:53 AM
#17
What software are you using to mine?

GUIMiner 0701
hero member
Activity: 938
Merit: 501
July 07, 2011, 01:03:34 AM
#16
What software are you using to mine?
donator
Activity: 2352
Merit: 1060
between a rock and a block!
July 06, 2011, 10:38:56 PM
#15
So, the 2.9 GHz will do 65 MH/s... what about the 2.6 GHz version?

I know it's the same wattage and all, but it's $20 cheaper, and if it will do the same hashing rate I'd prefer to pay $20 less.
member
Activity: 68
Merit: 10
High Desert Dweller-Where Space and Time Meet $
July 06, 2011, 10:07:01 PM
#14
I have an A8-3850 box, running all 4 cores and getting 65 MH/s.

What's the load on the cores? I wonder how close it is to using 100 watts. Sounds about right, though; it's a nice boost over a normal CPU.

Are you using the internal GPU? That's what I want to see tested.
legendary
Activity: 1386
Merit: 1004
July 06, 2011, 05:05:39 PM
#13
I have an A8-3850 box, running all 4 cores and getting 65 MH/s.

Have you considered overclocking?
hero member
Activity: 728
Merit: 501
CryptoTalk.Org - Get Paid for every Post!
July 06, 2011, 04:46:03 PM
#12
I have an A8-3850 box, running all 4 cores and getting 65 MH/s.

What's the load on the cores? I wonder how close it is to using 100 watts. Sounds about right, though; it's a nice boost over a normal CPU.
newbie
Activity: 21
Merit: 0
July 06, 2011, 04:38:38 PM
#11
I have an A8-3850 box.

Running all 4 cores. Getting 65 MH/s.
legendary
Activity: 1386
Merit: 1004
July 06, 2011, 03:22:00 PM
#10
It'll be about 92% of a 5570, or about 55 MH/s.

Apparently you can do some overclocking on it, but 100 MH/s would be the max I would expect. Now, the next version might make sense when combined with a standard video card or two, but not this one.


member
Activity: 68
Merit: 10
High Desert Dweller-Where Space and Time Meet $
July 05, 2011, 11:31:57 PM
#9
Yes... it should be investigated. I'm more than happy to accept donations of hardware to do this.
hero member
Activity: 728
Merit: 501
CryptoTalk.Org - Get Paid for every Post!
July 05, 2011, 01:34:04 PM
#8
If they make a power-efficient (I'm thinking 45 W) APU that can put out 80-120 MH/s with 400-600 Radeon 7xxx-series shaders, CPU mining will be worth it.

Granted, it's not much in daily BTC, but it will pay to keep the processor mining, unlike now.

Agreed. For new mining rigs it might make sense once there are some 3-, 4-, or 5-slot PCIe boards. You'd get the same GPU production with a bonus 50-80 MH/s from your CPU.

sr. member
Activity: 252
Merit: 251
July 05, 2011, 10:51:28 AM
#7
If they make a power-efficient (I'm thinking 45 W) APU that can put out 80-120 MH/s with 400-600 Radeon 7xxx-series shaders, CPU mining will be worth it.

Granted, it's not much in daily BTC, but it will pay to keep the processor mining, unlike now.
legendary
Activity: 1148
Merit: 1001
Radix-The Decentralized Finance Protocol
July 05, 2011, 10:48:39 AM
#6
It'll be about 92% of a 5570, or about 55 MH/s.

I have a 5570 and can do 200+ MH/s... unless 92% means something different now, there is a problem somewhere.

No way a card with 400 SPs can do 200 MH/s; the 5770 has 800 and it can do 200 MH/s with an overclock.

Sorry, I misread. I have a 5770, not a 5570.
newbie
Activity: 21
Merit: 0
July 05, 2011, 09:21:18 AM
#5
It'll be about 92% of a 5570, or about 55 MH/s.

I have a 5570 and can do 200+ MH/s... unless 92% means something different now, there is a problem somewhere.

No way a card with 400 SPs can do 200 MH/s; the 5770 has 800 and it can do 200 MH/s with an overclock.
newbie
Activity: 21
Merit: 0
July 05, 2011, 09:02:31 AM
#4
My best guess would be somewhere between 50 and 75 MH/s, and honestly I think 75 would be a super-extreme top end; 50-60 MH/s is more realistic. Sadly the CPU isn't as good as I had hoped. Yes, if you are in a situation where you need to rely on integrated graphics, nothing even remotely comes close; but if you're building a mining PC, a Sempron is a much more cost-effective way to go, and if you're building a gaming/mining PC you're much better off getting an i3/i5 or a Phenom II 955 Black.

Now granted, this is just my opinion. If you feel that buying a $130 processor for an extra 50-60 MH/s is worth it, then by all means, the A8 is for you. If you need to build a super-cost-effective low-end gaming machine, again, the A8 is for you. If you don't fit into either of those, your best bet is to get one of the other CPUs I mentioned.
legendary
Activity: 1148
Merit: 1001
Radix-The Decentralized Finance Protocol
July 05, 2011, 08:59:35 AM
#3
It'll be about 92% of a 5570, or about 55 MH/s.

I have a 5570 and can do 200+ MH/s... unless 92% means something different now, there is a problem somewhere.
hero member
Activity: 658
Merit: 500
July 05, 2011, 08:57:26 AM
#2
It'll be about 92% of a 5570, or about 55 MH/s.
member
Activity: 68
Merit: 10
High Desert Dweller-Where Space and Time Meet $
July 05, 2011, 02:23:44 AM
#1
I doubt anyone has one yet; it looks like they came out last week. They are quad-core CPUs with an internal GPU core carrying 400 shader processors. While clearly not a powerhouse, I'm curious to hear from anyone who gets one and tries to mine on it.

It looks like they are about $130-150 retail. Looking at the motherboards that support Socket FM1, the most you'll be able to fit is two or so GPU cards, with possibly a spare slot for an FPGA PCI array (the mobos seem to be in the $100-ish range). Of course, all of this is brand spankin' new to market, so I'd probably want to wait for some more developments before I look at getting one.

This may (I haven't done the math here) be more cost-effective than going with a Sempron, considering the low wattage of these new chips (at least the AMD A8-3800 looks like the wise choice along this train of thought). The A8-3850 runs faster, with more wattage and no TurboCore, but it doesn't really need it. If you mix in CPU mining, it may be handy to go with the 3850.

Reviews:
http://www.guru3d.com/article/amd-a8-3850-apu-review/1
http://www.bit-tech.net/hardware/cpus/2011/06/30/amd-a8-3850-review/1

Thoughts?

OR IF SOMEONE WANTS TO BUY ME ONE, I'LL LET YOU KNOW HOW THE MINING GOES.
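For anyone who wants to actually do the math skipped above, the expected daily reward follows directly from hash rate and network difficulty. A minimal sketch, with the difficulty below as a pure placeholder to be replaced with the current value (the 50 BTC block reward matches the era of this thread):

# Expected BTC per day: each hash wins a block with probability 1 / (difficulty * 2**32).

def btc_per_day(hashrate_mhs, difficulty, block_reward_btc=50.0):
    hashes_per_day = hashrate_mhs * 1e6 * 86400
    return hashes_per_day * block_reward_btc / (difficulty * 2 ** 32)

difficulty = 1_500_000.0   # placeholder -- substitute the current network difficulty
for mhs in (55, 65, 100):
    print(f"{mhs} MH/s: {btc_per_day(mhs, difficulty):.4f} BTC/day")

Weigh that against the chip's price and your electricity cost before deciding whether the APU route beats a Sempron plus discrete cards.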