
Topic: How do Scrypt-ASICs work? (Read 790 times)

legendary
Activity: 1960
Merit: 1062
One coin to rule them all
March 22, 2014, 02:30:31 PM
#11
Well, it could be worse. They're not doing that preorder stuff, and they're not mining with their clients' hardware... at least the difficulties aren't suggesting that.

+1
legendary
Activity: 1022
Merit: 1004
March 22, 2014, 02:25:46 PM
#10
Well, it could be worse. They're not doing that preorder stuff, and they're not mining with their clients' hardware... at least the difficulties aren't suggesting that.
legendary
Activity: 1960
Merit: 1062
One coin to rule them all
March 22, 2014, 02:10:37 PM
#9
With all due respect to Gridseed's design, they "only" do 60 KHash/s per chip.

That said, time-to-market is absolutely critical, and Gridseed's timing is perfect. Even more "perfect" is the price set by Gridseed (read: the price is very, very high).
The price is just "low" enough to keep alive a small hope for miners to break even, or even make a small profit.

Gridseed plays the cryptocurrency game perfectly!
hero member
Activity: 686
Merit: 500
FUN > ROI
March 22, 2014, 01:40:47 PM
#8
And it was seen to be impossible (by the design of the algo) to have an ASIC
which does not use a lot of very fast memory (or am I wrong about that, TheRealSteve? My GPUs
make heavy use of the graphics memory while mining; I don't believe they're using only 128kB there...
Keep in mind that the memory requirement (at least in the context of LiteCoin) is per hash.  So while your card is using a lot of memory, it's only because it's working on many hashes at the same time (with decreasing returns).  At the same time, the GPU may have to communicate with that memory off-die (out of the chip, through the leads, through the memory controller, through more leads, into the GDDR chips, often through another controller, onto the RAM, and back), while you could design an ASIC with as much memory as you're willing to drop on there, very close to where you're actually doing the computation.  There are quite impressive speed gains to be had there.  Compare it to the various levels of cache on a CPU vs RAM.  L1 cache is 3 or 4 cycles away (divide 1 second by the core frequency to get the cycle time, then multiply by 3 or 4 - a nice round 2.5GHz clock gives 0.4 nanoseconds per cycle, so roughly 1.2-1.6ns total), while RAM takes many times longer (tens of nanoseconds at best - still fast, but obviously quite a bit slower than the L1 cache).
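To make that arithmetic concrete, here's the same back-of-the-envelope calculation as a small Python sketch (the 2.5GHz clock, the 3-4 cycle L1 latency, and the DRAM figure are the illustrative numbers from above, not measurements of any particular chip):

Code:
# Cache vs. RAM latency, back-of-the-envelope (illustrative numbers only).
CLOCK_HZ = 2.5e9            # assumed 2.5 GHz core clock
L1_LOAD_CYCLES = (3, 4)     # typical L1 load-to-use latency in cycles

cycle_ns = 1e9 / CLOCK_HZ   # 0.40 ns per cycle at 2.5 GHz
print(f"one cycle       : {cycle_ns:.2f} ns")
for cycles in L1_LOAD_CYCLES:
    print(f"L1 hit ({cycles} cycles): {cycles * cycle_ns:.2f} ns")
print("DRAM access     : ~50-100 ns, i.e. tens of times slower than L1")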

GPUs also have a bit of such localized memory, just not very much of it.  I'm guessing that'll change in the future - the 7970 already uses a shiny 768kB of L2 cache (it's not entirely equivalent to CPU cache levels; suffice it to say it's way faster than the GPU shuttling bits back and forth to the off-chip, on-card memory).

So even though there's less memory and less parallel processing, the processing itself can be done much faster.

Mind you, I'm speaking theoretically - for all I know, GridSeed found some actual optimizations (as well).
legendary
Activity: 1022
Merit: 1004
March 22, 2014, 01:23:36 PM
#7
Ah, alright, I was aware of an FPGA implementation, but I thought it was purely a
proof-of-concept, much worse than the first SHA FPGAs...

Then I can make sense of it... So the obstacles were never insanely high, and some
people were just exaggerating how "safe" Scrypt is. Maybe for peace of mind... that
GPUs will stay profitable forever Cheesy
legendary
Activity: 1960
Merit: 1062
One coin to rule them all
March 22, 2014, 01:12:46 PM
#6
I have played with this open source scrypt FPGA miner (I am not the designer of this; all credit goes to the respective designers):
https://github.com/kramble/FPGA-Litecoin-Miner

(Published 8 months ago)

The design uses 1024 kb per salsa core (which is of course implemented in internal BRAM). Once you can implement a design in an FPGA, it is not too far to an ASIC design (at least the functional part is done).
If you speed up the clock on this design and implement a lot of cores, then you have a pretty decent scrypt ASIC; the amount of logic needed is not insanely high.
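For anyone wondering what each of those "salsa cores" actually has to do, here's the rough shape of scrypt's ROMix stage in Python. Big caveat: the real mixing step is BlockMix built on Salsa20/8; I've swapped in a SHA-256-based stand-in (marked in the comments), so this sketch only shows the memory-access pattern that forces 128 KiB (= 1024 kilobits of BRAM) per core, not the actual algorithm:

Code:
import hashlib

N = 1024             # Litecoin's scrypt cost parameter
BLOCK_BYTES = 128    # block size for r = 1

def mix(block: bytes) -> bytes:
    # Stand-in for scrypt's BlockMix/Salsa20/8 step -- NOT the real
    # function, just something deterministic to drive the access pattern.
    out, counter = b"", 0
    while len(out) < BLOCK_BYTES:
        out += hashlib.sha256(block + counter.to_bytes(4, "little")).digest()
        counter += 1
    return out[:BLOCK_BYTES]

def romix(block: bytes) -> bytes:
    # Phase 1: fill the scratchpad -- N blocks of 128 bytes = 128 KiB,
    # which is exactly the 1024 kb of BRAM each core needs.
    v, x = [], block
    for _ in range(N):
        v.append(x)
        x = mix(x)
    # Phase 2: N reads at data-dependent indices.  Each index depends on
    # the running state, so the core can't precompute or skip the memory.
    for _ in range(N):
        j = int.from_bytes(x[:4], "little") % N
        x = mix(bytes(a ^ b for a, b in zip(x, v[j])))
    return x

print(romix(b"\x00" * BLOCK_BYTES).hex()[:32])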
 
legendary
Activity: 1022
Merit: 1004
March 22, 2014, 12:59:28 PM
#5
Thanks for your answers! I wanted to add something to my question, maybe I was a bit imprecise:
I don't care so much about how expensive it is/was to design and build the chips. Putting ROI aspects
aside, some time ago it was seen as technologically almost impossible to have Scrypt ASICs which are *much*
more power-efficient than GPUs. And it was seen to be impossible (by the design of the algo) to have an ASIC
which does not use a lot of very fast memory (or am I wrong about that, TheRealSteve? My GPUs
make heavy use of the graphics memory while mining; I don't believe they're using only 128kB there...
For GPU-mining SHA, in comparison, I recall we were underclocking the memory...)

Gridseed proved both of these assumptions wrong. I don't know "how much memory there is inside that
chip", but I don't believe it can be more than a couple of MB..?
It is this thing that boggles my mind.
newbie
Activity: 25
Merit: 0
March 22, 2014, 12:20:34 PM
#4
From what I understand, what is (or was) hard to build is a cost-effective scrypt ASIC. GPUs are cheap because they are already being built by the thousands.

Creating a new chip means someone has to design it, test it, and then produce it as cheaply as possible. Otherwise you end up with units that are too expensive, which no one will buy.
sr. member
Activity: 395
Merit: 250
March 22, 2014, 11:10:29 AM
#3
I think Gridseed has really caught everyone by surprise, and yes, many were mistaken and obviously had no clue what they were talking about. I myself am guilty of thinking high-end GDDR5 RAM is needed for a scrypt ASIC to compete with GPUs, which apparently is not the case.

Because this is an English-speaking forum with mostly US or EU members, most of us calculate with high costs for R&D, labor, and materials by EU or US standards.

But Gridseed is Avalon, a Chinese company, so apparently they have an "out-of-the-box" advantage. What is obvious is that labor and materials are definitely cheaper, and I guess with all that experience R&Ding BTC ASICs, they must have spent time on scrypt ASICs as well.
hero member
Activity: 686
Merit: 500
FUN > ROI
March 22, 2014, 09:56:28 AM
#2
Can someone explain this to me? Is it that nobody really understood Scrypt before and people who were saying "Scrypt is so memory-intensive bla bla" had no clue?

It is memory-intensive ... compared to Bitcoin.  However, most Scrypt implementations don't need all that much memory.  You mention gigabytes - LiteCoin, for example, only needs ~128kB.  That's relatively expensive to bake into an ASIC, but not exactly undoable or cost-prohibitive.  (IIRC the small size was chosen so that you can still mine efficiently on a CPU, with the scratchpad staying in the CPU's on-die cache rather than getting pushed out to slower cache levels or, worse, RAM.)

If you took an altcoin that needs gigabytes of memory, things would be much different.  An ASIC could still be created, but one would have to wonder how much more cost-efficient it would end up being vs GPUs, given that the latter are produced in much, much greater volume than any miner.
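To put numbers on both paragraphs above: scrypt's ROMix stage needs 128 * r * N bytes of scratchpad per hash (per the scrypt paper). A quick Python check of the ~128kB figure, plus a hypothetical gigabyte-class parameter set (N = 2^20, r = 8 is my own illustrative choice, not any real coin's parameters):

Code:
def scrypt_scratchpad_bytes(N: int, r: int) -> int:
    # Per-hash memory for scrypt's ROMix stage: N blocks of 128*r bytes.
    return 128 * r * N

# LiteCoin: N = 1024, r = 1  ->  128 KiB per hash.
print(scrypt_scratchpad_bytes(1024, 1) // 1024, "KiB")

# Hypothetical gigabyte-class coin: N = 2**20, r = 8  ->  1 GiB per hash.
print(scrypt_scratchpad_bytes(2**20, 8) // 2**30, "GiB")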
legendary
Activity: 1022
Merit: 1004
March 22, 2014, 09:28:31 AM
#1
I have a question, probably a stupid one: If I'm right, some months ago the general consensus was
that ASICs for Scrypt are very hard to build because of the high memory requirements etc.
Moreover, I remember people saying that if Scrypt ASICs ever appeared, they would look much
like a GPU. In particular, no extraordinary economies w.r.t. power consumption were to be expected.

So, now we have the chips: they work great, are very power-efficient, look nothing like a GPU,
and I can't see gigabytes of RAM chips soldered next to them.

Can someone explain this to me? Is it that nobody really understood Scrypt before and people who were saying
"Scrypt is so memory-intensive bla bla" had no clue? Or are the Gridseed guys real geniuses who
were able to find "flaws" or "shortcuts"? Does anyone have technical details about what the chip
actually does? Is this known, after all?