
Topic: Are ASIC's the endgame? (Read 3224 times)

legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
November 10, 2012, 08:27:13 PM
#31
Well, the van der Waals radius of a silicon atom is 220 pm, so it is reasonable to assume that any transistor must be at least 6 times that (3 atoms, two for each junction); that's 1.32 nm. Then you probably cannot 'dope' a single atom, so that figure is doubled again to 2.64 nm.
If you then consider that any semiconductor must include a certain doping ratio, that figure (the minimal number of atoms per junction) becomes larger and larger. So we are not that far away, and it's reasonable to assume that transistors can at best only get one order of magnitude smaller than they currently are.
Before you go too far with that, 2 and 3nm transistors have been demonstrated.  You still want to say our noses are against the silicon performance wall?   Grin
Yes.
IIRC those weren't 2 to 3 nm transistors but transistors with a resolution of 2 to 3 nm (that's ~6 atoms). It's close to what is physically possible, but not the end of the line.
That is, I always assumed the nm figure in microelectronics refers to the feature size, i.e. the average size of an element such as the transistor; correct me if I'm wrong. In that case the demonstrated transistor would already be about at the end of the line.

However, current tech approaching its limit doesn't mean progress must stop here. We can, for one, produce larger chips or even wafer-scale designs, stack them for "3D chips", and finally build something like "holographic computers", where the quantum states of individual atoms are used for useful computation. The next limit would be the Planck units, but that's a long way off.
sr. member
Activity: 285
Merit: 250
Turning money into heat since 2011.
November 10, 2012, 08:03:49 PM
#30
Well, the van der Waals radius of a silicon atom is 220 pm, so it is reasonable to assume that any transistor must be at least 6 times that (3 atoms, two for each junction); that's 1.32 nm. Then you probably cannot 'dope' a single atom, so that figure is doubled again to 2.64 nm.
If you then consider that any semiconductor must include a certain doping ratio, that figure (the minimal number of atoms per junction) becomes larger and larger. So we are not that far away, and it's reasonable to assume that transistors can at best only get one order of magnitude smaller than they currently are.
Before you go too far with that, 2 and 3nm transistors have been demonstrated.  You still want to say our noses are against the silicon performance wall?   Grin
sr. member
Activity: 340
Merit: 250
GO http://bitcointa.lk !!! My new nick: jurov
November 10, 2012, 06:02:51 PM
#29
If someone finds some way to reduce the work by, for example, 20 bits, it would be a million times faster.
That trick would work for all current technologies: CPU/GPU/FPGA/ASIC.
But keep in mind, the trick would be something many cryptographers haven't found yet. So personally I would bet on quantum computers.
This is actually a very good comment, and such a thing is already being researched: http://en.wikipedia.org/wiki/SHA-2#Cryptanalysis_and_validation . Even if it does not endanger the full SHA hash function yet, it is quite possible someone soon figures out how to use these "meet-in-the-middle preimage attacks" to reduce the work by at least a few bits. I'm sure there are people thinking hard about it already.

EDIT: And it does not mean it will necessarily work for all technologies. If it needs gobs of parallel computation, FPGAs/ASICs are the way to go. If it relies on accessing terabytes of memory (think rainbow tables), a CPU will suffice.
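To put numbers on the quoted claim (a trivial sketch, nothing more): shaving k bits off the effective search space halves the work k times, so a 20-bit shortcut is a 2^20 speedup, about a million.

```python
# Each bit shaved off the search space halves the work,
# so a k-bit cryptanalytic shortcut is a 2**k speedup.
def speedup_from_bits(k: int) -> int:
    return 2 ** k

print(speedup_from_bits(20))  # 1048576, i.e. ~a million times faster
```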
legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
November 10, 2012, 04:01:11 PM
#28
The problem with this "things are going to keep getting smaller" theory is that we're approaching the laws of physics with transistor sizes this small. And we're already quite limited in clock rate because of physics.
Bah.  We've been "approaching the laws of physics" for decades.  About 20 years ago, I heard declarations that silicon will never break 250MHz.  The frequency limit has more to do with line lengths, turns, and signal phasing than transistor sizes.  I've already heard about silicon switching tested around 500GHz.  Doing that with complex data paths and keeping the signals phased correctly is where the magic is.

Well, the van der Waals radius of a silicon atom is 220 pm, so it is reasonable to assume that any transistor must be at least 6 times that (3 atoms, two for each junction); that's 1.32 nm. Then you probably cannot 'dope' a single atom, so that figure is doubled again to 2.64 nm.
If you then consider that any semiconductor must include a certain doping ratio, that figure (the minimal number of atoms per junction) becomes larger and larger. So we are not that far away, and it's reasonable to assume that transistors can at best only get one order of magnitude smaller than they currently are.
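For what it's worth, the arithmetic in the post checks out, taking the post's own 220 pm figure at face value (published van der Waals radii for silicon vary slightly between tables):

```python
# Back-of-the-envelope from the post: van der Waals radius of Si ~220 pm,
# a junction needs at least 3 atoms (6 radii), doubled again for doping.
vdw_radius_pm = 220
min_junction_pm = 6 * vdw_radius_pm    # 1320 pm = 1.32 nm
with_doping_pm = 2 * min_junction_pm   # 2640 pm = 2.64 nm

print(min_junction_pm / 1000)  # 1.32 (nm)
print(with_doping_pm / 1000)   # 2.64 (nm)
```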
sr. member
Activity: 285
Merit: 250
Turning money into heat since 2011.
November 10, 2012, 01:30:35 PM
#27
The problem with this "things are going to keep getting smaller" theory is that we're approaching the laws of physics with transistor sizes this small. And we're already quite limited in clock rate because of physics.
Bah.  We've been "approaching the laws of physics" for decades.  About 20 years ago, I heard declarations that silicon will never break 250MHz.  The frequency limit has more to do with line lengths, turns, and signal phasing than transistor sizes.  I've already heard about silicon switching tested around 500GHz.  Doing that with complex data paths and keeping the signals phased correctly is where the magic is.
sr. member
Activity: 295
Merit: 250
November 10, 2012, 12:43:01 PM
#26
So it's safe to assume the ASIC tech will progress like Pentium......just keeps getting smaller for multi-cores.....

It's already more advanced than most of the Pentium series of processors. I believe they didn't even get to 65nm until the Pentium 4.

The problem with this "things are going to keep getting smaller" theory is that we're approaching the laws of physics with transistor sizes this small. And we're already quite limited in clock rate because of physics.
full member
Activity: 196
Merit: 100
Another block in the wall
November 10, 2012, 09:43:12 AM
#25
So it's safe to assume the ASIC tech will progress like Pentium......just keeps getting smaller for multi-cores.....
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
November 10, 2012, 09:17:04 AM
#24
AMD makes 28nm GPUs  Wink

hero member
Activity: 602
Merit: 500
November 10, 2012, 04:31:46 AM
#23
Considering BFL has said they're going to be running 65nm parts and the big CPU vendors are pushing out 22nm stuff already, it's not a stretch to imagine ASICs getting down to 22nm or smaller.  That'd give you even more power efficiency.

We will see a 22nm ASIC only in 10 years or if bitcoin becomes mainstream before that.

Yes. Let's not forget that the only company doing 22nm is Intel (AMD has not been able to get a 22nm process for their CPUs, and I believe Intel boxes them out of 22nm fabs as well). And the last Intel 65nm CPU was the Intel® Core™2 Extreme Processor QX6700, which came out in Nov-07. 5 years to go from 65nm -> 22nm for a multi-billion-dollar microprocessing giant. Admittedly, once it has been done it becomes easier to do again, but the cutting edge of ASIC technology is ~7 years behind.
legendary
Activity: 1176
Merit: 1001
November 10, 2012, 02:30:11 AM
#22
Considering BFL has said they're going to be running 65nm parts and the big CPU vendors are pushing out 22nm stuff already, it's not a stretch to imagine ASICs getting down to 22nm or smaller.  That'd give you even more power efficiency.

We will see a 22nm ASIC only in 10 years or if bitcoin becomes mainstream before that.
sr. member
Activity: 378
Merit: 250
November 10, 2012, 12:20:25 AM
#21
Considering BFL has said they're going to be running 65nm parts and the big CPU vendors are pushing out 22nm stuff already, it's not a stretch to imagine ASICs getting down to 22nm or smaller.  That'd give you even more power efficiency.
sr. member
Activity: 420
Merit: 250
November 09, 2012, 11:38:10 PM
#20
ASICs might be the 'endgame' in terms of a type of technology, but the realization of that technology has stepping stones and can be improved upon. Process/die size is the big one that can change and that will enable more work to be done per clock or Watt leading to more efficient solutions.

Do you know what type of improvements we are looking at? Double the GH/W over the best (future) existing ratio?

Really it's just about electrical surface area... i.e. as the process gets smaller you can put more transistors on the chip in the same space. This comes out as reduced heat and less wattage used.

Now I'm not sure if parallel or modular design is being used in bitcoin ASICs... I'd guess modular, though, and that's probably taking it in the wrong direction technologically.

Instead of developing smaller silicon that can run at higher clock rates... we should be designing large gate arrays that could hash an entire nonce in a few clock cycles. Sure, they'd be 10x the size and much 'slower', but they'd be fast as hell at actually producing work.
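To illustrate the trade-off being described (a purely hypothetical throughput model, not how any shipped miner is actually built): an iterative core reuses one round circuit and finishes a hash every 64 clocks, while a fully unrolled pipeline spends roughly 64x the area but retires one full hash per clock once the pipeline fills.

```python
# Illustrative clock-for-clock comparison. SHA-256 has 64 rounds.
ROUNDS = 64

def iterative_hashes(clocks: int) -> int:
    # One round circuit, reused: a hash completes every 64 clocks.
    return clocks // ROUNDS

def pipelined_hashes(clocks: int) -> int:
    # 64 round stages in flight at once: after a 64-clock fill
    # latency, one hash completes every single clock.
    return max(0, clocks - ROUNDS + 1)

print(iterative_hashes(6400))  # 100
print(pipelined_hashes(6400))  # 6337
```

So the "bigger but slower" chip wins on throughput per clock by a factor approaching the round count, which is the point the post is making.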

hero member
Activity: 602
Merit: 500
November 08, 2012, 12:19:58 PM
#19
I'd summarize it as:   Once ASICs are out, we're coasting along with Moore's Law.  There will be improvements, such as smaller circuits on chip, more efficient designs, and increased parallelism, but these will be evolutionary improvements, not revolutionary. 


Moore's law is not a law so much as a target. It has been fine for a multibillion-dollar microprocessing industry, but for the bitcoin world?

The rest though, of course. Though I suspect that with the fluctuation of BTC you will see ebbs and flows of evolution not paralleled in other industries

hero member
Activity: 633
Merit: 500
November 08, 2012, 10:50:45 AM
#18
This isn't the question so much as, "What percentage of your mining income do you spend increasing your hardware?"  It'll be yet another kind of market speculation where the winner is the one who is able to make accurate predictions in both exchange rate and shipping times.  In case you're wondering, this race has already started and the runner who picked the company that is first to ship will be the only one who doesn't trip over the first hurdle.  If you bought BFL and Tom ships first... you're playing catch-up for a good long while.
sr. member
Activity: 454
Merit: 250
Technology and Women. Amazing.
November 07, 2012, 09:48:59 PM
#17
I'd summarize it as:   Once ASICs are out, we're coasting along with Moore's Law.  There will be improvements, such as smaller circuits on chip, more efficient designs, and increased parallelism, but these will be evolutionary improvements, not revolutionary. 

This; theoretically the company makes enough money producing gen1 products and is able to come out with a smaller-nm fabrication build, etc. A 32nm ASIC would be sweet.
sr. member
Activity: 285
Merit: 250
Turning money into heat since 2011.
November 06, 2012, 06:00:27 PM
#16
I'd summarize it as:   Once ASICs are out, we're coasting along with Moore's Law.  There will be improvements, such as smaller circuits on chip, more efficient designs, and increased parallelism, but these will be evolutionary improvements, not revolutionary. 
legendary
Activity: 3878
Merit: 1193
November 06, 2012, 03:57:03 PM
#15
Yeah, and eventually you get to a point of diminishing returns.  If you get 10% more out of a unit, but burn it out 15% faster, what's the point?

Because with difficulty increasing, higher performance now will earn more shares than a longer lifespan would.
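A toy model of why (all numbers hypothetical): if network difficulty grows each period, a unit that is 10% faster but dies 15% sooner can still earn more in total, because early hashes are worth more than later ones.

```python
# Toy model: shares earned per period ~ hashrate / difficulty,
# with difficulty growing by a fixed factor each period.
def total_shares(hashrate: float, lifetime: int, growth: float = 1.10) -> float:
    difficulty, total = 1.0, 0.0
    for _ in range(lifetime):
        total += hashrate / difficulty
        difficulty *= growth  # network difficulty keeps rising
    return total

base = total_shares(1.00, 20)  # stock unit, lives 20 periods
hot  = total_shares(1.10, 17)  # +10% speed, dies ~15% sooner
print(hot > base)              # True: the overclock still comes out ahead
```

Flip the growth factor to 1.0 (flat difficulty) and the longer-lived unit wins instead, which is exactly the trade-off being argued.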
sr. member
Activity: 378
Merit: 250
November 06, 2012, 02:39:48 PM
#14
I'm not sure you really want to clock these things at 2GHz.
A normal CPU/GPU has many parts that aren't active at the same time.
If you have a bitcoin miner the size of a modern processor, on the same manufacturing process and at the same clock speed, it will generate much more heat.

Yeah, and eventually you get to a point of diminishing returns.  If you get 10% more out of a unit, but burn it out 15% faster, what's the point?
hero member
Activity: 1596
Merit: 502
November 06, 2012, 01:09:50 PM
#13
I'm not sure you really want to clock these things at 2GHz.
A normal CPU/GPU has many parts that aren't active at the same time.
If you have a bitcoin miner the size of a modern processor, on the same manufacturing process and at the same clock speed, it will generate much more heat.
legendary
Activity: 952
Merit: 1000
November 06, 2012, 12:52:37 PM
#12
I do think that ASICs are the final step in terms of hardware progression, but can't those ASICs be further improved in the future? Can't they be made smaller, more power-efficient, and faster-clocked?

Just for comparison, BFL said that their chips can be clocked upwards of 1GHz, but that they won't be clocked that high right out of the box. I want to say they're running at 500MHz, IIRC? Isn't it possible in the next 5 years to create a product that uses a smaller manufacturing process and is clocked at much higher speeds, say 2GHz? We could have a future SC Single TURBO making 200GH/s @ 40W.

Am I wrong about all of this?
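To a first approximation, hashrate on a fixed design scales linearly with clock (the baseline figures below are hypothetical; real chips hit power and timing walls well before 4x, since dynamic power grows faster than linearly with frequency):

```python
# First-order estimate: hashrate scales linearly with clock
# on an unchanged design. Power does not, which is the catch.
def scaled_hashrate(base_ghs: float, base_mhz: float, new_mhz: float) -> float:
    return base_ghs * new_mhz / base_mhz

# Hypothetical: a 50 GH/s unit at 500 MHz, pushed to 2 GHz
print(scaled_hashrate(50.0, 500.0, 2000.0))  # 200.0 GH/s
```

That's where a figure like the 200GH/s in the post would come from: 4x the clock on the same silicon, before thermals are accounted for.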
legendary
Activity: 966
Merit: 1000
November 06, 2012, 12:24:29 PM
#11
For all intensive purposes ASICs are as good as it gets.

Mah purposes be mo' intensive den yours.

I agree that the move to ASICs will likely be the last big leap before quantum processing.  That said, within the limits of today's ASIC technology, I think there's still room for a few more doublings of efficiency yet.

Regardless of the technology, there's always the possibility of algorithmic improvements -- "mathematical shortcuts".  Things like this:
http://www.nature.com/news/proof-claimed-for-deep-connection-between-primes-1.11378
legendary
Activity: 1974
Merit: 1029
November 05, 2012, 01:10:13 PM
#10
By the time ASICs as a technology is obsoleted, all the bitcoin blocks will have been mined anyway  Grin

But that doesn't imply an end to mining.
sr. member
Activity: 378
Merit: 250
November 05, 2012, 12:20:42 PM
#9
Ah, I hadn't thought about quantum, but as someone mentioned above, I don't see it being a factor for some time.  Hopefully I'm wrong, as it'd be awesome to have it, but it's just so far removed at this point that I think you can discount it for a while.

I agree that people will be able to tweak and prod ASICs just like GPUs improved over time, but I was really wondering if there were any future contenders for bitcoin mining, and it looks like ASICs are it for a while.
sr. member
Activity: 560
Merit: 256
November 05, 2012, 12:04:00 PM
#8
ASICs might be the 'endgame' in terms of a type of technology, but the realization of that technology has stepping stones and can be improved upon. Process/die size is the big one that can change and that will enable more work to be done per clock or Watt leading to more efficient solutions.

Do you know what type of improvements we are looking at? Double the GH/W over the best (future) existing ratio?
full member
Activity: 163
Merit: 100
November 05, 2012, 09:41:13 AM
#7
ASICs might be the 'endgame' in terms of a type of technology, but the realization of that technology has stepping stones and can be improved upon. Process/die size is the big one that can change and that will enable more work to be done per clock or Watt leading to more efficient solutions.

By the time ASICs as a technology is obsoleted, all the bitcoin blocks will have been mined anyway  Grin


Edit: Ninja'd by abeaulieu
sr. member
Activity: 295
Merit: 250
November 05, 2012, 09:40:21 AM
#6
Let's not turn this into a troll/fanboi thread on ASICs.  My question is: is there any technology out there that could push ASIC mining out the window the way ASICs will do to GPUs and GPUs did to CPUs, or is ASIC mining 'as good as it gets' and the endgame for bitcoin mining?

For all intensive purposes ASICs are as good as it gets. Of course there are different manufacturing processes for ASICs, and they are progressively getting smaller, so there is certainly room for improvement within the ASIC domain.

If quantum computing turns into something real, that may be the next step, but it's not exactly a feasible one yet, because there have not been many practical implementations of it. The technology would have to be widely accepted in scientific computing or the PC industry before someone would port it and make it cheap enough for bitcoin mining.
legendary
Activity: 1600
Merit: 1014
November 05, 2012, 08:28:28 AM
#5
If someone finds some way to reduce the work by, for example, 20 bits, it would be a million times faster.
But keep in mind, the trick would be something many cryptographers haven't found yet. So personally I would bet on quantum computers.

Many people wouldn't have believed that about many crypto technologies so far... from MD5 to SHA-1...
legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
November 05, 2012, 08:10:08 AM
#4
There are those who think that ASICs are the endgame for pretty much everything, meaning that to keep up with Moore's Law it would eventually be necessary for every person on earth to work in the microelectronics industry.

Obviously this won't be the case, but we are already at a point where it takes a ridiculous amount of manpower to design and construct these things, and it isn't getting better. So according to this theory, Moore's Law will be cut short by a lack of human resources.
hero member
Activity: 1596
Merit: 502
November 05, 2012, 08:03:16 AM
#3
If someone finds some way to reduce the work by, for example, 20 bits, it would be a million times faster.
That trick would work for all current technologies: CPU/GPU/FPGA/ASIC.
But keep in mind, the trick would be something many cryptographers haven't found yet. So personally I would bet on quantum computers.
sr. member
Activity: 336
Merit: 250
November 05, 2012, 07:58:32 AM
#2
Yes and No.

Right now ASIC is the final say among silicon-based chips.

In the future, quantum computing might push it out.
sr. member
Activity: 378
Merit: 250
November 05, 2012, 07:56:34 AM
#1
Let's not turn this into a troll/fanboi thread on ASICs.  My question is: is there any technology out there that could push ASIC mining out the window the way ASICs will do to GPUs and GPUs did to CPUs, or is ASIC mining 'as good as it gets' and the endgame for bitcoin mining?
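For context, the workload every one of these technologies competes on is the same double-SHA256 search. A minimal sketch (heavily simplified: a real 80-byte block header has a specific field layout, and this toy target is far easier than any real difficulty):

```python
import hashlib

# Simplified mining loop: find a nonce whose double-SHA256 digest,
# read as an integer, falls below the target. This just appends the
# nonce to some placeholder header bytes.
def mine(header: bytes, target: int, max_nonce: int = 1 << 20):
    for nonce in range(max_nonce):
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    return None

# Very easy toy target (top 8 bits of the digest must be zero),
# so a valid nonce turns up after a few hundred tries on average.
result = mine(b"example-block-header", 1 << 248)
print(result is not None)
```

CPU, GPU, FPGA, or ASIC, the only question is how many of those double hashes per joule and per second the hardware can do.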