
Topic: New 14nm miners??????? (Read 5982 times)

DrG
legendary
Activity: 2086
Merit: 1035
September 03, 2014, 03:28:19 PM
#69
If TSMC is booked up, well then 28nm it is.
The hash rate is rising so fast that by the time 14nm ASIC fab capability becomes available, fabbing the parts will be uneconomic. We're headed for 1 exahash around the beginning of 2015, about 4x the current hash rate.

By Q1 2015, all miners will be losing money. Then what?


Difficulty will go to the moon!

Well, at some point somebody will shut off their miners.  First to go will be most European and Australian miners since they have high energy rates.  If that doesn't bring the difficulty down, then the people who thought they had cheap power at $0.08/kWh will get kicked out.

The same thing happened with GPUs - that's when FPGA started becoming popular.  ASICs killed FPGA development.

Sooner or later we'll reach equilibrium.  At that point only people with "free" or insanely cheap power will be able to compete with the large companies.
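To put some rough numbers behind the "who unplugs first" argument, here is a minimal back-of-the-envelope sketch in Python. Everything in it is an illustrative assumption rather than a figure from this thread (a hypothetical 2 TH/s miner drawing 1.3 kW, a 250,000 TH/s network, BTC around $480), apart from the $0.08/kWh rate mentioned above.

Code:
# Rough daily mining profitability: expected block-reward revenue minus power cost.
# All figures are illustrative assumptions, not measurements.

def daily_profit_usd(miner_ths, network_ths, btc_price_usd,
                     power_kw, usd_per_kwh,
                     block_reward_btc=25.0, blocks_per_day=144):
    share = miner_ths / network_ths                    # fraction of network hash rate
    revenue_btc = share * blocks_per_day * block_reward_btc
    power_cost = power_kw * 24 * usd_per_kwh           # electricity for 24 hours
    return revenue_btc * btc_price_usd - power_cost

# Hypothetical 2 TH/s miner drawing 1.3 kW on a 250,000 TH/s network,
# BTC at $480, power at $0.08/kWh:
print(round(daily_profit_usd(2, 250_000, 480, 1.3, 0.08), 2))   # ~11.33

Whoever's result goes negative first - usually whoever pays the most per kWh - is the one who unplugs, which is the equilibrium mechanism described above.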
legendary
Activity: 1204
Merit: 1002
September 03, 2014, 02:53:17 AM
#68
If TSMC is booked up, well then 28nm it is.
The hash rate is rising so fast that by the time 14nm ASIC fab capability becomes available, fabbing the parts will be uneconomic. We're headed for 1 exahash around the beginning of 2015, about 4x the current hash rate.

By Q1 2015, all miners will be losing money. Then what?

Difficulty will go to the moon!
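For anyone wondering why a rising hash rate translates directly into rising difficulty, here is a minimal sketch of Bitcoin's retarget rule: every 2016 blocks the difficulty is rescaled so that blocks again average ten minutes, with each adjustment clamped to a factor of four in either direction. The numbers in the example are illustrative only.

Code:
# Bitcoin retargets difficulty every 2016 blocks toward a 10-minute block interval.
TARGET_SECONDS = 2016 * 600   # two weeks' worth of 10-minute blocks

def retarget(old_difficulty, actual_seconds_for_2016_blocks):
    ratio = TARGET_SECONDS / actual_seconds_for_2016_blocks
    ratio = max(0.25, min(4.0, ratio))   # consensus rule caps each adjustment at 4x
    return old_difficulty * ratio

# If the hash rate roughly quadruples, the 2016 blocks arrive in about a quarter
# of the target time, so the next difficulty is about 4x the previous one.
print(retarget(100.0, TARGET_SECONDS / 4))   # 400.0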
hero member
Activity: 854
Merit: 510
September 03, 2014, 02:05:14 AM
#67
...  Moore's Law does appear to be reaching an end. ....

Heard the same thing in the late 1980's.   It is more likely that there will be a shift to other approaches, like stacking many layers or something else.   Still, that doesn't mean it will be a few guys in a bar figuring out how to do it.

I was on my 3rd PC in the late 80s and I never heard such a thing.  I did hear that they would have problems with the clocks because of possible interference with other electronics - turned out that didn't happen  Cheesy

The basic transistor gate design has to change (i.e. go 3D), but even that can only go so far.  The concept of a switch or gate itself has to change - graphene or something like that which we haven't used yet.
3rd PC by the late 80's?  You started later than me.   I was on my third computer before 1980.

The talk about Moore's law in the late 80's was mostly around hard disk density and capacity at the time.   I was working as a hardware/software engineer designing sub-systems used in oscilloscopes.   You may have been too young to follow the trade journals at that time.   There weren't many stories in the popular press as computers weren't mainstream.    Anyway, at the time a typical hard disk was measured in MB.   A few-GB drive was a monster, very expensive, and physically very large.

Currently there is a great deal of research on how to improve ICs, and we are a long way from the end of improvements, though that might not be true for shrinking transistor sizes.   Also, much of the new stuff will take years to work its way out to smaller companies unless they are buying general CPUs etc.   I know, for example, that stacking layers of transistors has been in the works for at least 5 years, probably more, but so far I haven't actually seen it.
 

Yeah, I'm a physician, so all my computer knowledge is hobby work, but my dad was an EE, so my science project in 1st grade was parallel vs. series lighting while other kids were doing things with plants and volcanoes  Cheesy

And yes, I remember MB HDs.  After my IIc I bought a 386 SX16 (turbo 16 MHz) from Gateway and my dad paid an extra $500 for the 40MB WD HD.  It was an IDE drive, not one of the RLL/MFM/ESDI ones.  Yeah, he still has punch cards in the garage along with his oscilloscope.  I do indeed remember the talk about hard drive limits at that time.  I learned to program in Fortran, BASIC, C, and COBOL, but that's all been erased from my mind and replaced with useless medical knowledge.


I still read through the IEEE magazines from time to time, and I have the 2011 issue talking about digital currency and Bitcoin.

Anyway, the circuit will be revolutionized - it just needs a Eureka moment.  It's not my field of study, so I get my information from stuff like this and from other forum members:
https://www.youtube.com/watch?v=gjx5y9lrtwU

I once considered medical school, even read up on the MCAT before deciding it wasn't for me.   I respect how much a physician has to go through to get to practice.   It seems medical fields are really starting to explode with innovation!   
DrG
legendary
Activity: 2086
Merit: 1035
September 03, 2014, 01:54:24 AM
#66
...  Moore's Law does appear to be reaching an end. ....

Heard the same thing in the late 1980's.   It is more likely that there will be a shift to other approaches, like stacking many layers or something else.   Still, that doesn't mean it will be a few guys in a bar figuring out how to do it.

I was on my 3rd PC in the late 80s and I never heard such a thing.  I did hear that they would have problems with the clocks because of possible interference with other electronics - turned out that didn't happen  Cheesy

The basic transistor gate design has to change (i.e. go 3D), but even that can only go so far.  The concept of a switch or gate itself has to change - graphene or something like that which we haven't used yet.
3rd PC by the late 80's?  You started later than me.   I was on my third computer before 1980.

The talk about Moore's law in the late 80's was mostly around hard disk density and capacity at the time.   I was working as a hardware/software engineer designing sub-systems used in oscilloscopes.   You may have been too young to follow the trade journals at that time.   There weren't many stories in the popular press as computers weren't mainstream.    Anyway, at the time a typical hard disk was measured in MB.   A few-GB drive was a monster, very expensive, and physically very large.

Currently there is a great deal of research on how to improve ICs, and we are a long way from the end of improvements, though that might not be true for shrinking transistor sizes.   Also, much of the new stuff will take years to work its way out to smaller companies unless they are buying general CPUs etc.   I know, for example, that stacking layers of transistors has been in the works for at least 5 years, probably more, but so far I haven't actually seen it.
 

Yeah, I'm a physician, so all my computer knowledge is hobby work, but my dad was an EE, so my science project in 1st grade was parallel vs. series lighting while other kids were doing things with plants and volcanoes  Cheesy

And yes, I remember MB HDs.  After my IIc I bought a 386 SX16 (turbo 16 MHz) from Gateway and my dad paid an extra $500 for the 40MB WD HD.  It was an IDE drive, not one of the RLL/MFM/ESDI ones.  Yeah, he still has punch cards in the garage along with his oscilloscope.  I do indeed remember the talk about hard drive limits at that time.  I learned to program in Fortran, BASIC, C, and COBOL, but that's all been erased from my mind and replaced with useless medical knowledge.


I still read through the IEEE magazines from time to time, and I have the 2011 issue talking about digital currency and Bitcoin.

Anyway, the circuit will be revolutionized - it just needs a Eureka moment.  It's not my field of study, so I get my information from stuff like this and from other forum members:
https://www.youtube.com/watch?v=gjx5y9lrtwU
hero member
Activity: 854
Merit: 510
September 03, 2014, 01:39:29 AM
#65
...  Moore's Law does appear to be reaching an end. ....

Heard the same thing in the late 1980's.   It is more likely that there will be a shift to other approaches, like stacking many layers or something else.   Still, that doesn't mean it will be a few guys in a bar figuring out how to do it.

I was on my 3rd PC in the late 80s and I never heard such a thing.  I did hear that they would have problems with the clocks because of possible interference with other electronics - turned out that didn't happen  Cheesy

The basic transistor gate design has to change (i.e. go 3D), but even that can only go so far.  The concept of a switch or gate itself has to change - graphene or something like that which we haven't used yet.
3rd PC by the late 80's?  You started later than me.   I was on my third computer before 1980.

The talk about Moore's law in the late 80's was mostly around hard disk density and capacity at the time.   I was working as a hardware/software engineer designing sub-systems used in oscilloscopes.   You may have been too young to follow the trade journals at that time.   There weren't many stories in the popular press as computers weren't mainstream.    Anyway, at the time a typical hard disk was measured in MB.   A few-GB drive was a monster, very expensive, and physically very large.

Currently there is a great deal of research on how to improve ICs, and we are a long way from the end of improvements, though that might not be true for shrinking transistor sizes.   Also, much of the new stuff will take years to work its way out to smaller companies unless they are buying general CPUs etc.   I know, for example, that stacking layers of transistors has been in the works for at least 5 years, probably more, but so far I haven't actually seen it.
 
EDIT: In the late 80's there was a general feeling that solid state storage (i.e. flash) would easily replace hard disks within 10 years.   It still hasn't happened, although we are getting closer to the crossover.   In 1995 I wrote a driver for storing information on flash - Intel 28F010s back then, 1 Mb each.   Wow, that was really hard, and there was something like a 90-second erase cycle where if you lost power you lost your flash contents.   Things have come a long way since then.   I know a ton about pre-2000 technology, but after about 2000 I've mostly done pure software.    Grin   My electrical skills are getting rusty and I don't read the journals much now.   Kids take up too much time.
DrG
legendary
Activity: 2086
Merit: 1035
September 03, 2014, 12:38:51 AM
#64
...  Moore's Law does appear to be reaching an end. ....

Heard the same thing in the late 1980's.   It is more likely that there will be a shift to other approaches, like stacking many layers or something else.   Still, that doesn't mean it will be a few guys in a bar figuring out how to do it.

I was on my 3rd PC in the late 80s and I never heard such a thing.  I did hear that they would have problems with the clocks because of possible interference with other electronics - turned out that didn't happen  Cheesy

The basic transistor gate design has to change (i.e. go 3D), but even that can only go so far.  The concept of a switch or gate itself has to change - graphene or something like that which we haven't used yet.
hero member
Activity: 854
Merit: 510
September 02, 2014, 05:51:24 PM
#63
...  Moore's Law does appear to be reaching an end. ....

Heard the same thing in the late 1980's.   It is more likely that there will be a shift to other approaches, like stacking many layers or something else.   Still, that doesn't mean it will be a few guys in a bar figuring out how to do it.
DrG
legendary
Activity: 2086
Merit: 1035
September 02, 2014, 11:48:57 AM
#62
There's no point in going to 14nm when 20nm hasn't even been optimized.  28nm offerings are still sometimes more efficient than their 20nm brethren.  Spending money on improving the current design before stepping down to an expensive process seems the better route.

What 20nm brethren? There's only KnC's half-arsed rushed attempt so far. Also, making ASICs doesn't work the way you are describing it. You don't just take a highly optimised 28nm design and run it through the 20nm fab and end up with a highly optimised 20nm ASIC. The design usually has to be reworked because the actual fab process has changed and once working, it can then be optimised for that specific process.



Well, I meant what you said - it may not have come out right.  Basically, they're trying to take too big a leap if they rush to 14nm.  Since it would be prohibitively expensive to do so, they should just work on optimizing the 20nm and 28nm stuff that they have right now.

TSMC 20nm is fully booked for at least the rest of the year and probably a good portion of next year too. You've basically got Nvidia, AMD and ARM manufacturers fighting for capacity for their next-gen products. The bitcoin ASIC manufacturers would be far better off focusing on 28nm.

Meh, I've been waiting on those new GPUs for ages now.  The 7970s I have from 2012 are still more or less on par with the current crop of GPUs at the end of 2014.  Moore's Law does appear to be reaching an end.

If TSMC is booked up, well then 28nm it is.
legendary
Activity: 826
Merit: 1004
September 02, 2014, 06:34:13 AM
#61
There's no point in going to 14nm when 20nm hasn't even been optimized.  28nm offerings are still sometimes more efficient than their 20nm brethren.  Spending money on improving the current design before stepping down to an expensive process seems the better route.

What 20nm brethren? There's only KnC's half-arsed rushed attempt so far. Also, making ASICs doesn't work the way you are describing it. You don't just take a highly optimised 28nm design and run it through the 20nm fab and end up with a highly optimised 20nm ASIC. The design usually has to be reworked because the actual fab process has changed and once working, it can then be optimised for that specific process.



Well, I meant what you said - it may not have come out right.  Basically, they're trying to take too big a leap if they rush to 14nm.  Since it would be prohibitively expensive to do so, they should just work on optimizing the 20nm and 28nm stuff that they have right now.

TSMC 20nm is fully booked for at least the rest of the year and probably a good portion of next year too. You've basically got Nvidia, AMD and ARM manufacturers fighting for capacity for their next-gen products. The bitcoin ASIC manufacturers would be far better off focusing on 28nm.
DrG
legendary
Activity: 2086
Merit: 1035
September 02, 2014, 03:28:54 AM
#60
There's no point in going to 14nm when 20nm hasn't even been optimized.  28nm offerings are still sometimes more efficient than their 20nm brethren.  Spending money on improving the current design before stepping down to an expensive process seems the better route.

What 20nm brethren? There's only KnC's half-arsed rushed attempt so far. Also, making ASICs doesn't work the way you are describing it. You don't just take a highly optimised 28nm design and run it through the 20nm fab and end up with a highly optimised 20nm ASIC. The design usually has to be reworked because the actual fab process has changed and once working, it can then be optimised for that specific process.



Well, I meant what you said - it may not have come out right.  Basically, they're trying to take too big a leap if they rush to 14nm.  Since it would be prohibitively expensive to do so, they should just work on optimizing the 20nm and 28nm stuff that they have right now.
legendary
Activity: 826
Merit: 1004
September 01, 2014, 06:24:26 PM
#59
There's no point in going to 14nm when 20nm hasn't even been optimized.  28nm offerings are still sometimes more efficient than their 20nm brethren.  Spending money on improving the current design before stepping down to an expensive process seems the better route.

What 20nm brethren? There's only KnC's half-arsed rushed attempt so far. Also, making ASICs doesn't work the way you are describing it. You don't just take a highly optimised 28nm design and run it through the 20nm fab and end up with a highly optimised 20nm ASIC. The design usually has to be reworked because the actual fab process has changed and once working, it can then be optimised for that specific process.

hero member
Activity: 854
Merit: 510
September 01, 2014, 05:28:39 PM
#58
There's no point in going to 14nm when 20nm hasn't even been optimized.  28nm offerings are still sometimes more efficient than their 20nm brethren.  Spending money on improving the current design before stepping down to an expensive process seems the better route.
Good point, and besides, everything I've read indicates that 14nm requires a redesign to get any real advantages.   So companies should be focusing on optimizing first.
DrG
legendary
Activity: 2086
Merit: 1035
September 01, 2014, 05:06:57 PM
#57
There's no point in going to 14nm when 20nm hasn't even been optimized.  28nm offerings are still sometimes more efficient than their 20nm brethren.  Spending money on improving the current design before stepping down to an expensive process seems the better route.
legendary
Activity: 826
Merit: 1004
September 01, 2014, 01:55:30 PM
#56
The only way anyone except Intel is making 14nm bitcoin ASICs is if they pay a shit load of money to Intel.

Or to someone... 14nm is a pretty significant scaling hurdle, as we are starting to build chips with components that are only a few atoms across.

http://www.extremetech.com/computing/97469-is-14nm-the-end-of-the-road-for-silicon-lithography

Very interesting article.    However, it may be possible for companies to get Intel to make the chips.   I expect that would be very costly though - probably currently way out of the reach of smaller companies.

It is possible for external companies to use Intel's fabs. They're making the Stratix 10 14nm FPGAs for Altera and 14nm SoCs for Panasonic, to name two customers.
sr. member
Activity: 420
Merit: 250
September 01, 2014, 02:18:54 AM
#55
Well, either power costs become a major factor, or, if the growth is slow, the 2016 halving will stop it.

Yup, once the halving happens it will be a whole new game.   

What will halving do to miners?  Please elaborate
Block awards are halved, so income per block is halved.

Why would they do that though?
Dude!  You better go learn some basics.  There isn't any wanting involved; it is just part of how bitcoin works.   Bitcoin mining was 50 BTC per block, now it is 25 BTC, and in about 2 years it will be 12.5 BTC.   It will keep halving roughly every four years (every 210,000 blocks) until the award is just dust.

Anyway, run, don't walk, to a bitcoin wiki and read up on some of the basics.

Sorry I misunderstood you.  I didn't know you were referring to the block award.  Yes I am familiar with that.  Thanks for the post.
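For reference, here is a minimal sketch of the reward schedule being described above, assuming the standard 210,000-block halving interval (roughly four years per halving): the subsidy starts at 50 BTC and is cut in half at each step until it rounds down to nothing.

Code:
# Block subsidy by height: 50 BTC, halved every 210,000 blocks.
# Computed in whole satoshis with a right shift so the amounts stay exact.
def block_subsidy_btc(height, halving_interval=210_000):
    halvings = height // halving_interval
    if halvings >= 64:                    # shifted past the last satoshi
        return 0.0
    satoshis = (50 * 100_000_000) >> halvings
    return satoshis / 100_000_000

for h in (0, 210_000, 420_000, 630_000):
    print(h, block_subsidy_btc(h))        # 50.0, 25.0, 12.5, 6.25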
hero member
Activity: 854
Merit: 510
September 01, 2014, 02:16:11 AM
#54
Well, either power costs become a major factor, or, if the growth is slow, the 2016 halving will stop it.

Yup, once the halving happens it will be a whole new game.  

What will halving do to miners?  Please elaborate
Block awards are halved, so income per block is halved.

Why would they do that though?
Dude!  You better go learn some basics.  There isn't any wanting involved; it is just part of how bitcoin works.   Bitcoin mining was 50 BTC per block, now it is 25 BTC, and in about 2 years it will be 12.5 BTC.   It will keep halving roughly every four years (every 210,000 blocks) until the award is just dust.

Anyway, run, don't walk, to a bitcoin wiki and read up on some of the basics.
sr. member
Activity: 420
Merit: 250
September 01, 2014, 12:52:23 AM
#53
Well, either power costs become a major factor, or, if the growth is slow, the 2016 halving will stop it.

Yup, once the halving happens it will be a whole new game.   

What will halving do to miners?  Please elaborate
Block awards are halved, so income per block is halved.

Why would they do that though?
hero member
Activity: 854
Merit: 510
September 01, 2014, 12:19:48 AM
#52
Well, either power costs become a major factor, or, if the growth is slow, the 2016 halving will stop it.

Yup, once the halving happens it will be a whole new game.   

What will halving do to miners?  Please elaborate
Block awards are halved, so income per block is halved.
sr. member
Activity: 420
Merit: 250
August 31, 2014, 11:50:55 PM
#51
Well, either power costs become a major factor, or, if the growth is slow, the 2016 halving will stop it.

Yup, once the halving happens it will be a whole new game.   

What will halving do to miners?  Please elaborate
hero member
Activity: 854
Merit: 510
August 31, 2014, 11:10:18 PM
#50
The only way anyone except Intel is making 14nm bitcoin ASICs is if they pay a shit load of money to Intel.

Or to someone... 14nm is a pretty significant scaling hurdle, as we are starting to build chips with components that are only a few atoms across.

http://www.extremetech.com/computing/97469-is-14nm-the-end-of-the-road-for-silicon-lithography

Very interesting article.    However, it may be possible for companies to get Intel to make the chips.   I expect that would be very costly though - probably currently way out of the reach of smaller companies.