Topic: IBM takes a leap to 7nm (Read 3251 times)

member
Activity: 99
Merit: 10
June 18, 2016, 03:46:12 AM
#61
BTW - the 8086 didn't have enough memory space to even THINK about trying to process Bitcoin - not sure if the 80386 did for that matter.

Pentium - perhaps, SHA256 isn't that much harder to process than RC5 depending on the RC5 key length - but it would be incredibly SLOW doing so.
If I had to use something out of that generation I'd go with the AMD K5 over the Pentium.


First of all, I can assure you one can (easily? maybe) rewrite the Bitcoin code so it compiles and runs on an 8086; any Turing-complete machine is capable of doing this job.
Secondly, SLOW is good, SLOW is great, SLOW is what we need in Bitcoin. Bitcoin sets the difficulty higher and higher to slow things down.

Think! Making faster and faster machines to mine is an attack against Bitcoin in its essence. Bitcoin 'defends' against this attack by increasing difficulty - a good defense - but why should one be happy and excited about advances in ASICs? ASICs have historically been a counter-cryptography attack. Think about it!



One word dude             GREED      Roll Eyes
one of the seven deadly sins   Undecided


Lol! Indeed it is greed.
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
June 18, 2016, 12:07:51 AM
#60
BTW - the 8086 didn't have enough memory space to even THINK about trying to process Bitcoin - not sure if the 80386 did for that matter.

Pentium - perhaps, SHA256 isn't that much harder to process than RC5 depending on the RC5 key length - but it would be incredibly SLOW doing so.
If I had to use something out of that generation I'd go with the AMD K5 over the Pentium.


First of all, I can assure you one can (easily? maybe) rewrite the Bitcoin code so it compiles and runs on an 8086; any Turing-complete machine is capable of doing this job.
Secondly, SLOW is good, SLOW is great, SLOW is what we need in Bitcoin. Bitcoin sets the difficulty higher and higher to slow things down.

Think! Making faster and faster machines to mine is an attack against Bitcoin in its essence. Bitcoin 'defends' against this attack by increasing difficulty - a good defense - but why should one be happy and excited about advances in ASICs? ASICs have historically been a counter-cryptography attack. Think about it!



One word dude             GREED      Roll Eyes
one of the seven deadly sins   Undecided
legendary
Activity: 2212
Merit: 1001
June 17, 2016, 05:08:58 PM
#59
BTW - the 8086 didn't have enough memory space to even THINK about trying to process Bitcoin - not sure if the 80386 did for that matter.

Pentium - perhaps, SHA256 isn't that much harder to process than RC5 depending on the RC5 key length - but it would be incredibly SLOW doing so.
If I had to use something out of that generation I'd go with the AMD K5 over the Pentium.


First of all, I can assure you one can (easily? maybe) rewrite the Bitcoin code so it compiles and runs on an 8086; any Turing-complete machine is capable of doing this job.
Secondly, SLOW is good, SLOW is great, SLOW is what we need in Bitcoin. Bitcoin sets the difficulty higher and higher to slow things down.

Think! Making faster and faster machines to mine is an attack against Bitcoin in its essence. Bitcoin 'defends' against this attack by increasing difficulty - a good defense - but why should one be happy and excited about advances in ASICs? ASICs have historically been a counter-cryptography attack. Think about it!



One word dude             GREED      Roll Eyes
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
June 17, 2016, 09:21:31 AM
#58
BTW - the 8086 didn't have enough memory space to even THINK about trying to process Bitcoin - not sure if the 80386 did for that matter.

Pentium - perhaps, SHA256 isn't that much harder to process than RC5 depending on the RC5 key length - but it would be incredibly SLOW doing so.
If I had to use something out of that generation I'd go with the AMD K5 over the Pentium.


First of all, I can assure you one can (easily? maybe) rewrite the Bitcoin code so it compiles and runs on an 8086; any Turing-complete machine is capable of doing this job.
Secondly, SLOW is good, SLOW is great, SLOW is what we need in Bitcoin. Bitcoin sets the difficulty higher and higher to slow things down.

Think! Making faster and faster machines to mine is an attack against Bitcoin in its essence. Bitcoin 'defends' against this attack by increasing difficulty - a good defense - but why should one be happy and excited about advances in ASICs? ASICs have historically been a counter-cryptography attack. Think about it!
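(For anyone curious what that 8086 would actually have to grind through: the inner loop of mining is just a double SHA-256 over the 80-byte block header with the nonce varied. A minimal sketch, not anyone's production miner - it assumes OpenSSL's one-shot SHA256() helper, a dummy header, and a crude trailing-zero-byte check standing in for the real target comparison.)

Code:
/* Sketch of the mining inner loop: double SHA-256 of an 80-byte header.
 * Assumes OpenSSL (link with -lcrypto); header bytes here are dummies. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

int main(void) {
    uint8_t header[80] = {0};   /* version | prev hash | merkle root | time | bits | nonce */
    uint8_t h1[SHA256_DIGEST_LENGTH], h2[SHA256_DIGEST_LENGTH];

    for (uint32_t nonce = 0; nonce < 10000000; nonce++) {
        memcpy(header + 76, &nonce, 4);     /* nonce lives in the last 4 bytes */
        SHA256(header, sizeof header, h1);  /* first pass */
        SHA256(h1, sizeof h1, h2);          /* second pass: Bitcoin hashes twice */

        /* A real miner compares the hash against the 256-bit target;
           counting zero bytes at the high end is a toy stand-in for that. */
        if (h2[31] == 0 && h2[30] == 0) {
            printf("nonce %u meets the toy target\n", nonce);
            break;
        }
    }
    return 0;
}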

sr. member
Activity: 473
Merit: 250
Sodium hypochlorite, acetone, ethanol
June 16, 2016, 04:38:07 AM
#57
i would go with a Nintendo 8 Bit

http://retrominer.com/

legendary
Activity: 1498
Merit: 1030
June 16, 2016, 03:05:05 AM
#56
BTW - the 8086 didn't have enough memory space to even THINK about trying to process Bitcoin - not sure if the 80386 did for that matter.

Pentium - perhaps, SHA256 isn't that much harder to process than RC5 depending on the RC5 key length - but it would be incredibly SLOW doing so.
If I had to use something out of that generation I'd go with the AMD K5 over the Pentium.



legendary
Activity: 1498
Merit: 1030
June 16, 2016, 02:56:34 AM
#55

The history of large monolithic mining ASICs like the Minion (Hashfast?) or BFL's Monarch chips with lots of cores/pipelines/engines/whatever ya want to call them (let's stick to 'cores') is a travesty of wasted and stolen money (pre-order$).


 Not ALWAYS the case.

 Consider the Spondoolies "Rockerbox" chips in the SP20 and bigger same-gen miners.

 
 There are tradeoffs to make on the chip size debate - smaller chips = higher yield and less heat per chip but you need a LOT MORE CHIPS to achieve comparable miner performance, which leads to more-complex BOARD level design and a lot more components to go bad, as well as more complexity on heatsink design / air path routing.


Also consider the Gridseed GC3355 - small chip, yet the miners it went into MOSTLY ended up with very poor reliability due to BAD board-level designs (the Orb EVENTUALLY managed to achieve decent reliability, but the early versions had a TON of issues; the BLADES never overcame their crap buck-converter design issues).

legendary
Activity: 1456
Merit: 1175
Always remember the cause!
June 15, 2016, 11:39:22 PM
#54
Nice, but we're years away from seeing that in a miner.

Can't understand why people welcome this stuff.  Huh

I'm not legendary in this forum or in blockchain, but I'm sure enough to say there is no advantage in ASIC nm development for the community to enjoy; on the contrary, there is a lot of trouble here: decentralization and the core reliability of the blockchain will be put in danger.

We don't need ASICs (be they 28 or 14 or 7 nm) to have an average of one block mined every 10 minutes. I think (supposedly) we don't even need a Pentium processor; a bunch of 8086s distributed almost evenly between some 10 distinct people would suffice for the blockchain to survive and perform well.

Most of the people commenting here are just lost in the hype of 'advanced technology'. Blockchain is not such a naive technology; it is not mobile, it is not Google, it is peer to peer, for God's sake!
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
June 15, 2016, 10:45:57 PM
#53
From the highest yardarm, matey, with "them" being the designers Wink
legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
June 15, 2016, 10:28:48 PM
#52
The Minion was BlackArrow. Hashfast's big chip had four independent dies on one BGA base; I know they had separate power rails, but I don't know about separate grounds, or you probably actually could have strung them.
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
June 15, 2016, 08:12:01 PM
#51
So the smaller the chip the better? Can someone explain to me why that is? I figure the bigger the chip the more it can handle but I guess that's not the case here is it.

Just to avoid any confusion I'm assuming you mean big chip = lots of pipelines, so more hashing power, small chip = less etc.  
yes.
The history of large monolithic mining ASICs like the Minion (Hashfast?) or BFL's Monarch chips with lots of cores/pipelines/engines/whatever ya want to call them (let's stick to 'cores') is a travesty of wasted and stolen money (pre-order$). Whoever did the designs for those and others had no idea about feeding power to and removing heat from those chips. Purely an afterthought. Hell, even on 'small' chips Bitmine.ch kept trying to use little stick-on chipsinks for their A1 (and of course having many fall off or burn off). It took the Dragon and clones of it to show them that ya need serious sinks for the top...

With proper attention from the start to even just those 2 points (with #1 being feeding clean power to the logic in the chip) the idea does have merit.

Hmm, possibly do it with several internal banks of string-config cores perhaps? Lets ya raise the voltage and drop the current needs vs running all them cores in parallel at low Vcore/massive current... Okay, if a mining chip maker tries it, you first read of the idea here from me.....
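(To put rough numbers on the string idea: a hedged, purely illustrative sketch of how putting cores in series trades current for voltage at the same chip power. The 0.65 V core voltage and 78 W per chip are made-up assumptions, not figures for any real part.)

Code:
/* Hedged, illustrative arithmetic for the "internal strings of cores" idea:
 * N cores in series multiply the rail voltage by N and divide the rail
 * current by N for the same chip power. All numbers are assumptions. */
#include <stdio.h>

int main(void) {
    const double v_core = 0.65;   /* volts per core (assumed) */
    const double p_chip = 78.0;   /* watts per chip (assumed) */

    for (int n = 1; n <= 8; n *= 2) {       /* cores per series string */
        double v_rail = v_core * n;
        double i_rail = p_chip / v_rail;    /* same power, much lower current */
        printf("string of %d cores: %.2f V rail, %5.1f A\n", n, v_rail, i_rail);
    }
    return 0;
}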
legendary
Activity: 1666
Merit: 1185
dogiecoin.com
June 15, 2016, 04:02:16 PM
#50
No one is going to be seeing a 7nm miner in the next 3-4 years, that's almost certain (assuming Bitcoin is still around then) because it's still at the prototyping stage; the process has to be fully engineered and characterised and that will be a major challenge in itself.

I thought a bitcoin mining ASIC chip is pretty basic and straightforward. Wouldn't it be wise to prototype and calibrate the machines using mining chips since they are so simple? Just asking.

So they're not great for that, because an ASIC can survive many cores being trash as they can just be disabled. A chip like an i7 4670k can't have pretty much anything but some cache disabled without having to be thrown out or significantly binned.


So the smaller the chip the better? Can someone explain to me why that is? I figure the bigger the chip the more it can handle but I guess that's not the case here is it.

The number refers to the "feature size" of items etched onto the die (as I understand it). The smaller, the more that can be squeezed onto the die.

More than that, the smaller the process node, the less power it takes to do exactly what you did before. Consider a jigsaw where its overall size is how much power it uses: if you can make smaller pieces, you can make an overall smaller jigsaw, and maybe even add more detail or pieces. For the same power budget you get more done.
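(Roughly, the jigsaw analogy maps onto the standard dynamic-power relation: energy per switch is about half of C times V squared, and ideal "Dennard-style" scaling shrinks both C and V with feature size. A hedged back-of-envelope below - the capacitance and voltage figures are made-up illustrations, and real processes stopped scaling voltage this well long ago.)

Code:
/* Hedged back-of-envelope: dynamic switching energy E = 0.5 * C * V^2.
 * Numbers are illustrative only; ideal scaling is assumed, which modern
 * nodes no longer fully deliver (supply voltage barely scales any more). */
#include <stdio.h>

int main(void) {
    double c_28nm = 1.0e-15, v_28nm = 0.90;   /* assumed gate capacitance (F) and supply (V) */
    double s = 16.0 / 28.0;                   /* ideal linear shrink, 28 nm -> 16 nm */
    double c_16nm = c_28nm * s;               /* capacitance scales roughly with feature size */
    double v_16nm = v_28nm * s;               /* ideal (Dennard) voltage scaling, optimistic */

    double e28 = 0.5 * c_28nm * v_28nm * v_28nm;
    double e16 = 0.5 * c_16nm * v_16nm * v_16nm;
    printf("energy per switch: %.3g J -> %.3g J (%.1fx less, ideally)\n",
           e28, e16, e28 / e16);
    return 0;
}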
legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
June 15, 2016, 03:15:06 PM
#49
Power density becomes a big problem with large dies and requires complex or exotic cooling systems. High current density is going to kill peripheral parts count and regulation efficiency.
sr. member
Activity: 441
Merit: 250
June 15, 2016, 03:08:00 PM
#48
So the smaller the chip the better? Can someone explain to me why that is? I figure the bigger the chip the more it can handle but I guess that's not the case here is it.

Just to avoid any confusion, I'm assuming you mean big chip = lots of pipelines, so more hashing power, small chip = less, etc. You are not referring to the process node or feature size, i.e. 40nm, 28nm and so on? If it's the latter, sorry for butting in. If it's the former, most of the ASIC chip designs have tended to be smaller rather than larger; the logic goes that you have a better chance of avoiding defects on the silicon wafers if you have smaller chips, and even if you do hit one it's a smaller and less costly chip that becomes useless. Bigger chips do have other things on their side, and there are valid arguments for both points of view. Personally, my view is that if Intel and others can routinely make incredibly complex chips of 250 mm² with very high yields, then making a 120-150 mm² mining chip with simple, repeated circuitry should be relatively simple.
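(The "smaller chips yield better" logic can be put in numbers with the simple Poisson yield model, Y = exp(-A * D0). The defect density below is an assumed, illustrative figure, not anything from a real foundry.)

Code:
/* Hedged sketch: Poisson die-yield model Y = exp(-A * D0).
 * D0 (defects per cm^2) is an assumption for illustration only. */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double d0 = 0.10;                       /* assumed defects per cm^2 */
    const double areas_mm2[] = { 50.0, 120.0, 250.0 };

    for (int i = 0; i < 3; i++) {
        double a_cm2  = areas_mm2[i] / 100.0;     /* mm^2 -> cm^2 */
        double yield  = exp(-a_cm2 * d0);         /* fraction of defect-free dies */
        printf("%.0f mm^2 die: ~%.0f%% yield\n", areas_mm2[i], 100.0 * yield);
    }
    return 0;
}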
sr. member
Activity: 475
Merit: 265
Ooh La La, C'est Zoom!
June 15, 2016, 01:35:58 PM
#47
So the smaller the chip the better? Can someone explain to me why that is? I figure the bigger the chip the more it can handle but I guess that's not the case here is it.

This is a bit dated with respect to feature sizes, but the topics explored are all the same.
http://ask.metafilter.com/49898/Why-does-computer-chip-process-size-have-to-keep-getting-smaller

- zed
legendary
Activity: 1274
Merit: 1000
June 15, 2016, 12:17:01 PM
#46
So the smaller the chip the better? Can someone explain to me why that is? I figure the bigger the chip the more it can handle but I guess that's not the case here is it.

Also look at it this way: 22-gauge wire is worse in some cases, but 12-gauge wire is better and you can do more with it - the number gets smaller but the wire gets bigger. In this case the transistors they put on the chip are super small, and making them smaller actually allows for less power usage, makes it run better (or is supposed to), and gives it more room, so they can do more per chip. That's kind of the layman's terms: the node number is smaller, but the chip can do more with less of everything, and it's supposed to make less heat too, though that one is still kind of a problem Smiley
alh
legendary
Activity: 1846
Merit: 1052
June 15, 2016, 09:55:20 AM
#45
So the smaller the chip the better? Can someone explain to me why that is? I figure the bigger the chip the more it can handle but I guess that's not the case here is it.

The number refers to the "feature size" of items etched onto the die (as I understand it). The smaller, the more that can be squeezed onto the die.
sr. member
Activity: 475
Merit: 265
Ooh La La, C'est Zoom!
June 14, 2016, 09:43:39 PM
#44
...it gets cooler? Dang, I'm in the wrong business. That's freakin' sweet.

I believe that the proper term of art for the equipment/process is "electron beam prober"
https://en.wikipedia.org/wiki/Electron_beam_prober
sr. member
Activity: 434
Merit: 250
June 14, 2016, 09:33:10 PM
#43
So the smaller the chip the better? Can someone explain to me why that is? I figure the bigger the chip the more it can handle but I guess that's not the case here is it.
sr. member
Activity: 475
Merit: 265
Ooh La La, C'est Zoom!
June 14, 2016, 09:29:43 PM
#42
If it's a "legit" co-lo facility, security can be part of the package. How secure depends on the facility and how much you want to pay...
Paying for physical security and achieving physical security always were (and are) two different things. I do remember working at one of those "secure colocation centers" which had armed guards and hand-geometry scanners near the front door, and a wide-open elephant door in the back where theft was going out by the truckload.

Co-location was and still is full of completely fraudulent security theater.

Since this thread has IBM in the title I also remember that IBM used to do "security ratings" for their partners. The suburban office with separate alarm circuit for the IBM-partner equipment room, no guards whatsoever and the receptionist and all employees mutually recognizing each other by sight would get higher rating than that "secure colocation center".

Edit: Found the link to those "secure" co-locators: https://en.wikipedia.org/wiki/Exodus_Communications
 

Yep.  Caveat emptor, and verify everything if it is that important.

I did a security guard gig for a couple of years while I was going to college before I worked for the chip maker (also while going to college). I can't/won't vouch for the other shifts but when I was on duty I knew who was where and who belonged where. I also knew when there was someone new on the shift. I know I annoyed the fsck out of people making sure that they had their required ID, but it kept the questions down when "things" happened, like the time I discovered the door to the CEO's office unlocked.
legendary
Activity: 2128
Merit: 1073
June 14, 2016, 09:10:40 PM
#41
If it's a "legit" co-lo facility, security can be part of the package. How secure depends on the facility and how much you want to pay...
Paying for physical security and achieving physical security always were (and are) two different things. I do remember working at one of those "secure colocation centers" which had armed guards and hand-geometry scanners near the front door, and a wide-open elephant door in the back where theft was going out by the truckload.

Co-location was and still is full of completely fraudulent security theater.

Since this thread has IBM in the title I also remember that IBM used to do "security ratings" for their partners. The suburban office with separate alarm circuit for the IBM-partner equipment room, no guards whatsoever and the receptionist and all employees mutually recognizing each other by sight would get higher rating than that "secure colocation center".

Edit: Found the link to those "secure" co-locators: https://en.wikipedia.org/wiki/Exodus_Communications
 
legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
June 14, 2016, 08:54:26 PM
#40
...it gets cooler? Dang, I'm in the wrong business. That's freakin' sweet.
sr. member
Activity: 475
Merit: 265
Ooh La La, C'est Zoom!
June 14, 2016, 08:48:23 PM
#39
Then the only remaining issue is the physical security: that the miners using experimental chips did not go missing in the night or the owner of premises for the mining farm repossesses/places a lien on the equipment for nonpayment of the rent or electricity bills.

If it's a "legit" co-lo facility, security can be part of the package. How secure depends on the facility and how much you want to pay...

Edit: I remember seeing the photographs of the DRAM banks for the aforementioned hardware RAM disks. The DRAM chips were all in DIP packages and all socketed. To avoid theft the DRAM banks were then cemented to steel plates on top to prevent individual removal. One of the "thefts" to prevent was actually not theft (chips going missing) but unauthorized replacement (an experimental chip replaced with an off-the-shelf equivalent). Apparently in the DRAM business seeing and measuring un-binned chips would allow detailed reverse-engineering of the manufacturing process. This includes not only taking actual possession of the experimental chip and de-capping it, but also a temporary removal from the original socket to run a battery of post-manufacturing electrical tests and then putting the chip back in the original circuit.

When I was working for a prominent US DSP chip maker on one of their engineering test floors, there was a Schlumberger scanning electron microscope (I believe) that was used to test silicon. What it displayed on the screen was pretty interesting and what it could do was pretty sweet. You could see all sorts of details, and based on the colors tell what elements were used and whether a particular trace was energized or not, and if I recall you could tell the difference between voltage levels. Virtual probes allowed you to measure signals and display them in a "scope" window. Granted this was in 1989-1990 time-frame so I know there is way more capable and cooler sch!@t available now.

- zed

EDIT: fixed spelling of Schlumberger...
legendary
Activity: 2128
Merit: 1073
June 14, 2016, 08:13:49 PM
#38
For us lowly 'small' miners best we can do is the recycle yard or pass-it-down resale.
I was under impression that many of the 'small' miners are actually hosting/co-locating their mining equipment at the 'moderately large' mining farms. That would be an equivalent of the olden mainframe days where many 'small' mainframe owners were actually co-locating them at reasonably large data centers and frequently renting/leasing peripheral devices (like large hardware RAM disks) only when needed.

Then the only remaining issue is the physical security: that the miners using experimental chips did not go missing in the night or the owner of premises for the mining farm repossesses/places a lien on the equipment for nonpayment of the rent or electricity bills.

Edit: I remember seeing the photographs of the DRAM banks for the aforementioned hardware RAM disks. The DRAM chips were all in DIP packages and all socketed. To avoid theft the DRAM banks were then cemented to steel plates on top to prevent individual removal. One of the "thefts" to prevent was actually not theft (chips going missing) but unauthorized replacement (an experimental chip replaced with an off-the-shelf equivalent). Apparently in the DRAM business seeing and measuring un-binned chips would allow detailed reverse-engineering of the manufacturing process. This includes not only taking actual possession of the experimental chip and de-capping it, but also a temporary removal from the original socket to run a battery of post-manufacturing electrical tests and then putting the chip back in the original circuit.
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
June 14, 2016, 06:55:22 PM
#37
For industrial-scale BTC farms and other large blockchain processing concerns, ja, it might, if IBM & friends could be talked into it. BW's dealing with Samsung may or may not help there. The peta-farms are the only ones with large enough equipment turnover and certainty of where the gear is at (ownership) to make it worthwhile. For us lowly 'small' miners the best we can do is the recycle yard or pass-it-down resale.
legendary
Activity: 2128
Merit: 1073
June 14, 2016, 06:31:09 PM
#36
To put it in short form: ja, miner ASICs could make very simple, handy process targets to tinker with. The companies researching 10/7 are not about to make the process targets for-sale items, though.

The fact that IBM and friends have already done a functional, mammoth-transistor-count 7nm test chip says that they are way beyond needing simple targets and have to concentrate on the process and materials basics to make the process usable on a commercially viable scale.
The "for-sale" problem is usually solved by leasing the new chips with mandatory return to the manufacturer after the end of the useful life of the product. I've learned about this trick a while back: hardware RAM disk (mainframe device) manufacturers would lease large quantities of experimental and not-up-to-spec DRAM chips to assemble their "disk drives". IBM (and others) were happy to just lease those "RAM disks".

The general idea of leasing would actually mesh quite well with the Bitcoin mining business model.
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
June 14, 2016, 05:54:22 PM
#35
No one is going to be seeing a 7nm miner in the next 3-4 years, that's almost certain (assuming Bitcoin is still around then) because it's still at the prototyping stage; the process has to be fully engineered and characterised and that will be a major challenge in itself.

I thought a bitcoin mining ASIC chip is pretty basic and straightforward. Wouldn't it be wise to prototype and calibrate the machines using mining chips since they are so simple? Just asking.
To put it in short form: ja, miner ASICs could make very simple, handy process targets to tinker with. The companies researching 10/7 are not about to make the process targets for-sale items, though.

The fact that IBM and friends have already done a functional, mammoth-transistor-count 7nm test chip says that they are way beyond needing simple targets and have to concentrate on the process and materials basics to make the process usable on a commercially viable scale.

For one, the EUV laser-based light source (last I heard from Trumpf) is still falling short of the expected target of 250 W of EUV light on-target (at the photomasks). Current output of the system is around 125-150 W. Usable, but not a profit maker. To say the least, zapping 50,000 10 µm droplets of tin per second with several MW of peak power to turn them into the EUV-emitting plasma, and then collecting/focusing the light, is no small feat. Did I mention that it is also all done under high vacuum? Air is a very strong absorber of the wavelength produced.
legendary
Activity: 1904
Merit: 1007
June 14, 2016, 04:22:57 PM
#34
No one is going to be seeing a 7nm miner in the next 3-4 years, that's almost certain (assuming Bitcoin is still around then) because it's still at the prototyping stage; the process has to be fully engineered and characterised and that will be a major challenge in itself.

I thought a bitcoin mining ASIC chip is pretty basic and straightforward. Wouldn't it be wise to prototype and calibrate the machines using mining chips since they are so simple? Just asking.
sr. member
Activity: 441
Merit: 250
June 14, 2016, 02:00:56 PM
#33
No one is going to be seeing a 7nm miner in the next 3-4 years, that's almost certain (assuming Bitcoin is still around then) because it's still at the prototyping stage; the process has to be fully engineered and characterised and that will be a major challenge in itself. Even when IBM or the others start taking orders it'll be for chips that really, really need the density and/or power characteristics. The NREs will initially be truly horrific.

As for some of the comments about devices with lots of 'cores' (or more likely pipelines): it's no more difficult to make a big chip with lots of them than a small chip with few. There is such a thing as Design for Manufacturability, which a lot of chip designers seem to ignore, just as they ignore the importance of proper product engineering at the back end. Chip design is, or rather should be, a team effort.

Just my views, please feel free to express yours.
legendary
Activity: 1904
Merit: 1007
June 14, 2016, 01:39:36 PM
#32
So, Mr. 2112, Do you think we'll be seeing 7nm miners soon in the industry?

He should be the Lead Designer/Project Manager for this chip!
member
Activity: 99
Merit: 10
June 14, 2016, 03:30:47 AM
#31
I suspect IBM will share the tech with their partners - eventually - but they have to get it working reliably first.
The last part of the above sentence is plainly wrong. IBM first shares the technology with the partners to debug it, then once it works reliably starts selling for money to the regular customers. This is why IBM is so picky when forming partnerships: they have to be reasonably assured that the partner relationship will provide them with truthful feedback required to achieve IBM's customers' expectations of quality.

You seem to have this partnership relationship backwards.


So, Mr. 2112, Do you think we'll be seeing 7nm miners soon in the industry?
legendary
Activity: 2128
Merit: 1073
June 14, 2016, 02:51:24 AM
#30
I suspect IBM will share the tech with their partners - eventually - but they have to get it working reliably first.
The last part of the above sentence is plainly wrong. IBM first shares the technology with the partners to debug it, then once it works reliably starts selling for money to the regular customers. This is why IBM is so picky when forming partnerships: they have to be reasonably assured that the partner relationship will provide them with truthful feedback required to achieve IBM's customers' expectations of quality.

You seem to have this partnership relationship backwards.
legendary
Activity: 1498
Merit: 1030
June 14, 2016, 02:02:45 AM
#29
The part that caught my eye was the "die made of a Silicon/Germanium alloy"

 One of the reasons Silicon took over from early Germanium usage was that it could handle HEAT a lot better - gotta wonder what the heat limits on this new alloy are going to be like, and the power handling capability.

On the other hand, the band gap of germanium is quite a bit smaller than that of silicon (roughly the 0.3 V vs 0.7 V diode drops, or about 0.67 eV vs 1.12 eV band gap, IIRC), which should help a little on lowering power usage.


I'm not sure about the A1, but the Innosilicon A2 has 432 hash cores per chip (going by the highest core counts on my 88 MH unit; the software doesn't show that on the 110s).
I'd guess the A1 should have had MORE, as SHA256 is a simpler algorithm than Scrypt is.



 I suspect IBM will share the tech with their partners - eventually - but they have to get it working reliably first. I think the claim about 7nm being in full production by 2017 is crazy and 2018 wildly optimistic, especially since they need new wafer manufacturing of their NEW wafer material to happen to SUPPORT the new process - that's not an overnight thing to achieve. I just don't think the INFRASTRUCTURE to support the new tech is in place enough to manage 2018, much less 2017 - and even if it IS managed by 2018, IBM will be using it internally for a while first before they start farming it out at all.


 I doubt we'll see this technology used in cryptocoin mining before 2020 - and wouldn't BET on it by then.

legendary
Activity: 2128
Merit: 1073
June 13, 2016, 11:41:50 PM
#28
Ja. Silly question perhaps, but how many hashing cores/engines do Bitmain's chips have (or Avalon's, BitFury's)? In the 1385 data sheet they always refer to things as 'core'. Singular, not plural or more.

I believe the Monarch had 256 cores/chip or something like that. I know Inno/Bitmine.ch's A1 had "27 highly optimized hashing engines based on custom ASIC cells up to 1.6M" (quote from the A1 spec sheet). IF what they refer to as cells are actually gates (or if each cell is actually 2 gates) then that raises the count, but even with 256 cores and nearly 10x more gates it's a long way from the nearly 2 billion in the latest (affordable) CPUs/GPUs.

2112, wanna pop in on this? Should be right up your alley.
I really don't have anything important to add to this thread.

1) As far as core counts: I wouldn't pay much attention to the marketing blurbs. Technical accuracy isn't an objective there. The words and sentences are mostly meaningless, whatever sounds most impressive wins. Only trust sidehack's calculations or equivalent methods. "Custom cell" may mean as little as "standard cell" with  JTAG/testing connections/circuitry removed/defeated.

2) As far as IBM sharing access to the 7nm process with fabless boutique designers: I see no problem from the IBM side; IBM typically is quite good about sharing their technology with their long-term partners. I see a problem from the coin mining entrepreneurs' side: this niche doesn't attract trustworthy/stable people. IBM does thorough vetting of their prospective partners / developers / resellers. Even people much less skeezy than in the Bitcoin milieu fail to make the entry grade in the IBM Developer's Program. Even Spondoolies, the least weird company in the niche, found hiring full-time hardware engineering staff impossible.
legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
June 13, 2016, 08:48:55 PM
#27
I just did the math. Frequency times chip count times core count equals hashrate, for the numbers given for an S7.
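(For concreteness, a hedged version of that arithmetic, assuming the commonly quoted S7 figures - 135 chips at 700 MHz, the 50 cores per BM1385 listed below, and one hash per core per clock. The chip count and clock are my assumptions, not sidehack's words.)

Code:
/* Hedged check of "frequency x chips x cores = hashrate" for an S7.
 * 135 chips and 700 MHz are commonly quoted figures (assumed here);
 * 50 cores per BM1385 is from the core counts given later in the thread. */
#include <stdio.h>

int main(void) {
    const double freq_hz = 700e6;   /* per-core clock (assumed) */
    const double chips   = 135.0;   /* chips per S7 (assumed) */
    const double cores   = 50.0;    /* cores per BM1385 */

    double hashrate = freq_hz * chips * cores;      /* one hash per core per clock */
    printf("%.3f TH/s\n", hashrate / 1e12);         /* ~4.725, close to the S7's 4.73 TH/s rating */
    return 0;
}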
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
June 13, 2016, 08:20:59 PM
#26
BM1380 had 8, 1382 had 64 I think, BM1384 55, BM1385 50.
Hmm, just looked at the 1385 data sheet and dinna see it spelled out. All references are singular, as in 'core'.
legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
June 13, 2016, 08:18:28 PM
#25
BM1380 had 8, 1382 had 64 I think, BM1384 55, BM1385 50.
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
June 13, 2016, 07:53:11 PM
#24
Ja. Silly question perhaps, but how many hashing cores/engines do Bitmain's chips have (or Avalon's, BitFury's)? In the 1385 data sheet they always refer to things as 'core'. Singular, not plural or more.

I believe the Monarch had 256 cores/chip or something like that. I know Inno/Bitmine.ch's A1 had "27 highly optimized hashing engines based on custom ASIC cells up to 1.6M" (quote from the A1 spec sheet). IF what they refer to as cells are actually gates (or if each cell is actually 2 gates) then that raises the count, but even with 256 cores and nearly 10x more gates it's a long way from the nearly 2 billion in the latest (affordable) CPUs/GPUs.

2112, wanna pop in on this? Should be right up your alley.
legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
June 13, 2016, 07:10:19 PM
#23
...and then requiring about a 300A multiphase buck at probably 85% efficiency, which is pretty generally a bad idea.
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
June 13, 2016, 07:04:14 PM
#22
As for mining ASICs at the 10/7nm nodes: not for a long, long time. Read the articles again: the focus is on putting massive numbers of gates in the dies, e.g. more CPU cores, more and larger L1/2/3 caches, etc. Not fitting more chips on a wafer or dropping power needs. That implies that doing scads of chips with fewer than several billion gates just ain't worth it.

We've seen the results of ASIC companies trying to fit entire miners on just a couple of huge chips, and the results were not good. Even then they were talking about probably only a few million gates at most in BFL's Monarch chips, for one example...
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
June 13, 2016, 06:54:54 PM
#21
I met the first guy that introduced the first IBM PC. It was a joke, I was told, and a bet; the first day they sold out within hours, and then the PC era started where we could buy affordable ones.
Ja. IBM - and Apple for the most part - had no idea what they had given birth to. For the IBM PC it was the accountants buried in dimly lit back rooms or small storefronts that gave praise to it when the first spreadsheet program came out. One story has several breaking out in tears when they first saw how they could now tally up the books and produce records vs filling in each entry on paper and using a comptometer (adding machine) or doing the math in their heads and scribbling down the tallies...
legendary
Activity: 3136
Merit: 1116
June 13, 2016, 04:07:58 PM
#20
I didn't actually read the article and was just kind of talking outta my ass  Shocked

I actually thought Intel was already using SiGe for their chips in production now, and that for future nodes they were considering III-Vs for the n-channel and pure Ge for the p-channel.

legendary
Activity: 4256
Merit: 8551
'The right to privacy matters'
June 13, 2016, 04:03:07 PM
#19
7nm is wild, especially when we're talking a nanometer is only around 8 atoms across (based on the hydrogen atom); technology continues to impress me.

This is really impressive. At these dimensions you start to get to the point where the doping of the silicon in the channel is provided by a single phosphorus atom, and the electron's wave function is delocalized over the entire device. So, not exactly quantum computing, but kind of quantum computing.
If ya read the entire article and the link to the more detailed Ars Technica piece http://arstechnica.co.uk/gadgets/2015/07/ibm-unveils-industrys-first-7nm-chip-moving-beyond-silicon/ you will find that they are not using just Si.

"Creating a working 7nm chip required moving past pure silicon, IBM revealed. IBM—working with GlobalFoundries, Samsung, SUNY Polytechnic Institute, and others—carved the transistor channels out of silicon-germanium (SiGe) alloy in order to improve electron mobility at such a small scale. Intel has also said 10nm will be the last gasp for pure silicon chips."

Along those (Intel) lines, from the same Ars Technica article, " Earlier this week, a leaked document claimed that Intel was facing difficulties at 10nm and that Cannonlake (due in 2016/2017) had been put on hold. In theory, 7nm should roll around in 2017/2018, but we wouldn't be surprised if it misses that target by some margin."


Yeah, I read Intel had 10nm issues big time.

Frankly, the i5 6600K with 14nm was my first ever Intel CPU to fail. I did a bit of research and it seems to have overheating issues more so than prior generations.

My gut feeling is 14/16 nm will be around longer than prior chips,

and that we may stop at 10nm, not 7nm,

or skip 10nm altogether and jump to 7nm as the alloy will be better.

But I am not a chip guy; you are.
legendary
Activity: 1274
Merit: 1000
June 13, 2016, 04:01:18 PM
#18
Nice, but we're years away from seeing that in a miner.

Or it becoming worthwhile for any manufacturer to stump up what, $10M to have a go at 7nm.

Meh, people have been claiming this since BTC asics first showed up.  It took just over 3 years to go from 130nm to 14/16nm.   This space is so full of retards I think it's quite possible someone is going to start working on 7nm as soon as tools are available.

They already have; someone in the BitFury post about chips posted a paper on it, and the tools are there, so it won't be long. I guess 'long' is a matter of terms, or how one sees time.

Or above my post, he posted it Smiley.
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
June 13, 2016, 03:35:54 PM
#17
7nm is wild, especially when we're talking a nanometer is only around 8 atoms across (based on the hydrogen atom); technology continues to impress me.

This is really impressive. At these dimensions you start to get to the point where the doping of the silicon in the channel is provided by a single phosphorus atom, and the electron's wave function is delocalized over the entire device. So, not exactly quantum computing, but kind of quantum computing.
If ya read the entire article and the link to the more detailed Ars Technica piece http://arstechnica.co.uk/gadgets/2015/07/ibm-unveils-industrys-first-7nm-chip-moving-beyond-silicon/ you will find that they are not using just Si.

"Creating a working 7nm chip required moving past pure silicon, IBM revealed. IBM—working with GlobalFoundries, Samsung, SUNY Polytechnic Institute, and others—carved the transistor channels out of silicon-germanium (SiGe) alloy in order to improve electron mobility at such a small scale. Intel has also said 10nm will be the last gasp for pure silicon chips."

Along those (Intel) lines, from the same Ars Technica article, " Earlier this week, a leaked document claimed that Intel was facing difficulties at 10nm and that Cannonlake (due in 2016/2017) had been put on hold. In theory, 7nm should roll around in 2017/2018, but we wouldn't be surprised if it misses that target by some margin."
legendary
Activity: 4256
Merit: 8551
'The right to privacy matters'
June 13, 2016, 02:56:12 PM
#16
Well, 10nm bleeds/leaks at the moment due to small walls, so to speak.

7nm will be worse.

Intel's 14nm CPUs overheat and fail more than the previous generation,

so ASIC mining will not be doing 7nm anytime soon.

No one knows how good the 14-16 nm are yet,

much less the 10nm,

then the 7nm.

We had 28nm from Bitmain twice: the S5, then the S7.

So the S5 release was Nov 2014.

The S9 release was June 2016; that is 19 months.

The same clock would be Jan 2018 for the 10nm,

and the same clock would be Sept 2019 for the 7nm.

So for now I worry about the S9s coming to me on Tues the 14th of June.
legendary
Activity: 3136
Merit: 1116
June 13, 2016, 02:40:22 PM
#15
7nm is wild, especially when we're talking a nanometer is only around 8 atoms across (based on the hydrogen atom); technology continues to impress me.

This is really impressive. At these dimensions you start to get to the point where the doping of the silicon in the channel is provided by a single phosphorus atom, and the electron's wave function is delocalized over the entire device. So, not exactly quantum computing, but kind of quantum computing.
member
Activity: 99
Merit: 10
June 13, 2016, 03:43:14 AM
#14
Nice, but we're years away from seeing that in a miner.

Well, thinking about it in an optimistic way: if someday in a year or two we get to see some hardware running AsicBoost as well as 7nm tech, it'll be like the next level of efficiency in mining, with a network hashrate of about 10 EH/s. But only if IBM is generous enough to share some details on their 7nm.  Grin

I really hope from the bottom of my heart that this happens.
That would be the ultimate form of mining centralization. Hope that never happens. Because IBM will never open source their design.

Indeed centralization is an issue but still, it can restore home mining to an extent.

Wow, how the hell will this help home miners?? You think you will be able to buy these as soon as they are made?? Or even be able to afford them??

Think again: ONLY Bitmain & Avalon are dumb enough to sell to home miners; all others are doing what with ANY miners they make??

Let's see... what are they doing.... what are they doing.... Oh, selling to the highest "bidder" or self-mining, that's right.................. Cheesy

Bruh! I'd like to answer your query with just one quote: "In a gold rush, the one who earns the most is the one selling the shovels." Hope this helps you understand what I'm trying to say.
legendary
Activity: 2212
Merit: 1001
June 12, 2016, 05:34:31 PM
#13
Nice, but we're years away from seeing that in a miner.

Well, thinking about it in an optimistic way: if someday in a year or two we get to see some hardware running AsicBoost as well as 7nm tech, it'll be like the next level of efficiency in mining, with a network hashrate of about 10 EH/s. But only if IBM is generous enough to share some details on their 7nm.  Grin

I really hope from the bottom of my heart that this happens.
That would be the ultimate form of mining centralization. Hope that never happens. Because IBM will never open source their design.

Indeed centralization is an issue but still, it can restore home mining to an extent.

Wow, how the hell will this help home miners?? You think you will be able to buy these as soon as they are made?? Or even be able to afford them??

Think again: ONLY Bitmain & Avalon are dumb enough to sell to home miners; all others are doing what with ANY miners they make??

Let's see... what are they doing.... what are they doing.... Oh, selling to the highest "bidder" or self-mining, that's right.................. Cheesy
legendary
Activity: 1666
Merit: 1185
dogiecoin.com
June 12, 2016, 02:57:09 PM
#12
And how long between when the first foundry announcements of "16nm is possible" and the first actual marketable chips coming down the line? There was a thread on here a year ago about IBM moving into 7nm. How long is it going to take to make it work once they solve the problems with 10nm, which will probably come after they figure out how to make 14/16 more reliable for mass production?

legendary
Activity: 3374
Merit: 1859
Curmudgeonly hardware guy
June 12, 2016, 02:42:59 PM
#11
And how long between when the first foundry announcements of "16nm is possible" and the first actual marketable chips coming down the line? There was a thread on here a year ago about IBM moving into 7nm. How long is it going to take to make it work once they solve the problems with 10nm, which will probably come after they figure out how to make 14/16 more reliable for mass production?
legendary
Activity: 1666
Merit: 1185
dogiecoin.com
June 12, 2016, 01:11:53 PM
#10
Nice, but we're years away from seeing that in a miner.

Or it becoming worthwhile for any manufacturer to stump up what, $10M to have a go at 7nm.

Meh, people have been claiming this since BTC asics first showed up.  It took just over 3 years to go from 130nm to 14/16nm.   This space is so full of retards I think it's quite possible someone is going to start working on 7nm as soon as tools are available.

That's a terrible example. How long did it take us to actually release 16nm after it was 'available' and even after everyone announced their chips, 18 months?

A year max.  Grin

It was late 2014 when companies started announcing chips to arrive early 2015. It was May 2016 by the time we had publicly delivered 14/16nm.
member
Activity: 99
Merit: 10
June 12, 2016, 12:48:21 PM
#9
Nice, but we're years away from seeing that in a miner.

Or it becoming worthwhile for any manufacturer to stump up what, $10M to have a go at 7nm.

Meh, people have been claiming this since BTC asics first showed up.  It took just over 3 years to go from 130nm to 14/16nm.   This space is so full of retards I think it's quite possible someone is going to start working on 7nm as soon as tools are available.

That's a terrible example. How long did it take us to actually release 16nm after it was 'available' and even after everyone announced their chips, 18 months?

A year max.  Grin
legendary
Activity: 1666
Merit: 1185
dogiecoin.com
June 12, 2016, 12:45:41 PM
#8
Nice, but we're years away from seeing that in a miner.

Or it becoming worthwhile for any manufacturer to stump up what, $10M to have a go at 7nm.

Meh, people have been claiming this since BTC asics first showed up.  It took just over 3 years to go from 130nm to 14/16nm.   This space is so full of retards I think it's quite possible someone is going to start working on 7nm as soon as tools are available.

That's a terrible example. How long did it take us to actually release 16nm after it was 'available' and even after everyone announced their chips, 18 months?
member
Activity: 99
Merit: 10
June 12, 2016, 12:29:36 PM
#7
Nice, but we're years away from seeing that in a miner.

Well, thinking about it in an optimistic way: if someday in a year or two we get to see some hardware running AsicBoost as well as 7nm tech, it'll be like the next level of efficiency in mining, with a network hashrate of about 10 EH/s. But only if IBM is generous enough to share some details on their 7nm.  Grin

I really hope from the bottom of my heart that this happens.
That would be the ultimate form of mining centralization. Hope that never happens. Because IBM will never open source their design.

Indeed centralization is an issue but still, it can restore home mining to an extent.
legendary
Activity: 1662
Merit: 1050
June 12, 2016, 12:26:16 PM
#6
Nice, but we're years away from seeing that in a miner.

Well, thinking about it in an optimistic way: if someday in a year or two we get to see some hardware running AsicBoost as well as 7nm tech, it'll be like the next level of efficiency in mining, with a network hashrate of about 10 EH/s. But only if IBM is generous enough to share some details on their 7nm.  Grin

I really hope from the bottom of my heart that this happens.
That would be the ultimate form of mining centralization. Hope that never happens. Because IBM will never open source their design.
legendary
Activity: 1512
Merit: 1000
June 12, 2016, 12:23:20 PM
#5
Nice, but we're years away from seeing that in a miner.

Or it becoming worthwhile for any manufacturer to stump up what, $10M to have a go at 7nm.

Meh, people have been claiming this since BTC asics first showed up.  It took just over 3 years to go from 130nm to 14/16nm.   This space is so full of retards I think it's quite possible someone is going to start working on 7nm as soon as tools are available.
member
Activity: 99
Merit: 10
June 12, 2016, 12:13:36 PM
#4
Nice, but we're years away from seeing that in a miner.

Well, thinking about it in an optimistic way: if someday in a year or two we get to see some hardware running AsicBoost as well as 7nm tech, it'll be like the next level of efficiency in mining, with a network hashrate of about 10 EH/s. But only if IBM is generous enough to share some details on their 7nm.  Grin

I really hope from the bottom of my heart that this happens.
legendary
Activity: 1666
Merit: 1185
dogiecoin.com
June 12, 2016, 11:29:01 AM
#3
Nice, but we're years away from seeing that in a miner.

Or it becoming worthwhile for any manufacturer to stump up what, $10M to have a go at 7nm.
legendary
Activity: 1596
Merit: 1000
June 12, 2016, 10:53:02 AM
#2
Nice, but we're years away from seeing that in a miner.
member
Activity: 99
Merit: 10
June 12, 2016, 10:48:33 AM
#1
http://www.pcworld.com/article/2946124/ibm-reveals-worlds-first-working-7nm-processor.html

Whoa! When I was reading this I only had one thing in mind: how efficient our miners will be with this advancement. Hats off, IBM. Wink