
Topic: [Work in progress] Burnins Avalon Chip to mining board service - page 116. (Read 624197 times)

sr. member
Activity: 476
Merit: 250
I'd like to get this thread back to being about the board itself instead of extra feature requests and big watercooling projects. I thought we had a separate thread for that?

sr. member
Activity: 294
Merit: 250
Also, I don't believe it's worth reinventing the wheel. A Raspberry Pi is a perfectly cheap solution to run cgminer from. It would be much more expensive to have this on each and every board.

An Ethernet MAC chip for the PIC32MX7xx costs ~$0.99 (on reel), the RJ45 socket less than $0.50, plus a crystal, resistors, caps, etc. and ~2 cm² of PCB area. Say $4 total at most. The RJ45 socket can be left unpopulated so the customer can solder it on if needed; the pick-and-place machine places the other parts in <0.1 s each.

One board can act as the master and chain the other 16.
I see it as ~$50 for a Raspberry Pi vs. ~$4 per board (plus one-time firmware work: port cgminer to MIPS and add a small TCP stack).

If, say, 2,000 burnin boards are made and every 16 boards need a Raspberry Pi, that is 125 × $45 ≈ $5,625. I can't say whether that is enough to justify modifying the board now. At least it removes one extra board plus wires once suitable firmware is done.

I see it as a good option rather than reinventing the wheel.
A Raspberry Pi is $25, and it's a fully fledged computer running Arch Linux: you connect your chain of BitBurners to the USB port and cgminer on the Pi does the work for you.
Where would cgminer be running in your option?
I definitely see it as an option, just not a necessary one for me at all. Especially since, by the time he starts shipping, the difficulty will be way up and second-generation chips will already be taking orders.
Keep it simple!
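For anyone going the Pi route, here is a minimal Python sketch of how the Pi-as-controller setup described above could be monitored over the network, assuming cgminer is started with --api-listen so its TCP API (default port 4028) is available; the hostname is just a placeholder, not anyone's actual setup.
Code:
import json
import socket

def cgminer_command(command, host="raspberrypi.local", port=4028):
    """Send one command to cgminer's TCP API and return the parsed JSON reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(json.dumps({"command": command}).encode())
        reply = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
    # cgminer terminates its reply with a NUL byte; strip it before parsing
    return json.loads(reply.rstrip(b"\x00").decode())

if __name__ == "__main__":
    summary = cgminer_command("summary")   # hashrate, accepted/rejected shares, ...
    devs = cgminer_command("devs")         # per-device (per-chain) statistics
    print(summary.get("SUMMARY", summary))
    print(devs.get("DEVS", devs))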
member
Activity: 93
Merit: 10
A lot of the new requests in this thread seem kind of greedy. I am thrilled just to get working boards, compared to all the other developers who are still having issues. Also, many of the options, such as Ethernet, can create problems for some people shipping to other countries.
newbie
Activity: 29
Merit: 0
Also, I don't believe it's worth reinventing the wheel. A Raspberry Pi is a perfectly cheap solution to run cgminer from. It would be much more expensive to have this on each and every board.

An Ethernet MAC chip for the PIC32MX7xx costs ~$0.99 (on reel), the RJ45 socket less than $0.50, plus a crystal, resistors, caps, etc. and ~2 cm² of PCB area. Say $4 total at most. The RJ45 socket can be left unpopulated so the customer can solder it on if needed; the pick-and-place machine places the other parts in <0.1 s each.

One board can act as the master and chain the other 16.
I see it as ~$50 for a Raspberry Pi vs. ~$4 per board (plus one-time firmware work: port cgminer to MIPS and add a small TCP stack).

If, say, 2,000 burnin boards are made and every 16 boards need a Raspberry Pi, that is 125 × $45 ≈ $5,625. I can't say whether that is enough to justify modifying the board now. At least it removes one extra board plus wires once suitable firmware is done.

I see it as a good option rather than reinventing the wheel.
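To make the arithmetic above explicit, a small Python sketch using the rough prices quoted in the post (none of these are verified quotes):
Code:
# Rough cost comparison: on-board Ethernet vs. a Raspberry Pi per chain of boards.
# All numbers are the approximate figures from the post above, not verified prices.
boards = 2000              # assumed production run
chain_len = 16             # one master board can drive a chain of 16
pi_price = 45.0            # Raspberry Pi controller, ~$45-50 with accessories
eth_bom_per_board = 4.0    # MAC chip + RJ45 + passives + PCB area, ~$4

chains = -(-boards // chain_len)                 # ceiling division -> 125 chains
pi_total = chains * pi_price                     # 125 * $45 = $5,625
eth_masters_only = chains * eth_bom_per_board    # populate Ethernet only on chain masters
eth_every_board = boards * eth_bom_per_board     # populate Ethernet on every board

print(f"{chains} chains of up to {chain_len} boards")
print(f"Raspberry Pi per chain:    ${pi_total:,.0f}")
print(f"Ethernet on masters only:  ${eth_masters_only:,.0f}")
print(f"Ethernet on every board:   ${eth_every_board:,.0f}")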
sr. member
Activity: 294
Merit: 250
Hello, I skimmed your thread quickly but couldn't find any info on how PIC firmware updates are handled (the demo board uses a 32MX795F512; is the same chip in the production version?).
If your firmware has bugs or improvements, how are installed boards updated? Can it be done over the CAN bus to chained boards?

Have you released the PIC sources yet?

The PIC32MX7xx has Ethernet. Will future boards have an Ethernet socket? (Maybe somebody will develop standalone firmware to avoid the Raspberry Pi..?)

When do you plan to release the BOM? The BOM and the PIC firmware (or source) are needed by customers ordering bare PCBs.


The BOM, firmware, and board designs will be released when the project is done and he is in production.
Also, I don't believe it's worth reinventing the wheel. A Raspberry Pi is a perfectly cheap solution to run cgminer from. It would be much more expensive to have this on each and every board.
newbie
Activity: 29
Merit: 0
Hello, I skimmed your thread quickly but couldn't find any info on how PIC firmware updates are handled (the demo board uses a 32MX795F512; is the same chip in the production version?).
If your firmware has bugs or improvements, how are installed boards updated? Can it be done over the CAN bus to chained boards?

Have you released the PIC sources yet?

The PIC32MX7xx has Ethernet. Will future boards have an Ethernet socket? (Maybe somebody will develop standalone firmware to avoid the Raspberry Pi..?)

When do you plan to release the BOM? The BOM and the PIC firmware (or source) are needed by customers ordering bare PCBs.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
newbie
Activity: 11
Merit: 0
I think you waste a LOT of space with this design. If you used a rack adapter you could fit in far more miners. At the moment you maybe use 200 mm, but colocation racks have room for 750 mm of depth. A lot of unused space.
The other thing is... why do you use 3U for the PSU? ATX PSUs are designed to fit in 2U of height, so one would be paying for an extra 1U.
Yes, we are aware of that Wink

To build a solid PSU panel in 2U it would have to be double-angled, which pretty much means double-priced Cry, and the Raspberry Pi would not fit in between the PSUs.

The question is what the customer wants. These panels can be produced for a low price, and they can be mounted into racks or even onto shelves.


We also had the idea of a complete case with 26 boards and one PSU in 3U, but building a whole case is sadly too expensive: ~300 € per piece + ~600 € development cost.
If somebody ordered 1,000 of them they would only cost <80 € each.
http://unique-modding.de/case.PNG

Quote
In this demo setup you could also mount the PSU unit and one BitBurner unit in the front and two BitBurner units in the back: half the height, same fun Cheesy


But what I don't get: why are the plugs for the hoses on the front of the panel? That just makes no sense. Leave the plugs inside, so you can easily connect the heatsinks of the units in the front with those in the back; if it stays like that, it will be a pain in the ass to connect all those hoses.

Next point: if you really are going for a datacenter, you should think about power supply redundancy.
You could also use one of those sweet 19" redundant 2 kW PSUs like

http://business.fantec.eu/html/en/2/artId/__1128/gid/__2009020590/article.html

Yes, that is one way to do it: mount it from the front and from the back. We want these panels to be as modular as possible.

There is no room inside for the plugs at 3U.
You could either go for 4U and mount the plugs where the Anfi-tec engraving is at the moment, or go for bigger watercoolers so you would have the plugs between the power connectors. Connecting one row of boards is really easy because you have enough space around the fittings.

If you have another idea where to put the fittings, please draw or explain it. We are always open to suggestions.

To go from the back to the front we could make some additional holes in the PSU panel.

We are not aiming at big datacentres with these panels Wink

The redundant PSU is really sweet, but the 2 kW redundant PSU is roughly 8 times as expensive per watt (1,600 € for 1,700 W) as a good PC PSU (100 € for 900 W).

Keep in mind that nothing really bad happens when a PSU fails: just connect a spare one and you are back in business (sadly with some lost time).
I guess it is better to have more miners than 30% fewer miners with redundant PSUs.

regards
[Anfi-tec] Finn
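For the record, here is the price-per-watt comparison behind that redundancy remark as a small Python sketch, using the rough figures from the post above (not current prices):
Code:
# Price-per-watt of the redundant 19" PSU vs. a consumer ATX PSU,
# using the approximate figures quoted in the post.
redundant = {"price_eur": 1600, "watts": 1700}   # 19" redundant unit
atx       = {"price_eur": 100,  "watts": 900}    # good consumer ATX PSU

for name, psu in (("redundant", redundant), ("ATX", atx)):
    print(f"{name:>9}: {psu['price_eur'] / psu['watts']:.2f} EUR per watt")

# For the price of one redundant unit, how much non-redundant capacity do you get?
units = redundant["price_eur"] // atx["price_eur"]
print(f"{redundant['price_eur']} EUR buys {units} ATX PSUs "
      f"= {units * atx['watts']} W, vs {redundant['watts']} W redundant")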
member
Activity: 72
Merit: 10
I think you waste a LOT of space with this design. If you used a rack adapter you could fit in far more miners. At the moment you maybe use 200 mm, but colocation racks have room for 750 mm of depth. A lot of unused space.
The other thing is... why do you use 3U for the PSU? ATX PSUs are designed to fit in 2U of height, so one would be paying for an extra 1U.

In this demo setup you could also mount the PSU unit and one BitBurner unit in the front and two BitBurner units in the back: half the height, same fun Cheesy


But what I don't get: why are the plugs for the hoses on the front of the panel? That just makes no sense. Leave the plugs inside, so you can easily connect the heatsinks of the units in the front with those in the back; if it stays like that, it will be a pain in the ass to connect all those hoses.

Next point: if you really are going for a datacenter, you should think about power supply redundancy.
You could also use one of those sweet 19" redundant 2 kW PSUs like

http://business.fantec.eu/html/en/2/artId/__1128/gid/__2009020590/article.html
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
I think you waste a LOT of space with this design. If you used a rack adapter you could fit in far more miners. At the moment you maybe use 200 mm, but colocation racks have room for 750 mm of depth. A lot of unused space.
The other thing is... why do you use 3U for the PSU? ATX PSUs are designed to fit in 2U of height, so one would be paying for an extra 1U.
newbie
Activity: 10
Merit: 0
Some spoilers from our side ^^
http://anfi-tec.de/forenbilder/13.07.11%20Bitburner/Anfi-tec%20Bitburner%20XX%201.JPG
and for the dark side
http://anfi-tec.de/forenbilder/13.07.11%20Bitburner/Anfi-tec%20Bitburner%20XX%202.JPG

The first watercooler prototype arrived at Burnin today.
Another one is off to be anodized blue (just like the final version). We will post some pics here when it is ready.

We will produce them on demand! So make sure you order them (at Burnin's shop) in the first 2 days after the shop is online. We will do our best to have them ready as soon as possible!


Regards Finn

more spoilers from our side:

Anfi-tec 19" frontplate (3HE high) for 8 waterblocks (16 Bitburner XX boards)
http://anfi-tec.de/forenbilder/13.07.14%20Bitburner/Anfi-tec%20Bitburner%20XX%207.JPG
backside
http://anfi-tec.de/forenbilder/13.07.14%20Bitburner/Anfi-tec%20Bitburner%20XX%208.JPG

Anfi-tec 19" frontplate (3HE high) for 2 powersuplys and a Raspery pi or some other things to mount (4x 5mm holes for e.g. some angle-iron)
http://anfi-tec.de/forenbilder/13.07.14%20Bitburner/Anfi-tec%20Bitburner%20XX%205.JPG
backside
http://anfi-tec.de/forenbilder/13.07.14%20Bitburner/Anfi-tec%20Bitburner%20XX%206.JPG

---------------------------------------------------------------------------------------------------------------------------

Example of a user's order:
3× Anfi-tec 19" frontplate for 8 waterblocks, 1× Anfi-tec 19" frontplate for 2 power supplies, in a Rittal 19" rack
http://anfi-tec.de/forenbilder/13.07.14%20Bitburner/Anfi-tec%20Bitburner%20XX%204.JPG

http://anfi-tec.de/forenbilder/13.07.14%20Bitburner/Anfi-tec%20Bitburner%20XX%203.JPG


We will produce the frontplates on demand! So make sure you order them (at Burnin's shop) in the first 2 days after the shop is online. We will do our best to have them ready as soon as possible!

regards
[Anfi-tec] André
sr. member
Activity: 476
Merit: 250
The models already suggested.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
I forgot... I asked Hetzner what they think about watercooling, and it looks like they don't rule it out from the start, because their support asked me: "Is the compressor built into the 19" rack or external?" and "How high is the probability of water leaks?"
I'm not sure what the answers to these are. I guess the radiator has to be outside and the compressor can be inside, right? But it's interesting that they don't say no from the start. I wonder whether it's only Hetzner, or other datacentres too.

The reason they turned it down for me was something like "security blabla, other customers' servers in the racks, blabla".

To be honest, I would not like it if someone put his watercooled, self-made hell machines above my production servers in a datacentre.

That's what I thought too. But maybe they already have racks with watercooled machines, so it's possible? Or they have so much free room that they can give you a new colocation rack? But I wonder whether it will work anyway. I mean, big radiators are needed, aren't they? And even though the power cost is the biggest factor, the space cost is high too. And I think everything has to be built into the colocation rack; I doubt they will happily have something standing in front of the rack. The pictures of the datacentres look very tidy, so most probably a design is needed where everything sits inside the colocation rack.

It puzzles me: if you are going to put your miner into a datacenter, why care about watercooling? Put insane fans on it and let the aircon in the DC take care of all the heat... Smiley

Because it maybe allows much higher overclocking? But of course the effect of watercooling on the ASICs has to be proven first. I would prefer fans too if they led to an optimal clock rate.
Do you have tips for "insane fans", or do you mean the models already suggested?
sr. member
Activity: 476
Merit: 250
I forgot... I asked Hetzner what they think about watercooling, and it looks like they don't rule it out from the start, because their support asked me: "Is the compressor built into the 19" rack or external?" and "How high is the probability of water leaks?"
I'm not sure what the answers to these are. I guess the radiator has to be outside and the compressor can be inside, right? But it's interesting that they don't say no from the start. I wonder whether it's only Hetzner, or other datacentres too.

The reason they turned it down for me was something like "security blabla, other customers' servers in the racks, blabla".

To be honest, I would not like it if someone put his watercooled, self-made hell machines above my production servers in a datacentre.

That's what I thought too. But maybe they already have racks with watercooled machines, so it's possible? Or they have so much free room that they can give you a new colocation rack? But I wonder whether it will work anyway. I mean, big radiators are needed, aren't they? And even though the power cost is the biggest factor, the space cost is high too. And I think everything has to be built into the colocation rack; I doubt they will happily have something standing in front of the rack. The pictures of the datacentres look very tidy, so most probably a design is needed where everything sits inside the colocation rack.

It puzzles me: if you are going to put your miner into a datacenter, why care about watercooling? Put insane fans on it and let the aircon in the DC take care of all the heat... Smiley
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
I forgot... I asked Hetzner what they think about watercooling, and it looks like they don't rule it out from the start, because their support asked me: "Is the compressor built into the 19" rack or external?" and "How high is the probability of water leaks?"
I'm not sure what the answers to these are. I guess the radiator has to be outside and the compressor can be inside, right? But it's interesting that they don't say no from the start. I wonder whether it's only Hetzner, or other datacentres too.

The reason they turned it down for me was something like "security blabla, other customers' servers in the racks, blabla".

To be honest, I would not like it if someone put his watercooled, self-made hell machines above my production servers in a datacentre.

That's what I thought too. But maybe they already have racks with watercooled machines, so it's possible? Or they have so much free room that they can give you a new colocation rack? But I wonder whether it will work anyway. I mean, big radiators are needed, aren't they? And even though the power cost is the biggest factor, the space cost is high too. And I think everything has to be built into the colocation rack; I doubt they will happily have something standing in front of the rack. The pictures of the datacentres look very tidy, so most probably a design is needed where everything sits inside the colocation rack.
member
Activity: 72
Merit: 10
I forgot... I asked Hetzner what they think about watercooling, and it looks like they don't rule it out from the start, because their support asked me: "Is the compressor built into the 19" rack or external?" and "How high is the probability of water leaks?"
I'm not sure what the answers to these are. I guess the radiator has to be outside and the compressor can be inside, right? But it's interesting that they don't say no from the start. I wonder whether it's only Hetzner, or other datacentres too.

The reason they turned it down for me was something like "security blabla, other customers' servers in the racks, blabla".

To be honest, I would not like it if someone put his watercooled, self-made hell machines above my production servers in a datacentre.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
I forgot... I asked Hetzner what they think about watercooling, and it looks like they don't rule it out from the start, because their support asked me: "Is the compressor built into the 19" rack or external?" and "How high is the probability of water leaks?"
I'm not sure what the answers to these are. I guess the radiator has to be outside and the compressor can be inside, right? But it's interesting that they don't say no from the start. I wonder whether it's only Hetzner, or other datacentres too.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
I wonder if you will sell unassembled PCBs and parts later on? Thanks. Cheesy

Check my sig for a list of assemblers and their offers. There you will find all the options burnin will offer.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
The D12SH needs 3.6 W, the FFB 24 W... but the difference in airflow is only 150 m³/h vs. 322... I wonder why.
Not everything in life is linear, my friend!

I guess it would then be better to use two fans of the first type instead of one of the second. It seems more efficient.
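A quick airflow-per-watt comparison of the figures quoted above, as a small Python sketch (datasheet values taken from the posts, not re-checked; parallel flow assumed roughly additive):
Code:
# Airflow per watt for the two fans discussed above, plus two D12SH in parallel.
fans = {
    "1x D12SH": {"watts": 3.6,  "m3h": 150},
    "1x FFB":   {"watts": 24.0, "m3h": 322},
    "2x D12SH": {"watts": 7.2,  "m3h": 300},
}

for name, f in fans.items():
    print(f"{name}: {f['m3h'] / f['watts']:5.1f} m3/h per W")

# Likely reason the relationship is not linear: fan input power grows roughly
# with the cube of rotor speed, and the high-power fan trades free-air
# efficiency for the static pressure needed to push through a dense heatsink.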
newbie
Activity: 50
Merit: 0
I wonder if you will sell unassembled PCBs and parts later on? Thanks. Cheesy