
Topic: Klondike - 16 chip ASIC Open Source Board - Preliminary - page 85. (Read 435369 times)

newbie
Activity: 46
Merit: 0
The point is you need to leave this thread and beg for a loan elsewhere... this isn't for general discussion of different ASIC miners but for the development of the Klondike miners.

My apologies. Post deleted. Did not mean to get anybody upset.
hero member
Activity: 854
Merit: 500
Well, I want to buy one from BTCGuild. They said 1.2 BTC including international shipment. That's the best price I know of.

1 BTC for the USB and 0.21 BTC for shipment.

Stop begging; it's against the rules and stupid, and this isn't the place anyway. Get out. Also, it'll never mine 0.8 BTC in a reasonable amount of time, so it's irresponsible to beg for loans to pay for it. Get a job and waste your own money. Or sell your GPU.


Not begging. Just thought I would ask. Like everybody else I am waiting for two BFL products. The GPU just doesn't cut it anymore. The one (1) USB I need is for mining with a Raspberry Pi already set up with MineForman's Eruptor BOB. I'm not looking to make money / BTC with the thing. I want to write a review and "how to" on my website for those who follow it.

I'm sorry if I pissed you off.

coinlenders.com

Good luck.

P.S. Do you realize that it will take you forever to make 0.8 BTC with that piece of junk you are looking to buy?
full member
Activity: 238
Merit: 100
The point is you need to leave this thread and beg for a loan elsewhere... this isn't for general discussion of different ASIC miners but for the development of the Klondike miners.
sr. member
Activity: 308
Merit: 250
Well, I want to buy one from BTCGuild. They said 1.2 BTC including international shipment. That's the best price I know of.

1 BTC for the USB and 0.21 BTC for shipment.

Stop begging; it's against the rules and stupid, and this isn't the place anyway. Get out. Also, it'll never mine 0.8 BTC in a reasonable amount of time, so it's irresponsible to beg for loans to pay for it. Get a job and waste your own money. Or sell your GPU.





Good luck with getting the communication worked out, bkk; sounds like a headache.
member
Activity: 77
Merit: 10

BTW I've updated the driver to 3.3.1 and pushed to github along with firmware. This is working for me now but rather unreliably. I'm working on improvements.

Thanks for this, brilliant update!
newbie
Activity: 46
Merit: 0
Well, I want to buy one from BTCGuild. They said 1.2 BTC including international shipment. That's the best price I know of.

1 BTC for the USB and 0.21 BTC for shipment.
sr. member
Activity: 476
Merit: 250
Probably not an appropriate place to ask, and I also feel like an ass asking this question.

I have an opportunity to purchase a Block Eruptor USB for around 1.2 BTC. I currently only have 0.4 BTC, and mining with the GPU is taking too long at the current difficulty. Thing is, I'm short on BTC and wondered if anybody would like to "donate" me the odd 0.8 BTC to purchase it. I will pay it back in due course.

I will also update this thread as soon as anybody makes a donation and helps me out. This is just to prove I am not out for a scam.

Should you be so kind-hearted as to donate the bit of BTC I need, please PM me. After you send it you can also make sure I have updated this thread with your name as donor.

The reason for wanting to buy the USB ASIC is to write a small review for my website and blog. I can give the donor the website and blog address. I will give it only to the donor to verify that the website is indeed legit. I don't want trolls getting hold of the email address and harassing me, as the email address is used by a lot of people who are interested in Bitcoin mining. It is in fact a non-profit website for Bitcoin and ASIC news and updates.

Thanks to the person who might help. Please PM me if you can help me out.

(Really feel like an ass for having to ask)

Didn't the price just drop to 0.85-ish BTC?
newbie
Activity: 38
Merit: 0
99% of my reads are 14 bytes, a status record, but a nonce result is 7 bytes. Config data is 8 bytes, but I haven't been using that yet. The max size defined by the descriptor is 64 bytes. I'm thinking of just standardizing on 15-byte reads (which allows for a 1-byte queue flag in compact 16-byte circular queues) and ignoring extra bytes. That way libusb always waits for 15 bytes regardless of what format is being replied. Right now I poll on 31 bytes (my buffer size) and ignore any timeouts due to fewer bytes being returned. But doing this sometimes results in a second reply appended to the first, which seems to be because two packets come in too quickly and don't get read individually.

This is what I was saying about trying to map streams (serial data) onto a bulk pipe. When reading, you never know how much data the device is going to send (ie you never know how much you *need* to read).

It could be less than you're expecting (because you read in the middle of the USB device reading data from the serial port)
It could be exactly what you're expecting (yay!)
It could be more than you're expecting (because you weren't reading data quickly enough and more data arrived at the USB device, etc)

Since there's no way to reliably predict how much data the device is going to send (and thus how much you can safely read), you need to read in multiples of the packet size. This ensures you read all of the data the device is going to send you, but also ensures you don't run into overflow errors during the read.

It's one of the annoying things about USB. It's too simple and it requires the programmer to jump through hoops.

You can read a little more information here:

http://libusb.sourceforge.net/api-1.0/packetoverflow.html

The usbutils appears to return what I request but always with a timeout as it loops trying to get the full requested size, and fails. It's probably designed to handle larger transfers but isn't very flexible with tiny status updates. By using a fixed size I avoid this. It always returns with what I request and not less. I can easily ignore the extra myself and the sizes are so small that there will be no performance difference.

Most of the writes are only 2 bytes except pushing new work which is 47 bytes. I don't appear to have issues with writing except now and then the device just stops reading even though it continues to write out nonce data. So another bug to work on there.

BTW I've updated the driver to 3.3.1 and pushed to github along with firmware. This is working for me now but rather unreliably. I'm working on improvements.

Writes shouldn't be a problem since USB devices will generally accept whatever you give them.

The other option is to avoid libusb and just use the kernel driver. It handles all of this stuff for you and should be easier to use.
hero member
Activity: 784
Merit: 1009
firstbits:1MinerQ
99% of my reads are 14 bytes, a status record, but a nonce result is 7 bytes. Config data is 8 bytes, but I haven't been using that yet. The max size defined by the descriptor is 64 bytes. I'm thinking of just standardizing on 15-byte reads (which allows for a 1-byte queue flag in compact 16-byte circular queues) and ignoring extra bytes. That way libusb always waits for 15 bytes regardless of what format is being replied. Right now I poll on 31 bytes (my buffer size) and ignore any timeouts due to fewer bytes being returned. But doing this sometimes results in a second reply appended to the first, which seems to be because two packets come in too quickly and don't get read individually.

The usbutils appears to return what I request but always with a timeout as it loops trying to get the full requested size, and fails. It's probably designed to handle larger transfers but isn't very flexible with tiny status updates. By using a fixed size I avoid this. It always returns with what I request and not less. I can easily ignore the extra myself and the sizes are so small that there will be no performance difference.

Most of the writes are only 2 bytes except pushing new work which is 47 bytes. I don't appear to have issues with writing except now and then the device just stops reading even though it continues to write out nonce data. So another bug to work on there.

BTW I've updated the driver to 3.3.1 and pushed to github along with firmware. This is working for me now but rather unreliably. I'm working on improvements.
newbie
Activity: 38
Merit: 0
I'm actually using cgminer 3.1.1 anyway. Not for any reason other than that's what was current when I cloned it. I wonder if it's a good idea to update everything to the current version. I've edited many files that contain driver related definitions so I'd have to do some sort of merge if I want to get a new version. I should set it up as a proper fork anyway but just have avoided spending time on that.

With usbread in this version, if you give it a read count it will time out if fewer bytes are available/read. I'm ignoring timeouts when polling for reply data because some replies are of different lengths. If I move to a fixed 16-byte record length, even when less is needed, then every usbread is for the fixed length and it always returns immediately when data is pending, and presumably if two packets come in succession they won't pile up as they do now. What I'd ideally like is a callback function for each packet received, with no polling. For now it's a second thread that just repeatedly polls with usbread for any replies and queues them for other threads to use.

Anyway, watching data with usbmon I see that sometimes, for no visible (logged) reason, it will just decide to do a control sequence to re-init the device. I don't know why it's doing that. It seems to happen after a nonce is read and a -84 code where normally there is a -115 code logged. But so far I haven't found a reference to what those codes mean. Whatever -84 means seems to cause a device re-init, but it doesn't appear to be related to a bad nonce value. Sometimes it happens even when an accepted good nonce is found.


Mapping stream data onto a USB bulk pipe can be kind of awkward. Bulk pipes send packets up to a fixed size (the endpoint descriptors document the maximum packet size).

USB is host driven, so communication with the device is always polled (on the wire at least) by the host. Effectively the host asks the device "do you have any data for this pipe?" The device then sends data. This can be anything up to the maximum packet size, and there is no negotiation between the host and the device on how much the host is expecting.

If the device wants to send more data than the host is expecting, then it results in a babble error and this can ultimately end up in a device reset.

Reads that are larger than the maximum packet size for that pipe will be split into multiple packets of the maximum packet size, with the last packet being equal to or less than the maximum packet size (ie a 150-byte read = 64-byte packet + 64-byte packet + 22-byte packet).

This packet framing can be a problem with serial devices because you often don't know how much data is pending on the device side, or you're trying to read a subset of the data pending on the device side (because you're trying to read just enough data that you need for now).

For these serial devices, you should always read a multiple of the maximum packet size. If a device sends less, this is called a short read. Usually this is fine and is how the protocol expects devices to operate (but it can be treated as an error too).

One last note about payloads that are an exact multiple of the packet size. In that case, an extra zero-length packet is needed to ensure proper framing (ie if the device has 192 bytes to send, the host should read 256 bytes, which ends up on the wire as three full 64-byte packets followed by a zero-length short packet).

The kernel USB serial driver handles all of this framing for you, but it's certainly possible to handle this using libusb as well.

I'd suggest just always doing a large read (4096 bytes?) that is always a multiple of the device packet size and putting the data you read into an internal buffer. Then pull data off that buffer as you need it.

Also, libusb does support asynchronous transfers. You can place a read on a bulk pipe and you can be notified when the read finishes. The API could probably be easier to use in this case.
hero member
Activity: 784
Merit: 1009
firstbits:1MinerQ
I should set it up as a proper fork anyway but just have avoided spending time on that.

Yeah, create a fork on github, make sure you pull in from CK's regularly and send him a pull request when you're happy enough to merge it back in.
That's what I was thinking, but I already had a cgminer fork for my personal GPU mining. I had to do a bit of googling but found out how to create a second fork, and have done that. It's called cgminer-klondike and contains the driver development sources. I've also kept the cgminer directory in the klondike repo with hard links to the files that have changed. That way users can see right away what's different, but also, by cloning/forking from cgminer-klondike, they can build the full program with: ./autogen.sh --enable-klondike

Seems to be some funny business tonight with github IP changes, so I'll be holding off, but expect to update the klondike repo soon with current firmware and driver changes. They're by no means complete. I'm sure I need to add some more code to the driver to support API info properly, and also selecting cgminer screens other than the main one seems to cause it to lock up. Not sure why, but I expect it has something to do with the driver being incomplete.
full member
Activity: 140
Merit: 100
I'm actually using cgminer 3.1.1 anyway. Not for any reason other than that's what was current when I cloned it. I wonder if it's a good idea to update everything to the current version. I've edited many files that contain driver related definitions so I'd have to do some sort of merge if I want to get a new version. I should set it up as a proper fork anyway but just have avoided spending time on that.

With usbread in this version, if you give it a read count it will time out if fewer bytes are available/read. I'm ignoring timeouts when polling for reply data because some replies are of different lengths. If I move to a fixed 16-byte record length, even when less is needed, then every usbread is for the fixed length and it always returns immediately when data is pending, and presumably if two packets come in succession they won't pile up as they do now. What I'd ideally like is a callback function for each packet received, with no polling. For now it's a second thread that just repeatedly polls with usbread for any replies and queues them for other threads to use.

Anyway, watching data with usbmon I see that sometimes, for no visible (logged) reason, it will just decide to do a control sequence to re-init the device. I don't know why it's doing that. It seems to happen after a nonce is read and a -84 code where normally there is a -115 code logged. But so far I haven't found a reference to what those codes mean. Whatever -84 means seems to cause a device re-init, but it doesn't appear to be related to a bad nonce value. Sometimes it happens even when an accepted good nonce is found.

hrrm - I'm definitely uninitiated in this area - the only candidates I see are the codes in api.c, such as "MSG_INVNUM 84".
I suppose ckolivas/kano might have insight …
sr. member
Activity: 448
Merit: 250
I should set it up as a proper fork anyway but just have avoided spending time on that.

Yeah, create a fork on github, make sure you pull in from CK's regularly and send him a pull request when you're happy enough to merge it back in.
newbie
Activity: 10
Merit: 0
Could this code be generated from a timeout read, or based on a count of x nonces?

Just an idea (don't throw any stones yet Smiley)
hero member
Activity: 784
Merit: 1009
firstbits:1MinerQ
I'm going to look at USB errors now, as some of the HW errors seem to be secondary effects of those. I may move to fixed-size data records, as libusb calls don't make it easy to deal with variable-length data. You have to know the length in advance or deal with timeout errors, and then it's common to see double records that get truncated.

fwiw - I rolled back to the USB-serial based cgminer 3.1.1 due to an abundance of USB errors in newer versions. However, this was on an RPi running Raspbian/Debian.
I'm actually using cgminer 3.1.1 anyway. Not for any reason other than that's what was current when I cloned it. I wonder if it's a good idea to update everything to the current version. I've edited many files that contain driver related definitions so I'd have to do some sort of merge if I want to get a new version. I should set it up as a proper fork anyway but just have avoided spending time on that.

With usbread in this version, if you give it a read count it will time out if fewer bytes are available/read. I'm ignoring timeouts when polling for reply data because some replies are of different lengths. If I move to a fixed 16-byte record length, even when less is needed, then every usbread is for the fixed length and it always returns immediately when data is pending, and presumably if two packets come in succession they won't pile up as they do now. What I'd ideally like is a callback function for each packet received, with no polling. For now it's a second thread that just repeatedly polls with usbread for any replies and queues them for other threads to use.

Anyway, watching data with usbmon I see that sometimes, for no visible (logged) reason, it will just decide to do a control sequence to re-init the device. I don't know why it's doing that. It seems to happen after a nonce is read and a -84 code where normally there is a -115 code logged. But so far I haven't found a reference to what those codes mean. Whatever -84 means seems to cause a device re-init, but it doesn't appear to be related to a bad nonce value. Sometimes it happens even when an accepted good nonce is found.
hero member
Activity: 924
Merit: 1000
Hi,

Has anyone received the chips?

I haven't, but it might be better if you start a new thread for this so more people can chime in and it doesn't clutter this thread. I ordered from ragingazn628's Group Buy #1, Avalon Chip Order #10129, made on April 27, 2013.

Code:
JUNE 1
Hey guys I just called Gary and he said the reason why the sample chips are stuck in customs
is because Avalon put the value for them at $3 and we were just unlucky and were part of their random check.
He also said Avalon should not just put them as "IC Chips" and so that's why customs thought it was fishy.
I told him that they were basically un-programmed ASIC Chips and he said that's what he needs
to clear with customs. So it should be cleared by tomorrow.  - ragingazn628
full member
Activity: 140
Merit: 100
I'm going to look at USB errors now, as some of the HW errors seem to be secondary effects of those. I may move to fixed-size data records, as libusb calls don't make it easy to deal with variable-length data. You have to know the length in advance or deal with timeout errors, and then it's common to see double records that get truncated.

fwiw - I rolled back to the USB-serial based cgminer 3.1.1 due to an abundance of USB errors in newer versions. However, this was on an RPi running Raspbian/Debian.
sr. member
Activity: 378
Merit: 250
Hi,

Has anyone received the chips?

hero member
Activity: 784
Merit: 1009
firstbits:1MinerQ
Today's snapshot:

RC delay, 100 ohm, 220 pF, two NOR gates with a trailing-edge circuit.
Looks like a good capture and gives better results, but I'm still seeing HW errors, though not as many now.



I'm going to look at USB errors now, as some of the HW errors seem to be secondary effects of those. I may move to fixed-size data records, as libusb calls don't make it easy to deal with variable-length data. You have to know the length in advance or deal with timeout errors, and then it's common to see double records that get truncated.
hero member
Activity: 924
Merit: 1000
Why is it that the Avalon reference design and the Klondike both use an RC reset per ASIC? Why not use a single reset-pulse generator for all ASICs? That would replace 32 components with 2 or 3 and would give a more stable reset. Also, has anyone considered a multiphase power supply? If you have two or three phases working 180 or 120 degrees out of phase, you could achieve a lower-noise power supply along with plenty of overhead for overclocking at higher efficiency. There are several POL controllers out there that go up to 12 or more phases and could operate at very high frequency, reducing inductor and capacitor sizes. I only mention this because I see you're looking for more current on rev 2. I understand you don't want to totally respin the design, but it's something to consider. I was going to say something a while back but got stuck in newb jail and couldn't post. It's good to see this is still progressing along nicely. Massive respect for BKK and all the others who have put their time into this.

+1 (Sounds good... but what do I know, I'm not an EE)