
Topic: [ANN][COMB] Haircomb - Quantum proof, anonymity and more - page 3. (Read 6890 times)

sr. member
Activity: 682
Merit: 268
Hello, the comb downloader started to sync correctly, so I thought I would share it.

https://bitbucket.org/watashi564/combdownloader/src/master/

If possible, it would be great to have it moved over to GitHub and to make a release!
The usage is pretty simple: the user just runs combdownloader.exe, the community series combfullui.exe, and any recent version of Bitcoin Core.
It will make 8 connections to the Bitcoin Core node (this could perhaps be decreased).
Then it will start pulling blocks into the comb wallet.


There is a distinct possibility to use this in a "lite" mode, that is, without having Bitcoin Core installed. But then the user would still need to
download over 100 GB of data, of which roughly 100 MB would be retained, so I don't know what the benefit would be.

To use it in lite mode, the IP address in main.go could perhaps be made configurable. That way you could connect to any online Bitcoin Core node in the Bitcoin swarm.
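For example, the hard-coded address could be lifted into a command-line flag; a minimal sketch (the flag name and default here are assumptions for illustration, not actual combdownloader code):

```go
package main

import (
	"flag"
	"fmt"
)

// btcaddr would replace the hard-coded IP address in main.go, so lite
// mode can point at any reachable bitcoin node in the swarm.
var btcaddr = flag.String("btcaddr", "127.0.0.1:8333", "address of the bitcoin node to pull blocks from")

func main() {
	flag.Parse()
	fmt.Println("pulling blocks from", *btcaddr)
}
```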


EDIT: Important! Please do not use this comb downloader just yet. There is an off-by-one error in the code: it syncs block 481824 into the slot for height 481825, and so on.

Each block gets synced at the wrong height (1 block higher). The wallet appears to be working fine despite this error, and you will not lose any funds.

The problem will manifest if the user switches to a correct syncing method, which will cause the loss of one blockchain block.

If you've used this comb downloader, please delete your commits folder to recover from the problem.
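For illustration, the corrected behavior is just to key each synced block by its real height instead of height+1; a minimal standalone sketch (the names here are made up, not the actual downloader code):

```go
package main

import "fmt"

// storeBlock files a synced block under its real chain height.
// The buggy version effectively did commits[height+1] = block,
// shifting every block one slot too high.
func storeBlock(commits map[uint64][]byte, height uint64, block []byte) {
	commits[height] = block
}

func main() {
	commits := make(map[uint64][]byte)
	storeBlock(commits, 481824, []byte("block-at-claim-height"))
	_, wrongSlot := commits[481825] // the slot the buggy version used
	fmt.Println(len(commits[481824]) > 0, wrongSlot)
}
```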
jr. member
Activity: 76
Merit: 8
The racing problems: I'm catching them by building with go build -race and then reorging blocks and loading wallets at the same time.


Now there is a mistake at line 231 in merkle.go: there needs to be a commits_mutex.RUnlock().

That's all. I've also tested the nested comb trades. They work properly.

I'm ok with the final version - can be released (if there is nothing else) -_-

Fixed. I've committed and uploaded a build: https://github.com/nixed18/combfullui/releases/tag/v0.3.4

Now to update the documentation, then on to light client server stuff.
sr. member
Activity: 682
Merit: 268
The racing problems: I'm catching them by building with go build -race and then reorging blocks and loading wallets at the same time.


Now there is a mistake at line 231 in merkle.go: there needs to be a commits_mutex.RUnlock().

That's all. I've also tested the nested comb trades. They work properly.

I'm ok with the final version - can be released (if there is nothing else) -_-
jr. member
Activity: 76
Merit: 8
Sorry I wasn't very helpful on this end, I'll commit the changes and build a new release later today.

EDIT: Jesus christ, I spent so long just looking at the mining code that I forgot how complex the guts of this thing actually are. I gotta get more familiar with it.

I've added the changes and committed them to GitHub; if I didn't mess anything up, then I'll build a new release. How did you figure out that this was an issue? Was it just code scanning, or did you generate a crash during testing?
sr. member
Activity: 682
Merit: 268
Here are changes for fixing the racing problems.

1. segmentmerkle.go: remove all uses of segments_merkle_mutex RLock() and
RUnlock(); top-level callers will have to take that lock
instead.

code omitted

2. anonminize.go: take the read lock on both mutexes around segments_coinbase_backgraph:
Code:
for combbase := range bases {
	segments_transaction_mutex.RLock()
	segments_merkle_mutex.RLock()

	segments_coinbase_backgraph(backgraph, make(map[[32]byte]struct{}), target, combbase)

	segments_merkle_mutex.RUnlock()
	segments_transaction_mutex.RUnlock()
}
3. loopdetect.go: take the read lock on both mutexes in the loopdetect() function:
Code:
func loopdetect(norecursion, loopkiller map[[32]byte]struct{}, to [32]byte) (b bool) {
	segments_transaction_mutex.RLock()
	segments_merkle_mutex.RLock()

	var type3 = segments_stack_type(to)
	if type3 == SEGMENT_STACK_TRICKLED {
		b = segments_stack_loopdetect(norecursion, loopkiller, to)
	}
	var type2 = segments_merkle_type(to)
	if type2 == SEGMENT_MERKLE_TRICKLED {
		b = segments_merkle_loopdetect(norecursion, loopkiller, to)
	}
	var type1 = segments_transaction_type(to)
	if type1 == SEGMENT_TX_TRICKLED {
		b = segments_transaction_loopdetect(norecursion, loopkiller, to)
	} else if type1 == SEGMENT_ANY_UNTRICKLED {
	} else if type1 == SEGMENT_UNKNOWN {
	}
	segments_merkle_mutex.RUnlock()
	segments_transaction_mutex.RUnlock()
	return b
}

4. merkle.go: the commits mutex MUST be held when calling merkle_scan_one_leg_activity() here:
Code:
commits_mutex.RLock()
var allright1 = merkle_scan_one_leg_activity(q1)
var allright2 = merkle_scan_one_leg_activity(q2)

if allright1 && allright2 {
	reactivate_txid(false, true, tx)
}

commits_mutex.RUnlock()
return true, e[0]
and take the write lock, then the read lock, on both mutexes:
Code:
if newactivity {
	segments_transaction_mutex.Lock()
	segments_merkle_mutex.Lock()
	if old, ok1 := e0_to_e1[e[0]]; ok1 && old != e[1] {

		fmt.Println("Panic: e0 to e1 already have live path")
		panic("")
	}

	e0_to_e1[e[0]] = e[1]
	segments_merkle_mutex.Unlock()
	segments_transaction_mutex.Unlock()

	segments_transaction_mutex.RLock()
	segments_merkle_mutex.RLock()

	var maybecoinbase = commit(e[0][0:])
	if _, ok1 := combbases[maybecoinbase]; ok1 {
		segments_coinbase_trickle_auto(maybecoinbase, e[0])
	}

	segments_merkle_trickle(make(map[[32]byte]struct{}), e[0])

	segments_merkle_mutex.RUnlock()
	segments_transaction_mutex.RUnlock()

}
5. mine.go add write locking:

Code:
        if *tx == (*txidto)[0] {
                segments_transaction_mutex.Lock()
                segments_transaction_next[actuallyfrom] = *txidto
                segments_transaction_mutex.Unlock()
                return false
        }

change 2: add top-level locking:

Code:
segments_transaction_mutex.RLock()
segments_merkle_mutex.RLock()

var maybecoinbase = commit(actuallyfrom[0:])
if _, ok1 := combbases[maybecoinbase]; ok1 {
segments_coinbase_trickle_auto(maybecoinbase, actuallyfrom)
}

segments_transaction_trickle(make(map[[32]byte]struct{}), actuallyfrom)

segments_merkle_mutex.RUnlock()
segments_transaction_mutex.RUnlock()

change 3:

Code:
segments_transaction_mutex.RLock()

var val = segments_transaction_data[*tx][i]

segments_transaction_mutex.RUnlock()

change 4:

Code:
if oldactivity == 2097151 {
	segments_transaction_mutex.Lock()

	var actuallyfrom = segments_transaction_data[*tx][21]

	segments_transaction_untrickle(nil, actuallyfrom, 0xffffffffffffffff)

	delete(segments_transaction_next, actuallyfrom)

	segments_transaction_mutex.Unlock()
}


6. stack.go: surround the stack trickle with both mutexes:

Code:
segments_transaction_mutex.RLock()
segments_merkle_mutex.RLock()

segments_stack_trickle(make(map[[32]byte]struct{}), hash)

segments_merkle_mutex.RUnlock()
segments_transaction_mutex.RUnlock()


7. txrecv.go tx_receive_transaction_internal(), take both mutexes:

Code:
segments_transaction_mutex.Lock()
segments_merkle_mutex.Lock()
   
and

Code:
segments_merkle_mutex.Unlock()
segments_transaction_mutex.Unlock()



sr. member
Activity: 682
Merit: 268
Ah, the thread thing.

Sorry for not having an exact solution you can apply right now.

It's caused by the segments_transaction_mutex and
segments_merkle_mutex mutexes. They mainly protect the maps that tell you
where money should go from a transaction address: for a merkle transaction (comb trade)
the map is named e0_to_e1, and for a haircomb transaction the map is named
segments_transaction_next. The key in both maps is some kind of used (spent) address,
and the value is the next address where all that money should move.
Actually, segments_transaction_next contains the txid too.

Because the logic is the same, the locking should be the same, but it isn't; that's the bug.
When you look at segmenttx.go and segmentmerkle.go, that is the money-trickling
code.

There are two avenues to fix this. One would be to simply surround each read from the map(s)
with RLock/RUnlock.
This option sounds right for consensus-non-critical paths like inside:
Code:
segments_transaction_loopdetect()
segments_transaction_backgraph()
segments_merkle_loopdetect()
segments_merkle_backgraph()

This is easy.
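A minimal standalone sketch of this first option, using the mutex and map names from above (the value type is simplified here; the real segments_transaction_next also carries the txid):

```go
package main

import (
	"fmt"
	"sync"
)

var segments_transaction_mutex sync.RWMutex
var segments_transaction_next = make(map[[32]byte][32]byte)

// next_address is an illustrative helper: each read takes the read lock
// just around the map lookup, which is enough for the non-critical
// paths like the loopdetect/backgraph functions.
func next_address(from [32]byte) ([32]byte, bool) {
	segments_transaction_mutex.RLock()
	to, ok := segments_transaction_next[from]
	segments_transaction_mutex.RUnlock()
	return to, ok
}

func main() {
	var from, to [32]byte
	to[0] = 1
	segments_transaction_mutex.Lock()
	segments_transaction_next[from] = to
	segments_transaction_mutex.Unlock()
	got, ok := next_address(from)
	fmt.Println(ok, got == to)
}
```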

The other avenue is to NOT surround each read from the map with RLock/RUnlock, but instead
guard the whole invocation of the trickling at the highest level.
This option sounds right for consensus-critical parts like balance calculation. It will prevent
somebody from adding new transactions while money is still being propagated along the graph.
Once the dust settles, the queued new transaction adding gets the green light.

Example from txrecv.go; there are other similar top-level places that call into the trickling code.
Code:
if newactivity == 2097151 {

	segments_transaction_mutex.Lock()

	segments_transaction_next[actuallyfrom] = txidandto

	segments_transaction_mutex.Unlock()

	segments_transaction_mutex.RLock()
	segments_merkle_mutex.RLock()

	var maybecoinbase = commit(actuallyfrom[0:])
	if _, ok1 := combbases[maybecoinbase]; ok1 {
		// ...invoke coinbase trickling in case the haircomb was a coinbase
		segments_coinbase_trickle_auto(maybecoinbase, actuallyfrom)
	}

	// ...invoke cash trickling here:
	segments_transaction_trickle(make(map[[32]byte]struct{}), actuallyfrom)

	segments_merkle_mutex.RUnlock()
	segments_transaction_mutex.RUnlock()

}
The reason why both read mutexes are taken is that a transaction can pay to a merkle
transaction, which could cause transaction money trickling to become merkle tx money trickling.


jr. member
Activity: 76
Merit: 8
I committed the changes, I'll wait until you go over the threading problem before building a new release. I'm waist deep in some work stuff but I'll do my best to keep up lol.

sr. member
Activity: 682
Merit: 268
I've been testing busily and found numerous problems. Let's start with the simpler ones.

used_key.go - used_key_add_new_minimal_commit_height - when deleting from the slices, l is always 31

instead of:

Code:
l := len(v) - 1

there needs to be the length of the actual slice:

Code:
l := len(used_height_commits[min_height]) - 1

and

Code:
l := len(used_commit_keys[min_commit]) - 1

Explanation: taking the length of v, which is a hash (always of size 32), is not the intention.
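In other words, the swap-with-last delete has to take the length of the slice being edited; a standalone sketch of the pattern (the map and variable names are illustrative, not the real used_key.go code):

```go
package main

import "fmt"

// removeCommit deletes target from the slice stored at m[height] using
// swap-with-last. l must be the last index of that slice; the buggy
// version took len(v)-1 where v is the [32]byte hash, so l was always 31.
func removeCommit(m map[uint64][][32]byte, height uint64, target [32]byte) {
	s := m[height]
	for i, v := range s {
		if v == target {
			l := len(s) - 1 // length of the actual slice, not len(v)-1
			s[i] = s[l]
			m[height] = s[:l]
			return
		}
	}
}

func main() {
	var a, b [32]byte
	b[0] = 1
	m := map[uint64][][32]byte{500000: {a, b}}
	removeCommit(m, 500000, a)
	fmt.Println(len(m[500000]), m[500000][0] == b)
}
```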

newminer.go - handle_reorg_direct - an off-by-one error in direct mode causes "holes" in the database

the fix is rather simple:

Code:
iter := commitsdb.NewIterator(&util.Range{Start: new_height_tag(uint64(target_height)+1), Limit: new_height_tag(uint64(height+1))}, nil)


newminer.go - miner - when reorging, we need to provide the height of the previous block (not of the reorged one) to become the topmost block

Code:
       // Flush
       var commit_fingerprint hash.Hash
       if dir == -1 {
               commit_fingerprint = miner_mine_commit_pulled("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
                       new_flush_utxotag(uint64(parsed_block.height)-1), 0)
       } else if dir == 1 {
               commit_fingerprint = miner_mine_commit_pulled("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
                       new_flush_utxotag(uint64(parsed_block.height)), 0)
       }

Explanation: the reorged block's previous block height should become the topmost via posttag() inside miner_mine_commit_internal(). If we don't do this,
the check "error: mined first commitment must be on greater height" will wrongly prevent one block from being mined after a reorg in miner mode.

There are threading problems as well, which I will explain in a later post.



jr. member
Activity: 76
Merit: 8
well I see, in my opinion repairing 1 block doesn't make sense, purely because
a) a BTC node isn't guaranteed to be present
b) even if a BTC node is present, how do you know it won't send the wrong information again
c) even if it sends the right information, you will need to download all the blocks after that block
anyway; for example, in case a previously unseen commitment abc...123 was fixed by being "removed" from block 500000 (because it isn't there), all
the 500000+ blocks need to be inspected to confirm that abc...123 doesn't appear there again, to "add it" (make it previously unseen in a later block)
d) so you can pretty much repair only errors that are fixed by "adding" commitments, and you will still need to fix up later blocks to remove it from them
e) all of this makes it pretty narrowly scoped, as opposed to the node operator simply copying the "right" database over to the node from a known good backup.


here is my final fix for the high-severity problem. The only difference between the preliminary fix and this one is the removal of the commits_mutex locks+unlocks in the two cases (merkle_mine + merkle_unmine) where the mutex is already held and the extra lock would just cause a deadlock.

merkle.go https://pastebin.com/raw/SR2Y83Qt
txlegs.go https://pastebin.com/raw/vgfRjNYF

I was thinking more along the lines of some unseen problem causing corruption in the commits db, not wrong input from the BTC side. So the scenario would assume that, at one point, Haircomb did have a full, correct commits db file, and then something happened to corrupt one or more blocks. But you're right, it makes a lot more sense just to restore from a backup in this case.

EDIT: Updated the github and have a new release for the patches, I haven't had a chance to properly test it yet though.
sr. member
Activity: 682
Merit: 268
well I see, in my opinion repairing 1 block doesn't make sense, purely because
a) a BTC node isn't guaranteed to be present
b) even if a BTC node is present, how do you know it won't send the wrong information again
c) even if it sends the right information, you will need to download all the blocks after that block
anyway; for example, in case a previously unseen commitment abc...123 was fixed by being "removed" from block 500000 (because it isn't there), all
the 500000+ blocks need to be inspected to confirm that abc...123 doesn't appear there again, to "add it" (make it previously unseen in a later block)
d) so you can pretty much repair only errors that are fixed by "adding" commitments, and you will still need to fix up later blocks to remove from them
e) all of this makes it pretty narrowly scoped, as opposed to the node operator simply copying the "right" database over to the node from a known good backup.


here is my final fix for the high-severity problem. The only difference between the preliminary fix and this one is the removal of the commits_mutex locks+unlocks in the two cases (merkle_mine + merkle_unmine) where the mutex is already held and the extra lock would just cause a deadlock.

merkle.go https://pastebin.com/raw/SR2Y83Qt
txlegs.go https://pastebin.com/raw/vgfRjNYF
jr. member
Activity: 76
Merit: 8
I haven't done any real diving into the tx portion of COMB yet, so I'm afraid I won't be much help here.

I ask about the block pulling because I'm curious if it makes sense to aim for, in the future, segmented orphan repair. Right now, if a block is corrupted, the entire chain after said block is discarded; using the current block-by-block fingerprinting, as well as including the hash of the previous block's metadata, it seems viable to allow Haircomb to discard only the corrupted blocks and redownload them.

I'm not sure if I'm being paranoid or making up corruption scenarios that don't exist though, so I dunno if it's actually worth doing or not right now.
sr. member
Activity: 682
Merit: 268
I'm investigating a max-severity crasher issue in the comb trades facility (merkle.go).

In the meantime, all users must cease using comb trades for any sort of commerce.

I have a preliminary fix, here:

merkle.go https://pastebin.com/raw/CUUnVbEe
txlegs.go https://pastebin.com/raw/vgfRjNYF

it's a rather large overhaul of the facility.
sr. member
Activity: 682
Merit: 268
Yeah, in case the local node is a full node, the sync would take nearly the same time.
In case the local node is a pruned node, some blocks would have to be pulled over the network; this sync would be slower and capped by the local internet speed.

TESTING

☑ Successful / unsuccessful claiming works.
☑ Transaction works.
☑ Naive double spend is caught correctly.
☑ Liquidity stack loops working.
☑ Coin loop detection worked ok so far.

2 minor-severity issues - both exist in the Natasha version too

Issue 1 - Brain wallet keys not added to the used key feature.

Problem: when creating a brain wallet using the HTTP call, the keys that get
generated aren't added to the used key feature. This means that once the keys become
used, the node is restarted, and the brain wallet is re-generated, it will not be visible that
the used key is already spent.

In function: wallet_generate_brain

Solution: copy the "if enable_used_key_feature" lines from the normal key generator
(key_load_data_internal) into wallet_generate_brain too.

Issue 2 - A used key's balance should be forced to 0 COMB in the user interface on a detected outgoing key spend

Problem: when a key is spent but the transaction is not added to the node using the link, the node will keep
displaying the old balance - the old key balance (nonzero) will be visible even in the event of 1+ confirmations.

Discussion: This is not a consensus issue; if the user attempts double spending using the nonzero balance,
the double spend will get caught and dropped normally.

In function: wallet_view

Solution: In the wallet printing loop, move the used_key_feature block above the printing of the wallet key+balance row. Inside the used_key_feature block,
set the balance (bal) to zero if an outgoing spend got recognized. Finally, print the outgoing spend row below the wallet key+balance row from temp variables.
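The balance part of that fix boils down to something like this sketch (the function and flag names are made up for illustration, not the real wallet.go code):

```go
package main

import "fmt"

// displayed_balance zeroes the shown balance when an outgoing spend of
// the key was recognized, so the UI cannot display stale spendable funds.
func displayed_balance(bal uint64, outgoingSpendSeen bool) uint64 {
	if outgoingSpendSeen {
		return 0
	}
	return bal
}

func main() {
	fmt.Println(displayed_balance(5000000000, true))  // spent key: show 0
	fmt.Println(displayed_balance(5000000000, false)) // unspent key: show balance
}
```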

While we're fixing this, we may also hide the pay button in case of a reorg, to save the user from accidentally double-spending.
 
Reference wallet.go
https://pastebin.com/raw/cLdG5pB3
jr. member
Activity: 76
Merit: 8
Watashi, the program you're working on - am I correct in assuming it's similar to the other program you published a while ago, in that it would remove the need for a full node to be running to sync from and would instead request blocks from peers? If so, am I also correct in assuming that it will likely take longer to do a full commits build?
jr. member
Activity: 76
Merit: 8
Haircomb Core v0.3.4-beta.1 is up here: https://github.com/nixed18/combfullui/releases/tag/0.3.4-beta.1

Build (Optional)

Do the same steps as the normal combfullui (https://bitcointalksearch.org/topic/m.54605575) but with two differences:

1. Rather than cloning a github repo, click on "Code" and download the zip file located here: https://github.com/nixed18/combfullui/tree/338d2775d1a5d894259ed1c6b728e251ef432b5a. Move that folder to the location where you want to build COMB, extract it, and continue with the build instructions.
 
2. Type in "go get github.com/syndtr/goleveldb/leveldb" to install LevelDB


Set up bitcoin.conf

1. Navigate to the directory where you have stored your BTC blockchain. By default it is stored in C:\Users\YourUserName\Appdata\Roaming\Bitcoin on Windows. You'll know you're there when you see blocks and chainstate folders, along with some .dat files for stuff like the mempool, peers, etc.

2. Look for a file named "bitcoin.conf". If one doesn't exist, make one by right-clicking the whitespace and going New > Text Document, then rename the file to "bitcoin.conf"

3. Open "bitcoin.conf" in Notepad and add the following two entries, replacing XXXXX with what you want your log info to be. This is only used for your BTC node's RPC access.
Code:
rpcuser=XXXXX
rpcpassword=XXXXX

4. Save and exit


Set up config.txt

1. Navigate to the directory you installed the Haircomb beta.

2. Create a text file called "config.txt".

3. Open "config.txt" in Notepad, and add the following lines, replacing the XXXXX with the same values that you used in your "bitcoin.conf".
Code:
btcuser=XXXXX
btcpass=XXXXX

4. Save and exit. If Haircomb was open during this edit, you must restart the program for your changes to take effect.


Run

Assuming you're using an unmodded BTC, you can run either BTC or Haircomb first, it doesn't matter. While Haircomb is running, it'll keep checking if there's a BTC server running on your machine, and if so, will attempt to connect with it. When you run BTC, either run bitcoind.exe OR run bitcoin-qt.exe with "-server" as an argument. It is also compatible with Natasha's modded BTC, but just remember to launch Haircomb BEFORE the modded BTC.

Watashi has provided instructions and resources to run the beta using the BTC testnet, those can be found here: https://bitcointalksearch.org/topic/m.56935798


The current version's default port is 2121, this can be changed in the config.txt file with the entry "port=XXXX", replacing XXXX with a valid port number. Selecting the direct reorg handle to test can be done by inserting "reorg_type=miner" into your config.txt
sr. member
Activity: 682
Merit: 268
the change 2 minutes ago looks good

☑ default credentials are blank
☑ user warned to configure both programs with the same credentials when comms fails
☑ we'll find any bugs during beta testing

feel free to tag the beta!

jr. member
Activity: 76
Merit: 8
I will ask you differently: do you wanna take responsibility for angry users
who had their BTC wallets wiped clean by malware running on other computers on their network, because they
typed "user" & "pass" into their bitcoin.conf, just like your software recommended?

I also considered the solution in which we generate a random long user + pass on startup,
but it's worthless because it changes every startup and nobody will be arsed to change
it every time (in bitcoin.conf).

This is where using the BTC .cookie file makes sense, but the user still needs to provide the path in some way.

Quote
What we want is zero configuration. That will be possible via a middle man software. As follows:

1. user downloads bitcoin from bitcoin.org and starts it
2. user downloads a middle man software that doesn't exist today but will later (once we build it)
3. user downloads haircomb core
4. user starts all 3 programs above
5. haircomb core starts syncing, no config needed

this will be possible because:
1. haircomb core will request blocks from port 8332 (the RPC port)
2. the middle man SW listening on port 8332 will redirect the requests to port 8333, where Bitcoin
is already listening on the peer-to-peer network (in the correct format). This is true by default
on Bitcoin Core, without any configuration.
3. Bitcoin normally transmits the blocks to the middleman (in a special format, NOT in JSON)
4. the middleman transmits the required blocks to the comb core (in JSON)


I understand this, but I don't see how it'll be possible to completely eliminate the need for config files right now, before the middleware is developed.

Quote
What are the possible obstacles to this zero configuration?

1. Bitcoin already running on 8332. The middle man will then fail to run. The middle man program
will recommend deleting both the BTC and COMB config files.

2. The old version of comb (the one we are developing right now) will fail to run without a config file!! (this problem needs to be solved RN)

3. The old version of comb recommending the wrong thing when the config file has blank credentials or is not found. It must not fail with an error, and it must keep connecting to the middle man with blank credentials despite the credentials being blank.

4. Middleman software not being available (this will be solved soon)

5. Blank credentials not being testable - sure, not on BTC, but the fake block simulator will serve you something even with blank credentials.

The current version of COMB I have up does not require the config to run, only to access the BTC RPC. I'm not sure exactly what you're trying to convey here; if what you're saying is that the current version of COMB MUST be able to access the BTC and pull blocks without any sort of config file, then I'd love to hear how you think it could be done, because I don't see how to accomplish that. Unless you're suggesting a modification to the current program so that it DOESN'T use the RPC, and instead pulls over 8333 then converts to JSON, but that's just the middleware solution.

So again, I'm not 100% sure exactly what you're proposing the steps forwards are; accessing the BTC RPC requires authentication, either the user provides said authentication or they can't pull over the BTC RPC. We can either attempt to make the provision of the authentication easier, or circumvent the RPC via the P2P network. The most non-intrusive way to have the COMB client and the BTC client communicate is using the .cookie file, but COMB needs to know where that file will be. This means that a) the user will have to provide a path to their BTC data dir, via config file or some other means, or b) the user will have to place combfullui WITHIN the BTC data dir. We can set up either of these options, but you seem fairly adamant that COMB should not require a config file in the slightest, which just leaves the user dumping combfullui in the same directory as the .cookie file.

The reason that the BTC call fails with blank creds isn't anything hard coded into COMB; BTC just doesn't return a usable value. Right now the only time that COMB will say "Hey, put in credentials" is if it attempts to contact BTC and gets a bad response back.

EDIT: We could also create an option to submit the BTC datadir path via the COMB UI, and then have COMB save the path to a file in a hidden directory like APPDATA or something. Then the user doesn't have to create a txt file and paste the path in there, but they still have to find the path and paste it into the browser somewhere, so I dunno if it's really that much better.

EDIT2: In the same vein as the previous example, we can combine it with your suggestion; on startup COMB generates a random user/pass, then references a path to the bitcoin.conf, and edits bitcoin.conf to contain rpcuser=x and rpcpassword=y. This just seems like a more convoluted version of the cookie solution though, and would also require the user to always start COMB BEFORE BTC, like before.
sr. member
Activity: 682
Merit: 268
I will ask you differently: do you wanna take responsibility for angry users
who had their BTC wallets wiped clean by malware running on other computers on their network, because they
typed "user" & "pass" into their bitcoin.conf, just like your software recommended?

I also considered the solution in which we generate a random long user + pass on startup,
but it's worthless because it changes every startup and nobody will be arsed to change
it every time (in bitcoin.conf).

What we want is zero configuration. That will be possible via a middle man software. As follows:

1. user downloads bitcoin from bitcoin.org and starts it
2. user downloads a middle man software that doesn't exist today but will later (once we build it)
3. user downloads haircomb core
4. user starts all 3 programs above
5. haircomb core starts syncing, no config needed

this will be possible because:
1. haircomb core will request blocks from port 8332 (the RPC port)
2. the middle man SW listening on port 8332 will redirect the requests to port 8333, where Bitcoin
is already listening on the peer-to-peer network (in the correct format). This is true by default
on Bitcoin Core, without any configuration.
3. Bitcoin normally transmits the blocks to the middleman (in a special format, NOT in JSON)
4. the middleman transmits the required blocks to the comb core (in JSON)
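The JSON-RPC front half of such a middleman could be sketched roughly like this. Everything here is hypothetical, since the middleman doesn't exist yet; the actual P2P block-fetching half against port 8333 is the unwritten piece, left as a comment:

```go
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"
)

type rpcRequest struct {
	Method string        `json:"method"`
	Params []interface{} `json:"params"`
	ID     interface{}   `json:"id"`
}

// handle decodes a bitcoin-style JSON-RPC request. A real middleman
// would answer "getblock" etc. by fetching the block over the P2P
// protocol on port 8333 and re-encoding the reply as JSON.
func handle(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body)
	var req rpcRequest
	if err := json.Unmarshal(body, &req); err != nil {
		http.Error(w, "bad json-rpc", http.StatusBadRequest)
		return
	}
	json.NewEncoder(w).Encode(map[string]interface{}{
		"result": nil,
		"error":  "method " + req.Method + " not implemented yet",
		"id":     req.ID,
	})
}

func main() {
	http.HandleFunc("/", handle)
	// :8332 is the RPC port haircomb core already talks to
	log.Fatal(http.ListenAndServe(":8332", nil))
}
```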


What are the possible obstacles to this zero configuration?

1. Bitcoin already running on 8332. The middle man will then fail to run. The middle man program
will recommend deleting both the BTC and COMB config files.

2. The old version of comb (the one we are developing right now) will fail to run without a config file!! (this problem needs to be solved RN)

3. The old version of comb recommending the wrong thing when the config file has blank credentials or is not found. It must not fail with an error, and it must keep connecting to the middle man with blank credentials despite the credentials being blank.

4. Middleman software not being available (this will be solved soon)

5. Blank credentials not being testable - sure, not on BTC, but the fake block simulator will serve you something even with blank credentials.

jr. member
Activity: 76
Merit: 8
blank credentials by default are needed:

 - when running offline / with no reason or intention to sync
 - when syncing using a future tool (to be developed); that tool would run on :8332
   and serve blocks from the Bitcoin P2P network. That won't require the BTC RPC/server
   option, and by extension our config file won't be needed.

in any case new_miner_start() must go, even with blank credentials

the blank credentials cannot be entered into bitcoin's config file. that's exactly
what we need! the user who is capable of editing config files won't be able to do the
wrong thing, but will instead be forced to set identical strong credentials in both files.

Right now new_miner_start() runs without credentials; it's make_bitcoin_call() that stalls out without them. I may be taking your statement too literally here though lol. As far as I am aware, there is no way to connect to BTC over RPC WITHOUT a username and password. There are currently 3 options:

1. Config username and password. What we've been using so far.

2. Cookie file. In the absence of a username and password in its config, BTC will generate a .cookie file that contains a username and password to use. While we could pull the username and password automatically from that file, we'd still need the user to indicate the path to it. The only option I see, other than having the user place combfullui in the same folder as .cookie, is to use a config file to communicate the path.

3. RPCAuth. This is just a more complicated version of user/pass that generates a config entry for you, which you then have to place in your bitcoin.conf file.
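For reference, reading the cookie itself is trivial once the path is known; the file is a single user:password line (Bitcoin Core writes it as "__cookie__:randompassword"). A sketch, with error handling kept minimal:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// read_cookie splits Bitcoin Core's .cookie file ("__cookie__:password")
// into RPC credentials. Finding the path is the unsolved part discussed
// above; it is simply passed in here.
func read_cookie(path string) (user, pass string, err error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", "", err
	}
	line := strings.TrimSpace(string(b))
	i := strings.IndexByte(line, ':')
	if i < 0 {
		return "", "", fmt.Errorf("malformed cookie file %s", path)
	}
	return line[:i], line[i+1:], nil
}

func main() {
	// demo with a temp file standing in for <datadir>/.cookie
	f, _ := os.CreateTemp("", "cookie")
	f.WriteString("__cookie__:s3cret")
	f.Close()
	defer os.Remove(f.Name())
	user, pass, err := read_cookie(f.Name())
	fmt.Println(user, pass, err)
}
```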



Quote
can't comment on the merged main page non-synced messages, I don't see the code.

Code:
if height < uint64(fakeheight) || (check_connected() && last_known_btc_height != -1 && last_known_btc_height < int(our_top_block().Height)) {
fmt.Fprintf(w, `

Wallet appears to be out of sync. Displayed balances are incorrect until wallet is fully synced:

`)
}

I pushed the code so it's on my github now, sorry I blanked before lol.

sr. member
Activity: 682
Merit: 268
blank credentials by default are needed:

 - when running offline / with no reason or intention to sync
 - when syncing using a future tool (to be developed); that tool would run on :8332
   and serve blocks from the Bitcoin P2P network. That won't require the BTC RPC/server
   option, and by extension our config file won't be needed.

in any case new_miner_start() must go, even with blank credentials

the blank credentials cannot be entered into bitcoin's config file. that's exactly
what we need! the user who is capable of editing config files won't be able to do the
wrong thing, but will instead be forced to set identical strong credentials in both files.

can't comment on the merged main page non-synced messages, I don't see the code.