
Topic: [ANN] eMunie (EMU) - NOT a BitCoin fork/clone - call for beta testers - page 13.

legendary
Activity: 2632
Merit: 1023
I see, sure I could put something vague together and ignore a lot of the variables, opting for an expected mean value for those.

Would maybe give some ideas.

Perhaps I can also set up a test subnet at a later date whose nodes generate lots of micro transactions between themselves, and do some real benchmarks and report the results.

this would be valuable and give useful data, as well as some certainty about the future; you don't want this thing grinding to a halt, with the problem only becoming apparent 3 years down the track



legendary
Activity: 1050
Merit: 1016
I see, sure I could put something vague together and ignore a lot of the variables, opting for an expected mean value for those.

Would maybe give some ideas.

Perhaps I can also set up a test subnet at a later date whose nodes generate lots of micro transactions between themselves, and do some real benchmarks and report the results.
legendary
Activity: 2632
Merit: 1023
not really, too many variables, most of which are out of our control.

Calculating T would be affected by the number of transactions required to seek through, and that's about the only reliable variable we have.

But T is also affected by the following:

Unexpected network latency, network load, the number of hatchers in the network, the performance of those hatchers (hardware specs), and degradation of hatcher performance due to fragmentation of either RAM or HDD

If you really want a formula I can give you one, but it would be complicated, inaccurate most of the time, and of no real consequence or use.

do you have any sort of ballpark figure you think may apply, e.g. at 1,000,000 transactions of "blockchain" size?

I guess this may be something people want to know long term: that it does not become exponentially long, or grow to a power.

e.g. some sort of t = f(a*x^(b*n)), or something going as e^x, where b*n and x are positive and >1, may be fatal, unless it's roughly equivalent to Moore's law, which is more or less the HD space growth assumption for BTC (putting Electrum to one side)

so I think the order of growth is all that is needed, not the detail.
legendary
Activity: 1050
Merit: 1016
Thanks, it's 100% CPU/Mem/IO bound though, and some of our beta testers have run hatchers alongside mining for BTC and LTC etc.
full member
Activity: 175
Merit: 100
I might be able to take a break from mining FTCs and LTCs. I'll throw a few rigs at it as more details develop.
legendary
Activity: 1050
Merit: 1016
not really, too many variables, most of which are out of our control.

Calculating T would be affected by the number of transactions required to seek through, and that's about the only reliable variable we have.

But T is also affected by the following:

Unexpected network latency, network load, the number of hatchers in the network, the performance of those hatchers (hardware specs), and degradation of hatcher performance due to fragmentation of either RAM or HDD

If you really want a formula I can give you one, but it would be complicated, inaccurate most of the time, and of no real consequence or use.
legendary
Activity: 2632
Merit: 1023
The shared history is the public block tree/ledger.

1. Your client does a semi-comprehensive check
2. 2 hatchers (there are ALWAYS at least 2 to verify a block) do a full check, that's all the way back as far as is needed
3. Other client nodes do a fast validation as that transaction has been verified by 2 hatchers

The hatchers do the bulk of the work, which is what they are there for, and can reject that transaction for block inclusion outright before any other nodes in the system even see it.

so could you give some sort of

t = f(dataSize)
legendary
Activity: 1050
Merit: 1016
The shared history is the public block tree/ledger.

1. Your client does a semi-comprehensive check
2. 2 hatchers (there are ALWAYS at least 2 to verify a block) do a full check, that's all the way back as far as is needed
3. Other client nodes do a fast validation as that transaction has been verified by 2 hatchers

The hatchers do the bulk of the work, which is what they are there for, and can reject that transaction for block inclusion outright before any other nodes in the system even see it.
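
Purely as an illustration of the three steps above, a minimal Python sketch follows. It is not eMunie code: the two check functions are placeholders I have invented, and only the structure (local semi-check, full check by at least 2 hatchers, fast validation by everyone else) comes from the post.

Code:
MIN_HATCHERS = 2  # point 2: a block is always verified by at least 2 hatchers

def shallow_check(tx):
    """Sending client: semi-comprehensive check (placeholder rule)."""
    return tx["amount"] > 0

def full_check(tx):
    """Hatcher: full check back through the dependencies (placeholder rule)."""
    return tx["sender_net_receipts"] >= tx["amount"]

def submit(tx, n_hatchers):
    if not shallow_check(tx):
        return "rejected by sending client"
    approvals = sum(full_check(tx) for _ in range(n_hatchers))
    if approvals < MIN_HATCHERS:
        # hatchers can reject the transaction for block inclusion before
        # any other node in the system even sees it
        return "rejected by hatchers"
    # remaining clients only fast-validate, trusting the hatchers' full checks
    return "verified by hatchers, fast-validated by other clients"

print(submit({"sender": "A", "amount": 20, "sender_net_receipts": 21}, n_hatchers=5))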
legendary
Activity: 2632
Merit: 1023
You can, but the hatchers will do the same check, as do all the other nodes on a verified block (just a little different for performance on commodity hardware).

You can inject them if you like into your own client to pass the check locally, and send out that transaction, but the other nodes in the system will see that you do not have enough, and that you will be in negative balance.  A negative balance in an honest system is impossible, so if ever a node is calculated to have one, they have been naughty naughty, and then it's the ban hammer.

so then the other nodes must look back in time through some sort of available shared history, so doesn't this bring up the transaction speed issue again, as you have pushed it back to the whole network's collective memory and checking, which becomes the minimum time step for validation

Do you not confront the same issue then network-wide, thus slowing things to a crawl?
legendary
Activity: 1050
Merit: 1016
You can, but the hatchers will do the same check, as do all the other nodes on a verified block (just a little different for performance on commodity hardware).

You can inject them if you like into your own client to pass the check locally, and send out that transaction, but the other nodes in the system will see that you do not have enough, and that you will be in negative balance.  A negative balance in an honest system is impossible, so if ever a node is calculated to have one, they have been naughty naughty, and then it's the ban hammer.
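
As a toy illustration of the negative-balance rule above (the data layout is my assumption, not eMunie's): every node recomputes the sender's net position from the receipts and sends it knows about, and a spend that would push that position below zero gets the sender banned.

Code:
banned = set()

def net_position(history, node):
    """Receipts minus payments for `node` over the history this node knows about."""
    received = sum(v for payer, v, payee in history if payee == node)
    paid = sum(v for payer, v, payee in history if payer == node)
    return received - paid

def check_transaction(history, sender, amount):
    if sender in banned:
        return "ignored: sender already banned"
    if net_position(history, sender) - amount < 0:
        banned.add(sender)               # negative balance is impossible in an
        return "rejected, ban hammer"    # honest system, so this sender cheated
    return "accepted"

history = [("Z", 5, "A"), ("D", 10, "A"), ("A", 12, "F")]   # A's net position is 3
print(check_transaction(history, "A", 20))                   # rejected, ban hammer
print(check_transaction(history, "A", 1))                    # ignored: sender already banned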
legendary
Activity: 2632
Merit: 1023
ok

"most recent receipts and sends that have occurred, not from the beginning until now"

this makes sense,

but it leads to a second question: what is to stop me just injecting transactions at that point without any historical reference?
legendary
Activity: 1050
Merit: 1016
Perhaps you are thinking of it in the wrong direction, so just to clarify, then we can continue if needed.

It works from the most recent receipts and sends that have occurred, not from the beginning until now.  So let's assume that all the dependency transactions are legit, and so we can leave them out for this example.

I want to send you 20 eMu, I'll be A, you are B, everyone else is a different letter.  Now my balance could be 21 eMu or 21000 eMu, it doesn't matter and this is why....

Starting from the most recent transactions and working back, two variables are set to zero, these are:

TR (Total Receipts) = 0
TP (Total Payments) = 0

Z -> 5 -> A  TR = 5
A -> 10 -> F TP = 10
A -> 5 -> G  TP = 15
D -> 10 -> A TR = 15
X -> 10 -> A  TR = 25

Notice that even though TR at 25 is greater than the 20 I am sending, this still doesn't qualify, as I have sent out 15 in TP since those receipts, so I am still 10 short

T -> 5 -> A TR = 30
S -> 6 -> A TR = 36

Now the transaction will verify as fundable, as TR-TP > 20.

This transaction chain could continue for thousands of entries, but we do not care, nor need to look at it, as we know that we have enough to satisfy the 20 eMu I want to send to you.
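
The walk above maps almost directly onto code. Here is a minimal Python sketch (my own transcription, not eMunie source), reusing the letters and amounts from the example; the (payer, value, payee) layout is just an assumption for illustration.

Code:
def can_fund(history, sender, amount):
    """Walk (payer, value, payee) tuples, most recent first, accumulating
    TR (Total Receipts) and TP (Total Payments) until the spend is covered."""
    tr = 0
    tp = 0
    for payer, value, payee in history:
        if payee == sender:
            tr += value
        elif payer == sender:
            tp += value
        if tr - tp >= amount:   # enough net receipts, no need to look further back
            return True
    return False                # exhausted the dependencies without covering it

# The example from the post: A wants to send 20 eMu.
history = [
    ("Z", 5, "A"),    # most recent: TR = 5
    ("A", 10, "F"),   # TP = 10
    ("A", 5, "G"),    # TP = 15
    ("D", 10, "A"),   # TR = 15
    ("X", 10, "A"),   # TR = 25, but TR - TP is only 10: still short
    ("T", 5, "A"),    # TR = 30
    ("S", 6, "A"),    # TR = 36, TR - TP = 21: fundable, stop here
]
print(can_fund(history, "A", 20))   # True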
legendary
Activity: 2632
Merit: 1023
LOL if you have 10,000 transactions in your GUI I think a hatcher scanning them in a timely fashion is going to be the least of your headaches.

Plus, 10k receipt transactions to balance up 20 eMu, that's rather insane, that would be like having $20 in 0.2 cent coins!
Go into the bank to pay in all those partial cents and it'll take them longer than just paying with a $20 note, but customers generally don't mind, as they know it's going to take longer.

I'm all for poking around in the limits of a system, but when those limits require scenarios that are virtually non-existent, it makes no sense to spend time pondering them. 

That said, there is something worth mentioning here, as I have thought about these scenarios (just not quite as drastic as this one): a client that's holding lots of micro transactions can aggregate and consolidate all of those micro transactions into one large transaction and substitute that into the block tree instead, which ultimately will reduce block tree size and make future processing a bit easier for everyone.

I guess what I am talking about is, say,

10,000 eMu or any suitably large amount being composed of a number of prior transactions; in say 3 years' time it could easily take many thousands of transactions to get there.... so perhaps I am missing something here

how does this work?

legendary
Activity: 1050
Merit: 1016
LOL if you have 10,000 transactions in your GUI I think a hatcher scanning them in a timely fashion is going to be the least of your headaches.

Plus, 10k receipt transactions to balance up 20 eMu, that's rather insane, that would be like having $20 in 0.2 cent coins!
Go into the bank to pay in all those partial cents and it'll take them longer than just paying with a $20 note, but customers generally don't mind, as they know it's going to take longer.

I'm all for poking around in the limits of a system, but when those limits require scenarios that are virtually non-existent, it makes no sense to spend time pondering them.  Realistically, you shouldn't have to run further back than a few hundred transactions per dependency unless it's a VERY large transaction.

That said, there is something worth mentioning here, as I have thought about these scenarios (just not quite as drastic as this one): a client that's holding lots of micro transactions can aggregate and consolidate all of those micro transactions into one large transaction and substitute that into the block tree instead, which ultimately will reduce block tree size and make future processing a bit easier for everyone.  I guess that is similar to your "snapshot".
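
A toy sketch of that consolidation idea (the substitution and signing details aren't described in the thread, so this only shows the size reduction): many micro receipts held by one client are folded into a single aggregate receipt of the same total value.

Code:
def consolidate(receipts, owner):
    """Fold many micro receipts to `owner` into one aggregate receipt of equal value."""
    total = sum(value for payer, value, payee in receipts if payee == owner)
    return [("CONSOLIDATED", total, owner)]

# The thread's example: $20 held as 10,000 receipts of 0.2 cents each.
micro = [("C", 0.002, "A") for _ in range(10_000)]
print(len(micro))                 # 10000 entries for later checks to scan
print(consolidate(micro, "A"))    # one entry worth ~20.0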
legendary
Activity: 2632
Merit: 1023
If it's all micro payments then yes, it can be large, and yes the transaction clear time for that one transaction will be longer, though still faster and more efficient than hashing with large difficulties.

In contrast, if that 20 eMu is dependent on, say, a few larger transactions of 5-6 eMu each (and those likewise), then transaction times for those will be very low indeed.  Likely seconds, as we are seeing.

So the average overall should be acceptable.

I'm not sure how this is going to work out when it's 10,000 transactions deep x 6 different composite transactions, e.g. scanning through 60,000 priors, and this seems to get exponentially more difficult due to bifurcations

I hope I'm wrong

what I don't get is why you can't snapshot every so often, with the complexity of that snapshot signed, thus making it hard to duplicate, and the snapshot size kept sufficient
legendary
Activity: 868
Merit: 1000
ADT developer
If it's all micro payments then yes, it can be large, and yes the transaction clear time for that one transaction will be longer, though still faster and more efficient than hashing with large difficulties.

In contrast, if that 20 eMu is dependent on, say, a few larger transactions of 5-6 eMu each (and those likewise), then transaction times for those will be very low indeed.  Likely seconds, as we are seeing.

So the average overall should be acceptable.

when is it going to be released?

sounds interesting, you have my support
legendary
Activity: 1050
Merit: 1016
If it's all micro payments then yes, it can be large, and yes the transaction clear time for that one transaction will be longer, though still faster and more efficient than hashing with large difficulties.

In contrast, if that 20 eMu is dependent on, say, a few larger transactions of 5-6 eMu each (and those likewise), then transaction times for those will be very low indeed.  Likely seconds, as we are seeing.

So the average overall should be acceptable.
legendary
Activity: 2632
Merit: 1023
No, they only have to look back far enough into the dependencies.

So let's say you have a balance of 100, and you want to send out a transaction of 20; you only need to check that 20 of it can be fulfilled by the transaction receipts it is dependent on.

but isn't that still potentially huge?
legendary
Activity: 1050
Merit: 1016
No, they only have to look back far enough into the dependencies.

So let's say you have a balance of 100, and you want to send out a transaction of 20; you only need to check that 20 of it can be fulfilled by the transaction receipts it is dependent on.
legendary
Activity: 2632
Merit: 1023
Moderate hardware can process a good volume of transactions, no specialist hardware needed to run a basic hatcher; a 5-year-old Dell desktop will churn out a hundred an hour if pushed.

That means that most clients can run as hatchers too, and a large number of hatchers in the network means that transaction processing stays fast and smooth.

In the beta we are seeing transaction times, fully confirmed and safe (due to the different model we use), of 15-20 seconds, and everyone is running pretty moderate hardware.

To add to this, I am running one client in a Xubuntu VM with only one 3 GHz core and 1 GB of RAM and it just hums right along.

And the tx times are AMAZING compared to btc. Waiting for confirmations is dead!

yes, but what happens when your record is in the gigs or terabytes? Don't the hatchers have to look at/confirm the whole chain?