Topic: [EMUNIE] THE fastest crypto-currency - page 5. (Read 11691 times)

legendary
Activity: 1764
Merit: 1018
September 29, 2015, 01:50:14 AM
#56
THE fastest crypto-currency is BitShares 2.0: over 100,000 transactions per second of scalability (1-second blocks)
https://bitshares.org/technology/industrial-performance-and-scalability

It will launch on October 13 with 3-second blocks (30,000 transactions per second)
full member
Activity: 210
Merit: 100
September 29, 2015, 12:37:14 AM
#55
legendary
Activity: 3976
Merit: 1421
Life, Love and Laughter...
September 28, 2015, 10:04:18 PM
#54
Ah, no matter.  I'll wait for the open beta.  Thanks.

Should be a few days before the 7 horsemen of the apocalypse approach.

I'll be patient.  Grin
legendary
Activity: 1260
Merit: 1000
September 28, 2015, 09:43:13 PM
#53
Ah, no matter.  I'll wait for the open beta.  Thanks.

Should be a few days before the 7 horsemen of the apocalypse approach.
legendary
Activity: 3976
Merit: 1421
Life, Love and Laughter...
September 28, 2015, 09:39:10 PM
#52
Ah, no matter.  I'll wait for the open beta.  Thanks.
sr. member
Activity: 378
Merit: 250
September 28, 2015, 09:05:29 PM
#51
How can one be an Emunie beta tester?

Dan has been slowly bringing more people in over the past 2 years for the closed beta releases. I've been one for close to a year now, and I was pretty frustrated by how long it took to be given the status. Lately it has been mostly founder beta testing (a much smaller group), so at this point you would be better off waiting for the open beta. But if you want to try for closed beta access, you can do so through the eMunie forum.
legendary
Activity: 3976
Merit: 1421
Life, Love and Laughter...
September 28, 2015, 08:50:02 PM
#50
How can one be an Emunie beta tester?
member
Activity: 96
Merit: 10
September 28, 2015, 02:57:23 PM
#49
why not a lean KV approach?
an in-memory/disk hybrid like Redis can handle such tiny amounts of data in a blink, with far lower CPU usage than a bloated RDBMS. e.g. one million keys occupy only ~100 MB of memory. persistence works well via finely granulated snapshots. the API provides all the primitives to layer a query model on top. should be a good choice for this kind of task.

if a fully persistent solution is needed, something like Sophia outperforms many other disk-based storages and is nearly as fast as Redis, but with a fraction of the code size.

Will this approach run on a $600 commodity Notebook computer, which is what I am successfully using as an eMunie beta tester?
hero member
Activity: 597
Merit: 500
September 28, 2015, 12:55:29 PM
#48
why not a lean KV approach?
an in-memory/disk hybrid like Redis can handle such tiny amounts of data in a blink, with far lower CPU usage than a bloated RDBMS. e.g. one million keys occupy only ~100 MB of memory. persistence works well via finely granulated snapshots. the API provides all the primitives to layer a query model on top. should be a good choice for this kind of task.

if a fully persistent solution is needed, something like Sophia outperforms many other disk-based storages and is nearly as fast as Redis, but with a fraction of the code size.
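To make the KV idea above concrete, here is a minimal Python sketch of an in-memory store with snapshot persistence, in the spirit of what Redis does with RDB snapshots. The class and file names are invented for illustration; a real deployment would use Redis itself rather than this toy.

```python
import json
import os
import tempfile

class SnapshotKV:
    """Toy in-memory KV store with snapshot persistence,
    illustrating the Redis-style memory/disk hybrid."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def put(self, key, value):
        self.data[key] = value

    def get(self, key, default=None):
        return self.data.get(key, default)

    def snapshot(self):
        # Write atomically: dump to a temp file, then rename over the old
        # snapshot, so a crash mid-write never corrupts the store.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(self.data, f)
        os.replace(tmp, self.path)
```

All reads and writes hit memory; disk IO only happens at snapshot time, which is where the low CPU/IO cost of the approach comes from.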
full member
Activity: 237
Merit: 100
September 28, 2015, 12:47:50 AM
#47
Where can I find the specifications?
full member
Activity: 179
Merit: 100
September 27, 2015, 05:32:33 PM
#46
If you get something right the first time, you don't have to worry about forking it later due to its inability to scale to the necessary levels.

The "core" txn system is what is being done right the first time.

All other layers on top of that (e.g. client functionality) are merely expressions of those capabilities for the users.  Those can be updated easily.
legendary
Activity: 1260
Merit: 1000
September 27, 2015, 05:08:30 PM
#45
Yup, partitions my friend, that problem goes away Wink

Isn't this system going to be extremely difficult for updates in the long run?  Partitioning and having different groups who don't update, etc.
legendary
Activity: 1050
Merit: 1016
September 27, 2015, 04:31:39 PM
#44
That solution for you (if it fits your purpose) will be very fast, then your IO bottleneck will shift to network I imagine?

Network will become a bottleneck at 12'000 TPS (for 100 Mbps).

Yup, partitions my friend, that problem goes away Wink
legendary
Activity: 2142
Merit: 1010
Newbie
September 27, 2015, 04:30:50 PM
#43
That solution for you (if it fits your purpose) will be very fast, then your IO bottleneck will shift to network I imagine?

Network will become a bottleneck at 12'000 TPS (for 100 Mbps).
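For reference, the arithmetic behind that figure: a 100 Mbps link carries 12.5 MB/s of payload, so at 12'000 TPS each transaction can average at most about 1 KB on the wire. The per-transaction size is inferred here from the stated numbers, not given in the thread, and protocol overhead is ignored.

```python
link_bps = 100_000_000           # 100 Mbps link
bytes_per_sec = link_bps / 8     # 12.5 MB/s, ignoring protocol overhead
tps = 12_000
bytes_per_tx = bytes_per_sec / tps
print(round(bytes_per_tx))       # 1042
```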
legendary
Activity: 1050
Merit: 1016
September 27, 2015, 04:28:35 PM
#42
Hmmm...I've found that the major bottlenecks on lower-end stuff are actually the IO DB writes/reads and not so much the crypto-related stuff.  Sure, it has a positive effect if you can speed it up, but a good 70%+ of the optimizing I do is about how to get data over the IO quicker and more efficiently.

What DB system do you use? MySQL? I use http://docs.oracle.com/javase/8/docs/api/java/nio/MappedByteBuffer.html.
I have just recalled that eMunie does much more than just payments, in which case we cannot compare our solutions, because our cryptocurrency handles payments only and doesn't need to do sophisticated stuff like order matching.

MySQL and Derby for development, probably go with Derby or H2 for V1.0.  

The data stores themselves are abstracted though, so any DB solution can sit behind them with minor work so long as they implement the basic interface.

That solution for you (if it fits your purpose) will be very fast; then your IO bottleneck will mainly shift to network, I imagine?
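A minimal sketch of what such an abstracted data-store interface could look like. These are hypothetical names for illustration; the thread doesn't show eMunie's actual interface, only that backends like Derby or H2 sit behind one.

```python
from abc import ABC, abstractmethod

class DataStore(ABC):
    """Hypothetical minimal interface a backing DB must implement."""

    @abstractmethod
    def read(self, key): ...

    @abstractmethod
    def write(self, key, value): ...

    @abstractmethod
    def delete(self, key): ...

class MemoryStore(DataStore):
    """Trivial in-memory backend; a Derby/H2/MySQL adapter would
    implement the same three methods against its own storage."""

    def __init__(self):
        self._d = {}

    def read(self, key):
        return self._d.get(key)

    def write(self, key, value):
        self._d[key] = value

    def delete(self, key):
        self._d.pop(key, None)
```

The point of the design is that swapping the DB only means writing a new adapter, not touching the transaction logic above it.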
legendary
Activity: 1050
Merit: 1016
September 27, 2015, 04:22:06 PM
#41
Hmmm...I've found that the major bottlenecks on lower-end stuff are actually the IO DB writes/reads and not so much the crypto-related stuff.  Sure, it has a positive effect if you can speed it up, but a good 70%+ of the optimizing I do is about how to get data over the IO quicker and more efficiently.

That was like word for word what Bytemaster said in this youtube video heh:  http://www.youtube.com/watch?v=bBlAVeVFWFM

Well, IO is always a bottleneck, that's simple developer 101
legendary
Activity: 1260
Merit: 1000
September 27, 2015, 04:05:32 PM
#40
Hmmm...I've found that the major bottlenecks on lower-end stuff are actually the IO DB writes/reads and not so much the crypto-related stuff.  Sure, it has a positive effect if you can speed it up, but a good 70%+ of the optimizing I do is about how to get data over the IO quicker and more efficiently.

That was like word for word what Bytemaster said in this youtube video heh:  http://www.youtube.com/watch?v=bBlAVeVFWFM
legendary
Activity: 2142
Merit: 1010
Newbie
September 27, 2015, 03:59:24 PM
#39
Hmmm...I've found that the major bottlenecks on lower-end stuff are actually the IO DB writes/reads and not so much the crypto-related stuff.  Sure, it has a positive effect if you can speed it up, but a good 70%+ of the optimizing I do is about how to get data over the IO quicker and more efficiently.

What DB system do you use? MySQL? I use http://docs.oracle.com/javase/8/docs/api/java/nio/MappedByteBuffer.html.
I have just recalled that eMunie does much more than just payments, in which case we cannot compare our solutions, because our cryptocurrency handles payments only and doesn't need to do sophisticated stuff like order matching.
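For anyone curious, Python's mmap module gives the same memory-mapped-file idea as Java's MappedByteBuffer: reads and writes go through page-cached memory instead of explicit file IO calls. The file name and record layout below are invented for illustration.

```python
import mmap
import os
import struct

path = "txindex.bin"

# mmap needs a non-empty backing file, so pre-size it first.
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    # Write one fixed-width record at offset 0: (tx id, amount) as two u64s.
    struct.pack_into("<QQ", mm, 0, 42, 5000)
    tx_id, amount = struct.unpack_from("<QQ", mm, 0)
    mm.flush()  # push dirty pages to disk, like MappedByteBuffer.force()
    mm.close()

os.remove(path)
print(tx_id, amount)  # 42 5000
```

Fixed-width records plus a known offset make lookups a pointer computation rather than a query, which is where the speed over a general RDBMS comes from.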
legendary
Activity: 1050
Merit: 1016
September 27, 2015, 03:50:30 PM
#38
You're probably the one and only developer I wouldn't mind proving me wrong Smiley

If you can show me proof of an existing crypto processing 250 tx/s sustained, I'll be mildly impressed.  If it can be done on 20 or fewer nodes, I'll be suitably impressed.  If you can show the same doing 500 tx/s+, I'll be very impressed.

Likewise, if you can show peaks of 500, 1000 & 2000+, the corresponding levels of impressed as described above will be in effect Smiley

What are the bandwidth, RAM and CPU requirements for a node?

PS: I'm using a hash-based signing algorithm; it's much faster than elliptic curve stuff (even faster than Ed25519), so I may have some advantage right from the start.

The lowest-spec node I ran in that test was an Asus i3-3217 laptop: 32-bit OS, 256MB of heap allocated to Java, and the DB running on its 5400 rpm platter drive.  It's a really crappy machine; the CPU is quad-core, but it barely manages 2000 PassMark points, and the HD...well....pen and paper would be faster, I swear.  It *just* about kept up acting as a ledger node, so I guess that or lower is your target Wink

Not sure if anyone in the network was running lower grade hardware than that.

Hmmm...I've found that the major bottlenecks on lower-end stuff are actually the IO DB writes/reads and not so much the crypto-related stuff.  Sure, it has a positive effect if you can speed it up, but a good 70%+ of the optimizing I do is about how to get data over the IO quicker and more efficiently.
legendary
Activity: 2142
Merit: 1010
Newbie
September 27, 2015, 12:40:35 PM
#37
You're probably the one and only developer I wouldn't mind proving me wrong Smiley

If you can show me proof of an existing crypto processing 250 tx/s sustained, I'll be mildly impressed.  If it can be done on 20 or fewer nodes, I'll be suitably impressed.  If you can show the same doing 500 tx/s+, I'll be very impressed.

Likewise, if you can show peaks of 500, 1000 & 2000+, the corresponding levels of impressed as described above will be in effect Smiley

What are the bandwidth, RAM and CPU requirements for a node?

PS: I'm using a hash-based signing algorithm; it's much faster than elliptic curve stuff (even faster than Ed25519), so I may have some advantage right from the start.
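"Hash-based signing" can mean several schemes, and the thread doesn't say which one is used, so purely as an illustration: here is a minimal Lamport one-time signature over SHA-256. Signing and verifying are just hash evaluations, which is why such schemes can outrun elliptic-curve signatures; the trade-offs are large signatures and one-time keys.

```python
import hashlib
import os

def keygen():
    # Lamport one-time keys: two random 32-byte preimages per digest bit.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest())
          for a, b in sk]
    return sk, pk

def _bits(msg):
    # The 256 bits of the message's SHA-256 digest.
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(msg, sk):
    # Reveal one secret preimage per digest bit; keys must never be reused.
    return [sk[i][b] for i, b in enumerate(_bits(msg))]

def verify(msg, sig, pk):
    # Each revealed preimage must hash to the matching public value.
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(_bits(msg)))
```

A signature here is 256 x 32 bytes = 8 KB, versus 64 bytes for Ed25519, which is the usual price paid for the speed.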