
Topic: The MOST Important Change to Bitcoin (Read 17180 times)

hero member
Activity: 1078
Merit: 520
December 11, 2016, 11:42:24 AM
#48
I would say that rewarding non-mining nodes with a small share of the block reward would have been a great idea, but it probably wasn't a need that could have been foreseen. I think I am right in saying that Satoshi never foresaw ASIC chips and massive mining farms in China. I think he assumed that every node would be mining and would therefore eventually be rewarded.
newbie
Activity: 42
Merit: 0
December 11, 2016, 10:47:45 AM
#47
The most important change to bitcoin? Hmm, I think there is nothing to change at all. In my opinion its development is already going well. Rather, bitcoin changes the world for the better: the internet brought us free distribution of information, and bitcoin brings us free distribution of commerce.
hero member
Activity: 952
Merit: 515
December 09, 2016, 01:07:54 AM
#46
I don't think there is anything that needs to change; I trust the one who made it. I know that he studied it deeply, and for me it is great. But I believe there is still room to improve, like faster transaction confirmations and more advertising, but overall it is good.
member
Activity: 67
Merit: 10
December 08, 2016, 09:24:59 PM
#45
I'm really not sure, to be honest.

We really need to keep it decentralised, though.
legendary
Activity: 3066
Merit: 1383
December 08, 2016, 04:04:30 PM
#44
That was an interesting topic, so I decided to put it on the first page. What can be changed? I think the total quantity of BTC. It is true that 21 million is too small if we are talking about bitcoin becoming very popular and global. The price will increase because of the limited supply, and then 20,000 satoshi per transaction may become too much; there just won't be enough satoshi to go around, and there is currently nothing smaller than a satoshi.
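For scale, here is a quick back-of-the-envelope check of the divisibility point (a minimal sketch; the million-dollar price below is purely hypothetical, not a prediction):
Code:
# Protocol facts: roughly 21 million BTC cap, 1 BTC = 100,000,000 satoshi.
CAP_BTC = 21_000_000
SATOSHI_PER_BTC = 100_000_000

total_satoshi = CAP_BTC * SATOSHI_PER_BTC
print(f"{total_satoshi:,} satoshi in total")   # 2,100,000,000,000,000

# Even at a purely hypothetical 1,000,000 USD per BTC,
# one satoshi would only be worth one US cent:
print(1_000_000 / SATOSHI_PER_BTC)             # 0.01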
full member
Activity: 150
Merit: 100
July 29, 2010, 09:34:31 AM
#43

I don't understand what you're saying about speed, protocol buffers were designed by Google to satisfy three requirements:

My complaint here is using XML as the yardstick of efficiency.  It is hardly the most efficient data protocol format, but the advantages it does offer and the typical application which uses XML doesn't necessarily need the hyper-efficient data formatting either.  I'm not disputing what Google has done here either, as it gives a sort of XML-like data protocol framework with increased efficiency, including CPU speed.  It just doesn't solve all of the problems for everybody and it shouldn't be viewed as the ultimate solution for programming either.

I guess I'm sort of comparing an efficient data framework to programming in a high level language like C or Python vs. Assembly language.  Assembly programming is by far and away more efficient, but it does take some skill to be able to program in that manner.  It is also something very poorly (if at all) taught by most computer programming courses of study.

I wouldn't have used XML as a good indicator of performance personally; however, those are the only numbers presented on the protobuf website. I might throw together a test using the C# implementation of protocol buffers to get some numbers for protocol buffers vs. hardcoded binary packets. However, I suspect speed isn't *really* a problem here; the serialisation time is on the scale of hundreds of nanoseconds - not a problem!

As for size, I suspect protocol buffers are going to be smaller than a handwritten packet layout by satoshi for a couple of reasons:
1) Bitcoin includes reserved fields for forwards compatibility; protocol buffers don't
2) Protocol buffers include things like variable-length encoding, which would be a silly micro-optimisation for Satoshi to hand-code but comes for free with protocol buffers (and can significantly decrease the size of a packet; see the quick sketch below)
3) Losing a couple of bytes on the size of a packet (if indeed packets do get bigger; I suspect they won't) while gaining cross-language compatibility, standardisation of the packet layout, significant ease of use in code AND forwards compatibility is a *very* good tradeoff.
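On point 2, here is a minimal hand-rolled sketch of the base-128 varint scheme the protobuf documentation describes (an illustration only, not the protobuf library itself and not Bitcoin's actual serialiser):
Code:
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a little-endian base-128 varint."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)   # set the continuation bit, more bytes follow
        else:
            out.append(byte)
            return bytes(out)

# A uint64 field that is almost always 1 (like nServices) costs a fixed
# 8 bytes in a rigid layout, but only 1 byte as a varint:
print(len(encode_varint(1)))            # 1
print(len(encode_varint(2**32 - 1)))    # 5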
full member
Activity: 224
Merit: 141
July 28, 2010, 12:32:15 PM
#42
Generally a compact custom format works better in terms of bandwidth and disk usage, but I do see some advantages for something like this.  I am curious about how this works for forward compatibility where a new field or some aspect is added that wasn't accounted for in an earlier specification.  It does become a big deal, and I'll admit that XML and similar kinds of data protocols tend not to break as easily compared to rigid custom protocols.
By the same logic, C is faster than Python. However, I've run across a few Python programs that were orders of magnitude faster than similar C programs. How? The C programs were sloppy and Python's standard library functions are increasingly well-optimized.

It is a myth that C is the most optimized programming language and the one that produces the most efficient software.  If you want to get into language-bashing wars, I'm all for it, and C would be one of my first targets.  For myself, I prefer Object Pascal as my programming language of choice, but some of that is habit and intimate knowledge of the fine points of the compilers for that language.  I have certainly issued a challenge to any developer to compare binary files for similar implementations, and also to note compilation speeds on larger (>100k lines of code) projects.  Most C compilers lose on nearly every metric.  Enough of that diversion.

A custom specification that doesn't rely upon a protocol framework is almost always going to be faster, but it is also much more fragile in terms of future changes and debugging.  It doesn't have to be fragile in terms of forward compatibility, but you have to be very careful about how the protocol is implemented to achieve that.  A formal protocol framework helps you avoid those kinds of problems with an existing structure, but it does add overhead to the implementation.

I don't understand what you're saying about speed, protocol buffers were designed by Google to satisfy three requirements:

My complaint here is using XML as the yardstick of efficiency.  It is hardly the most efficient data protocol format, but the advantages it does offer and the typical application which uses XML doesn't necessarily need the hyper-efficient data formatting either.  I'm not disputing what Google has done here either, as it gives a sort of XML-like data protocol framework with increased efficiency, including CPU speed.  It just doesn't solve all of the problems for everybody and it shouldn't be viewed as the ultimate solution for programming either.

I guess I'm sort of comparing an efficient data framework to programming in a high level language like C or Python vs. Assembly language.  Assembly programming is by far and away more efficient, but it does take some skill to be able to program in that manner.  It is also something very poorly (if at all) taught by most computer programming courses of study.
full member
Activity: 210
Merit: 104
July 28, 2010, 02:15:56 AM
#41
Generally a compact custom format works better in terms of bandwidth and disk usage, but I do see some advantages for something like this.  I am curious about how this works for forward compatibility where a new field or some aspect is added that wasn't accounted for in an earlier specification.  It does become a big deal, and I'll admit that XML and similar kinds of data protocols tend not to break as easily compared to rigid custom protocols.
By the same logic, C is faster than Python. However, I've run across a few Python programs that were orders of magnitude faster than similar C programs. How? The C programs were sloppy and Python's standard library functions are increasingly well-optimized.

As an example, the CAddress serialization in Bitcoin, passed around in at least the addr message and version message, is 26 bytes. That's 8 bytes for the nServices field (uint64, and always 1 on my system), 12 bytes marked "reserved", and the standard 6 bytes: 4 bytes for IP, 2 bytes for port number. While I agree that including the ability to support IPv6 and other services (or whatever nServices is for) in the future is a great idea, I also think it's a bit wasteful to use 26 bytes for what can currently be encoded as 6 bytes. With protocol buffers, this would be smaller now but retain the ability to extend in the future.
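As a rough illustration of that size gap (a simplified sketch, not the exact Bitcoin serialisation and not a real protocol buffer message):
Code:
import struct

def caddress_fixed(services: int, ipv4: bytes, port: int) -> bytes:
    # 8-byte uint64 services + 12 reserved bytes + 4-byte IPv4 + 2-byte port = 26 bytes
    return struct.pack("<Q", services) + b"\x00" * 12 + ipv4 + struct.pack(">H", port)

def caddress_compact(services: int, ipv4: bytes, port: int) -> bytes:
    # What a varint/optional-field scheme roughly buys when services is small
    # and the reserved space is simply omitted until it is actually needed.
    return bytes([services]) + ipv4 + struct.pack(">H", port)

ipv4 = bytes([127, 0, 0, 1])
print(len(caddress_fixed(1, ipv4, 8333)))    # 26
print(len(caddress_compact(1, ipv4, 8333)))  # 7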

I agree that encryption and compression are a bit harder to take into account, but at the very least they could be layered on top of the protocol buffers. Build the byte string with something extensible, then encrypt/compress it and wrap it in a tiny header that says that you did that. You lose 2 or 4 bytes for the trouble, but you gain the ability to change the format down the road. Either that, or encrypt only certain protocol buffer fields and put the serialization on top of the encryption.
sr. member
Activity: 252
Merit: 250
July 26, 2010, 10:30:55 AM
#40
ASN.1
full member
Activity: 150
Merit: 100
July 26, 2010, 08:46:20 AM
#39
I don't understand what you're saying about speed, protocol buffers were designed by Google to satisfy three requirements:

1) Forwards compatibility (Google changes their protocols all the time, protocol buffers allow them to do this with ease)
2) Speed (every millisecond matters, and protocol buffers are around the fastest serialisation method out there; the documentation says they "are 20 to 100 times faster [than XML]")
3) Size (Protocol buffers are tiny, the documentation says "are 3 to 10 times smaller [than XML]")

Protocol buffers were designed almost for the exact problem bitcoin is facing.

Addendum: If you want to know how they work, have fun
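In the spirit of that addendum, here is a hand-rolled sketch of the wire format the protobuf documentation describes: every field is a varint key (field number plus wire type) followed by its value, which is exactly what lets an old reader skip fields it has never heard of. This is an illustration only, not the protobuf library:
Code:
def write_varint(out: bytearray, value: int) -> None:
    while True:
        byte = value & 0x7F
        value >>= 7
        out.append((byte | 0x80) if value else byte)
        if not value:
            return

def read_varint(data: bytes, pos: int):
    result = shift = 0
    while True:
        byte = data[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return result, pos

def encode_field(field_number: int, value: int) -> bytes:
    out = bytearray()
    write_varint(out, (field_number << 3) | 0)   # wire type 0 = varint
    write_varint(out, value)
    return bytes(out)

def decode(data: bytes, known_fields: set) -> dict:
    fields, pos = {}, 0
    while pos < len(data):
        key, pos = read_varint(data, pos)
        value, pos = read_varint(data, pos)
        if (key >> 3) in known_fields:    # unknown field numbers are silently ignored
            fields[key >> 3] = value
    return fields

msg = encode_field(1, 300) + encode_field(7, 42)   # field 7: added by a newer client
print(decode(msg, known_fields={1}))               # {1: 300}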
full member
Activity: 224
Merit: 141
July 26, 2010, 08:42:10 AM
#38
It is also possible to make protocols that are forward compatible with changes that may be made in the future.

Making forward-compatible protocols is something I've been trying to tackle for my Computer Science final-year project; the solution I came up with was protocol buffers, and it just struck me that they're perfect for bitcoin for a whole load of reasons:

Generally a compact custom format works better in terms of bandwidth and disk usage, but I do see some advantages for something like this.  I am curious about how this works for forward compatibility where a new field or some aspect is added that wasn't accounted for in an earlier specification.  It does become a big deal, and I'll admit that XML and similar kinds of data protocols tend not to break as easily compared to rigid custom protocols.

Another problem with something of this nature is that you have to stay within the framework of the specification language or however else this data structure is organized, and it doesn't anticipate things like encryption and compression.

While there certainly are many applications that could use a tool of this nature, I'm not entirely sure that Bitcoin is necessarily "a perfect application" of this particular tool.  Considering the nature of this project and the anticipated scale of operation, overhead and abstraction can be quite costly for bandwidth, even if it is just a couple of bytes and a few extra machine cycles.  That is something which matters.
full member
Activity: 150
Merit: 100
July 26, 2010, 06:59:41 AM
#37
Nope, the standard implementation of protocol buffers is under the new BSD license. Furthermore, there are loads of versions of protocol buffers in many different languages, all published under a variety of licenses. Some people have even written entire protobuf parsers in 100 lines, so if we really wanted, bitcoin could have its own implementation of a protobuf parser and that code would be entirely ours. However, the standard implementation is just fine.
hero member
Activity: 938
Merit: 500
July 26, 2010, 05:06:48 AM
#36
Google maintains the rights to code they have designed.  Incorporating it would require data reporting to Google.  Generally not a good thing.
full member
Activity: 150
Merit: 100
July 25, 2010, 08:59:58 PM
#35
I don't know much about the scripts, but if you're right and this is what scripts were designed to address, then I'd push for protocol buffers; they're designed and built by Google, so (no offence to Satoshi) they're probably better.

Also, the fact that protocol buffers are supported by lots of other languages would make building clients (without generation) in other languages (with protobuf support) absolutely trivial.

Edit: Changing the packet structure to use protocol buffers would be difficult to do, although I would still highly recommend it. However, changing the structure of the local files to use protocol buffers isn't a breaking change, which means it would be an excellent idea in my opinion (smaller, faster, neater code, easier to parse in other languages, etc.)
full member
Activity: 210
Merit: 104
July 25, 2010, 08:39:18 PM
#34
It is also possible to make protocols that are forward compatible with changes that may be made in the future.

Making forward-compatible protocols is something I've been trying to tackle for my Computer Science final-year project; the solution I came up with was protocol buffers, and it just struck me that they're perfect for bitcoin for a whole load of reasons:

-> Small (suitable for networking and hard disk storage)
-> Very fast
-> Implementations in loads of languages (so writing new clients becomes a lot simpler)
-> Forwards compatible (indeed, this is most of the point of protocol buffers)
-> Dead simple to use in code
-> Support for custom fields in packets (so, for example, a new client could start embedding messages in packets, and all the other clients would silently ignore this field)

So I guess the most important change to bitcoin for me is to start using protocol buffers for networking and saving the wallet file
Wow, that's awesome. That seems a lot like what the SCRIPT part of transactions was designed to address, but this seems like a better way...
full member
Activity: 150
Merit: 100
July 25, 2010, 08:25:22 PM
#33
It is also possible to make protocols that are forward compatible with changes that may be made in the future.

Making forward-compatible protocols is something I've been trying to tackle for my Computer Science final-year project; the solution I came up with was protocol buffers, and it just struck me that they're perfect for bitcoin for a whole load of reasons:

-> Small (suitable for networking and hard disk storage)
-> Very fast
-> Implementations in loads of languages (so writing new clients becomes a lot simpler)
-> Forwards compatible (indeed, this is most of the point of protocol buffers)
-> Dead simple to use in code
-> Support for custom fields in packets (so, for example, a new client could start embedding messages in packets, and all the other clients would silently ignore this field)

So I guess the most important change to bitcoin for me is to start using protocol buffers for networking and saving the wallet file
full member
Activity: 210
Merit: 111
July 25, 2010, 08:12:39 PM
#32
I would make the rate of coin generation constant ...

You might try reading in the economics forum. Lots of competing suggestions in there.

Beware, it's a hornet's nest!
newbie
Activity: 2
Merit: 0
July 25, 2010, 07:15:13 PM
#31
I would make the rate of coin generation constant rather than decreasing, so that there would not be constant deflation as people lose coins.  If one assumes that coins are lost at a rate proportional to circulation (dC/dt = -kC, where C is the number of coins and t is time), and coins are generated at a constant (unchanging) rate G, then there is some circulation C for which G - kC = 0 (in other words, the circulation stays constant).  That level is C = G/k.  So if we want the circulation to stabilize at C_f, then G should be set to k*C_f.  For example, if 5% of coins are lost each year, then k is equal to -ln(0.95) per year = 0.0513 per year (a weird unit).  If we want C_f to be a trillion bitcoins, then roughly 51.3 billion bitcoins per year (a bit more than 5% of a trillion) should be created.
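A quick numeric check of that equilibrium (the 5% loss rate and the 1 trillion target are just the example figures above):
Code:
import math

loss_per_year = 0.05                  # assumed fraction of coins lost each year
k = -math.log(1 - loss_per_year)      # ~0.0513 per year
target = 1e12                         # desired steady-state circulation, in coins
G = k * target                        # required generation rate, ~5.13e10 coins/year
print(f"G = {G:.3e} coins per year")

# Crude Euler simulation of dC/dt = G - k*C, starting from zero coins:
C = 0.0
for year in range(500):
    C += G - k * C
print(f"Circulation after 500 years: {C:.3e}")   # approaches 1e12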
sr. member
Activity: 252
Merit: 268
July 25, 2010, 06:35:22 PM
#30
A breaking change could be made without breaking the network by setting it to not take effect until a certain block height six months or more in the future. That would give almost everyone plenty of time to upgrade their client. The few who didn't notice would still notice sometime after the change, and would still be able to upgrade and retain bitcoins obtained before it.
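A minimal sketch of that flag-day idea (the height and the rule below are hypothetical, purely for illustration):
Code:
ACTIVATION_HEIGHT = 100_000   # hypothetical block height, announced months in advance

def new_rule_enforced(height: int) -> bool:
    return height >= ACTIVATION_HEIGHT

def accept_block(height: int, follows_new_rule: bool) -> bool:
    if new_rule_enforced(height):
        return follows_new_rule   # after the flag day the new rule is mandatory
    return True                   # before it, old-style blocks are still accepted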
full member
Activity: 152
Merit: 100
July 25, 2010, 06:10:12 PM
#29
Quote
We might not care that the minting rate is cut by 3/4 for months on end, but taking 4x longer to complete a transaction is a fairly big deal.
You mean it will be slow for months because the difficulty was ramped up and then someone bailed? In order to fall to 25%, wouldn't 75% of the computing power have to be pulled? If someone can do that, they control the project anyway.

Yes, the complete attack would require adding 3x the current CPU capacity for 3-4 days, long enough for the difficulty to increase by 4x, and then removing it. It's not necessary that all that CPU capacity be under the control of one person, however. A sudden, temporary surge of interest from many unrelated individuals could do the same thing (think "slashdotting"). So the attack could happen even without anyone taking control of the project.
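The arithmetic behind that, assuming the usual 10-minute block target and 2016-block retargeting window:
Code:
TARGET_MINUTES = 10
RETARGET_BLOCKS = 2016

surge_factor = 4   # honest hashrate plus 3x extra = 4x total, so difficulty ratchets up 4x

# Once the extra hashrate leaves, blocks arrive 4x too slowly:
slow_interval = TARGET_MINUTES * surge_factor
days_to_next_retarget = RETARGET_BLOCKS * slow_interval / (60 * 24)

print(slow_interval)                     # 40 minutes per block
print(round(days_to_next_retarget))      # ~56 days instead of the usual 14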