
Topic: Thoughts on type safety and crypto RNGs

hero member
Activity: 899
Merit: 1002
December 27, 2014, 04:58:13 PM
#29
Every OS has a proper method for obtaining random bytes (/dev/urandom): https://news.ycombinator.com/item?id=8049739. The problem is when you, or somebody else, chroots the application and forgets to make the device (or a working userspace CSPRNG) available inside it, which is why obtaining randomness directly from the kernel is a good idea.
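
For illustration, a minimal sketch (the function name and buffer size are arbitrary) of pulling random bytes directly from the kernel CSPRNG on a POSIX system, so that a missing /dev/urandom inside a chroot fails loudly instead of silently degrading to a weaker userspace generator:

Code:
// Minimal sketch: read 32 random bytes straight from the kernel CSPRNG.
#include <array>
#include <fcntl.h>
#include <stdexcept>
#include <unistd.h>

std::array<unsigned char, 32> GetKernelEntropy() {
    std::array<unsigned char, 32> buf{};
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        throw std::runtime_error("cannot open /dev/urandom (missing inside chroot?)");
    ssize_t n = read(fd, buf.data(), buf.size());
    close(fd);
    if (n != static_cast<ssize_t>(buf.size()))
        throw std::runtime_error("short read from /dev/urandom");
    return buf;
}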

Another problem is people building browser-based client-side JS wallets without containing them inside a browser addon: http://matasano.com/articles/javascript-cryptography/
legendary
Activity: 1526
Merit: 1134
December 27, 2014, 04:36:06 PM
#28
(I thought it was really weird that Mike brought up manual memory management; I see I made an error in not correcting him.)

Sigh. You know I have the deepest respect for you, Gregory, but this is not the first time I get the feeling you're commenting on things I've written without having read them closely :(

I said:

Quote
The main concern with Core is not that the code is insecure today, but what happens in the years to come

I know how the code is currently written. I first read it a few months after it was released, remember ;) But being concerned about an extremely common class of errors is hardly weird. Multiple people in this thread have brought it up.

My experience of working on several large C++ server codebases at Google is that it's quite possible to write robust code ... for a while. When you have a single thread, everything is written by one guy and all data is request scoped, the tools C++ provides can work very well.

But eventually one of the following happens:

1) Someone introduces multi-threading for better scalability, resource management, use of a blocking library etc, and accidentally writes code that races
2) Someone refactors code written by someone else and uninitialised data creeps in
3) Someone starts using a third party library that isn't written in the same way and requires manual heap management (like OpenSSL)
4) Someone profiles and decides to reduce the amount of copying that is going on

More generally: things change, teams change and software gets more complicated. Because nothing in C++ forbids manual memory management and some things require it, eventually it ends up being used. And some time after that, someone makes a mistake.
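
For illustration, a minimal sketch of item 2 above (all names are hypothetical, not from any real codebase): a later refactor adds a member that the original constructor never initialises, and the read compiles cleanly but returns whatever garbage happens to be in memory:

Code:
#include <cstdint>
#include <iostream>

struct PeerStats {
    double score;
    uint32_t misbehaviour;          // added in a later refactor...

    PeerStats() : score(0.0) {}     // ...but never added to the initialiser list
};

int main() {
    PeerStats p;
    std::cout << p.misbehaviour << "\n";  // undefined behaviour: uninitialised read
    // A C++11 in-class initialiser (uint32_t misbehaviour = 0;) or warnings such as
    // -Wuninitialized catch some, but not all, cases like this.
}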

We can't magically convert Bitcoin Core to a safer language with a stricter type system. We can anticipate that mistakes will happen, and try to put in place systems to automatically catch and handle them.

Quote from: petertodd
You might be interested to find out that Python is actually moving towards static types; the language recently added support for specifying function argument types in the syntax. How the types are actually checked is undefined in the language itself - you can use third-party modules to impose your desired rules.

This sounds somewhat like the Checker Framework, a pluggable type system for Java; I'd like to see it adapted for Scala and Kotlin too. It provides a number of practical type systems that catch real-world errors like mixing up seconds and milliseconds and other unit mismatches.
hero member
Activity: 836
Merit: 1030
bits of proof
December 25, 2014, 07:04:51 AM
#27
Yes, that "safe" subset of C++ is emulating a simple and restricted reference counting runtime by hand. Certainly doable.
That isn't the case. Yes, reference counting is one tool in that box, but it has considerable costs. Most things are handled by RAII. And then there is unique_ptr... "By hand" is also perhaps misleading, in that, for better or worse, the developer themselves doesn't see the machinery under the hood any more than they see bounds checking in Java.
RAII and unique_ptr implement a reference-counting store where the (implicit) use count can be 1 or 0, right?
"By hand" does not mean copy and paste. The solutions you use are likely the best attainable in the C++ environment.

My point is that better support for program correctness is available elsewhere and should be used where permissible. When that is permissible is open for discussion, but I do not buy that the answer is never.

I even suspect that a language and runtime that make it easier to exclude side effects and implicit inputs, and that aid reasoning about the correctness of algorithms, would be a better choice even for defining consensus.
staff
Activity: 4242
Merit: 8672
December 25, 2014, 06:26:13 AM
#26
Yes, that "safe" subset of C++ is emulating a simple and restricted reference counting runtime by hand. Certainly doable.
That isn't the case. Yes, reference counting is one tool in that box, but it has considerable costs. Most things are handled by RAII. And then there is unique_ptr... "By hand" is also perhaps misleading, in that, for better or worse, the developer themselves doesn't see the machinery under the hood any more than they see bounds checking in Java.
hero member
Activity: 836
Merit: 1030
bits of proof
December 25, 2014, 05:45:58 AM
#25
Huh? I quite clearly gave Bitcoin Core as an example of C++ done right, precisely because it uses a safe subset of the language that is effectively a higher-level language via the abstractions used.

Yes, that "safe" subset of C++ is emulating a simple and restricted reference counting runtime by hand. Certainly doable. Apple, for example, has been successful in forcing reference counting onto application-level programmers on iOS, although Objective-C gives nice support for that pattern.

Continuing that line of thought, you could define a "safe" subset of C that only uses the stack, maybe even do functional programming with assembler macros. Runtimes and compilers do no magic, therefore talented programmers can emulate any feature of them in any language. It requires skill and it can be fun.

Bitcoin is not the first program that moves around billions of dollars in value; it is just a new one. Most programs I wrote moved more value than Bitcoin's market capitalization, so I know that once you deal with other people's money it gets difficult to argue against using the help technology offers.
newbie
Activity: 28
Merit: 0
December 24, 2014, 05:25:13 PM
#24
It's no surprise that easier languages attract even less skilled programmers who make more mistakes, but it's foolish to think that giving skilled programmers a tool other than a footgun is going to result in more mistakes. I think the unfortunate thing - maybe the root cause of this problem in the industry - is you definitely do need to teach programmers C at some point in their education so they understand how computers actually work. For that matter we need to teach them assembler too. The problem is C is nice enough to actually use - even the nicest machine architectures aren't - and people trained that way tend to reach for that footgun over and over again in the rest of their careers when really the language should be put on a shelf and only brought out to solve highly specialized tasks - just like assembler.

"Easy" applies to e.g. Python, but is unlikely the motivation for those who turn to Haskell or Scala. It is rather skilled programmers who turn to functional languages after they have shot themselves in the foot often enough to reconsider what they stand on.

Equally, how many computer science graduates finish their education with a good understanding of the fact that a programming language is fundamentally a user interface layer between them and machine code?

Unfortunately many of those who get that think that they are better than the compiler and its runtime. Some might really be better, maybe even consistently, but staying ahead of compiler and runtime development is getting harder and their advantage less and less likely.

Couldn't have said this better myself.
legendary
Activity: 1120
Merit: 1152
December 24, 2014, 05:02:05 PM
#23
I'm mainly concerned about whether or not using C(++) with manual memory management is acceptable practice. Screwing up manual memory management exposes you to the king of all implicit details: what garbage happens to be in memory at that very moment.
We mostly do not use manual memory management in Bitcoin Core. Virtually all use of delete is in explicit destructors; most things are just RAII. I looked a while back and think I found only something like three or four instances of delete used outside of destructors, and I assume those cases will all be changed the next time they're touched (e.g. the wallet encryption, which hasn't been touched in years).

(I thought it was really weird that Mike brought up manual memory management; I see I made an error in not correcting him.)


Huh? I quite clearly gave Bitcoin Core as an example of C++ done right, precisely because it uses a safe subset of the language that is effectively a higher-level language via the abstractions used. You brought up C, which just isn't a safe language to write code in.

I read Mike's post as pointing out that knowing how to use C++ correctly - what subset to use - is something that does take skill. It's notable that we've changed a few things in Bitcoin Core to, for instance, use pointers where we didn't before, gradually decreasing the safety of the system by using parts of the language beyond that safe subset.
staff
Activity: 4242
Merit: 8672
December 24, 2014, 04:23:37 PM
#22
I'm mainly concerned about whether or not using C(++) with manual memory management is acceptable practice. Screwing up manual memory management exposes you to the king of all implicit details: what garbage happens to be in memory at that very moment.
We mostly do not use manual memory management in Bitcoin Core. Virtually all use of delete is in explicit destructors; most things are just RAII. I looked a while back and think I found only something like three or four instances of delete used outside of destructors, and I assume those cases will all be changed the next time they're touched (e.g. the wallet encryption, which hasn't been touched in years).
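
For illustration, a minimal sketch of the pattern described above (the types are hypothetical, not Bitcoin Core code): ownership lives in RAII wrappers, so delete never appears outside a destructor and every exit path, including exceptions, releases what it holds:

Code:
#include <memory>
#include <mutex>
#include <vector>

struct Wallet { std::vector<unsigned char> encrypted_key; };

class Node {
    std::mutex cs;
    std::unique_ptr<Wallet> wallet;             // sole owner; freed automatically
public:
    void LoadWallet() {
        std::unique_ptr<Wallet> w(new Wallet);  // the raw pointer never escapes
        std::lock_guard<std::mutex> lock(cs);   // RAII lock: released on any exit
        wallet = std::move(w);                  // any previous wallet is freed here
    }
};  // ~Node() runs the unique_ptr destructor; no hand-written delete anywhere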

(I thought it was really weird that Mike brought up manual memory management; I see I made an error in not correcting him.)
hero member
Activity: 836
Merit: 1030
bits of proof
December 24, 2014, 06:02:10 AM
#21
It's no surprise that easier languages attract even less skilled programmers who make more mistakes, but it's foolish to think that giving skilled programmers a tool other than a footgun is going to result in more mistakes. I think the unfortunate thing - maybe the root cause of this problem in the industry - is you definitely do need to teach programmers C at some point in their education so they understand how computers actually work. For that matter we need to teach them assembler too. The problem is C is nice enough to actually use - even the nicest machine architectures aren't - and people trained that way tend to reach for that footgun over and over again in the rest of their careers when really the language should be put on a shelf and only brought out to solve highly specialized tasks - just like assembler.

"Easy" applies to e.g. Python, but is unlikely the motivation for those who turn to Haskell or Scala. It is rather skilled programmers who turn to functional languages after they have shot themselves in the foot often enough to reconsider what they stand on.

Equally, how many computer science graduates finish their education with a good understanding of the fact that a programming language is fundamentally a user interface layer between them and machine code?

Unfortunately many of those who get that think that they are better than the compiler and its runtime. Some might really be better, maybe even consistently, but staying ahead of compiler and runtime development is getting harder and their advantage less and less likely.
legendary
Activity: 1120
Merit: 1152
December 24, 2014, 03:57:58 AM
#20
Dynamically-typed languages are the worst, because you cannot fully understand the consequences of a function without looking at every existing call site to see the argument types (and sometimes you cannot infer those without going deeper into the call tree!)

You might be interested to find out that Python is actually moving towards static types; the language recently added support for specifying function argument types in the syntax. How the types are actually checked is undefined in the language itself - you can use third-party modules to impose your desired rules. IIRC the next major version, 3.6 (?), will include a module with one approach to actually enforcing those argument types as part of the standard library. Similarly, class attributes have syntax support for specifying types, and again you can already use third-party modules to enforce those rules.

I wouldn't be surprised if the "sweet spot" for most tasks is a language much like Python with the ability to specify type information as well as the ability to easily enforce 100% usage of that technique in important code, while still giving programmers the option of writing quick-n-dirty untyped code where desired. And of course, with a bit of type information writing compilers that produce reasonably fast code becomes fairly easy - Cython does that already for Python without too much fuss.
legendary
Activity: 1120
Merit: 1152
December 24, 2014, 03:46:35 AM
#19
Equally the demographics of people writing the tiny amount of C / C++ code out there is very different than the demographics writing in more modern languages.
Maybe that is true where you live. Where I live, C++ enjoys a resurgence in the form of the superset/subset language SystemC, where certain things about programs can be proven.

I'm referring specifically to the demographics of people writing code for Bitcoin-related applications.

Likewise, gmaxwell posted here information about new research where a specific C subset (targeting the specific TinyRAM architecture) can be used to produce machine-verifiable proofs. AFAIK this is still a long-shot option for Bitcoin, not something usable currently.

My comment here pertains to the consensus-critical code in the dichotomy you've mentioned later.

C with machine-verifiable proofs has nothing to do with the type of C programming I'm criticizing; neither does SystemC. Those types of environments are so far removed from the vanilla and unsafe C(++) programming that gets people into trouble that you might as well call them different languages in all but name.
legendary
Activity: 1120
Merit: 1152
December 24, 2014, 03:37:19 AM
#18
Actually no, you're catching the point I'm making but missing it. Cryptographic systems in general have the property that you live or die based on implicit details. Cryptographic consensus makes the matter worse only in that a larger class of surprises turns out to be fatal security vulnerabilities. It's quite possible, and has been observed in practise, to end up with exploitable systems because some buried/abstracted behaviour is different than you expected. A common example is propagating errors up to the far side when authentication fails, leaking data about the failure and allowing incremental recovery of secret data. Other examples are implicit padding behaviour leaking information about keys (there is an example of this in Bitcoin Core: OpenSSL's symmetric crypto routines had implicit padding behaviour that made the wallet encryption faster to crack than had been intended).

I'm mainly concerned about whether or not using C(++) with manual memory management is acceptable practice. Screwing up manual memory management exposes you to the king of all implicit details: what garbage happens to be in memory at that very moment.

Given that we have at least C++ available which can insulate you from manual memory management(1), there's just no excuse to be writing code that way anymore by default. Equally writing C++ in a way that exposes you to that class of errors is generally unacceptable.

Bitcoin itself is a perfect example, where some simple "don't be an idiot" development practices have resulted in a whole class of errors having never been an issue for us, letting development focus on the remaining types of errors.

1) Where manual memory management == things that can cause memory corruption and invalid accesses. There are of course other meanings of the term that refer to practices where memory is still "managed" manually at some level, e.g. allocation, but corruption and invalid accesses are not possible.

I'm certainly a fan of smarter tools that make software safer (I'm conceptually a big fan of Rust, for example). But what I'm seeing deployed out in the wider world is that more of the actually deployed weak cryptography software results from reasons unrelated to language. This doesn't necessarily mean anything about non-cryptographic software. And some of it is probably just an attitude correlation; you don't get far in C if you're not willing to pay attention to details. So we might expect other languages to be denser in sloppy approaches. But that doesn't suggest that someone equally attentive might not do better, generally, in something with better properties. (I guess this is basically your demographic correlation.) So I'm certainly not disagreeing with these points; but I am disagreeing with the magic bullet thinking which is provably untrue: Writing in FooLang will absolutely not make your programs safe for people to use. It _may_ be helpful, indeed, but it is neither necessary nor sufficient, as demonstrated by the software deployed in the field.

And since when did I say anything about "magic bullets"? I'm talking about acceptable bare minimum practices. Over and over again we've seen that doing manual memory management requires Herculean efforts to get right, yet people do get far enough in C(++) to cause serious problems doing it.

It's no surprise that easier languages attract even less skilled programmers who make more mistakes, but it's foolish to think that giving skilled programmers a tool other than a footgun is going to result in more mistakes. I think the unfortunate thing - maybe the root cause of this problem in the industry - is you definitely do need to teach programmers C at some point in their education so they understand how computers actually work. For that matter we need to teach them assembler too. The problem is C is nice enough to actually use - even the nicest machine architectures aren't - and people trained that way tend to reach for that footgun over and over again in the rest of their careers when really the language should be put on a shelf and only brought out to solve highly specialized tasks - just like assembler.

Equally, how many computer science graduates finish their education with a good understanding of the fact that a programming language is fundamentally a user interface layer between them and machine code?
hero member
Activity: 836
Merit: 1030
bits of proof
December 24, 2014, 01:27:18 AM
#17
So I'm certainly not disagreeing with these points; but I am disagreeing with the magic bullet thinking which is provably untrue: Writing in FooLang will absolutely not make your programs safe for people to use. It _may_ be helpful, indeed, but it is neither necessary nor sufficient, as demonstrated by the software deployed in the field.

Neither Mike nor I advertised a language as a magic bullet that makes programs safe.

You, however, seem to believe in superior powers of the maintainer that outweigh the advances in languages and runtime environments of the last decades.

I'd say you play a more dangerous game than we do.

You wrote, "Most exploits arise from programming errors in low level weakly typed languages". I pointed out that in our space we've observed the opposite: there have been more serious cryptographic weaknesses in software written in very high level languages like Python, JavaScript, PHP, Java, etc. That's all. Please tone down the personal insults. You're very close to earning an ignore button press from me. I have scrupulously avoided besmirching your skills-- or even saying that I think your preferred tools are not _good_, only that people using them suffer errors too-- but in every response you make you attack my competence.

If you define your space as Bitcoin Core, then yes, it shows very high quality and is maintained by remarkable talents, of which you are one.
No doubt about that. I had no intention of accusing you of incompetence.

The model that has been successful with Bitcoin Core has, however, failed so many times elsewhere that it fills libraries with the dos and don'ts of pointer arithmetic and the anatomy of buffer overflow and zero-delimited string exploits. I know Bitcoin Core developers carefully avoid those sources of error; it still did not protect against a bug in OpenSSL. That bug was not cryptographic in nature, but exposed the memory of the process as a consequence of a missing array bounds check in the C/C++ runtime. Sure, there are arguments for not having those checks at run time, but those arguments work especially well with languages that check more at compile time, such that runtime violations are less probable.

While exceptional care can be successful, as we observe, it is hard to scale and sustain. This is why the software industry has been moving away from C/C++. C/C++ retains relevance in certain areas, just like any good technology.

We need orders of magnitude more code and developers than Bitcoin Core has to build this economy, therefore it is sane to take any attainable help to sustain quality. I believe that type-safe and functional languages and modern runtime environments do help. I do not think you doubt this, so please calm down too. I am not attacking you personally; I doubt the extensibility of your successful model to all projects that use Bitcoin or its innovations.

legendary
Activity: 2128
Merit: 1073
December 23, 2014, 11:04:17 PM
#16
I would prefer that low-level crypto code (key management, PRNG, signatures, encryption, authentication) is written in C/C++ (e.g. Sipa's secp256k1 library in Bitcoin) and every other layer is written in a more modern statically typed language, such as Java.
I disagree that such a combination would be safer and easier to audit. Java and C++ runtimes are very hard to interface properly, especially in the exception handling and threading aspects. So the purported audit would not only involve auditing the code of Bitcoin Core but also a large portion of the Java runtime.

One could make one or two restrictions in the mixed architecture you're proposing:

1) C/C++ code are only "leaves" on the call tree, i.e. only Java calls C++, C++ never calls Java.

2) "Java" is understood to mean not "validated standard conforming Java" but "subset of Java supported by the gcj ahead-of-time compiler" matched with the gcc/g++ used for the C/C++ code.

Otherwise the mixed-language program will have a large minefield in the inter-language interface layer.

Edit:

Historical note: if "Java" meant "Microsoft Visual J++" with J/Direct instead of JNI as the inter-language layer, that could also work relatively smoothly. Those things are of historical interest only, although there is at least one vendor in Russia that still maintains a Java toolchain unofficially compatible with the historical code: http://www.excelsior-usa.com/
hero member
Activity: 555
Merit: 654
December 23, 2014, 10:31:14 PM
#15
All coders make mistakes, in every language and in every library. Formal verification methods are generally too expensive. That's why peer review and audits exist: to detect those errors. And the more auditors, the better.
 
C++ code is generally more concise because of the higher versatility of its grammar (e.g. overloaded operators), but not as easy to understand for anyone but the programmer. C++ is very powerful, but can more easily hide information from the auditor. However, the programmer has greater control over timing side channels and secret leakage.
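
For illustration, a minimal sketch of the kind of control meant here (the function name is made up): a constant-time comparison whose running time does not depend on where two MACs or keys first differ:

Code:
#include <cstddef>

// Always touches every byte, so timing does not reveal the first mismatch.
// A production version would also need to keep the compiler from optimising
// the loop away or short-circuiting it.
bool ConstantTimeEquals(const unsigned char* a, const unsigned char* b, std::size_t len) {
    unsigned char diff = 0;
    for (std::size_t i = 0; i < len; ++i)
        diff |= static_cast<unsigned char>(a[i] ^ b[i]);
    return diff == 0;
}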
 
Java code is generally more explicit and descriptive. It forces you to do things that make the auditor's work simpler, such as class-file separation.
Obviously you can program C++ as if it were Java, but that's not how C++ libraries are built, nor how C++ programmers have learned. Nobody changes a language's standard semantics.

Dynamically-typed languages are the worst, because you cannot fully understand the consequences of a function without looking at every existing call site to see the argument types (and sometimes you cannot infer those without going deeper into the call tree!)

One example I remember now is Python's strong pseudo-random generator seeding function. If you call the seeding function with a big integer, it uses the integer as the seed, but if you call it with a hexadecimal or binary string (and I've seen this), it computes a 32-bit hash of the string and then seeds the generator with that 32-bit number. And this is allowed because a 32-bit hash is a default for every object. You can write Python that does not make use of dynamic typing, but that requires checking the type of every argument received, which nobody does.

I would prefer that low-level crypto code (key management, PRNG, signatures, encryption, authentication) is written in C/C++ (e.g. Sipa's secp256k1 library in Bitcoin) and every other layer is written in a more modern statically typed language, such as Java. For most projects, that probably means that 90% of the code would be in Java and 10% would be in C/C++ (and that would probably be crypto library code).
The 90% of Java code would be more secure not because Java code is more secure per se, but because it would be easier to audit. The 10% would be harder, but since it would be small you would be able to double the audit time for that part.
 
In the end, you get a more secure system having used the same audit or peer-review time.
staff
Activity: 4242
Merit: 8672
December 23, 2014, 09:50:07 PM
#14
So I'm certainly not disagreeing with these points; but I am disagreeing with the magic bullet thinking which is provably untrue: Writing in FooLang will absolutely not make your programs safe for people to use. It _may_ be helpful, indeed, but it is neither necessary nor sufficient, as demonstrated by the software deployed in the field.

Neither Mike nor I advertised a language as a magic bullet that makes programs safe.

You, however, seem to believe in superior powers of the maintainer that outweigh the advances in languages and runtime environments of the last decades.

I'd say you play a more dangerous game than we do.

You wrote, "Most exploits arise from programming errors in low level weakly typed languages". I pointed out that in our space we've observed the opposite: there have been more serious cryptographic weaknesses in software written in very high level languages like Python, JavaScript, PHP, Java, etc. That's all. Please tone down the personal insults. You're very close to earning an ignore button press from me. I have scrupulously avoided besmirching your skills-- or even saying that I think your preferred tools are not _good_, only that people using them suffer errors too-- but in every response you make you attack my competence.
hero member
Activity: 836
Merit: 1030
bits of proof
December 23, 2014, 06:36:11 PM
#13
So I'm certainly not disagreeing with these points; but I am disagreeing with the magic bullet thinking which is provably untrue: Writing in FooLang will absolutely not make your programs safe for people to use. It _may_ be helpful, indeed, but it is neither necessary nor sufficient, as demonstrated by the software deployed in the field.

Neither Mike nor I advertised a language as a magic bullet that makes programs safe.

You, however, seem to believe in superior powers of the maintainer that outweigh the advances in languages and runtime environments of the last decades.

I'd say you play a more dangerous game than we do.
legendary
Activity: 2128
Merit: 1073
December 22, 2014, 07:08:36 PM
#12
Equally the demographics of people writing the tiny amount of C / C++ code out there is very different than the demographics writing in more modern languages.
Maybe that is true where you live. Where I live, C++ enjoys a resurgence in the form of the superset/subset language SystemC, where certain things about programs can be proven.

Likewise, gmaxwell posted here information about new research where a specific C subset (targeting the specific TinyRAM architecture) can be used to produce machine-verifiable proofs. AFAIK this is still a long-shot option for Bitcoin, not something usable currently.

My comment here pertains to the consensus-critical code in the dichotomy you've mentioned later.
 
staff
Activity: 4242
Merit: 8672
December 22, 2014, 04:09:22 PM
#11
I've made some suggestions on how to do this in the past (auto restart on crash, use Boehm GC)
Our process is not "don't make mistakes"; Bitcoin Core largely uses a safer subset of C++ that structurally prevents certain kinds of errors (assuming the subset is followed; we don't have any mechanical enforcement). I don't believe anyone writing or reviewing code for the project would describe the primary safety strategy as coming from "don't make mistakes", not with the level of review and the general avoidance of riskier techniques.

Though even equipped with automatic theorem provers that could reason about cryptographic constructs, no language or language facility can free you from having to avoid errors (though avoiding errors is much more than "just don't make them").

Things like "restart on crash" can be quite dangerous, because they let an attacker try their attack over and over, or keep the software running (and mining / authoring irreversible transactions) on a failing system. In most cases, if we know that something the software hasn't accounted for has happened, just being shut down is better. If doing this results in a DoS attack... DoS attacks against the network are bad, but they're preferable to less recoverable outcomes. I think if anything we'd be likely to go the other way: on a "can never happen" indication of corruption, write out a "your_system_appears_busted_and_bitcoin_wont_run_until_you_test_it_and_remove_this_file.txt" that gets checked for at startup.
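
For illustration, a rough sketch of that fail-closed behaviour (the file name and helper names are hypothetical, not actual Bitcoin Core code): on a "can never happen" corruption indication, drop a marker file and refuse to start until an operator has investigated and removed it:

Code:
#include <cstdlib>
#include <fstream>

static const char* kCorruptionMarker = "corruption_detected_do_not_run_until_tested.txt";

void AbortOnCorruption(const char* reason) {
    std::ofstream(kCorruptionMarker) << reason << '\n';  // leave evidence for the operator
    std::abort();                                        // shut down; do not auto-restart
}

bool SafeToStart() {
    return !std::ifstream(kCorruptionMarker).good();     // refuse to run while marker exists
}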

You're also conflating two separate problems. It may turn out that writing consensus-critical code in other languages is harder, but that's a very different problem than writing secure code in the more general sense.
Actually no, you're catching the point I'm making but missing it. Cryptographic systems in general have the property that you live or die based on implicit details. Cryptographic consensus makes the matter worse only in that a larger class of surprises turns out to be fatal security vulnerabilities. It's quite possible, and has been observed in practise, to end up with exploitable systems because some buried/abstracted behaviour is different than you expected. A common example is propagating errors up to the far side when authentication fails, leaking data about the failure and allowing incremental recovery of secret data. Other examples are implicit padding behaviour leaking information about keys (there is an example of this in Bitcoin Core: OpenSSL's symmetric crypto routines had implicit padding behaviour that made the wallet encryption faster to crack than had been intended).

I'm certainly a fan of smarter tools that make software safer (I'm conceptually a big fan of Rust, for example). But what I'm seeing deployed out in the wider world is that more of the actually deployed weak cryptography software results from reasons unrelated to language. This doesn't necessarily mean anything about non-cryptographic software. And some of it is probably just an attitude correlation; you don't get far in C if you're not willing to pay attention to details. So we might expect other languages to be denser in sloppy approaches. But that doesn't suggest that someone equally attentive might not do better, generally, in something with better properties. (I guess this is basically your demographic correlation.) So I'm certainly not disagreeing with these points; but I am disagreeing with the magic bullet thinking which is provably untrue: Writing in FooLang will absolutely not make your programs safe for people to use. It _may_ be helpful, indeed, but it is neither necessary nor sufficient, as demonstrated by the software deployed in the field.
legendary
Activity: 1526
Merit: 1134
December 22, 2014, 02:39:22 PM
#10
I would say that we've got very lucky with respect to Bitcoin Core:  Satoshi was a very careful developer who knew C++ very well and maximised use of its features to increase safety. The developers who followed him are also very skilled, know C++ very well and know how to avoid the worst traps.

The main concern with Core is not that the code is insecure today, but what happens in the years to come. Will the people who follow Gavin, Pieter, Gregory etc. be as good? What about altcoins? What if a refactoring or multi-threading of some performance bottleneck introduces a double free? Anyway, there's not much we can do about this except try to make the environment as safe as possible. I've made some suggestions on how to do this in the past (auto restart on crash, use Boehm GC) and normally Gregory likes to point out possible downsides :) but I'm not super comfortable relying on "don't make mistakes" as a policy over the long run.

WRT RNG issues in Java, I'm not aware of any beyond the Android bugs, which were very severe but didn't have anything to do with Java as a language or platform. If there have been issues in Java SE I don't recall hearing about them. Bypassing in-process RNGs is still a good idea though.

Quote
You should not do crypto in JS or Java in the first place. In those languages, you do not have control over memory management. For example in JS, you have no control over how and where the browser stores your secret data (keys etc.). There is no way to enforce the physical deletion of private data.

It's also true of C (e.g. AES keys can persist in XMM registers for a long time after use). Although hexafraction is right that on HotSpot you can do manual heap allocations, it doesn't matter much. If an attacker has complete access to your address space then this is so close to "game over" that it hardly makes any odds whether there are multiple copies in RAM. Even if the password isn't lying around, they can just wait until it is. I'm not a big fan of spending time trying to "clean" address spaces of passwords or keys.

Note that for core crypto, it's looking more and more like long term everything will have to be done in assembly anyway. Pain.