
Topic: Avoiding theft using trusted computing (Read 4950 times)

legendary
Activity: 1526
Merit: 1134
September 05, 2012, 06:09:14 AM
#29
You don't need a serial cable. A regular network cable is plenty secure enough as long as you tightly control the traffic allowed to it from other machines. In this case it sounds like there was an SSH daemon listening and that is how the attacker was able to log in. So don't do that - require physical access to change the configuration of the server.

The reason for investigating TPMs and related technologies is that they are quite general and easily integrated by server vendors, so it's cheap and totally reasonable to provision yourself a trusted computing environment in a remote datacenter and then do everything via SSH - whilst still having similar security properties to those a piece of dedicated hardware would give you. Yes, it's advanced and exotic today, but we're building the payment system of tomorrow, right? So it's worth thinking about these things.

In any event, this currency exchange was still running on Linode months after that provider was found to be unable to withstand a motivated assault. I think these kinds of hacks can be avoided by people doing basic due diligence if they're about to (reality check!) entrust large sums of money to a random guy off the internet!
legendary
Activity: 938
Merit: 1001
bitcoin - the aerogel of money
September 05, 2012, 01:46:46 AM
#28
I am curious why a TPM and all kinds of high-tech kernel wizardry with elaborate cutting edge hardware would be more desirable than a low-tech dedicated computer doing this same thing over RS232 where the airgap is visible, easily explained, and easy to comprehend?  
[...]

TC doesn't require physical access to the server. Your RS232 solution does. 

Apart from being more costly than using a dedicated hosting service, this simply isn't practicable for a lot of bitcoin entrepreneurs, eg. those with a nomadic lifestyle.  Where is Zhou Tong going to set up this machine? In his student dorm?
vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
September 04, 2012, 06:52:36 PM
#27
received as a PM, thought I'd post in the thread:


I was reading your response to the TC/TPM thread.  Somehow it seems too complex for the gains.

In light of the Bitfloor incident maybe an rs232 black box project is worth pursuing?  Seems to me multiple layers of protocols between boxes makes attack vectors just a pain in the ass that DMZ configurations can't do much about.

I definitely think so...

There needs to be a base use case where the box is a wallet and does nothing more than ask a user to confirm a proposed transaction on its own display, and signs the proposed transaction if the user agreed to it.  Basic messages to the box would include "getnewaddress", "please sign this transaction", and "here is a new transaction/block I heard on the network".  Basic messages from the box would include "here is a new receiving address", "here is your signed transaction", and "I refuse to sign this transaction".
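A minimal sketch of what such a message protocol might look like on the secure side, assuming newline-delimited JSON framing over the serial link (the message names follow the post; the framing, field names, and stubbed-out key handling are all hypothetical):

```python
import json

# Hypothetical secure-side dispatcher for the wallet-box protocol described
# above. One JSON object per newline-terminated line, as might travel over
# an RS232 link. Key generation and actual signing are stubbed out.

def make_new_address():
    # Stub: a real box would derive a fresh key pair here.
    return "1ExampleAddr..."

def policy_allows(tx):
    # Stub: a real box would show the tx on its own display and wait
    # for the operator to physically confirm.
    return tx.get("amount", 0) <= 10

def handle_message(line: bytes) -> bytes:
    msg = json.loads(line)
    if msg["type"] == "getnewaddress":
        reply = {"type": "address", "address": make_new_address()}
    elif msg["type"] == "sign_transaction":
        if policy_allows(msg["tx"]):
            reply = {"type": "signed_transaction", "tx": msg["tx"], "sig": "stub"}
        else:
            reply = {"type": "refused"}
    elif msg["type"] == "new_block":
        reply = {"type": "ack"}  # update the box's local chain view
    else:
        reply = {"type": "error", "reason": "unknown message"}
    return (json.dumps(reply) + "\n").encode()
```

The point of the dispatcher shape is that the box supports no functionality that isn't explicitly listed: anything unrecognised gets an error, never a signature.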

Then the protocol and/or app could be extended with user extensions (different for everyone using it for more than a wallet) to implement the security policy of the operator's choice.  The app could be extended to limit total withdrawals within a 24h period without console intervention, or the protocol could be extended to allow the wallet to learn PGP public keys and then require PGP signatures for user withdrawals, or whatever.
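The 24h withdrawal limit mentioned above could be one such extension. A sketch, assuming the box has some trusted notion of the current time (all names here are hypothetical):

```python
from collections import deque

# Hypothetical rolling 24-hour withdrawal throttle for the secure box.
# Timestamps are unix seconds from whatever trusted time source the box
# settles on (blockchain, secure clock, etc).

class WithdrawalLimiter:
    def __init__(self, limit_btc: float, window_secs: int = 24 * 3600):
        self.limit = limit_btc
        self.window = window_secs
        self.history = deque()  # (timestamp, amount) pairs

    def allow(self, now: int, amount: float) -> bool:
        # Drop entries that have fallen out of the rolling window.
        while self.history and self.history[0][0] <= now - self.window:
            self.history.popleft()
        spent = sum(a for _, a in self.history)
        if spent + amount > self.limit:
            return False  # over the limit: require console intervention
        self.history.append((now, amount))
        return True
```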

The advantage of RS232 is that it's dumb, slow, universally compatible, and well-understood.  The "black box" can be implemented on another PC, a credit card machine, a Raspberry Pi, or even a microcontroller.
legendary
Activity: 1526
Merit: 1134
September 03, 2012, 03:37:02 PM
#26
Superb, Jonathan got TrustVisor open sourced. For a while I think they were considering keeping it proprietary. I talked about TrustVisor with him before; it seems they've gone even beyond that. I will explore it later.

Thanks for the pointer, Hal. I agree with everything you wrote: I think the design of a supervisor server suitable for use on exchanges/trading platforms/other businesses with hot wallets is very much an open question. Once you have figured out such a design though, moving it into a PAL seems like a fairly straightforward transformation. Issues like how to do rate limiting, to stop a hacked system replacing addresses all need to be resolved for a multi-signer solution to work anyway.

Yes, finding a source of secure time is annoying. Doing an HTTPS request to google.com works - I should know because I've written software that does that before :) Of course google.com is really not supposed to be a global time server, but serving 404s with a date header is fortunately very cheap. IIRC every Google server is synced to GPS time using an internal NTP pool so it should always be accurate.

Throttling/limiting the damage is a valuable goal in and of itself, as it gives you much more time to notice something has gone wrong and throw the off switch. Even if you don't have any automated way to detect badness, the ground truth is often user complaints. If the losses are small enough you can make things right from your profits.

Hal
vip
Activity: 314
Merit: 4276
September 01, 2012, 05:08:44 PM
#25
Apologies for resurrecting this old thread, but I wanted to mention a new development. The people that brought you Flicker and TrustVisor, both mentioned by Mike, have a new project out. xmhf is a hypervisor framework built around Trusted Computing. Its main advantage is that it works on both Intel and AMD, but you still need a newer, relatively high-end machine. TrustVisor has been ported to xmhf, so now it works on both architectures whereas previously it was just AMD.

I agree with the comments above that TC may not be quite right for bitcoin. For one thing these secure program compartments can't do any I/O directly. They have to rely on the insecure code to relay data, although crypto keeps the data secure. So if we wanted the user to approve a transaction, you'd have to send the data to a secure device for approval. In which case you might as well use multisig or even just keep all the keys there.

You could try to implement a self-contained policy like rate limiting, although as discussed above you need a secure time source and state rollback protection. I'm worried that using the blockchain as a time standard might be vulnerable to timing games by the untrusted code, although there might be mitigations. A couple of other potential sources of secure time: the network time protocol, which is how a lot of computers keep their time in sync, has a crypto layer. Unfortunately it doesn't seem suitable for public use, although I found out the US NIST will supposedly supply you with an authenticated time if you go through a complicated application process, http://www.nist.gov/pml/div688/grp40/auth-ntp.cfm.

Thinking way outside the box, you could open an SSL connection to a web page that's updated frequently, and use the Date: header from the http response. You'd hard code the CA root cert and all the relaying could be done by the untrusted code and still be secure. I've tried this with https://google.com and the time is pretty accurate. TrustVisor includes a version of openssl so this would seem to be feasible.
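A sketch of that idea in Python (in a real TC deployment the CA root would be pinned and TLS would terminate inside the secure code; this simplified version just uses the platform's default certificate store, and the host name is only an example):

```python
import http.client
from email.utils import parsedate_to_datetime

# Sketch: use the Date header of an HTTPS response as a rough time source.

def parse_http_date(value: str):
    # RFC 1123 date, e.g. "Wed, 05 Sep 2012 06:09:14 GMT"
    return parsedate_to_datetime(value)

def fetch_remote_time(host: str = "www.google.com"):
    conn = http.client.HTTPSConnection(host, timeout=10)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    date_header = resp.getheader("Date")
    conn.close()
    return parse_http_date(date_header)
```

Accuracy is only to the second, and the untrusted relay can delay the response, so this is a rough bound on time rather than a precise clock - good enough for throttling, not for anything finer.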

But even if you got rate-limiting working, the untrusted code could substitute its addresses for the target addresses, or maybe just skim a percentage off each transaction, hoping to evade detection. A lot of things have to go right for the created transaction to match the user's intentions. Assuming a malware takeover and still trying to protect the user is aiming too high IMO. Maybe we can limit the damage though.


full member
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
March 07, 2012, 01:22:26 PM
#24
You want to implement something like this, but without the secure switch and additional physical servers?
http://www.nsa.gov/ia/media_center/video/orlando2010/flash.shtml

legendary
Activity: 1358
Merit: 1003
Ron Gross
March 07, 2012, 10:54:28 AM
#23
If you look at the thread from slush, he says he used Linode because connectivity from Prague is poor so he does not want to run the pool there. VM providers are very popular these days, I guess because many people don't want to run servers from home. Having a machine with no remote access as the monitor basically requires it be physically close to you, either at home (limited bandwidth, running servers may violate consumer ISP ToS), or at a local colo (prices/connectivity may not be competitive).

The hardware isn't really that obscure, there are over 200 million TPMs out there. The combination of TPM+all the other stuff is a bit more exotic but still available from several well known manufacturers.

Once the initial setup work is done, it can be made available to everyone relatively easily. People can SSH in to their account, run their site and have the sensitive parts run in the isolated space where even the people with physical access to the box would find it hard to compromise. But you can still get the benefits of having other people manage power, cooling, connectivity, etc.

That said, until the work is actually done, I agree that the two server solution is more straightforward.

I would point out there is a 3rd option.

Secure colocation.  Maybe it didn't make sense in slush's case but for Bitcoinica there is simply no excuse.

They were holding $250K with $50 in security.  No jeweler would have done that, no gold miner would do that.  The "safe" should be priced according to the assets it is protecting.

So not to hit Zhou while he is down but hopefully that is a wakeup call.  $250K in assets should be protected like $250K in assets.

Renting private cage space secured with private key and ID verified access is sub $1000 in many parts of the world.  Things like KVM over IP, remote switched PDUs, and private firewalls allow you to build a physical fortress around your hardware that ensures there are no "backdoors".  The cage would never even need to be opened except to replace hardware.  The amount that was stolen could have paid for 20 years of such co-location.

Information Security starts with physical security.  Using a virtual machine is fine to house your personal wallet with up to $100 in assets.  $250K in assets should have enterprise grade PHYSICAL SECURITY.  That doesn't mean things like 2 factor authentication and TPM aren't important.  They are, because future thieves will only become more resourceful, but one needs to start with ironclad physical security.

+1

It's very easy to start Bitcoin businesses ... securing them as they explode and grow larger is difficult sometimes.
I'm sure Gavin is much less devastated about the few BTC stolen from the faucet than Zhou. What's fit for a freemium non-profit website like the Faucet isn't what's fit for a medium-sized business, isn't what's fit for Mt. Gox, Bitcoinica or any semi-serious Bitcoin exchange for that matter.
donator
Activity: 1218
Merit: 1079
Gerald Davis
March 07, 2012, 10:46:26 AM
#22
Once the initial setup work is done, it can be made available to everyone relatively easily. People can SSH in to their account, run their site and have the sensitive parts run in the isolated space where even the people with physical access to the box would find it hard to compromise. But you can still get the benefits of having other people manage power, cooling, connectivity, etc.

I would point out there is a 3rd option.  Secure colocation.  

Maybe it didn't make sense in slush's case but for Bitcoinica there is simply no excuse.  They were holding $250K with $50 in security.  It would be like locking up a briefcase holding $250K in cash with a bicycle lock.  No jeweler would have done that, no gold miner would do that.  The "safe" should be priced according to the assets it is protecting.

So not to hit Zhou while he is down but hopefully that is a wakeup call.  $250K in digital assets should be protected like any other assets worth $250K.

Renting private cage space secured with private key and ID verified access is sub $1000 in many parts of the world.  Most provide secure shipping and remote installation of servers and have bonded employees.  Data security during installation can be ensured with things like truecrypt.  Features like KVM over IP, remote switched PDUs, and private firewalls allow you to build a physical fortress around the hardware which holds your information, ensuring there are no "backdoors".  The cage would never even need to be opened except to replace hardware.  The amount that was stolen could have paid for 20 years of such remote co-location.

Information Security starts with physical security.  Using a virtual machine is fine to house your personal wallet with up to $100 in assets.  $250K in assets should have enterprise grade PHYSICAL SECURITY.  That doesn't mean things like 2 factor authentication and TPM aren't important.  They are, because future thieves will only become more resourceful, but one needs to start with ironclad physical security.
legendary
Activity: 1526
Merit: 1134
March 07, 2012, 04:36:34 AM
#21
If you look at the thread from slush, he says he used Linode because connectivity from Prague is poor so he does not want to run the pool there. VM providers are very popular these days, I guess because many people don't want to run servers from home. Having a machine with no remote access as the monitor basically requires it be physically close to you, either at home (limited bandwidth, running servers may violate consumer ISP ToS), or at a local colo (prices/connectivity may not be competitive).

The hardware isn't really that obscure, there are over 200 million TPMs out there. The combination of TPM+all the other stuff is a bit more exotic but still available from several well known manufacturers.

Once the initial setup work is done, it can be made available to everyone relatively easily. People can SSH in to their account, run their site and have the sensitive parts run in the isolated space where even the people with physical access to the box would find it hard to compromise. But you can still get the benefits of having other people manage power, cooling, connectivity, etc.

That said, until the work is actually done, I agree that the two server solution is more straightforward.
vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
March 06, 2012, 11:34:41 PM
#20
I am curious why a TPM and all kinds of high-tech kernel wizardry with elaborate cutting edge hardware would be more desirable than a low-tech dedicated computer doing this same thing over RS232, where the airgap is visible, easily explained, and easy to comprehend?  Not to mention the hardware is very common and cheap, and the validation rules can be written in any well-known language, high level or low, for any platform or OS.  RS232 is oldschool and boring, but that's the point - it'll never be overly complicated, and compatible hardware is everywhere at every price point and in every form factor.

RS232 essentially guarantees that the access footprint is slow, with no remote access to the OS, and supports no functionality that hasn't been explicitly implemented.  It keeps the computer pretty much airgapped, wired just enough to do its intended job.  On the other hand, if you need a special kind of obscure hardware that has an obscure chip in it and requires obscure OS support, and someone writes open source support for it, hardly anybody's going to bother with it because the bar of them caring enough to acquire the special hardware is that much higher.

For an average setup (even one Bitcoinica's size), Zhou needs only two computers at his house.  All he needs is a "hot" client that runs on some internet-facing computer and passes the transactions across RS232 to the "secure" client that hosts the keys and signs the transactions.  The hot client relays the signed transactions back to the network.  If the secure side needs to know about blocks and the block chain (or unconfirmed transactions) to make its decision, then the RS232 protocol simply needs a "here's an incoming block/transaction" message.  If his secure computer decides it wants human interaction before signing a transaction (example: an unusually big transaction), then it can blow bells and whistles out the local speaker, or ask the "hot" computer to send him an SMS.

legendary
Activity: 1222
Merit: 1016
Live and Let Live
March 06, 2012, 10:45:11 PM
#19
You could have code on the TPM that checks that the given bitcoin block is next in the chain, from the last block hash that it keeps in secure memory.
Since blocks have a timestamp, this could be used as an 'ever increasing' date system, giving a general guarantee of the date.

A more advanced chip could make sure that the transactions it signs are from blocks that it has ‘seen.’

I think that in the future the block chain can be used as a secure time source.
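A toy sketch of the idea, assuming simplified headers (just a parent hash and a timestamp, not real 80-byte Bitcoin headers, and real Bitcoin timestamps only need to exceed the median of the last 11 blocks rather than strictly increase):

```python
import hashlib

# The secure code keeps only the last block hash it has seen and accepts a
# new "header" only if it links to it, so its notion of time (the block
# timestamp) can move forward but never be rolled back.

def block_hash(prev_hash: bytes, timestamp: int) -> bytes:
    data = prev_hash + timestamp.to_bytes(4, "little")
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

class ChainClock:
    def __init__(self, genesis_hash: bytes):
        self.tip = genesis_hash
        self.time = 0

    def advance(self, prev_hash: bytes, timestamp: int) -> bool:
        if prev_hash != self.tip:
            return False  # doesn't build on the chain we've seen
        if timestamp <= self.time:
            return False  # clock may only move forward
        self.tip = block_hash(prev_hash, timestamp)
        self.time = timestamp
        return True
```

An attacker who wants to stall the clock can withhold blocks, but fabricating a faster-moving or rolled-back chain requires doing proof-of-work, which is the economic protection discussed earlier in the thread.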
donator
Activity: 1218
Merit: 1079
Gerald Davis
March 06, 2012, 12:38:27 PM
#18
I'd caution against just buying a TPM. A full TC setup requires a TPM, but also support from the CPU, motherboard, BIOS, etc. Intel's implementation in particular is notorious for bricking motherboards if you don't use the absolute latest versions of the BIOS.

Good warning, although I would imagine that since the MB has a header, and the BIOS includes options for enabling TPM, they must be anticipating that some users would take the option.  Still, thanks for the warning.  If I brick it then I only have myself to blame.

Still, the larger point was that a replacement TPM isn't that difficult to obtain, so even using them as a limited-lifespan product would be viable.  It all depends on how much money you are protecting.  For a personal wallet an increment every 10 blocks is likely fine.  For a Bitcoinica wallet, increment on each block and buy a replacement $20 part every year to be safe.
legendary
Activity: 1526
Merit: 1134
March 06, 2012, 12:29:48 PM
#17
I'd caution against just buying a TPM. A full TC setup requires a TPM, but also support from the CPU, motherboard, BIOS, etc. Intel's implementation in particular is notorious for bricking motherboards if you don't use the absolute latest versions of the BIOS. And by "brick" I'm not using it in the script-kiddie sense that you so often see these days. I mean it actually turns the motherboard into a worthless chunk of metal that has to be thrown out. There are also systems that theoretically support TC but contain serious hardware bugs which render the setup worthless. It's really your best bet to purchase a full system that's been tested and known to work.

I've been wanting to play with these technologies for ages, but never had the time. That said, I'm in contact with one of the foremost researchers (one of the guys who did Flicker). He is interested in the intersection of TC and Bitcoin and if somebody serious steps up, he'd be willing to provide accounts on TC systems that have TrustVisor installed. TrustVisor is pretty much the cutting edge of minimally-sized TCBs and it makes it easy for code to switch in and out of secure space - you just separate your (c/c++) program and then you can make regular function calls into and out of the monitor (they call it a "piece of application logic" or PAL). TrustVisor handles parameter marshalling and other things for you. It has good performance and it virtualizes the TPM in clever ways so you can get more performance out of it.

If anyone has the necessary skills (C++/assembly should not scare you), time and interest, let me know and I'll put you in touch. I'm happy to answer any questions about this sort of technology as well.

BTW it isn't only for servers. You can use it to make end-user wallets that can't be compromised by malware on your laptop. That's actually a more straightforward problem than the server case because the secure code can talk to a human via the screen to confirm the action - there's no need for throttling or risk analysis on the transactions.
donator
Activity: 1218
Merit: 1079
Gerald Davis
March 06, 2012, 12:20:55 PM
#16
Good point. You can select arbitrary tradeoffs. Increment the counter every 10 blocks instead and now it lasts nearly 20 years, which is probably longer than the expected lifetime of the rest of the hardware. It means you can potentially replay blocks within the last couple of hours, but depending on your throttling scheme it may not be an issue.

I'd hope that if one day people were using TPMs regularly better/stronger chips would come onto the market as well. Until then there are various convoluted workarounds to get more out of the limited devices.

Yeah that is a good point.  Even incrementing once every 3 blocks gives you about 6 years.  The TPM module is usually replaceable (it simply plugs into the MB via a 10-pin header) and only costs $20 or so; for max security, buying a $20 part every year or so isn't a bad tradeoff.  My MB has a header for it.  I thought about buying one just to play around.

On edit: Link removed to protect the uninformed from themselves due to potential system damage (see post below).
legendary
Activity: 1526
Merit: 1134
March 06, 2012, 12:18:19 PM
#15
Good point. You can select arbitrary tradeoffs. Increment the counter every 10 blocks instead and now it lasts nearly 20 years, which is probably longer than the expected lifetime of the rest of the hardware. It means you can potentially replay blocks within the last couple of hours, but depending on your throttling scheme it may not be an issue.

I'd hope that if one day people were using TPMs regularly better/stronger chips would come onto the market as well. Until then there are various convoluted workarounds to get more out of the limited devices.
donator
Activity: 1218
Merit: 1079
Gerald Davis
March 06, 2012, 09:35:44 AM
#14
The TPM provides counters that are guaranteed to only increase, and which can be incorporated into sealed state, exactly to avoid rollback/replay attacks.

So what you can do is have the block chain be presented to the secure code. It looks at the timestamps and performs whatever throttling it wants. Then it returns in the sealed state the current chain head hash, and the current value of the monotonic counter which it then increases by one. The next time it runs, it checks that the counter matches. If it doesn't you are being presented with old data.

One problem is that TPMs are quite slow and limited, so techniques to use their limited NVRAM write cycles most effectively can be obscure. This is why we'd need libraries and tools to make all this easy. It's harder than the two server approach at first, but once set up it should let anyone who has access to the relevant hardware have a higher degree of security for less effort and cost.

There's a paper on how best to achieve state continuity here:

http://www.ece.cmu.edu/~jmmccune/papers/PLDMM2011.pdf

Interesting read but this seems like a killer for what we are looking at:
Quote
Even worse, the NVRAM is quite slow and is only expected to support 100K write cycles across its entire lifetime; writing once every second would exhaust NVRAM in less than 28 hours.

If incremented on each block it is only good for ~1.9 years.  The paper shows a novel approach of only incrementing the counter on boot, but I don't see how that helps us for our issue.
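The lifetime figures quoted in this thread follow directly from the 100K write-cycle budget and the ~10 minute average block interval:

```python
# Back-of-envelope check of the counter-lifetime figures in this thread:
# 100K NVRAM write cycles, one block every 10 minutes on average.

WRITE_CYCLES = 100_000
MINUTES_PER_BLOCK = 10
MINUTES_PER_YEAR = 60 * 24 * 365

def lifetime_years(blocks_per_increment: int) -> float:
    total_minutes = WRITE_CYCLES * blocks_per_increment * MINUTES_PER_BLOCK
    return total_minutes / MINUTES_PER_YEAR

# every block:     ~1.9 years
# every 3 blocks:  ~5.7 years
# every 10 blocks: ~19 years
```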
donator
Activity: 1218
Merit: 1079
Gerald Davis
March 06, 2012, 09:32:08 AM
#13
On edit: as indicated above, DUH, the blockchain.  Close enough for this to work and nearly impossible to spoof, at least economically.

I was thinking of a kind of replay attack:

+ Control the TPM's view of the world (e.g. make it seem like it is Jan 1, 2010)
+ Get the TPM to sign a small transaction, shut it down.
+ Increment time, get it to sign another transaction
+ Repeat.

Replace "time" with "blockchain" and you've got the same problem: can the TPM know that its view of the external world is correct?  If it sends a nonce (to prevent replay attacks) to some external service that adds a timestamp and signs it with a public key known to the TPM code... then we're back to using two different servers.


I knew you were going somewhere with it, I just couldn't see it.  Yes, it would be nice if the TPM had a secure timer but IIRC (from a non-Bitcoin project) it doesn't.

I wonder if building a secure hardware timer is that cost prohibitive.  The TPM could simply get verifiable timestamps from a second module.
legendary
Activity: 1526
Merit: 1134
March 06, 2012, 08:41:14 AM
#12
The TPM provides counters that are guaranteed to only increase, and which can be incorporated into sealed state, exactly to avoid rollback/replay attacks.

So what you can do is have the block chain be presented to the secure code. It looks at the timestamps and performs whatever throttling it wants. Then it returns in the sealed state the current chain head hash, and the current value of the monotonic counter which it then increases by one. The next time it runs, it checks that the counter matches. If it doesn't you are being presented with old data.

One problem is that TPMs are quite slow and limited, so techniques to use their limited NVRAM write cycles most effectively can be obscure. This is why we'd need libraries and tools to make all this easy. It's harder than the two server approach at first, but once set up it should let anyone who has access to the relevant hardware have a higher degree of security for less effort and cost.

There's a paper on how best to achieve state continuity here:

http://www.ece.cmu.edu/~jmmccune/papers/PLDMM2011.pdf
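The seal-with-counter scheme above can be modelled in a few lines. Here "sealing" is simulated with an HMAC under a key only the secure code would hold; a real implementation would use the TPM's seal/unseal operations and its NVRAM monotonic counter (all names are illustrative):

```python
import hashlib
import hmac
import json

# Toy model of rollback protection via a monotonic counter bound into
# sealed state. HMAC stands in for TPM sealing.

SEAL_KEY = b"secret-only-inside-the-secure-code"  # hypothetical

class MonotonicCounter:
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1
        return self.value

def seal(state: dict, counter_value: int) -> bytes:
    blob = json.dumps({"state": state, "ctr": counter_value}).encode()
    tag = hmac.new(SEAL_KEY, blob, hashlib.sha256).digest()
    return tag + blob

def unseal(sealed: bytes, counter: MonotonicCounter) -> dict:
    tag, blob = sealed[:32], sealed[32:]
    if not hmac.compare_digest(tag, hmac.new(SEAL_KEY, blob, hashlib.sha256).digest()):
        raise ValueError("tampered state")
    data = json.loads(blob)
    if data["ctr"] != counter.value:
        raise ValueError("stale state: rollback detected")
    return data["state"]

def run_step(sealed: bytes, counter: MonotonicCounter) -> bytes:
    # One invocation of the secure code: unseal, work, bump counter, reseal.
    state = unseal(sealed, counter)
    state["chain_head"] = "new-tip"  # e.g. record the chain head just seen
    return seal(state, counter.increment())
```

Replaying an old sealed blob fails because the counter value baked into it no longer matches the hardware counter, which is exactly the property needed to stop the throttling state being rolled back.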
legendary
Activity: 1652
Merit: 2301
Chief Scientist
March 06, 2012, 08:34:16 AM
#11
On edit: as indicated above, DUH, the blockchain.  Close enough for this to work and nearly impossible to spoof, at least economically.

I was thinking of a kind of replay attack:

+ Control the TPM's view of the world (e.g. make it seem like it is Jan 1, 2010)
+ Get the TPM to sign a small transaction, shut it down.
+ Increment time, get it to sign another transaction
+ Repeat.

Replace "time" with "blockchain" and you've got the same problem: can the TPM know that its view of the external world is correct?  If it sends a nonce (to prevent replay attacks) to some external service that adds a timestamp and signs it with a public key known to the TPM code... then we're back to using two different servers.
legendary
Activity: 1526
Merit: 1134
March 06, 2012, 08:33:58 AM
#10
Agree that you could just use the block chain itself if you wanted to throttle payouts. More complex conditions would have to rely on a challenge-able trusted clock, ie, one that returns signed timestamps including your nonce.

I think a more interesting exploration is how the monitor can be linked to your database such that it knows the DB has not been tampered with, without maintaining the entire DB itself. Is it possible to do for general data models? If you simply replicate the entire database, the monitor has no guarantee it's seeing the database as the primary server sees it. That means the monitor has to examine not only the snapshot state of the DB but every single mutation. It's not easy. Probably some research on this topic can be found in the literature.

Alternatively, having the untrusted side provide a regular key/value store and have the secure side simply return encrypted/signed key/value pairs to update could also work. So then you don't have the full overhead of a database in the secure space (disk management, cleaning, compacting, etc) but you can still do arbitrary data storage there.
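A sketch of the signed key/value idea, assuming the secure side can remember a small amount of per-key version state (or a root hash over it), with the MAC key standing in for sealed secrets and all names hypothetical:

```python
import hashlib
import hmac

# The untrusted side stores opaque blobs; the secure side MACs each
# (key, version, value) triple so forged, swapped, or stale entries are
# detected on read. Binding the key name into the MAC stops the host
# moving a valid value from one key to another.

KV_KEY = b"hypothetical-secure-side-secret"

def wrap(key: str, value: bytes, version: int) -> bytes:
    payload = key.encode() + b"\x00" + version.to_bytes(8, "big") + value
    tag = hmac.new(KV_KEY, payload, hashlib.sha256).digest()
    return tag + version.to_bytes(8, "big") + value

def unwrap(key: str, blob: bytes, expected_version: int) -> bytes:
    tag, version_bytes, value = blob[:32], blob[32:40], blob[40:]
    payload = key.encode() + b"\x00" + version_bytes + value
    if not hmac.compare_digest(tag, hmac.new(KV_KEY, payload, hashlib.sha256).digest()):
        raise ValueError("forged or swapped entry")
    if int.from_bytes(version_bytes, "big") != expected_version:
        raise ValueError("stale entry")
    return value
```

The versions are what tie this back to the rollback problem: without them, the untrusted store could serve back an old but validly-MAC'd value.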

Other problems that need to be considered:

  • Backups. You need to be able to export the secure data, but that must not become a weak point for attack. The secure code could perhaps only export data encrypted under a public key that is hard-coded into it. You'd have to keep the private part safe somewhere unhackable (like a piece of paper). The problem then becomes how you restore the backup... you'd need a different private key in the software and keep the public part safe somewhere unhackable.
  • Upgrades. Sealing data to the software means a compromised host OS can't access the wallet by changing the code that runs, but it also means YOU cannot change the code that runs, because then you'd lose access to the data. You could handle this by seeing it as a specific case of backup/restore, but then you are frequently loading the backup keys onto some internet-connected device. A possible alternative is to sign new versions of the code offline - take the new binary on a USB stick to an offline device and sign it with a key intended only for that purpose - and then the previous version could export its data sealed under a key available only when the CPU is running the new code (which is known to be valid because it's signed). That way the sensitive data is never readable, even by the developer; it just moves from signed version to signed version. You can use the TPM monotonic counters to prevent downgrade attacks in the case of a secure monitor that had a vulnerability.