Author

Topic: Time to switch to i2P? (Read 4281 times)

full member
Activity: 174
Merit: 101
June 01, 2016, 07:26:56 PM
#46
Please support this I2P proposal on Stack Exchange:

https://area51.stackexchange.com/proposals/99297/i2p

Although the proposal was posted by a Monero developer, it is meant to benefit the entire I2P community, not Monero specifically (Monero already has its own Stack Exchange proposal: https://area51.stackexchange.com/proposals/98617/monero)

If you are interested in the C++ I2P router project happening now, you can track its progress here:
https://github.com/monero-project/kovri
hero member
Activity: 532
Merit: 500
November 11, 2014, 08:11:27 PM
#45
If you have a node that is receiving the data then you know what is fake (because you disregard it) and what is real (because you continue to relay it)
Good thing we were talking about protecting against traffic analysis by a passive third party attacker instead of a peer.
It is still a valid point and something that needs to be overcome in order to prevent your identity from being discovered by an attacker. If you only solve for one problem while ignoring others you are doing nothing to protect yourself.
https://wiki.freenetproject.org/Darknet

https://wiki.freenetproject.org/Papers#Small-world_networks
Anything that reaches any kind of scale would potentially be vulnerable to an active attacker, as "friend-of-friend" connections are visible, so an attacker would need to become "friends" with popular nodes.
legendary
Activity: 1400
Merit: 1013
November 11, 2014, 02:55:43 PM
#44
If you have a node that is receiving the data then you know what is fake (because you disregard it) and what is real (because you continue to relay it)
Good thing we were talking about protecting against traffic analysis by a passive third party attacker instead of a peer.
It is still a valid point and something that needs to be overcome in order to prevent your identity from being discovered by an attacker. If you only solve for one problem while ignoring others you are doing nothing to protect yourself.
https://wiki.freenetproject.org/Darknet

https://wiki.freenetproject.org/Papers#Small-world_networks
hero member
Activity: 532
Merit: 500
November 11, 2014, 02:21:59 PM
#43
If you have a node that is receiving the data then you know what is fake (because you disregard it) and what is real (because you continue to relay it)
Good thing we were talking about protecting against traffic analysis by a passive third party attacker instead of a peer.
It is still a valid point and something that needs to be overcome in order to prevent your identity from being discovered by an attacker. If you only solve for one problem while ignoring others you are doing nothing to protect yourself.
legendary
Activity: 1400
Merit: 1013
November 11, 2014, 06:40:50 AM
#42
If you have a node that is receiving the data then you know what is fake (because you disregard it) and what is real (because you continue to relay it)
Good thing we were talking about protecting against traffic analysis by a passive third party attacker instead of a peer.
legendary
Activity: 1176
Merit: 1020
November 11, 2014, 02:04:02 AM
#41
Fake data is just some real data padded and then encrypted. There is no way to distinguish the two apart without the decryption keys. The fake data is simply generated by the sender, and then discarded on the receiving end.

This reminds me of mp3's. Music used to be encoded as constant bit rate (CBR), now there are more variable bit rate (VBR) files. We are simply going backwards with the technology.
If you have a node that is receiving the data then you know what is fake (because you disregard it) and what is real (because you continue to relay it)

Excellent point.  Unfortunately.
full member
Activity: 197
Merit: 100
November 11, 2014, 01:23:31 AM
#40
Fake data is just some real data padded and then encrypted. There is no way to distinguish the two apart without the decryption keys. The fake data is simply generated by the sender, and then discarded on the receiving end.

This reminds me of mp3's. Music used to be encoded as constant bit rate (CBR), now there are more variable bit rate (VBR) files. We are simply going backwards with the technology.
If you have a node that is receiving the data then you know what is fake (because you disregard it) and what is real (because you continue to relay it)
legendary
Activity: 1176
Merit: 1020
November 11, 2014, 12:24:45 AM
#39
This reminds me of mp3's. Music used to be encoded as constant bit rate (CBR), now there are more variable bit rate (VBR) files. We are simply going backwards with the technology.

Exactly.  I hesitate to make a statement with two absolutes, but I'll go out on a limb: the concept is literally the opposite of compression.  A constant stream would even be resistant to the active attack Theymos described.  In the context of TOR, imagine all nodes doing their best to keep a constant 1Mbps of data flowing between peers in both directions.  Then imagine big brother decides to modulate your home internet connection in some way in hopes of tracing and/or confirming an effect somewhere else.  The constant flow of data would cause the modulation to be 100% damped upon reaching the first node, resulting in a downstream signal to noise ratio of zero.
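The constant-stream idea above can be sketched in a few lines. This is a toy illustration of the technique, not any real Tor/I2P mechanism: the cell size, framing, and `tick` function are all invented for the example.

```python
import os
import queue
import struct

CELL_SIZE = 512   # fixed on-the-wire cell size (illustrative value)
HEADER = 2        # big-endian length prefix

def make_cell(payload: bytes) -> bytes:
    """Pack a payload into a fixed-size cell: length prefix + data + random pad."""
    assert len(payload) <= CELL_SIZE - HEADER
    pad = os.urandom(CELL_SIZE - HEADER - len(payload))
    return struct.pack(">H", len(payload)) + payload + pad

def parse_cell(cell: bytes) -> bytes:
    (n,) = struct.unpack(">H", cell[:HEADER])
    return cell[HEADER:HEADER + n]

def tick(send_queue: queue.Queue) -> bytes:
    """Called once per timer tick: always emits exactly one cell, whether or
    not there is real data, so the observed send rate never varies."""
    try:
        payload = send_queue.get_nowait()
    except queue.Empty:
        payload = b""   # idle tick: the cell is pure cover traffic
    return make_cell(payload)
```

Once each cell is encrypted, an observer modulating the link sees one identical-size cell per tick regardless of demand; the padding absorbs the induced signal, as the post argues.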
legendary
Activity: 3416
Merit: 1912
The Concierge of Crypto
November 10, 2014, 10:36:08 PM
#38
Fake data is just some real data padded and then encrypted. There is no way to distinguish the two apart without the decryption keys. The fake data is simply generated by the sender, and then discarded on the receiving end.

This reminds me of mp3's. Music used to be encoded as constant bit rate (CBR), now there are more variable bit rate (VBR) files. We are simply going backwards with the technology.
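A minimal sketch of the "fake data is just padded, encrypted data" point above. The SHA-256 counter keystream here is a deliberately toy cipher for illustration only, and the flag/length framing is invented for the example.

```python
import hashlib
import os

REAL, JUNK = b"\x01", b"\x00"
CELL = 64   # fixed cell size (illustrative)

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy SHA-256 counter-mode keystream -- illustration only, NOT a real cipher."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def seal(key: bytes, nonce: bytes, payload: bytes, real: bool) -> bytes:
    """Encrypt a fixed-size cell: flag + length + payload + random padding.
    Real and junk cells yield ciphertexts of identical size and appearance."""
    flag = REAL if real else JUNK
    cell = flag + bytes([len(payload)]) + payload + os.urandom(CELL - 2 - len(payload))
    return xor(cell, keystream(key, nonce, CELL))

def open_cell(key: bytes, nonce: bytes, ct: bytes):
    """Decrypt; the receiver silently discards junk cells."""
    cell = xor(ct, keystream(key, nonce, len(ct)))
    if cell[:1] == JUNK:
        return None
    return cell[2:2 + cell[1]]
```

Without the key, both ciphertexts are just 64 uniform-looking bytes; only a key holder can tell which cell gets discarded, which is exactly the concern raised in the next few posts about the receiving relay.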
full member
Activity: 197
Merit: 100
November 10, 2014, 09:44:51 PM
#37
In order to use I2P you have to relay traffic as well, making correlation and timing attacks harder.
I2P also doesn't use a static 3-hop one-way path for its traffic that changes every 10 minutes; inbound and outbound paths in I2P are separate and randomized in hop length within a set minimum and maximum. Besides this, it also uses multiple paths to the destination, not just one.


If you control enough nodes then you can still potentially execute a timing attack, although the randomized number of hops and the additional relay traffic will make such attacks much more difficult, though still not impossible.
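The randomized tunnel construction described above can be sketched roughly as follows. This is a toy illustration: the hop bounds, peer names, and tunnel counts are invented for the example, not I2P's actual defaults.

```python
import random

MIN_HOPS, MAX_HOPS = 2, 4   # illustrative bounds; real routers make these configurable

def build_tunnel(peers, rng):
    """One ordered path of distinct peers with a randomized hop count."""
    return rng.sample(peers, rng.randint(MIN_HOPS, MAX_HOPS))

def build_tunnels(peers, rng, n_out=2, n_in=2):
    """Separate, independently randomized inbound and outbound tunnels,
    several of each, so no single fixed path carries all the traffic."""
    return {
        "outbound": [build_tunnel(peers, rng) for _ in range(n_out)],
        "inbound": [build_tunnel(peers, rng) for _ in range(n_in)],
    }
```

An attacker correlating traffic now has to contend with several concurrent paths of varying, unknown length in each direction, rather than one fixed-length circuit.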
hero member
Activity: 644
Merit: 500
P2P The Planet!
November 10, 2014, 10:30:37 AM
#36
maidsafe  Wink
member
Activity: 110
Merit: 10
November 09, 2014, 10:43:29 PM
#35
While this kind of countermeasure could potentially work, I would think that an adversary would eventually be able to figure out what is fake data and what is "real" traffic, and be able to strip the fake data out of their analysis. IMO the only real way to prevent timing attacks is to have a large amount of "real" traffic flowing to your site on a regular basis.
If the cryptography is properly implemented, "real data" should appear identical to "junk data".  The trick would be merging and alternating between them in a seamless way.
I think that if the real data is not merged with the junk data in a perfect way, then an attacker can differentiate between the two (they may not be able to tell 100% for sure which data is "real" and which is "junk"; however, they could potentially see two separate levels of data/bandwidth).
legendary
Activity: 1176
Merit: 1020
November 09, 2014, 07:52:42 PM
#34
While this kind of countermeasure could potentially work, I would think that an adversary would eventually be able to figure out what is fake data and what is "real" traffic, and be able to strip the fake data out of their analysis. IMO the only real way to prevent timing attacks is to have a large amount of "real" traffic flowing to your site on a regular basis.
If the cryptography is properly implemented, "real data" should appear identical to "junk data".  The trick would be merging and alternating between them in a seamless way.
full member
Activity: 155
Merit: 100
November 09, 2014, 05:03:43 PM
#33
The main problem with I2P and Tor is that they only try to protect you against mostly-passive attackers who have absolutely no idea of where you might actually be on the Internet. The Tor threat model says (and this is also true of I2P):

Quote
By observing both ends, passive attackers can confirm a suspicion that Alice is talking to Bob if the timing and volume patterns of the traffic on the connection are distinct enough; active attackers can induce timing signatures on the traffic to force distinct patterns. Rather than focusing on these traffic confirmation attacks, we aim to prevent traffic analysis attacks, where the adversary uses traffic patterns to learn which points in the network he should attack.

But attackers looking for the real IP of a target hidden service can significantly narrow the set of possible targets by enumerating all active Tor/I2P users (using widespread traffic analysis or by having a lot of nodes on the network), and then they can further narrow it by doing intersection attacks. Once they've narrowed it down to a few hundred possibilities, they can try timing attacks against each one to get solid proof that they're the target.

(I wonder if the hidden services that were not taken down in the recent bust have anything in common. Are they in a particular country that's unfriendly to NSA demands? Do they use a fixed set of trusted entry guards? Probably we won't find out, unfortunately.)

I just don't think that low-latency client<->server networks can be secure. What we need are distributed data stores like Freenet so that the originator/owner of content doesn't need to always be online and moreover has plausible deniability even if they are under active surveillance. However, I really doubt that any existing anonymous data store could actually stand up to targeted traffic analysis of the content originator. Freenet seems to be put together in an especially haphazard way, without much theoretical basis for its claimed anonymity.

I like a lot of what I've read about GNUnet. I think that a good path forward for anonymous networks would be:
- Make the GNUnet software user-friendly.
- Create message board and Web functionality (like FProxy) on top of GNUnet.
- Make GNUnet work over I2P.
- Increase the popularity of GNUnet+I2P so that attackers can't just do traffic analysis of every single user.
There's a solution to traffic pattern attacks - it's just really expensive.

The way you solve traffic pattern analysis is to make your protocol consume a constant amount of bandwidth all the time, regardless of whether anything is actually going on or not.

I've been waiting to see 'constant bandwidth' solutions for quite some time.  It would help the anonymity networks.  The technique could also be applied to something like VoIP.  It would consume lots of bandwidth, but not necessarily an unreasonable or unworkable amount.  Further, 24/7 availability could be given up in favor of some window of time, maybe one hour per day, during which the constant bandwidth would be applied.  It would be up to the user to take advantage of that time window.  Command-and-control instructions, and text, take up very little bandwidth, so at least those kinds of activities should only need a small amount of fake data to effectively pad the timing.
While this kind of countermeasure could potentially work, I would think that an adversary would eventually be able to figure out what is fake data and what is "real" traffic, and be able to strip the fake data out of their analysis. IMO the only real way to prevent timing attacks is to have a large amount of "real" traffic flowing to your site on a regular basis.
legendary
Activity: 1176
Merit: 1020
November 09, 2014, 03:28:18 PM
#32
The main problem with I2P and Tor is that they only try to protect you against mostly-passive attackers who have absolutely no idea of where you might actually be on the Internet. The Tor threat model says (and this is also true of I2P):

Quote
By observing both ends, passive attackers can confirm a suspicion that Alice is talking to Bob if the timing and volume patterns of the traffic on the connection are distinct enough; active attackers can induce timing signatures on the traffic to force distinct patterns. Rather than focusing on these traffic confirmation attacks, we aim to prevent traffic analysis attacks, where the adversary uses traffic patterns to learn which points in the network he should attack.

But attackers looking for the real IP of a target hidden service can significantly narrow the set of possible targets by enumerating all active Tor/I2P users (using widespread traffic analysis or by having a lot of nodes on the network), and then they can further narrow it by doing intersection attacks. Once they've narrowed it down to a few hundred possibilities, they can try timing attacks against each one to get solid proof that they're the target.

(I wonder if the hidden services that were not taken down in the recent bust have anything in common. Are they in a particular country that's unfriendly to NSA demands? Do they use a fixed set of trusted entry guards? Probably we won't find out, unfortunately.)

I just don't think that low-latency client<->server networks can be secure. What we need are distributed data stores like Freenet so that the originator/owner of content doesn't need to always be online and moreover has plausible deniability even if they are under active surveillance. However, I really doubt that any existing anonymous data store could actually stand up to targeted traffic analysis of the content originator. Freenet seems to be put together in an especially haphazard way, without much theoretical basis for its claimed anonymity.

I like a lot of what I've read about GNUnet. I think that a good path forward for anonymous networks would be:
- Make the GNUnet software user-friendly.
- Create message board and Web functionality (like FProxy) on top of GNUnet.
- Make GNUnet work over I2P.
- Increase the popularity of GNUnet+I2P so that attackers can't just do traffic analysis of every single user.
There's a solution to traffic pattern attacks - it's just really expensive.

The way you solve traffic pattern analysis is to make your protocol consume a constant amount of bandwidth all the time, regardless of whether anything is actually going on or not.

I've been waiting to see 'constant bandwidth' solutions for quite some time.  It would help the anonymity networks.  The technique could also be applied to something like VoIP.  It would consume lots of bandwidth, but not necessarily an unreasonable or unworkable amount.  Further, 24/7 availability could be given up in favor of some window of time, maybe one hour per day, during which the constant bandwidth would be applied.  It would be up to the user to take advantage of that time window.  Command-and-control instructions, and text, take up very little bandwidth, so at least those kinds of activities should only need a small amount of fake data to effectively pad the timing.
full member
Activity: 155
Merit: 100
November 09, 2014, 02:43:09 PM
#31
In order to use I2P you have to relay traffic as well, making correlation and timing attacks harder.
I2P also doesn't use a static 3-hop one-way path for its traffic that changes every 10 minutes; inbound and outbound paths in I2P are separate and randomized in hop length within a set minimum and maximum. Besides this, it also uses multiple paths to the destination, not just one.
I think this makes using I2P riskier for the "average" user who is not intending to use it for illegal purposes. The "average" user would risk having traffic that is doing illegal things exit from their node, without the cover that operators of recognized Tor exit nodes have.
sr. member
Activity: 307
Merit: 250
et rich or die tryi
November 09, 2014, 11:26:48 AM
#30
I think it is time. I2P is objectively better, and the adage is that the slower the system, the more secure.
legendary
Activity: 1937
Merit: 1001
November 09, 2014, 09:52:28 AM
#29
In order to use I2P you have to relay traffic as well, making correlation and timing attacks harder.
I2P also doesn't use a static 3-hop one-way path for its traffic that changes every 10 minutes; inbound and outbound paths in I2P are separate and randomized in hop length within a set minimum and maximum. Besides this, it also uses multiple paths to the destination, not just one.

legendary
Activity: 3416
Merit: 1912
The Concierge of Crypto
November 09, 2014, 09:10:41 AM
#28
Someone make a simple "How-To" if you ever want to do something similar. That's not illegal is it? (I mean, information that is not an illegal number.) Like, don't host in Bulgaria or something.
hero member
Activity: 868
Merit: 1000
November 09, 2014, 06:23:10 AM
#27
From what I was reading earlier today, multiple sites which were taken down were using the same Bulgarian host.

Link, please?

Link to reddit thread in which translation of article is posted.

https://www.reddit.com/r/DarkNetMarkets/comments/2lm01y/129_onions_seized_on_bulgarian_hosting_company/

Not all of them were drug sites, but many of them were offering illegal services.

This is also...interesting.

https://blog.torservers.net/20141109/three-servers-offline-likely-seized.html

Y'all might find this interesting, too.  Doxbin got taken down and this is from the person who operated it.

https://lists.torproject.org/pipermail/tor-dev/2014-November/007731.html
full member
Activity: 139
Merit: 100
November 09, 2014, 06:09:24 AM
#26
From what I was reading earlier today, multiple sites which were taken down were using the same Bulgarian host.

Link, please?
full member
Activity: 206
Merit: 100
November 09, 2014, 03:46:18 AM
#25
Well both attacks by the government on both SRs were user/admin error (although SR2 made much worse errors than SR1).

Ahhh.. Well, there you go.

SR3, don't host it in the United States. Morons. hehehe.

As for physical security, ... there are lots of methods, and although expensive you can host it yourself. Have any of the SR1 and SR2 operators seen "See More Buds" ? That talks a bit about physical security of grow houses.

I've never used SR1 or SR2 or any of the others that died, so I don't know if the user interface would be affected if I had done things differently or took care of things on my end, or set up shop in some remote mountain with walls like UBL (but UBL did not have internet, bummer.)

From what I was reading earlier today, multiple sites which were taken down were using the same Bulgarian host.  If you're using the same host as another illicit service, there's always the chance that you'll get caught in a dragnet intended for someone else.  Operator stupidity is also rampant.  The operator of C9 actually posted on reddit that one of her servers had been seized and that she was looking for a new host - not smart.
I believe that reddit accepts connections from Tor exit nodes, so it is entirely possible that the operator of C9 was connecting via Tor (I am not sure what C9 even is).

Also, do you have a link showing that many of the sites were using the same hosting provider? That would explain how law enforcement was able to take down so many sites.
hero member
Activity: 868
Merit: 1000
November 09, 2014, 03:16:48 AM
#24
Well both attacks by the government on both SRs were user/admin error (although SR2 made much worse errors than SR1).

Ahhh.. Well, there you go.

SR3, don't host it in the United States. Morons. hehehe.

As for physical security, ... there are lots of methods, and although expensive you can host it yourself. Have any of the SR1 and SR2 operators seen "See More Buds" ? That talks a bit about physical security of grow houses.

I've never used SR1 or SR2 or any of the others that died, so I don't know if the user interface would be affected if I had done things differently or took care of things on my end, or set up shop in some remote mountain with walls like UBL (but UBL did not have internet, bummer.)

From what I was reading earlier today, multiple sites which were taken down were using the same Bulgarian host.  If you're using the same host as another illicit service, there's always the chance that you'll get caught in a dragnet intended for someone else.  Operator stupidity is also rampant.  The operator of C9 actually posted on reddit that one of her servers had been seized and that she was looking for a new host - not smart.
hero member
Activity: 532
Merit: 500
November 09, 2014, 03:14:00 AM
#23
Would hidden service hosts not be relatively obvious, if only for the amount of data they upload? Users of the dark and normal web through Tor would be downloading more than they ever upload. Does Tor use distributed storage? I2P, I expect, is even more obvious: just disrupt the connections and see which site goes offline.  Huh
The entry guards and the "middle nodes" would also be uploading a large amount of Tor-related traffic.

It would probably be advantageous for a hidden service to also act as a middle node in order to hide its identity.
sr. member
Activity: 531
Merit: 260
Vires in Numeris
November 08, 2014, 08:43:08 PM
#22
Would hidden service hosts not be relatively obvious, if only for the amount of data they upload? Users of the dark and normal web through Tor would be downloading more than they ever upload. Does Tor use distributed storage? I2P, I expect, is even more obvious: just disrupt the connections and see which site goes offline.  Huh
legendary
Activity: 1540
Merit: 1000
November 08, 2014, 08:41:52 PM
#21
What we need are things like mesh networks, with enough speed to make them practical to use. The technology seems pretty far off right now, but I would love to play games and use a totally decentralised internet without the need for an ISP.
hero member
Activity: 658
Merit: 501
November 08, 2014, 08:29:56 PM
#20
I tried Tor once and it was pretty boring: so slow, and it was a pain in the ass to search for things, so I gave up.
Is I2P faster, and does it have a search page, or do you have to hop around sites?

Tor is better at security just from the fact that there is more oversight, more development, and an order of magnitude more nodes. I2P has properties that make it better for torrenting files.

Here is some more info :
https://gnunet.org/sites/default/files/herrmann2011mt.pdf

If an attacker controls enough of the network, they can effectively deanonymize a Tor user using the entry and exit nodes.
The solution is simply to grow the number of relay nodes, but especially exit nodes, as trusted ones are in short supply.
legendary
Activity: 3416
Merit: 1912
The Concierge of Crypto
November 08, 2014, 08:15:01 PM
#19
Well both attacks by the government on both SRs were user/admin error (although SR2 made much worse errors than SR1).

Ahhh.. Well, there you go.

SR3, don't host it in the United States. Morons. hehehe.

As for physical security, ... there are lots of methods, and although expensive you can host it yourself. Have any of the SR1 and SR2 operators seen "See More Buds" ? That talks a bit about physical security of grow houses.

I've never used SR1 or SR2 or any of the others that died, so I don't know if the user interface would be affected if I had done things differently or took care of things on my end, or set up shop in some remote mountain with walls like UBL (but UBL did not have internet, bummer.)
legendary
Activity: 1372
Merit: 1252
November 08, 2014, 08:05:46 PM
#18
I tried Tor once and it was pretty boring: so slow, and it was a pain in the ass to search for things, so I gave up.
Is I2P faster, and does it have a search page, or do you have to hop around sites?
hero member
Activity: 658
Merit: 501
November 08, 2014, 06:33:18 PM
#17
The great thing about Tor is that it has a lot of support and infrastructure. This is also what makes it so dangerous, as a certain percentage of exit nodes, bridges, and relays are controlled and owned by the NSA/GCHQ. What we need to do is increase the number of high-speed nodes, and especially exit relays.

With security it is impossible to become 100% secure but you can certainly make it impractical and costly to attack.

Shutting down SR1 and SR2 was probably a very costly exercise, and individually investigating dealers on a decentralized platform where all escrow funds were held in a multisig account that couldn't be seized would be an exercise in futility. The "war on drugs" is mostly funded by asset forfeiture and the theft of both the dealers' and clients' money. What happens when those funds are held in a way that they cannot be taken?
full member
Activity: 197
Merit: 100
November 08, 2014, 06:22:12 PM
#16
But attackers looking for the real IP of a target hidden service can significantly narrow the set of possible targets by enumerating all active Tor/I2P users (using widespread traffic analysis or by having a lot of nodes on the network), and then they can further narrow it by doing intersection attacks. Once they've narrowed it down to a few hundred possibilities, they can try timing attacks against each one to get solid proof that they're the target.
I would say a solution to this would be to have a lot more Tor/onion sites that are legitimate and receive a lot of traffic. This would make a timing attack much more difficult, as there would be more traffic to analyze, which makes each data point less significant.
(I wonder if the hidden services that were not taken down in the recent bust have anything in common. Are they in a particular country that's unfriendly to NSA demands? Do they use a fixed set of trusted entry guards? Probably we won't find out, unfortunately.)
I am not 100% sure if this is technologically possible, but maybe they were set up so that only "x" percent of traffic goes to a specific server, with each request being routed to a server at random. Another possibility is that whoever runs the sites that were not taken down was much better at fighting DDoS/timing attacks, by shutting down/going offline whenever there was an increase in traffic above "x" percent.
I just don't think that low-latency client<->server networks can be secure. What we need are distributed data stores like Freenet so that the originator/owner of content doesn't need to always be online and moreover has plausible deniability even if they are under active surveillance. However, I really doubt that any existing anonymous data store could actually stand up to targeted traffic analysis of the content originator. Freenet seems to be put together in an especially haphazard way, without much theoretical basis for its claimed anonymity.
This sounds a lot like Storj to me

legendary
Activity: 1400
Merit: 1013
November 08, 2014, 06:20:53 PM
#15
The main problem with I2P and Tor is that they only try to protect you against mostly-passive attackers who have absolutely no idea of where you might actually be on the Internet. The Tor threat model says (and this is also true of I2P):

Quote
By observing both ends, passive attackers can confirm a suspicion that Alice is talking to Bob if the timing and volume patterns of the traffic on the connection are distinct enough; active attackers can induce timing signatures on the traffic to force distinct patterns. Rather than focusing on these traffic confirmation attacks, we aim to prevent traffic analysis attacks, where the adversary uses traffic patterns to learn which points in the network he should attack.

But attackers looking for the real IP of a target hidden service can significantly narrow the set of possible targets by enumerating all active Tor/I2P users (using widespread traffic analysis or by having a lot of nodes on the network), and then they can further narrow it by doing intersection attacks. Once they've narrowed it down to a few hundred possibilities, they can try timing attacks against each one to get solid proof that they're the target.

(I wonder if the hidden services that were not taken down in the recent bust have anything in common. Are they in a particular country that's unfriendly to NSA demands? Do they use a fixed set of trusted entry guards? Probably we won't find out, unfortunately.)

I just don't think that low-latency client<->server networks can be secure. What we need are distributed data stores like Freenet so that the originator/owner of content doesn't need to always be online and moreover has plausible deniability even if they are under active surveillance. However, I really doubt that any existing anonymous data store could actually stand up to targeted traffic analysis of the content originator. Freenet seems to be put together in an especially haphazard way, without much theoretical basis for its claimed anonymity.

I like a lot of what I've read about GNUnet. I think that a good path forward for anonymous networks would be:
- Make the GNUnet software user-friendly.
- Create message board and Web functionality (like FProxy) on top of GNUnet.
- Make GNUnet work over I2P.
- Increase the popularity of GNUnet+I2P so that attackers can't just do traffic analysis of every single user.
There's an solution to traffic pattern attacks - it's just really expensive.

They way you solve traffic pattern analysis is to make your protocol consume a constant amount of bandwidth all the time, regardless of whether anything is actually going on or not.
administrator
Activity: 5222
Merit: 13032
November 08, 2014, 06:07:28 PM
#14
The main problem with I2P and Tor is that they only try to protect you against mostly-passive attackers who have absolutely no idea of where you might actually be on the Internet. The Tor threat model says (and this is also true of I2P):

Quote
By observing both ends, passive attackers can confirm a suspicion that Alice is talking to Bob if the timing and volume patterns of the traffic on the connection are distinct enough; active attackers can induce timing signatures on the traffic to force distinct patterns. Rather than focusing on these traffic confirmation attacks, we aim to prevent traffic analysis attacks, where the adversary uses traffic patterns to learn which points in the network he should attack.

But attackers looking for the real IP of a target hidden service can significantly narrow the set of possible targets by enumerating all active Tor/I2P users (using widespread traffic analysis or by having a lot of nodes on the network), and then they can further narrow it by doing intersection attacks. Once they've narrowed it down to a few hundred possibilities, they can try timing attacks against each one to get solid proof that they're the target.

(I wonder if the hidden services that were not taken down in the recent bust have anything in common. Are they in a particular country that's unfriendly to NSA demands? Do they use a fixed set of trusted entry guards? Probably we won't find out, unfortunately.)

I just don't think that low-latency client<->server networks can be secure. What we need are distributed data stores like Freenet so that the originator/owner of content doesn't need to always be online and moreover has plausible deniability even if they are under active surveillance. However, I really doubt that any existing anonymous data store could actually stand up to targeted traffic analysis of the content originator. Freenet seems to be put together in an especially haphazard way, without much theoretical basis for its claimed anonymity.

I like a lot of what I've read about GNUnet. I think that a good path forward for anonymous networks would be:
- Make the GNUnet software user-friendly.
- Create message board and Web functionality (like FProxy) on top of GNUnet.
- Make GNUnet work over I2P.
- Increase the popularity of GNUnet+I2P so that attackers can't just do traffic analysis of every single user.
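The enumeration-then-intersection step described above can be illustrated with a toy sketch: each observation is the set of users seen online at a moment when the hidden service was responsive, and intersecting those sets shrinks the candidate pool. The function and data here are hypothetical, invented purely for illustration.

```python
def intersection_attack(observations):
    """Intersect per-moment 'online user' sets to narrow the candidate
    operators of a hidden service that was active at each moment."""
    candidates = None
    for online in observations:
        candidates = set(online) if candidates is None else candidates & set(online)
    return candidates if candidates is not None else set()
```

Once the surviving set is down to a few hundred candidates, the attacker can move on to the per-candidate timing attacks the post describes.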
full member
Activity: 173
Merit: 100
November 08, 2014, 02:00:50 PM
#13
Did those onion sites get attacked technologically? Or did they get attacked because of user/admin error?

What could they have done different that would have prevented discovery?
Well, both government attacks on the two SRs came down to user/admin error (although SR2 made much worse errors than SR1).

I think the main issue is partly technological: there are very few large onion sites, so a DDoS attack against one makes it easy for anyone able to monitor overall tor traffic to see where a large volume of tor traffic is flowing while the site is being DDoS'ed.
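The DDoS idea above can be sketched as follows: flood the onion site on a known schedule, then look for the relay whose observed bandwidth tracks that schedule. The relay names and byte counts are invented; this is just the shape of the analysis, not a working attack.

```python
# Sketch of DDoS-based traffic confirmation: an observer who can see
# per-relay bandwidth compares it against their own flood schedule.
# All relay names and traffic figures are hypothetical.

ddos_schedule = [0, 1, 1, 0, 1, 0, 1, 1]  # 1 = flooding during that interval

relay_traffic = {
    "relay-A": [500, 520, 480, 510, 505, 495, 515, 500],      # flat: uninvolved
    "relay-B": [400, 9000, 8800, 450, 9100, 420, 8900, 9050], # spikes with floods
    "relay-C": [100, 150, 90, 2000, 110, 1900, 120, 100],     # unrelated bursts
}

def score(traffic):
    # mean traffic while flooding minus mean traffic while idle:
    # large positive values mean the relay carries the flood
    on = [t for t, d in zip(traffic, ddos_schedule) if d]
    off = [t for t, d in zip(traffic, ddos_schedule) if not d]
    return sum(on) / len(on) - sum(off) / len(off)

suspect = max(relay_traffic, key=lambda r: score(relay_traffic[r]))
print(suspect)  # the relay whose load tracks the flood schedule
```

With so few large onion sites, the flood stands out against background traffic, which is the asymmetry the post is pointing at.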
legendary
Activity: 1204
Merit: 1000
฿itcoin: Currency of Resistance!
November 08, 2014, 01:49:55 PM
#12
You guys needs to know more about CJDNS and Hyperboria.    Wink
staff
Activity: 3290
Merit: 4114
November 08, 2014, 01:39:57 PM
#11
I2P just requires more nodes to be up and running, which should make it more secure than Tor, but of course to get more nodes, more people have to take the plunge initially while it's not so secure. It's still going to be vulnerable to the same attacks Tor has undergone over the past few years, although I believe it can become more secure than Tor as more people run nodes.

At the moment Tor is probably more secure due to having more nodes. I2P isn't a main target at the moment because of its lack of users, which arguably makes it more secure simply through the lack of attacks, although there have been quite large attacks on both.
full member
Activity: 183
Merit: 100
November 08, 2014, 01:25:07 PM
#10

I don't believe that any anonymity network in existence today is safe enough to directly run an illegal website on, unfortunately.

You'd definitely need multiple layers of protection, and not just technological. If it were me, at the very least I would also like to have some passive eyes in the physical environment hosting the hidden services. E.g. an employee at the hosting provider on my payroll, preferably someone in a security or compliance role, whose task would be to discreetly inform me if the feds came and started poking around.
The problem with having "eyes" in the physical environment is that it exposes your identity somewhat, as well as the fact that you are hosting something illegal, when the hosting provider might not otherwise have noticed the illegality of what you are hosting. Plus you would need to trust the person you are using as your "eyes".
legendary
Activity: 1400
Merit: 1013
November 08, 2014, 01:16:25 PM
#9
I2P is very similar to Tor technologically. If the Feds are using technical attacks against Tor, then the same attacks will probably also work against I2P. In fact, some attacks are easier against I2P because it has far fewer users and its network isn't carefully managed in the same way that Tor's network is.
What I2P has going for it is a better theoretical basis, and a focus on hidden services rather than proxying to the clearnet.

The only thing Tor has going for it is more users - if I2P had the same number of nodes I'd expect its hidden services to be more secure than Tor's hidden services.
donator
Activity: 1617
Merit: 1012
November 08, 2014, 12:58:25 PM
#8

I don't believe that any anonymity network in existence today is safe enough to directly run an illegal website on, unfortunately.

You'd definitely need multiple layers of protection, and not just technological. If it were me, at the very least I would also like to have some passive eyes in the physical environment hosting the hidden services. E.g. an employee at the hosting provider on my payroll, preferably someone in a security or compliance role, whose task would be to discreetly inform me if the feds came and started poking around.
legendary
Activity: 3416
Merit: 1912
The Concierge of Crypto
November 08, 2014, 12:46:57 PM
#7
Did those onion sites get attacked technologically? Or did they get attacked because of user/admin error?

What could they have done different that would have prevented discovery?
administrator
Activity: 5222
Merit: 13032
November 08, 2014, 12:34:57 PM
#6
I2P is very similar to Tor technologically. If the Feds are using technical attacks against Tor, then the same attacks will probably also work against I2P. In fact, some attacks are easier against I2P because it has far fewer users and its network isn't carefully managed in the same way that Tor's network is.

I don't believe that any anonymity network in existence today is safe enough to directly run an illegal website on, unfortunately.
legendary
Activity: 1937
Merit: 1001
November 08, 2014, 09:44:28 AM
#5
I don't think this would make much of a difference in the government's ability to de-anonymize any site that is trying to keep its identity secret. 

It does, to some extent; the security model of i2p is quite a bit more advanced than that of tor, I would say 'next level'.
The things that make me say tor is inherently insecure were addressed in i2p many years ago already.

There have been a few successful attacks against i2p services in the past, all very well documented, iirc none were to blame on i2p itself. (Please correct me if I'm wrong, it's been a while)

I2p is generally faster in terms of bandwidth, more resilient against hidden service attacks, and has much lower latency. I've run a VoIP server on i2p for a while, which worked great; try that on tor... You'd probably even have to modify your software to make it work with tor, not so with i2p.

One of the biggest 'issues' with i2p is that it's basically a closed darknet, unlike tor, which provides anonymized access to the normal web for most of its users (or at least that would be my guess). So the goals of the two are somewhat different. I did notice the outproxy on i2p was working again about a week ago, but that's not the network's main goal AFAIK.

Best thing about it: roughly 90% fewer trolls than tor.
copper member
Activity: 1498
Merit: 1528
No I dont escrow anymore.
hero member
Activity: 686
Merit: 500
November 08, 2014, 06:01:27 AM
#3
What does i2P refer to, exactly?
member
Activity: 100
Merit: 10
November 08, 2014, 04:48:23 AM
#2
I don't think this would make much of a difference in the government's ability to de-anonymize any site that is trying to keep its identity secret. 
full member
Activity: 182
Merit: 100
November 07, 2014, 06:04:37 PM
#1

http://www.coindesk.com/day-reckoning-dark-markets-hundreds-illicit-domains/

With the US gov't reaching out with its fat, well-funded arms to squash the TOR darknet, maybe it's time that we switch to i2P for a greater amount of anonymity?
Jump to: