
Topic: About Mt. Gox flaw from a security expert (Read 34182 times)

member
Activity: 84
Merit: 10
July 11, 2011, 12:18:36 AM
I'm sure this conversation should be allowed to die an ugly and painful death... but I've been away from the thread for several days and just realised that I (and others) have been accused of dismissing MuadDib's arguments out of hand.  As you might have noticed, I agree that OpenBSD has an excellent reputation and has had one for many years.  A similar argument about reputation was why I brought up VMS to begin with.  But that is not the point.  Real security lies in the administrator, not in the operating system.  It's a worldview issue more than a coding issue.  Out-of-the-box features are great specifically because out-of-the-box admins are, by and large, morons.
full member
Activity: 126
Merit: 100
well gentlemen, that was one hell of a conversation.

thank you all kindly.  as a lowly network designer, i learned a lot.  didn't cost anything, either.

whattaya youse guys think of Qubes, and their 'security by isolation' approach?

i've got no dog in the fight - but i'd really like to know what your opinions are.

My $0.02.  

I only just read over this, and someone correct me if I'm wrong, but this appears to be using Xen to isolate (groups of) applications in their own VMs on a single host.

My short answer:

This is at best a one-trick pony, and it's possibly the wrong approach.

Why (or my long answer):

I'm going to talk about some "classes" of defense here (these are terms I just made up, so feel free to take some shots at them):

i) A defense which foils an attack (or some significant percentage of attacks), forcing attackers to use a completely different approach (ASLR - I'm sure everyone's sick of me mentioning this).
ii) A defense which introduces a measurable and significant increase in the difficulty of exploiting an existing flaw (password complexity rules, firewalls).
iii) A defense that removes one attack vector with known problems and replaces it with another which is less well known (switching from IIS to Apache in an IIS shop).

I submit that i) is intrinsically superior to ii), and both are superior to iii).
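To make the Type II idea concrete, here's a minimal sketch of a password complexity rule, one of the examples above. The policy and thresholds are my own toy assumptions, not anyone's production rules; the point is only that such a check doesn't stop guessing outright, it just measurably raises the cost of a dictionary attack.

```python
import re

def meets_complexity_policy(password: str) -> bool:
    """Return True if the password passes a simple complexity policy.

    Illustrative "Type II" defense: it raises attack difficulty by
    shrinking the set of trivially guessable passwords, nothing more.
    """
    if len(password) < 12:          # minimum length (arbitrary threshold)
        return False
    required_classes = [
        r"[a-z]",          # at least one lowercase letter
        r"[A-Z]",          # at least one uppercase letter
        r"[0-9]",          # at least one digit
        r"[^a-zA-Z0-9]",   # at least one symbol
    ]
    return all(re.search(pattern, password) for pattern in required_classes)
```

A rule like this is measurable (you can compute the size of the keyspace it enforces), which is what distinguishes it from the Type III "swap one unknown for another" move.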

VM isolation is at best a "Type II" defense, as it introduces the problem of detecting and compromising the hypervisor before compromising the machine; at worst it could be considered a Type III defense.  My assumption here is that a successful attack on the hypervisor means complete ownership of the machine.  Ergo, we have reduced the attackers' problem from attacking application X, chosen from a very large selection of applications, to attacking hypervisor X, for which the list is much smaller.  The upside is that, hopefully, this also reduces the attack surface for the defenders.  That would normally be a good thing, but it's only true if you assume the hypervisor is more secure than your other applications.

E.g., if I had a machine that had to run a webserver and Tomcat to provide a very simple web service to a very targeted application, removing that stack and replacing it with a few lines of well-audited code could be considered reducing the attack surface of that machine.  However, the hypervisor isn't a small piece of software, and its attack surface isn't well known.

It might be safer if I couldn't already do half the job: detect that I'm running in a VM.  For the many people who haven't installed the VMware tools on their guest, a simple check of the time against an external source will tell you that you're running on a VM.  Depending on the guest OS, I've read about at least fifty different markers for VM detection.
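For the curious, here's a rough sketch of what a couple of those markers look like on a Linux guest. The DMI sysfs path and the MAC OUI prefixes are well-known conventions (VMware, VirtualBox, and Xen all register their own OUIs), but treat the lists as illustrative, not exhaustive; real detection tools check dozens of markers.

```python
import uuid

# MAC address OUI prefixes registered to common hypervisor vendors.
# (Illustrative subset; vendors own several OUIs each.)
VM_MAC_PREFIXES = {
    "00:05:69", "00:0c:29", "00:50:56",  # VMware
    "08:00:27",                          # VirtualBox
    "00:16:3e",                          # Xen
}

def mac_looks_virtual() -> bool:
    """Check whether this host's MAC OUI belongs to a hypervisor vendor."""
    mac = "%012x" % uuid.getnode()
    oui = ":".join(mac[i:i + 2] for i in (0, 2, 4))
    return oui in VM_MAC_PREFIXES

def dmi_looks_virtual() -> bool:
    """Check the DMI product name, which many hypervisors fill in."""
    try:
        with open("/sys/class/dmi/id/product_name") as f:
            product = f.read().strip().lower()
    except OSError:          # not Linux, or not readable
        return False
    return any(m in product for m in ("vmware", "virtualbox", "kvm", "xen"))
```

Neither check is conclusive on its own (MACs can be overridden, DMI strings spoofed), which is exactly why malware stacks many markers together.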

It's also worth noting that the point of Qubes seems to be the antithesis of what I understand to be current best practice with regard to VMs.  Some of us think that depending on VM isolation is a bad idea: it violates the principle of defense in depth (DiD).  In my shop, for example, we consider it a bad idea to mix VMs with differing security privileges on the same host.  In other words, we don't run the payment-gateway software in a VM on the same machine that runs the Drupal VM.  Yet this seems to be the whole point of Qubes.

This is a good presentation. http://handlers.sans.org/tliston/ThwartingVMDetection_Liston_Skoudis.pdf

More info on the VMChat teaser at the end: http://www.foolmoon.net/cgi-bin/blog/index.cgi?category=Security%20News



thanks for that.

i do like the bare-metal approach, but yes - "My assumption here is that a successful attack on the hypervisor means complete ownership of the machine."

i also like that networking runs in an untrusted security ring.

i've got a play machine ready to install, and i guess i'll have to see how it goes with the newly released beta.  the isolation rule sets appear to be key.  oughtta be fun, anyway...
legendary
Activity: 1050
Merit: 1000
You are WRONG!
well gentlemen, that was one hell of a conversation.

thank you all kindly.  as a lowly network designer, i learned a lot.  didn't cost anything, either.

whattaya youse guys think of Qubes, and their 'security by isolation' approach?

i've got no dog in the fight - but i'd really like to know what your opinions are.
it will work, as long as:
a) the hypervisor is not compromised;
b) the machines do not interact with each other, or share the same passwords.
member
Activity: 84
Merit: 10
Specifically, I went to school to understand theory.  It was pure, uncontaminated by "random" errors in measurement, precise to a degree that only real mathematicians can see.  Along the way, I noticed that it was disconnected from the world, it existed in pure minds as a silver blade that was perfect for fighting ghosts, should you find yourself plagued by ghosts.  Perfect for dismantling the arguments of those dimmer folk that crawl along the walls of the ivory tower, and perfect for claiming intellectual victory over those less informed.  Those with less clarity of mind.

It was also false.  Mental masturbation on a higher level.  A symptom of having the tools of math and engineering in your belt more than a sign of them.  Engineers do not quote theory; they make engines.  So I say again to the community: buy stuff sold for bitcoins.  Make stuff you can sell for bitcoins.  Do not trust either the theorists or the engineers in this game.  If you happen to be one of those, be attentive to the limitations of your skill-set as well as to the advantages.
member
Activity: 84
Merit: 10
I will take that as a compliment.  What masks we choose to wear is sometimes as informative as just going around wearing our own faces.
full member
Activity: 140
Merit: 100
1.  All statistical modelers are statisticians.

2.  Not all statisticians are statistical modelers
Name something from statistics which is not a model of something or a modeling tool.  Grin  If you read your own article you'd see that both the so-called Pure statistician and Applied statistician are modeling something.   The only difference is the kind of validation they are willing to consider.  So the "pure" statistical model is considered valid (in this case) when it conforms to some dogma about colinearity and the "applied" model is valid when (among other things) it succeeds in predicting something.

Anyway you're cute when you just shovel barely applicable google cites and pretend that somehow makes your point.

But in the interests of you actually contributing something....I'll try to keep in mind that when you say "statistical modeler" you mean "Applied statistician".  Not that you show much understanding of what the second term means anyway.

Quote from: iCEMAKER
I'm glad you finally accepted that the tension between your overly strict denotation of "statistical modeler" and the common, widespread informal connotation of "applied statistician"

Your prejudicial language aside, I think you misread my post.  I accept that when you use the term 'statistical modeler' you are referring to some idealized trope identified as "applied statistician" by some person on the internet, for whom the term doesn't really denote the presence or absence of statistical modeling, just some polarized ideas about model validation.

Quote from: iCEMAKER
My approach is superior because it preserves linguistic information, while your misinterpretation destroys the sometimes subtle distinction between a working statistician (ie statistical modeler) and the purely theoretical academic egghead (ie capital-S Statistician).

Not really.  Your approach is simply to assume that you are unquestionably correct about something where 'correct' is difficult to ascertain, without citing any useful corroborating evidence.  Which is just what you did with regard to the opinion of some group about the security of FreeBSD (or perhaps the BSDs in general).  It's not much of an approach, but I can see how it might fool the locals.

Whereas what I did was simply recognize that language is fluid and, your prescriptivism aside, allow your particular definition to stand for the discussion I am having with you.  Back at the office your postings are a subject of much derision by the (few) other mathematicians we employ.  Just sayin'...

Quote from: iCEMAKER
That's why I showed you the differences with the copypasta illustrating and demonstrating their existence at length, and in excruciating detail.

In the words of BB's icon: "You keep using those words.  I do not think it means what you think it means."  I think it's kind of obvious that you didn't understand much of what you read, since the only thing your article strongly contrasts between these two hypothesized opposites is something you didn't mention, and everything else was not directly covered.

"excruciating detail" - I guess, to someone who doesn't understand what they read. - absolutely precious.  If I could keep you like a pet I would.

So now for the second time I accept how you are using the term...any chance you will actually contribute something?  Probably not.
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
Quote
1.  All statistical modelers are statisticians.

2.  Not all statisticians are statistical modelers

Name something from statistics which is not a model of something or a modeling tool.  Grin  If you read your own article you'd see that both the so-called Pure statistician and Applied statistician are modeling something.   The only difference is the kind of validation they are willing to consider.  So the "pure" statistical model is considered valid (in this case) when it conforms to some dogma about colinearity and the "applied" model is valid when (among other things) it succeeds in predicting something.

Anyway you're cute when you just shovel barely applicable google cites and pretend that somehow makes your point.

But in the interests of you actually contributing something....I'll try to keep in mind that when you say "statistical modeler" you mean "Applied statistician".  Not that you show much understanding of what the second term means anyway.

Dang it jgraham, stop being so funny or we might end up as friends or something!   Angry

I'm glad you finally accepted that the tension between your overly strict denotation of "statistical modeler" and the common, widespread informal connotation of "applied statistician"  is best resolved, in popular usage (check the job postings for both terms), by the simple understanding that "statistician" implies more theoretical or academic work while "statistical modeler" implies a more applied or industrial frame of reference.

My approach is superior because it preserves linguistic information, while your misinterpretation destroys the sometimes subtle distinction between a working statistician (ie statistical modeler) and the purely theoretical academic egghead (ie capital-S Statistician).

You seemed to be claiming that 'all statisticians use statistical models, therefore all terms referring to them are interchangeable, no matter their particular function, specialty, talent, or role.'

That's why I showed you the differences with the copypasta illustrating and demonstrating their existence at length, and in excruciating detail.

I knew that would win you over.   Grin

Cheers brah!

/Hella Laim Flaimwar
member
Activity: 84
Merit: 10
(I may be casting my vote for Messr. Graham's arguments, in few words)  Just because it looks good on paper, don't mean it flies.  Doing your homework means trying it yourself, not quoting "authoritative" sources like, oh, Wikipedia.  Don't get me wrong.  I check WP all the time.  And the articles' reference materials.
member
Activity: 84
Merit: 10
I already know that I'm a bit of a troll, being a bohunk, backwoods IT guy and all...but the biggest problem I had with math wasn't the math.  It was the conclusion that because your math was brilliant that it must therefore also be true.  The only real things in this world do not just exist in your mind.  What physics do engineers do?
full member
Activity: 140
Merit: 100
So, again, what are the professionals using and why?  And how?
If by "professionals" we mean people in an industry where security is prioritized, it's not always clear-cut.  The financial institutions I worked for used OS/390 machines simply because they had invested huge amounts of money into them, not because of any pretense of security.

I'd like to see the 3 Stooges who insulted, attacked, and ran you off try to browbeat MtGox into switching to Linux, using similar thug tactics of ganging up and gainsaying everything said to them.

I think you have officially entered the twilight zone now.  It was maud_dib who chastised Mt. Gox for using Linux.  Now, of course, he says they don't.  So if we assume, as you have, that his opinion was evidence-based (that is, that something made him think the OS was at fault; after all, he's a "trained statistical modeler" :-) ), then either those selfsame indicators would apply to FreeBSD, or the opinion was not evidence-based and was simply assumed from his presumption that Linux is insecure by comparison.  An opinion neither you nor maud_dib has provided any useful objective evidence for.

But we don't talk about that... just like you don't talk about maud_dib's having done more than his fair share of insulting.  We just talk about the insults he's received.  Am I clear on where you are coming from?

Anyway your implied question has already been answered oh delusional one.  If Linux is as good as BSD then there is little reason to switch.

Quote
I wonder why they, who know compsec ever-so-much-better than MtGox, Muad-dib, and myself, simply don't start up their own clearinghouse and compete with MtGox.

Well, Mt. Gox has made some noobish mistakes, but they were all, from my understanding, policy and implementation errors.  Unlike maud_dib (initially), I don't have a problem with their choice of OS.  I don't really know anything about starting up a monetary exchange, and my side projects already consume enough of my time.  I really don't see why, in your opinion, everyone who understands computer security needs to start a monetary exchange, but perhaps I'm just not drinking heavily enough.

On the other hand, I've already proposed $500 USD in BTC as the prize in a contest for breaking into a hardened Linux box.  The way you talk, it would be easy money, but considering the way you act, I suspect it isn't.  Grin

Maybe you'll answer that now.

Quote
Adorable?  Thanks.  I guess you have a thing for articulate nerds with a deep understanding of both math and language.
No but I'll let you know when you start showing signs of either of those. :-)

Quote
1.  All statistical modelers are statisticians.

2.  Not all statisticians are statistical modelers

Name something from statistics which is not a model of something or a modeling tool.  Grin  If you read your own article you'd see that both the so-called Pure statistician and Applied statistician are modeling something.   The only difference is the kind of validation they are willing to consider.  So the "pure" statistical model is considered valid (in this case) when it conforms to some dogma about colinearity and the "applied" model is valid when (among other things) it succeeds in predicting something.

Anyway you're cute when you just shovel barely applicable google cites and pretend that somehow makes your point.

But in the interests of you actually contributing something....I'll try to keep in mind that when you say "statistical modeler" you mean "Applied statistician".  Not that you show much understanding of what the second term means anyway.
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
Quote
your use of "statistical modeler" instead of simply "statistician" is adorable!

Adorable?  Thanks.  I guess you have a thing for articulate nerds with a deep understanding of both math and language.

Let's break it down into simple chunks that will be easier for you to digest.

1.  All statistical modelers are statisticians.

2.  Not all statisticians are statistical modelers.


Now just wait a second, and don't get all upset or confused before letting me resolve this seemingly inexplicable paradox of connotation versus denotation.

You see, statistical modeling is part of what's called 'applied statistics.'

To do applied statistics (in the form of modeling) you first need a background in what's called 'theoretical statistics.'


Once you have that, you can stay in the world of theory and be an ivory-tower egghead statistician, or you can enter the real world and help build statistical models of real things (like computer security) by applying your theoretical background, as a statistical modeler.

Some statisticians do leave academia and enter the private sector, but do not build statistical models.  They remain statisticians, and do not become statistical modelers.

Here is some further reading for you, to gratify your demonstrated deep curiosity regarding this mysterious, crucial, and often misunderstood distinction.  Enjoy!

Quote
There are many controversial topics actively discussed among business analysts who follow divergent schools of thought.  The most common schools of thought can be categorized into two groups:  the first group being the theoretical statisticians, and the second being represented by those individuals who embrace “applied” statistics.  Generally, the theoretical statisticians apply what they’ve learned in an academic setting, and follow the “laws” set forth by their institutions.  On the other end of the spectrum, the applied statisticians rely heavily on market testing and key performance indicators (e.g., financial impact) to determine their own set of experientially-based statistical methods and axioms.

Neither school is inherently good or bad.  All seasoned analytic managers have met new analysts who come straight out of school with misconceptions of the value and place for various mathematical procedures and rules.  We’ve all also faced analysts with significant career experience who have carried their academic theoretical statistical knowledge with them as an unchanging edict, despite the limited (or detrimental) applicability of some of these doctrines in the marketplace.  Similarly, we’ve all also encountered business-focused “applied statisticians” whose lack of adherence to theory has resulted in unstable strategic analytic products that look great on paper, but fail in practice.

Of all the points of conflict between theoretical and applied statisticians, one of the most heated relates to the utility of the measurement of colinearity in predictive modeling.  In predictive modeling, colinearity is the amount to which two independent variables correspond to the same dependent variable.  It can also refer to the amount a single independent variable corresponds to a dependent variable.

The theoretical statistician will argue that intensively managing colinearity is of great importance in building predictive models.  A few of the arguments they will cite to support this position include that if colinearity isn’t removed:

    We cannot clearly explain the value of each independent variable in the model’s predictive algorithm
    We are endorsing a final product that may not conform with standard mathematical partiality towards a solution that is parsimonious in nature
    Parameter estimates might be unstable from sample to sample (or from validation to marketplace execution)

The applied statistician will argue that colinearity is not relevant as:

    We are seeking lift, not explanation.  If the new model makes more money in the marketplace, the ability to explain “why” becomes academic
    Parameter estimate stability can be enhanced through various exercises during the model build phase

The reality is that both sides may be correct, at specific application points, and in specific situations.  We just need to moderate academic rigor with real-world findings in order to uncover when to implement a rule, when to bend it, and when to discard it.  To address each of the five points (above):

    Explaining an individual variable’s contribution to a multivariate prediction may or may not have relevance.
        If you are in a market research company, this is a key concern.  You will need to let your clients know not only “what will be,” but “why.”
        If you are in a direct marketing company, explanation may not be relevant.  As an example, if you work for a catalog company, maximum incremental financial lift is far more important than explaining the “percent of predictive value” driven by individual model components.

    Ideally, we want a parsimonious solution as they tend to be more stable.  But, what if you find that your less parsimonious option (having been tested on multiple out-of-time validation samples) is almost identical in stability?  What if, during those same tests you find that it produces a far more robust prediction?  In short:
        Generally, you will want to favor a more parsimonious solution
        But, if you have a model that is relatively less parsimonious, but already proven stable and robust, there may not be any additional value in reworking the solution for the sake of a mathematical preference

    If you are conducting a model building strategy that does not manage colinearity, but is laser-focused on lift, and you find that your parameter estimates are not stable, a likely cause is inadequate sample size in the build data set.  As a result:
        You can increase your sample size substantially (which will typically eliminate this issue)

    For most predictive model applications in industry, lift is the goal.  But you need to be apprised of the perspective of senior management and clients.  Until they are comfortable with your track record, they may require you to explain the nature, source and quantified relevance of each individual variable in your model…and you’ll need to provide this explanation in business terms they can understand

    Managing parameter estimate instability can’t always be achieved:
        The most common way to reduce model instability (caused by collinear variables) is to increase the build and validation sample sizes.  But, for many organizations, there simply isn’t enough data to do this effectively (especially for smaller organizations that are not engaged in direct marketing).
        Another potential parameter estimate instability cure is to examine each variable and appropriately bin them relative to the dependent variable in question.  Keep in mind, though, that the more you bin, the more you will also be reducing variable information value…and this may end up reducing the overall predictive power of the model.

Overall, the positions held by the “pure” theoretical statistician and the “pure” applied statistician both have strengths and weaknesses that can be demonstrated in actual market testing.  To improve effectiveness, each group needs to move beyond a mastery of one philosophy, and become a pragmatist of both.
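For what it's worth, the pairwise check the two camps in the quoted piece are arguing over is trivial to compute. Here's a toy sketch in pure Python with made-up data; real modelers would also look at variance inflation factors across the full design matrix, not just pairs.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two nearly redundant predictors (heights in cm and in inches): a high |r|
# flags them as collinear, which is exactly the situation the "pure" and
# "applied" camps disagree about how to handle.
height_cm = [150, 160, 170, 180, 190]
height_in = [59.1, 63.0, 66.9, 70.9, 74.8]
```

The theoretical camp would drop or combine one of the pair before fitting; the applied camp would keep both if out-of-sample lift holds up. The arithmetic itself is neutral on that question.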





legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
So, again, what are the professionals using and why?  And how?

Mt. Gox Uses FreeBSD.

Of course they do. 

It's been the OS of choice for the security-conscious crowd since long before the first generation of cryptocash.

I'd like to see the 3 Stooges who insulted, attacked, and ran you off try to browbeat MtGox into switching to Linux, using similar thug tactics of ganging up and gainsaying everything said to them.

That would be funny!

I wonder why they, who know compsec ever-so-much-better than MtGox, Muad-dib, and myself, simply don't start up their own clearinghouse and compete with MtGox.

MtTux would be 100% Linux, and therefore immune to the security problems presented by BSD (all TWO of them, LOL).



full member
Activity: 140
Merit: 100
You, BB, and Tux may huff, puff, insult, gainsay, and dissemble until blue in the face, but that won't change anything in reality.

True but I'm hardly doing any of those things.  Grin

Quote
Bickering and playing word games don't cut it, especially when a statistical modeling expert (specialized in computer security) is schooling you on the facts and logic of the issue at hand.
What, other than maud_dib's say-so, has you thinking he's any kind of expert in statistics?  I mean, other than that he appears to agree with you.  Can you point me to a specific, well-supported point he has made?  From where I sit, if there were an award for uninformative posts, I think maud_dib would be a contender.

Quote
I repeat: Referring to a commonly known fact, such as the security of BSD vs Linux, is not an argument.

Actually it is.  We call it an implied argument from popularity.  It's no more compelling than people who say: "It's a well-known fact that X is Y."

Quote

I think you must be pulling my leg here... any reason that you take the word of this random person on the internet?  I mean, other than that they appear to agree with you?

Quote from: some random person on the internet
Security is a difficult and sometimes controversial thing to analyze. The only truly "secure" operating systems are those that have no contact with the outside world. The firmware in your DVD player is a good example.

This reads a lot like someone who heard a college lecture and is making "broken telephone" mistakes in repeating it.  I've heard this used as a theoretical example: a system which has no contact with people or machines at all is secure by definition, but that's because 'security' is probably being defined as 'allowing only the right people access to the right things'.  Allowing nobody access to anything clearly conforms to that definition.  However, that is also an example of the most useless system.  Sometimes people use this example to refer to attacks that are network-related, and again, yes, removing the ability to talk on the network conforms to our definition.  However, the network isn't the only way people gain access to information.  Ergo, while a non-networked computer is immune to network attacks, that doesn't mean it's immune to the wrong people getting access to information.  A computer could have no network card but be physically available.  Terminals in our university library were able to access some machines without using a network connection, as they were hardwired into a serial console connected to the computer.

Using a DVD player as an example is either wrong, dated, or unclear.  DVD players allow physical access, and some even allow network access.

Quote from: some random person on the internet
only two remote attack vulnerabilities have been found in the last ten years. This is because OpenBSD doesn't create a large attack surface by running a large number of networked apps.

Actually this isn't quite correct.  The actual tagline was:

"Only two remote holes in the default install, in a heck of a long time!" (emphasis mine)

I addressed this already.  The best argument you can make here is that an OpenBSD box with nothing else installed is secure from remote attack.  However, that doesn't really tell you much about OpenBSD's code, review procedure, or overall security model.  So it says nothing about whether the average OpenBSD box is secure, nothing about how secure an OpenBSD box would be when running a common application in a production environment, and it certainly says nothing when compared to a Linux machine that has been secured by someone qualified to do so.

What OpenBSD attempts to do is commendable but the statement is closer to marketing hype than a useful security metric.

Quote
I've met Linus Torvalds in person.  He's a nice guy, and it sucks his baby is being represented here by fanboi suffering from Tiny E-peen Complex.

Linus has publicly been more critical of the OpenBSD development model than I have been in any of my posts.  In case you keep missing it: I simply deny that there is clear evidence that in any real-world environment a secured OpenBSD (or FreeBSD) box is more secure than a secured Linux box.  That isn't saying OpenBSD isn't good, nor is it saying Linux is the best.


Quote
Very well put; an elegant statement.
...and inaccurate.  I've already given examples of how OpenBSD has avoided proactive security measures, either because they consider their existing security sufficient or because Theo D. has gone a little nuts.

Quote
I love that some ITT Tech foolio is questioning the methodology of a trained statistical modeler.

First, your use of "statistical modeler" instead of simply "statistician" is adorable!  Second, I think it's more me asking "Where exactly *is* your methodology?" and maud_dib kind of pretending that methodological transparency isn't important.

Quote from: someone who has certifications but is not CS which Icebreaker keeps implying is a lesser situation
there’s evidence to suggest that most Linux distributions are not up to the standards of FreeBSD, for instance — let alone OpenBSD, with possibly the best security record of any general-purpose operating system.

This article is all over the place.  At first he says there is "evidence to suggest," but he never says what that evidence is.  The only thing he seems to mention is the OpenBSD tagline (which says nothing about FreeBSD).  He does make this other interesting quote later on:

Quote
One of the most common criteria used by people who don’t really understand security, and by those who do understand it but want to manipulate those who don’t with misdirection and massaged statistics, is vulnerability discovery rates.

Isn't this one of the two primary metrics that maud_dib espoused?  According to this guy he says maud_dib 'doesn't understand security'.   I've already given my rationale for why these rates aren't such a good metric.
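To make that concrete, here's a toy illustration (all numbers invented, not real vulnerability data) of why raw discovery counts mislead: a heavily deployed, heavily audited system will rack up more *discovered* flaws than a small, rarely studied one, even if its code is no worse per line.

```python
# Toy illustration (invented numbers) of why raw vulnerability
# discovery counts are a poor security metric on their own.

systems = {
    # name: (vulns discovered this year, millions of lines of code,
    #        rough share of researcher attention, on a 0..1 scale)
    "os_a": (120, 15.0, 0.8),   # widely deployed, heavily audited
    "os_b": (10, 0.5, 0.05),    # small niche system, rarely studied
}

for name, (vulns, mloc, scrutiny) in systems.items():
    per_mloc = vulns / mloc
    # Crude adjustment: fewer eyes means fewer *discovered* flaws,
    # not necessarily fewer *existing* flaws.
    scrutiny_adjusted = per_mloc / scrutiny
    print(f"{name}: raw={vulns}, per-MLOC={per_mloc:.1f}, "
          f"scrutiny-adjusted={scrutiny_adjusted:.1f}")
```

On raw counts os_b looks twelve times "safer"; normalize by code size and scrutiny and the picture flips entirely.  That's the misdirection the quoted author is talking about.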

He goes on to mention a few more metrics without any rationale for why they are particularly useful.

Quote
    i) code quality auditing
    ii) default security configuration
    iii) patch quality and response time
    iv) privilege separation architecture

i) I don't agree with this; prima facie it's difficult to express as a metric.  What units does "code quality auditing" come in?
ii) Likewise this is hard to express usefully as a number, and it's really only meaningful to people who are in the habit of deploying systems in their default configs.
iii) If by 'quality' we mean a binary condition consisting of a) does it fix the security problem and b) does it cause another security problem, then this is a metric I actually like.  But there's no information as to how FreeBSD, Linux, and OpenBSD differ in this respect.
iv) Again, I like this idea, but it's difficult to express in units.  Perhaps some categorical scale?
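For what it's worth, (iii) is the easiest of the four to actually turn into numbers.  A minimal sketch (the advisory records here are made up, not any vendor's real feed) of what measuring it might look like:

```python
from datetime import date

# Hypothetical advisory records for one OS:
# (flaw disclosed, patch released, patch itself later found
#  to introduce a new security problem?)
advisories = [
    (date(2011, 1, 3),  date(2011, 1, 5),  False),
    (date(2011, 2, 10), date(2011, 2, 11), False),
    (date(2011, 4, 1),  date(2011, 4, 30), True),
]

# Response time: days from disclosure to patch.
response_days = [(patched - disclosed).days
                 for disclosed, patched, _ in advisories]

# Patch "quality" as the binary condition described above.
bad_patches = sum(1 for *_, regressed in advisories if regressed)

print("average response (days):", sum(response_days) / len(response_days))
print("patches causing new problems:", bad_patches, "of", len(advisories))
```

Collect the same two numbers for FreeBSD, Linux, and OpenBSD over the same window and you'd at least have a comparison worth arguing about, which is more than the article offers.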


Quote
It's entertaining how, when his on-topic nonsense is corralled and put down, he simply disputes whether or not you *really* have an MS in stat.

...so on your Internet nobody pretends they're something they're not?  I can see how security seems so easy over there, then.  Just make your login sequence ask "Hey, are you *really* supposed to be accessing this system?", since where you are everyone is completely honest.  All attackers will be forced to say "No", and then you can log them out.

Where I am, people regularly attempt to fake it.  As I've already said, my position is simple: maud_dib has provided very little in the way of what he was attempting to do with his "psi", how it is meaningful to computer security, or what data he was supplying to it.  He has had multiple opportunities to clear up some very simple questions.  I think that means the alleged statistician has earned some skepticism.

Besides, I've provided a wealth of information as to why I hold the positions I do, and I'm open to argument on those points.  So far all you want to say is that your position is a "fact" and therefore requires no support.  Which is fine; have any sort of religion about computers you want, but it's hardly surprising when those of us above the age of seventeen think the world is a little more complicated than you suggest.
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
im finding it interesting that you are quoting answers.com are you serious?

Do you understand what this means, Professor Gainsayer?

Quote
OpenBSD has an extremely stringent security auditing policy; only two remote attack vulnerabilities have been found in the last ten years. This is because OpenBSD doesn't create a large attack surface by running a large number of networked apps.


It's called 'Answers.com' for a reason!
legendary
Activity: 1050
Merit: 1000
You are WRONG!
im finding it interesting that you are quoting answers.com are you serious?
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.

- BSD has proactive security, Linux security is reactive

- BSD is designed from the ground for security, Linux instead has a more chaotic architecture

Very well put; an elegant statement.

I love that some ITT Tech foolio is questioning the methodology of a trained statistical modeler.

It's entertaining how, when his on-topic nonsense is corralled and put down, he simply disputes whether or not you *really* have an MS in stat. 

The word for when someone contradicts everything you write is called 'gainsaying.'  It's not a polite or nice thing to do, and hence is often considered trolling or flamebaiting.

You've led the donkeys to water, but the stubborn asses won't drink.

No wonder you've added the 3 Stooges to your plonk file (wish BTCforum would add an 'ignore' feature).

It didn't have to be this difficult:

http://tinyurl.com/3lfxm4x
Quote
Is Linux the most secure OS?

Linux-based systems get a lot of press in IT trade publications. A lot of that press relates to its security characteristics. In fact, some claim “Linux is the most secure operating system (OS) of them all.” Such statements are, of course, unsupportable hyperbole; while many Linux distributions may outshine both MS Windows and Apple MacOS X by a significant margin, there’s evidence to suggest that most Linux distributions are not up to the standards of FreeBSD, for instance — let alone OpenBSD, with possibly the best security record of any general-purpose operating system.

legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
In the CS community, it's well known that BSD is more stable, secure, and the best OS for critical infrastructure, while Linux is more friendly, flexible, and better for hobbyists or businesses that can save money (by hiring cheaper Linux fanboi rather than expensive real computer scientists).

Referring to a commonly known fact, such as the security of BSD vs Linux, is not an argument.

If it were a fact, then you would be able to point to some clear and objective evidence of that, right?  (Keep in mind that because you are referring to 'security' as some kind of blanket term, you'd be responsible for providing that kind of evidence for the majority of aspects of the term, and of course for how exactly you know your set of aspects is the majority.)

Nice labeling there, mac.  This isn't gainsaying.  I, simply as an IT security professional and the holder of a degree in computer science, have seen no well-defined, broadly scoped body of evidence that BSD is superior in "security" to Linux.  Nor, in my conversations with other security professionals or members of the CS community (fellow alumni, Usenix attendees), have I seen any clear consensus on the superiority of BSD.  I have certainly met people who make that claim, but they always seem to fall down when trying to come up with a general definition of security, or if they manage that, they fall down in substantiating it with regard to their favored OS/platform/giant spider.  Ergo it seems reasonable to me to call such a term "complex"; furthermore, given that even the most secure systems from a theoretical point of view can be entirely undone in implementation (such as EMF side-channel attacks on quantum key distribution systems), it seems again reasonable to call such a system "nuanced".  Given these two facts (using the term correctly here), I think it is entirely justified to be mistrustful of any and all who consider "security" an open-and-shut case for product (or platform, or giant spider) X over product (you get the idea) Y.

What do you want from me here, guy?  The two sentences above tell me to read your use of the term "well-known" as: your opinion of the opinions of two very large groups, of which your sample size is probably so small and poorly randomized as to be useless.  Not to mention that even if the majority of those two groups held the opinion you claim, it still isn't necessarily meaningful.  Computer science and EECS people do not always have a background in computer security, making their opinion anywhere from questionable to useless.  Given the size of the groups and the variance in the population's skill set, you could easily be getting the opinion of the least qualified people.  I mean, would you really rank the opinion of someone whose focus is combinatorics or AI or queuing theory as equal to or greater than that of Bruce Schneier or (going old school) D. J. Bernstein when it comes to an application's or operating system's "security"?  If you don't, then how many combinatoricists, AI researchers, or queuing theorists make one Bruce or Dan?

Not to mention it's not hard to find high-profile people in computer security who disagree on "well-known" concepts.

You, BB, and Tux may huff, puff, insult, gainsay, and dissemble until blue in the face, but that won't change anything in reality.

Bickering and playing word games don't cut it, especially when a statistical modeling expert (specialized in computer security) is schooling you on the facts and logic of the issue at hand.  Thanks Maud-dib, for attempting to educate these stubborn script kiddies (1337 RHEL cert notwithstanding, LOL!).

I repeat: Referring to a commonly known fact, such as the security of BSD vs Linux, is not an argument.

Quote
What is the most secure operating system?
In: Operating Systems, Computer Security

http://wiki.answers.com/Q/What_is_the_most_secure_operating_system
   
Answer:

Security is a difficult and sometimes controversial thing to analyze. The only truly "secure" operating systems are those that have no contact with the outside world. The firmware in your DVD player is a good example.

Among all modern general purpose operating systems (Windows, Mac OS X, Linux, Solaris, FreeBSD, NetBSD, OpenBSD) the most secure by default is by far OpenBSD.

OpenBSD has an extremely stringent security auditing policy; only two remote attack vulnerabilities have been found in the last ten years. This is because OpenBSD doesn't create a large attack surface by running a large number of networked apps.


I've met Linus Torvalds in person.  He's a nice guy, and it sucks his baby is being represented here by fanboi suffering from Tiny E-peen Complex.

member
Activity: 84
Merit: 10
Bravo.  The devil may be in the details, but the more important part is that Mt. Gox was overly optimistic and got lazy.  I still think Bitcoins are a toy, but they will be far more than that when non-technical people can expect that the technical people won't let their stored value be secured by a kiss and a promise.