Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]

1. this WP article was the fifth in a series of articles following the security of the internet from its beginnings to related matters of today. discussing the security of linux (or lack thereof) fits well in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim yourself in your recent pieces on the topic. you don't like the facts? then say so. and even better, do something constructive about them like Kees and others have been trying. however, silly comparisons to ancient crap like the Mindcraft study and fueling conspiracies don't exactly help your case.

2. "We do a reasonable job of finding and fixing bugs." let's start here. is this statement based on wishful thinking or cold hard facts you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's more than the lifetime of many devices people buy and use and ditch in that period.

3. "Issues, whether they're security-related or not, are patched quickly," some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels and we even have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bugreports will be treated with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples are not statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets decided to be security related, which as we all know is a messy business in the linux world)

4.
"and the stable-update mechanism makes those patches available to kernel users." except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.

5. "Specifically, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream." you don't have to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you will have to pay for it. no ifs no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it pretty hypocritical that well paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn out upstreaming work and got no answers.

Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]

Money (aha) quote: > I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well. Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.)
should pay security specialists like you to upstream your patches.

Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]

I would just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you've (probably unintentionally) dismissed all of the parent's arguments by pointing at their presentation. The tone of PaXTeam's comment reflects the frustration built up over the years with the way things work, which I think should be taken at face value, empathized with, and understood rather than simply dismissed.

1. http://rationalwiki.org/wiki/Tone_argument
2. http://geekfeminism.wikia.com/wiki/Tone_argument

Cheers,

Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]

why, is upstream known for its general civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?

Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]

No Argument

Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]

Please don't; it doesn't belong there either, and it especially doesn't need a cheering section as the tech press (LWN generally excepted) tends to provide.

Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]

OK, but I was thinking of Linus Torvalds

Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]

Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]

Why should you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving an organization (ahem, PaXTeam) money is the only solution.
(Not meant to impugn PaXTeam's security efforts.) The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (either real or perceived), but simply throwing money at the problem won't fix this. And yes, I do understand the commercial Linux distros do lots (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's much more involved than just that.

Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]

Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]

I think you actually agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going toward security... and now it needs to. Aren't you glad?

Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]

they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux. > And if Jon had only talked to you, his would have been too. given that i'm the author of PaX (part of grsec), yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of someone else, be my guest and name them, i'm pretty sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands). > [...]it also contained quite a few groan-worthy statements.
nothing is perfect, but considering the audience of the WP this is one of the better journalistic pieces on the topic, regardless of how you and others dislike the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;). speaking of your complaints about journalistic qualities: since a previous LWN article saw fit to include a few typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article? > Aren't you glad? no, or not yet anyway. i've heard lots of empty words over the years and nothing ever materialized or worse, all the money has gone to the pointless exercise of fixing individual bugs and the related circus (that Linus rightfully despises FWIW).

Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]

Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]

Right now we've got developers from big names saying that doing everything the Linux ecosystem does *safely* is an itch that they have. Sadly, the surrounding cultural attitude of developers is to hit functional goals, and occasionally performance goals. Security goals are often neglected. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a process that will take a sustained effort, not merely the upstreaming of patches. Whatever the culture, these patches will go upstream eventually anyway because the ideas that they embody are now timely.
I can see one way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this sort of problem, here's how everything will keep working because $evidence, note carefully that you're staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherds users to follow the pattern of stating problem + solution + functional test evidence + performance test evidence + security test evidence. K3n.

Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

And about that fork barrel: I'd argue it's the other way around. Google forked and lost already.

Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]

Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]

So I have to confess to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?

Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]

I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.

Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]

Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]

I hope I'm wrong, but a hostile attitude isn't going to help anyone get paid. It's at a time like this, when there is demand for something you appear to be an "expert" at, that you demonstrate cooperation and willingness to participate, because it is an opportunity.
I'm rather surprised that someone doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of these in a typical career, and a handful at the most. Sometimes you have to invest in proving your expertise, and this is one of those moments. It appears the kernel community may finally take this security lesson to heart and embrace it, as stated in the article as a "Mindcraft moment". This is an opportunity for developers that may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end those developers that exploit the opportunity will prosper from it. I feel old even having to write that.

Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]

Maybe there's a chicken and egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and groups with a history of being able to get code upstream. It's perfectly reasonable to prefer working out of tree, providing the ability to develop impressive and important security advances unconstrained by upstream requirements. That's work someone might also want to fund, if that meets their needs.

Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]

You make this argument (implying you do research and Josh doesn't) and then fail to support it with any cite. It would be much more convincing if you gave up on the Onus probandi rhetorical fallacy and actually cited facts. > case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.
For those following along at home, this is the relevant set of threads: http://lists.coreinfrastructure.org/pipermail/cii-discuss... A quick precis is that they told you your project was bad because the code was never going upstream. You told them it was because of kernel developer attitudes, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes and finally even your apologist told you that submitting a proposal might be the smartest thing to do. At that point you went silent, not vice versa as you imply above. > obviously i won't spend time to write up a begging proposal just to be told that 'no sorry, we don't fund multi-year projects at all'. that's something that one should be told upfront (or heck, be part of some public guidelines so that others will know the rules too). You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you'll spend it, they're unlikely to disburse. Saying I'm smart and I know the problem, now hand over the money, doesn't even work for most academics who have a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals. > as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)? jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l 1 Stellar, I must say. And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time consuming skill, and one of the reasons groups like Linaro exist and are well funded. If more of your stuff does go upstream, it will be because of the not inconsiderable efforts of other people in this area.
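jejb's counting one-liner can be tried in any kernel checkout; here is a hedged sketch of the same technique (the author pattern is simply the one quoted in the comment, not a recommendation, and `--author` is git's built-in equivalent of the grep pipe):

```shell
# Count commits whose recorded author matches a pattern.
# Run inside any git checkout; 'pax.*team' is the pattern
# from the comment above.
git log --format='%an' | grep -ci 'pax.*team'

# The same count using git's own author filter, with -i making
# the pattern match case-insensitive:
git log -i --author='pax.*team' --oneline | wc -l
```

Note that, as PaXTeam points out in the reply, counting Author: lines misses contributions credited indirectly (Reported-by, fixes derived from bug reports, and so on), so the number measures authored commits only.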
You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly common first stage business model, but it does rather depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there. Now here is some free advice in my field, which is helping companies align their businesses in open source: The selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in your despite, leaving you with nothing to sell. If your business plan B is selling expertise, you have to remember that it can be a hard sell when you've got no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact "crazy security person" will become a self fulfilling prophecy. The advice? it was obvious to everyone else who read this, but for you, it is: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to Plan B, and you might actually have a Plan A selling a rollup of upstream-track patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't be dismissed because your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.

Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]

> Second, for the probably viable pieces this would be a multi-year > full time job. Is the CII prepared to fund projects at that level? If not > all of us would end up with lots of unfinished and partially broken features. please show me the answer to that question.
without a definitive 'yes' there's no point in submitting a proposal, because this is the time frame that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information. > Stellar, I must say. "Lies, damned lies, and statistics". you know there's more than one way to get code into the kernel? how about you use your git-fu to find all the bugreports/suggested fixes that went in due to us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1, so it's no wonder i don't send patches directly in (and that one commit you found that went in despite said ban is actually a very bad example because it's also the one that Linus censored for no good reason and made me decide to never send security fixes upstream until that practice changes). > You now have a business model selling non-upstream security patches to customers. now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anyone's bills. > [...]calling into question the earnestness of your attempt to put them there. i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a way i've got my answers, there's nothing more to the story. as to your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code solving complex problems are few and far between, as you'll find out in short order.
such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in starvation. PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason. or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine. PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).

Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]

In other words, you tried to define their process for them ... I can't think why that wouldn't work. > "Lies, damned lies, and statistics". The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one line command anyone could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like more? > i've never in my life tried to submit PaX upstream (for all the reasons discussed already). So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.

Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]

what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words, you admit that my question was not actually answered.
> The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove, as they say even in kernel circles: code speaks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there're clearly other more capable people who've done so and decided that my/our work was worth something, else no one would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel, not everyone's dying to get their code in there, especially when it means putting up with the kind of stupid hostility on lkml that you have now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that infamous lkml style here. good job there James.). as for world domination, there're many ways to achieve it, and something tells me that you're clearly out of your league here since PaX has already achieved that. you're running code that implements PaX features as we speak.

Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]

I posted the one line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/): > as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?

Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]

Please provide one that isn't wrong, or is less wrong. It will take less time than you've already wasted here.

Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]

anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an incredibly complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).

Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]

*shrug* Or don't; you're only sullying your own reputation.

Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]

I wouldn't bother

Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]

Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]

Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]

Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to . PaXTeam is not averse to outright lying if it means he gets to appear right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other I doubt it.
(PaXTeam's total unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he's lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake. Gosh, I wonder why his fixes aren't going upstream very fast.)

Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]

> and that one commit you found that went in despite said ban also, someone's ban doesn't mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).

Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]

I don't see this message in my mailbox, so presumably it got swallowed.

Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

You are aware that it's entirely possible that everyone is wrong here, right? That the kernel maintainers need to focus more on security, that the article was biased, that you're irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right, it doesn't mean you are?

Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]

I think you have him backwards there.
Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to a lot of the community, the article might in fact contain a lot of truth.

Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]

Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]

"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true" Just as you criticized the article for mentioning Ashley Madison even though in the very first sentence of the next paragraph it mentions it didn't involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison as an example for non-technical readers about the prevalence of Linux in the world, if you're criticizing the mention, then shouldn't likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security? As the PaX Team pointed out in the initial post, the motivations aren't hard to understand -- you made no mention at all of it being the 5th in a long-running series following a fairly predictable time trajectory. No, we didn't miss the overall analogy you were trying to make, we just don't think you can have your cake and eat it too. -Brad

Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]

Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]

It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are!
:-) K3n.

Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]

Sadly, I understand neither the "security" folks (PaXTeam/spender) nor the mainstream kernel people in terms of their attitude. I confess I have absolutely no technical capabilities on any of these topics, but if they all decided to work together, instead of having endless and pointless flame wars and blame game exchanges, a lot of this stuff would have been done already. And all the while everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem to be bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback. Perplexing stuff...

Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]

Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]

Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to achieve maximum performance, because you can trust all users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill many of the exploit classes there, if those devices can still run reasonably well with most security features turned on. So, it's not either/or. It's probably "it depends". But, if the stuff isn't there for everybody to compile/use in the vanilla kernel, it will be harder to make it part of everyday choices for distributors and users.

Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]

How sad.
This Dijkstra quote comes to mind immediately: Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."

Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]

I guess that truth was too unpleasant to fit into Dijkstra's world view.

Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]

Indeed. And the interesting thing to me is that once I reach that point, tests are not enough - model checking at a minimum, and actually proofs are the only way forwards. I'm no security expert, my field is all distributed systems. I understand and have implemented Paxos and I believe I can explain how and why it works to anyone. But I'm currently doing some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is enough because there are infinite interleavings of events and my head just couldn't cope with working on this either at the computer or on paper - I found I couldn't intuitively reason about this stuff at all. So I started defining the properties I needed and step by step proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find this both completely obvious that this would happen and utterly terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.

Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]

> Indeed. And the interesting thing to me is that once I reach that point, tests are not enough - model checking at a minimum, and actually proofs are the only way forwards.
Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head". But it's simple - by training I'm a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff. Point is, you need to *layer* stuff, and look at things, and say "how can I split parts off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a collection of identical objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so easily? :-) Going back THIRTY years, I remember a story about a guy who built little computer crabs that would quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to just process a tiny bit of the problem, and there was no central "brain". But it worked ...
Maybe you should just write a bunch of small modules to solve each individual problem, and let the final solution "just happen". Cheers, WolPosted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]To my understanding, this is exactly what a mathematical abstraction does. For example, in Z notation we might construct schemas for the various editing ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or with the preceding aggregate schema composed of schemas A through O (for which these have already been argued). The end result is a set of operations that, executed in arbitrary order, result in a set of properties holding for the result and outputs. Thus proving the formal design correct (with caveat lectors concerning scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]Looking through the history of computing (and probably lots of other fields too), you'll probably find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture. (Medicine, an interest of mine, suffers from that too - I remember somebody talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.) Cheers, WolPosted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]https://www.youtube.com/watch?v=VpuVDfSXs-g (LCA 2015 - "Programming Considered Harmful") FWIW, I think that this talk is very relevant to why writing secure software is so hard..
-Dave.Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]While we're spending millions on a mess of security issues, kernel issues are not on our top-priority list. Honestly, I remember only once having discussed a kernel vulnerability. The result of the analysis was that all our systems were running kernels older than the kernel that had the vulnerability. But "patch management" is a real issue for us. Software must continue to work when we install security patches or update to new releases because of a vendor's end-of-life policy. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several tens of thousands of Linux systems will stop the roll-out of the security update. Another problem is embedded software or firmware. These days almost all hardware systems include an operating system, often some Linux variant, providing a full network stack embedded to support remote management. Frequently those systems do not survive our mandatory security scan, because vendors still have not updated the embedded openssl. The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer, without any customer maintenance. The current state of software engineering would require support for an automated update process, but vendors must understand that their business model has to be able to finance the resources providing the updates. Overall I am optimistic; networked software is not the first technology used by mankind to cause problems that were addressed later.
Steam engine use could lead to boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: the people who learn how to hack into these systems via kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, in general, where data has been stolen in order to be released to embarrass people, it _looks_ as if those hacks are via much simpler vectors. I.e. less-skilled hackers find there is a whole load of low-hanging fruit they can get at. They are not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences. So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is far more effective at keeping systems "secure" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't hurt your bottom line - at least not in a way your shareholders will be aware of. So why fund security?Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]On the other hand, some effective mitigation at the kernel level would be very helpful in crushing cybercriminal/skiddie attempts. Say one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely.
Then you know there are known attack techniques (such as offset2lib) that can help the attacker make the weaponized exploit much easier. Will you explain the failosophy "a bug is a bug" to your customer and tell them it will be okay? Btw, offset2lib is useless against PaX/grsecurity's ASLR implementation. For most commercial uses, more security mitigation inside the software will not cost you extra budget. You will still have to do the regression tests for each upgrade anyway.Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]Keep in mind that I specialize in external web-based penetration tests and that in-house tests (local LAN) will likely yield different results.Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess.Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link](Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]I would just like to add that, in my opinion, there is a general problem with the economics of computer security, which is especially visible at present. Two problems, even, possibly.
First, the money spent on computer security is often diverted towards the so-called security "circus": quick, simple solutions that are primarily selected just in order to "do something" and get better press. It took me a long time - possibly decades - to come to the claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude, and would rather take the risk knowingly (provided that I can save money/resources for myself) than take a bad approach to solving it (and have no money/resources left when I realize I should have done something else). And I find there are lots of bad or incomplete approaches currently available in the computer security field. Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. And we certainly have to enlighten the press on that, because it is not so easy to appreciate the effectiveness of protection mechanisms (which, by definition, should prevent things from happening). Second, and this may be newer and more worrying: the flow of money/resources is oriented towards attack tools and vulnerability discovery much more than towards new security mechanisms. This is especially worrying as cyber "defence" initiatives look more and more like the conventional industrial projects aimed at producing weapons or intelligence systems. Moreover, bad, useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic university-level encryption scares them down to uselessness. Still, all the resources go to those adult teenagers playing white-hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis.
And now also to the cyberwarriors and cyberspies that have yet to prove their usefulness at all (especially for peace protection...). Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right at all to any of the budget-allocation decisions. Only those working on protection should. And yep, it means we should decide where to put the resources. We have to claim the exclusive lock for ourselves this time. (And I guess the PaXteam could be among the first to benefit from such a change.) Thinking about it, I would not even leave the white-hat or cyber-guys any hype in the end. That's more publicity than they deserve. I crave for the day I will read in the newspaper: "Another of these ill-advised debutant programmer hooligans who pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake and managed nonetheless to bring one of those unfinished and bad-quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the protection experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to create more security-engineer positions in the academic domain or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to be unprofessional in this affair." Hmmm - cyber-hooligans - I like the label. Although it does not apply well to the battlefield-oriented variant.Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are enormous amounts of money going into 'cyber security', but it is mostly spent on government compliance and audit efforts.
This means that instead of actually putting effort into correcting problems and mitigating future ones, the vast majority of the effort goes into taking existing programs and making them conform to committee-driven guidelines with the minimal amount of effort and change. Some level of regulation and standardization is absolutely needed, but lay people are clueless and utterly unable to discern the difference between somebody who has useful experience and some company that has spent millions on slick marketing and 'native advertising' on big websites and computer magazines. The people with the money unfortunately have only their own judgment to rely on when buying into 'cyber security'. > Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some corporation like Red Hat is their money. Money being spent by governments is the government's money. (You, really, have far more control over how Walmart spends its money than over what your government does with theirs.) > This is especially worrying as cyber "defence" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Moreover, bad useless weapons, because they are only working against our very vulnerable current systems; and bad intelligence systems, as even basic university-level encryption scares them down to uselessness. Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily concerned with self-preservation.
Money spent on drone projects or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data-collection efforts. Sadly you/I/we can't rely on some magical benefactor with deep pockets to sweep in and make Linux better. It is just not going to happen. Corporations like Red Hat have been massively helpful in spending resources to make the Linux kernel more capable.. however they are driven by the need to turn a profit, which means they must cater directly to the kind of requirements established by their customer base. Customers for EL tend to be far more focused on reducing costs associated with management and software development than on security at the low-level OS. Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats.. assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I am sure most customers will happily defeat or strip out any security mechanisms introduced into Linux. On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most businesses. Even for 'normal' Linux users, a security bug in their Firefox's NPAPI flash plugin is far more devastating and poses a massively greater threat than an obscure Linux kernel buffer overflow problem. It is just not that important for attackers to get 'root' to get access to the important data... often all of which is contained in a single user account.
Ultimately it is up to people like you and me to put the effort and money into improving Linux security. For both ourselves and other people.Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled due to bad faith. And this is mostly your money or mine: either tax-fueled governmental resources or corporate costs that are directly re-imputed into the prices of the goods/software we are told we are *obliged* to buy. (Look at the marketing discourse for corporate firewalls, home alarms or antivirus software.) I think it is time to point out that there are several "malicious malefactors" around, and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among the ones hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them, or oblige them to reveal themselves, than many of us. I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation). Finally, I think you are right to say that currently it is only up to us individuals to try honestly to do something to improve Linux or computer security. But I still think I am right to say that this is not normal; especially while some very serious people get very serious salaries to distribute, more or less randomly, some hard-to-evaluate budgets.
[1] A paradoxical situation when you think about it: in a domain where you are firstly preoccupied by malicious people, everyone should have factual, transparent and honest behavior as the first priority in their minds.Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]It even has a nice, seven-line BASIC-pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software. The sad thing is that this is from 2005, and all the things that were obviously stupid ideas 10 years ago have proliferated even more.Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]Note that IMHO we should investigate further why these dumb things proliferate and get so much support. If it is only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do wonderful things given the right message. If we are facing active people exploiting public credulity: let's identify and fight them. But, more importantly, let's capitalize on this knowledge and secure *our* systems, to demonstrate at a minimum (and more later on, of course). Your reference's conclusion is particularly nice to me. "Challenge [...] the conventional wisdom and the status quo": that job I would happily accept.Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]That rant is itself a bunch of "empty calories". The converse of the items it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it is just a rant that offers little of value. Personally, I think there is no magic bullet. Security is, and always has been throughout human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs.
If there are mistakes being made, it is that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the grsecurity kernel-hardening stuff so hard to apply to common distros (e.g. there is no reliable source of a grsecurity kernel for Fedora or RHEL, is there?). Why does the entire Linux kernel run in one security context? Why are we still writing so much software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed? No doubt there are many people working on "block classes of attacks" stuff; the question is, why aren't there more resources directed there?Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]>There are many reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies. This seems like a reason which is really worth exploring. Why is it so? I think it is not obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, Linux development gets resourced. It has been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it really already gets enough. You could say that disaster has not struck yet, that the iceberg has not been hit.
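[The bounds-checking layer between I/O and parsing that paulj mentions can be sketched in a few lines of C. This is a hypothetical illustration only; the `cursor` type and function names are invented for this example, not taken from any real project. The idea is that parsing code goes through a checked reader instead of doing raw pointer arithmetic on the input buffer:]

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A bounded read cursor over an untrusted input buffer.
 * Invariant: pos <= len at all times. */
struct cursor {
    const uint8_t *buf;
    size_t len, pos;
};

/* Copy n bytes out of the buffer, refusing out-of-bounds reads.
 * Because pos <= len, the subtraction below cannot underflow. */
static bool cur_read(struct cursor *c, void *out, size_t n) {
    if (n > c->len - c->pos)
        return false;
    memcpy(out, c->buf + c->pos, n);
    c->pos += n;
    return true;
}

/* Typed helper: parse a little-endian 16-bit field. */
static bool cur_read_u16(struct cursor *c, uint16_t *v) {
    uint8_t b[2];
    if (!cur_read(c, b, sizeof b))
        return false;
    *v = (uint16_t)(b[0] | (b[1] << 8));
    return true;
}
```

With a layer like this, every field the parser consumes is checked once, in one place, rather than relying on ad-hoc length arithmetic scattered through the parsing code - which is exactly the class-of-bugs argument being made above.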
But it seems that the Linux development process is not overly reactive elsewhere.Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]That is an interesting question; certainly that is what they really believe, no matter what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell, there is not enough consequence for the lack of security to drive more investment, so we are left begging and cajoling unconvincingly.Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]The key problem with this domain is that it pertains to malicious faults. So, by the time consequences manifest themselves, it is too late to act. And if the current commitment to the absence of a voluntary strategy persists, we will oscillate between phases of relaxed unconsciousness and anxious paranoia. Admittedly, kernel developers seem fairly resistant to paranoia. That is a good thing. But I am waiting for the day when armed land-drones patrol US streets in the vicinity of their children's schools for them to discover the feeling. The days are not so distant when innocent lives will unconsciously depend on the security of (Linux-based) computer systems; under water, that is already the case if I remember my last dive correctly, as well as in several current cars according to some reports.Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]Traditional hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions. This is really not that surprising: for hosting needs the kernel has been "done" for quite some time now.
Apart from support for current hardware, there is not much use for newer kernels. Linux 3.2, or even older, works just fine. Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible), or advanced power management (if the system does not have constant high load, it is not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher. For their security needs, hosting companies already use grsecurity. I do not have any numbers, but some experience suggests that grsecurity is basically a fixed requirement for shared hosting. On the other hand, kernel security is almost irrelevant on nodes of a supercomputer or on a system running large enterprise databases wrapped in layers of middleware. And mobile vendors simply do not care.Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]LinkingPosted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]The assembled likely recall that in August 2011, kernel.org was root-compromised. I am sure the system's hard drives were sent off for forensic examination, and we have all been waiting patiently for the answer to a most important question: what was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks.
We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the site News) in a May 2013 edit, and there has not been - to my knowledge - a peep about any report on the incident since then. This has been disappointing. When the Debian Project found unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public post-mortems of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of a post-mortem on the kernel.org meltdown - in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote. Who's responsible, then? Is anyone? Anyone? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown: nothing yet. How about some facts? Rick Moen [email protected] Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]Less seriously, note that if even the Linux mafia does not know, it must be the Venusians; they are notoriously stealthy in their invasions.Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]I know the kernel.org admins have given talks about some of the new protections that have been put into place.
There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]I beg your pardon if I was somehow unclear: that was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, a few years prior, around 2002, and into many other shared Internet hosts for many years). But that is not what is of primary interest, and is not what the long-promised forensic study would primarily concern: how did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated'. OK, folks, you have now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: whose key was stolen? Who stole the key?) That is the kind of post-mortem that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be appropriate to know and share that information. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was).
Rick Moen [email protected] Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]I have done a closer review of revelations that came out soon after the break-in, and think I have found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': root escalation was via exploit of a Linux kernel security hole. Per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits: - Site admins left the root-compromised Web servers running with all services still lit up, for multiple days. - Site admins and Linux Foundation sat on the information and failed to inform the public for those same several days. - Site admins and Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?) - After promising a report for several years and then quietly removing that promise from the front page of kernel.org, Linux Foundation now stonewalls press queries. I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Essentially, these are surmises. If the people with the facts were more forthcoming, we would know what happened for sure.)
I do have to wonder: if there is another embarrassing screwup, will we even be told about it at all? Rick Moen [email protected] Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]Also, it is preferable to use live memory acquisition prior to powering off the system; otherwise you lose out on memory-resident artifacts that you can perform forensics on. -BradHow about the long-overdue post-mortem on the August 2011 kernel.org compromise?Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]Thank you for your comments, Brad. I had been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I had heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah