Microsoft officially admits its data-mining activity and gives users so-called options to “choose” what they share. Third parties have shown time and time again that these user choices are ignored and that there is no way to disable data gathering completely.
Let’s take a look at the net effect on privacy:
A securely coded Windows that resists third-party spyware + data snooping included in its core = a net loss of end-user freedom/privacy and a security risk, as the NSA has been known to use Windows error reporting to aid exploitation.
A less defended libre kernel that is more vulnerable to active attacks + no privacy-invasive code included by default = a net gain of privacy by default, as nothing is being reported anywhere unless someone decides to target you.
Windows is malware because of what it does. I don’t care if you trust that particular party for some reason with all the data it collects. Their compiler was even caught slipping in telemetry features in apps compiled with it. Classic backdooring.
macOS has added telemetry to its local folder search.
Proprietary software doesn’t need more defenders. I am sure their massive budgets, monopolistic agreements with OEMs and user ignorance have done more than enough to secure their tyranny. Let’s look at how we can improve what we have here so users have a reasonable shot at having any privacy in this age.
You even acknowledged yourself that it could be a useful feature, not a backdoor and even considered “backdooring” Whonix too.
You’re completely misrepresenting what they’re actually doing. As said in the articles linked, Microsoft gives some companies early access to vulnerability info/releases so they can patch their systems before it’s public.
This is done everywhere and isn’t an issue. Linux does this too.
Fixes for sensitive bugs, such as those that might lead to privilege escalations, may need to be coordinated with the private <firstname.lastname@example.org> mailing list so that distribution vendors are well prepared to issue a fixed kernel upon public disclosure of the upstream fix.
It’s what you’re saying.
It’s still hardly “sabotage”. It should be listed under user freedom restrictions: “Only paying customers can postpone updates”.
Not a big difference since we’ve already covered that hiding backdoors in open source code is just as easy.
It makes no sense to claim malware on e.g. Debian won’t work on Ubuntu when they use nearly all of the same software. They just come from different repositories.
Still not true. It’s easy to talk to Microsoft devs. Again, many even have Twitter accounts where any random person can talk to them. I can even give examples if you want me to.
Files on devices can be deleted if they were downloaded from sources competing with Apple companies.
I don’t see that in the GNU page.
Intentional backdoors allow remote root privileges, wipes and deletion of applications.
No, the “remote root backdoor” was a bug that was fixed. Perfect example of GNU’s FUD. They immediately call every bug in proprietary software a “backdoor” with no evidence of such.
The deleting apps thing is behind a paywall so I can’t see it.
An insecure design allows execution of malicious code by applications and the extraction of messaging history.
Big deal. It had a few bugs in the past. Everything has.
Devices are bricked if fixed by an “unauthorized” repair shop.
That’s true and is shitty but it’s not a privacy/security issue.
Devices are bricked that were unlocked without permission.
This just seems like they fixed a verified boot bypass.
Biometric markers like fingerprints are used for device authorization.
That’s not an issue. You can get fingerprint readers on Linux too.
Extensive personal information is sent to Apple servers, such as:
All telemetry can be disabled.
And there were no real rebuttals to my points.
Straw man. Not once have I claimed that Windows doesn’t have privacy issues. I’ve acknowledged Windows’ privacy issues numerous times now. Read the discussion, stop making wild assumptions and stop putting words in my mouth.
I know Windows is spyware. I’m not claiming otherwise.
All macOS telemetry can easily be disabled and you can verify that it is with simple network monitoring.
honestly, i don’t think this is truly fair. it was a horrible choice of variable wording on microsoft’s part, which also became public knowledge around the same time as the controversy involving the secret nsa router closet with at&t, as i recall. microsoft did acknowledge the controversy. but, if i also recall correctly, the discussions on this broke down.
this also wouldn’t be the first time that something shady or unethical was exposed at microsoft. as an example, despite microsoft’s aggressive “anti-piracy” litigation stance, metadata in wav files shipped with their media player on xp demonstrated that the version of soundforge used to process the wav files was supplied by a well-known cracking group. despite the horrible public relations that could have caused, microsoft missed that, even though it should have been obvious. microsoft has a ridiculously huge development team, both in-house and outsourced. is it that unrealistic to believe that employees involved may be nefarious in the context raised in this paragraph regarding “nsakey”? it’s a valid concern, despite being paranoid.
yes, i agree with you that “open source” doesn’t absolutely provide greater security. but, the option to audit is there, which is absent with microsoft. and that is a fair critique at the end of the day. does “open source” make something more secure? obviously not. the ancient bash vulns discovered way too late obviously prove that. but, they were discovered eventually due to it being open source, which may never have been discovered or addressed by the likes of microsoft absent a very open and problematic exploit in the wild that stood to harm their stock prices. if the exploit was discovered by microsoft privately, and it didn’t stand to affect their market share if not disclosed, it’s not an unfair critique to believe that microsoft may have avoided addressing it if the thought was there that it could harm their bottom line if publicly addressed. after all, that’s the oracle way, no?
furthermore, since you brought up the debate regarding privacy vs. security, it would appear that we agree that debian respects privacy more than microsoft, apple, google, etc. whonix host is looking to plug the security holes that exist in vanilla debian. thus, when whonix host is ready, while i agree with you that the “linux is more secure than windows” argument is largely bogus from various technical standpoints at this point as far as exploits are concerned, i think the whonix team will be able to make a case for being better for both privacy and security once whonix host is released. in my honest opinion, that should be the focus. once whonix host is ready for delivery, the “other os” wikis can be focused on that, which i think will be more beneficial.
if anyone thinks i’m off base here, please let me know. but, let’s keep this away from a “microsoft/apple vs. linux” debate. there are way too many subjective uses, which makes that debate unfinishable. but, for what whonix addresses, which is a fairly specific use case, i think we can do it without engaging that debate.
point blank, whonix will never be a panacea. but, for people who want a best case scenario for anonymity with an operating system, whonix fulfills a need there, which will be even better with whonix host. if we keep the focus on that without engaging in fud, hyperbole, or pie in the sky promises, i will continue to believe, and promote, that whonix is the best os for this use scenario. it will never be perfect. but, what compares?
absent qubes that implemented whonix templates, i can’t offer much as an example in that regard referenced above. but, as someone who was once involved with very problematic activism as far as some govs were concerned, compatriots of mine who didn’t use whonix, but used tor, got busted due to very trivial mistakes. i’m still free. that is a huge selling point for me. whonix was the main difference, and i’m not implying that i engaged in anything criminal. whonix kept me free of harassment that could have affected my immediate freedom, right to travel, or employment opportunities. whonix alone wasn’t the answer there. but it was an incredibly significant part, which freed me of relying on a number of custom scripts and steps to anonymize a debian host, which i’d developed for my own use over years of experience, and could still screw up. and, for that, i will forever be thankful. if the majority of clients i have now knew of my involvement with “anonymous,” i would not have a job, despite being no threat to them. that is part of the reason that i started publicly sharing an originally private document through anonymous on how to set up a basic system using debian as a host with whonix as virtual machines. and it’s why i publicly updated it for years.
in the end, i think we all need to keep focused on the notion that whonix is both a secure and private os for people who want anonymity. that is the end goal, correct? the debates on the flaws of other operating systems are less relevant there, since the enhancements that whonix team actively works on is better for people who want anonymity in comparison to the others. let’s keep the focus there. we don’t need to bother with the “linux vs” arguments, since this is “whonix vs” for those who want an anonymity geared operating system.
I disagree, and then you are going to say “I don’t have to refute them”. I.e. no agreement will be reached. But it’s not necessarily you that has to refute them anyhow. GNU/FSF are popular. Meaning:
If GNU/FSF made libelous claims, it is likely that they would be on the receiving end of a defamation lawsuit. To my knowledge, this has not happened yet.
The internet is big. Others would have made a rebuttal. If you can find a good one, that might be a good alternative as a rebuttal.
Any write-up is imperfect, and the GNU one was comprehensive.
Agreed. Who builds the security, and for what purpose? For the benefit of the user, or for maximizing the vendor’s profit at the expense of privacy and security?
It’s beside the point. Please don’t cling to a single phrase, “Level Security”, and then view everything through that lens. That chapter has to be viewed in a bigger context.
The headline iPhone and Android Level Security for Linux Desktop Distributions is also bad for other, more pragmatic reasons. Through conversations I’ve learned that many people know how bad many phones/mobile apps are for privacy in their default configuration; they equate this with security, and then intuitively discard the idea that iPhone / Android have any worthwhile security features worth porting to Linux desktop. I.e. even if iPhone and Android Level Security for Linux Desktop Distributions was fully possible in theory, and even if madaidan would agree, it would still be bad self-representation of the project. Will change the chapter title to Kicksecure Development Goals.
Big companies like Google or Apple don’t care about them.
I’m not clinging to that. I don’t really have much of an issue with the title.
Just look at the comparison table. It’s wrong to pretend that the full system MAC policy in Android and Kicksecure are similar. SELinux is ingrained into Android’s architecture and the entire ecosystem was shaped around it. Additionally, SELinux allows for far more restrictive policies (e.g. ioctl filtering or even just stricter permissions for files) than apparmor.
We’re slapping an apparmor policy on top of an OS that it wasn’t intended for. While this is good and we can make some great progress with it, it’ll never be as good as a strict policy on top of an OS that was designed for it.
Another example is the hardened kernel row. Our hardened-kernel is nice but it’s not the same as Android. Android kernels contain a lot of hardening patches including fine-grained forward-edge Clang Control-Flow Integrity and ShadowCallStack to prevent code reuse attacks (CFI/SCS is only on Pixels >=3 though). CFI isn’t in mainline or linux-hardened and won’t be for a long time. ShadowCallStack isn’t even possible on x86 due to the way it handles returns.
Although, I’m looking more into Android/Qualcomm’s hardening patches and might submit some to linux-hardened (I’ve been talking to Daniel Micay about this on Matrix).
The comparison table is also neglecting to mention all the advantages of Android over Kicksecure. One example is that Android has the majority of the system written in memory safe languages (Java). Another example is that Android/iPhone has modern user space exploit mitigations like CFI/PAC.
This subject is too complex to be a simple Yes/No comparison table which is why I removed it and expanded a bit below it. What I meant by “Security is not just a checklist of features” is that the implementation matters. Not the general topic. Sure, you can have a “sandbox” but that doesn’t mean it’ll actually restrict anything meaningful for example.
I don’t think it should mention mitigations specifically since it’s not just mitigations vendors introduce. They add tons of bloatware that contain their own security vulnerabilities. I’ve found Samsung to be particularly egregious in this regard although sane vendors like Google are usually fine.
I’m not. The comparison table just doesn’t make sense.
I missed those, misunderstood, disagreed etc. But anyhow. It’s too much of a detail for me to spend time on it. As said…
…unless there’s a better, similar write-up, the current links are good enough and I won’t debate them further.
It’s still missing the purpose of that comparison table / chapter. It’s not a comparison table of security from exploitation by third parties for Android AOSP vs Linux desktop/server distributions.
That’s good to know and valuable knowledge, but again not a comparison table of security from exploitation by third parties for Android AOSP vs Linux desktop/server distributions.
It would be a net benefit for the knowledge of the world if this information was documented somewhere. But not on whonix.org. It is too time consuming and too far off-topic from the goals of the Whonix project to get deeply involved in creating a perfect comparison table or write-up on that subject. Wikipedia might be interested to host this information, or any other more general knowledge wiki / comparison site. I would at a minimum certainly be a reader. Probably also add a link to it from the Whonix wiki. Having this information well laid out could help to get these issues fixed. Without awareness of the issue it’s even less likely to get fixed.
To make this less of a daunting task… That ends up in the backlog not being worked on… Since this is one of the most controversial technical discussions here ever…
I suggest this needs to be split into small chunks. Because if it’s too many points at the same time, it quickly gets messy, overwhelming.
Please bring up one small point. (Or I will soon bring up one small point and ask for clarification.) Then stick to that point until that’s resolved. And meanwhile that point is being discussed, don’t bring up other stuff. One point such as “this and this is a Windows backdoor or not”. If it’s not possible in this forum thread, use a separate one and make the on-topic very clear. I’d then try to moderate as restrictive as possible and move any posts too broad back to this one.
Not sure when we start this modus operandi. In a separate forum topic, post any time.
Otherwise, you could also have patience with me for a week or so. It’s “just” 63 posts for now. I am going to re-read all. And then, I’ll be attempting to integrate your criticisms and answer them right on the same wiki page.
In other situations I also often very much understand the usefulness of sometimes making a “summary answer”. If too many people bring up too many things, not everything can get answered. One cannot discuss with everyone until consensus is found or until giving up due to fatigue. Similar for long articles / wiki pages where one feels that just too much is wrong to go into everything in detail. However, in this case, if improvements should be made, I very much suggest splitting them into small chunks and keeping the work on it going. It’s not that many bullet points in total.
It is effectively impossible to directly talk to developers for most people.
Well, twitter with a 140 character limit isn’t exactly known for being a productive discussion platform.
Any examples of any productive discussions that resulted in enhancements and/or bug fixes?
The main point is:
There is no public issue tracker for Microsoft Windows. In comparison, for Open Source projects, issue trackers are most often public for everyone (with the exception of security issues under embargo until fixed).
A wide variety of malware types exist, including computer viruses, worms, Trojan horses, ransomware, spyware, adware, rogue software, wiper and scareware.
If that definition is accepted, and if one agrees that “Windows is Spyware”, it then logically follows that “Windows is also Malware”. This is to explain the GNU Project’s opinion of calling Windows “Malware”.
Alright. I am dropping the “talk to developers” directly point.
My main point:
There is no public issue tracker for Microsoft Windows where any reasonable user is allowed to post or reply. There is a public list of vulnerabilities[archive] but without public discussion among developers and/or users. In comparison, for Open Source projects, issue trackers are most often public for everyone to post and reply (with the exception of security issues under embargo until fixed).
There is https://answers.microsoft.com but I’ve never seen developers asking users for debug information (maybe rarely needed due to telemetry?) or saying which bug gets fixed by which update, any workaround, bug confirmed/closed/wontfix etc.
I’ve looked through a few random threads but cannot see any Microsoft employees either.
All seems user-to-user.
This is much different from let’s say Debian or Qubes where almost every ticket at some point gets tagged/reply from some developer.
Microsoft internally certainly must have some issue tracker, but it’s not public. That’s the difference I would like to work out. Safe to say, Open Source development is generally “more open”. Detailed Windows development discussions seem a lot more private.
Microsoft deals with an enormous user base, compared to most open source projects. The developers don’t have the time to provide support like that. Especially not for trivial issues like most threads there.
That’s how closed source software works in general. The community can’t participate as much in development.
Also, I’ve noticed that you have continued to add misleading parts to the page.
By comparison, other operating systems, even the Whonix and Kicksecure source code, contain the string snippet nsa. For example, in package security-misc, the file /usr/lib/security-misc/pam_tally2-info contains the string xscreensaver has its own failed login counter. The word xscreensaver contains the letters nsa; that, however, is an absurd comparison. Things have to be compared in proper context. In the Whonix and Kicksecure source code there is no variable, function or symbol name with any meaning containing “nsa”. Words such as unsave have nothing to do with it. This can be confirmed by auditing the related parts of the source code.
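To illustrate why raw string matches prove nothing, a minimal sketch (the word list is illustrative; it reuses the examples above):

```python
# Naive substring matching flags harmless words: the letters "nsa" occur
# inside ordinary words with no relation whatsoever to the National
# Security Agency. Context and meaning matter, not raw grep hits.
words = ["xscreensaver", "unsave", "pam_tally2-info"]
hits = [w for w in words if "nsa" in w]
print(hits)  # ['xscreensaver', 'unsave']
```

Both “xscree**nsa**ver” and “u**nsa**ve” match, while the actual security-misc file name does not, which is exactly why such matches need to be audited in context rather than counted.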
“These are quite clearly commands to enable the NSA Linux kernel backdoor to steal user passwords.”
There is no evidence to say that _NSAKEY even stood for the “National Security Agency”. There is no expansion of the acronym or a space between “NSA” and “KEY”. You’d have a stronger argument with the examples I listed above because there are spaces between them and the National Security Agency actually do have a history of contributing to the Linux kernel — you don’t do this though because it is absurd, just like with _NSAKEY.
(Need to use code tags as the forum eats <rev> tags.)
There is no public issue tracker for Microsoft Windows where any reasonable user is allowed to post or reply. There is a public [https://msrc.microsoft.com/update-guide/vulnerability list of vulnerabilities] but without public discussion among developers and/or users. <ref>
https://answers.microsoft.com is mostly(?) user-to-user discussion. Mostly: hard to find any employees posting there or very low interaction. [https://answers.microsoft.com/en-us/page/faq#faqWhosWho1 A volunteer moderator isn't a developer.]
There is also https://techcommunity.microsoft.com.
</ref> Microsoft's internal issue tracker is private, unavailable for the public even for reading. <ref>
Link as evidence pointing to the fact that Microsoft does have an internal issue tracker: https://www.engadget.com/2017-10-17-microsoft-bug-database-hacked-in-2013.html
</ref> The ability of the public to get insight into Microsoft's planning and thought processes, and to participate in the development of Windows, is much more limited. This is the case for many closed source, proprietary software projects. The community cannot participate as much in development. In comparison, for Open Source projects, issue trackers are most often public for everyone to post and reply (with the exception of security issues under embargo until fixed).
I explained the “nsa search results”… Because…
I guess that was sarcastic but I found it informative to work out the differences.
Moved the Whonix source code comparison part to a footnote and answered the Linux comparison instead.
Added that link. Only fair to allow the accused to explain their side of the story.
Microsoft said the key is labeled “NSA key” because NSA is the technical review authority for U.S. export controls, and the key ensures compliance with U.S. export laws.
Then where in the U.S. export laws does it say that there need to be two keys, or a key labeled “NSA key”, or where is some other phrase in the law which explains that?
I disagree with Bruce Schneier from 1999 too. But I don’t think it’s realistic to contact him for discussion on that one. Too bad that’s not one of his articles with comments enabled (was a newsletter, perhaps before the blog that supports comments was introduced, dunno).
Third, why in the world would anyone call a secret NSA key “NSAKEY”?
You tell me.
Lots of people have access to source code within Microsoft;
Why assume it’s in the source code that the relevant developers work on (or nowadays in the version shared through the shared source program)? It was only found in debugging symbols that Microsoft forgot to remove.
The source code most developers work with could be clean. A backdoor might only be introduced during compilation, which is most likely done on a different machine, a build machine.
Access to Microsoft source code is most likely not an all-or-nothing situation. A developer working on, let’s say, Skype or Edge does not necessarily always have access to all source code of other components, let’s say kernel or crypto. Compartmentalization is clever to avoid leaks.
Therefore, even if Microsoft had at some point 47,000 developers or so (I don’t know how many there were in 1999), that doesn’t mean all of them would have access to that part of the source code.
Anyone with a debugger could have found this “NSAKEY.”
The public: an independent security researcher was only able to find it because Microsoft forgot to remove debugging symbols. This mistake has probably been fixed by Microsoft by now.
A Microsoft developer:
Most Microsoft developers would be provided with, and work with, the clean source code, without any backdoors. If they created a build including debug symbols, “NSAKEY” would not have been included.
Even if a Microsoft developer found “NSAKEY” and asked the management about it, the management could just say “that’s alright” or give some other explanation. The developer might not be suspicious. Even if suspicious, it is unreasonable to assume every developer would become a whistleblower, risking their current employment, income, legal action and future employment opportunities. Anonymous whistleblowing wouldn’t be worth it without evidence / source code. Then it would be a minor note and disregarded as FUD. And if it leaked including source code / further evidence, then the number of suspects who could have leaked it would be tiny.
“NSAKEY.” could have been inserted by a much smaller group of developers at a later stage (before building / on the build machine).
If this is a covert mechanism, it’s not very covert.
Don’t assume the perfect crime. Humans make mistakes.
In 1997, Lotus negotiated an agreement with the NSA that allowed export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a “workload reduction factor” for the NSA.
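The quoted “workload reduction factor” is simple arithmetic; a sketch of the numbers, using only the bit counts from the quote above:

```python
# Lotus Notes export deal: nominally 64-bit keys, but 24 of the key bits
# were escrowed to the NSA inside the message, leaving an effective
# 40-bit brute-force search for the NSA alone.
key_bits = 64
escrowed_bits = 24
effective_bits = key_bits - escrowed_bits

public_workload = 2 ** key_bits        # keyspace everyone else must search
nsa_workload = 2 ** effective_bits     # keyspace the NSA must search
reduction_factor = public_workload // nsa_workload

print(effective_bits)     # 40
print(reduction_factor)   # 16777216, i.e. 2**24 times less work for the NSA
```

So the mechanism left the key nominally strong against third parties while reducing the NSA’s workload by a factor of 2^24.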
But even then Microsoft isn’t forthcoming about it.
To strengthen the argument made on that page and to not distract from more important and stronger points, I’ll remove that part now.