To make this less of a daunting task… That ends up in the backlog not being worked on… Since this is one of the most controversial technical discussions here ever…
I suggest splitting this into small chunks, because if too many points are raised at the same time, it quickly gets messy and overwhelming.
Please bring up one small point. (Or I will soon bring up one small point and ask for clarification.) Then stick to that point until it is resolved, and while that point is being discussed, don’t bring up other stuff. One point such as “this and this is a Windows backdoor or not”. If that’s not possible in this forum thread, use a separate one and make the on-topic scope very clear. I’d then try to moderate as restrictively as possible and move any posts that are too broad back to this one.
Not sure when we should start this mode of operation. In a separate forum topic, post any time.
Otherwise, you could also have patience with me for a week or so. It’s “just” 63 posts for now. I am going to re-read them all. Then I’ll attempt to integrate your criticisms and answer them directly on the same wiki page.
In other situations I also very much understand the usefulness of sometimes making a “summary answer”. If too many people bring up too many things, not everything can get answered; one cannot discuss with everyone until consensus is found or someone gives up due to fatigue. The same goes for long articles / wiki pages where one feels that just too much is wrong to go into everything in detail. However, in this case, if improvements are to be made, I very much suggest splitting the work into small chunks and continuing to work on it. It’s not that many bullet points in total.
It is effectively impossible to directly talk to developers for most people.
Well, twitter with a 140 character limit isn’t exactly known for being a productive discussion platform.
Any examples of any productive discussions that resulted in enhancements and/or bug fixes?
The main point is:
There is no public issue tracker for Microsoft Windows. By comparison, for Open Source projects, issue trackers are most often public for everyone (with the exception of security issues under embargo until fixed).
A wide variety of malware types exist, including computer viruses, worms, Trojan horses, ransomware, spyware, adware, rogue software, wiper and scareware.
If that definition is accepted, then if one agrees that “Windows is Spyware”, it logically follows that “Windows is also Malware”. This explains the GNU Project’s opinion of calling Windows “Malware”.
Alright. I am dropping the “talk to developers” directly point.
My main point:
There is no public issue tracker for Microsoft Windows where any reasonable user is allowed to post or reply. There is a public list of vulnerabilities[archive], but without public discussion among developers and/or users. By comparison, for Open Source projects, issue trackers are most often public for everyone to post and reply (with the exception of security issues under embargo until fixed).
There is https://answers.microsoft.com, but I’ve never seen developers asking users for debug information (maybe rarely needed due to telemetry?) or announcing which bug gets fixed with which update, any workarounds, bug confirmed/closed/wontfix status, etc.
I’ve looked through a few random threads but cannot see any Microsoft employees either.
All seems user-to-user.
This is much different from let’s say Debian or Qubes where almost every ticket at some point gets tagged/reply from some developer.
Microsoft internally certainly must have some issue tracker, but it’s not public. That’s the difference I would like to work out. Safe to say, Open Source development is generally “more open”; detailed discussions about Windows development seem a lot more private.
Microsoft deals with an enormous user base, compared to most open source projects. The developers don’t have the time to provide support like that. Especially not for trivial issues like most threads there.
That’s how closed source software works in general. The community can’t participate as much in development.
Also, I’ve noticed that you have continued to add misleading parts to the page.
By comparison, other operating systems’ source code, even that of Whonix and Kicksecure, contains the string snippet nsa. For example, in package security-misc, file /usr/lib/security-misc/pam_tally2-info contains the string “xscreensaver has its own failed login counter”. The word xscreensaver contains the substring nsa. That, however, is an absurd comparison. Things have to be compared in proper context. In the Whonix and Kicksecure source code there is no variable, function or symbol name with any meaning containing “nsa”. Words such as unsave have nothing to do with it. This can be confirmed by auditing the related parts of the source code.
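To illustrate how naive substring matching produces such false positives, here is a small sketch (the word list is made up for illustration):

```shell
# Naive substring search: every one of these harmless words contains the
# letter sequence "nsa", so a bare grep flags all of them.
printf '%s\n' unsave xscreensaver insane Kansas | grep -i nsa
```

All four words match even though none has anything to do with the NSA, which is why search hits for a three-letter string prove nothing without context.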
“These are quite clearly commands to enable the NSA Linux kernel backdoor to steal user passwords.”
There is no evidence to say that _NSAKEY even stood for the “National Security Agency”. There is no expansion of the acronym or a space between “NSA” and “KEY”. You’d have a stronger argument with the examples I listed above, because there are spaces in them and the National Security Agency actually does have a history of contributing to the Linux kernel. You don’t do this though, because it is absurd, just like with _NSAKEY.
(Need to use code tags as the forum eats <rev> tags.)
There is no public issue tracker for Microsoft Windows where any reasonable user is allowed to post or reply. There is a public [https://msrc.microsoft.com/update-guide/vulnerability list of vulnerabilities] but without public discussion among developers and/or users. <ref>
https://answers.microsoft.com is mostly(?) user-to-user discussion. Mostly: hard to find any employees posting there or very low interaction. [https://answers.microsoft.com/en-us/page/faq#faqWhosWho1 A volunteer moderator isn't a developer.]
There is also https://techcommunity.microsoft.com.
</ref> Microsoft's internal issue tracker is private, unavailable for the public even for reading. <ref>
Link as evidence pointing to the fact that Microsoft does have an internal issue tracker: https://www.engadget.com/2017-10-17-microsoft-bug-database-hacked-in-2013.html
</ref> The ability of the public to gain insight into Microsoft's planning and thought processes, and to participate in the development of Windows, is much more limited. This is the case for many closed source, proprietary software projects. The community cannot participate as much in development. By comparison, for Open Source projects, issue trackers are most often public for everyone to post and reply (with the exception of security issues under embargo until fixed).
I explained the "nsa search results"… Because…
I guess that was sarcastic but I found it informative to work out the differences.
Moved the Whonix source code comparison part to a footnote and answered the Linux comparison instead.
Added that link. Only fair to allow the accused to explain their side of the story.
Microsoft said the key is labeled “NSA key” because NSA is the technical review authority for U.S. export controls, and the key ensures compliance with U.S. export laws.
Then where in the U.S. export laws does it say that there need to be two keys, or a key labeled “NSA key”, or what other phrase in the law explains that?
I disagree with Bruce Schneier from 1999 too. But I don’t think it’s realistic to contact him for discussion on that one. Too bad that’s not one of his articles with comments enabled (it was a newsletter, perhaps from before the blog that supports comments was introduced, dunno).
Third, why in the world would anyone call a secret NSA key “NSAKEY”?
You tell me.
Lots of people have access to source code within Microsoft;
Why assume it’s in the source code that relevant developers work on (or, nowadays, in the version shared through the shared source program)? It was only found in debugging symbols that were forgotten to be removed.
The source code most developers work with could be clean. A backdoor might only be introduced during compilation, which is most likely done on a different machine, a build machine.
Access to Microsoft source code is most likely not an all-or-nothing situation. A developer working on, let’s say, Skype or Edge doesn’t necessarily have access at all times to the source code of other components, let’s say the kernel or crypto. Compartmentalization is clever to avoid leaks.
Therefore, even if Microsoft had 47,000 developers or so at some point (dunno how many there were in 1999), that doesn’t mean all of them would have access to that part of the source code.
Anyone with a debugger could have found this “NSAKEY.”
The public: an independent security researcher was only able to find it because Microsoft forgot to remove debugging symbols. This mistake has probably since been fixed by Microsoft.
A Microsoft developer:
Most Microsoft developers would be provided with, and work with, the clean source code, without any backdoors. If they created a build including debug symbols, “NSAKEY” would not have been included.
Even if a Microsoft developer found “NSAKEY” and asked management about it, management could just say “that’s alright” or give some other explanation. The developer might not be suspicious. Even if suspicious, it is unreasonable to assume every developer would become a whistleblower and risk their current employment, income, legal action and further employment opportunities. Anonymous whistleblowing wouldn’t be worth it without evidence / source code; it would then be a minor note and disregarded as FUD. And if it were leaked including source code / further evidence, then the number of suspects who could have leaked it would be tiny.
“NSAKEY” could have been inserted by a much smaller group of developers at a later stage (before building / on the build machine).
If this is a covert mechanism, it’s not very covert.
Don’t assume the perfect crime. Humans make mistakes.
In 1997, Lotus negotiated an agreement with the NSA that allowed export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a “workload reduction factor” for the NSA.
But even then Microsoft isn’t forthcoming about it.
To strengthen the argument made on that page and to not distract from more important and stronger points, I’ll remove that part now.
I assume the key is used to indicate that the NSA has reviewed the cryptography and determined that it is fully compliant with the law. The key itself is not a legal requirement — it’s used to indicate that the actual requirements have been met. The US used to have really tight restrictions on cryptography (they have since been relaxed).
This doesn’t seem very far-fetched.
Yes, anyone. Anyone had the ability to reverse engineer Windows binaries and still does today. Windows is reverse engineered often, for various reasons.
I don’t know what you mean by “debugging symbols”. This isn’t anything like that.
It’s not simply a small mistake. It would have to be an enormous failure for something as seemingly obvious as this. That just wouldn’t happen.
You mean you want me to bring up some points?
First off, you have far too many subsections and not everything is relevant to security or privacy such as the nuisances part. There are also many parts that are duplicated and it’s just a huge wall of misleading text.
I have gone over this before. They are not providing adversaries with zero days — they are giving a variety of organisations early access to embargoed security patches so they can ship them out immediately after public disclosure. Linux does the exact same thing and this isn’t a major issue: https://www.kernel.org/doc/html/latest/admin-guide/security-bugs.html
The crucial difference between Microsoft bug embargoes and Linux bug embargoes is that Microsoft notifies intelligence agencies, which are then known to exploit vulnerabilities, while the Linux kernel security team has a much more transparent bug embargo process: trusted parties, i.e. major Linux distributions, receive an early notification so that the software upgrade containing the fix is widely available at disclosure time, preventing wide exploitation by attackers in the wild.
No, it’s just naive to assume intelligence agencies aren’t getting these notifications. The NSA doesn’t even need these since Linux is so trivial to exploit anyway which brings me on to the next point…
(Non-technical comment, so take with a grain of salt)
I agree that page is super messy. A logical approach would be:
- split into separate pages: Windows vs. macOS vs. other
- clearly demarcate privacy and security issues on those child pages (yes, they’re related, I know, but it’s too messy right now)
- add the main missing element: Xen and similar systems with Type I hypervisors and virtualized, separated domains for various elements (networking, firewalls, USB, GUIVM, ‘dangerous applications’, etc.)
I’m pretty sure security professionals have been saying in recent times that all monolithic OSes are clusterf**ks for security. Too much code, massive kernels with tens of millions of lines of code, etc. means they can never be properly secured (until maybe we have quantum computer fuzzing operations or similar doing tests in massive parallel, i.e. something with 300 qubits or so).
So, we must assume in advance all systems will be pwned by any competent adversary. That suggests fine-grained separation is the only solution, preferably with those VM instances running in minimal templates, all in a disposable fashion i.e. Qubes architecture.
I think it’s hard to argue that Windows 10, even with 10s of thousands of developers and billions of dollars of investment/man hours, is more secure than say Xen hypervisor with disposable netVM, disposable firewall-VM, disposable USB-VM, GUIVM, all applications run in minimised disposableVMs for single use purposes etc.
Maybe those Qubes VMs can be pwned at a rate of 2-3 times (say) a Windows instance, but who gives a shit if the minute I shut down the disposable VM the miscreant’s presence is killed? If they are performing VM breakouts and infecting dom0, then you’ve bigger things to worry about, because that is apparently not trivial.
I also wonder what OS the NSA and others run BTW? I searched for that in the past and couldn’t see any clear answers. Pretty sure they’re not running Windows 10.
Even if Windows 10 is far more secure in its architecture than your best Linux OS today, I don’t think our community cares i.e. they want privacy and not 100s of open channels to the Microsoft mothership that can never be turned off despite all best efforts.
An honest appraisal at the end of the day might say - yes, Windows 10 is far more secure than your stock standard Linux OS as madaidan points out, but a privacy disaster. On the other hand, none of the monolithic OSes come near Type I hypervisor arrangements. So, I’d suggest that Qubes-Whonix is then the best compromise under the circumstances. Reasonably secure, privacy-focused, and can limit the long term impact of breaches by malicious turds when properly configured with a bit of effort.
They’re using Hyper-V as a backend which is far stronger than Xen, a hypervisor that is known for its lack of basic self-protection features and didn’t even have things like ASLR or NX support for a long time.
You can’t boil this down into such a simple conclusion. It’s a very complicated thing, especially when talking about Qubes which is drastically different from traditional operating systems. The security of Qubes also heavily depends on how it’s being used. Virtualization won’t save you if you put all of your personal data into a single VM that gets persistently compromised. Threat models and user behaviour are critical in recommending a secure OS to someone.
This isn’t to say that Windows is certainly more secure than Qubes, just that it’s difficult to quantify security in such a way.
You can disable most privacy-invasive telemetry in the settings if you care to and in the Enterprise edition, all of it can be disabled. Windows can be a privacy disaster but only if you let it.
Debug symbols are usually not included in production builds. However, Microsoft forgot to remove them. That’s why the textual string “nsakey” was found inside Windows: unstripped debug symbols in a production build.
Quote from the original which started the speculation:
Note 1: many people have written us and assumed that we “reverse engineered” Microsoft’s code. This is not true; we did not reverse engineer Microsoft code at any time. In fact, the debugging symbols were found using standard Microsoft-purchased programmer’s tools, completely by accident, when debugging one of our own programs.
If Microsoft hadn’t forgotten to strip debug symbols in the production build, then the textual string “nsakey” would be nowhere to be found.
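A minimal illustration of that mechanism (hypothetical symbol name, nothing to do with Microsoft’s actual code): a symbol’s name lives in the binary’s symbol/debug tables, not in its runtime data, so it is visible in an unstripped build and gone after stripping.

```shell
# The variable's *name* exists only in the symbol table, never as runtime
# data; stripping removes it. An unstripped production build leaks such names.
cat > demo.c <<'EOF'
int NSAKEY_demo = 42;            /* hypothetical symbol name for illustration */
int main(void) { return 0; }
EOF
cc -g -o demo demo.c
nm demo | grep NSAKEY_demo                      # visible before stripping
strip demo
nm demo 2>/dev/null | grep NSAKEY_demo || echo "symbol gone after stripping"
```

This is why a researcher debugging an unrelated program could stumble on “NSAKEY” by accident, exactly as the quoted note describes.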
Yes, but I guess we have enough points in this thread. Could easily get overwhelming.
That’s a stylistic issue rather than a factual claim issue?
It doesn’t have to be. That wiki page doesn’t say it’s limited to security and privacy only. It’s a Freedom Software Linux distribution advocating for the use of Freedom Software whenever possible, summarizing reasonably sufficient arguments as neutrally, concisely and factually as reasonably possible.
If you point out any duplication, I’ll try to reduce it. However, some things are duplicated because without re-mentioning them, the conclusion chapters couldn’t be reasonably argued. Also, if someone jumps straight to the conclusion, many of the claims may seem unlikely or too big; therefore internal links are added to the parts of the page where these points are made in detail, with links to sources.
That kind of duplication isn’t a big deal. Since that claim seems so strong, it’s good to point at various sources to prove that this interpretation isn’t just an outlier.
Each one has to be interpreted by itself. I don’t interpret them the way you do. For example, they don’t say “Linux has security issues. Use Windows 10 instead.”
Needs to be more specific.
These are irrelevant since Windows fails at the finishing line. Already addressed with this part:
That’s simply not true. There is no technical limitation on this. Everyone can objdump -D /path/to/binary
I don’t think you get how this works.
How exactly do you think vulnerabilities are discovered / malware is written for Windows? People reverse engineer the code to uncover security vulnerabilities or test the strength of security features. A few examples:
Windows is not a blackbox. It’s picked apart by external security researchers all the time. It’s reverse engineered for non-security purposes also. An example of such is ReactOS:
I can continue to give more examples if you want. It doesn’t seem like you’ve looked into this at all.
Not by huge companies like Microsoft or government agencies like the NSA. The backdoor in Dual_EC_DRBG was only confirmed by the Snowden leaks.
No, since it results in gish galloping:
Then please put them all into 1 separate section and make it clear they’re not relevant to security or privacy.
Some of them do and quite explicitly.
That part is also nonsensical. Implementing modern exploit mitigations and breaking common bug classes / exploit techniques is not tyrannical…
Anti-features of Windows such as telemetry cannot be excused by “but it can be disabled”. That’s a workaround at best, not a fix. The fact remains: for most users, if it’s enabled by default, it’ll tend to stay on.
Except it’s not actually enabled by default. Users are asked upon startup if they wish for it to be enabled.
The methodologies used to verify if telemetry is disabled in the other sources you link are dubious. Simply measuring the amount of connections a piece of software makes proves nothing. There is no proof those connections are transmitting anything privacy-sensitive. whonixcheck and sdwdate both connect to various servers but does that make it telemetry? Of course not. If you really want to analyse the telemetry, install a root TLS certificate and examine exactly what is being sent.
Microsoft has encrypted the network traffic and has implemented certificate pinning as a regular security measure against unauthorised access. However, the specific way in which Microsoft has implemented the certificate pinning also prevents a trusted network proxy from inspecting the data, i.e., the use of a man-in-the-middle proxy. That is why BSI used the debugger tool. The use of this method results in a view of the telemetry data that is similar to the Diagnostic Data Viewer provided by […]
Also a related, interesting consideration: some malware behaves differently if run inside a VM or if it detects a debugger. I am not saying Windows is currently doing that, but it could in theory. Therefore the concepts of Open Source, Reproducible Builds and in-toto transport for apt are the future, the way to go.
But shouldn’t such a traffic analysis already exist, since…?
Then such an analysis would age quite rapidly too… Also quoted from the same PDF as above:
BSI writes that Microsoft is retrieving and updating the configuration of the telemetry data several times per hour.
Third, the collection of telemetry data is highly dynamic. Microsoft engineers can add new types of events to the telemetry stream without prior notice to the users, if they follow internal privacy procedures. According to the BSI research quoted in paragraph 2.2, the configuration of the telemetry data flow is modified several times per hour. Each modification can mean that a new ETW provider wants access to data, or an existing ETW provider wants access to other log data.
Microsoft engineers explain the importance of the dynamic nature of creating telemetry events as follows: “In data-driven environments, such instrumentation is moving towards rule-based approaches, where instrumentation can be added once and then toggled without having to change the code itself. This functionality has enabled data-driven organizations to collect data not just during testing, but long after the product is deployed into retail. As Austindev remarks, ‘What’s really great for us is to be able to real-time turn on and off what log stuff you’re collecting at a pretty granular level. And to be able to get the performance back when it is turned off is a big thing for us.’”
Microsoft has explained to the Dutch DPA in 2017 that the collection of telemetry data in Windows 10 is controlled by organisational policy rules. There is no reason to assume that such policy rules would not apply to the collection of telemetry data from Windows 10 Enterprise. However, Microsoft does not provide any information about this policy, nor any audit results with regard to compliance with those policy rules. The limitations to these audits are described in section 5 of this report, ‘Controller, processor and sub-processors’.
This is just semantics. Reverse engineering allows you to inspect every single CPU instruction. The way the code is displayed, whether it’s high-level C source code or assembly, doesn’t matter at the end of the day.
Nobody at all has the complete picture. A huge operating system like Windows cannot be fully understood by a singular person. The same goes for Linux; that’s why there are dedicated maintainers for each subsystem.
That seems to be what you’re implying by saying it can’t be reverse engineered / analysed.
The points should be clear, concise and with valid up-to-date sources based on fact, not misconceptions.
The nuisances were just one example. Many parts all over aren’t relevant, like:
Software Choice and Deletion
Windows User Freedom Restrictions
You also have things like:
Windows Historic Insecurity
Neither of these make sense (want me to bring up Linux security flaws from 2009 too?) but they shouldn’t be separate sections anyway.
Windows Software Sources
No Ecosystem Diversity Advantage
These also shouldn’t be separate sections but rather everything should be under one security section (those sections don’t make sense either though).
Opinion by GNU Project
Opinion by Free Software Foundation
2 “opinions” (i.e. nonsensical slander) from essentially the same people shouldn’t have 2 separate sections.
Other than ChromeOS (including Android apps), the closest thing to what you want from a security perspective would be Windows 10 S, not any Linux distribution.
My real recommendations would be ChromeOS (+ Android apps) and Windows 10 S for most people.
Traditional Linux distributions aren’t very secure overall, and the Linux desktop stack is a disaster without any semblance of an application security model. There’s ongoing work to address these problems but it’s barely started and it will be a long and painful process. Windows and OS X have similar issues to address, but Microsoft is the furthest along in tackling the issues and takes it the most seriously. Non-mobile security as a whole is awful.
Windows has good exploit mitigations. They’ve been the leaders in this area among mainstream operating systems. Windows security is still a disaster on the desktop due to the legacy security model without Windows 10 S and that’s a painful sacrifice to make. Windows 10 S also isn’t really on par with Android and iOS yet but it’s most of the way there.
They’re incorrect/outdated and that’s clear if you have ever installed Windows in the past few years.
If they do, that’s not on Microsoft. Windows makes these options pretty explicit and they’re clearly visible during setup.
They don’t admit that telemetry cannot be disabled, unlike some of the sources on the wiki page.
Debuggers don’t execute any code. They read from the binary itself. That’s not possible.
The purpose of anti-debugging is to hinder the process of reverse engineering. There are several general approaches: (1) detect the existence of a debugger and behave differently when a debugger is attached to the current process; (2) disrupt or crash the debugger. Approach (1) is the most frequently applied (see an excellent survey in ). Approach (2) is rare (it targets and attacks a debugger; we will see several examples in Max++ later). Today, we concentrate on Approach (1).
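On Windows the canonical form of approach (1) is calling IsDebuggerPresent(); the Linux-side equivalent can be sketched in shell by reading TracerPid from /proc (a generic sketch, not taken from the quoted survey):

```shell
# TracerPid in /proc/<pid>/status is 0 when no ptrace-based tracer (gdb,
# strace, ...) is attached to the process; anti-debugging code branches on it.
tracer=$(awk '/^TracerPid:/ {print $2}' "/proc/$$/status")
if [ "$tracer" -eq 0 ]; then
    echo "no debugger attached"
else
    echo "debugger detected (tracer pid $tracer)"
fi
```

Run normally it reports no debugger; run under strace or gdb, the tracer’s PID shows up, and malware using such a check can then hide its real behaviour.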