Host Operating System Selection Wiki Page Discussion

To keep this from becoming a daunting task that ends up in the backlog not being worked on… Since this is one of the most controversial technical discussions here ever…

I suggest this needs to be split into small chunks. If there are too many points at the same time, it quickly gets messy and overwhelming.

Please bring up one small point. (Or I will soon bring up one small point and ask for clarification.) Then stick to that point until it is resolved, and while that point is being discussed, don’t bring up other stuff. One point such as “this and this is a Windows backdoor or not”. If that’s not possible in this forum thread, use a separate one and make the on-topic scope very clear. I’d then try to moderate as restrictively as possible and move any posts that are too broad back to this one.

Not sure when we should start this modus operandi. In a separate forum topic: post any time.

Otherwise, you could also have patience with me for a week or so. It’s “just” 63 posts for now. I am going to re-read them all. Then I’ll attempt to integrate your criticisms and answer them right on the same wiki page.

In other situations I also very much understand the usefulness of a “summary answer”. If too many people bring up too many things, not everything can get answered. One cannot discuss with everyone until consensus is found or someone gives up due to fatigue. The same goes for long articles / wiki pages where one feels that just too much is wrong to go into everything in detail. However, in this case, if improvements should be made, I very much suggest splitting the work into small chunks and keeping at it continuously. It’s not that many bullet points in total.


It is effectively impossible to directly talk to developers for most people.

Well, twitter with a 140 character limit isn’t exactly known for being a productive discussion platform.

Any examples of any productive discussions that resulted in enhancements and/or bug fixes?

The main point is:

There is no public issue tracker for Microsoft Windows. By comparison, for Open Source projects, issue trackers are most often public for everyone (with the exception of security issues under embargo until fixed).

I guess I don’t need to show examples for that.

How’s that done for Windows?

Word definitions: Spyware is a type of malware.

Quote wikipedia malware [archive]:

A wide variety of malware types exist, including computer viruses, worms, Trojan horses, ransomware, spyware, adware, rogue software, wiper and scareware.

If that definition is accepted and one agrees that “Windows is Spyware”, it logically follows that “Windows is also Malware”. This explains the GNU Project’s opinion of calling Windows “Malware”.


Twitter is where nearly all of the security community is. For example, here are a few Microsoft security researchers I follow:


Same goes for other companies like Google, Apple, Amazon, Facebook, etc.

Here’s an example of one that is directly relevant to us and resulted in an improvement to kconfig-hardened-check:

It depends on the issue. Microsoft regularly assigns CVEs to security issues.


I meant spyware as derogatory term for “lots of privacy invasive telemetry”, not in a literal sense.


Alright. I am dropping the “talk to developers” directly point.

My main point:

There is no public issue tracker for Microsoft Windows where any reasonable user is allowed to post or reply. There is a public list of vulnerabilities [archive] but without public discussion among developers and/or users. By comparison, for Open Source projects, issue trackers are most often public for everyone to post and reply to (with the exception of security issues under embargo until fixed).

There is https://answers.microsoft.com but I’ve never seen developers asking users for debug information (maybe rarely needed due to telemetry?) or telling users which bug gets fixed by which update, any workarounds, whether a bug is confirmed/closed/wontfix, etc.


Here’s one I found randomly: https://answers.microsoft.com/en-us/microsoftedge/forum/all/edge-update-fails/3ff48699-6c20-4f58-b5d8-7ce4b9a25112

Please use the feedback option within the browser (Alt+Shift+i) to report the error when it happens, including diagnostic data so they can see what’s going wrong.

There’s also https://techcommunity.microsoft.com/

A volunteer moderator isn’t a developer.


I’ve looked through a few random threads but cannot see any Microsoft employees either.

All seems user-to-user.

This is much different from, let’s say, Debian or Qubes, where almost every ticket at some point gets a tag/reply from some developer.

Microsoft internally certainly must have some issue tracker, but it’s not public. That’s the difference I would like to work out. Safe to say, Open Source development is generally “more open”. Windows development detail discussions seem a lot more private.

…if you have any re-wording suggestions for that.


Microsoft deals with an enormous user base, compared to most open source projects. The developers don’t have the time to provide support like that. Especially not for trivial issues like most threads there.

That’s how closed source software works in general. The community can’t participate as much in development.

Also, I’ve noticed that you have continued to add misleading parts to the page.

By comparison, other operating systems, even Whonix and Kicksecure source code, contain the string snippet nsa. For example, in package security-misc the file /usr/lib/security-misc/pam_tally2-info contains the string xscreensaver has its own failed login counter. The word xscreensaver contains nsa; that however is an absurd comparison. Things have to be compared in proper context. In the Whonix and Kicksecure source code there is no variable, function or symbol name with any meaning containing “nsa”. Words such as unsave have nothing to do with it. This can be confirmed by auditing the related parts of the source code.
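The false-positive point can be demonstrated trivially: a naive substring search for the three characters nsa matches ordinary words by accident. (A small illustrative sketch; the word list is made up.)

```python
# Naive substring search for "nsa": ordinary identifiers contain
# the three characters purely by accident.
words = ["xscreensaver", "unsave", "insane", "pam_tally2"]
hits = [w for w in words if "nsa" in w]
print(hits)  # ['xscreensaver', 'unsave', 'insane']
```

This is exactly why raw string matches need to be interpreted in context, e.g. by auditing the surrounding code.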

Here’s a more fair comparison then: https://github.com/torvalds/linux/search?q=nsa

  • pass_NSA
  • nsa_mode

“These are quite clearly commands to enable the NSA Linux kernel backdoor to steal user passwords.”

There is no evidence to say that _NSAKEY even stood for the “National Security Agency”. There is no expansion of the acronym or a space between “NSA” and “KEY”. You’d have a stronger argument with the examples I listed above, because there are spaces in them and the National Security Agency actually does have a history of contributing to the Linux kernel. You don’t do this, though, because it is absurd, just like with _NSAKEY.

A much more likely explanation for the naming is: https://web.archive.org/web/20121024225124/http://www.microsoft.com/en-us/news/press/1999/sept99/rsapr.aspx

Microsoft said the key is labeled “NSA key” because NSA is the technical review authority for U.S. export controls, and the key ensures compliance with U.S. export laws.

Even Bruce Schneier doesn’t buy this nonsense: https://www.schneier.com/crypto-gram/archives/1999/0915.html#NSAKeyinMicrosoftCryptoAPI


Clarified the point on open development.

(Need to use code tags as the forum eats <rev> tags.)

There is no public issue tracker for Microsoft Windows where any reasonable user is allowed to post or reply. There is a public [https://msrc.microsoft.com/update-guide/vulnerability list of vulnerabilities] but without public discussion among developers and/or users. <ref>
https://answers.microsoft.com is mostly(?) user-to-user discussion. Mostly: hard to find any employees posting there or very low interaction. [https://answers.microsoft.com/en-us/page/faq#faqWhosWho1 A volunteer moderator isn't a developer.]

There is also https://techcommunity.microsoft.com.
</ref> Microsoft's internal issue tracker is private, unavailable for the public even for reading. <ref>
Link as evidence pointing to the fact that Microsoft does have an internal issue tracker: https://www.engadget.com/2017-10-17-microsoft-bug-database-hacked-in-2013.html
</ref> The ability of the public to get insights into Microsoft's planning and thought process, and to participate in the development of Windows, is much more limited. This is the case for many closed source, proprietary software projects. The community cannot participate as much in development. By comparison, for Open Source projects, issue trackers are most often public for everyone to post and reply to (with the exception of security issues under embargo until fixed).

I explained the "nsa search results"… Because…

I guess that was sarcastic but I found it informative to work out the differences.

Moved the Whonix source code comparison part to a footnote and answered the Linux comparison instead.

Added that link. Only fair to allow the accused to explain their side of the story.

Microsoft said the key is labeled “NSA key” because NSA is the technical review authority for U.S. export controls, and the key ensures compliance with U.S. export laws.

Then where in the U.S. export laws is it said that there need to be two keys, or a key labeled “NSA key”, or some other phrase in the law which explains that?

Will take under consideration. Added for now:

Bruce Schneier in post NSA Key in Microsoft Crypto API? [archive] does not believe NSAKEY has any malicious purpose.

I disagree with Bruce Schneier from 1999 too. But I don’t think it’s realistic to contact him for discussion on that one. Too bad that’s not one of his articles with comments enabled (it was a newsletter, perhaps from before the blog that supports comments was introduced, dunno).

Third, why in the world would anyone call a secret NSA key “NSAKEY”?

You tell me.

Lots of people have access to source code within Microsoft;

Why assume it’s in the source code that relevant developers work on (or nowadays in the version shared through the shared source program)? It was only found in debugging symbols that Microsoft forgot to remove.

The source code most developers work with could be clean. A backdoor might only be introduced during compilation, which is most likely done on a different machine, a build machine.

Access to Microsoft source code is most likely not an all-or-nothing situation. A developer working on, let’s say, Skype or Edge doesn’t necessarily have access to all source code of other components such as the kernel or crypto at all times. Compartmentalization is clever to avoid leaks.

Therefore even if Microsoft at some point had 47,000 developers or so (dunno how many there were in 1999), it doesn’t mean all of them would have access to that part of the source code.

Anyone with a debugger could have found this “NSAKEY.”

Not anyone.

  • The public: An independent security researcher was only able to find it because Microsoft forgot to remove debugging symbols. Microsoft has probably fixed this mistake by now.
  • A Microsoft developer:
    • Most Microsoft developers would be provided with, and would work with, the clean source code, without any backdoors. If they created a build including debug symbols, “NSAKEY” would not have been included.
    • Even if a Microsoft developer found “NSAKEY” and asked management about it, management could just say “that’s alright” or offer some other explanation. The developer might not be suspicious. Even if suspicious, it is unreasonable to assume every developer becomes a whistleblower, risking their current employment, income, legal action and future employment opportunities. Anonymous whistleblowing wouldn’t be worth it without evidence / source code. Then it would be a minor note and disregarded as FUD. And if leaked including source code / further evidence, then the number of suspects who could have leaked it would be tiny.
    • “NSAKEY.” could have been inserted by a much smaller group of developers at a later stage (before building / on the build machine).

If this is a covert mechanism, it’s not very covert.

Don’t assume the perfect crime. Humans make mistakes.

What might have really happened:

Quote https://en.wikipedia.org/wiki/IBM_Notes#Security

In 1997, Lotus negotiated an agreement with the NSA that allowed export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a “workload reduction factor” for the NSA.

But even then Microsoft isn’t forthcoming about it.

To strengthen the argument made on that page and to not distract from more important and stronger points, I’ll remove that part now.

(And move to https://www.whonix.org/wiki/Deprecated (set to noindex just now).)



I assume the key is used to indicate that the NSA has reviewed the cryptography and determined that it is fully compliant with the law. The key itself is not a legal requirement — it’s used to indicate that the actual requirements have been met. The US used to have really tight restrictions on cryptography (they have since been relaxed).

This doesn’t seem very far-fetched.

Yes, anyone. Anyone has the ability to reverse engineer Windows source code and still do today. Windows is reverse engineered often for various reasons.

I don’t know what you mean by “debugging symbols”. This isn’t anything like that.

It’s not simply a small mistake. It would have to be an enormous failure for something as seemingly obvious as this. That just wouldn’t happen.

You mean you want me to bring up some points?

First off, you have far too many subsections and not everything is relevant to security or privacy, such as the nuisances part. There are also many parts that are duplicated, and it’s just a huge wall of misleading text.

Microsoft has a history of informing adversaries of bugs before they are fixed. Microsoft reportedly gives adversaries security tips [archive] (archive.is [archive]) on how to crack into Windows computers.

Microsoft’s willingness to consult with adversaries and provide zero days [archive] before public fixes are announced logically places Windows users at greater risk

(That’s duplicated, by the way)

I have gone over this before. They are not providing adversaries with zero days — they are giving a variety of organisations early access to embargoed security patches so they can ship them out immediately after public disclosure. Linux does the exact same thing and this isn’t a major issue: https://www.kernel.org/doc/html/latest/admin-guide/security-bugs.html

The crucial difference between Microsoft bug embargoes and Linux bug embargoes is that Microsoft notifies intelligence agencies, which are then known to exploit vulnerabilities, while the Linux kernel security team has a much more transparent bug embargo process: trusted parties, i.e. huge Linux distributions, receive an early notification so that the software upgrade containing the fix is widely available before wide exploitation by attackers in the wild.

No, it’s just naive to assume intelligence agencies aren’t getting these notifications. The NSA doesn’t even need these since Linux is so trivial to exploit anyway which brings me on to the next point…

All the claims on “Windows Insecurity” are simply nonsensical. Microsoft have made enormous strides on security and is decades ahead of Linux: https://madaidans-insecurities.github.io/linux.html

Any credible security researcher will tell you the same thing: https://madaidans-insecurities.github.io/linux.html#security-researcher-views

And no, source models aren’t magic security properties.

(Non-technical comment, so take with a grain of salt)

I agree that page is super messy. A logical approach would be:

  • split into separate pages Windows v macOS v other
  • clearly demarcate privacy and security issues on those child pages (yes, they’re related I know, but it’s too messy right now)
  • add the main missing element - Xen and like systems with Type I hypervisors & virtualized, separated domains for various elements - networking, firewalls, USB, GUIVM, ‘dangerous applications’ etc.

I’m pretty sure security professionals were saying in recent times that all monolithic OSes are clusterf**ks for security. Too much code, massive kernels running with 10s of millions of lines of code etc. means they can never be properly secured (until maybe we have quantum computer fuzzing operations or similar doing tests in massive parallel i.e. something with 300 qubits or so).

So, we must assume in advance all systems will be pwned by any competent adversary. That suggests fine-grained separation is the only solution, preferably with those VM instances running in minimal templates, all in a disposable fashion i.e. Qubes architecture.

I think it’s hard to argue that Windows 10, even with 10s of thousands of developers and billions of dollars of investment/man hours, is more secure than say Xen hypervisor with disposable netVM, disposable firewall-VM, disposable USB-VM, GUIVM, all applications run in minimised disposableVMs for single use purposes etc.

Maybe those Qubes VMs can be pwned at a rate of 2-3 times (say) that of a Windows instance, but who gives a shit if the minute I shut down the disposable VM the miscreant’s presence is killed? If they are performing VM breakouts and infecting dom0, then you’ve bigger things to worry about, because that is apparently not trivial.

I also wonder what OS the NSA and others run BTW? I searched for that in the past and couldn’t see any clear answers. Pretty sure they’re not running Windows 10.

Even if Windows 10 is far more secure in its architecture than your best Linux OS today, I don’t think our community cares i.e. they want privacy and not 100s of open channels to the Microsoft mothership that can never be turned off despite all best efforts.

An honest appraisal at the end of the day might say - yes, Windows 10 is far more secure than your stock standard Linux OS as madaidan points out, but a privacy disaster. On the other hand, none of the monolithic OSes come near Type I hypervisor arrangements. So, I’d suggest that Qubes-Whonix is then the best compromise under the circumstances. Reasonably secure, privacy-focused, and can limit the long term impact of breaches by malicious turds when properly configured with a bit of effort.


Windows is implementing many Qubes-esque security features like VBS, WDAG/MDAG and Windows Sandbox.


They’re using Hyper-V as a backend which is far stronger than Xen, a hypervisor that is known for its lack of basic self-protection features and didn’t even have things like ASLR or NX support for a long time.

https://www.qubes-os.org/attachment/wiki/QubesArchitecture/arch-spec-0.3.pdf (particularly 3.3)

Also see:

You can’t boil this down into such a simple conclusion. It’s a very complicated thing, especially when talking about Qubes which is drastically different from traditional operating systems. The security of Qubes also heavily depends on how it’s being used. Virtualization won’t save you if you put all of your personal data into a single VM that gets persistently compromised. Threat models and user behaviour are critical in recommending a secure OS to someone.

This isn’t to say that Windows is certainly more secure than Qubes, just that it’s difficult to quantify security in such a way.

You can disable most privacy-invasive telemetry in the settings if you care to and in the Enterprise edition, all of it can be disabled. Windows can be a privacy disaster but only if you let it.


I am referring to this:

Debug symbols are usually not in production builds. However, Microsoft forgot to remove debug symbols. That’s why the textual string “nsakey” was found inside Windows: unstripped debug symbols in a production build.

Quote from the original which started the speculation:


Note 1: many people have written us and assumed that we “reverse engineered” Microsoft’s code. This is not true; we did not reverse engineer Microsoft code at any time. In fact, the debugging symbols were found using standard Microsoft-purchased programmer’s tools, completely by accident, when debugging one of our own programs.

If Microsoft hadn’t forgotten to strip debug symbols in the production build, then the textual string “nsakey” would be nowhere to be found.

Quote https://en.wikibooks.org/wiki/X86_Disassembly/Disassemblers_and_Decompilers#Lost_Information

User defined textual identifiers, such as variable names, label names, and macros are removed by the assembly process.
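The quoted point can be sketched in Python: a minimal printable-string extractor (similar to the strings tool) only finds a textual identifier like _NSAKEY if it survived in the binary, e.g. in unstripped debug symbols. The byte blobs below are hypothetical stand-ins for an unstripped and a stripped build.

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Extract printable ASCII runs from a binary blob,
    similar to what the `strings` tool does."""
    return [m.group().decode() for m in
            re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# Hypothetical binary fragments: one with debug symbols left in,
# one stripped. Only the first still contains "_NSAKEY".
unstripped = b"\x90\x90_KEY\x00_NSAKEY\x00\xcc\xcc"
stripped   = b"\x90\x90\x00\x00\xcc\xcc"

print(extract_strings(unstripped))  # ['_KEY', '_NSAKEY']
print(extract_strings(stripped))    # []
```

Once the identifier is removed by stripping, no amount of string searching in the binary brings the name back; only the machine code remains.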

Nobody can reverse engineer the source code.

Citation required.
(Required for Finding Backdoors in Freedom Software vs Non-Freedom Software not so much for NSAKEY.)

Enabling/disabling debugging symbols is just a single variable.

Obvious backdoors such as hardcoded usernames / passwords have happened in the past. Wikipedia has a small list: https://en.wikipedia.org/wiki/Backdoor_(computing)#List_of_known_backdoors

Yes, but I guess we have enough points in this thread. Could easily get overwhelming.

That’s a stylistic issue rather than factual claim issue?

It doesn’t have to be. That wiki page doesn’t say it’s limited to security and privacy only. It’s a Freedom Software Linux distribution advocating for the use of Freedom Software whenever possible, summarizing the arguments as neutrally, concisely and factually as reasonably possible.

If you point out any duplication, I’ll try to reduce it. However, some things are duplicated because without re-mentioning them, the conclusion chapters couldn’t be reasonably argued. Also, if someone jumps straight to the conclusion, many of the claims may seem unlikely or big, … therefore internal links are added to the parts of the page where these points are made in detail with links to sources.

That kind of duplication isn’t a big deal. Since that claim seems so strong, it’s good to point at various sources to prove that this interpretation isn’t just an outlier.

Each one has to be interpreted by itself. I don’t interpret them the way you do. For example, they don’t say “Linux has security issues. Use Windows 10 instead.”

Needs to be more specific.

These are irrelevant since Windows fails at the finishing line. Already addressed with this part:

Microsoft provides Tyrant Security. Not Freedom Security. (Tyrant Security vs Freedom Security) Windows comes with some innovative security technologies, however privacy and user freedom is terrible. Security and privacy have a strong connection. Quote Bruce Schneier Security vs. Privacy [archive], The Value of Privacy [archive]:

There is no security without privacy.

Quote HulaHoop [archive]:

I equate privacy with security because they are very much related in the real world especially for whistleblowers.

Windows already is on its dedicated page:

The chapters could be re-organized. Content shuffled around. But for now, I am mostly interested in precise factual claims.

Interesting counter viewpoint:
(Link shared originally by @madaidan.)

Right. There’s a lot to analyze, document. I’ll work on that once a standalone release of kicksecure.org and Kicksecure is done.

One thing: users still need to use a secure browser without any CVE currently being exploited in the wild.

Chromium Browser for Kicksecure Discussions (not Whonix)

…because you don’t want to test the robustness of the virtualizer against malware running locally, using local code execution to break out of the virtualizer or do other highly unwanted activity:

During such times of compromise (temporary inside disposable VM or as long as a persistent VM gets re-used), some points from The Importance of a Malware Free System apply.

If one ignores or disagrees with most points of https://www.whonix.org/wiki/Windows_Hosts and concentrates on https://madaidans-insecurities.github.io/linux.html alone, I can now even understand that point, as well as the point “Windows more secure than Linux”. It’s just that we don’t agree on various premises.

Which premises? That’s what https://www.whonix.org/wiki/Windows_Hosts is for.

To address that, please refer to these chapters:


It’s not a debug symbol.

That’s simply not true. There is no technical limitation on this. Everyone can objdump -D /path/to/binary

I don’t think you get how this works.

How exactly do you think vulnerabilities are discovered / malware is written for Windows? People reverse engineer the code to uncover security vulnerabilities or test the strength of security features. A few examples:


https://i.blackhat.com/USA-20/Thursday/us-20-Amar-Breaking-VSM-By-Attacking-SecureKernal.pdf (by MSRC researchers but they reverse engineered the code themselves because as I have already said, reverse engineering is useful regardless of source code access)


Windows is not a blackbox. It’s picked apart by external security researchers all the time. It’s reverse engineered for non-security purposes also. An example of such is ReactOS:

I can continue to give more examples if you want. It doesn’t seem like you’ve looked into this at all.

Not by huge companies like Microsoft or government agencies like the NSA. The backdoor in Dual_EC_DRBG was only proven by Snowden.

No, since it results in gish galloping:

Then please put them all into 1 separate section and make it clear they’re not relevant to security or privacy.

Some of them do and quite explicitly.

That part is also nonsensical. Implementing modern exploit mitigations and breaking common bug classes / exploit techniques is not tyrannical…

Any anti-features of Windows such as telemetry cannot be excused by “but it can be disabled”. That’s a workaround at best. Not a fix. Fact remains, for most users, if it’s enabled by default, it’ll tend to stay on.

Except it’s not actually enabled by default. Users are asked upon startup if they wish for it to be enabled.

The methodologies used to verify if telemetry is disabled in the other sources you link are dubious. Simply measuring the amount of connections a piece of software makes proves nothing. There is no proof those connections are transmitting anything privacy-sensitive. whonixcheck and sdwdate both connect to various servers but does that make it telemetry? Of course not. If you really want to analyse the telemetry, install a root TLS certificate and examine exactly what is being sent.

Btw, already far too many points are being discussed at the same time. Hard to catch all.

I’ve already quoted the original source of that claim.

Disassembly is not the source code.

disassembly code | source code

I don’t think disassembly code should be called source code.

Finding vulnerabilities isn’t a complete reverse engineering of Windows. No outsider has the complete picture.

No such claim was made.

  • If I don’t quote enough sources, accusation will be dubious sources.
  • If I quote multiple sources, it’s called gish galloping.

Why would it be required, for example, for chapter https://www.whonix.org/wiki/Windows_Hosts#Nuisances to say “This isn’t relevant to security/privacy.”? What that chapter is about is clear from the title and the content of the chapter.

Didn’t see such statements in any of the links.
Link + specific quote required.

There are 3 sources now quoted on the default.

Yet most users then go with the defaults and click “accept”. Which then leads to https://www.whonix.org/wiki/Windows_Hosts#Conclusion - looking at what’s actually happening.

Does Windows result in a world wide net gain or net loss of privacy?

Quotes from the Microsoft website alone are already admitting enough to make this a clear decision for me. If there was such an analysis, it would be interesting to look at it.

Also, installing a root TLS certificate won’t work. Quote https://www.government.nl/binaries/government/documents/publications/2019/06/11/dpia-windows-10-enterprise-v.1809-and-preview-v.-1903/DPIA+Windows+10+version+1.5+11+June+2019.pdf

Microsoft has encrypted the network traffic and has implemented certificate pinning as a regular security measure against unauthorised access. However, the specific way in which Microsoft has implemented the certificate pinning, also prevents a trusted network proxy from inspecting the data, i.e., the use of a man in the middle proxy. That is why BSI used the debugger tool. The use of this method results in a view of the telemetry data that is similar to the Diagnostic Data Viewer provided by
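Certificate pinning, as described in the quote, can be sketched as follows: the client compares the presented server certificate against a hardcoded fingerprint, so a locally installed root CA (the usual man-in-the-middle proxy approach) doesn’t help, because the proxy’s forged certificate hashes differently. The certificate bytes and names below are hypothetical.

```python
import hashlib

# Hypothetical genuine server certificate and its pinned SHA-256 hash,
# hardcoded into the client at build time.
GENUINE_CERT = b"-----BEGIN CERTIFICATE----- genuine -----END CERTIFICATE-----"
PINNED_HASH = hashlib.sha256(GENUINE_CERT).hexdigest()

def pin_ok(presented_cert: bytes) -> bool:
    """Accept the TLS connection only if the presented certificate
    matches the pinned hash, regardless of which CA signed it."""
    return hashlib.sha256(presented_cert).hexdigest() == PINNED_HASH

# A MITM proxy presents its own (CA-signed but different) certificate.
mitm_cert = b"-----BEGIN CERTIFICATE----- proxy-forged -----END CERTIFICATE-----"
print(pin_ok(GENUINE_CERT), pin_ok(mitm_cert))  # True False
```

This is why the BSI had to fall back to a debugger-based approach instead of a trusted network proxy.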

Also a related interesting consideration: some malware behaves differently if run inside a VM, or detects a debugger. I am not saying Windows is currently doing that, but it could in theory. Therefore the concepts Open Source, Reproducible Builds and in-toto transport for apt are the future, the way to go.

But also such a traffic analysis should already exist since…?

Then such an analysis would age quite rapidly too… Also quoted from the same PDF as above:

BSI writes that Microsoft is retrieving and updating the configuration telemetry data several times per hour.44

Third, the collection of telemetry data is highly dynamic. Microsoft engineers can add new types of events to the telemetry stream without prior notice to the users, if they follow internal privacy procedures.47 According to the BSI research quoted in paragraph 2.2 the configuration of the telemetry data flow is modified several times per hour. Each modification can mean that a new ETW provider wants access to data, or an existing ETW provider wants access to other log data.

Microsoft engineers explain the importance of the dynamic nature of creating telemetry events as follows: “In data-driven environments, such instrumentation is moving towards rule-based approaches, where instrumentation can be added once and then toggled without having to change the code itself. This functionality has enabled data-driven organizations to collect data not just during testing, but long after the product is deployed into retail. As Austindev remarks, “What’s really great for us is to be able to real-time turn on and off what log stuff you’re collecting at a pretty granular level. And to be able to get the performance back when it is turned off is a big thing for us.”

Microsoft has explained to the Dutch DPA in 2017 that the collection of telemetry data in Windows 10 is controlled by organisational policy rules. There is no reason to assume that such policy rules would not apply to the collection of telemetry data from Windows 10 Enterprise. However, Microsoft does not provide any information about this policy, nor any audit results with regard to compliance with those policy rules. The limitations to these audits are described in section 5 of this report, ‘Controller, processor and sub-processors’.

This is just semantics. Reverse engineering allows you to inspect every single CPU instruction. The way the code is displayed, whether it’s high-level C source code or assembly, doesn’t matter at the end of the day.

Nobody at all has the complete picture. A huge operating system like Windows cannot be fully understood by a single person. The same goes for Linux; that’s why there are dedicated maintainers for each subsystem.

That’s what you seemed to be implying by saying it can’t be reverse engineered / analysed.

The points should be clear, concise and with valid up-to-date sources based on fact, not misconceptions.

The nuisances were just one example. Many parts all over aren’t relevant like:

Software Choice and Deletion
Windows User Freedom Restrictions
Shared Source
Terrible Company

You also have things like:

Windows Insecurity
Windows Historic Insecurity

Neither of these makes sense (want me to bring up Linux security flaws from 2009 too?), but they shouldn’t be separate sections anyway.

Windows Software Sources
No Ecosystem Diversity Advantage

These also shouldn’t be separate sections but rather everything should be under one security section (those sections don’t make sense either though).

Opinion by GNU Project
Opinion by Free Software Foundation

2 “opinions” (i.e. nonsensical slander) from essentially the same people shouldn’t have 2 separate sections.


https://www.youtube.com/watch?v=v7_mwg5f2cE talks about Windows security at some parts but I don’t have timestamps.


I’m reinforced in my belief that security of mainstream platforms (from Apple, Google, MS) will continue to improve, likely exceeding the “open source” offerings.


In the past I think you mentioned Windows 10 is secure or more secure than a typical Linux Distro because of Windows 10’s implementation of sandboxing?

Among other things, like not being many years behind on exploit mitigations.

“In the past” being https://old.reddit.com/r/CopperheadOS/comments/85dia6/comparable_desktop_os/dvwndnt/

Other than ChromeOS (including Android apps), the closest thing to what you want from a security perspective would be Windows 10 S, not any Linux distribution.

My real recommendations would be ChromeOS (+ Android apps) and Windows 10 S for most people.

Traditional Linux distributions aren’t very secure overall, and the Linux desktop stack is a disaster without any semblance of an application security model. There’s ongoing work to address this problems but it’s barely started and it will be a long and painful process. Windows and OS X have similar issues to address, but Microsoft is the furthest along in tackling the issues and takes it the most seriously. Non-mobile security as a whole is awful.

Windows has good exploit mitigations. They’ve been the leaders in this area among mainstream operating systems. Windows security is still a disaster on the desktop due to the legacy security model without Windows 10 S and that’s a painful sacrifice to make. Windows 10 S also isn’t really on par with Android and iOS yet but it’s most of the way there.

They’re incorrect/outdated and that’s clear if you have ever installed Windows in the past few years.

If they do, that’s not on Microsoft. Windows makes these options pretty explicit and they’re clearly visible during setup.

They don’t admit that telemetry cannot be disabled, unlike some of the sources on the wiki page.

Debuggers don’t execute any code. They read from the binary itself. That’s not possible.

Going to address other parts later.

Anti-disassembly, Anti-debugging and Anti-VM are properties of sophisticated malware and other programs.
(I am not saying this is perfect, unbreakable.)

Quote https://sites.google.com/a/khanhnn.com/ebooks/vii-reverse-engineering/3-malware-analysis-3-int2d-anti-debugging-part-i

The purpose of anti-debugging is to hinder the process of reverse engineering. There could be several general approaches: (1) to detect the existence of a debugger, and behave differently when a debugger is attached to the current process; (2) to disrupt or crash a debugger. Approach (1) is the mostly frequently applied (see an excellent survey in [2]). Approach (2) is rare (it targets and attacks a debugger - and we will see several examples in Max++ later). Today, we concentrate on Approach (1).
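Approach (1) from the quote can be sketched on Linux (Windows malware would typically use other mechanisms such as IsDebuggerPresent): a process reads the TracerPid field in /proc/self/status and behaves differently when it is non-zero. The helper below just parses status text; the example excerpts and the "behave differently" decision are hypothetical.

```python
def traced(status_text: str) -> bool:
    """Return True if a /proc/<pid>/status text shows an attached tracer
    (TracerPid != 0), i.e. a debugger like gdb or strace is attached."""
    for line in status_text.splitlines():
        if line.startswith("TracerPid:"):
            return int(line.split(":", 1)[1].strip()) != 0
    return False

# Hypothetical status excerpts: a free-running process vs. one
# being traced by pid 4242.
free_run = "Name:\tdemo\nTracerPid:\t0\n"
debugged = "Name:\tdemo\nTracerPid:\t4242\n"

print(traced(free_run), traced(debugged))  # False True
```

In a real program the text would come from open("/proc/self/status").read(); the anti-analysis trick is simply to take a benign code path whenever traced() returns True.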

Many web search results for:


You don’t necessarily need to attach a debugger to the currently running code or even execute it at all. One could use e.g. objdump and extract all the static assembly.

That won’t get you far with obfuscated binaries such as Skype. See how difficult that was:
