Daniel Micay Quotes

source

I am paraphrasing.

  • In quotes (") is my not-so-serious summary.
  • Below, using forum quotation (“>”), is what he really said.

You should read the full quotes at their source to see the context for yourself.

“FreeBSD is shit.”
“systemd is shit.”

Also, all these things about desktop Linux completely apply to anything else using the software stack. It doesn’t matter if it’s FreeBSD or whatever. FreeBSD also has a less secure kernel, malloc, etc. but at least it doesn’t have nonsense like systemd greatly expanding attack surface written with tons of poorly written C code.

“QubesOS is kinda shit.”

QubesOS would be far better off with a different OS inside the guests. It’s not really a Linux distribution though and can be assembled out of other distributions. Most of the work has been Linux integration though. The biggest flaw with it is that it’s trying to assemble a secure system out of garbage (x86, desktop Linux). It does a great job at implementing some of the best compartmentalization available despite the challenges. It could be a lot better if the components it uses cared more about security.

“Desktop operating system encryption is shit.”

The traditional desktop OS approach to disk encryption is also awful since it’s totally opposed to keeping data at rest. I recommend looking at the approach on iOS, which Android has mostly adopted at this point. In addition to all the hardware support, the OS needs to go out of its way to support fine-grained encryption where lots of data can be kept at rest when locked. Android also provides per-profile encryption keys, but has catching up to do in terms of making it easier to keep data at rest when locked. It has https://developer.android.com/reference/android/security/keystore/KeyGenParameterSpec.Builder.html#setUnlockedDeviceRequired(boolean) now as a nicer approach to keeping hardware-backed keys at rest, but iOS makes it easier by letting you just mark files as being in one of 2 encryption classes that can become at rest when locked. It even has a way to use asymmetric encryption to append to files when locked, without being able to read them.
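Note by me: for reference, a minimal sketch of what using that setUnlockedDeviceRequired() API looks like on Android 9 (API 28) and later. The key alias and cipher parameters are arbitrary examples of mine, not from the quote:

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// Generate a hardware-backed AES key that the Keystore refuses to use
// while the device is locked (requires Android 9 / API level 28+).
// The alias "app_data_key" is an arbitrary example name.
fun generateLockedAtRestKey(): SecretKey {
    val spec = KeyGenParameterSpec.Builder(
        "app_data_key",
        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
    )
        .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
        // The "nicer approach" referred to above: data encrypted under
        // this key is effectively at rest whenever the device is locked.
        .setUnlockedDeviceRequired(true)
        .build()

    return KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore"
    ).apply { init(spec) }.generateKey()
}
```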

“Linux applications are shit.”

The userspace Linux desktop software stack is far worse relative to the others. Security and privacy are such low priorities. It’s really a complete joke and it’s hard to even choose where to start in terms of explaining how bad it is. There’s almost a complete disregard for sandboxing / privilege separation / permission models, exploit mitigations, memory safe languages (lots of cultural obsession with using memory unsafe C everywhere), etc. and there isn’t even much effort put into finding and fixing the bugs.

“Debian is shit.”

Look at something like Debian where software versions are totally frozen and only a tiny subset of security fixes receiving CVEs are backported, the deployment of even the legacy exploit mitigations from 2 decades ago is terrible and work on systems integration level security features like verified boot, full system MAC policies, etc. is near non-existent. That’s what passes as secure though when it’s the opposite. When people tell you that Debian is secure, it’s like someone trying to claim that Windows XP with partial security updates (via their extended support) would be secure. It’s just not based in any kind of reality with any actual reasoning / thought behind it.

“The Linux kernel is shit; macOS and Windows are also shit.”

The Linux kernel is a security disaster, but so are the kernels in macOS / iOS and Windows, although they are moving towards changing. For example, iOS moved a lot of the network stack to userspace, among other things.

“Open Source is shit.”

It’s just the fallacy that open source is more secure and privacy respecting. It’s quite often not the case. There’s also the mistaken belief that closed source software is a black box that cannot be inspected / audited, while the massively complex hardware underneath is the real black box. A lot of the underlying microcode / firmware is also a lot harder to inspect.

Really, people just like saying that their preferred software stack is secure, or that open source software is secure, when in reality it’s not the case. Desktop Linux is falling further and further behind in nearly all of these areas.

“Firejail is shit.”

Firejail specifically is extremely problematic and I would say it substantially reduces the security of the system by acting as a massive privilege escalation hole.

“Flatpak is shit.”

The work to try catching up like Flatpak is extremely flawed and is a failure from day 1 by not actually aiming to achieve meaningful goals with a proper threat model.

Note by me: Flatpak uses bubblewrap, so this might indirectly concern bubblewrap too.


That he shares his thoughts is appreciated. On some points I agree, on others not, and on some I don’t know. Too much to debate, and not productive.

3 Likes

No matter how shit Linux and Qubes OS are security-wise, the fact will always remain that Windows, macOS and Google-branded hardware/software have morphed into 100% surveillance platforms that have monetized users in a thousand different ways.

Corporations endorsed data siphoning as their primary business strategy after governments failed to enact strict regulation, laws and stiff penalties for stealing and hoarding personal user data. This is also no mistake, since it supercharges the government panopticon fetish. Hence the term “government-corporate-surveillance complex”.

Give me less-secure, open source Linux/Xen shit any day of the week over any corporate option that feeds billion $ bottom lines - they are data whores masquerading as alternative platforms and an essential part of the modern security state.

Unfortunately, in the final calculation, all computers/peripherals, all code, etc. are unfit for the purpose of providing proper security from skilled and well-resourced outfits. Nothing is going to change that. Only mid-tier to pissant adversaries will be swatted away by privacy enthusiasts who go the whole nine yards.

And Google et al. won’t stop sucking government c**k anytime soon. So, we’re left with definitely backdoored shit like Windows, or partially useless open source. Easy choice then for stuff like comms/browsing and so on that one is willing to risk, i.e. doesn’t mind sharing (in all probability) with Five Eyes buddies who attack the entire privacy-minded population at whim (illegally and with no penalty), see: How the NSA Attacks Tor/Firefox Users With QUANTUM and FOXACID - Schneier on Security

If people think they’ll allow the .02% of Internet data they can’t immediately siphon to be left alone, they are sadly mistaken and haven’t done their homework.

Until then, anything you really treasure, don’t want to share with any misfits should be done 100% offline. No electronic peripherals of any kind. Ever. (Be proud of the probable dossier you are building on a Utah server :wink: )

Unfortunately the only viable, online privacy solution for the masses is widescale adoption of Tor, .onion infrastructure and so on – but that is itself mostly a pipe dream with the apathetic public, 94% of whom are glued to a smartphone daily and shuffling around like zombies while patched into Facebook to catch up with the latest social media drivel…

In summary, Daniel’s probably right, but opensource shit tastes better than a corporate shit sandwich.

1 Like

“Everything’s shit. Burn it all down and use an abacus” --Daniel Micay

1 Like

All these points are true. I don’t see anything wrong with them.

I’m pretty sure he said he likes Qubes and even uses it, just that the default guests aren’t exactly brilliant.

He doesn’t say it’s shit. He just says that it doesn’t make your software completely secure like many people assume.

1 Like

That’s where we/(you?) come in. Hardening the guests to prevent SHTF before the hax0rs take shots at the hypervisor.

2 Likes

That is a straw man argument. Who seriously argued that open source software makes anything completely secure? No one as far as I can see… I’ve read the most uninformed comments on the web too - and none of them say that. A straw man is when you twist your opponent’s argument into something they’d never argue - and then you argue against that.

The sad part is that you’re a well respected security researcher and this is typical of the argumentation you make. People take what you say to heart. For example, when you promote ChromeOS as more secure than Linux - you are making a value judgement about what kind of attacker one should try to stop. People have made arguments that have yet to be overturned about how global companies working with governments to reduce freedom and increase regulation are more dangerous & more serious adversaries than petty criminals. A basis of security analysis is considering the scenarios.

If a government can act like they’ve regulated Google, then Google could simply flip the switch and take back control of its hardware and software that you’ve built up to the status of the most secure in the world. Meanwhile the FOSS enthusiasts you & Daniel keep denigrating would be fine - or at least more able to react… You’re welcome to disagree with the scenario, but you’re not welcome to imply that anyone hypothesizing such a scenario is stupid or uneducated. Your personal value can be to defend against petty criminals - or whatever - and ours can be defending against global control.

I am also attempting to understand this ideological split and document it. Here’s what I came up with:
Tyrant Security vs Freedom Security

Assumptions (Not necessarily mine! For sake of argument.):

  • Monopolies such as Google and Co. don’t do evil at the moment.
  • Under some threat models Google and Co. software/hardware is safer from some adversaries.

That would be all great and fine at the moment, however Google and Co. might in (small) steps turn more and more evil. By that time, we’d wish that it would be:

Since that is not the case, and/or it is tyrant security, there are these very different viewpoints: what seems shiny and great now versus long-term reliability, stability, freedom, security.

1 Like

What do you think of my above breakdown? I don’t think it is a split, though it may seem so. There are some logical fallacies being perpetuated. The one I described above is the logical fallacy that the security experts get to decide the threat modeling for the customers/plebs/non-experts, etc. However, this goes against real security practice, which is individualized. For example, if you are totally part of the system - your income comes from government or Google, etc. - then you’re welcome to threat model as you please. However, those who do not feel they are on the same side as the global system(s) want to be able to verify what their computer is doing. Each person can’t verify the whole stack, but a community can do a decent job (of at least ruling out obvious remote access points for all passwords & keys).

The biggest offenders these days seem to be “trusted environment subsystems” in the hardware. You can run the most secure software on top of them, and potentially the (actual) owners can extract the passwords & keys, even from the registers, using their tooling - and potentially remotely too.

I have yet to see any real argumentation against the above from what you call “tyrant security” side, @Patrick. They typically just censor/ban you if you make these arguments.

It’s interesting. I see your point. But I am wondering how it could be misunderstood. Or made better for wiki enhancement. :slight_smile:

Straw man: I am not sure it’s productive to call it that.

Generally speaking, there are so many implicit assumptions, that it’s easy to talk past one another.

Related, you might also enjoy:

Attempting to describe freedom security better: how can the software vendor exclude itself from the need to be trusted as much as possible? A development goal which is hard to describe. “Giving the user security from the software vendor.” Quoted from the last link:

Prevention of targeted malicious upgrades. [25]

As in singling out specific users. Shipping malicious upgrades to select users only.

Most Android phones have a feature which allows logging in on the Google Play web/desktop version using the same e-mail address which is used on the phone, usually the same gmail address. When clicking install for an app using the Google Play web/desktop version, the user will be prompted (in case of having registered multiple devices) on which device the app should be installed. After pressing install, the app will be installed on the phone. This video [archive] demonstrates this. It is therefore established that the Google website can result in remote app installation on the phone. It follows that a coerced or compromised Google Play website could do the same. Since the gmail-based web login can be linked to the same gmail address on the phone, pushing targeted malicious upgrades is especially easy. Even if a phone was always fully torified (all traffic routed over Tor), the gmail identifier could still be used. While Tor can anonymize the connection, it does not (and should not) attempt to modify anything inside the traffic (the gmail identifier).

Linux distributions usually do not require an e-mail based login to receive upgrades. Users can still be singled out by IP addresses unless they opt in to using something such as apt-transport-tor, which is not the default.

Kicksecure / Whonix:

All upgrades are downloaded over Tor. There is no way for the server to ship legit upgrade packages to most users while singling out specific users for targeted attacks.

Reproducible builds further go into that direction.
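The verification idea behind reproducible builds can be shown in a few lines: anyone can rebuild a package from the published source and compare it bit-for-bit with what the server shipped. A minimal sketch, assuming you already have the shipped binary and your own independent rebuild as local files (the function names are mine):

```kotlin
import java.io.File
import java.security.MessageDigest

// Hex-encoded SHA-256 digest of a file.
fun sha256(file: File): String =
    MessageDigest.getInstance("SHA-256")
        .digest(file.readBytes())
        .joinToString("") { "%02x".format(it) }

// A reproducible build must be bit-for-bit identical no matter who
// builds it. If an independent rebuild from the published source
// differs from the binary the server shipped, something was modified:
// the source, the toolchain, or the package itself.
fun verifyReproducible(shippedBinary: File, localRebuild: File): Boolean =
    sha256(shippedBinary) == sha256(localRebuild)
```

Any mismatch means either the build is not yet reproducible or someone shipped a modified package, including a targeted malicious one.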

Other steps that will hopefully come later in the freedom security ecosystem would be fixing security vulnerabilities, systematic audits, and then somehow also slowing down the speed of development and the potential for vulnerable code being (re-)introduced after audits.

1 Like

Yes, I understand; it isn’t meant for the wiki, but I think this straw man is self-evidently present across these types of discussions. I do think it is important when using logical discourse to stick to the literal rules of logic, as taught in textbooks and universities. That includes the list of logical fallacies.

Great, happy to work on that. I think “tyranny vs freedom” software is a false dichotomy in the sense of security - which I haven’t done a good job of describing yet. How can one have verifiable security if they cannot test or see what the software is doing on the device that they own? Of course, we are all familiar with the arguments of “do you really own it” if the manufacturer can potentially have remote access / exploits / updates / brick it.

I see the above often being misinterpreted; indeed we have to work with what we’ve got. But that is separate from the goal we are working towards: verifiable security. If there are black boxes, then by definition it is not verifiable. In terms of the name “freedom security,” I think some people might actually feel less free with it, considering that acknowledging the above truths forces them to feel like they have to build something from scratch (which is a massive workload / not inspiring the sense of freedom). But I’m trying to be clear here so as to not get wires crossed. It is great to use the “tyranny” tools as best as we can, and I don’t think we should tie the term to FOSS entirely. I’m all for app developers selling closed source stuff and customers buying it. However, we are in a situation now where it is extremely difficult to get a verifiable system stack - and impossible to get one that is well tested. I’m speaking of course about Intel’s Management Engine, AMD’s PSP, ARM’s TrustZone, and the like…

“Spotting backdoors is already very difficult in Freedom Software where the full source code is available to the general public. Spotting backdoors in non-freedom software, obfuscated binaries is exponentially more difficult.”

This is another area where I feel the need to point out an assumption being made. The discoverability of backdoors being negative is a value judgement. It is not an established security truth. Closed software means a very limited number of people can discover/implement backdoors with ease. Open software means the discoverability of backdoors is far less limited (as your table points out). So, if your security threat model is other people installing backdoors, then you would tend towards the open source model. If your security threat model is that your company wrote all the software and wants to reduce the ability for others to find problems, then you might tend towards the closed model. I’m generalizing to make a point in the last two sentences; it isn’t that simple, considering that lots of open source gets recycled inside of closed source stacks, for example.

Does this not provide an added incentive for those with access to the source code (the creators/maintainers/etc.) to sell backdoor access? It also means backdoors in closed software can be a lot more persistent over the years. Security via obscurity certainly has useful applications, but in the case of people running systems without billion-$ resources, I think we can achieve a lot by: 1) reducing complexity, 2) reducing LoC, 3) opening source (having backdoors more easily found), 4) encouraging more testers, 5) having a CI system that can run automated tests that detect possible backdoors on the codebase, to which anyone can contribute.
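To illustrate point 5, a toy sketch of such a CI check. The patterns here are hypothetical examples I made up, not a real detection ruleset, and a determined backdoor author would evade them; the point is only that such checks are automatable and anyone can contribute patterns:

```kotlin
import java.io.File
import kotlin.system.exitProcess

// Hypothetical example patterns, not a real detection ruleset.
val suspiciousPatterns = listOf(
    Regex("""Runtime\.getRuntime\(\)\.exec"""),  // unexpected shell-outs
    Regex("""(?i)backdoor|debug_password"""),    // telltale identifiers
    Regex("""\b(?:\d{1,3}\.){3}\d{1,3}\b""")     // hard-coded IP addresses
)

// Walk the source tree and report every line matching a pattern.
fun scan(sourceRoot: File): List<String> =
    sourceRoot.walkTopDown()
        .filter { it.isFile && it.extension in setOf("kt", "java", "c") }
        .toList()
        .flatMap { file ->
            file.readLines().mapIndexedNotNull { i, line ->
                suspiciousPatterns.firstOrNull { it.containsMatchIn(line) }
                    ?.let { "${file.path}:${i + 1}: ${line.trim()}" }
            }
        }

fun main(args: Array<String>) {
    val findings = scan(File(args.getOrElse(0) { "." }))
    findings.forEach(::println)
    // A non-zero exit fails the CI job so a human reviews the findings.
    if (findings.isNotEmpty()) exitProcess(1)
}
```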

I don’t think it makes sense to compare stock Android… Not really FOSS. GrapheneOS for example has made massive strides / is my personal example of the best mobile OS we have. “Freedom software” in this case is more about the unverifiable software/firmware running on the chips that no independent dev can feasibly reverse engineer. Yes, it is verifiable from a trusting-Google perspective, but empirically unverifiable (in the true sense of empiricism: to verify using your own direct experience).

Reproducible builds further go into that direction.

In the interest of staying focused, I’ll just comment that I don’t think Tor is the right way to go for software distribution. But verifiable builds would be a great accomplishment for distros & packaging alike.

Overall, if the bedrock of the computer is not trustworthy, then all the filesystem encryption in the world cannot help if certain groups have backdoors into the CPU registers holding your passwords & keys. Some of those are network accessible - for others they’d have to gain physical access to your device(s). I think the typical responses like: “oh well you should just air gap all of your computers” / “you’re a conspiracy theorist” / “have a separate computer for accessing the internet” - are all tired tropes, yet genius ways to derail the above inquiry & argumentation.

Doesn’t make sense in context of Mobile Operating System Comparison?

Not sure what you mean… Hopefully my comment was not too verbose :slight_smile:
Yes, that Android Privacy Issues chart makes sense in this context.

Mobile Operating System Comparison

The whole point of that chapter is to compare mostly stock roms - what most people are using, what is actually happening (“Most iPhone / Android devices” [8], as named in the comparison table) - with other things. Many stock roms have some interesting security features, but also issues. It’s hard for me to justify this any better, as I guess that comparison table is already the best way to put it. It was reviewed and modified by multiple contributors. If you still think comparing most stock Android in that table makes no sense, then I guess it will be hard to agree on anything whatsoever.

Random analogy: Comparison table. Gold, silver, etc. Weight, volume, whatnot. Then saying “I don’t think it makes sense to compare gold” - I wouldn’t know what to answer.

In comparison table indirectly mentioned as:
“Libre Android” [9]

Not mentioned specifically since that’s not the main point which is supposed to be made. Not a nitpick of this libre Android vs. that libre Android, with lots of changing technical differences. Already mentioned in the footnote.

That however I understand better.

Fully Open Source firmware would be awesome. Fully empirically verifiable by the community would be awesome too. It’s worthwhile to support any project going into that direction.

However, even with fully Open Source firmware we would not be much safer. We would still need to trust; it would still not be fully empirically verifiable. That is because the blueprints of the hardware remain secret. Even Open Source Hardware wouldn’t make it fully empirically verifiable. We’d still lack the production capabilities, and vulnerabilities / backdoors could still be injected at the production level. And even if one in theory owned a production factory, all the trusted equipment would need to be OSH and empirically verified as well.

I guess the people interested in empirical verification need to make their case better. This is highly complex technical stuff.

Capabilities are in my experience not widely known.

Hence, the following pages are a contribution into that direction (not hardware related):

This needs concise write-ups with solid sources, published by reputable outlets, in other media formats and different styles, from diverse people, to reach as many people as possible.

I am mostly interested in the biggest impact, highest productivity. Source code and documentation.

All words are imperfect.

All statements are either false or incomplete. [archive]

Unless there’s something better…

In theory, security could be verifiable while user freedom restrictions are still in place, such as most stock roms pre-installing applications which cannot be removed as easily as other applications. Hence, the invention of the term Freedom Security. Verifiable Security isn’t wrong either. Perhaps Verifiable Security is a subset of Freedom Security. Perhaps Verifiable Security is a prerequisite of Freedom Security.

1 Like

I guess there are people who believe the backdoors would be obvious in the firmware if it were open source - and people who trust that they’re either extremely well hidden or not purposefully put there. I’m in the former camp. One must pick the lowest-hanging fruit to be highly effective in these areas. Many security experts seem to imply the adversary is someone trying to steal your bank account number or something. But if you consider global government/corporate partnerships to be the adversary, then you’re in a different camp. At the very least, I should be able to know if an OS running in a black box in my computer is reading my keys from memory & registers, or is potentially remote controlled.