Joanna Rutkowska’s Thoughts on the “physically secure” ORWL computer.
Frankly, I don’t see anything that makes it more secure than any other off-the-shelf hardware. The marketing was pretty strong when it came out. Nice site, though.
Ironically, given the supply-chain and manufacturing security risks we see, buying a “secure” anything is an invitation to a poisoned-well attack.
This blog post is a pile of rubbish. She does not know much about HW, and most of the comments are unfair or irrelevant. There is nothing equivalent to this device. She was involved at the beginning of the project…
Can you be more specific? What doesn’t she know about HW that you do? It’s not helpful to make claims and have nothing to back them up. Maybe go point by point through her blog and tell us what she got wrong?
I think that would mean she has a little better understanding than most of us.
Any detailed rebuttal available for us mortals? What she writes sounds quite convincing to me, so I am happy to stand corrected.
I wonder if ORWL knows about her blog post and would like to comment.
Fair comments. Give me a little bit of time and I will put something together…
a few HW items to consider:
ME engine. ORWL is the only computer that disconnects the USB and HDMI ports until the user is two-factor authenticated and present (less than 10 m from the device). This is managed by the secure controller before the Intel CPU is booted. So no ME attacks are possible unless the user is present and inserts something rogue into the ports…
All keys (user IDs, SSD encryption keys…) are stored in the secure controller. This prevents many attacks such as die opening, the recent TPM fiasco, and low-level attacks on boot or BIOS. (SGX, for instance, is not protected.) Keys are released only when the user is authenticated. See here for die attacks: http://www.blackhat.com/presentations/bh-dc-08/Tarnovsky/Presentation/bh-dc-08-tarnovsky.pdf
Temperature attacks and 32k fault injection are also prevented by the secure controller. A recent example is the Trezor: https://medium.com/@Zero404Cool/frozen-trezor-data-remanence-attacks-de4d70c9ee8c
The system is protected under an active mesh even when power is disconnected. This prevents memory fault injection, PCB modifications, interface snooping, and other SPI, I2C, USB… attacks. This link shows an attack performed on an ARM CPU, but similar attacks are done on Intel: https://eprint.iacr.org/2015/147.pdf
The secure controller and keyfob on ORWL have an active die shield. See https://www.researchgate.net/profile/Sylvain_Guilley/publication/271472284_Cryptographically_secure_shields/links/550f497f0cf2752610a00cc9/Cryptographically-secure-shields.pdf
Side channels are not protected on most systems; ORWL’s secure controller prevents this. These attacks can be performed over the air, as presented in this video, or using SPA/DPA analysis if an attacker can monitor the power supply line. So any VPN, login, or encryption keys are exposed on a PC without this protection. https://www.youtube.com/watch?v=4L8rnYhnLt8 and power analysis on Wikipedia
In terms of logistics, ORWL ships with separate hash keys via PIN mailer. If you are really concerned, the schematics and code are available and you can reprogram the device (fairly intensive work, but possible).
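The proximity-gated port policy described in the first item can be sketched as a small state check. This is a hedged illustration only: the names (`SessionState`, `ports_enabled`) and the BLE-distance field are assumptions, not ORWL’s actual firmware API.

```python
from dataclasses import dataclass

# Illustrative sketch of the gating policy: the secure controller keeps
# USB/HDMI disconnected until the user is two-factor authenticated AND
# the keyfob is within range. All names here are hypothetical.

PROXIMITY_LIMIT_M = 10.0

@dataclass
class SessionState:
    pin_verified: bool = False               # factor 1: PIN entered on the device
    keyfob_verified: bool = False            # factor 2: keyfob challenge passed
    keyfob_distance_m: float = float("inf")  # e.g. estimated from BLE signal strength

def ports_enabled(state: SessionState) -> bool:
    """Ports stay disconnected unless both factors pass and the user is present."""
    authenticated = state.pin_verified and state.keyfob_verified
    present = state.keyfob_distance_m < PROXIMITY_LIMIT_M
    return authenticated and present

# A user who authenticated but then walked away loses the ports again:
s = SessionState(pin_verified=True, keyfob_verified=True, keyfob_distance_m=2.0)
assert ports_enabled(s)
s.keyfob_distance_m = 25.0
assert not ports_enabled(s)
```

The point of the design is that this check runs on the secure controller, before the Intel CPU (and hence the ME) ever sees the ports.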
enjoy the reading…
Link? This is the central question: everything you mentioned depends on an opaque secure controller.
If the platform isn’t open to scrutiny, then Rutkowska’s argument stands. The ORWL may reverse climate change and do a lot of wonderful things but unless it can be audited, no one knows for sure.
If you read through the blog post, you will see that Joanna was expressing her frustration with vendors not pursuing trustworthy personal computers, which would in fact be a major step forward.
Also see State Considered Harmful:
I think the point of her blog was: you can add all the security mechanisms you want, but it will not be a trustworthy computer unless (at a minimum):
1. The data sheet for the secure uC is made public.
2. All the firmware sources are made public.
3. The firmware build process is reproducible.
4. The toolchain for firmware builds is made public.
5. The uC exposes a reliable way to dump the whole firmware through some HW mechanism.
The main point of her blog was that ORWL is not a trustworthy computer. However, you have not shown anything to the contrary. I think there is a reason for that: no one could prove it unless (at a minimum) points 1–5 above were met.
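Points 3 and 5 combine into a concrete verification workflow: if the build is reproducible and the uC lets you dump the whole flash, trust reduces to a byte-for-byte comparison. A minimal sketch, assuming hypothetical dump and build images:

```python
import hashlib

# Sketch only: 'dumped' would come from the uC's HW dump mechanism (point 5),
# 'rebuilt' from an independent reproducible build (points 2-4).

def sha256_bytes(data: bytes) -> str:
    """Hex digest of a firmware image."""
    return hashlib.sha256(data).hexdigest()

def firmware_matches(dumped: bytes, rebuilt: bytes) -> bool:
    """True only when the dump is bit-identical to the reproducible build."""
    return sha256_bytes(dumped) == sha256_bytes(rebuilt)

image = b"\x7fFW-IMAGE-v1"
assert firmware_matches(image, image)
assert not firmware_matches(image, image + b"\x00")  # any patched byte fails
```

Without all five points, there is nothing independent to hash against, which is exactly the gap the blog post complains about.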
That’s where it gets really dubious for Joanna to finger-point ORWL specifically, when all the other approved laptops have major, well-known and documented security openings, such as ME attacks over USB. Design SHIFT paid Joanna to help architect ORWL and optimize the system to support Qubes out of the box, and the architecture, firmware, schematics, and source code were provided to Invisible Things Lab. The Intel CPU version was recommended by Joanna, and she has the first prototype of ORWL. Below are the minutes of the architecture selection.
Ok, so we have spent almost 7 hours in a restaurant yesterday discussing this…
As mentioned, nobody likes the Option #1.
Option #3 turned out to be a no-go, because at the time the main CPU wakes up
from S3 we don’t have access to Qubes VMs, where we could delegate NFC
processing to occur. And we can’t have this access till we authenticate the
user. It’s a classic chicken-and-egg problem here.
Option #4 alone (i.e. without #3) is, of course, not good enough because we want
ORWL unlocking to also work with a default (cheap) token, whatever it would be.
So, the option #2 is what’s left. After some discussions it seems like we should
be able to utilize MPU on the Maxim to get the satisfactory security isolation
in a similar way as we wanted from the MMU unit (i.e to isolate secure-boot and
flash handling-related code from the auth/unlocking related, and from the dirty
NFC stacks and drivers). Additionally it seems like this would not necessarily
mean writing a new microkernel by us or Genode (something which would likely be
prohibitively expensive), because Rafal just found this promising-looking
…which looks like it might be very useful for our case. I wonder if any of
you: Marc, or Gupta, or your team, has any experience with this FreeRTOS?
So, that looks promising!
As for the details regarding the actual NFC token, and whether it should also play
the role of a smart card (i.e. carry user secret(s) with it) – I think these are
quite secondary topics that could be left to be determined later. However, if
you already had some specific NFC tokens/smart cards at your considerations,
please send over their datasheets our way (or better: push them to the
Some other things we discussed included a method for the end user to verify the
device he or she got has not been tampered with during shipment. This uses a
secret in the uC and the trusted OLED. So, this would work well with this Maxim
uC. This method does not, however, solve the problem of the user needing to
trust the factory. This topic is more complex to solve and requires more
thinking. As a temporary solution we could say: those who don’t trust the
factory can just build the device themselves (as it’s gonna be an Open Hardware
project), but this option doesn’t sound quite satisfactory in practice.
Looking forward to your feedback. If everything looks ok, I will update the arch docs
early next week.
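The tamper-evidence scheme the minutes mention (a secret in the uC plus the trusted OLED) could look roughly like a short MAC the device renders for the user to compare against a value delivered out of band. This is a guess at the construction, not ORWL’s real protocol; every name below is hypothetical.

```python
import hmac, hashlib

# Hypothetical sketch: the uC holds a factory-provisioned secret; for a
# user-supplied challenge it shows a short code on the trusted OLED, and the
# user compares it against a value computed from a secret mailed separately
# (e.g. by PIN mailer). A tampered device with a different secret cannot
# produce matching codes.

def attestation_code(device_secret: bytes, challenge: bytes) -> str:
    """Short code the uC would render on the OLED for a given challenge."""
    mac = hmac.new(device_secret, challenge, hashlib.sha256).hexdigest()
    return mac[:8]  # truncated so a human can compare it by eye

secret = b"provisioned-at-factory"
challenge = b"user-chosen-nonce"

assert attestation_code(secret, challenge) == attestation_code(secret, challenge)
assert attestation_code(b"tampered-secret", challenge) != attestation_code(secret, challenge)
```

As the minutes note, this only addresses tampering in transit; it does nothing about having to trust the factory that provisioned the secret in the first place.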
The wiki needs tons of work, and GitHub has more folders pending some NDAs. Several people have done audits, and toolchains are available.
What happened to this, anyway?
There’s no reason to believe it changed from the hyped up dog shit it started as.