[lowrisc-dev] Ideas for system integrity protection
asb at asbradbury.org
Tue Feb 24 09:30:40 GMT 2015
On 23 February 2015 at 00:40, Eve FooBar <eve.foobar at gmail.com> wrote:
> Having recently discovered Qubes-OS and Anti Evil Maid, I've been reading up
> on x86 trusted boot, and I find it shocking how many small flaws add up to
> making it nearly worthless. As such, I'd like to propose my own take on
> trusted boot and other aspects of system integrity protection.
Thanks for your email. I am familiar with Qubes, but hadn't seen their
Anti Evil Maid work.
> First, processors should have a dedicated minion core running an extremely
> simple, well-tested firmware capable only of taking and storing SHA256
> hashes, with the hash algorithm replaceable via firmware upgrade if/when
> SHA256 is broken or superseded. This minion core then has read-only access
> to all system memory, including CPU caches and the nonvolatile firmware
> storage of all other minions. Its own firmware storage should be exposed
> read-only on a hardware pin for external verification. This minion core
> would then have its own external bus directly to a TPM or equivalent
> security chip.
This is in line with our current thinking: a 'master' minion core
would be responsible for initial boot.
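To make the measurement step concrete, here is a minimal sketch in Python
(purely illustrative; the image names and the measure() helper are my own
assumptions, not lowRISC code or a proposed API):

```python
import hashlib

# Sketch of the measurement step the master minion core would perform at
# boot: hash each firmware/boot image with SHA-256 before anything runs.
# Image names below are placeholders invented for this example.

def measure(image: bytes) -> bytes:
    """Return the SHA-256 digest of a firmware or boot image."""
    return hashlib.sha256(image).digest()

# Example: measure two dummy firmware blobs.
images = {
    "minion0.fw": b"\x00" * 1024,   # placeholder firmware blob
    "boot.bin":   b"\x01" * 2048,   # placeholder boot binary
}
digests = {name: measure(blob) for name, blob in images.items()}
for name, digest in digests.items():
    print(name, digest.hex())
```

The point of keeping the measurement firmware this small is that it is the
one component that must be trusted unconditionally, so it should be simple
enough to audit exhaustively.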
> The minion core then reports hashes of all firmware and boot binaries
> (basically, anything modifiable that can't be protected by full disk
> encryption) via PCR registers. The minion can also check the hashes against
> its own stored values, which adds security (hashes beyond those the TPM
> supports can be checked in parallel) and convenience (non-TPM key stores
> such as smart cards can be used, given more advanced firmware to manage
> unsealing operations). Use of cryptographic signatures could also be
> considered. The system would refuse to boot if any hash marked as critical
> did not match, and issue an obvious warning if non-critical hashes didn't
> match.
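For concreteness, the measure-extend-and-check flow could be modelled like
this (a toy sketch: the function names and the critical/non-critical policy
split are assumptions for illustration, though the extend rule itself,
PCR_new = SHA-256(PCR_old || measurement), is how TPM PCRs behave):

```python
import hashlib

PCR_SIZE = 32  # SHA-256 digest length, matching the proposed hash

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extension: hash the old PCR value with the new digest."""
    return hashlib.sha256(pcr + measurement).digest()

def check_boot(measurements, expected, critical):
    """Extend a PCR with each measurement and apply the boot policy.

    Returns (final_pcr, boot_allowed, warnings): refuse to boot on any
    critical mismatch, merely warn on non-critical ones.
    """
    pcr = b"\x00" * PCR_SIZE
    warnings = []
    for name, digest in measurements.items():
        pcr = extend(pcr, digest)
        if digest != expected.get(name):
            if name in critical:
                return pcr, False, warnings   # refuse to boot
            warnings.append(name)             # non-critical: warn only
    return pcr, True, warnings
```

Because each extension folds the previous PCR value into the next hash, a
mismatched measurement anywhere in the chain changes the final PCR, which is
what lets the TPM seal secrets against a known-good boot sequence.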
> In order to protect against DMA attacks, the IOMMU should also isolate all
> DMA devices by default, configuring each device with a small, unshared
> buffer, so that the system can't be tampered with between trusted boot and
> the point at which OS-configured IOMMU protection is enabled. This would
> mean that not all minion cores would need to be untampered with to boot
> securely (that's why there are non-critical hashes above), allowing use of
> the still-trusted features of the system while awaiting repair.
Being able to isolate devices at will on the on-chip network is also
useful, e.g. for periodic software attestation.
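The default-deny window policy you describe can be modelled in a few lines
(a toy model only; the class name, window sizes, and device names are
invented for the sketch, and a real IOMMU would of course work at
page-table granularity):

```python
class SimpleIOMMU:
    """Toy model: each DMA device is confined to a small private window."""

    def __init__(self):
        self.windows = {}   # device -> (base, size)

    def attach(self, device, base, size):
        """Give a device an exclusive DMA window; reject overlaps so
        buffers stay unshared."""
        for dev, (b, s) in self.windows.items():
            if base < b + s and b < base + size:
                raise ValueError(f"{device} window overlaps {dev}")
        self.windows[device] = (base, size)

    def dma_allowed(self, device, addr, length):
        """Default deny: only accesses wholly inside the device's own
        window are permitted."""
        if device not in self.windows:
            return False
        base, size = self.windows[device]
        return base <= addr and addr + length <= base + size
```

Starting from deny-all and handing out small disjoint windows means an
untrusted or compromised device can at worst corrupt its own buffer, never
the measured boot state, until the OS installs its own mappings.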
> Finally, a means of preventing unauthorized write access in the first place
> is needed. One way might be to have a dedicated minion core with exclusive
> write access to all system firmware, including a store for the system's boot
> partition. Another might be an equivalent to ARM's TrustZone, with only the
> secure context getting write access to the firmware and boot stores. Either
> way, when booted into a firmware-upgrade mode (invoked only by a hardware
> switch), the system would use the trusted minion core to verify the secure
> operating system and any firmware/software it relies on, then enable users
> to select firmware images designed for different parts of the system and
> verify cryptographic signatures on them before flashing to the chip and
> resealing the TPM.
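The verify-before-flash gate might look roughly like this. A real design
would check an asymmetric signature (e.g. Ed25519) against a key sealed in
the TPM; a stdlib HMAC stands in for the signature here purely so the
sketch stays self-contained, and the function and parameter names are my
own inventions:

```python
import hashlib
import hmac

def verify_and_flash(image: bytes, tag: bytes, key: bytes, flash):
    """Flash an image only if its authentication tag verifies.

    `flash` is a callback standing in for the actual write to the
    firmware store; it is never invoked for an unverified image.
    """
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # Constant-time comparison, so the check doesn't leak tag bytes.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("signature check failed; refusing to flash")
    flash(image)   # only reached for a verified image
```

The essential property is that the write path and the verification live
behind the same hardware-gated mode, so nothing outside that mode can
reach `flash` at all.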
> Caveat: if the user signs their firmware on the same machine, then
> compromise of that machine would still compromise the signing keys and
> enable the attacker to write seemingly legitimate boot firmware. A dedicated
> firmware-writing system really accomplishes only two things: first, fewer
> systems to fully trust, since a single, highly (software) secured system
> could be used to sign firmware for multiple systems; and second, it prevents
> the attacker from writing bad firmware immediately, since the secure boot
> mode can only be triggered by hardware, giving the user more time to notice
> the problem and correct it by loading new keys.
> What isn't covered here is a way to load signing keys securely, as I'm not
> sure how. Perhaps a dedicated hardware input allowing connection to another
> trusted machine?
> The only other thing I'd like to add about this is that I'm not a security
> researcher, so while I think this would make system integrity a whole lot
> better without too much extra hardware I might have missed something
> important, so if only one thing comes from this, seek advice from a security
> researcher who works with system integrity, such as Joanna Rutkowska (who
> wrote a lot of the stuff I'm basing this on).
Could you perhaps share references for works you're basing your proposal on?