XSA-248 through XSA-251 do not affect the security of Qubes OS
https://www.qubes-os.org/news/2017/12/12/xsa-245-251-qubes-not-affected/
The Xen Project has published Xen Security Advisories 248 through 251 (XSA-248 through XSA-251).
These XSAs do not affect the security of Qubes OS, and no user action is necessary.
These XSAs have been added to the XSA Tracker (https://www.qubes-os.org/security/xsa/):
https://www.qubes-os.org/security/xsa/#248
https://www.qubes-os.org/security/xsa/#249
https://www.qubes-os.org/security/xsa/#250
https://www.qubes-os.org/security/xsa/#251
What’s New in the Xen Project Hypervisor 4.10
https://blog.xenproject.org/2017/12/14/whats-new-in-the-xen-project-hypervisor-4-10/
I am pleased to announce the release of the Xen Project Hypervisor 4.10. As always, we focused on improving code quality, security hardening as well as enabling new features. The Xen Project Hypervisor 4.10 continues to take a security-first approach with improved architecture and more centralized documentation. The release is equipped with the latest hardware […]
Xen Project Member Spotlight: Bitdefender
https://blog.xenproject.org/2017/12/18/xen-project-member-spotlight-bitdefender/
The Xen Project is comprised of a diverse set of member companies and contributors that are committed to the growth and success of the Xen Project Hypervisor. The Xen Project Hypervisor is a staple technology for server and cloud vendors, and is gaining traction in the embedded, security and automotive space. This blog series highlights […]
Comment on Unikraft: Unleashing the Power of Unikernels by Unikraft: Unleashing the Power of Unikernels – Nerd Junkie
https://blog.xenproject.org/2017/12/05/unikraft-unleashing-the-power-of-unikernels/#comment-421
[…] This article originally appeared at Xen Project. […]
Comment on Unikraft: Unleashing the Power of Unikernels by Проект Xen представил Unikraft для выполнения приложений поверх гипервизора (The Xen Project introduced Unikraft for running applications on top of the hypervisor)
https://blog.xenproject.org/2017/12/05/unikraft-unleashing-the-power-of-unikernels/#comment-423
[…] of the Xen hypervisor announced the Unikraft project, within the framework of which is being developed […]
Announcing the Windows PV HID Drivers
https://blog.xenproject.org/2017/12/20/announcing-the-windows-pv-hid-drivers/
Some recent patches to the QEMU source fix a long standing problem where the PV vkbd backend was unable to function correctly without the PV fb backend, which effectively made it pointless to implement PV HID (i.e. keyboard and mouse) frontends for HVM guests. Now that the problem has been fixed, I’m happy to announce […]
Comment on What’s New in the Xen Project Hypervisor 4.10 by fbifido
https://blog.xenproject.org/2017/12/12/whats-new-in-the-xen-project-hypervisor-4-10/#comment-436
When are we going to get native hyper-converge like pernixdata-fvp?
Comment on What’s New in the Xen Project Hypervisor 4.10 by Xen Hypervisor 4.10 Focuses on Security and Better ARM Support
https://blog.xenproject.org/2017/12/12/whats-new-in-the-xen-project-hypervisor-4-10/#comment-438
[…] Xen Project released version 4.10 of their hypervisor with an improved architecture for x86, support for ARM processor hardware […]
Announcement regarding XSA-254 (Meltdown and Spectre attacks)
https://www.qubes-os.org/news/2018/01/04/xsa-254-meltdown-spectre/
The Qubes Security Team is currently investigating the extent to which
XSA-254 (https://xenbits.xen.org/xsa/advisory-254.html) (and the Meltdown (https://meltdownattack.com/) and Spectre (https://spectreattack.com/) attacks more generally)
affect the security of Qubes OS. The practical impact of these attacks
on Qubes is currently unclear. While the Qubes Security Team is a
member of the Xen predisclosure list (https://www.xenproject.org/security-policy.html), XSA-254 (https://xenbits.xen.org/xsa/advisory-254.html) was disclosed
ahead of schedule, so our team has not yet had a
chance to analyze these attacks, nor has the Xen Project released any
patches associated with XSA-254 (https://xenbits.xen.org/xsa/advisory-254.html). We are continuing to monitor the
situation closely. Once the Security Team makes a determination about
the impact on Qubes, we will make another announcement, update the
XSA Tracker (https://www.qubes-os.org/security/xsa/), and, if appropriate, issue a Qubes Security Bulletin (https://www.qubes-os.org/security/bulletins/)
with information about patching.
Xen Project Spectre/Meltdown FAQ
https://blog.xenproject.org/2018/01/04/xen-project-spectremeltdown-faq/
Google’s Project Zero announced several information leak vulnerabilities affecting all modern superscalar processors. Details can be found on their blog, and in the Xen Project Advisory 254. To help our users understand the impact and our next steps forward, we put together the following FAQ. Note that we will update the FAQ as new information […]
Fedora 26 TemplateVM Upgrade
https://www.qubes-os.org/news/2018/01/06/fedora-26-upgrade/
Fedora 25 reached EOL (end-of-life (https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle#Maintenance_Schedule)) on 2017-12-12. We sincerely
apologize for our failure to provide timely notice of this event. It
is strongly recommended that all Qubes users upgrade their Fedora 25
TemplateVMs and StandaloneVMs to Fedora 26 immediately. We provide
step-by-step upgrade instructions (https://www.qubes-os.org/doc/template/fedora/upgrade-25-to-26/) for upgrading your existing
TemplateVMs and StandaloneVMs in-place on both Qubes 3.2 and Qubes
4.0. For a complete list of TemplateVM versions supported for your
specific version of Qubes, see Supported TemplateVM Versions (https://www.qubes-os.org/doc/supported-versions/#templatevms).
We also provide fresh Fedora 26 TemplateVM packages through the
official Qubes repositories, which you can get with the following
commands (in dom0).
Standard Fedora 26 TemplateVM:
$ sudo qubes-dom0-update qubes-template-fedora-26
Minimal (https://www.qubes-os.org/doc/templates/fedora-minimal/) Fedora 26 TemplateVM:
$ sudo qubes-dom0-update qubes-template-fedora-26-minimal
After upgrading to a Fedora 26 TemplateVM, please remember to set all
qubes that were using the old template to use the new one. The
instructions to do this can be found in the upgrade instructions (https://www.qubes-os.org/doc/template/fedora/upgrade-25-to-26/)
for your specific version.
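As an illustration only, switching a single qube to the new template
from a dom0 terminal might look like the commands below. The qube name
"work" is hypothetical, and the qvm-prefs syntax shown (positional in
Qubes 4.0, -s in Qubes 3.2) should be checked against the linked
instructions for your release:
Qubes 4.0:
$ qvm-shutdown --wait work
$ qvm-prefs work template fedora-26
Qubes 3.2:
$ qvm-shutdown --wait work
$ qvm-prefs -s work template fedora-26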
Please note that no user action is required regarding the OS version
in dom0. If you’re using Qubes 3.2 or 4.0, there is no dom0 OS
upgrade available, since none is currently required. For details,
please see our Note on dom0 and EOL (https://www.qubes-os.org/doc/supported-versions/#note-on-dom0-and-eol).
If you’re using an older version of Qubes than 3.2, we strongly
recommend that you upgrade to 3.2, as older versions are no longer
supported.
Comment on Xen Project Spectre/Meltdown FAQ by Meltdown und Spectre - Updates bringen Performance Probleme - JACOB Blog (Meltdown and Spectre: Updates Bring Performance Problems)
https://blog.xenproject.org/2018/01/04/xen-project-spectremeltdown-faq/#comment-447
[…] Security Advisory (XSA-254) / FAQ […]
QSB #37: Information leaks due to processor speculative execution bugs (XSA-254, Meltdown & Spectre)
https://www.qubes-os.org/news/2018/01/11/qsb-37/
Dear Qubes Community,
We have just published Qubes Security Bulletin (QSB) #37:
Information leaks due to processor speculative execution bugs.
The text of this QSB is reproduced below. This QSB and its accompanying
signatures will always be available in the Qubes Security Pack
(qubes-secpack).
View QSB #37 in the qubes-secpack:
https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-037-2018.txt
Learn about the qubes-secpack, including how to obtain, verify, and
read it:
https://www.qubes-os.org/security/pack/
View all past QSBs:
https://www.qubes-os.org/security/bulletins/
View XSA-254 in the XSA Tracker:
https://www.qubes-os.org/security/xsa/#254
---===[ Qubes Security Bulletin #37 ]===---
January 11, 2018
Information leaks due to processor speculative execution bugs
Summary
========
On the night of January 3, two independent groups of researchers
announced the results of their months-long work into abusing modern
processors' so-called speculative mode to leak secrets from the system's
privileged memory [1][2][3][4]. As a response, the Xen Security Team
published Xen Security Advisory 254 [5]. The Xen Security Team did _not_
previously share information about these problems via their (non-public)
security pre-disclosure list, of which the Qubes Security Team is a
member.
In the limited time we've had to analyze the issue, we've come to the
following conclusions about the practical impact on Qubes OS users and
possible remedies. We'll also share a plan to address the issues in a
more systematic way in the coming weeks.
Practical impact and limiting factors for Qubes users
======================================================
## Fully virtualized VMs offer significant protection against Meltdown
Meltdown, the most reliable attack of the three discussed, cannot be
exploited _from_ a fully-virtualized (i.e. HVM or PVH) VM. It does not
matter whether the _target_ VM (i.e. the one from which the attacker
wants to steal secrets) is fully-virtualized. In Qubes 3.x, all VMs are
para-virtualized (PV) by default, though users can choose to create
fully-virtualized VMs. PV VMs do not protect against the Meltdown
attack. In Qubes 4.0, almost all VMs are fully-virtualized by default
and thus offer protection. However, the fully-virtualized VMs in Qubes
3.2 and in release candidates 1-3 of Qubes 4.0 still rely on PV-based
"stub domains", making it possible for an attacker who can chain another
exploit for qemu to attempt the Meltdown attack.
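To check how a given qube is virtualized, the following dom0 commands
are a rough sketch (the qube name "personal" is hypothetical):
$ qvm-prefs personal virt_mode   # Qubes 4.0: prints pv, hvm, or pvh
$ qvm-prefs personal             # Qubes 3.2: lists all VM properties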
## Virtualization makes at least one variant of Spectre seem difficult
Of the two Spectre variants, it _seems_ that at least one of them might
be significantly harder to exploit under Xen than under monolithic
systems because there are significantly fewer options for the attacker
to interact with the hypervisor.
## All attacks are read-only
It's important to stress that these attacks allow only _reading_ memory,
not modifying it. This means that an attacker cannot use Spectre or
Meltdown to plant any backdoors or otherwise compromise the system in
any persistent way. Thanks to the Qubes OS template mechanism, which is
used by default for all user and system qubes (AppVMs and ServiceVMs),
simply restarting a VM should bring it back to a known good state for
most attacks, wiping out the potential attacking code in the
TemplateBasedVM (unless an attacker found a way to put triggers within
the user's home directory; please see [8] for more discussion).
## Only running VMs are vulnerable
Since Qubes OS is a memory-hungry system, it seems that an attacker
would only be able to steal secrets from VMs running concurrently with
the attacking VM. This is because any pages from shutdown VMs will
typically very quickly get allocated to other, running VMs and get wiped
as part of this procedure.
## PGP and other cryptographic keys are at risk
For VMs that happen to be running concurrently with the attacking VM, it
seems possible that these attacks might allow the attacker to steal
cryptographic keys, including private PGP keys.
## Disk encryption and screenlocker passwords are at risk
There is one VM that is always running concurrently with other VMs: the
AdminVM (dom0). This VM contains at least two important user secrets:
- The disk (LUKS) encryption key (and likely the passphrase)
- The screenlocker passphrase
In order to make use of these secrets, however, the attacker would have
to conduct a physical attack on the user's computer (e.g. steal the
laptop physically). Users who use the same passphrase to encrypt their
backups may also be affected.
Additional remedies available to Qubes users
=============================================
Thanks to the explicit Qubes partitioning model, it should be
straightforward for users to implement additional hygiene by ensuring
that, whenever less trusted VMs are running, highly sensitive VMs are
shut down.
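For example (the qube names here are hypothetical), a user about to
work in a less trusted qube could first shut down a sensitive one from
dom0:
$ qvm-shutdown --wait vault
$ qvm-start untrusted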
Additionally, for some of the VMs that must run anyway (e.g. networking
and USB qubes), it is possible to recreate the VM each time the user
suspects it may have been compromised, e.g. after disconnecting from a
less trusted Wi-Fi network, or unplugging an untrusted USB device. In
Qubes 4.0, this is even easier, since Disposable VMs can now be used for
the networking and USB VMs (see [10]).
The Qubes firewalling and networking systems also make it easy to limit
the networking resources VMs can reach, including making VMs completely
offline. While firewalling in Qubes is not intended to be a
leak-prevention mechanism, it likely has this effect in a broad class
of attack scenarios. Moreover, making a VM completely offline
(i.e. setting its NetVM to "none") is a more robust way to limit the
ability of an attacker to leak secrets stolen from memory to the outside
world. While this mechanism should not be considered bullet-proof -- it
is still possible to mount a specialized attack that exploits a covert
channel to leak the data -- it could be considered as an additional
layer of defense.
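As a rough sketch (the qube name "email" is hypothetical, and the
exact value syntax differs between releases), making a qube completely
offline from dom0 might look like this:
$ qvm-prefs -s email netvm none   # Qubes 3.2
$ qvm-prefs email netvm ''        # Qubes 4.0: empty value means no NetVM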
Finally, Qubes offers mechanisms to allow for additional protection of
user secrets, especially cryptographic keys, such as PGP keys used for
encryption and signing. Qubes Split GPG [6] allows the user to keep
these keys in an isolated VM. So, for example, the user might be running
her "development" qube in parallel with a compromised qube, while
keeping the GPG backend VM (where she keeps the signing key that she
uses to sign her software releases) shut down most of the time (because
it's only needed when a release is being made). This way, the software
signing keys will be protected from the attack.
The user could take this further by using Qubes Split GPG with a backend
qube running on a physically separate computer, as has been demonstrated
with the Qubes USB Armory project [7].
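To sketch how this looks day to day (the backend qube name "work-gpg"
and the file name are hypothetical, and we assume the default Split GPG
option filter permits these operations), the client qube calls the
backend through qubes-gpg-client instead of invoking gpg directly:
$ export QUBES_GPG_DOMAIN=work-gpg
$ qubes-gpg-client --list-secret-keys        # keys never leave work-gpg
$ qubes-gpg-client --armor --detach-sign release.tar.gz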
(Proper) patching
==================
Mitigations against the CPU bugs discussed here are in development but
have not yet been released. The Xen Project is working on a set of
patches (see XSA 254 [5] for updates). At the same time, we are working
on similar mitigations where feasible.
## Qubes 4.0
As explained above, almost all the VMs in Qubes 4.0 are
fully-virtualized by default (specifically, they are HVMs), which
mitigates the most severe issue, Meltdown. The only PV domains in
Qubes 4.0 are stub domains, which we plan to eliminate by switching to
PVH where possible. This will be done in Qubes 4.0-rc4 and also
released as a normal update for existing Qubes 4.0 installations. The
only remaining PV stub domains will be those used for VMs with PCI
devices. (In the default configuration, these are sys-net and
sys-usb.) The Xen Project has not yet provided any solution for this
[9].
## Qubes 3.2
For Qubes 3.2, we plan to release an update that will make almost all
VMs run in a fully-virtualized mode. Specifically, we plan to backport
PVH support from Qubes 4.0 and enable it for all VMs without PCI
devices. After this update, all VMs that previously ran in PV mode (and
that do not have PCI devices) will subsequently run in PVH mode, with
the exception of stub domains. Any HVMs will continue to run in HVM
mode.
There are two important points regarding the Qubes 3.2 update. First,
this update will work only when the hardware supports VT-x or equivalent
technology. Qubes 3.2 will continue to work on systems without VT-x, but
there will be no mitigation against Meltdown on such systems. Users on
systems that do not support VT-x are advised to take this into
consideration when assessing the trustworthiness of their systems.
Second, the Qubes 3.2 update will also switch any VMs that use a custom
kernel to PVH mode, which will temporarily prevent them from working.
This is a deliberate security choice to protect the system as a whole
(rather than leaving VMs with custom kernels in PV mode, which would
allow attackers to use them to mount Meltdown attacks). In order to use
a VM with a custom kernel after the update (whether the custom kernel
was installed in dom0 or inside the VM), users must either manually
change the VM back to PV or change the kernel that the VM uses. (Kernel
>=4.11 is required, and booting an in-VM kernel is not supported in PVH
mode.)
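For the second option, a minimal sketch follows (the qube name and
kernel version are hypothetical; dom0-provided kernels are normally
found under /var/lib/qubes/vm-kernels/):
$ ls /var/lib/qubes/vm-kernels/
$ qvm-prefs -s myvm kernel 4.14.13-1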
We'll update this bulletin and issue a separate announcement once
patches are available.
Suggested actions after patching
=================================
While the potential attacks discussed in this bulletin are severe,
recovering from these potential attacks should be easier than in the
case of an exploit that allows the attacker to perform arbitrary code
execution, resulting in a full system compromise. Specifically, we don't
believe it is necessary to use Qubes Paranoid Backup Restore Mode to
address these vulnerabilities because of the strict read-only character
of the attacks discussed. Instead, users who believe they are affected
should consider taking the following actions:
1. Changing the screenlocker passphrase.
2. Changing the disk encryption (LUKS) passphrase (see the example
command after this list).
3. Re-encrypting the disk to force a change of the disk encryption
_key_. (In practice, this can be done by reinstalling Qubes and
restoring from a backup.)
4. Evaluating the odds that other secrets have been compromised,
such as other passwords and cryptographic keys (e.g. private
PGP, SSH, or TLS keys), and generating new secrets. It is unclear
how easy it might be for attackers to steal such data in a
real-world Qubes environment.
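For item 2 above, a minimal sketch of changing the LUKS passphrase
from dom0 follows (the device path is hypothetical and depends on the
installation; cryptsetup prompts for an existing passphrase first):
$ lsblk --fs                      # identify the crypto_LUKS partition
$ sudo cryptsetup luksChangeKey /dev/sda2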
Technical discussion
=====================
From a (high-level) architecture point of view, the attacks discussed in
this bulletin should not concern Qubes OS much. This is because,
architecture-wise, there should be no secrets or other sensitive data in
the hypervisor memory. This is in stark contrast to traditional
monolithic systems, where there is an abundance of sensitive information
living in the kernel (supervisor).
Unfortunately, for rather accidental reasons, the implementation of the
particular hypervisor we happen to be using to implement isolation for
Qubes, i.e. the Xen hypervisor, undermines this clean architecture by
internally mapping all physical memory pages into its address space. Of
course, under normal circumstances, this isn't a security problem,
because no one is able to read the hypervisor memory. However, the bugs
we're discussing today might allow an attacker to do just that. This is
a great example of how difficult it can be to analyze the security
impact of a feature when limiting oneself to only one layer of
abstraction, especially a high-level one (also known as the "PowerPoint
level").
At the same time, we should point out that the use of full
virtualization prevents at least one of the attacks, and incidentally
the most powerful one, i.e. the Meltdown attack.
However, we should also point out that, in Qubes 3.2, even HVMs still
rely on PV stub domains to provide I/O emulation (qemu). In the case of
an additional vulnerability within qemu, an attacker might compromise
the PV stub domain and attempt to perform the Meltdown attack from
there.
This limitation also applies to HVMs in release candidates 1-3 of Qubes
4.0. Qubes 4.0-rc4, which we plan to release next week, should be using
PVH instead of HVM for almost all VMs without PCI devices by default,
thus eliminating this avenue of attack. As discussed in the Patching
section, VMs with PCI devices will be the exception, which means that
the Meltdown attack could in theory still be conducted if the attacker
compromises a VM with PCI devices and afterward compromises the
corresponding stub domain via a hypothetical qemu exploit.
Unfortunately, there is not much we can do about this without
cooperation from the Xen project [9][11].
Here is an overview of the VM modes that correspond to each Qubes OS
version:
VM type \ Qubes OS version | 3.2 | 3.2+ | 4.0-rc1-3 | 4.0-rc4 |
---------------------------------- | --- | ---- | --------- | ------- |
Default VMs without PCI devices | PV | PVH | HVM | PVH |
Default VMs with PCI devices | PV | PV | HVM | HVM |
Stub domains - VMs w/o PCI devices | PV | N/A | PV | N/A |
Stub domains - VMs w/ PCI devices | PV | PV | PV | PV |
("3.2+" denotes Qubes 3.2 after applying the update discussed above,
which will result in most VMs running in PVH mode. "N/A" means "not
applicable," since PVH VMs do not require stub domains.)
Credits
========
See the original Xen Security Advisory.
References
===========
[1] https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
[2] https://meltdownattack.com/
[3] https://meltdownattack.com/meltdown.pdf
[4] https://spectreattack.com/spectre.pdf
[5] https://xenbits.xen.org/xsa/advisory-254.html
[6] https://www.qubes-os.org/doc/split-gpg/
[7] https://github.com/inversepath/qubes-qrexec-to-tcp
[8] https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/
[9] https://lists.xenproject.org/archives/html/xen-devel/2018-01/msg00403.html
[10] https://www.qubes-os.org/news/2017/10/03/core3/
[11] https://blog.xenproject.org/2018/01/04/xen-project-spectremeltdown-faq/
--
The Qubes Security Team
https://www.qubes-os.org/security/
Qubes Air: Generalizing the Qubes Architecture
https://www.qubes-os.org/news/2018/01/22/qubes-air/
The Qubes OS project has been around for nearly 8 years now, since its original
announcement (https://blog.invisiblethings.org/2010/04/07/introducing-qubes-os.html) back in April 2010 (and the actual origin
date can be traced back to November 11th, 2009, when an initial email
introducing this project was sent within ITL internally). Over these years Qubes
has achieved reasonable success: according to our estimates, it has (https://www.qubes-os.org/statistics/)
nearly 30k regular users. This could even be considered a great success given
that 1) it is a new operating system, rather than an application that can be
installed in the user’s favorite OS; 2) it has introduced a (radically?) new
approach (https://www.qubes-os.org/video-tours/) to managing one’s digital life (i.e. an explicit
partitioning model into security domains); and last but not least, 3) it has
very specific hardware requirements, which is the result of using Xen
as the hypervisor and Linux-based Virtual Machines (VMs) for networking and USB
qubes. (The term “qube” refers to a compartment – not necessarily a VM –
inside a Qubes OS system. We’ll explain this in more detail below.)
For the past several years, we’ve been working hard to bring you Qubes
4.0 (https://www.qubes-os.org/news/2017/07/31/qubes-40-rc1/), which features state-of-the-art technology not seen in previous
Qubes versions, notably the next generation Qubes Core Stack (https://www.qubes-os.org/news/2017/10/03/core3/) and
our unique Admin API (https://www.qubes-os.org/news/2017/06/27/qubes-admin-api/). We believe this new platform (Qubes 4
represents a major rewrite of the previous Qubes codebase!) paves the way to
solving many of the obstacles mentioned above.
The new, flexible architecture of Qubes 4 will also open up new possibilities,
and we’ve recently been thinking about how Qubes OS should evolve in the long
term. In this article, I discuss this vision, which we call Qubes Air. It should
be noted that what I describe in this article has not been implemented yet.
Why?
Before we take a look at the long-term vision, it might be helpful to understand
why we would like the Qubes architecture to further evolve. Let us quickly recap
some of the most important current weaknesses of Qubes OS (including Qubes 4.0).
Deployment cost (aka “How do I find a Qubes-compatible laptop?”)
Probably the biggest current problem with Qubes OS – a problem that prevents
its wider adoption – is the difficulty of finding a compatible laptop on which
to install it. Then, the whole process of needing to install a new operating
system, rather than just adding a new application, scares many people away.
That’s hardly surprising.
This problem of deployment is not limited to Qubes OS, by the way. It’s just
that, in the case of Qubes OS, these problems are significantly more pronounced
due to the aggressive use of virtualization technology to isolate not just apps,
but also devices, as well as incompatibilities between Linux drivers and modern
hardware. (While these driver issues are not inherent to the architecture of
Qubes OS, they affected us nonetheless, since we use Linux-based VMs to handle
devices.)
The hypervisor as a single point of failure
Since the beginning, we’ve relied on virtualization technology to isolate
individual qubes from one another. However, this has led to the problem of
over-dependence on the hypervisor. In recent years, as more and more top-notch
researchers have begun scrutinizing Xen, a number of security bugs (https://xenbits.xen.org/xsa/)
have been discovered. While many (https://www.qubes-os.org/security/xsa/) of them did not affect the
security of Qubes OS, there were still too many that did. :(
Potential Xen bugs present just one, though arguably the most serious, security
problem. Other problems arise from the underlying architecture of the x86
platform, where various inter-VM side- and covert-channels are made possible
thanks to the aggressively optimized multi-core CPU architecture, most
spectacularly demonstrated by the recently published Meltdown and Spectre
attacks (https://meltdownattack.com/). Fundamental problems in other areas of the underlying
hardware have also been discovered, such as the Row Hammer Attack (https://googleprojectzero.blogspot.com/2015/03/exploiting-dram-rowhammer-bug-to-gain.html).
This leads us to the conclusion that, at least for some applications, we would
like to be able to achieve better isolation than currently available hypervisors
and commodity hardware can provide.
How?
One possible solution to these problems is actually to “move Qubes to the
cloud.” Readers who are allergic to the notion of having their private
computations running in the (untrusted) cloud should not give up reading just
yet. Rest assured that we will also discuss other solutions not involving the
cloud. The beauty of Qubes Air, we believe, lies in the fact that all these
solutions are largely isomorphic, from both an architecture and code point of
view.
Example: Qubes in the cloud
Let’s start with one critical need that many of our customers have expressed:
Can we have “Qubes in the Cloud”?
As I’ve emphasized over the years, the essence of Qubes does not rest in the Xen
hypervisor, or even in the simple notion of “isolation,” but rather in the
careful decomposition (https://invisiblethingslab.com/resources/2014/Software_compartmentalization_vs_physical_separation.pdf) of various workflows, devices, apps
across securely compartmentalized containers. Right now, these are mostly
desktop workflows, and the compartments just happen to be implemented as Xen
VMs, but neither of these aspects is essential to the nature of Qubes.
Consequently, we can easily imagine Qubes running on top of VMs that are hosted
in some cloud, such as Amazon EC2, Microsoft Azure, Google Compute Engine, or
even a decentralized computing network, such as Golem (https://golem.network/). This is illustrated (in
a very simplified way) in the diagram below:
[Diagram omitted from this excerpt; see the original post for the
illustration.]