So far, this is nothing radically new compared to what we already have in Qubes
OS, especially since we have nearly completed our effort to abstract the Qubes
architecture away from Xen-specific details – an effort we code-named Qubes
Odyssey.
What is radically different is that we now want to allow more than one Zone to
exist in a single Qubes system!
In order to support multiple Zones, we have to provide transparent proxying of
qrexec services across Zones, so that a qube need not be aware that another qube
from which it requests a service resides in a different Zone. This is the main
reason we’ve introduced multiple “local” Admin qubes – one for each Zone. Slave
Admin qubes are also bridges that allow the Master Admin qube to manage the
whole system (e.g. request the creation of new qubes, connect and set up storage
for qubes, and set up networking between qubes).
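As a sketch of what this routing logic might look like (a minimal sketch under stated assumptions: the names `QrexecCall`, `route_call`, and `ZONE_TABLE` are illustrative, not real Qubes APIs), a local Admin qube could decide per request whether to deliver it within its own Zone or proxy it to the Admin qube bridging to another Zone:

```python
# Hypothetical sketch of cross-Zone qrexec routing inside a local Admin qube.
# All names here are illustrative assumptions, not actual Qubes interfaces.

from dataclasses import dataclass

@dataclass
class QrexecCall:
    service: str  # e.g. "qubes.Filecopy"
    source: str   # calling qube
    target: str   # target qube (may live in another Zone)

# Illustrative registry mapping qubes to the Zone that hosts them.
ZONE_TABLE = {
    "work": "local-xen",
    "backup": "cloud-kvm",
}

LOCAL_ZONE = "local-xen"

def route_call(call: QrexecCall) -> str:
    """Return where the Admin qube should forward the call."""
    zone = ZONE_TABLE.get(call.target)
    if zone is None:
        return "deny"                 # unknown target: reject
    if zone == LOCAL_ZONE:
        return "deliver-locally"      # ordinary intra-Zone qrexec
    # Cross-Zone: forward via the Admin qube bridging to that Zone,
    # so the caller never learns the target is remote.
    return f"proxy-via-admin:{zone}"
```

The point of the sketch is that the calling qube sees the same qrexec semantics either way; only the Admin qube knows whether the target is local.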
Under the hood: qubes’ interfaces
Within one Zone, there are multiple qubes. Let me stress that the term “qube”
is very generic and does not imply any specific technology. It could be a VM
under some virtualization system. It could be some kind of container or a
physically separate computing device, such as a Raspberry Pi, an Arduino board,
or a similar device.
While a qube can be implemented in many different ways, there are certain
features it should have:
- A qube should implement a vchan endpoint (https://github.com/QubesOS/qubes-core-vchan-xen). The actual technology on
  top of which this will be implemented – whether some shared memory within a
  virtualization or containerization system, TCP/IP, or something
  else (https://tools.ietf.org/html/rfc1149) – will be specific to the kind
  of Zone it occupies.
- A qube should implement a qrexec (https://www.qubes-os.org/doc/qrexec3/) endpoint, though this should be very
  straightforward once a vchan endpoint has been implemented. This
  ensures that most (all?) of the qrexec services, which are the basis for most
  of the integration, apps, and services we have created for Qubes, should
  Just Work(TM).
- Optionally, for some qubes, a GUI endpoint should also be implemented (see
  the discussion below).
- In order to be compatible with Qubes networking (https://blog.invisiblethings.org/2011/09/28/playing-with-qubes-networking-for-fun.html), a qube should expect
  one uplink network interface (to be exposed by the management technology
  specific to that particular Zone) and, optionally, multiple downlink
  network interfaces (if it is to work as a proxy qube, e.g. a VPN or
  firewalling qube).
- Finally, a qube should expect two kinds of volumes to be exposed by the
  Zone-specific management stack:
  - one read-only, intended to be used as the root filesystem by the
    qube (the management stack might also expose an auxiliary volume for
    implementing the copy-on-write illusion for the VM, like the volatile.img
    we currently expose on Qubes),
  - and one read-writable, specific to this qube, intended to be used as
    home directory-like storage.
This is, naturally, to allow the implementation of Qubes templates (https://www.qubes-os.org/getting-started/#appvms-qubes-and-templatevms), a
mechanism that we believe brings not only a lot of convenience but also some
security benefits (https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/).
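The requirements above can be summarized as an interface contract. The sketch below is a hypothetical illustration (not actual Qubes code): an abstract base class capturing what any qube implementation, whether VM, container, or separate physical device, would need to provide.

```python
# Hypothetical interface a "qube" implementation might expose, mirroring the
# requirements listed above. Illustrative only; not actual Qubes code.

from abc import ABC, abstractmethod

class Qube(ABC):
    @abstractmethod
    def vchan_connect(self, peer: str, port: int):
        """Open a vchan link. The underlying transport (shared memory,
        TCP/IP, ...) is chosen by the Zone, not by the qube."""

    @abstractmethod
    def qrexec_register(self, service: str, handler):
        """Register a qrexec service handler, layered on top of vchan."""

    def gui_endpoint(self):
        """Optional: only desktop-oriented qubes implement a GUI endpoint."""
        return None

    @abstractmethod
    def attach_uplink(self, iface: str):
        """Exactly one uplink interface, exposed by the Zone's management
        stack; proxy qubes may additionally receive downlink interfaces."""

    @abstractmethod
    def attach_volumes(self, root_ro: str, private_rw: str):
        """root_ro: shared read-only template root (enables templates);
        private_rw: per-qube read-writable, home directory-like storage."""
```

Making the GUI endpoint a non-abstract default reflects the design choice discussed below: server-side or embedded qubes can simply omit it.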
GUI virtualization considerations
Since the very beginning (https://www.qubes-os.org/attachment/wiki/QubesArchitecture/arch-spec-0.3.pdf), Qubes was envisioned as a system for
desktop computing (as opposed to servers). This implied that GUI
virtualization (https://www.qubes-os.org/doc/gui/) was part of the core Qubes infrastructure.
However, with some of the security-optimized management infrastructure we have
recently added to Qubes OS, i.e. Salt stack integration (https://www.qubes-os.org/news/2015/12/14/mgmt-stack/) (which
significantly shrinks the attack surface on the system TCB compared to more
traditional “management” solutions), the Qubes Admin API (https://www.qubes-os.org/news/2017/06/27/qubes-admin-api/) (which
allows for the fine-grained decomposition of management roles), and deeply
integrated features such as templates (https://www.qubes-os.org/getting-started/#appvms-qubes-and-templatevms), we think Qubes Air may also be useful
in some non-desktop applications, such as the embedded appliance space, and
possibly even on the server/services side. In this case, it makes perfect sense
to have qubes not implement GUI protocol endpoints.
However, I still think that the primary area where Qubes excels is in securing
desktop workflows. For these, we need GUI virtualization (multiplexing), and
the qubes need to implement GUI protocol endpoints. Below, we discuss some of
the trade-offs involved here.
The Qubes GUI protocol (https://www.qubes-os.org/doc/gui/) is optimized for security. This means that
the protocol is designed to be extremely simple, allowing only for very simple
processing on incoming packets, thus significantly limiting the attack surface
on the GUI daemon (which is usually considered trusted). The price we pay for
this security is the lack of various optimizations, such as on-the-fly
compression, which other protocols, such as VNC and RDP, naturally offer. So
far, we’ve been able to get away with these trade-offs, because in current Qubes
releases the GUI protocol runs over Xen shared memory. DRAM is very fast (i.e.
it has low latency and very high bandwidth), and the implementation on Xen
smartly makes use of page sharing rather than memory copying, so that it
achieves near-native speed (of course with the limitation that we don’t expose
GPU functionality to VMs, which might limit the experience in some graphical
applications anyway).
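To make the “very simple processing” point concrete, here is a toy parser in the spirit of the Qubes GUI protocol. The field names and layout are made-up assumptions for illustration, not the real wire format; the security property it demonstrates is that the receiving daemon does nothing but fixed-offset reads and explicit bounds checks:

```python
# Toy fixed-layout message header: message type, window id, declared payload
# length. Illustrative layout only; not the actual Qubes GUI protocol format.

import struct

HEADER = struct.Struct("<III")

MAX_PAYLOAD = 4096  # hard cap: anything larger is rejected before parsing

def parse_header(buf: bytes):
    """Validate and decode a header using only trivial checks."""
    if len(buf) != HEADER.size:
        raise ValueError("bad header size")
    msg_type, window_id, untrusted_len = HEADER.unpack(buf)
    if untrusted_len > MAX_PAYLOAD:
        raise ValueError("declared payload too large")
    # No decompression, no variable-length fields, no nested structures:
    # the daemon's attack surface is limited to these few comparisons.
    return msg_type, window_id, untrusted_len
```

A VNC or RDP receiver, by contrast, must decompress and decode far richer encodings before it can render anything, which is exactly the extra attack surface the Qubes protocol avoids.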
However, when qubes run on remote computers (e.g. in the cloud) or on physically
separate computers (e.g. on a cluster of Raspberry Pis), we face the potential
problem of graphics performance. The solution we see is to introduce a local
GUI qube into each Zone. Here, we make the assumption that there
should be a significantly faster communication channel available between qubes
within a Zone than between Zones. For example, inter-VM communication within
one data center should be significantly faster than between the user’s laptop
and the cloud. The Qubes GUI protocol is then used between qubes and the local
GUI qube within a single Zone, but a more efficient (and more complex) protocol
is used to aggregate the GUI into the Master GUI qube from all the Slave GUI
qubes. Thanks to this combined setup, we still get the benefit of a reasonably
secure GUI. Untrusted qubes still use the Qubes secure GUI protocol to
communicate with the local GUI qube. However, we also benefit from the greater
efficiency of remote access-optimized protocols such as RDP and VNC to get the
GUI onto the user’s device over the network. (Here, we make the assumption that
the Slave GUI qubes are significantly more trustworthy than other
non-privileged qubes in the Zone. If that’s not the case, and if we’re also
worried about an attacker who has compromised a Slave GUI qube to exploit a
potential bug in the VNC or RDP protocol in order to attack the Master GUI
qube, we could still resort to the fine-grained Qubes Admin API to limit the
potential damage the attacker might inflict.)
Digression on the “cloudification” of apps
It’s hard not to notice how the model of desktop applications has changed over
the past decade or so, where many standalone applications that previously ran on
desktop computers now run in the cloud and have only their frontends executed in
a browser running on the client system. How does the Qubes compartmentalization
model, and more importantly Qubes as a desktop OS, deal with this change?
Above, we discussed how it’s possible to move Qubes VMs from the user’s local
machine to the cloud (or to physically separate computers) without the user
having to notice. I think it will be a great milestone when we finally get
there, as it will open up many new applications, as well as remove many
obstacles that today prevent the easy deployment of Qubes OS (such as the need
to find and maintain dedicated hardware).
However, it’s important to ask ourselves how relevant this model will be in the
coming years. Even with our new approach, we’re still talking about classic
standalone desktop applications running in qubes, while the rest of the world
seems to be moving toward an app-as-a-service model in which everything is
hosted in the cloud (e.g. Google Docs and Microsoft Office 365). How relevant
is the whole Qubes architecture, even the cloud-based version, in the
app-as-a-service model?
I’d like to argue that the Qubes architecture still makes perfect sense in this
new model.
First, it’s probably easy to accept that there will always be applications that
users, both individual and corporate, will prefer (or be forced) to run locally,
or at least on trusted servers. At the same time, it’s very likely that these
same users will want to embrace the general, public cloud with its multitude of
app-as-a-service options. Not surprisingly, there will be a need to isolate
these workloads so that they cannot interfere with each other.
Some examples of payloads that are better suited as traditional, local
applications (and consequently within qubes), are MS Office for sensitive
documents, large data-processing applications, and… networking and USB drivers
and stacks. The latter may not be very visible to the user, but we can’t
really offload them to the cloud. We have to host them on the local machine, and
they present a huge attack surface that jeopardizes the user’s other data and
applications.
What about isolating web apps from each other, as well as protecting the host
from them? Of course, that’s the primary task of the Web browser. Yet, despite
vendors’ best efforts, browser security measures are still being circumvented.
Continued expansion of the APIs that modern browsers expose to Web applications,
such as WebGL (https://en.wikipedia.org/wiki/WebGL), suggests that this state of affairs may not significantly
improve in the foreseeable future.
What makes the Qubes model especially useful, I think, is that it allows us to
put the whole browser in a container that is isolated by stronger mechanisms
(simply because Qubes does not have to maintain all the interfaces that the
browser must) and is managed by Qubes-defined policies. It’s rather natural to
imagine, e.g. a Chrome OS-based template for Qubes (perhaps even a
unikernel-based one), from which lightweight browser VMs could be created,
running either on the user’s local machine, or in the cloud, as described above.
Again, there will be pros and cons to both approaches, but Qubes should support
both – and mostly seamlessly from the user’s and admin’s points of view (as
well as the Qubes service developer’s point of view!).
Summary
Qubes Air is the next step on our roadmap to making the concept of “Security
through Compartmentalization” applicable to more scenarios. It is also an
attempt to address some of the biggest problems and weaknesses plaguing the
current implementation of Qubes, specifically the difficulty of deployment and
virtualization as a single point of failure. While Qubes-as-a-Service is one
natural application that could be built on top of Qubes Air, it is certainly not
the only one. We have also discussed running Qubes over clusters of physically
isolated devices, as well as various hybrid scenarios. I believe the approach to
security that Qubes has been implementing for years will continue to be valid
for years to come, even in a world of apps-as-a-service.
Xen Project Spectre / Meltdown FAQ (Jan 22 Update)
https://blog.xenproject.org/2018/01/22/xen-project-spectre-meltdown-faq-jan-22-update/
On January 3rd, 2018, Google’s Project Zero announced several information leak vulnerabilities affecting all modern superscalar processors. Details can be found on their blog, and in the Xen Project Advisory 254. To help our users understand the impact and our next steps forward, we put together the following FAQ. We divided the FAQ into several […]
Qubes and Whonix now have next-generation Tor onion services!
https://www.qubes-os.org/news/2018/01/23/qubes-whonix-next-gen-tor-onion-services/
The Qubes and Whonix projects now have next-generation Tor onion
services (https://blog.torproject.org/tors-fall-harvest-next-generation-onion-services) (a.k.a. “v3 onion services”), which provide several
security improvements (https://trac.torproject.org/projects/tor/wiki/doc/NextGenOnions) over v2 onion services:
Qubes: http://sik5nlgfc5qylnnsr57qrbm64zbdx6t4lreyhpon3ychmxmiem7tioad.onion (http://sik5nlgfc5qylnnsr57qrbm64zbdx6t4lreyhpon3ychmxmiem7tioad.onion/)
Whonix: http://dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion (http://dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion/)
These services run alongside our existing (“v2”) onion services:
Qubes: http://qubesos4rrrrz6n4.onion (http://qubesos4rrrrz6n4.onion/)
Whonix: http://kkkkkkkkkk63ava6.onion (http://kkkkkkkkkk63ava6.onion/)
For instructions on accessing the new addresses and further details,
please see the Whonix announcement (https://www.whonix.org/blog/whonix-new-v3-onion-address). Our sincere thanks go to the
Whonix team, and especially fortasse, the Whonix server
administrator, for doing this.
Update for QSB #37: Information leaks due to processor speculative execution bugs (XSA-254, Meltdown & Spectre)
https://www.qubes-os.org/news/2018/01/24/qsb-37-update/
Dear Qubes Community,
We have just updated Qubes Security Bulletin (QSB) #37:
Information leaks due to processor speculative execution bugs.
The text of the main changes is reproduced below. For the full
text, please see the complete QSB in the qubes-secpack:
https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-037-2018.txt
Learn about the qubes-secpack, including how to obtain, verify, and
read it:
https://www.qubes-os.org/security/pack/
View all past QSBs:
https://www.qubes-os.org/security/bulletins/
View XSA-254 in the XSA Tracker:
https://www.qubes-os.org/security/xsa/#254
Changelog
==========
2018-01-11: Original QSB published
2018-01-23: Updated mitigation plan to XPTI; added Xen package versions
[...]
(Proper) patching
==================
## Qubes 4.0
As explained above, almost all the VMs in Qubes 4.0 are
fully-virtualized by default (specifically, they are HVMs), which
mitigates the most severe issue, Meltdown. The only PV domains in Qubes
4.0 are stub domains, which we plan to eliminate by switching to PVH
where possible. This will be done in Qubes 4.0-rc4 and also released as
a normal update for existing Qubes 4.0 installations. The only remaining
PV stub domains will be those used for VMs with PCI devices. (In the
default configuration, these are sys-net and sys-usb.) To protect those
domains, we will provide the Xen page-table isolation (XPTI) patch, as
described in the following section on Qubes 3.2.
## Qubes 3.2
Previously, we had planned to release an update for Qubes 3.2 that would
have made almost all VMs run in PVH mode by backporting support for this
mode from Qubes 4.0. However, a much less drastic option has become
available sooner than we and the Xen Security Team anticipated: what the
Xen Security Team refers to as a "stage 1" implementation of the Xen
page-table isolation (XPTI) mitigation strategy [5]. This mitigation
will make the most sensitive memory regions (including all of physical
memory mapped into Xen address space) immune to the Meltdown attack. In
addition, this mitigation will work on systems that lack VT-x support.
(By contrast, our original plan to backport PVH would have worked only
when the hardware supported VT-x or equivalent technology.)
Please note that this mitigation is expected to have a noticeable
performance impact. While there will be an option to disable the
mitigation (and thereby avoid the performance impact), doing so will
return the system to a vulnerable state.
The following packages contain the patches described above:
- Xen packages, version 4.6.6-36
[...]
Here is an overview of the VM modes that correspond to each Qubes OS
version:
VM type \ Qubes OS version | 3.2 | 4.0-rc1-3 | 4.0-rc4 |
---------------------------------- | --- | --------- | ------- |
Default VMs without PCI devices | PV | HVM | PVH |
Default VMs with PCI devices | PV | HVM | HVM |
Stub domains - Default VMs w/o PCI | N/A | PV | N/A |
Stub domains - Default VMs w/ PCI | N/A | PV | PV |
Stub domains - HVMs | PV | PV | PV |
Xen Project 4.8.3 is available
https://blog.xenproject.org/2018/01/24/xen-project-4-8-3-is-available/
I am pleased to announce the release of Xen 4.8.3. Xen Project Maintenance releases are released in line with our Maintenance Release Policy. We recommend that all users of the 4.8 stable series update to the latest point release. The release is available from its git repository xenbits.xenproject.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.8 (tag RELEASE-4.8.3) or from the Xen Project […]
Qubes OS 4.0-rc4 has been released!
https://www.qubes-os.org/news/2018/01/31/qubes-40-rc4/
We’re pleased to announce the fourth release candidate for Qubes 4.0!
This release contains important safeguards against the Spectre and
Meltdown attacks (https://www.qubes-os.org/news/2018/01/11/qsb-37/), as well as bug fixes for many of the issues
discovered in the previous release candidate (4.0-rc3). A full list of
the Qubes 4.0 issues closed so far is available here (https://github.com/QubesOS/qubes-issues/issues?q=is%3Aissue+milestone%3A%22Release+4.0%22+is%3Aclosed).
Further details about this release, including full installation
instructions, are available in the Qubes 4.0 release
notes (https://www.qubes-os.org/doc/releases/4.0/release-notes/). The new installation image is available on the
Downloads (https://www.qubes-os.org/downloads/) page.
As always, we’re immensely grateful to our community of testers for
taking the time to discover and report bugs (https://www.qubes-os.org/doc/reporting-bugs/). Thanks to your efforts,
we’re able to fix these bugs before the final release of Qubes 4.0. We
encourage you to continue diligently testing this fourth release
candidate so that we can work together to improve Qubes 4.0 before the
stable release.
Major changes in Qubes 4.0-rc4
The Qubes VM Manager is back by popular demand! The returning Qubes
Manager will be slightly different from the 3.2 version. Specifically,
it will not duplicate functionality that is already provided by the new
4.0 widgets. Specific examples include attaching and detaching block
devices, attaching and detaching the microphone, and displaying VM CPU
usage.
In addition, the default TemplateVMs have been upgraded to Fedora 26 and
Debian 9.
The Qubes 4.0 stable release
If the testing of 4.0-rc4 does not reveal any major problems, we hope to
declare it the stable 4.0 release without any further significant
changes. In this scenario, any bugs discovered during the testing
process would be fixed in subsequent updates.
If, on the other hand, a major issue is discovered, we will continue
with the standard release schedule (https://www.qubes-os.org/doc/version-scheme/#release-schedule), and Qubes 4.0 stable will be a
separate, later release.
Current Qubes 4.0 Users
Current users of Qubes 4.0-rc3 can upgrade in-place by downloading the
latest updates from the testing repositories in both
dom0 (https://www.qubes-os.org/doc/software-update-dom0/#testing-repositories) and TemplateVMs (https://www.qubes-os.org/doc/software-update-vm/#testing-repositories). As explained in
QSB #37 (https://www.qubes-os.org/news/2018/01/11/qsb-37/), Qubes 4.0-rc4 uses PVH instead of HVM for almost all
VMs without PCI devices by default as a security measure against
Meltdown, and this change will also be released as a patch for existing
Qubes 4.0 installations in the coming days. Therefore, current Qubes 4.0
users will benefit from this change whether they upgrade in-place from a
previous release candidate or perform a clean installation of 4.0-rc4.
If you wish to upgrade in-place and have manually changed your VM
settings, please note the following:
By default, Qubes 4.0-rc3 used kernel 4.9.x. However, PVH mode will
require kernel >= 4.11. This is fine, because we will include kernel
4.14 in the PVH update. However, if you have manually changed the
kernel setting for any of your VMs, the update will not automatically
override that setting. Those VMs will still be using an old kernel,
so they will not work in PVH mode. Therefore, you must either
change their settings to use the new kernel or change the VM mode
back to HVM.
If you have created a Windows VM, and you rely on it running in HVM
mode, you must explicitly set its mode to HVM (since the default mode
after applying the PVH update will be PVH rather than HVM). You can
do this either through the VM Settings GUI or by using the
qvm-prefs command-line tool to change the virt_mode property.
Comment on PV Calls: a new paravirtualized protocol for POSIX syscalls by Container Security for Kubernetes on AWS, Azure, GCP, and Private Clouds
https://blog.xenproject.org/2016/08/30/pv-calls-a-new-paravirtualized-protocol-for-posix-syscalls/#comment-461
[…] up to 4X network bandwidth compared to the traditional Xen networking PV drivers. See this article (https://blog.xenproject.org/2016/08/30/pv-calls-a-new-paravirtualized-protocol-for-posix-syscalls/) for more […]
Meet us at FOSDEM 2018
https://blog.xenproject.org/2018/02/01/meet-us-at-fosdem-2018/
As in the past, the Xen Project will have a booth at Europe’s biggest open source conference, FOSDEM (taking place February 3rd and 4th in Brussels, Belgium). Where? During FOSDEM community volunteers will man our booth, which is located in building K (level 1, group C). Meet the Team! You will have the opportunity to speak […]
The Xen Project is participating in 2018 Summer round of Outreachy
https://blog.xenproject.org/2018/02/13/the-xen-project-is-participating-in-2018-summer-round-of-outreachy/
This is a quick reminder that the Xen Project is again participating in Outreachy (May 2018 to August 2018 Round). Please check the Outreachy application page for more information. Outreach Program for Women has been helping women (cis and trans), trans men, and genderqueer people get involved in free and open source software worldwide. Note […]
Xen Project Contributor Spotlight: Kevin Tian
https://blog.xenproject.org/2018/02/14/xen-project-contributor-spotlight-kevin-tian/
The Xen Project is comprised of a diverse set of member companies and contributors that are committed to the growth and success of the Xen Project Hypervisor. The Xen Project Hypervisor is a staple technology for server and cloud vendors, and is gaining traction in the embedded, security and automotive space. This blog series highlights […]
QSB #38: Qrexec policy bypass and possible information leak
https://www.qubes-os.org/news/2018/02/20/qsb-38/
Dear Qubes Community,
We have just published Qubes Security Bulletin (QSB) #38:
Qrexec policy bypass and possible information leak.
The text of this QSB is reproduced below. This QSB and its accompanying
signatures will always be available in the Qubes Security Pack
(qubes-secpack).
View QSB #38 in the qubes-secpack:
https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-038-2018.txt
Learn about the qubes-secpack, including how to obtain, verify, and
read it:
https://www.qubes-os.org/security/pack/
View all past QSBs:
https://www.qubes-os.org/security/bulletins/
---===[ Qubes Security Bulletin #38 ]===---
February 20, 2018
Qrexec policy bypass and possible information leak
Summary
========
One of our developers, Wojtek Porczyk, discovered a vulnerability in the way
qube names are handled, which can result in qrexec policies being bypassed, a
theoretical information leak, and possibly other vulnerabilities. The '$'
character, when part of a qrexec RPC name and/or destination
specification (like '$adminvm', '$default', or one of the variants of
'$dispvm'), is expanded according to shell parameter expansion [1]
after evaluating the qrexec policy but before invoking the RPC handler
executable.
Impact
=======
1. Potential policy bypass. The qrexec argument value that is delivered to the
handler executable can be different from the value that is present in the
RPC policy at the time the policy is evaluated. This is especially
problematic when the policy is defined as a blacklist of arguments rather
than a whitelist, e.g. "permit any arguments to example.Call but
PROHIBITED". If an attacker were to call 'example.Call+PROHIBITED$invalid',
the argument would not match the blacklisted variable at the time of policy
evaluation, so it would be admitted. However, performing shell parameter
expansion on the argument results in the prohibited value, which is what the
actual handler receives.
2. Potential information leak. If the qrexec handler acts upon the argument,
the attacker could read or deduce the contents of those variables.
3. Other potential vulnerabilities. Some of the variables present in the
   environment, like $HOME and $PATH, also contain characters that are not
   permissible in qrexec names or arguments, which could theoretically lead
   to other classes of vulnerabilities, such as directory traversal.
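The policy-bypass described in point 1 can be illustrated in miniature. The sketch below is not Qubes code; the blacklist and the argument are made up for illustration, and it assumes a Linux system with /bin/bash and no environment variable named 'invalid':

```python
import subprocess

# Attacker-controlled qrexec argument; '$' survives sanitization.
arg = "PROHIBITED$invalid"

# Policy evaluation compares the literal string, so a blacklist misses it.
blacklist = {"PROHIBITED"}
print(arg in blacklist)  # False -> the call is admitted

# A shell later expands the unset $invalid to the empty string, so the
# handler receives exactly the prohibited value.
expanded = subprocess.run(
    ["/bin/bash", "-c", "printf %s " + arg],
    capture_output=True, text=True,
).stdout
print(expanded)  # PROHIBITED
```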
Technical details
==================
The '$' character is used in several places in qrexec and is therefore an
allowed character in parameters to Qubes RPC calls. It is also allowed as part
of the RPC name. The validation code is as follows [2]:
static void sanitize_name(char * untrusted_s_signed, char *extra_allowed_chars)
{
    unsigned char * untrusted_s;
    for (untrusted_s=(unsigned char*)untrusted_s_signed; *untrusted_s; untrusted_s++) {
        if (*untrusted_s >= 'a' && *untrusted_s <= 'z')
            continue;
        if (*untrusted_s >= 'A' && *untrusted_s <= 'Z')
            continue;
        if (*untrusted_s >= '0' && *untrusted_s <= '9')
            continue;
        if (*untrusted_s == '$' ||
            *untrusted_s == '_' ||
            *untrusted_s == '-' ||
            *untrusted_s == '.')
            continue;
        if (extra_allowed_chars && strchr(extra_allowed_chars, *untrusted_s))
            continue;
        *untrusted_s = '_';
    }
}
and is invoked as [3]:
sanitize_name(untrusted_params.service_name, "+");
sanitize_name(untrusted_params.target_domain, ":");
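The effect of this filter can be sketched in Python (a rough, illustrative port, not the actual validation code); note that '$' passes through untouched:

```python
import re

def sanitize_name(s, extra_allowed_chars=""):
    # Rough Python port of the C sanitize_name above: keep [a-zA-Z0-9$_.-]
    # plus any extra allowed characters; replace everything else with '_'.
    allowed = "a-zA-Z0-9$_.\\-" + re.escape(extra_allowed_chars)
    return re.sub("[^" + allowed + "]", "_", s)

# '$' is in the allowed set, so it later reaches the shell unescaped.
print(sanitize_name("example.Call+PROHIBITED$invalid", "+"))
# example.Call+PROHIBITED$invalid
print(sanitize_name("evil;rm -rf", "+"))
# evil_rm_-rf
```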
Those arguments form part of the basis for policy evaluation. If policy
evaluation succeeds, the parameters are then forwarded to the destination
domain over qrexec, and the call is executed using the qubes-rpc-multiplexer
executable, which is invoked by a POSIX shell. The exact mechanism differs
between dom0 and other qubes [4]:
        if self.target == 'dom0':
            cmd = '{multiplexer} {service} {source} {original_target}'.format(
                multiplexer=QUBES_RPC_MULTIPLEXER_PATH,
                service=self.service,
                source=self.source,
                original_target=self.original_target)
        else:
            cmd = '{user}:QUBESRPC {service} {source}'.format(
                user=(self.rule.override_user or 'DEFAULT'),
                service=self.service,
                source=self.source)
        # ...
        try:
            subprocess.call([QREXEC_CLIENT] + qrexec_opts + [cmd])
For the dom0 case, these are the relevant parts from the executable referenced
as QREXEC_CLIENT above [5]:
/* called from do_fork_exec */
void do_exec(const char *prog)
{
    execl("/bin/bash", "bash", "-c", prog, NULL);
}

/* ... */

static void prepare_local_fds(char *cmdline)
{
    /* ... */
    do_fork_exec(cmdline, &local_pid, &local_stdin_fd, &local_stdout_fd,
                 NULL);
}

/* ... */

int main(int argc, char **argv)
{
    /* ... */
    if (strcmp(domname, "dom0") == 0) {
        /* ... */
        prepare_local_fds(remote_cmdline);
For qubes other than dom0, the command line is reconstructed from the command
passed through qrexec [6]:
void do_exec(const char *cmd)
{
    char buf[strlen(QUBES_RPC_MULTIPLEXER_PATH) + strlen(cmd) - RPC_REQUEST_COMMAND_LEN + 1];
    char *realcmd = index(cmd, ':'), *user;
    /* ... */
    /* replace magic RPC cmd with RPC multiplexer path */
    if (strncmp(realcmd, RPC_REQUEST_COMMAND " ", RPC_REQUEST_COMMAND_LEN+1)==0) {
        strcpy(buf, QUBES_RPC_MULTIPLEXER_PATH);
        strcpy(buf + strlen(QUBES_RPC_MULTIPLEXER_PATH), realcmd + RPC_REQUEST_COMMAND_LEN);
        realcmd = buf;
    }
    /* ... */
#ifdef HAVE_PAM
    /* ... */
    shell_basename = basename (pw->pw_shell);
    /* this process is going to die shortly, so don't care about freeing */
    arg0 = malloc (strlen (shell_basename) + 2);
    /* ... */
    /* FORK HERE */
    child = fork ();
    switch (child) {
    case -1:
        goto error;
    case 0:
        /* child */
        if (setgid (pw->pw_gid))
            exit(126);
        if (setuid (pw->pw_uid))
            exit(126);
        setsid();
        /* This is a copy but don't care to free as we exec later anyways. */
        env = pam_getenvlist (pamh);
        execle(pw->pw_shell, arg0, "-c", realcmd, (char*)NULL, env);
        /* ... */
#else
    execl("/bin/su", "su", "-", user, "-c", realcmd, NULL);
    perror("execl");
    exit(1);
#endif
Notice that the '$' character is unescaped in all cases when it is passed to
the shell and is interpreted according to the rules of parameter expansion [1].
Mitigating factors
===================
Only the '$' shell special character was allowed, so only the corresponding
simple form of parameter expansion is possible [1]. The '{}' characters are
prohibited, so other forms of parameter expansion are not possible. Had
other characters, such as '()', been permitted (which is not the case),
this vulnerability would have amounted to code execution.
The qrexec calls that are present in a default Qubes OS installation and that
have, by default, a policy that would actually allow them to be called:
- do not contain the '$' character; and
- do not act upon differences in their arguments.
Therefore, this vulnerability is limited to custom RPCs and/or custom policies.
The attacker is constrained to preexisting environment variables and shell
special variables, which do not appear to contain very valuable information.
Since writing policies in the blacklist paradigm is a poor security practice in
general, it is perhaps less common among the security-conscious Qubes userbase.
All users who write custom RPCs or policies are henceforth advised to adopt the
whitelist paradigm.
Resolution
===========
We've decided to deprecate the '$' character from qrexec-related usage.
Instead, to denote special tokens, we will use the '@' character,
which we believe is less likely to be interpreted in a special way
by the relevant software.
This is a forward-incompatible change for existing systems, specifically in
policy syntax, remote domain parameters to the qrexec-client and
qrexec-client-vm tools, and the API exposed to the qrexec handler script. In
order to maintain backward compatibility, these tools will accept older
keywords while parsing policy and command line parameters, then translate them
to the new keywords before evaluating the policy or invoking the actual call,
respectively.
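The keyword translation described above might look roughly like the following (a hypothetical sketch, not the actual parser):

```python
# Hypothetical sketch: rewrite legacy '$'-prefixed qrexec keywords to the
# new '@' forms before evaluating the policy, as described above.
def translate_keyword(token):
    if token.startswith("$"):
        return "@" + token[1:]
    return token

print(translate_keyword("$adminvm"))      # @adminvm
print(translate_keyword("$dispvm:work"))  # @dispvm:work
print(translate_keyword("work"))          # work (plain qube names unchanged)
```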
It will no longer be possible to define calls and receive arguments containing
the '$' character. However, we believe that no such calls exist. Had they
existed, this bug would have been disclosed earlier.
In addition, the shell will not be used to call qubes-rpc-multiplexer.
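Dropping the shell from the invocation removes the expansion step entirely, as the following sketch illustrates (using printf as a stand-in for qubes-rpc-multiplexer; assumes a Unix system with printf on the PATH):

```python
import subprocess

arg = "PROHIBITED$invalid"
# Each argument is passed as its own argv element with no shell in between,
# so the string is delivered verbatim: no parameter expansion occurs.
out = subprocess.run(["printf", "%s", arg], capture_output=True, text=True).stdout
print(out)  # PROHIBITED$invalid
```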
The environment variable specifying the original target qube will also be
specified differently for cases that, in the past, would have contained the '$'
character. However, this wasn't working as specified anyway, so we believe the
impact of this change to be minimal. The new variables will be as follows:
- QREXEC_REQUESTED_TARGET_TYPE
  with a value of either 'name' or 'keyword'
- QREXEC_REQUESTED_TARGET
  set only when QREXEC_REQUESTED_TARGET_TYPE is set to 'name'
- QREXEC_REQUESTED_TARGET_KEYWORD
  set only when QREXEC_REQUESTED_TARGET_TYPE is set to 'keyword'
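A handler written against the new interface might read these variables as follows (the variable names come from this bulletin; the helper function itself is an illustrative assumption):

```python
import os

def requested_target(env=None):
    # Dispatch on QREXEC_REQUESTED_TARGET_TYPE: exactly one of the two
    # companion variables is set, depending on its value.
    env = os.environ if env is None else env
    ttype = env.get("QREXEC_REQUESTED_TARGET_TYPE")
    if ttype == "name":
        return ("name", env["QREXEC_REQUESTED_TARGET"])
    if ttype == "keyword":
        return ("keyword", env["QREXEC_REQUESTED_TARGET_KEYWORD"])
    return (None, None)

print(requested_target({"QREXEC_REQUESTED_TARGET_TYPE": "name",
                        "QREXEC_REQUESTED_TARGET": "work"}))
# ('name', 'work')
```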
Patching
=========
The specific packages that resolve the problem discussed in this bulletin are
as follows:
For Qubes 3.2, dom0:
- qubes-utils 3.2.7
- qubes-core-dom0-linux 3.2.17
For Qubes 3.2, domUs:
- qubes-utils 3.2.7
- qubes-core-vm (Fedora) / qubes-core-agent (Debian) 3.2.24
For Qubes 4.0, dom0:
- qubes-utils 4.0.16
- qubes-core-dom0 4.0.23
- qubes-core-dom0-linux 4.0.11
For Qubes 4.0, domUs:
- qubes-utils 4.0.16
- qubes-core-agent 4.0.22
The packages for dom0 are to be installed in dom0 via the Qubes VM Manager or
via the qubes-dom0-update command as follows:
For updates from the stable repository (not immediately available):
$ sudo qubes-dom0-update
For updates from the security-testing repository:
$ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing
The packages for domUs are to be installed in TemplateVMs and StandaloneVMs via
the Qubes VM Manager or via the respective package manager:
For updates to Fedora from the stable repository (not immediately available):
$ sudo dnf update
For updates to Fedora from the security-testing repository:
$ sudo dnf update --enablerepo=qubes-vm-*-security-testing
For updates to Debian from the stable repository (not immediately available):
$ sudo apt update && sudo apt dist-upgrade
For updates to Debian from the security-testing repository:
First, uncomment the line below "Qubes security updates testing repository" in
/etc/apt/sources.list.d/qubes-r*.list
Then:
$ sudo apt update && sudo apt dist-upgrade
A restart is required for these changes to take effect. In the case of dom0,
this entails a full system restart. In the case of TemplateVMs, this entails
shutting down the TemplateVM before restarting all the TemplateBasedVMs based
on that TemplateVM.
These packages will migrate from the security-testing repository to the current
(stable) repository over the next two weeks after being tested by the
community.
Timeline
=========
2011-07-22 Commit c23cc48 permits '$' character [7].
2016-03-27 Commit 0607d90 introduces qrexec arguments [8][9].
2018-02-14 The vulnerability is discovered and reported internally.
2018-02-20 The vulnerability is patched, and this bulletin is released.
Credits
========
The issue was discovered by Wojtek Porczyk.
References
===========
[1] http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_06_02
[2] https://github.com/QubesOS/qubes-core-admin-linux/blob/v4.0.10/qrexec/qrexec-daemon.c#L643-L662
[3] https://github.com/QubesOS/qubes-core-admin-linux/blob/v4.0.10/qrexec/qrexec-daemon.c#L685-L686
[4] https://github.com/QubesOS/qubes-core-admin/blob/v4.0.22/qubespolicy/__init__.py#L452
[5] https://github.com/QubesOS/qubes-core-admin-linux/blob/v4.0.10/qrexec/qrexec-daemon.c
[6] https://github.com/QubesOS/qubes-core-agent-linux/blob/v4.0.21/qrexec/qrexec-agent.c#L136
[7] https://github.com/QubesOS/qubes-core-admin/commit/c23cc48#diff-3aa52ac2dd3e25700efd40e77b02b2d0
[8] https://github.com/QubesOS/qubes-core-admin-linux/commit/0607d90
[9] https://github.com/QubesOS/qubes-issues/issues/1876
--
The Qubes Security Team
https://www.qubes-os.org/security/
XSA-252, XSA-255, and XSA-256 do not affect the security of Qubes OS
https://www.qubes-os.org/news/2018/02/27/xsa-252-255-256-qubes-not-affected/
The Xen Project has published Xen Security Advisories 252, 255, and 256
(XSA-252, XSA-255, and XSA-256, respectively). These XSAs do not
affect the security of Qubes OS, and no user action is necessary.
These XSAs have been added to the XSA Tracker (https://www.qubes-os.org/security/xsa/):
https://www.qubes-os.org/security/xsa/#252
https://www.qubes-os.org/security/xsa/#255
https://www.qubes-os.org/security/xsa/#256
Xen Project Member Spotlight: DornerWorks
https://blog.xenproject.org/2018/02/28/xen-project-member-spotlight-dornerworks/
The Xen Project is comprised of a diverse set of member companies and contributors that are committed to the growth and success of the Xen Project Hypervisor. The Xen Project Hypervisor is a staple technology for server and cloud vendors, and is gaining traction in the embedded, security and automotive space. This blog series highlights […]
Comment on Xen Project Spectre/Meltdown FAQ by Meltdown and Spectre Explained - WWT
https://blog.xenproject.org/2018/01/04/xen-project-spectremeltdown-faq/#comment-464
[…] Security Advisory (XSA-254) / FAQ […]
Comment on What’s New in the Xen Project Hypervisor 4.9? by prome.4
https://blog.xenproject.org/2017/06/28/whats-new-in-the-xen-project-hypervisor-4-9/#comment-465
Is there any tutorial on how to install the new update?