QSB #33: Xen hypervisor (XSA-231 through XSA-234)
https://www.qubes-os.org/news/2017/09/12/qsb-33/
Dear Qubes Community,
We have just published Qubes Security Bulletin (QSB) #33:
Xen hypervisor (XSA-231 through XSA-234).
The text of this QSB is reproduced below. This QSB and its accompanying
signatures will always be available in the Qubes Security Pack (qubes-secpack).
View QSB #33 in the qubes-secpack:
https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-033-2017.txt
Learn about the qubes-secpack, including how to obtain, verify, and read it:
https://www.qubes-os.org/security/pack/
View all past QSBs:
https://www.qubes-os.org/security/bulletins/
View XSA-231 through XSA-234 in the XSA Tracker:
https://www.qubes-os.org/security/xsa/
---===[ Qubes Security Bulletin #33 ]===---
September 12, 2017
Xen hypervisor (XSA-231 through XSA-234)
Summary
========
The Xen Security Team released several Xen Security Advisories today
(XSA-231 through XSA-234). The impact of these advisories ranges from
system crashes to privilege escalations. See our commentary below for
details.
Technical details
==================
Xen Security Advisory 231 [1]:
| The function `alloc_heap_pages` allows callers to specify the first
| NUMA node that should be used for allocations through the `memflags`
| parameter; the node is extracted using the `MEMF_get_node` macro.
|
| While the function checks to see if the special constant
| `NUMA_NO_NODE` is specified, it otherwise does not handle the case
| where `node >= MAX_NUMNODES`. This allows an out-of-bounds access
| to an internal array.
|
| An attacker using crafted hypercalls can execute arbitrary code within
| Xen.
Xen Security Advisory 232 [2]:
| The function `__gnttab_cache_flush` handles GNTTABOP_cache_flush grant
| table operations. It checks to see if the calling domain is the owner
| of the page that is to be operated on. If it is not, the owner's grant
| table is checked to see if a grant mapping to the calling domain
| exists for the page in question.
|
| However, the function does not check to see if the owning domain
| actually has a grant table or not. Some special domains, such as
| `DOMID_XEN`, `DOMID_IO` and `DOMID_COW` are created without grant
| tables. Hence, if __gnttab_cache_flush operates on a page owned by
| these special domains, it will attempt to dereference a null pointer
| in the domain struct.
|
| The guest can get Xen to dereference a NULL pointer.
|
| For ARM guests, and x86 HVM guests, and x86 PV guests on systems with
| SMAP enabled, this will cause a host crash (denial-of-service).
|
| For x86 PV guests on systems without SMAP enabled, an attacker can map
| a crafted grant structure at virtual address 0. This can be leveraged
| to increment an arbitrary virtual address, which can then probably be
| leveraged into a full privilege escalation.
Xen Security Advisory 234 [4]:
| When removing or replacing a grant mapping, the x86 PV specific path
| needs to make sure page table entries remain in sync with other
| accounting done. Although the identity of the page frame was
| validated correctly, neither the presence of the mapping nor page
| writability were taken into account.
|
| A malicious or buggy x86 PV guest could escalate its privileges or
| crash the hypervisor.
The Xen Security Team also released Xen Security Advisory 233 [3], with
only DoS impact:
| When shutting down a VM with a stubdomain, a race in cxenstored may
| cause a double-free.
|
| The xenstored daemon may crash, resulting in a DoS of any parts of the
| system relying on it (including domain creation / destruction,
| ballooning, device changes, etc).
Commentary from the Qubes Security Team
========================================
This batch of Xen security advisories reassures us in our decision to
abandon default para-virtualization (PV) in Qubes 4.0. Indeed, only
one of the potential privilege-escalation bugs discussed in this
advisory affects non-PV virtualization: XSA-231. This bug is a prime
example of the common problems associated with expanding the codebase
in order to implement "exotic" functionality (in this case, NUMA
support). While the Xen Project has made some progress recently in
allowing extra features to be disabled at compile time, the code for
NUMA support could not easily be deactivated, which is the reason for
the inclusion of this bug in today's advisory.
While the departure from para-virtualization (PV) in Qubes 4.0 will
obviate many such vulnerabilities in the future, please note that
Qubes 3.2 (the current, stable version of Qubes) still uses PV mode
for most of the VMs. Therefore, all the bugs in this bulletin affect
Qubes 3.2, and users should patch immediately.
Compromise Recovery
====================
Starting with Qubes 3.2, we offer Paranoid Backup Restore Mode, which
was designed specifically to aid in the recovery of a (potentially)
compromised Qubes OS system. Thus, if you believe your system might have
been compromised (perhaps because of the bugs discussed in this
bulletin), then you should read and follow the procedure described here:
https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/
Patching
=========
The specific packages that resolve the problems discussed in this
bulletin are as follows:
For Qubes 3.2:
- Xen packages, version 4.6.6-30
For Qubes 4.0:
- Xen packages, version 4.8.2-2
The packages are to be installed in dom0 via the Qubes VM Manager or via
the qubes-dom0-update command, as follows:
For updates from the stable repository (not immediately available):
$ sudo qubes-dom0-update
For updates from the security-testing repository:
$ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing
A system restart will be required afterwards.
These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community.
If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.
Credits
========
See the original Xen Security Advisories.
References
===========
[1] https://xenbits.xen.org/xsa/advisory-231.html
[2] https://xenbits.xen.org/xsa/advisory-232.html
[3] https://xenbits.xen.org/xsa/advisory-233.html
[4] https://xenbits.xen.org/xsa/advisory-234.html
--
The Qubes Security Team
https://www.qubes-os.org/security/
Joanna Rutkowska: Reasonably Secure Computing in the Decentralized World
https://www.qubes-os.org/news/2017/09/13/joanna-rutkowska-secure-computing-decentralized-world/
Joanna Rutkowska (https://www.qubes-os.org/team/#joanna-rutkowska) recently gave a presentation titled
“Reasonably Secure Computing in the Decentralized World (An Operating System
Architect’s Perspective)” at a public event hosted by
The Golem Project (https://golem.network/) in Berlin, Germany
called “Golem and Friends: Data, Security, Scaling and More….”
The slides from her presentation are available
here (https://www.qubes-os.org/attachment/wiki/slides/Secure_Computing_in_Decentralized_World.pdf).
The event was streamed live, and the video is available
here (https://www.youtube.com/watch?v=B1QCm09BvP4&feature=youtu.be&t=31m52s).
Thank You for Supporting Qubes!
https://www.qubes-os.org/news/2017/09/15/thank-you-for-supporting-qubes/
Dear Qubes Community,
When we reflect on how the Qubes userbase has grown (https://www.qubes-os.org/statistics/) over
the past few years, we are humbled by the number of people who have
chosen to join us in entrusting the security of their digital lives to
Qubes. We recognize the immense responsibility this places on us. This
sense of duty is what drives our work to make Qubes as secure as it can
be.
We are further humbled by the many generous donations that have been
made this year. Qubes is protecting real people around the world in ever
greater numbers, and many of you have shown your appreciation by giving
back to the project. We are truly grateful for your support. Thank you.
Top Donors of 2017
We’d like to take this opportunity to thank the top donors of 2017 (so
far!):
50,000 EUR from Mullvad!
10 BTC from an anonymous donor!
10,000 USD from zby, angel investor!
1,000 USD recurring annual donation from Eric Grosse!
Thank you to these donors and to everyone who has donated to the Qubes
Decentralized Bitcoin Fund (https://www.qubes-os.org/news/2016/07/13/qubes-distributed-fund/) and the Qubes Open Collective (https://opencollective.com/qubes-os)!
Your donations continue to fund work on Qubes OS. Thanks to your
support, we’ve just released Qubes 4.0-rc1 (https://www.qubes-os.org/news/2017/07/31/qubes-40-rc1/), and we’re getting
ever closer to a stable release!
Our Work Continues
Today, Qubes safeguards tens of thousands of users around the globe in
their work and personal lives, including every member of the Qubes Team.
But the path here has been a long and difficult one, in terms of both
the great dedication required of the team and the monetary costs that
Invisible Things Lab has borne, and continues to bear, so that the
project could continue throughout the years.
Without a doubt, it’s all been worth it. Qubes is our passion. It’s part
of our lives. We’re gratified and exhilarated to see Qubes bringing real
value to people around the world, and we’re more determined than ever to
make Qubes the best free and open-source secure operating system it can
be – for everyone. We know that many of you feel the same way we do.
If Qubes is important to you, please consider joining us in supporting
its ongoing development (https://www.qubes-os.org/donate/). Everyone’s support is valuable to
us, no matter how large or how small. Together, we can ensure that Qubes
is around to protect us all for a long time to come.
Sincerely,
The Qubes OS Team
XSA-245 does not affect the security of Qubes OS
https://www.qubes-os.org/news/2017/09/28/xsa-245/
The Xen Project has published Xen Security Advisory 245 (XSA-245).
This XSA does not affect the security of Qubes OS, and no user action is necessary.
This XSA has been added to the XSA Tracker (https://www.qubes-os.org/security/xsa/):
https://www.qubes-os.org/security/xsa/#245
Introducing the Next Generation Qubes Core Stack
https://www.qubes-os.org/news/2017/10/03/core3/
This is the 2nd post from the “cool things coming in Qubes 4.0” series, and it
discusses the next generation Qubes Core Stack version 3, which is the heart of
the new Qubes 4.x releases. The previous part (https://www.qubes-os.org/news/2017/06/27/qubes-admin-api/) discussed the
Admin API which we also introduced in Qubes 4.0 and which heavily relies on this
new Qubes Core Stack.
Qubes Core Stack vs. Qubes OS
Qubes Core Stack is, as the name implies, the core component of Qubes OS. It’s
the glue that connects all the other components together, and which allows users
and admins to interact with and configure the system. For the record, the other
components of the Qubes system include:
VM-located core agents (implementing e.g. qrexec endpoints used by various
Qubes services),
VM-customizations (making the VMs lightweight and working well with seamless
GUI virtualization),
Qubes GUI virtualization (the protocol, VM-located agents, and daemons
located in the GUI domain which, for now, happens to be the same as dom0),
GUI domain customizations (Desktop Environment customizations, decoration
coloring plugin, etc),
The AdminVM distribution (various customizations, special services, such as
for receiving and verifying updates, in the future: custom distro),
The Xen hypervisor (with a bunch of customization patches, occasional
hardening) or - in the future - some other virtualising or containerizing
software or technology,
Multiple “Qubes Apps” (various services built on top of Qubes qrexec
infrastructure, such as: trusted PDF and Image converters, Split GPG, safe
USB proxies for HID devices, USB proxy for offering USB devices (exposed via
qvm-usb), Yubikey support, USB Armory support, etc)
Various ready-to-use templates (e.g. Debian-, Whonix-based), which are used
to create actual VMs, i.e. provide the root filesystem to the VMs,
Salt Stack integration
And all these components are “glued together” by the Qubes Core Stack. The
diagram below illustrates the location of all these components in the overall
system architecture. Unlike many other Qubes architecture diagrams, this one
takes an AppVM-centric approach. (Click the image for the full size version.)
There are also a bunch of additional components not shown on the diagram above,
and which technically are not part of a Qubes system, but which are instrumental
in building of the system:
qubes-builder,
template-builder,
tons of automatic tests,
as well as all the supporting infrastructure for building, testing and
distributing Qubes and the updates, and hosting of the website and
documentation.
As you can see, Qubes is a pretty complex creature, and the Qubes Core Stack is
central to its existence.
Qubes VMs: the building blocks for Explicit Partitioning Model
Qubes implements an explicit partitioning security model, which means that users
(and/or admins) can define multiple security domains and decide what these
domains can and cannot do. This is a different model than the popular sandboxing
model, as implemented by increasingly many applications and products today,
where every application is automatically sandboxed and some more-or-less
pre-defined set of rules is used to prevent behaviour considered “unsafe”
(whatever that might mean…). I believe the explicit partitioning model
provides many benefits over the sandboxing model, the most important one
being that it is information-oriented, rather than application-oriented. In
other words, it tries to limit damage to the (user’s) data, rather than to the
(vendor’s) code and infrastructure.
There have always been a few different kinds of VMs in Qubes: AppVMs, Template
VMs, Standalone VMs, NetVMs, ProxyVMs, DispVM, etc. In Qubes 4 we have slightly
simplified and cleaned up these categories.
First we’ve hidden the PV vs HVM distinction. Now each VM has a property named
virt_mode which is used to decide whether it should be virtualized using Xen
para-virtualization (PV), full virtualization with auxiliary qemu in isolated
“stub domain” (HVM), or – in the near future – as full virtualization
without qemu and the additional stub domain (PVH). This means we no longer
classify VMs as PV vs HVMs, because every VM can be easily switched between
various modes of virtualization with a flip of a property.
We also no longer distinguish between AppVMs, NetVMs and Proxy VMs. Instead,
each VM has a property provides_network, which is false by default, except for
the VMs which are meant to provide networking to other VMs (e.g. because they might
have some networking device assigned, such as a cellular modem or WiFi device, or
because they act as VPNs or proxies of some sort).
We discuss what the properties are and how to check/set them, later in this
article.
So, to recap, starting with Qubes 4.0 we have only the following classes of VMs:
AppVM - covers what we called AppVMs, NetVMs, and ProxyVMs in earlier Qubes
releases,
DispVM - defined by not having a persistent private image across VM
reboots (see also below),
TemplateVM - these provide root filesystem for AppVMs, the semantics
haven’t changed in Qubes 4,
StandaloneVM - these are like AppVMs, but are not based on any
TemplateVM,
AdminVM - a singleton (i.e. a class which has only one instance), which
represents the Administrator VM (up until recently called dom0, a term
we’re departing from, given that it is Xen-specific).
One can list all the VM classes known to the stack using the following command:
[user@dom0 ~]$ qvm-create --help-classes
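For example, one could create a new AppVM like this (a minimal sketch; the VM
name mytest and the label are arbitrary, and the exact switches can always be
checked with qvm-create --help):
[user@dom0 ~]$ qvm-create --class AppVM --label red mytest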
BTW, we’ve been recently promoting the use of alternative names to “VM(s)”, such
as “domain(s)” and - more recently - “qube(s)”. This is to stress that the connection
between Qubes and virtualization technology is less tight than many might
perceive. Another reason is that, we believe, for many users these alternative
terms might be friendlier than “VMs”, which apparently have a strong technical
connotation. The author is quite used to the term VM, however, so should be
excused for (ab)using this term throughout her writings.
Properties, Features and Tags
Each VM (domain) in Qubes can have a number of properties, features and tags,
which describe both how it should behave and what features or services
it offers to other VMs:
Properties, as the name suggests, are used to change the behaviour of the
VM and/or how it is treated by the Qubes Core Stack. The virt_mode property
mentioned above is a good example. For a list of other properties, take a
look here (https://dev.qubes-os.org/projects/core-admin/en/latest/qubes.html#properties). A command which can be used to list, read and
set properties for a VM is qvm-prefs (see below).
Features are very similar to properties, except that they are mostly
opaque to the Core (unlike properties, which are well defined and sanitized).
Features are essentially a dictionary of ‘key=value’ pairs assigned to each
VM. They can be used in various places outside of the main logic of the Core
Stack, specifically in Core Extensions (https://dev.qubes-os.org/projects/core-admin/en/latest/qubes-ext.html). A new tool in
Qubes 4 used to inspect or set features is called qvm-features.
A good example of a mechanism implemented on top of features is
services, which have been reimplemented in Qubes 4. Services are used to
let the VM agents know about whether various additional services, such as
Network Manager, should be enabled or not. Indeed, the qvm-service tool now
(i.e. in Qubes 4.0) internally simply creates features named service.XYZ,
where XYZ is the name of the service passed to qvm-service. It also takes
care of interpreting the true/false values.
Finally, each VM can also have some tags associated with it. Unlike
features, tags do not have a value – a VM can either have a specific tag
(“can be tagged with it”) or not. Unlike properties, they are not interpreted
by any core logic of the Core Stack. The sole purpose of tags is that they
can be used for qrexec policy rules, as discussed below. (This is also the
reason why we wanted to keep them absolutely simple – any complexity within
qrexec policy parsing code would definitely be asking for trouble…).
Last but not least, we should mention that each VM has a unique
name associated with it. VM names in Qubes are like filenames in the
filesystem – not only are they unique, but they are also used as a primary
way of identifying VMs, especially for security-related decisions (e.g. for
qrexec policy construction, see below).
And just as one can get away with using generic tagging schemes in place of
referring to paths and filenames in some security systems, similarly in Qubes OS
one can refer to tags (mentioned above and discussed later) in the policy
(but currently not in firewalling rules).
Internally, a VM’s name is implemented as the property
name and thus can be read using qvm-prefs. It cannot be changed, however,
because, starting with Qubes 4.0, we treat it as an immutable property.
User-exposed tools, however, do have a “VM rename” operation, which is
implemented as creating a copy of the VM with the new name and removing the old
VM. Thanks to the new volume
manager we also introduced in Qubes 4 (and which will be the topic of another
post), this operation is actually very cheap, disk-wise.
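For instance, renaming the work VM to, say, work2 could be approximated by hand
roughly as follows (a sketch only; the user-facing rename operation performs the
equivalent steps, plus copying properties and tags, automatically):
[user@dom0 ~]$ qvm-clone work work2
[user@dom0 ~]$ qvm-remove work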
So, let us now start a console in AdminVM (dom0) and play a bit with these
mechanisms.
Let’s start with something very simple and let us take a look at the properties
of the AdminVM (in the current Qubes implemented by Xen’s dom0):
[user@dom0 ~]$ qvm-prefs dom0
default_dispvm D fedora-25-dvm
label - black
name D dom0
qid D 0
uuid D 00000000-0000-0000-0000-000000000000
As we can see, there aren’t many properties for the AdminVM, and none of them
can be modified by the user or admin. While we’re here, we should mention there
exists a similar tool, qubes-prefs, which allows one to view and modify the
global system properties, and these properties should not be confused with
the properties of the AdminVM:
[user@dom0 ~]$ qubes-prefs
check_updates_vm D True
clockvm - sys-net
default_dispvm - fedora-25-dvm
default_fw_netvm D None
default_kernel - 4.9.45-21
default_netvm - sys-firewall
default_pool D lvm
default_pool_kernel - linux-kernel
default_pool_private D lvm
default_pool_root D lvm
default_pool_volatile D lvm
default_template - fedora-25
stats_interval D 3
updatevm - sys-firewall
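These global defaults can be changed analogously to per-VM properties, e.g.
(a sketch, assuming qubes-prefs accepts the same --set/--default switches as
qvm-prefs shown below):
[user@dom0 ~]$ qubes-prefs --set default_netvm sys-whonix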
Now, let’s inspect an ordinary AppVM, say work (which is one of the default
VMs created by the installer, but, of course, the user might have it named
differently):
[user@dom0 ~]$ qvm-prefs work
autostart D False
backup_timestamp D
debug D False
default_dispvm D fedora-25-dvm
default_user D user
gateway D
include_in_backups D True
installed_by_rpm D False
ip D
kernel D 4.9.45-21
kernelopts D nopat
label - blue
mac D 00:16:3E:5E:6C:00
maxmem D 4000
memory D 400
name - work
netvm - None
provides_network D False
qid - 8
qrexec_timeout D 60
stubdom_mem U
stubdom_xid D 6
template - fedora-25
template_for_dispvms D False
updateable D False
uuid - fab9a577-2531-4971-bce1-ca0c9b511f27
vcpus D 2
virt_mode D hvm
visible_gateway D
visible_ip D
visible_netmask D
xid D 5
We see lots of properties which have a D flag displayed, which indicates the
property uses the core-provided default value. In most cases the user is able to
override these values for specific VMs. For example the virt_mode property is
by default set to hvm for all AppVMs, which means that full virtualization
mode is used for the domain, but we can override this for specific VM(s) with
pv and thus force them to be virtualized using para-virtualization mode, which
is what the installer scripts do for the sys-net and sys-usb VMs in Qubes
4.0-rc1 in order to work around problems with PCI passthrough support that we’ve
observed on some platforms.
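For example, switching such a VM between the two modes might look like this
(a sketch; note that the change only takes effect the next time the VM is
started):
[user@dom0 ~]$ qvm-prefs --set sys-net virt_mode pv
[user@dom0 ~]$ qvm-prefs --default sys-net virt_mode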
As a simple exercise with changing the property, we can switch off networking
for one of the AppVMs by setting its netvm property to an empty value:
[user@dom0 ~]$ qvm-prefs --set work netvm ""
We can now confirm that indeed the VM named work has no network. E.g. we can
use the qvm-ls command (it should print - in the column which indicates the
netvm), or open a terminal in the work VM itself and try to see if there is a
virtual networking device exposing the network (we can e.g. use the ifconfig
or ip a commands).
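For instance, the dom0-side check might look like this (a sketch, assuming the
--fields option of qvm-ls available in Qubes 4.x):
[user@dom0 ~]$ qvm-ls --fields NAME,CLASS,NETVM
while inside the work VM itself one can run:
[user@work ~]$ ip a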
Instead of setting an empty value as the netvm, we could have also set some
other VM, thus forcing the networking traffic to pass through that other VM,
e.g. to route all the VM’s traffic through a Whonix Tor gateway:
[user@dom0 ~]$ qvm-prefs --set work netvm sys-whonix
The VM which we want to use as the provider of networking services to other VMs,
such as sys-whonix in the example above, should have its provides_network
property set to true.
To revert back to the system-default netvm (as specified by qubes-prefs),
we can use the --default switch:
[user@dom0 ~]$ qvm-prefs --default work netvm
Now let us take a look at the features and services. It might be most
illustrative to look at both of these mechanisms together. Let’s suppose we
would like to enable the Network Manager service in some VM. Perhaps we would
like to use it to create a VPN connection terminated in one of the AppVMs, say
in the work VM. (An alternative option would be to create a separate VM, say
work-vpn, set its provides_network property to true, enable the
network-manager service in there, and use it to provide networking to the other
VMs, e.g. work.)
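A rough sketch of that alternative setup (all the names here, such as work-vpn
and fedora-25, are just examples) might look like this:
[user@dom0 ~]$ qvm-create --class AppVM --template fedora-25 --label orange work-vpn
[user@dom0 ~]$ qvm-prefs --set work-vpn provides_network True
[user@dom0 ~]$ qvm-service --enable work-vpn network-manager
[user@dom0 ~]$ qvm-prefs --set work netvm work-vpn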
To enable the Network Manager service and a handy widget that shows the status
of the VPN, we can do:
[user@dom0 ~]$ qvm-service --enable work network-manager
This should result in the following output:
[user@dom0 ~]$ qvm-service work
network-manager on
We see the service is enabled, and if we restart the work VM, we should see
the Network Manager widget appear in the tray.
Now let’s view the features of this same VM:
[user@dom0 ~]$ qvm-features work
service.network-manager 1
We can now clearly see how the services are internally implemented on top of
features. This is done internally by the qvm-service command and by the
scripts in the VMs, which interpret specifically-formatted features as services.
The qvm-service tool also takes care of properly setting and interpreting
the true/false and empty/non-empty strings that encode service states
(on/off), eliminating user mistakes.
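So, for example, enabling the service by setting the underlying feature directly
should be equivalent to using qvm-service (a sketch, assuming qvm-features takes
the value as its third argument):
[user@dom0 ~]$ qvm-features work service.network-manager 1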
One thing left for us to explore is tags, discussed above, but we will defer
this discussion until a later chapter, which talks about qrexec policy, as tags
are exclusively for use within policies. For now, suffice it to say that there is
a dedicated tool, qvm-tags, used to check and set/modify the tags for each
VM (in addition to some tags being automatically created by the Core Stack,
such as e.g. the created-by-XYZ tag, discussed in the previous article on
Admin API).
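For example (a sketch; the tag name project-acme is made up, and the available
subcommands can be checked with qvm-tags --help):
[user@dom0 ~]$ qvm-tags work add project-acme
[user@dom0 ~]$ qvm-tags work list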
As a final note we should stress one more time that VM names, despite being a
normal property, are special in that they are immutable. In order to change the
name one needs to 1) create a new VM, 2) import the volume from the original VM
(using admin.vm.volume.CloneFrom and admin.vm.volume.CloneTo calls), as well
as 3) copy all the properties, tags, and features from the original VM, and
4) remove the original VM.
Each of these operations can be controlled independently via qrexec policy.
While this might seem like a superficial complication at first sight, we believe
it allows us to simplify the implementation, as well as to minimize the number of
potential mistakes when creating policies, notably when there is more than one
management VM in the system, such as e.g. the (de-privileged) GUI domain and
some corporate-owned (but also semi-privileged) management VM. This has been
discussed in more detail in the previous post on Admin API (https://www.qubes-os.org/news/2017/06/27/qubes-admin-api/).
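As an illustration, a policy for one of these Admin API calls could look roughly
like this in dom0 (a sketch only: the VM name mgmt and the tag are hypothetical,
and the exact options needed for Admin API calls are covered in the Admin API
post linked above):
# /etc/qubes-rpc/policy/admin.vm.property.Set
mgmt $tag:created-by-mgmt allow
$anyvm $anyvm deny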
Disposable VMs redesigned
We have also redesigned how Disposable VMs (DispVMs) work in Qubes 4.0. Before
4.0, DispVMs had two kinds of largely unrelated characteristics: 1) being
disposable, i.e. having no persistent private image, and 2) booting from
pre-created savefiles in order to save on startup time.
In Qubes 4.0 we have redefined DispVMs solely by this first property, i.e.
private-image-not-having, which we believe to be the essential characteristic
of a Disposable VM. The underlying mechanism, e.g. whether the VM is restored
from a savefile to speed up the startup time, might or might not be implemented
for any VM and should not concern the user.
Another major change to DispVMs semantics in Qubes 4 (largely allowed by this
relaxation of the DispVM definition) is that any of the AppVMs can now be used as a
template for a DispVM.
This provides lots of flexibility, e.g. it’s easy to create a customized DispVM
for signing PDF documents, or various disposable service VMs, such as Net- and
VPN- VMs. One just creates a normal AppVM, sets everything up there, e.g. loads
keys, configures VPNs, connects to known WiFi networks, and then shuts down the
AppVM and in place of it starts a DispVM based on that AppVM.
This way, whenever one has a gut feeling that something’s wrong with the VM
(e.g. because one just connected a USB stick to copy some slides), it’s easy to
just restart the DispVM in question and benefit from a new clean root
filesystem! Admittedly sometimes this might not be enough to get rid of an
infection, as discussed in the recent post on Compromise
Recovery (https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/), but for many cases this feature is highly desirable.
But opening up this flexibility comes at a price. We have to be careful about
which VMs can create which DispVMs. After all, it would be a disaster to allow
your casual Internet-browsing AppVM to spawn a DispVM based on your sensitive
work AppVM. The casual Internet-browsing VM could have the new DispVM open a
malicious file that compromises the DispVM. The compromised DispVM would then be
able to leak sensitive work-related data, since it uses the work VM as its
template.
There are two mechanisms in place to prevent such mistakes:
Each AppVM has a property called template_for_dispvms, which controls
whether this VM can serve as a template for Disposable VMs (i.e., whether any
DispVMs based on this VM are allowed in the system). By default,
this property is false for all AppVMs and needs to be manually enabled.
The choice of the template (i.e. specific AppVM) for the Disposable VM must
be provided by the qrexec policy (which the source VM cannot modify), and
defaults to the source VM’s default_dispvm property, which by default has a
value as specified via qubes-prefs. The resulting AppVM must have the
template_for_dispvms property set; otherwise an error will occur.
Here we will take a look at how the template can be specified explicitly when
starting the DispVM from dom0:
[user@dom0 ~]$ qvm-run --dispvm=work --service qubes.StartApp+firefox
Running 'qubes.StartApp+firefox' on $dispvm:work
$dispvm:work: Refusing to create DispVM out of this AppVM, because template_for_dispvms=False
As mentioned above, we also need to explicitly enable use of the work AppVM as
Disposable VMs templates:
[user@dom0 ~]$ qvm-prefs --set work template_for_dispvms True
[user@dom0 ~]$ qvm-run --dispvm=work --service qubes.StartApp+firefox
Running 'qubes.StartApp+firefox' on $dispvm:work
We will look into how the DispVM can be specified via qrexec policy in a later
chapter below.
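As a teaser, allowing an AppVM to open files in a Disposable VM based on a
specific template could be expressed with a policy line roughly like this (a
sketch; the service qubes.OpenInVM and the VM name work are just examples):
# /etc/qubes-rpc/policy/qubes.OpenInVM
work $dispvm:work allow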
Qubes Remote Execution (qrexec): the underlying integration framework
Qubes OS is more than just a collection of isolated domains (currently
implemented as Xen VMs). The essential feature of Qubes OS, which sets
it apart from ordinary virtualization systems, is the unique way in which it
securely integrates these isolated domains for use in a single endpoint
system.
It’s probably accurate to say that the great majority of our effort goes into
building this integration in a manner that doesn’t ruin the isolation that
Xen provides to us.
There are several layers of integration infrastructure in Qubes OS, as depicted
in the following diagram (click for full size) and described below:
