Qubes OS – Telegram
Qubes OS
A reasonably secure operating system for personal computers.

Qubes-OS.org

⚠️ This channel is updated whenever the devs make an announcement for the project.

[Community-run channel]

Help?
English: @QubesChat

German: @QubesOS_user_de

Qubes Air: Generalizing the Qubes Architecture
https://www.qubes-os.org/news/2018/01/22/qubes-air/

It should be clear that such a setup automatically eliminates the deployment
problem discussed above, as the user is no longer expected to perform any
installation steps herself. Instead, she can access Qubes-as-a-Service with just
a Web browser or a mobile app. This approach may trade security for convenience
(if the endpoint device used to access Qubes-as-a-Service is insufficiently
protected) or privacy for convenience (if the cloud operator is not trusted).
For many use cases, however, the ability to access Qubes from any device and any
location makes the trade-off well worth it.

We said above that we can imagine “Qubes running on top of VMs” in some cloud,
but what exactly does that mean?

First and foremost, we’d want the Qubes Core Stack (https://www.qubes-os.org/news/2017/10/03/core3/) connected to
that cloud’s management API, so that whenever the user executes, say,
qvm-create (or, more generally, issues any Admin API (https://www.qubes-os.org/news/2017/06/27/qubes-admin-api/) call, in
this case admin.vm.Create.*) a new VM gets created and properly connected in
the Qubes infrastructure.
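
To sketch the idea, here is an illustrative toy dispatcher (not the actual
Core Stack code; all class names are made up) showing how the same Admin API
call could be routed either to a local hypervisor or to a cloud's management
API:

from abc import ABC, abstractmethod

class IsolationBackend(ABC):
    """Whatever actually hosts qubes: a local hypervisor, a cloud, etc."""
    @abstractmethod
    def create_qube(self, name: str) -> None: ...

class LocalXenBackend(IsolationBackend):
    def create_qube(self, name: str) -> None:
        # Stand-in for the libvirt/Xen calls the Core Stack performs today.
        print(f"[local] defining Xen domain for {name}")

class CloudBackend(IsolationBackend):
    def create_qube(self, name: str) -> None:
        # Stand-in for a cloud management API call (e.g. EC2 RunInstances).
        print(f"[cloud] requesting an instance for {name}")

def handle_admin_call(method: str, dest: str, backend: IsolationBackend) -> None:
    # admin.vm.Create.* is the Admin API call that qvm-create issues.
    if method.startswith("admin.vm.Create."):
        backend.create_qube(dest)

handle_admin_call("admin.vm.Create.AppVM", "work-web", CloudBackend())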

This means that most (all?) Qubes Apps (e.g. Split GPG, PDF and image
converters, and many more), which are built around qrexec, should Just Work (TM)
when run inside a Qubes-as-a-Service setup.

Now, what about the Admin and GUI domains? Where would they go in a
Qubes-as-a-Service scenario? This is an important question, and the answer is
much less obvious. We’ll return to it below. First, let’s look at a couple more
examples that demonstrate how Qubes Air could be implemented.

Example: Hybrid Mode

Some users might decide to run a subset of their qubes (perhaps some personal
ones) on their local laptops, while using the cloud only for other, less
privacy-sensitive VMs. In addition to privacy, another bonus of running some of
the VMs locally would be much lower GUI latency (as we discuss below).

The ability to run some VMs locally and some in the cloud is what I refer to
as Hybrid Mode. The beauty of Hybrid Mode is that the user doesn't even have
to be aware (unless specifically interested!) of whether a particular VM is
running locally or in the cloud. The Admin API, qrexec services, and even the
GUI should all automatically handle both cases.

[Diagram: an example Hybrid Mode configuration]

Another benefit of Hybrid Mode is that it can be used to host VMs across several
different cloud providers, not just one. This allows us to solve the problem of
over-dependence on a single isolation technology, e.g. on one specific
hypervisor. Now, if a fatal security bug is discovered that affects one of the
cloud services hosting a group of our VMs, the vulnerability will not
automatically affect the security of our other groups of VMs, since the other
groups may be hosted on different cloud services, or not in the cloud at all.
Crucially, different groups of VMs may be run on different underlying
containerization technologies and different hardware, allowing us to diversify
our risk exposure against any single class of attack.
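
To give a feel for the intended transparency, one could imagine a
hypothetical, Zone-aware variant of today's tools (no such --zone option
exists; the commands below are purely illustrative):

qvm-create --zone local-xen --label purple personal
qvm-create --zone ec2-east --label orange work-web

In both cases the same admin.vm.Create.* call would be issued; only the Zone
that services it would differ.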

Example: Qubes on “air-gapped” devices

This approach even allows us to host each qube (or groups of them) on a
physically distinct computer, such as a Raspberry Pi or USB Armory (https://inversepath.com/usbarmory).
Despite the fact that these are physically separate devices, the Admin API
calls, qrexec services, and even GUI virtualization should all work seamlessly
across these qubes!
For some users, it may be particularly appealing to host one’s Split GPG
backend (https://www.qubes-os.org/doc/split-gpg/) or password manager on a physically separate qube. Of course,
it should also be possible to run normal GUI-based apps, such as office suites,
if one wants to dedicate a physically separate qube to work on a sensitive
project.
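
For context, on a single machine today, allowing Split GPG between two qubes
comes down to a qrexec policy line in dom0 (the qube names here are made up):

# /etc/qubes-rpc/policy/qubes.Gpg
work-email  vault  allow

Under Qubes Air, the same policy would be evaluated by the Admin qube
regardless of whether 'vault' lives in a VM or on a physically separate
device.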

The ability to host qubes on distinct physical devices of radically different
kinds opens up numerous possibilities for working around the security problems
with hypervisors and processors we face today.

Under the hood: Qubes Zones

We’ve been thinking about what changes to the current Qubes architecture,
especially to the Qubes Core Stack (https://www.qubes-os.org/news/2017/10/03/core3/), would be necessary to make
the scenarios outlined above easy (and elegant) to implement.

There is one important new concept that should make it possible to support all
these scenarios with a unified architecture. We’ve named it Qubes Zones.

A Zone is a concept that combines several things together:


An underlying “isolation technology” used to implement qubes, which may or
may not be VMs. For example, they could be Raspberry Pis, USB Armory devices,
Amazon EC2 VMs, or Docker containers.


The inter-qube communication technology. In the case of qubes implemented as
Xen-based VMs (as in existing Qubes OS releases), the Xen-specific shared
memory mechanism (so-called grant tables) is used to implement the
communication between qubes. In the case of Raspberry Pis, Ethernet
technology would likely be used. In the case of Qubes running in the cloud,
some form of cloud-provided networking would provide inter-qube
communication. Technically speaking, this is about how Qubes’ vchan would be
implemented, as the qrexec layer should remain the same across all possible
platforms.


A “local copy” of an Admin qube (previously referred to as the “AdminVM”),
used mainly to orchestrate VMs and make policing decisions for all the qubes
within the Zone. This Admin qube can be in either “Master” or “Slave” mode,
and there can only be one Admin qube running as Master across all the Zones
in one Qubes system.


Optionally, a “local copy” of a GUI qube (previously referred to as the “GUI
domain” or “GUIVM”). As with the Admin qube, the GUI qube runs in either
Master or Slave mode. The user is expected to connect (e.g. with the RDP
protocol) or log into the GUI qube that runs in Master mode (and only that
one), which has the job of combining all the GUI elements exposed via the
other GUI qubes (all of which must run in Slave mode).


Some technology to implement storage for the qubes running within the Zone.
In the case of Qubes OS running on Xen, the local disk is used to store VM
images (more specifically, in Qubes 4.0 we use Storage
Pools (https://github.com/QubesOS/qubes-issues/issues/1842) by default). In the case of a Zone composed of a
cluster of Raspberry Pis or similar devices, the storage could be a bunch of
micro-SD cards (each plugged into one Raspberry Pi) or some kind of network
storage.
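
Pulling these ingredients together, a Zone might be modeled roughly as
follows (an illustrative Python sketch only; none of these names exist in the
Qubes code base):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Zone:
    name: str
    isolation: str           # e.g. "xen-vm", "raspberry-pi", "ec2", "docker"
    vchan_transport: str     # e.g. "grant-tables", "ethernet", "cloud-net"
    admin_mode: str          # "master" or "slave"; exactly one Master system-wide
    gui_mode: Optional[str]  # "master", "slave", or None for headless Zones
    storage: str             # e.g. "lvm-pool", "sd-cards", "network-storage"

laptop = Zone("laptop", "xen-vm", "grant-tables", "master", "master", "lvm-pool")
cloud = Zone("ec2-east", "ec2", "cloud-net", "slave", "slave", "network-storage")
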
So far, this is nothing radically new compared to what we already have in Qubes
OS, especially since we have nearly completed our effort to abstract the Qubes
architecture away from Xen-specific details – an effort we code-named Qubes
Odyssey.

What is radically different is that we now want to allow more than one Zone to
exist in a single Qubes system!

In order to support multiple Zones, we have to provide transparent proxying of
qrexec services across Zones, so that a qube need not be aware that another
qube from which it requests a service resides in a different Zone. This is the
main reason we’ve introduced multiple “local” Admin qubes – one for each Zone.
Slave Admin qubes also act as bridges that allow the Master Admin qube to
manage the whole system (e.g. request the creation of new qubes, connect and
set up storage for qubes, and set up networking between qubes).
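
A rough sketch of what such transparent proxying might look like from the
routing side (all names are hypothetical; this is not the actual Core Stack
design):

# Illustrative registry kept by the Master Admin qube: which Zone hosts what.
QUBE_ZONE = {"work-mail": "laptop", "work-web": "ec2-east"}

def call_service(src_zone, dest_qube, service, deliver_local, forward_to_zone):
    """Route a qrexec call; the calling qube never sees the difference."""
    dest_zone = QUBE_ZONE[dest_qube]
    if dest_zone == src_zone:
        return deliver_local(dest_qube, service)  # ordinary in-Zone qrexec
    # Otherwise the local (Slave) Admin qube bridges the request to the
    # Admin qube of the Zone hosting the destination qube.
    return forward_to_zone(dest_zone, dest_qube, service)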

Under the hood: qubes’ interfaces

Within one Zone, there are multiple qubes. Let me stress that the term “qube”
is very generic and does not imply any specific technology. It could be a VM
under some virtualization system. It could be some kind of container or a
physically separate computing device, such as a Raspberry Pi, Arduino board,
or similar device.

While a qube can be implemented in many different ways, there are certain
features it should have:


A qube should implement a vchan endpoint (https://github.com/QubesOS/qubes-core-vchan-xen). The actual technology on
top of which this will be implemented – whether some shared memory within a
virtualization or containerization system, TCP/IP, or something
else (https://tools.ietf.org/html/rfc1149) – will be specific to the kind
of Zone it occupies (see the toy TCP-based sketch after this list).


A qube should implement a qrexec (https://www.qubes-os.org/doc/qrexec3/) endpoint, though this should be very
straightforward if a vchan endpoint has already been implemented. This
ensures that most (all?) of the qrexec services, which are the basis for most
of the integration, apps, and services we have created for Qubes, should
Just Work(TM).


Optionally, for some qubes, a GUI endpoint should also be implemented (see
the discussion below).


In order to be compatible with Qubes networking (https://blog.invisiblethings.org/2011/09/28/playing-with-qubes-networking-for-fun.html), a qube should expect
one uplink network interface (to be exposed by the management technology
specific to that particular Zone), and (optionally) multiple downlink
network interfaces (if it is to work as a proxy qube, e.g. VPN or
firewalling qube).


Finally, a qube should expect two kinds of volumes to be exposed by the
Zone-specific management stack:

- one read-only, which is intended to be used as a root filesystem by the
qube (the management stack might also expose an auxiliary volume for
implementing the copy-on-write illusion for the VM, like the volatile.img
we currently expose on Qubes), and

- one read-writable, which is specific to this qube, and which is
intended to be used as home directory-like storage.

This is, naturally, to allow the implementation of Qubes templates (https://www.qubes-os.org/getting-started/#appvms-qubes-and-templatevms), a
mechanism that we believe brings not only a lot of convenience but also some
security benefits (https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/).
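
As a toy illustration of the first point above, here is what a vchan-style
endpoint carried over TCP could look like (purely hypothetical; the real
implementations live in the qubes-core-vchan-* repositories). The point is
that qrexec would run unchanged on top of whatever send/recv primitives a
Zone provides:

import socket
import struct

LEN = struct.Struct("<I")  # length-prefixed frames: an illustrative wire format

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def vchan_send(sock: socket.socket, data: bytes) -> None:
    sock.sendall(LEN.pack(len(data)) + data)

def vchan_recv(sock: socket.socket) -> bytes:
    (length,) = LEN.unpack(_recv_exact(sock, LEN.size))
    return _recv_exact(sock, length)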

GUI virtualization considerations

Since the very beginning (https://www.qubes-os.org/attachment/wiki/QubesArchitecture/arch-spec-0.3.pdf), Qubes was envisioned as a system for
desktop computing (as opposed to servers). This implied that GUI
virtualization (https://www.qubes-os.org/doc/gui/) was part of the core Qubes infrastructure.

However, with some of the security-optimized management infrastructure we have
recently added to Qubes OS, i.e. Salt stack integration (https://www.qubes-os.org/news/2015/12/14/mgmt-stack/) (which
significantly shrinks the attack surface on the system TCB compared to more
traditional “management” solutions), the Qubes Admin API (https://www.qubes-os.org/news/2017/06/27/qubes-admin-api/) (which
allows for the fine-grained decomposition of management roles), and deeply
integrated features such as templates (https://www.qubes-os.org/getting-started/#appvms-qubes-and-templatevms), we think Qubes Air may also be useful
in some non-desktop applications, such as the embedded appliance space, and
possibly even on the server/services side. In this case, it makes perfect sense
to have qubes not implement GUI protocol endpoints.

However, I still think that the primary area where Qubes excels is in securing
desktop workflows. For these, we need GUI virtualization (multiplexing), and
the qubes need to implement GUI protocol endpoints. Below, we discuss some of
the trade-offs involved here.

The Qubes GUI protocol (https://www.qubes-os.org/doc/gui/) is optimized for security. This means that
the protocol is designed to be extremely simple, allowing only for very simple
processing of incoming packets, which significantly limits the attack surface
on the GUI daemon (which is usually considered trusted). The price we pay for
this security is the lack of various optimizations, such as on-the-fly
compression, which other protocols, such as VNC and RDP, naturally offer. So
far, we’ve been able to get away with these trade-offs, because in current
Qubes releases the GUI protocol runs over Xen shared memory. DRAM is very fast
(i.e. it has low latency and very high bandwidth), and the Xen implementation
smartly makes use of page sharing rather than memory copying, so it achieves
near-native speed (with the caveat that we don’t expose GPU functionality to
VMs, which might limit the experience in some graphical applications anyway).
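
To illustrate what “very simple processing” means in practice, here is a toy
parser in the same spirit (this is not the actual Qubes GUI protocol or its
message layout; the header fields and size cap are made up):

import struct

HDR = struct.Struct("<III")  # made-up header: message type, window id, length
MAX_PAYLOAD = 4096           # hypothetical hard cap on payload size

def parse_message(blob: bytes):
    if len(blob) < HDR.size:
        raise ValueError("truncated header")
    msg_type, window, length = HDR.unpack_from(blob)
    # Validation stays trivially auditable: reject anything suspicious
    # coming from the untrusted sender instead of trying to recover.
    if length > MAX_PAYLOAD or HDR.size + length > len(blob):
        raise ValueError("bad length")
    return msg_type, window, blob[HDR.size:HDR.size + length]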

However, when qubes run on remote computers (e.g. in the cloud) or on
physically separate computers (e.g. on a cluster of Raspberry Pis), we face a
potential problem with graphics performance. The solution we see is to
introduce a local copy of the GUI qube into each Zone. Here, we make the
assumption that there should be a significantly faster communication channel
available between qubes within a Zone than between Zones. For example,
inter-VM communication within one data center should be significantly faster
than between the user’s laptop and the cloud. The Qubes GUI protocol is then
used between qubes and the local GUI qube within a single Zone, but a more
efficient (and more complex) protocol is used to aggregate the GUI into the
Master GUI qube from all the Slave GUI
qubes. Thanks to this combined setup, we still get the benefit of a reasonably
secure GUI. Untrusted qubes still use the Qubes secure GUI protocol to
communicate with the local GUI qube. However, we also benefit from the greater
efficiency of remote access-optimized protocols such as RDP and VNC to get the
GUI onto the user’s device over the network. (Here, we make the assumption that
the Slave GUI qubes are significantly more trustworthy than other
non-privileged qubes in the Zone. If that’s not the case, and if we’re also
worried about an attacker who has compromised a Slave GUI qube to exploit a
potential bug in the VNC or RDP protocol in order to attack the Master GUI
qube, we could still resort to the fine-grained Qubes Admin API to limit the
potential damage the attacker might inflict.)

Digression on the “cloudification” of apps

It’s hard not to notice how the model of desktop applications has changed over
the past decade or so, where many standalone applications that previously ran on
desktop computers now run in the cloud and have only their frontends executed in
a browser running on the client system. How does the Qubes compartmentalization
model, and more importantly Qubes as a desktop OS, deal with this change?

Above, we discussed how it’s possible to move Qubes VMs from the user’s local
machine to the cloud (or to physically separate computers) without the user
having to notice. I think it will be a great milestone when we finally get
there, as it will open up many new applications, as well as remove many
obstacles that today prevent the easy deployment of Qubes OS (such as the need
to find and maintain dedicated hardware).

However, it’s important to ask ourselves how relevant this model will be in the
coming years. Even with our new approach, we’re still talking about classic
standalone desktop applications running in qubes, while the rest of the world
seems to be moving toward an app-as-a-service model in which everything is
hosted in the cloud (e.g. Google Docs and Microsoft Office 365). How relevant
is the whole Qubes architecture, even the cloud-based version, in the
app-as-a-service model?

I’d like to argue that the Qubes architecture still makes perfect sense in this
new model.

First, it’s probably easy to accept that there will always be applications that
users, both individual and corporate, will prefer (or be forced) to run locally,
or at least on trusted servers. At the same time, it’s very likely that these
same users will want to embrace the general, public cloud with its multitude of
app-as-a-service options. Not surprisingly, there will be a need to keep these
workloads from interfering with each other.

Some examples of payloads that are better suited as traditional, local
applications (and consequently within qubes) are MS Office for sensitive
documents, large data-processing applications, and… networking and USB drivers
and stacks. The latter may not be very visible to the user, but we can’t
really offload them to the cloud. We have to host them on the local machine, and
they present a huge attack surface that jeopardizes the user’s other data and
applications.

What about isolating web apps from each other, as well as protecting the host
from them? Of course, that’s the primary task of the Web browser. Yet, despite
vendors’ best efforts, browser security measures are still being circumvented.
Continued expansion of the APIs that modern browsers expose to Web applications,
such as WebGL (https://en.wikipedia.org/wiki/WebGL), suggests that this state of affairs may not significantly
improve in the foreseeable future.

What makes the Qubes model especially useful, I think, is that it allows us to
put the whole browser in a container that is isolated by stronger mechanisms
(simply because Qubes does not have to maintain all the interfaces that the
browser must) and is managed by Qubes-defined policies. It’s rather natural to
imagine, e.g. a Chrome OS-based template for Qubes (perhaps even a
unikernel-based one), from which lightweight browser VMs could be created,
running either on the user’s local machine, or in the cloud, as described above.
Again, there will be pros and cons to both approaches, but Qubes should support
both – and mostly seamlessly from the user’s and admin’s points of view (as
well as the Qubes service developer’s point of view!).

Summary

Qubes Air is the next step on our roadmap to making the concept of “Security
through Compartmentalization” applicable to more scenarios. It is also an
attempt to address some of the biggest problems and weaknesses plaguing the
current implementation of Qubes, specifically the difficulty of deployment and
virtualization as a single point of failure. While Qubes-as-a-Service is one
natural application that could be built on top of Qubes Air, it is certainly not
the only one. We have also discussed running Qubes over clusters of physically
isolated devices, as well as various hybrid scenarios. I believe the approach to
security that Qubes has been implementing for years will continue to be valid
for years to come, even in a world of apps-as-a-service.
Xen Project Spectre / Meltdown FAQ (Jan 22 Update)
https://blog.xenproject.org/2018/01/22/xen-project-spectre-meltdown-faq-jan-22-update/

On January 3rd, 2018, Google’s Project Zero announced several information leak vulnerabilities affecting all modern superscalar processors. Details can be found on their blog, and in the Xen Project Advisory 254. To help our users understand the impact and our next steps forward, we put together the following FAQ. We divided the FAQ into several […]
Qubes and Whonix now have next-generation Tor onion services!
https://www.qubes-os.org/news/2018/01/23/qubes-whonix-next-gen-tor-onion-services/

The Qubes and Whonix projects now have next-generation Tor onion
services (https://blog.torproject.org/tors-fall-harvest-next-generation-onion-services) (a.k.a. “v3 onion services”), which provide several
security improvements (https://trac.torproject.org/projects/tor/wiki/doc/NextGenOnions) over v2 onion services:

Qubes: http://sik5nlgfc5qylnnsr57qrbm64zbdx6t4lreyhpon3ychmxmiem7tioad.onion (http://sik5nlgfc5qylnnsr57qrbm64zbdx6t4lreyhpon3ychmxmiem7tioad.onion/)
Whonix: http://dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion (http://dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion/)

These services run alongside our existing (“v2”) onion services:

Qubes: http://qubesos4rrrrz6n4.onion (http://qubesos4rrrrz6n4.onion/)
Whonix: http://kkkkkkkkkk63ava6.onion (http://kkkkkkkkkk63ava6.onion/)

For instructions on accessing the new addresses and further details,
please see the Whonix announcement (https://www.whonix.org/blog/whonix-new-v3-onion-address). Our sincere thanks go to the
Whonix team, and especially fortasse, the Whonix server
administrator, for doing this.
Update for QSB #37: Information leaks due to processor speculative execution bugs (XSA-254, Meltdown & Spectre)
https://www.qubes-os.org/news/2018/01/24/qsb-37-update/

Dear Qubes Community,

We have just updated Qubes Security Bulletin (QSB) #37:
Information leaks due to processor speculative execution bugs.

The text of the main changes is reproduced below. For the full
text, please see the complete QSB in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-037-2018.txt

Learn about the qubes-secpack, including how to obtain, verify, and
read it:

https://www.qubes-os.org/security/pack/

View all past QSBs:

https://www.qubes-os.org/security/bulletins/

View XSA-254 in the XSA Tracker:

https://www.qubes-os.org/security/xsa/#254

Changelog
==========

2018-01-11: Original QSB published
2018-01-23: Updated mitigation plan to XPTI; added Xen package versions

[...]

(Proper) patching
==================

## Qubes 4.0

As explained above, almost all the VMs in Qubes 4.0 are
fully-virtualized by default (specifically, they are HVMs), which
mitigates the most severe issue, Meltdown. The only PV domains in Qubes
4.0 are stub domains, which we plan to eliminate by switching to PVH
where possible. This will be done in Qubes 4.0-rc4 and also released as
a normal update for existing Qubes 4.0 installations. The only remaining
PV stub domains will be those used for VMs with PCI devices. (In the
default configuration, these are sys-net and sys-usb.) To protect those
domains, we will provide the Xen page-table isolation (XPTI) patch, as
described in the following section on Qubes 3.2.

## Qubes 3.2

Previously, we had planned to release an update for Qubes 3.2 that would
have made almost all VMs run in PVH mode by backporting support for this
mode from Qubes 4.0. However, a much less drastic option has become
available sooner than we and the Xen Security Team anticipated: what the
Xen Security Team refers to as a "stage 1" implementation of the Xen
page-table isolation (XPTI) mitigation strategy [5]. This mitigation
will make the most sensitive memory regions (including all of physical
memory mapped into Xen address space) immune to the Meltdown attack. In
addition, this mitigation will work on systems that lack VT-x support.
(By contrast, our original plan to backport PVH would have worked only
when the hardware supported VT-x or equivalent technology.)

Please note that this mitigation is expected to have a noticeable
performance impact. While there will be an option to disable the
mitigation (and thereby avoid the performance impact), doing so will
return the system to a vulnerable state.

The following packages contain the patches described above:

- Xen packages, version 4.6.6-36

[...]

Here is an overview of the VM modes that correspond to each Qubes OS
version:

VM type \ Qubes OS version         | 3.2 | 4.0-rc1-3 | 4.0-rc4 |
---------------------------------- | --- | --------- | ------- |
Default VMs without PCI devices    | PV  | HVM       | PVH     |
Default VMs with PCI devices       | PV  | HVM       | HVM     |
Stub domains - Default VMs w/o PCI | N/A | PV        | N/A     |
Stub domains - Default VMs w/ PCI  | N/A | PV        | PV      |
Stub domains - HVMs                | PV  | PV        | PV      |
Xen Project 4.8.3 is available
https://blog.xenproject.org/2018/01/24/xen-project-4-8-3-is-available/

I am pleased to announce the release of Xen 4.8.3. Xen Project Maintenance releases are released in line with our Maintenance Release Policy. We recommend that all users of the 4.8 stable series update to the latest point release. The release is available from its git repository xenbits.xenproject.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.8 (tag RELEASE-4.8.3) or from the Xen Project […]
Qubes OS 4.0-rc4 has been released!
https://www.qubes-os.org/news/2018/01/31/qubes-40-rc4/

We’re pleased to announce the fourth release candidate for Qubes 4.0!
This release contains important safeguards against the Spectre and
Meltdown attacks (https://www.qubes-os.org/news/2018/01/11/qsb-37/), as well as bug fixes for many of the issues
discovered in the previous release candidate, 4.0-rc3. A full list of
the Qubes 4.0 issues closed so far is available here (https://github.com/QubesOS/qubes-issues/issues?q=is%3Aissue+milestone%3A%22Release+4.0%22+is%3Aclosed).
Further details about this release, including full installation
instructions, are available in the Qubes 4.0 release
notes (https://www.qubes-os.org/doc/releases/4.0/release-notes/). The new installation image is available on the
Downloads (https://www.qubes-os.org/downloads/) page.

As always, we’re immensely grateful to our community of testers for
taking the time to discover and report bugs (https://www.qubes-os.org/doc/reporting-bugs/). Thanks to your efforts,
we’re able to fix these bugs before the final release of Qubes 4.0. We
encourage you to continue diligently testing this fourth release
candidate so that we can work together to improve Qubes 4.0 before the
stable release.

Major changes in Qubes 4.0-rc4

The Qubes VM Manager is back by popular demand! The returning Qubes
Manager will be slightly different from the 3.2 version. Specifically,
it will not duplicate functionality that is already provided by the new
4.0 widgets, such as attaching and detaching block devices, attaching
and detaching the microphone, and VM CPU usage.

In addition, the default TemplateVMs have been upgraded to Fedora 26 and
Debian 9.

The Qubes 4.0 stable release

If the testing of 4.0-rc4 does not reveal any major problems, we hope to
declare it the stable 4.0 release without any further significant
changes. In this scenario, any bugs discovered during the testing
process would be fixed in subsequent updates.

If, on the other hand, a major issue is discovered, we will continue
with the standard release schedule (https://www.qubes-os.org/doc/version-scheme/#release-schedule), and Qubes 4.0 stable will be a
separate, later release.

Current Qubes 4.0 Users

Current users of Qubes 4.0-rc3 can upgrade in-place by downloading the
latest updates from the testing repositories in both
dom0 (https://www.qubes-os.org/doc/software-update-dom0/#testing-repositories) and TemplateVMs (https://www.qubes-os.org/doc/software-update-vm/#testing-repositories). As explained in
QSB #37 (https://www.qubes-os.org/news/2018/01/11/qsb-37/), Qubes 4.0-rc4 uses PVH instead of HVM for almost all
VMs without PCI devices by default as a security measure against
Meltdown, and this change will also be released as a patch for existing
Qubes 4.0 installations in the coming days. Therefore, current Qubes 4.0
users will benefit from this change whether they upgrade in-place from a
previous release candidate or perform a clean installation of 4.0-rc4.

If you wish to upgrade in-place and have manually changed your VM
settings, please note the following:


By default, Qubes 4.0-rc3 used kernel 4.9.x. However, PVH mode will
require kernel >= 4.11. This is fine, because we will include kernel
4.14 in the PVH update. However, if you have manually changed the
kernel setting for any of your VMs, the update will not automatically
override that setting. Those VMs will still be using an old kernel,
so they will not work in PVH mode. Therefore, you must either
change their settings to use the new kernel or change the VM mode
back to HVM.


If you have created a Windows VM, and you rely on it running in HVM
mode, you must explicitly set its mode to HVM (since the default mode
after applying the PVH update will be PVH rather than HVM). You can
do this either through the VM Settings GUI or by using the
qvm-prefs command-line tool to change the virt_mode property.
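
For example, assuming a qube named “work-win7” (the name is made up), the
command-line route would be:

qvm-prefs work-win7 virt_mode hvm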
Meet us at FOSDEM 2018
https://blog.xenproject.org/2018/02/01/meet-us-at-fosdem-2018/

As in the past, the Xen Project will have a booth at Europe’s biggest open source conference FOSDEM (taking place February 3rd and 4th in Brussels, Belgium). Where? During FOSDEM community volunteers will man our booth, which is located in building K (level 1, group C). Meet the Team! You will have the opportunity to speak […]
The Xen Project is participating in 2018 Summer round of Outreachy
https://blog.xenproject.org/2018/02/13/the-xen-project-is-participating-in-2018-summer-round-of-outreachy/

This is a quick reminder that the Xen Project is again participating in Outreachy (May 2018 to August 2018 Round). Please check the Outreachy application page for more information. Outreach Program for Women has been helping women (cis and trans), trans men, and genderqueer people get involved in free and open source software worldwide. Note […]
Xen Project Contributor Spotlight: Kevin Tian
https://blog.xenproject.org/2018/02/14/xen-project-contributor-spotlight-kevin-tian/

The Xen Project is comprised of a diverse set of member companies and contributors that are committed to the growth and success of the Xen Project Hypervisor. The Xen Project Hypervisor is a staple technology for server and cloud vendors, and is gaining traction in the embedded, security and automotive space. This blog series highlights […]
QSB #38: Qrexec policy bypass and possible information leak
https://www.qubes-os.org/news/2018/02/20/qsb-38/

Dear Qubes Community,

We have just published Qubes Security Bulletin (QSB) #38:
Qrexec policy bypass and possible information leak.
The text of this QSB is reproduced below. This QSB and its accompanying
signatures will always be available in the Qubes Security Pack
(qubes-secpack).

View QSB #38 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-038-2018.txt

Learn about the qubes-secpack, including how to obtain, verify, and
read it:

https://www.qubes-os.org/security/pack/

View all past QSBs:

https://www.qubes-os.org/security/bulletins/

---===[ Qubes Security Bulletin #38 ]===---

February 20, 2018


Qrexec policy bypass and possible information leak

Summary
========

One of our developers, Wojtek Porczyk, discovered a vulnerability in the way
qube names are handled, which can result in qrexec policies being bypassed, a
theoretical information leak, and possibly other vulnerabilities. The '$'
character, when part of a qrexec RPC name and/or destination
specification (like '$adminvm', '$default', or one of the variants of
'$dispvm'), is expanded according to shell parameter expansion [1]
after the qrexec policy is evaluated but before the RPC handler
executable is invoked.
Impact
=======

1. Potential policy bypass. The qrexec argument value that is delivered to the
handler executable can be different from the value that is present in the
RPC policy at the time the policy is evaluated. This is especially
problematic when the policy is defined as a blacklist of arguments rather
than a whitelist, e.g. "permit any arguments to example.Call but
PROHIBITED". If an attacker were to call 'example.Call+PROHIBITED$invalid',
the argument would not match the blacklisted variable at the time of policy
evaluation, so it would be admitted. However, performing shell parameter
expansion on the argument results in the prohibited value, which is what the
actual handler receives.

2. Potential information leak. If the qrexec handler acts upon the argument,
the attacker could read or deduce the contents of the environment variables
being expanded.

3. Other potential vulnerabilities. Some of the variables present in the
environment, like $HOME and $PATH, also contain characters that are not
permissible in qrexec names or arguments, which could theoretically lead to
other classes of vulnerabilities, such as directory traversal.

Technical details
==================

The '$' character is used in several places in qrexec and is therefore an
allowed character in parameters to Qubes RPC calls. It is also allowed as part
of the RPC name. The validation code is as follows [2]:

static void sanitize_name(char * untrusted_s_signed, char *extra_allowed_chars)
{
    unsigned char * untrusted_s;
    for (untrusted_s=(unsigned char*)untrusted_s_signed; *untrusted_s; untrusted_s++) {
        if (*untrusted_s >= 'a' && *untrusted_s <= 'z')
            continue;
        if (*untrusted_s >= 'A' && *untrusted_s <= 'Z')
            continue;
        if (*untrusted_s >= '0' && *untrusted_s <= '9')
            continue;
        if (*untrusted_s == '$' ||    /* note: '$' is allowed through unescaped */
            *untrusted_s == '_' ||
            *untrusted_s == '-' ||
            *untrusted_s == '.')
            continue;
        if (extra_allowed_chars && strchr(extra_allowed_chars, *untrusted_s))
            continue;
        *untrusted_s = '_';
    }
}

and is invoked as [3]:

sanitize_name(untrusted_params.service_name, "+");
sanitize_name(untrusted_params.target_domain, ":");

Those arguments are part of the basis of policy evaluation. If policy
evaluation is successful, the parameters are then forwarded to the destination
domain over qrexec, and the call is executed using the qubes-rpc-multiplexer
executable, which is invoked by a POSIX shell. The exact mechanism differs
between dom0 and other qubes [4]:

if self.target == 'dom0':
    cmd = '{multiplexer} {service} {source} {original_target}'.format(
        multiplexer=QUBES_RPC_MULTIPLEXER_PATH,
        service=self.service,
        source=self.source,
        original_target=self.original_target)
else:
    cmd = '{user}:QUBESRPC {service} {source}'.format(
        user=(self.rule.override_user or 'DEFAULT'),
        service=self.service,
        source=self.source)

# ...

try:
    subprocess.call([QREXEC_CLIENT] + qrexec_opts + [cmd])

For the dom0 case, these are the relevant parts from the executable referenced
as QREXEC_CLIENT above [5]:

/* called from do_fork_exec */
void do_exec(const char *prog)
{
    execl("/bin/bash", "bash", "-c", prog, NULL);
}

/* ... */

static void prepare_local_fds(char *cmdline)
{
    /* ... */
    do_fork_exec(cmdline, &local_pid, &local_stdin_fd, &local_stdout_fd,
                 NULL);
}

/* ... */

int main(int argc, char **argv)
{
    /* ... */

    if (strcmp(domname, "dom0") == 0) {
        /* ... */

        prepare_local_fds(remote_cmdline);

For qubes other than dom0, the command line is reconstructed from the command
passed through qrexec [6]:

void do_exec(const char *cmd)
{
    char buf[strlen(QUBES_RPC_MULTIPLEXER_PATH) + strlen(cmd) - RPC_REQUEST_COMMAND_LEN + 1];
    char *realcmd = index(cmd, ':'), *user;

    /* ... */

    /* replace magic RPC cmd with RPC multiplexer path */
    if (strncmp(realcmd, RPC_REQUEST_COMMAND " ", RPC_REQUEST_COMMAND_LEN+1)==0) {
        strcpy(buf, QUBES_RPC_MULTIPLEXER_PATH);
        strcpy(buf + strlen(QUBES_RPC_MULTIPLEXER_PATH), realcmd + RPC_REQUEST_COMMAND_LEN);
        realcmd = buf;
    }

    /* ... */

#ifdef HAVE_PAM
    /* ... */
    shell_basename = basename (pw->pw_shell);
    /* this process is going to die shortly, so don't care about freeing */
    arg0 = malloc (strlen (shell_basename) + 2);

    /* ... */

    /* FORK HERE */
    child = fork ();

    switch (child) {
    case -1:
        goto error;
    case 0:
        /* child */

        if (setgid (pw->pw_gid))
            exit(126);
        if (setuid (pw->pw_uid))
            exit(126);
        setsid();

        /* This is a copy but don't care to free as we exec later anyways. */
        env = pam_getenvlist (pamh);

        execle(pw->pw_shell, arg0, "-c", realcmd, (char*)NULL, env);

        /* ... */

#else
    execl("/bin/su", "su", "-", user, "-c", realcmd, NULL);
    perror("execl");
    exit(1);
#endif

Notice that the '$' character is unescaped in all cases when it is passed to
the shell and is interpreted according to the rules of parameter expansion [1].
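
To make that expansion step concrete, here is a minimal, self-contained
demonstration of the bypass mechanics from the Impact section (not part of
the QSB; it assumes /bin/bash exists and that no variable named 'invalid'
is set):

import subprocess

arg = "PROHIBITED$invalid"

# A blacklist compares the raw string, so it does not match:
assert arg != "PROHIBITED"

# But once the string reaches a shell, as in execl("/bin/bash", "bash",
# "-c", ...) above, the unset $invalid expands to the empty string,
# and the handler receives exactly the prohibited value:
expanded = subprocess.check_output(["/bin/bash", "-c", "printf %s " + arg])
assert expanded == b"PROHIBITED"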

Mitigating factors
===================

Only the '$' shell special character was allowed, so only the
corresponding simple form of parameter expansion is permitted [1]. The '{}'
characters are prohibited, so other forms of parameter expansion are not
possible. Had other characters, like '()', been permitted (which is not the
case), this vulnerability would amount to code execution.

The qrexec calls that are present in a default Qubes OS installation and that
have, by default, a policy that would actually allow them to be called:

- do not contain the '$' character; and
- do not act upon differences in their arguments.

Therefore, this vulnerability is limited to custom RPCs and/or custom policies.

The attacker is constrained to preexisting environment variables and shell
special variables, which do not appear to contain very valuable information.