QSB #34: GUI issue and Xen vulnerabilities (XSA-237 through XSA-244)
https://www.qubes-os.org/news/2017/10/12/qsb-34/
Dear Qubes Community,
We have just published Qubes Security Bulletin (QSB) #34:
GUI issue and Xen vulnerabilities (XSA-237 through XSA-244).
The text of this QSB is reproduced below. This QSB and its accompanying
signatures will always be available in the Qubes Security Pack (qubes-secpack).
View QSB #34 in the qubes-secpack:
https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-034-2017.txt
Learn about the qubes-secpack, including how to obtain, verify, and read it:
https://www.qubes-os.org/security/pack/
View all past QSBs:
https://www.qubes-os.org/security/bulletins/
View the XSA Tracker:
https://www.qubes-os.org/security/xsa/
---===[ Qubes Security Bulletin #34 ]===---
October 12, 2017
GUI issue and Xen vulnerabilities (XSA-237 through XSA-244)
Summary
========
One of our developers, Simon Gaiser (aka HW42), while working on
improving support for device isolation in Qubes 4.0, discovered a
potential security problem with the way Xen handles MSI-capable devices.
The Xen Security Team has classified this problem as XSA-237 [01], which
was published today.
At the same time, the Xen Security Team released several other Xen
Security Advisories (XSA-238 through XSA-244). The impact of these
advisories ranges from system crashes to potential privilege
escalations. However, the latter seem to be mostly theoretical. See our
commentary below for details.
Finally, Eric Larsson discovered a situation in which Qubes GUI
virtualization could allow a VM to produce a window that has no colored
borders (which are used in Qubes as front-line indicators of trust).
A VM cannot use this vulnerability to draw different borders in place of
the correct one, however. We discuss this issue extensively below.
Technical details
==================
Xen issues
-----------
Xen Security Advisory 237 [01]:
| Multiple issues exist with the setup of PCI MSI interrupts:
| - unprivileged guests were permitted access to devices not owned by
| them, in particular allowing them to disable MSI or MSI-X on any
| device
| - HVM guests can trigger a codepath intended only for PV guests
| - some failure paths partially tear down previously configured
| interrupts, leaving inconsistent state
| - with XSM enabled, caller and callee of a hook disagreed about the
| data structure pointed to by a type-less argument
|
| A malicious or buggy guest may cause the hypervisor to crash, resulting
| in Denial of Service (DoS) affecting the entire host. Privilege
| escalation and information leaks cannot be excluded.
Xen Security Advisory 238 [02]:
| DMOPs (which were a subgroup of HVMOPs in older releases) allow guests
| to control and drive other guests. The I/O request server page mapping
| interface uses range sets to represent I/O resources the emulation of
| which is provided by a given I/O request server. The internals of the
| range set implementation require that ranges have a starting value no
| lower than the ending one. Checks for this fact were missing.
|
| Malicious or buggy stub domain kernels or tool stacks otherwise living
| outside of Domain0 can mount a denial of service attack which, if
| successful, can affect the whole system.
|
| Only domains controlling HVM guests can exploit this vulnerability.
| (This includes domains providing hardware emulation services to HVM
| guests.)
Xen Security Advisory 239 [03]:
| Intercepted I/O operations may deal with less than a full machine
| word's worth of data. While read paths had been the subject of earlier
| XSAs (and hence have been fixed), at least one write path was found
| where the data stored into an internal structure could contain bits
| from an uninitialized hypervisor stack slot. A subsequent emulated
| read would then be able to retrieve these bits.
|
| A malicious unprivileged x86 HVM guest may be able to obtain sensitive
| information from the host or other guests.
Xen Security Advisory 240 [04]:
| x86 PV guests are permitted to set up certain forms of what is often
| called "linear page tables", where pagetables contain references to
| other pagetables at the same level or higher. Certain restrictions
| apply in order to fit into Xen's page type handling system. An
| important restriction was missed, however: Stacking multiple layers
| of page tables of the same level on top of one another is not very
| useful, and the tearing down of such an arrangement involves
| recursion. With sufficiently many layers such recursion will result
| in a stack overflow, commonly resulting in Xen to crash.
|
| A malicious or buggy PV guest may cause the hypervisor to crash,
| resulting in Denial of Service (DoS) affecting the entire host.
| Privilege escalation and information leaks cannot be excluded.
Xen Security Advisory 241 [05]:
| x86 PV guests effect TLB flushes by way of a hypercall. Xen tries to
| reduce the number of TLB flushes by delaying them as much as possible.
| When the last type reference of a page is dropped, the need for a TLB
| flush (before the page is re-used) is recorded. If a guest TLB flush
| request involves an Inter Processor Interrupt (IPI) to a CPU which is
| in the process of dropping the last type reference of some page, and
| if that IPI arrives at exactly the right instruction boundary, a stale
| time stamp may be recorded, possibly resulting in the later omission
| of the necessary TLB flush for that page.
|
| A malicious x86 PV guest may be able to access all of system memory,
| allowing for all of privilege escalation, host crashes, and
| information leaks.
Xen Security Advisory 242 [06]:
| The page type system of Xen requires cleanup when the last reference
| for a given page is being dropped. In order to exclude simultaneous
| updates to a given page by multiple parties, pages which are updated
| are locked beforehand. This locking includes temporarily increasing
| the type reference count by one. When the page is later unlocked, the
| context precludes cleanup, so the reference that is then dropped must
| not be the last one. This was not properly enforced.
|
| A malicious or buggy PV guest may cause a memory leak upon shutdown
| of the guest, ultimately perhaps resulting in Denial of Service (DoS)
| affecting the entire host.
Xen Security Advisory 243 [07]:
| The shadow pagetable code uses linear mappings to inspect and modify the
| shadow pagetables. A linear mapping which points back to itself is known as
| self-linear. For translated guests, the shadow linear mappings (being in a
| separate address space) are not intended to be self-linear. For
| non-translated guests, the shadow linear mappings (being the same
| address space) are intended to be self-linear.
|
| When constructing a monitor pagetable for Xen to run on a vcpu with, the shadow
| linear slot is filled with a self-linear mapping, and for translated guests,
| shortly thereafter replaced with a non-self-linear mapping, when the guest's
| %cr3 is shadowed.
|
| However when writeable heuristics are used, the shadow mappings are used as
| part of shadowing %cr3, causing the heuristics to be applied to Xen's
| pagetables, not the guest shadow pagetables.
|
| While investigating, it was also identified that PV auto-translate mode was
| insecure. This mode was removed in Xen 4.7 due to being unused, unmaintained
| and presumed broken. We are not aware of any guest implementation of PV
| auto-translate mode.
|
| A malicious or buggy HVM guest may cause a hypervisor crash, resulting in a
| Denial of Service (DoS) affecting the entire host, or cause hypervisor memory
| corruption. We cannot rule out a guest being able to escalate its privilege.
Xen Security Advisory 244 [08]:
| The x86-64 architecture allows interrupts to be run on distinct stacks.
| The choice of stack is encoded in a field of the corresponding
| interrupt descriptor in the Interrupt Descriptor Table (IDT). That
| field selects an entry from the active Task State Segment (TSS).
|
| Since, on AMD hardware, Xen switches to an HVM guest's TSS before
| actually entering the guest, with the Global Interrupt Flag still set,
| the selectors in the IDT entry are switched when guest context is
| loaded/unloaded.
|
| When a new CPU is brought online, its IDT is copied from CPU0's IDT,
| including those selector fields. If CPU0 happens at that moment to be
| in HVM context, wrong values for those IDT fields would be installed
| for the new CPU. If the first guest vCPU to be run on that CPU
| belongs to a PV guest, it will then have the ability to escalate its
| privilege or crash the hypervisor.
|
| A malicious or buggy x86 PV guest could escalate its privileges or
| crash the hypervisor.
|
| Avoiding to online CPUs at runtime will avoid this vulnerability.
GUI daemon issue
-----------------
Qubes OS's GUI virtualization enforces colored borders around all VM
windows. There are two types of windows. The first type are normal
windows (with borders, title bars, etc.). In this case, we modify the
window manager to take care of coloring the borders. The second type are
borderless windows (with the override_redirect property set to True in
X11 terminology). Here, the window manager is not involved at all, and
our GUI daemon needs to draw a border itself. This is done by drawing a
2px border whenever window content is changed beneath that area. The bug
was that if the VM application had never sent any updates for (any part
of) the border area, the frame was never drawn. The relevant code is in
the gui-daemon component [09], specifically in gui-daemon/xside.c [10]:
/* update given fragment of window image
 * can be requested by VM (MSG_SHMIMAGE) and Xserver (XExposeEvent)
 * parameters are not sanitized earlier - we must check them carefully
 * also do not let the update cover the forced colorful frame (for undecorated windows)
 */
static void do_shm_update(Ghandles * g, struct windowdata *vm_window,
                          int untrusted_x, int untrusted_y, int untrusted_w,
                          int untrusted_h)
{
    /* ... */
    if (!vm_window->image && !(g->screen_window && g->screen_window->image))
        return;
    /* force frame to be visible: */
    /* * left */
    delta = border_width - x;
    if (delta > 0) {
        w -= delta;
        x = border_width;
        do_border = 1;
    }
    /* * right */
    delta = x + w - (vm_window->width - border_width);
    if (delta > 0) {
        w -= delta;
        do_border = 1;
    }
    /* * top */
    delta = border_width - y;
    if (delta > 0) {
        h -= delta;
        y = border_width;
        do_border = 1;
    }
    /* * bottom */
    delta = y + h - (vm_window->height - border_width);
    if (delta > 0) {
        h -= delta;
        do_border = 1;
    }
    /* ... */
}
The above code is responsible for deciding whether the colored border
needs to be updated. It is updated if both:
a) there is any window image (vm_window->image)
b) the updated area includes a border anywhere
If either of these conditions is not met, no border is drawn. Note that if
the VM tries to draw anything there (for example, a fake border in a
different color), whatever is drawn will be overridden with the correct
borders, which will stay there until the window is destroyed.
Eric Larsson discovered that this situation (not updating the border
area) is reachable -- and even happens with some real world applications
-- when the VM shows a splash screen with a custom shape. While custom
window shapes are not supported in Qubes OS, VMs do not know this. The
VM still thinks the custom-shaped window is there, so it does not send
updates of content outside of that custom shape.
We fixed the issue by forcing an update of the whole window before
making it visible:
static void handle_map(Ghandles * g, struct windowdata *vm_window)
{
    /* ... */
    /* added code */
    if (vm_window->override_redirect) {
        /* force window update to draw colorful frame, even when the VM has
         * not sent any content yet */
        do_shm_update(g, vm_window, 0, 0, vm_window->width, vm_window->height);
    }
    (void) XMapWindow(g->display, vm_window->local_winid);
}
This requires some auxiliary changes in the do_shm_update function so
that the frame is also drawn when there is no window content yet
(vm_window->image is NULL).
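To make the effect concrete, here is a small, self-contained sketch (our
illustration, not gui-daemon code; all names in it are ours) of the
decision logic: an update that never touches the 2px frame area never
triggers a border redraw, while the forced full-window update performed
when the window is mapped always does.

/* Standalone illustration of the border-redraw decision described above. */
#include <stdio.h>
#include <stdbool.h>

#define BORDER_WIDTH 2

/* true when an update rectangle touches the frame area of a win_w x win_h
 * window, i.e. when the colored border must be (re)drawn */
static bool update_touches_border(int x, int y, int w, int h,
                                  int win_w, int win_h)
{
    return x < BORDER_WIDTH || y < BORDER_WIDTH ||
           x + w > win_w - BORDER_WIDTH ||
           y + h > win_h - BORDER_WIDTH;
}

int main(void)
{
    int win_w = 400, win_h = 300;

    /* a splash screen that only ever updates its interior never touches
     * the border area, so (before the fix) no frame was drawn */
    printf("interior-only update draws border: %d\n",
           update_touches_border(50, 50, 100, 100, win_w, win_h));

    /* the fix: on map, force an update of the whole window, which always
     * touches the border area and therefore draws the frame */
    printf("full-window update draws border:   %d\n",
           update_touches_border(0, 0, win_w, win_h, win_w, win_h));
    return 0;
}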
Commentary from the Qubes Security Team
========================================
For the most part, this batch of Xen Security Advisories affects Qubes
OS 3.2 only theoretically. In the case of Qubes OS 4.0, half of them do
not apply at all. We'll comment briefly on each one:
XSA-237 - The impact is believed to be denial of service only. In addition,
we believe proper use of Interrupt Remapping should offer a generic
solution to similar problems, to reduce them to denial of
service at worst.
XSA-238 - The stated impact is denial of service only.
XSA-239 - The attacking domain has no control over what information
is leaked.
XSA-240 - The practical impact is believed to be denial of service (and does not
affect HVMs).
XSA-241 - The issue applies only to PV domains, so the attack vector
is largely limited in Qubes OS 4.0, which uses HVM domains
by default. In addition, the Xen Security Team considers this
bug to be hard to exploit in practice (see advisory).
XSA-242 - The stated impact is denial of service only. In addition, the
issue applies only to PV domains.
XSA-243 - The practical impact is believed to be denial of service. In addition,
the vulnerable code (shadow page tables) is build-time disabled
in Qubes OS 4.0.
XSA-244 - The vulnerable code path (runtime CPU hotplug) is not used
in Qubes OS.
These results reassure us that switching to HVM domains in Qubes OS 4.0
was a good decision.
Compromise Recovery
====================
Starting with Qubes 3.2, we offer Paranoid Backup Restore Mode, which
was designed specifically to aid in the recovery of a (potentially)
compromised Qubes OS system. Thus, if you believe your system might have
been compromised (perhaps because of the bugs discussed in this
bulletin), then you should read and follow the procedure described here:
https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/
Patching
=========
The specific packages that resolve the problems discussed in this
bulletin are as follows:
For Qubes 3.2:
- Xen packages, version 4.6.6-32
- qubes-gui-dom0, version 3.2.12
For Qubes 4.0:
- Xen packages, version 4.8.2-6
- qubes-gui-dom0, version 4.0.5
The packages are to be installed in dom0 via the Qubes VM Manager or via
the qubes-dom0-update command as follows:
For updates from the stable repository (not immediately available):
$ sudo qubes-dom0-update
For updates from the security-testing repository:
$ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing
A system restart will be required afterwards.
These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community.
If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.
Credits
========
The GUI daemon issue was discovered by Eric Larsson.
The PCI MSI issues were discovered by Simon Gaiser (aka HW42).
For other issues, see the original Xen Security Advisories.
References
===========
[01] https://xenbits.xen.org/xsa/advisory-237.html
[02] https://xenbits.xen.org/xsa/advisory-238.html
[03] https://xenbits.xen.org/xsa/advisory-239.html
[04] https://xenbits.xen.org/xsa/advisory-240.html
[05] https://xenbits.xen.org/xsa/advisory-241.html
[06] https://xenbits.xen.org/xsa/advisory-242.html
[07] https://xenbits.xen.org/xsa/advisory-243.html
[08] https://xenbits.xen.org/xsa/advisory-244.html
[09] https://github.com/QubesOS/qubes-gui-daemon/
[10] https://github.com/QubesOS/qubes-gui-daemon/blob/master/gui-daemon/xside.c#L1317-L1447
--
The Qubes Security Team
https://www.qubes-os.org/security/
RT @kylerankin: There's a reason @QubesOS marks the network VM as untrusted. Safer to treat your network that way #KRACK or not.
A Brief Introduction to the Xen Project and Virtualization from Mohsen Mostafa Jokar
https://blog.xenproject.org/2017/10/17/a-brief-introduction-to-the-xen-project-and-virtualization-from-mohsen-mostafa-jokar/
Mohsen Mostafa Jokar is a Linux administrator who works at the newspaper Hamshahri as a network and virtualization administrator. His interest in virtualization goes back to when he was at school and saw a Microsoft Virtual PC for the first time. He installed it on a PC with 256 MB of RAM and used it […]
MSI support for PCI device pass-through with stub domains
https://www.qubes-os.org/news/2017/10/18/msi-support/
Introduction
In this post, we will describe how we fixed MSI support for VMs running in HVM mode in Qubes 4.0.
First, allow us to provide some background about the MSI feature and why we need it in the first place.
In Qubes 4.0, we switched from paravirtualized (PV) virtual machines to hardware virtual machines (HVMs, also known as “fully virtualized” or “hardware-assisted” VMs) for improved security (see the 4.0-rc1 announcement (https://www.qubes-os.org/news/2017/07/31/qubes-40-rc1/#fully-virtualized-vms-for-better-isolation) for details).
For VMs running as HVMs, Xen requires device-emulation software, called QEMU, to emulate hardware such as network cards.
By default, Xen runs QEMU in the most trusted domain, dom0, and QEMU has quite a large attack surface.
Running QEMU in dom0 would jeopardize the security of Qubes, so it is necessary to run QEMU outside of dom0.
We do this by using a Xen feature that allows us to run QEMU inside a second “helper” VM called a “stub domain”.*
This way, an attacker who exploits a bug in QEMU will be confined to the stub domain rather than getting full access to dom0.
Admittedly, stub domains run in PV mode, which means that an attacker who were to successfully exploit QEMU would gain the ability to exploit potential Xen bugs in paravirtualization.
Nonetheless, we believe using HVMs to host PCI devices is still a considerable improvement.
Of course, in the long term, we would like to switch to using PVH VMs, but at the moment this is not feasible.
In our testing, we found that pass-through PCI devices did not work in HVMs on some machines.
On the affected machines, networking devices and USB devices, for example, were not usable as they are in Qubes 3.2.
(The kernel driver failed to initialize the device.)
This was a major problem that would have blocked us from moving entirely from PV to HVM in Qubes 4.0.
For this reason, the Qubes 4.0-rc1 installer configures all VMs that have attached PCI devices to use PV mode so that those PCI devices will function correctly.
Problems
After much further testing, we discovered that the affected PCI devices don’t work without MSI support.
(MSI is a method to trigger an interrupt from a PCI device.)
The devices we observed to be problematic were all newer Intel devices (integrated USB controllers and a Wi-Fi card).
While the PCIe standard allows for devices that don’t support legacy interrupts, all the affected devices advertised support for legacy interrupts.
But no interrupts were ever delivered after the driver configured the device.
This made the bug tricky to track down, since we were looking for an error on the software side.
To get those devices working, we needed MSI support.
When running QEMU in dom0, MSI support (and therefore the problematic devices) worked, but with stub domains, it was broken.
This is why, until now, we’ve had patches in place to hide MSI capability from the guest so that the driver doesn’t try to use it (one patch for the Mini-OS-based stub domain (https://github.com/QubesOS/qubes-vmm-xen/blob/ff5eaaa777e9d6ba42242479d1cabacfbdc728ca/patches.misc/hvmpt02-disable-msi-caps.patch) and another for the new Linux-based stub domain (https://github.com/QubesOS/qubes-vmm-xen-stubdom-linux/blob/71a01b41a9cf69d580c652a7147c0a8eb33ced97/qemu/patches/disable-msi-caps.patch)).
We found two issues that were preventing MSI support from working with stub domains.
First, the stub domain did not have the required permission on the IRQ, which is reserved for the MSI in the map_pirq hypercall QEMU makes.
(The IRQ is basically a number to distinguish between interrupts from different devices.)
Fortunately, this problem had already been tracked down by OpenXT (http://openxt.org/), and they made a patch for it (the original (https://github.com/OpenXT/xenclient-oe/blob/5e0e7304a5a3c75ef01240a1e3673665b2aaf05e/recipes-extended/xen/files/stubdomain-msi-irq-access.patch) and our pull request (https://github.com/QubesOS/qubes-vmm-xen/pull/15/commits/2a5229f24296347a40ba3250465a61ca425a6146) based on their patch).
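Conceptually, the fix grants the stub domain access to the physical IRQ
before QEMU maps it. The following is a minimal sketch of that idea only,
not the actual patch; it assumes a libxc handle and that the toolstack
already knows the stub domain ID and the IRQ number, and the exact
signature of xc_domain_irq_permission() varies between Xen versions.

/* Sketch: allow the stub domain to use the physical IRQ reserved for the
 * passed-through device's MSI, so that the map_pirq hypercall issued by
 * QEMU (running in the stub domain) is permitted. */
#include <xenctrl.h>

static int grant_stubdom_irq_access(xc_interface *xch,
                                    uint32_t stubdom_id, uint32_t irq)
{
    /* last argument: 1 = allow access, 0 = revoke */
    return xc_domain_irq_permission(xch, stubdom_id, irq, 1);
}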
The second problem was that, after setting MSI up, it needed to be enabled.
This is done by setting the MSI enable flag in the PCI config space, which is a special memory mapped region used to configure a PCI device.
However, this write did not reach the device, and therefore no interrupts were delivered.
When running in dom0, the config space write from QEMU goes directly to the real PCI device.
By contrast, inside a stub domain, the write goes to the pcifront driver inside the stub domain and is then blocked by the pciback running in dom0.
With a test patch, we verified that the only problem was with the enable flag write.
No other config space writes were problematic.
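(For background, and as our addition rather than part of the original
post: the enable flag in question is bit 0 of the Message Control
register in the device's MSI capability, capability ID 0x05, located in
PCI config space. The standalone sketch below locates it via sysfs; the
device path is only an example.)

/* Find a device's MSI capability in PCI config space and report whether
 * the MSI enable bit (bit 0 of Message Control, at capability offset 2)
 * is set. Reading the full config space usually requires root. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define PCI_STATUS          0x06
#define PCI_STATUS_CAP_LIST 0x10
#define PCI_CAP_PTR         0x34
#define PCI_CAP_ID_MSI      0x05

int main(void)
{
    /* example device; substitute the address of the device in question */
    const char *path = "/sys/bus/pci/devices/0000:00:14.0/config";
    uint8_t cfg[256];
    int fd = open(path, O_RDONLY);

    if (fd < 0 || pread(fd, cfg, sizeof(cfg), 0) != (ssize_t)sizeof(cfg)) {
        perror("read config space");
        return 1;
    }
    close(fd);

    if (!(cfg[PCI_STATUS] & PCI_STATUS_CAP_LIST)) {
        puts("device has no capability list");
        return 0;
    }
    for (unsigned pos = cfg[PCI_CAP_PTR] & ~3u; pos; pos = cfg[pos + 1] & ~3u) {
        if (cfg[pos] == PCI_CAP_ID_MSI) {
            uint16_t ctrl = cfg[pos + 2] | (cfg[pos + 3] << 8);
            printf("MSI capability at 0x%02x, enable bit = %u\n",
                   pos, ctrl & 1u);
            return 0;
        }
    }
    puts("device has no MSI capability");
    return 0;
}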
Solutions
It appeared that we had to choose from the following options:
1. Enable permissive mode. In permissive mode, pciback allows (almost) all
   writes to the PCI config space. This seems to be the solution OpenXT
   chose (see this issue (https://openxt.atlassian.net/browse/OXT-894) and
   this commit (https://github.com/OpenXT/manager/pull/52/commits/5950ebe73f2411f3af37f5dd56c5c70619e5d99f)).
2. Allow just the write to this specific config space location (i.e. the
   enable flag). Unlike option (1), this option entails allowing only the
   required write and no others.
3. Since pcifront has the ability to issue a specific command to pciback
   to enable MSI, maybe QEMU should send this command instead of the write
   to the config space.
4. Something else. For example, maybe the MSI config should not be handled
   by QEMU at all in the case of stub domains but should instead be handled
   directly by Xen.
Option (3) didn’t appear promising.
The enable command that pcifront sends is intended for the normal PV use case where the device is passed to the VM itself (via pcifront) rather than to the stub domain target.
While the command is called enable_msi, pciback does much more than simply setting the enable flag.
It also configures IRQ handling in the dom0 kernel, adapts the MSI masking, and more.
This makes sense in the PV case, but in the HVM case, the MSI configuration is done by QEMU, so this most likely won’t work correctly.
Option (1) would have been the easiest solution.
We would just need to set the option in the domain config.
After discussing it, however, we weren’t convinced that this option is safe (but we also don’t claim it isn’t).
See the paper discussed below and this thread (https://lists.xenproject.org/archives/html/xen-devel/2010-07/msg00257.html) for some potential problems.
So, what about option (2)?
We had to think about whether this might enable a new attack.
If we were to implement option (2), the security scenario would be different from the scenario in which QEMU runs in dom0.
When QEMU runs in dom0, it ensures that MSI is configured in a certain way before enabling MSIs (details (https://git.qemu.org/?p=qemu.git;a=blob;f=hw/xen/xen_pt_config_init.c;h=6f18366f6768ee3d7b72f588dc990a6329124a04;hb=359c41abe32638adad503e386969fa428cecff52#l1114)).
However, we’ve put QEMU in a stub domain so that we don’t have to trust it.
This means that we can no longer trust it to ensure that MSI is configured safely.
What would happen if, for example, a malicious stub domain were to set the enable flag of a PCI device without first configuring it?
As it turns out, ITL has published research relevant to assessing this risk.
In “Following the White Rabbit: Software attacks against Intel(R) VT-d technology” (https://invisiblethingslab.com/resources/2011/Software%20Attacks%20on%20Intel%20VT-d.pdf), Rafał Wojtczuk and Joanna Rutkowska describe an attack against VT-d on machines without interrupt remapping support.
For our purposes, the result they describe on page 8 is very important:
Even without access to the PCI config space, a malicious guest is, in many cases, able to generate arbitrary MSIs.
So long as writing to the MSI enable flag does not have any unrelated side effects, there’s no obvious way in which allowing it can worsen security, since an attacker who can set it can already generate arbitrary MSIs anyway.
Meanwhile, we reap the benefits of using HVMs to better isolate VMs with attached PCI devices.
So, we decided to implement option (2).
Based on the analysis above, one could argue that we might as well allow writes to the enable flags for all VMs with attached PCI devices, since doing so shouldn’t decrease security.
To be extra cautious, however, we only allow writes to the enable flags for stub domains.
In other cases, it’s not necessary.
(Here are our patches for pciback (https://github.com/QubesOS/qubes-linux-kernel/pull/12/commits/96b956b38cb24230848a563d3e1ce359c8d8db66) and libxl (https://github.com/QubesOS/qubes-vmm-xen/pull/15/commits/55ef595451d9e2e5583a31c4a3600507ae5500f7).)
Now, the previously problematic devices function correctly inside HVMs.
(Here are the full pull requests: 1 (https://github.com/QubesOS/qubes-linux-kernel/pull/12), 2 (https://github.com/QubesOS/qubes-vmm-xen/pull/15), 3 (https://github.com/QubesOS/qubes-vmm-xen-stubdom-linux/pull/3).)
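The idea behind the pciback side of the change can be pictured with the
deliberately simplified sketch below. It is not the actual patch, and the
helper and parameter names are hypothetical; it only illustrates the
intended policy.

/* Hypothetical illustration: when a config-space write to the MSI Message
 * Control register comes from a stub domain, let only the enable bit
 * (bit 0) through; other control bits, and writes from ordinary guests,
 * stay under dom0's control as before. Real pciback code is structured
 * differently. */
#include <stdint.h>
#include <stdbool.h>

#define MSI_ENABLE 0x0001u

static uint16_t filter_msi_ctrl_write(uint16_t current_value,
                                      uint16_t requested_value,
                                      bool from_stub_domain)
{
    if (from_stub_domain)
        return (uint16_t)((current_value & ~MSI_ENABLE) |
                          (requested_value & MSI_ENABLE));

    /* non-stub domains: the write is ignored, as before */
    return current_value;
}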
We just merged this feature, and it will be included in Qubes 4.0-rc2, which we plan to release next week.
After these patches undergo further testing, we plan to upstream them so that all Xen users can benefit from our work.
If you have any questions or comments, please write to us on qubes-devel (https://www.qubes-os.org/mailing-lists/#qubes-devel).
*We’ve switched from the Mini-OS-based stub domain to a Linux-based stub domain in Qubes 4.0 based on patches (https://lists.xenproject.org/archives/html/xen-devel/2015-02/msg00426.html) from Anthony Perard and Eric Shelton.
The switch is not significant for the purposes of this article.
New article by Qubes team member Simon Gaiser (HW42): "MSI support for PCI device pass-through with stub domains"
https://t.co/eDVzduPoWY
Announcing the Xen Project 4.10 RC and Test Day Schedules
https://blog.xenproject.org/2017/10/19/announcing-the-xen-project-4-10-rc-and-test-day-schedules/
On Monday, we created Xen 4.10 RC1 and will release a new release candidate every MONDAY, until we declare a release candidate as the final candidate and cut the Xen 4.10 release. We will also hold a Test Day every WEDNESDAY for the release candidate that was released the week prior to the Test Day […]
Qubes OS 4.0-rc2 has been released!
https://www.qubes-os.org/news/2017/10/23/qubes-40-rc2/
We’re pleased to announce the second release candidate for Qubes 4.0! As usual,
this release candidate includes numerous bug fixes over the
previous one (https://www.qubes-os.org/news/2017/07/31/qubes-40-rc1/).
Two of the most important changes are:
- Improved PCI passthrough (https://www.qubes-os.org/news/2017/10/18/msi-support/)
  with sys-net and sys-usb running in HVM mode by default. With this change,
  we've officially achieved our Qubes 4.0 goal
  (https://www.qubes-os.org/news/2016/07/21/new-hw-certification-for-q4/) of
  having all qubes running in HVM mode by default for enhanced security.
- Whonix templates are back! We've fixed various (https://phabricator.whonix.org/T697)
  blocking (https://phabricator.whonix.org/T710) issues (https://phabricator.whonix.org/T491),
  and the Whonix templates are available in the default installer once again.
  We're even debuting new Whonix-based DispVMs!
There were two primary reasons for the substantial delay (https://www.qubes-os.org/doc/releases/4.0/schedule/) of this
release. The first was our discovery of the security issue (https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-034-2017.txt) that would
come to be known as XSA-237 (https://xenbits.xen.org/xsa/advisory-237.html). As part of our coordination with the Xen Project
Security Team, we had to wait through the embargo period until XSA-237 was
publicly released before integrating various PCI passthrough fixes.
The second reason for the delay was the last-minute discovery of a bug related
to resizing the root filesystem of a qube. We faced a choice (https://groups.google.com/d/msgid/qubes-devel/20171013222228.GF10749%40mail-itl)
between (1) keeping the partition layout the same at the cost of increasing
maintenance complexity in the future or (2) changing the partition layout to
simplify the code at the cost of rebuilding all the templates and delaying the
release. We chose (https://groups.google.com/d/msgid/qubes-devel/20171016134524.GI1059%40mail-itl) the second option, which resulted in an
additional one week delay, but we’re confident that this is the most prudent
move in the long run.
As a consequence of the partition layout change, it will be necessary for
current 4.0-rc1 testers to perform a clean reinstall of 4.0-rc2 rather than
attempting to upgrade in-place. We sincerely apologize for this inconvenience.
Our growing community has been putting in a lot of hard work testing the first
4.0 release candidate and submitting bug reports, and we greatly appreciate your
effort. From the whole Qubes team, thank you! Your feedback helps us make the
final release as stable as possible. Please keep up the great work!
Further details, including full installation and update instructions,
are available in the Qubes 4.0 release notes (https://www.qubes-os.org/doc/releases/4.0/release-notes/). The new
installation image is available on the Downloads (https://www.qubes-os.org/downloads/) page.
RT @josephfcox: Qubes releases version 4.0-rc2. Includes disposable Whonix VMs; for when you want to be anonymous and ephemeral https://t.co/Q8jJqoCjC5 https://t.co/7avM2L0l2i
Qubes Security Bulletin #35:
Xen hypervisor issue related to grant tables (XSA-236)
https://t.co/dY3o3KmPCo
QSB #35: Xen hypervisor issue related to grant tables (XSA-236)
https://www.qubes-os.org/news/2017/10/25/qsb-35/
Dear Qubes Community,
We have published Qubes Security Bulletin (QSB) #35:
Xen hypervisor issue related to grant tables (XSA-236).
The text of this QSB is reproduced below. This QSB and its accompanying
signatures will always be available in the Qubes Security Pack (qubes-secpack).
View QSB #35 in the qubes-secpack:
https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-035-2017.txt
Learn about the qubes-secpack, including how to obtain, verify, and read it:
https://www.qubes-os.org/security/pack/
View all past QSBs:
https://www.qubes-os.org/security/bulletins/
View XSA-236 in the XSA Tracker:
https://www.qubes-os.org/security/xsa/#236
---===[ Qubes Security Bulletin #35 ]===---
October 24, 2017
Xen hypervisor issue related to grant tables (XSA-236)
Summary
========
The Xen Security Team has published Xen Security Advisory 236, which
concerns an issue with the grant tables mechanism used to share memory
between domains. The practical impact of this advisory is believed to
be denial of service only. However, privilege escalation and information
leaks are theoretically possible.
Technical details
==================
Xen Security Advisory 236 [1]:
| Grant copying code made an implication that any grant pin would be
| accompanied by a suitable page reference. Other portions of code,
| however, did not match up with that assumption. When such a grant
| copy operation is being done on a grant of a dying domain, the
| assumption turns out wrong.
|
| A malicious guest administrator can cause hypervisor memory
| corruption, most likely resulting in host crash and a Denial of
| Service. Privilege escalation and information leaks cannot be ruled
| out.
Compromise Recovery
====================
Beginning with Qubes 3.2, we offer Paranoid Backup Restore Mode, which
was designed specifically to aid in the recovery of a potentially
compromised Qubes OS system. If you believe your system may be
compromised (perhaps because of the issue discussed in this bulletin),
please read and follow the procedure described here:
https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/
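For reference, the paranoid restore mode can also be invoked from the command
line. A minimal sketch, assuming a system on which the feature is available,
a backup stored at a placeholder path, and that the option is spelled
--paranoid-mode (an assumption; check qvm-backup-restore --help on your
system):
# In dom0 on a freshly installed system: restore qubes from a backup made on
# the possibly compromised machine, treating the backup contents as untrusted.
$ qvm-backup-restore --paranoid-mode /run/media/user/backup-disk/qubes-backup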
Patching
=========
The specific packages that resolve the problem discussed in this
bulletin are as follows:
For Qubes 3.2:
- Xen packages, version 4.6.6-34
For Qubes 4.0:
- Xen packages, version 4.8.2-9
The packages are to be installed in dom0 via the Qubes VM Manager or via
the qubes-dom0-update command as follows:
For updates from the stable repository (not immediately available):
$ sudo qubes-dom0-update
For updates from the security-testing repository:
$ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing
A system restart will be required afterwards.
These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community.
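As a quick sanity check after updating and restarting (not part of the
bulletin's required steps), the installed and running Xen versions can be
inspected in dom0 with standard tools:
# Installed Xen package version; it should be at least 4.6.6-34 on Qubes 3.2
# or 4.8.2-9 on Qubes 4.0.
$ rpm -q xen
# Version of the hypervisor actually running (after the restart).
$ sudo xl info | grep -E 'xen_version|xen_extra'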
If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.
Credits
========
See the original Xen Security Advisory.
References
===========
[1] https://xenbits.xen.org/xsa/advisory-236.html
--
The Qubes Security Team
https://www.qubes-os.org/security/
RT @okayultra: LOL, @QubesOS and its painful-but-necessary clipboard isolation has prevented literally hundreds such incidents for me. https://t.co/aLNP9vXJ8X
Twitter
Adrienne Porter Felt
Ummmmmm I accidentally emailed this comic to several contractors instead of our building plans
Thank you Peter for your continued support! https://t.co/hLdDqAUZej
Twitter
Peter Todd
Donated a % of my consulting company's last year revenue to @QubesOS I rely on it for all my work, and recommend it to clients too.
Forwarded from Qubes OS
Join the group!
Rules:
>No spam/ads/NSFW/shit posts
>Keep media like/stickers to a minimum.
>Respect others
>Use common sense
>Stay on topic
If you need help please ask!
Rules are enforced with ban or restrictions.
Link:
t.me/joinchat/B8FHpkEToMeDRgQ_cI60XQ
RT @B_Meson: @micahflee correct me if I’m wrong but also won’t effect you if you use Qubes with a Whonix VM https://t.co/K43SHLOMlM
Twitter
Ars Technica
Critical Tor flaw leaks users’ real IP address—update now https://t.co/SmwqDmywV3 by @dangoodin001