Qubes OS – Telegram
Qubes OS
1.99K subscribers
51 photos
2 videos
819 links
A reasonably secure operating system for personal computers.

Qubes-OS.org

⚠️ This channel is updated whenever the devs make an announcement to the project.

[Community-run channel]

Help?
English: @QubesChat

German: @QubesOS_user_de

Boost: t.me/QubesOS?boost
Forwarded from Pavel Durov
Since some journalists don’t read my Telegram channel (a shame!), I made a Telegraph story about rumors on Telegram moving servers to weird places. It repeats some of the stuff from the last two posts from here, but could be useful as a summary of all our CDN-related posts. Spread the word!

http://telegra.ph/On-Rumors-About-Telegram-Servers-in-Weird-Places-07-30
RT @rootkovska: FWIW, Qubes' main goal & challenge is in how to provide *integration* on top of isolated compartments, without negating the isolation... https://t.co/7qSLP7Yp65
Qubes OS 4.0-rc1 has been released!
https://www.qubes-os.org/news/2017/07/31/qubes-40-rc1/

Finally, after years of work, we’re releasing the first release candidate for
Qubes 4.0!

Next Generation Qubes Core Stack for better integration

No doubt this release marks a major milestone in Qubes OS development. The
single most important undertaking that sets this release apart is the complete
rewrite of the Qubes Core Stack. We have a separate set of posts planned
detailing the changes (Why/What/How), and the first post should be released in
the coming two weeks.

This new Core Stack makes it easy to extend the Qubes architecture in new
directions, finally allowing us to build (in a clean way) many things we’ve
wanted for years but which would have been too complex to build on the “old”
Qubes infrastructure. The new Qubes Admin API, which we introduced in a
recent post (https://www.qubes-os.org/news/2017/06/27/qubes-admin-api/), is a prime example of one such feature.
(Technically speaking, we’ve neatly put the Admin API at the heart of the new
Qubes Core Stack so that it really is part of the Core Stack, not merely an
“application” built on top of it.)

There are many more benefits that the new Core Stack brings besides the Admin
API. Just to name a few that might be most visible to the user or admin:

- Simpler-to-customize and more flexible Disposable VMs,
- More flexible and expressive (qrexec) policy definitions,
- A flexible VM volume manager (making it easy to keep VMs on external drives
  or in memory only),
… and many more! The new Core Stack also brings lots of simplifications for
developers of Qubes-specific apps and services. Again, we plan to publish posts
about all these cool new features in the coming weeks and months.
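As a sketch of what the new volume manager enables, one could keep a VM's storage on an external drive by creating a pool there. Treat the exact flags as assumptions: the `qvm-pool` and `qvm-create` syntax varied across 4.0 pre-releases, and `extpool`, the mount path, and the VM name below are made-up examples.

```shell
# Sketch only (run in dom0): create a file-backed storage pool on an
# external drive, then create a new AppVM whose volumes live in that pool.
# Flag names are assumptions based on the 4.0-era tools and may differ.
qvm-pool --add extpool file -o dir_path=/mnt/external
qvm-create -P extpool testvm --label orange
```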

One last important comment is that all the work we have done in this area has
been Xen-agnostic, aligned with our long-stated goal (https://blog.invisiblethings.org/2013/03/21/introducing-qubes-odyssey-framework.html) to make
Qubes easily portable between different VMMs (hypervisors) and even non-VM-based
systems, such as container-based ones.

Fully virtualized VMs for better isolation

Another important change in this release (this time Xen-specific) is that we
have ditched para-virtualized mode and embraced fully-virtualized mode for
Qubes VMs. The reason for this move has been entirely security-related, as
explained here (https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-024-2016.txt#L92-L132) and here (https://www.qubes-os.org/news/2016/07/21/new-hw-certification-for-q4/).

Originally, we planned to use the PVH mode of virtualization. PVH combines
the benefits of processor virtualization technologies (VT-x and EPT), which
allow for simpler, and thus more secure, code in the hypervisor, with
paravirtualized drivers, whose simplified interfaces to virtualized devices
improve both performance and security. Even though we have long been
using (https://blog.invisiblethings.org/2012/03/03/windows-support-coming-to-qubes.html) isolated stub domains to keep device I/O
emulators outside of the TCB, these stub domains themselves run in PV mode,
which we are now moving away from.

Sadly, due to the Linux kernel still not fully supporting this PVH mode
(specifically problems with booting the kernels in this mode (http://markmail.org/message/ddds3tb4b23gmtgo)),
we decided to go with the HVM-based VMs for this rc1 release. We plan to switch
to full PVH either in the later rc-releases, or in 4.1, depending on the
progress of PVH support in the Linux kernel.

Also, as an additional last-minute issue, we discovered that PCI pass-through
does not work reliably on some systems when using HVM virtualization. This
typically affects USB VMs. Nevertheless, as a precaution, in the default
installation we decided to switch the mode of virtualization for these VMs
back to PV mode. (The new Core Stack allows one to do this with the flip of a
switch, i.e. a single VM property :) Our rationale is that it’s still much
better to have PV-based isolation for USB VMs than to have no isolation of USB
controllers at all! Again, we anticipate this will be resolved in the
upcoming rc-releases.
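The switch in question is a per-VM property; in the 4.0-era tools it is called `virt_mode` (the property name and the default USB qube name `sys-usb` are assumptions based on a standard 4.0 install). From dom0 it can be inspected and changed roughly like this:

```shell
# In dom0: check which virtualization mode the USB qube uses, and switch
# it back to fully-virtualized mode once PCI pass-through works for you.
# "sys-usb" is the default USB qube name in a standard install.
qvm-prefs sys-usb virt_mode        # prints the current mode, e.g. "pv"
qvm-prefs sys-usb virt_mode hvm    # switch to fully-virtualized mode
```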

New approach to UX/UI for better integration

In Qubes 4.0 we also decided (https://github.com/QubesOS/qubes-issues/issues/2132) to redesign the User
Experience (UX) a little bit. Aligned with our long-term vision to hide as
much of the Qubes internals from the casual user as practically viable, we made
a bold move and… removed the Qubes Manager altogether!

Instead, we believe it makes more sense to utilize as much of the infrastructure
already built by professional UX designers as possible. Consequently, most of
the Qubes persistent configuration (creation of new VMs, changing their settings
as well as the global ones) is accessible through the standard application menu
aka “Start Menu”. In addition, we wrote two tiny widgets, which should work with
most desktop environments compatible with Qubes (currently this list includes
the default Xfce4, the once-default KDE, the community-maintained i3, and
awesome). These widgets are used to show live info about the running system
state, such as which VMs are currently running, their memory usage, as well as
which devices are available to connect to different VMs (and yes, it is now
possible to connect USB devices using the GUI, a feature long requested by
many of our users).

Advanced Qubes users will surely appreciate, on the other hand, the much more
flexible and powerful qvm-* tools, such as the completely rewritten qvm-ls
and qvm-prefs, to name just two (again, more on them in the upcoming posts).
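For readers who script their systems, here are a couple of hedged dom0 examples of the rewritten tools. The field names and the `work` VM below are assumptions based on the 4.0-era tools and a default install, and may differ in your release:

```shell
# In dom0: list VMs with a chosen set of columns, then inspect a single
# property of one VM. Field and property names are assumptions.
qvm-ls --fields NAME,STATE,CLASS,MEMORY
qvm-prefs work memory    # show the "memory" property of the "work" VM
```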

Better compatibility and all the rest

Besides the above, there have been lots of other improvements and bug fixes
compared to the 3.2 release. We list most of them in the release
notes (https://www.qubes-os.org/doc/releases/4.0/release-notes/).

Perhaps one worth singling out here, in the context of hardware compatibility,
is the upgrade of the default dom0 distribution to Fedora 25. (Before we
decompose dom0 into separate GUI and Admin VMs, which we plan to do in 4.1, the
dom0 distribution determines how well the GPU is supported.)

Summary

Qubes 4.0 is a significant milestone on our roadmap to implement a reasonably
secure desktop/client OS based on the “Security by Compartmentalization”
principle (using “Explicit Partitioning Model”, in contrast to the recently
popular “Sandboxing Model”).

This is the first release candidate of a largely rewritten complex system, and
no doubt early adopters will discover some rough edges here and there. Despite
our increasingly sophisticated automatic testing infrastructure, this is simply
unavoidable. Consequently, if you want to use Qubes for production, stick to
Qubes 3.2 until we release (https://www.qubes-os.org/doc/version-scheme/#release-schedule) the stable version of Qubes 4.0.

But if you would like to start learning and experimenting with the advanced new
features that 4.0 brings, such as the Admin API, or would like to help us reach
a stable 4.0 more quickly, or you’re just curious, or want to show off to your
friends what a bleeding edge system you have, then please do so and go straight
to the download page (https://www.qubes-os.org/downloads/)!

On behalf of the whole Qubes OS Core Team (https://www.qubes-os.org/team/),

joanna.
Helpful tip:

If you are going to delete a template after upgrading to a new one, make sure you show hidden internal VMs and delete the dvm AppVM before deleting the TemplateVM.
RT @micahflee: @Hak5 @QubesOS protects against this if you use a USB VM. You can tell it to not trust USB keyboards or to ask before allowing them to type 17/18 https://t.co/MRhNEeY1l3
The @QubesOS talk at Debconf starts here in a couple of minutes: https://debconf17.debconf.org/schedule/venue/4/
The presentation starts now
Qubes Security Bulletin #32: Xen hypervisor and Linux kernel vulnerabilities (XSA-226 through XSA-230):

https://t.co/a3J1LpNlUk
QSB #32: Xen hypervisor and Linux kernel vulnerabilities (XSA-226 through XSA-230)
https://www.qubes-os.org/news/2017/08/15/qsb-32/

Dear Qubes Community,

We have just published Qubes Security Bulletin (QSB) #32:
Xen hypervisor and Linux kernel vulnerabilities (XSA-226 through XSA-230).
The text of this QSB is reproduced below. This QSB and its accompanying
signatures will always be available in the Qubes Security Pack (qubes-secpack).

View QSB #32 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-032-2017.txt

Learn about the qubes-secpack, including how to obtain, verify, and read it:

https://www.qubes-os.org/security/pack/

View all past QSBs:

https://www.qubes-os.org/security/bulletins/

View XSA-226 through XSA-230 in the XSA Tracker:

https://www.qubes-os.org/security/xsa/



---===[ Qubes Security Bulletin #32 ]===---

August 15, 2017


Xen hypervisor and Linux kernel vulnerabilities (XSA-226 through XSA-230)

Summary
========

The Xen Security Team released several Xen Security Advisories today (XSA-226
through XSA-230) related to the grant tables mechanism used to share memory
between domains. The impact of these advisories ranges from data leaks to
system crashes and privilege escalations. See our commentary below for details.

Technical details
==================

Xen Security Advisory 226 [1]:

| Code to handle copy operations on transitive grants has built in retry
| logic, involving a function reinvoking itself with unchanged
| parameters. Such use assumes that the compiler would also translate
| this to a so called "tail call" when generating machine code.
| Empirically, this is not commonly the case, allowing for theoretically
| unbounded nesting of such function calls.
|
| A malicious or buggy guest may be able to crash Xen. Privilege
| escalation and information leaks cannot be ruled out.

Xen Security Advisory 227 [2]:

| When mapping a grant reference, a guest must inform Xen of where it
| would like the grant mapped. For PV guests, this is done by nominating
| an existing linear address, or an L1 pagetable entry, to be altered.
|
| Neither of these PV paths check for alignment of the passed parameter.
| The linear address path suitably truncates the linear address when
| calculating the L1 entry to use, but the path which uses a directly
| nominated L1 entry performs no checks.
|
| This causes Xen to make an incorrectly-aligned update to a pagetable,
| which corrupts both the intended entry and the subsequent entry with
| values which are largely guest controlled. If the misaligned value
| crosses a page boundary, then an arbitrary other heap page is
| corrupted.
|
| A PV guest can elevate its privilege to that of the host.

Xen Security Advisory 228 [3]:

| The grant table code in Xen has a bespoke semi-lockfree allocator for
| recording grant mappings ("maptrack" entries). This allocator has a
| race which allows the free list to be corrupted.
|
| Specifically: the code for removing an entry from the free list, prior
| to use, assumes (without locking) that if inspecting head item shows
| that it is not the tail, it will continue to not be the tail of the
| list if it is later found to be still the head and removed with
| cmpxchg. But the entry might have been removed and replaced, with the
| result that it might be the tail by then. (The invariants for the
| semi-lockfree data structure were never formally documented.)
|
| Additionally, a stolen entry is put on the free list with an incorrect
| link field, which will very likely corrupt the list.
|
| A malicious guest administrator can crash the host, and can probably
| escalate their privilege to that of the host.

Xen Security Advisory 229 [4]:

| The block layer in Linux may choose to merge adjacent block IO requests.
| When Linux is running as a Xen guest, the default merging algorithm is
| replaced with a Xen-specific one. When Linux is running as an x86 PV
| guest, some BIO's are erroneously merged, corrupting the data stream
| to/from the block device.
|
| This can result in incorrect access to an uncontrolled adjacent frame.
|
| A buggy or malicious guest can cause Linux to read or write incorrect
| memory when processing a block stream. This could leak information from
| other guests in the system or from Xen itself, or be used to DoS or
| escalate privilege within the system.

Xen Security Advisory 230 [5]:

| Xen maintains the _GTF_{read,writ}ing bits as appropriate, to inform the
| guest that a grant is in use. A guest is expected not to modify the
| grant details while it is in use, whereas the guest is free to
| modify/reuse the grant entry when it is not in use.
|
| Under some circumstances, Xen will clear the status bits too early,
| incorrectly informing the guest that the grant is no longer in use.
|
| A guest may prematurely believe that a granted frame is safely private
| again, and reuse it in a way which contains sensitive information, while
| the domain on the far end of the grant is still using the grant.


Commentary from the Qubes Security Team
========================================

It looks like the most severe of the vulnerabilities published today is
XSA-227, which is another example of a bug in memory management code for
para-virtualized (PV) VMs. As discussed before, in Qubes 4.0 [6], we've decided
to retire the use of PV virtualization mode in favour of fully virtualized VMs,
precisely in order to prevent this class of vulnerabilities from affecting
the security of Qubes OS. We note, however, that Qubes 3.2 uses PV for all VMs
by default.

XSA-228 seems to be another potentially serious vulnerability. While this does
not seem to be limited only to PV virtualization, we should note that it is a
race condition type of bug. Such types of vulnerabilities are typically
significantly more difficult to reliably exploit in practice.

The remaining vulnerabilities (XSA-229 and XSA-230) look even more theoretical.
We should also note that XSA-229 is a vulnerability in the Linux kernel's
implementation of the Xen PV block (disk) backend, not in the Xen hypervisor.
The Qubes architecture partly mitigates potential successful attacks exploiting
this vulnerability thanks to offloading some of the storage backend to USB and
(optionally) other VMs. The main system block backend still runs in dom0,
however, hence the inclusion of this bug in the bulletin.

Compromise Recovery
====================

Starting with Qubes 3.2, we offer Paranoid Backup Restore Mode, which was
designed specifically to aid in the recovery of a (potentially) compromised
Qubes OS system. Thus, if you believe your system might have been compromised
(perhaps because of the bugs discussed in this bulletin), then you should read
and follow the procedure described here:

https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/

Patching
=========

The specific packages that resolve the problems discussed in this
bulletin are as follows:

For Qubes 3.2:
- Xen packages, version 4.6.6-29
- Kernel packages, version 4.9.35-20

For Qubes 4.0:
- Xen packages, version 4.8.1-5
- Kernel packages, version 4.9.35-20

The packages are to be installed in dom0 via the qubes-dom0-update command or
via the Qubes VM Manager. A system restart will be required afterwards.
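Concretely, the update described above amounts to the following in a dom0 terminal (standard Qubes tooling; the restart is needed so the patched Xen and kernel binaries are actually used):

```shell
# In dom0: download and install the updated Xen and kernel packages,
# then restart the system to boot the patched hypervisor and kernel.
sudo qubes-dom0-update
sudo reboot
```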

If you use Anti Evil Maid, you will need to reseal your secret passphrase to
new PCR values, as PCRs 18 and 19 will change due to the new Xen and kernel
binaries and the regenerated initramfs.

These packages will migrate to the current (stable) repository over the next
two weeks after being tested by the community.

Credits
========

See the original Xen Security Advisories.

References
===========

[1] https://xenbits.xen.org/xsa/advisory-226.html
[2] https://xenbits.xen.org/xsa/advisory-227.html
[3] https://xenbits.xen.org/xsa/advisory-228.html
[4] https://xenbits.xen.org/xsa/advisory-229.html
[5] https://xenbits.xen.org/xsa/advisory-230.html
[6] https://www.qubes-os.org/news/2017/07/31/qubes-40-rc1/
XSA-235 does not affect the security of Qubes OS
https://www.qubes-os.org/news/2017/08/23/xsa-235/

The Xen Project has published Xen Security Advisory 235 (XSA-235).
This XSA does not affect the security of Qubes OS, and no user action is necessary.

This XSA has been added to the XSA Tracker (https://www.qubes-os.org/security/xsa/):

https://www.qubes-os.org/security/xsa/#235
My GSoC experience: Fuzzing the hypervisor
https://blog.xenproject.org/2017/08/25/my-gsoc-experience-fuzzing-the-hypervisor/

This blog post was written by Felix Schmoll, currently studying Mechanical Engineering at ETH Zurich. After obtaining a Bachelor in Computer Science from Jacobs University he spent the summer working on fuzzing the hypervisor as a Google Summer of Code student. His main interests in code are low-level endeavours and building scalable applications. Five months ago, […]
My GSoC Experience: Allow Setting up Shared Memory Regions between VMs from xl Config File
https://blog.xenproject.org/2017/08/29/my-gsoc-experience-allow-setting-up-shared-memory-regions-between-vms-from-xl-config-file/

This blog was written by Zhongze Liu. Zhongze Liu is a student studying information security in Huazhong University of Science and Technology in Wuhan, China. He recently took part in GSoC 2017 where he worked closely with the Xen Project community on “Allowing Sharing Memory Regions between VMs from xl Config.” His interests are low-level hacking and […]
Xen Project 4.8.2 is available
https://blog.xenproject.org/2017/09/06/xen-project-4-8-2-is-available/

I am pleased to announce the release of Xen 4.8.2. Xen Project Maintenance releases are released in line with our Maintenance Release Policy. We recommend that all users of the 4.8 stable series update to the latest point release. The release is available from its git repository xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.8 (tag RELEASE-4.8.2) or from the XenProject download […]