PSA: There Is A Bug Where The Android Emulator Fails To Run On Linux.
https://issuetracker.google.com/issues/141255618
https://redd.it/d7gmin
@r_linux
Why nice levels are a placebo and have been for a very long time, and no one seems to have noticed
***(EDIT: I was assuming this configuration was default in most distros, but now I'm starting to think it just might be some Ubuntu/Mint-specific weirdness...)***
Linux has a feature called 'autogrouping', which is enabled by default on most systems, for scheduling processes (see manual page excerpt below). Essentially it causes the scheduler to act primarily on the nice level set for process *groups* rather than individual processes.
This generally improves responsiveness and fairness for the typical case where nice values are left at zero anyway: a task split into many processes no longer gets a larger share of CPU time than a single-process task at the same nice level. Your desktop session (including all apps launched through graphical launchers) typically shares one autogroup, while opening a terminal window (which is typically where CPU-heavy background tasks are launched) creates a new autogroup, and background services generally have their own autogroups as well.
Are you with me so far? Here's where it gets screwy: when autogrouping is turned on, the standard per-process nice level **only** affects scheduling priority **relative to other processes in its group**. And the `nice` and `renice` commands (and their underlying system calls) are only aware of the traditional per-process nice value; they do **not** act on autogroups. The autogroup nice level can only be changed by writing to the virtual file at `/proc/<pid>/autogroup`, and **none of the standard utilities for dealing with priority seem to take this into account**.
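You can see the two separate knobs for yourself. Here's a quick sketch (assuming a kernel built with `CONFIG_SCHED_AUTOGROUP`, as on Ubuntu/Mint) showing that `renice` moves the per-process value while the autogroup value stays put:

```shell
# Inspect the current shell's autogroup and per-process nice values.
pid=$$

# Autogroup name and *group* nice level, e.g. "/autogroup-123 nice 0":
cat /proc/$pid/autogroup

# Traditional *per-process* nice value, as reported by ps:
ps -o pid,ni,comm -p $pid

# renice(1) changes only the per-process value...
renice -n 10 -p $pid

# ...while the autogroup line above still says "nice 0":
cat /proc/$pid/autogroup
```

The two values live in entirely different places, which is exactly why the standard tools miss the one that matters.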
While autogrouping tends to ensure fairness, what if you don't want fairness? What if you want to run a background task at very low priority? So in your terminal, instead of running `make -j32` you run `nice -n 15 make -j32`. Except *oops*, that actually made no difference! Since its autogroup nice level is still zero and the build you just started has no other processes running in its autogroup, its nice level is *irrelevant*.
The dark side of autogrouping is that with it enabled, the conventional commands and system calls for setting priority mostly become *placebos* that don't actually do anything. This means that power users who want to actually control the priority of their processes are not getting the result they expect. It also means the few programs that set their own nice level (such as Folding@Home, which kindly attempts to set itself to nice +19) fail in their attempt to (de)prioritize themselves, and still receive a "fair" share of CPU time even when there are other sources of high CPU load running.
The only place where you'll find unrelated tasks running in the same autogroup, and thus making the regular nice level somewhat relevant, is your main desktop session excluding terminal windows... except everything in there is almost always at nice level 0, since you'd typically open a terminal to `nice` anything!
The kicker here is that very few people and very few programs are aware of this or do anything about it; instead, they assume `nice` actually does what it's traditionally supposed to. It doesn't help that the man pages for `nice` and `renice` don't mention it at all...
If you want manual control over process priority (or to prevent blatant priority inversions from things like F@H), you have a few options.
* Soft-disable autogroups using the `noautogroup` boot parameter or by writing "0" to `/proc/sys/kernel/sched_autogroup_enabled`;
* Hard-disable autogroups by custom-building a kernel with `CONFIG_SCHED_AUTOGROUP` turned off;
* Start using the autogroup nice level at `/proc/<pid>/autogroup` instead of the regular nice level. There seems to be no standard command for this, though [I've jerry-rigged a crude solution](https://github.com/nlburgin/reallynice);
* The nuclear option: use `chrt -f` to bypass autogroups entirely via `SCHED_FIFO` for a task you don't want lower-priority processes stealing CPU time from. Of course, this only works for interactive processes as opposed to batch ones, or else your system could hang. While it's a clumsy solution, it is notably one that someone who isn't aware of autogroups might still come up with on their own.
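The soft-disable option above can be sketched like this (the `/proc` path and sysctl key are the standard ones from the man page below; persistence details vary by distro):

```shell
# Is autogrouping currently on? (1 = enabled, 0 = disabled)
cat /proc/sys/kernel/sched_autogroup_enabled

# Soft-disable it at runtime (needs root); lasts until reboot:
echo 0 | sudo tee /proc/sys/kernel/sched_autogroup_enabled

# Same thing via sysctl; to make it persistent, put
# "kernel.sched_autogroup_enabled = 0" in a file under /etc/sysctl.d/:
sudo sysctl kernel.sched_autogroup_enabled=0
```

With autogrouping off, `nice` and `renice` go back to their traditional, system-wide meaning.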
## Excerpt from `man 7 sched`:
>**The autogroup feature**
>
>Since Linux 2.6.38, the kernel provides a feature known as autogrouping to improve interactive desktop performance in the face of multiprocess, CPU-intensive workloads such as building the Linux kernel with large numbers of parallel build processes (i.e., the **make**(1) **-j** flag).
>
>This feature operates in conjunction with the CFS scheduler and requires a kernel that is configured with **CONFIG_SCHED_AUTOGROUP**. On a running system, this feature is enabled or disabled via the file `/proc/sys/kernel/sched_autogroup_enabled`; a value of 0 disables the feature, while a value of 1 enables it. The default value in this file is 1, unless the kernel was booted with the *noautogroup* parameter.
>
>A new autogroup is created when a new session is created via **setsid**(2); this happens, for example, when a new terminal window is started. A new process created by **fork**(2) inherits its parent's autogroup membership. Thus, all of the processes in a session are members of the same autogroup. An autogroup is automatically destroyed when the last process in the group terminates.
>
>When autogrouping is enabled, all of the members of an autogroup are placed in the same kernel scheduler "task group". The CFS scheduler employs an algorithm that equalizes the distribution of CPU cycles across task groups. The benefits of this for interactive desktop performance can be described via the following example.
>
>Suppose that there are two autogroups competing for the same CPU (i.e., presume either a single CPU system or the use of **taskset**(1) to confine all the processes to the same CPU on an SMP system). The first group contains ten CPU-bound processes from a kernel build started with *make -j10*. The other contains a single CPU-bound process: a video player. The effect of autogrouping is that the two groups will each receive half of the CPU cycles. That is, the video player will receive 50% of the CPU cycles, rather than just 9% of the cycles, which would likely lead to degraded video playback. The situation on an SMP system is more complex, but the general effect is the same: the scheduler distributes CPU cycles across task groups such that an autogroup that contains a large number of CPU-bound processes does not end up hogging CPU cycles at the expense of the other jobs on the system.
>
>A process's autogroup (task group) membership can be viewed via the file `/proc/[pid]/autogroup`:
>
>`$ cat /proc/1/autogroup`
>`/autogroup-1 nice 0`
>
>This file can also be used to modify the CPU bandwidth allocated to an autogroup. This is done by writing a number in the "nice" range to the file to set the autogroup's nice value. The allowed range is from +19 (low priority) to -20 (high priority). (Writing values outside of this range causes **write**(2) to fail with the error **EINVAL**.)
>
>The autogroup nice setting has the same meaning as the process nice value, but applies to distribution of CPU cycles to the autogroup as a whole, based on the relative nice values of other autogroups. For a process inside an autogroup, the CPU cycles that it receives will be a product of the autogroup's nice value (compared to other autogroups) and the process's nice value (compared to other processes in the same autogroup).
>
>The use of the **cgroups**(7) CPU controller to place processes in cgroups other than the root CPU cgroup overrides the effect of autogrouping.
>The autogroup feature groups only processes scheduled under non-real-time policies (**SCHED_OTHER**, **SCHED_BATCH**, and **SCHED_IDLE**). It does not group processes scheduled under real-time and deadline policies. Those processes are scheduled according to the rules described earlier.
>
>**The nice value and group scheduling**
>
>When scheduling non-real-time processes (i.e., those scheduled under the **SCHED_OTHER**, **SCHED_BATCH**, and **SCHED_IDLE** policies), the CFS scheduler employs a technique known as "group scheduling", if the kernel was configured with the **CONFIG_FAIR_GROUP_SCHED** option (which is typical).
>
>Under group scheduling, threads are scheduled in "task groups". Task groups have a hierarchical relationship, rooted under the initial task group on the system, known as the "root task group". Task groups are formed in the following circumstances:
>
>* All of the threads in a CPU cgroup form a task group. The parent of this task group is the task group of the corresponding parent cgroup.
>
>* If autogrouping is enabled, then all of the threads that are (implicitly) placed in an autogroup (i.e., the same session, as created by **setsid**(2)) form a task group. Each new autogroup is thus a separate task group. The root task group is the parent of all such autogroups.
>
>* If autogrouping is enabled, then the root task group consists of all processes in the root CPU cgroup that were not otherwise implicitly placed into a new autogroup.
>
>* If autogrouping is disabled, then the root task group consists of all processes in the root CPU cgroup.
>
>* If group scheduling was disabled (i.e., the kernel was configured without **CONFIG_FAIR_GROUP_SCHED**), then all of the processes on the system are notionally placed in a single task group.
>
>Under group scheduling, a thread's nice value has an effect for scheduling decisions *only relative to other threads in the same task group*. This has some surprising consequences in terms of the traditional semantics of the nice value on UNIX systems. In particular, if autogrouping is enabled (which is the default in various distributions), then employing **setpriority**(2) or **nice**(1) on a process has an effect only for scheduling relative to other processes executed in the same session (typically: the same terminal window).
>
>Conversely, for two processes that are (for example) the sole CPU-bound processes in different sessions (e.g., different terminal windows, each of whose jobs are tied to different autogroups), *modifying the nice value of the process in one of the sessions has no effect* in terms of the scheduler's decisions relative to the process in the other session. A possibly useful workaround here is to use a command such as the following to modify the autogroup nice value for *all* of the processes in a terminal session:
>
>`$ echo 10 > /proc/self/autogroup`
https://redd.it/d7hx2c
@r_linux
keyboards with the penguin instead of windows?
if they exist where are they?
https://redd.it/d7loze
@r_linux
Speed Art Inkscape: Stylish business card for the Linux system administrator
https://youtu.be/QMRSmEzsTmA
https://redd.it/d7d4x0
@r_linux
This week in KDE: a metric avalanche of amazing things
https://pointieststick.com/2019/09/21/this-week-in-kde-a-metric-avalanche-of-amazing-things/
https://redd.it/d7ma73
@r_linux
Huawei MateBook laptops now come with Linux
https://www.techradar.com/in/news/huawei-matebook-laptops-now-come-with-linux
https://redd.it/d7mbr1
@r_linux
Distributed compiling: Icecream 1.3 and Icemon 3.3 released
http://llunak.blogspot.com/2019/09/icecream-13-and-icemon-33-released.html?m=1
https://redd.it/d7n0dd
@r_linux
Why don't more Arch users run Void?
So I'm by no means a purist or evangelist (ok, maybe a little tiny bit). I enjoy a lot of different distros and operating systems. At work I run Regolith (Ubuntu+i3), on one of my laptops I run Void and on the other FreeBSD, and I enjoy macOS as well. However, when I read about or talk to Arch users about why they use Arch, its simplicity is often one of the most talked-about attributes. Now I don't contest this, and I'm also not a systemd bandwagon hater, but as a Void user for about 3 years now I have to say the simplicity of runit is far greater than that of systemd. I think systemd works wonderfully on more fully featured desktops, which are inevitably more complex, to achieve a better abstraction from the underlying system; these abstractions are the reason I use an Ubuntu distro at work, so I didn't have to take my entire first week getting my desktop environment dialed in. However, "ricing" and deep customization is often the way of the Arch community, something I deeply admire about that community, but I often feel like Void might be better suited. Admittedly I've never run Arch for very long, not because I don't enjoy it, but because I've often needed something else and, admittedly, I get some weird satisfaction from running more obscure distros. So excuse me if I'm ignorant of Arch compared to a long-term user.
So I have a few questions for you Arch users out there. Have you heard of Void? Have you used it? What made you pick Arch over Void if you've heard of it or even tried it in the past? What are some things you like and dislike about Void if you've tried it before? What are the qualities you look for in a distro? A lot of the Void community were, or even still are, Arch users, so it often makes me wonder; and I'm posting here because I'd like to reach outside both communities and see what the greater community has to say. I'm curious to know your input if you have even just a bit of experience in Arch and/or Void.
Lastly I'd like to thank the dedicated Arch community for their wiki and forums they've created over the years which I've found extremely useful and concise regardless of the distro I'm using.
https://redd.it/d7d6wb
@r_linux
Global OS share for the past few years (2003-2019): it seems like Linux share was growing comparably fast until 2011, when it stopped at 5-6%. Interesting why!
https://youtu.be/eJuvKn5j_kE
https://redd.it/d7je04
@r_linux
KDE's Dolphin file browser and a web browser? Wanna view a man page outside the terminal?
In Dolphin's path bar type ```man:/<command here>```
For example, ```man:/rsync```
That will open rsync's man page in your default browser, and it is formatted very nicely.
Many thanks to Aleix Pol who implemented this feature!
Edit: or just use Konqueror; then you are able to follow links within the man page.
Edit: or just use KHelpCenter, much better than all of the above.
https://redd.it/d7osdi
@r_linux
Elementary customization explained
https://thekengel.wordpress.com/2019/09/18/did-you-like-my-linux-desktop/
https://redd.it/d7pmuq
@r_linux
Do you have any concerns about the Linux Foundation?
I'm taking a course through the Linux Foundation and on the course launch page there is a red error box that says: "Refresh this page in your browser (on your keyboard press F5 for Windows or ⌘R for Mac) to see an updated view of your progress."
Does it rub anyone else the wrong way that they don't even give instructions for how to refresh a browser page on Linux? I also recently read that some of the "higher up" employees use Windows and Macs on their computers.
There is probably good and bad in this organization, and I know they do a lot of great work with the kernel, but this seems a little strange to me.
https://redd.it/d7q9ln
@r_linux
What are some clichés about your distro that you have found to be false?
https://redd.it/d7qr38
@r_linux
Reinstalling Windows XP on a Linux system
Not sure if this is the right place to ask, but how do I go about this? I have a genuine XP product code. The old laptop I have currently has an old version of Peppermint Linux on it.
Cheers!
https://redd.it/d7rerj
@r_linux
Did SYSTEM-CONFIG-LVM and its GUI ever make a comeback? That was an amazingly intuitive and powerful program with a brilliant GUI, and when combined with Gnome-disk-Utility it made setting up LVM disk pools child's play. Or is it "$oftware $ub$cription a$ a $ervice" now?
https://redd.it/d7r18v
@r_linux
Multilang Input - only reason why I don’t use Linux
For people like me, who use 3-5 languages every day, including Chinese, Korean and Russian, it's impossible to use Linux.
For example, on Mac or Windows I just press Cmd+Space (Alt+Shift) to change the input language and I can type everywhere. But on Linux such a simple thing (from a user's perspective) is so complicated. Even after finding some scripts and libraries to change the input source, you will find that in some applications the input layout doesn't work.
I think I'm not alone. Reply if you have the same problem.
https://redd.it/d7t4lp
@r_linux
Anyone else see Huawei using Deepin as a good thing?
Honestly using Deepin as a sacrificial lamb might be nice:
* Deepin is already terrible so this gives it a use
* The association might help scare new users away from it
* If China adopts it, some of the third-party software they need to use might come to the rest of us.
* Another, actually reputable distro won't have to fight/worry about China
https://redd.it/d7ts0s
@r_linux
Have you seen it? GNOME Games 3.34 supports adaptive design.
[Search](https://i.imgur.com/aESBK1Y.png)
[Savestates](https://i.imgur.com/ZwWnAbI.png)
[In game](https://i.imgur.com/BkWddCn.png)
[Preferences](https://i.imgur.com/7fUYgXQ.png)
[Main Screen](https://i.imgur.com/Vf4duUw.png)
[Platform details](https://i.imgur.com/l25DGno.png)
[Platforms](https://i.imgur.com/OEOjDG6.png)
[Backup & Restore](https://i.imgur.com/0T92sef.png)
https://redd.it/d7tlwp
@r_linux
TIFU case-study: destroying a SSD with IO problems
I'm writing this after having 4 months of data being only theoretically recoverable. Partially as a warning and partially for digital closure.
Also learn from my mistakes. Don't be like me.
# -1. The setup
WD Green 480 GB SSD + Hitachi 3TB
Both disks had LVM on LUKS. The SSD was split mostly between / and the cache\[1\], plus a small /boot, /boot/EFI and swap. The HDD held just the data part of the LVM cache and a swap with a high priority.
The encryption key for the HDD was a file located on the SSD. This was the only encryption key for the HDD.
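A setup like this is typically wired up with a keyfile registered in a LUKS keyslot and referenced from /etc/crypttab. A minimal sketch, assuming a hypothetical HDD at /dev/sdb1 and a made-up key path (not the author's actual configuration):

```shell
# Generate a 1 KiB random keyfile on the SSD and lock down its permissions:
dd if=/dev/urandom of=/root/hdd.key bs=1024 count=1
chmod 0400 /root/hdd.key

# Register the keyfile in a LUKS keyslot on the HDD:
cryptsetup luksAddKey /dev/sdb1 /root/hdd.key

# /etc/crypttab entry so the HDD unlocks at boot using the keyfile on the SSD:
# hdd_crypt  /dev/sdb1  /root/hdd.key  luks
```

The flip side of this convenience is exactly the failure mode described below: lose the SSD and you lose the only key to the HDD.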
# 0. The cat-astrophe
One day, after waiting out an IO freeze and deciding to reboot when the machine did not sort the problem out on its own (4 hrs vs. the usual 5-10 minutes), I was welcomed by an SSD that had decided to obliterate itself down to 16 KiB.
# 1. The road to dis-aster (a.k.a. all those small things that synergize in your face).
\- An incomplete backup solution.
For many reasons I kept putting off fixing this until later™. Until later™ bit me in the ass.
A lot of important files were saved by Syncthing to another machine I'm working on in parallel, but this is not a backup.
\- Ignoring high IO-related freezes and their impact on the SSD caching.
My system was plagued by freezes, probably caused by excessive swapping. When the physical RAM was exhausted, system responsiveness went out the window, and in a hurry.
Usually this happened when I wanted to play modded Minecraft and did not close Firefox (do note, I have enough tabs and extensions to make Chrome look slim and well-mannered at the table).
When I could still observe the system (SSHd was not responding to new connections), iotop showed mostly JBD2 operations, the RAM-intensive processes, and often Syncthing doing heavy IO \[2\].
This plagued me even before buying the SSD, back when swap was on a separate drive.
As one of the symptoms was intense HDD working noise, I disregarded the potential impact on the SSD.
The only thing I tried was adding "commit=15" to the SSD file systems, but I did not see any results.
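For context, commit= is an ext4 mount option that raises the journal commit interval from the default 5 seconds, batching metadata writes at the cost of losing more data on a crash. A sketch of how it might appear in /etc/fstab (the UUID is a placeholder):

```shell
# /etc/fstab — raise the ext4 journal commit interval to 15 s on the SSD root
# filesystem (UUID is a placeholder for illustration):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,commit=15  0  1
```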
While it had not yet trashed my drive, the system still reacted to Magic SysRq. But...
\- Trying to wait out the problem.
When the IO problem first occurred, I decided to REISUB my way out of it. Just added boot time: restart the apps and you can get back to whatever you were doing.
But at some point Pidgin (or libpurple-hangouts) on my machine decided to stop remembering Hangouts OAuth keys (on a second machine the same setup works flawlessly(!)), and I had to renew them on each reboot. So at some point I decided to just wait out the freeze to avoid the lost time. It was just 5-10 minutes. \[3\]
The last time I did this, the system tried to sort itself out for a very long time (I think \~4 hours) and stopped reacting to Magic SysRq.
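For reference, the REISUB actions invoked via the keyboard can also be triggered from a root shell (e.g. an SSH session opened before the freeze) by writing to /proc/sysrq-trigger, assuming kernel.sysrq permits them. A sketch of the final, filesystem-relevant steps:

```shell
# Same actions as the S, U, B of REISUB (r/e/i precede these in the full
# sequence); each write triggers one SysRq action:
echo s > /proc/sysrq-trigger   # sync all mounted filesystems
echo u > /proc/sysrq-trigger   # remount all filesystems read-only
echo b > /proc/sysrq-trigger   # reboot immediately, no clean shutdown
```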
\- Buying a DRAM-less SSD.
This is uncertain as far as my knowledge goes, but maybe, just maybe, with onboard DRAM the SSD firmware would not have written all those transactions as fast as they occurred. Instead they could have been written in bulk, or LVM caching could have been allowed to write them directly to the HDD.
In my defense, it was hard to find a definitive answer on whether a not-top-tier disk has DRAM or not (unless it's hidden under the term "cache").
\- Having the swap on the SSD.
I was firmly opposed to this, only to end up with one anyway. I thought priorities would save me from problems and that this swap would save me in hard times of memory pressure. Now I may never know if it worked at all.
In the end I will chalk it up to an unnecessary risk.
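Swap priorities of the kind mentioned are set with the pri= option: the kernel fills higher-priority swap first. A sketch of what such an /etc/fstab might look like (device names are placeholders, not the author's layout):

```shell
# /etc/fstab — higher pri= is used first, so the HDD swap here takes
# precedence over the SSD swap (devices are placeholders for illustration):
/dev/sdb2  none  swap  sw,pri=10  0  0   # HDD swap, preferred
/dev/sda3  none  swap  sw,pri=1   0  0   # SSD swap, fallback
```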
\- Having only one encryption key for the HDD, and not using a password manager to back it up.
This is why my data remains only theoretically recoverable. I just need to find the 1 KiB (if I remember correctly) of random data that was my key.
And I was very sure that I had a second passphrase on that drive. Turns out I did have a second passphrase... for a different disk...
The password manager / GPG encryption / printout part goes without saying.
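The single-key takeaway can be addressed with a second keyslot plus a backup of the LUKS header itself, since the header holds all the keyslots. A hedged sketch (the device name is a placeholder):

```shell
# Add a second, human-memorable passphrase to a spare keyslot; cryptsetup
# prompts for an existing key first, then for the new passphrase:
cryptsetup luksAddKey /dev/sdb1

# Back up the LUKS header (keyslots included) to a file kept off this disk:
cryptsetup luksHeaderBackup /dev/sdb1 --header-backup-file hdd-header.img
```

Either measure alone would have left the HDD in this story recoverable after the SSD died.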