Linux - Reddit – Telegram
763 subscribers
4.19K photos
207 videos
39.9K links
Stay up-to-date with everything Linux!
Content directly fetched from the subreddit just for you.

Powered by: @r_channels
One shot book to learn Linux and Operating System

Hey, I just read a book on Computer Networks (Computer Networking: A Top-Down Approach). Now I am thinking about reading a book on Linux that also explains OS terms.

Does something like this exist? If so, can you please guide me? I want to be a backend engineer, and it will really help me in this journey.

https://redd.it/1msd9p1
@r_linux
How much % CPU does your mouse use on Linux desktop?

Here's something odd that we found out during a Linux LAN event this weekend. This is not a tech support question, but a description of peculiar behavior that got people into quite a heated exchange during the event and was seen as unexpected.

1. Close all programs so your Linux system is idle and no windows are open.
2. Open a terminal and run top.
3. Vigorously move your mouse in circles or back and forth over the desktop for several seconds, while observing output from top.

Surprising result: on three tested systems (Linux Mint 22 Cinnamon, Debian 13, Fedora 42 KDE), CPU usage spikes to 20%, 50%, and even up to 100% on one system, just from moving the mouse.

All these systems have desktop GPUs used for playing games - not integrated graphics.

Someone said that they would have expected moving the mouse to not even register in top, i.e. some 0-1% CPU overhead, and that is what would happen on Windows and on macOS. That got me thinking that surely that couldn't be possible, since the CPU must do some work at least to process the mouse.

Does Linux's design dedicate a CPU core to processing the mouse?

I thought it would be interesting to poll: how much CPU overhead does moving the mouse result in on your Linux desktop system? Is e.g. 20%-100% CPU usage from moving the mouse nominal/expected on Linux? Does some Linux distro/desktop environment get 0% for mouse?
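For anyone answering the poll, a quick way to quantify this instead of eyeballing top is to sample /proc/stat directly. A minimal sketch (standard library only; assumes a Linux /proc filesystem): run it once while the system is idle, then again while wiggling the mouse, and compare the two numbers.

```rust
use std::{fs, thread, time::Duration};

// Returns (busy, total) jiffies summed over all CPUs, taken from the first
// line of /proc/stat ("cpu  user nice system idle iowait irq ...").
fn read_cpu() -> (u64, u64) {
    let stat = fs::read_to_string("/proc/stat").expect("cannot read /proc/stat");
    let fields: Vec<u64> = stat
        .lines()
        .next()
        .unwrap()
        .split_whitespace()
        .skip(1)
        .map(|s| s.parse().unwrap())
        .collect();
    let idle = fields[3] + fields.get(4).copied().unwrap_or(0); // idle + iowait
    let total: u64 = fields.iter().sum();
    (total - idle, total)
}

fn main() {
    let (busy1, total1) = read_cpu();
    thread::sleep(Duration::from_secs(5)); // move the mouse now
    let (busy2, total2) = read_cpu();
    let pct = 100.0 * (busy2 - busy1) as f64 / (total2 - total1).max(1) as f64;
    println!("CPU busy during sample: {pct:.1}%");
}
```

Note this measures whole-system load averaged across all cores, so a single busy core on an 8-core machine shows up as ~12%, not 100% as top's per-process view might.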

https://redd.it/1msmuyv
@r_linux
Sudo reference in The Simpsons
https://redd.it/1msprgk
@r_linux
I was recently given these manuals and decided to give them a try. I hope I'm up to date.
https://redd.it/1msxhj7
@r_linux
Humble Bundle; any of them good?

Humble Bundle currently has a bundle of Linux- and Unix-related books, and I was looking at picking it up. Is the bundle worth it? Are any of the books in it a particularly great read, or ones to skip?

https://www.humblebundle.com/books/linux-complete-pearson-books

https://redd.it/1msxmy4
@r_linux
ORYX - TUI for sniffing network traffic using eBPF
https://redd.it/1mt2ha3
@r_linux
I installed Linux Mint on my grandmother's brand new laptop (she asked me to)

My grandma recently bought a new laptop, and while I was helping her set it up, I ran into a problem. Since Windows 11 likes to force you to make a Microsoft account nowadays, I had her give me the email address and password she wanted to use for her account. The problem arose when I entered her email address and it got rejected. She uses a local ISP email address, and it's been fine for everything else she uses, but Microsoft wouldn't allow it in this case and suggested creating a new email address. Of course she doesn't want to do that. I explained the options to her: I could work around this and make a local account with some fiddling, we could make a new email address, or I could install Linux.

My grandmother, who is in her 70s, asked me to just install Linux. I've put Linux Mint on an older laptop of hers before to squeeze some extra life out of it, and I guess she really enjoyed using it. So today I installed Linux Mint on her brand-new laptop before even finishing the first boot of Windows 11. I just thought this was kind of amusing and wanted to share; I never thought I'd see the day where she'd actually choose Linux over Windows.


https://redd.it/1mtcb4p
@r_linux
Finally got WinApps to work, this tool is incredible.
https://redd.it/1mtg20d
@r_linux
Linux security policy

Hey,

I'm working on a Linux Security Policy for our company, which sets distro-agnostic requirements on the configuration and procedures that must be followed for employees wishing to use Linux on their work computers. Do you have any input?

("secure password" is defined elsewhere)

# Linux Security Policy draft

# Storage

The system MUST be secured with full-disk encryption using LUKS and a secure password or hardware key.
Suspend-to-disk (hibernation) MUST be encrypted or disabled.
Swap partitions MUST be encrypted or disabled.

# User setup

The user account MUST have a secure password.
Measures MUST be in place to protect against brute-force attacks (e.g. lock the account for 10 minutes after 3 failed login attempts).
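As one possible way to meet the brute-force requirement on pam_faillock-based systems (most current distros), a /etc/security/faillock.conf along these lines mirrors the numbers in the example above; this assumes pam_faillock is actually enabled in the PAM stack (e.g. via authselect or pam-auth-update):

```ini
# /etc/security/faillock.conf
deny = 3            # lock after 3 consecutive failures
unlock_time = 600   # automatically unlock after 10 minutes
fail_interval = 900 # failures are counted within a 15-minute window
```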

# System configuration

Microcode updates MUST be applied to mitigate CPU/architecture vulnerabilities.
The system MUST NOT have SSH server running, unless explicitly required.
If used, root login MUST be prohibited, and SSH keys MUST be used instead of passwords.
The root account MUST be disabled for direct login, or secured with a strong password if enabled.
A firewall (e.g. ufw) MUST be configured with default-deny inbound rules, except where explicitly needed (e.g. mDNS on UDP 5353 for local printer discovery or similar services).
A Mandatory Access Control (MAC) system (e.g. AppArmor or SELinux) SHOULD be enabled and in enforcing mode.
Secure Boot SHOULD be enabled.

\> Unsure about this. Secure Boot is, as I understand it, more or less useless on Linux unless you own the whole trust chain yourself, which is somewhat risky to set up and a pretty big ask for a basic security requirement.

Sandboxed package formats like Snap, Flatpak, or AppImage SHOULD be used for untrusted or third-party GUI applications...

# Procedures

sudo SHOULD be used over su.
Installed packages MUST be updated at least monthly.
CVE scanning tools (e.g. arch-audit, debian-security-support) SHOULD be run periodically.
If CVE scanning is used, critical vulnerabilities MUST be reviewed in applications that are:
* Externally exposed (e.g. browsers, dev servers)
* Handling untrusted content (e.g. document viewers, email clients)
Actions on CVEs MAY include upgrading, sandboxing, disabling features, or temporary avoidance.

\> I'm partial to removing any mention of CVEs, as I often find it hard to gain anything useful from the output (e.g. arch-audit currently reports several high-risk vulnerabilities in libxml2, which is used in a ton of applications, but hopefully/probably not in a way that exposes the vulnerabilities).

https://redd.it/1mtfgrc
@r_linux
It feels like the majority of people, tech literate or not, are still stuck in 2010 when it comes to how they perceive Linux

Because every single time Linux comes up, people keep shitting on it with "yeah but you have to code a million lines just to get your printer running" or "yeah but it will break after every update" and other vastly outdated clichés. I mean, sure, it still isn't Windows-level in compatibility, but since switching to Fedora I can do literally everything I could do on Windows. And ironically enough, most games legitimately run better now. And I barely had to do anything; it just worked out of the box. While you still have to learn quite a few things to use Linux efficiently (most of which comes naturally with time, just like on any OS), it still isn't NEARLY as bad as it was 5-10 years ago. Sadly, it seems like most people's knowledge about Linux is still stuck in that era; they aren't aware of how far Linux has come since then and just keep repeating this outdated stuff, making Linux seem much worse than it actually is. Sadly enough, it also affected my own decision to give it a chance on my main PC, and I kept delaying installing it. That's also what made me think about this topic. If uninformed people wouldn't spout this outdated info every time Linux comes up, it wouldn't surprise me if more people gave the OS a fair chance.

https://redd.it/1mtltjf
@r_linux
Gentoo Linux with XFCE on a 2001 iBook G3/600
https://redd.it/1mtngzr
@r_linux
Czkawka / Krokiet 10.0: Cleaning duplicates, ARM Linux builds, removed AppImage support and availability in Debian 13 repositories

After a little less than six months, I’m releasing a new version of my three distinct (yet similar) duplicate-finding programs today.

https://preview.redd.it/wsw3ai9gnsjf1.png?width=2880&format=png&auto=webp&s=4b5ad81c4f2e4de036f00b847c634f78b10b1508

The list of fixes and new features may seem random, and in fact it is, because I tackled them in the order in which ideas for their solutions came to mind. I know that the list of reported issues on GitHub is quite long, and for each user their own problem seems the most important, but with limited time I can only address a small portion of them, and I don’t necessarily pick the most urgent ones.

Interestingly, this version is the largest so far (at least if you count the number of lines changed). Krokiet now contains almost all the features I used in the GTK version, so it looks like I myself will soon switch to it completely, setting an example for other undecided users (as a reminder, the GTK version is already in maintenance mode, and I focus there exclusively on bug fixes, not adding new features).

As usual, the binaries for all three projects (czkawka\_cli, krokiet, and czkawka\_gui), along with a short legend explaining what the individual names refer to and where these files can be used, can be found in the releases section on GitHub — [https://github.com/qarmin/czkawka/releases](https://github.com/qarmin/czkawka/releases)

# Adding memory usage limits when loading the cache

One of the random errors, caused sometimes by the user, sometimes by me, and sometimes by external events (for example, a power outage shutting down the computer mid-operation), was a mysterious crash at the start of scanning, which printed the following to the terminal:

memory allocation of 201863446528 bytes failed

Cache files that were corrupted by the user (or by random events) would crash the application when loaded via the bincode library. Another situation producing an identical-looking error occurred when I tried to remove cache entries for non-existent or unavailable files using an incorrect struct for reading the data (in that case, the fix was simply changing the struct type into which I wanted to decode the data).

This was a rather unpleasant situation, because the application would crash for the user during scanning or when pressing the appropriate button, leaving them unsure what to do next. Bincode provides the ability to set a memory limit for data decoding. The fix required only a few lines of code, and that could have been the end of it. However, during testing it turned out to be an unexpected breaking change: data saved with a memory-limited configuration cannot be read with a standard configuration, and vice versa.

use std::collections::BTreeMap;
use bincode::{serialize_into, Options};

const MEMORY_LIMIT: u64 = 1024 * 1024 * 1024; // 1 GB

fn main() {
    let rands: Vec<u32> = (0..1).map(|_| rand::random::<u32>()).collect();
    let btreemap: BTreeMap<u32, Vec<u32>> = rands
        .iter()
        .map(|&x| (x % 10, rands.clone()))
        .collect();

    // Serialize with a memory limit...
    let options = bincode::DefaultOptions::new().with_limit(MEMORY_LIMIT);
    let mut serialized: Vec<_> = Vec::new();
    options.serialize_into(&mut serialized, &btreemap).unwrap();
    println!("{:?}", serialized);

    // ...and without one.
    let mut serialized2: Vec<_> = Vec::new();
    serialize_into(&mut serialized2, &btreemap).unwrap();
    println!("{:?}", serialized2);
}

[1, 1, 1, 252, 53, 7, 34, 7]
[1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 53, 7, 34, 7]

The above code, when serializing data with and without the limit, produces two different results, which was very surprising to me, because I thought the limit option applied only to the decoding code and not to the file format itself (it seems to me that most data-encoding libraries write only the raw data to the file).

So, like it or not, this version (following the path of its predecessors) has a cache that is incompatible with previous versions. This was one of the reasons I didn’t implement it earlier — I had tried adding limits only when reading the file, not when writing it (where I considered it unnecessary), and it didn’t work, so I didn’t continue trying to add this functionality.

I know that for some users it’s probably inconvenient that in almost every new version they have to rebuild the cache from scratch, because due to changed structures or data calculation methods, it’s not possible to simply read old files. So in future versions, I’ll try not to tamper too much with the cache unless necessary (although, admittedly, I’m tempted to add a few extra parameters to video files in the next version, which would force the use of the new cache).

An alternative would be to create a built-in tool for migrating cache files. However, reading arbitrary external data without memory limits in place would make such a tool useless and prone to frequent crashes. Such a tool is only feasible from the current version onward, and it may be implemented in the future.

# Translations in Krokiet

To match the feature set currently available in Czkawka, I decided to implement the missing translations, whose absence makes the application harder to use for users less proficient in English.

One might think that since Slint itself is written in Rust, using the Fluent library inside it, which is also written in Rust, would be an obvious and natural choice. However, for various reasons, the authors decided it was better to use probably the most popular translation tool instead: gettext. That choice, however, complicates compilation and makes cross-compilation almost impossible (this issue aims to change the situation: [https://github.com/slint-ui/slint/issues/3715](https://github.com/slint-ui/slint/issues/3715)).

Without built-in translation support in Slint, what seemed like a fairly simple functionality turned into a tricky puzzle of how to implement it best. My goal was to allow changing the language at runtime, without needing to restart the entire application.

Ultimately, I decided that the best approach would be to create a singleton containing all the translation texts, in a style like this:

export global Translations {
    in-out property <string> ok_button_text: "Ok";
    in-out property <string> cancel_button_text: "Cancel";
    ...
}

…and use it as

export component PopupBase inherits PopupWindow {
    in-out property <string> ok_text <=> Translations.ok_button_text;
    ...
}

Then, when the language changes or the application launches, all these properties are updated like this:

app.global::<Callabler>().on_changed_language(move || {
    let app = a.upgrade().unwrap();
    let translation = app.global::<Translations>();
    translation.set_ok_button_text(flk!("ok_button").into());
    translation.set_cancel_button_text(flk!("cancel_button").into());
    ...
});

With over 200 texts to translate, it’s very easy to make a mistake or leave some translations unlinked, which is why I rely on Python helper noscripts that verify everything is being used.
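The idea behind such a check (the author's actual noscripts are in Python) can be sketched as follows; the file contents, property names, and the `set_` naming convention below are illustrative, not the project's real ones:

```rust
// Sketch: verify that every Translations property has a matching setter call
// somewhere in the Rust sources, so no translation is left unlinked.
fn property_names(slint_src: &str) -> Vec<String> {
    slint_src
        .lines()
        .filter_map(|line| {
            line.trim()
                .strip_prefix("in-out property <string> ")
                .and_then(|rest| rest.split(':').next())
                .map(|name| name.trim().to_string())
        })
        .collect()
}

// Slint exposes property `foo_bar` as a `set_foo_bar` setter on the Rust side.
fn unlinked(slint_src: &str, rust_src: &str) -> Vec<String> {
    property_names(slint_src)
        .into_iter()
        .filter(|name| !rust_src.contains(&format!("set_{name}")))
        .collect()
}

fn main() {
    let slint_src = r#"
        export global Translations {
            in-out property <string> ok_button_text: "Ok";
            in-out property <string> cancel_button_text: "Cancel";
        }
    "#;
    let rust_src = r#"translation.set_ok_button_text(flk!("ok_button").into());"#;
    // cancel_button_text has no setter call, so it is reported:
    println!("unlinked translations: {:?}", unlinked(slint_src, rust_src));
}
```

A real check would walk the source tree instead of hardcoded strings, but the core comparison is this simple.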

This adds more code than if built-in support for fluent-rs existed and could be used directly, similar to how gettext translations currently work. I hope that something like this will be implemented for Fluent soon:

export component PopupBase inherits PopupWindow {
    in-out property <string> ok_text: @tr("ok_button");
    ...
}

Regarding the translations themselves, they are hosted and updated on Crowdin — [https://crowdin.com/project/czkawka](https://crowdin.com/project/czkawka) — and synchronized with GitHub from time to time. For each release, several dozen phrases are updated, so I’m forced to use machine translation for some languages. Not all texts may be fully translated or look as they should, so feel free to correct them if you come across any mistakes.

# Improving Krokiet

The main goal of this version was to reduce the feature gaps between Czkawka (GUI) and Krokiet, so that I could confidently recommend Krokiet as a viable alternative. I think I largely succeeded in this area.

During this process, it often turned out that implementing the same features in Slint is much simpler than it was in the GTK version. Take sorting as an example. On the GTK side, due to the lack of better-known solutions (there probably are some, but I’ve lived until now in complete ignorance, which makes my eyes hurt when I look at the final implementation I once made), to sort a model, I would get an iterator over it and then iterate through each element one by one, collecting the TreeIters into a vector. Then I would extract the data from a specific column of each row and sort it using bubble sort within that vector.

fn popover_sort_general<T>(tree_view: &gtk4::TreeView, column_sort: i32, column_header: i32)
where
    T: Ord + for<'b> glib::value::FromValue<'b> + 'static + Debug,
{
    let model = get_list_store(tree_view);
    if let Some(curr_iter) = model.iter_first() {
        assert!(model.get::<bool>(&curr_iter, column_header)); // First item should be a header
        assert!(model.iter_next(&curr_iter)); // Must be at least two items
        loop {
            let mut iters = Vec::new();
            let mut all_have = false;
            loop {
                if model.get::<bool>(&curr_iter, column_header) {
                    assert!(model.iter_next(&curr_iter), "Empty header, this should not happen");
                    break;
                }
                iters.push(curr_iter);
                if !model.iter_next(&curr_iter) {
                    all_have = true;
                    break;
                }
            }
            if iters.len() == 1 {
                continue; // Can be equal to 1 in reference folders
            }
            sort_iters::<T>(&model, iters, column_sort);
            if all_have {
                break;
            }
        }
    }
}

fn sort_iters<T>(model: &ListStore, mut iters: Vec<TreeIter>, column_sort: i32)
where
    T: Ord + for<'b> glib::value::FromValue<'b> + 'static + Debug,
{
    assert!(iters.len() >= 2);
    loop {
        let mut changed_item = false;
        for idx in 0..(iters.len() - 1) {
            if model.get::<T>(&iters[idx], column_sort) > model.get::<T>(&iters[idx + 1], column_sort) {
                model.swap(&iters[idx], &iters[idx + 1]);
                iters.swap(idx, idx + 1);
                changed_item = true;
            }
        }
        if !changed_item {
            return;
        }
    }
}

Over time, I’ve realized that I should have wrapped the model management logic earlier, which would have made reading and modifying it much easier. But now, it’s too late to make changes. On the Slint side, the situation is much simpler and more “Rust-like”:

pub(super) fn sort_modification_date(model: &ModelRc<MainListModel>, active_tab: ActiveTab) -> ModelRc<MainListModel> {
    let sort_function = |e: &MainListModel| {
        let modification_date_col = active_tab.get_int_modification_date_idx();
        let val_int = e.val_int.iter().collect::<Vec<_>>();
        connect_i32_into_u64(val_int[modification_date_col], val_int[modification_date_col + 1])
    };
    let mut items = model.iter().collect::<Vec<_>>();
    items.sort_by_cached_key(&sort_function);
    let new_model = ModelRc::new(VecModel::from(items));
    recalculate_small_selection_if_needed(&new_model, active_tab);
    return new_model;
}

It’s much shorter, more readable, and in most cases faster (the GTK version might be faster if the data is already almost sorted). Still, a few oddities remain, such as:

* modification\_date\_col — to generalize the model across the different tools, each row in the scan results carries vectors of numeric and string data. The amount and order of that data differ per tool, so the code must ask the current tab where the needed value currently resides.
* connect\_i32\_into\_u64 — as the name suggests, it combines two i32 values into a u64. This is a workaround for the fact that Slint doesn't yet support 64-bit integers (though I'm hopeful that support will be added soon).
* recalculate\_small\_selection\_if\_needed — due to the lack of built-in widgets with multi-selection support in Slint (unlike GTK), I had to create such a widget along with all the logic for selecting items, modifying selections, etc. It adds quite a bit of extra code, but at least I now have more control over selection, which comes in handy in certain situations.
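For illustration, the i32-to-u64 packing can look like this (a sketch of the idea; the project's actual helper and bit layout may differ):

```rust
// Pack two i32 halves (as produced by splitting a u64) back into a u64.
// Casting through u32 first avoids sign extension smearing the high bits.
fn connect_i32_into_u64(high: i32, low: i32) -> u64 {
    ((high as u32 as u64) << 32) | (low as u32 as u64)
}

fn main() {
    let timestamp: u64 = 1_700_000_123_456; // an example modification date in ms
    // Split into the two i32 values a Slint model can store...
    let (high, low) = ((timestamp >> 32) as i32, timestamp as i32);
    // ...and recombine them losslessly.
    assert_eq!(connect_i32_into_u64(high, low), timestamp);
    println!("round-trip ok");
}
```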

Another useful feature that already existed in Czkawka is the ability to start a scan, along with a list of selected folders, directly from the CLI. So now, running

krokiet . Desktop -i /home/rafal/Downloads -e /home/rafal/Downloads/images

will start scanning for files in three folders with one excluded (of course, only if the paths exist — otherwise, the path will be ignored). This mode uses a separate configuration file, which is loaded when the program is run with command-line arguments (configurations for other modes are not overwritten).
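That argument convention can be modeled roughly like this (an illustrative sketch, not Krokiet's actual parser): leading plain arguments and anything after `-i` are included paths, anything after `-e` is excluded.

```rust
// Split CLI arguments into (included, excluded) scan paths.
// Plain leading arguments count as included; `-i`/`-e` switch the target list.
fn split_args(args: &[&str]) -> (Vec<String>, Vec<String>) {
    let (mut included, mut excluded) = (Vec::new(), Vec::new());
    let mut excluding = false;
    for arg in args {
        match *arg {
            "-i" => excluding = false,
            "-e" => excluding = true,
            path if excluding => excluded.push(path.to_string()),
            path => included.push(path.to_string()),
        }
    }
    (included, excluded)
}

fn main() {
    let (inc, exc) = split_args(&[
        ".", "Desktop", "-i", "/home/rafal/Downloads", "-e", "/home/rafal/Downloads/images",
    ]);
    println!("included: {inc:?}, excluded: {exc:?}");
}
```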

Since some things are easier to implement in Krokiet, I added several functions in this version that were missing in Czkawka:

* Remembering window size and column widths for each screen
* The ability to hide text on icons (for a more compact UI)
* Dark and light themes, switchable at runtime
* Disabling certain buttons when no items are selected
* Displaying the number of items queued for deletion

# Ending AppImage Support

Following the removal of Snap support on Linux in the previous version, due to difficulties in building the packages, it's now time to drop AppImage as well.

The main reasons for discontinuing AppImage are the nonstandard errors that would appear during use and its limited utility beyond what regular binary files provide.

Personally, I’m a fan of the AppImage format and use it whenever possible (unless the application is also available as a Flatpak or Snap), since it eliminates the need to worry about external dependencies. This works great for applications with a large number of dependencies. However, in Czkawka, the only dependencies bundled were GTK4 libraries — which didn’t make much sense, as almost every Linux distribution already has these libraries installed, often with patches to improve compatibility (for example, Debian patches: [https://sources.debian.org/src/gtk4/4.18.6%2Bds-2/debian/patches/series/](https://udd.debian.org/patches.cgi?src=gtk4&version=4.18.6%2Bds-2)).

It would make more sense to bundle optional libraries such as ffmpeg, libheif or libraw, but I didn’t have the time or interest to do that. Occasionally, some AppImage users started reporting issues that did not appear in other formats and could not be reproduced, making them impossible to diagnose and fix.

Additionally, the plugin itself ([https://github.com/linuxdeploy/linuxdeploy-plugin-gtk](https://github.com/linuxdeploy/linuxdeploy-plugin-gtk)) used to bundle GTK dependencies hadn’t been updated in over two years. Its authors did a fantastic job creating and maintaining it in their free time, but a major issue for me was that it wasn’t officially supported by the GTK developers, who could have assisted with the development of this very useful project.

# Multithreaded File Processing in Krokiet and CLI

Some users pointed out that deleting or copying files from within the application is time-consuming, and there is no feedback on progress. Additionally, during these operations, the entire GUI becomes unresponsive until the process finishes.

The problem stems from performing file operations on the same thread as GUI rendering. Without interface updates, the system considers the application unresponsive and may display an OS window prompting the user to kill it.

The solution is relatively straightforward: move the computations to a separate thread. However, this introduces two new challenges: stopping the file-processing task, and synchronizing the state of completed operations with the GUI.

A simple implementation in this style is sufficient:

let all_files = files.len();
let processing_files = Arc::new(AtomicUsize::new(0));
let _ = files
    .into_par_iter()
    .map(|e| {
        if stop_flag.load(Ordering::Relaxed) {
            return None;
        }
        let processed = processing_files.fetch_add(1, Ordering::Relaxed);
        let status_to_send = Status { all_files, processing_files: processed };
        let _ = progress_sender.send(status_to_send);
        Some(process_file(e)) // the actual file processing
    })
    .while_some()
    .collect::<Vec<_>>();

The problem arises when a large number of messages are being sent, and updating the GUI/terminal for each of them would be completely unnecessary — after all, very few people could notice and process status changes appearing even 60 times per second.

This would also cause performance issues and unnecessarily increase system resource usage. I needed a way to limit the number of messages being sent. This could be implemented either on the side of the message generator (the thread deleting files) or on the recipient side (the GUI thread/progress bar in CLI). I decided it was better to handle it early, on the generator side, rather than late, on the recipient side.

Ultimately, I created a simple structure that uses a lock to store the latest message to be sent. Then, in a separate thread, every \~100 ms, the message is fetched and sent to the GUI. Although the solution is simple, I do have some concerns about its performance on systems with a very large number of cores — there, thousands or even tens of thousands of messages per second could cause the mutex to become a bottleneck. For now, I haven’t tested it under such conditions, and it currently doesn’t cause problems, so I’ve postponed optimization (though I’m open to ideas on how it could be improved).

pub struct DelayedSender<T: Send + 'static> {
    slot: Arc<Mutex<Option<T>>>,
    stop_flag: Arc<AtomicBool>,
}

impl<T: Send + 'static> DelayedSender<T> {
    pub fn new(sender: crossbeam_channel::Sender<T>, wait_time: Duration) -> Self {
        let slot = Arc::new(Mutex::new(None));
        let slot_clone = Arc::clone(&slot);
        let stop_flag = Arc::new(AtomicBool::new(false));
        let stop_flag_clone = Arc::clone(&stop_flag);
        let _join = thread::spawn(move || {
            let mut last_send_time: Option<Instant> = None;
            let duration_between_checks = Duration::from_secs_f64(wait_time.as_secs_f64() / 5.0);
            loop {
                if stop_flag_clone.load(std::sync::atomic::Ordering::Relaxed) {
                    break;
                }
                if let Some(last_send_time) = last_send_time {
                    if last_send_time.elapsed() < wait_time {
                        thread::sleep(duration_between_checks);
                        continue;
                    }
                }
                let Some(value) = slot_clone.lock().expect("Failed to lock slot in DelayedSender").take() else {
                    thread::sleep(duration_between_checks);
                    continue;
                };
                if stop_flag_clone.load(std::sync::atomic::Ordering::Relaxed) {
                    break;
                }
                if let Err(e) = sender.send(value) {
                    log::error!("Failed to send value: {e:?}");
                };
                last_send_time = Some(Instant::now());
            }
        });
        Self { slot, stop_flag }
    }

    pub fn send(&self, value: T) {
        let mut slot = self.slot.lock().expect("Failed to lock slot in DelayedSender");
        *slot = Some(value);
    }
}

impl<T: Send + 'static> Drop for DelayedSender<T> {
    fn drop(&mut self) {
        // We need to know that, after dropping DelayedSender, no more values will be sent.
        // Previously, some values were cached and sent after other, later operations.
        self.stop_flag.store(true, std::sync::atomic::Ordering::Relaxed);
    }
}

# Alternative GUI

In the case of Krokiet and Czkawka, I decided to write the GUI in low-level languages (Slint is transpiled to Rust), instead of using higher-level languages — mainly for performance and simpler installation.

For Krokiet, I briefly considered using Tauri, but I decided that Slint would be a better solution in my case: simpler compilation and no need to use the heavy (and differently behaving on each system) webview with TS/JS.

However, one user apparently didn't like the current GUI and decided to create their own alternative using Tauri.

The author himself doesn't hide that he based the look of his program on Krokiet (which is obvious). Even so, differences can be noticed, stemming both from personal design preferences and from limitations of the libraries each project uses (for example, the Tauri version uses popups more often, because Slint has issues with them, so I avoided them whenever possible).

Since I am not very skilled in application design, it’s not surprising that I found several interesting solutions in this new GUI that I will want to either copy 1:1 or use as inspiration when modifying Krokiet.

Preliminary tests indicate that the application works surprisingly well, despite minor performance issues (one mode on Windows froze briefly — though the culprit might also be the czkawka\_core package), small GUI shortcomings (e.g., the ability to save the application as an HTML page), or the lack of a working Linux version (a month or two ago I managed to compile it, but now I cannot).

Link — [https://github.com/shixinhuang99/czkawka-tauri](https://github.com/shixinhuang99/czkawka-tauri)

# Czkawka in the Debian Repository

Recently, just before the release of Debian 13, a momentous event took place — Czkawka 8.0.0 was added to the Debian repository (even though version 9.0.0 already existed, but well… Debian has a preference for older, more stable versions, and that must be respected). The addition was made by user Fab Stz.

Links:
\- [https://packages.debian.org/sid/czkawka-gui](https://packages.debian.org/sid/czkawka-gui)
\- [https://packages.debian.org/sid/czkawka-cli](https://packages.debian.org/sid/czkawka-cli)

Debian takes reproducible builds very seriously, so it quickly became apparent that building Czkawka twice in the same environment produced two different binaries. I managed to reduce the problematic program to a few hundred lines. In my great wisdom (or naivety, assuming the bug wasn’t “between the chair and the keyboard”), I concluded that the problem must be in Rust itself. However, after analysis conducted by others, it turned out that the culprit was the i18n-cargo-fl library, whose proc-macro iterates over a hashmap of arguments, and in Rust the iteration order in such a case is random ([https://github.com/kellpossible/cargo-i18n/issues/150](https://github.com/kellpossible/cargo-i18n/issues/150)).

With the source of the problem identified, I prepared a fix — [https://github.com/kellpossible/cargo-i18n/pull/151](https://github.com/kellpossible/cargo-i18n/pull/151) — which has already been merged and is part of the new 0.10.0 version of the cargo-i18n library. Debian’s repository still uses version 0.9.3, but with this fix applied. Interestingly, cargo-i18n is also used in many other projects, including applications from Cosmic DE, so they too now have an easier path to achieving fully reproducible builds.
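The root cause generalizes: in Rust, the standard library's HashMap uses a randomly seeded hasher, so iteration order differs between processes, while BTreeMap always iterates in sorted key order. A proc-macro that emits code while iterating over a HashMap therefore emits differently ordered code on every build, breaking reproducibility. A minimal illustration:

```rust
use std::collections::{BTreeMap, HashMap};

fn main() {
    let pairs = [("name", 1), ("count", 2), ("path", 3)];
    let hashed: HashMap<_, _> = pairs.into_iter().collect();
    let sorted: BTreeMap<_, _> = pairs.into_iter().collect();

    // May print in a different order on every run (randomized hasher seed):
    println!("HashMap keys:  {:?}", hashed.keys().collect::<Vec<_>>());
    // Always prints in sorted order: ["count", "name", "path"]:
    println!("BTreeMap keys: {:?}", sorted.keys().collect::<Vec<_>>());
}
```

Swapping the HashMap for a BTreeMap (or sorting before iterating) is the standard fix, which is essentially what the linked cargo-i18n patch does.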

# Compilation Times and Binary Size

I have never hidden the fact that I gladly use external libraries to easily extend the capabilities of an application, so I don’t have to waste time reinventing the wheel in a process that is both inefficient and error-prone.

Despite many obvious advantages, the biggest downsides are larger binary sizes and longer compilation times. On my older laptop with 4 weak cores, compilation