the file).
So, like it or not, this version (following the path of its predecessors) has a cache that is incompatible with previous versions. This was one of the reasons I didn’t implement it earlier — I had tried adding limits only when reading the file, not when writing it (where I considered it unnecessary), and it didn’t work, so I didn’t continue trying to add this functionality.
I know that for some users it’s probably inconvenient that in almost every new version they have to rebuild the cache from scratch, because due to changed structures or data calculation methods, it’s not possible to simply read old files. So in future versions, I’ll try not to tamper too much with the cache unless necessary (although, admittedly, I’m tempted to add a few extra parameters to video files in the next version, which would force the use of the new cache).
An alternative would be to create a built-in tool for migrating cache files. However, reading arbitrary external data without memory limits in place would make such a tool useless and prone to frequent crashes. Such a tool is only feasible from the current version onward, and it may be implemented in the future.
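To illustrate the general idea of reading cache data with a memory cap in place, here is a minimal sketch (not Czkawka's actual cache code; the limit constant and the function name are assumptions):

```rust
use std::fs::File;
use std::io::Read;

// Assumed limit; the real value would depend on the cache type.
const MAX_CACHE_FILE_SIZE: u64 = 512 * 1024 * 1024;

// Reads at most MAX_CACHE_FILE_SIZE bytes, so a corrupted or hostile cache
// file cannot force the application to allocate unbounded memory.
fn read_cache_limited(path: &str) -> std::io::Result<Vec<u8>> {
    let file = File::open(path)?;
    let mut buffer = Vec::new();
    file.take(MAX_CACHE_FILE_SIZE).read_to_end(&mut buffer)?;
    Ok(buffer)
}
```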
# Translations in Krokiet
To match the feature set currently available in Czkawka, I decided to implement the missing translations; their absence made the application harder to use for people less proficient in English.
One might think that since Slint itself is written in Rust, using the Fluent library inside it — also written in Rust — would be an obvious and natural choice. However, for various reasons, the authors decided it was better to use probably the most popular translation tool instead: gettext, which complicates compilation and makes cross-compilation nearly impossible (this issue aims to change that — [https://github.com/slint-ui/slint/issues/3715](https://github.com/slint-ui/slint/issues/3715)).
Without built-in translation support in Slint, what seemed like a fairly simple functionality turned into a tricky puzzle of how to implement it best. My goal was to allow changing the language at runtime, without needing to restart the entire application.
Ultimately, I decided that the best approach would be to create a singleton containing all the translation texts, in a style like this:
```slint
export global Translations {
    in-out property <string> ok_button_text: "Ok";
    in-out property <string> cancel_button_text: "Cancel";
    ...
}
```
…and use it as
```slint
export component PopupBase inherits PopupWindow {
    in-out property <string> ok_text <=> Translations.ok_button_text;
    ...
}
```
Then, when the language is changed or the application is launched, all these properties are updated like this:
```rust
app.global::<Callabler>().on_changed_language(move || {
    let app = a.upgrade().unwrap();
    let translation = app.global::<Translations>();
    translation.set_ok_button_text(flk!("ok_button").into());
    translation.set_cancel_button_text(flk!("cancel_button").into());
    ...
});
```
With over 200 texts to translate, it's very easy to make a mistake or leave some translations unlinked, which is why I rely on Python helper scripts that verify everything is actually being used.
This adds more code than if built-in support for fluent-rs existed and could be used directly, similar to how gettext translations currently work. I hope that something like this will be implemented for Fluent soon:
```slint
export component PopupBase inherits PopupWindow {
    in-out property <string> ok_text: @tr("ok_button");
    ...
}
```
Regarding the translations themselves, they are hosted and updated on Crowdin — [https://crowdin.com/project/czkawka](https://crowdin.com/project/czkawka) — and synchronized with GitHub from time to time. For each release, several dozen phrases are updated, so I’m forced to use machine translation for some languages. Not all texts may be fully translated or look as they should, so feel free to correct them if you come across any mistakes.
# Improving Krokiet
The main goal of this version was to reduce the feature gaps between Czkawka (GUI) and Krokiet, so that I could confidently recommend Krokiet as a viable alternative. I think I largely succeeded in this area.
During this process, it often turned out that implementing the same features in Slint is much simpler than it was in the GTK version. Take sorting as an example. On the GTK side, for lack of better-known solutions (there probably are some, but I've lived in complete ignorance until now, and my eyes hurt when I look at the implementation I once wrote), sorting a model meant getting an iterator over it, walking through the elements one by one while collecting the TreeIters into a vector, then extracting the data from a specific column of each row and bubble-sorting within that vector.
```rust
fn popover_sort_general<T>(tree_view: &gtk4::TreeView, column_sort: i32, column_header: i32)
where
    T: Ord + for<'b> glib::value::FromValue<'b> + 'static + Debug,
{
    let model = get_list_store(tree_view);
    if let Some(curr_iter) = model.iter_first() {
        assert!(model.get::<bool>(&curr_iter, column_header)); // First item should be a header
        assert!(model.iter_next(&curr_iter)); // Must be at least two items
        loop {
            let mut iters = Vec::new();
            let mut all_have = false;
            loop {
                if model.get::<bool>(&curr_iter, column_header) {
                    assert!(model.iter_next(&curr_iter), "Empty header, this should not happen");
                    break;
                }
                iters.push(curr_iter);
                if !model.iter_next(&curr_iter) {
                    all_have = true;
                    break;
                }
            }
            if iters.len() == 1 {
                continue; // Can be equal to 1 in reference folders
            }
            sort_iters::<T>(&model, iters, column_sort);
            if all_have {
                break;
            }
        }
    }
}

fn sort_iters<T>(model: &ListStore, mut iters: Vec<TreeIter>, column_sort: i32)
where
    T: Ord + for<'b> glib::value::FromValue<'b> + 'static + Debug,
{
    assert!(iters.len() >= 2);
    loop {
        let mut changed_item = false;
        for idx in 0..(iters.len() - 1) {
            if model.get::<T>(&iters[idx], column_sort) > model.get::<T>(&iters[idx + 1], column_sort) {
                model.swap(&iters[idx], &iters[idx + 1]);
                iters.swap(idx, idx + 1);
                changed_item = true;
            }
        }
        if !changed_item {
            return;
        }
    }
}
```
Over time, I’ve realized that I should have wrapped the model management logic earlier, which would have made reading and modifying it much easier. But now, it’s too late to make changes. On the Slint side, the situation is much simpler and more “Rust-like”:
```rust
pub(super) fn sort_modification_date(model: &ModelRc<MainListModel>, active_tab: ActiveTab) -> ModelRc<MainListModel> {
    let sort_function = |e: &MainListModel| {
        let modification_date_col = active_tab.get_int_modification_date_idx();
        let val_int = e.val_int.iter().collect::<Vec<_>>();
        connect_i32_into_u64(val_int[modification_date_col], val_int[modification_date_col + 1])
    };

    let mut items = model.iter().collect::<Vec<_>>();
    items.sort_by_cached_key(&sort_function);
    let new_model = ModelRc::new(VecModel::from(items));

    recalculate_small_selection_if_needed(&new_model, active_tab);
    return new_model;
}
```
It’s much shorter, more readable, and in most cases faster (the GTK version might be faster if the data is already almost sorted). Still, a few oddities remain, such as:
* modification_date_col — to keep the model somewhat generic across tools, each row in the scan results carries vectors of numeric and string data; the amount and order of that data differ for each tool, so the code has to ask the current tab where the needed values currently reside
* connect_i32_into_u64 — as the name suggests, it combines two i32 values into a u64. This is a workaround for the fact that Slint doesn't yet support 64-bit integers (though I'm hopeful that support will be added soon); a small sketch of the idea follows this list.
* recalculate_small_selection_if_needed — due to the lack of built-in widgets with multi-selection support in Slint (unlike GTK), I had to create such a widget along with all the logic for selecting items, modifying selections, etc. It adds quite a bit of extra code, but at least I now have more control over selection, which comes in handy in certain situations
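As a rough illustration of that i32-to-u64 workaround, a minimal sketch could look like this (my own example of the idea, not necessarily the exact implementation used in Krokiet):

```rust
// Packs two i32 halves (as stored in the Slint model) into one u64,
// and unpacks them again; only bit reinterpretation, no data loss.
fn connect_i32_into_u64(upper: i32, lower: i32) -> u64 {
    ((upper as u32 as u64) << 32) | (lower as u32 as u64)
}

fn split_u64_into_i32s(value: u64) -> (i32, i32) {
    ((value >> 32) as i32, value as u32 as i32)
}
```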
Another useful feature that already existed in Czkawka is the ability to start a scan, along with a list of selected folders, directly from the CLI. So now, running
```
krokiet . Desktop -i /home/rafal/Downloads -e /home/rafal/Downloads/images
```
will start scanning for files in three folders with one excluded (of course, only if the paths exist — otherwise, the path will be ignored). This mode uses a separate configuration file, which is loaded when the program is run with command-line arguments (configurations for other modes are not overwritten).
Since some things are easier to implement in Krokiet, I added several functions in this version that were missing in Czkawka:
* Remembering window size and column widths for each screen
* The ability to hide text on icons (for a more compact UI)
* Dark and light themes, switchable at runtime
* Disabling certain buttons when no items are selected
* Displaying the number of items queued for deletion
# Ending AppImage Support
Following the end of Snap support on Linux in the previous version, due to difficulties in building them, it’s now time to drop AppImage as well.
The main reasons for discontinuing AppImage are the nonstandard errors that would appear during use and its limited utility beyond what regular binary files provide.
Personally, I’m a fan of the AppImage format and use it whenever possible (unless the application is also available as a Flatpak or Snap), since it eliminates the need to worry about external dependencies. This works great for applications with a large number of dependencies. However, in Czkawka, the only dependencies bundled were GTK4 libraries — which didn’t make much sense, as almost every Linux distribution already has these libraries installed, often with patches to improve compatibility (for example, Debian patches: [https://sources.debian.org/src/gtk4/4.18.6%2Bds-2/debian/patches/series/](https://udd.debian.org/patches.cgi?src=gtk4&version=4.18.6%2Bds-2)).
It would make more sense to bundle optional libraries such as ffmpeg, libheif or libraw, but I didn’t have the time or interest to do that. Occasionally, some AppImage users started reporting issues that did not appear in other formats and could not be reproduced, making them impossible to diagnose and fix.
Additionally, the plugin itself ([https://github.com/linuxdeploy/linuxdeploy-plugin-gtk](https://github.com/linuxdeploy/linuxdeploy-plugin-gtk)) used to bundle GTK dependencies hadn’t been updated in over two years. Its authors did a fantastic job creating and maintaining it in their free time, but a major issue for me was that it wasn’t officially supported by the GTK developers, who could have assisted with the development of this very useful project.
# Multithreaded File Processing in Krokiet and CLI
Some users pointed out that deleting or copying files from within the application is time-consuming, and there is no feedback on progress. Additionally, during these operations, the entire GUI becomes unresponsive until the process finishes.
The problem stems from performing file operations in the same thread as the GUI rendering. Without interface updates, the system considers the application unresponsive and may display an OS prompt suggesting that the user kill it.
The solution is relatively
straightforward — simply move the computations to a separate thread. However, this introduces two new challenges: the need to stop the file-processing task and to synchronize the state of completed operations with the GUI.
A simple implementation in this style is sufficient:
```rust
let all_files = files.len();
let processing_files = Arc::new(AtomicUsize::new(0));

let _ = files.into_par_iter().map(|e| {
    if stop_flag.load(Ordering::Relaxed) {
        return None; // Stop requested, abort the remaining work
    }
    let processing_files = processing_files.fetch_add(1, Ordering::Relaxed);
    let status_to_send = Status { all_files, processing_files };
    let _ = progress_sender.send(status_to_send);
    // Process the file here...
    Some(())
}).while_some().collect::<Vec<_>>();
```
The problem arises when a large number of messages are being sent, and updating the GUI/terminal for each of them would be completely unnecessary — after all, very few people could notice and process status changes appearing even 60 times per second.
This would also cause performance issues and unnecessarily increase system resource usage. I needed a way to limit the number of messages being sent. This could be implemented either on the side of the message generator (the thread deleting files) or on the recipient side (the GUI thread, or the progress bar in the CLI). I decided it was better to handle it on the sender side.
Ultimately, I created a simple structure that uses a lock to store the latest message to be sent. Then, in a separate thread, every ~100 ms, the message is fetched and sent to the GUI. Although the solution is simple, I do have some concerns about its performance on systems with a very large number of cores — there, thousands or even tens of thousands of messages per second could cause the mutex to become a bottleneck. For now, I haven't tested it under such conditions, and it currently doesn't cause problems, so I've postponed optimization (though I'm open to ideas on how it could be improved).
```rust
pub struct DelayedSender<T: Send + 'static> {
    slot: Arc<Mutex<Option<T>>>,
    stop_flag: Arc<AtomicBool>,
}

impl<T: Send + 'static> DelayedSender<T> {
    pub fn new(sender: crossbeam_channel::Sender<T>, wait_time: Duration) -> Self {
        let slot = Arc::new(Mutex::new(None));
        let slot_clone = Arc::clone(&slot);
        let stop_flag = Arc::new(AtomicBool::new(false));
        let stop_flag_clone = Arc::clone(&stop_flag);
        let _join = thread::spawn(move || {
            let mut last_send_time: Option<Instant> = None;
            let duration_between_checks = Duration::from_secs_f64(wait_time.as_secs_f64() / 5.0);
            loop {
                if stop_flag_clone.load(std::sync::atomic::Ordering::Relaxed) {
                    break;
                }
                if let Some(last_send_time) = last_send_time {
                    if last_send_time.elapsed() < wait_time {
                        thread::sleep(duration_between_checks);
                        continue;
                    }
                }
                let Some(value) = slot_clone.lock().expect("Failed to lock slot in DelayedSender").take() else {
                    thread::sleep(duration_between_checks);
                    continue;
                };
                if stop_flag_clone.load(std::sync::atomic::Ordering::Relaxed) {
                    break;
                }
                if let Err(e) = sender.send(value) {
                    log::error!("Failed to send value: {e:?}");
                };
                last_send_time = Some(Instant::now());
            }
        });

        Self { slot, stop_flag }
    }

    pub fn send(&self, value: T) {
        let mut slot = self.slot.lock().expect("Failed to lock slot in DelayedSender");
        *slot = Some(value);
    }
}

impl<T: Send + 'static> Drop for DelayedSender<T> {
    fn drop(&mut self) {
        // We need to know that after dropping DelayedSender, no more values will be sent.
        // Previously, some values were cached and sent after later operations.
        self.stop_flag.store(true, std::sync::atomic::Ordering::Relaxed);
    }
}
```
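For completeness, here is a hedged sketch of how such a limiter could be wired up (the Status struct and the numbers are invented for the example; it assumes the DelayedSender from the snippet above and the crossbeam-channel crate):

```rust
use std::{thread, time::Duration};

#[derive(Debug, Clone)]
struct Status {
    all_files: usize,
    processed: usize,
}

fn main() {
    // The receiver stands in for the GUI thread or the CLI progress bar.
    let (tx, rx) = crossbeam_channel::unbounded::<Status>();
    let delayed = DelayedSender::new(tx, Duration::from_millis(100));

    // A worker can call send() as often as it likes; at most one snapshot
    // per ~100 ms actually reaches the channel.
    for processed in 0_usize..10_000 {
        delayed.send(Status { all_files: 10_000, processed });
    }

    thread::sleep(Duration::from_millis(300));
    drop(delayed); // sets the stop flag, so the forwarding thread shuts down

    for status in rx.try_iter() {
        println!("progress: {}/{}", status.processed, status.all_files);
    }
}
```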
# Alternative GUI
In the case of Krokiet and Czkawka, I decided to write the GUI in lower-level languages (Slint is transpiled to Rust) instead of higher-level ones — mainly for performance and simpler installation.
For Krokiet, I briefly considered using Tauri, but I decided that Slint would be a better solution in my case: simpler compilation and no need to use the heavy (and differently behaving on each system) webview with TS/JS.
However, one user apparently didn't like the current GUI and decided to create their own alternative using Tauri.
The author himself does not hide that he based the look of his program on Krokiet (which is obvious). Even so, differences can be noticed, stemming both from personal design preferences and from limitations of the libraries that both projects use (for example, in the Tauri version popups are used more often, because Slint has issues with them, so I avoided using them whenever possible).
Since I am not very skilled in application design, it’s not surprising that I found several interesting solutions in this new GUI that I will want to either copy 1:1 or use as inspiration when modifying Krokiet.
Preliminary tests indicate that the application works surprisingly well, despite minor performance issues (one mode on Windows froze briefly — though the culprit might also be the czkawka_core package), small GUI shortcomings (e.g., the ability to save the application as an HTML page), or the lack of a working Linux version (a month or two ago I managed to compile it, but now I cannot).
Link — [https://github.com/shixinhuang99/czkawka-tauri](https://github.com/shixinhuang99/czkawka-tauri)
# Czkawka in the Debian Repository
Recently, just before the release of Debian 13, a momentous event took place — Czkawka 8.0.0 was added to the Debian repository (even though version 9.0.0 already existed, but well… Debian has a preference for older, more stable versions, and that must be respected). The addition was made by user Fab Stz.
Links:
- [https://packages.debian.org/sid/czkawka-gui](https://packages.debian.org/sid/czkawka-gui)
- [https://packages.debian.org/sid/czkawka-cli](https://packages.debian.org/sid/czkawka-cli)
Debian takes reproducible builds very seriously, so it quickly became apparent that building Czkawka twice in the same environment produced two different binaries. I managed to reduce the problematic program to a few hundred lines. In my great wisdom (or naivety, assuming the bug wasn’t “between the chair and the keyboard”), I concluded that the problem must be in Rust itself. However, after analysis conducted by others, it turned out that the culprit was the i18n-cargo-fl library, whose proc-macro iterates over a hashmap of arguments, and in Rust the iteration order in such a case is random ([https://github.com/kellpossible/cargo-i18n/issues/150](https://github.com/kellpossible/cargo-i18n/issues/150)).
With the source of the problem identified, I prepared a fix — [https://github.com/kellpossible/cargo-i18n/pull/151](https://github.com/kellpossible/cargo-i18n/pull/151) — which has already been merged and is part of the new 0.10.0 version of the cargo-i18n library. Debian’s repository still uses version 0.9.3, but with this fix applied. Interestingly, cargo-i18n is also used in many other projects, including applications from Cosmic DE, so they too now have an easier path to achieving fully reproducible builds.
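A minimal illustration of the underlying problem (my own example, not the cargo-i18n code): iterating over a HashMap in Rust has no guaranteed order, so anything generated from such an iteration can come out in a different order on every build.

```rust
use std::collections::HashMap;

fn main() {
    let mut args = HashMap::new();
    args.insert("name", 1);
    args.insert("count", 2);
    args.insert("path", 3);

    // The order of the printed pairs is unspecified and can change between
    // runs, because the hasher is seeded randomly for each process.
    for (key, value) in &args {
        println!("{key} => {value}");
    }
}
```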
# Compilation Times and Binary Size
I have never hidden the fact that I gladly use external libraries to easily extend the capabilities of an application, so I don’t have to waste time reinventing the wheel in a process that is both inefficient and error-prone.
Despite many obvious advantages, the biggest downsides are larger binary sizes and longer compilation times. On my older laptop with 4 weak cores, compilation
times became so long that I stopped developing this program on it.
However, this doesn’t mean I use additional libraries without consideration. I often try to standardize dependency versions or use projects that are actively maintained and update the libraries they depend on — for example, rawler instead of rawloader, or image-hasher instead of img-hash (which I created as a fork of img-hash with updated dependencies).
To verify the issue of long compilation times, I generated several charts showing how long Krokiet takes to compile with different options, how large the binary is after various optimizations, and how long a recompilation takes after adding a comment (I didn't test binary performance, as that is a more complicated matter). This allowed me to consider which options were worth including in CI. After reviewing the results, I decided it was worth switching the current configuration from release + thin lto to release + fat lto + codegen units = 1.
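For reference, such a configuration would be expressed in Cargo roughly like this (a sketch; the actual profile used in the repository may differ):

```toml
# Cargo.toml
[profile.release]
lto = "fat"        # full cross-crate link-time optimization
codegen-units = 1  # one codegen unit: slower builds, smaller binary
```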
The tests were conducted on a 12-core AMD Ryzen 9 9700 running Ubuntu 25.04, using the mold linker and rustc 1.91.0-nightly (cd7cbe818 2025-08-15). The base profiles were debug and release, and I adjusted some options on top of them (not all combinations seemed worth testing, and some caused various errors) to see their impact on compilation. It's important to note that Krokiet is a rather specific project with many dependencies, and Slint generates a large (~100k lines) Rust file, so other projects may see significantly different compilation times.
Test Results:
|Config | Output File Size | Target Folder Size | Compilation Time | Rebuild Time |
|:---------------------------------------------------|:-------------------|:---------------------|:-------------------|:---------------|
| release + overflow checks | 73.49 MiB | 2.07 GiB | 1m 11s | 20s |
| debug | 1004.52 MiB | 7.00 GiB | 1m 54s | 3s |
| debug + cranelift | 624.43 MiB | 5.25 GiB | 47s | 3s |
| debug + debug disabled | 131.64 MiB | 2.52 GiB | 1m 33s | 2s |
| check | - | 1.66 GiB | 58s | 1s |
| release | 70.50 MiB | 2.04 GiB | 2m 58s | 2m 11s |
| release + cranelift | 70.50 MiB | 2.04 GiB | 2m 59s | 2m 10s |
| release + debug info | 786.19 MiB | 5.40 GiB | 3m 23s | 2m 18s |
| release + native | 67.22 MiB | 1.98 GiB | 3m 5s | 2m 13s |
| release + opt o2 | 70.09 MiB | 2.04 GiB | 2m 56s | 2m 9s |
| release + opt o1 | 76.55 MiB | 1.98 GiB | 1m 1s | 18s |
| release + thin lto | 63.77 MiB | 2.06 GiB | 3m 12s | 2m 32s |
| release + optimize size | 66.93 MiB | 1.93 GiB | 1m 1s | 18s |
| release + fat lto | 45.46 MiB | 2.03 GiB | 6m 18s | 5m 38s |
| release + cu 1 | 50.93 MiB | 1.92 GiB | 4m 9s | 2m 56s |
| release + panic abort | 56.81 MiB | 1.97 GiB | 2m 56s | 2m 15s |
| release + build-std | 70.72 MiB | 2.23 GiB | 3m 7s | 2m 11s |
| release + fat lto + cu 1 + panic abort | 35.71 MiB | 1.92 GiB | 5m 44s | 4m 47s |
| release + fat lto + cu 1 + panic abort + native | 35.94 MiB | 1.87 GiB | 6m 23s | 5m 24s |
| release + fat lto + cu 1 + panic abort + build-std | 33.97 MiB | 2.11 GiB | 5m 45s | 4m 44s |
| release + fat lto + cu 1 | 40.65 MiB | 1.95 GiB | 6m 3s | 5m 2s |
| release + incremental | 71.45 MiB | 2.38 GiB | 1m 8s | 2s |
| release + incremental + fat lto | 44.81 MiB | 2.44 GiB | 4m 25s | 3m 36s |
Some things that surprised me:
* build-std increases, rather than decreases, the binary size
* optimize-size is fast but only slightly reduces the final binary size.
* fat-LTO works much better than thin-LTO in this project, even though I often read online that thin-LTO usually gives results very similar to fat-LTO
* panic-abort — I thought using this option wouldn't change the binary size much, but the file shrank by as much as 20%. However, I can't actually use this option and wouldn't recommend it to anyone (at least for Krokiet and Czkawka), because with external libraries that process/validate/parse external files, panics can occur, and with panic-abort they cannot be caught, so the application would simply terminate instead of printing an error and continuing (see the sketch after this list)
* release + incremental — this will probably become my new favorite flag: it gives release performance while keeping recompilation times close to debug. Sometimes I need a combination of both, although I still need to test this more to be sure
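To make the panic-abort point concrete, here is a small self-contained sketch (my own example, not Czkawka code): with the default panic = "unwind", a panic coming from parsing code can be contained like this, while with panic = "abort" the whole process terminates instead.

```rust
fn main() {
    // Stand-in for a third-party parser that may panic on a malformed file.
    let result = std::panic::catch_unwind(|| {
        let bytes: &[u8] = &[];
        bytes[10] // panics: index out of bounds
    });

    match result {
        Ok(_) => println!("parsed fine"),
        Err(_) => println!("parser panicked, skipping this file and continuing"),
    }
}
```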
The project I used for testing (created for my own purposes, so it might simply not work for other users, and additionally it modifies the Git repository, so all changes need to be committed before use) — [https://github.com/qarmin/czkawka/tree/master/misc/test_compilation_speed_size](https://github.com/qarmin/czkawka/tree/master/misc/test_compilation_speed_size)
# Files from unverified sources
Lately, I've both heard of and noticed strange new websites that seem to imply they are directly connected to the project (though this is never explicitly stated) and offer only binaries repackaged from GitHub, hosted on their own servers. This isn't inherently bad, but it could allow the binaries to be swapped for malicious files in the future.
Personally, I only manage a few projects related to Czkawka: the code repository on GitHub along with the binaries hosted there, the Flatpak version of the application, and projects on crates.io. All other projects are either abandoned (e.g., the Snap Store application) or managed by other people.
Czkawka itself does not have a website; its closest equivalent is the Readme.md file displayed on the main GitHub project page — I have no plans to create an official site.
So if you use alternative methods to install the program, make sure they come from trustworthy sources. In my view, these include projects like [https://packages.msys2.org/base/mingw-w64-czkawka](https://packages.msys2.org/base/mingw-w64-czkawka) (MSYS2 Windows), [https://formulae.brew.sh/formula/czkawka](https://formulae.brew.sh/formula/czkawka) (Brew macOS), and [https://github.com/jlesage/docker-czkawka](https://github.com/jlesage/docker-czkawka) (Docker Linux).
# Other changes
* File logging — it’s now easier to check for panic errors and verify application behavior historically (mainly relevant for Windows, where both applications and users tend to avoid the terminal)
* Dependency updates — pdf-rs has been replaced with lopdf, and imagepipe + rawloader replaced with rawler (a fork of rawloader) which has more frequent commits,
wider usage, and newer dependencies (making it easier to standardize across different libraries)
* More options for searching similar video files — I had blissfully assumed that the vid_dup_finder_lib library only allowed adjusting the video similarity level; it turns out you can also configure the black-line detection algorithm and how much of the beginning of each video is ignored
* Completely new icons — created by me (and admittedly uglier than the previous ones) under a CC BY 4.0 license, replacing the not-so-free icons
* Binaries for Mac with HEIF support, czkawka_cli built with musl instead of eyre, and Krokiet with an alternative Skia backend — added to the release files on GitHub
* Faster resolution changes in image comparison mode (fast-image-resize crate) — this can no longer be disabled (because, honestly, why would anyone want to?)
* Fixed a panic error that occurred when the GTK SVG decoder was missing or there was an issue loading icons using it (recently this problem appeared quite often on macOS)
Full changelog: — [https://github.com/qarmin/czkawka/blob/master/Changelog.md](https://github.com/qarmin/czkawka/blob/master/Changelog.md)
Repository — [https://github.com/qarmin/czkawka](https://github.com/qarmin/czkawka)
License — MIT/GPL
(Reddit users don't really like links to Medium, so I copied the entire article here. By doing so, I might have mixed up some things, so if needed you can read the original article (with more images) here – [https://medium.com/@qarmin/czkawka-krokiet-10-0-4991186b7ad1](https://medium.com/@qarmin/czkawka-krokiet-10-0-4991186b7ad1))
https://redd.it/1mtowxp
@r_linux
Linux for a normie (me)
TLDR: can't code, love Linux
I'm not computer literate at all and have the most experience with really old versions of Windows.
Got Linux, Ubuntu distro.
Don't get kernels, don't get servers, don't even know what anything means when I go to investigate the Linux user side of the web.
I must confess I also barely use the terminal because I use the laptop for spreadsheets and archiving mostly.
However, I really like it. Smooth, simple, etc etc.
One of the many perks for me is that my laptop hasn't been glitchy or slow since I got it and some of the weird noises stopped!
Thanks chat.
Room temperature IQ rating of Linux: 8/10
https://redd.it/1mtxne5
@r_linux
why isn't fedora recommended for beginners?
Installing nvidia drivers is pretty easy (only on workstation though), just enable 3rd party repos and search for nvidia on gnome software
codecs are still an issue, but way less than before, cuz it's mainly a flatpak distro nowadays
you can do major updates (like 42 to 43) with gnome software, no need for terminal
it has great bleeding-edge hardware support
so why is it still barely recommended?
https://redd.it/1mu0ul2
@r_linux
TIL that curl 8.14.0 and later includes a wget replacement called wcurl
https://curl.se/wcurl/
Instead of...
wget https://mirrors.rit.edu/ubuntu-releases/24.04.3/ubuntu-24.04.3-desktop-amd64.iso
...you can use
wcurl https://mirrors.rit.edu/ubuntu-releases/24.04.3/ubuntu-24.04.3-desktop-amd64.iso
TIL
https://redd.it/1mu43mn
@r_linux
Why do you use your distro?
I've been using Linux for almost a year now. I've tried many different distros, ranging from Fedora, Mint, Arch, CachyOS, Lubuntu, and more.
And after trying all of these distros, I eventually settled on Mint just because it seemed to be the most streamlined.
But I've thought about it a lot. Why do you even bother with other distros? The only thing I notice is the difference in package managers. Obviously there's a difference in desktop environments, but that's different. Why would you use Ubuntu with KDE instead of Fedora with KDE? Because I really wouldn't notice the difference.
https://redd.it/1mu97jp
@r_linux
Information about sleep inhibitors
I have aggressive power saving options but am amazed how well it works (GNOME).
When I have something downloading, it just turns off the screen instead of suspending. But if it's only seeding, it knows to suspend as well. When watching YouTube videos, it doesn't turn off the screen even after hours of not touching the mouse or keyboard. But somehow when I watch music videos instead, it knows to turn off the screen after a while. How does it even work?
But when I check the systemd inhibitor list, it only shows some UPower and NetworkManager stuff. Is there any information so I can customize it even more?
https://redd.it/1mufvok
@r_linux
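For anyone poking at the same question, two commands are a reasonable starting point (an illustrative aside, not part of the original post; the wrapped download and the reason text are made up):
systemd-inhibit --list
systemd-inhibit --what=sleep:idle --who="download job" --why="keep the machine awake until the transfer finishes" wget https://example.com/big-file.iso
The first prints the inhibitor locks currently registered with systemd-logind, together with who took them and why (applications that only inhibit through the GNOME session manager may not show up there). The second takes a sleep/idle lock for exactly as long as the wrapped command runs, which is roughly what the download and playback cases described above do behind the scenes.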
Happy birthday #DebianDay!
Happy #DebianDay!
Happy 32nd birthday to Debian, one of the oldest operating systems based on the #LinuxKernel, and the basis for #Ubuntu, #Kali, and #LinuxMint! 🐧🎈🎂
Thank you, #Debian community, for all your amazing work!
#Ubuntu #FOSS #opensource #freesoftware #LPI
https://redd.it/1muiwl3
@r_linux
RPGsh: A terminal application for managing TTRPGs
https://github.com/TheMohawkNinja/rpgsh
https://redd.it/1mujoyd
@r_linux
How do systemd-resolved, resolv.conf, and NetworkManager interact? It's confusing!
The following text applies to distros that use NetworkManager and systemd-resolved, which is the default setup for most distros:
A few weeks ago, I found myself on a train wanting to use the free Wi-Fi with my Fedora 41 system, but was unable to reach the captive portal to register my device on the network and gain access to the internet. After lots of tinkering around, I found that my DNS settings were responsible for this problem.
The network relied on spoofing any DNS request made to the router to redirect users to the captive portal instead. Since the default behavior of most devices is to use the router's DNS, this works most of the time.
On my machine though, a VPN (or maybe I myself at some point) had altered systemd-resolved's config to use a secure DoH (DNS over HTTPS) server globally, which meant that instead of being redirected to the captive portal by the router, my system tried to reach the secure DNS server.
I was able to solve my problem by temporarily stopping systemd-resolved and setting the router's DNS in resolv.conf (which normally points requests at systemd-resolved).
This experience led me to dig deeper on how Linux handles DNS requests. I quickly realized that resolv.conf was merely a compatibility file for older software that still read it to find out which DNS server to use. I also understood that you can assign a DNS server to each network interface, but those will be overwritten by the global settings.
What I don't get is how NetworkManager ties into all of this and why disabling systemd-resolved fixed my issue. Changing the DNS settings through the GNOME settings (which AFAIK are just a frontend to NetworkManager) doesn't seem to affect systemd-resolved's configuration. Setting the DNS in resolv.conf seems to work to a certain degree even though it's deprecated, and some applications have their own DNS settings which will override anything I just explained.
While I feel like I got somewhat of a picture on how all of these systems interact, I'm still left with lots of confusion. It seems like the whole setup is bodged together and prone to error if interacted with.
Perhaps it's just my lack of knowledge....
https://redd.it/1mus26j
@r_linux
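For readers untangling the same stack, a few commands make it easier to see which component currently owns DNS (an illustrative aside, not from the post; the interface name wlan0 and the router address 192.168.1.1 are placeholders):
resolvectl status
resolvectl dns wlan0 192.168.1.1
nmcli device show wlan0 | grep -i dns
resolvectl status shows the global DNS settings, the per-link servers, and which one systemd-resolved actually uses; resolvectl dns sets a per-link server at runtime, which is handy on captive-portal networks; and the nmcli line shows what NetworkManager has pushed for that interface. How NetworkManager hands DNS information to systemd-resolved (or bypasses it) is governed by the dns= option in /etc/NetworkManager/NetworkManager.conf.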
How do you handle pre/post Btrfs snapshot hooks when managing packages?
I've generally stuck to Ext4 partitions with various backup strategies over the years, after a mishap with Btrfs years ago. Recently, I've gone back to Btrfs for new builds and am now adding snapshots to my data redundancy & recovery strategy. I was using Timeshift at first, but now I am starting to use Snapper with native btrfs-tools instead, along with btrfs-assistant for GUI management.
I was going to write some scripts to handle snapshots for package management, but saw some existing packages to help with this. This is a list of the packages I've found, with some links and copypasta. I'm curious how y'all approach it. Do you use your own scripts, use any of these packages, know of other packages I didn't find, or something else entirely?
I'm trying to decide how I should handle it. I generally run Arch (btw) on my laptops & Debian on my desktops & servers, usually with rEFInd and just Grub for headless installs.
---
[**Arch**](https://wiki.archlinux.org/title/Snapper#Wrapping_pacman_transactions_in_snapshots)
* [**snap-pac**](https://github.com/wesbarnett/snap-pac) — Makes pacman automatically use snapper to create [pre/post snapshots](https://wiki.archlinux.org/title/Snapper#Pre/post_snapshots) like openSUSE's YaST. Uses [pacman hooks](https://wiki.archlinux.org/title/Pacman_hooks). *\[last commit: 4 years ago\]*
* [**grub-btrfs**](https://github.com/Antynea/grub-btrfs) — Includes a daemon (*grub-btrfsd*) that can be enabled via *systemctl* to look for new snapshots and automatically include them in the [GRUB](https://wiki.archlinux.org/title/GRUB) menu. For that daemon to work you need to install the optional dependency [inotify-tools](https://archlinux.org/packages/?name=inotify-tools). *\[last commit: 7 months ago\]*
* [**snap-pac-grub**](https://github.com/maximbaz/snap-pac-grub) — Additionally updates [GRUB](https://wiki.archlinux.org/title/GRUB) entries for [grub-btrfs](https://archlinux.org/packages/?name=grub-btrfs) after [snap-pac](https://archlinux.org/packages/?name=snap-pac) has made the snapshots. Also uses [pacman hooks](https://wiki.archlinux.org/title/Pacman_hooks). *\[last commit: 4 months ago\]*
* [**refind-btrfs**](https://github.com/Venom1991/refind-btrfs) — Adds entries to [rEFInd](https://wiki.archlinux.org/title/REFInd) after [snap-pac](https://archlinux.org/packages/?name=snap-pac) has made the snapshots. *\[last commit: 11 months ago\]*
* [**snp**](https://gist.github.com/erikw/5229436) — Wraps any shell command in a snapper pre/post snapshot (e.g. `snp pacman -Syu`), with better output than the native `--command` option of snapper (see [#Pre/post snapshots](https://wiki.archlinux.org/title/Snapper#Pre/post_snapshots)). *\[last update: 2 years ago\]*
* [**limine-snapper-sync**](https://gitlab.com/Zesko/limine-snapper-sync) — Adds snapper entries to the [Limine](https://wiki.archlinux.org/title/Limine) [boot loader](https://wiki.archlinux.org/title/Boot_loader) after [snap-pac](https://archlinux.org/packages/?name=snap-pac) has made the snapshots. *\[last commit: today\]*
**Debian/Ubuntu**
* [**snap-apt**](https://github.com/pavinjosdev/snap-apt) **-** Based on [snap-pac](https://github.com/wesbarnett/snap-pac) by Wes Barnett and [apt-btrfs-snapper](https://github.com/xhess/apt-btrfs-snapper) by xhess. *\[last commit: last year\]*
* [**apt-btrfs-snapper**](https://github.com/xhess/apt-btrfs-snapper) **-** This tool hooks into the apt install/remove/update process and creates two snapshots: one before and one after the packages have been installed/removed. *\[last commit: 10 years ago\]*
* [**apt-snapper**](https://github.com/credativ/apt-snapper) **-** A wrapper script that takes snapper snapshots from package manager hooks and annotates the snapshot with the package list differences. In theory it should work not only with apt/dpkg but also with rpm, pacman and pkg. *\[last commit: 9 years ago\]*
**Suse**
* [Snapper handles this by default](https://documentation.suse.com/en-us/sles/15-SP7/html/SLES-all/cha-snapper.html#snapper-snapshot-type); not sure how they handle grub integration (too lazy to look rn 😅).
https://redd.it/1muugqj
@r_linux
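To give a rough idea of what the wrappers listed above have to do, here is a minimal sketch of a pre/post wrapper in the spirit of snp (not one of the listed packages; it assumes an existing snapper root config and does no error handling, and the "wrapped:" description text is just a placeholder):
#!/bin/sh
# take a "pre" snapshot and remember its number
pre=$(snapper create --type pre --print-number --description "wrapped: $*")
# run whatever command was passed in, e.g. pacman -Syu or apt full-upgrade
"$@"
# take the matching "post" snapshot, paired with the "pre" one
snapper create --type post --pre-number "$pre" --description "wrapped: $*"
Tools like snap-pac do essentially the same thing, just triggered automatically from pacman hooks instead of being invoked by hand.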