r_bash – Telegram
# Data processing commands requiring initialized environment
echo "Processing data..."
```

```link :go_to_menu
file: menu.md
```

In this example, the `data_processing` block relies on `initialize_environment`. When selecting `data_processing`, MDE first executes `initialize_environment` to ensure proper setup before proceeding. The `link` block type enables navigation to `menu.md`, offering a structured and interconnected document system. These attributes make MDE an effective tool for managing complex script sequences and various applications. The automated execution feature via command-line arguments further enhances MDE's role in batch processing and workflow automation.

https://redd.it/18r1ygi
@r_bash
How can I create a new file and run it afterwards without having to chmod it every time?

I've read I can add `umask 011` to my zshrc, but I'm still getting the permission issue even though I'm root.
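For what it's worth, umask can only *remove* permission bits: regular files are created with mode 0666 minus the umask, so no umask setting will ever produce an executable file, root or not. One hedged workaround is a tiny helper that creates, chmods, and opens the file in one go (the `mkscript` name is just illustrative):

```bash
# sketch: create a new script with a shebang, mark it executable,
# and open it in $EDITOR (the name "mkscript" is hypothetical)
mkscript() {
    printf '#!/usr/bin/env bash\n\n' > "$1"
    chmod +x "$1"
    "${EDITOR:-vi}" "$1"
}
```

After `mkscript foo.sh`, running `./foo.sh` works without a separate chmod step.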

https://redd.it/18ux7jz
@r_bash
Identifying and then moving folders with only zero-length files?

I have been trying to use Mac Automator with bash shell scripts to assist with some time-consuming file management. One part of the workflow has got me stuck. I've tried asking ChatGPT, which has proposed some code, but it either doesn't work at all, or works with errors. Grateful for any advice.

This is my situation:

* I have output_directory that is full of folders. These folders contain files and sometimes also have multiple layers of sub-folders with files within.
* As part of a space-saving and duplication-prevention workflow, some files have been truncated to 0kb while preserving filename and location. This is done using the Mac terminal command: `find . -type f -exec truncate -s 0 {} \;`
* I want to identify folders that are comprised ONLY of 0kb files (and any associated sub-folders) and move these entire folders to empty_folders_directory. File structure within the moved folder should be maintained.
* If a folder has a mix of 0kb files and non-0kb files, the folder should remain in output_directory.

I'm not quite sure why the code(s) I've tried haven't worked e.g.:

```bash
# Check if the folder contains only empty files
if [ -z "$(find "$output_directory/$(basename "$folder")" -mindepth 1 -type f -exec test -s {} \;)" ]; then
    # Move the folder to the !empty_folders directory
    mv "$output_directory/$(basename "$folder")" "$empty_folders_directory"
    echo "Moved $folder to $empty_folders_directory"
fi
```

e.g.

```bash
# Check if there are non-zero-length files within the folder (including sub-folders)
non_empty_files=$(find "$output_directory/$(basename "$folder")" -type f -size +0c)
if [ -z "$non_empty_files" ]; then
    mv "$output_directory/$(basename "$folder")" "$zero_kb_directory"
    echo "Moved $folder to $zero_kb_directory"
fi
```

And this example, where ChatGPT implemented this step as a function within an earlier part of the workflow that sorts folders depending on the file extensions present in their contents:

```bash
#!/bin/bash

input_directory="/Users/rj/autotest/testinputdirectory"
output_directory="/Users/rj/autotest/testoutputdirectory"
empty_folders_directory="/Users/rj/autotest/testoutputdirectory/!empty_folders"

# Function to check if a folder and its sub-folders contain only empty files
check_empty_folder() {
    local folder="$1"
    # Check if the folder contains only empty files
    if [ -z "$(find "$folder" -type f -exec test -s {} \;)" ]; then
        return 0 # Folder contains only empty files
    else
        return 1 # Folder contains non-empty files
    fi
}

# Loop through each folder in the input directory
for folder in "$input_directory"/*; do
    # Check if the folder contains files with ".downloading" or ".prog" extensions
    if [ -n "$(find "$folder" -type f \( -name "*.downloading" -o -name "*.prog" \))" ]; then
        echo "Skipping $folder"
    else
        # Move the whole folder to the output directory
        mv "$folder" "$output_directory"
        echo "Moved $folder to $output_directory"

        # Check if the folder and its sub-folders contain only empty files
        if check_empty_folder "$output_directory/$(basename "$folder")"; then
            # Move the folder to the !empty_folders directory
            mv "$output_directory/$(basename "$folder")" "$empty_folders_directory"
            echo "Moved $folder to $empty_folders_directory"
        fi
    fi
done
```


I've tried the above as well as lots of variants. They either haven't worked at all (i.e. empty_folders_directory has nothing in it when it should); or have worked incompletely, e.g. some folders haven't been moved, or contents of folders end up in the root of empty_folders_directory (which should never be the case).

Any pointers much appreciated.

Thanks
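A hedged observation on why the `test -s` variants may misbehave: with `-exec ... \;`, find suppresses its default `-print`, and `test -s` itself writes nothing to stdout, so that command substitution is always empty and the `-z` branch always fires. Checking for the presence of any non-empty file with `-size +0c` (as the second snippet does) sidesteps that. A minimal sketch along those lines, reusing the directory names from the post:

```bash
#!/bin/bash
# sketch: move folders whose files are all zero-length (paths taken from the post)
output_directory="/Users/rj/autotest/testoutputdirectory"
empty_folders_directory="/Users/rj/autotest/testoutputdirectory/!empty_folders"

mkdir -p "$empty_folders_directory"
for folder in "$output_directory"/*/; do
    folder="${folder%/}"
    # don't try to move the destination into itself
    [ "$folder" = "$empty_folders_directory" ] && continue
    # -size +0c matches any file with at least 1 byte; -quit stops at the first hit
    if [ -z "$(find "$folder" -type f -size +0c -print -quit)" ]; then
        mv "$folder" "$empty_folders_directory"/
        echo "Moved $folder to $empty_folders_directory"
    fi
done
```

Moving the whole folder (rather than its contents) should also keep files from landing in the root of empty_folders_directory.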

https://redd.it/18v4vbt
@r_bash
Copy file to a copied structure

I have a pictures folder on my Synology NAS, and within that are a number of albums, each with a set of photos and videos.

I occasionally want to share select photos with family by copying the files, but I want to keep the album structure so it's still viewable, i.e. pictures/nans/70thbirthday/ or pictures/parents/10thanniversary/, etc. In reality the files are nested deeper than this, so I don't want to create the folders in the target manually. As I don't want to copy the whole folder full of files, I was hoping to copy the file I want to share to a temporary folder, and then a script could check the source folders for the original location and replicate that in the family shared folder.

Source would be /pictures/nans/70thbirthday/img_20220111115326.jpg where there could be 100's of other pictures.

I would want to copy file to /temp/pictures/img_20220111115326.jpg

Then a bash script would find the location of the original file in the /pictures/ folder and recreate the folder structure under /shared/events/, i.e. /shared/events/nans/70thbirthday/

I've been banging my head with find and grep but I don't really know what I'm doing, so haven't even been able to successfully extract the folder from any output.

I had been trying to use `find '/volume2/pictures/albums/' -type d -name "*img_20220111115326.jpg*"` without success.

Should I be using find, or is there a better command to use to set the original folders to a variable?
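One hedged pointer: `-type d` restricts the search to directories, but the pattern names a file, so that find can never match; searching with `-type f` and taking `dirname` of the hit gives the album path. A sketch using the paths from the post (untested, and it assumes filenames are unique within the albums tree):

```bash
#!/bin/bash
# sketch: for each file staged in /temp/pictures, locate its original under
# the albums tree and recreate that album structure in the shared folder
src_root="/volume2/pictures/albums"
share_root="/shared/events"

for staged in /temp/pictures/*; do
    name=$(basename "$staged")
    # -type f, not -type d: we match the file, then take its directory
    original=$(find "$src_root" -type f -name "$name" -print -quit)
    if [ -n "$original" ]; then
        rel_dir=$(dirname "${original#"$src_root"/}")   # e.g. nans/70thbirthday
        mkdir -p "$share_root/$rel_dir"
        cp "$staged" "$share_root/$rel_dir/"
    else
        echo "No original found for $name" >&2
    fi
done
```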

https://redd.it/18uhcf5
@r_bash
license-generator: a bash script that will generate a license for your next open source project

Just wrote this shell script, which can generate license files. It uses GitHub's API to fetch license files and modifies them by adding the project author's name & year. Here's the GitHub link.

I hadn't written bash in a long time, so I would love to hear your thoughts on the code. Thanks in advance!
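For context, GitHub's REST API does expose license templates, so the core of such a tool can be quite small. A hedged sketch of the general idea (not the linked script itself; the MIT template uses `[year]`/`[fullname]` placeholders, and the author name here is a stand-in):

```bash
# sketch of the general idea, not the actual script: fetch a license
# template from GitHub's REST API and fill in the MIT placeholders
curl -s https://api.github.com/licenses/mit \
    | jq -r '.body' \
    | sed -e "s/\[year\]/$(date +%Y)/" -e "s/\[fullname\]/Jane Doe/" \
    > LICENSE
```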

https://redd.it/18ubien
@r_bash
Local -n vs declare -n

What's the difference between `local -n` and `declare -n` when used inside a function?

The Bash manual doesn't explain the difference when the `-n` attribute is used with both.
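One relevant detail from the manual's `declare` entry: inside a function, `declare` makes the variable local just as `local` does (unless `-g` is given), so `local -n` and `declare -n` should behave identically there. A quick check sketch:

```bash
# both namerefs are function-local; each assignment goes through
# to the caller's variable, and neither nameref survives the function
f() {
    local -n ref1=$1
    declare -n ref2=$1
    ref1="changed via local -n"
    ref2="changed via declare -n"
}
x="original"
f x
echo "$x"                                  # -> changed via declare -n
declare -p ref1 ref2 2>/dev/null \
    || echo "no namerefs leaked to the caller"
```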

https://redd.it/18to3jj
@r_bash
Generating PNG files with text and "colorful emojis"

I know how to do it with a wide choice of options, but my emojis are all black and white and flat.

I want to get them colorful like on my phone.

I have a transcript of a WA chat. I have written code that converts the entire conversation into audio. Now I want to create a video to go with it, containing the actual messages sent back and forth, including the emojis, and I really need them to be in color. For reasons!


Edit: After searching the depths of the web, I found that current versions of ImageMagick support Pango, which can do this. I will test it out over the weekend and report back.
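If it helps anyone on the same hunt, the `pango:` input coder is along these lines; treat this as an untested sketch, since it depends on ImageMagick being built with the Pango delegate and on a color-emoji font (e.g. Noto Color Emoji) being installed:

```bash
# untested sketch: render text plus color emoji via ImageMagick's pango: coder
convert -background white -size 800x \
    pango:'Hello <span font_family="Noto Color Emoji">😀🎉</span>' \
    message.png
```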

https://redd.it/18tku9y
@r_bash
forkrun: the fastest pure-bash loop parallelizer ever written -- looking for "beta testers"

[LINK TO GITHUB REPO WITH CODE](https://github.com/jkool702/forkrun/tree/forkrun-v2_RC)

***

A year ago I started working on `forkrun` - a pure bash (well, almost pure bash) function that works to parallelize loops in much the same way that `xargs -P` and `parallel` do. 1 year, nearly 400 github commits, 1 complete rewrite, and I imagine several hundred hours' worth of optimizing later, I do believe that `forkrun` (v2.0) is finally ready to be released.

Before I officially release it, I'd love it if a few people would try it out and report any bugs they encounter. I've thoroughly tested it on my Fedora 39 rig running bash 5.2.x, but other distros and older versions of bash are largely untested (NOTE: the minimum bash version capable of running this is 4.0, due to the use of coprocs).

Thanks in advance to anyone willing to test it out for me!

***

**USAGE**

There is detailed info in the github readme, but here are some brief usage instructions:

First, source `forkrun.bash` by running one of the following:

```bash
. <(curl https://raw.githubusercontent.com/jkool702/forkrun/forkrun-v2_RC/forkrun.bash)
```

or

```bash
wget https://raw.githubusercontent.com/jkool702/forkrun/forkrun-v2_RC/forkrun.bash
. ./forkrun.bash
```

or

```bash
git clone https://github.com/jkool702/forkrun.git --branch=forkrun-v2_RC
. ./forkrun/forkrun.bash
```

Then use it like you would `xargs`. The base (no flags) `forkrun` is roughly equivalent to `xargs -P $(nproc) -d $'\n'`. After sourcing it, you can display the full usage help (which describes the available flags to tweak `forkrun`'s behavior) by running

```bash
forkrun --help=all
```

***

**EXAMPLE**

To compute the `cksum` of every file under the current directory, you would run

```bash
find ./ -type f | forkrun cksum
```

***

P.S. and yes, it really is that fast. My main speed testing has been computing 11 different checksums on ~500,000 mostly small files saved on a tmpfs ramdisk, with a total combined size of ~19 GB. The speedtest code + results are [in the github repo](https://github.com/jkool702/forkrun/blob/forkrun-v2_RC/forkrun.speedtest.bash), but to summarize:

* on average it was 70% faster than `xargs -P $(nproc) -d $'\n'`, which is the fastest loop parallelizer I know of (not counting `forkrun`). For the lighter-weight checksums like `cksum` and `sum -s` it was closer to 3x faster. Note that this is the fastest implementation of `xargs` (it isn't being crippled by using `-l 1` or `-n 1`), and `xargs` itself is a compiled C binary. That's right, `forkrun` parallelizes loops faster than the fastest compiled C loop parallelizer I could find.
* on average it is ~7x as fast as `parallel -m`. For the lighter-weight checksums like `cksum` and `sum -s` it was >18x faster.
* on my hardware, `forkrun` was computing the lightweight checksums (`cksum` and `sum -s`) on all ~19 GB worth of ~500,000 files in about 1.1 seconds (outputting to `wc -l`, not printing to the terminal)

As such, I can all but guarantee this is the fastest loop parallelizer ever written in bash. See the github readme if you are curious what makes `forkrun` so fast.

Note: "fast" is referring to "wall clock time". In terms of CPU time `xargs` is a bit better (though not *that* much), but forkrun parallelizes things so well it is faster in "real" execution time.

EDIT: fixed formatting issue.

https://redd.it/18sfjtz
@r_bash
Script to relink broken alias files with new path?

Hi, is there any way to get this done? I have no experience with scripting, and together with ChatGPT I failed massively to get this done in Terminal on OSX (10.14.6).

From a stupid user perspective:

1. Open a Finder GUI to choose a folder (including subfolders) to check for damaged/unlinked alias files.
2. Check those files for unlinked aliases.
3. Open a Finder GUI to choose the folder (incl. subfolders) which contains the new destination of the original files.
4. Do the work: restore all alias links with the new original paths (only for those files which had a damaged alias, of course).

I cannot find software that claims to do this on OSX. I am desperately in need of this function because I work with software which relies on aliases for its internal file management system, and after I did some major reorganisation of my hard drives, I'm left with hundreds of unlinked aliases..... :(

https://redd.it/18w0sog
@r_bash
Pipe output to a file with auto incremented name?

I like doing `> temp-file.txt` for the output of some commands that I might need later.

This has progressed to `> ../tmp/2024-01-01-001.txt`, but writing the timestamps and index numbers gets tedious.

Is there a utility or script that would let me do just `> keep` or something similar? Seems like a common use case, but after a couple of google searches I didn't find anything.
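A plain redirection can't do this (`>` needs a concrete filename), but a small function you pipe into can pick the name itself. A sketch, with the `keep` name from the post and the `../tmp` path assumed:

```bash
# sketch: use "some_command | keep" instead of "some_command > file";
# keep writes stdin to the next free ../tmp/YYYY-MM-DD-NNN.txt
keep() {
    local dir=../tmp stamp file
    local -i n=1
    mkdir -p "$dir"
    stamp=$(date +%F)                      # e.g. 2024-01-01
    while file="$dir/$stamp-$(printf '%03d' "$n").txt"; [[ -e $file ]]; do
        ((n++))
    done
    cat > "$file"
    echo "saved to $file" >&2
}
```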

https://redd.it/18w3vt5
@r_bash
Trouble formatting output of PS command

I'm trying to get a formatted list of the 5 most CPU-intensive processes using the `ps` command. This works, but I'm not sure how to align all values to the left.

I run the command like so:

```bash
ps --no-headers -Ao comm:21, -o pid:6, -o pcpu:6 --sort=-pcpu | head -5
```

which produces the following output:

```
Isolated Web Co 97231 7.9
firefox 32302 5.5
Isolated Web Co 175732 3.7
Hyprland 689 2.1
RDD Process 45174 1.9
```

Now I'd like to align the second column to the left, but I'm not sure how to do this. Piping it into `column -t` messes up the layout because of the spaces in the first column's values:

```
Isolated Web Co 97231 8.1
Isolated Web Co 175732 5.7
firefox 32302 5.4
Hyprland 689 2.1
RDD Process 45174 2.0
```

I'm probably missing something obvious. Can anyone point me in the right direction? It would be much appreciated!
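One hedged suggestion: since `comm` is the only field that can contain spaces, putting it last keeps the numeric columns fixed-width and easy to post-process; an `awk` pass can then restore the original column order, left-aligned:

```bash
# sketch: emit the space-containing field (comm) last, then let awk
# rebuild "comm pid pcpu" with left-aligned, fixed-width columns
ps --no-headers -Ao pid:6 -o pcpu:6 -o comm --sort=-pcpu | head -5 \
    | awk '{pid=$1; cpu=$2; $1=$2=""; sub(/^ +/, "");
            printf "%-21s %-6s %-6s\n", $0, pid, cpu}'
```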

https://redd.it/18xwvec
@r_bash
Monitor filesystem events using inotify-tools

# inotify-tools
This is a basic guide to using inotify-tools.

```bash
apt-get install inotify-tools
```

## Initial Command
This is the basic command of inotify-tools.

* `inotifywait` is a part of inotify-tools.
* `-m` monitor for events continuously (don't exit after the first event).
* `-e create` watch for file creation events specifically.
* `/path/to/directory` The directory to monitor.
```bash
inotifywait -m -e create /path/to/directory
```

When a new file is created, inotifywait will print a line like:
```
CREATE /path/to/directory/new_file.txt
```
Capture this output in a script or command to perform actions on the new file.

## Using while loop
```bash
inotifywait -m -e create /path/to/directory | while read -r line; do
    # Extract the filename (third field of "<dir> <event> <filename>")
    filename=$(echo "$line" | cut -d' ' -f3)
    # Do something with the new file
    echo "New file created: $filename"
done
```

## Additional options
* `-r` Monitor recursively for changes in subdirectories as well.
* `--format %f` Print only the filename in the output.
* `--timefmt %Y%m%d%H%M%S` Specify a custom timestamp format.
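These options combine naturally; for example (a sketch), watching a tree recursively and printing a timestamped full path per created file:

```bash
# %T needs --timefmt; %w is the watched directory, %f the filename
inotifywait -m -r -e create \
    --timefmt '%Y%m%d%H%M%S' --format '%T %w%f' \
    /path/to/directory
```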

https://redd.it/1ad1pgp
@r_bash
Create bash scripts 100x faster using this library

# bash-sdk 🔥

A bash library to create standalone scripts.

https://ourcodebase.gitlab.io/bashsdk-docs/

## Features

Some features of bash-sdk are mentioned here.

* OOP-like code 💎.
* Module-based code 🗂️.
* Functions similar to Python's 🐍.
* Standalone script creation 📔.

## Beauty 🏵️

Check out the UI of this CLI project here.

## General 🍷

There are some rules and things to keep in mind while using this library; the rules are mentioned here.

## Installation 🌀

Just clone it anywhere.

```bash
git clone --depth=1 https://github.com/OurCodeBase/bash-sdk.git
```

## Modules 📚

These are the modules in the bash-sdk library. You can read about their functions by clicking on them.

* [ask.sh](/docs/ask)
* cursor.sh
* [db](/docs/db)
* file.sh
* [inspect.sh](/docs/inspect)
* os.sh
* [package.sh](/docs/package)
* repo.sh
* [say.sh](/docs/say)
* screen.sh
* [spinner.sh](/docs/spinner)
* string.sh
* [url.sh](/docs/url)

## Structure 🗃️

File structure of bash-sdk is like:

```
bash-sdk
├── docs          # docs for bash-sdk.
├── _uri.sh       # helper of builder.
├── builder.sh
└── src
    ├── ask.sh
    ├── cursor.sh
    ├── db.sh
    ├── file.sh
    ├── inspect.sh
    ├── os.sh
    ├── package.sh
    ├── repo.sh
    ├── say.sh
    ├── screen.sh
    ├── spinner.sh
    ├── string.sh
    └── url.sh
```

## Compiler 🧭

The compiler combines all the code into a standalone bash file.

```bash
builder.sh -i "path/to/input.sh" -o "path/to/output.sh";
```

The input file is the file that you are going to compile; the output file is the standalone result.

Then you can directly execute the output file, without the bash-sdk library.

## Queries 📢

If you have any questions or doubts related to this library, you can ask them directly [here](https://github.com/OurCodeBase/bash-sdk/issues).

## Suggestion 👌

* bash-lsp in your code editor, for auto-completions.
* [shellcheck](https://github.com/koalaman/shellcheck), to debug bash code.
* cooked.nvim code editor, for the best compatibility.

## Author 🦋

Created By [@OurCodeBase](https://github.com/OurCodeBase)
Inspired By @mayTermux

https://redd.it/1acumxu
@r_bash
BEE-GENTLE-1ST-BASH-SCRIPT

So, I am looking to make my life a bit simpler. I use nmap and some pentesting labs. The IP of the target always changes, and instead of remembering the IP of the target, it would be nice to just use TARGET in the commands that I pass.

I run this file with sudo, and it gives me the ability to append a new target name and IP to the /etc/hosts file,
e.g. target1 10.10.10.101

I tried it and it worked, so I added a command where I need some guidance: pointers on how I can add the delete option. I googled and saw the sed command, but I'm not sure how to incorporate it.

My expectation is to cat the /etc/hosts file and see what's there, then add or delete entries as needed before a new pentest box is worked on.

filename: addtarget.sh

```bash
#!/bin/sh
echo "What is the TARGET # please"
read TARGET
echo "Enter IP address please"
read IP
# new lines added for the delete option:
echo "Enter TARGET # to delete from /etc/hosts"
read DEL
sed -i.bak '/target$DEL\'./d' /etc/hosts # will delete lines containing "target."

echo "Adding $TARGET and its associated $IP address for you"
echo "$TARGET $IP" >> /etc/hosts
```

++++++++++++++++++++++
Thank you in advance to this community for any support you can provide.

https://redd.it/1acytdx
@r_bash
Utility that Scans A Bash Script And Lists All Required Commands

#### I'm looking for a utility that scans a bash script and lists all required commands. Some reqs and specs, v00.03.

Do you know of such a beast?

I have not been able to frame a valid web query, other than ones that generate terabytes of cruft.

Barring that, I could use some help with specs (a rough starting sketch follows the lists below).

**Shortcuts?**

* I can safely ignore or disallow command names and functions with embedded spaces.
* Would running the bash '-x' option provide a better basis for the scan, or is the source better?
* Or, what?
* I have a few scripts that write other scripts (templates, with an intervening editing session). I suppose that if I have a utility that can scan a "non-recursive" script, I could use the utility recursively?

**"Specs"**

* I only want the list to include external commands; if Bash internals are included, I would prefer that they be listed separately.
* There are quite a few Bash internals that have external equivalents, e.g., 'echo' and 'test'. I need to be able to distinguish between the two like-named commands. Is a format like '/bin/test', for example, sufficient to distinguish between the two?
* Ignore first <words> followed by '()'
* Ignore everything to the right of a '#' not quoted or escaped.
This gets pretty complicated on multiline quotes with embedded quotes, i.e.:
  - >"...'...'..."<
  - >"...'...'..."<
  - >"...\"...<
  - etc.
* First <word> on a line.
* First <word> following a ';'.
* First <word> following a '|'.
* First <word> following a '$(', or '`'.
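As a rough first pass (far short of the quoting and comment handling specced above), bash's own `type` builtin can do the classification once candidate words are extracted; a sketch:

```bash
# very rough sketch: take the first word of each line of a script,
# dedupe, and classify with type; none of the quoting/comment/
# continuation handling from the specs above is attempted here
awk '/^[[:space:]]*[[:alnum:]_.\/-]/ {print $1}' script.sh \
    | sort -u \
    | while read -r w; do
        case $(type -t "$w") in
            builtin) printf 'builtin:  %s\n' "$w" ;;
            file)    printf 'external: %s\n' "$(type -P "$w")" ;;
        esac
    done
```

`type -P` prints the full path (e.g. /bin/test), which also addresses the builtin-vs-external disambiguation question; keywords like `if` and `for` report as `keyword` and fall through the case.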

https://redd.it/1acparm
@r_bash
bash noscript developer

Hi, I am a researcher and I have to write some bash scripts for my project, but I am quite new to this. Could you please help me (as a consultant or paid bash script writer)?

https://redd.it/1acdtxz
@r_bash
Iterating over ls output is fragile. Use globs.

My editor gave me this warning; can y'all help me understand why?

The warning again is:

`Iterating over ls output is fragile. Use globs.`

Here is my lil noscript:

#!/usr/bin/env bash
for ZIPFILE in $(ls *.zip)
do
echo "Unzipping $ZIPFILE..."
unzip -o "$ZIPFILE"
done

What should I use instead of `$(ls *.zip)` and why?
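Briefly: `$(ls *.zip)` takes ls's output and word-splits it, so a name like `my files.zip` becomes two loop items (and each resulting word is glob-expanded again); the glob by itself hands the loop intact filenames. A sketch of the fix:

```bash
#!/usr/bin/env bash
shopt -s nullglob      # with no matches, the loop simply runs zero times
for ZIPFILE in *.zip; do
    echo "Unzipping $ZIPFILE..."
    unzip -o "$ZIPFILE"
done
```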

https://redd.it/1abus2n
@r_bash
A Bash script whose purpose is to source the latest release version number of a GitHub repository

To use it, set the variable `url` (or name it whatever):

```bash
# repo example 1
url=https://github.com/rust-lang/rust.git
# repo example 2
url=https://github.com/llvm/llvm-project.git
```

And run this command in your bash script:

```bash
curl -sH "Content-Type: text/plain" "https://raw.githubusercontent.com/slyfox1186/script-repo/main/Bash/Misc/source-git-version.sh" | bash -s "$url"
```

These examples should return the following version numbers for llvm and rust respectively...

17.0.6

1.75.0

It works for all the repos I have tested so far but I'm sure one will throw an error.

If a repo doesn't work let me know and I'll see if I can fix it.
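Not necessarily how the linked script works, but for comparison, `git ls-remote` alone can get close without cloning; a hedged sketch:

```bash
# sketch: list remote tags, extract trailing version numbers
# (handles tags like "1.75.0" and "llvmorg-17.0.6"), version-sort
git ls-remote --tags --refs "$url" \
    | sed 's|.*refs/tags/||' \
    | grep -oE '[0-9]+(\.[0-9]+)+$' \
    | sort -V \
    | tail -n1
```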

https://redd.it/1abhv75
@r_bash
finding files and [sub]directories with exclusions

So, I recently discovered, while using `find` and trying to exclude a particular directory via something like

find "${base_dir}" ! -path "${exclude_dir}" ! -wholename "${exclude_dir}/*"

that `find` still scans the excluded directory and then removes its contents from the output. I.e., it doesn't "skip" this directory; it scans it like all the rest and then removes any results that match the `! -path` or `! -wholename` rule.

This can be a bit annoying (and make the `find` run *really* slow) if the directory you are excluding is, for example:

* the mount point of a huge mounted zfs raidz2 pool storing some 40 TB of data
* the mount point of a 5 TB USB-attached HDD on an embedded system that can only read it at a maximum of ~20 MB/s

Having run into both of these in the last few days, I wrote up a little function to exclude directories from find without having to scan through them. It's decently robust, but I'm sure some edge cases will give it trouble, and I feel there is probably a tool that does this more robustly, so if anyone knows what it is, by all means let me know.

The function below works by figuring out a minimal file/directory list that is searched (with `find`) that covers everything under the base search dir except the excluded stuff. For example: if you wanted to list everything under `/a/b` except for `/a/b/c`, `/a/b/d/e`, `/a/b/d/f`, and any subdirectories under those three exclusions, this list of "files and directories to search" would include:

* everything immediately under `/a/b` except for `/a/b/c` and `/a/b/d` --AND--
* everything immediately under `/a/b/d`, except for `/a/b/d/e` and `/a/b/d/f`

This function constructs this list by breaking apart excluded directories into "nesting levels" relative to the base search dir, and then running `find -mindepth 1 -maxdepth 1 ! -path ...` in a loop on each unique dir from each nesting level.

***

OK, here's the code:

```bash
efind () {
    ## find files/directories under a base search directory with certain files and directories(+sub-directories) excluded
    #
    # IMPORTANT NOTE:
    # excluded directories are not queried at all, making it fast in cases where the excluded directory contains A LOT of data
    # (unlike `find "$base_dir" ! -path "$exclude_dir/*"`, which traverses the excluded directory and then drops the results)
    #
    # USAGE:
    # 1st input is the base search directory that you are searching for things under
    # All remaining inputs are excluded files and/or directories
    #
    # dependencies: `realpath` and `find`

    local -a eLevels edir efile A B F;
    local bdir a b nn;
    local -i kk;

    shopt -s extglob;

    # get base search directory
    bdir="${1%\/}";
    shift 1;
    [[ "${bdir}" == \/* ]] || bdir="$(realpath "${bdir}")";

    # parse additional inputs. Split valid ones into separate lists of files / directories to exclude
    for nn in "${@%/}"; do

        # get real paths. If path is relative (doesn't start with / or ./ or ~/) assume it is relative to the base search directory (NOT PWD)
        case "${nn:0:1}" in
            [\~\.\/])
                nn="$(realpath "$nn")";
                ;;
            *)
                nn="$(realpath "${bdir}/${nn}")";
                ;;
        esac

        # ensure path is under base search directory
        [[ "$nn" == "${bdir}"\/* ]] || {
            printf 'WARNING: "%s" not under base search dir ("%s")\n ignoring "%s"\n\n' "$nn" "${bdir}" "$nn";
            continue;
        }

        # split into files list or directories list
        if [[ -f "$nn" ]]; then
            efile+=("$nn");
        elif [[ -d "$nn" ]]; then
            edir+=("$nn");
        else
            printf 'WARNING: "%s" not found as file or dir.\n Could be a "lack of permissions" issue?\n ignoring "%s"\n\n' "$nn" "$nn";
        fi;

    done;
```