r_bash – Telegram
bash script developer

Hi, I am a researcher and I have to write some bash scripts for my project, but I am too new to this. Could you please help me (as a consultant or paid bash script writer)?

https://redd.it/1acdtxz
@r_bash
Iterating over ls output is fragile. Use globs.

My editor gave me this warning, can y'all help me understand why?

The warning again is:

`Iterating over ls output is fragile. Use globs.`

Here is my lil script:

#!/usr/bin/env bash
for ZIPFILE in $(ls *.zip)
do
    echo "Unzipping $ZIPFILE..."
    unzip -o "$ZIPFILE"
done

What should I use instead of `$(ls *.zip)` and why?
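
For reference, the glob-based version the warning is nudging toward looks like this (a sketch; nullglob makes the loop body simply not run when nothing matches, instead of iterating over the literal pattern):

#!/usr/bin/env bash
shopt -s nullglob
for zipfile in *.zip
do
    echo "Unzipping $zipfile..."
    unzip -o "$zipfile"
done

The shell expands the glob itself, so filenames containing spaces or newlines arrive as single, intact list elements, whereas $(ls *.zip) word-splits the listing and breaks such names apart.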

https://redd.it/1abus2n
@r_bash
A Bash script whose purpose is to fetch the latest release version number of a GitHub repository

To use it, set the variable url (or name it whatever you like)

repo example 1:
url=https://github.com/rust-lang/rust.git

repo example 2:
url=https://github.com/llvm/llvm-project.git

Then run this command in your bash script:

curl -sH "Content-Type: text/plain" "https://raw.githubusercontent.com/slyfox1186/script-repo/main/Bash/Misc/source-git-version.sh" | bash -s "$url"

These examples should return the following version numbers for llvm and rust respectively...

17.0.6

1.75.0

It works for all the repos I have tested so far but I'm sure one will throw an error.

If a repo doesn't work let me know and I'll see if I can fix it.
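
If you'd rather not pipe a remote script into bash, the same idea can be sketched inline with git ls-remote (an assumption-laden sketch: it only works for repos whose release tags end in plain dotted version numbers, like rust's 1.75.0 or llvm's llvmorg-17.0.6):

#!/usr/bin/env bash
url=${1:?usage: $0 repo-url}
git ls-remote --tags --refs "$url" |
    sed 's@.*/@@' |                      # strip the refs/tags/ prefix
    grep -Eo '[0-9]+(\.[0-9]+){1,2}$' |  # keep the trailing version-like part
    sort -V | tail -n 1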

https://redd.it/1abhv75
@r_bash
finding files and [sub]directories with exclusions

So, I recently discovered while using `find` and trying to exclude a particular directory via something like

find "${base_dir}" ! -path "${exclude_dir}" ! -wholename "${exclude_dir}/*"

that `find` still scans the excluded directory and then removes matches from the output. I.e., it doesn't "skip" the directory; it scans it like all the rest and then filters out any results that match the `! -path` or `! -wholename` rule.

This can be a bit annoying (and make the `find` run *really* slow) if the directory you are excluding is, for example:

* the mount point of a huge mounted ZFS raidz2 pool storing some 40 TB of data
* the mount point of a 5 TB USB-attached HDD on an embedded system that can only read it at a maximum of ~20 MB/s

Having run into both of these in the last few days, I wrote up a little function to exclude directories from find without having to scan through them. It's decently robust, but I'm sure some edge cases will give it trouble, and I feel there is probably a tool that does this more robustly, so if anyone knows what it is, by all means let me know.
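
(For comparison, GNU find's -prune action is the standard way to stop find from descending into a directory at all; a minimal sketch:)

find "${base_dir}" -path "${exclude_dir}" -prune -o -print

Here -prune keeps find from ever entering the matched directory, rather than filtering its contents out afterwards.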

The function below works by figuring out a minimal file/directory list to search (with `find`) that covers everything under the base search dir except the excluded stuff. For example: if you wanted to list everything under `/a/b` except for `/a/b/c`, `/a/b/d/e`, `/a/b/d/f`, and any subdirectories under those three exclusions, this list of "files and directories to search" would include:

* everything immediately under `/a/b` except for `/a/b/c` and `/a/b/d` --AND--
* everything immediately under `/a/b/d`, except for `/a/b/d/e` and `/a/b/d/f`

This function constructs the list by breaking the excluded directories apart into "nesting levels" relative to the base search dir, and then running `find -mindepth 1 -maxdepth 1 ! -path ...` in a loop over each unique dir from each nesting level.

***

OK, here's the code:

efind () {
    ## find files/directories under a base search directory, with certain files and directories (+ their sub-directories) excluded
    #
    # IMPORTANT NOTE:
    # excluded directories are not queried at all, making it fast in cases where the excluded directory contains A LOT of data
    # (unlike `find "$base_dir" ! -path "$exclude_dir/*"`, which traverses the excluded directory and then drops the results)
    #
    # USAGE:
    # 1st input is the base search directory that you are searching for things under
    # All remaining inputs are excluded files and/or directories
    #
    # dependencies: `realpath` and `find`

    local -a eLevels edir efile A B F;
    local bdir a b nn;
    local -i kk;

    shopt -s extglob;

    # get base search directory
    bdir="${1%\/}";
    shift 1;
    [[ "${bdir}" == \/* ]] || bdir="$(realpath "${bdir}")";

    # parse additional inputs. Split valid ones into separate lists of files / directories to exclude
    for nn in "${@%/}"; do

        # get real paths. If a path is relative (doesn't start with / or ./ or ~/), assume it is relative to the base search directory (NOT PWD)
        case "${nn:0:1}" in
            [\~\.\/])
                nn="$(realpath "$nn")";
                ;;
            *)
                nn="$(realpath "${bdir}/${nn}")";
                ;;
        esac

        # ensure path is under base search directory
        [[ "$nn" == "${bdir}"\/* ]] || {
            printf 'WARNING: "%s" not under base search dir ("%s")\n ignoring "%s"\n\n' "$nn" "${bdir}" "$nn";
            continue;
        }

        # split into files list or directories list
        if [[ -f "$nn" ]]; then
            efile+=("$nn");
        elif [[ -d "$nn" ]]; then
            edir+=("$nn");
        else
            printf 'WARNING: "%s" not found as file or dir.\n Could be a "lack of permissions" issue?\n ignoring "%s"\n\n' "$nn" "$nn";
        fi;

    done;

    # split directories up into nesting levels relative to the base search directory
    # (if the base search directory is '/a/b', then: level 0 is '/a/b', level 1 is '/a/b/_', level 2 is '/a/b/_/_', level 3 is '/a/b/_/_/_', etc.)
    eLevels[0]="${bdir}";
    for nn in "${edir[@]%%+([\/\*])}"; do
        b="${nn#"${bdir%/}/"}/";
        a="${bdir%/}/";
        kk=1;
        until [[ -z $b ]] || [[ "$a" == "$nn" ]]; do
            a+="${b%%/*}/";
            b="${b#*/}";
            eLevels[$kk]+="${a%\/}"$'\n';
            ((kk++));
        done;
    done;

    # construct a minimal list of files/directories to search that doesn't contain the excluded directories, and save it in array F
    # EXAMPLE:
    # if the base search directory is '/a/b' and you want to exclude '/a/b/c', '/a/b/d/e', and '/a/b/d/f', this includes:
    # everything immediately under '/a/b' except for '/a/b/c' and '/a/b/d' --AND--
    # everything immediately under '/a/b/d', except for '/a/b/d/e' and '/a/b/d/f'
    mapfile -t F < <(for ((kk=1; kk<${#eLevels[@]}; kk++ )); do
        mapfile -t A < <(printf '%s' "${eLevels[$(( $kk - 1 ))]}" | sort -u)
        A=("${A[@]}");
        for nn in "${A[@]}"; do
            mapfile -t B < <(printf '%s' "${eLevels[$kk]}" | grep -F "${nn}" | sort -u)
            B=("${B[@]}");

            [[ -n "$(printf '%s' "${B[@]//[ \t]/}")" ]] && source /proc/self/fd/0 <<< "find \"${nn}\" -maxdepth 1 -mindepth 1 $(printf '! -path "%s" ' "${B[@]}"; printf '! -wholename "%s/*" ' "${B[@]}")";
        done;
    done);

    # run `find -O3` on the dir list saved in array F, with the excluded files (from the command line) now excluded too
    source /proc/self/fd/0 <<<"find -O3 \"\${F[@]}\" $(printf '! -path "%s" ' "${efile[@]}")";
}

https://redd.it/1abct5q
@r_bash
like grep -A4 but instead of 4 lines go until matched pattern?

Does grep have a way, instead of specifying a fixed count like grep -A4, to print lines until it encounters a regex match like ^---? This looks like what the manpage calls context line control, but that section doesn't offer a way to make it variable; the number must be fixed. Does grep have another mechanism that could be used? Right now I've got a Python script that does this, but I'm very curious about a bash one-liner. My matches can need anything from grep -A2 to grep -A7 up to something like -A33, and there's no way to know the number without counting the lines until the ^--- is encountered. Is grep capable of doing this on its own, or do I need another tool?
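
(For reference, the classic answer to "print from a match until the next delimiter" lives outside grep, in a sed or awk address range; a sketch, with PATTERN standing in for the real regex:)

sed -n '/PATTERN/,/^---/p' file

Each printed block starts at a line matching PATTERN and ends at the next line matching ^---, however far apart they are.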

https://redd.it/1advlq7
@r_bash
Tool for fast tables in Bash + request for design opinions

I created a table tool for high-performance data access in Bash, to support a user-interface tool I'm finishing up (I'm calling it pwb, for Pager with Benefits):

Array Table Extension (ate)

The tool is a Bash builtin written in C, and as such it works in the script's process space and can access elements of the script in which it is called, including script functions, which are sometimes called from the tool.

I would love to find a forum to discuss several design ideas I implemented, but the ate feature about which I am soliciting opinions, and which I am also considering for the pwb project, is how I create a "handle" with which one accesses the ate features. I think it's pretty developer-friendly, but if I'm mistaken, I'd like to avoid making the same mistake on the new project.

I'll appreciate any comments or insights.

https://redd.it/1adz42r
@r_bash
Readline parsing in command completion

Can someone help me with command completion?

Or perhaps this is more about the readline library, but still.

BASH_VERSION="4.4.20(1)-release"

I use this simple function to test COMP_* variables during command completion:

complete -F compvars compvars

compvars() { echo >&2; declare -p ${!COMP_*} >&2; return 1; }

And I use arguments like 'name=' and 'name=value' in my noscripts.

For example this works as I expect and COMP_WORDS is easy to use:

$ compvars a='' b<TAB>

declare -- COMP_CWORD="3"
declare -- COMP_KEY="9"
declare -- COMP_LINE="compvars a='' b"
declare -- COMP_POINT="15"
declare -- COMP_TYPE="33"
declare -- COMP_WORDBREAKS="
\"'><=;|&(:"
declare -a COMP_WORDS=([0]="compvars" [1]="a" [2]="=''" [3]="b")
^C

But when I use apostrophes (to allow spaces in arguments) things get complicated:

$ compvars a='x' b<TAB>

declare -- COMP_CWORD="3"
declare -- COMP_KEY="9"
declare -- COMP_LINE="compvars a='x' b"
declare -- COMP_POINT="16"
declare -- COMP_TYPE="33"
declare -- COMP_WORDBREAKS="
\"'><=;|&(:"
declare -a COMP_WORDS=([0]="compvars" [1]="a" [2]="='" [3]="x' b")
^C

Common sense says I should get "b" as a separate element of COMP_WORDS in the last completion too. Why doesn't the final apostrophe in COMP_WORDS[3] also act as a word break and further split COMP_WORDS[3] into [3]="x'" and [4]="b"?

Is there some solution out there to reassemble COMP_WORDS to overcome cases like this?
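
(For what it's worth, the bash-completion package ships a helper that re-splits COMP_LINE while treating chosen word-break characters as ordinary ones, which is the usual workaround for = and : splitting; quote handling is hairier and may still need manual reassembly. A sketch, assuming bash-completion is sourced:)

compvars() {
    local cur words cword
    # -n lists word-break characters to NOT split on when re-splitting the line
    _get_comp_words_by_ref -n "='" cur words cword
    declare -p cur words cword >&2
    return 1
}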

https://redd.it/1ae4rp4
@r_bash
Simple bash script help

Hi,

I am hoping I can get some assistance with a simple bash script that I will run from cron.

If any file in one particular directory is older than 1 minute, then execute something.

I cannot get find to do that as simply as I'd hoped.

Any thoughts?
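
(For reference, find expresses age in minutes with the -mmin test; a minimal sketch, with /some/dir and the echo as placeholders:)

# -mmin +1: modified more than one minute ago
if find /some/dir -type f -mmin +1 | grep -q .; then
    echo "stale file found"   # replace with the real action
fi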

Thank you !

https://redd.it/1aeawaj
@r_bash
bash-workout, bash based timed workouts

Today I wrote a small utility for myself. I've been working on leveling up my bash skills recently.


This tool takes a CSV of exercises and times (in seconds) and runs through the workout, letting you time each exercise precisely. It has built-in rest periods. If you're on macOS it also says the exercises out loud; on Linux you can easily swap in something like espeak via the `$SPEACH_COMMAND` variable.
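
(The core idea is small enough to sketch; workout.csv, the two-field layout, and the echo fallback here are assumptions, not the tool's actual internals:)

#!/usr/bin/env bash
SPEACH_COMMAND=${SPEACH_COMMAND:-echo}    # e.g. say on macOS, espeak on Linux
# each CSV line: exercise name, duration in seconds
while IFS=, read -r exercise seconds; do
    "$SPEACH_COMMAND" "$exercise"
    sleep "$seconds"
done < workout.csv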


I'm probably the only person who'll ever use this, but on the off chance you can either learn something from it or use it, here it is: https://github.com/cogwizzle/bash-workout/tree/main

https://redd.it/1aeeikq
@r_bash
Weird Loop Behavior? No -negs allowed?

Hi all. I'm trying to generate an array of integers from -5 to 5.

for ((i = -5; i < 11; i++)); do
    newoffsets+=("$i")
done

echo "Checking final array:"
for all in "${new
offsets@}"; do
echo " $all"
done

But the output extends to positive 11 instead. Even Bard is confused.

My guess is that negatives don't truly work in a c-style loop.

Finally, since I couldn't use negative number variables in the C-style loop as expected, I just added some new variables, did each calculation in the loop body, and incremented a separate counter. It seems best to use the C-style loop in an absolute-value manner, rather than using its $i counter directly, when negatives are needed.

Thus, the solution:

declare -i viewportsize=11
declare -i viewradius=$(((viewportsize - 1) / 2))
declare -i lowerbound=$((viewradius * -1))
unset newoffsets

for ((i = 0; i < viewportsize; i++)); do
    # bash can't employ negative c-loops; manual method:
    newoffsets+=("$lowerbound")
    ((lowerbound++))
done
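
(For the record, negative bounds are fine in bash's C-style loop; the likelier culprit in the original is the upper bound, since starting at -5 with i < 11 yields -5 through 10. A minimal sketch:)

newoffsets=()
for ((i = -5; i <= 5; i++)); do    # -5..5 inclusive; negatives work as expected
    newoffsets+=("$i")
done
echo "${newoffsets[@]}"            # -5 -4 -3 -2 -1 0 1 2 3 4 5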

https://redd.it/1aepy2i
@r_bash
Maintain list of env variables for both shell and systemd

I have a bunch of applications autostarted as systemd user services, and I would like them to inherit environment variables defined in the shell config (.zprofile, because these rarely change). I don't want to maintain two identical lists of variables (one for the login-shell environment and one for systemctl import-environment <same list of these variables>).

I thought about using systemd's ~/.config/environment.d and then having my shell export everything on that list, but there are caveats mentioned here (I won't pretend I fully understand them all), hence why I'm leaning toward the initial approach (the shell config is also more flexible, allowing variables to be set conditionally based on more complicated logic). Parsing the output of env is also not reliable, as it contains variables I didn't explicitly set that may not be appropriate for importing into systemd.

What is a good way to go about this? I suppose the shell config could be parsed for the variables, but that seems pretty hacky. An associative array for the env variables, then using its keys as the variable names for systemctl import-environment? Any help/examples are much appreciated.
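
(The associative-array route can be sketched in bash; the variable names and values below are placeholders, and since the post mentions .zprofile, note that zsh's associative-array syntax differs:)

declare -A MY_ENV=(
    [EDITOR]=nvim
    [PAGER]=less
)
for name in "${!MY_ENV[@]}"; do
    export "$name=${MY_ENV[$name]}"
done
# import the same names from this environment into the systemd user manager
systemctl --user import-environment "${!MY_ENV[@]}"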

https://redd.it/1afawp3
@r_bash
Are there terminal apps like Fig but for windows?

I am looking for a nice-looking command line like Fig, with features like autocomplete. Are there Windows alternatives?

https://redd.it/1affapj
@r_bash
What is the best way to run a server-polling script for a few days only, every 1 minute? Will it cause any problems for the server?

host=www.google.com
port=443
while true; do
    current_time=$(date +%H:%M:%S)
    r=$(bash -c 'exec 3<> /dev/tcp/'$host'/'$port'; echo $?' 2>/dev/null)
    if [ "$r" = "0" ]; then
        echo "[$current_time] $host $port is open" >> tori.txt
    else
        echo "[$current_time] $host $port is closed" >> tori.txt
    fi
    sleep 60
done


This is the script. It runs every minute. I was planning to use setsid ./script.sh, but I didn't find a way to exit that process easily, so I am not doing that. Should I run it as a cronjob? (It doesn't make much sense to run this as a cronjob, though, as it's already sleeping for 60 seconds. Anything you can think of?)
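
(One way to keep a detached poller easy to stop is to match it by name later; a sketch, assuming the script file is named script.sh:)

# start detached from the terminal
setsid ./script.sh >/dev/null 2>&1 &

# later, stop it by matching its command line
pkill -f script.sh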

https://redd.it/1afjbg7
@r_bash
Running a command inside another command in a one liner?

I'm not too familiar with bash, so I might not be using the correct terms. What I'm trying to do is make a one-liner that makes a PUT request to a page with its body being the output of a command.

I'm trying to make this

date -Iseconds | head -c -7

go in the "value" of this command

curl -X PUT -H "Content-Type: application/json" -d '{"UTC":"value"}' address


and the idea is I'll run this with crontab every minute or so to update the time on a "smart" appliance (Philips Hue bridge)
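
(The mechanism for this is command substitution; a sketch, keeping "address" as the placeholder from above:)

curl -X PUT -H "Content-Type: application/json" \
    -d '{"UTC":"'"$(date -Iseconds | head -c -7)"'"}' address

The single quotes protect the JSON braces, while the $( ... ) part sits in double quotes so the shell expands it before curl runs.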

https://redd.it/1afmquq
@r_bash
Need to create a loop that runs mkdir -p on the openssl rand -hex 2 output.

Hi guys.
I need to create a loop that runs mkdir -p on the output of openssl rand -hex 2.
I tried this:

#!/bin/bash
var=$(openssl rand -hex 2)
for line in $var
do
    mkdir "${line}"
done

But it iterates once and then stops!
I need to cover every possible combination that openssl rand -hex 2 can give.

Short version: I need to create hex directories from 0000 to ffff.
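
(For reference: openssl rand -hex 2 prints a single random value per call, so the for loop only ever has one word to iterate over. If the goal is every combination from 0000 to ffff, counting and formatting is more direct; a sketch:)

#!/bin/bash
for ((i = 0; i < 65536; i++)); do
    mkdir -p "$(printf '%04x' "$i")"   # 0000, 0001, ... ffff
done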

Thanks

https://redd.it/1afrne9
@r_bash
why is this script running every second (even multiple times per second)?

host=www.google.com
port=443
while true; do
    current_time=$(date +%H:%M:%S)
    r=$(bash -c 'exec 3<> /dev/tcp/'$host'/'$port'; echo $?' 2>/dev/null)
    if [ "$r" = "0" ]; then
        echo "[$current_time] $host $port is open" >> log_file.txt
    else
        echo "[$current_time] $host $port is closed" >> log_file.txt
    fi
done

I put it into a cronjob set to every 1 hour or whatever, but it always runs every second, and sometimes multiple times per second.

https://redd.it/1ag2sg9
@r_bash
Is it possible to get the exit code of mv in "mv $folder $target &"

Is it possible to get the exit code of the mv command on the second-to-last line without messing up the progress bar function?

#!/usr/bin/env bash

# Shell Colors
Red='\e[0;31m'      # ${Red}
Yellow='\e[0;33m'   # ${Yellow}
Cyan='\e[0;36m'     # ${Cyan}
Error='\e[41m'      # ${Error}
Off='\e[0m'         # ${Off}

progbar(){
    # $1 is pid of process
    # $2 is string to echo
    local PROC
    local delay
    local dots
    local progress
    PROC="$1"
    delay="0.3"
    dots=""
    while [[ -d /proc/$PROC ]]; do
        dots="${dots}."
        progress="$dots"
        if [ ${#dots} -gt "10" ]; then
            dots=""
            progress=" "
        fi
        echo -ne " ${2}$progress\r"; sleep "$delay"
    done
    echo -e "$2 "
    return 0
}

action="Moving"
sourcevol="volume1"
targetvol="/volume2"
folder="@foobar"

mv -f "/${sourcevol}/$folder" "${targetvol}" &
progbar $! "mv ${action} /${sourcevol}/$folder to ${Cyan}$targetvol${Off}"
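
(For reference: as long as the job isn't disowned, bash remembers a background child's exit status, so wait can collect it even after progbar has watched the process finish; a sketch reworking the last two lines above:)

mv -f "/${sourcevol}/$folder" "${targetvol}" &
mvpid=$!
progbar "$mvpid" "mv ${action} /${sourcevol}/$folder to ${Cyan}$targetvol${Off}"
wait "$mvpid"
exitcode=$?   # mv's exit code, reported by wait after the fact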

https://redd.it/1ag3qiz
@r_bash
Running commands written in file

I am not familiar with advanced bash commands at all. I know the decent way to ask for help would be to have at least a half-baked solution, but I have no idea how to approach this. I would like to easily run pre-hook commands before testing in each project, and for that I would like to create a bashrc (zshrc) alias that searches the /test.config file for the commands in it and runs them in sequence.

The format in the file is

{pre_hooks, [
{ct, "command1"},
{ct, "command2"}
]}.

The number and content of the commands differ for each project, and the file contains a lot of things besides this.

I have tried searching but could not find a relevant result. I will of course check what any given solution does; I want to learn this stuff. Thanks for the help!
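
(A sketch of one approach, assuming GNU grep and one {ct, "..."} tuple per line as in the example above; the file path is kept from the post:)

# pull each quoted command out of the {ct, "..."} tuples and run them in order
grep -oP '\{ct,\s*"\K[^"]+' /test.config | while IFS= read -r cmd; do
    echo "running: $cmd"
    bash -c "$cmd" || break   # stop at the first failure
done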

https://redd.it/1aga2vn
@r_bash
Variable not global

I have the following code in my script and I can't figure out why pkgs_with_links (not pkgs_with_link, which is local) is not accessible globally:

print_release_notes() {
    mapfile -t pkgs < <(comm -12 <( sort "$conf" | cut -d' ' -f 1) <( awk '{ sub("^#.*| #.*", "") } !NF { next } { print $1 }' "$cache" | sort))

    if ((${#pkgs[@]})); then

        local url

        printf "\n%s\n" "# Release notes:"
        for package in "${pkgs[@]}"; do
            while read -r line; do
                pkgs_with_link="${line%% *}"
                if [[ "$package" == "$pkgs_with_link" ]]; then
                    url="${line##* }"
                    printf "%s\n" "# $(tput setaf 1)$pkgs_with_link$(tput sgr0): $url"
                    pkgs_with_links+=("$url")
                    break
                fi
            done < "$conf"
        done

        printf "%s" "all my links:" "${pkgs_with_links[@]}"
    fi
}

A quick Google search shows that piping involves a subshell and that variables defined inside one will not be accessible globally. But the while loop does not involve any pipes.

Any ideas, and what's the recommended way to make it accessible globally? Also, is there any point in using declare to initialize a variable? And would it be a good idea to initialize all variables intended to be global at the beginning of the script, so that when maintaining the script later it's easier to see all the globals and not accidentally introduce a new variable with the same name as one of them?
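
(Two separate bash facts may help here, sketched below: assignments inside a function are global by default unless marked local, but if the function runs in a subshell, via a pipeline, $( ... ), or a process substitution, its assignments die with the subshell. declare -g makes the global intent explicit:)

f() { declare -ga links; links+=("hello"); }
f
echo "${links[@]}"   # prints: hello

g() { links2="world"; }
out=$(g)             # g runs in a subshell here
echo "$links2"       # prints nothing: the assignment died with the subshell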

https://redd.it/1aglq82
@r_bash