Guide to Customizing Your Prompt With Starship
I've recently switched from Oh-My-Zsh and Powerlevel10k to Starship for my shell prompt. While those are excellent tools, my config eventually felt a bit bloated. Oh-My-Zsh offers a "batteries included" approach with lots of features out of the box, but Starship's minimalist and lightweight nature made it easier for me to configure and maintain. Also, it's cross-platform and cross-shell, which is a nice bonus.
I recently made a video about my WezTerm and Starship config, but I kinda brushed over the Starship part. Since some people asked for a deeper dive, I made another video focusing on that.
Hope you find it helpful and if you're also using Starship, I'd love to see your configs! :)
https://www.youtube.com/watch?v=v2S18Xf2PRo
https://preview.redd.it/ynug9m9r5hgd1.png?width=2560&format=png&auto=webp&s=f99ddfcc4933f97a59d81a2ad4efa57333f0e820
https://redd.it/1ej7mq9
@r_bash
YouTube
Ultimate Starship Shell Prompt Setup From Scratch
In this video, we're diving deep into my Starship shell prompt config from scratch, as some of you requested. Learn how to transform your terminal with Starship and turn a basic prompt into a colorful, informative display. We'll explore the benefits of using…
My first actually useful bash script
So this isn't my first script; I tend to do a lot of simple tasks with scripts, but I never actually took the time to turn them into a useful project.
I've created a backup utility that keeps my configuration folders backed up on one of my homelab servers.
The main script is called from cron jobs, with the relevant section name passed in from the cron file.
#!/bin/bash
# backup-and-sync.sh
CFG_FILE=/etc/config.ini
GREEN="\033[0;32m"
YELLOW="\033[1;33m"
NC="\033[0m"
WORK_DIR="/usr/local/bin"
LOCK_FILE="/tmp/$1.lock"
SECTION=$1
# Set the working directory
cd "$WORK_DIR" || exit
# Function to log to Docker logs
log() {
local timeStamp=$(date "+%Y-%m-%d %H:%M:%S")
echo -e "${GREEN}${timeStamp}${NC} - $@" | tee -a /proc/1/fd/1
}
# Function to log errors to Docker logs with timestamp
log_error() {
local timeStamp=$(date "+%Y-%m-%d %H:%M:%S")
while read -r line; do
echo -e "${YELLOW}${timeStamp}${NC} - ERROR - $line" | tee -a /proc/1/fd/1
done
}
# Function to read the configuration file
read_config() {
local section=$1
eval "$(awk -F "=" -v section="$section" '
BEGIN { in_section=0; exclusions="" }
/^\[/{ in_section=0 }
$0 ~ "\\["section"\\]" { in_section=1; next }
in_section && !/^#/ && $1 {
gsub(/^ +| +$/, "", $1)
gsub(/^ +| +$/, "", $2)
if ($1 == "exclude") {
exclusions = exclusions "--exclude=" $2 " "
} else {
print $1 "=\"" $2 "\""
}
}
END { print "exclusions=\"" exclusions "\"" }
' $CFG_FILE)"
}
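# Illustration (not part of the original script): for the [Configs] section of the
# sample config.ini shown later, the eval above would roughly set
#   server="192.168.1.208" share="Backups" user="backup" password="password"
#   source="/src/configs" compress="0" schedule="30 1-23/2 * * *"
#   subfolderName="configs" exclusions=""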
# Function to mount the CIFS share
mount_cifs() {
local mountPoint=$1
local server=$2
local share=$3
local user=$4
local password=$5
mkdir -p "$mountPoint" 2> >(log_error)
mount -t cifs -o username="$user",password="$password",vers=3.0 //"$server"/"$share" "$mountPoint" 2> >(log_error)
}
# Function to unmount the CIFS share
unmount_cifs() {
local mountPoint=$1
umount "$mountPoint" 2> >(log_error)
}
# Function to check if the CIFS share is mounted
is_mounted() {
local mountPoint=$1
mountpoint -q "$mountPoint"
}
# Function to handle backup and sync
handle_backup_sync() {
local section=$1
local sourceDir=$2
local mountPoint=$3
local subfolderName=$4
local exclusions=$5
local compress=$6
local keep_days=$7
local server=$8
local share=$9
if [ "$compress" -eq 1 ]; then
# Create a timestamp for the backup filename
timeStamp=$(date +%d-%m-%Y-%H.%M)
mkdir -p "${mountPoint}/${subfolderName}"
backupFile="${mountPoint}/${subfolderName}/${section}-${timeStamp}.tar.gz"
#log "tar -czvf $backupFile -C $sourceDir $exclusions . 2> >(log_error)"
log "Creating archive of ${sourceDir}"
tar -czvf "$backupFile" -C "$sourceDir" $exclusions . 2> >(log_error)
log "//${server}/${share}/${subfolderName}/${section}-${timeStamp}.tar.gz was successfuly created."
else
rsync_cmd=(rsync -av --inplace --delete $exclusions "$sourceDir/" "$mountPoint/${subfolderName}/")
#log "${rsync_cmd[@]}"
log "Creating a backup of ${sourceDir}"
"${rsync_cmd[@]}" 2> >(log_error)
log "Successful backup located in //${server}/${share}/${subfolderName}."
fi
# Delete compressed backups older than specified days
find "$mountPoint/$subfolderName" -type f -name "${section}-*.tar.gz" -mtime +${keep_days} -exec rm {} \; 2> >(log_error)
}
# Check if the script is run as superuser
if [[ $EUID -ne 0 ]]; then
log_error <<< "This script must be run as root"
exit 1
fi
# Main script functions
if [[ -n "$SECTION" ]]; then
log "Running backup for section: $SECTION"
(
flock -n 200 || {
log "Another noscript is already running. Exiting."
exit 1
}
read_config "$SECTION"
# Set default values for missing fields
: ${server:=""}
: ${share:=""}
: ${user:=""}
: ${password:=""}
: ${source:=""}
: ${compress:=0}
: ${exclusions:=""}
: ${keep:=3}
: ${subfolderName:=$SECTION} # Will implement in a future release
MOUNT_POINT="/mnt/$SECTION"
if [[ -z "$server" || -z "$share" || -z "$user" || -z "$password" || -z "$source" ]]; then
log "Skipping section $SECTION due to missing required fields."
exit 1
fi
log "Processing section: $SECTION"
mount_cifs "$MOUNT_POINT" "$server" "$share" "$user" "$password"
if is_mounted "$MOUNT_POINT"; then
log "CIFS share is mounted for section: $SECTION"
handle_backup_sync "$SECTION" "$source" "$MOUNT_POINT" "$subfolderName" "$exclusions" "$compress" "$keep" "$server" "$share"
unmount_cifs "$MOUNT_POINT"
log "Backup and sync finished for section: $SECTION"
else
log "Failed to mount CIFS share for section: $SECTION"
fi
) 200>"$LOCK_FILE"
else
log "No section specified. Exiting."
exit 1
fi
This reads in from the config.ini file.
# Sample backups configuration
[Configs]
server=192.168.1.208
share=Backups
user=backup
password=password
source=/src/configs
compress=0
schedule=30 1-23/2 * * *
subfolderName=configs
[ZIP-Configs]
server=192.168.1.208
share=Backups
user=backup
password=password
source=/src/configs
subfolderName=zips
compress=1
keep=3
exclude=homeassistant
exclude=cifs
exclude=*.sock
schedule=0 0 * * *
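For reference, with this sample configuration the sync_cron function in entry.sh below would install crontab entries along these lines (assuming the backup script is installed as /usr/local/bin/backup.sh, the path entry.sh writes into the crontab):
30 1-23/2 * * * /usr/local/bin/backup.sh Configs
0 0 * * * /usr/local/bin/backup.sh ZIP-Configs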
The scripts run in a Docker container, which uses the other script to set up the environment and cron jobs and to check the mount points on container startup.
#!/bin/bash
# entry.sh
CFG_FILE=/etc/config.ini
GREEN="\033[0;32m"
YELLOW="\033[1;33m"
NC="\033[0m"
error_file=$(mktemp)
WORK_DIR="/usr/local/bin"
# Function to log to Docker logs
log() {
local TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")
echo -e "${GREEN}${TIMESTAMP}${NC} - $@"
}
# Function to log errors to Docker logs with timestamp
log_error() {
local TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")
while read -r line; do
echo -e "${YELLOW}${TIMESTAMP}${NC} - ERROR - $line" | tee -a /proc/1/fd/1
done
}
# Function to synchronise the timezone
set_tz() {
if [ -n "$TZ" ] && [ -f "/usr/share/zoneinfo/$TZ" ]; then
echo $TZ > /etc/timezone
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
log "Setting timezone to ${TZ}"
else
log_error <<< "Invalid or unset TZ variable: $TZ"
fi
}
# Function to read the configuration file
read_config() {
local section=$1
eval "$(awk -F "=" -v section="$section" '
BEGIN { in_section=0; exclusions="" }
/^\[/{ in_section=0 }
$0 ~ "\\["section"\\]" { in_section=1; next }
in_section && !/^#/ && $1 {
gsub(/^ +| +$/, "", $1)
gsub(/^ +| +$/, "", $2)
if ($1 == "exclude") {
exclusions = exclusions "--exclude=" $2 " "
} else {
if ($1 == "schedule") {
# Escape double quotes and backslashes
gsub(/"/, "\\\"", $2)
}
print $1 "=\"" $2 "\""
}
}
END { print "exclusions=\"" exclusions "\"" }
' $CFG_FILE)"
}
# Function to check the mountpoint
check_mount() {
local mount_point=$1
if ! mountpoint -q "$mount_point"; then
log_error <<< "CIFS share is not mounted at $mount_point"
exit 1
fi
}
mount_cifs() {
local mount_point=$1
local user=$2
local password=$3
local server=$4
local share=$5
mkdir -p "$mount_point" 2> >(log_error)
mount -t cifs -o username="$user",password="$password",vers=3.0 //"$server"/"$share" "$mount_point" 2> >(log_error)
}
# Create or clear the crontab file
sync_cron() {
crontab -l > mycron 2> "$error_file"
if [ -s "$error_file" ]; then
log_error <<< "$(cat "$error_file")"
rm "$error_file"
: > mycron
else
rm "$error_file"
fi
# Loop through each section and add the cron job
for section in $(awk -F '[][]' '/\[[^]]+\]/{print $2}' $CFG_FILE); do
read_config "$section"
if [[ -n "$schedule" ]]; then
echo "$schedule /usr/local/bin/backup.sh $section" >> mycron
fi
done
}
# Set the working directory
cd "$WORK_DIR" || exit
# Set the timezone as defined by Environmental variable
set_tz
# Install the new crontab file
sync_cron
crontab mycron 2> >(log_error)
rm mycron 2> >(log_error)
# Ensure cron log file exists
touch /var/log/cron.log 2> >(log_error)
# Start cron
log "Starting cron service..."
cron 2> >(log_error) && log "Cron started successfully"
# Check if cron is running
if ! pgrep cron > /dev/null; then
log "Cron is not running."
exit 1
else
log "Cron is running."
fi
# Check if the CIFS shares are mountable
log "Checking all shares are mountable"
for section in $(awk -F '[][]' '/\[[^]]+\]/{print $2}' $CFG_FILE); do
read_config "$section"
MOUNT_POINT="/mnt/$section"
mount_cifs "$MOUNT_POINT" "$user" "$password" "$server" "$share"
check_mount "$MOUNT_POINT"
log "$section: //$server/$share succesfully mounted at $MOUNT_POINT... Unmounting"
umount "$MOUNT_POINT" 2> >(log_error)
done
log "All shares mounted successfuly. Starting cifs-backup"
log "cifs-backup now running"
# Print a message indicating we are about to tail the log
log "Tailing the cron log to keep the container running"
tail -f /var/log/cron.log
I'm sure there might be better ways of achieving the same thing, but the satisfaction of knowing that I've done it myself can't be beaten.
Let me know what you think, or anything that I could have done better.
https://redd.it/1ejcdz4
@r_bash
Reddit
From the bash community on Reddit
Explore this post and more from the bash community
How can I center the output of this Bash command?
#!/bin/bash
#Stole it from https://www.putorius.net/how-to-make-countdown-timer-in-bash.html
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[0;33m'
RESET='\033[0m'
#------------------------
read -p "H:" hour
read -p "M:" min
read -p "S:" sec
#-----------------------
tput civis
#-----------------------
if [ -z "$hour" ] ; then
hour=0
fi
if [ -z "$min" ] ; then
min=0
fi
if [ -z "$sec" ] ; then
sec=0
fi
#----------------------
echo -ne "${GREEN}"
while [ $hour -ge 0 ] ; do
while [ $min -ge 0 ] ; do
while [ $sec -ge 0 ] ; do
if [ "$hour" -eq "0" ] && [ "$min" -eq "0" ] ; then
echo -ne "${YELLOW}"
fi
if [ "$hour" -eq "0" ] && [ "$min" -eq "0" ] && [ "$sec" -le "10" ] ; then
echo -ne "${RED}"
fi
echo -ne "$(printf "%02d" $hour):$(printf "%02d" $min):$(printf "%02d" $sec)\033[0K\r"
let "sec=sec-1"
sleep 1
done
sec=59
let "min=min-1"
done
min=59
let "hour=hour-1"
done
echo -e "${RESET}"
https://redd.it/1ejuy7c
@r_bash
Putorius
How To Make a Countdown Timer in Bash - Putorius
Learn how to make a nice countdown timer in bash to use in your scripts using while loops.
Help creating a custom fuzzy search command script.
https://preview.redd.it/6g4rr1jdengd1.png?width=1293&format=png&auto=webp&s=2f0008099046d63222afb9f7a1330e7be5a18f57
I want to interactively query nix pkgs using the nix-search command provided by `nix-search-cli`.
I'm not really experienced with CLI tools; any ideas to make this work?
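A minimal sketch of one way to wire this up, assuming the nix-search binary from nix-search-cli accepts the query as a positional argument and that fzf is installed (the script name, prompt text and behaviour are made up):
#!/usr/bin/env bash
# Hedged sketch: pipe nix-search results into fzf for interactive filtering.
query="${1:-}"
nix-search "$query" | fzf --ansi --prompt='nix pkgs> '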
https://redd.it/1ejv53s
@r_bash
Parameter expansion inserts "./" into copied string
I'm trying to loop through the results of screen -ls to look for sessions relevant to what I'm doing and add them to an array. The problem is that I need to use parameter expansion to do it, since screen sessions have an indeterminate-length number in front of them, and that adds ./ to the result. Here's the code I have so far:
SERVERS=()
for word in $(screen -list) ; do
if [ $word == *".servers_minecraft_"* && $word != *".servers_minecraft_playit" ] ;
then
SERVERS+=${word#".servers_minecraft_"}
fi
done
echo ${SERVER[*]}
where echo ${SERVER[*]} outputs ./MyTargetString instead of MyTargetString. I already tried using parameter expansion to chop off ./, but of course that just reinserts it anyway.
https://redd.it/1ekc3rf
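For comparison, a hedged sketch (not the OP's code) that collects the session names line by line, which keeps globbing and word splitting out of the picture; the session-name patterns are taken from the post:
# Hedged sketch: parse `screen -ls` line by line instead of word-splitting its output.
SERVERS=()
while read -r pidname _; do
    name=${pidname#*.}                        # drop the leading "PID." part
    case $name in
        servers_minecraft_playit) ;;          # skip the playit session
        servers_minecraft_*) SERVERS+=("${name#servers_minecraft_}") ;;
    esac
done < <(screen -ls)
printf '%s\n' "${SERVERS[@]}"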
@r_bash
curl: (3) URL using bad/illegal format or missing URL error using two parameters
Hello,
I am getting the error above when trying to use the curl command with -b and -j for the cookies. When typing in just -b or -c it works perfectly; however, not when applying both parameters. Do you happen to know why?
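For reference, a hedged example of how the cookie options are usually combined (the file name and URL are placeholders): -b reads cookies from a file or string, -c writes the cookie jar back out, and -j/--junk-session-cookies only has an effect alongside -b.
# Hedged example with placeholder names: read the jar, drop session cookies, write the jar back.
curl -b cookies.txt -j -c cookies.txt https://example.com/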
https://redd.it/1ekv0am
@r_bash
remote execute screen command doesn't work from script, but works manually
I'm working on the thing I got set up with help in [this thread](https://www.reddit.com/r/bash/comments/1egamw2/comment/lfwxi38/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button). I've now got a new Terminal window with each of my screens in a different tab!
The problem is that now, when I try to do my remote execution outside the first loop, it doesn't work. I thought maybe it had to do with being part of a different command, but pasting that `echo hello` command into Terminal and replacing the variable name manually works fine.
gnome-terminal -- /bin/bash -c '
gnome-terminal --noscript="playit.gg" --tab -- screen -r servers_minecraft_playit
for SERVER in "$@" ; do
gnome-terminal --noscript="$SERVER" --tab -- screen -r servers_minecraft_$SERVER
done
' _ "${SERVERS[@]}"
for SERVER in "${SERVERS[@]}"
do
echo servers_minecraft_$SERVER
screen -S servers_minecraft_$SERVER -p 0 -X stuff "echo hello\n"
done;;
Is there anything I can do to fix it? The output of `echo servers_minecraft_$SERVER` matches the name of the screen session, so I don't think it could be a substitution issue.
https://redd.it/1el57w2
@r_bash
Pulling Variables from a Json File
I'm looking for a snippet of script that will let me pull variables from a JSON file and pass them into the bash script. I mostly use PowerShell, so this is a bit like writing left-handed for me so far; same concept with a different execution.
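A minimal sketch using jq, assuming a file such as vars.json with top-level keys like {"Token": "123", "Region": "eu-west-1"} (the file and key names are placeholders):
# Hedged sketch: read individual JSON keys into bash variables with jq.
token=$(jq -r '.Token' vars.json)
region=$(jq -r '.Region' vars.json)
echo "Token=$token Region=$region"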
https://redd.it/1eljlfv
@r_bash
Better autocomplete (like fish)
If I use the fish shell, I get a nice autocomplete. For example, git switch TABTAB looks like this:
❯ git switch tg/avoid-warning-event-that-getting-cloud-init-output-failed
tg/installimage-async (Local Branch)
main (Local Branch)
tg/disable-bm-e2e-1716772 (Local Branch)
tg/rename-e2e-cluster-to-e2e (Local Branch)
tg/avoid-warning-event-that-getting-cloud-init-output-failed (Local Branch)
tg/fix-lychee-show-unknown-http-status-codes (Local Branch)
tg/fix-bm-e2e-1716772 (Local Branch)
tg/fix-lychee (Local Branch)
Somehow it is sorted in a really usable way. The latest branches are at the top.
With Bash I get only a long list which looks like sorted by alphabet. This is hard to read if there are many branches.
Is there a way to get such a nice autocomplete in Bash?
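A hedged sketch of the closest built-in approximation: these readline settings (run from ~/.bashrc, or put the equivalent lines in ~/.inputrc) make TAB cycle through candidates inline instead of dumping an alphabetical list, though they will not reorder branches by recency the way fish does.
# Hedged sketch: fish-like menu completion via readline settings.
bind 'set show-all-if-ambiguous on'
bind 'set menu-complete-display-prefix on'
bind 'TAB: menu-complete'
bind '"\e[Z": menu-complete-backward'   # Shift-TAB cycles backwards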
https://redd.it/1eljld2
@r_bash
Anyone know a good way to get overall system CPU usage from the commandline?
Ideally something like
cat /proc/loadavg
but that shows 1 second, 5 seconds and 15 seconds (instead of 1, 5 and 15 minutes).
It's going to be used in a script, not for interactive use, so things like top won't work. I'd prefer it be pure bash, but that may not be possible... I'm not sure. So far the closest I know of is ps [-A] -o %cpu. Anyone know of a better way?
https://redd.it/1elp6qg
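A pure-bash sketch (hedged; the field order follows the first "cpu" line of /proc/stat): sample the aggregate counters twice and report how busy the CPU was over that interval.
# Hedged sketch: overall CPU usage over a 1-second sample, using only bash and /proc/stat.
read -r _ u1 n1 s1 i1 w1 q1 sq1 st1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 q2 sq2 st2 _ < /proc/stat
idle=$(( (i2 + w2) - (i1 + w1) ))
total=$(( (u2 + n2 + s2 + i2 + w2 + q2 + sq2 + st2) - (u1 + n1 + s1 + i1 + w1 + q1 + sq1 + st1) ))
echo "CPU usage: $(( 100 * (total - idle) / total ))%"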
@r_bash
Write a script to check existing users and print only users with home directories
Is this correct, and how would I know which users have home directories?
#!/bin/bash
IFS=$'\n'
for user in $(cat /etc/passwd); do
if $(echo "$user" | cut -d':' -f6 | cut -d'/' -f2) = "home" ; then
echo "$user" | cut -d':' -f1
fi
done
IFS=$' \t\n'
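As a point of comparison, a hedged one-liner that does the same check without a loop, using getent and awk (field 6 of passwd is the home directory):
# Hedged alternative: print users whose home directory is under /home.
getent passwd | awk -F: '$6 ~ /^\/home\// { print $1 }'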
https://redd.it/1em2pl1
@r_bash
Correct way to use a function with this "write to error log" command?
Bash newbie so kindly bear with me!
Let us say I want to print output to an error log file and the console. I assume I have 2 options
Option 1: Include error logging inside the function
copy_to_s3() {
local INPUT_FILE_NAME=$1
local BUCKET_NAME=$2
if aws s3 cp "${INPUT_FILE_NAME}" "s3://${BUCKET_NAME}" >error.log 2>&1; then
echo "Successfully copied the input file ${INPUT_FILE} to s3://${BUCKET_NAME}"
else
error=$(cat "error.log")
# EMAIL this error to the admin
echo "Something went wrong when copying the input file ${INPUT_FILE} to s3://${BUCKET_NAME}"
exit 1
fi
rm -rf "${INPUT_FILE_NAME}"
}
copy_to_s3 "test.tar.gz" "test-s3-bucket"
Option 2: Include error logging when calling the function
copy_to_s3() {
local INPUT_FILE_NAME=$1
local BUCKET_NAME=$2
if aws s3 cp "${INPUT_FILE_NAME}" "s3://${BUCKET_NAME}"; then
echo "Successfully copied the input file ${INPUT_FILE} to s3://${BUCKET_NAME}"
else
echo "Something went wrong when copying the input file ${INPUT_FILE} to s3://${BUCKET_NAME}"
exit 1
fi
rm -rf "${INPUT_FILE_NAME}"
}
copy_to_s3 "test.tar.gz" "test-s3-bucket" >error.log 2>&1
2 questions
- Which of these methods is recommended?
- If I put this file inside a crontab like this, will it still log errors?
Crontab
crontab -u ec2-user - <<EOF
PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
0 0,4,8,12,16,20 * * * /home/ec2-user/test.sh
EOF
https://redd.it/1em67xm
@r_bash
Need help, will award anyone that solves this
I will send (PP pref) $10 to anyone who can provide me with a script that converts a free-format text file to an Excel comma-delimited file.
Each record in the file has the following characteristics: each record starts with "Kundnr" (customer number), which could be blank. I need the complete line, including the leading company name, as the first column of the new file.
Next field is the "Vårt Arb.nummer: XXXXX" which is the internal order number.
Third field is the date (YYYYMMDD) in the line "är utprintad: (date printed)"
End of each record is the text "inkl. moms" (including tax)
So to recapitulate, each line should contain
CUSTOMER NAME/NUMBER,ORDERNO,DATE
Is anyone up to the challenge? :) I can provide a sample file with 60-ish records if needed. The actual file contains 27000 records.
HÖGANÄS SWEDEN AB Kundnr: 1701 263 83 HÖGANÄS Kopia Märke: 1003558217 Best.ref.: Li Löfgren Fridh AO 0006808556 Lev.vecka: 2415 Vårt Arb.nummer: 29000 Vit ArbetsOrder är utprintad. 20240411 Datum Sign Tid Kod 1 pcs Foldable fence BU29 ritn 10185510 240311 JR 4.75 1 240312 JR 5.00 1 240319 LL 2.25 240320 NR 4.50 1 240411 MM %-988.00 1 240411 NR 2.50 1 240411 NR 0.50 11 240411 FO 6.00 1 240411 FO 0.50 1 OBS!!! Timmar skall ej debiteras. 203.25 timmar a' 670.00 kr. Kod: 1 Ö-tillägg 0.50 timmar a' 221.00 kr. Kod: 11 Arbetat 203.25 timmar till en summa av136,288.00:- Lovad lev.: 8/4 Övertid Fakturabel. Fakturadat. Fakturanr. 110.50 187,078.50 Sign___ Onsdagen 7/8-24 10:32 233,848.13 kronor inkl. moms.
https://redd.it/1emd34o
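A hedged awk sketch of the extraction (untested against the real file, and assuming each labelled field appears on a line of its own as described; input.txt and output.csv are placeholders):
awk '
    /Kundnr/            { customer = $0 }     # keep the whole line, incl. the company name
    /Vårt Arb\.nummer:/ { for (i = 1; i <= NF; i++) if ($i == "Arb.nummer:") orderno = $(i + 1) }
    /är utprintad/      { for (i = 1; i <= NF; i++) if ($i ~ /^[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]$/) date = $i }
    /inkl\. moms/       { printf "\"%s\",%s,%s\n", customer, orderno, date; customer = orderno = date = "" }
' input.txt > output.csv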
@r_bash
bash declare builtin behaving odd
Can someone explain this behaviour (run from this shell: env -i bash --norc)?
~$ A=1 declare -px
declare -x OLDPWD
declare -x PWD="/home/me"
declare -x SHLVL="1"
versus
~$ A=1 declare -p A
declare -x A="1"
Tested in bash version 5.2.26.
I thought I could always trust declare, but now I'm not so sure anymore. Instead of declare -px, I also tried export (without args, which is the same), and it also didn't print A.
https://redd.it/1emeo6f
@r_bash
Bash escape string query
I am trying to run a script. Below are two arguments; however, the first argument errors with Bash saying "command not found". I am assuming this is because I need to pass a string to the array index and escape the speech marks.
module.aa"\"BAC\"".aws
Because there are " " in this command, I am wondering if this will make Bash say the command is not found and thus how to escape the argument?
https://redd.it/1emibh6
@r_bash
Pulling variable from json
# Pull .json info into script and set the variable
Token= ($jq -r '.[] | .Token' SenToken.json)
echo $Token
My goal is to pull the token from a json file but my mac vm is going crazy so I can't really test it. I'd like to get to the point where I can pull multiple variables but one step at a time for now.
The Json is simple and only has the one data point "Token": "123"
Thank you guys for the help on my last post btw, it was really helpful for making heads and tails of bash
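For reference, a hedged sketch of the corrected assignment, assuming SenToken.json is the flat object described ({"Token": "123"}): command substitution is written $( ... ), and a top-level key can be addressed directly.
# Hedged sketch of the corrected line.
Token=$(jq -r '.Token' SenToken.json)
echo "$Token"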
https://redd.it/1emnbha
@r_bash
Complete noob needing help with sh script
Hey everyone - I am trying to get better with Bash and literally started a "for dummies" guide here, but for some reason, no matter what, my .sh script will not execute when run with ./
All I get is "zsh: no such file or directory". If I "ls" I can see all the folders and files, including my sh script, and if I run "bash myscript.sh" it runs normally... any ideas? I did chmod +x it as well.
Any ideas? Apologies if my description is confusing.
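A few hedged checks for the usual causes of that error (the file name myscript.sh is taken from the post): "no such file or directory" when running ./myscript.sh but not "bash myscript.sh" is typically a shebang problem, often Windows line endings.
# Hedged checks: inspect the shebang and line endings.
head -1 myscript.sh | cat -A     # should show "#!/bin/bash$", with no trailing ^M
file myscript.sh                 # "CRLF line terminators" means Windows line endings
sed -i 's/\r$//' myscript.sh     # strip carriage returns (GNU sed; on macOS use sed -i '')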
https://redd.it/1emwv9x
@r_bash
Medium
Bash Scripting For Dummies
In this article I’ll go over the very basics of bash scripting so that by the end of it you’ll have an idea what it means and how can you…
Bash Question
Hii!
On [this thread](https://www.reddit.com/r/bash/comments/1ej6sg6/question_about_bash_function/), one of the questions I asked was whether it was better or more optimal to perform certain tasks with shell builtins instead of external binaries, and the truth is that I have been presented with this example and I wanted to know your opinion and advice.
Someone already told me the following:
>Rule of thumb is, to use `grep`, `awk`, `sed` and such when you're filtering files or a stream of lines, because they will be much faster than bash. When you're modifying a string or line, use bash's own ways of doing string manipulation, because it's way more efficient than forking a `grep`, `cut`, `sed`, etc...
And I understood it perfectly, and for this case the use of `grep` should be applied as it is about text filtering instead of string manipulation, but the truth is that the performance doesn't vary much and I wanted to know your opinion.
Func1 ➡️
foo()
{
local _port=
while read -r _line
do
[[ $_line =~ ^#?\s*"Port "([0-9]{1,5})$ ]] && _port=${BASH_REMATCH[1]}
done < /etc/ssh/sshd_config
printf "%s\n" "$_port"
}
Func2 ➡️
bar()
{
local _port=$(
grep --ignore-case \
--perl-regexp \
--only-matching \
'^#?\s*Port \K\d{1,5}$' \
/etc/ssh/sshd_config
)
printf "%s\n" "$_port"
}
When I benchmark both ➡️
$ export -f -- foo bar
$ hyperfine --shell bash foo bar --warmup 3 --min-runs 5000 -i
Benchmark 1: foo
Time (mean ± σ): 0.8 ms ± 0.2 ms [User: 0.9 ms, System: 0.1 ms]
Range (min … max): 0.6 ms … 5.3 ms 5000 runs
Benchmark 2: bar
Time (mean ± σ): 0.4 ms ± 0.1 ms [User: 0.3 ms, System: 0.0 ms]
Range (min … max): 0.3 ms … 4.4 ms 5000 runs
Summary
'bar' ran
1.43 ± 0.76 times faster than 'foo'
The thing is that it doesn't seem to be much faster in this case either. I understand that for search-and-replace tasks it is much more convenient to use sed or awk instead of bash functionality, isn't it?
Or could it be done with bash and be more convenient? If that is the case, would you mind giving me an example of it so I can understand it?
Thanks in advance!!
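Since the post asks for one, here is a hedged illustration of the rule of thumb (the sample line is made up): when the value is already in a variable, parameter expansion avoids forking anything, while sed only pays off when filtering a whole stream.
# Hedged example: extract the port from a single line already held in a variable.
line="#Port 2222"
port=${line##*[!0-9]}    # strip the longest prefix ending in a non-digit
echo "$port"             # -> 2222
# The same edit via an external tool costs a fork+exec per call:
echo "$line" | sed -E 's/^#?[[:space:]]*Port ([0-9]{1,5})$/\1/'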
https://redd.it/1en8490
@r_bash
Lazy Loading Custom Bash Completion for Subcommands
Hi, anyone who is familiar with bash-completion?
Is it possible to add a custom completion for a subcommand (e.g., cmd my-custom-subcmd) using a user-specific directory like ~/.local/share/bash-completion/completions/ and have it lazy-loaded?
If not, is there a user-local equivalent to /etc/bash_completion.d/ for sourcing completion files at startup?
https://redd.it/1enbfgd
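A hedged sketch of how this is usually handled: bash-completion's lazy loading is keyed on the top-level command name, so a user-local file such as ~/.local/share/bash-completion/completions/cmd can carry the subcommand logic itself (cmd and my-custom-subcmd are the post's placeholders; the option words below are invented).
# Hedged sketch: ~/.local/share/bash-completion/completions/cmd
_cmd() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    if [[ ${COMP_WORDS[1]} == my-custom-subcmd ]]; then
        COMPREPLY=( $(compgen -W "--foo --bar" -- "$cur") )            # invented options
    else
        COMPREPLY=( $(compgen -W "my-custom-subcmd other-subcmd" -- "$cur") )
    fi
}
complete -F _cmd cmd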
@r_bash
GitHub
GitHub - scop/bash-completion: Programmable completion functions for bash
Programmable completion functions for bash. Contribute to scop/bash-completion development by creating an account on GitHub.