edit-and-execute-command ignores $VISUAL when in posix and vi mode
When bash is run in posix and vi mode, it seems edit-and-execute-command ignores $VISUAL, $EDITOR, and $FCEDIT, and instead uses vi. Is anyone able to reproduce this?

$ set -x -o posix -o vi
$ export EDITOR=vim
$
# press v when in command mode
++ fc -e vi
+++ vi /tmp/bash-fc.kBdfnM
$

But when run in emacs mode with set -o emacs, it correctly uses the program specified by the env vars. Is this a bug or expected behavior?
https://redd.it/1mfvvh2
@r_bash
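For context on the fallback the poster expected: the bash manual documents the fc builtin's editor lookup as $FCEDIT, then $EDITOR, then vi, while the trace above shows `fc -e vi` being invoked directly. The documented chain, sketched as a parameter expansion:

```shell
# fc's documented fallback chain in bash: $FCEDIT, else $EDITOR, else vi.
unset FCEDIT
EDITOR=vim
editor=${FCEDIT:-${EDITOR:-vi}}
echo "$editor"    # → vim
```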
☄️ Structured Personality AI Experiment (llama.cpp)
previous tests with “Alex” and R12D12, here’s a collection of entities with predefined behavior. These are not general-purpose chatbots, but functional narrative profiles — each with its own internal framework.
Robots:
HAL_10
Da1ta1
Vender
CC-33PPOO
Each one:
🤖 responds from within its role, not from the user’s intent
🫡 maintains tone, structure, and narrative constraints
🎭 doesn’t improvise, it interprets
🔄 doesn’t hallucinate, it follows internal logic
Designed to run fully offline — no RAG, no embeddings, no external dependencies.
Just prompt engineering, modular structure (prompt.txt, config, README, and per-profile examples).
Repository:
(http://github.com/BiblioGalactic/Robotsdelamanecer)
Can serve as a base for operational simulation, narrative training, or controlled interaction with role-based AIs under local execution.
Signed: Eto Demerzel
Motto: If something can be narrated, it can be replicated.
https://redd.it/1mgunx4
@r_bash
>🧠 I'm working on an AI personality system using llama.cpp, completely locally. All the orchestration is done in Bash, with per-profile modularity (prompt.txt, config, launcher.sh, etc.).
>
>Has anyone else worked with Bash to structure prompts or handle contextual memory? I'm using Levenshtein distance + token control, and I'd love technical feedback on the architecture of the scripts.
https://redd.it/1mh36rp
@r_bash
Cloned Revolut App
🛑 I’m facing some problems with ExpoModulesCore, some classes, and CocoaPods. I’ve been stuck for 6 hours. I’d be grateful if someone could help me out 🥲
https://redd.it/1mh90v8
@r_bash
Newbie - Need help understanding an error in my script
Hey guys, I have a basic bash script I made to check for any disconnected file shares (missing mount points) on my Proxmox VE host and automatically attempt to re-map the missing shares. This is so that if my NAS turns on after my Proxmox VE host for any reason, I won't have to log into the host manually and run "mount -a" myself.
This is literally my first bash script beyond the usual "Hello World!" (and first script of any kind outside of basic AutoHotkey scripts and some light PowerShell). At this stage, the script is working and serving its intended purpose, along with an appropriate cron job schedule to run it every 5 minutes. However, I am seeing the error "./Auto-Mount.sh: line 59: : command not found" every time the script runs and finds that a file share is missing and needs to be reconnected. If the script exits after finding that all file shares are already connected, this error is not logged. Regardless of this error, the script functions as expected.
I have identified which line (line 59: if "$any_still_false"; then) is throwing the error, but I can't for the life of me understand why. Any help you guys could offer would be awesome... Feel free to constructively critique my code or documentation as well, since it's my first go!
Side note: I'm hoping entering the code into this post with a code block is sufficient to make this as readable as possible. If there's a better way of formatting this in a reddit post, please tell me so I can edit the post.
\- - - - - - - - - -
```
#!/bin/bash
# Define the list of mount points to be checked as statements
mount1="mountpoint -q "/mnt/nas-media""
mount2="mountpoint -q "/mnt/nas2-media""
#mount3="mountpoint -q "/mnt/nas3-backup""
#mount4="mountpoint -q "/mnt/something-else""
# Store the mount point statements in an array
# Be sure to only include current mount points that should be checked
# Any old or invalid mount points defined as statements in the array will eval to false
mount_points=("$mount1"
"$mount2"
)
any_false=false
# Check if each mount point exists and print to the console any that do not
for stmt in "${mount_points[@]}"; do
    if ! eval "$stmt"; then
        sleep 1
        echo "Mount point not found: $stmt"
        any_false=true
    fi
done
# Evaluate whether all mount points exist or not, and attempt to re-establish missing mounts
if "$any_false"; then
    sleep 1
    echo "Not all mount points exist."
    sleep 1
    echo "Attempting to re-establish mount points in fstab..."
    mount -a
    sleep 2
else
    sleep 1
    echo "All mount points already exist."
    any_still_false=false
    exit 0
fi
# Check again and report any mount points still missing
for stmt in "${mount_points[@]}"; do
    if ! eval "$stmt"; then
        sleep 1
        echo "Mount point still not found: $stmt"
        any_still_false=true
    fi
done
# Report on the final outcome of the program
if "$any_still_false"; then
    sleep 1
    echo "Failed to establish one or more mount points."
    exit 1
else
    sleep 1
    echo "All mount points now exist."
    exit 0
fi
```
https://redd.it/1mhf7f1
@r_bash
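A minimal sketch of the likely cause (editor's note, not from the thread): the script assigns any_still_false only in the else branch, so on the path where a share was missing the variable can still be unset when line 59 runs; `if "$any_still_false"` then expands to an empty command name, which bash reports as ": command not found":

```shell
#!/usr/bin/env bash
any_still_false=""            # simulates the never-assigned flag
"$any_still_false"            # empty command name -> "bash: : command not found"
echo "status: $?"             # → status: 127
# Initializing the flag before both branches avoids the error:
any_still_false=false
if "$any_still_false"; then echo "still missing"; fi
```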
Personality system
🧠 I'm working on an AI personality system using llama.cpp completely locally. All orchestration is done in Bash with modularity by profiles (prompt.txt, config, launcher.sh, etc.).
Has anyone else worked with Bash to structure prompts or handle contextual memory? I'm using Levenshtein distance + token control, and I would love technical feedback on the architecture of the scripts.
https://redd.it/1mhpciq
@r_bash
Read command resulting in a lack of logs.
In my bash script, I have a function that logs some stuff and then requests user input based on the content logged before it. The issue is that those logs don't get logged until I complete the user input first, which is obviously not intended. Am I doing something wrong?
I'm using:

read -p "Input: " choice

Also, if it helps, I'm using Git Bash for Windows.
Thanks for the help in advance!
https://redd.it/1mhmikx
@r_bash
How to make a bash script to archive each folder into a separate rar/zip and retain the name?
The folder structure is
\- Main Folder
\-- Subfolder
\---Subsubfolder
\---Subsubfolder
\-- Subfolder
\---Subsubfolder
\---Subsubfolder
I want to archive each subsubfolder (they have different names) and keep the rar's name the same as the original folder name. I tried using the script from this video, the script from this comment, this script, and ChatGPT, but still have no clue.
https://redd.it/1mie7d3
@r_bash
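One possible approach (an editor's sketch, not from the post, assuming the layout above; tar/gzip is used here since producing .rar needs the proprietary rar tool):

```shell
#!/usr/bin/env bash
# Archive every sub-subfolder (two levels below "Main Folder") into its own
# <foldername>.tar.gz in the current directory, keeping the original name.
find "Main Folder" -mindepth 2 -maxdepth 2 -type d -print0 |
while IFS= read -r -d '' dir; do
  name=$(basename "$dir")
  tar -czf "$name.tar.gz" -C "$(dirname "$dir")" "$name"
done
```

Note this assumes the sub-subfolder names really are all different, as the post says; duplicate names would overwrite each other's archives.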
Process Priority Manager
# nicemgr
## The Story
I am ashamed to admit, despite years doing sysadmin work/software development, it wasn't until today that I learned about nice values for running processes. For those of you that also are unaware, a nice value tells your OS which programs to prioritize, and by what weights, when resources are constrained.
My relevant example: a long-running ffmpeg process making it impossible to use my computer for days, while I desperately wanted to play BG3 in the evening after work. Solution: nice values. Nice indicates how willing a process is to share CPU cycles with other programs. Values range from -20 through 20. Negative nice values are greedy and unwilling to share; positive values are happy to take whatever CPU cycles are available. The higher the nice value, the happier the process is to let other processes use your CPU resources.
The solution worked great, but I found it a bit of a chore going through all the steps to find the PID, check the current nice value, or adjust it, as the syntax isn't the most memorable. I'm attaching a wrapper below for those interested. It can be used across macOS and most Linux distros. Hope you find this helpful!
## TLDR;
- **What “nice” is:** process priority from –20 (greedy) to +20 (polite)
- **Why it matters:** lets CPU-hog jobs yield to interactive apps
- **My use-case:** reniced ffmpeg so I could finally play Baldur’s Gate 3
- Wrapper script below to simplify use of this nifty tool
## Script
```
#!/usr/bin/env bash
# nicer: Check or adjust the nice values of specific processes or list all processes sorted by nice.
#
# Usage:
# nicer checkALL
# nicer <process-name> check
# nicer <process-name> <niceValue>
#
# checkALL List PID, nice, and command for all processes sorted by nice (asc).
# check Show current nice value(s) for <process-name>.
# niceValue Integer from -20 (highest) to 20 (lowest) to renice matching processes.
#
# Note: Negative nice values require root or the process owner.
set -euo pipefail
# Ensure required commands are available
for cmd in pgrep ps sort renice uname; do
if ! command -v "$cmd" >/dev/null 2>&1; then
echo "Error: '$cmd' command not found. Please install it." >&2
exit 1
fi
done
# Describe a nice value in human-friendly terms
priority_desc() {
local nv=$1
case $nv in
-20) echo "top priority." ;;
-19|-18|-17|-16|-15|-14|-13|-12|-11|-10)
echo "high priority level \"$nv\"." ;;
-9|-8|-7|-6|-5|-4|-3|-2|-1)
echo "priority level \"$nv\"." ;;
0) echo "standard priority." ;;
1|2|3|4|5|6|7|8|9|10)
echo "background priority \"$nv\"." ;;
11|12|13|14|15|16|17|18|19)
echo "low priority \"$nv\"." ;;
20) echo "lowest priority." ;;
*) echo "nice value \"$nv\" out of range." ;;
esac
}
# Print usage and exit
usage() {
cat <<EOF >&2
Usage: $(basename "$0") checkALL
$(basename "$0") <process-name> check
$(basename "$0") <process-name> <niceValue>
checkALL List PID, nice, and command for all processes sorted by nice (asc).
check Show current nice value(s) for <process-name>.
niceValue Integer from -20 (highest) to 20 (lowest) to renice matching processes.
Note: Negative nice values require root or the process owner.
EOF
exit 1
}
# Detect OS for ps options
OS=$(uname)
if [ "$OS" = "Linux" ]; then
PS_LIST_OPTS=( -eo pid,ni,comm ) # GNU ps
elif [ "$OS" = "Darwin" ]; then
PS_LIST_OPTS=( axo pid,ni,comm ) # BSD ps on macOS
else
echo "Unsupported OS: $OS" >&2
exit 1
fi
# Must have at least one argument
if [ $# -lt 1 ]; then
usage
fi
# Global all-process check
if [ "$1" = "checkALL" ]; then
ps "${PS_LIST_OPTS[@]}" | sort -n -k2
exit 0
fi
# Per-process operations expect exactly two arguments
if [ $# -ne 2 ]; then
usage
fi
proc_name=$1
action=$2
# Find PIDs matching process name (exact match)
# Using read -a for compatibility with Bash 3.x
read -r -a pids <<< "$(pgrep -x "$proc_name" || echo)"
# Ensure we have at least one non-empty PID
if [ ${#pids[@]} -eq 0 ] || [ -z "${pids[0]:-}" ]; then
echo "No processes found matching '$proc_name'." >&2
exit 1
fi
# Show current nice values
if [ "$action" = "check" ]; then
for pid in "${pids[@]}"; do
nice_val=$(ps -o ni= -p "$pid" | tr -d ' ')
echo "$proc_name \"PID: $pid\" is currently set to $(priority_desc "$nice_val")"
done
exit 0
fi
# Renice if numeric argument
if [[ "$action" =~ ^-?[0-9]+$ ]]; then
if (( action < -20 || action > 20 )); then
echo "Error: nice value must be between -20 and 20." >&2
exit 1
fi
for pid in "${pids[@]}"; do
if renice "$action" -p "$pid" &>/dev/null; then
echo "$proc_name \"PID: $pid\" has been adjusted to $(priority_desc "$action")"
else
echo "Failed to renice PID $pid (permission denied?)" >&2
fi
done
exit 0
fi
# Invalid action provided
echo "Invalid action: must be 'check' or a numeric nice value." >&2
usage
```
https://redd.it/1migur6
@r_bash
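The raw commands the wrapper automates can be tried directly on the current shell's own PID (a sketch; raising a nice value never needs root, lowering one does):

```shell
# Show the current shell's nice value, then renice it upward (politer).
ps -o ni= -p $$        # e.g. 0
renice 10 -p $$        # no root needed to *increase* niceness
ps -o ni= -p $$        # e.g. 10 after the renice
```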
Writing Your Own Simple Tab-Completions for Bash and Zsh
https://mill-build.org/blog/14-bash-zsh-completion.html
https://redd.it/1mjrifb
@r_bash
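As a companion to the linked article, a minimal bash completion for a hypothetical `mytool` command (the command name and subcommands are illustrative, not from the article):

```shell
# Complete `mytool <TAB>` with its subcommands.
_mytool_completions() {
  local cur=${COMP_WORDS[COMP_CWORD]}
  COMPREPLY=( $(compgen -W "build test clean" -- "$cur") )
}
complete -F _mytool_completions mytool
# The word-list matching can be exercised directly:
compgen -W "build test clean" -- b    # → build
```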
Practicing bash
Hello folks, I have started to learn bash for DevOps. What are some ways I can practice bash scripts to get good hands-on experience and become comfortable using it?
https://redd.it/1ml24gb
@r_bash
using parentheses for execution
Is there any advantage or disadvantage to using parentheses for the following execution of the "find" command:

sudo find / \( -mtime +30 -iname '*.zip' \) -exec cp {} /home/donnie \;

as opposed to using the same command without the parentheses, like so:

sudo find / -mtime +30 -iname '*.zip' -exec cp {} /home/donnie \;

Both seem to produce the same result, so I don't fully understand the parentheses in the first "find". I am trying to make sure that I understand when and when not to use parentheses, considering that they can affect the flow of evaluation. Just thought in this example it would not have mattered.
thanks for the help
https://redd.it/1ml3au5
@r_bash
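A sketch of when the grouping does matter (editor's example, not from the post): with only -a (the implicit AND) the parentheses are redundant, but once -o (OR) is involved they change the parse, because -a binds tighter than -o:

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"
touch old.zip new.txt
# Parses as: -name '*.zip'  OR  ( -name '*.txt' AND -print )
find . -name '*.zip' -o -name '*.txt' -print          # prints only ./new.txt
# The parentheses force the OR to be evaluated first, then print every match:
find . \( -name '*.zip' -o -name '*.txt' \) -print    # prints both files
```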
how do we use the flag "-F" (-F from ls -Fla) in find?
Hi, I'd like to know if we can use a "highlight" for dirs in the output of find ./ -name 'something', to tell the difference between dirs and files ...
Thank you and regards!
https://redd.it/1mls1tg
@r_bash
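Two sketches of how this could be done (editor's examples; -printf is GNU find only):

```shell
# GNU find can print the file type itself: %y is d for dirs, f for regular files.
find . -mindepth 1 -printf '%y %p\n'
# A portable alternative: hand each result to ls -dF, which appends / to dirs.
find . -mindepth 1 -exec ls -dF {} +
```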
I can't see the advantage of vdir vs. ls -l cmd
**Hi**, I was reading about vdir (https://www.reddit.com/r/vim/comments/1midti4/vidir_and_vipe_command_utilities_that_use_vim/) and, reading man vdir, it seems to be like ls -l.
*What is the use of the vdir cmd? What is its advantage?*
*Thank you and Regards!*
https://redd.it/1mmknd6
@r_bash
I made a pentesting tool in bash; can anyone have a look and provide commentary on my work?
https://github.com/timeup48/UltimateMacHack/tree/main
https://redd.it/1mmfiqq
@r_bash
Is my code good enough?
```
#NO slashes ( / ) at the end of the string!
startFolder="/media/sam/T7/Windows recovered files"
destinationFolder="/media/sam/T7/Windows sorted files"
#double check file extensions
#should NOT have a period ( . ) at the start
extensions=("png" "jpg" "py" "pyc" "script" "txt" "mp4" "ogg" "java")
declare -A counters
for extension in "${extensions[@]}"
do
    mkdir -p "$destinationFolder/$extension"
    counters[$extension]=0
done
folders=$(ls "$startFolder")
arrFolders=()
for folder in $folders;do
    arrFolders+=($folder)
done
folderAmount=${#arrFolders[@]}
echo $folderAmount folders
completed=0
for folder in $folders;do
    completed=$((completed+1))
    percentage=$(((completed*100)/folderAmount))
    files=$(ls "$startFolder/$folder")
    for file in $files;do
        for extension in "${extensions[@]}";do
            if [[ $file == *".$extension"* ]];then
                filePath="$startFolder/$folder/$file"
                number="${counters[$extension]}"
                destPath="$destinationFolder/$extension/$number.$extension"
                echo -n -e "\r\e[0K$completed/$folderAmount $percentage% $filePath -> $destPath"
                mv "$filePath" "$destPath"
                counters[$extension]=$((counters[$extension]+1))
                break
            fi
        done
    done
done
echo
```
It organized the folders generated by PhotoRec (salvaging files from a corrupt filesystem).
The code isn't very user friendly, but it gets the job done (although slowly)
I have released it on GitHub with additional instructions: https://github.com/justbanana9999/Arrange-by-file-type-PhotoRec-
https://redd.it/1mno9dm
@r_bash
timep: a next-gen time-profiler and flamegraph-generator for bash code
`timep` is a **time p**rofiler for bash code that will give you a per-command execution-time breakdown of any bash script or function.
Unlike other profilers, `timep` records both wall-clock time and CPU time (via a loadable builtin that is base64-encoded in the script and automatically sets itself up when you source timep.bash). Also unlike other profilers, `timep` recovers and hierarchically records metadata on subshell and function nesting, allowing it to recreate the full call-stack tree for that bash code.
***
**BASH-NATIVE FLAMEGRAPHS**
If you call `timep` with the `--flame` flag, it will automatically generate a BASH-NATIVE flamegraph .svg image (where each top-level block represents the wall-clock time spent on a particular command, and all the lower-level blocks represent the combined time spent in the parent subshells/functions... this is not a perf flamegraph showing syscalls). Furthermore, I've added a new colorscheme to the flamegraph generation script that will:
1. color things that take up more time with hotter colors (normal flamegraph coloring is "random but consistent for a given function name")
2. desaturate commands with a low CPU-time / wall-time ratio (e.g., wait, sleep, blocking reads, etc.)
3. empirically remap the colors using a runtime-weighted CDF so that the color scale is evenly used in the flamegraph and extremes don't dominate the coloring
4. stack multiple flamegraphs vertically in the same .svg image.
[HERE](https://raw.githubusercontent.com/jkool702/timep/main/TESTS/FORKRUN/flamegraphs/flamegraph.ALL.svg) is an example of what they look like (details near the bottom of this post).
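As a toy illustration of point 3 above (not timep's actual code), a runtime-weighted CDF remap assigns each frame a palette position equal to its cumulative share of total runtime, so time-heavy frames claim a proportionally wide slice of the color scale:

```shell
#!/usr/bin/env bash
# Toy sketch of runtime-weighted CDF color remapping (made-up numbers,
# not timep's actual implementation). Each frame's palette index is its
# cumulative share of total runtime, scaled to 0..255.
times=(50 30 15 5)   # hypothetical per-frame runtimes

total=0
for t in "${times[@]}"; do total=$(( total + t )); done

cum=0
for t in "${times[@]}"; do
    cum=$(( cum + t ))
    idx=$(( cum * 255 / total ))   # integer CDF position on the palette
    echo "runtime=${t} -> palette index ${idx}"
done
```

With these numbers the four frames land at palette indices 127, 204, 242, and 255: the frame holding half the total runtime gets half the palette to itself, while the three small frames share the rest.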
***
**USAGE**
To use `timep`, download and source the `timep.bash` file from the github repo, then just add `timep` before whatever you want to profile. `timep` handles everything else, including (when needed) redirecting stdin to whatever is being profiled. ZERO changes need to be made to the code you want to profile. Example usage:
```
. timep.bash

timep someFunc <input_file
timep --flame /path/to/someScript.bash
timep -c 'command1' 'command2'
```
`timep` will create 2 time profiles for you - one that has every single command and full metadata, and one that combines commands repeated in loops and only shows run count + total runtime for each command. By default the 2nd one is shown, but this is configurable via the `-o` flag, and both profiles are always saved to disk.
For more info refer to the README on github and the comments at the top of timep.bash.
**DEPENDENCIES**: the major dependencies are bash 5+ and a mounted procfs. Various common command-line tools (sed, grep, cat, tail, ...) are required as well. This basically means you have to be running Linux for timep to work.
* bash 5+ is required because timep fundamentally works by recording `$EPOCHREALTIME` timestamps. In theory you could probably replace each `${EPOCHREALTIME}` with `$(date +"%s.%6N")` to get it to run on bash 4, but it would be considerably less accurate and less efficient.
* a mounted procfs is required to read several things (PPID, PGID, TPID, CTTY, PCOMM) from `/proc/<pid>/stat`. `timep` needs these to correctly re-create the call-stack tree. It *might* be possible to get these things from external tools, which would (at the cost of efficiency) allow `timep` to be used outside of Linux. But this would be a considerable undertaking.
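Both mechanisms the bullets describe are easy to try by hand. A minimal sketch (not timep's actual code) of fork-free `$EPOCHREALTIME` timing and of pulling PPID/PGID out of `/proc/<pid>/stat`:

```shell
#!/usr/bin/env bash
# Requires bash 5+ (EPOCHREALTIME) and a mounted procfs, i.e. Linux.

# 1) Microsecond timestamps without forking:
# EPOCHREALTIME expands to "<seconds>.<microseconds>"; deleting the dot
# yields integer microseconds, so elapsed time is plain shell arithmetic.
t0=${EPOCHREALTIME/./}
sleep 0.1
t1=${EPOCHREALTIME/./}
echo "elapsed: $(( t1 - t0 )) us"

# 2) Process metadata from procfs:
# after the parenthesized comm field, /proc/<pid>/stat holds
# state, PPID, PGID, ... as space-separated fields. The comm field may
# itself contain spaces, so split on the closing paren first.
stat=$(< /proc/self/stat)
rest=${stat##*) }                 # everything after "(comm) "
read -r state ppid pgid _ <<< "$rest"
echo "state=$state ppid=$ppid pgid=$pgid"
```

The `date +"%s.%6N"` fallback mentioned above would replace step 1, but every timestamp then costs a fork+exec, which is exactly the overhead a profiler wants to avoid.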
***
**EXAMPLES**
Here's an example of the type of output timep generates.
```
testfunc() { f() { echo "f: $*"; }
g() ( echo "g: $*"; )
h() { echo "h: $*"; f "$@"; g "$@"; }
echo 0
{ echo 1; }
( echo 2 )
echo 3 &
{ echo 4; } &
echo 5 | cat | tee
for (( kk=6; kk<10; kk++ )); do
echo $kk
h $kk
for jj in {1..3}; do
f $kk $jj
g $kk $jj
done
done
}
timep testfunc
```
gives
LINE.DEPTH.CMD NUMBER COMBINED WALL-CLOCK TIME COMBINED CPU TIME COMMAND
<line>.<depth>.<cmd>: ( time | cur depth % | total % ) ( time | cur depth % | total % ) (count) <command>
_____________________ ________________________________ ________________________________ ____________________________________
9.0.0: ( 0.025939s |100.00% ) ( 0.024928s |100.00% ) (1x) << (FUNCTION): main.testfunc "${@}" >>
├─ 1.1.0: ( 0.000062s | 0.23% ) ( 0.000075s | 0.30% ) (1x) ├─ testfunc "${@}"
│ │
│ 8.1.0: ( 0.000068s | 0.26% ) ( 0.000081s | 0.32% ) (1x) │ echo 0
│ │
│ 9.1.0: ( 0.000989s | 3.81% ) ( 0.000892s | 3.57% ) (1x) │ echo 1
│ │
│ 10.1.0: ( 0.000073s | 0.28% ) ( 0.000088s | 0.35% ) (1x) │ << (SUBSHELL) >>
│ └─ 10.2.0: ( 0.000073s |100.00% | 0.28% ) ( 0.000088s |100.00% | 0.35% ) (1x) │ └─ echo 2
│ │
│ 11.1.0: ( 0.000507s | 1.95% ) ( 0.000525s | 2.10% ) (1x) │ echo 3 (&)
│ │
│ 12.1.0: ( 0.003416s | 13.16% ) ( 0.000001s | 0.00% ) (1x) │ << (BACKGROUND FORK) >>
│ └─ 12.2.0: ( 0.000297s |100.00% | 1.14% ) ( 0.000341s |100.00% | 1.36% ) (1x) │ └─ echo 4
│ │
│ 13.1.0: ( 0.000432s | 1.66% ) ( 0.000447s | 1.79% ) (1x) │ echo 5
│ │
│ 13.1.1: ( 0.000362s | 1.39% ) ( 0.000376s | 1.50% ) (1x) │ cat
│ │
│ 13.1.2: ( 0.003441s | 13.26% ) ( 0.006943s | 27.85% ) (1x) │ tee | ((kk=6)) | ((kk<10))
│ │
│ 15.1.0: ( 0.000242s | 0.93% ) ( 0.000295s | 1.18% ) (4x) │ ((kk++ ))
│ │
│ 16.1.0: ( 0.000289s | 1.11% ) ( 0.000344s | 1.37% ) (4x) │ echo $kk
│ │
│ 17.1.0: ( 0.003737s | 3.59% | 14.40% ) ( 0.003476s | 3.48% | 13.94% ) (4x) │ << (FUNCTION): main.testfunc.h $kk >>
│ ├─ 1.2.0: ( 0.000231s | 6.20% | 0.89% ) ( 0.000285s | 8.22% | 1.14% ) (4x) │ ├─ h $kk
│ │ 8.2.0: ( 0.000302s | 8.07% | 1.16% ) ( 0.000376s | 10.84% | 1.50% ) (4x) │ │ echo "h: $*"
│ │ 9.2.0: ( 0.000548s | 14.72% | 2.11% ) ( 0.000656s | 18.96% | 2.63% ) (4x) │ │ << (FUNCTION): main.testfunc.h.f "$@" >>
│ │ ├─ 1.3.0: ( 0.000232s | 42.57% | 0.89% ) ( 0.000287s |