clevercli: ChatGPT powered CLI utilities.
https://github.com/clevercli/clevercli
https://redd.it/11lwoi4
@r_bash
GitHub
GitHub - clevercli/clevercli: ChatGPT powered CLI utilities. Easily add new prompt types in ~/.clevercli/
ChatGPT powered CLI utilities. Easily add new prompt types in ~/.clevercli/ - clevercli/clevercli
File Test Fails – Issue With Quotation Marks
if ! [ -e "${ISBN} - Book.pdf" ]; then
Gets interpolated to:
if ! [ -e 9780367199692 - Book.pdf ]; then
Condition always resolves to file not found, because the space in the filename breaks the path....
I know this is basic, but I can't figure out how to write shell that will result in the filename quoted:
if ! [ -e "9780367199692 - Book.pdf" ]; then
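Editor's note: the quoted test in the original line is already correct. The double quotes are consumed by the shell before `[` runs; they never become part of the filename, but they do keep the space inside a single argument. The `set -x` trace just prints the word without quotes, which makes it look unquoted. A minimal check:

```shell
#!/bin/bash
# The quotes are removed by the shell; [ receives the filename,
# space included, as one argument.
ISBN=9780367199692
cd "$(mktemp -d)"
touch "${ISBN} - Book.pdf"
if ! [ -e "${ISBN} - Book.pdf" ]; then
    echo "not found"
else
    echo "found"
fi
```

This prints `found`.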
https://redd.it/11ma0ig
@r_bash
if ! [ -e "${ISBN} - Book.pdf" ]; then
Gets interpolated to:
if ! [ -e 9780367199692 - Book.pdf ]; then
Condition always resolves to file not found, because the space in the filename breaks the path....
I know this is basic, but I can't figure out how to write shell that will result in the filename quoted:
if ! [ -e "9780367199692 - Book.pdf "]; then
https://redd.it/11ma0ig
@r_bash
Reddit
r/bash on Reddit: File Test Fails – Issue With Quotation Marks
Posted by u/EUTIORti - No votes and 3 comments
How to hack LD_LIBRARY_PATH to use a recent bash from a Debian sid chroot
I'm trying to get a more up-to-date version of `bash` on `LinuxMint`.
I have a `chroot` with `Debian Sid` on my box.
Here is what I try to do in a `bash` wrapper script, placed early in my `PATH`:
#!/bin/bash
LD_LIBRARY_PATH=/path/to/chroot/usr/lib/x86_64-linux-gnu:/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib /path/to/chroot/bin/bash "$@"
But I get:
/home/mevatlave/bin/bash: line 3: 1492488 Segmentation fault (core dumped) LD_LIBRARY_PATH=/path/to/chroot/usr/lib/x86_64-linux-gnu:/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib /path/to/chroot/bin/bash "$@"
From the chroot:
% ldd /bin/bash
linux-vdso.so.1 (0x00007fff237fc000)
libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f94de839000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f94de658000)
/lib64/ld-linux-x86-64.so.2 (0x00007f94de9af000)
Is it feasible?
With
LD_LIBRARY_PATH=/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib /path/to/chroot/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /path/to/chroot/bin/bash "$@"
I get
/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.36' not found
With
LD_LIBRARY_PATH=/path/to/chroot/usr/lib/x86_64-linux-gnu:/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib /path/to/chroot/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /path/to/chroot/bin/bash "$@"
I get:
Segmentation fault (core dumped)
LD_LIBRARY_PATH=/path/to/chroot/usr/lib/x86_64-linux-gnu:/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib: /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /path/to/chroot/bin/bash "$@"
I can run this one:
#!/bin/bash
LANG=C
LD_LIBRARY_PATH=/path/to/chroot/usr/lib/x86_64-linux-gnu:/path/to/chroot/lib:/path/to/chroot/lib64:/path/to/chroot/var/lib:/path/to/chroot/usr/lib:/path/to/chroot/usr/local/lib /path/to/chroot/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /path/to/chroot/bin/bash "$@"
But when I run `bash --version`, I get:
Segmentation fault (core dumped)
-
root@debian-sid_chroot:/# dpkg -l | grep libc6
ii libc6:amd64 2.36-8 amd64 GNU C
Library: Shared libraries
ii libc6-dev:amd64 2.36-8 amd64 GNU C
Library: Development Libraries and Header Files
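Editor's note (not from the thread): these crashes are consistent with mixing a dynamic loader from one glibc build with the libc from another; `ld-linux` and `libc.so.6` must come from the same glibc. The chroot's own loader also accepts `--library-path`, which bypasses `LD_LIBRARY_PATH` entirely, so a wrapper along these lines (paths are the poster's placeholders) may be more robust:

```shell
#!/bin/bash
# Run the chroot's bash under the chroot's own dynamic loader, resolving
# every shared library from the chroot only -- never from the host.
CHROOT=/path/to/chroot
exec "$CHROOT/lib64/ld-linux-x86-64.so.2" \
    --library-path "$CHROOT/lib/x86_64-linux-gnu:$CHROOT/usr/lib/x86_64-linux-gnu" \
    "$CHROOT/bin/bash" "$@"
```

This is a sketch only; it cannot be tested without the chroot in question.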
https://redd.it/11mp45n
@r_bash
Reddit
r/bash on Reddit: How to hack LD_LIBRARY_PATH to use a recent bash from a Debian sid chroot
Posted by u/MevatlaveKraspek - No votes and 1 comment
Get string field using only bash substitution ?
string="Archwiki 📘 link https://wiki.archlinux.org/index.php?search= care"
Using only bash substitution (meaning no awk, sed, cut, etc), how do I get only the link field $3 "https://wiki.archlinux.org/index.php?search=" ?
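One way with parameter expansion only (a sketch; it assumes the fields are separated by single spaces and the URL is the fourth whitespace-separated word):

```shell
string="Archwiki 📘 link https://wiki.archlinux.org/index.php?search= care"
rest=${string#* }     # drop "Archwiki"
rest=${rest#* }       # drop the emoji
rest=${rest#* }       # drop "link"
link=${rest%% *}      # keep everything up to the next space
echo "$link"          # https://wiki.archlinux.org/index.php?search=
```

`${var#pattern}` strips the shortest matching prefix and `${var%%pattern}` the longest matching suffix, so no external tools are involved.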
https://redd.it/11mte4d
@r_bash
Comment in the middle of a case statement
I sent my co-worker a shell script snippet, and after I copied it to email, I threw in a comment.
I got an email back saying the comment broke the code. Is that possible?
case "$1" in
start)
do_something
#comment
;;
stop)
do_something_else
;;
*)
echo "start or stop"
;;
esac
Where's the rule for this? Can a comment go at the end of the line or after the ;;? Google didn't help.
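For the record: a comment on its own line inside a case arm is perfectly legal, and so is one after the `;;`. What likely broke the code (an assumption) is the email client re-wrapping lines, so that the `;;` landed on the same line as the comment and got commented out, producing a syntax error at `esac`. A quick check:

```shell
#!/bin/bash
# Legal: a comment line inside an arm, and a trailing comment after ';;'.
case "start" in
    start)
        echo "doing something"
        # comment
        ;; # also fine here
    *)
        echo "start or stop"
        ;;
esac
```

This prints `doing something`. By contrast, `do_something #comment ;;` on one line swallows the `;;` into the comment.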
https://redd.it/11mujac
@r_bash
Reddit
r/bash on Reddit: Comment in the middle of a case statement
Posted by u/drillbit7 - No votes and 2 comments
How to block saving sensitive info to history?
I have some regular tasks that involve copy/paste sensitive strings (passwords, etc) to my terminal to encode/decode them (sha256sum, base64, etc). In the process, these sensitive strings are being saved to my bash_history in cleartext, which I would like to avoid!
I can disable my history, that's easy, but I would like to be able to keep this feature.
I already have HISTCONTROL=ignoreboth set, which among other things prevents any command preceded by whitespace from being written to history, which is great for my ad-hoc needs.
Is there any similar option that would allow me to prevent, say, any line beginning with 'echo' from being saved to history? Any hook where I can toss in a regex to determine what does and does not get saved to history?
I could script something to manage my history file, but as I am typically working with my home dir on a NAS with background snapshotting, I would rather the string not get written in the first place.
Certainly not a backbreaking issue, but just seeing if I can squeeze another half a percent of efficiency out of my workflow and take care of an odd but major security issue with how I am working today.
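bash has exactly this: `HISTIGNORE` takes colon-separated glob patterns, and any command line matching one of them is never saved to the history list. A `~/.bashrc` sketch (the patterns here are examples, not a complete list):

```shell
# Keep the 'ignoreboth' behaviour, and additionally drop anything that
# starts with echo or mentions the encoding tools:
HISTCONTROL=ignoreboth
HISTIGNORE='echo *:*sha256sum*:*base64*'
```

Patterns are matched against the entire line, so `*base64*` also suppresses pipelines that contain it anywhere.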
https://redd.it/11n3tvd
@r_bash
Reddit
r/bash on Reddit: How to block saving sensitive info to history?
Posted by u/gort32 - No votes and 1 comment
Can you force bash to not throw a specific error?
I have a function that does something like the following
gg() {
cleanupOnExit() {
declare -p FDall 2>/dev/null && for fd in "${FDall[@]}"; do
# if FDall has already been defined in the main script,
# send each open fd it contains a NULL and then close it.
[[ -e /proc/$$/fd/${fd} ]] && {
printf '\0' >&${fd}
exec {fd}>&-
}
done
# <...do other cleanup...>
}
trap 'cleanupOnExit' EXIT
local -a FDall
exec {FDall[0]}>./.file0
exec {FDall[1]}>./.file1
# <...do stuff...>
}
When trying to define/source it, bash throws an error saying that {fd} is an ambiguous redirect. Now I get why bash is unhappy, since when cleanupOnExit is defined {fd} would, in fact, be an ambiguous redirect, but there are checks to ensure that bit of code will only ever run when {fd} exists and is an open file descriptor.
Is there a good way to force bash to just ignore this error and source the function anyways? Any other suggestions to make this work?
I did figure out one way to work around this, but it is terrible and I really don't want to use it. Basically, you create a variable with the code to set up the exit trap, then the exit trap sources that variable. You can't just have the exit trap as-is, though, since if the script exits before the file descriptors are defined in the main script, the exit trap (that does other stuff too) won't run. Instead, you have to do something like this:
gg() {
cleanupOnExitSrc="$(cat<<'EOF'
cleanupOnExitSrc0="$(cat<<EOI0
cleanupOnExit() {
$(declare -p FDall 2>/dev/null && {
cat<<'EOI1'
for fd in "${FDall[@]}"; do
# if FDall has already been defined in the main script,
# send each open fd it contains a NULL and then close it.
[[ -e /proc/$$/fd/${fd} ]] && {
printf '\0' >&${fd}
exec {fd}>&-
}
done
EOI1
} || echo ':')
}
EOI0
)"
EOF
)"
trap 'source <(echo "${cleanupOnExitSrc}") && cleanupOnExit' EXIT
local -a FDall
exec {FDall[0]}>./.file0
exec {FDall[1]}>./.file1
# <...do stuff...>
}
which, again, is terrible
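Editor's note, a possibly simpler escape hatch (an assumption, not from the thread): keep the `{varname}` form for opening the descriptors, but close them via `eval`, so the problematic redirection is never parsed at function-definition time:

```shell
gg() {
    cleanupOnExit() {
        # Only run the loop if FDall was ever defined in the main script.
        declare -p FDall &>/dev/null || return 0
        local fd
        for fd in "${FDall[@]}"; do
            if [[ -e /proc/$$/fd/${fd} ]]; then
                printf '\0' >&"${fd}"   # send a NUL to the open fd
                eval "exec ${fd}>&-"    # close it; eval defers parsing to runtime
            fi
        done
        # <...do other cleanup...>
    }
    trap 'cleanupOnExit' EXIT
    local -a FDall
    exec {FDall[0]}>./.file0
    exec {FDall[1]}>./.file1
    # <...do stuff...>
}
```

`eval` here only ever interpolates a numeric fd held in a local variable, so the usual eval hazards don't apply.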
https://redd.it/11n2h0w
@r_bash
Reddit
r/bash on Reddit: Can you force bash to not give a throw a specific error?
Posted by u/jkool702 - No votes and no comments
finding duplicate files excluding metadata
I am interested in a script/utility that will BULK scan all directories recursively and, if the file is compatible with ffmpeg, create a SHA checksum of the data EXCLUDING metadata, writing it to a file for later sorting by checksum and removing all unique rows.
It is easy for ID3 tags/FLAC tags/video tags to change without the underlying file changing. I'd like to be able to detect duplicates where the underlying data is the same but the metadata is different.
It would be great if it also supported JPG EXIF data, using exiftool or something similar.
Has anyone seen a script in gists/GitHub or similar?
Cheers
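Editor's note: the per-file building block here could be ffmpeg's `hash` muxer, which checksums the codec packets rather than the container, so ID3/FLAC/video tags don't affect the result. A sketch (untested against the poster's library; `-c copy` avoids re-encoding):

```shell
# Print a metadata-independent sha256 of the audio stream of one file.
audio_hash() {
    ffmpeg -nostdin -loglevel error -i "$1" -map 0:a -c copy -f hash -hash sha256 -
}
# usage: audio_hash song.mp3   ->  SHA256=...
```

Running this over `find ... -print0 | while IFS= read -r -d '' f; ...` and sorting the output by hash would surface the duplicates.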
https://redd.it/11nngho
@r_bash
Reddit
r/bash on Reddit: finding duplicate files excluding metadata
Posted by u/simonmcnair - No votes and no comments
I can't figure out what they want me to do with this bash script.
My employer said I had to run this script as a docker entrypoint for a postgres docker container.
#!/bin/bash
set -e cat << 'EOF' >> /var/lib/postgresql/data/postgresql.conf # archive options used for backup
wal_level = replica
archive_mode = on
archive_command = 'DIR="/var/backups/$(date +%Y%m%d)-wal"; (test -d "$DIR" || mkdir -p "$DIR") && gzip < "%p" > "$DIR/%f.gz"'
archive_timeout = 60min
#restore_command = 'gunzip < /var/backups/recovered_wal/%f.gz > %p'
EOF psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
CREATE USER luca WITH PASSWORD 'luca';
CREATE DATABASE luca;
GRANT ALL PRIVILEGES ON DATABASE luca TO luca;
EOSQL
I am pretty illiterate in bash, but just by looking at it I could tell it was a little bit weird.
Anyways, when running it as a docker entrypoint, the container immediately exits and the docker logs read the following error:
./PostgresScript.sh: line 12: warning: here-document at line 2 delimited by end-of-file (wanted `EOF')
I've personally never seen EOF used like that (I've seen it used mostly like the EOSQL in the script above). I can't figure out what was the intention behind it or how to fix it.
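Editor's note: the warning is the giveaway. Everything after `cat << 'EOF'` on that first line, including the `EOF` terminator and the `psql` command further down, has been folded onto other lines, so bash never finds `EOF` alone at the start of a line and reads to end-of-file. The script's newlines were almost certainly lost in a copy-paste or chat transfer. A plausible reconstruction of the intended script (an assumption; only the line breaks are restored):

```shell
#!/bin/bash
set -e

cat << 'EOF' >> /var/lib/postgresql/data/postgresql.conf
# archive options used for backup
wal_level = replica
archive_mode = on
archive_command = 'DIR="/var/backups/$(date +%Y%m%d)-wal"; (test -d "$DIR" || mkdir -p "$DIR") && gzip < "%p" > "$DIR/%f.gz"'
archive_timeout = 60min
#restore_command = 'gunzip < /var/backups/recovered_wal/%f.gz > %p'
EOF

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
CREATE USER luca WITH PASSWORD 'luca';
CREATE DATABASE luca;
GRANT ALL PRIVILEGES ON DATABASE luca TO luca;
EOSQL
```

It appends WAL-archiving settings to `postgresql.conf`, then creates a `luca` user and database; it is not runnable outside the postgres container.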
https://redd.it/11ntrki
@r_bash
Reddit
r/bash on Reddit: I can't figure out what they want me to do with this bash script.
Posted by u/No-Fish9557 - No votes and 3 comments
Please help me with this noob multiline cmd argument question
(apologies for not crossposting it properly from r/javahelp, it doesn't let me do that)
I would like to do
cat << EOF | java -jar my.jar
> some stuff
> some more stuff
> EOF
and access the whole thing as the first element of args. But the array is empty.
If, however, I just do
java -jar my.jar 7
the first element is actually 7. I desperately need to make this work with files or as written in the first example. Please help...
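The heredoc feeds the program's standard input, not its argument list, so `args` is empty by design. To land the whole block in `args[0]`, wrap it in a command substitution. A sketch, with a `printf` function standing in for the jar:

```shell
# A stand-in for the jar that just reports its first argument
# (in real use: java -jar my.jar "$(cat << 'EOF' ... EOF)"):
show_first_arg() { printf 'args[0] = [%s]\n' "$1"; }

show_first_arg "$(cat << 'EOF'
some stuff
some more stuff
EOF
)"
# args[0] = [some stuff
# some more stuff]
```

Alternatively, keep the pipe as written and read `System.in` on the Java side; stdin and argv are separate channels.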
https://redd.it/11nw7v4
@r_bash
Reddit
r/bash on Reddit: Please help me with this noob multiline cmd argument question
Posted by u/caverweni - No votes and no comments
What can you do with bash? Can you use bash scripts in accounting/finance?
It seems that a lot of people use bash for things like networking/sysadmin work. I really have no idea if I would be interested in that, however (if someone has resources to see if I would enjoy that kind of work, please feel free to share). I come from a business background, so I see a lot of menial things that seem like they could be easily automated. There's a lot of things we do in spreadsheets that I feel are just dirty work. Would bash scripts be the best way to combat this, or should I learn a different programming language such as Python or Java?
Also, if anyone could direct me to a good place to learn bash, that would be much appreciated. Thank you!
https://redd.it/11nxbwe
@r_bash
Reddit
r/bash on Reddit: What can you do with bash? can you use bash scripts in accounting/finance?
Posted by u/samicballs - No votes and 2 comments
Question: Bash process substitution with vim
Hi all,
I have a question about whether it's possible to get an interactive vim from inside a process substitution.
The reason I ask is because I had a seemingly simple idea, but unfortunatelly it simply doesn't work.
Example, how I'd expect it to work:
zypper pa -i | grep -E -f <(echo bash | fzf)
Here, once you get to fzf, it will take over control of your terminal and you can type in your search terms to narrow down matching lines.
Example, where it doesn't work:
grep -F -x -f <(tmp="$(mktemp)"; vim "$tmp" && cat "$tmp") .bashrc
The naive idea here is to interactively write a temporary file with regexps, and once you save via :wq, it will be taken as input for the matching.
In fact, I actually use an exported function for this, but for simplicity reasons let's just assume you typed the cmd as is.
But it doesn't work!
Depending on how you do it, it either is completely silent until you Ctrl-C out of it, or gives the error message:
Vim: Warning: Output is not to a terminal
Vim: Warning: Input is not from a terminal
So my question is: what does fzf do differently to get the fullscreen/interactive priority in the same terminal, but vim can't do the same?
Is there a way around it?
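Editor's note (not from the thread, and untested here since it needs a terminal): fzf opens `/dev/tty` directly for its UI, so it stays interactive even when its stdin/stdout are pipes; vim simply uses the stdin/stdout it inherits, which inside `<(...)` point at the pipe. Explicitly reattaching vim to the terminal appears to restore interactivity:

```shell
grep -F -x -f <(tmp="$(mktemp)"; vim "$tmp" </dev/tty >/dev/tty && cat "$tmp") .bashrc
```

The only change from the failing command (besides the missing `$` in `"$(mktemp)"`) is the pair of `/dev/tty` redirections on vim.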
https://redd.it/11nzm62
@r_bash
Reddit
r/bash on Reddit: Question: Bash process substitution with vim
Posted by u/PsychologicalOwl496 - No votes and no comments
How I use Bash to automate tasks on Linux
https://www.codelivly.com/how-i-use-bash-to-automate-tasks-on-linux/
https://redd.it/11obrvz
@r_bash
Codelivly
How I use Bash to automate tasks on Linux - Codelivly
As a Linux user, you’re probably already aware of how powerful the command line can be.
Is there any sed linter to quickly detect script errors?
It's not helpful, in relatively long sed scripts, to see errors that just tell you which line number the error is on (or just a character number). I want something like shellcheck, but for sed.
P. S. Maybe this question is invalid and I should just rewrite the code without long embedded sed scripts.
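Editor's note: no shellcheck equivalent for sed seems to exist, but sed parses the whole program before reading any input, so running a script against empty input is a cheap syntax check (GNU sed 4.3+ also has `--debug` to dump its parse of the program):

```shell
# Exit status tells you whether the sed program parses at all:
check_sed() { sed -e "$1" /dev/null; }

check_sed 's/foo/bar/' && echo "ok"
check_sed 's/foo/bar'  2>/dev/null || echo "syntax error"
```

This prints `ok` and then `syntax error` (the second expression has an unterminated `s` command).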
https://redd.it/11oghg1
@r_bash
Reddit
r/bash on Reddit: Is there any sed linter to quickly detect script errors?
Posted by u/EmilySeville7cfg - No votes and no comments
Globals or not globals?
Recently, I implemented my first parser in Bash. It works, but the problem is how slowly it runs on thousands of files. One of the issues is that almost all functions accept some page as input and produce some output, which means they reparse the same page too many times. I don't currently use global variables to cache parsed results for later use. Speed is not a big issue when just using it on a small number of files.
The question is: should I use globals or not? I am asking for your opinions on that. I feel that with globals the script becomes more unsafe, as they can be accidentally modified. But on the other hand, they can improve performance, since I can just retrieve cached info from globals instead of reparsing pages. I am also questioning whether storing info in global variables improves script maintainability.
The initial issue is that Bash can't structure data very well. I mean, it doesn't have something like structs or classes, even primitive ones (without accessibility modifiers, just to group some info). I am in doubt whether I've chosen the right language to implement my parser.
P. S. I had an idea to store parsed results as JSON/YAML-formatted strings inside the script and retrieve data via jq/yq, but as I found out, yq may slow down scripts. That's why I mostly just use sed to do the parsing.
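Editor's note: if the goal is just "parse each page once", a single global associative array used as a cache keeps the global surface small; everything else can stay function-local. A sketch (the `sed` expression is a hypothetical stand-in for the real parser; results are returned in `REPLY` rather than printed, because calling the function in `$(...)` would run it in a subshell and discard the cache update):

```shell
declare -A PAGE_TITLE_CACHE   # filename -> parsed result

page_title() {
    local file=$1
    # Parse only on a cache miss; ${var+set} detects "key present".
    if [[ -z ${PAGE_TITLE_CACHE[$file]+set} ]]; then
        PAGE_TITLE_CACHE[$file]=$(sed -n 's/^# //p;q' "$file")
    fi
    REPLY=${PAGE_TITLE_CACHE[$file]}
}
# usage: page_title pages/foo.md; echo "$REPLY"
```

A repeated call for the same file returns the cached value without touching the file again.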
https://redd.it/11oj8ky
@r_bash
GitHub
v2-tooling/clip-parse at main · command-line-interface-pages/v2-tooling
Tools for handling v2.*.* syntax. Contribute to command-line-interface-pages/v2-tooling development by creating an account on GitHub.
I'm using the find command to reorganize the mp3 files in a directory but it only half way works??
I have this script:
#!/bin/bash
cd /home/$USER/Music
log="/home/$USER/Documents/logs/spotify-dl.log"
output_dir="/home/$USER/Music/downloads"
echo $(date) >> "$log"
# Discover Weekly
npx spotifydl --download-report --output "$output_dir" "link to a spotify playlist" >> "$log"
# Release Radar
npx spotifydl --download-report --ouptut "$output_dir" "link to a spotify playlist" >> "$log"
find downloads/ -name *.mp3 -exec mv '{}' downloads/ \; && find downloads/ -type d -not -wholename 'downloads/' -exec rm -rf '{}' \;
echo >> "$log"
Each spotifydl command will download the songs into the download folder, but like this:
./downloads/$ARTIST/$ALBUM/$SONG.mp3
But I want everything directly in 'downloads', like so:
./downloads/$SONG.mp3
The weird thing is, after this script runs, all the songs downloaded from the 'Discover Weekly' playlist are correctly placed in the downloads folder, but all the songs downloaded from the 'Release Radar' playlist end up in the Music directory, which is one directory up from downloads, and the $ARTIST/$ALBUM/$SONG.mp3 hierarchy is maintained.
I have no idea why. Can anyone see what I'm doing wrong here?
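Editor's note, two observations (not tested against spotifydl): the Release Radar call spells the flag `--ouptut`, so that playlist never gets the `downloads` output directory, which would explain why only its files land one level up; and the unquoted `*.mp3` in `find -name` is expanded by the shell whenever a matching file already sits in the current directory. Quoting the pattern, and skipping files already at the top level, makes the flattening step predictable:

```shell
# Quoted pattern reaches find intact; -mindepth 2 skips files already
# directly under downloads/ so nothing is moved onto itself.
find downloads/ -mindepth 2 -name '*.mp3' -exec mv -t downloads/ '{}' +
```

`mv -t DIR` (GNU coreutils) takes the destination first, which lets `-exec ... +` batch many files into one mv.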
https://redd.it/11oprsf
@r_bash
Reddit
r/bash on Reddit: I'm using the find command to reorganize the mp3 files in a directory but it only half way works??
Posted by u/Pickinanameainteasy - No votes and 4 comments
What exactly is the difference between an interactive and non-interactive shell? (direct execution vs through ssh)
I was trying to get a script running on several instances using an ssh loop.
Funnily, some binaries won't run when executed remotely (ssh myuser@server "binary"), but they do when you reference their whole path. This bothers me because the path of the binary is in $PATH (whether executed remotely or directly).
The OS/version/user/... are all the same on all instances.
Can someone explain why this is happening? I guess it has something to do with interactive/non-interactive shells? What exactly separates the two? How are user rights and profiles managed in these scenarios?
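A quick way to see the distinction is `$-`, which contains `i` only in interactive shells; `ssh host 'cmd'` starts a non-interactive, non-login shell, so PATH additions made in `~/.profile` or guarded by interactivity checks in `~/.bashrc` are missing. A sketch:

```shell
# "i" in $- marks an interactive shell (--norc keeps rc files out of the way):
bash --norc -c  'case $- in *i*) echo interactive;; *) echo non-interactive;; esac'
bash --norc -ic 'case $- in *i*) echo interactive;; *) echo non-interactive;; esac' 2>/dev/null
```

The first prints `non-interactive`, the second `interactive`. Forcing a login shell on the remote end, e.g. `ssh myuser@server 'bash -lc "binary"'`, usually restores the expected PATH.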
https://redd.it/11osjrn
@r_bash
Reddit
r/bash on Reddit: What exactly is the difference between an interactive and non-interactive shell? (direct execution vs through…
Posted by u/unix-elitist - No votes and 6 comments
Why does this loop exit early?
I have a text file containing a list of file names, one per line, that I want to download from a remote host (a seedbox hosted with feralhosting). The text file contains only partial file names, so I need to find the file on the remote host first. e.g., the text file might have "Miami Connection" and on the remote host it's "Miami Connection (1987).mkv".
Initially I was just doing this:
while read i ; do f=$(ssh myhost "ls -1 ~/files/ | grep \"$i\"") ; scp myhost:~/files/"$f" . ; done <file_list
This would download 1 - 3 files then exit (rather than iterate over the full text file as I expected). I'd delete the lines that were downloaded from the list and restart. It would grab a few more files then exit again... The downloads always complete and it would exit after a very random amount of execution time. Nothing appears to be killing it. The job always exits as if it reached the end of the file, but it should be reading more lines.
I'm trying to figure out why it's exiting. I've expanded it into a small script with some diagnostic output and have gotten it down to this (no file transfer, so it runs very quickly):
#!/bin/bash
set -x
while read i ; do
    unset f
    echo "==$i=="
    f=$(ssh myhost "ls ~/files/ | grep \"$i\"" | head -1)
    if [ $f ] ; then
        echo "found $f"
    else
        echo "couldn't find $i"
    fi
done <test
If I comment out the ssh line, it'll iterate over the entire file. If I leave the ssh line in, it always stops early. To rule out any weirdness in the text file, I created a new one, making sure it's just plain text:
printf "not a file\nmkv\nalso not a file\nnoperino" > test
With the test file it always stops after the first line. The "mkv" line is the only one that should match anything on the remote host. It doesn't matter where I put that line in the text file -- the script always stops after line one. Again, if I comment out the ssh line, it goes through the whole text file. The output is like:
+ read i
+ unset f
+ echo '==not a file=='
==not a file==
++ ssh myhost 'ls ~/files/ | grep "not a file" | head -1'
+ f=
+ [ -n '' ]
+ echo 'couldn'\''t find not a file'
couldn't find not a file
+ read i
Can anyone explain what I'm doing wrong here/why it won't read the entire file? I'm not really looking for better/alternate ways of doing this. Just trying to understand what's happening here.
https://redd.it/11p4cbr
@r_bash
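One plausible explanation (hedged; the thread snapshot above contains no answer): `ssh` inherits the loop's stdin and drains the rest of the input file itself, so `read` hits end-of-file early. The effect can be reproduced locally with any stdin-consuming command standing in for `ssh`:

```shell
# Reproduce the early exit without ssh: `cat > /dev/null` drains the
# loop's stdin exactly the way ssh does when it forwards stdin.
printf 'one\ntwo\nthree\n' > /tmp/loop_demo_$$
count=0
while read -r i; do
  count=$((count + 1))
  cat > /dev/null    # stand-in for ssh: consumes the remaining lines
done < /tmp/loop_demo_$$
echo "iterations: $count"    # prints 1, not 3
rm -f /tmp/loop_demo_$$
```

If this is the cause, `ssh -n`, redirecting ssh's stdin (`ssh myhost ... < /dev/null`), or reading from a separate descriptor (`while read -r -u3 i; do ...; done 3<test`) would let the loop see every line.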
bash-annotations: A bash framework for creating custom injection and function hook style annotations
Source code: https://github.com/david-luison-starkey/bash-annotations
Showcase project: https://github.com/david-luison-starkey/bash-annotations-toolbox
https://redd.it/11p6cs4
@r_bash
Command works on Linux Mint terminal, but my syntax is wrong to work under Linux Mint in a starter.
The following works in a terminal:
gsettings reset org.x.editor.state.history-entry history-replace-with
I tried the following in a terminal (later I will use it in a starter), but got this error message:
bash -c 'gsettings reset org.x.editor.state.history-entry history-replace-with; -c'
bash: -c: Command not found.
Any idea?
https://redd.it/11pjnew
@r_bash
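The error follows from how `bash -c` parses its argument: the whole single-quoted string is one script, so the trailing `; -c` becomes a second command literally named `-c`, which does not exist. A small reproduction, with `echo` standing in for `gsettings` so it runs anywhere:

```shell
# The stray `; -c` inside the quotes is executed as a command named "-c",
# hence "bash: -c: command not found" and exit status 127.
bash -c 'true; -c' 2>/dev/null
rc=$?
echo "exit status: $rc"    # prints "exit status: 127"

# The working launcher entry needs only the one command inside the quotes
# (echo shown here in place of the real gsettings call):
bash -c 'echo would run: gsettings reset org.x.editor.state.history-entry history-replace-with'
```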
Is there a better way to remove all files of a certain extension except the most recent?
~/Documents/Blah/Foo Bar~D45EAG74.foo/ contains a bunch of files and folders. Multiple backups are created within, named something like:
Backup_2023-03-12T18-13-02_A_028E165A-42CB-E084-F0C8-04C8EE231D82.backup
I'm needing a way to delete *.backup files while leaving the most recent one alone. The below works only if there aren't any spaces in the filenames. Sadly, sometimes there are indeed spaces in the filenames.
#!/bin/bash
# ~/.scripts/myscript.sh
# Gets run daily via systemd service and timer
myDir="$HOME/Documents/Blah/Foo Bar~D45EAG74.foo/"
myCount=$(find ~/Documents/Blah/"Foo Bar~D45EAG74.foo"/ -type f -name '*.backup' | wc -l)
if [ "$myCount" -ge 2 ]; then
    cd "$myDir"
    ls *.backup | head -n -1 | xargs rm --
    cd --
fi
I'm hoping there's a cleaner/more efficient way to do this while fixing it so spaces aren't a problem anymore. You can also see that one line has $HOME and another has a ~ in its path. This triggers my OCD, but I'm not sure how to fix it. Can anyone help me find some solutions?
https://redd.it/11ptomy
@r_bash
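One space-safe approach, sketched under the assumption of GNU find and coreutils (the demo below uses a temporary directory rather than the poster's real path): emit each file's mtime and name as NUL-delimited records, sort oldest-first, drop the last (newest) record, and delete the rest.

```shell
# Assumed setup: a scratch directory with three *.backup files whose
# names contain spaces, created with distinct modification times.
dir=$(mktemp -d)
touch -d '2023-01-01' "$dir/old one.backup"
touch -d '2023-02-01' "$dir/middle two.backup"
touch -d '2023-03-01' "$dir/newest three.backup"

# %T@ = mtime as epoch seconds. Records are NUL-terminated, so spaces
# and even newlines in filenames are safe; `head -z -n -1` keeps all but
# the newest record, `cut -z -f2-` strips the timestamp field, and
# `xargs -0 -r` skips rm entirely when nothing is left to delete.
find "$dir" -maxdepth 1 -type f -name '*.backup' -printf '%T@\t%p\0' \
  | sort -z -n \
  | head -z -n -1 \
  | cut -z -f2- \
  | xargs -0 -r rm --

ls "$dir"    # only "newest three.backup" remains
```

Sorting on mtime also avoids relying on the timestamp embedded in the filename, and using `find` end to end removes the mixed `$HOME`/`~` paths and the fragile `ls | xargs` pipeline.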