A small bash function to retrieve the version of every package listed in IgnorePkg:
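It parses the IgnorePkg line of /etc/pacman.conf, which (using names from the output below) looks roughly like this:

  # /etc/pacman.conf
  [options]
  IgnorePkg = outguess matlab typora wpa_supplicant freefilesync-bin ...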
locked_pkgs() (
  export LC_ALL=C  # force the C locale so pacman's field names are stable
  # keep only the IgnorePkg line of pacman.conf and strip everything up to "= "
  local f1='/^IgnorePkg/!d; s/^[^=]*= *//'
  local p=( $(\sed -E "$f1" /etc/pacman.conf) )
  # keep only the Name/Version lines of pacman -Qi and strip the field label
  local f2='/^(Name|Version)/!d; s/^[^:]*: *//'
  local tab=$'\e[33;1mName,Version\e[0m'  # colored header for column(1)
  echo
  \pacman -Qi --color=never "${p[@]}" \
    | \sed -E "$f2" \
    | \paste -d' ' - - \
    | \column -t -N "$tab"
  echo
)

In my case (main PC):
Name Version
outguess 0.2-2
matlab 9.9.0.1467703-5
typora 0.11.18-1
wpa_supplicant 2:2.10-3
freefilesync-bin 11.20-1
gst-plugin-pipewire 1:0.3.51-1
pipewire 1:0.3.51-1
pipewire-alsa 1:0.3.51-1
pipewire-docs 1:0.3.51-1
pipewire-jack 1:0.3.51-1
pipewire-pulse 1:0.3.51-1
wireplumber 0.4.11-2
mpv 1:0.34.1-5
libplacebo 4.208.0-1

Is it too much to ask for an Arch installation that doesn't break every ~3 updates?
Now I can no longer change the brightness (it's stuck at maximum).
I think it's related to the kms mode [video driver, acpi, ...]; I'll check the Xorg logs.

Tried removing the kms hook from mkinitcpio.conf: no effect.

Now I'll try to:
• install mesa-amber (it conflicts with mesa)
• set __GLX_VENDOR_LIBRARY_NAME=amber
• regenerate the initcpio
(rough commands sketched below)
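A rough sketch of those three steps (commands are indicative, not a recipe):

  sudo pacman -S mesa-amber                # pacman prompts to replace the conflicting mesa
  export __GLX_VENDOR_LIBRARY_NAME=amber   # tell libglvnd's GLX to load the amber vendor library
  sudo mkinitcpio -P                       # regenerate the initcpio for all presets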
Nada.

Apparently everything under /sys/class/backlight is gone; before, there were acpi_video0 and intel_backlight.
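For reference, the standard sysfs backlight interface (nothing specific to this machine) can be inspected and driven like this:

  ls /sys/class/backlight/    # normally lists acpi_video0, intel_backlight, ...
  cat /sys/class/backlight/intel_backlight/max_brightness
  # write a value (<= max_brightness) as root to change the brightness
  echo 400 | sudo tee /sys/class/backlight/intel_backlight/brightness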
This gave me a hunch.
I downgraded the kernel and it worked 🥲
6.0.11 <- 6.1.6

[attachment: PKGBUILD, 3.4 KB] linux-clear-bin 6.0.12
[attachment: PKGBUILD, 2.6 KB] linux-clear-headers-bin 6.0.12

The only changes I made: dropping the -bin suffix.
You also need to remove the file /boot/vmlinuz-linux-clear.cmdline.
For more info on kernel parameters:
https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt
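For the record, the workflow with those two PKGBUILDs is roughly (assuming makepkg and the attachments above):

  # in each PKGBUILD directory, after dropping the -bin suffix from pkgname:
  makepkg -si
  # then remove the stale parameters file, as noted above:
  sudo rm /boot/vmlinuz-linux-clear.cmdline

Among the parameters documented at that link, acpi_backlight= (vendor, video, native, none) is the one most relevant to backlight problems like this one.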
Replying to Dave from the comments
[I just now remembered this video exists]
"Science isn't about why, it's about why not!"
https://www.youtube.com/watch?v=hqMh47lYHlc
YouTube
Youtube video streaming with VT-340 compatible DEC SIXEL sequence (yaft on framebuffer, over SSH)
You can watch youtube on some terminals supporting SIXEL graphics.
yaft: https://github.com/uobikiemukot/yaft
FFmpeg-SIXEL: https://github.com/saitoha/FFmpeg-SIXEL
libsixel: https://github.com/saitoha/libsixel
This DEMO uses 16 color DECSIXEL sequence(ht…
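To try SIXEL yourself, the quickest route is img2sixel, which ships with libsixel; the video case relies on the patched ffmpeg linked above (flags indicative):

  img2sixel picture.png   # renders a still image inline in a SIXEL-capable terminal
  # the FFmpeg-SIXEL fork adds a sixel output device, used roughly as:
  # ffmpeg -i input.mp4 -f sixel -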
"Science isn't about why, it's about why not!" ×2
https://www.youtube.com/watch?v=X6M5tgNjEuQ
YouTube
QEMU on mlterm with SIXEL control sequence by SDL-SIXEL video driver (take 2)
QEMU on mlterm(http://mlterm.sourceforge.net/) with SIXEL graphics by using libsixel(https://github.com/saitoha/libsixel) and patched SDL(https://github.com/saitoha/SDL1.2-SIXEL).
https://time.com/6247678/openai-chatgpt-kenya-workers/
[src]
Some excerpts:
To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.
In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery,) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document. OpenAI paid Sama a total of $787.50 for collecting the images, the document shows.
But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”
TIME
Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
A TIME investigation reveals the difficult conditions faced by the workers who made ChatGPT possible