journalctl -u micro – Telegram
journalctl -u micro
93 subscribers
2.17K photos
210 videos
287 files
1.44K links
Experiences and tips from a tech-unenthusiast developer

creation — 2021-04-29
owner — @Microeinstein

networks
@sigma_hub Σ
@ageiroumena
A small bash function to fetch the version of every package in IgnorePkg
locked_pkgs() (
    export LC_ALL=C
    # keep only the IgnorePkg line of pacman.conf, stripping the "IgnorePkg = " key
    local f1='/^IgnorePkg/b ok; d; :ok; s/^[^=]*= ?//'
    local p=( $(\sed -E "$f1" /etc/pacman.conf) )   # word-split into package names
    # keep only the Name/Version fields of pacman -Qi output, minus the field key
    local f2='/^Name|^Version/b ok; d; :ok; s/^[^:]*: //'
    local tab=$'\e[33;1mName,Version\e[0m'          # bold yellow column headers
    echo
    \pacman -Qi --color=never "${p[@]}" \
        | \sed -E "$f2" \
        | \paste -d' ' - - \
        | \column -t -N "$tab"
    echo
)
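The IgnorePkg extraction can be sanity-checked in isolation. A minimal sketch with a fake pacman.conf fed on stdin (this uses a simpler one-line filter with the same effect as f1 above):

```shell
# Print only the IgnorePkg line of a pacman.conf, minus the "IgnorePkg = " key.
printf '%s\n' '#comment' 'IgnorePkg = mpv pipewire' 'Color' \
    | sed -nE 's/^IgnorePkg *= *//p'
# -> mpv pipewire
```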

In my case (main PC):
Name                 Version
outguess             0.2-2
matlab               9.9.0.1467703-5
typora               0.11.18-1
wpa_supplicant       2:2.10-3
freefilesync-bin     11.20-1
gst-plugin-pipewire  1:0.3.51-1
pipewire             1:0.3.51-1
pipewire-alsa        1:0.3.51-1
pipewire-docs        1:0.3.51-1
pipewire-jack        1:0.3.51-1
pipewire-pulse       1:0.3.51-1
wireplumber          0.4.11-2
mpv                  1:0.34.1-5
libplacebo           4.208.0-1
Is it too much to ask for an Arch installation that doesn't break every ~3 updates?

Now I can't change the brightness anymore (stuck at maximum)

I think it's related to the kms mode [video driver, acpi, ...]; I'll check the Xorg logs
But first...
What the hell is mesa-amber
Tried removing the kms hook from mkinitcpio.conf, no effect

Now I'll try to
• install mesa-amber (it conflicts with mesa)
• set
__GLX_VENDOR_LIBRARY_NAME=amber
• regenerate the initcpio
Nada
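For reference, the kms hook removed earlier is an entry in the HOOKS array of /etc/mkinitcpio.conf. A sketch of what that edit looks like (the exact hook list varies per system; this one is illustrative, not the author's):

```shell
# /etc/mkinitcpio.conf (illustrative)
# before: HOOKS=(base udev autodetect modconf kms keyboard block filesystems fsck)
HOOKS=(base udev autodetect modconf keyboard block filesystems fsck)
# then regenerate all initramfs images:
# mkinitcpio -P
```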

Apparently everything in /sys/class/backlight has disappeared,
before there was acpi_video0
and intel_backlight
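A quick way to check is to list what the kernel currently exposes there (a generic sketch, not taken from the post):

```shell
# Show each backlight interface with its current/max brightness,
# or report that none exist (the broken situation described above).
for b in /sys/class/backlight/*/; do
    if [ -d "$b" ]; then
        printf '%s: %s/%s\n' "$(basename "$b")" \
            "$(cat "$b/brightness")" "$(cat "$b/max_brightness")"
    else
        echo 'no backlight interfaces found'
    fi
done
```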
That gave me a hunch

I downgraded the kernel and it worked 🥲

6.0.11 <- 6.1.6
Not bad
I'd completely forgotten that AUR packages live in git (and are therefore downgradeable)
PKGBUILD
3.4 KB
linux-clear-bin 6.0.12
PKGBUILD
2.6 KB
linux-clear-headers-bin 6.0.12

The only changes I made: removing the -bin suffix

You also need to remove the file
/boot/vmlinuz-linux-clear.cmdline
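The downgrade trick works because every AUR package is just a git repo (clone https://aur.archlinux.org/<pkg>.git, find the old commit with git log, check it out, makepkg). The mechanism in miniature, with a throwaway local repo standing in for the AUR remote:

```shell
# Miniature of the AUR downgrade: an older PKGBUILD is just an older commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
printf 'pkgver=6.0.12\n' > PKGBUILD
git add PKGBUILD && git commit -qm 'pkgver 6.0.12'
printf 'pkgver=6.1.6\n' > PKGBUILD
git add PKGBUILD && git commit -qm 'pkgver 6.1.6'
# "downgrade": restore the PKGBUILD from the older commit
git checkout -q HEAD~1 -- PKGBUILD
cat PKGBUILD
# -> pkgver=6.0.12
```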

For more info on kernel parameters:
https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt
I hate Telegram
[video]
Replying to Dave from the comments

[I just remembered this video exists]
Poggers
LaTeX pro tip

\newcommand{\cmd}[2][opt]{#1ional #2}
\cmd{args}
\cmd[except]{args}

\def\pat#1_#2{\text{pattern}\ #1_{#2+1}}
$\pat n_3$
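The first macro defines a two-argument command whose first argument is optional with default opt, so \cmd{args} gives "optional args" and \cmd[except]{args} gives "exceptional args". What makes the second macro work is TeX's delimited parameters: in \def\pat#1_#2, #1 absorbs every token up to the first _ and #2 grabs the single token after it. A sketch of the expansion:

```latex
% $\pat n_3$  reads  #1 = n,  #2 = 3  and expands to
$\text{pattern}\ n_{3+1}$   % i.e. the subscript is typeset as 3+1
```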
https://time.com/6247678/openai-chatgpt-kenya-workers/
[src]

Some excerpts

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery,) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document. OpenAI paid Sama a total of $787.50 for collecting the images, the document shows.

But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”