TIL there are mining operations as deep as 3.9 km underground. Also, elevators going 58 km/h!
https://en.wikipedia.org/wiki/TauTona_Mine
Wikipedia
TauTona Mine
mine in Gauteng, South Africa
not a tech link this time... or is it?
https://inhabitat.com/how-one-family-thrives-in-the-arctic-with-a-cob-house-inside-a-solar-geodesic-dome/
macOS <=10.14.5 0-day Gatekeeper bypass
TLDR: you can create a ZIP archive with "CoolPicture.JPG.аpp" inside; it will look like a JPG but launch without warnings. PoC available too (the 90-day disclosure timeline has expired)
https://www.fcvl.net/vulnerabilities/macosx-gatekeeper-bypass
(yes, it's not about the paper, it's about the noscript)
https://www.ncbi.nlm.nih.gov/pubmed/31181385
https://www.ncbi.nlm.nih.gov/pubmed/31181385
PubMed
Fantastic yeasts and where to find them: the hidden diversity of dimorphic fungal pathogens - PubMed
Dimorphic fungal pathogens are a significant cause of human disease worldwide. Notably, the dimorphic fungal pathogens within the order Onygenales are considered primary pathogens, causing disease in healthy hosts. Current changes in taxonomy are underway…
Universal Scalability Law: a really nice illustration of how adding more resources (cores, machines, people, etc.) to a system eventually makes it slower.
https://www.michaelnygard.com/blog/2018/01/coherence-penalty-for-humans/
Michaelnygard
Coherence Penalty for Humans - Wide Awake Developers
This is a brief aside from my ongoing series about avoiding entity services. An interesting dinner conversation led to thoughts that I needed to write down. Amdahl's Law In 1967, Gene Amdahl presented a case against multiprocessing computers. He argued that…
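For fun, here's the USL curve as a tiny sketch (Gunther's formula; the contention/coherence coefficients below are made up for illustration) showing throughput peaking and then actually declining as workers are added:

```python
# Universal Scalability Law: X(N) = lam*N / (1 + sigma*(N-1) + kappa*N*(N-1))
# lam:   ideal per-worker rate
# sigma: contention (serialized fraction, Amdahl's term)
# kappa: coherence penalty (pairwise crosstalk) -- the term that makes it *decline*
def usl_throughput(n, lam=1.0, sigma=0.05, kappa=0.002):
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

rates = [usl_throughput(n) for n in range(1, 101)]
peak_n = max(range(1, 101), key=usl_throughput)
# Throughput peaks around N ~ sqrt((1 - sigma) / kappa), roughly 22 here;
# a 100-worker system is slower than the peak.
```

With kappa = 0 this degenerates to plain Amdahl's Law, which only flattens out; it's the coherence term that turns the curve downward.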
https://tabnine.com/blog/deep -> junior- and mid-level programmers' salaries going down in 3..2..1..
Forwarded from χаотичні нотатки
either you make things happen, or things happen to you https://oleksandr.works/2019/07/25/news-and-ideas/
oleksandr.works
News and Ideas | Understand the World
Got into a rather nasty car accident. Was waiting in the traffic jam, ended up with a broken shoulder and a totaled car. Right now…
We introduce the heat method for solving the single- or multiple-source shortest path problem on both flat and curved domains. (...) in near-linear time (...) in any dimension, and on any domain that admits a gradient and inner product—including regular grids, triangle meshes, and point clouds. (ACM'17)
https://www.cs.cmu.edu/~kmcrane/Projects/HeatMethod/
TLDR: A good chunk of dependent type benefits, w/o dependent types. Need to check which other languages play nicely with this approach
http://okmij.org/ftp/Computation/lightweight-static-guarantees.html
okmij.org
Lightweight Static Guarantees
The programming style for static assurances, using types to carry capabilities, which are mere references to specifications rather than actual specifications
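The style translates to most languages with any static checking at all. A minimal sketch in Python (assuming a checker like mypy is run; names here are mine, not from the article): the NewType tag is the capability, and the smart constructor is the only sanctioned way to obtain it, so the invariant is checked once at the boundary and merely *referenced* everywhere else.

```python
from typing import NewType, Optional

# "SortedList" is a mere reference to the spec "this list is sorted";
# no proof lives in the type, only the capability to assume it.
SortedList = NewType("SortedList", list)

def as_sorted(xs: list) -> Optional[SortedList]:
    """Smart constructor: the only place the sortedness check actually runs."""
    if all(a <= b for a, b in zip(xs, xs[1:])):
        return SortedList(xs)
    return None

def binary_search(xs: SortedList, target: int) -> bool:
    # Demands the capability, so it may safely assume sortedness.
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        elif xs[mid] > target:
            hi = mid
        else:
            return True
    return False

checked = as_sorted([1, 3, 5, 8])
if checked is not None:
    found = binary_search(checked, 5)  # checker-approved: capability present
```

Passing a raw list to binary_search is a static type error, even though at runtime NewType is the identity -- zero-cost, like the article's approach.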
that's insane. and awesome. DIY semiconductor lithography
> 1975 tech level; still!
http://sam.zeloof.xyz/category/semiconductor/
hey look, you can train BERT-Large in under an hour*
* using about $40M worth of hardware. still impressive, though
https://devblogs.nvidia.com/training-bert-with-gpus/
NVIDIA Technical Blog
NVIDIA Clocks World’s Fastest BERT Training Time and Largest Transformer Based Model, Paving Path For Advanced Conversational AI
NVIDIA DGX SuperPOD trains BERT-Large in just 47 minutes, and trains GPT-2 8B, the largest Transformer Network Ever with 8.3Bn parameters Conversational AI is an essential building block of human…