Pepemedia 🇺🇦🏳️‍🌈
Dicknoscription.
Feedbag - @pepemediafeedbagloop_bot
this is what working with fucking centos looks like. the worst piece of shit that ever existed. after windows, of course. many sufferings upon its creators.
NNN has just ended. Time to watch the C++20 standard draft!
Pulling this out of the comments:

This sparked an interesting memory for me. I was once working with a customer who was producing on-board software for a missile. In my analysis of the code, I pointed out that they had a number of problems with storage leaks. Imagine my surprise when the customer's chief software engineer said, "Of course it leaks." He went on to point out that they had calculated the amount of memory the application would leak in the total possible flight time for the missile and then doubled that number. They added this much additional memory to the hardware to "support" the leaks. Since the missile will explode when it hits its target or at the end of its flight, the ultimate in garbage collection is performed without programmer intervention.
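The sizing logic behind that decision is just worst-case leak rate × maximum flight time, doubled. A minimal sketch with made-up numbers (the leak rate and flight time below are hypothetical; only the doubling comes from the story):

```cpp
// Back-of-envelope sizing for the "let it leak" approach described above.
// All concrete numbers are hypothetical placeholders, not from the original story.
#include <cstdio>

int main() {
    const double leak_rate_bytes_per_sec = 512.0;        // assumed worst-case leak rate
    const double max_flight_time_sec     = 30.0 * 60.0;  // assumed 30-minute maximum flight
    const double safety_factor           = 2.0;          // the "doubled that number" part

    const double extra_ram_bytes =
        leak_rate_bytes_per_sec * max_flight_time_sec * safety_factor;

    // With these made-up numbers: 512 B/s * 1800 s * 2 = 1,843,200 B = 1800 KiB.
    std::printf("extra RAM to budget for leaks: %.0f KiB\n", extra_ram_bytes / 1024.0);
}
```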
Fuck consumerism
Python is a vile thing and should be destroyed by all means
"With GPT-3 costing around $4.6 million in compute, than would put a price of $8.6 billion for the compute to train "GPT-4".
If making bigger models was so easy with parameter partitioning from a memory point of view then this seems like the hardest challenge, but you do need to solve the memory issue to actually get it to load at all.
However, if you're lucky you can get 3-6x compute increase from Nvidia A100s over V100s, https://developer.nvidia.com/blog/nvidia-ampere-architecture-in-depth/
But even a 6x compute gain would still put the cost at $1.4 billion.
Nvidia only reported $1.15 billion in revenue from "Data Center" in 2020 Q1, so just to train "GPT-4" you would pretty much need the entire world's supply of graphics cards for one quarter (3 months), at least on that order of magnitude."
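The arithmetic in the quote, spelled out as a sketch (the $8.6 billion figure for a hypothetical 20-trillion-parameter "GPT-4" is taken from the Reddit post, not derived here):

```cpp
// Reproducing the back-of-envelope numbers from the quote above.
#include <cstdio>

int main() {
    const double gpt4_compute_cost_usd   = 8.6e9;   // estimate quoted above
    const double a100_speedup_over_v100  = 6.0;     // optimistic end of the 3-6x range
    const double nvidia_dc_revenue_q1_20 = 1.15e9;  // Nvidia "Data Center" revenue, 2020 Q1

    const double best_case_cost = gpt4_compute_cost_usd / a100_speedup_over_v100;
    std::printf("best-case training cost:           $%.2f billion\n", best_case_cost / 1e9);       // ~ $1.43B
    std::printf("one quarter of Nvidia DC revenue:  $%.2f billion\n", nvidia_dc_revenue_q1_20 / 1e9);
    // Even with the full 6x A100 gain, the training cost stays on the same order
    // as an entire quarter of Nvidia's reported Data Center revenue.
}
```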

Looks like there will be no significantly larger model in GPT-4 for us :C
source - https://www.reddit.com/r/MachineLearning/comments/i49jf8/d_biggest_roadblock_in_making_gpt4_a_20_trillion/