2025-08-27, 23:59 UTC
495 days after halving
Current: $111,252.31
2012 scaled: $2,319,837.00
2016 scaled: $766,168.88
2020 scaled: $350,430.14
🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
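The "scaled" figures appear to take the gain each earlier cycle had reached at the same 495-day offset and apply it to the current cycle's halving-day price. A minimal sketch of that arithmetic, assuming that methodology; the halving-day price and the per-cycle multiples below are illustrative placeholders, not the tracker's actual inputs:

```python
# Hedged sketch of how the "2012/2016/2020 scaled" numbers above could be
# produced: multiply the current cycle's halving-day price by the multiple
# the earlier cycle had reached 495 days after its own halving.
# All constants are illustrative assumptions, not the tracker's real data.

HALVING_DAY_PRICE_2024 = 63_800.0   # approx. BTC price at the April 2024 halving (assumption)

# Price multiples each prior cycle showed at day 495 (placeholder values).
MULTIPLE_AT_DAY_495 = {
    "2012": 36.4,
    "2016": 12.0,
    "2020": 5.5,
}

for cycle, multiple in MULTIPLE_AT_DAY_495.items():
    scaled = HALVING_DAY_PRICE_2024 * multiple
    print(f"{cycle} scaled: ${scaled:,.2f}")
```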
🤯1👀1
Crypto guys after spending $50k into the dip:
i think i still have a month’s use left in this tube
🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
😁9😱1
UPDATE: Solana's SIMD-0326 Alpenglow voting process has started.
The proposal aims to cut block finality time from roughly 12.8 seconds to about 150 ms by introducing a new consensus protocol.
🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
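For context on what the consensus change involves: the Alpenglow whitepaper describes a dual-threshold voting rule (fast one-round finalization at roughly 80% of stake, otherwise a two-round path at roughly 60%). A minimal sketch of that rule, with the structure and exact behavior simplified for illustration rather than taken from the actual protocol code:

```python
# Minimal sketch of the dual-threshold finalization rule attributed to
# Alpenglow's Votor component in the public whitepaper: finalize in one
# round if ~80% of stake votes for the block, otherwise fall back to a
# second round needing ~60%. Thresholds and structure are simplified
# assumptions for illustration, not the actual protocol implementation.

FAST_THRESHOLD = 0.80   # single-round ("fast") finalization
SLOW_THRESHOLD = 0.60   # two-round fallback path

def finalized(round_one_stake: float, round_two_stake: float = 0.0) -> bool:
    """Return True if the block finalizes under the simplified rule."""
    if round_one_stake >= FAST_THRESHOLD:
        return True                           # fast path: one voting round
    return round_two_stake >= SLOW_THRESHOLD  # slow path: second round

print(finalized(0.83))        # True  (fast path)
print(finalized(0.65, 0.72))  # True  (slow path)
print(finalized(0.55, 0.58))  # False (not finalized yet)
```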
🔥4👀2
DoomPosting
Makeup or no makeup? 🄳🄾🄾🄼🄿🄾🅂🅃🄸🄽🄶
Makeup or no makeup, for the image above?
Anonymous Quiz
26% · I’m a MAN and MAKEUP
23% · I’m a MAN and NO MAKEUP
9% · I’m a WOMAN and MAKEUP
12% · I’m a WOMAN and NO MAKEUP
30% · Show results
😁4
🤬1😐1
Lie
LLMs are not predicting probability like classic language models
They’re estimating value like reinforcement learning models
Today the majority of their training is RL-style discounted value estimation, not really about probability at all
Why does it matter?
The RL formulation is “AI-complete”, more than enough to achieve any arbitrary level of intelligence
So why are LLMs so good if they’re just predicting probability of next word?
The answer is that, no, it's just a lie that they're predicting the probability of the next word, and anyone with a brain has known this for decades
False presupposition lies
🄳🄾🄾🄼🄿🄾🅂🅃🄸🄽🄶
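To make the contrast concrete, here is a toy sketch of the two training signals being compared: plain next-token cross-entropy versus an RL-style objective that weights sampled tokens by a discounted return. This is an illustration of the general idea in PyTorch, not any lab's actual training pipeline, and the reward values are made up:

```python
import torch
import torch.nn.functional as F

vocab, seq = 50_000, 8
logits = torch.randn(seq, vocab)           # model outputs for one toy sequence
targets = torch.randint(0, vocab, (seq,))  # tokens actually observed / sampled

# 1) Classic LM objective: maximize the probability of the next token.
lm_loss = F.cross_entropy(logits, targets)

# 2) RL-style objective: weight the log-probabilities of the produced tokens
#    by a discounted return, pushing the model toward high-value continuations
#    rather than merely probable ones (REINFORCE-style policy gradient).
log_probs = F.log_softmax(logits, dim=-1)[torch.arange(seq), targets]
rewards = torch.tensor([0., 0., 0., 0., 0., 0., 0., 1.])  # made-up terminal reward
gamma = 0.99
returns = torch.zeros(seq)
G = 0.0
for t in reversed(range(seq)):
    G = rewards[t] + gamma * G             # discounted return G_t
    returns[t] = G
rl_loss = -(log_probs * returns).mean()

print(float(lm_loss), float(rl_loss))
```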
💯2👀2