Forget Woke: the Left’s New Formula – and How to Fight It - Nikos Sotirakopulos YouTube
Love this guy!
He's followed a similar path to myself:
Followed the 'narrative' (not the mainstream, the one that we're fed).
Realised that narrative is full of holes.
Had brain broken.
Had to claw himself back with that new information, but no one will listen...
He's so on point with this...
https://youtube.com/watch?v=QrT46GkfFLI&si=ffWxlMgMstSo6zFT
YouTube
Forget Woke: the Left’s New Formula – and How to Fight It
What comes next after Woke for the Left? Why it's more dangerous, and the only chance we have to fight it.
00:00 Woke is dead, but populism thrives
02:16 Why post-Woke
03:40 Anti-capitalist populism
05:40 The trial balloon of Mamdani
06:41 Quietening down…
Sam Fisher (Data Drops) pinned «Forget Woke: the Left’s New Formula – and How to Fight It - Nikos Sotirakopulos YouTube Love this guy! He's followed a similar path to myself: Followed the 'narrative' (not the mainstream, the one that we're fed). Realised that narrative is full of holes.…»
A little game of 'Guess Who?'
Ok for a bit of fun and to give you an idea of just how complicated things get when researching these Bloodlines (and to give you all a bit of a sneaky headstart over everyone, before my next video on Monday night), try and figure out who this guy is pictured with all these foreign dignitaries?
Just his name and that of his wife should raise an eyebrow or two at least (maybe even three 😂).
Forwarded from Jade Helm 15 Rehash (ned)
AI responses may confabulate.
Besides temperature, several factors increase the tendency of a large language model (LLM) to confabulate (produce false information and present it confidently as fact):
Data-Related Factors
Training Data Limitations: If the model is asked about topics outside its training data (e.g., very recent events or niche facts) or if the data contains errors, outdated information, or biases, it is likely to generate inaccuracies.
Data Quality/Bias: Training on low-quality, unverified, or biased internet data can cause the model to reproduce that questionable material as fact.
Prompt and Context-Related Factors
Vague or Complex Prompts: Prompts that are vague, ambiguous, or overly complex increase the model's need to "guess," leading to confabulation.
Lack of Context: Without sufficient relevant context provided in the prompt (e.g., using Retrieval-Augmented Generation or RAG to ground the response in verified data), the model must rely solely on its internal parameters, which can lead to made-up details.
Overly Long Context Windows: Simply providing a massive amount of context can "dilute" attention and introduce noise or irrelevant details, making it harder for the model to focus on the relevant information and increasing the chance of error.
Model Design and Behavior
Inherent Probabilistic Nature: LLMs predict the most statistically likely next word, not a fact-checked truth. This inherent mechanism can lead to plausible but incorrect answers, as the model lacks a built-in fact-checking system.
Optimization for Guessing: Models are often trained and optimized to provide a complete answer rather than admitting uncertainty. This rewards "guessing" when unsure, which is a core mechanism of confabulation.
Lack of Self-Awareness/Fact-Checking: The model doesn't self-assess its answers against external truth sources or express a "confidence score" for every statement it makes, leading to false information being presented with high confidence.
Task Demands: More demanding tasks, such as free-form question answering or summarization, generally carry a higher risk of hallucination than simpler, more constrained tasks.
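The temperature point the answer above opens with can be made concrete with a toy sketch (not any specific model's sampler): dividing logits by a temperature before the softmax flattens or sharpens the distribution, so higher temperatures hand more probability mass to weak, potentially confabulated tokens.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to probabilities, scaled by temperature.

    Higher temperature flattens the distribution, giving low-scoring
    (often less plausible) tokens a larger share of the mass.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits: one strong candidate, two weak ones.
logits = [4.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)

# At low temperature the top token dominates almost completely;
# at high temperature the weak tokens gain a much larger share.
print(f"T=0.2 top-token prob: {cold[0]:.3f}")
print(f"T=2.0 top-token prob: {hot[0]:.3f}")
```

This is only the sampling side; the data, prompt, and training factors listed above act independently of it.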
Forwarded from ned
Would you believe
The Indonesian word "pedalaman" means basically the same thing as the American slang "bumfuck," as in "uncultivated outback", but the etymology is entirely different.
AI responses may confabulate.
The Indonesian word "pedalaman" is a noun that translates to several English terms depending on the context, primarily "interior", "inland", "hinterland", or "outback".
Definition and Translation
Interior: The part of a country or region away from the coast or borders.
Inland: Land that is not beside the sea.
Hinterland: The uncultivated or less developed areas of a country.
Outback / The wilds: Remote, sparsely populated, or uncultivated country areas.
Etymology
The word "pedalaman" is derived through a combination of a root word and a circumfix in Indonesian/Malay:
Root word: "Dalam", which means "deep" or "inside".
Circumfix: The combination of the prefix "pe-" and the suffix "-an" (peR-...-an), which is used to form nouns, often indicating a place, area, or condition related to the root word.
Thus, "pedalaman" literally denotes a place that is "inside" or "deep" within a larger geographical area, leading to its meaning of a remote or inland region.
Usage Examples
"Pedalaman" is used to refer to geographically isolated areas, often those that are underdeveloped or rural.
Suku pedalaman (inland tribes or isolated tribes) refers to communities living in remote interior regions.
Daerah pedalaman (inland area or countryside) is a common phrase to describe rural, non-coastal regions.
"Dia ingin menjelajahi pedalaman Kalimantan" translates to "He wanted to explore the interior of Borneo".
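The circumfix pattern described above can be sketched as a toy function. This is a deliberate simplification: real Indonesian/Malay morphology applies sound changes (the peR- allomorphs) that it ignores, and it only illustrates the place/area noun pattern.

```python
def circumfix(root, prefix="pe", suffix="an"):
    """Naive Indonesian-style circumfix: wrap a root word in pe-...-an.

    Ignores the sound changes real Indonesian/Malay morphology applies;
    only illustrates the "place or area related to the root" noun pattern.
    """
    return f"{prefix}{root}{suffix}"

print(circumfix("dalam"))   # "dalam" (deep/inside)  -> "pedalaman" (interior)
print(circumfix("gunung"))  # "gunung" (mountain)    -> "pegunungan" (mountain range)
```

Both outputs happen to be real words, which is why the naive concatenation works here; many other roots would need the allomorph rules this sketch leaves out.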
White noise generators counteract hypnotic effects by disrupting the intense focus and concentration required to enter and maintain a hypnotic trance.
Mechanisms of Disruption
Auditory Masking: White noise, which contains all audible frequencies at equal intensity, creates a constant and predictable sonic backdrop. This "blanket of sound" effectively masks sudden, random environmental noises (like a phone ringing or a door closing) that might otherwise startle the brain.
Preventing Sensory Gating: Hypnosis requires the subject to filter out irrelevant external stimuli and focus intensely on the hypnotist's voice and suggestions. White noise interferes with this process of sensory gating, making it difficult to maintain the required state of single-pointed attention.
Maintaining Alertness: The brain is naturally "wired to notice changes in sound" as a survival mechanism. By creating a consistent stimulus that the brain does not have to constantly assess for novelty or threat, white noise encourages a state of relaxed alertness or focus, rather than the deeply relaxed, slightly dissociated state associated with deep hypnosis.
Cognitive Load: White noise can enhance focus during demanding cognitive tasks for some people. The ongoing presence of this broadband sound makes it harder for the mind to drift into the passive, receptive state necessary for a successful hypnotic induction.
For a hypnotist to work effectively, a quiet, controlled environment is ideal. While a highly skilled hypnotist might try to incorporate minor, consistent background noises into their patter (e.g., suggesting the hum of an air conditioner is a relaxing sound), genuinely loud, random, or pervasive white noise acts as a significant barrier to achieving a hypnotic trance state.
Forwarded from Can we please just cut to the FAKE ALIEN invasion part
YouTube
NASA Releases HiRISE Images of 3I/ATLAS – Avi Loeb Confirms: Now It's for Real
NASA is finally releasing the HiRISE images of 3I/ATLAS after 43 days of bureaucratic delay. The video reveals why these photos are decisive for understanding the anti-tail, the anomalous jets, possible fragmentation, and this object's unusual behavior…
Your TAXES go to Al Qaeda in Somalia?! MASSIVE scam exposed - Glenn Beck
What have I been saying for years now???
FFS!
Wait till they find out that this is just the very itty-bitty tip of a gargantuan iceberg!
https://youtube.com/watch?v=QRscFQ_njVc&si=vAG752HBrRs9UHzF
YouTube
Your TAXES go to Al Qaeda in Somalia?! MASSIVE scam exposed
New reporting from Christopher Rufo and Ryan Thorpe provides evidence that Minnesota taxpayer dollars are being funneled by Somali immigrants to Al Shabaab, the East African branches of Al Qaeda. Glenn Beck reviews how these scams have worked and what we…
Sam Fisher (Data Drops) pinned «Your TAXES go to Al Qaeda in Somalia?! MASSIVE scam exposed - Glenn Beck What have I been saying for years now??? FFS! Wait till they find out that this is just the very iddy-biddy tip of a gargantuan iceberg! https://youtube.com/watch?v=QRscFQ_njVc&s…»