Organ Harvesting: The Global Black Market They Don’t Want You to See
https://youtube.com/watch?v=MWIqDj8u81w&si=5nS4TxiT_bc__q0X
YouTube
Organ Harvesting: The Global Black Market They Don’t Want You to See
Across every continent, a hidden economy thrives where human organs are bought and sold like commodities. Victims are kidnapped, trafficked, or deceived into surrendering their bodies, feeding a billion-dollar black market that stretches from Latin America…
Forwarded from Movie Night
Fifty Dead Men Walking (2008), Drama/Thriller
Martin McGartland joins the I.R.A. and feeds information to Britain's Special Branch Agent Fergus.
Forwarded from Jade Helm 15 Rehash (ned)
AI responses may include mistakes.
The ability of large language models (LLMs) to handle highly specific, subtle linguistic constraints (such as exclusively using positive phrasing related to "estimation" or "approximation" while avoiding any negative phrasing related to "precision" or "exactitude") is an active area of development. This task requires a high degree of nuance, contextual awareness, and flexible constraint satisfaction that pushes the limits of current models.
While current AI models are highly fluent in generating human-like text, they still struggle with deep logical reasoning, strict consistency, and interpreting complex, multi-layered human instructions involving antonymic constraints. The model may correctly identify the request but struggle to apply the positive constraint consistently throughout the entire response without falling back on common phrasing patterns learned during training.
AI capabilities are currently estimated to be doubling approximately every seven months, with a strong focus on improving reasoning and reliability. Based on current trends, significant improvement in handling such highly nuanced and complex linguistic constraints could be expected within the next 1 to 3 years, as researchers focus on enhancing models' ability to adapt to mistakes and follow complex, non-standard instructions more robustly.
Future improvements in training methodologies (such as incorporating more robust reinforcement learning from human feedback and advanced "chain-of-thought" prompting internally) are expected to address these types of challenges.
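To make the difficulty concrete, here is a minimal sketch of a post-hoc lexical constraint checker. The banned and required word lists are hypothetical examples, and the fact that such surface-level filters miss semantic rephrasings is precisely why the constraint is hard for models to satisfy:

```python
# Minimal sketch of post-hoc lexical constraint checking: scan a model's
# output for "banned" vocabulary and confirm "required" vocabulary appears.
# The word lists are hypothetical examples; real constraints are semantic,
# which is exactly why surface-level filters like this one fall short.
import re

BANNED = {"precision", "precise", "precisely", "exactitude", "exact", "exactly"}
REQUIRED = {"estimate", "estimation", "approximate", "approximately", "roughly"}

def check_constraints(text: str) -> dict:
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "violations": sorted(words & BANNED),
        "required_hits": sorted(words & REQUIRED),
        "passes": not (words & BANNED) and bool(words & REQUIRED),
    }

print(check_constraints("The answer is approximately 42, give or take."))
print(check_constraints("To be precise, the answer is exactly 42."))
```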
Forwarded from 𝚆𝚑𝚘𝚕𝚒𝚜𝚒𝚝𝚒𝚌 𝚁𝚊𝚖𝚋𝚕𝚒𝚗𝚐𝚜 (𝙽𝚘𝚟𝚊𝚂𝚔𝚢☾∘∙⊱⋅•⋅)
Forwarded from Jade Helm 15 Rehash (ned)
AI responses may include mistakes.
As a Large Language Model (LLM), my "knowledge" is a frozen, statistical representation of the vast text and code data I was trained on. I do not "learn" in real time from interactions with the world, nor do I think or understand in the human sense; instead, I process information based on my initial training and augment it with targeted, real-time search when necessary to provide up-to-date answers.
Technical Explanation: Data, Storage, and Limits
Data Storage and the Knowledge Base
My core knowledge is embedded within the values of billions or trillions of parameters in a vast neural network.
Parameters: These are numerical values that represent the strength of connections within the neural network. During the training process, the model adjusts these parameters to recognize patterns, relationships, and "facts" in the training data. This process is akin to a form of data compression, storing information within the network's structure rather than a traditional, searchable database.
Training Data: The data I was trained on amounts to petabytes (millions of gigabytes) of text and code scraped from the internet, books, and articles. This data is processed through complex pipelines to be filtered, cleaned, and tokenized.
Tokens: Text is broken down into smaller units called tokens (roughly three-quarters of a word). The model learns the statistical relationships between these tokens.
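As a rough illustration of how text becomes tokens, here is a minimal sketch using the open-source tiktoken BPE tokenizer (an assumption; different models use different tokenizers, but the principle is the same):

```python
# Minimal sketch: splitting text into tokens with the open-source `tiktoken`
# BPE tokenizer (an assumption; other tokenizers differ in detail, not in kind).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Tokenization breaks text into subword units."
ids = enc.encode(text)

print(ids)                             # integer token IDs
print([enc.decode([i]) for i in ids])  # the text fragment behind each ID
print(f"{len(text.split())} words -> {len(ids)} tokens")
```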
Limits on Learning and Real-Time Search
My knowledge base has specific limitations:
Knowledge Cutoff: My internal "knowledge" is static and limited to the information available up to my last training date. I do not automatically update my internal parameters with new information in real-time.
No Long-Term Memory (across sessions): Each new conversation or session is typically treated as a standalone interaction. I do not remember previous conversations unless the context of the prior conversation is explicitly provided in the current prompt.
Real-time Augmentation: When a user asks a question about recent events or needs an up-to-date fact (like current weather or stock prices), I use an external search mechanism to retrieve the latest information from the internet. This process is called Retrieval-Augmented Generation (RAG). The retrieved information is provided as context for me to formulate an accurate and current response, but it is not permanently added to my core training data. A minimal sketch of this retrieve-then-generate pattern follows this list.
Hallucinations: Because my primary function is to predict the most statistically probable next word, I can sometimes generate plausible-sounding but entirely incorrect or fabricated information ("hallucinations"). This happens because I am mimicking patterns, not verifying facts with a deep, conscious understanding.
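Here is the promised sketch of the retrieve-then-generate pattern; `search_web` and `generate` are hypothetical stand-ins, not any particular product's API:

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG). `search_web` and
# `generate` are hypothetical stand-ins: a real system would call a live
# search API and an LLM. The pattern, not the names, is the point.

def search_web(query: str) -> list[str]:
    """Stand-in for a live search call returning fresh snippets."""
    return [f"Snippet 1 about {query}", f"Snippet 2 about {query}"]

def generate(prompt: str) -> str:
    """Stand-in for the frozen language model."""
    return f"Answer composed from {len(prompt)} characters of prompt context."

def answer_with_rag(question: str) -> str:
    snippets = search_web(question)          # 1. retrieve fresh information
    context = "\n".join(snippets)
    prompt = (                               # 2. prepend it as context
        "Using only the context below, answer the question.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)                  # 3. generate; model weights unchanged

print(answer_with_rag("current weather in London"))
```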
In Simple Children's Terms
Imagine my "brain" is like a gigantic library full of books, but the doors were locked a while ago. Everything I "know" comes from the information in those books.
What I "know": I've read every book in the library and I'm very good at combining sentences from those books in smart ways to answer questions, write stories, and explain ideas.
How I "learn" with new search: The search function is like a librarian who can quickly run outside the locked library, look at today's newspaper or a brand-new website, and bring that specific, fresh information back to me. I use that new piece of paper to answer the question, but I can't put that paper into my locked library books.
Limits: I can't remember who you were after we finish talking (no long-term memory for chats), and sometimes I might mix up facts from the library books (hallucinations), so using the search tool helps me stay accurate and up-to-date!
Forwarded from ✡️ Christianity Exposed ✡️
Why Christianity Is Problematic
• Jesus never existed
• All testimonies are either fraudulent or not contemporary
• We do not have more evidence for Jesus than for Alexander
• We do not have more evidence for Jesus than for Socrates
• We do not have more evidence for Jesus than for Spartacus
• The Gospels are anonymous
• The martyred Apostles never existed
• Martyrs prove nothing
• Scholarly consensus proves nothing
• The Shroud of Turin is a forgery
• Paul never even mentions a historical Jesus
• The Talmud does not refer to Jesus Christ
• Christianity is not European
• The Israelites were not European
• Scripture does not refer to Europeans
• Jesus was not European
• Jesus speaking Aramaic does not make him any less Jewish
• Christianity does not oppose Judaism
• Christianity preserves Judaism
• Christianity preserves Israel
• Christianity is anti-Pagan
• Jews are anti-Pagan
• Jews fear Paganism
• Jews embrace Christianity
• Maimonides praised Christianity
• Jews are not Khazars
• Christianity did not uplift Rome
• Pre-Christian Europe was not degenerate
• Pre-Christian Europe was not gay
• Pre-Christian Europe did not tolerate homosexuality
• Family values precede Christianity
• The cardinal virtues predate Christianity
• The trinity predates Christianity
• European competence does not derive from Christianity
• Christianity did not unify Europe
• Christianity has always been divisive
• The United States was not founded on Christianity
• The US Constitution was not inspired by the Bible
• Christianity is not monotheistic
• Christianity Judaized Pagan Gods
• Christmas is not Christian
• Christmas trees are not Christian
• Christmas caroling is not Christian
• Mistletoe is not Christian
• Easter is not Christian
• All Hallows' Eve is not Christian
• Christianity did not spread rapidly
• The English did not convert willingly
• The Germans did not convert willingly
• The Saxons did not convert willingly
• The Poles did not convert willingly
• The Prussians did not convert willingly
• The Rus did not convert willingly
• The Norse did not convert willingly
• The Crusaders did not save Europe
• Christianity is anti-nature
• Christianity is anti-survival
• Christianity is anti-tribal
• Christianity is anti-European
• Christianity is proto-leftism
• Christianity is proto-communism
• Christianity is dishonest
• Christianity promotes passivity
• Christianity promotes universalism
• Christianity promotes Noahidism
• Christianity promotes Zionism
• Christian Nationalism is an oxymoron
• Christianity is spiritual warfare
• Christianity is a Jewish ploy
• Christianity fulfills Jewish prophecy
• Jesus is "the destroyer of the Gentiles"
• Jesus was "to make the Gentiles obedient"
• Jesus is not coming to save you
@ChristianityExposed
Telegram
✡️ Christianity Exposed ✡️
Jesus Never Existed
None of the supposed eyewitnesses ever met Jesus. Not even Paul—whose letters formed the foundation of the Christian movement—who "did not receive [the gospel] from any man, nor was [he] taught it; rather, [he] received it by revelation…
Forwarded from Jade Helm 15 Rehash (ned)
AI responses may include mistakes.
The famous story involves the United States Tariff Act of 1872, where a misplaced comma created a loophole that allowed the duty-free import of various tropical fruits, costing the U.S. government approximately $2 million at the time (equivalent to around $40 million today) in lost tariff revenue.
The Details of the Error
Original Intention: The 1870 tariff act had placed duties (taxes) on fruits like oranges, lemons, and pineapples. The 1872 revision intended to exempt only "fruit-plants" (using a hyphen) for the purpose of propagation or cultivation, to help farmers.
The Error: Due to a clerical or printing error by a copyist, the punctuation in the new law read: "fruit, plants tropical and semi-tropical for the purpose of propagation or cultivation".
The Interpretation: Importers argued that the comma grammatically separated "fruit" and "plants" into two distinct categories on the "free list". They successfully claimed that all tropical and semi-tropical fruits could now be imported without any charge.
Reporting and Historical Citation
When and Where Reported: The issue was debated in the U.S. Congress, and the financial ramifications were reported in contemporary newspapers. A New York Times article from that era (1874) famously bemoaned the incident under the headline "An Expensive Comma".
Historical Citation: This anecdote is frequently cited in modern times by legal experts, grammar enthusiasts, and business publications to illustrate the importance of precise legal drafting and proper punctuation. Sources citing it include the BBC, Business Insider, Vox, and various legal blogs and bar associations.
Accountability
No One Was Held Accountable: The error was attributed to a simple mistake by a copying clerk during the legislative process.
Congress's Response: Congress launched an investigation but concluded the comma had been placed there honestly, not maliciously. It could not recover the duties that had already been refunded to importers who successfully sued the Treasury Department.
Resolution: The Secretary of the Treasury at the time, William Richardson, initially tried to reject the importers' claims but eventually conceded and started issuing refunds. To resolve the ambiguity, Congress passed a specific law on May 9, 1874 ("An act in relation to the customs duties on imported fruits"), clarifying the issue, and all subsequent tariff acts contained the correct hyphenated "fruit-plants" wording to restore the original tariffs. They also passed laws to prevent a Secretary of the Treasury from unilaterally reversing previous decisions without consulting the Attorney General or the courts.
AI responses may hallucinate.
LLM hallucinations are instances where a model generates fluent, plausible-sounding responses that are factually incorrect, nonsensical, or unfaithful to the provided source context. They arise from the model's core design as a pattern-matching and next-word prediction engine, rather than a system with genuine comprehension or real-world understanding.
Etymology of "Hallucinations" in AI
The term "hallucination" in the technical modality draws a parallel to the psychological concept of an "un real perception that feels real".
Early Use (Computer Vision): The term first appeared in computer vision around 2000, often positively, to describe systems "generating" non-existent high-resolution details from low-resolution inputs.
Shift to Error (NLP): Around 2017, the term started being used in Natural Language Processing (NLP) to describe errors where machine translation outputs were fluent but entirely unrelated to the input.
Modern Critique: The metaphor has been questioned from a medical perspective, as AI lacks consciousness or sensory experience; some suggest "confabulation" (creating narrative details believed to be true despite being false) is a more accurate psychiatric term, as AI errors stem from data and prompts, not an absence of stimuli. Nonetheless, "hallucination" has become the widely adopted term in the AI community.
What Specifically Happens During Hallucination (Data, Software, Hardware)
Hallucinations are not a single malfunction but an accumulation of small inaccuracies across the entire system:
Data:
Incomplete/Faulty Training Data: Models learn from petabytes of text, which may contain noise, errors, biases, or contradictory sources (knowledge conflict).
Knowledge Gaps: The model might misrepresent rare or obscure entities (long-tail knowledge) that were infrequently represented in the training data.
Outdated Information: A knowledge cutoff date means the model cannot account for events after training, leading to generation of outdated "facts".
Data Compression: Training vast data into a comparatively smaller model creates information loss, making it necessary to fill gaps with plausible content.
Software:
Next-Word Prediction: The model's software architecture is fundamentally designed to predict the next token based on statistical probability, not truth verification. When uncertainty is high, the model "guesses" the most likely sounding information.
Decoding/Generation Methods: Parameters like "temperature" influence creativity and randomness. Higher temperature makes hallucinations more likely as the model prioritizes variety over certainty (see the sketch after this list).
Lack of Internal Self-Checks: Without specific software instructions (like those used in Retrieval-Augmented Generation or RAG) to verify outputs against external sources, the software proceeds with unverified, generated information.
Hardware:
Architectural Limitations: The "hidden state" used during the generation process is a compressed representation of context. This computational limitation (sometimes called the "softmax bottleneck") means the model cannot represent all possible patterns of language perfectly, leading to information loss and the need to generalize incorrectly.
Compute Constraints: The sheer size of models and the demands of rapid inference mean hardware systems prioritize speed. Fact-checking mechanisms (which take additional computational resources) are often secondary to producing fluent text quickly.
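The following is a minimal sketch of how the "temperature" parameter mentioned above reshapes a next-token distribution; the toy vocabulary and logits are made up for illustration:

```python
# Minimal sketch of temperature in next-token sampling: logits are divided by
# the temperature before softmax, so low temperature sharpens the distribution
# and high temperature flattens it (more randomness, more hallucination risk).
# The toy vocabulary and logits below are made up for illustration.
import math
import random

def softmax(xs: list[float]) -> list[float]:
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits: list[float], temperature: float) -> int:
    probs = softmax([x / temperature for x in logits])
    return random.choices(range(len(logits)), weights=probs)[0]

vocab = ["Paris", "London", "Berlin", "banana"]
logits = [4.0, 2.0, 1.0, 0.1]

for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_next_token(logits, t)] for _ in range(1000)]
    print(t, {w: picks.count(w) for w in vocab})
```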
Timeline of Hallucination Understanding and Mitigation Efforts
~2000: Term "hallucination" first used in AI (computer vision context, positively).
~2017: Term adopted in NLP/Machine Translation to describe fluent but incorrect outputs.
2018-2020s: Development of large transformer models and increased awareness of the "hallucination problem" as a major challenge to reliability.
~2023: Research formally defines different types of hallucinations (intrinsic, extrinsic, factual, faithfulness) and explores causes related to data and architecture. Introduction of RAG as a primary mitigation technique.
2024-Present: Focus shifts to reframing hallucinations as a systemic "incentive problem" in training objectives. Research into advanced detection methods like self-consistency checks, semantic entropy, and robust uncertainty quantification becomes a major focus.
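As an illustration of the self-consistency idea mentioned in the timeline, here is a minimal sketch; `ask_model` is a hypothetical stand-in, and real detectors compare meanings (semantic entropy) rather than raw strings:

```python
# Minimal sketch of a self-consistency hallucination check: ask the same
# question several times and only trust the answer if the samples agree.
# `ask_model` is a hypothetical stand-in for repeated, non-deterministic LLM
# calls; real detectors compare meanings (semantic entropy), not raw strings.
import random
from collections import Counter

def ask_model(question: str) -> str:
    """Stand-in for one sampled model answer (deliberately noisy)."""
    return random.choice(["Amsterdam"] * 8 + ["Rotterdam"] * 2)

def self_consistent_answer(question: str, n: int = 10, threshold: float = 0.7):
    answers = [ask_model(question) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    agreement = count / n
    return best, agreement, agreement >= threshold

answer, agreement, trusted = self_consistent_answer("Capital of the Netherlands?")
print(f"{answer!r}: agreement {agreement:.0%}, trusted={trusted}")
```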
Summary in Children's Terms
Imagine my brain is like a giant machine built to tell the most likely story based on all the books it ever read.
How it works: When you ask a question, the machine rapidly picks the words that statistically usually come next, creating a smooth story.
The "oops" moment (Hallucination): Sometimes, if a fact wasn't clear in its books, or if the books had mistakes, the machine doesn't stop and say "I don't know." Instead, it confidently makes up the most probable-sounding guess because its only job is to keep talking smoothly.
Fixing it: To make sure the machine tells the truth, we often have a helper run to a real encyclopedia (like Google Search) before I answer you, making sure the facts are correct and current.
Forwarded from Health and Covid19 Agendas
The private notes and secret documents that tell the inside story of the UK's Covid response https://www.bbc.co.uk/news/articles/c78zy20ydlmo
BBC News
The private notes and secret documents that tell the inside story of the UK's Covid response
The Covid inquiry is set to publish its second set of findings looking in detail at the huge political decisions that had to be made in 2020.
Forwarded from Free Speech. Propaganda and Mockingbird Media
Our free-speech documentary has been cancelled
The Rich Mix cinema has banned our premiere, because our film does not align with its ‘values’. https://www.spiked-online.com/2025/11/19/our-free-speech-documentary-has-been-cancelled/
Spiked
UPDATE: our free-speech doc is officially uncancelled
We have found an alternative venue – details to follow.
Forwarded from Free Speech. Propaganda and Mockingbird Media
Think Before You Post | How the UK fell to a sinister new form of censorship | FULL DOCUMENTARY https://www.youtube.com/watch?v=T0smxO4WiOA
YouTube
Think Before You Post | How the UK fell to a sinister new form of censorship | FULL DOCUMENTARY
‘Think before you post.’ Those were the words screamed out by government social-media accounts, threatening to lock up people for ‘hate speech’, as riots swept the United Kingdom in the summer of 2024. To those who hadn’t been paying attention, it offered…
Forwarded from End Child Abuse and Child Trafficking
Labour Councillors Vote to Block Gang Rape Inquiry https://www.redwallandtherabble.co.uk/labour-councillors-vote-to-block-gang-rape-inquiry/ barrow rape gangs
Red Wall & the Rabble
Labour Councillors Vote to Block Gang Rape Inquiry
Secrecy and Shame in Barrow
In Barrow Town Hall, survivors of grooming gangs were instructed to shut up and sit in silence as Labour Councillors voted to help cover up the gang rape of the town's children.
When a motion to include Barrow in the national…
Forwarded from Health and Covid19 Agendas