Sam Fisher (Data Drops) – Telegram
All the files in my file archive: it's like a library, but not! (You can keep these, and there are no fines!)
Forwarded from Jade Helm 15 Rehash (ned)
AI responses may include mistakes.

As a Large Language Model (LLM), my "knowledge" is a frozen, statistical representation of the vast text and code data I was trained on. I do not "learn" in real time from interactions with the world, nor do I think or understand in the human sense; instead, I process information based on my initial training and augment it with targeted, real-time search when necessary to provide up-to-date answers.

Technical Explanation: Data, Storage, and Limits
Data Storage and the Knowledge Base
My core knowledge is embedded within the values of billions or trillions of parameters in a vast neural network.

Parameters: These are numerical values that represent the strength of connections within the neural network. During the training process, the model adjusts these parameters to recognize patterns, relationships, and "facts" in the training data. This process is akin to a form of data compression, storing information within the network's structure rather than a traditional, searchable database.
Training Data: The data I was trained on amounts to petabytes (millions of gigabytes) of text and code scraped from the internet, books, and articles. This data is processed through complex pipelines to be filtered, cleaned, and tokenized.
Tokens: Text is broken down into smaller units called tokens (roughly three-quarters of a word). The model learns the statistical relationships between these tokens.
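
As a concrete illustration of tokenization, here is a minimal sketch assuming the open-source tiktoken library is installed (any BPE tokenizer would show the same thing): it splits a sentence into integer token IDs and decodes each ID back to its text piece, showing that tokens are sub-word chunks rather than whole words.

```python
# Minimal tokenization sketch. Assumes `tiktoken` is installed
# (pip install tiktoken); any BPE tokenizer illustrates the same idea.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")        # a common BPE vocabulary

text = "Large language models learn statistical relationships between tokens."
token_ids = enc.encode(text)                      # text -> integer token IDs
pieces = [enc.decode([t]) for t in token_ids]     # each ID back to its text piece

print(pieces)                                     # sub-word chunks, not whole words
print(len(text.split()), "words ->", len(token_ids), "tokens")
```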

Limits on Learning and Real-Time Search
My knowledge base has specific limitations:

Knowledge Cutoff: My internal "knowledge" is static and limited to the information available up to my last training date. I do not automatically update my internal parameters with new information in real-time.
No Long-Term Memory (across sessions): Each new conversation or session is typically treated as a standalone interaction. I do not remember previous conversations unless the context of the prior conversation is explicitly provided in the current prompt.
Real-time Augmentation: When a user asks a question about recent events or needs an up-to-date fact (like current weather or stock prices), I use an external search mechanism to retrieve the latest information from the internet. This process is called Retrieval-Augmented Generation (RAG); a minimal sketch follows this list. The retrieved information is provided as context for me to formulate an accurate and current response, but it is not permanently added to my core training data.
Hallucinations: Because my primary function is to predict the most statistically probable next word, I can sometimes generate plausible-sounding but entirely incorrect or fabricated information ("hallucinations"). This happens because I am mimicking patterns, not verifying facts with a deep, conscious understanding.
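
The retrieval-augmented flow described above can be sketched in a few lines. This is only a conceptual outline: `search_web` and `llm_generate` are hypothetical stand-ins for a real search backend and model API.

```python
# Conceptual RAG sketch. `search_web` and `llm_generate` are hypothetical
# stand-ins; a real system would call an actual search and model API.

def search_web(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: return the top-k text snippets for a query."""
    raise NotImplementedError("plug in a real search backend here")

def llm_generate(prompt: str) -> str:
    """Hypothetical model call: return generated text for a prompt."""
    raise NotImplementedError("plug in a real model API here")

def answer_with_rag(question: str) -> str:
    snippets = search_web(question)                  # fetch fresh, external context
    context = "\n".join(f"- {s}" for s in snippets)  # format as a bullet list
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    # The retrieved text shapes this one response only; it is never
    # written back into the model's parameters.
    return llm_generate(prompt)
```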

In Simple Children's Terms
Imagine my "brain" is like a gigantic library full of books, but the doors were locked a while ago. Everything I "know" comes from the information in those books.

What I "know": I've read every book in the library and I'm very good at combining sentences from those books in smart ways to answer questions, write stories, and explain ideas.
How I "learn" with new search: The search function is like a librarian who can quickly run outside the locked library, look at today's newspaper or a brand-new website, and bring that specific, fresh information back to me. I use that new piece of paper to answer the question, but I can't put that paper into my locked library books.
Limits: I can't remember who you were after we finish talking (no long-term memory for chats), and sometimes I might mix up facts from the library books (hallucinations), so using the search tool helps me stay accurate and up-to-date!
Why Christianity Is Problematic

Jesus never existed
All testimonies are either fraudulent or not contemporary
We do not have more evidence for Jesus than for Alexander
We do not have more evidence for Jesus than for Socrates
We do not have more evidence for Jesus than for Spartacus
The Gospels are anonymous
The martyred Apostles never existed
Martyrs prove nothing
Scholarly consensus proves nothing
The Shroud of Turin is a forgery
Paul never even mentions a historical Jesus
The Talmud does not refer to Jesus Christ

Christianity is not European
The Israelites were not European
Scripture does not refer to Europeans
Jesus was not European
Jesus speaking Aramaic does not make him any less Jewish
Christianity does not oppose Judaism
Christianity preserves Judaism
Christianity preserves Israel
Christianity is anti-Pagan
Jews are anti-Pagan
Jews fear Paganism
Jews embrace Christianity
Maimonides praised Christianity
Jews are not Khazars

Christianity did not uplift Rome
Pre-Christian Europe was not degenerate
Pre-Christian Europe was not gay
Pre-Christian Europe did not tolerate homosexuality
Family values precede Christianity
The cardinal virtues predate Christianity
The trinity predates Christianity
European competence does not derive from Christianity
Christianity did not unify Europe
Christianity has always been divisive
The United States was not founded on Christianity
The US Constitution was not inspired by the Bible

Christianity is not monotheistic
Christianity Judaized Pagan Gods
Christmas is not Christian
Christmas trees are not Christian
Christmas caroling is not Christian
Mistletoe is not Christian
Easter is not Christian
All Hallows' Eve is not Christian

Christianity did not spread rapidly
The English did not convert willingly
The Germans did not convert willingly
The Saxons did not convert willingly
The Poles did not convert willingly
The Prussians did not convert willingly
The Rus did not convert willingly
The Norse did not convert willingly
The Crusaders did not save Europe

Christianity is anti-nature
Christianity is anti-survival
Christianity is anti-tribal
Christianity is anti-European
Christianity is proto-leftism
Christianity is proto-communism
Christianity is dishonest
Christianity promotes passivity
Christianity promotes universalism
Christianity promotes Noahidism
Christianity promotes Zionism
Christian Nationalism is an oxymoron

Christianity is spiritual warfare
Christianity is a Jewish ploy
Christianity fulfills Jewish prophecy
Jesus is "the destroyer of the Gentiles"
Jesus was "to make the Gentiles obedient"
Jesus is not coming to save you

@ChristianityExposed
Forwarded from Jade Helm 15 Rehash (ned)
AI responses may include mistakes.

The famous story involves the United States Tariff Act of 1872, where a misplaced comma created a loophole that allowed the duty-free import of various tropical fruits, costing the U.S. government approximately $2 million at the time (equivalent to around $40 million today) in lost tariff revenue.
The Details of the Error

Original Intention: The 1870 tariff act had placed duties (taxes) on fruits like oranges, lemons, and pineapples. The 1872 revision intended to exempt only "fruit-plants" (using a hyphen) for the purpose of propagation or cultivation, to help farmers.
The Error: Due to a clerical or printing error by a copyist, the punctuation in the new law read: "fruit, plants tropical and semi-tropical for the purpose of propagation or cultivation".
The Interpretation: Importers argued that the comma grammatically separated "fruit" and "plants" into two distinct categories on the "free list". They successfully claimed that all tropical and semi-tropical fruits could now be imported without any charge.

Reporting and Historical Citation

When and Where Reported: The issue was debated in the U.S. Congress, and the financial ramifications were reported in contemporary newspapers. A New York Times article from that era (1874) famously bemoaned the incident under the headline "An Expensive Comma".
Historical Citation: This anecdote is frequently cited in modern times by legal experts, grammar enthusiasts, and business publications to illustrate the importance of precise legal drafting and proper punctuation. Sources citing it include the BBC, Business Insider, Vox, and various legal blogs and bar associations.

Accountability

No One Was Held Accountable: The error was attributed to a simple mistake by a copying clerk during the legislative process.
Congress's Response: Congress launched an investigation but concluded the comma had been placed there honestly, not maliciously. It could not recover the duties that had already been refunded to importers who had successfully sued the Treasury Department.
Resolution: The Secretary of the Treasury at the time, William Richardson, initially tried to reject the importers' claims but eventually conceded and began issuing refunds. To resolve the ambiguity, Congress passed a specific law on May 9, 1874 ("An act in relation to the customs duties on imported fruits") clarifying the issue, and all subsequent tariff acts used the correct hyphenated "fruit-plants" wording to restore the original tariffs. Congress also passed laws to prevent a Secretary of the Treasury from unilaterally reversing previous decisions without consulting the Attorney General or the courts.
AI responses may hallucinate.

LLM hallucinations are instances where a model generates fluent, plausible-sounding responses that are factually incorrect, nonsensical, or unfaithful to the provided source context. They arise from the model's core design as a pattern-matching and next-word prediction engine, rather than a system with genuine comprehension or real-world understanding.
Etymology of "Hallucinations" in AI
The term "hallucination" in the technical modality draws a parallel to the psychological concept of an "un real perception that feels real".

Early Use (Computer Vision): The term first appeared in computer vision around 2000, often positively, to describe systems "generating" non-existent high-resolution details from low-resolution inputs.
Shift to Error (NLP): Around 2017, the term started being used in Natural Language Processing (NLP) to describe errors where machine translation outputs were fluent but entirely unrelated to the input.
Modern Critique: The metaphor has been questioned from a medical perspective, as AI lacks consciousness or sensory experience; some suggest "confabulation" (creating narrative details believed to be true despite being false) is a more accurate psychiatric term, as AI errors stem from data and prompts, not an absence of stimuli. Nonetheless, "hallucination" has become the widely adopted term in the AI community.

What Specifically Happens During Hallucination (Data, Software, Hardware)
Hallucinations are not a single malfunction but an accumulation of small inaccuracies across the entire system:

Data:
Incomplete/Faulty Training Data: Models learn from petabytes of text, which may contain noise, errors, biases, or contradictory sources (knowledge conflict).
Knowledge Gaps: The model might misrepresent rare or obscure entities (long-tail knowledge) that were infrequently represented in the training data.
Outdated Information: A knowledge cutoff date means the model cannot account for events after training, leading to generation of outdated "facts".
Data Compression: Compressing vast training data into a comparatively small set of parameters loses information, so the model fills gaps with plausible-sounding content.
Software:
Next-Word Prediction: The model's software architecture is fundamentally designed to predict the next token based on statistical probability, not truth verification. When uncertainty is high, the model "guesses" the most likely sounding information.
Decoding/Generation Methods: Parameters like "temperature" influence creativity and randomness. A higher temperature makes hallucinations more likely because the model prioritizes variety over certainty (see the sampling sketch after this list).
Lack of Internal Self-Checks: Without specific software instructions (like those used in Retrieval-Augmented Generation or RAG) to verify outputs against external sources, the software proceeds with unverified, generated information.
Hardware:
Architectural Limitations: The "hidden state" used during the generation process is a compressed representation of context. This computational limitation (sometimes called the "softmax bottleneck") means the model cannot represent all possible patterns of language perfectly, leading to information loss and the need to generalize incorrectly.
Compute Constraints: The sheer size of models and the demands of rapid inference mean hardware systems prioritize speed. Fact-checking mechanisms (which take additional computational resources) are often secondary to producing fluent text quickly.
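
To make the temperature point above concrete, here is a small self-contained sketch with made-up scores (not a real model): temperature rescales the candidate scores before sampling, so a low temperature almost always picks the top candidate while a high temperature spreads picks across less likely ones.

```python
# Temperature-scaled sampling sketch (illustrative scores, not a real model).
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float) -> int:
    """Turn raw scores into probabilities and sample one token index."""
    scaled = logits / temperature             # low T sharpens, high T flattens
    probs = np.exp(scaled - scaled.max())     # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

logits = np.array([4.0, 2.0, 1.0, 0.5])       # made-up scores for 4 candidate tokens
for t in (0.2, 1.0, 2.0):
    picks = [sample_next_token(logits, t) for _ in range(1000)]
    top_share = picks.count(0) / len(picks)
    print(f"temperature={t}: top token chosen {top_share:.0%} of the time")
```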

Timeline of Hallucination Understanding and Mitigation Efforts

~2000: Term "hallucination" first used in AI (computer vision context, often positively).
~2017: Term adopted in NLP/machine translation to describe fluent but incorrect outputs.
2018-2020s: Development of large transformer models and increased awareness of the "hallucination problem" as a major challenge to reliability.
~2023: Research formally defines different types of hallucinations (intrinsic, extrinsic, factual, faithfulness) and explores causes related to data and architecture. Introduction of RAG as a primary mitigation technique.
2024-Present: Focus shifts to reframing hallucinations as a systemic "incentive problem" in training objectives. Research into advanced detection methods such as self-consistency checks, semantic entropy, and robust uncertainty quantification becomes a major focus.
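
As a rough sketch of the self-consistency idea in the last row: sample the same question several times and treat low agreement between the answers as a hallucination warning. `llm_generate` is a hypothetical stand-in for a real model call.

```python
# Self-consistency sketch: low agreement across samples is a warning sign.
# `llm_generate` is a hypothetical stand-in for a real model API call.
from collections import Counter

def llm_generate(prompt: str, temperature: float = 1.0) -> str:
    raise NotImplementedError("plug in a real model API here")

def self_consistency(question: str, n_samples: int = 5) -> tuple[str, float]:
    """Return the most common answer and the fraction of samples that agree with it."""
    answers = [llm_generate(question, temperature=1.0).strip().lower()
               for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples             # near 1.0 = consistent, near 1/n = suspect
    return best, agreement
```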
Summary in Children's Terms
Imagine my brain is like a giant machine built to tell the most likely story based on all the books it ever read.

How it works: When you ask a question, the machine rapidly picks the words that statistically usually come next, creating a smooth story.
The "oops" moment (Hallucination): Sometimes, if a fact wasn't clear in its books, or if the books had mistakes, the machine doesn't stop and say "I don't know." Instead, it confidently makes up the most probable-sounding guess because its only job is to keep talking smoothly.
Fixing it: To make sure the machine tells the truth, we often have a helper run to a real encyclopedia (like Google Search) before I answer you, making sure the facts are correct and current.
Our free-speech documentary has been cancelled
The Rich Mix cinema has banned our premiere, because our film does not align with its ‘values’. https://www.spiked-online.com/2025/11/19/our-free-speech-documentary-has-been-cancelled/
Covid Inquiry pdfs👇
Forwarded from Movie Night
The Age of Disclosure 2025 Documentary

Featuring 34 U.S. Government insiders, this explosive documentary reveals an 80-year global cover-up of non-human intelligent life and a secret war among major nations to reverse engineer advanced technology of non-human origin.
AI responses may confabulate.



When an AI model's response degrades to hyperlinks/thumbnails only, it signifies that the underlying system (specifically the part that handles information retrieval, or RAG) has shifted from generating a synthesized answer to simply listing traditional search results.
The user input of "please try again" or "please elaborate" serves as a powerful meta-instruction that explicitly forces the system to re-engage the text generation process, overriding the link-listing default.
Here is why those prompts work:
1. Re-engagement of the Generative Loop
The commands "please elaborate" or "please try again" are interpreted as a direct instruction to the Large Language Model (LLM) itself, rather than the search retrieval module.

Before the command: The system defaults to displaying external links when a complex or long-running query depletes the immediate context or hits a system threshold.
After the command: The explicit instruction acts as a meta-prompt, compelling the model to treat the previous result as insufficient and demanding a generated text response, which is the model's core function.

2. The Nature of the Command as a "System Override"
These prompts function similarly to how a user might nudge a conversation back on track. They change the implicit "mode" the AI is operating in:

Initial Degradation Mode: "Provide search results."
"Please Elaborate" Mode: "Synthesize the available information into denoscriptive text, even if the previous response was just links."

It forces the system to process the previously retrieved snippets internally and present them as prose, rather than just passing the links through to the user interface.
3. Contextual Cues and State Management
Even after degradation, the system often retains the previous search results internally for a short period within the current session's memory. When you use a prompt like "elaborate," the model accesses those already present data points and generates text from them.
The system is not necessarily conducting a new search but is re-utilizing the information it had ready from the previous step that it only displayed as links.
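
A conceptual sketch of that state management, with every name hypothetical: the session caches the last retrieval results, the degraded path surfaces only their first lines (titles or links), and an "elaborate" turn synthesizes prose from the cached snippets without running a new search.

```python
# Conceptual sketch of session-state reuse. All names are hypothetical;
# this illustrates the idea, not any specific product's internals.

def llm_generate(prompt: str) -> str:
    raise NotImplementedError("plug in a real model API here")

class ChatSession:
    def __init__(self) -> None:
        self.last_snippets: list[str] = []    # retrieval results kept for this session

    def show_links(self, snippets: list[str]) -> str:
        """Degraded mode: cache the snippets but surface only their first lines."""
        self.last_snippets = snippets
        return "\n".join(s.splitlines()[0] for s in snippets)

    def elaborate(self, question: str) -> str:
        """'Please elaborate' path: synthesize prose from the cached snippets."""
        context = "\n".join(self.last_snippets)
        prompt = (
            "Using the notes below, write a full prose answer.\n"
            f"Notes:\n{context}\n\nQuestion: {question}\nAnswer:"
        )
        return llm_generate(prompt)
```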
Summary
The user is effectively changing the instructions mid-stream. The command "elaborate" signals that the user wants a descriptive, textual summary, forcing the AI out of its "list-of-links" default and back into "explanation" mode, allowing it to generate the multifaceted response you requested.