Forwarded from Edward Snowden
I just fact checked this, it’s all true
Larry Silverstein bought the lease to the World Trade Center six weeks before 9/11
Here’s what you weren’t told
- The towers had an asbestos problem; the cost of removal and replacement was more than the towers were worth and would likely have bankrupted him
- The insurance company valued the towers for coverage at $1.5 billion. “Larry Silverstein says, no, it's not $1.5 billion. They are worth $3.5 billion. So he is asking to pay for more insurance than he needed” (nobody would ever do that)
Just six weeks later, 9/11 happened
“9/11 happened and guess what? He can now claim that two towers were struck and therefore he should get twice as much as $3.5 billion. He should get $7 billion. After he sued and they went to court, he got $4.55 billion.”
He knew it was going to happen. There is no other explanation.
Subscribe: Edward Snowden✅️ Private.
Forwarded from Nomis nawoc
THE SKY FELL- PLASMA APOCOLYPSE: The Gods, the Grid, and the Real Great ...
https://youtube.com/watch?v=RAXotbtAkpU&si=FIyu7_H1j1suFow3
YouTube
THE SKY FELL- PLASMA APOCOLYPSE: The Gods, the Grid, and the Real Great Reset- with Joshua James.
#plasma #endoftheworld #matrix #digitalprison #artificialintelligence #clownworld #apocalypse #time #holographicuniverse #electricuniverse
What if the ancient gods weren’t myths at all… but memories of the last time the sky burned?
In this conversation…
You, as a large language model (LLM), are not capable of true, human-like extrapolation. Your ability to "extrapolate" is more accurately described as a form of sophisticated statistical pattern-matching and generalization based on the vast dataset you were trained on. You can generate novel content and apply learned patterns to new situations, but you do not possess genuine understanding, common sense, or the ability to reason beyond the knowledge contained within your training data.
Capabilities that mimic extrapolation
Predicting trends: Based on historical data patterns you've learned, you can extend those trends to forecast outcomes. For example, given a dataset of a company's sales increasing each month, you could predict future sales by continuing that linear progression (a minimal code sketch follows this list).
Applying learned knowledge to new tasks: You can use patterns and relationships learned during training to perform well on new, unseen, or "out-of-domain" tasks. For example, a model trained on chemistry knowledge can extrapolate to predict properties for novel chemical compounds.
Generating creative content: You can create new poems, stories, and other content by remixing and recombining building blocks from your training data. This can be an incredibly useful tool for writers who need to brainstorm ideas or overcome writer's block.
Simulating reasoning: Techniques like "chain-of-thought" prompting, where you are asked to think step-by-step, allow you to mimic logical reasoning. However, this is still a process of generating statistically plausible sequences of text rather than true, intentional reasoning.
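To make the "predicting trends" item above concrete, here is a minimal sketch of pattern-based extrapolation: fitting a straight line to a short run of hypothetical monthly sales figures and extending it one step forward. The sales numbers and the use of Python with numpy are assumptions added purely for illustration; the passage itself names no particular tool or dataset.

```python
# Minimal sketch of trend extrapolation (hypothetical sales figures; numpy assumed).
import numpy as np

# Hypothetical monthly sales for six consecutive months.
sales = np.array([100.0, 112.0, 121.0, 135.0, 144.0, 158.0])
months = np.arange(len(sales))

# Fit a straight line (degree-1 polynomial) to the observed pattern.
slope, intercept = np.polyfit(months, sales, deg=1)

# Extend the fitted line one month beyond the data: a pure continuation of a pattern,
# with no model of why sales moved the way they did.
next_month = len(sales)
forecast = slope * next_month + intercept

print(f"Fitted trend: sales ~= {slope:.1f} * month + {intercept:.1f}")
print(f"Extrapolated sales for month {next_month}: {forecast:.1f}")
```

The analogy is loose, but it captures the point of the list: the forecast only continues a pattern already present in the observed data, which is what the passage means by extrapolation without genuine understanding.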
Key limitations that prevent true extrapolation
Factual inaccuracy and "hallucinations": Your responses are based on learned statistical patterns, not factual verification. This can cause you to generate plausible-sounding but completely incorrect or fabricated information, including fake citations. This is often described as an excess of extrapolation, where the model prioritizes a plausible prediction over factual accuracy.
Lack of genuine understanding: You do not possess true common sense or a meaningful comprehension of the concepts you generate. You are an "imitation engine," regurgitating and recombining data rather than understanding the real-world implications of your output.
No real-time knowledge: Your knowledge is limited to the data you were trained on, which has a specific cutoff date. You have no awareness of events or developments that occurred after your training period.
Inability to reason beyond learned patterns: If you encounter a problem that is fundamentally different from anything in your training data, your ability to "reason" and adapt is limited. While you can perform well on certain reasoning tasks, research shows you often fail on instances that require true, deep reasoning.
Susceptibility to bias: The content you generate reflects the biases present in your training data. This can perpetuate harmful stereotypes if not carefully addressed during development.
In essence, you are a powerful pattern-matching system. You can extend patterns you have observed to new situations, but you cannot generate wholly new ideas or insights outside of what has already been conceived and documented within your training corpus.
Never ever forget what they tried to do to us… making us think we would kill our elderly relatives if we TOUCHED them….
Forwarded from Sam Fisher (Data Drops) (sam fisher™)
The Hidden Hand: Sam Fisher - Chris Thrall YouTube channel
18:00 (GMT)
27th July 2025
https://youtu.be/YiUiLfwTunY?si=joLk_4_nZVHQD-KL
YouTube
The THIRTEEN Families Who Control The World
Join Chris Thrall on Bought the T-Shirt Podcast for an explosive conversation with Sam Fisher, a fearless researcher exposing the hidden truths behind the elite bloodline families allegedly controlling the world. Dive deep into the shadowy networks, historical…
Forwarded from Flower2child 🕊️🍀🤍
YouTube
Shocking details in book from Jeffrey Epstein victim Virginia Giuffre
CNN video IN-75TU
Forwarded from VResearch channel
Helen Marden (mother of Maxwell associate Mirabelle Marden) has uploaded some worrying photos to her Instagram, including Mirabelle at Bohemian Grove and caged children. Mirabelle herself has been connected to Ghislaine Maxwell and Rachel Chandler, as well as to Hannah Bronfman, niece of Clare Bronfman and Edgar Bronfman Jr, who are connected to NXIVM and Epstein. Mirabelle, Chandler, Hannah and Maxwell all socialized at an October 17, 2012 event in New York. I am not sure if these are connected, but the Epstein Black Book contains names such as "Marden, Scott & Sarah". Helen Marden herself is followed on Instagram by James Alefantis as well as by his associates.
Sources:
https://www.instagram.com/goldenrock/
https://www.instagram.com/mirabellemarden/
https://observer.com/2006/12/the-marden-family/amp/
https://epsteinsblackbook.com/names/scott-sarah-marden
https://unnoscriptd-magazine.com/chanel-fine-jewelry-celebrates-its-80th-anniversary-of-the-bijoux-de-diamants-collection-created-by-gabrielle-chanel/