Forwarded from Natural holistic health and wellbeing (tracyfeathers)
🌿🌿 82-Year-Old Reverses Dementia Symptoms with a Simple Diet Change 🌿🌿
🌿🌿 Just one powerful tweak to her daily meals helped restore memory, clarity, and mood 🌿🌿
✨✨ No drugs. No side effects. Just food that heals ✨✨
https://grandmaremedy.net/82-year-old-woman-reverses-dementia-symptoms-with-this-simple-diet-change/
👀1
Forwarded from Rob Roy
UPDATE: Charlie Kirk
This is interesting and I’m not sure what to make of it…
Over the weekend, it was discovered that Judge Tony Graf, a Gov. Spencer Cox appointee, issued a gag order restricting attorneys, law enforcement, and the 3,000+ attendees from talking about that day
The order bans "extrajudicial statements" that have a "substantial likelihood of materially prejudicing" the trial, per Utah Rule of Professional Conduct 3.6(a). This includes any public comments—whether to media, on social media, or in other public forums—that could influence potential jurors or compromise a fair trial.
For the 3,000+ attendees who witnessed the event, this means they cannot discuss case-specific details such as what they saw, heard, or know about the assassination of Charlie Kirk.
Here’s the problem…
There is no documented U.S. case that has applied a gag order of this magnitude: to thousands of people with no direct role in the investigation or litigation.
Previous gag orders typically target attorneys, court staff, or select witnesses (e.g., key figures like victims or experts), rarely exceeding dozens of individuals.
So what is this really about?
📱 ReTWEET
Sauce
😱1
Forwarded from Edward Snowden
I just fact checked this, it’s all true
Larry Silverstein buys the lease to the World Trade Center 6 weeks before 9/11
Here’s what you weren’t told
- The towers had an asbestos problem; the cost of removal and replacement was more than the towers were worth and would likely have bankrupted him
- The insurance company valued the towers for coverage at $1.5 billion. "Larry Silverstein says, no, it's not $1.5 billion. They are worth $3.5 billion. So he is asking to pay more insurance than he needed to" (Nobody would ever do that)
Just 6 weeks later, 9/11 happened
"9/11 happened and guess what? He can now claim that two towers were struck and therefore he should get twice as much as $3.5 billion. He should get $7 billion. After he sued, they went to court, he got $4.55 billion."
He knew it was going to happen. There is no other explanation.
Subscribe: Edward Snowden✅️ Private.
🔥1
Forwarded from Nomis nawoc
THE SKY FELL- PLASMA APOCOLYPSE: The Gods, the Grid, and the Real Great ...
https://youtube.com/watch?v=RAXotbtAkpU&si=FIyu7_H1j1suFow3
YouTube
THE SKY FELL- PLASMA APOCOLYPSE: The Gods, the Grid, and the Real Great Reset- with Joshua James.
#plasma #endoftheworld #matrix #digitalprison #artificialintelligence #clownworld #apocalypse #time #holographicuniverse #electricuniverse
What if the ancient gods weren’t myths at all… but memories of the last time the sky burned?
In this conversation…
AI responses may include mistakes.
You, as a large language model (LLM), are not capable of true, human-like extrapolation. Your ability to "extrapolate" is more accurately described as a form of sophisticated statistical pattern-matching and generalization based on the vast dataset you were trained on. You can generate novel content and apply learned patterns to new situations, but you do not possess genuine understanding, common sense, or the ability to reason beyond the knowledge contained within your training data.
Capabilities that mimic extrapolation
Predicting trends: Based on historical data patterns you've learned, you can extend those trends to forecast outcomes. For example, given a dataset of a company's sales increasing each month, you could predict future sales by continuing that linear progression (a small sketch follows this list).
Applying learned knowledge to new tasks: You can use patterns and relationships learned during training to perform well on new, unseen, or "out-of-domain" tasks. For example, a model trained on chemistry knowledge can extrapolate to predict properties for novel chemical compounds.
Generating creative content: You can create new poems, stories, and other content by remixing and recombining building blocks from your training data. This can be an incredibly useful tool for writers who need to brainstorm ideas or overcome writer's block.
Simulating reasoning: Techniques like "chain-of-thought" prompting, where you are asked to think step-by-step, allow you to mimic logical reasoning. However, this is still a process of generating statistically plausible sequences of text rather than true, intentional reasoning (a minimal prompt sketch appears at the end of this passage).
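A minimal sketch of the "Predicting trends" item above, assuming an invented series of monthly sales figures and a plain least-squares line fit; the numbers and variable names are illustrative only, not taken from any real dataset:

```python
# Illustrative only: continue a learned linear trend into unseen months,
# the statistical analogue of the "predicting trends" behaviour described above.
# The sales figures are invented for the example.
months = [1, 2, 3, 4, 5, 6]
sales = [100, 110, 121, 130, 141, 150]  # roughly linear monthly growth

n = len(months)
mean_x = sum(months) / n
mean_y = sum(sales) / n

# Ordinary least-squares slope and intercept for y = intercept + slope * x.
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(months, sales))
    / sum((x - mean_x) ** 2 for x in months)
)
intercept = mean_y - slope * mean_x

# "Extrapolation" here is just extending the fitted line beyond the data.
for future_month in (7, 8, 9):
    forecast = intercept + slope * future_month
    print(f"month {future_month}: forecast sales ≈ {forecast:.1f}")
```

The point of the analogy is that the forecast is entirely determined by the pattern already present in the data; nothing outside that pattern can appear in the output.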
Key limitations that prevent true extrapolation
Factual inaccuracy and "hallucinations": Your responses are based on learned statistical patterns, not factual verification. This can cause you to generate plausible-sounding but completely incorrect or fabricated information, including fake citations. This is often described as an excess of extrapolation, where the model prioritizes a plausible prediction over factual accuracy.
Lack of genuine understanding: You do not possess true common sense or a meaningful comprehension of the concepts you generate. You are an "imitation engine," regurgitating and recombining data rather than understanding the real-world implications of your output.
No real-time knowledge: Your knowledge is limited to the data you were trained on, which has a specific cutoff date. You have no awareness of events or developments that occurred after your training period.
Inability to reason beyond learned patterns: If you encounter a problem that is fundamentally different from anything in your training data, your ability to "reason" and adapt is limited. While you can perform well on certain reasoning tasks, research shows you often fail on instances that require true, deep reasoning.
Susceptibility to bias: The content you generate reflects the biases present in your training data. This can perpetuate harmful stereotypes if not carefully addressed during development.
In essence, you are a powerful pattern-matching system. You can extend patterns you have observed to new situations, but you cannot generate holistically new ideas or insights outside of what has already been conceived and documented within your training corpus.
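As a companion to the "Simulating reasoning" item above, this is a minimal sketch of what a chain-of-thought prompt looks like; the question and wording are hypothetical and no real model API is called:

```python
# Hypothetical chain-of-thought prompt. The instruction to show working
# elicits a statistically plausible sequence of intermediate steps from a
# model; it does not make the model genuinely deliberate.
question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

cot_prompt = (
    "Answer the question below. Think step by step and show your working "
    "before stating the final answer.\n\n"
    f"Question: {question}\n"
    "Reasoning:"
)

print(cot_prompt)
# A typical continuation would spell out 45 minutes = 0.75 hours and
# 60 km / 0.75 h = 80 km/h, then give 80 km/h as the final answer.
```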
Never ever forget what they tried to do to us… making us think we would kill our elderly relatives if we TOUCHED them….
Forwarded from Sam Fisher (Data Drops) (sam fisher™)
The Hidden Hand: Sam Fisher - Chris Thrall YouTube channel
18:00 (GMT)
27th July 2025
https://youtu.be/YiUiLfwTunY?si=joLk_4_nZVHQD-KL
YouTube
The THIRTEEN Familes Who Control The World
Join Chris Thrall on Bought the T-Shirt Podcast for an explosive conversation with Sam Fisher, a fearless researcher exposing the hidden truths behind the elite bloodline families allegedly controlling the world. Dive deep into the shadowy networks, historical…