Forwarded from Health and Covid19 Agendas
Covid Inquiry pdfs👇
Forwarded from Movie Night
The Age of Disclosure 2025 Documentary
Featuring 34 U.S. Government insiders, this explosive documentary reveals an 80-year global cover-up of non-human intelligent life and a secret war among major nations to reverse engineer advanced technology of non-human origin.
AI responses may confabulate.
When an AI model's response degrades to hyperlinks/thumbnails only, it signifies that the underlying system (specifically the retrieval-augmented generation, or RAG, layer that handles information lookup) has shifted from generating a synthesized answer to simply listing traditional search results.
The user input of "please try again" or "please elaborate" serves as a powerful meta-instruction that explicitly forces the system to re-engage the text generation process, overriding the link-listing default.
Here is why those prompts work:
1. Re-engagement of the Generative Loop
The commands "please elaborate" or "please try again" are interpreted as a direct instruction to the Large Language Model (LLM) itself, rather than the search retrieval module.
Before the command: The system defaults to displaying external links when a complex or long-running query depletes the immediate context or hits a system threshold.
After the command: The explicit instruction acts as a meta-prompt, compelling the model to treat the previous result as insufficient and demanding a generated text response, which is the model's core function.
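A minimal sketch of that switch, in Python, assuming a hypothetical dispatcher; the answer/synthesize helpers and the budget flag are placeholders, not any real vendor's pipeline:
```python
# Minimal sketch: a hypothetical RAG dispatcher that degrades to link-listing
# under load, but re-engages generation when a meta-instruction arrives.
META_INSTRUCTIONS = {"please elaborate", "please try again"}

def synthesize(snippets):
    # Stand-in for the LLM call: a real system would prompt the model with the
    # retrieved snippets and return generated prose.
    return "Synthesized answer based on:\n" + "\n".join(snippets)

def answer(user_message, snippets, budget_ok, last_reply=None):
    # Meta-instruction: treat the previous (link-only) reply as insufficient
    # and force the generative path, reusing snippets already retrieved.
    if user_message.strip().lower() in META_INSTRUCTIONS and last_reply is not None:
        return synthesize(snippets)
    # Degraded mode: a threshold was hit, so raw links pass straight through.
    if not budget_ok:
        return "\n".join("- " + s for s in snippets)
    # Normal mode: generate prose from the retrieved material.
    return synthesize(snippets)

links = ["https://example.org/a (overview)", "https://example.org/b (details)"]
first = answer("explain this in depth", links, budget_ok=False)                # links only
second = answer("please elaborate", links, budget_ok=False, last_reply=first)  # forced prose
print(first)
print(second)
```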
2. The Nature of the Command as a "System Override"
These prompts function similarly to how a user might nudge a conversation back on track. They change the implicit "mode" the AI is operating in:
Initial Degradation Mode: "Provide search results."
"Please Elaborate" Mode: "Synthesize the available information into denoscriptive text, even if the previous response was just links."
It forces the system to process the previously retrieved snippets internally and present them as prose, rather than just passing the links through to the user interface.
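The same idea with the implicit mode made explicit; the Mode names and trigger phrases below are illustrative assumptions:
```python
# Small sketch: the assistant's implicit "mode" as a tiny state machine.
from enum import Enum, auto

class Mode(Enum):
    PROSE = auto()       # synthesize a textual answer
    LIST_LINKS = auto()  # degraded: pass search results straight through

def next_mode(current, user_message):
    # A follow-up like "please elaborate" overrides the degraded default.
    if user_message.strip().lower() in {"please elaborate", "please try again"}:
        return Mode.PROSE
    return current

print(next_mode(Mode.LIST_LINKS, "please elaborate"))  # Mode.PROSE
```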
3. Contextual Cues and State Management
Even after degradation, the system often retains the previous search results internally for a short period within the current session's memory. When you use a prompt like "elaborate," the model accesses those already present data points and generates text from them.
The system is not necessarily conducting a new search but is re-utilizing the information it had ready from the previous step that it only displayed as links.
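A rough sketch of that reuse, assuming a simple in-memory session store and a placeholder fake_search standing in for the real retrieval module:
```python
# Sketch: retrieved snippets stay in session memory, so "elaborate" generates
# prose from them without issuing a new search.
session = {"last_snippets": []}

def fake_search(query):
    # Placeholder retrieval: returns link-style snippets for the query.
    return ["https://example.org/result1 (" + query + ")",
            "https://example.org/result2 (" + query + ")"]

def handle_turn(user_message):
    if user_message.strip().lower() == "please elaborate" and session["last_snippets"]:
        # No new search: reuse what is already cached from the previous step.
        return "Elaboration drawn from cached results:\n" + "\n".join(session["last_snippets"])
    snippets = fake_search(user_message)
    session["last_snippets"] = snippets             # kept for follow-up turns
    return "\n".join("- " + s for s in snippets)    # degraded, link-style reply

print(handle_turn("history of topic X"))
print(handle_turn("please elaborate"))
```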
Summary
The user is effectively changing the instructions mid-stream. The command "elaborate" signals that the user wants a descriptive, textual summary, forcing the AI out of its "list-of-links" default and back into "explanation" mode, allowing it to generate the multifaceted response you requested.
Forget Woke: the Left’s New Formula – and How to Fight It - Nikos Sotirakopulos YouTube
Love this guy!
He's followed a similar path to myself:
Followed the 'narrative' (not the mainstream, the one that we're fed).
Realised that narrative is full of holes.
Had brain broken.
Had to claw himself back with that new information, but no one will listen...
He's so on point with this...
https://youtube.com/watch?v=QrT46GkfFLI&si=ffWxlMgMstSo6zFT
YouTube
Forget Woke: the Left’s New Formula – and How to Fight It
What comes next after Woke for the Left? Why it's more dangerous, and the only chance we have to fight it.
00:00 Woke is dead, but populism thrives
02:16 Why post-Woke
03:40 Anti-capitalist populism
05:40 The trial balloon of Mamdani
06:41 Quietening down…
Sam Fisher (Data Drops) pinned «Forget Woke: the Left’s New Formula – and How to Fight It - Nikos Sotirakopulos YouTube Love this guy! He's followed a similar path to myself: Followed the 'narrative' (not the mainstream, the one that we're fed). Realised that narrative is full of holes.…»
A little game of 'Guess Who?'
Ok for a bit of fun and to give you an idea of just how complicated things get when researching these Bloodlines (and to give you all a bit of a sneaky headstart over everyone, before my next video on Monday night), try and figure out who this guy is pictured with all these foreign dignitaries?
Just his name and that of his wife should raise an eyebrow or 2 at least (maybe even 3 😂)?
Forwarded from Jade Helm 15 Rehash (ned)
AI responses may confabulate.
Besides temperature, several factors increase the tendency for a large language model (LLM) to confabulate (confidently produce false information as though it were true):
Data-Related Factors
Training Data Limitations: If the model is asked about topics outside its training data (e.g., very recent events or niche facts) or if the data contains errors, outdated information, or biases, it is likely to generate inaccuracies.
Data Quality/Bias: Training on low-quality, unverified, or biased internet data can cause the model to reproduce that questionable material as fact.
Prompt and Context-Related Factors
Vague or Complex Prompts: Prompts that are vague, ambiguous, or overly complex increase the model's need to "guess," leading to confabulation.
Lack of Context: Without sufficient relevant context provided in the prompt (e.g., using Retrieval-Augmented Generation or RAG to ground the response in verified data), the model must rely solely on its internal parameters, which can lead to made-up details.
Overly Long Context Windows: Simply providing a massive amount of context can "dilute" attention and introduce noise or irrelevant details, making it harder for the model to focus on the relevant information and increasing the chance of error.
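To illustrate the grounding point, here is a sketch of prompt construction with and without retrieved passages; build_prompt and its instruction wording are assumptions, not any specific product's template:
```python
# Sketch only: grounding the prompt in retrieved passages constrains the model
# to the supplied text, whereas an ungrounded prompt forces it to answer from
# its internal parameters alone.
def build_prompt(question, passages=None):
    if passages:
        context = "\n\n".join(passages)
        return ("Answer using ONLY the context below. "
                "If the context does not contain the answer, say you don't know.\n\n"
                "Context:\n" + context + "\n\nQuestion: " + question)
    # Ungrounded: nothing anchors the answer, so gaps get filled by guessing.
    return "Question: " + question

print(build_prompt("When was the report published?",
                   ["The report was published in May 2023."]))
print(build_prompt("When was the report published?"))
```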
Model Design and Behavior
Inherent Probabilistic Nature: LLMs predict the most statistically likely next word, not a fact-checked truth. This inherent mechanism can lead to plausible but incorrect answers, as the model lacks a built-in fact-checking system.
Optimization for Guessing: Models are often trained and optimized to provide a complete answer rather than admitting uncertainty. This rewards "guessing" when unsure, which is a core mechanism of confabulation.
Lack of Self-Awareness/Fact-Checking: The model doesn't self-assess its answers against external truth sources or express a "confidence score" for every statement it makes, leading to false information being presented with high confidence.
Task Demands: More demanding tasks, such as free-form question answering or summarization, generally carry a higher risk of hallucination than simpler, more constrained tasks.
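To make the "statistically likely next word" point concrete, here is a toy sketch of temperature-scaled sampling; the token scores are invented numbers, not real model output:
```python
# Toy sketch: softmax sampling over made-up next-token scores. The model picks
# by probability, not by truth, so a plausible-but-wrong token keeps a nonzero
# chance of being produced, and higher temperature flattens the distribution.
import math
import random

def sample(scores, temperature=1.0):
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

# "Paris" is the correct completion of "The capital of France is ...";
# "Lyon" is merely plausible, yet it still gets sampled some of the time.
scores = {"Paris": 2.0, "Lyon": 1.0, "Marseille": 0.5}
print([sample(scores, temperature=1.5) for _ in range(5)])
```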