Comatose woman woke before organ-harvesting surgery but co-ordinators 'pushed to operate anyway'
Danella Gallegos, who was 38 at the time, was moments away from having her organs removed when doctors in New Mexico saw her blinking and made the life-saving decision to abandon the procedure
An organ donation organisation pushed for the removal of a comatose woman's organs even after she showed signs of life, medics claim.
Danella Gallegos was homeless when she suffered an unspecified medical emergency that left her in a coma at Presbyterian Hospital in Albuquerque, New Mexico, in 2022.
Her family was devastated when they were told she would never recover, and they decided to donate her organs to save the life of someone else in need.
Preparations were made with New Mexico Donor Services, which says it serves two million people in the state by "connecting organ and tissue donations to the patients who need them".
But in the lead-up to the donation date, her family reported seeing tears forming in her eyes - which they took as signs of life. They told the donation co-ordinators, who brushed it off as simply a reflex.
On the day of the donation, her sister felt Danella move while holding her hand - which convinced her that her sister was still conscious.
Then, before her surgery, medics watched Danella blink on command - despite her supposedly being deep in a coma - leaving them stunned.
A New York Times report detailed how the pushy organ co-ordinator urged the doctors to give her morphine so they could continue with the procedure.
Defying the co-ordinator, the doctors halted the surgery - a decision that saved her life.
Danella went on to make a full recovery.
"I feel so fortunate," she said. Danella says the only thing she remembers from the coma is feeling a sense of fear.
One veteran intensive care nurse at Presbyterian said: "All they care about is getting organs. They’re so aggressive. It’s sickening."
It comes after a number of similar cases put the spotlight on the organ donation process in the US.
Remember this scary chart showing 20-30% declines in European travel to the US? Well, it turns out this was mostly a one-month anomaly, as travel was shifted from March to April. Sorry, my chart isn't as pretty as the FT's
At sundown last night in Clearwater, the Hulkster appeared in the sky.
He was laid to rest there today.
(by deejaysilver1)
Putin is meeting with the U.S. President’s special envoy Steve Witkoff in the Kremlin, according to a Kremlin statement
NEW - UK's intelligence services re-open a summer internship that white English applicants cannot apply for, as it is open exclusively to "Black, Asian, mixed heritage, other ethnic minorities" and "white other" candidates
It's really telling that Redditors are complaining that ChatGPT isn't glazing them like it used to
“Unpopular Opinion: Teacher AI use is already out of control and it's not ok”
My takes:
(1) AI for “discriminative” tasks: judging “truth” vs judging “value” — AI can be great at TRUTH judgements, which have a “true” or “false” answer, but is HORRIBLE at VALUE judgements, which have a “good” or “bad” answer
E.g. if you give the AI an objectively verifiable list of criteria to judge by, where each criterion is objectively true or false, with no questions of value (i.e. is this good or bad?) — then AI CAN do OK
And this is exactly what I’ve done for all the AIs used here — the points-awarding AI, the newsworthiness-checking AI, the image-explicitness-checking AI — every single one of them gets handed a list of explicit questions of truth to evaluate
BUT THE MOMENT YOU ASK IT ANY QUESTIONS OF VALUE, e.g. is this a good meme, or is this explicit, etc — absolute disaster, absolutely insanely bad, you cannot even imagine
So, if the teachers are asking questions of value when grading student work, which you can bet they are — absolute insanity
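To make (1) concrete, here is a minimal Python sketch of the "questions of truth" pattern. The criteria, the prompt wording, and the ask_llm helper are all hypothetical stand-ins for illustration, not the actual system described above:

# Sketch: decompose a value judgement ("is this a good meme?") into
# objectively checkable true/false questions for an LLM judge.
# `ask_llm` is a hypothetical helper wrapping whatever chat API you use;
# it takes a prompt string and returns the model's reply as a string.

TRUTH_CRITERIA = [
    "Does the image contain readable overlaid text? Answer only TRUE or FALSE.",
    "Does the text reference a specific recent event? Answer only TRUE or FALSE.",
    "Is the final line a punchline distinct from the setup? Answer only TRUE or FALSE.",
]

def judge(content: str, ask_llm) -> bool:
    """Accept/reject verdict built purely from truth questions."""
    answers = []
    for question in TRUTH_CRITERIA:
        reply = ask_llm(f"{question}\n\nCONTENT:\n{content}")
        answers.append(reply.strip().upper().startswith("TRUE"))
    # The value judgement ("accept or reject") lives in plain code,
    # not in the model: here, every criterion must hold.
    return all(answers)

The point of the design is that the model only ever answers verifiable true/false questions; the "good or bad" call is made by ordinary code aggregating those booleans.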
(2) AI for “generative” tasks of creating content, rather than judging it against some dividing boundary: “summarizing” vs “surprising” types of generations — AI can be great at “summarizing” generations, but horrible at “surprising” generations
Basically, as many have realized, and as even PG eventually realized — good writing has to focus on the surprising, as unsurprising writing is a total waste of time. This applies just as strongly to teaching.
If the AI is just summarizing some writing that already has the surprising stuff figured out — then the AI can nail it. But if you try to get the AI to come up with the surprising stuff itself — horrible disaster
Which is a lot of what this guy is talking about, with AI generating repetitive, useless teaching content — AI totally lacks the ability to identify where the surprising parts are, the parts that can shortcut readers into learning much faster. At least that’s a huge part of it. FYI jokes/memes are another type of content that critically relies on an element of surprise.
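And a hedged sketch of the safe side of (2): keeping the model strictly in "summarizing" mode, with a human supplying the surprising insights. The prompt wording and the ask_llm helper are hypothetical, same as above:

# Sketch: the LLM only compresses and rephrases material already in
# front of it; the surprising points are chosen by a human and pinned.

def rephrase_lesson(source_text: str, key_insights: list[str], ask_llm) -> str:
    """Safe generative use: compress human-curated material, invent nothing."""
    insights = "\n".join(f"- {point}" for point in key_insights)
    prompt = (
        "Rewrite the material below as a short lesson.\n"
        "Use ONLY facts present in the material; do not invent examples.\n"
        f"Keep these human-chosen key points prominent:\n{insights}\n\n"
        f"MATERIAL:\n{source_text}"
    )
    return ask_llm(prompt)

# The unsafe counterpart, per the argument above, is open-ended generation:
#   ask_llm("Write a surprising, engaging lesson about photosynthesis")
# i.e. asking the model to find the surprising parts itself.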
Here, in the upcoming AI systems, I’ve again sidestepped this problem by ONLY having AIs do the discriminative part, curating and pruning content — NOT the generative part, which again is ultimately left in the hands of humans, for now
— So, at least for today’s off-the-shelf LLMs:
+ Discriminative AI should only be used for questions of truth (true or false), never for questions of value (good or bad), with all value questions rephrased as questions of truth
+ Generative AI should only be used for non-surprising generations (write code fitting these specs, rephrase this already-good teaching material), never for generating content that requires an understanding of surprise (generating good teaching materials, generating good jokes) — or at least you cannot directly ask today’s AIs to generate these things
This is where these teachers went horribly wrong,
Lessons we’d already figured out long ago
BTW, reminder that teaching often ranks among the lowest-IQ professional fields, especially teaching of young kids
Even the teachers are starting to notice the field’s accelerating decline
The teaching field keeps being filled from the bottom of the barrel
(Well, it has been for many decades, but it’s even worse now)