EverythingScience
12.2K subscribers
469 photos
333 videos
28 files
4.3K links
Discover the best, curated science facts, news, discoveries, videos, and more!

Chat with us: @EverythingScienceChat
Contact: @DigitisedRealitySupport
Waving from orbit — Thomas Reiter’s historic hello from space! 👋🌍

30 years ago today, Thomas Reiter became the first ESA astronaut to perform a spacewalk during his 179-day EuroMir-95 mission.

🔗 esa.int/ESA_Multimedia…

Source: @esaspaceflight
@EverythingScience
🫡31👍1
DNA signaling cascades offer a better way to monitor drug therapy at home
Chemists at Université de Montréal have developed "signaling cascades" made with DNA molecules to report and quantify the concentration of various molecules in a drop of blood, all within five minutes.

Their findings, validated by experiments on mice, are published in the Journal of the American Chemical Society, and may aid efforts to build point-of-care devices for monitoring and optimizing the treatment of various diseases.

This result was achieved by a research group led by UdeM chemistry professor Alexis Vallée-Bélisle.

"One of the key factors in successfully treating various diseases is to provide and maintain a therapeutic drug dosage throughout treatment," he said. "Sub-optimal therapeutic exposure reduces efficiency and typically leads to drug resistance, while overexposure increases side effects."

Maintaining the right concentration of drugs in the blood remains, however, a major challenge in modern medicine. Since each patient has a distinct pharmacokinetic profile, the concentration of medications in their blood varies significantly. In chemotherapy, for example, many cancer patients do not get the optimal dosage of drugs, and few or no tests are currently rapid enough to flag this issue.
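
To make the dosing challenge concrete, here is a minimal sketch (not from the study) of a standard one-compartment pharmacokinetic model with an intravenous bolus dose; the dose, volume of distribution, and half-lives below are illustrative assumptions, and the point is only that the same dose leaves different patients with very different blood concentrations a few hours later.

```python
import math

def concentration(dose_mg, volume_L, half_life_h, t_h):
    """One-compartment IV bolus model: C(t) = (dose/V) * exp(-k*t),
    where k = ln(2) / half-life. All parameter values below are
    illustrative assumptions, not figures from the UdeM study."""
    k = math.log(2) / half_life_h          # elimination rate constant (1/h)
    c0 = dose_mg / volume_L                # initial concentration (mg/L)
    return c0 * math.exp(-k * t_h)

# Same 100 mg dose, hypothetical patients with different elimination half-lives
for label, half_life in [("fast metabolizer", 3.0), ("slow metabolizer", 9.0)]:
    c6 = concentration(dose_mg=100, volume_L=40, half_life_h=half_life, t_h=6)
    print(f"{label}: ~{c6:.2f} mg/L six hours after dosing")
```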

"Easy-to-perform tests could make therapeutic drug monitoring more widely available and enable more personalized treatments," said Vincent De Guire, a clinical biochemist at the UdeM-affiliated Maisonneuve-Rosemont Hospital and chair of the Working Group on Laboratory Errors and Patient Safety of the International Federation of Clinical Chemistry and Laboratory Medicine.

"A connected solution, similar to a glucometer in terms of portability, affordability, and accuracy, that would measure drug concentrations at the right time and transmit the results directly to the health care team, would ensure that patients receive the optimal dose that maximizes their chances of recovery," De Guire said in an independent assessment of the study.

Holder of a Canada Research Chair in Bioengineering and Bio-nanotechnology, Vallée-Bélisle has spent many years exploring how biological systems monitor the concentration of molecules in their surroundings in real time.

The breakthrough with this new technology came from observing how cells detect and quantify the concentration of molecules in their surroundings.

"Cells have developed nanoscale 'signaling cascades' made of biomolecules that are programmed to interact together to activate specific cellular activities in the presence of specific amounts of external stimuli or molecules," said the study's first author Guichi Zhu, a postdoctoral fellow at UdeM.

“Inspired by the modularity of nature’s signaling systems and by the ease with which they can evolve to detect novel molecular targets, we have developed similar DNA-based signaling cascades that can detect and quantify specific molecules via the generation of an easily measurable electrochemical signal,” she said.

Source: Phys.org
@EverythingScience
Engineers solve the sticky-cell problem in bioreactors and other industries
To help mitigate climate change, companies are using bioreactors to grow algae and other microorganisms that are hundreds of times more efficient at absorbing CO2 than trees. Meanwhile, in the pharmaceutical industry, cell culture is used to manufacture biologic drugs and other advanced treatments, including lifesaving gene and cell therapies.

Both processes are hampered by cells' tendency to stick to surfaces, which leads to a huge amount of waste and downtime for cleaning. A similar problem slows down biofuel production, interferes with biosensors and implants, and makes the food and beverage industry less efficient.

Now, MIT researchers have developed an approach for detaching cells from surfaces on demand, using electrochemically generated bubbles. In an open-access paper published in Science Advances, the researchers demonstrated their approach in a lab prototype and showed it could work across a range of cells and surfaces without harming the cells.

"We wanted to develop a technology that could be high-throughput and plug-and-play, and that would allow cells to attach and detach on demand to improve the workflow in these industrial processes," says Professor Kripa Varanasi, senior author of the study. "This is a fundamental issue with cells, and we've solved it with a process that can scale. It lends itself to many different applications."

Source: Phys.org
@EverythingScience
👍2
These Tiny Robots Can Swarm, Adapt, and Heal Themselves
Nature’s Blueprint for Robot Swarms
Animals such as bats, whales, and insects have long relied on sound to communicate and find their way. Drawing inspiration from this, an international group of scientists has developed a model for tiny robots that use sound waves to move and work together in large, coordinated swarms that behave almost intelligently. According to team leader Igor Aronson, Huck Chair Professor of Biomedical Engineering, Chemistry, and Mathematics at Penn State, these robotic collectives could eventually take on challenging missions like exploring disaster areas, cleaning polluted environments, or performing medical procedures inside the human body.

“Picture swarms of bees or midges,” Aronson said. “They move, that creates sound, and the sound keeps them cohesive, many individuals acting as one.”

The team’s findings were published in the journal Physical Review X.

Self-Organizing Machines With a Mission
Because these miniature, sound-emitting micromachines can organize themselves, they are capable of navigating confined spaces and reassembling if they are disrupted. This collective or “emergent” intelligence could make them valuable for cleaning contaminated environments, Aronson explained.

In addition to environmental applications, the robotic swarms might one day operate inside the body to deliver medication directly to targeted sites. Their ability to sense environmental changes and “self-heal” allows them to remain functional even after being separated, which could be particularly advantageous for detecting threats or serving as advanced sensors, Aronson said.

Source: SciTechDaily
@EverythingScience
👍1
Carbon dioxide in the atmosphere up by record amount in 2024: UN
The increase in the amount of carbon dioxide in the atmosphere last year was the biggest ever recorded, the United Nations said Wednesday, calling for urgent action to slash emissions.

Levels of the three main greenhouse gases—the climate-warming CO2, methane and nitrous oxide—all increased yet again in 2024, with each setting new record highs, the UN's weather and climate agency said.

The World Meteorological Organization said the increase in CO2 levels in the atmosphere from 2023 to 2024 marked the biggest one-year jump since records began in 1957.

Continued fossil CO2 emissions, greater emissions from wildfires, and a troubling reduction in absorption by land and sea all drove the increase, the WMO said.

Wednesday's update, which comes ahead of the November 10-21 COP30 UN climate summit in Belem, Brazil, focused exclusively on concentrations of greenhouse gases in the atmosphere.

A separate UN report, out next month, will detail shifts in emissions of the gases, but those numbers are also expected to rise, as they have every year with the world continuing to burn more oil, gas and coal.

This defies commitments made under the 2015 Paris Agreement to cap global warming at "well below" 2C above average levels measured between 1850 and 1900—and 1.5C if possible.

2024 was the warmest year ever recorded.

Feedback loop
The WMO voiced "significant concern" that the land and oceans were becoming unable to soak up CO2, leaving the greenhouse gas in the atmosphere.

It warned that the planet could be witnessing a so-called "vicious cycle" of climate feedback—whereby increasing greenhouse gas emissions fuel rising temperatures and trigger wildfires that release more CO2, while warmer oceans cannot absorb as much CO2 from the air.

WMO senior scientific officer Oksana Tarasova said feedback may eventually push natural systems to a tipping point—for example, melting permafrost, leading to further emissions.

"Our actions should be towards the side of emission reduction as fast as possible if we don't want to see the domino effect," she told reporters.

Given CO2's role in driving climate change, "achieving net-zero anthropogenic CO2 emissions must be the focus of climate action", according to the report.

Source: Phys.org
@EverythingScience
👏3😡2🔥1😱1
Mysterious glow in Milky Way could be evidence of dark matter
Johns Hopkins researchers may have identified a compelling clue in the ongoing hunt to prove the existence of dark matter. A mysterious diffuse glow of gamma rays near the center of the Milky Way has stumped researchers for decades, as they've tried to discern whether the light comes from colliding particles of dark matter or quickly spinning neutron stars.

It turns out that both theories are equally likely, according to research published in the journal Physical Review Letters.

If excess gamma light is not from dying stars, it could become the first proof that dark matter exists.

"Dark matter dominates the universe and holds galaxies together. It's extremely consequential and we're desperately thinking all the time of ideas as to how we could detect it," said co-author Joseph Silk, a professor of physics and astronomy at Johns Hopkins and a researcher at the Institut d'Astrophysique de Paris and Sorbonne University. "Gamma rays, and specifically the excess light we're observing at the center of our galaxy, could be our first clue."

Silk and an international team of researchers used supercomputers to create maps of where dark matter should be located in the Milky Way, taking into account for the first time the history of how the galaxy formed.

Today, the Milky Way is a relatively closed system, without materials coming in or going out of it. But this hasn't always been the case. During the first billion years, many smaller galaxy-like systems made of dark matter and other materials entered and became the building blocks of the young Milky Way. As dark matter particles gravitated toward the center of the galaxy and clustered, the number of dark matter collisions increased.

When the researchers factored in more realistic collisions, their simulated maps matched actual gamma ray maps taken by the Fermi Gamma-ray Space Telescope.

These matching maps round out a triad of evidence that suggests excess gamma rays in the center of the Milky Way could originate with dark matter. Gamma rays coming from dark matter particle collisions would produce the same signal and have the same properties as those observed in the real world, the researchers said—though it's not definitive proof.

Source: Phys.org
@EverythingScience
👍3
How generative AI could change how we think and speak
There's no doubt that artificial intelligence (AI) will have a profound impact on our economies, work and lifestyle. But could this technology also shape the way we think and speak?

AI can be used to draft essays and solve problems in mere seconds, tasks that might otherwise take us minutes or hours. When we shift to an over-reliance on such tools, we arguably fail to exercise key skills such as critical thinking and our ability to use language creatively. Precedents from psychology and neuroscience research hint that we should take the possibility seriously.

There are several precedents for technology reconfiguring our minds, rather than just assisting them. Research shows that people who rely on GPS tend to lose part of their ability to form mental maps.

Before the advent of satellite navigation, London taxi drivers memorized hundreds of streets, and as a result they developed enlarged hippocampi. The hippocampus is the brain region associated with spatial memory.

In one of his most striking studies, the Russian psychologist Lev Vygotsky examined patients who suffered from aphasia, a disorder that impairs the ability to understand or produce speech.

When asked to say "snow is black" or to misname a color, they could not. Their minds resisted any separation between words and things. Vygotsky saw this as the loss of a key ability: to use language as an instrument for thinking creatively, and going beyond what is given to us.

Could an over-reliance on AI produce similar problems? When language comes pre-packaged from screens, feeds, or AI systems, the link between thought and speech may begin to wither.

In education, students are using generative AI to compose essays, summarize books, and solve problems in seconds. Within an academic culture already shaped by competition, performance metrics, and quick results, such tools promise efficiency at the cost of reflection.

Many teachers will recognize those students who produce eloquent, grammatically flawless texts but reveal little understanding of what they have written. This represents the quiet erosion of thinking as a creative activity.

Source: Phys.org
@EverythingScience
👍3
Two Spacecraft To Fly Through Comet 3I/ATLAS's Ion Tail – Will They Be Able To Catch Something?
Comets tend to have two tails. One is known as the dust tail, and it tends to be more curved, while the other, known as the ion or plasma tail, is straighter, pointing away from the Sun. The tails can also be long, with ion tails often extending for hundreds of millions of kilometers. We have not seen the ion tail for interstellar comet 3I/ATLAS, but if it’s there, two spacecraft might soon cross it.

The spacecraft in question are NASA’s Europa Clipper, going to the eponymous icy moon of Jupiter, and the European Space Agency’s Hera, which is travelling to the binary asteroid Didymos and Dimorphos, the site of the first-ever planetary defense demonstration when the DART mission purposely hit Dimorphos, shifting its orbit.

According to a new paper, both spacecraft will be aligned in such a way that they could be entering the comet’s ion tail in the coming weeks. It is an excellent time to do so; the interstellar comet is not going to get very close to the Sun (203 million kilometers; 126 million miles), but that minimum distance will also mark a peak in activity. That closest approach happens on October 29, and both spacecraft will be within the possible location of the ion tail.
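
As a rough illustration of the geometry involved (not the calculation from the paper), the sketch below assumes the ion tail is a straight ray pointing radially away from the Sun and checks whether a spacecraft, expressed relative to the comet, sits within a small angle of that anti-sunward direction; the positions, angle threshold, and tail length are placeholder values.

```python
import numpy as np

def in_ion_tail(sun_pos, comet_pos, craft_pos, max_angle_deg=5.0, max_len_km=3e8):
    """Crude test of whether a spacecraft sits near a comet's ion tail.
    Assumes the tail is a straight ray pointing anti-sunward from the comet.
    This is a geometric toy, not the analysis in the published paper."""
    tail_dir = comet_pos - sun_pos                    # anti-sunward direction
    tail_dir = tail_dir / np.linalg.norm(tail_dir)
    rel = craft_pos - comet_pos                       # spacecraft relative to comet
    dist = np.linalg.norm(rel)
    cos_angle = np.dot(rel, tail_dir) / dist
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle_deg <= max_angle_deg and dist <= max_len_km

# Placeholder heliocentric positions in km (not real ephemerides)
sun = np.array([0.0, 0.0, 0.0])
comet = np.array([2.03e8, 0.0, 0.0])                  # ~203 million km from the Sun
spacecraft = np.array([3.5e8, 5.0e6, 0.0])            # hypothetical position down-tail
print(in_ion_tail(sun, comet, spacecraft))            # True if near the tail ray
```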

Hera will be in the right position first, from October 25 to November 1, right at the peak. While this is exciting, and we know that ESA is always ready to jump on an opportunity when it comes to 3I/ATLAS, the probe does not have the right suite of instruments to study the ion tail.

“My understanding is that because Hera is not equipped with any in-situ instruments, there is no opportunity to measure any properties of the comet’s tail as it passes through,” an ESA spokesperson told IFLScience. They assured us that the team will look at the paper, and we will be informed if there are any new developments.

Europa Clipper, on the other hand, has the right instruments to try, and if the solar wind conditions are favorable between October 30 and November 6, “Europa Clipper has a rare opportunity to sample an interstellar object’s tail,” write the authors. Will Europa Clipper conduct these observations? That remains unclear.

We got in touch with NASA’s Jet Propulsion Laboratory (JPL), which runs the mission, but at the time of publication we had not received a reply. This might be due to the current government shutdown, though JPL is also undergoing an internal restructuring and firing 550 people, which could also affect whether the mission can actually be used for this investigation.

Source: IFLScience
@EverythingScience
Most users cannot identify AI racial bias—even in training data
When recognizing faces and emotions, artificial intelligence (AI) can be biased, like classifying white people as happier than people from other racial backgrounds. This happens because the data used to train the AI contained a disproportionate number of happy white faces, leading it to correlate race with emotional expression.
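
As a toy illustration of that mechanism (not the study’s model or data), the sketch below trains a classifier on synthetic faces in which one group is mostly labeled happy and the other mostly sad, then checks how each group’s expressions are read on a balanced test set; every number in it is an arbitrary assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_faces(n, p_happy, group_id):
    """Synthetic 'faces': one noisy expression feature plus a group indicator.
    p_happy controls how often this group is labeled happy in the data."""
    happy = rng.random(n) < p_happy
    expression = happy.astype(float) + rng.normal(0, 1.2, n)  # weak, noisy cue
    group = np.full(n, group_id)
    return np.column_stack([expression, group]), happy.astype(int)

# Skewed training data: group 0 is mostly happy, group 1 is mostly sad
X0, y0 = make_faces(900, p_happy=0.9, group_id=0)
X1, y1 = make_faces(100, p_happy=0.1, group_id=1)
model = LogisticRegression().fit(np.vstack([X0, X1]), np.concatenate([y0, y1]))

# Balanced test data: both groups are happy half the time
for gid in (0, 1):
    Xt, yt = make_faces(2000, p_happy=0.5, group_id=gid)
    pred = model.predict(Xt)
    err_happy = np.mean(pred[yt == 1] == 0)   # happy faces read as sad
    err_sad = np.mean(pred[yt == 0] == 1)     # sad faces read as happy
    print(f"group {gid}: happy misread {err_happy:.2f}, sad misread {err_sad:.2f}")
# The classifier leans on group membership as a shortcut, so group 0's sad faces
# and group 1's happy faces are the ones misread most often.
```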

In a recent study, published in Media Psychology, researchers asked users to assess such skewed training data, but most users didn't notice the bias—unless they were in the negatively portrayed group.

The study was designed to examine whether laypersons understand that unrepresentative data used to train AI systems can result in biased performance. The scholars, who have been studying this issue for five years, said AI systems should be trained so they "work for everyone," and produce outcomes that are diverse and representative for all groups, not just one majority group. According to the researchers, that includes understanding what AI is learning from unanticipated correlations in the training data—or the datasets fed into the system to teach it how it is expected to perform in the future.

"In the case of this study, AI seems to have learned that race is an important criterion for determining whether a face is happy or sad," said senior author S. Shyam Sundar, Evan Pugh University Professor and director of the Center for Socially Responsible Artificial Intelligence at Penn State. "Even though we don't mean for it to learn that."

The question is whether humans can recognize this bias in the training data. According to the researchers, most participants in their experiments only started to notice bias when the AI showed biased performance, such as misclassifying emotions for Black individuals but doing a good job of classifying the emotions expressed by white individuals. Black participants were more likely to suspect that there was an issue, especially when the training data over-represented their own group as expressing the negative emotion (sadness).

"In one of the experiment scenarios—which featured racially biased AI performance—the system failed to accurately classify the facial expression of the images from minority groups," said lead author Cheng "Chris" Chen, an assistant professor of emerging media and technology at Oregon State University who earned her doctorate in mass communications from the Donald P. Bellisario College of Communications at Penn State. "That is what we mean by biased performance in an AI system where the system favors the dominant group in its classification."

Source: Phys.org
@EverythingScience
#DidYouKnow Chameleons have prehensile tails that help them grip and wrap around branches while climbing. Unlike many other lizards, their tails cannot regenerate once broken off.

📸: Ignacio Palacios

Source: @AnimalPlanet
@EverythingScience
A Tiny Peptide Can Freeze Parkinson's Proteins Before They Turn Toxic
As Parkinson's disease progresses, harmful protein clumps build up in the brain, blocking communications between neurons and killing them off – but what if we could prevent these clusters from forming?

Researchers led by a team from the University of Bath in the UK have achieved just that in a basic worm model of Parkinson's. They engineered a peptide, a small amino acid chain, to essentially keep a protein called alpha-synuclein locked in its healthy shape. This prevented the misfolding that leads to clumps.

The potential treatment checks several important boxes: it's durable, and it can survive inside cells without causing any toxic side effects.

"This opens an exciting path towards new therapies for Parkinson's and related diseases, where treatment options remain extremely limited," says biochemist Jody Mason, from the University of Bath.

The study follows on from previous work by some of the same researchers, which identified part of the alpha-synuclein protein that may stop it building to dangerous levels. This key part or fragment acts like a guide for the protein to follow.

Source: ScienceAlert
@EverythingScience
2👏2
Genetic Therapy Cuts Cholesterol by Nearly 50% in Groundbreaking Study
When cholesterol levels in the blood rise too high, a condition known as hypercholesterolemia can develop, damaging arteries and threatening heart health. Researchers from the University of Barcelona and the University of Oregon have now unveiled a promising new therapy that helps control cholesterol levels and offers fresh possibilities for combating atherosclerosis, a disease linked to the buildup of fatty plaques in artery walls.

The team developed a method to block the activity of PCSK9, a protein that plays a crucial role in regulating the amount of low-density lipoprotein cholesterol (LDL-C) in the bloodstream. Using specially designed molecules called polypurine hairpins (PPRH), the technique boosts the removal of cholesterol by cells and prevents it from accumulating in arteries, without the unwanted side effects often associated with statin medications.

Source: SciTechDaily
@EverythingScience
👍1
New Cancer Therapy Smuggles Viruses Past Immune Defenses
Scientists at Columbia Engineering have developed a new cancer treatment that teams up bacteria and viruses to fight tumors. In findings published in Nature Biomedical Engineering, the Synthetic Biological Systems Lab demonstrated a method in which a virus is concealed inside a bacterium that naturally seeks out tumors. This allows the virus to evade the body’s immune defenses and activate once it reaches the cancer site.

The system takes advantage of each microbe’s strengths: bacteria’s ability to locate and invade tumors and viruses’ ability to infect and destroy cancer cells. The research, led by Tal Danino, an associate professor of biomedical engineering at Columbia Engineering, produced a platform named CAPPSID (short for Coordinated Activity of Prokaryote and Picornavirus for Safe Intracellular Delivery). The team collaborated with Charles M. Rice, a virology expert from The Rockefeller University.

Engineering Microbe Cooperation
“We aimed to enhance bacterial cancer therapy by enabling the bacteria to deliver and activate a therapeutic virus directly inside tumor cells, while engineering safeguards to limit viral spread outside the tumor,” says co-lead author Jonathan Pabón, an MD/PhD candidate at Columbia.

The scientists believe their mouse-based experiments mark the first instance of intentionally engineering bacteria and viruses to work together against cancer.

Source: SciTechDaily
@EverythingScience
🔥21
Is This the End of the Silicon Era? Scientists Unveil World’s First 2D Computer
Silicon has long been the foundation of semiconductor technology that powers devices such as smartphones, computers, and electric vehicles. However, its dominance may be waning, according to a research team led by scientists at Penn State.

For the first time, the group successfully built a functioning computer using two-dimensional (2D) materials, substances only one atom thick that maintain their properties even at that extreme scale, unlike silicon. The computer they developed is capable of performing basic operations, signaling a major shift in materials used for electronics.

The findings, published in Nature, mark a significant advancement toward creating thinner, faster, and more energy-efficient electronic systems, the researchers explained. The team developed a computer based on complementary metal-oxide-semiconductor (CMOS) technology, the core technology found in nearly all modern electronic devices, without using silicon.

Instead, they combined two distinct 2D materials to form the necessary transistors that regulate electric current in CMOS circuits: molybdenum disulfide for the n-type transistors and tungsten diselenide for the p-type transistors.

“Silicon has driven remarkable advances in electronics for decades by enabling continuous miniaturization of field-effect transistors (FETs),” said Saptarshi Das, the Ackley Professor of Engineering and professor of engineering science and mechanics at Penn State, who led the research. FETs control current flow using an electric field, which is produced when a voltage is applied. “However, as silicon devices shrink, their performance begins to degrade. Two-dimensional materials, by contrast, maintain their exceptional electronic properties at atomic thickness, offering a promising path forward.”
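
As a conceptual sketch of why CMOS needs both transistor flavors (a logic-level toy, not a model of the Penn State device), the code below treats the n-type and p-type transistors as ideal complementary switches, standing in for the molybdenum disulfide and tungsten diselenide devices respectively, and evaluates an inverter and a NAND gate.

```python
def nmos_on(gate: int) -> bool:
    """Idealized n-type transistor (e.g. MoS2 here): conducts when the gate is high."""
    return gate == 1

def pmos_on(gate: int) -> bool:
    """Idealized p-type transistor (e.g. WSe2 here): conducts when the gate is low."""
    return gate == 0

def cmos_inverter(a: int) -> int:
    """Pull-up p-type to VDD, pull-down n-type to ground.
    Exactly one path conducts for any input, so no static current flows."""
    return 1 if pmos_on(a) else 0

def cmos_nand(a: int, b: int) -> int:
    """Two p-type transistors in parallel form the pull-up network,
    two n-type transistors in series form the pull-down network."""
    pull_up = pmos_on(a) or pmos_on(b)       # any low input pulls the output high
    pull_down = nmos_on(a) and nmos_on(b)    # both inputs high pulls the output low
    assert pull_up != pull_down              # complementary networks never fight
    return 1 if pull_up else 0

for a in (0, 1):
    print(f"NOT {a} -> {cmos_inverter(a)}")
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} NAND {b} -> {cmos_nand(a, b)}")
```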

Source: SciTechDaily
@EverythingScience
🔥1
In a surprising discovery, scientists find tiny loops in the genomes of dividing cells
Before cells can divide, they first need to replicate all of their chromosomes, so that each of the daughter cells can receive a full set of genetic material. Until now, scientists had believed that as division occurs, the genome loses the distinctive 3D internal structure that it typically forms.

Once division is complete, it was thought, the genome gradually regains that complex, globular structure, which plays an essential role in controlling which genes are turned on in a given cell.

However, a new study from MIT shows that in fact, this picture is not fully accurate. Using a higher-resolution genome mapping technique, the research team discovered that small 3D loops connecting regulatory elements and genes persist in the genome during cell division, or mitosis.

The study has been published in Nature Structural & Molecular Biology.

"This study really helps to clarify how we should think about mitosis. In the past, mitosis was thought of as a blank slate, with no trannoscription and no structure related to gene activity. And we now know that that's not quite the case," says Anders Sejr Hansen, an associate professor of biological engineering at MIT. "What we see is that there's always structure. It never goes away."

The researchers also discovered that these regulatory loops appear to strengthen when chromosomes become more compact in preparation for cell division. This compaction brings genetic regulatory elements closer together and encourages them to stick together. This may help cells "remember" interactions present in one cell cycle and carry them into the next one.

"The findings help to bridge the structure of the genome to its function in managing how genes are turned on and off, which has been an outstanding challenge in the field for decades," says Viraat Goel Ph.D. '25, the lead author of the study.

Source: Phys.org
@EverythingScience
👍1
Experimental Nanoparticle “Super-Vaccines” Stop Breast, Pancreatic, And Skin Cancers In Their Tracks
A nanoparticle vaccine has shown great promise in preventing three types of cancer in mice, as well as stopping tumors from spreading when the animals were exposed to cancerous cells.

Cancer vaccines have moved from the sci-fi dream realm into actual scientific possibility within just a few short decades. We’re not just talking about the HPV vaccine, incredible though its success has been at preventing cases of cervical cancer. A vaccine against a virus, albeit one that causes cancer, is easier to conceptualize – we get vaccinated against tons of other viruses, after all. 

But vaccinating against a non-infectious disease like cancer, with all its complex causes and different presentations, is much harder to wrap your head around – making this latest study perhaps even more impressive. 

Researchers led by a team at the University of Massachusetts Amherst have developed a nanoparticle-based vaccine that has previously been shown to shrink and clear cancerous tumors in mice. Now, they’ve demonstrated it can also work to prevent three types of cancer: pancreatic cancer, melanoma, and triple-negative breast cancer. 

Source: IFLScience
@EverythingScience
Horses became gentle and easy to ride thanks to two gene mutations
Horses had a huge impact on the success of many human societies. Now, scientists have found two key gene variants that helped pave the way for that equine role in human history. The pair made horses tamer and more rideable, researchers now report.

Ancient horse DNA suggests modern domesticated horses came from southwestern Russia more than 4,200 years ago. This research, published in 2021, revealed where and when humans had domesticated the animals. Ludovic Orlando led that study. A molecular archaeologist, he works at the Centre for Anthropobiology and Genomics. That’s in Toulouse, France.

What that work hadn’t shown was precisely what genetic changes in horses — mutations — might have led to these new traits.

Orlando and a team of scientists from China and Switzerland have now done that. They analyzed horse genomes, the full set of genetic instructions making up their DNA. In all, they compared the genomes of 71 horses from a range of breeds and time periods.

The team focused on 266 places in the genomes. From these, nine genes showed strong signatures of having been selected, or altered. That suggests the traits these genes produced in the horses may have been targeted by human breeders.

Two of these genes appear to have been heavily selected very early in horse taming.

Source: SN Explores
@EverythingScience
👍1
Identical Twins Can Have Significant IQ Differences, Study Reveals
Identical twins who were raised apart may have IQ differences similar to those of total strangers, according to new research. The findings suggest that variations in IQ may be less about genetics and more about schooling.

The heartbreaking separation of twin siblings is a rare occurrence, and only nine large group studies have been published to date.

In the past, researchers have concluded that identical twins raised apart have many matching traits, including similar IQs, suggesting that IQ (a sign of intelligence) is largely determined by nature, not nurture.

Not so fast, argue cognitive neuroscientist Jared Horvath and developmental researcher Katie Fabricant. These two have crunched the numbers again, and this time, they've included a key overlooked factor: schooling.

When the researchers divided 87 twin-pairs into groups based on similar and dissimilar schooling backgrounds, they found IQ differences across the spectrum.

The gaps in IQ scores grew in tandem with educational differences, the authors say, "enough to transcend specific teachers or peer groups."

Twins that were raised apart and who went to significantly different schools showed IQ patterns more similar to strangers (a roughly 15-point difference).

There were only 10 twin-pairs in the study with school experiences that met suitable criteria, making for a small sample size that places limits on the study's conclusions.

Source: ScienceAlert
@EverythingScience
'This moves the timeline forward significantly': Quantum computing breakthrough could slash pesky errors by up to 100 times
Researchers have discovered a way to speed up quantum error correction (QEC) by a factor of up to 100 — a leap that could significantly shorten the time it takes quantum computers to solve complex problems.

The technique, called algorithmic fault tolerance (AFT), restructures quantum algorithms so they can detect and correct errors on the fly, rather than pausing to run checks at fixed intervals.

In simulations, AFT reduced the time and computational effort spent on error correction by up to 100 times while still maintaining accuracy, according to scientists at QuEra. The results, published Sept. 24 in the journal Nature, were based on tests run on a simulated neutral-atom quantum computer.
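
The published scheme runs on a neutral-atom quantum computer and is far more involved, but the basic idea behind any error-correcting code, adding redundancy so that errors can be detected and undone, can be shown with a classical three-bit repetition code; the sketch below is that generic toy, not QuEra's algorithmic fault tolerance.

```python
import random

def encode(bit: int) -> list[int]:
    """Protect one logical bit by repeating it across three physical bits."""
    return [bit, bit, bit]

def noisy_channel(bits: list[int], flip_prob: float) -> list[int]:
    """Each physical bit is flipped independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote corrects any single bit flip."""
    return 1 if sum(bits) >= 2 else 0

random.seed(1)
trials, flip_prob = 100_000, 0.05
raw_errors = sum(noisy_channel([0], flip_prob)[0] != 0 for _ in range(trials))
coded_errors = sum(decode(noisy_channel(encode(0), flip_prob)) != 0 for _ in range(trials))
print(f"unprotected error rate:     {raw_errors / trials:.4f}")    # ~0.05
print(f"repetition-code error rate: {coded_errors / trials:.4f}")  # ~0.007
```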

Source: Live Science
@EverythingScience
AI 'workslop' is creating unnecessary extra work. Here's how we can stop it
Have you ever used artificial intelligence (AI) in your job without double-checking the quality or accuracy of its output? If so, you wouldn't be the only one.

Our global research shows a staggering two-thirds (66%) of employees who use AI at work have relied on AI output without evaluating it.

This can create a lot of extra work for others in identifying and correcting errors, not to mention reputational hits. Just this week, consulting firm Deloitte Australia formally apologized after an A$440,000 report prepared for the federal government was found to contain multiple AI-generated errors.

Against this backdrop, the term "workslop" has entered the conversation. Popularized in a recent Harvard Business Review article, it refers to AI-generated content that looks good but "lacks the substance to meaningfully advance a given task."

Beyond wasting time, workslop also corrodes collaboration and trust. But AI use doesn't have to be this way. When applied to the right tasks, with appropriate human collaboration and oversight, AI can enhance performance. We all have a role to play in getting this right.

The rise of AI-generated 'workslop'
According to a recent survey reported in the Harvard Business Review article, 40% of US workers have received workslop from their peers in the past month.

The survey's research team from BetterUp Labs and Stanford Social Media Lab found on average, each instance took recipients almost two hours to resolve, which they estimated would result in US$9 million (about A$13.8 million) per year in lost productivity for a 10,000-person firm.
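
The per-firm figure combines prevalence, time lost, and labor cost; the back-of-envelope sketch below shows the shape of that arithmetic with deliberately assumed parameter values, since the survey's exact inputs are not given here, and it is not meant to reproduce the published estimate.

```python
def annual_workslop_cost(headcount, share_receiving, incidents_per_month,
                         hours_per_incident, hourly_cost):
    """Back-of-envelope estimate of yearly productivity lost to 'workslop'.
    The default values used below are illustrative assumptions, not the
    BetterUp Labs / Stanford survey's actual inputs."""
    affected = headcount * share_receiving
    hours_per_year = affected * incidents_per_month * 12 * hours_per_incident
    return hours_per_year * hourly_cost

cost = annual_workslop_cost(
    headcount=10_000,          # firm size used in the article
    share_receiving=0.40,      # share who received workslop in the past month
    incidents_per_month=1,     # assumed frequency (not reported here)
    hours_per_incident=2,      # "almost two hours to resolve"
    hourly_cost=60,            # assumed fully loaded hourly labor cost (USD)
)
print(f"~${cost / 1e6:.1f} million per year")   # ~$5.8M with these assumptions
```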

Those who had received workslop reported annoyance and confusion, with many perceiving the person who had sent it to them as less reliable, creative, and trustworthy. This mirrors prior findings that there can be trust penalties to using AI.

Invisible AI, visible costs
These findings align with our own recent research on AI use at work. In a representative survey of 32,352 workers across 47 countries, we found complacent over-reliance on AI and covert use of the technology are common.

While many employees in our study reported improvements in efficiency or innovation, more than a quarter said AI had increased workload, pressure, and time on mundane tasks. Half said they use AI instead of collaborating with colleagues, raising concerns that collaboration will suffer.

Making matters worse, many employees hide their AI use; 61% avoided revealing when they had used AI and 55% passed off AI-generated material as their own. This lack of transparency makes it challenging to identify and correct AI-driven errors.

What you can do to reduce workslop
Without guidance, AI can generate low-value, error-prone work that creates busywork for others. So, how can we curb workslop to better realize AI's benefits?
If you're an employee, three simple steps can help.

1. Start by asking, "Is AI the best way to do this task?" Our research suggests this is a question many users skip. If you can't explain or defend the output, don't use it.
2. If you proceed, verify and work with AI output like an editor; check facts, test code, and tailor output to the context and audience.
3. When the stakes are high, be transparent about how you used AI and what you checked to signal rigor and avoid being perceived as incompetent or untrustworthy.
Source: Phys.org
@EverythingScience
👍1