Latest from Live Science

'Artificial intelligence' myths have existed for centuries – from the ancient Greeks to a pope's chatbot

It seems the AI hype has turned into an AI bubble. There have been many bubbles before, from the tulip mania of the 17th century to the derivatives bubble of the 21st century. For many commentators, the most relevant precedent today is the dotcom bubble of the 1990s. Back then, a new technology (the World Wide Web) unleashed a wave of "irrational exuberance." Investors poured billions into any company with ".com" in its name.

Three decades later, another new technology has unleashed another wave of exuberance. Investors are pouring billions into any company with "AI" in its name. But there is a crucial difference between these two bubbles, which isn't always recognised. The World Wide Web existed. It was real. General Artificial Intelligence does not exist, and no one knows if or when it ever will.

In February, the CEO of OpenAI, Sam Altman, wrote on his blog that the very latest systems have only just started to "point towards" AI in its "general" sense. OpenAI may market its products as "AIs," but they are merely statistical data-crunchers, rather than "intelligences" in the sense that human beings are intelligent.

So why are investors so keen to give money to the people selling AI systems? One reason might be that AI is a mythical technology. I don't mean it is a lie. I mean it evokes a powerful, foundational story of Western culture about human powers of creation.

Perhaps investors are willing to believe AI is just around the corner because it taps into myths that are deeply ingrained in their imaginations.

The myth of Prometheus

The most relevant myth for AI is the Ancient Greek myth of Prometheus.

There are many versions of this myth, but the most famous are found in Hesiod's poems Theogony and Works and Days, and in the play Prometheus Bound, traditionally attributed to Aeschylus.

Prometheus was a Titan, a god in the Ancient Greek pantheon. He was also a criminal who stole fire from Hephaestus, the blacksmith god. Hiding the fire in a stalk of fennel, Prometheus came to earth and gave it to humankind. As punishment, he was chained to a mountain, where an eagle visited every day to eat his liver.

Prometheus' gift was not simply the gift of fire; it was the gift of intelligence. In Prometheus Bound, he declares that before his gift humans saw without seeing and heard without hearing. After his gift, humans could write, build houses, read the stars, perform mathematics, domesticate animals, construct ships, invent medicines, interpret dreams and give proper offerings to the gods.

The myth of Prometheus is a creation story with a difference. In the Hebrew Bible, God does not give Adam the power to create life. But Prometheus gives (some of) the gods' creative power to humankind.

Hesiod indicates this aspect of the myth in Theogony. In that poem, Zeus not only punishes Prometheus for the theft of fire; he punishes humankind as well. He orders Hephaestus to fire up his forge and construct the first woman, Pandora, who unleashes evil on the world.

The fire that Hephaestus uses to make Pandora is the same fire that Prometheus has given humankind.

In this 18th-century engraving, from Alcuni Monumenti del Museo Carrafa (Naples, 1778), Prometheus constructs the first man. (Image credit: Unknown author, Public domain, via Wikimedia Commons)

The Greeks proposed the idea that humans are a form of artificial intelligence. Prometheus and Hephaestus use technology to manufacture men and women. As historian Adrienne Mayor reveals in her book Gods and Robots, the ancients often depicted Prometheus as a craftsman, using ordinary tools to create human beings in an ordinary workshop.

If Prometheus gave us the fire of the gods, it would seem to follow that we can use this fire to make our own intelligent beings. Such stories abound in Ancient Greek literature, from the inventor Daedalus, who created statues that came to life, to the witch Medea, who could restore youth and potency with her cunning drugs. Greek inventors also constructed mechanical computers for astronomy and remarkable moving figures powered by gravity, water and air.

The Pope and the chatbot

Some 2,700 years have passed since Hesiod first wrote down the story of Prometheus. In the ensuing centuries, the myth has been endlessly retold, especially since the publication of Mary Shelley's Frankenstein; or, The Modern Prometheus in 1818.

But the myth is not always told as fiction. Here are two historical examples where the myth of Prometheus seemed to come true.

Gerbert of Aurillac was the Prometheus of the 10th century. He was born in the early 940s CE, went to school at Aurillac Abbey, and became a monk himself. He proceeded to master every known branch of learning. In the year 999, he was elected Pope, taking the pontifical name Sylvester II. He died in 1003.

Rumours about Gerbert spread wildly across Europe. Within a century of his death, his life had already become legend. One of the most famous legends, and the most pertinent in our age of AI hype, is that of Gerbert's "brazen head." The legend was told in the 1120s by the English historian William of Malmesbury, in his well-researched and highly regarded book, Deeds of the English Kings.

Gerbert was deeply learned in astronomy, a science of prediction. Astronomers could use the astrolabe to predict the positions of the stars and foresee celestial events such as eclipses. According to William, Gerbert used his knowledge of astronomy to construct a talking head. After inspecting the movements of the stars and planets, he cast a head in bronze that could answer yes-or-no questions.

First Gerbert asked the head: "Will I become Pope?"

"Yes," answered the head.

Then Gerbert asked: "Will I die before I sing mass in Jerusalem?"

"No," the head replied.

In both cases, the head was correct, though not as Gerbert anticipated. He did become Pope, and he sensibly avoided going on pilgrimage to Jerusalem. One day, however, he sang mass at Santa Croce in Gerusalemme in Rome. Unfortunately for Gerbert, Santa Croce in Gerusalemme was known in those days simply as "Jerusalem."

Gerbert sickened and died. On his deathbed, he asked his attendants to cut up his body and cast away the pieces, so he could go to his true master, Satan. In this way, he was, like Prometheus, punished for his theft of fire.

Pope Sylvester II and the Devil (Image credit: Chronicon pontificum et imperatorum, Public domain, via Wikimedia Commons)

It is a thrilling story. It is not clear whether William of Malmesbury actually believed it. But he does try to persuade his readers that it is plausible. Why did this great historian with a devotion to the truth insert some fanciful legends about a French pope into his history of England? Good question!

Is it so fanciful to believe that an advanced astronomer might build a general-purpose prediction machine? In those days, astronomy was the most powerful science of prediction. The sober and scholarly William was at least willing to entertain the idea that brilliant advances in astronomy might make it possible for a Pope to build an intelligent chatbot.

Today, that same possibility is credited to machine-learning algorithms, which can predict which ad you will click, which movie you will watch, which word you will type next. We can be forgiven for falling under the same spell.

The anatomist and the automaton

The Prometheus of the 18th century was Jacques de Vaucanson, at least according to Voltaire:

Bold Vaucanson, rival of Prometheus,
Seems, imitating the springs of nature,
To steal the fire of heaven to animate the body.

Jacques de Vaucanson, painted by Joseph Boze (1784). (Image credit: Joseph Boze, Public domain, via Wikimedia Commons)

Vaucanson was a great machinist, famous for his automata. These were clockwork devices that realistically simulated human or animal anatomy. Philosophers of the time believed that the body was a machine — so why couldn't a machinist build one?

Sometimes Vaucanson's automata were scientifically significant. He constructed a piper, for example, that had lips and lungs and fingers, and blew the pipe in much the same way a human would. Historian Jessica Riskin explains in her book The Restless Clock that Vaucanson had to make significant discoveries in acoustics in order to make his piper play in tune.

Sometimes his automata were less scientific. His digesting duck was hugely famous, but turned out to be fraudulent. It appeared to eat and digest food, but its poos were in fact prefabricated pellets hidden inside the mechanism.

Vaucanson spent decades working on what he called a "moving anatomy." In 1741, he presented a plan to the Lyons Academy to build an "imitation of all animal operations." Twenty years later, he was at it again. He secured support from King Louis XV to build a simulation of the circulatory system. He claimed he could build a complete, living artificial body.

Three of Vaucanson’s famous automata: the Flute Player, the Digesting Duck, and the Provençal Farmer, who played the pipe and tambourine. (Image credit: See page for author, Public domain, via Wikimedia Commons)

There is no evidence that Vaucanson ever completed a whole body. In the end, he couldn't live up to the hype. But many of his contemporaries believed he could do it. They wanted to believe in his magical mechanisms. They wished he would seize the fire of life.

If Vaucanson could manufacture a new human body, couldn't he also repair an existing one? This is the promise of some AI companies today. According to Dario Amodei, CEO of Anthropic, AI will soon allow people "to live as long as they want." Immortality seems like an attractive investment.

Sylvester II and Vaucanson were great technologists, but neither was a Prometheus. They stole no fire from the gods. Will the aspiring Prometheans of Silicon Valley succeed where their predecessors have failed? If only we had Sylvester II's brazen head, we could ask it.

This edited article is republished from The Conversation under a Creative Commons license. Read the original article.

Enough fresh water is lost from continents each year to meet the needs of 280 million people. Here's how we can combat that.

Earth's continents are drying up at an alarming rate. Now, a new report has painted the most detailed picture yet of where and why fresh water is disappearing — and outlined precisely how countries can address the problem.

Continental drying is a long-term decline in freshwater availability across large land masses. It is caused by accelerated snow and ice melt, permafrost thaw, water evaporation and groundwater extraction. (The report's definition excludes meltwater from Greenland and Antarctica, the authors noted.)

"We always think that the water issue is a local issue," lead author Fan Zhang, global lead for Water, Economy and Climate Change at the World Bank, told Live Science in a joint interview with co-author Jay Famiglietti, a satellite hydrologist and professor of sustainability at Arizona State University. "But what we show in the report is that ... local water problems could quickly ripple through national borders and become an international challenge."

Continents have now surpassed ice sheets as the biggest contributor to global sea level rise, because regardless of its origin, the lost fresh water eventually ends up in the ocean. The new report found this contribution is roughly 11.4 trillion cubic feet (324 billion cubic meters) of water each year — enough to meet the annual water needs of 280 million people.

"Every second you lose four Olympic-size swimming pools," Zhang said.

Far-reaching impacts

The report was published Nov. 4 by the World Bank. Its results are based on 22 years of data from NASA's GRACE mission, which measures small changes in Earth's gravity resulting from shifting water. The authors also compiled two decades' worth of economic and land use data, which they fed into a hydrological model and a crop-growth model.

The average amount of fresh water lost from continents each year is equivalent to 3% of the world's annual net "income" from precipitation, the report found. This loss jumps to 10% in arid and semi-arid regions, meaning that continental drying hits dry areas such as South Asia the hardest, Zhang said.

This is a growing problem. In a study published earlier this year, Zhang, Famiglietti and their colleagues showed that separate dry areas are rapidly merging into "mega-drying" regions.

"The impact is already being felt," Zhang said. Regions where agriculture is the biggest economic sector and employs the most people, such as sub-Saharan Africa and South Asia, are especially vulnerable. "In sub-Saharan Africa, dry shocks reduce the number of jobs by 600,000 to 900,000 a year. If you look at who are the people being affected, those most hard hit are the most vulnerable groups, like landless farmers."

Countries that don't have a large agricultural sector are also indirectly affected, because most of them import food and goods from drying regions.

The consequences for ecosystems are dramatic, too. Continental drying increases the likelihood and severity of wildfires, and this is especially true in biodiversity hotspots, the report found. At least 17 of the 36 globally recognized biodiversity hotspots — including Madagascar and parts of Southeast Asia and Brazil — show a trend of declining freshwater availability and have a heightened risk of wildfires.

"The implications are so profound," Famiglietti told Live Science.

The biggest culprit

Currently, the biggest cause of continental drying is groundwater extraction. Groundwater is poorly protected and undermanaged in most parts of the world, meaning the past decades have been a pumping "free-for-all," Famiglietti said. And the warmer and drier the world gets due to climate change, the more groundwater will likely be extracted, because soil moisture and glacial water sources will start to dwindle.

However, better regulations and incentives could reduce groundwater overpumping. According to the report, agriculture is responsible for 98% of the global water footprint, so "if agriculture water use efficiency is improved to a certain benchmark, the total amount of the water that can be saved is huge," Zhang said.

Globally, if water use efficiency for 35 key crops, such as wheat and rice, improved to median levels, enough water would be saved to meet the annual needs of 118 million people, the researchers found. There are many ways to improve water use efficiency in agriculture; for example, countries could change where they grow certain crops to match freshwater availability in different regions, or adopt technologies like artificial intelligence to optimize the timing and amount of irrigation.

Countries can also set groundwater extraction limits, incentivize farmers through subsidies and raise the price of water for agriculture. Additionally, the report showed that countries with higher energy prices had slower drying rates because it costs more to pump groundwater, which boosts water use efficiency.

Overall, water management at the national scale works well, according to the report. Countries with good water management plans depleted their freshwater resources two to three times more slowly than countries with poor water management.

Virtual water trade

On the global scale, virtual water trade is one of the best solutions to conserve water if it is done right, Zhang said. Virtual water trade occurs when countries exchange fresh water in the form of agricultural products and other water-intensive goods.

Global water use increased by 25% between 2000 and 2019. One-third of that increase occurred in regions that were already drying out — including Central America, northern China, Eastern Europe and the U.S. Southwest — and a big share of the water was used to irrigate water-intensive crops with inefficient methods, according to the report.

There has also been a global shift toward more water-intensive crops, including wheat, rice, cotton, maize and sugarcane. Out of 101 drying countries, 37 have increased cultivation of these crops.

Virtual water trade can save huge amounts of water by relocating some of these crops to countries that aren't drying out. For example, between 1996 and 2005, Jordan saved 250 billion cubic feet (7 billion cubic meters) of water by importing wheat from the U.S. and maize from Argentina, among other products.

Globally, from 2000 to 2019, virtual water trade saved 16.8 trillion cubic feet (475 billion cubic meters) of water each year, or about 9% of the water used to grow the world's 35 most important crops.

"When water-scarce countries import water-intensive products, they are actually importing water, and that helps them to preserve their own water supply," Zhang said.

However, virtual water trade isn't always so straightforward. It might benefit one water-scarce country but severely deplete the resources of another country. One example is the production of alfalfa, a water-intensive legume used in livestock feed, in dry regions of the U.S. for export to Saudi Arabia, Famiglietti said. Saudi Arabia benefits from this exchange because the country isn't using its water to grow alfalfa, but aquifers in Arizona are being sucked dry, he said.

Reasons for optimism

The solutions identified in the report fall into three broad categories: manage water demand, expand water supply through recycling and desalination, and ensure fair and effective water allocation.

If we can make those changes, sustainable fresh water use is "definitely possible," Zhang said. "We do have reason to be optimistic."

Famiglietti agreed that small changes could go a long way.

"It's complicated, because the population is growing and we're going to need to grow more food," he said. "I don't know that we're going to 'tech' our way out of it, but when we start thinking on decadal time scales, changes in policy, changes in financial innovations, changes in technology — I think there is some reason for optimism. And in those decades we can keep thinking about how to improve our lot."

Some of the views expressed in this article are not included in the World Bank report. They should not be interpreted as having been endorsed by the World Bank or by its representatives.

Trump 2.0 is dismantling American science. Here's what's at stake, according to researchers.

From beginning to end, 2025 was a year of devastation for scientists in the United States.

January saw the abrupt suspension of key operations across the National Institutes of Health, not only disrupting clinical trials and other in-progress studies but stalling grant reviews and other activities necessary to conduct research. Around the same time, the Trump administration issued executive orders declaring there are only two sexes and ending DEI programs. It also removed public data and analysis tools related to health disparities, climate change and environmental justice, among other databases.

February and March saw a steep undercutting of federal support for the infrastructure crucial to conducting research as well as the withholding of federal funding from several universities.

And over the course of the following months, billions of dollars in grants supporting research projects across disciplines, institutions and states were terminated. These include funds already spent on in-progress studies that have been forced to end before completion. Federal agencies, including NASA, the Environmental Protection Agency, the National Oceanic and Atmospheric Administration and the U.S. Agency for International Development, have been downsized or dismantled altogether.

The Conversation asked researchers from a range of fields to share how the Trump administration’s science funding cuts have affected them. All describe the significant losses they and their communities have experienced. But many also voice their determination to continue doing work they believe is crucial to a healthier, safer and more fair society.

Pipeline of new scientists cut off

Carrie McDonough, Associate Professor of Chemistry, Carnegie Mellon University

People are exposed to thousands of synthetic chemicals every day, but the health risks those chemicals pose are poorly understood. I was a co-investigator on a US $1.5 million grant from the EPA to develop machine-learning techniques for rapid chemical safety assessment. My lab was two months into our project when it was terminated in May because it no longer aligned with agency priorities, despite the administration’s Make America Healthy Again report specifically highlighting using AI to rapidly assess childhood chemical exposures as a focus area.

Labs like mine are usually pipelines for early-career scientists to enter federal research labs, but the uncertain future of federal research agencies has disrupted this process. I’m seeing recent graduates lose federal jobs, and countless opportunities disappear. Students who would have been the next generation of scientists helping to shape environmental regulations to protect Americans have had their careers altered forever.

Many researchers are working to advocate for science in the public sphere. (Image credit: John McDonnell/AP Photo)

I’ve been splitting my time between research, teaching and advocating for academic freedom and the economic importance of science funding because I care deeply about the scientific and academic excellence of this country and its effects on the world. I owe it to my students and the next generation to make sure people know what’s at stake.

Fewer people trained to treat addiction

Cara Poland, Associate Professor of Obstetrics, Gynecology and Reproductive Biology, Michigan State University

I run a program that has trained 20,000 health care practitioners across the U.S. on how to effectively and compassionately treat addiction in their communities. Most doctors aren’t trained to treat addiction, leaving patients without lifesaving care and leading to preventable deaths.

This work is personal: My brother died from substance use disorder. Behind every statistic is a family like mine, hoping for care that could save their loved one’s life.

With our federal funding cut by 60%, my team and I are unable to continue developing our addiction medicine curriculum and enrolling medical schools and clinicians into our program.

Meanwhile, addiction-related deaths continue to rise as the U.S. health system loses its capacity to deliver effective treatment. These setbacks ripple through hospitals and communities, perpetuating treatment gaps and deepening the addiction crisis.

Communities left to brave extreme weather alone

Brian G. Henning, Professor of Philosophy and Environmental Studies and Sciences, Gonzaga University

In 2021, a heat dome settled over the Northwest, shattering temperature records and claiming lives. Since that devastating summer, my team and I have been working with the City of Spokane to prepare for the climate challenges ahead.

We and the city were awarded a $19.9 million grant from the EPA to support projects that reduce pollution, increase community climate resilience and build capacity to address environmental and climate justice challenges.

Cooling centers are becoming more critical as extreme heat becomes more common. (Image credit: Getty Images)

As our work was about to begin, the Trump administration rescinded our funding in May. As a result, the five public facilities that were set to serve as hubs for community members to gather during extreme weather will be less equipped to handle power failures. Around 300 low-income households will miss out on efficient HVAC system updates. And our local economy will lose the jobs and investments these projects would have generated.

Despite this setback, the work will continue. My team and I care about our neighbors, and we remain focused on helping our community become more resilient to extreme heat and wildfires. This includes pursuing new funding to support this work. It will be smaller, slower and with fewer resources than planned, but we are not deterred.

LGBTQ+ people made invisible

Nathaniel M. Tran, Assistant Professor of Health Policy and Administration, University of Illinois Chicago

This year nearly broke me as a scientist.

Shortly after coming into office, the Trump administration began targeting research projects focusing on LGBTQ+ health for early termination. I felt demoralized after receiving termination letters from the NIH for my own project examining access to preventive services and home-based care among LGBTQ+ older adults. The disruption of publicly funded research projects wastes millions of dollars from existing contracts.

Then, news broke that the Centers for Disease Control and Prevention would no longer process or make publicly available the LGBTQ+ demographic data that public health researchers like me rely on.

But instead of staying demoralized, I grew emboldened: I will not be erased, and I will not let the LGBTQ+ community be erased. These setbacks renewed my commitment to advancing the public’s health, guided by rigorous science, collaboration and equity.

Research on LGBTQ+ health informs the kind of care patients receive. (Image credit: Jessica Rinaldi/The Boston Globe via Getty Images)

Pediatric brain cancer research squelched

Rachael Sirianni, Professor of Neurological Surgery, UMass Chan Medical School

My lab designs new cancer treatments. We are one of only a few groups in the nation focused on treating pediatric cancer that has spread across the brain and spinal cord. This research is being crushed by the broad, destabilizing impacts of federal cuts to the NIH.

Compared to last year, I am working with around 25% of our funding and less than 50% of our staff. We cannot finish our studies, publish results or pursue new ideas. We have lost technology in development. Students and colleagues are leaving as training opportunities and hope for the future of science dries up.

I’m faced with impossible questions about what to do next. Do I use my dwindling research funds to maintain personnel who took years to train? Keep equipment running? Bet it all on one final, risky study? There are simply no good choices remaining.

Inequality in science festers

Stephanie Nawyn, Associate Professor of Sociology, Michigan State University

Many people have asked me how the termination of my National Science Foundation grant to improve work cultures in university departments has affected me, but I believe that is the wrong question. Certainly it has meant the loss of publications, summer funding for faculty and graduate students, and opportunities to make working conditions at my and my colleagues’ institutions more equitable and inclusive.

But the greatest effects will come from the widespread terminations across science as a whole, including the elimination of NSF programs dedicated to improving gender equity in science and technology. These terminations are part of a broader dismantling of science and higher education that will have cascading negative effects lasting decades.

Infrastructure for knowledge production that took years to build cannot be rebuilt overnight.

This edited article is republished from The Conversation under a Creative Commons license. Read the original article.

10 things we learned about Neanderthals in 2025

Neanderthals have fascinated scientists since they were first discovered in the 19th century. Their long, low skulls and heavy brow ridges initially convinced experts that Neanderthals were some kind of evolutionary wrong turn that ended up in European caves.

It took more than a century for researchers to prove that Neanderthals were actually quite intelligent and that they interbred with modern humans (Homo sapiens). The number of discoveries related to Neanderthals' biology and culture has skyrocketed in recent years — and 2025 was a noteworthy year. While we learned that Neanderthals had biological features that were strikingly different from modern humans', this year's discoveries also showed that some aspects of their behavior and culture were similar to ours.

Here are 10 major Neanderthal findings from 2025 — and what they teach us about our own evolution.

1. Neanderthals were the first to make fire.

artistic drawing of a Neanderthal using a piece of pyrite and flint to make sparks

(Image credit: Craig Williams/The Trustees of the British Museum)

The hottest — but also somewhat controversial — Neanderthal discovery of the year was that the first humans to make and control fire were Neanderthals living in England more than 400,000 years ago.

In December, researchers announced that they had found reddened clay and heat-shattered flint hand axes at an archaeological site in Suffolk. But the smoking gun was the discovery of tiny flakes of pyrite, a mineral that produces sparks when struck against flint.

Experts have debated for decades whether early human ancestors deliberately made fire or whether they opportunistically used wildfires that sprang up. The combination of flakes of pyrite and charred soil and tools points to Neanderthals' purposeful creation of fire.

The discovery, however, does not tell us whether Neanderthals invented this technology or they learned it from even earlier ancestors, such as Homo erectus. Regardless, the fire evidence shows that Neanderthals were smart enough to figure out how to survive in cold and dark European climates.

2. Neanderthals cannibalized women and children.

a woman stands in front of a table full of bones with a human skeleton in the background

(Image credit: Getty Images)

Around 45,000 years ago — very close to when Neanderthals disappeared forever — six members of a Neanderthal group were cannibalized, according to a study published in November. Their remains were discovered in the Goyet cave system in Belgium with butchery marks similar to those on animal bones.

This isn't the first time archaeologists have found evidence of cannibalism in Neanderthals. But it is the best evidence experts have to suggest one group — probably Neanderthals but possibly modern humans — deliberately targeted the women and children of another group, perhaps as a way to eliminate the group's reproductive potential.

3. A Neanderthal left the world's oldest fingerprint.

A close-up of a red fingerprint

(Image credit: Álvarez-Alonso et al. 2025; CC BY 4.0)

A curious-looking rock found in Spain contains the world's oldest known fingerprint, and it was probably made by a Neanderthal using ocher 43,000 years ago, researchers announced in May.

The team investigating the rock, which is the size of a large potato, thinks that it has face-like features and that the red dot may be a nose. If they're correct, it would mean Neanderthals were creating symbolic art, which could settle a decades-long debate in paleoanthropology.

Not all experts agree that the rock is an early version of Mr. Potato Head, but they do think the fingerprint and its characteristic whorl pattern represent a clear example of Neanderthals' use of red ocher pigment.

4. Neanderthals may have used "crayons."

Ochre tool shaped like tear drop with zoom in on lines etched into the side.

(Image credit: d'Errico et al., Sci. Adv. 11, eadx4722; CC BY 4.0 )

Scientists in Crimea found three pointy chunks of red and yellow ocher that Neanderthals may have used as early "crayons" 100,000 years ago, according to research published in November.

The hunks of mineral appear to have been repeatedly sharpened, which suggested to the researchers that the ocher was used for culturally meaningful purposes rather than in practical tasks, such as tanning hides.

Although ocher has been found at other Neanderthal sites, not all experts are convinced of the crayon interpretation. Instead, they suggest Neanderthals may have scraped powder from the ocher chunks for another purpose, such as to leave a fingerprint.

5. Neanderthals were low-energy.

Runners jumping off the starting line for a race.

(Image credit: Chris Ryan/Getty Images)

In July, researchers discovered that a key Neanderthal gene variant that is still found in some humans today could be detrimental to athletic performance because it limits the body's ability to produce energy during intense exercise.

Researchers found that the Neanderthal version of an enzyme called AMPD1 was different from the one in most modern humans. The Neanderthal enzyme variant allowed adenosine monophosphate (AMP) to build up in their muscles rather than being quickly removed. This AMP buildup is problematic because it makes it harder to produce adenosine triphosphate (ATP), a molecule that the body uses to store energy.

Modern humans who carry the Neanderthal variant of the gene have a lower probability of achieving elite athletic status, the researchers found. But while the Neanderthal variant may have affected their muscle metabolism slightly, it may not have contributed to their extinction.

6. Neanderthals were more susceptible to lead poisoning than modern humans.

a recreation of a Neanderthal woman

(Image credit: Joe McNally via Getty Images)

In a study published in October, researchers examined 51 teeth from H. sapiens, Neanderthals and other ancestors for evidence of lead exposure. Lead occurs naturally in our environment, but it is known to be toxic at high levels, causing damage to the brain and other organs. Researchers discovered that human ancestors were affected by episodic lead exposure for nearly 2 million years — and that human brains may have evolved some protection against lead poisoning.

Humans living today have a unique version of a gene called NOVA1 that is important for brain development and language skills. The gene also appears to confer greater resistance to lead than other versions of the gene do, such as the one in our Neanderthal cousins.

Therefore, researchers propose, the modern-human version of NOVA1 may have given us a slight advantage over Neanderthals and contributed to their demise.

7. Neanderthals had a "fat factory" in Germany.

The statues model how Neanderthals may have looked.

(Image credit: imageBROKER.com via Alamy)

Neanderthals primarily ate meat (and maggots), which put them at risk of developing protein poisoning, a lethal condition that results from eating too much protein and too few fats and carbohydrates.

But in July, researchers announced their discovery of a "fat factory" that Neanderthals may have used to stave off this condition 125,000 years ago. Their survey of nearly 200 animal bones revealed that Neanderthals smashed the bones to get at the marrow inside, which they boiled to extract the fat.

Fat is high in calories, and Neanderthals may have saved it to eat during food shortages. This innovative food-collection method is similar to what some ancient modern-human foraging groups did, suggesting that, in at least one way, Neanderthals were similar to us.

8. Neanderthals lacked a key DNA-synthesizing gene.

a human woman and a Neanderthal woman

(Image credit: Getty Images)

In August, researchers investigating the enzyme adenylosuccinate lyase (ADSL) found that the version in Neanderthals was more active than the one in humans. ADSL helps synthesize purine, which is one of the fundamental building blocks of DNA, and an ADSL deficiency is known to result in intellectual disability in modern humans. So researchers modified mice to have a modern-human-like ADSL gene and found that they were better at completing a task to get water.

But even though ADSL deficiency can cause intellectual and behavioral problems in modern-day people, it's not yet clear whether the Neanderthal variant impaired them.

9. Our cousins suffered a population bottleneck.

Reconstruction of a Neanderthal man

(Image credit: Allan Henderson (CC BY 2.0))

Even before Neanderthals disappeared forever, their numbers were dwindling because of a population bottleneck, according to research published in February.

Scientists looked at the tiny inner-ear bones of Neanderthals from various time periods and noticed that, around 110,000 years ago, there was an abrupt decline in the diversity of bone shapes. This decline suggests a bottleneck event, when a species undergoes a sudden reduction in variation due to factors such as genocide or climate change.

While the shrinking diversity of ear bones was a symptom rather than a cause of the Neanderthals' downfall, the bottleneck it reveals may have been the beginning of the end.

10. Neanderthals' blood may have doomed them.

Two skull replicas sit on a white table. The one in the foreground is a Neanderthal, while the one in the background is an early Homo sapiens.

(Image credit: Alamy)

Biologically, Neanderthals had distinct blood variants that separated them from modern humans — and two of those variants we learned about this year may have hastened our ancient cousins' extinction.

In January, researchers discovered that Neanderthals had a rare blood type that may have been fatal to their offspring when they mated with Denisovans or early H. sapiens.

Neanderthals carried a variation of the blood antigen Rh, which gives the positive and negative signs to blood types. Before modern medical interventions, if someone who was Rh-negative was pregnant with a fetus that was Rh-positive, the incompatibility could cause a miscarriage or stillbirth. The researchers found that, if a Neanderthal female mated with an H. sapiens or Denisovan male, there would have been a high risk of anemia, brain damage and infant death. And that might have spelled the end of the line for Neanderthals.

Another study published in October suggested that a fatal red blood cell incompatibility between Neanderthals and humans also contributed to our ancient cousins' extinction. Researchers focused on the PIEZO1 gene that affects oxygen transportation in red blood cells. Neanderthals' version of this gene essentially let their blood cells trap oxygen efficiently, while the modern-human version more efficiently released oxygen to tissues. When maternal oxygen isn't passed on to the fetus, it can restrict the growth of the fetus or lead to miscarriage. So, if a hybrid Neanderthal-human mother mated with a modern-human father or with a hybrid Neanderthal-human father, their offspring would be more likely to die than the offspring of non-hybrids.

Although Neanderthals' extinction likely did not hinge on any one specific gene variant, the new research into red blood cells and maternal-fetal incompatibility is providing key insight into the demise of our archaic cousins around 35,000 years ago.

Did reintroducing wolves to Yellowstone really cause an ecological cascade?

Over the last three decades, Yellowstone National Park has undergone an ecological cascade. As elk numbers fell, aspen and willow trees thrived. This, in turn, allowed beaver numbers to increase, creating new habitats for fish and birds.

The shift has largely been attributed to the reintroduction of wolves to the park — as predators, they helped control elk numbers. But their return may not have reshaped the entire ecosystem in the way scientists thought, and it has sparked a fierce debate over exactly why and how Yellowstone has rebounded.

According to a study published in January, the reintroduction of gray wolves (Canis lupus) in the 1990s created a trophic cascade — a chain reaction in the food web — that benefitted the entire ecosystem. The study linked wolves in the area to a reduction in the elk population, which in turn reduced browsing and allowed willow trees to grow. Between 2001 and 2020, this led to a 1,500% increase in crown volume, the total space filled by upper branches of the willows.

But now, scientists have written a response letter to the editor, published Oct. 13 in the journal Global Ecology and Conservation, in which they argue that the original study's methodology was flawed, and that Yellowstone wolves' effect on willow shrubs is not so clear.

Large predators were targeted in Yellowstone from the end of the 1800s. By the 1920s, wolves had been all but eradicated from the park. Their disappearance created an ecological imbalance — the elk population exploded, which decimated plant populations and in turn threatened beavers, among other impacts. This is known as a trophic cascade, where the removal of one species causes ripples throughout the food web.

While the reintroduction of wolves to Yellowstone has led to changes within the park, the authors of the response letter claim the original study reinterpreted existing data to fit an oversimplified story.

The study converted willow height measurements collected and published by another research group into a metric called crown volume, response author Daniel MacNulty, a wildlife ecologist at Utah State University, told Live Science in an email. Crown volume was used as a proxy for willow size, meant to capture the shrub’s entire three-dimensional growth rather than simply its height.

"Because crown volume was built directly from height, [the study] only showed that height predicts height," MacNulty said. "They did not reveal anything new about how willow growth changed after wolf reintroduction."

The response letter points to other inconsistencies in the data analysis, such as comparing willow measurements from different locations across years. This is problematic because it produces a misleading time series of willow growth, and MacNulty's research group has previously published research noting sampling biases in other studies supporting this same trophic cascade theory.

"There is substantial scientific evidence of a definitive effect of wolf recovery on the rest of the Yellowstone ecosystem," MacNulty said, like wolves increasing the supply of carrion to bears, coyotes, eagles and other meat-eating species. But the effect of wolves on vegetation is less clear because it operates through the decline of elk populations, which wolves were likely not solely responsible for. As MacNulty points out, humans, grizzly bears and cougars also hunt elk. "A major problem with the simple trophic cascade story is that it ignores the role of these other predators."

William Ripple, an Oregon State University wildlife ecologist and author of the original paper, stands by the original conclusions of the paper, maintaining that a large carnivore, elk, and willow trophic cascade occurred in Yellowstone. "Our methods are sound, the modeling approach is standard," Ripple told Live Science in an email. "So we reject the idea that there are fatal flaws."

The debate about Yellowstone wolves and the impact of their reintroduction goes beyond this study and the latest response. While scientists widely agree that there is a trophic cascade in Yellowstone, its strength — and which predators are most responsible for it — form the center of the disagreement, MacNulty said.

Some scientists argue the story is more complex. "There are reasons other than trophic cascades by which carnivores and plants can be positively associated," Jake Goheen, a wildlife ecologist at Iowa State University, told Live Science in an email. Goheen, who was not involved in the research or response, said he doesn't believe that the authors of the original study provided enough evidence to support their conclusion that reintroducing wolves in Yellowstone caused a strong trophic cascade that affected willows.

"There is a growing body of literature at this point that has scrutinized the hypothesized cascade in Yellowstone," Goheen said. He adds that this does not mean there's no wolf-to-elk-to-willow trophic cascade in Yellowstone, only that the evidence presented so far is not clear enough.

To establish a clear trophic cascade from Yellowstone wolf reintroduction to willows, researchers would need to account for other predators and herbivores, MacNulty said. The ideal study would then analyze how much more total willow biomass there is now compared with before wolf reintroduction, to identify the strength of the effect; then calculate how much of that increase can be attributed solely to wolves, to identify its cause.

Ripple and his research team are now preparing a detailed reply, which explains that criticisms of the original study come from misunderstandings of what they did, Ripple said. "The basic scientific logic of the paper is solid," Ripple said.

Conservation priorities might be fueling the controversy over large carnivores' beneficial effects on ecosystems, said Goheen, adding that even if wolves are not definitively causing a trophic cascade to willows, they are still important to conserve.

'Nobody knew why this was happening': Scientists race to understand baffling behavior of 'clumping clouds'

Caroline Muller looks at clouds differently than most people. Where others may see puffy marshmallows, wispy cotton candy or thunderous gray objects storming overhead, Muller sees fluids flowing through the sky. She visualizes how air rises and falls, warms and cools, and spirals and swirls to form clouds and create storms.

But the urgency with which Muller, a climate scientist at the Institute of Science and Technology Austria in Klosterneuburg, considers such atmospheric puzzles has surged in recent years. As our planet swelters with global warming, storms are becoming more intense, sometimes dumping two or even three times more rain than expected. Such was the case in Bahía Blanca, Argentina, in March 2025: Almost half the city’s yearly average rainfall fell in less than 12 hours, causing deadly floods.

Atmospheric scientists have long used computer simulations to track how the dynamics of air and moisture might produce varieties of storms. But existing models hadn’t fully explained the emergence of these fiercer storms. A roughly 200-year-old theory describes how warmer air holds more moisture than cooler air: an extra 7 percent for every degree Celsius of warming. But in models and weather observations, climate scientists have seen rainfall events far exceeding this expected increase. And those storms can lead to severe flooding when heavy rain falls on already saturated soils or follows humid heatwaves.
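The roughly 200-year-old theory is the Clausius–Clapeyron relation for saturation vapor pressure (the article does not name it, but this is the standard result behind the 7 percent figure). Evaluated with textbook values for the latent heat of vaporization and the gas constant of water vapor, it reproduces the quoted scaling:

```latex
% Fractional change of saturation vapor pressure e_s with temperature T:
\[
  \frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} = \frac{L_v}{R_v T^2}
  \approx \frac{2.5 \times 10^{6}\ \mathrm{J\,kg^{-1}}}
               {(461\ \mathrm{J\,kg^{-1}\,K^{-1}})\,(288\ \mathrm{K})^{2}}
  \approx 0.065\ \mathrm{K^{-1}}
\]
% i.e., about 6 to 7 percent more saturation vapor pressure per kelvin of
% warming near typical surface temperatures, the "extra 7 percent" above.
```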

Clouds, and the way that they cluster, could help explain what’s going on.

A growing body of research, set in motion by Muller over a decade ago, is revealing several small-scale processes that climate models had previously overlooked. These processes influence how clouds form, congregate and persist in ways that may amplify heavy downpours and fuel larger, long-lasting storms. Clouds have an “internal life,” Muller says, “that can strengthen them or may help them stay alive longer.”

Other scientists need more convincing, because the computer simulations researchers use to study clouds reduce planet Earth to its simplest and smoothest form, retaining its essential physics but otherwise barely resembling the real world.

Now, though, a deeper understanding beckons. Higher-resolution global climate models can finally simulate clouds and the destructive storms they form on a planetary scale — giving scientists a more realistic picture. By better understanding clouds, researchers hope to improve their predictions of extreme rainfall, especially in the tropics where some of the most ferocious thunderstorms hit and where future rainfall projections are the most uncertain.

First clues to clumping clouds

All clouds form in moist, rising air. A mountain can propel air upwards; so, too, can a cold front. Clouds can also form through a process known as convection: the overturning of air in the atmosphere that starts when sunlight, warm land or balmy water heats air from below. As warm air rises, it cools, condensing the water vapor it carried upwards into raindrops. This condensation process also releases heat, which fuels churning storms.

But clouds remain one of the weakest links in climate models. That’s because the global climate models scientists use to simulate scenarios of future warming are far too coarse to capture the updrafts that give rise to clouds or to describe how they swirl in a storm — let alone to explain the microphysical processes controlling how much rain falls from them to Earth.

To try to resolve this problem, Muller and other like-minded scientists turned to simpler simulations of Earth’s climate that are able to model convection. In these artificial worlds, each the shape of a shallow box typically a few hundred kilometers across and tens of kilometers deep, the researchers tinkered with replica atmospheres to see if they could figure out how clouds behaved under different conditions.

Intriguingly, when researchers ran these models, the clouds spontaneously clumped together, even though the models had none of the features that usually push clouds together — no mountains, no wind, no Earthly spin or seasonal variations in sunlight. “Nobody knew why this was happening,” says Daniel Hernández Deckers, an atmospheric scientist at the National University of Colombia in Bogotá.

In 2012, Muller discovered a first clue: a process known as radiative cooling. The Sun’s heat that bounces off Earth’s surface radiates back into space, and where there are few clouds, more of that radiation escapes — cooling the air. The cool spots set up atmospheric flows that drive air toward cloudier regions — trapping more heat and forming more clouds. A follow-up study in 2018 showed that in these simulations, radiative cooling accelerated the formation of tropical cyclones. “That made us realize that to understand clouds, you have to look at the neighborhood as well — outside clouds,” Muller says.

Once scientists started looking not just outside clouds, but also underneath them and at their edges, they found other small-scale processes that help to explain why clouds flock together. The various processes, described by Muller and colleagues in the Annual Review of Fluid Mechanics, all bring or hold together pockets of warm, moist air so more clouds form in already-cloudy regions. These small-scale processes hadn’t been understood much before because they are often obscured by larger weather patterns.

Hernández Deckers has been studying one of the processes, called entrainment — the turbulent mixing of air at the edges of clouds. Most climate models represent clouds as a steady plume of rising air, but in reality “clouds are like a cauliflower,” he says. “You have a lot of turbulence, and you have these bubbles [of air] inside the clouds.” This mixing at the edges affects how clouds evolve and thunderstorms develop; it can weaken or strengthen storms in various ways, but, like radiative cooling, it encourages more clouds to form as a clump in regions that are already moist.

Such processes are likely to be most important in storms in Earth’s tropical regions, where there’s the most uncertainty about future rainfall. (That’s why Hernández Deckers, Muller and others tend to focus their studies there.) The tropics lack the cold fronts, jet streams and spiraling high- and low-pressure systems that dominate air flows at higher latitudes.

From the lower levels of the atmosphere to the higher regions known as the free troposphere, several phenomena help drive clouds to form and clump together. They include radiative cooling (1), in which solar heat bounces from Earth’s surface through clear skies back to space, causing cooling of parts of the atmosphere, as well as mixing (2) at clouds’ edges, which holds clouds together. Other processes (3 and 4) involve additional disturbances that can affect cloud behavior. (Image credit: Knowable Magazine)

Supercharging heavy rains

There are other microscopic processes happening inside clouds that affect extreme rainfall, especially on shorter timescales. Moisture matters: Condensed droplets falling through moist, cloudy air don’t evaporate as much on their descent, so more water falls to the ground. Temperature matters too: When clouds form in warmer atmospheres, they produce less snow and more rain. Since raindrops fall faster than snowflakes, they evaporate less on their descent — producing, once again, more rain.

These factors also help explain why more rain can get squeezed from a cloud than the 7 percent rise per degree of warming predicted by the 200-year-old theory. “Essentially you get an extra kick … in our simulations, it was almost a doubling,” says Martin Singh, a climate scientist at Monash University in Melbourne, Australia.

Cloud clustering adds to this effect by holding warm, moist air together, so more rain droplets fall. One study by Muller and her collaborators found that clumping clouds intensify short-duration rainfall extremes by 30 to 70 percent, largely because raindrops evaporate less inside sodden clouds.

Other research, including a study led by Jiawei Bao, a postdoctoral researcher in Muller’s group, has likewise found that the microphysical processes going on inside clouds have a strong influence over fast, heavy downpours. These sudden downpours are intensifying much faster with climate change than protracted deluges, and often cause flash flooding.

The future of extreme rainfall

Scientists who study the clumping of clouds want to know how that behavior will change as the planet heats up — and what that will mean for incidences of heavy rainfall and flooding.

Some models suggest that clouds (and the convection that gives rise to them) will clump together more with global warming — and produce more rainfall extremes that often far exceed what theory predicts. But other simulations suggest that clouds will congregate less. “There seems to be still possibly a range of answers,” says Allison Wing, a climate scientist at Florida State University in Tallahassee who has compared various models.

Torrential rains in March 2025 flooded the city of Bahía Blanca, Argentina. Extreme precipitation like this is expected to become more common as the planet continues to warm, but predicting rainfall extremes in tropical regions is proving challenging. (Image credit: Photo by PABLO PRESTI/AFP via Getty Images)

Scientists are beginning to try to reconcile some of these inconsistencies using powerful types of computer simulations called global storm-resolving models. These can capture the fine structures of clouds, thunderstorms and cyclones while also simulating the global climate. They bring a 50-fold leap in realism beyond the global climate models scientists generally use — but demand 30,000 times more computational power.

Using one such model in a paper published in 2024, Bao, Muller and their collaborators found that clouds in the tropics congregated more as temperatures increased — leading to less frequent storms but ones that were larger, lasted longer and, over the course of a day, dumped more rain than expected from theory.

But that work relied on just one model and simulated conditions from around one future timepoint — the year 2070. Scientists need to run longer simulations using more storm-resolving models, Bao says, but very few research teams can afford to run them. They are so computationally intensive that they are typically run at large centralized hubs, and scientists occasionally host “hackathons” to crunch through and share data.

Researchers also need more real-world observations to get at some of the biggest unknowns about clouds. Although a flurry of recent studies using satellite data linked the clustering of clouds to heavier rainfall in the tropics, there are large data gaps in many tropical regions. This weakens climate projections and leaves many countries ill-prepared. In June of 2025, floods and landslides in Venezuela and Colombia swept away buildings and killed at least a dozen people, but scientists don’t know what factors worsened these storms because the data are so paltry. “Nobody really knows, still, what triggered this,” Hernández Deckers says.

New, granular data are on their way. Wing is analyzing rainfall measurements from a German research vessel that traversed the tropical Atlantic Ocean for six weeks in 2024. The ship’s radar mapped clusters of convection associated with the storms it passed through, so the work should help researchers see how clouds organize over vast tracts of the ocean.

And an even more global view is on the horizon. The European Space Agency plans to launch two satellites in 2029 that will measure, among other things, near-surface winds that ruffle Earth’s oceans and skim mountaintops. Perhaps, scientists hope, the data these satellites beam back will finally provide a better grasp of clumping clouds and the heaviest rains that fall from them.

Research and interviews for this article were partly supported through a journalism residency funded by the Institute of Science & Technology Austria (ISTA). ISTA had no input into the story.

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.

]]>
https://www.livescience.com/planet-earth/weather/nobody-knew-why-this-was-happening-scientists-race-to-understand-baffling-behavior-of-clumping-clouds YwSecnr9mfaAExBsDYeoVe Tue, 30 Dec 2025 15:30:00 +0000 Tue, 23 Dec 2025 23:24:07 +0000
<![CDATA[ Canon RF 200-800mm f/6.3-9 IS USM lens review: Enormous reach for wildlife photography ]]> It’s no secret that the best lenses for wildlife photography are among the most expensive lenses you can buy. Finding a versatile, good-quality lens with the reach and prowess needed for photographing distant animals is a tough feat if you don’t have a huge budget, but the Canon RF 200-800mm f/6.3-9 IS USM lens could be just what you’re looking for. With one of the widest focal ranges out there, it’s a wildlife photographer's dream — and, provided you’re shooting in favorable conditions, no animal will be out of reach.

We’ve taken it to a nature reserve, photographed birds from our window and zoomed in on the moon to assess its performance in all-light conditions for static and moving subjects, emulating real-world shooting conditions to test its mettle.

Canon RF 200-800mm f/6.3-9 IS USM: Design

It's a beast of a lens, even without the lens hood. (Image credit: Kimberley Lane)
  • Big and heavy
  • Annoying amount of lens creep
  • Solid and well-built
  • Custom buttons difficult to access

There’s no beating around the bush here — this lens is big, and it’s heavy. Weighing about 4.5 lbs (just over 2 kilograms), this thing makes itself known both in your camera bag and out in the field. Needless to say, it got quite heavy after a while, even when resting in a hide, but it feels solid and well-built and is dust- and weather-resistant, although we never got caught out in the rain to fully test this.

We found it frustrating that it didn’t have a zoom lock, as it had an annoying amount of lens creep when we held the lens vertically, which meant we couldn’t carry the camera around our neck (as if its weight didn’t already see to that). We found the zoom ring a little on the stiff side, and, to be picky, the lens actually looked quite ugly when it was zoomed all the way in on a subject.

Specifications

Focal length: 200-800 mm
Maximum aperture: f/6.3-9
Weight: 4.5 pounds (2.05 kg)
Image stabilization: 5.5 stops
Filter thread: 95 mm
Dimensions (in): ⌀4.03 x 12.37
Dimensions (mm): ⌀102.3 x 314.1

In addition, it has a control ring, AF/MF switch, image stabilizer switch and two custom buttons, although we found these buttons hard to press, as they aren't within easy reach while you're supporting the rig's hefty weight. When we took a hand away to press either button, it threw the entire weight distribution off.

It has a nice big lens hood, although we’d have liked this to have a door in order to utilize a polarizer, particularly when we were photographing waterfowl.

Canon RF 200-800mm f/6.3-9 IS USM: Performance

  • Struggles in the dark with f/6.3 aperture
  • Good autofocus performance
  • Excellent image quality

For wildlife photography in generally favorable conditions, this lens performed very well overall. Its obvious drawback is the limited maximum aperture — f/6.3 is just fine in daylight, but as light levels fell at dusk, or even when we went into a heavily wooded area, we had to push the ISO higher than we'd have liked.

Luckily, we were shooting with the Canon EOS R6 II, which has excellent noise handling, so we were able to save a lot of our images. But if you often shoot at dawn or dusk, we'd recommend investing in a telephoto lens with a wider maximum aperture so you won't need to rely on denoise software.

The autofocus was also good, though at longer focal lengths it's at the mercy of how steady your hand is. It generally performed very well but struggled in harsh conditions, or when there were distractions or foliage in front of our subject.

Overall, though, its performance is very good for the price. Images are sharp and it captures color very nicely — certainly more than well enough for wildlife or moon photography.

Canon RF 200-800mm f/6.3-9 IS USM: Functionality

  • 5.5 stops of image stabilization is crucial
  • Versatile focal length
  • 2.6 ft (0.8 meter) close focusing distance at 200mm is great for insects

As much as it suffers from a fairly slow maximum aperture, the 200-800mm focal length offers versatility that many other lenses don't. There's a Sony super-telephoto with a 400-800mm range, but you'd be stuck if a subject came too close to you — with the Canon, you'd be able to zoom out easily. We never found ourselves wishing we had multiple lenses, as the 200-800mm can cover a lot of subjects, near or far.

Plus, although it doesn’t have the close focusing capabilities of a true macro lens, it can focus as close as 2.6 feet (0.8 meters) at 200mm, which is great for photographing butterflies and insects at a fairly close range.

The 5.5 stops of image stabilization were a lifesaver, and pretty crucial for such a long focal length. Even just for compositional purposes, we still struggled to follow subjects on occasion at the full 800mm, and if there had been no image stabilization, we’d have had no chance.
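To put that figure in context, here is the rough arithmetic — our own back-of-the-envelope using the old reciprocal rule, not Canon's test methodology:

```python
# Rough math behind why 5.5 stops matters at 800mm. Each stop of
# stabilization halves the required shutter speed; the reciprocal rule
# says an unstabilized handheld shot needs roughly 1/focal-length seconds.
focal_length_mm = 800
stops = 5.5

unstabilized = 1 / focal_length_mm        # reciprocal-rule shutter speed, s
stabilized = unstabilized * 2 ** stops    # slowest hand-holdable speed with IS
print(f"without IS: ~1/{focal_length_mm} s; with IS: ~1/{round(1 / stabilized)} s")
# ~1/18 s: slow enough to keep ISO manageable on static subjects, though
# moving wildlife still demands fast shutter speeds regardless.
```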

Should you buy the Canon RF 200-800mm f/6.3-9 IS USM?

Overall, this lens provides excellent value for money. You get a lot of lens for the price, and although it's not a low-light champion, it still produces beautifully sharp, contrasty images, while the versatility of the focal length is hard to beat.

Considering the very best wildlife lenses are telephoto primes costing upwards of $10,000, it’s one of the best you can buy for most wildlife photographers — that is, for anyone who’s not a serious pro.

]]>
https://www.livescience.com/technology/canon-rf-200-800mm-f-6-3-9-is-usm-lens-review-enormous-reach-for-wildlife-photography bWEx9cJk3qQjHx6jandWhA Tue, 30 Dec 2025 14:00:00 +0000 Wed, 17 Dec 2025 16:21:19 +0000
<![CDATA[ Tractor beams inspired by sci-fi are real, and could solve the looming space junk problem ]]> In science fiction films, nothing raises tension quite like the good guys' spaceship getting caught in an invisible tractor beam that allows the baddies to slowly reel them in. But what was once only a sci-fi staple could soon become a reality.

Scientists are developing a real-life tractor beam, dubbed an electrostatic tractor. This tractor beam wouldn't suck in helpless starship pilots, however. Instead, it would use electrostatic attraction to nudge hazardous space junk safely out of Earth orbit.

The stakes are high: With the commercial space industry booming, the number of satellites in Earth's orbit is forecast to rise sharply. This bonanza of new satellites will eventually wear out and turn the space around Earth into a giant junkyard of debris that could smash into working spacecraft, plummet to Earth, pollute our atmosphere with metals and obscure our view of the cosmos. And, if left unchecked, the growing space junk problem could hobble the booming space exploration industry, experts warn.

The electrostatic tractor beam could potentially alleviate that problem by safely moving dead satellites far out of Earth orbit, where they would drift harmlessly for eternity.

While the tractor beam wouldn't completely solve the space junk problem, the concept has several advantages over other proposed space debris removal methods, which could make it a valuable tool for tackling the issue, experts told Live Science.

Related: 11 sci-fi concepts that are possible (in theory)

A prototype could cost millions, and an operational, full-scale version even more. But if the financial hurdles can be overcome, the tractor beam could be operational within a decade, its builders say.

"The science is pretty much there, but the funding is not," project researcher Kaylee Champion, a doctoral student in the Department of Aerospace Engineering Sciences at the University of Colorado Boulder (CU Boulder), told Live Science.

Avoiding disaster

Tractor beams are a staple of sci-fi films and TV shows, such as Star Trek. (Image credit: Star Trek)

The tractor beams depicted in "Star Wars" and "Star Trek" suck up spacecraft via artificial gravity or an ambiguous "energy field." Such technology is likely beyond anything humans will ever achieve. But the concept inspired Hanspeter Schaub, an aerospace engineering professor at CU Boulder, to conceptualize a more realistic version.

Schaub first got the idea after the first major satellite collision in 2009, when an active communications satellite, Iridium 33, smashed into a defunct Russian military spacecraft, Kosmos 2251, scattering more than 1,800 pieces of debris into Earth's orbit.

Related: How many satellites orbit Earth?

In the wake of this disaster, Schaub wanted to be able to prevent this from happening again. To do this, he realized you could pull spacecraft out of harm's way by using the attraction between positively and negatively charged objects to make them "stick" together.

Over the next decade, Schaub and colleagues refined the concept. Now, they hope it can someday be used to move dead satellites out of geostationary orbit (GEO) — an orbit around Earth's equator where an object's speed matches the planet's rotation, making it seem like the object is fixed in place above a certain point on Earth. This would then free up space for other objects in GEO, which is considered "prime real estate" for satellites, Schaub said.

How does it work?

The researchers have been testing the electron gun on pieces of metal in the lab. (Image credit: Nico Goda/CU Boulder)

The electrostatic tractor would use a servicer spacecraft equipped with an electron gun that would fire negatively charged electrons at a dead target satellite, Champion told Live Science. The electrons would give the target a negative charge while leaving the servicer with a positive charge. The electrostatic attraction between the two would keep them locked together despite being separated by 65 to 100 feet (20 to 30 meters) of empty space, she said.

Once the servicer and target are "stuck together," the servicer would be able to pull the target out of orbit without touching it. Ideally, the defunct satellite would be pulled into a "graveyard orbit" more distant from Earth, where it could safely drift forever, Champion said.

Related: 15 of the weirdest things we have launched into space

The electrostatic attraction between the two spacecraft would be extremely weak, due to limitations in electron gun technology and the distance by which the two would need to be separated to prevent collisions, project researcher Julian Hammerl, a doctoral student at CU Boulder, told Live Science. So the servicer would have to move very slowly, and it could take more than a month to fully move a single satellite out of GEO, he added.
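To get a feel for how weak that pull is, here is a rough Coulomb's-law estimate. The sphere size and the 25-kilovolt charging potential below are our own illustrative assumptions, not figures from the CU Boulder team; only the 20-to-30-meter separation comes from the researchers.

```python
# Back-of-the-envelope Coulomb's-law estimate of the electrostatic pull.
# Assumptions (ours, for illustration): each craft acts like a conducting
# sphere of 2 m radius charged to +/-25 kV, separated by 25 m.
import math

EPSILON_0 = 8.854e-12               # vacuum permittivity, F/m
K = 1 / (4 * math.pi * EPSILON_0)   # Coulomb constant, ~8.99e9 N*m^2/C^2

radius = 2.0        # assumed effective radius of each craft, m
potential = 25e3    # assumed charging potential, V
separation = 25.0   # m, mid-range of the 20-30 m quoted above

charge = 4 * math.pi * EPSILON_0 * radius * potential   # q = C*V for a sphere
force = K * charge ** 2 / separation ** 2               # craft treated as point charges
print(f"charge ~ {charge * 1e6:.1f} uC, force ~ {force * 1e3:.2f} mN")
# ~0.45 millinewtons -- a tiny tug, which is why nudging one bus-sized
# satellite into a graveyard orbit could take more than a month.
```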

That's a far cry from movie tractor beams, which are inescapable and rapidly reel in their prey. This is the "main difference between sci-fi and reality," Hammerl said.

Advantages and limitations

The amount of space junk surrounding Earth has greatly increased in recent years. Here is a comparison of space junk in 1965 (left) and 2010 (right). (Image credit: NASA)

The electrostatic tractor would have one big advantage over other proposed space junk removal methods, such as harpoons, giant nets and physical docking systems: It would be completely touchless.

"You have these large, dead spacecraft about the size of a school bus rotating really fast," Hammerl said. "If you shoot a harpoon, use a big net or try to dock with them, then the physical contact can damage the spacecraft and then you are only making the [space junk] problem worse."

Scientists have proposed other touchless methods, such as using powerful magnets, but enormous magnets are both expensive to produce and would likely interfere with a servicer's controls, Champion said.

Related: How do tiny pieces of space junk cause incredible damage?

The main limitation of the electrostatic tractor is how slowly it would work. More than 550 satellites currently orbit Earth in GEO, but that number is expected to rise sharply in the coming decades.

Moving satellites one at a time, a single electrostatic tractor wouldn't keep pace with the number of satellites winking out of operation. It would also be too slow to be practical for clearing smaller pieces of space junk, so it couldn't keep GEO completely free of debris.

Cost is the other big obstacle. The team has not yet done a full cost analysis for the electrostatic tractor, Schaub said, but it would likely cost tens of millions of dollars. However, once the servicer was in space, it would be relatively cost-effective to operate, he added.

Next steps

Researcher Julian Hammerl photographed next to the ECLIPS machine at CU Boulder. (Image credit: Nico Goda/CU Boulder)

The researchers are currently working on a series of experiments in their Electrostatic Charging Laboratory for Interactions between Plasma and Spacecraft (ECLIPS) machine at CU Boulder. The bathtub-sized, metallic vacuum chamber, which is equipped with an electron gun, allows the team to "do unique experiments that almost no one else can currently do" in order to simulate the effects of an electrostatic tractor on a smaller scale, Hammerl said.

Once the team is ready, the final and most challenging hurdle will be to secure funding for the first mission, which is a process they have not yet started.

Most of the mission cost would come from building and launching the servicer. However, the researchers would ideally like to launch two satellites for the first tests, a servicer and a target that they can maneuver, which would give them more control over their experiments but also double the cost.

Related: 10 stunning shots of Earth from space in 2022

If they can somehow wrangle that funding, a prototype tractor beam could be operational in around 10 years, the team previously estimated.

Is it viable?

Space junk is becoming a major problem for the space exploration industry. (Image credit: CU Boulder)

While tractor beams may sound like a pipe dream, experts are optimistic about the technology.

"Their technology is still in the infancy stage," John Crassidis, an aerospace scientist at the University at Buffalo in New York, who is not involved in the research, told Live Science in an email. "But I am fairly confident it will work."

Removing space junk without touching it would also be much safer than any current alternative method, Crassidis added.

The electrostatic tractor "should be able to produce the forces necessary to move a defunct satellite" and "certainly has a high potential to work in practice," Carolin Frueh, an associate professor of aeronautics and astronautics at Purdue University in Indiana, told Live Science in an email. "But there are still several engineering challenges to be solved along the way to make it real-world-ready."

Scientists should continue to research other possible solutions, Crassidis said. Even if the CU Boulder team doesn't create a "final product" to remove nonfunctional satellites, their research will provide a stepping stone for other scientists, he added.

If they are successful, it wouldn't be the first time scientists turned fiction into fact.

"What is today's science fiction could be tomorrow's reality," Crassidis said.

]]>
https://www.livescience.com/space/space-exploration/tractor-beams-inspired-by-sci-fi-are-real-and-could-solve-the-looming-space-junk-problem XhiKmue2w6bjHBqiznbRLJ Tue, 30 Dec 2025 14:00:00 +0000 Tue, 23 Dec 2025 18:21:43 +0000
<![CDATA[ Scientists are developing a 'self-driving' device that helps patients recover from heart attacks ]]> Hospitals may soon be able to rely on a "self-driving" machine to help patients recover from heart attacks. This machine would deliver treatments to the patient, collect data on how their body responds, and then adjust their medications to stabilize the patient within parameters preset by their doctor.

This is the vision for the Autonomous Closed-Loop Intervention System (ACIS), a device being developed by scientists at NTT Research, an arm of global technology company NTT. The device has been tested in animal experiments but not in human patients yet.

The researchers' eventual goal is to allow the heart to rest and minimize its oxygen use in that critical recovery window after a patient experiences a cardiac emergency. The jobs that would be handled by ACIS are usually done by medical providers — but the idea is that the device could standardize and optimize the process to deliver better outcomes while relieving strain on doctors' already-limited resources.

"We think that this system will outperform the standard of care," said Dr. Joe Alexander, director of NTT Research's Medical and Health Informatics (MEI) lab.

ACIS stemmed from a larger effort spearheaded by the MEI Lab known as the Bio Digital Twin program. Its aim is to construct advanced virtual models of organ systems that can be personalized with an individual patient's data, providing a detailed and dynamic representation of their medical status and a testable model for developing treatment plans.

Live Science spoke with Alexander about Digital Twins, ACIS and his vision for how they might transform health care.

Nicoletta Lanese: When we're talking about a Bio Digital Twin, is it fair to say it's a virtual copy of the patient?

Dr. Joe Alexander: Probably the layperson would think of a Bio Digital Twin as a copy of the person. But actually, it's just a system of equations, modeling and simulation to represent a person to the extent that is relevant for the disease. It's a very specific application, so there's no single Bio Digital Twin representing the [whole] person.

In our case, although we set out to build a family of Bio Digital Twins to represent different organ systems for different types of important diseases, we're starting with the cardiovascular system. So when I talk about a Cardiovascular Bio Digital Twin, I'm not talking about even a copy of the heart; I'm talking about a mathematical representation of all of the systems necessary for looking at the cardiovascular system in a particular patient.

In the case of ACIS, we're looking at acute heart failure and acute myocardial infarction [colloquially known as a heart attack].

Dr. Joe Alexander predicts ACIS could someday "outperform the standard of care." (Image credit: Courtesy of NTT Research, Inc.)

NL: Could you talk about what kind of data goes into the model?

JA: This Cardiovascular Bio Digital Twin is representing pressures and flows throughout the cardiovascular system, including pressures and flows generated by all four chambers of the heart. … We are able to represent the cardiovascular system dynamics in pressures, flows and volumes.

NL: And how do you make that actionable for an individual patient?

JA: We're in the early stages of it, but we have a road map for how to do it. Basically, we first go after representing the "normal" cardiovascular system for patients. So, if we can get data around "normal," then that's very good. [Editor's note: The MEI Lab is working with partners such as the National Cerebral and Cardiovascular Center in Japan to get access to this kind of data.]

But probably what's most important is finding populations that are relevant to the particular patient — so, in this case, patients with cardiovascular disease or patients with heart failure. So we go after that population-level data; let's say for heart failure. Then, from that data, we can estimate parameters for our cardiovascular model that represent the general population of patients with heart failure.

Within that population, as you know, there's a lot of variability. So are there other characteristics specific to our patient that we can use? Maybe results from an echocardiogram [an ultrasound scan of the heart]; maybe age; maybe comorbidities [other medical conditions]; sex, male or female; or environment. And if there is genetic information available, then we can find a subpopulation that's even more relevant to the patient.

Now, with ACIS, we [would] actually hook up a patient to the "first guess" of our Cardiovascular Bio Digital Twin for what would match that patient based on population-level data. Since it's a feedback control system, the feedback will automatically adjust the parameter values to deliver the necessary drugs or device therapies that that particular patient needs for some prespecified cardiac output. In that way, we can further fine-tune the Digital Twin for that patient.

NL: Can you describe how ACIS and its feedback loop work?

JA: The idea is that it's a "self-driving" therapeutic, just like a self-driving car. But in this case, "self-driving" is delivering the appropriate drugs or, in severe cases, medical-device therapies that a patient may need.

We have a system where we specify — just type in the keyboard — the desired cardiac output, heart rate, left atrial pressure, arterial pressure that we want the patient to achieve. Then, syringes that are filled with the appropriate drugs to create those changes are driven by our model, or "best guess" for that particular patient. This is all after a patient has had the primary lesion [like a blood vessel blockage] treated in the cath lab.

Let's say they had a vessel that was occluded; it's already been opened up or a stent has been placed, and they go to the ICU [intensive care unit] or CCU [coronary care unit] in order to recover. Recovery means that the heart needs an opportunity to rest. That means letting the heart work as little as possible to maintain the desired cardiac output.

We have a certain regimen of drugs that are given. Catecholamines improve the ability of the heart to contract. Nitrates reduce the afterload of the heart so it doesn't have to work against such a high load when it ejects blood into the arterial system. Diuretics decrease the circulating blood volume and remove blood from the lungs, where it has built up due to the acute failure.

These drugs are typically given by a physician; they'll give one drug and look at the response, give another drug, the response, and manage that patient over several days. When our system achieves proper function — and we're almost there, I think — all those drugs can be given at once if we know how the system will respond. That saves us a lot of time in treating the patient.

The drugs are delivered by these autonomously controlled syringes; then the patient responds to them, and that response is fed back in this system. Those values are compared to the ones that we typed in the keyboard, and if there's a difference, then feedback systems work to reduce that difference. It also gives information to our Digital Twin for that patient, so that in the future, we have better representations of those resistors and capacitors in the model.
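As a loose illustration of that compare-and-correct loop — not NTT's actual control law, and with entirely invented dynamics and gains — here is a minimal proportional-integral controller steering a single variable toward a typed-in setpoint:

```python
# Minimal sketch of a closed-loop (feedback) controller, in the spirit of
# the ACIS loop described above: compare the measured value to the typed-in
# setpoint and adjust the "drug" input to shrink the difference. The plant
# model, gains and numbers are all made up for illustration.

TARGET_CO = 5.0    # desired cardiac output, L/min (the typed-in setpoint)
KP, KI = 1.0, 0.1  # proportional and integral gains (arbitrary)

co, integral = 3.2, 0.0   # start below target, as in acute heart failure
for step in range(200):
    error = TARGET_CO - co            # difference from the setpoint
    integral += error                 # accumulate persistent offsets
    dose = max(0.0, KP * error + KI * integral)  # syringes can't un-inject

    # Toy first-order "patient": cardiac output drifts toward an
    # equilibrium that rises with the dose (0.5 L/min per dose unit).
    co += 0.2 * (0.5 * dose + 3.0 - co)

print(f"cardiac output after 200 steps: {co:.2f} L/min (target {TARGET_CO})")
```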

The Cardiovascular Digital Twin represents the dynamics of the cardiovascular system through mathematical equations and simulations. (Image credit: Getty Images)

NL: What stage of development has ACIS reached at this point?

JA: So, in animal experiments in dogs, last year for the first time, we experimentally induced acute heart failure and we were able to let this autonomous system correct the cardiac output and arterial pressure autonomously, while minimizing myocardial [heart muscle] oxygen consumption.

Since that first successful experiment about a year ago, we've had several other successful [animal] experiments, all the while improving our feedback system to be more complex, making it so that it can operate based on intermittent data, so you don't have to be continuously sampling. You can do it episodically.

We have several more years of work in optimizing this system, we think, in animal experimentation — probably about three years more. And then we'll be ready for first-in-human studies where ACIS will be used but with a clinician in the loop [for the initial human tests]. What ACIS would do is tell the physician what doses of these various drugs to deliver, and the physician would then make a decision whether to do it or not, as a safety measure.

Now, what I've been describing so far has mostly been about drugs, but the same algorithms work for medical devices, such as left ventricular assist devices [LVAD, a type of mechanical pump] or extracorporeal membrane oxygenation devices [ECMO, which circulates the blood to let the heart and lungs rest]. This is all within the scope of what we expect to achieve in experimental animals within the next three years before going to first-in-human studies.

NL: What are the next steps toward getting ACIS approved? What might the trials look like?

JA: It would be kind of like [testing] an autonomous or self-driving vehicle — level 1 through 4 degrees, or stages, of autonomy.

In other words, allowing the system to have increasing responsibility and watching the performance until settling into acceptance of an autonomous system where then, still, probably a specialist would monitor it — like someone sitting in the seat of a self-driving car, ready to take over if things go wrong. I see that kind of progression, similar to the self-driving vehicle.

NL: And in the long run, would ACIS always have some kind of clinician supervision?

JA: I still hold to the concept of "autonomous," but I suspect that there will be a cardiologist somewhere roaming around, monitoring, perhaps, a number of patients at once.

I'm very committed to the idea that the device that we conceive of can actually outperform the cardiologist. And I know that we'll rub some cardiologists the wrong way. But we expect to demonstrate that point, or strongly suggest that that's true, by doing experiments in animals where we compare the ACIS system to clinically trained cardiologists. We expect reduced infarct size [degree of heart tissue death] from ACIS compared to the standard of care from cardiologists.

NL: Assuming this device gets approved in the future, where do you see it having the most benefit?

JA: There's the so-called Quintuple Aim of Health Care, which says to improve the patient experience, improve the physician experience, improve population health, reduce the cost of care, and improve health equity. These aims, I think, are all addressed by ACIS.

The patient would have more attention and minute-to-minute care — you wouldn't have a resident trying to juggle many patients at once. You could have a less-specialized clinical caretaker who is watching the behavior of the device, and so that would improve not only the patient experience and quality of the patient's care but also the health care provider's experience. They wouldn't have to be overworked to such an extent.

We think that this system will outperform the standard of care because [on paper] you more rapidly converge on the minimization of myocardial oxygen consumption and have better recovery during the hospital stay. So the patients have fewer readmissions and complications after being released. There's always some injury to the heart [with these cardiac events], and there may be some infarction of the heart. So we think that this level of care could reduce infarct size during treatment, so you preserve more of the heart.

NL: And when you eventually hand off ACIS for clinical testing, what would the next project be?

JA: For us, the natural progression within the next 10 years, probably within the next five years, would be chronic heart failure. In chronic heart failure, you have to deal with more complexity, such as [tissue] remodeling, where the ventricles get thicker or get dilated. That kind of remodeling changes the mechanics.

You also have to deal with data from patients who are not in the hospital. We plan on building registries of patients [with Digital Twins] who would have been acutely ill to have access to that data for treating them outside. But then we have to also rely on things like wearable technologies, and we've been working on that as well. We have collaborations with folks at the Technical University of Munich who are developing special biosensors and biomaterials and implantable sensors and so forth that could help provide the data that would be important to doing predictive health maintenance in patients with chronic heart failure.

And in chronic heart failure, we have to deal with comorbidities and complications like kidney failure … and anemia. The combination of fluid overload and anemia all due to renal failure really makes the heart suffer from a lack of oxygen and causes slow deterioration.

I'm sure that complexity alone will keep me busy for the rest of my life. We have a lot of work to do with chronic heart failure; that would be next for sure.

Editor's note: This interview has been lightly edited for length and clarity.

]]>
https://www.livescience.com/health/heart-circulation/scientists-are-developing-a-self-driving-device-that-helps-patients-recover-from-heart-attacks XUjA9QmdztEYnxMqxQ2p5T Tue, 30 Dec 2025 13:00:00 +0000 Thu, 18 Dec 2025 21:13:14 +0000
<![CDATA[ This new DNA storage system can fit 10 billion songs in a liter of liquid — but challenges remain for the unusual storage format ]]> The U.S. biotech company Atlas Data Storage has launched a synthetic DNA storage system capable of holding 1,000 times more data than traditional magnetic tape.

Atlas claims the product, called Atlas Eon 100, will store humanity's "irreplaceable archives" for thousands of years. These include family photos, scientific data, corporate records, cultural artifacts and the master versions of digital artworks, movies, manuscripts and music.

"This is the culmination of more than ten years of product development and innovation across multiple disciplines," Bill Banyai, Founder of Atlas Data Storage, said in a statement. “We intend to offer new solutions for long-term archiving, data preservation for AI models, and the safeguarding of heritage and high-value content."

Fundamentally, all digital data is just a series of 1s and 0s in a defined sequence. DNA is similar in that it is made up of defined sequences of the chemical bases adenine (A), cytosine (C), guanine (G) and thymine (T).

DNA data storage works by mapping the binary code to these bases; for example, an encoding scheme might assign A as 00, C as 01, G as 10, and T as 11. Artificial DNA can then be synthesized with the bases arranged in the corresponding order.
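Here is a minimal sketch of that mapping, using the example scheme above. Real systems, presumably including Atlas's, layer error-correcting codes on top and avoid troublesome sequences such as long runs of a single base; none of that is modeled here.

```python
# Minimal sketch of the base-mapping idea described above, using the
# example scheme A=00, C=01, G=10, T=11.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map each pair of bits in `data` to one DNA base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Invert encode(): read the strand back into bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")          # 2 bytes -> 16 bits -> 8 bases
print(strand)                   # CAGACGGC
assert decode(strand) == b"Hi"  # round-trips cleanly
```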

For Atlas Eon 100, the DNA is then dehydrated and stored as a powder in 0.7-inch-tall (1.8 cm) ruggedized steel capsules. It is rehydrated only when it needs to be sequenced and its bases translated back to binary.

Diagram showing the DNA data storage process. (Image credit: Atlas Data Storage)

More useful than magnetic tape

Just one quart (one liter) of the DNA solution can hold 60 petabytes of data — the equivalent of 10 billion songs or 12 million HD movies. This makes Atlas Eon 100, which was announced on Dec. 2, 1,000 times more storage-dense than magnetic tape.

For context, about 15,500 miles (25,000 km) of 0.5-inch-wide (12.7 mm) LTO-10 tape, a standard high-capacity storage medium, would be needed to hold that same amount of data.
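Those equivalences hold up under typical file-size assumptions (ours, since the article doesn't state them):

```python
# Quick sanity check on the capacity claims above, with assumed file sizes.
capacity_bytes = 60 * 10**15   # 60 petabytes per liter, per Atlas

song_bytes = 6 * 10**6         # assumed ~6 MB per compressed song
movie_bytes = 5 * 10**9        # assumed ~5 GB per HD movie

print(f"songs:  {capacity_bytes / song_bytes:.1e}")    # ~1.0e+10, i.e. 10 billion
print(f"movies: {capacity_bytes / movie_bytes:.1e}")   # ~1.2e+07, i.e. 12 million
```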

This storage density will make transporting large quantities of data easier than it would be with typical hard drives or tape reels. DNA is also known to keep its form for centuries, making it a remarkably stable medium for preserving data over very long periods.

Atlas Data Storage says its product is stable in an office environment with 99.99999999999% reliability, but the capsules can also endure temperatures as high as 104°F (40°C). Magnetic tape, on the other hand, decays in about a decade even with temperature and humidity controls.

Optical media, such as CDs and DVDs, typically degrade within 30 years, while hard drives last about six or seven years before showing signs of deterioration. In less than three hours at 158°F (70°C), a flash memory cell can "age" as much as it normally would in a month.

Atlas also argues that its DNA storage service offers an easier way to make backups of its customers’ data than other media do. Indeed, once one strand is encoded, enzymes can be used to make more than a billion copies in just a few hours.

A solution for a data-hungry society?

According to Atlas, society generates 280 PB of data every minute. It presents its DNA data storage as a potential solution to the proliferation of digital data, which has been exacerbated massively by the generative artificial intelligence (AI) boom.

However, the biotech faces a key scaling challenge: synthesizing encoded artificial DNA is still quite a long process compared with, say, saving a photo on an existing hard drive. Twist Bioscience, Atlas's former parent company from which it inherited its DNA synthesis process, currently has a lead time of between two and eight business days on orders of genes and oligos (long and short DNA strands, respectively).

Atlas Eon 100 is about 1,000 times more storage-dense than magnetic tape. (Image credit: Atlas Data Storage)

Sequencing is notoriously expensive, too; it costs about $30 to read one gigabase of DNA, the equivalent of about 250 MB of data at the maximum of two bits per base. It also takes a long time, with another recent DNA storage solution reporting that it takes 25 minutes to recover a single file. Nevertheless, Atlas Data Storage claims that modern DNA sequencers are "improving throughput and cutting costs 1,000× faster than Moore's Law."

That said, due to the time required to synthesize and sequence DNA, the DNA Data Storage Alliance noted in 2025 that they do not expect DNA to be used for archival data storage at scale for another three to five years.

Thomas Heinis, a computer science professor at Imperial College London who researches DNA-based data storage, is sceptical, given how little concrete performance data Atlas has published about Atlas Eon 100. He pointed out that Catalog DNA, which made similar promises about its Shannon storage solution, went bust a few months ago.

"I have no doubt that they have built an impressive device, but it’s difficult to appreciate without concrete information," he told Live Science, adding that the major challenge to commercialising DNA storage is synthesis, not sequencing.

"It sounds banal, but if the write/synthesis cost is not competitive, then there is no point in reading/sequencing cost efficiently. You cannot read (cheaply) what you cannot afford to write. Currently, synthesis is orders of magnitude too expensive while sequencing is closer to tape but still more expensive. Despite being a firm believer in DNA storage, a lot of technological progress is needed and I have not seen anyone with an economically viable solution yet."

]]>
https://www.livescience.com/technology/computing/this-new-dna-storage-system-can-fit-10-billion-songs-in-a-liter-of-liquid-but-challenges-remain-for-the-unusual-storage-format uhaan5RfUxZmmc8i9Xe9QQ Tue, 30 Dec 2025 12:00:00 +0000 Fri, 19 Dec 2025 17:12:39 +0000
<![CDATA[ See the exact point where a glacier, a lake and a river 'touch' in Argentina — Earth from space ]]>
QUICK FACTS

Where is it? Los Glaciares National Park, Argentina [-50.469690266, -73.03391046]

What's in the photo? The point where a non-retreating glacier, a turquoise lake and a murky river meet

Who took the photo? An unnamed astronaut onboard the International Space Station (ISS)

When was it taken? March 2, 2021

This incredible astronaut photo shows the unusual point where a hefty non-retreating glacier, a pristine turquoise lake and a murky green "river" perfectly converge at the intersection of three valleys in Argentina.

The trio of hydrological features — the Perito Moreno Glacier, Lago Argentino and Brazo Rico — lie at the heart of Los Glaciares National Park, which covers an area of around 2,300 square miles (6,000 square kilometers) in the Santa Cruz province of southern Argentina, near the country's border with Chile.

The aerial photo doesn’t just show off these three aqueous entities in a single frame; if you look closely, it also reveals the point where the trio touch in a slim channel along the western edge of the Magallanes Peninsula — the rocky outcrop that lies between the lake and the river, according to NASA's Earth Observatory.

In this photo, the waters of Lago Argentino and Brazo Rico are likely in direct contact with each other (as in the photo below). But their waters do not readily mix because they have different densities, due to their respective concentrations of suspended particulate matter, according to a 2022 study.

But every four to five years, the glacier's tongue juts forward, colliding with the Magallanes Peninsula and temporarily damming the Brazo Rico. When this happens, the surface of the murky body of water rises by up to 100 feet (30 meters) until a pressure build-up causes the icy dam to spectacularly "rupture," the Earth Observatory previously reported.

From the banks of the Magallanes Peninsula, tourists can clearly see the point where the glacier, lake and "river" meet. Every four to five years, the glacier juts forward, blocking the Brazo Rico (left) and causing it to rise by up to 100 feet. (Image credit: Fernando/Wikimedia)

Perito Moreno is the largest glacier in Patagonia, which includes parts of Argentina and Chile. It is approximately 19 miles (30 km) long with ice up to 200 feet (60 m) thick. In total, the glacier holds roughly the same amount of water as 360,000 Olympic swimming pools, according to back-of-the-envelope calculations.

The glacier is "non-retreating," meaning that it is not shrinking despite rising atmospheric temperatures triggered by human-caused climate change. This is extremely rare nowadays, and Perito Moreno is frequently cited as one of the "world's last major non-retreating glaciers." However, a recent study hints that it may finally be starting to shrink.

Lago Argentino is the largest freshwater lake in Argentina, covering a total area of around 550 square miles (1,425 square km). The section visible in the astronaut photo is the lake's southernmost arm. It contains glacial meltwater filled with rocky particles released by the glaciers' constant movements, collectively known as "glacier milk," which gives the water its striking turquoise color.

The lake's northernmost arm also connects to the Upsala Glacier, which is currently in full retreat.

Another astronaut photo, taken in August 2012, shows the wider glacier, lake and "river" systems surrounding the point where all three meet. (Image credit: NASA/ISS program)

Brazo Rico, meaning "rich arm" in Spanish, is also technically part of Lago Argentino. However, it has become increasingly isolated from the rest of the lake due to repeated damming by the Perito Moreno glacier, making it behave more like a river than part of a lake.

The frequent icy obstruction is also responsible for Brazo Rico's murky color, which is the result of sediment dislodged by the glacier's movements. The continued rising and falling of the river's surface has also carved out a border around its edges where no trees can grow.

Eagle-eyed viewers may have also spotted the narrow road winding across the Magallanes Peninsula and along the Brazo Rico's northern edge (just above the tree line): One can only imagine the extraordinary views you'd get to experience driving along there.

For more incredible satellite photos and astronaut images, check out our Earth from space archives.

]]>
https://www.livescience.com/planet-earth/rivers-oceans/see-the-exact-point-where-a-glacier-a-lake-and-a-river-touch-in-argentina-earth-from-space EvfeNabUBhLXmJFADcSUjG Tue, 30 Dec 2025 08:00:00 +0000 Thu, 18 Dec 2025 15:20:19 +0000
<![CDATA[ Orcas are adopting terrifying new behaviors. Are they getting smarter? ]]> In March 2019, researchers off the coast of southwestern Australia witnessed a gruesome scene: a dozen orcas ganging up on one of the biggest creatures on Earth to kill it. The orcas devoured huge chunks of flesh from the flanks of an adult blue whale, which died an hour later. This was the first-ever documented case of orca-on-blue-whale predation, but it wouldn't be the last.

In recent months, orcas (Orcinus orca) have also been spotted abducting baby pilot whales and tearing open sharks to feast on their livers. And off the coast of Spain and Portugal, a small population of orcas has begun ramming and sinking boats.

All of these incidents show just how clever these apex predators are.

"These are animals with an incredibly complex and highly evolved brain," Deborah Giles, an orca researcher at the University of Washington and the nonprofit Wild Orca, told Live Science. "They've got parts of their brain that are associated with memory and emotion that are significantly more developed than even in the human brain."

But the scale and novelty of recent attacks have raised a question: Are orcas getting smarter? And if so, what's driving this shift?

It's not likely that orcas' brains are changing on an anatomical level, said Josh McInnes, a marine ecologist who studies orcas at the University of British Columbia. "Behavioral change can influence anatomical change in an animal or a population" — but only over thousands of years of evolution, McInnes told Live Science.

Related: Scientists investigate mysterious case of orca that swallowed 7 sea otters whole

But orcas are fast learners, which means they can and do teach each other some terrifying tricks, and thus become "smarter" as a group. Still, some of these seemingly new tricks may in fact be age-old behaviors that humans are only documenting now. And just like in humans, some of these learned behaviors become trends, ebbing and flowing in social waves.

Frequent interactions with humans through boat traffic and fishing activities may also drive orcas to learn new behaviors. And the more their environment shifts, the faster orcas must respond and rely on social learning to persist.

Teaching hunting strategies 

Orcas (Orcinus orca) attacked an adult blue whale off the coast of Australia and inserted their heads inside the whale's mouth to feed on its tongue. (Image credit: John Totterdell)

There's no question that orcas learn from each other. Many of the skills these animals teach and share relate to their role as highly evolved apex predators.

Scientists described orcas killing and eating blue whales (Balaenoptera musculus) for the first time in a study published last year. In the months and years that followed the first attack in March 2019, orcas preyed on a blue whale calf and juvenile in two additional incidents, pushing the young blue whales below the surface to suffocate them.

This newly documented hunting behavior is an example of social learning, with strategies being shared and passed on from adult orcas to their young, Robert Pitman, a marine ecologist at Oregon State University's Marine Mammal Institute, told Live Science in an email. "Anything the adults learn will be passed along" from the dominant female in a pod to her offspring, he said.

Taking down a blue whale "requires cooperation and coordination," Pitman said. Orcas may have learned and refined the skills needed to tackle such enormous prey in response to the recovery of whale populations from whaling. This know-how was then passed on, until the orcas became highly skilled at hunting even the largest animal on Earth, Pitman said.

Old tricks, new observations 

The remains of a shark that was attacked by orcas off the coast of South Africa. (Image credit: Marine Dynamics)

Some of the gory behaviors researchers have observed recently may actually be long-standing habits.

For instance, during the blue whale attacks, observers noted that the orcas inserted their heads inside live whales' mouths to feed on their tongues. But this is probably not a new behavior — just a case of humans finally seeing it up close.

"Killer whales are like humans in that they have their 'preferred cuts of meat,'" Pitman said. "When preying on large whales, they almost always take the tongue first, and sometimes that is all they will feed on."

Tongue is not the only delicacy orcas seek out. Off the coast of South Africa, two males — nicknamed Port and Starboard — have, for several years, been killing sharks to extract their livers.

Although the behavior surprised researchers at first, it's unlikely that orcas picked up liver-eating recently due to social learning, Michael Weiss, a behavioral ecologist and research director at the Center for Whale Research in Washington state, told Live Science.

Related: Orcas attacked a great white shark to gorge on its liver in Australia, shredded carcass suggests

That's because, this year, scientists also captured footage of orcas slurping down the liver of a whale shark off the coast of Baja California, Mexico. The likelihood that Port and Starboard transferred their know-how across thousands of miles of ocean is vanishingly small, meaning liver-eating is probably a widespread and established behavior.

"Because there are more cameras and more boats, we're starting to see these behaviors that we hadn't seen before," Weiss said.

Sharing scavenging techniques 

Orcas make an easy meal by following fishing boats and feasting on their catch. (Image credit: wildestanimal via Getty Images)

Orcas master and share more than hunting secrets. Several populations worldwide have learned to poach fish caught for human consumption from the longlines used in commercial fisheries and have passed on this information.

In the southern Indian Ocean, around the Crozet Islands, two orca populations have increasingly scavenged off longlines since fishing in the region expanded in the 1990s. By 2018, the entire population of orcas in these waters had taught one another to feast on longline buffets, with whole groups that previously foraged on seals and penguins developing a taste for human-caught toothfish.

Sometimes, orcas' ability to quickly learn new behaviors can have fatal consequences. In Alaska, orcas recently started dining on groundfish caught by bottom trawlers, but many end up entangled and dead in fishing gear.

"This behavior may be being shared between individuals, and that's maybe why we're seeing an increase in some of these mortality events," McInnes said.

Playing macabre games 

Orcas off the North Pacific coast have been playing porpoises to death in a game documented for 60 years. (Image credit: Wild Orca)

Orcas' impressive cognitive abilities also extend to playtime.

Giles and her colleagues study an endangered population of salmon-eating orcas off the North Pacific coast. Called the Southern Resident population, these killer whales don't eat mammals. But over the past 60 years, they have developed a unique game in which they seek out young porpoises, with the umbilical cords sometimes still attached, and play with them to death.

Related: 'An enormous mass of flesh armed with teeth': How orcas gained their 'killer' reputation

There are 78 recorded incidents of these orcas tossing porpoises to one another like a ball but not a single documented case of them eating the small mammals, Giles said. "In some cases, you'll see teeth marks where the [killer] whale was clearly gently holding the animal, but the animal was trying to swim away, so it's scraping the skin."

The researchers think these games could be a lesson for young orcas on how to hunt salmon, which are roughly the same size as baby porpoises. "Sometimes they'll let the porpoise swim off, pause, and then go after it," Giles said.

Are humans driving orcas to become "smarter"? 

Orcas are adapting their hunting strategies to changing conditions in Antarctica. (Image credit: Delta Images via Getty Images)

Humans may indirectly be driving orcas to become smarter, by changing ocean conditions, McInnes said. Orca raids on longline and trawl fisheries show, for example, that they innovate and learn new tricks in response to human presence in the sea.

Human-caused climate change may also force orcas to rely more heavily on one another for learning.

In Antarctica, for instance, a population of orcas typically preys on Weddell seals (Leptonychotes weddellii) by washing them off ice floes. But as the ice melts, they are adapting their hunting techniques to catch leopard seals (Hydrurga leptonyx) and crabeater seals (Lobodon carcinophaga) — two species that don't rely on ice floes as much and are "a little bit more feisty," requiring orcas to develop new skills, McInnes said.

While human behaviors can catalyze new learning in orcas, in some cases we have also damaged the bonds that underpin social learning. Overfishing of salmon off the coast of Washington, for example, has dissolved the social glue that keeps orca populations together.

"Their social bonds get weaker because you can't be in a big partying killer-whale group if you're all hungry and trying to search for food," Weiss said. As orca groups splinter and shrink, so does the chance to learn from one another and adapt to their rapidly changing ecosystem, Weiss said.

And while orcas probably don't know that humans are to blame for changes in their ocean habitat, they are "acutely aware that humans are there," McInnes said.

Luckily for us, he added, orcas don't seem interested in training their deadly skills on us.

]]>
https://www.livescience.com/animals/orcas/orcas-are-adopting-terrifying-new-behaviors-are-they-getting-smarter 8yG5HwuhUvwv9QK4DBChX8 Mon, 29 Dec 2025 17:00:00 +0000 Tue, 23 Dec 2025 18:03:06 +0000
<![CDATA[ A fentanyl vaccine enters human trials in 2026 — here's how it works ]]> A vaccine that blocks the effects of fentanyl — including overdose — will enter human trials in the coming months, perhaps leading the way to the first-ever proactive treatment for opioid use disorder.

The initial trials will focus on assessing the safety of the vaccine, which was originally developed with funding from the U.S. Department of Defense. The shot was previously tested in rats and showed promising results. Now, it has been licensed by the startup ARMR Sciences, which will begin enrolling patients for Phase I clinical trials in the Netherlands in early 2026, in January or February.

"Our goal as a company is to eliminate the lethality of the drug supply," said Colin Gage, co-founder and CEO of ARMR. "We want to go about doing that by attacking the root cause of not only addiction, but also, obviously, overdose."

How does the vaccine work?

The vaccine works by keeping fentanyl out of the brain, which it does by making the molecule a target of the immune system.

Fentanyl is a synthetic opioid with effects 50 times stronger than heroin. Opioids, also called narcotics, broadly work by binding to opioid receptors in the brain and spinal cord, triggering changes in nerve cell signaling that prevent pain and can create a euphoric high.

But these opioid receptors are also found in the part of the brain that controls breathing, so fentanyl can also reduce respiration to a deadly degree if used in excess. A 2-milligram dose of fentanyl — similar in volume to about a dozen grains of salt — can be fatal, according to the Drug Enforcement Administration (DEA).

If a person overdosing on fentanyl is treated quickly enough with naloxone (better known by the brand name Narcan), these effects can be reversed. This antidote also binds to opioid receptors, thus blocking the effects of fentanyl.

ARMR's vaccine takes a different approach: It works in the circulatory system, before the drug can reach the brain.

"This would be the first-ever treatment that does not work on the [opioid] receptor," Gage told Live Science.

What's in the vaccine?

To keep fentanyl from reaching the brain, the immune system must first recognize the drug. But fentanyl is a tiny molecule, not a pathogen like a virus, and immune cells don't naturally react to its presence.

To spur an immune response to fentanyl, the University of Houston's Colin Haile, an ARMR co-founder and scientific adviser, and his colleagues had to tie the opioid to something else.

They chose a deactivated diphtheria toxin called CRM197, a compound already used in vaccines on the market; once deactivated, the toxin is no longer toxic and instead helps rouse an immune response. To boost this immune response even further, they also added dmLT, a compound derived from toxins produced by the bacterium Escherichia coli. This modified compound is not toxic itself, and it has been tested in humans in trials of other not-yet-approved vaccines.

These two components are attached to a synthetic piece of the fentanyl molecule, which in and of itself cannot cause a high or pain relief.

When the immune system meets this combo of fentanyl fragments, CRM197 and dmLT, it builds antibodies that react to real fentanyl. These antibodies bind to the opioid, keeping it from crossing the brain's protective membrane — the blood-brain barrier — and then clearing it from the body.

In rat studies, the vaccine blocked fentanyl from entering the rodents' brains and also kept the drug from depressing respiration and causing overdose.

How is the vaccine being tested?

So far, the studies on the vaccine have been in rodents, though dmLT has been tested in humans to some extent and CRM197 is already used in other approved human vaccines. The protocol in rats is to give an initial dose of the fentanyl vaccine and then boosters three and six weeks out from the first dose, Haile told Live Science.

"The longest we've followed the animals in our studies is about six months and we saw complete blockade of fentanyl effects at six months post the initial vaccination," Haile said. It remains to be seen how that will translate to "human years," he noted, but lab rats live a couple of years in total, so the researchers think the vaccine will work for a long time in humans.

The initial human trials that will begin in early 2026 will enroll 40 people and will focus on detecting any safety issues with the vaccine, such as unwanted or dangerous side effects. Researchers will also draw blood samples from participants to make sure that the vaccine is spurring the creation of anti-fentanyl antibodies.

If these Phase I trials are successful, the next step will be Phase II trials to test the vaccine's efficacy — how well the vaccine blocks fentanyl's effects. In these trials, not only will antibody levels be tracked over time, but some participants will also be dosed with safe levels of fentanyl used for pain relief in medical procedures. This will be done under close supervision, to check that the vaccine works in the presence of the drug.

Photo of an ambulance parked outside an emergency department. Two EMTs are wheeling in a patient on a gurney. They are blurred, suggesting they are moving quickly.

The new vaccine is designed to block the effects of fentanyl, including overdose. (Image credit: Getty Images)

Are there potential drawbacks to the vaccine?

Fentanyl has legitimate medical uses as a painkiller, especially in emergency situations. One concern about the vaccine is that people who take it will lose this option for pain relief.

However, the antibodies created by vaccination do not bind to other opioids — such as morphine, oxycodone or methadone — or to other pain-relief options, Haile said. That means there are alternatives if people who get the vaccine need pain relief down the line.

The vaccine also does not interfere with buprenorphine, a drug used to treat opioid use disorder by reducing withdrawal symptoms and cravings. Haile said he and his team are currently testing the vaccine in combination with naltrexone, a non-opioid medication also used to block the effects of opioids in the treatment of substance use disorders.

In theory, it might be possible to take enough fentanyl to override the body's supply of anti-fentanyl antibodies, Haile said. However, given that the vaccine blocks fentanyl's euphoric effects, he expects people who want to quit will not be motivated to try to work around it.

"We want people who want to quit, want to not use the drug," he said. "That will give them a chance to realize that they won’t get high from this drug and there is no use in taking it any longer."

Who might benefit from the fentanyl vaccine?

Gage suggested that one market for the vaccine could be first responders concerned about accidental fentanyl exposure. (That concern has risen in recent years with the spread of misinformation about fentanyl.)

For clarity: if fentanyl gets on your skin via casual exposure — for example, if you touch an object that's been exposed to the drug — it will not be absorbed through the skin. Meaningful absorption through the skin requires direct contact with the drug over hours or days. That said, if an EMT or police officer gets the drug on their hands and then touches their mouth or eyes, they could feel some of the drug's analgesic, or pain-relieving, effects, Haile said.

The vaccine could also be "an extra tool in the toolset" for people with opioid use disorder, Gage said. Combining the vaccine with "robust" cognitive behavioral therapy, a type of talk therapy, and communal support could be "incredibly beneficial to people who are just looking for another lifeline to help themselves get better," he said.

Finally, the vaccine could be beneficial for people who use less-deadly drugs — such as cocaine, stimulants or painkillers — that they buy on the black market. That's because these drugs are increasingly cut with fentanyl, meaning people may overdose without even knowing they are taking the opioid.

"I had two close childhood friends who passed away from fentanyl overdose," Gage said. "Neither of them were seeking it out."

Over 48,000 people are estimated to have died of opioid overdoses in 2024 in the U.S., according to provisional data. Perhaps due to this high death toll, early research suggests that people with personal experience with opioid use disorder and the general public alike view a possible anti-fentanyl vaccine positively. Time will tell how the new vaccine will perform in human trials, but if eventually approved, it could be a first-of-its-kind tool against overdose deaths.

This article is for informational purposes only and is not meant to offer medical advice.

]]>
https://www.livescience.com/health/a-fentanyl-vaccine-enters-human-trials-in-2026-heres-how-it-works 2S6keBAPTRuTtqvftSmoCc Mon, 29 Dec 2025 15:24:00 +0000 Tue, 23 Dec 2025 18:00:41 +0000
<![CDATA[ 'Putting the servers in orbit is a stupid idea': Could data centers in space help avoid an AI energy crisis? Experts are torn. ]]> As artificial intelligence (AI) models keep growing and getting more power-hungry, researchers are starting to ask not whether they can be trained — but where. That’s the context behind Google Research’s recent proposal to explore space-based AI infrastructure, an idea that sits somewhere between serious science and orbital overreach.

The idea, dubbed "Project Suncatcher" and outlined in a study uploaded Nov. 22 to the preprint arXiv database, explores whether future AI workloads could be run on constellations of satellites equipped with specialized accelerators and powered primarily by solar energy.

In certain low Earth or sun-synchronous orbits, the argument goes, solar panels can operate for much of the time, avoiding many of the night-day cycles, atmospheric losses and grid constraints that limit terrestrial data centers. Heat, meanwhile, would be rejected into space via radiative cooling rather than relying on water-intensive cooling systems on Earth.

The push to look beyond Earth for AI infrastructure isn’t coming out of nowhere. Data centers already consume a non-trivial slice of the world’s power supply: recent estimates put global data-center electricity use at roughly 415 terawatt-hours in 2024, or about 1.5% of total global electricity consumption, with projections suggesting this could more than double by 2030 as AI workloads surge.
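As a rough sanity check on those figures (our arithmetic, not the analysts'), the quoted 1.5% share implies a global electricity total of roughly 27,700 TWh, and even a bare doubling would push data centers toward 3% of a grid that size:

```python
# Back-of-the-envelope check of the figures quoted above (our arithmetic,
# not the original estimates). All input values come from the text.
dc_use_2024_twh = 415    # estimated global data-center use, 2024
share_of_grid = 0.015    # ~1.5% of global electricity consumption

world_total_twh = dc_use_2024_twh / share_of_grid
dc_use_2030_twh = 2 * dc_use_2024_twh  # "more than double by 2030", taken at its floor

print(f"Implied global total: ~{world_total_twh:,.0f} TWh")            # ~27,667 TWh
print(f"2030 floor: {dc_use_2030_twh} TWh, or "
      f"~{dc_use_2030_twh / world_total_twh:.1%} of a same-size grid")  # ~3.0%
```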

Utilities in the U.S. are already planning for data centers, driven largely by AI workloads, to account for between 6.7% and 12% of total electricity demand in some regions by 2028, prompting some executives to warn that there simply “isn’t enough energy on the grid” to support unchecked AI growth without significant new generation capacity.

In that context, proposals like space-based data centers start to read less like sci-fi indulgence and more like a symptom of an industry confronting the physical limits of Earth-bound energy and cooling. On paper, space-based data centers sound like an elegant solution. In practice, some experts are unconvinced.

Reaching for the stars

Joe Morgan, COO of data center infrastructure firm Patmos, is blunt about the near-term prospects. "What won’t happen in 2026 is the whole ‘data centers in space’ thing," he told Live Science. "One of the tech billionaires might actually get close to doing it, but aside from bragging rights, why?"

Morgan points out that the industry has repeatedly flirted with extreme cooling concepts, from mineral-oil immersion to subsea facilities, only to abandon them once operational realities bite. "There is still hype about building data centers under the ocean, but any thermal benefits are far outweighed by the problem of replacing components," he said, noting that hardware churn is fundamental to modern computing.

That churn is central to the skepticism around orbital AI. GPUs and specialized accelerators depreciate quickly as new architectures deliver step-change improvements every few years. On Earth, racks can be swapped, boards replaced and systems upgraded continuously. In orbit, every repair requires launches, docking or robotic servicing — none of which scale easily or cheaply.

"Who wants to take a spaceship to update the orbital infrastructure every year or two?" Morgan asks. "What if a vital component breaks? Actually, forget that, what about the latency?"

Latency is not a footnote. Most AI workloads depend on tightly coupled systems with extremely fast interconnects, both within data centers and between them. Google’s proposal leans heavily on laser-based inter-satellite links to mimic those connections, but the physics remains unforgiving. Even at low Earth orbit, round-trip latency to ground stations is unavoidable.
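The floor on that latency is set by the speed of light. Here is a minimal back-of-the-envelope sketch (our numbers, not Google's) of the best-case round trip to a satellite directly overhead:

```python
# Minimal sketch of light-travel latency to orbit (our illustration, not
# from the Suncatcher paper). It ignores routing, queuing and protocol
# overhead, so real round-trip times would be strictly worse.
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def min_round_trip_ms(altitude_km: float) -> float:
    """Best-case round-trip time, in ms, for a satellite directly overhead."""
    return 2 * altitude_km / C_KM_PER_S * 1000

for altitude in (550, 1_200, 35_786):  # typical LEO, upper LEO, geostationary
    print(f"{altitude:>6} km: {min_round_trip_ms(altitude):6.1f} ms minimum")
# ~3.7 ms at 550 km and ~8.0 ms at 1,200 km: trivial for web traffic, but an
# eternity next to the microsecond-scale interconnects AI clusters rely on.
```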

"Putting the servers in orbit is a stupid idea, unless your customers are also in orbit," Morgan said. But not everyone agrees it should be dismissed so quickly. Paul Kostek, a senior member of IEEE and systems engineer at Air Direct Solutions, said the interest reflects genuine physical pressures on terrestrial infrastructure.

"The interest in placing data centers in space has grown as the cost of building centers on earth keeps increasing," Kostek said. "There are several advantages to space-based or Moon-based centers. First, access to 24 hours a day of solar power… and second, the ability to cool the centers by radiating excess heat into space versus using water."

From a purely thermodynamic standpoint, those arguments are sound. Heat rejection is one of the hardest limits on computation, and Earth-based data centers are increasingly constrained by water availability, grid capacity and local environmental opposition.

The backlash against terrestrial AI infrastructure isn’t limited to energy and water issues; health fears are increasingly part of the narrative. In Memphis, residents near xAI’s massive Colossus data center have voiced concern about air quality and long-term respiratory impacts, with community members reporting worsened symptoms and fear of pollution-linked illnesses since the facility began operating. In other states, opponents of proposed hyperscale data center projects have framed their resistance around potential health and environmental harms, arguing that large facilities could degrade local air and water quality and exacerbate existing public health burdens.

Putting data centers into orbit would remove some constraints, but replace them with others.

Staying grounded

"The technology questions that need to be answered include: Can the current processors used in data centers on Earth survive in space?” Kostek said. "Will the processors be able to survive solar storms or exposure to higher radiation on the Moon?"

Google researchers have already begun probing some of those questions through early work on Project Suncatcher. The team describes radiation testing of its Tensor Processing Units (TPUs) and modeling of how tightly clustered satellite formations could support the high-bandwidth inter-satellite links needed for distributed computing. Even so, Kostek stresses that the work remains exploratory.

"Initial testing is being done to determine the viability of space-based data centers," he said. "While significant technical hurdles remain and implementation is still several years away, this approach could eventually offer an effective way to achieve expansion."

That word — expansion — may be the real clue. For some researchers, the most compelling rationale for off-world computing has little to do with serving Earth-based users at all. Christophe Bosquillon, co-chair of the Moon Village Association’s working group for Disruptive Technology & Lunar Governance, argues that space-based data centers make more sense as infrastructure for space itself.

"With humanity on track to soon establish a permanent lunar presence, an infrastructure backbone for a future data-driven lunar industry and the cis-lunar economy is warranted," he told Live Science.

From this perspective, space-based data centers aren’t substitutes for Earth’s infrastructure so much as tools for enabling space activity, handling everything from lunar sensor data to autonomous systems and navigation.

"Affordable energy is a key issue for all activities and will include a nuclear component next to solar power and arrays of fuel cells and batteries," Bosquillon said, adding that the challenges extend well beyond engineering to governance, law and international coordination.

Crucially, space-based computing could offload non-latency-sensitive workloads from Earth altogether. "Solving the energy problem in space and taking that burden off the Earth to process Earth-related non-latency-sensitive data… has merit," Bosquillon said, even extending to the idea of space and the Moon as a secure vault for "civilisational" data.

Seen this way, Google’s proposal looks less like a solution to today’s data center shortages and more like a probe into the long-term physics of computation. As AI approaches planetary-scale energy consumption, the question may not be whether Earth has enough capacity, but whether researchers can afford to ignore environments where energy is abundant but everything else is hard.

For now, space-based AI remains strictly experimental. Whether it ever escapes Earth’s gravity may depend less on solar panels and lasers than on how desperate the energy race becomes.

]]>
https://www.livescience.com/technology/artificial-intelligence/putting-the-servers-in-orbit-is-a-stupid-idea-could-data-centers-in-space-help-avoid-an-ai-energy-crisis-experts-are-torn ArbaLjt8zAPaqz88G3bdZP Mon, 29 Dec 2025 14:18:00 +0000 Mon, 22 Dec 2025 22:56:02 +0000
<![CDATA[ 'Stop and re-check everything': Scientists discover 26 new bacterial species in NASA's cleanrooms ]]> NASA's cleanrooms rank among the cleanest spaces on Earth, and for good reason — these sterile spaces are fortified to prevent even the hardiest Earth microbes from hitching a ride to other worlds aboard NASA spacecraft. Yet even in the most sterile places on Earth, life finds a way.

Now, experts plan to test these newfound bugs inside a "planetary simulation chamber" that could reveal whether these microbes, or ones with similar adaptations, could survive a trip through space to Mars, possibly contaminating the alien worlds on arrival.

Earlier this year, scientists identified more than two dozen previously unknown bacterial species lurking in the Kennedy Space Center cleanrooms in Florida, where NASA assembled its Phoenix Mars Lander in 2007. The discovery showed that despite constant scrubbing, harsh cleaning chemicals and extreme nutrient scarcity, some microbes evolved a suite of genetic tricks that allowed them to persist in these punishing environments.

"It was a genuine 'stop and re-check everything' moment," study co-author Alexandre Rosado, a professor of Bioscience at King Abdullah University of Science and Technology in Saudi Arabia, told Live Science about the findings, which were described in a paper published in May in the journal Microbiome. While there were relatively few of these microbes, they persisted for a long time and in multiple cleanroom environments, he added.

Identifying these unusually hardy organisms and studying their survival strategies matters, the researchers say, because any microbe capable of slipping through standard cleanroom controls could also evade the planetary-protection safeguards meant to prevent Earth life from contaminating other worlds.

When asked whether any of these microbes might, in theory, tolerate conditions during a journey to Mars' northern polar cap, where Phoenix landed in 2008, Rosado said several species do carry genes that may help them adapt to the stresses of spaceflight, such as DNA repair and dormancy-related resilience. But he cautioned that their survival would depend on how they handle harsh conditions a microbe would face both during space travel and on Mars — factors the team didn't test — including exposure to vacuum, intense radiation, deep cold and high levels of UV at the Martian surface.

To explore that question, the researchers are now building a planetary simulation chamber at the King Abdullah University of Science and Technology in Saudi Arabia to expose the bacteria to Mars-like and space-like conditions, Rosado said. The chamber, now in its final assembly phase, with pilot experiments expected to begin in early 2026, is engineered to mimic stresses such as the low, carbon-dioxide-rich air pressure of Mars, high radiation, and the extreme temperature swings the microbes would face during spaceflight. These controlled environments will allow scientists to investigate how hardy microbes adapt and survive under combinations of stresses comparable to those encountered during spaceflight or on the Martian surface, said Rosado.

Photo of a large metal chamber about the size of a wine barrel, with several valves and gauges coming out of it, sitting on its side on a metal rack, inside a laboratory.

The planetary simulation chamber at King Abdullah University of Science and Technology in Saudi Arabia. Scientists will soon use it to recreate Mars-like and space-like conditions and test how the newly discovered microbes survive and adapt. (Image credit: Niketan Patel and Alexandre Rosado/King Abdullah University of Science and Technology)

'Cleanrooms don't contain "no life"'

NASA's spacecraft-assembly cleanrooms are engineered to be hostile to microbes — a cornerstone of the agency's efforts to prevent Earth organisms from hitchhiking to worlds beyond Earth — through continuously filtered air, strict humidity control and repeated treatments using chemical detergents and UV light, among other measures.

Even so, "cleanrooms don't contain 'no life,'" said Rosado. "Our results show these new species are usually rare but can be found, which fits with long-term, low-level persistence in cleanrooms."

During the Phoenix lander's assembly at the Kennedy Space Center's Payload Hazardous Servicing Facility, a team led by study co-author Kasthuri Venkateswaran, who is a senior research scientist at NASA's Jet Propulsion Laboratory, collected and preserved 215 bacterial strains from the cleanroom floors. Some samples were gathered before the spacecraft arrived in April 2007, again during assembly and testing in June, and once more after the spacecraft moved to the launch pad in August, according to the study.

At the time, researchers lacked the technology to classify new species precisely or in large numbers. But DNA technology has advanced dramatically in the 17 years since that mission, and today scientists can sequence almost every gene these microbes carry and compare their DNA to broad genetic surveys of microbes collected from cleanrooms in later years. This allows scientists "to study how often and for how long these microbes appear in different places and times, which wasn't possible in 2007," said Rosado.

Further analysis revealed a suite of survival strategies. Many of the newly identified species carry genes that help them resist cleaning chemicals, form sticky biofilms that anchor them to surfaces, repair radiation-damaged DNA or produce tough, dormant spores — adaptations that help them survive in tucked-away corners or microscopic cracks, the study reports. This makes the microbes "excellent test organisms" for validating the decontamination protocols and detection systems that space agencies rely on to keep spacecraft sterile, Rosado said.

From a broader research standpoint, Rosado said, the next step is coordinated, long-term sampling across multiple cleanrooms using standardized methods, paired with controlled experiments that measure microbes' survival limits and stress responses.

"This would give us a much clearer picture of which traits truly matter for planetary protection and which might have translational value in biotechnology or astrobiology," he said.

]]>
https://www.livescience.com/planet-earth/microbiology/stop-and-re-check-everything-scientists-discover-26-new-bacterial-species-in-nasas-cleanrooms zwhvpGyAtvuqiacZfon6v5 Mon, 29 Dec 2025 12:00:00 +0000 Mon, 22 Dec 2025 23:01:58 +0000
<![CDATA[ Lchashen wagon: A 3,500-year-old covered wagon that transported a deceased chief to the next world ]]>
QUICK FACTS

Name: Lchashen wagon

What it is: An oak wagon

Where it is from: Lchashen village, Armenia

When it was made: Circa 1500 B.C.

Covered wagons are often associated with the Old West. But the best-preserved example of an ancient covered wagon was actually found in a Bronze Age grave in Armenia, where it had been buried for 3,500 years.

The remains of six oak wagons were excavated from an elite cemetery in Lchashen, Armenia, and were dated to the 15th to 14th centuries B.C., or the Late Bronze Age. Each wagon had four wheels arranged on two axles. But while two of the wagons were open, the other four had a complex frame structure on top. One of these wagons is considered the best-preserved example of an early covered wagon.

On display at the History Museum of Armenia in Yerevan, the Lchashen wagon was made of at least 70 parts joined together by a mortise-and-tenon system involving slotted pieces of wood and bronze fittings. The frame of the canopy required at least 600 mortise holes, archaeologist Stuart Piggott wrote in a 1968 study, indicating the precise workmanship that went into creating the wagon.

The wagon measures approximately 6.5 feet (2 meters) in length. Each wooden wheel was made of two slabs of wood joined together and measured a whopping 63 inches (160 centimeters) tall, historian Christoph Baumer wrote in "History of the Caucasus" (Bloomsbury, 2021).

The Lchashen wagon was discovered in the 1950s, when construction workers from the Soviet Union drained part of Lake Sevan in Armenia to help irrigate a nearby plain. They found a Late Bronze Age cemetery that contained more than 500 burials, along with hundreds of grave goods. One distinctive feature of the Lchashen necropolis is the presence of two- and four-wheeled wagons, as well as bronze models of war chariots, archaeologist L.A. Petrosyan wrote in a 2016 study.

Although some claim the Lchashen wagon is the "oldest in the world," there is abundant evidence of both wagon technology and covered wagons that predate this example. The exact invention dates are still being debated, but humans likely first invented the wheel and wheeled vehicles in Mesopotamia in the Copper Age, between about 4500 and 3300 B.C.

But the Lchashen wagon is a very early — as well as the best-preserved — example of a covered wagon with wheels mounted on axles, demonstrating innovation in early wheeled vehicles. Whether this technology was invented in Armenia or came from Mesopotamia to the south or the Russian steppe to the north is still being investigated.

According to the History Museum of Armenia, burials with wheeled vehicles arose in the Middle Bronze Age (2400 to 1500 B.C.) in Armenia but became most popular in the Late Bronze Age, when they were used as vehicles for physically and metaphorically transporting the remains of a deceased leader into the next life.

For more stunning archaeological discoveries, check out our Astonishing Artifacts archives.

]]>
https://www.livescience.com/archaeology/lchashen-wagon-a-3-500-year-old-covered-wagon-that-transported-a-deceased-chief-to-the-next-world FovuNZvxxZmY3ozWjftFiJ Mon, 29 Dec 2025 11:00:00 +0000 Mon, 22 Dec 2025 21:01:31 +0000
<![CDATA[ Science history: Richard Feynman gives a fun little lecture — and dreams up an entirely new field of physics — Dec. 29, 1959 ]]>

Milestone: Vision of nanotechnology laid out

Date: Dec. 29, 1959

Where: Pasadena, California

Who: Richard Feynman

On a December day, Richard Feynman gave a fun little lecture at Caltech — and dreamed up an entirely new field of physics.

During the talk, entitled "There's Plenty of Room at the Bottom," he described the enormous potential that could be realized if scientists could manipulate and control things at a "small scale."

How small? Feynman went on to discount advances of the time, such as writing the Lord's Prayer on the head of a pin, as trivial.

"But that's nothing; that's the most primitive, halting step in the direction I intend to discuss. It is a staggeringly small world that is below," Feynman said in his lecture. Rather, he suggested, people could write the entire 24-volume encyclopedia on the head of a pin, and elegantly showed that there's enough space there to write it legibly and read it out.

He then explored the possibility of a number of then-futuristic ideas: electron microscopes capable of manipulating individual atoms, ultracompact data storage, miniaturized computers, and powerful, ingestible biological machines that travel into organs like the heart, find defects, and repair them with tiny knives. He proposed a number of ways to create these small-scale innovations, including manipulating light and ions.

He ended the lecture by offering a reward of $1,000 to anyone who could miniaturize the text in a book 25,000-fold, such that it could be read using an electron microscope. He offered another $1,000 to anyone who could make a motor no bigger than 1/64th of an inch cubed.

Black and white professional headshot of Richard Feynman. He sits in a chair facing the camera, with his knee propped up on the chair and his hand partially covering his mouth.

Richard Feynman dreamed up the notion of nanotechnology in 1959, but the word wouldn't be coined until 1974. Historians debate how much his vision drove innovations in the field. (Image credit: Photo 12 / Contributor/ Getty Images)

The latter of these prizes was scooped up the following year by engineer William McLellan, who created a 250-microgram motor composed of 13 parts. In his award letter, Feynman congratulated McLellan on the feat but joked that he shouldn't "start writing small," lest he solve the first challenge, too, and expect to receive the other $1,000 prize.

"I don't intend to make good on the other one. Since writing the article I've gotten married and bought a house!" Feynman wrote.The former challenge was eventually solved in 1985, when Stanford graduate Thomas Newman miniaturized the first page of the Dickens classic "A Tale of Two Cities." Feynman did, ultimately, pay up for the second prize.

Feynman's Caltech talk is now mythologized as having ushered in the field of nanotechnology. And yet, the term "nanotechnology" itself was not coined until 15 years after his talk, when scientist Norio Taniguchi penned a paper about manipulating material at the atomic scale.

In that 1974 paper, Taniguchi described nanotechnology as "the processing of separation, consolidation, and deformation of materials by one atom or one molecule." Many science historians now argue that the field was following its own trajectory, and that Feynman's talk, while prescient, wasn't the actual driver of future innovations. Prior to 1980, his talk was cited fewer than 10 times.

Whether it drove innovation or not, since Feynman's famous lecture, many of his predictions have proven true. The scanning tunneling microscope manipulated individual xenon atoms in 1990. Computers more powerful than he described now sit in our pockets, rather than taking up whole rooms. And indeed, tiny nanobots have been designed that can repair damaged blood vessels.

]]>
https://www.livescience.com/physics-mathematics/particle-physics/science-history-richard-feynman-gives-a-fun-little-lecture-and-dreams-up-an-entirely-new-field-of-physics-dec-29-1959 qU2WsqVCQmxDpcSybB228L Mon, 29 Dec 2025 07:00:00 +0000 Tue, 23 Dec 2025 22:48:57 +0000
<![CDATA[ Primates Quiz: Go ape and test your knowledge on our closest relatives ]]> Primates — the mammalian group that includes humans — are found pretty much everywhere on Earth, from equatorial rainforests to scientific research stations in Antarctica. This hugely diverse order appeared before the dinosaurs went extinct, with wild populations of non-human primates evolving to live in niches across three continents — Asia, Africa and Central/South America.

Primates don't just live in lots of places; there are also hundreds of species and subspecies. In fact, the order primates is the fourth most biodiverse mammal order in the animal kingdom — yet the majority (62.6%) of primates are threatened with extinction.

Scientists researching primates, called "primatologists," have learned a lot over the years about our closest evolutionary relatives. For example, did you know that chimps have opposable big toes, or that not all monkeys can swing through the trees? Or even that there are some primates that are neither monkeys nor apes?

Fancy yourself a primatologist? Put your knowledge to the test below!

Remember to log in to put your name on the leaderboard; hints are available if you click the yellow button. Good luck!

More science quizzes

  • Bird quiz: How much do you know about our feathered friends?
  • Big cats quiz: Can you get the lion's share of these questions right?
  • Snake quiz: How much do you know about the slithering reptiles?
]]>
https://www.livescience.com/animals/land-mammals/primates-quiz-go-ape-and-test-your-knowledge-on-our-closest-relatives Vq2oeVLUbHaHQvLDzb8xen Sun, 28 Dec 2025 17:05:00 +0000 Tue, 23 Dec 2025 17:53:07 +0000
<![CDATA[ Year in review: The standout health stories of 2025, from measles outbreaks to AI-made viruses ]]> Groundbreaking medical treatments; mysteries of fundamental biology; the impacts of health policy upheavals. Live Science covered all these topics and more in 2025 — and you can catch up on some of our best Health channel long-reads from the year below. The following list includes interviews, book excerpts and news analyses, as well as entries from our Science Spotlight series, which highlights how science is transforming the world as we know it.

1. Secrets of the world's oldest woman

An elderly woman blows out candles on her birthday cake

The supercentenarian Maria Branyas Morera on her 117th birthday on March 4, 2024. (Image credit: Arxiu de la família Branyas Morera, (CC0 1.0 UNIVERSAL Deed), via Wikimedia Commons)

Maria Branyas Morera, once the world's oldest woman, died in 2024 at age 117. Live Science took a deep look at a study that examined Branyas' biology and uncovered key traits that may have protected her from disease in old age. Could lessons from the study help others lead longer, healthier lives?

2. What makes us human?

Many consider the brain to be a central feature of what makes us human — but how did the remarkable organ come to be? In an interview, science communicator Jim Al-Khalili discussed what he learned from shooting the new BBC show "Horizon: Secrets of the Brain," which tells the story of how the human brain evolved. And in a book excerpt and interview with Live Science, neuroscientist Nikolay Kukushkin described the evolutionary forces he believes were key to the formation of the human brain and consciousness as we know it.

3. Could lab-grown brains gain consciousness?

Miniature models of the human brain can be grown from stem cells in the lab, and they're getting more and more advanced. Some scientists have raised concerns that these "minibrains" could become conscious and feel pain. We investigated experts' concerns and hopes for future regulation of the research.

4. The promise of mRNA medicine

mRNA may be best known for forming the basis of the first COVID-19 vaccines, but it could also be used in revolutionary cancer therapeutics, immune-reprogramming treatments and gene therapies. The promise of these emerging mRNA medicines is staggering, but due to the politicization of COVID-19 shots in the U.S., mRNA research and development — even unrelated to vaccines — now hangs in precarious uncertainty. A Science Spotlight feature described emerging mRNA technologies and their wobbly status under the second Trump administration.

5. Cancer in young people

a doctor talks to a woman with cancer

Certain types of cancer, including breast and colorectal cancers, are becoming more prevalent in people under 50. A combination of factors may be at play. (Image credit: Morsa Images via Getty Images)

You may have heard that more young people are being diagnosed with cancer. But which types of cancer are driving this trend? And why are the rates going up in the first place? We looked at what may be driving this pattern, from underlying cancer triggers to better techniques for early detection.

6. Male vs female brains

Is there really a difference between male and female brains? And do we even have the data required to answer that question? A Science Spotlight explored the existing research on sex differences in the brain, finding the results murkier than one might expect. Headlines often proclaim that male and female brains are "wired differently," and that may be true in some subtle ways. But the biological consequences of those differences remain unclear, even to experts in the field.

7. AI is designing viruses

Artificial intelligence can now be used to design brand-new viruses. Scientists hope to use these viruses for good — for example, to treat drug-resistant bacterial infections. But could the technology usher in the next generation of bioweapons? An analysis probed this dual-use problem and what can be done to safeguard our biosecurity.

8. When pandemics are a "certainty," how do we prepare?

In a book excerpt, epidemiologist Dr. Seth Berkley explained how he and other health leaders orchestrated a massive vaccine rollout to poor countries during the COVID-19 pandemic, so that the shots wouldn't exclusively be hoarded by wealthy nations. Live Science also spoke with Berkley about the lessons learned from the pandemic and the ongoing fight for vaccine equity.

9. USAID cuts

A group of Ugandan adults and children stand with HIV medication in their hands

HIV medications must be taken consistently to suppress the virus. Major cuts to HIV funding have threatened people's access to the medicines. (Image credit: Marco Di Lauro via Getty Images)

The United States Agency for International Development (USAID), once the world's largest foreign aid agency, was hit by massive funding cuts under the second Trump administration. A few of its functions will reportedly continue, under the control of the Department of State. We looked at the predicted and devastating effects that the loss of USAID will likely have on HIV care worldwide. And in an interview with author John Green, who published a book on tuberculosis (TB) this year, we explored what the cuts could mean for TB patients.

10. Microplastics on the brain

A study went viral after suggesting that healthy human brains may contain a similar amount of plastic as the average plastic spoon. But should we really be concerned? Our analysis broke down what we know and what we don't about microplastics in the brain.

11. Dodging early Alzheimer's disease

A man genetically guaranteed to develop early Alzheimer's disease is still disease-free in his 70s. We explored the details of the man's case, digging into his genetic profile and the broader lessons it could teach scientists about dementia.

12. Mental health after weight-loss surgery

Weight-loss surgeries often come with improvements in mental health — but research revealed that this effect is less tied to the weight loss itself and more connected to the relief from stigma that people often experience post-procedure. We examined this finding and what it can tell us about the profound impact of weight stigma on people's health and well-being.

13. Measles makes a comeback

Human skin covered with measles rash.

The U.S. is at risk of losing its "measles elimination status" very soon, as the infection continues to spread via various outbreaks. (Image credit: Natalya Maisheva/Getty Images)

In 2000, the United States hit a public health milestone by eliminating measles. But now, there's been a sustained resurgence of the highly infectious disease, putting the country on the brink of losing that precious elimination status. This story explained how we got here and what's at stake. And in an opinion piece, several experts called out the anti-vaccine movement that drove down measles vaccination rates — a movement that health secretary Robert F. Kennedy Jr. has been spearheading for years.

14. Is America losing the war on cancer?

In a book excerpt, Nafis Hasan argued that the United States has been employing the wrong strategies to fight cancer for decades. While hyperfocusing on finding treatments for individuals with cancer, America has largely ignored population-level strategies that could help drive down cancer rates and cancer deaths across the board, he argued.

15. Threats to fetal tissue research

The U.S. federal government is threatening to restrict research conducted with human fetal tissue. In an opinion piece, cell biologist, geneticist and neuroscientist Lawrence Goldstein dispelled widespread myths and misinformation about this type of research.

16. "The Big One," a disaster to dwarf COVID-19

Epidemiologist Michael Osterholm predicts that the next pandemic could be even worse than COVID-19. In a book excerpt and interview with Live Science, Osterholm described the lessons we should have taken away from the coronavirus pandemic, and how recent changes in U.S. policy may have destroyed our capacity to handle serious outbreaks.

17. Climate change may drive up hyponatremia

As the planet warms, a dangerous condition called hyponatremia may be on the rise. The condition causes a dramatic decline in sodium in the body, which can potentially cause seizures, coma and death. A Live Science exclusive looked at the emerging trend.

18. Baby-making robots?

A viral story suggested that researchers in China were working on a "pregnancy robot" that could gestate a human baby from conception to birth. It turns out that the story was complete fiction — but, in theory, could such a technology be realized? Experts weighed in on the sci-fi-sounding idea and discussed whether, eventually, it could be feasible to build a bona fide pregnancy robot.

]]>
https://www.livescience.com/health/year-in-review-the-standout-health-stories-of-2025-from-measles-outbreaks-to-ai-made-viruses oWNGJnFFFyxhNXTtqPwG7J Sun, 28 Dec 2025 15:00:00 +0000 Tue, 23 Dec 2025 18:35:30 +0000
<![CDATA[ 5 common mistakes beginner telescope users make — and how to avoid them ]]> A new telescope can be a doorway to the universe — that is, until you actually take it outside and nothing looks the way you imagined it. Telescopes aren't necessarily difficult to use, but they do require a little preparation, a bit of patience and an understanding of how the night sky moves.

If your first few sessions have been more frustrating than awe-inspiring, you're not alone. Here are five of the most common mistakes, plus how to avoid them so you can spend less time fiddling and more time actually enjoying the view.

1. Neglecting the planning stage

night sky simulation from the Stellarium software

Sky maps and star charts help with the planning process. (Image credit: Stellarium)

Many beginners grab their telescope on a whim, head outside and hope for magic. The problem is that astronomy doesn't work on impulse — it works on timing. Moon phases affect how bright the sky is, and local light pollution can wash out fainter objects. Even the time of year dictates what's actually visible.

Before heading out, take a moment to observe what's above the horizon, when the moon rises and whether your sky conditions are cooperating. Free apps make this easy — Stellarium is a favorite of ours — and a quick look at a cloud forecast can save you a wasted session.

Planning isn't a chore; it's the difference between hunting blindly and having a solid target list. When you know when and where to look, observing the sky with a telescope becomes far more rewarding.

2. Expecting Hubble-like views

man with a telescope against a starry sky

Unfortunately, you won't see Hubble-like views through a standard telescope. (Image credit: Getty Images)

It's completely normal to hope for swirling nebulas and razor-sharp galaxies like the images you see online. Unfortunately, those are long-exposure photographs taken by spacecraft or huge professional observatories. A backyard telescope shows the real sky, and it's much more subtle.

But that doesn't mean it's disappointing. The moon looks incredible through even a small telescope, Jupiter and Saturn show details and star clusters sparkle beautifully. What tends to trip people up is expecting colors and drama rather than appreciating the delicate, natural brightness of what can be seen with the eye.

Think of visual observation as seeing the universe with your own eyes, and once you adjust your expectations, you start to notice far more. If you do want to experiment with imaging space, you can mount one of the best astrophotography cameras directly onto your telescope, or invest in one of the best smart telescopes.

Another thing that often catches beginners out is that not all telescopes excel at the same targets. Not only are there different types of telescopes, but some designs are better suited to deep-space objects like galaxies and nebulas, while others are better suited for crisp planetary and lunar viewing.

Wide aperture, low focal-ratio scopes (like Dobsonians) gather lots of light, making faint objects easier to spot. On the other hand, longer focal length telescopes naturally deliver higher magnification, which is perfect for observing the details on Jupiter, Saturn or the moon's craters.

3. Not letting the telescope acclimate

Rear view of telescope

You need to let your telescope acclimatize to the outside temperature. (Image credit: Jason Parnell-Brookes)

One of the least glamorous but most important steps is simply letting your telescope cool down (or warm up) to match the outdoor temperature. If you take a scope from a warm living room out into the cold night, turbulent air currents swirl inside the tube, softening the view. The result looks like your optics suddenly went blurry.

Give your telescope 20-40 minutes outside before you start observing — maybe even a bit longer for bigger scopes. During this time, you can align your finderscope, set up a star chart or choose your targets.

Once the air settles inside the tube, things improve dramatically. Planets snap into focus, double stars separate cleanly and lunar details show the crisp edges they're meant to. Acclimation isn't sexy or exciting, but it's one of the easiest ways to upgrade your observing without spending a dime.

4. Choosing the wrong eyepiece or magnification

Storage tray on the Celestron AstroMaster 102AZ refractor telescope

Eyepieces can make or break your viewing experience. (Image credit: Russ Swan)

A common assumption is that more magnification automatically means better views. In reality, pushing the magnification too high will result in a dim, wobbly image.

Every telescope has a highest useful magnification. This is essentially the upper limit where the view will still look sharp, and it's determined by the scope's aperture and the viewing conditions. The general rule of thumb is that the highest useful magnification is roughly 50x its aperture in inches, although this does depend on the overall quality of your telescope. For example, a 6-inch telescope will have a highest useful magnification of around 300x.

Start with a low-power eyepiece, like the 20mm that typically comes with beginner telescopes. This will give you a wider field, making objects a lot easier to find and track. Only once you've centered your target should you switch to a higher-power eyepiece — and even then, it's best to increase in small steps. On nights with poor viewing conditions, high magnification will just make objects look blurrier.

To determine the magnification an eyepiece delivers, divide the telescope's focal length by the focal length of the eyepiece. For example, on a 1,000mm scope, a 20mm eyepiece will provide 50x magnification. Over time, you'll instinctively know which eyepiece works best for the moon, planets and deep-sky objects.
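For the curious, here is a minimal sketch (ours, not from any telescope manual) that turns both rules of thumb into code:

```python
# Two beginner-telescope rules of thumb from the text, as simple functions.
# The 50x-per-inch ceiling is approximate and assumes decent optics and skies.

def eyepiece_magnification(scope_focal_mm: float, eyepiece_focal_mm: float) -> float:
    """Magnification = telescope focal length / eyepiece focal length."""
    return scope_focal_mm / eyepiece_focal_mm

def highest_useful_magnification(aperture_inches: float) -> float:
    """Rough ceiling: about 50x per inch of aperture."""
    return 50 * aperture_inches

print(eyepiece_magnification(1_000, 20))  # 50.0 -- the example above
print(highest_useful_magnification(6))    # 300.0 -- a 6-inch scope's rough limit
```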

When magnification is chosen well, everything suddenly becomes sharp, steady and a lot more impressive.

5. Expecting the telescope to do everything

Celestron NexStar 8SE hand control

Even telescopes with a motorized GoTo mount need accurate alignment. (Image credit: Future)

Modern telescopes can be surprisingly smart — some align themselves, some slew automatically to targets and others use your phone to guide you around the night sky. These features are awesome, especially for beginners, but they can create a false expectation that the telescope will do all the work.

In reality, even the most automated systems will still need some input and understanding from the user. Motorized GoTo mounts, for example, won't magically know where they are. They need accurate setup, which requires a level tripod, the correct date and time and a proper alignment on a couple of bright stars. If any of that is off, the telescope will miss every target.

Smart telescopes and app-driven models make navigation easier, but they're not a substitute for knowing what's actually visible or why certain objects won't appear on a bright, hazy night. Plus, smart telescopes often produce the best view by stacking images over a longer period, so they're better suited to photographing the cosmos as opposed to observing it.

]]>
https://www.livescience.com/technology/5-common-mistakes-beginner-telescope-users-make-and-how-to-avoid-them bTryHk7UyswowPyxnxKZSo Sun, 28 Dec 2025 14:00:00 +0000 Wed, 17 Dec 2025 16:22:00 +0000
<![CDATA[ Stunning array of 400 rings in a 'reflection' nebula solves a 30-year-old star-formation mystery — Space photo of the week ]]>

Composite image of the star-forming region NGC 1333 obtained by combining data from the 8.2 m Subaru Telescope and the Digitized Sky Survey.

Composite image of the star-forming region NGC 1333 obtained by combining data from the 8.2 m Subaru Telescope and the Digitized Sky Survey. (Image credit: NAOJ, NOAO/AURA/NSF, Robert Gendler, Roberto Colombari)
QUICK FACTS

What it is: Reflection nebula NGC 1333 and binary star system SVS 13

Where it is: 1,000 light-years away in the constellation Perseus

When it was shared: Dec. 16, 2025.

Go outside after dark this winter and look to the southeast, and you'll see some of the brightest stars in the night sky — Orion's Belt, Betelgeuse, Sirius, Aldebaran and Capella. Just above this melee is the quieter constellation Perseus, which lacks bright stars but hosts something extraordinary that the naked eye can't see — the explosive birth of new stars.

Lurking within the Perseus Molecular Cloud is NGC 1333, nicknamed the Embryo Nebula because it contains many young, hot stars that are teaching astronomers just what goes on when a star is born. NGC 1333 is a reflection nebula, meaning a cloud of gas and dust illuminated by the intense light coming from newly forged stars, some of which appear to be regularly spewing jets of matter. It's one of the closest star-forming regions to our solar system. On Dec. 16, astronomers published the most detailed images ever of a jet launched by a newborn star in the binary system SVS 13; the images revealed a sequence of nested, ring-like structures. The finding is evidence that the star has been undergoing an outburst — releasing an immense amount of energy — for decades.

The discovery, which the researchers described in the journal Nature Astronomy, marks the first direct observational confirmation of a long-standing theoretical model of how young stars feed on, and then explosively expel, surrounding material.

The researchers captured the high-resolution, 3D view of a fast-moving jet emitted from one of SVS 13's young stars using the Atacama Large Millimeter/submillimeter Array (ALMA) radio telescope array in Chile. Within the image, they identified more than 400 ultra-thin, bow-shaped molecular rings. Like tree rings that mark the passage of time, each ring marks the aftermath of an energetic outburst from the young star's early history. Remarkably, the youngest ring matches a bright outburst seen in the SVS 13 system in the early 1990s, allowing researchers to directly connect a specific burst of activity in a forming star with a change in the speed of its jet. It's thought that sudden bursts in jet activity are caused by large amounts of gas falling onto a young star.

"These images give us a completely new way of reading a young star's history," said study co-author Gary Fuller, a professor at the University of Manchester. "Each group of rings is effectively a time-stamp of a past eruption. It gives us an important new insight into how young stars grow and how their developing planetary systems are shaped."

For more sublime space images, check out our Space Photo of the Week archives.

]]>
https://www.livescience.com/space/stunning-array-of-400-rings-in-a-reflection-nebula-solves-a-30-year-old-star-formation-mystery-space-photo-of-the-week wBr8PT9cc8RuZDSvCMCA8V Sun, 28 Dec 2025 13:07:00 +0000 Tue, 23 Dec 2025 21:30:56 +0000
<![CDATA[ James Webb telescope spies a monstrous molecular cloud shrouded in mystery — Space photo of the week ]]>

Webb’s MIRI (Mid-Infrared Instrument) shows the Sagittarius B2 (Sgr B2) region in mid-infrared light, with warm dust glowing brightly. To the right is one clump of clouds that captured astronomers’ attention.

JWST's view of the Sagittarius B2 region in mid-infrared light. (Image credit: NASA, ESA, CSA, STScI, Adam Ginsburg (University of Florida), Nazar Budaiev (University of Florida), Taehwa Yoo (University of Florida); Image Processing: Alyssa Pagan (STScI))
QUICK FACTS

What it is: Sagittarius B2 molecular cloud

Where it is: Roughly 26,000 light-years from Earth, in the constellation Sagittarius

When it was shared: Sept. 24, 2025

Stars form in molecular clouds — regions that are cold, dense, rich in molecules and filled with dust. One enormous cloud responsible for forming half of the stars in the Milky Way's central region is the Sagittarius B2 (Sgr B2) molecular cloud, located a few hundred light-years from our central supermassive black hole.

Boasting a total mass between 3 million and 10 million times that of the sun and stretching 150 light-years across, it is one of the largest molecular clouds in the galaxy. It lies roughly 26,000 light-years from Earth, in the constellation Sagittarius. It is also chemically rich, with several complex molecules discovered so far.

But this giant star-forming region is shrouded in a mystery: how it has managed to produce 50% of the stars in the region, despite containing just 10% of the galactic center's gas.

Astronomers observed this super-efficient stellar factory using the James Webb Space Telescope (JWST), in the hope of finding some clues about its unusual productivity. This spectacular image is the telescope's mid-infrared view, captured by JWST's Mid-Infrared Instrument (MIRI).

In the image, the clumps of dust and gas in the molecular complex glow in shades of pink, purple and red. These clumps are seen surrounded by dark areas. Dark does not mean that these regions are empty or emit nothing; instead, light from these areas is blocked by dust so dense that the instrument cannot see through it.

JWST spies a shimmering molecular cloud

JWST's view of the Sagittarius B2 region in near-infrared light (Image credit: NASA, ESA, CSA, STScI, Adam Ginsburg (University of Florida), Nazar Budaiev (University of Florida), Taehwa Yoo (University of Florida); Image Processing: Alyssa Pagan (STScI))

In star-forming regions like this one, mid-infrared light comes mainly from warm dust and gas, along with only the brightest stars. This contrasts with the near-infrared image captured simultaneously by JWST's Near-Infrared Camera (NIRCam), which reveals an abundance of stars, because stars emit more strongly in near-infrared light.

In this MIRI image, the clumps on the right that appear redder than the rest of the cloud complex correspond to one of the most chemically complex areas known, as revealed by previous observations using other telescopes. Astronomers think this unique region may hold clues to why Sgr B2 is more efficient at star formation than the rest of the galactic center.

Additionally, an in-depth analysis of the masses and ages of the stars in this stellar factory could reveal further insight into the star-forming mechanisms in the Milky Way's center.

For more sublime space images, check out our Space Photo of the Week archives.

]]>
https://www.livescience.com/space/astronomy/james-webb-telescope-spies-a-monstrous-molecular-cloud-shrouded-in-mystery-space-photo-of-the-week WfSjFBFsQckJz7789n7UDF Sun, 28 Dec 2025 11:00:00 +0000 Fri, 19 Dec 2025 20:56:57 +0000
<![CDATA[ How many holes does the human body have? ]]> The human body is extraordinarily complex, with several openings and a few exits. But exactly how many holes does each person have?

It sounds like a simple enough question to answer — list the openings and add them up. But it's not quite that easy once you start considering questions like: "What exactly is a hole?" "Does any opening count?" And "why don't mathematicians know the difference between a straw and a doughnut?"

Before we start counting, we need to agree on what constitutes a "hole." Katie Steckles, a lecturer in mathematics at Manchester Metropolitan University in the U.K. and a freelance mathematics communicator, told Live Science that mathematicians "use the term 'hole' to mean one like the hole in a donut: one that goes all the way through a shape and out the other side."

But if you dig a "hole" at the beach, your aim is probably not to dig right through to the other side of the world. Many people would think of a hole as a depression in a solid object. But "this isn't a true hole, as it has an end," Steckles said.

Similarly, mathematical communicator James Arthur, who is based in the U.K., told Live Science that "in topology, a 'hole' is a through hole, that is you can put your finger through the object."

When digging a tunnel under the sea, like the Channel Tunnel that connects the U.K. and France, engineers started off by digging two openings. But as soon as those two digging projects joined up, the Channel Tunnel became a fundamentally different object (what Arthur and engineers would call a "through hole") — like a straw, or a tube with an opening at either end.

And if you ask people how many holes a straw has, you will get a range of different answers: one, two and even zero. This is a result of our colloquial understanding of what constitutes a hole.

To find a consistent answer, we can turn to mathematics. And the problem of classifying how many holes there are in an object falls squarely within the realm of topology.

To a topologist, the actual shapes of objects are not important. Instead, "topology is more concerned with the fundamental properties of shapes and how things connect together in space," Steckles said.

In topology, objects can be grouped together by the number of holes they possess. For example, a topologist sees no difference between a golf ball, a baseball or even a Frisbee. If they were all made of plasticine, or putty, they could theoretically be squashed, stretched or otherwise manipulated to look like each other without making or closing any holes in the plasticine or sticking different parts together, Steckles argued.

However, to a topologist, these objects are fundamentally different to a bagel, a doughnut or a basketball hoop, which each have a hole through the middle of them. A figure of eight with two holes and a pretzel with three are different topological objects again.

Photo of a large soft pretzel with salt.

This delicious pretzel has three holes. (Image credit: Getty Images)

A useful way to get into the mathematicians' way of thinking about the straw problem is to "imagine our straw is made of play dough," Arthur said. "Let's take this straw and slowly squish the top down and down and down towards the bottom, making sure the hole in the middle stays open. We will squish it until we are in a shape that looks like a doughnut." Mathematicians, Arthur said, would say that "the straw is homeomorphic to a doughnut."

The long, thin aspect ratio of the straw, and the fact that the two openings are relatively far apart, are perhaps what gives rise to the suggestion of two holes. But to a topologist, bagels, basketball hoops and doughnuts are all topologically equivalent to a straw with a single hole. "The hole in a straw goes all the way through it, and the opening at the other end is just the back of that same hole," Steckles said.

Back to the human body

Armed with the topologists' definition of a hole, we can tackle the original question: How many holes does the human body have? Let's first try to list all the openings we have. The obvious ones are probably our mouths, our urethras (the ones we pee out of) and our anuses, as well as the openings in our nostrils and our ears. For some of us, there are also milk ducts in nipples and vaginas.

There are also four less-obvious openings that we all have in the corners of eyelids closest to our nose — the four lacrimal puncta, which drain tears from our eyes into our nasal cavities. At an even smaller scale there are the pores that enable sweat to escape our bodies and sebum to lubricate our skin. In total there are potentially millions of these openings in our bodies, but do they all count as holes?

The two lacrimal puncta drain tears from the eye down the lacrimal canals and through to the nasolacrimal duct which connects to the nasal cavity. (Image credit: Getty Images)

To make the question interesting, think about whether we could pass a very thin string into one hole and out of another. If we set the size of this string at about 60 microns (60 millionths of a meter), then it's possible that the string could enter an opening as small as a pore. However — and this is key — it wouldn't be able to come out the other end. It would be blocked by the layer of cells at the bottom of the pore, which is too thick for the string to pass through into the vasculature that supplies the pore.

"They're not actually holes in the topological sense, as they don't go all the way through," Steckles said. "They're just blind pits."

By this definition we can rule out all the pores, milk ducts and urethras. We couldn't thread a string in one of these openings and out of another. Even the ear canals have to go, as they are separated from the rest of the sinuses by the eardrums.

"We have our mouth, our anus, and then our nostrils. They are four of the … openings that form a hole," Arthur said. "But we actually have eight. The remaining four come from the tear ducts, we each have two in each eye, an upper and a lower."

But this doesn't mean eight holes. As Steckles pointed out: "When the holes that pass through a shape connect together inside the shape, it makes it harder to count how many there are."

Looking at underwear

A pair of underwear, for example, has three openings (one for the waist and one for each of the two legs), but it's not immediately clear how many holes a topologist would say it has. "A useful trick is to think about flattening it out," Steckles said. "If we were to stretch the waistband of the pants out onto a big hula hoop, we'd see the two trouser legs sticking down, each being one hole."

Underwear has three openings but only two holes. (Image credit: Getty Images)

So despite having three openings, the pair of underwear has only two holes. "So when the holes connect together in the middle, there's one fewer hole than there are openings," Steckles argued. Correspondingly, topology tells us that, despite eight interconnected openings, the human body has seven different holes.

But there might be one more. Although often counted as a blind hole, the vagina leads to the uterus, which then leads to one of two fallopian tubes. These tubes are open at the far end and lead to the peritoneal cavity near the ovary. It is the job of the finger-like projections of the funnel-shaped infundibulum at the end of the fallopian tube to catch the egg when it is released from the nearest ovary. However, it has been demonstrated that eggs released from one ovary can be captured by the fallopian tube on the other side, so that passage between the two open ends of the fallopian tubes is possible. Our tiny string could therefore be threaded all the way through the female reproductive tract and back out, counting as one more hole.

So the mathematician's answer is that humans have either seven or eight holes.
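For readers who want to check the arithmetic, the counting rule Steckles describes can be written out explicitly. Here is a minimal sketch in Python, using a toy graph model of our own devising (not a method from the article): squash the object down to a graph with one vertex for the connected outer surface, one vertex for the shared internal chamber, and one edge per opening. The number of independent loops in that graph, its first Betti number (edges minus vertices plus connected components), matches the number of topological holes.

```python
def holes_from_openings(openings: int) -> int:
    """First Betti number (independent loops) of a two-vertex graph.

    Toy model: one vertex for the connected outer surface, one for the
    internal chamber the tunnels meet in, and one edge per opening.
    """
    vertices, edges, components = 2, openings, 1
    return edges - vertices + components  # loops = E - V + C

for name, n in [("straw", 2), ("underwear", 3), ("human body", 8)]:
    print(f"{name}: {n} openings -> {holes_from_openings(n)} hole(s)")
# straw: 2 openings -> 1 hole(s)
# underwear: 3 openings -> 2 hole(s)
# human body: 8 openings -> 7 hole(s)
```

Threading the extra route through the female reproductive tract adds one more independent loop to the graph, which is how the count can rise from seven to eight.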

In the end, the question is not just about counting openings but about understanding connections. Topologically speaking, our bodies are less like Swiss cheese and more like a carefully constructed onesie for an octopus.


]]>
https://www.livescience.com/physics-mathematics/mathematics/how-many-holes-does-the-human-body-have kfDkaiAPvBToEqinFtiU3D Sun, 28 Dec 2025 10:00:00 +0000 Tue, 23 Dec 2025 15:36:43 +0000
<![CDATA[ Do you think you can tell an AI-generated face from a real one? ]]> With the rapid advances in artificial intelligence, computer-generated images — including hyperrealistic faces — are becoming more common. Many look so convincing it can be hard to tell them apart from real photos.

In a new study, researchers tested people's ability to distinguish images of real faces from AI-generated ones and found that most participants missed most of the AI-generated faces. Even "super-recognizers" — an elite group with exceptionally strong facial-processing abilities — were able to correctly identify fake faces as fake only 41% of the time. Typical recognizers correctly identified only about 30% of the AI-generated faces.

However, the study also showed that people's detection of fake faces improved when they were given just five minutes of training beforehand. The training taught the participants how to spot common computer-rendering errors, such as unnatural-looking skin textures or oddities in how hair lies across the face. After training, detection accuracy increased substantially, with super-recognizers spotting 64% of fake faces and typical recognizers identifying 51%.

Given how difficult the task proved to be even for highly skilled participants, how confident are you in your own ability to spot AI faces? Answer our poll below, and let us know why in the comments.


]]>
https://www.livescience.com/technology/artificial-intelligence/do-you-think-you-can-tell-an-ai-generated-face-from-a-real-one 9rybLfFcQvPysj8kfmxxma Sat, 27 Dec 2025 18:00:00 +0000 Wed, 24 Dec 2025 11:16:01 +0000
<![CDATA[ AI is getting better and better at generating faces — but you can train to spot the fakes ]]> Images of faces generated by artificial intelligence (AI) are so realistic that even "super recognizers" — an elite group with exceptionally strong facial processing abilities — are no better than chance at detecting fake faces.

People with typical recognition capabilities are worse than chance: more often than not, they think AI-generated faces are real.

That's according to research published Nov. 12 in the journal Royal Society Open Science. However, the study also found that receiving just five minutes of training on common AI rendering errors greatly improves individuals' ability to spot the fakes.

"I think it was encouraging that our kind of quite short training procedure increased performance in both groups quite a lot," lead study author Katie Gray, an associate professor in psychology at the University of Reading in the U.K., told Live Science.

Surprisingly, the training increased accuracy by similar amounts in super recognizers and typical recognizers, Gray said. Because super recognizers are better at spotting fake faces at baseline, this suggests that they are relying on another set of clues, not simply rendering errors, to identify fake faces.

Gray hopes that scientists will be able to harness super recognizers' enhanced detection skills to better spot AI-generated images in the future.

"To best detect synthetic faces, it may be possible to use AI detection algorithms with a human-in-the-loop approach — where that human is a trained SR [super recognizer]," the authors wrote in the study.

Detecting deepfakes

In recent years, there has been an onslaught of AI-generated images online. Deepfake faces are created using a two-part AI algorithm called a generative adversarial network (GAN). First, a generator produces a fake image based on real-world images; the resulting image is then scrutinized by a discriminator, which judges whether it is real or fake. With iteration, the fake images become realistic enough to get past the discriminator.
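To make that two-stage loop concrete, here is a minimal GAN training sketch in PyTorch. The toy one-dimensional "images," network sizes and hyperparameters are our own placeholders for illustration; real face generators are far larger and train on photo datasets.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
IMG, NOISE, BATCH = 64, 16, 32  # toy sizes, not a real face model

# Generator: noise in, fake "image" out.
gen = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(), nn.Linear(128, IMG))
# Discriminator: "image" in, real-vs-fake logit out.
disc = nn.Sequential(nn.Linear(IMG, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)

def real_batch() -> torch.Tensor:
    """Stand-in for a batch of real photos: samples from a fixed Gaussian."""
    return torch.randn(BATCH, IMG) * 0.5 + 1.0

for step in range(200):
    # Stage 1: train the discriminator to label real images 1 and fakes 0.
    fake = gen(torch.randn(BATCH, NOISE)).detach()  # don't update gen here
    d_loss = (loss_fn(disc(real_batch()), torch.ones(BATCH, 1)) +
              loss_fn(disc(fake), torch.zeros(BATCH, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Stage 2: train the generator to make the discriminator say "real."
    fake = gen(torch.randn(BATCH, NOISE))
    g_loss = loss_fn(disc(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The adversarial back-and-forth is the design point: each time the discriminator learns a new tell, the generator is pushed to remove it, which is why obvious rendering errors tend to shrink with each model generation.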

These algorithms have now improved to such an extent that individuals are often duped into thinking fake faces are more "real" than real faces — a phenomenon known as "hyperrealism."

As a result, researchers are now trying to design training regimens that can improve individuals' abilities to detect AI faces. These training sessions point out common rendering errors in AI-generated faces, such as the face having a middle tooth, an odd-looking hairline or unnatural-looking skin texture. They also highlight that fake faces tend to be more proportional than real ones.

In theory, so-called super recognizers should be better at spotting fakes than the average person. These super recognizers are individuals who excel in facial perception and recognition tasks, in which they might be shown two photographs of unfamiliar individuals and asked to judge whether the photos show the same person. But to date, few studies have examined super recognizers' abilities to detect fake faces, or whether training can improve their performance.

To fill this gap, Gray and her team ran a series of online experiments comparing the performance of a group of super recognizers to typical recognizers. The super recognizers were recruited from the Greenwich Face and Voice Recognition Laboratory volunteer database; they had performed in the top 2% of individuals in tasks where they were shown unfamiliar faces and had to remember them.

In the first experiment, an image of a face appeared onscreen and was either real or computer-generated. Participants had 10 seconds to decide if the face was real or not. Super recognizers performed no better than if they had randomly guessed, spotting only 41% of AI faces. Typical recognizers correctly identified only about 30% of fakes.

Each cohort also differed in how often they thought real faces were fake. This occurred in 39% of cases for super recognizers and in around 46% for typical recognizers.

The next experiment was identical, but included a new set of participants who received a five-minute training session in which they were shown examples of errors in AI-generated faces. They were then tested on 10 faces and provided with real-time feedback on their accuracy at detecting fakes. The final stage of the training involved a recap of rendering errors to look out for. The participants then repeated the original task from the first experiment.

Training greatly improved detection accuracy, with super recognizers spotting 64% of fake faces and typical recognizers noticing 51%. The rate that each group inaccurately called real faces fake was about the same as the first experiment, with super recognizers and typical recognizers rating real faces as "not real" in 37% and 49% of cases, respectively.

Trained participants tended to take longer to scrutinize the images than the untrained participants had — typical recognizers slowed by about 1.9 seconds and super recognizers by about 1.2 seconds. Gray said this is a key message to anyone who is trying to determine if a face they see is real or fake: slow down and really inspect the features.

It is worth noting, however, that the test was conducted immediately after participants completed the training, so it is unclear how long the effect lasts.

"The training cannot be considered a lasting, effective intervention, since it was not re-tested," Meike Ramon, a professor of applied data science and expert in face processing at the Bern University of Applied Sciences in Switzerland, wrote in a review of the study conducted before it went to print.

And since separate participants were used in the two experiments, we cannot be sure how much training improves an individual's detection skills, Ramon added. That would require testing the same set of people twice, before and after training.

]]>
https://www.livescience.com/health/psychology/ai-is-getting-better-and-better-at-generating-faces-but-you-can-train-to-spot-the-fakes 7BBTPzaySFCPops5wCRZP4 Sat, 27 Dec 2025 18:00:00 +0000 Mon, 29 Dec 2025 10:49:57 +0000
<![CDATA[ 6 'lost' cities archaeologists have never found ]]> Archaeologists have been very busy excavating lost civilizations, but they haven't found everything. There are still prominent ancient cities, including capitals of large kingdoms and empires, that have never been unearthed by scholars.

We know these cities exist because ancient texts describe them, but their location may be lost to time.

In a few cases, looters have found these cities and plundered large numbers of artifacts from them. But these robbers have not come forward to reveal their locations. In this countdown, Live Science takes a look at six ancient cities whose whereabouts are unknown.

1. Irisagrig

Ancient inscriptions, some of them from Irisagrig, are on display at a ceremony where they were returned to Iraq. (Image credit: Win McNamee/Getty Images)

Not long after the 2003 U.S. invasion of Iraq, thousands of ancient tablets from a city called "Irisagrig" began appearing on the antiquities market. From the tablets, scholars could tell that Irisagrig was in Iraq and flourished around 4,000 years ago.

Those tablets reveal that the rulers of the ancient city lived in palaces that housed many dogs. They also kept lions, which were fed cattle. Those who took care of the lions, referred to as "lion shepherds," received rations of beer and bread. The inscriptions also mention a temple dedicated to Enki, a god of mischief and wisdom, and say that festivals were sometimes held within it.

Scholars think that looters located Irisagrig around the time of the 2003 U.S. invasion. Archaeologists have yet to find the city, and the looters who did have not come forward to identify where it is.

2. Itjtawy

The remains of the pyramid of Amenemhat I at Lisht. The capital city he built has never been found, although scholars think that it is likely somewhere near Lisht. (Image credit: DeAgostini/Getty Images)

Egyptian pharaoh Amenemhat I (reign circa 1981 to 1952 B.C.) ordered a new capital city built. This capital was known as "Itjtawy," a name that can be translated as "the seizer of the Two Lands" or "Amenemhat is the seizer of the Two Lands." As the name suggests, Amenemhat faced a considerable amount of turmoil; his reign ended with his assassination.

Despite Amenemhat's assassination, Itjtawy would remain the capital of Egypt until around 1640 B.C., when the northern part of Egypt was taken over by a group known as the "Hyksos" and the kingdom fell apart.

While Itjtawy has not been found, archaeologists think it is located somewhere near the site of Lisht, in central Egypt. This is partly because many elite burials, including a pyramid belonging to Amenemhat I, are located at Lisht.

3. Akkad

A bust of Sargon of Akkad, an early ruler of the Akkadian Empire. (Image credit: Photo12/Universal Images Group via Getty Images)

The city of Akkad (also called Agade) was the capital of the Akkadian Empire, which flourished between 2350 and 2150 B.C. At its peak the empire stretched from the Persian Gulf to Anatolia. Many of its conquests occurred during the reign of "Sargon of Akkad," who lived sometime around 2300 B.C. One of the most important structures in Akkad itself was the "Eulmash," a temple dedicated to Ishtar, a goddess associated with war, beauty and fertility.

Akkad has never been found, but it is thought to have been built somewhere in Iraq. Ancient records indicate that the city was destroyed or abandoned when the Akkadian empire ended around 2150 B.C.

4. Al-Yahudu

A painting dating to 1830, which depicts Jewish exiles in the Babylonian empire. (Image credit:  ARTGEN/Alamy)

Al-Yahudu, a name which means "town" or "city" of Judah, was a place in the Babylonian empire where Jews lived after the kingdom of Judah was conquered by the Babylonian king Nebuchadnezzar II in 587 B.C. He sent part of the population into exile, a practice the Babylonians often engaged in after conquering a region.

About 200 tablets from the settlement are known to exist, and they indicate that the exiled people who lived there kept their faith and used Yahweh, the name of God, in their names. Al-Yahudu's location has not been identified by archaeologists, but, like many of these lost cities, it was likely in what is now Iraq. Given that the tablets showed up on the antiquities market, with no record of their being found in an archaeological excavation, it appears that at some point looters succeeded in finding the site.

5. Waššukanni

A cylinder seal from the Mitanni empire. It is now in the Metropolitan Museum of Art in New York City. (Image credit: Gift of Martin and Sarah Cherkasky, 1987; Metropolitan Museum of Art; Public Domain)

Waššukanni was the capital city of the Mitanni empire, which existed between roughly 1550 B.C. and 1300 B.C. and included parts of northeastern Syria, southern Anatolia and northern Iraq. It faced intense competition from the Hittite empire in the north and the Assyrian empire in the south, and its territory was gradually lost to them.

Waššukanni has never been found, and some scholars think it may lie in northeastern Syria. The people who lived in the capital, and indeed throughout much of the empire, were known as the "Hurrians," and they had their own language, which is known today from ancient texts.

6. Thinis

The Narmer palette, shown here, depicts King Narmer — also known as Menes — smiting an enemy. It dates back around 5,000 years, to when Egypt was being unified. (Image credit: Werner Forman/Universal Images Group/Getty Images)

Thinis (also known as Tjenu) was an ancient city in southern Egypt that flourished early in the civilization's history. According to the ancient writer Manetho, it was the seat from which some of Egypt's earliest kings ruled around 5,000 years ago, when Egypt was being unified. Egypt's capital was moved to Memphis shortly after unification, and Thinis became the capital of a nome (a province of Egypt) during the Old Kingdom period (circa 2649 to 2150 B.C.), Ali Seddik Othman, an inspector with the Egyptian Ministry of Tourism and Antiquities, noted in an article published in the Journal of Abydos.

Thinis has never been identified, although it is believed to be near Abydos in southern Egypt. This is partly because many elite members of society, including royalty, were buried near Abydos around 5,000 years ago.

]]>
https://www.livescience.com/archaeology/6-lost-cities-archaeologists-have-never-found 7oLKJeBrMbsDzfweLLb5QP Sat, 27 Dec 2025 17:10:00 +0000 Tue, 23 Dec 2025 19:33:09 +0000
<![CDATA[ Tooth-in-eye surgery, 'blood chimerism,' and a pregnancy from oral sex: 12 wild medical cases we covered in 2025 ]]> Each week, Live Science highlights an intriguing case report from the medical literature, where we explore unusual symptoms, rarely seen diagnoses and out-of-the-box treatments. Through this "Diagnostic Dilemma" series, we describe how doctors work to ultimately discover the cause of a patient's ailment. In complex cases, this diagnostic process can be quite arduous. That's part of why doctors share case reports: to help other medical professionals who might be facing the same puzzle.

Here are 12 of our most intriguing Diagnostic Dilemmas from the past year. (If descriptions of medical symptoms and procedures make you squeamish, proceed with caution.)

1. Boy spoke foreign language after surgery

A Dutch teenager got knee surgery to treat a soccer injury, and upon waking up from anesthesia, he spoke only English — a language he'd previously spoken only in language classes at school. He kept insisting he was in the U.S., did not recognize his parents, and could not speak or understand spoken Dutch, his native language. Exams turned up no neurological abnormalities, and the doctors didn't initiate any specific treatment to address the language issue. Within 18 hours of surgery, the boy could understand some Dutch but not speak it without struggling. But then suddenly, he could both understand and speak it as normal. The doctors described the event as a strange case of "foreign language syndrome."

2. Woman with no vaginal opening gets pregnant via oral sex

A teenager reported to a hospital with abdominal pain, and examinations soon revealed that she was nine months pregnant and that she was having contractions. When doctors examined the patient's reproductive tract, they found that she lacked a vaginal opening — a rare condition called distal vaginal atresia. Because of this, the medical team had to deliver the baby — a healthy, 6.2-pound (2.8 kilograms) boy — via cesarean section. The teenager had been seen at the same hospital about nine months prior, when an ex stabbed her after finding her fellating a new boyfriend. The wounds she incurred during the stabbing likely allowed sperm to escape her digestive tract and make their way to her reproductive tract, resulting in an unlikely pregnancy, her doctors theorized.

3. Man stabbed by huge fish

A man was brought to a hospital by boat and helicopter after incurring an injury while fishing. He'd caught a white marlin (Kajikia albida) — a large fish with a long, pointy "bill" — and when he leaned over the edge of his boat to release his hook from the fish, it jumped up and struck him. At the hospital, doctors found a fragment of the fish's bill lodged in the man's throat, spinal canal and base of his skull. With an emergency surgery and antibiotics to prevent infections, the man survived the encounter without any long-term symptoms.

4. Acupuncture led to joint injury

An X-ray of the front (A) and side (B) of the patient's left knee. The lines are the tiny golden threads. (Image credit: The New England Journal of Medicine ©2013)

A woman with osteoarthritis of the knee began getting acupuncture regularly when her pain medications started causing bad stomach issues. But her knees then became very sore, and she went to a hospital to be examined. X-rays revealed areas of her joints and shinbones where the bone tissue had thickened and spurs had formed. Additionally, hundreds of tiny flecks could be seen around both knee joints. It turned out that the woman's acupuncturists had left golden threads inside her knees on purpose as part of her treatment. In other cases, these threads have caused cysts and tissue damage, which can happen when they migrate through the body.

5. Man experiences rare meat allergy

A man in Michigan went to an ER with swollen eyelids and an itchy rash, and he noted that he'd also been experiencing cramps, nausea, abdominal pain and vomiting over the preceding days. When doctors examined the patient, they uncovered signs of anaphylaxis, a severe allergic reaction, and his condition quickly progressed to shock. The medical team successfully stabilized the patient, but a few days later, his condition worsened again. At that point, the doctors spotted a pattern: The symptoms arose when the man ate red meat. An allergy to meat, a condition called alpha-gal syndrome, can be triggered by the bite of certain tick species. It turned out that the man was an avid deer hunter who likely encountered an adult tick or tick larvae while hunting, his doctors concluded.

6. Woman had XY chromosomes in her blood

A woman had her chromosomes checked following a pregnancy loss to see if there might have been an underlying genetic reason for the miscarriage. The test revealed that, at least in the woman's blood, her chromosomal profile (or karyotype) was 46,XY — the typical karyotype among males. Further tests revealed that across the rest of her tissues, her karyotype was 46,XX, the typical chromosomal profile of a female. The woman had a fraternal twin, so in this case of "chimerism," the doctors concluded that the XY chromosomes likely came from her twin brother in the womb but were somehow assimilated only into her blood cells. The doctors suspected the "veins and arteries of the two children became intertwined in the umbilical cord" at some point. The woman had no overt symptoms tied to carrying these chromosomes in her blood and later went on to carry a pregnancy that resulted in the birth of a baby boy.

7. Woman injects herself with black widow venom

A woman visited an emergency room with a headache, severe cramps and muscle pain, as well as an elevated pulse, breathing rate and blood pressure. She told doctors she'd attempted to get high by injecting a ground-up black widow spider (Latrodectus) into her veins in a suspension of distilled water. The doctors suspected the injected dose of black-widow venom was likely much higher than one would get from a bite, and its effects may have been exacerbated by the patient's allergic reaction to proteins in the venom. After the patient had been treated for several days in an intensive care unit, her symptoms resolved and she was discharged.

8. Nut allergy was triggered by ejaculate

A woman developed hives, swelling under her skin and trouble breathing after having sex with her partner. While receiving treatment at a hospital, she reported having a known allergy to Brazil nuts. She said that her partner ate them a few hours prior to sex but that he'd taken a bath and washed his hands thoroughly before intercourse. When the doctors conducted a skin-prick allergy test, using samples of the partner's semen, before and after he ate Brazil nuts, they found that the allergy triggers could indeed pass through the semen and set off the woman's allergy.

9. Rash mysteriously migrated

A man's red rash appeared to be "migrating" across his skin, doctors found. (Image credit: The New England Journal of Medicine ©2022)

Following a cancer treatment, a man developed a red rash that started out near the anus and then spread rapidly to the trunk and limbs. The rash, which looked like wavy lines all over the patient's body, appeared to migrate, with the lines starting out in one spot and later moving across the skin. A stool test revealed Strongyloides stercoralis, a parasite that can cause an infection called strongyloidiasis in humans. These worms were migrating under the man's skin, and the infection likely arose because the patient's immune system was stunted by glucocorticoids used in his cancer treatment.

10. Rare tooth-in-eye surgery performed

A rare autoimmune disorder injured a man's corneas and extensively impeded his sight. To restore vision in one eye, doctors attempted an osteo-odonto-keratoprosthesis, or "tooth-in-eye surgery." The procedure involves removing one of the patient's teeth and implanting it in their eye socket, where it serves as a platform for a transparent, plastic lens. The lens stands in for the injured cornea and enables light to enter the eye. The man's successful procedure was the first of its kind in Canada.

11. "Muscle-plumping" injections cause calcium spike

A man went to a hospital because he was experiencing weakness and vomiting. There, tests revealed that his kidneys were failing and the calcium in his blood was too high. Physical exams and scans revealed abnormalities in his upper-arm and chest muscles — namely, areas of superdense calcification. It turned out that the man had previously gotten injections of silicone-like, oil-based substances to "plump" up the look of his muscles. In this case, the injections triggered a persistent foreign-body reaction, resulting in extensive scarring and calcification of the muscle that leached calcium into the bloodstream.

12. Scientist catches plague from defanged bacteria

A lab worker came down with an infection that, despite medical treatment, ended up being fatal. His doctors were informed that the patient had worked with a weakened strain of Yersinia pestis, the bacterium that causes the plague. This weakened form of the germ was thought to be noninfectious, but nonetheless, the man contracted it. Further tests revealed that the man had unusually high levels of iron in his blood. One way the plague bacteria had been weakened was that its key gene for absorbing iron had been removed — but the man's blood, which was chock-full of iron, may have enabled the germ to overcome this weakness and establish a deadly infection.

For more intriguing medical cases, check out our Diagnostic Dilemma archives.

This article is for informational purposes only and is not meant to offer medical advice.

]]>
https://www.livescience.com/health/tooth-in-eye-surgery-blood-chimerism-and-a-pregnancy-from-oral-sex-12-wild-medical-cases-we-covered-in-2025 zfZDSBpV6ZJQUBBcpVF8wW Sat, 27 Dec 2025 12:00:00 +0000 Mon, 29 Dec 2025 10:49:57 +0000
<![CDATA[ Is the sun really a dwarf star? ]]> The sun is the biggest object in the solar system; at about 865,000 miles (1.4 million kilometers) across, it's more than 100 times wider than Earth. Despite being enormous, our star is often called a "dwarf." So is the sun really a dwarf star?

Technically, the sun is a G-type main-sequence star — specifically, a G2V star. The "V" indicates that it is a dwarf, Tony Wong, a professor of astronomy at the University of Illinois Urbana-Champaign, told Live Science.

Dwarf stars got their name when Danish astronomer Ejnar Hertzsprung noticed that the reddest stars he observed were either much brighter or much fainter than the sun. He called the brighter ones "giants" and the dimmer ones "dwarfs," according to Michael Richmond, a professor of physics and astronomy at the Rochester Institute of Technology in New York. The sun is currently more similar in size and brightness to smaller, dimmer stars called red dwarfs than to giant stars, so the sun and its brethren also became classified as dwarf stars.

"G" is astronomer code for yellow — that is, stars of a temperature range of around 9,260 to 10,340 degrees Fahrenheit (5,125 to 5,725 degrees Celsius), Lucas Guliano, an astronomer at the Harvard-Smithsonian Center for Astrophysics, told Live Science.


Wong noted that G2 means it's somewhat hotter than a typical G-type star. "They range from G0 to G9 in order of decreasing temperature," he said. At its surface, the sun is about 9,980 F (5,525 C), Guliano added.

Calling the sun yellow is a bit of a misnomer, however, as the sun's visible output is greatest in the green wavelengths, Guliano explained. But the sun emits all visible colors, so "the actual color of sunlight is white," Wong said.

(On Earth, the sun appears yellow because of the way molecules in the atmosphere can scatter the different colors that make up the sun's white light, according to Stanford University's Solar Center. This is the same reason the sky appears blue.)
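The color shift can be put in rough numbers. As a back-of-the-envelope sketch (our arithmetic, using the standard Rayleigh scattering law rather than any figure from the article), scattering strength scales as one over the fourth power of the wavelength:

```python
# Rayleigh scattering strength scales as 1 / wavelength^4.
blue, red = 450e-9, 650e-9  # approximate wavelengths in meters

ratio = (red / blue) ** 4
print(f"Blue light is scattered about {ratio:.1f}x more strongly than red")
# -> about 4.3x
```

Because blue is preferentially scattered out of the direct beam and across the sky, the sunlight that reaches an observer's eye is tilted toward yellow.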

G-type stars also range from G0 to G9 in order of decreasing size, Guliano said. Wong explained that class G stars "range in size from somewhere around 90% the mass of the sun up to around 110% the mass of the sun."

The sun is what astronomers call a main-sequence star, a class that includes most stars. Nuclear reactions within these stars fuse hydrogen into helium, unleashing extraordinary amounts of energy. Among main-sequence stars, color is determined by the star's mass.

"The sun is yellow, but less-massive main sequence stars are orange or red, and more massive main sequence stars are blue," Carles Badenes, a professor of physics and astronomy at the University of Pittsburgh, told Live Science.

The sun is slowly changing as it ages. "It has gotten about 10% larger since it started on the main sequence, and it will get much larger," Wong said. Even as it grows, however, the sun will still be considered a dwarf until its last stage of life.

In about 5 billion years, the sun will run out of hydrogen fuel and begin to swell to become a red giant, leaving its dwarf days behind. "It will engulf the orbit of Venus, and maybe Earth as well," Badenes said, "and its surface temperature will get colder, making it red in color."


]]>
https://www.livescience.com/space/the-sun/is-the-sun-really-a-dwarf-star bR5DG7ufgeckVgn4M5qtoE Sat, 27 Dec 2025 10:00:00 +0000 Thu, 18 Dec 2025 18:53:48 +0000
<![CDATA[ Science history: Dian Fossey found murdered, after decades protecting gorillas that she loved — Dec. 27, 1985 ]]>
QUICK FACTS

Milestone: Dian Fossey found murdered

Date: Dec. 27, 1985

Where: Karisoke Research Center in Rwanda

Who: The murderer remains unknown

In late December 1985, a worker opened the door to a remote cabin in the Virunga Mountains of Rwanda and encountered a horrific scene: Gorilla researcher Dian Fossey, whose aggressive approach to conservation had pitted her against the local community, had been hacked to death with a machete, and her cabin had been ransacked.

Fossey had been working with an endangered gorilla population in Rwanda's Volcanoes National Park since the late 1960s. Along with Jane Goodall and Biruté Galdikas, she was one of the three "trimates" chosen by Louis Leakey to study primates in their natural habitat.

Fossey had no formal training in ethology, the science of animal behavior, when she set out for Africa. She began her field work in Kabara, Congo, living in a tiny tent and venturing out to study mountain gorillas (Gorilla beringei beringei) there. After civil war broke out in 1967, she escaped to the Rwandan portion of the mountains and set up a new research project near Mount Karisimbi in Rwanda.

Fossey was inspired by the work of George Schaller, a biologist who, in 1959, had also studied the gorillas of the Virunga Mountains.

"I knew that animals try to stay out of your way. If you go quietly near them, they come to accept your presence. That's what I did with gorillas. I just went near them day after day, which was fairly easy because they form cohesive social groups. Soon, I knew them as individuals, both their faces and their behavior, and I just sat and watched them," Schaller said in a 2006 interview.

Fossey operated on this same principle of patient, unobtrusive observation. Still, the gorillas initially fled from her, and she spent hours tracking and trailing them across the misty forest.

Dian Fossey in 1983, the year her book "Gorillas in the Mist" came out. Fossey's aggressive tactics to protect the gorillas did not earn her good will with the locals. (Image credit: Peter Breining/San Francisco Chronicle via Getty Images)

After a year, the gorillas stopped fleeing at her approach and started beating their chests and vocalizing. It was a bluff meant to scare her off, but it was still far from their ordinary, natural behavior, she said in a 1973 lecture. After two years, she received two young gorillas, Coco and Pucker; rehabilitated them; and learned about gorilla young by observing them.

"I came to know the gorillas' need for love and affection, and the young gorillas' need for constant play," she said.

It would take three years before the gorillas came to accept her presence and reveal more naturalistic behavior, she said in the lecture.

During her decades in Virunga, Fossey described and learned to mimic the vocalizations of gorillas, including the "belch vocalization" that signifies contentment. She also elucidated their tight-knit family structures and their courtship and mating rituals, and documented the occasional murder of infant gorillas by rival males.

Although she would eventually earn her doctorate in zoology from the University of Cambridge, Fossey spent her first years studying the gorillas with no formal training. Perhaps because of her initial lack of training, she formed close bonds with individual animals and tended to ascribe more humanlike motivations and descriptions to their actions than is typically accepted in formal zoology. She often described gorillas as more altruistic than humans.

"You take these fine, regal animals,'' she told an interviewer, as reported by The New York Times. ''How many fathers have the same sense of paternity? How many human mothers are more caring? The family structure is unbelievably strong.''

She formed a particularly close bond with a gorilla she nicknamed Digit — so named for his damaged finger — who did not have playmates his age. Digit was killed by poachers in 1977.

The last years of Fossey's life were increasingly focused on conserving the gorillas' dwindling habitat and combating poachers. She used confrontational methods, such as burning snares, wearing masks to scare poachers, and spray-painting cattle to prevent herders from bringing them into the national park, according to the Dian Fossey Gorilla Fund.

She also shot over the heads of tourists to scare them away and told her graduate students to carry guns, according to The Washington Post.

Given that many of the people living on the fringes of the park lived in poverty and resorted to expansion and herding to survive, this did not earn her good will with many of the locals.

Fossey's murder was never solved. Many think poachers were responsible for the killing, but other theories have been floated as well.

]]>
https://www.livescience.com/animals/science-history-dian-fossey-found-murdered-after-decades-protecting-gorillas-that-she-loved-dec-27-1985 PYHybGbyXeTuQR3hZM6Pt7 Sat, 27 Dec 2025 07:00:00 +0000 Tue, 23 Dec 2025 16:37:18 +0000
<![CDATA[ Spinosaurus relative longer than a pickup truck stalked Thailand's rivers 125 million years ago ]]>

Two young spinosaurids hunt a juvenile Phuwiangosaurus in Cretaceous Thailand. A large adult spinosaurid (not the newly unveiled Sam Ran spinosaurid) rests in the background beside a body of water, while two feathered Kinnareemimus are depicted by the trees on the right. (Image credit: Kmonvich Lawan)

Around 125 million years ago, a dinosaur longer than a pickup truck stalked rivers to gobble up fish in what is now Thailand.

The remains of the roughly 23- to 26-foot-long (7 to 8 meters) dinosaur, which include parts of its spine, pelvis and tail, represent one of the most complete spinosaurid specimens ever found in Asia, according to researchers.

Spinosaurids were a family of bipedal predators with elongated snouts, crocodile-like teeth and, in many species, sails on their backs. Researchers believe that the Thai specimen, first discovered in 2004, belonged to the Spinosaurinae subfamily, which included the longest-known carnivorous dinosaur genus, Spinosaurus — a potential swimming predator from North Africa that grew up to around 50 feet (15 m) long.

"This discovery from Thailand helps us better understand what spinosaurines looked like and how they evolved in Asia," Adun Samathi, an assistant professor at the Walai Rukhavej Botanical Research Institute and Mahasarakham University in Thailand, told Live Science in an email. "[The fossils] also show that dinosaur diversity in Southeast Asia was richer than previously known and expand our understanding of how these unusual fish-eating predators were spread around the world."

Samathi presented the spinosaurid findings Nov. 12 at the Society of Vertebrate Paleontology 2025 annual meeting in Birmingham, England. The findings haven't been peer-reviewed, as Samathi and his colleagues still have to submit them to a journal.

The researchers don't have an official name for the dinosaur. However, they've nicknamed it the Sam Ran spinosaurid, as it was found in the Sam Ran locality (area) of the Khok Kruat rock formation in northeastern Thailand, according to Samathi, who studied the spinosaurid as part of his doctoral thesis. (Samathi is one of several students and researchers to study the specimen since its discovery.)

The team quickly identified the dinosaur as a spinosaurid because it has several of the group's characteristic features, including long neck vertebrae and tall spines on its back vertebrae. However, the species also had features that distinguished it from known spinosaurid species, including shorter spines than Spinosaurus and more paddle-like spines than Ichthyovenator from Laos, which borders Thailand.

The team suspects that the Sam Ran spinosaurid was more closely related to Spinosaurus from North Africa than Ichthyovenator from Laos. However, there's a lot of uncertainty surrounding the evolution of Asian spinosaurids, as well as spinosaurids in general, and the researchers' findings are only preliminary at this stage.

The Sam Ran spinosaurid died beside a shallow river before some of its remains were fossilized. Samathi doesn't think that this spinosaurid could swim, but it seemed to be using the river ecosystem, which was teeming with life when the dinosaur perished relatively early in the Cretaceous period (145 million to 66 million years ago).

"The new spinosaur lived (or at least [was] found) in a river system with gently flowing water and occasional floods, within a dry to semi-arid landscape," Samathi said. "The site has yielded a variety of animals, including freshwater sharks, bony fish, turtles, crocodiles, and dinosaurs such as a sauropod and an iguanodontian."

]]>
https://www.livescience.com/animals/dinosaurs/spinosaurus-relative-longer-than-a-pickup-truck-stalked-thailands-rivers-125-million-years-ago q37N9C4DAKJRdVbVeY3mXn Fri, 26 Dec 2025 19:50:00 +0000 Tue, 23 Dec 2025 19:45:00 +0000
<![CDATA[ New electrochemical method splits water with electricity to produce hydrogen fuel — and cuts energy costs in the process ]]> Scientists have developed a new technique that doubles the amount of hydrogen produced when splitting water molecules with electricity. The method works by adding a simple organic molecule and a modified catalyst to the reactor.

The adapted method lowers energy costs by up to 40% and may offer a "promising pathway for efficient and scalable hydrogen production," the researchers said in a new study published Dec. 1 in the Chemical Engineering Journal.

"Hydrogen is one of the most in demand chemicals," study co-author Hamed Heidarpour, a doctoral student at McGill University in Montreal, Canada, told Live Science. Hydrogen is used for ammonia production to produce fertilizers, in fuel cells to generate electrical energy, or burned to directly produce energy, Heidarpour said.

The main way of producing hydrogen is through steam reforming, which involves reacting steam with natural gas at high temperatures and pressures to liberate the hydrogen atoms. But these conditions mean the process is energy intensive and requires burning large amounts of fossil fuels.

Using electricity to split water into hydrogen and oxygen molecules — a method known as electrolysis — could potentially offer a way to create hydrogen with no direct carbon dioxide emissions.

This works by connecting two metal plates known as electrodes to a direct current supply and submerging the ends of the plates into water. Applying electricity to the circuit generates hydrogen at the negative electrode (the cathode) and oxygen at the positive one (the anode).

However, electrolysis of water is currently inefficient, expensive and uses a lot of electricity, which often comes from non-renewable sources. The main inefficiency is from producing oxygen at the anode, Heidarpour explained.

To overcome this issue, the team behind the new study adapted the standard electrolysis setup to replace the oxygen-forming reaction with one that produces hydrogen by oxidizing an organic molecule.

First, the researchers set up two chambers containing potassium hydroxide (KOH) solutions, separated by a thin membrane, and then placed an electrode in each chamber to form a circuit. The team added a chemical called hydroxymethylfurfural (HMF) to the anode chamber, as well as a modified copper catalyst. Heidarpour said that chromium atoms within the surface of the specially designed catalyst help favor hydrogen production by stabilizing the copper atoms in their reactive state.

When the team applied electricity, the aldehyde groups in the HMF molecules were oxidized at the anode, giving up electrons to the electrode. This generated hydrogen and a byproduct called HMFCA, which may find use as a chemical feedstock to make bioplastics, Heidarpour said. (Aldehydes have a carbon atom doubly bonded to an oxygen atom and singly bonded to a hydrogen atom.)

This adapted method effectively doubles the amount of hydrogen made in one go, when also accounting for the hydrogen created by splitting water molecules at the cathode as usual.

The reactions also ran at around 0.4 volts, which is around 1 volt lower than in conventional water electrolysis. The researchers said this helps reduce overall energy usage by up to 40%.
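To see why the voltage drop matters, it helps to translate cell voltage into electrical energy per kilogram of hydrogen. This is a back-of-the-envelope sketch, assuming the standard two electrons transferred per H2 molecule; the study's up-to-40% figure describes its full system, not this simple scaling:

```python
FARADAY = 96485.0   # coulombs per mole of electrons
M_H2 = 2.016e-3     # molar mass of H2 in kg/mol

def mj_per_kg_h2(cell_volts: float) -> float:
    """Electrical energy per kg of H2 at a given cell voltage.

    Assumes 2 electrons transferred per H2 molecule, so the charge
    per mole of H2 is 2 * FARADAY, and energy E = Q * V.
    """
    joules_per_mol = cell_volts * 2 * FARADAY
    return joules_per_mol / M_H2 / 1e6

print(f"~1.4 V (conventional): {mj_per_kg_h2(1.4):.0f} MJ per kg of H2")  # ~134
print(f"~0.4 V (HMF-assisted): {mj_per_kg_h2(0.4):.0f} MJ per kg of H2")  # ~38
```

The energy drawn per unit of charge scales linearly with cell voltage, and in this scheme hydrogen is also produced at both electrodes, which is why the reported overall savings remain substantial even after real-world losses.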

Heidarpour said the team is not the first to report this type of strategy but explained that they increased the overall hydrogen production rate by using a more efficient catalyst.

HMF is often made by breaking down non-food plant materials such as paper residues, making it an attractive reagent to use in these systems. However, HMF is currently an expensive material.

Other aldehyde-containing molecules such as formaldehyde could be used instead. "Where there is a surplus of low-value organic substrates, oxidizing these into more valuable chemicals with simultaneous hydrogen generation could be an attractive and environmentally-friendly way to make two feedstocks at once," Mark Symes, a professor of electrochemistry and electrochemical technology at the University of Glasgow, who was not involved in the study, told Live Science in an email.

The researchers noted that there are still ways to improve the process to make it more efficient.

For example, further work needs to be done to improve the catalyst's stability so that it "can work for thousands of hours in an industrial setting," Heidarpour said.

]]>
https://www.livescience.com/chemistry/new-electrochemical-method-splits-water-with-electricity-to-produce-hydrogen-fuel-and-cuts-energy-costs-in-the-process RRp6vbD2C5AkY6bbW28gkK Fri, 26 Dec 2025 18:15:00 +0000 Tue, 23 Dec 2025 18:15:33 +0000
<![CDATA[ Uranus and Neptune may be 'rock giants,' not 'ice giants,' new model of their cores suggests ]]> The interiors of Uranus and Neptune may be rockier than scientists previously thought, a new computational model suggests — challenging the idea that the planets should be called "ice giants."

The new study, published Dec. 10 in the journal Astronomy & Astrophysics, may also help to explain the planets' puzzling magnetic fields.

Uranus and Neptune are relatively large planets at the edge of the solar system; Neptune is the most distant planet, orbiting an average of 2.8 billion miles (4.5 billion kilometers) from the sun. The extremely cold temperatures at these distances are thought to cause compounds such as water to condense into compressed ice slurries that form the planets' interiors beneath their hydrogen and helium envelopes. As such, these planets have become known as ice giants.

"The ice giant classification is oversimplified as Uranus and Neptune are still poorly understood," lead study author Luca Morf, a doctoral student at the University of Zurich, said in a statement.

Far out planets

Morf and his supervisor, Ravit Helled, developed a new hybrid model in an attempt to better understand the interior of these cold planets. Models based on physics alone rely heavily on assumptions made by the modeler, while observational models can be too simplistic, Morf explained. "We combined both approaches to get interior models that are both unbiased and physically consistent," he said.

The pair started by considering how the density of each planet's core could vary with distance from the center of the planets and then adjusted the model to account for the planets' gravities. From this, they inferred the temperature and composition of the core and generated a new density profile. The team inputted the new density parameters back into the model and iterated this process until the model core fully matched current observational data.
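The loop the researchers describe is a classic iterate-until-consistent scheme. Here is a toy sketch of the idea in Python; the linear density profile, the use of total mass as the only "observation" and everything except Uranus' measured mass and radius are our own simplifications, not the authors' code:

```python
import math

M_OBS = 8.681e25  # observed mass of Uranus, kg
R = 2.5362e7      # mean radius of Uranus, m

def model_mass(rho_center: float, n_shells: int = 1000) -> float:
    """Mass implied by a toy linear density profile rho(r) = rho_center * (1 - r/R)."""
    dr = R / n_shells
    total = 0.0
    for i in range(n_shells):
        r = (i + 0.5) * dr  # midpoint of this spherical shell
        total += 4 * math.pi * r**2 * rho_center * (1 - r / R) * dr
    return total

rho = 5000.0  # initial guess for the central density, kg/m^3
for _ in range(50):
    mass = model_mass(rho)
    if abs(mass - M_OBS) / M_OBS < 1e-6:
        break
    rho *= M_OBS / mass  # feed the mismatch back into the profile
print(f"converged central density: ~{rho:.0f} kg/m^3")  # roughly 5,000 for this toy
```

In the real models, each pass also updates the temperature and composition and checks the profile against the planet's measured gravity field, so many quantities must settle into mutual consistency rather than just one.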

Voyager 2 took this snapshot of Neptune in 1989. Data on Uranus and Neptune taken during Voyager's flybys are still some of the best we have. (Image credit: NASA / Voyager 2)

This method generated eight possible core structures in total across Uranus and Neptune, three of which had high rock-to-water ratios. This shows that the interiors of Uranus and Neptune are not limited to ice, as previously thought, the researchers said.

All of the modeled cores had convective regions where pure water exists in its ionic phase. This is where extreme temperatures and pressures cause water molecules to break apart into charged protons (H+) and hydroxide ions (OH-). The team thinks such layers may be the source of the planets' complex magnetic fields, which give Uranus and Neptune more than two magnetic poles. The model also suggests that Uranus' magnetic field is generated closer to the planet's center than Neptune's is.

"One of the main issues is that physicists still barely understand how materials behave under the exotic conditions of [high] pressure and temperature found at the heart of a planet [and] this could impact our results," Morf said. The team aims to improve their model by including other molecules, like methane and ammonia, which also may be found in the cores.

"Both Uranus and Neptune could be rock giants or ice giants depending on the model assumptions," Helled said. She noted that much of our understanding of these planets may be incomplete, as it's based largely on data collected by the Voyager 2 space probe in the 1980s.

"Current data is insufficient to distinguish the two, and we therefore need dedicated missions to Uranus and Neptune that can reveal their true nature," Helled added

The team hopes the model can act as an unbiased tool for interpreting new data from future space missions to these planets.

]]>
https://www.livescience.com/space/planets/uranus-and-neptune-may-be-rock-giants-not-ice-giants-new-model-of-their-cores-suggests EoxPrmgRMyWzTqVKkB6pBZ Fri, 26 Dec 2025 17:35:00 +0000 Tue, 23 Dec 2025 19:39:43 +0000
<![CDATA[ Diagnostic dilemma quiz: Can you guess the diagnosis in these strange medical cases? ]]> Each week, Live Science highlights an interesting medical case report in its Diagnostic Dilemma series. We describe the patient and their symptoms, the testing and history-taking that revealed their diagnosis, their course of treatment and their ultimate health outcomes. We also highlight what makes the case unique, whether it's the rarity of the diagnosis, the unusual constellation of symptoms, or a novel therapeutic approach.

How many Diagnostic Dilemmas have you read — and can you guess the diagnosis? Take our quiz that draws from the cases we highlighted in 2025 and see if you can figure out each patient’s ailment. Tell us how you got on in the comments below.


]]>
https://www.livescience.com/health/diagnostic-dilemma-quiz-can-you-guess-the-diagnosis-in-these-strange-medical-cases 8iyqqY64gxRuZGEc6geEeJ Fri, 26 Dec 2025 17:00:00 +0000 Sat, 27 Dec 2025 09:43:01 +0000
<![CDATA[ 1.5 million-year-old Homo erectus face was just reconstructed — and its mix of old and new traits is complicating the picture of human evolution ]]> Scientists have reconstructed the head of an ancient human relative from 1.5 million-year-old fossilized bones and teeth. But the face staring back is complicating scientists' understanding of early human evolution and dispersal, according to a new study.

The rebuilt fossil skull, called DAN5, shares traits with Homo erectus, the first early human relatives to have modern body proportions and to disperse from Africa. But the skull also has some features associated with the earlier species Homo habilis. The findings suggest a complex evolutionary path from early human ancestors to H. erectus, researchers reported Dec. 16 in the journal Nature Communications.

DAN5 was discovered in the Gona study region of northern Ethiopia and was first reported in a 2020 study published in the journal Science Advances. The fossils are between 1.5 million and 1.6 million years old and were thought to belong to a small H. erectus female based on the shape and size of the skull.

"We already knew that the DAN5 fossil had a small brain, but this new reconstruction shows that the face is also more primitive than classic African Homo erectus of the same antiquity," study co-author Karen Baab, a paleontologist at Midwestern University in Arizona, said in a statement. This could mean that the population from the Gona region might have "retained the anatomy of the population that originally migrated out of Africa approximately 300,000 years earlier," she said.

To reconstruct DAN5's face, the researchers used micro-computerized tomographic (CT) scans of 10 fossils — five fragments of facial bones and five teeth — to build a 3D model. The process was like "a very complicated 3D puzzle, and one where you do not know the exact outcome in advance," Baab said. "Fortunately, we do know how faces fit together in general, so we were not starting from scratch."

The shape of DAN5's braincase was similar to that of H. erectus. But some of the facial features such as large molars and a flat and narrow nose were more similar to features in the older human ancestor H. habilis.

A similar mix of old and new traits was previously observed in 1.8 million-year-old H. erectus fossils from Dmanisi in the Republic of Georgia, which led some scientists to believe that the species evolved in Eurasia from an earlier Homo population. Older H. erectus fossils dating back 1.8 million years have also been found in Africa. But DAN5 is the first African fossil to have the same mixture of attributes as the Dmanisi hominins, which could support the hypothesis that H. erectus evolved primarily in Africa like other hominins before it. Further complicating the picture, though, is the fact that the DAN5 fossils are younger than those from Dmanisi, suggesting the mixture of old and new traits persisted in Africa for at least 300,000 years.

In future work, the team plans to compare the DAN5 fossils to 1 million-year-old human fossils from Europe, including some that have been identified as H. erectus and as Homo antecessor — a later human relative that lived 1.2 million to 0.8 million years ago — to better understand variability in face shape in the early Homo genus. The team also plans to investigate whether DAN5 might be a product of interbreeding between multiple Homo species.

"We're going to need several more fossils dated between one to two million years ago to sort this out," study co-author Michael Rogers, an anthropologist at Southern Connecticut State University, said in the statement.

]]>
https://www.livescience.com/archaeology/human-evolution/1-5-million-year-old-homo-erectus-face-was-just-reconstructed-and-its-mix-of-old-and-new-traits-is-complicating-the-picture-of-human-evolution zGQwnnj2WaaeFRTLJnpXf Fri, 26 Dec 2025 15:15:00 +0000 Tue, 23 Dec 2025 21:12:56 +0000
<![CDATA[ 10 things we learned about our human ancestors in 2025 ]]> Our understanding of how our species evolved has improved dramatically since we first began analyzing ancient DNA. This year, researchers made impressive discoveries across 3 million years of human evolution, most of which relied on DNA or protein analyses.

Here are 10 major findings about human ancestors and our close ancient relatives that scientists announced in 2025.

1. Two new species of human relatives were discovered in Ethiopia.

Fossilized hominin teeth on a black background.

Researchers found teeth belonging to ancient hominins at the Ledi-Geraru archaeological site in Ethiopia. (Image credit: Villmoare)

A handful of teeth found at the Ledi-Geraru site in Ethiopia suggest that diverse species of human relatives unlike any seen before were roaming the area 2.6 million years ago.

In August, researchers announced the discovery of 13 teeth. Ten are estimated to be 2.63 million years old and don't belong to either Australopithecus afarensis or Australopithecus garhi, the two australopithecine species known from the area. Because the teeth lack especially distinctive features and weren't found with a skull, the newfound species they may represent does not have an official name. Researchers are calling it the Ledi-Geraru Australopithecus.

In the same study, the researchers found two teeth that are 2.59 million years old and one that is 2.78 million years old. All of them seem to belong to the genus Homo, which would make them some of the earliest remains of our own genus.

The dental discoveries mean that at least three archaic human relatives were living in this region of Ethiopia around 2.5 million years ago.

2. Imported stone tools show our relatives were much smarter than we thought.

A light-colored stone tool rests next to the shoulder blade of a hippo relative in the ground

An Oldowan flake tool was found near a butchered bone from a hippo relative. (Image credit: T.W. Plummer, Homa Peninsula Paleoanthropology Project)

Hundreds of stone tools discovered in Kenya revealed that our ancient relatives were capable of a high degree of forward planning 600,000 years earlier than experts previously thought.

In an August study, researchers looked at more than 400 stone tools from the site of Nyayanga dated to 3 million to 2.6 million years ago. The tools were likely not made by our genus. While the tools were fairly basic — flakes chipped off of a larger stone — the stones used to make them came from locations more than 6 miles (9.7 kilometers) away.

The fact that hominins were transporting stones from far away to make tools suggests an excellent ability to plan ahead, long before our genus Homo arose.

3. The earliest evidence of Homo erectus outside Africa was found in Georgia.

two hominin teeth peek out of a mass of bone embedded in orange-brown dirt

Researchers discovered a fragment of a jawbone and teeth at the archaeological site of Orozmani in the Republic of Georgia. (Image credit: Giorgi Bidzinashvili)

In July, researchers announced the discovery of a 1.8 million-year-old jawbone from Homo erectus at the site of Orozmani in the Republic of Georgia. In 2022, paleoanthropologists had found a single tooth there that they thought was from H. erectus, and the jawbone discovered this year clinched the identification.

H. erectus was our direct ancestor and evolved around 2 million years ago in Africa. It was also the first human ancestor to leave Africa, and eventually ended up in parts of Europe, Asia and Oceania.

To date, the earliest evidence of H. erectus outside Africa comes from Orozmani and a second site in Georgia called Dmanisi, suggesting human ancestors settled in the Caucasus region shortly after leaving Africa.

4. A mystery human reached Indonesia 1.5 million years ago.

A person with light skin shows off a chert stone tool with their left hand

One of the stone tools discovered on the island of Sulawesi in Indonesia dates back at least 1 million years. (Image credit: M.W. Moore)

Stone tools discovered on the Indonesian island of Sulawesi this year suggest that either H. erectus or an unknown human relative reached Oceania nearly 1.5 million years ago. This matches up well with previous evidence that H. erectus arrived on the island of Java around 1.6 million years ago.

But because no ancient skeletal remains have been found on Sulawesi yet, researchers are unsure if the toolmaker was indeed H. erectus. Another candidate could be H. floresiensis, the diminutive "hobbit" species, which has been found on the neighboring island of Flores. Some researchers think the hobbits originally came from Sulawesi.

Additional excavation on Sulawesi may eventually clarify which species called the island home.

5. Humans arrived in Australia 60,000 years ago.

a map of Sundaland showing possible migration routes of early humans into Sahul

A map of Sunda, Sahul and the Western Pacific, with arrows showing potential north and south migration routes suggested by genetic analysis. (Image credit: Helen Farr and Erich Fisher)

Genetic research published in November showed that Homo sapiens reached Australia 60,000 years ago, likely via two different routes through the Western Pacific. This finding appears to settle a long-standing debate about humans' arrival on the continent — a feat that required expert knowledge of watercraft and sailing.

The new DNA evidence supports archaeological evidence, including stone tools and pigments on cave walls, of a "long chronology" in which the first arrivals showed up around 60,000 to 65,000 years ago.

But not everyone is convinced. In a July study, researchers used the fact that some Indigenous Australians have Neanderthal DNA to suggest that Australia wasn't populated until about 50,000 years ago — an idea known as the "short chronology."

More research into the origins of the earliest Australians is forthcoming.

6. Drought may have doomed the "hobbits."


A skull of Homo floresiensis, also known as the "hobbit." (Image credit: Lanmas via Alamy)

By 50,000 years ago, H. floresiensis seems to have disappeared from Flores. In December, researchers published a study suggesting that drought may have fueled their demise.

While studying rainfall on Flores, scientists discovered that it declined considerably between about 76,000 and 61,000 years ago. They also found that the population of an elephant relative called Stegodon, which the hobbits hunted, disappeared around 50,000 years ago.

The researchers think decreased rainfall led to the reduction in the Stegodon population, which made life more difficult for the hobbits. And if modern humans also reached Flores — perhaps part of the wave of people who eventually settled Australia — the pressure of competition from another species may have wiped out H. floresiensis.

7. Denisovans got a face.

a top view of a jawbone

A photograph of the right side of the Penghu 1 lower jawbone that was found off the coast of Taiwan. (Image credit: Yousuke Kaifu)

Our extinct relatives the Denisovans were first discovered in 2010 based on DNA extracted from a tiny finger bone. But until this year, no one knew what a Denisovan skull looked like.

Researchers debated for years which species the thick jawbone, recovered off the coast of Taiwan in 2000 and known as Penghu 1, came from, with some suggesting H. erectus and others H. sapiens. But using paleoproteomic analysis, researchers announced in May that the jawbone was from a male Denisovan.

Ancient proteins also revealed in June that a skull discovered in China in 1933, called the "Dragon Man," is from a Denisovan, finally putting a face to the name. But while Dragon Man has now been slotted into the story of human evolution, it is not yet clear whether the group should be considered a separate species, Homo longi.

And in September, researchers reconstructed a 1 million-year-old squashed skull from China and suggested that it may have been a Denisovan ancestor rather than H. erectus.

These three discoveries are pointing paleoanthropologists to clues about the origins and spread of the mysterious Denisovans — a task that will surely continue in the coming years.

8. Denisovan DNA helped Native Americans survive.

black-and-white image of a person handling a human jaw carefully while gloved

A researcher inspects a human jawbone from a pre-Hispanic individual from what is now Mexico. (Image credit: Maria Avila Arcos)

Researchers announced in August that some people with Indigenous American ancestry carry Denisovan genes, likely passed on through Neanderthals who mated with modern humans.

In looking at a protein-coding gene called MUC19, scientists discovered that 1 in 3 Mexicans alive today has a version of the gene similar to Denisovans' and that it likely "hitched a ride" from Neanderthals. Essentially, Neanderthals got the gene from mating with Denisovans and then passed it along when they mated with humans. This is the first time scientists have found a Denisovan gene in humans that came via Neanderthals.

Exactly what the Denisovan variant of the MUC19 gene does is currently unclear, but the researchers think it must have been beneficial to the earliest Americans for it to be preserved in the human genome.

9. Interbreeding was rampant among our archaic relatives.

a series of teeth and jaws from ancient humans

Fossil teeth from Hualongdong show a mix of ancient and modern traits. (Image credit: X. Wu et al. / Journal of Human Evolution)

The story of human evolution has gotten wonderfully messy since the genomic revolution. DNA and protein analyses have revealed new groups like the Denisovans, as well as the mating of Neanderthals, modern humans and Denisovans. But this year brought a few surprise pairings as well.

In August, researchers announced that a handful of 300,000-year-old teeth suggested humans and H. erectus may have interbred in China. The teeth had an unusual combination of ancient features, like thick molar roots, and modern features, like small wisdom teeth, that could mean two different species were sharing their genes.

Researchers announced in March that Neanderthals, modern humans and a mysterious third lineage lived alongside one another in caves in what is now Israel around 130,000 years ago. The Homo groups may have mixed and mingled for 50,000 years, potentially sharing cultural practices in addition to genetic material.

And in November, a DNA study of humans' arrival in Australia suggested that, along the way, these early human pioneers likely interbred with one or more archaic human groups, such as H. longi, Homo luzonensis or H. floresiensis.

Although we can see genetic differences among these groups using 21st-century technology, perhaps our earliest ancestors simply saw Neanderthals, Denisovans and others as fellow humans.

10. Most Europeans had a dark complexion until 3,000 years ago.

a reconstruction of a man with dark skin and hair

The bones of Cheddar Man (whose reconstruction is pictured here) reveal he lived in the U.K. around 10,000 years ago. This reconstruction shows his probable dark skin. (Image credit: JUSTIN TALLIS via Getty Images)

In a study published in July, scientists found that the genes for lighter skin, lighter hair and lighter eyes emerged among Europeans only about 14,000 years ago and that, until 3,000 years ago, most Europeans had dark skin, hair and eyes.

The researchers determined this from 348 samples of ancient DNA from archaeological sites spread throughout Western Europe and Asia. The first humans to reach Europe around 50,000 years ago carried genes for dark complexions. Once lighter traits emerged, they appeared only sporadically in the genetic data until fairly recently. By about 1000 B.C., those lighter traits became widespread in Europe.

Whether lighter skin, hair and eyes had any sort of evolutionary advantage for early Europeans is still unclear, though.

]]>
https://www.livescience.com/archaeology/10-things-we-learned-about-our-human-ancestors-in-2025 DfqnHxnPNSidyHxsHhL44D Fri, 26 Dec 2025 14:30:00 +0000 Tue, 23 Dec 2025 19:36:30 +0000
<![CDATA[ The world's 'hidden' volcanoes pose the greatest risk for global crisis ]]> The next global volcanic disaster is more likely to come from volcanoes that appear dormant and are barely monitored than from famous volcanoes such as Etna in Sicily or Yellowstone in the US.

Often overlooked, these "hidden" volcanoes erupt more often than most people realise. In regions like the Pacific, South America and Indonesia, an eruption from a volcano with no recorded history occurs every seven to ten years. And their effects can be unexpected and far-reaching.

One volcano has just done exactly that. In November 2025, the Hayli Gubbi volcano in Ethiopia erupted for the first time in recorded history (at least the past 12,000 years). It sent ash plumes 8.5 miles (13.7 kilometers) into the sky, with volcanic material falling in Yemen and drifting into airspace over northern India.

You don't have to look far back in history to find another example. In 1982, the little-known and unmonitored Mexican volcano El Chichón erupted explosively after lying dormant for centuries. This series of eruptions caught authorities off-guard: hot avalanches of rock, ash and gas flattened vast areas of jungle. Rivers were dammed, buildings destroyed, and ash fell as far as Guatemala.

More than 2,000 people died and 20,000 were displaced in Mexico's worst volcanic disaster in modern times. But the catastrophe did not end in Mexico. The sulphur from the eruption formed reflective particles in the upper atmosphere, cooling the northern hemisphere and shifting the African monsoon southwards, causing extreme drought.

This alone would test the resilience and coping strategies of any region. But when it coincided with a vulnerable population that was already experiencing poverty and civil war, disaster was inevitable. The Ethiopian (and East African) famine of 1983-85 claimed the lives of an estimated 1 million people and brought global attention to poverty through campaigns like Live Aid.

Few scientists, even within my field of Earth science, realise that a remote, little-known volcano played a part in this tragedy.

Despite these lessons, global investment in volcanology has not kept pace with the risks: fewer than half of active volcanoes are monitored, and scientific research still disproportionately focuses on the well-known few.

There are more published studies on one volcano (Mount Etna) than on all 160 volcanoes of Indonesia, the Philippines and Vanuatu combined. These are some of the most densely populated volcanic regions on Earth – and the least understood.

The largest eruptions don't just affect the communities around them. They can temporarily cool the planet, disrupt monsoons and reduce harvests across entire regions. In the past, such shifts have contributed to famines, disease outbreaks and major social upheaval, yet scientists still lack a global system to anticipate or manage these future risks.

Etna volcano in eruption - Sicily

Mount Etna on the Italian island of Sicily. (Image credit: Wead via Shutterstock)

To help address this, my colleagues and I recently launched the Global Volcano Risk Alliance, a charity that focuses on anticipatory preparedness for high-impact eruptions. We work with scientists, policymakers and humanitarian organisations to highlight overlooked risks, strengthen monitoring capacity where it is most needed, and support communities before eruptions occur.

Acting early, rather than responding only after disaster strikes, stands the best chance of preventing the next hidden volcano from becoming a global crisis.

Why 'quiet' volcanoes aren't safe

So why do volcanoes fail to receive attention proportionate to their risk? In part, it comes down to predictable human biases. Many people tend to assume that what has been quiet will remain quiet (normalcy bias). If a volcano has not erupted for generations, it is often instinctively considered safe.

The likelihood of an event tends to be judged by how easily examples come to mind (a mental shortcut known as the availability heuristic). Well-known volcanoes or eruptions, such as the Icelandic ash cloud from 2010, are familiar and can feel threatening, while remote volcanoes with no recent eruptions rarely register at all.

These biases create a dangerous pattern: we invest most heavily only after a disaster has already happened (response bias). El Chichón, for instance, was only monitored after the 1982 catastrophe. However, three-quarters of large eruptions (the size of El Chichón or bigger) come from volcanoes that have been quiet for at least 100 years and, as a result, receive the least attention.

Volcano preparedness needs to be proactive rather than reactive. When volcanoes are monitored, when communities know how to respond, and when communication and coordination between scientists and authorities is effective, thousands of lives can be saved.

Disasters have been averted in these ways in 1991 (at Mount Pinatubo in the Philippines), in 2019 (at Mount Merapi in Indonesia) and in 2021 (at La Soufrière on the Caribbean island of Saint Vincent).

To close these gaps, the world needs to shift attention towards undermonitored volcanoes in regions such as Latin America, south-east Asia, Africa and the Pacific – places where millions of people live close to volcanoes that have little or no historical record. This is where the greatest risks lie, and where even modest investments in monitoring, early warning and community preparedness could save the most lives.

This edited article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
https://www.livescience.com/planet-earth/volcanos/the-worlds-hidden-volcanoes-pose-the-greatest-risk-for-global-crisis Ug3WfM64nbEFr37T24vqKA Fri, 26 Dec 2025 14:00:00 +0000 Fri, 19 Dec 2025 21:56:15 +0000
<![CDATA[ The easiest constellations for beginners to spot in winter (and what you need to see them) ]]> On a clear winter night, the sky can look like a blanket of stars, but it isn’t a blanket — it’s a map. Constellations are the signposts to the stars, simple stick-figures that turn a random scatter of points of light into something you can recognize, remember and navigate by. Learn just a handful, and the whole winter sky begins to fall into place.

December is the ideal time to start stargazing in the Northern Hemisphere. Yes, it’s cold, but the long nights allow you to start early and give you hours of darkness, and the northern winter sky is packed with bright, easy patterns. Orion dominates in the southeast, with Taurus above and Gemini following behind; together they form the vast Winter Circle of bright stars. High above, Cassiopeia’s crooked W and the Great Square of Pegasus mark the route to the Andromeda galaxy and the rich Milky Way fields of Perseus and Auriga.

You don’t need any equipment to get started — just patience, warm clothes and a willingness to look up for more than a few seconds. However, a pair of the best binoculars for stargazing, one of the best telescopes, or a smart telescope adds depth. They turn faint smudges into clusters, clouds and galaxies, and give you a reason to keep coming back.

With a few winter constellations under your belt, the universe stops being abstract and becomes somewhere you can actually learn your way around. Here are the easiest constellations for beginners to spot in the Northern Hemisphere’s winter night sky.

1. Orion, the Hunter


(Image credit: constellation from Starry Night software)

Hidden target: M42 (Orion Nebula)

On December and January evenings, Orion rises early and dominates the southern sky by mid-evening, making him the easiest winter landmark. Look southeast for three bright stars in a short, straight line — Orion’s Belt, made from the three equidistant stars Alnitak, Alnilam and Mintaka.

Above is reddish Betelgeuse, and below is blue-white Rigel. On the Rigel side of the belt stars, there's a fuzzy patch that appears brighter when viewed slightly to its side. This is Orion’s Sword; binoculars or a small telescope aimed at its middle will reveal the Orion Nebula (M42) as a glowing cloud lit by newborn stars.

2. Taurus, the Bull


(Image credit: constellation from Starry Night software)

Hidden target: M45 (Pleiades)

After dark, look east, above the constellation Orion, for orange Aldebaran, the eye of Taurus. It’s set in a V-shaped cluster — the Hyades open cluster — marking the Bull’s face. Below are its horns, stretching to the stars Elnath and Tianguan.

Above Taurus is a tiny misty patch that looks like a miniature dipper — the Pleiades, also known as the “Seven Sisters” and M45. One of the easiest star clusters to see with the naked eye, the Pleiades through binoculars become what many skywatchers regard as the night sky’s most beautiful object.

3. Gemini, the Twins


(Image credit: constellation from Starry Night software)

Hidden target: M35 (open cluster)

Close to Taurus and Orion, find two bright stars standing side by side — Castor and Pollux, the heads of the Twins. In December 2025 and January 2026, they are easy to find because a very bright Jupiter shines close by. From them, fainter stars form stick-figure bodies.

Aim binoculars or a small telescope near the foot of the northern twin to uncover M35, a young open cluster of gravitationally bound stars also known as the Shoe Buckle Cluster, according to NASA.

4. Auriga, the Charioteer


(Image credit: constellation from Starry Night software)

Hidden targets: M36, M37, M38 (open clusters)

High in the northeast to overhead, bright Capella blazes like a lantern in the winter sky as soon as it gets dark. The “Goat Star” marks one corner of Auriga, a roughly pentagonal constellation whose constituent stars are easy to see even from a city.

Sweep the southern area below Capella with binoculars or a small telescope, and you’ll come across M36, M37 and M38: three bright, open clusters that turn an apparently empty sky into anything but.

5. Winter Triangle asterism


(Image credit: constellation from Starry Night software)

Hidden target: The colors of Sirius

Constellations are a great way to learn the night sky, but so are asterisms — easily recognizable patterns of stars. Look to the southeast after dark during winter for three bright stars — reddish Betelgeuse in Orion, Procyon in Canis Minor and dazzlingly bright Sirius in Canis Major. Together, they form the large Winter Triangle.

Point binoculars or a small telescope at Sirius, and you’ll notice it flashes in a rainbow of colors. Why? It’s so bright and so close — just 8.6 light-years distant — that its intense light gets twisted by turbulence in Earth’s atmosphere, the same effect that makes all stars twinkle. Sirius is simply the ultimate example.

6. Winter Hexagon


(Image credit: constellation from Starry Night software)

Hidden target: Jupiter

Step back and connect the dazzling stars of the southern sky — Rigel in Orion, Aldebaran in Taurus, Capella in Auriga, Pollux in Gemini, Procyon in Canis Minor and Sirius in Canis Major. Together they form the huge Winter Hexagon (or Winter Circle). It’s a vast shape that takes a while to find, so take your time and repeat your star-hops again and again until you’ve memorized it. It will stay with you forever and make you look forward to winter.

As a bonus this winter, put a pair of binoculars on bright Jupiter, shining brightly near Pollux in Gemini, to see four points of light — its giant moons Ganymede, Europa, Callisto and Io.

7. Cassiopeia, the Queen


(Image credit: constellation from Starry Night software)

Hidden target: M31 (Andromeda Galaxy)

Look high in the north for a crooked “W” or “M” of five stars — the constellation Cassiopeia. It circles the North Star all night — more or less opposite the Big Dipper — and stays prominent through winter, making it a handy signpost from any site.

From the central V of the W, sweep outward toward the south with binoculars or a small telescope to find M31, the Andromeda Galaxy. This spiral galaxy, 2.5 million light-years distant, appears as a soft, elongated glow, though the darker the site you stargaze from, the brighter it will look.

8. Ursa Major, the Great Bear


(Image credit: constellation from Starry Night software)

Hidden target: Mizar and Alcor (double star)

On late December evenings, the Big Dipper portion of Ursa Major sits low in the north-northeast, climbing higher after midnight. Look for a bright saucepan shape — three stars in the handle and four in the bowl. Mizar, the middle star in the handle, looks slightly fuzzy to the naked eye.

If you have great eyesight, you may even notice that there are actually two stars there. To check that your eyes don’t deceive you, aim any pair of binoculars or a small telescope at Mizar and you’ll split it and its companion Alcor cleanly into two distinct points of light. The pair is known to stargazers as the “Horse and Rider,” and splitting Mizar and Alcor with the naked eye was a test of eyesight used by the ancient Arabs, according to Space.com.

9. Great Square of Pegasus


(Image credit: constellation from Starry Night software)

Hidden target: Saturn

On early winter evenings, look west for a large, almost empty square of four medium-bright stars — Markab, Scheat, Algenib and Alpheratz — which form the vast Great Square of Pegasus. It’s sinking by late December, but still visible in the first half of the night.

In December 2025 and January 2026, it’s above something else that’s worth your attention — Saturn. Its pale golden light isn't much to look at with the naked eye, but its fabulous rings can be seen with a small 3-inch telescope at 50x magnification.
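
As a rough guide to where a figure like 50x comes from (the focal lengths below are hypothetical examples, not from the article): a telescope’s magnification is its focal length divided by the eyepiece’s focal length.

\mathrm{magnification} = \frac{F_{\mathrm{telescope}}}{f_{\mathrm{eyepiece}}}

A small 3-inch refractor with, say, a 600 mm focal length paired with a 12 mm eyepiece gives 600 / 12 = 50x, enough to make Saturn’s rings visible.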

10. Perseus, the Hero


(Image credit: constellation from Starry Night software)

Hidden target: Double Cluster (NGC 869 and NGC 884)

Look between Cassiopeia in the north and Capella in the northeast for a ragged, curved chain of stars — the constellation Perseus. It runs through the pale band of the winter Milky Way at this time of year and contains many riches.

One of these is the Double Cluster, NGC 869 and NGC 884, a faint, fuzzy patch halfway between Perseus and Cassiopeia that’s just about visible to the naked eye in a very dark sky. These two overlapping swarms of stars look terrific in binoculars or a small telescope.

]]>
https://www.livescience.com/space/easiest-constellations-for-beginners-to-spot-in-winter KnzrHchhWC4UEHRHmFWiqj Fri, 26 Dec 2025 14:00:00 +0000 Sat, 27 Dec 2025 09:43:01 +0000
<![CDATA[ Coconucos volcanic chain: Colombia's stunning cluster of volcanoes, lost in an otherworldly landscape ]]>
QUICK FACTS

Name: Cadena Volcánica de los Coconucos (Coconucos volcanic chain)

Location: Puracé National Park, Colombia

Coordinates: 2.2964, -76.4110

Why it's incredible: The volcanic chain comprises at least 14 volcanoes, including one that is active.

Colombia's Coconucos volcanic chain is a high mountain ridge pockmarked with at least 14 volcano craters. These craters form a line that runs northwest-to-southeast, offering striking aerial views and images.

Twelve volcanoes in the Coconucos volcanic chain have summits higher than 13,000 feet (4,000 meters) above sea level. The tallest volcano, at about 15,400 feet (4,700 m) above sea level, is the Pan de Azúcar, which until a few decades ago was permanently covered in snow.

The second-tallest volcano in the chain, known as Puracé, is Coconucos's only active volcano and one of the most active volcanoes in Colombia. Puracé, meaning "fire mountain" in the Quechua family of languages, stands 15,260 feet (4,650 m) high and sits at the northwesternmost end of the chain. Its most recent eruption began in early December 2025, when the volcano spewed gas and ash up to 3,000 feet (900 m) into the sky and showered nearby areas with fine debris. Colombian authorities issued an alert on Nov. 29, and the outburst was still continuing as of Dec. 15.

Puracé showed signs of activity in 2022 and 2023, but the volcano's last recorded eruption before the most recent one was in 1977. The last measurements before the 2025 eruption showed that Puracé's crater has a diameter of 1,640 feet (500 m).

The Coconucos volcanic chain is located in Puracé National Park in the Andes mountains. The region is a misty grassland ecosystem known as páramo, with temperatures ranging between 37 and 65 degrees Fahrenheit (3 to 18 degrees Celsius). Snow was widespread on mountain tops in Puracé National Park until a century ago, but this is rare nowadays, according to Colombia's national parks website.

Several of Colombia's most important rivers originate in Puracé National Park, including the Cauca, Magdalena, Patía and Caquetá. The national park is peppered with sulfur springs and clear lagoons, attracting tourists and hikers.

The Coconuco people have traditionally inhabited, and continue to live in, the region.

Discover more incredible places, where we highlight the fantastic history and science behind some of the most dramatic landscapes on Earth.

]]>
https://www.livescience.com/planet-earth/volcanos/coconucos-volcanic-chain-colombias-stunning-cluster-of-volcanoes-lost-in-an-otherworldly-landscape ERZk3LftsDJRFrJkmtGvb3 Fri, 26 Dec 2025 13:00:00 +0000 Wed, 17 Dec 2025 18:21:23 +0000
<![CDATA[ Flat-headed cat not seen in Thailand for almost 30 years is rediscovered ]]> Researchers have photographed a rare cat in Thailand that hasn't been seen in the country for almost 30 years — and it's adorable.

Flat-headed cats (Prionailurus planiceps), named after their flattened foreheads, live in fragmented pockets across Brunei, Indonesia and Malaysia, but they were feared extinct in Thailand.

Researchers rediscovered the cats using remote camera traps in Thailand’s Princess Sirindhorn Wildlife Sanctuary in 2024 and 2025 — the first detections in Thailand since 1995. Cat conservation organization Panthera announced the rediscovery on Friday (Dec. 26), which is also Thailand's annual Wildlife Protection Day.

"For decades, the flat-headed cat has been classified as 'likely extinct,' but after years of sustained protection, strong scientific partnerships, and community stewardship, we can now celebrate its return to Thailand this National Wildlife Day," Suchart Chomklin, Thailand's minister of Natural Resources and Environment, said in a statement.

Flat-headed cats have webbed feet to traverse wetland habitats, such as waterlogged peat-swamp forest, where the species is thought to primarily hunt fish. However, researchers know very little about their lives. The enigmatic cat is the smallest in Southeast Asia, weighing around 4.4 pounds (2 kilograms) — less than a domestic cat — and is scarcely seen by humans.

The International Union for Conservation of Nature's (IUCN) last assessment of the species, carried out in 2014, concluded that flat-headed cats were endangered. They are primarily threatened by the loss and degradation of their wetlands and lowland forests, as well as other human pressures like overfishing and hunting.

Researchers went looking for the cats in remote areas of Thailand in what Panthera described as the "largest-ever survey of the species." The work is part of a new Panthera-led IUCN assessment of flat-headed cats, which Panthera expects to publish in early 2026.

The camera traps photographed several flat-headed cats, including a female with a cub, demonstrating that they are not only living in southern Thailand but also breeding in the region.

"Rediscovery of the flat-headed cat in southern Thailand is a significant win for conservation in Thailand and the broader southeast Asia region where the species is still found," Atthapol Charoenchansa, the director general of Thailand's Department of National Parks, Wildlife and Plant Conservation, said in the statement.

]]>
https://www.livescience.com/animals/flat-headed-cat-not-seen-in-thailand-for-almost-30-years-is-rediscovered sQwmgL2NtfMx97vbHth6kN Fri, 26 Dec 2025 09:00:00 +0000 Sat, 27 Dec 2025 09:43:01 +0000
<![CDATA[ Science history: Marie Curie discovers a strange radioactive substance that would eventually kill her — Dec. 26, 1898 ]]>
QUICK FACTS

Milestone: Discovery of radium and polonium

Date: Dec. 26, 1898

Where: Paris

Who: Marie and Pierre Curie, Gustave Bémont

On this day, chemists announced the discovery of a substance 900 times more radioactive than uranium. Their research led to unprecedented medical breakthroughs and worldwide fame — but it would also kill one of them.

Marie Curie was a graduate student at the Sorbonne, a university in Paris, when she decided to study the new field of radiation for her doctoral thesis. In 1895, Wilhelm Röntgen discovered powerful "Röntgen rays," which would eventually be dubbed X-rays. The following year, Henri Becquerel accidentally discovered that much weaker rays emitted by uranium salts would fog up photographic plates just as light rays did — even in the absence of light.

Curie realized that she wouldn't have to read a long list of prior papers on the newfangled subject before diving into experimental work, according to the American Institute of Physics. Curie's husband, Pierre, found her a workspace in a musty, crowded storeroom at his institution, the Paris Municipal School of Industrial Physics and Chemistry. He soon became so fascinated with her research that he abandoned his own to pursue hers.

Key to Marie Curie's research was the piezoelectric quartz electrometer. The device, invented by Pierre and his brother, Jacques Curie, measured the weak electrical currents produced by radioactivity.

"Instead of making these bodies act upon photographic plates, I preferred to determine the intensity of their radiation by measuring the conductivity of the air exposed to the action of the rays," Curie wrote in a 1904 article for Century magazine.

The damp storeroom messed with her results, but she ultimately discovered that the intensity of this radiation depended on the concentration of uranium in the minerals she studied. She speculated that something intrinsic to the atomic structure of uranium must be at play.

Working with her husband, Pierre, and Gustave Bémont, the head of chemistry at the Higher School of Industrial Physics and Chemistry of the City of Paris, Marie began to study pitchblende, a black mineral rich in uranium that is often found in deposits alongside silver.

A rock containing uraninite, also known as pitchblende, is seen at the Rozna mine, operated by Geam, a division of Diamo S.P. mining company, in Dolni Rozinka, Czech Republic, on Thursday, April 10, 2014.

Pitchblende, or uraninite, is a mineral composed of up to 30 different elements. Some of its constituents, including radium and polonium, are highly radioactive. (Image credit: Martin Divisek/Bloomberg via Getty Images)

Curie noticed that pitchblende could be much more radioactive than pure uranium itself.

"How could an ore, containing many substances which I had proved inactive, be more active than the active substances of which it was formed? The answer came to me immediately: The ore must contain a substance more radioactive than uranium and thorium, and this substance must necessarily be a chemical element as yet unknown," Marie Curie wrote in Century magazine in 1903.

Marie Curie deduced that whatever this mysterious substance was, it had to exist only in small quantities yet have a remarkable level of what she had dubbed "radio-activity." The trio decided to try to separate pitchblende, which can contain up to 30 different elements, into its constituent parts to identify the radioactive substance. They used the light spectra of different substances to try to isolate and identify the ingredients.

In July, they pinpointed one substance that was around 60 times more "radio-active" than uranium, which they named polonium. And on Dec. 21, they found another — called radium — that was an unprecedented 900 times more radioactive than uranium. They described both new substances during a talk at the French Academy of Sciences on Dec. 26.

The Curies would go on to isolate the radioactive elements over the next several years, while working in a poorly ventilated shed in the courtyard across from the original storeroom.

Their research on radiation earned the Curies and Becquerel the Nobel Prize in Physics in 1903. (Marie was originally going to be passed over, but she received the prize only after her husband, Pierre, insisted the committee credit her work.) Marie would earn another Nobel Prize in 1911, this time in chemistry, for her work on radium.

Pierre was killed by a horse-drawn carriage in 1906, but Marie would go on to advocate for the use of X-rays in medicine — including developing vehicles that could provide mobile X-rays for soldiers on the battlefield during World War I. She also noted that radium killed off diseased cells faster than healthy ones, a principle that would later inspire the development of radiotherapy for cancer treatment.

Radium caused frequent radiation sickness and burns in both Curies. Marie's radiation exposure likely killed her; she died in 1934 at age 66 of aplastic anemia, a bone marrow failure disorder that can be caused by radiation damage. The notebook she used to document her 1898 discovery is still radioactive and is stored in a lead box.

]]>
https://www.livescience.com/physics-mathematics/science-history-marie-curie-discovers-a-strange-radioactive-substance-that-would-eventually-kill-her-dec-26-1898 umdH8mmr295yKhTbLhtW3c Fri, 26 Dec 2025 07:00:00 +0000 Tue, 23 Dec 2025 19:10:47 +0000
<![CDATA[ Last of its kind dodo relative spotted in a remote Samoan rainforest ]]> One of the closest living relatives of the dodo has been spotted multiple times in Samoa — raising hopes that this critically endangered creature can be saved from the brink of extinction.

The Samoa Conservation Society's (SCS) latest field survey, which took place from Oct. 17 to Nov. 13, reported five sightings of the manumea (Didunculus strigirostris). Previous surveys yielded only a single sighting, if any. The last photograph of the cryptic species in the wild was taken in 2013.

In the early 1990s, there were around 7,000 of these dodo-like birds, which are found only in Samoa. But habitat destruction, hunting and invasive species decimated the population to an estimated 50 to 150 as of 2024. Before setting out, team members were concerned they wouldn't find the bird alive at all, which would potentially signal its impending extinction.

"That was our worry," said Moeumu Uili, a project coordinator focusing on manumea with SCS. "What happens if we can't find the bird? Does that mean the manumea is no more?"

Despite confirming the manumea's existence, the team found it difficult to photograph due to their distance from the bird, its quick movement and rainy conditions. "All of a sudden, it appears out of nowhere," Uili told Live Science. "When we see it through the binoculars, we can see the bird."

But by the time researchers lower their binoculars to get a camera, the bird is gone, she said.

Last of its kind

The manumea is the only living species in the genus Didunculus, a lineage that will end if the bird goes extinct. The chicken-size manumea's scientific name, Didunculus strigirostris, means "little dodo." Both the dodo and the manumea are classified as island ground pigeons.

The dodo went extinct due to habitat loss, hunting and predators — the same threats to the manumea's survival. Hunting has been outlawed and is subject to fines, so it's imperative to focus on the current main threat — invasive species, particularly feral cats and rats, experts said. Cats hunt living birds and chicks, while rats eat the eggs and chicks.

"The impact on manumea is certainly catastrophic," Joe Wood, the manager of International Conservation Programs at the Toledo Zoo, told Live Science. "It seems very likely that feral cats are a major cause of decline," said Wood, who also co-chairs a group at the International Union for Conservation of Nature that works on manumea conservation efforts. "There has to be some kind of control program."

Saving manumea

In this fall's latest survey, Uili's team focused on the remote coastal rainforest of Uafato, but manumea potentially live in six additional forests in Samoa. A current invasive species management program already exists in one of those forests, Samoa's Malololelei Recreation Reserve, Uili said. If there's funding, SCS wants to expand the invasive species management to areas like Uafato.

If a manumea is secured, the partners working to save it said they can use biobanking to preserve biological samples to establish cultured cell lines for the bird. These cell lines will allow them to study the manumea's genetic material and learn more about it. With more information, they can determine the best measures to take, such as potential captive breeding, to repopulate the species, experts said.

The nonprofit conservation arm of Colossal Biosciences is also supporting some manumea conservation efforts, for instance, by building an app to distinguish the manumea's call from another bird's in hopes of getting a more accurate estimate of the manumea's prevalence.

Colossal has said it has plans to bring dodos back from extinction. It recently made headlines for "de-extincting" dire wolves — essentially gene editing gray wolves to include a handful of traits that make them look more like dire wolves.

But there's a need to be wary of efforts to bring extinct species back into ecosystems that have changed since they were alive, Nic Rawlence, an associate professor and director of the Otago Palaeogenetics Laboratory in the Department of Zoology at the University of Otago in New Zealand, told Live Science.

Rawlence also said that enough individuals must be brought back to ensure genetic diversity, so the population can adapt and survive, a principle known as the 500 rule in conservation.

To save the manumea, Rawlence echoed Wood and stressed that it's crucial to stop invasive species and other threats to the manumea's survival while there are still birds left.

"I think it's still going to come down to the grunt work of predator control, habitat restoration, translocation," he said.

Manumea conservation work in Samoa is supported by SCS, the Samoa Ministry of Natural Resources and Environment, BirdLife International, the Colossal Foundation, the Toledo Zoo, and the Waddesdon Foundation through the Zoological Society of London.

Editor's Note: This story was produced in partnership with the Fellowship in Journalism and Health Impact through the University of Toronto Dalla Lana School of Public Health.

]]>
https://www.livescience.com/animals/birds/last-living-member-of-little-dodo-genus-spotted-in-a-remote-samoan-rainforest LKWxpyzgBgHW83AJQLpwAK Thu, 25 Dec 2025 18:00:00 +0000 Tue, 23 Dec 2025 02:31:30 +0000
<![CDATA[ Neuroscience word search — Find all the parts of the brain ]]>

More puzzles and quizzes

Brain quiz: Test your knowledge of the most complex organ in the body

Live Science crossword puzzle: Test your knowledge on all things science with our weekly puzzle

What do you know about psychology's most infamous experiments? Test your knowledge in this science quiz.

]]>
https://www.livescience.com/health/neuroscience/neuroscience-word-search-find-all-the-parts-of-the-brain jqzXf7BpTV3YSX7ZTw8St8 Thu, 25 Dec 2025 17:00:00 +0000 Sat, 27 Dec 2025 09:43:01 +0000
<![CDATA[ Archaeological artifacts should not be for sale in thrift shops. But putting them in a museum is harder than it sounds. ]]> An unusual email arrived in the inbox of a faculty member at the department of archeology at Simon Fraser University in the spring of 2024.

Unlike the many queries archeologists receive every year from people hoping to authenticate objects in their possession, this email came from a thrift shop: Thrifty Boutique in Chilliwack, B.C.

The shop wanted to determine whether items donated to the store (and initially put up for sale) were, in fact, ancient artifacts with historical significance. Shop employees relayed that a customer, who did not leave their name, stated the 11 rings and two medallions (though one may be a belt buckle) in the display case with a price tag of $30 were potentially ancient.

Thrifty Boutique wasn't looking for a valuation of the objects, but rather guidance on their authenticity.

Eclectic collection

As archeology faculty, we analyzed these objects with Barbara Hilden, director of the Museum of Archaeology and Ethnology at Simon Fraser University, after the store arranged to bring the items to the museum.

Our initial visual analysis of the objects led us to suspect that, based on their shapes, designs and construction, they were ancient artifacts most likely from somewhere within the boundaries of what was once the Roman Empire. They may date to late antiquity (roughly the third to sixth or seventh century) and/or the medieval period.

The initial dating was based largely on the decorative motifs that adorn these objects. The smaller medallion appears to bear a Chi Rho (Christogram), which was popular in late antiquity. The larger medallion (or belt buckle) resembles comparable items from the Byzantine period.

Gloved hands holding individual artifact.

One of the ancient medallions. (Image credit: SFU/Sam Smith)

The disparities between the two objects, suggesting different time periods, make it unlikely they're from the same hoard. We expect they were assembled into an eclectic collection by the unknown person (as of yet) who acquired them prior to their donation to Thrifty Boutique.

With the exciting revelation that the objects may be authentic ancient artifacts, the thrift store offered to donate them to SFU's archeology museum. The museum had to carefully consider whether it had the capacity and expertise to care for these objects in perpetuity, and ultimately decided to commit to their care and stewardship because of the potential for student learning.

Officially accepting and officially transferring these objects to the museum took more than a year. We grappled with the ethical implications of acquiring a collection without known provenance (history of ownership) and balanced this against the learning opportunities that it might offer our students.

Gloved hands holding individual artifact.

A researcher handles a medallion with care. (Image credit: SFU/Sam Smith)

Learning to investigate the journey of the donated objects is akin to the process of provenance research in museums.

In accepting items without known provenance, museums must consider the ethical implications of doing so. The Canadian Museums Association Ethics Guidelines state that "museums must guard against any direct or indirect participation in the illicit traffic in cultural and natural objects."

When archeological artifacts have no clear provenance, it is difficult — if not impossible — to determine where they originally came from. It is possible such artifacts were illegally acquired through looting, even though Canada's Cultural Property Export and Import Act exists to restrict the importation and exportation of such objects.

We are keenly aware of the responsibility museums have to not entertain donations of illicitly acquired materials. However, in this situation, there is no clear information — as yet — about where these items came from and whether they are ancient artifacts or modern forgeries. Without knowing this, we cannot notify authorities nor facilitate returning them to their original source.

With a long history of ethical engagement with communities, including repatriation, the Museum of Archaeology and Ethnology is committed to continuing such work. This donation would be no different if we're able to confirm our suspicions about their authenticity.

Archeological forgeries

Archeological forgeries, while not widely publicized, are perhaps more common than most realize — and they plague museum collections around the world.

Well-known examples of the archeological record being affected by inauthentic artifacts are the 1920s Glozel hoax in France and the fossil forgery known as Piltdown Man.

Other examples of the falsification of ancient remains include the Cardiff Giant and crystal skulls, popularized in one of the Indiana Jones movies.

Various scientific techniques can help determine authenticity, but it can sometimes prove impossible to be 100 per cent certain because of the level of skill involved in creating convincing forgeries.

Sabrina Higgins, SFU associate professor, Global Humanities and Archaeology, and Barbara Hilden, director, SFU Museum of Archaeology and Ethnology, examine the rare artifacts that have been donated to SFU for study

SFU researchers examine the donated artifacts. (Image credit: SFU/Sam Smith)

Copies of ancient artifacts

Other copies of ancient artifacts exist for honest purposes, such as those created for the tourist market or even for artistic purposes. Museums full of replicas still attract visitors, because they are another means of engaging with the past, and we are confident that the donation therefore has a place within the museum whether the objects are authentic or not.

By working closely with the objects, students will learn how to become archeological detectives and engage with the process of museum research from start to finish. The information gathered from this process will help to determine where the objects may have been originally uncovered or manufactured, how old they might be and what their original significance may have been.

Object-based learning using museum collections demonstrates the value of hands-on engagement in an age of increasing concern about the impact of artificial intelligence on education.

New course designed to examine items

The new archeology course we have designed, which will run at SFU in September 2026, will also focus heavily on questions of ethics and provenance, including what the process would look like if the objects — if determined to be authentic — could one day be returned to their country of origin.

The students will also benefit from the wide-ranging expertise of our colleagues in the department of archeology at SFU, including access to various technologies and avenues of archeological science that might help us learn more about the objects.

This will involve techniques such as X-ray fluorescence, which can be used to investigate elemental compositions of materials and using 3D scanners and printers to create resources for further study and outreach.

Individual artifact pictured.

The artifacts were being sold for $30 at a Chilliwack thrift shop. (Image credit: SFU/ Museum of Archaeology & Ethnology)

Mentoring with museum professionals

Local museum professionals have also agreed to help mentor the students in exhibition development and public engagement, a bonus for many of our students who aspire to have careers in museums or cultural heritage.

Overall, the course will afford our students a rare opportunity to work with objects from a regional context not currently represented in the museum while simultaneously piecing together the story of these items far from their probable original home across the Atlantic.

We are excited to be part of their new emerging story at Simon Fraser, and can't wait to learn more about their mysterious past.

This edited article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
https://www.livescience.com/archaeology/archaeological-artifacts-should-not-be-for-sale-in-thrift-shops-but-putting-them-in-a-museum-is-harder-than-it-sounds KXGoLJEtHnQzzvtotzA9MC Thu, 25 Dec 2025 14:00:00 +0000 Fri, 19 Dec 2025 21:32:49 +0000
<![CDATA[ Science history: James Webb Space Telescope launches — and promptly cracks our view of the universe — Dec. 25, 2021 ]]>
QUICK FACTS

Milestone: James Webb Space Telescope launches

Date: Dec. 25, 2021

Where: Guiana Space Centre, Kourou, French Guiana

Who: NASA, European Space Agency and Canadian Space Agency scientists

On a cloudy winter's day in the Amazon jungle, a rocket blasted off into space — and changed our view of the universe forever.

The James Webb Space Telescope (JWST) left Earth aboard an Ariane 5 rocket at 25,000 mph (40,000 km/h) "from a tropical rainforest to the edge of time itself," according to a live broadcast from NASA.

About a month later, it reached its orbital parking place in space: a gravitationally stable Lagrange point 930,000 miles (1.5 million kilometers) from Earth, where the gravitational pulls of Earth and the sun hold it in equilibrium. The telescope would beam back its first, spectacular pictures in July 2022. And the firehose of data it has sent back since has transformed our understanding of the cosmos.

JWST has been so pivotal in part because it can peer back to the "cosmic dawn," a period a few hundred million years after the Big Bang, when the first stars were winking on.

"The James Webb Space Telescope has proven itself capable of seeing 98% of the way back to the Big Bang," Peter Jakobsen, an affiliate professor of astrophysics at the University of Copenhagen in Denmark, previously told Live Science in an email.

Yet Webb, which was first conceived in the late 1990s, almost didn't launch at all. The now-iconic, $10 billion project was catastrophically over budget, plagued by years' worth of delays and snarled by "stupid mistakes."

That was in part because, when it launched, it was by far the most complex telescope ever built.

It took more than 20,000 engineers and hundreds of scientists to design, build and launch the eye in the sky. Its 21.3-foot (6.5 meter) primary mirror, made of 18 hexagonal segments arranged in a honeycomb pattern, had to be folded to fit on a rocket, then unfolded once in space. Yet despite being foldable, it also had to be so smooth that if it were as big as a continent, "it would feature no hill or valley greater than ankle height," according to Quanta Magazine.

Image showing the orange clouds of the Cosmic Cliffs billowing up into soft peaks in front of a deep blue background. The white sparkle of stars are scattered throughout the image.

This stunning image of the Cosmic Cliffs was the first one released by JWST. In it, you can see a profusion of stars in their earliest stages of star formation, a frenetic period which lasts between 50,000 and 100,000 years. (Image credit: NASA, ESA, CSA, and STScI)

To see the earliest epochs of cosmic history, Webb needed infrared vision. That's because ancient light has been stretched, or red-shifted, into infrared wavelengths as it travels across space-time. On Earth, humans and every other living thing give off heat in the form of infrared radiation, and that would drown out the faint infrared signals from the most distant, ancient starlight. So JWST needed to be lofted into the cold dark of outer space to use its infrared instruments.
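
To put a number on that stretching, here is the standard cosmological redshift relation (a textbook formula; the z = 10 example below is illustrative and not from the article):

\lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{emit}}

For a galaxy at redshift z = 10, hydrogen's ultraviolet Lyman-alpha line, emitted at 121.6 nanometers, arrives at (1 + 10) × 121.6 nm ≈ 1.34 micrometers, squarely in the near-infrared range Webb's instruments were built to detect.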

Once JWST started imaging the cosmos, it promptly began breaking our existing models of the universe. It rapidly confirmed the Hubble tension — the discrepancy between the universe's expansion rates depending on where and what astronomers measure. It has found hints of potentially life-sustaining atmospheres shrouding distant exoplanets. And it has spotted shockingly bright galaxies and seemingly "impossible" black holes at the dawn of time. All these clues are pointing to new understandings of the universe.

Some of the questions JWST is raising, such as whether other planets harbor life, it will probably not be able to answer in its planned 10-year lifespan. But future telescopes — such as the currently operational Vera C. Rubin Observatory, meant to create a real-time "movie of the universe"; the recently completed Nancy Grace Roman Telescope, set to launch in 2027 and resolve questions about dark matter and energy; the Extremely Large Telescope, set to turn on in 2029; or the recently announced Habitable Worlds Observatory, which may come online in the 2030s — could start to answer the questions that Webb is raising.

]]>
https://www.livescience.com/space/science-history-james-webb-space-telescope-launches-and-promptly-cracks-our-view-of-the-universe-dec-25-2021 HTgm4yPw6ZtjPWCF7mA7tA Thu, 25 Dec 2025 07:00:00 +0000 Mon, 22 Dec 2025 17:02:41 +0000
<![CDATA[ 'Gospel stories themselves tell of dislocation and danger': A historian describes the world Jesus was born into ]]> Every year, millions of people sing the beautiful carol Silent Night, with its line “all is calm, all is bright”.

We all know the Christmas story is one in which peace and joy are proclaimed, and this permeates our festivities, family gatherings and present-giving. Countless Christmas cards depict the Holy Family – starlit, in a quaint stable, nestled comfortably in a sleepy little village.

However, when I began to research my book on the childhood of Jesus, Boy Jesus: Growing up Judaean in Turbulent Times, that carol started to sound jarringly wrong in terms of his family’s actual circumstances at the time he was born.

The Gospel stories themselves tell of dislocation and danger. For example, a “manger” was, in fact, a foul-smelling feeding trough for donkeys. A newborn baby laid in one is a profound sign given to the shepherds, who were guarding their flocks at night from dangerous wild animals (Luke 2:12).

When these stories are unpacked for their core elements and placed in a wider historical context, the dangers become even more glaring.

Take King Herod, for example. He enters the scene in the nativity stories without any introduction at all, and readers are supposed to know he was bad news. But Herod was appointed by the Romans as their trusted client ruler of the province of Judaea. He held his post for decades because he was – in Roman terms – doing a reasonable job.

Jesus’ family claimed to be of the lineage of Judaean kings, descended from David and expected to bring forth a future ruler. The Gospel of Matthew begins with Jesus’ entire genealogy – it was that important to his identity.

But a few years before Jesus’ birth, Herod had violated the tomb of David and looted it. How did that affect the family and the stories they would tell Jesus? How did they feel about the Romans?

A time of fear and revolt

As for Herod’s attitude to Bethlehem, remembered as David’s home, things get yet more dangerous and complex.

When Herod was first appointed, he was driven out by a rival ruler – a man backed by the Parthians (Rome’s enemy) and loved by many local people. Those people attacked Herod just near Bethlehem.

He and his forces fought back and massacred the attackers. When Rome vanquished the rival and brought Herod back, he built a memorial to his victorious massacre on a nearby site he called Herodium, overlooking Bethlehem. How did that make the local people feel?

Bethlehem (in 1898-1914) with Herodium on the skyline

Bethlehem (in 1898-1914) with Herodium on the skyline: memorial to a massacre. (Image credit: Matson Collection via Wikimedia Commons)

And far from being a sleepy village, Bethlehem was so significant as a town that a major aqueduct construction brought water to its centre. Fearing Herod, Jesus’ family fled from their home there, but they were on the wrong side of Rome from the start.

They were not alone in their fears or their attitude to the colonisers. The events that unfolded, as told by the first-century historian Josephus, show a nation in open revolt against Rome shortly after Jesus was born.

When Herod died, thousands of people took over the Jerusalem temple and demanded liberation. Herod’s son Archelaus massacred them. A number of Judaean revolutionary would-be kings and rulers seized control of parts of the country, including Galilee.

It was at this time, in the Gospel of Matthew, that Joseph brought his family back from refuge in Egypt – to this independent Galilee and a village there, Nazareth.

But independence in Galilee didn’t last long. Roman forces, under the general Varus, marched down from Syria with allied forces, destroyed the nearby city of Sepphoris, torched countless villages and crucified huge numbers of Judaean rebels, eventually putting down the revolts.

Archelaus – once he was installed officially as ruler – followed this up with a continuing reign of terror.

A nativity story for today

As a historian, I’d like to see a film that shows Jesus and his family embedded in this chaotic, unstable and traumatic social world, in a nation under Roman rule.

Instead, viewers have now been offered The Carpenter’s Son, a film starring Nicolas Cage. It’s partly inspired by an apocryphal (not biblical) text named the Paidika Iesou – the Childhood of Jesus – later called The Infancy Gospel of Thomas.

You might think the Paidika would be something like an ancient version of the hit TV show Smallville from the 2000s, which followed the boy Clark Kent before he became Superman.

But no, rather than being about Jesus grappling with his amazing powers and destiny, it is a short and quite disturbing piece of literature made up of bits and pieces, assembled more than 100 years after the life of Jesus.

The Paidika presents the young Jesus as a kind of demigod no one should mess with, including his playmates and teachers. It was very popular with non-Jewish, pagan-turned-Christian audiences who sat in an uneasy place within wider society.

The miracle-working Jesus zaps all his enemies – and even innocents. At one point, a child runs into Jesus and hurts his shoulder, so Jesus strikes him dead. Joseph says to Mary, “Do not let him out of the house so that those who make him angry may not die.”

Such stories rest on a problematic idea that one must never kindle a god’s wrath. And this young Jesus shows instant, deadly wrath. He also lacks much of a moral compass.

But this text also rests on the idea that Jesus’ boyhood actions against his playmates and teachers were justified because they were “the Jews”. “A Jew” turns up as an accuser just a few lines in. There should be a content warning.

The nativity scene from The Carpenter’s Son is certainly not peaceful. There is a lot of screaming and horrific images of Roman soldiers throwing babies into a fire. But, like so many films, the violence is somehow just evil and arbitrary, not really about Judaea and Rome.

It is surely the contextual, bigger story of the nativity and Jesus’ childhood that is so relevant today, in our times of fracturing and “othering”, where so many feel under the thumb of the unyielding powers of this world.

In fact, some churches in the United States are now reflecting this contemporary relevance as they adapt nativity scenes to depict ICE detentions and deportations of immigrants and refugees.

In many ways, the real nativity is indeed not a simple one of peace and joy, but rather one of struggle – and yet mystifying hope.

This edited article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
https://www.livescience.com/archaeology/gospel-stories-themselves-tell-of-dislocation-and-danger-a-historian-describes-the-world-jesus-was-born-into Q9vtKHdNbXG5Gsy4ozrUYD Thu, 25 Dec 2025 01:00:00 +0000 Wed, 24 Dec 2025 14:37:03 +0000
<![CDATA[ 'What the heck is this?' James Webb telescope spots inexplicable planet with diamonds and soot in its atmosphere ]]> A distant exoplanet appears to sport a sooty atmosphere that is confusing the scientists who recently spotted it.

The Jupiter-size world, detected by the James Webb Space Telescope (JWST), doesn't have the familiar helium-hydrogen combination we are used to in atmospheres from our solar system, nor other common molecules, like water, methane or carbon dioxide.

Rather, the planet seems to have soot clouds near the top of its atmosphere that condense into diamonds deeper down. Such an atmosphere, made of helium and carbon, has never been spotted on another planet. What's even weirder is that its host star is not even a normal star.

"This was an absolute surprise," study co-author Peter Gao, a staff scientist at the Carnegie Earth and Planets Laboratory, said in a statement. "I remember after we got the data down, our collective reaction was, 'What the heck is this?' It's extremely different from what we expected."

Neutron sun

Researchers probed the bizarre environment of the planet, known as PSR J2322-2650b, in a paper published Tuesday (Dec. 16) in The Astrophysical Journal Letters. Although the planet was detected by a radio telescope survey in 2017, it took the sharper vision of JWST (which launched in 2021) to examine PSR J2322-2650b's environment from 750 light-years away.

PSR J2322-2650b orbits a pulsar. Pulsars are fast-spinning neutron stars — the ultradense cores of stars that have exploded as supernovas — that emit radiation in brief, regular pulses that are visible only when their lighthouse-like beams of electromagnetic radiation aim squarely at Earth. (That's bizarre on its own, as no other pulsar is known to have a gas-giant planet, and few pulsars have planets at all, the science team stated.)

The infrared instruments on JWST can't actually see this particular pulsar, because it shines mainly in high-energy gamma-rays rather than in the infrared. However, JWST's "blindness" to the pulsar is actually a boon to scientists, because they can cleanly probe the companion planet, PSR J2322-2650b, to see what the planet's environment is like.

"This system is unique because we are able to view the planet illuminated by its host star, but not see the host star at all," co-author Maya Beleznay, a doctoral candidate in physics at Stanford University, said in the statement. "We can study this system in more detail than normal exoplanets."

An artist's concept of the exoplanet PSR J2322-2650b.

An artist's concept of the exoplanet PSR J2322-2650b. (Image credit: NASA, ESA, CSA, Ralf Crawford (STScI))

Formation mystery

PSR J2322-2650b's origin story is an enigma. It is only a million miles (1.6 million kilometers) from its star — nearly 100 times closer than Earth is to the sun. That's even stranger when you consider that the gas giant planets of our solar system are much farther out — Jupiter is 484 million miles (778 million km) from the sun, for example.

The planet whips around its star in only 7.8 hours, and it's shaped like a lemon because the pulsar's tidal forces pull extremely strongly on it.
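
The period and the separation hang together under Kepler's third law. As a rough consistency check (a minimal sketch in Python, assuming an illustrative pulsar mass of about 1.4 solar masses, a typical value that is not quoted in this article):

# Consistency check (illustrative, not from the paper): Kepler's third law,
# a^3 = G * M * T^2 / (4 * pi^2), assuming a typical ~1.4-solar-mass pulsar.
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
M_PULSAR = 1.4 * M_SUN   # assumed pulsar mass (not a measured value)
T = 7.8 * 3600           # orbital period, seconds

a = (G * M_PULSAR * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"Orbital separation: {a / 1e9:.2f} million km")  # ~1.55 million km

The result, about 1.55 million kilometers, lines up with the roughly 1.6 million kilometers quoted above.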

At first glance, it appears PSR J2322-2650b could have a formation scenario similar to that of "black widow" systems, in which a sunlike star sits close to a small pulsar. In such systems, the pulsar "consumes" or erodes the nearby star, much like the black widow spider's infamous feasting behavior, after which the phenomenon is named. That happens because the star is so close to the pulsar that its material falls onto the pulsar. The extra stellar material causes the pulsar to gradually spin faster and to generate a strong "wind" of radiation that erodes the nearby star.

But lead author Michael Zhang, a postdoctoral fellow in exoplanet atmospheres at the University of Chicago, said this pathway made it difficult to understand how PSR J2322-2650b came to be. In fact, the planet's formation appears to be unexplainable at this point.

"Did this thing form like a normal planet? No, because the composition is entirely different," Zhang said in the statement. "It's very hard to imagine how you get this extremely carbon-enriched composition. It seems to rule out every known formation mechanism."

Diamonds in the air

Scientists still can't explain how the soot or diamonds are present in the exoplanet's atmosphere. Usually, molecular carbon doesn't appear in planets that are very close to their stars, due to the extreme heat.

One possibility for what happened comes from study co-author Roger Romani, a professor of physics at Stanford University and the Kavli Institute for Particle Astrophysics and Cosmology. After the planet cooled down from its formation, he suggested, carbon and oxygen in its interior crystallized.

But even that doesn't account for all of the odd properties. "Pure carbon crystals float to the top and get mixed into the helium … but then something has to happen to keep the oxygen and nitrogen away," Romani explained in the same statement. "And that's where the mystery [comes] in."

Scientists hope to continue studying PSR J2322-2650b. "It's nice to not know everything," Romani said. "I'm looking forward to learning more about the weirdness of this atmosphere. It's great to have a puzzle to go after."

]]>
https://www.livescience.com/space/astronomy/what-the-heck-is-this-james-webb-telescope-spots-inexplicable-planet-with-diamonds-and-soot-in-its-atmosphere p47mfGp3YQ4DHZ5BxVXdqF Wed, 24 Dec 2025 18:05:00 +0000 Tue, 23 Dec 2025 19:40:14 +0000
<![CDATA[ First-ever 'superkilonova' double star explosion puzzles astronomers ]]> Scientists may have witnessed a massive, dying star split in two and then crash back together, triggering a never-before-seen double explosion. The explosion sent ripples through space-time and forged some of the universe's heaviest elements.

Most massive stars reach the ends of their lives by collapsing and exploding as supernovas, seeding the cosmos with elements such as carbon and iron. A different kind of cataclysm, known as a kilonova, occurs when the ultradense remnants of dead stars, called neutron stars, collide, forging even heavier elements like gold.

The newly identified event, named AT2025ulz, appears to combine these two types of cosmic explosions in a way that scientists have long hypothesized but never before observed.

If confirmed, it could represent the first example of a "superkilonova," a rare hybrid blast in which a single object produces two distinct but equally dramatic explosions.

"We do not know with certainty that we found a superkilonova, but the event nevertheless is eye opening," study lead author Mansi Kasliwal, a professor of astronomy at Caltech, said in a statement.

The findings are detailed in a study published Dec. 15 in The Astrophysical Journal Letters.

A two-in-one combo

AT2025ulz first caught astronomers' attention on Aug. 18, 2025, when gravitational wave detectors operated by the U.S.-based Laser Interferometer Gravitational-Wave Observatory (LIGO) and its European partner, Virgo, registered a subtle signal consistent with the merger of two compact objects.

Soon after, the Zwicky Transient Facility at Palomar Observatory in California spotted a rapidly fading red point of light in the same region of the sky, according to the statement. The event's behavior closely resembled that of GW170817 — the only confirmed kilonova, which was observed in 2017 — with its red glow consistent with freshly forged heavy elements such as gold and platinum.

Instead of continuing to fade, as astronomers expected, AT2025ulz then began to brighten again, the study reported. Follow-up observations from a dozen observatories around the world, including Hawaii's Keck Observatory, showed the light shifting toward bluer wavelengths and revealing fingerprints of hydrogen, a hallmark of a supernova rather than a kilonova.

That data helped researchers confirm the presence of hydrogen and helium, indicating that the massive star had shed most of its hydrogen-rich outer layers before detonating, the paper noted.

To explain the baffling sequence, the team proposed that a massive, rapidly spinning star collapsed and exploded as a supernova. But instead of forming a single neutron star, its core split into two smaller neutron stars. Those newborn remnants then spiraled together and collided within hours, triggering a kilonova inside the expanding debris of the supernova.

The combined effect is a hybrid explosion in which the supernova initially masks the kilonova's signature, accounting for the unusual observations, the researchers wrote in the paper.

Clues from the gravitational-wave data bolster this idea. While the signal cannot precisely determine the individual masses of the two merging neutron stars, it does rule out scenarios in which both were heavier than the sun, the new paper noted.

The researchers find a 99% chance that at least one of the objects was less massive than the sun — an outcome that challenges conventional stellar physics, which predicts neutron stars should not weigh less than about 1.2 solar masses. Such lightweight neutron stars can form only when a very rapidly spinning star collapses, matching the scenario proposed for AT2025ulz, according to the statement.

However, the study noted that the complexity of the overlapping signals makes it difficult to rule out the possibility that the signals came from unrelated events that happened to occur close together. Ultimately, the only way to test the theory will be to find more such events using next-generation sky surveys such as those from Vera C. Rubin Observatory and NASA's upcoming Nancy Grace Roman Space Telescope, the researchers said.

"If superkilonovae are real, we'll eventually see more of them," study co-author Antonella Palmese, an assistant professor of astrophysics and cosmology at Carnegie Mellon University in Pennsylvania, said in a different statement. "And if we keep finding associations like this, then maybe this was the first."

]]>
https://www.livescience.com/space/first-ever-superkilonova-double-star-explosion-puzzles-astronomers CDa5sa9wQvnDjYCnECwcmY Wed, 24 Dec 2025 17:22:00 +0000 Wed, 24 Dec 2025 16:21:52 +0000
<![CDATA[ We now know much more about how our ancestor 'Lucy' lived — and died ]]> From a distance, it might have looked like a small child was wending her way through the waving grass along a vast lake. But a closer look would have revealed a strange, in-between creature — a big-eyed imp with a small head and an apelike face who walked upright like a human.

She may have looked warily over her shoulder as she walked, on alert for saber-toothed cats or hyenas. She may have used her strong arms to climb the shrubby trees nearby, searching for fruit, eggs, or insects to eat. Or perhaps she simply rested on the shores of the croc-infested waters, gulping down water on a hot day.

She likely had no idea it was her last day on Earth.

Roughly 3.2 million years later, her skeleton was unearthed by paleoanthropologist Donald Johanson and his team on the International Afar Research Expedition.

The stunningly complete fossil was nicknamed "Lucy." And her remarkable species, Australopithecus afarensis, may have been our direct ancestor. Our discoveries about Lucy have transformed our understanding of humanity's tangled family tree.

Fifty years later, we know so much more about her species. In fact, anthropologists have learned so much about Lucy and her kind that we can now paint a picture of how she lived and died.

Her last day may have been filled with companionship, but it also entailed a relentless search for food. And it was likely dominated by the ever-present fear of predators.

"I suspect that the last day in her life was filled with danger," Johanson told Live Science.

An old photo of Donald Johanson sitting in the dirt and excavating a bone

Donald Johanson excavating a fossil in 1975. (Image credit: David Brill)

Finding Lucy

The modern story of Lucy began on Nov. 24, 1974, in Hadar, Ethiopia. Johanson and then-graduate student Tom Gray stumbled upon a bone poking out of a gully. Following two weeks of careful excavation, their team recovered dozens of fossilized bones. Together, these bones made up 40% of the skeleton of a human ancestor, making it the most complete skeleton of an archaic human species that had ever been found.


Pamela Alderman, another member of the expedition, suggested the team nickname the skeleton Lucy, after the Beatles song "Lucy in the Sky with Diamonds."

"And it just became iconic," Johanson said, "a moniker that everybody knew."

Lucy’s discovery transformed the study of ancient human relatives.

"I was in high school when she was found," John Kappelman, a paleoanthropologist at the University of Texas at Austin, told Live Science. "It really did reset the way paleoanthropology worked."

Lucy's skeleton, along with subsequent discoveries of other fossils of her species, have given anthropologists a wealth of information about what is essentially the halfway point in human evolution. At 3.2 million years old, Lucy and her kind lived equidistant in time from our ape ancestors and contemporary humans.

"She's our touchstone," Jeremy DeSilva, a paleoanthropologist at Dartmouth College, told Live Science. "Everything sort of comes back to her as the reference point, and she deserves it."

An old photo of Donald Johanson standing over Lucy's bones laid out on a table

Donald Johanson with the “Lucy” skeleton in 1975. (Image credit: Image courtesy of the Institute of Human Origins, Arizona State University.)

"A lot like us"

One thing is fairly certain: Though there were some obvious differences, Lucy looked and acted a lot like us.

"If we saw her coming out of a grocery store today, we would recognize her as upright walking and some kind of human," Johanson said.

Although her strong arms and the shape of her finger bones suggest Lucy could climb trees, her pelvis and knees were clearly adapted to walking on two feet.

The size of Lucy's thigh bone also revealed that she was only about 42 inches (1.1 meters) tall and 60 to 65 pounds (27 to 30 kilograms) — about the size of a 6- or 7-year-old child today. And the eruption of her wisdom teeth showed that, although she was in her early teens when she died, she was a fully mature young adult.

"Australopithecus in general was maturing fast," DeSilva said, "and it makes sense if you're on a landscape full of predators." In species that are frequently prey, individuals that mature faster are more likely to pass on their genes. But australopithecines were unique—while their teeth and bodies matured quickly, their brains grew more slowly, telling us that they relied quite a bit on learning for survival, DeSilva said.

Her discovery also settled a debate that was raging in the early 1970s: Did our big brains evolve before we learned to walk upright? Lucy's head, which was not much bigger than a chimp's, showed the answer was no. Our ancestors became bipedal long before they evolved large brains.

An illustration comparing the skeletons of Lucy, a modern human, and a chimpanzee

A comparison of the skeletons of Lucy (left), a chimpanzee (center) and a modern human (right). (Image credit: eLucy.org, CC BY-SA 3.0 US)

Lucy's clan

Because her skeleton was found on its own, Lucy's "social life" is a little murkier than other parts of her daily life. But many researchers think she lived in a mixed-sex group of about 15 to 20 males and females, much as modern-day chimpanzees do.

And although there's no direct evidence, Lucy's skeletal maturity suggests she could have had a baby. Bringing that relatively large-headed newborn through her relatively narrow pelvis would have been challenging, which means she may have had the help of a primitive "midwife."

If Lucy had a baby, she also likely had a partner. Other A. afarensis fossils, such as those of Kadanuumuu, show male australopithecines were only slightly larger than females, which, in primates, usually corresponds to more monogamous pairings.

Lucy and her kind would have spent a significant amount of their time avoiding becoming another animal's lunch. "These small creatures would have been nice hors d'oeuvres for a sabertooth or a large cat or hyena," Johanson said.

Perhaps because of that omnipresent danger, the group likely relied on each other.

"I think they had each other's backs and helped each other out," DeSilva said, "especially when they were in dangerous situations."

A healed bone fracture seen in Kadanuumuu provides evidence that these primates cared for one another. Around 3.6 million years ago, this male australopithecine broke his lower leg. By the time he died, though, the break was fully healed.

"On that landscape with that many predators, no doctors, no hospitals, no casts, no crutches, how in the world do you survive if not for social assistance?" DeSilva said. "It's really strong evidence that they didn't leave each other for dead."

Lucy's last day

Lucy probably started her last day much like any other, waking up from the treetop nest made of branches and leaves where she slept, along with her group, before setting off to find food.

It's not clear whether she would have been alone or in a group when she left to forage; if she did have a baby, she may have carried it.

But there's no doubt that she would have spent a significant part of her day looking for food. She most likely ate a few staples, such as grasses, roots and insects, chemical elements in her tooth enamel showed. She may have happened upon the eggs of birds or turtles and promptly gobbled them up as tasty, protein-rich treats. And if she was lucky enough to come across a carcass of a large mammal, such as an antelope, that hadn't been picked clean, she and her troop mates may have pulled the flesh from the bone, using large rocks.

"They can't afford to be picky eaters as these slow bipeds in a dangerous environment," DeSilva said. "They're eating everything they can get their hands on."

However, there's no evidence that Lucy’s species used fire to cook any of their food.

A photograph of a hilly landscape with sand, grass, and trees

A view of Hadar, Ethiopia, near where Lucy was found. (Image credit: Image courtesy of the Institute of Human Origins, Arizona State University.)

Death at the water's edge

In the past 50 years, we've created a picture of Lucy's last moments. It's not clear exactly why she was by the lake; maybe she was thirsty, or perhaps it was a great spot to look for food.

But there are two main theories for how she died.

"Perhaps she was down there at the water and — bam! — a crocodile comes out," Johanson said. "Crocodiles are incredibly fast, and it's a dangerous place if you're a little creature" like Lucy.

Johanson found one carnivore tooth mark on Lucy's pelvis, and it had not healed, meaning it occurred around the time of her death. Although the animal that made the mark has not been conclusively identified, "we know that australopithecines were preyed upon because there are a number of examples," Johanson said.

In 2016, Kappelman and his colleagues put forward an alternate ending for Lucy: a catastrophic fall from a tree.

Based on high-resolution CT scans and 3D reconstructions of Lucy's skeleton, Kappelman identified fractures in her right shoulder, ribs and knees that were unlike the typical fracturing that occurs in fossils crushed under the weight of dirt and rocks for millions of years.

"Something traumatic happened here during life," Kappelman said.

The kinds of fractures Lucy suffered are consistent with a fall from a considerable height, perhaps from a tall tree in which she was foraging for food.


"She hit on her feet and then her hands, which meant she was conscious when she hit the ground," Kappelman said. "I don't think she survived very long."

It's not clear whether she was alone when she died. But even if she was with others of her kind, they likely wouldn't have done much with her body.

There's no evidence that A. afarensis "bodies were treated any differently than any other animal," DeSilva said. "Maybe there was some curiosity around it, and then they carried on."

Primate researchers have documented other species' curiosity about inanimate bodies. For example, chimpanzees often attend to the body of a dead group member for a few hours or days, sometimes guarding it.

Lucy's group may have done the same for her until her body was naturally buried, which would have happened quite rapidly, perhaps by a flood or mudslide.

In the end, though, "we know very little about how any of these creatures died," Johanson said.

An illustration of multiple australopithecus walking together

An illustration of australopithecines walking in wet ash at Laetoli in Tanzania. (Image credit: Illustration by Michael Hagelberg, courtesy Institute of Human Origins at Arizona State University.)

Lucy lives on

Thanks to Johanson's 1974 discovery of Lucy — as well as other important findings, like the "First Family" and the footprints at Laetoli in Tanzania — we now know quite a lot about A. afarensis.

"It was a highly successful species that was comfortable in lots of different habitats," Johanson said; A. afarensis fossils have been found in Kenya in addition to Ethiopia and Tanzania. "From an evolutionary perspective, her species was highly adaptable," he said.

Lucy has had a broad impact on the field of anthropology.

"The discovery of Lucy really hit the start button for looking in older and older sediments in Africa," Kappelman said. As a result, we have found numerous ancient hominin species and now have 50 years' worth of fossil evidence that human evolution was messy and complex.

Lucy was the only human ancestor discovered at Hadar. But a couple dozen miles away at Woranso-Mille, a paleontological site in Ethiopia, Yohannes Haile-Selassie, director of the Institute of Human Origins at Arizona State University, and his colleagues have found evidence of a strange land inhabited by multiple humanlike species between 3.8 million and 3.3 million years ago. For instance, Lucy's kind coexisted alongside another ancient relative, A. anamensis.

Would they have been friends, enemies, competitors or something in between? Right now, anthropologists still have little idea what this landscape teeming with ancient hominins would have looked like.

But perhaps 50 years from now, we'll have a better picture of how Lucy's kind interacted with these other ancient hominins. Even then, Lucy will likely remain one of the most famous fossils of all time.

"I like to think all fossils are pretty special," DeSilva said, "but there's nothing like Lucy."

Editor's note: This article was originally published in November 2024 as part of a special package written for the 50th anniversary of the discovery of a 3.2 million-year-old A. afarensis fossil (AL 288-1), nicknamed "Lucy."

]]>
https://www.livescience.com/archaeology/we-now-know-much-more-about-how-our-ancestor-lucy-lived-and-died gBTD4BErmjfmFXgvgN4aXK Wed, 24 Dec 2025 17:00:00 +0000 Thu, 25 Dec 2025 10:37:57 +0000
<![CDATA[ 'It won’t be so much a ghost town as a zombie apocalypse': How AI might forever change how we use the internet ]]> Artificial intelligence (AI) has permeated our lives in ways that go beyond virtual assistants like Apple’s Siri and Amazon’s Alexa. Generative AI is not only disrupting how digital content is created, but it's also starting to influence how the internet serves us.

Greater access to large language models (LLMs) and AI tools has further fueled the dead internet conspiracy theory. This idea, posited in the early 2020s, suggested that the internet is actually dominated by AIs talking to and producing content for other AIs — with human-made and disseminated information a rarity.

When Live Science explored the theory, we concluded that this phenomenon has yet to emerge in the real world. But people now increasingly intermingle with bots — and one can never assume an online interaction is with another human.

Beyond this, low-quality content — ranging from articles and images to videos and social media posts created by tools like Sora, ChatGPT and others — is leading to a rise in "AI slop." It can range from Instagram Reels showing videos of cats playing instruments or using weapons, to fake or fictional information being presented as news or fact. This has been fueled, in part, by a desire for more online content to drive clicks, draw attention to websites and raise their visibility in search engines.

"The challenge is that a combination of the drive towards search engine optimization [SEO] and playing to social media algorithms has led towards more content and less quality content. Content that's placed to leverage our attention economy (serving ads, etc.) has become the primary way information is served up," Adam Nemeroff, assistant provost for Innovations in Learning, Teaching, and Technology at Quinnipiac University in Connecticut, told Live Science. "AI slop and other AI-generated content is often filling those spaces now."

Two female hands holding their smartphones, connecting with social media, leaving comments, sending messages and sharing photos. Technology connecting people.

Social media platforms like Instagram may often host poor-quality AI-generated content. (Image credit: Oscar Wong/Getty Images)

Mistrust of information on the internet is nothing new, with many false claims made by people with particular agendas, or simply a desire to cause disruption or outrage. But AI tools have accelerated the speed at which machine-generated information, images or data can spread.

SEO firm Graphite found in November 2024 that the number of AI-generated articles being published had surpassed the number of human-written articles. Although 86% of articles ranking in Google Search were still written by people, versus 14% by AI (with a similar split found in the information a chatbot served up), it still points to a rise in AI-made content. Citing a report that one in 10 of the fastest-growing YouTube channels shows only AI-generated content, Nemeroff added that AI slop is starting to negatively affect us.

"AI slop is actively displacing creators who make their livelihood from online content," he explained. "Publications like Clarkesworld magazine had to stop taking submissions entirely due to the flood of AI-generated writing, and even Wikipedia is dealing with AI-generated content that strains its community moderation system, putting a key information resource at risk."

While an increase in AI content gives people more to consume, it also erodes trust in information, especially as generative AI gets better at serving up images and videos that look real or information that seems human-made. As such, a deeper mistrust of information, especially in media brands and news, could lead to genuine human-made content being dismissed as fake or AI-generated.

"I always recommend assuming content is AI-generated and looking for evidence that it's not. It's also a great moment to pay for the media we expect and to support creators and outlets that have clear editorial and creative guidelines," said Nemeroff.

Trust versus the attention economy

There are two sides to AI-generated content when it comes to the lens of trust.

The first is AI spreading convincing-looking information that requires an element of savvy thinking to check rather than take at face value. But the open nature of the web means it’s always been easy for incorrect information to spread, whether accidentally or intentionally, and there’s long been a need for healthy scepticism and a willingness to cross-reference information before jumping to conclusions.

"Information literacy has always been core to the experience of using the web, and it's all the more important and nuanced now with the introduction of AI content and other misinformation," said Nemeroff.

The other side of AI-generated content is when it's deliberately used to suck in attention, even if its viewers can easily tell it’s fabricated. One example, flagged by Nemeroff, is a set of AI-generated images of a displaced child with a puppy in the aftermath of Hurricane Helene, which were used to spread political misinformation.

Although the images were quickly flagged as AI-made, they still provoked reactions, therefore fueling their impact. Even obviously AI-made content can be either weaponized for political motivations or used to capture the precious attention of people on the open web or within social media platforms.

"AI content that is brighter, louder and more engaging than reality, and which sucks in human attention like a vortex … creates a "Siren" effect where AI companions or entertainment feeds are more seductive than messy, friction-filled, and sometimes disappointing human interactions." Nell Watson, an IEEE member and AI ethics engineer at Singularity University, told Live Science.

A hand touching a wave of lights. Conceptual image of AI.

There are fears that the AIs of the future will be fueled by synthetic content generated by other AIs, leading to an overall detachment from reality. (Image credit: Weiquan Lin/Getty Images)

While some AI content might look slick and engaging, it might represent a net negative for the way we use the internet, forcing us to question whether what we're viewing is real and to deal with a flood of cheap, synthetic content.

"AI slop is the digital equivalent of plastic pollution in the ocean. It clogs the ecosystem, making it harder to navigate and degrading the experience for everyone. The immediate effect is authenticity fatigue," Watson explained. "Trust is fast becoming the most expensive currency online."

There’s a flipside to this. The rise of inauthentic content could be counterbalanced by people being drawn to content that’s explicitly human-made; we could see better-verified information and "artisanal" content created by real people. Whether that’s delivered by some form of watermark or locked away behind paywalls and in gated communities on Discord or other forums has yet to be seen. How people react to AI slop, and their growing awareness of such content, will determine the shape of content in the future and how it ultimately affects people, Nemeroff said.

"If people find slop and communicate that slop isn't acceptable, people's consumer behaviors will also change with that," he said. "This, combined with our broader media diet, will hopefully lead people to make changes to the nutrition of what they consume and how they approach it."

Less surfing, more sifting the web

AI-made content is only one part of how AI is changing the way that we use the internet. LLM-based agents already come built into the latest smartphones, for example. You'd also be hard-pressed to find anyone who hasn’t indirectly experienced generative AI, whether it was serving up information suggestions or offering the option to rework an email, generating an emoji or automatically editing a photo.

While Live Science’s publisher has strict rules on AI use (it certainly can't be used for writing or editing articles), some AI tools can help with mundane image-editing tasks, such as putting images on new backgrounds.

AI use, in other words, is inescapable in 2025. Depending on how we use it, it can influence how we communicate and socialize online — but more pertinently, it’s affecting how we seek and absorb information.

Google Search, for example, now has an AI overview serving up aggregated and disseminated information before external search results — something which a recently introduced AI Mode builds upon.

“We primarily used the internet via web addresses and search up to this moment. AI is the first innovation to disrupt that part of the cycle," Nemeroff added. "AI chat tools are increasingly taking up internet queries that previously directed people to websites. Search engines that once handled questions and answers are now sharing that space with search-enabled chatbots and, more recently, AI agent browsers like Comet, Atlas, Dia, and others.”

On a surface level, this is changing the way people search and consume information. Even if someone types a query into a traditional search bar, it’s increasingly common that an AI-made summary will pop up rather than a list of websites from trusted sources.

Online memorial concept with R.I.P. abbreviation button on computer keyboard

(Image credit: Pop Paul-Catalin/Shutterstock)

"We are transitioning from an internet designed for human eyeballs to an internet designed for AI agents," Watson said. "There is a shift toward "Agentic workflows." Soon, you generally won't surf the web to book a flight or research a product yourself; your personal AI agent will negotiate with travel sites or summarize reviews for you. The web becomes a database for machines rather than a library for people."

There are two likely effects of this. The first is less human traffic to websites like Live Science, as AI agents scrape the information they feel a user wants — disrupting the advertising-led funding model of many websites.

"If an AI reads the website for you, you don't see the ads, which forces publishers to put up paywalls or block AI scrapers entirely, further fracturing the information ecosystem," said Watson. This fracturing could even see websites shutting down, given the already turbulent state of online media, further leading to a reduction in trusted sources of information.

The second is a situation where AI agents end up searching, ingesting and learning from AI-generated content.

"As the web fills with synthetic content — AI slop — future models train on that synthetic data, leading to a degradation of quality and a detachment from reality," Watson said. Slop or solid information, this all plays into the dead internet theory of machines interacting with other machines, rather than humans.

"Socially, this risks isolating us," Watson added. "If an AI companion is always available, always agrees with you, and never has a bad day, real human relationships feel exhausting by comparison. Information-seeking will shift from ’Googling’ — which relies on the user to filter truth from fiction — to relying on trusted AI curators. However, this centralises power; we are handing our critical thinking over to the algorithms that summarise the world for us."

It’s the end of the internet as we know it… and AI feels fine

Undoubtedly, the ways in which humans are using the internet, and the World Wide Web it supports, have been changed by AI. AI has affected every aspect of internet use in 2025, from how we search for information, to how content is generated and how we are served the information we asked for. Even if you choose to search the web without any AI tools, the information you see could have been produced or handled by some form of AI.

As we’re currently in the midst of this change, it’s hard to be clear on what exactly the internet will look like as the trend continues. When asked whether AI could turn the internet into a "ghost town," Watson countered: "It won’t be so much a ghost town as a zombie apocalypse."

It’s hard not to be concerned by this damning assessment, whether you're a content creator directly affected by AI or simply an end user who’s getting tired of questioning information.

However, Nemeroff highlighted that we can learn from the rise of social media and its impact on the internet in the late 2000s. It serves as an example of the disruption and challenges that such platforms faced when it comes to the use and spread of information.

"Taking a few pages out of what we learned about social media, these technologies were not without harms, and we also did not anticipate a number of the issues that emerged at the beginning," he said. "There is a role for responsible regulation as part of that, which requires lawmakers to have an interest in regulating these tools and knowing how to regulate in an ongoing way."

When it comes to any new technology — self-driving cars being one example — regulation and lawmaking are often several steps behind the breakthroughs and adoption.

It’s also worth keeping in mind that while AI poses a challenge, the agentic tools it offers can also better surface information that might otherwise remain buried deep in search results or online archives — thereby helping uncover information from sources that might not have thrived in the age of SEO.

The way humans react to AI content on the internet will likely govern how it evolves, potentially bursting an AI bubble by retreating to human-only enclaves on the web or requiring a higher level of trust signals from both human- and AI-made content.

"We find ourselves in a really challenging moment with this," concluded Nemeroff. "Being familiar with the environment and knowing its presence there is a key point to both changing the incentives around this as well as communicating what we value to the platforms that distribute it. I think we will start to see more examples of showing the provenance of higher quality content and people investing in that."

]]>
https://www.livescience.com/technology/artificial-intelligence/it-wont-be-so-much-a-ghost-town-as-a-zombie-apocalypse-how-ai-might-forever-change-how-we-use-the-internet M3gQHgWtp4EBk5qCbgKUCG Wed, 24 Dec 2025 16:19:00 +0000 Mon, 22 Dec 2025 23:05:17 +0000
<![CDATA[ Guess the number quiz: Can you work out these scientific numbers and constants and top the leaderboard? ]]> Whether you're cooking up chemistry, getting physical with physics or bending your mind over mathematics, one thing that remains constant is, well, constants. Some numbers are so fundamental to the way we conduct science that we'd be lost without them, and their discovery has helped us better understand the world around us.

So how many of these key figures do you know? Try our new quiz and find out. We'll be dropping another number in the mix every day for you to guess, and if you prove yourself to be a numberphile, maybe you'll make it to the top of our leaderboard. All you need to do is register and your score will be saved; be sure to leave a comment and share how you got on (but no spoilers, please).

Try more science quizzes

Live Science crossword: Test your knowledge on all things science with our weekly, free puzzle!

Periodic table of elements quiz: How many elements can you name in 10 minutes?

How quickly can you name all 12 Apollo astronauts that walked on the moon?

]]>
https://www.livescience.com/physics-mathematics/guess-the-number-quiz-can-you-work-out-these-scientific-numbers-and-constants-and-top-the-leaderboard eUnbFYf9pBpjUU9BBMo2k6 Wed, 24 Dec 2025 14:34:09 +0000 Thu, 25 Dec 2025 10:37:57 +0000