Category: Science News

  • Chimpanzee feet allow scientists a new grasp on human foot evolution

    {An investigation into the evolution of human walking by looking at how chimpanzees walk on two legs is the subject of a new research paper published in the March 2017 issue of the Journal of Human Evolution.}

    The human foot is distinguished from the feet of all other primates by the presence of a longitudinal arch, which spans numerous joints and bones of the midfoot region and is thought to stiffen the foot. This structure is thought to be a critical adaptation for bipedal locomotion, or walking on two legs, in part because this arch is absent from the feet of humans’ closest living relatives, the African apes.

    In contrast, African apes have long been thought to have highly mobile foot joints for climbing tree trunks and grasping branches, although few detailed quantitative studies have been carried out to confirm these beliefs.

    But now, Nathan Thompson, Ph.D., assistant professor of Anatomy at New York Institute of Technology College of Osteopathic Medicine (NYITCOM), is one of the researchers questioning some long-held ideas about the function and evolution of the human foot by investigating how chimpanzees use their feet when walking on two legs. The research team, including members Nicholas Holowka, Ph.D. (Harvard University); Brigitte Demes, Ph.D. (Stony Brook University School of Medicine); and Matthew O’Neill, Ph.D. (University of Arizona College of Medicine, Phoenix), conducted the research and collected data while all were at Stony Brook University (2013-2015).

    Most researchers studying human evolution assume a stark dichotomy between human and chimpanzee feet. One is a rigid lever that makes walking long distances easy and efficient. The other is a grasping device, much more mobile and less effective at walking on two legs. Fossil feet of early human ancestors are nearly always compared with chimpanzee feet, making knowledge of their foot biomechanics crucial for understanding how the human foot evolved. However, prior to this research, no one had been able to investigate directly whether differences existed between humans and chimpanzees in how the foot works during walking on two legs.

    To find out, this research team used high-speed motion capture to measure three-dimensional foot motion in chimpanzees and humans walking at similar speeds. They then compared ranges of midfoot motion between species.

    Contrary to expectations, the researchers found that human feet are more mobile, not less, than those of chimpanzees walking on two legs.

    “This finding upended our assumptions about how the feet of both humans and chimpanzees work. Based on simple visual observation, we’ve long known that human feet are stiffer than those of chimpanzees and other apes when the heel is first lifted off the ground in a walking step. What surprised us was that the human midfoot region flexes dramatically at the end of a step as the foot’s arch springs back into place following its compression during weight-bearing. This flexion motion is greater than the entire range of motion in the chimpanzee midfoot joints during a walking step, leading us to conclude that high midfoot joint mobility is actually advantageous for human walking. We never would have discovered this without being able to study chimpanzees with advanced motion capture technology,” said Holowka, of Harvard’s Department of Human Evolutionary Biology.

    Ultimately, the findings disprove the traditional dichotomy between human and chimpanzee feet, meaning researchers may have to rethink what can be learned from the fossil feet of humans’ earliest ancestors. “The presence of human-like midfoot joint morphology in fossil hominins can no longer be taken as indicating foot rigidity, but it may tell us about the evolution of human-like enhanced push-off mechanics,” said NYITCOM’s Thompson.

    Based on these findings, the researchers encourage future studies to consider the ways in which human foot morphology reflects longitudinal arch function throughout the full duration of stance phase, especially at the beginning and end of a step.

    Thompson added, “One of the things that is really remarkable about this project is that it shows us how much we have still to learn about our closest relatives. It seems like the more we learn about how chimpanzees move, the more we have to rethink some of the assumptions that paleoanthropologists have held on to for decades.”

    The researchers painted markers on the feet of both humans and chimpanzees in order to figure out how different bones and joints within the foot move in 3-D.

    Source: Science Daily

  • Gene therapy restores hearing in deaf mice, down to a whisper

    {Improved delivery vector better penetrates the inner ear, also restores balance in a mouse model of Usher syndrome.}

    In the summer of 2015, a team at Boston Children’s Hospital and Harvard Medical School reported restoring rudimentary hearing in genetically deaf mice using gene therapy. Now the Boston Children’s research team reports restoring a much higher level of hearing — down to 25 decibels, the equivalent of a whisper — using an improved gene therapy vector developed at Massachusetts Eye and Ear.

    The new vector and the mouse studies are described in two back-to-back papers in Nature Biotechnology.

    While previous vectors have only been able to penetrate the cochlea’s inner hair cells, the first Nature Biotechnology study showed that a new synthetic vector, Anc80, safely transferred genes to the hard-to-reach outer hair cells when introduced into the cochlea (see images). This study’s three Harvard Medical School senior investigators were Jeffrey R. Holt, PhD, of Boston Children’s Hospital; Konstantina Stankovic, MD, PhD, of Mass. Eye and Ear; and Luk H. Vandenberghe, PhD, who led Anc80’s development in 2015 at Mass. Eye and Ear’s Grousbeck Gene Therapy Center.

    “We have shown that Anc80 works remarkably well in terms of infecting cells of interest in the inner ear,” says Stankovic, an otologic surgeon at Mass. Eye and Ear and associate professor of otolaryngology at Harvard Medical School. “With more than 100 genes already known to cause deafness in humans, there are many patients who may eventually benefit from this technology.”

    The second study, led by Gwenaëlle Géléoc, PhD, of the Department of Otolaryngology and F.M. Kirby Neurobiology Center at Boston Children’s, used Anc80 to deliver a specific corrected gene in a mouse model of Usher syndrome, the most common genetic form of deaf-blindness, which also impairs balance.

    “This strategy is the most effective one we’ve tested,” Géléoc says. “Outer hair cells amplify sound, allowing inner hair cells to send a stronger signal to the brain. We now have a system that works well and rescues auditory and vestibular function to a level that’s never been achieved before.”

    Ushering in gene therapy for deafness

    Géléoc and colleagues at Boston Children’s Hospital studied mice with a mutation in Ush1c, the same mutation that causes Usher type 1c in humans. The mutation causes a protein called harmonin to be nonfunctional. As a result, the sensory hair cell bundles that receive sound and signal the brain deteriorate and become disorganized, leading to profound hearing loss.

    When a corrected Ush1c gene was introduced into the inner ears of the mice, the inner and outer hair cells in the cochlea began to produce normal full-length harmonin. The hair cells formed normal bundles (see images) that responded to sound waves and signaled the brain, as measured by electrical recordings.

    Most importantly, deaf mice treated soon after birth began to hear. Géléoc and colleagues showed this first in a “startle box,” which detects whether a mouse jumps in response to sudden loud sounds. When they next measured responses in the auditory regions of the brain, a more sensitive test, the mice responded to much quieter sounds: 19 of 25 mice heard sounds quieter than 80 decibels, and a few could hear sounds as soft as 25-30 decibels, like normal mice.

    “Now, you can whisper, and they can hear you,” says Géléoc, also an assistant professor of otolaryngology at Harvard Medical School.

    Margaret Kenna, MD, MPH, a specialist in genetic hearing loss at Boston Children’s who does research on Usher syndrome, is excited about the work. “Anything that could stabilize or improve native hearing at an early age would give a huge boost to a child’s ability to learn and use spoken language,” she says. “Cochlear implants are great, but your own hearing is better in terms of range of frequencies, nuance for hearing voices, music and background noise, and figuring out which direction a sound is coming from. In addition, the improvement in balance could translate to better and safer mobility for Usher Syndrome patients.”

    Restoring balance and potentially vision

    Since patients (and mice) with Usher 1c also have balance problems caused by hair-cell damage in the vestibular organs, the researchers also tested whether gene therapy restored balance. It did, eliminating the erratic movements of mice with vestibular dysfunction (see images) and, in another test, enabling the mice to stay on a rotating rod for longer periods without falling off.

    Further work is needed before the technology can be brought to patients. One caveat is that the mice were treated right after birth; hearing and balance were not restored when gene therapy was delayed 10-12 days. The researchers will do further studies to determine the reasons for this. However, when treated early, the effects persisted for at least six months, with only a slight decline between 6 weeks and 3 months. The researchers also hope to test gene therapy in larger animals, and plan to develop novel therapies for other forms of genetic hearing loss.

    Usher syndrome also causes blindness, as the light-sensing cells in the retina gradually deteriorate. Although these studies did not test for vision restoration, gene therapy in the eye is already starting to be used for other disorders.

    “We already know the vector works in the retina,” says Géléoc, “and because deterioration is slower in the retina, there is a longer window for treatment.”

    “Progress in gene therapy for blindness is much further along than for hearing, and I believe our studies take an important step toward unlocking a future of hearing gene therapy,” says Vandenberghe, also an assistant professor of ophthalmology at Harvard Medical School. “In the case of Usher syndrome, combining both approaches to ultimately treat both the blinding and hearing aspects of disease is very compelling, and something we hope to work toward.”

    “This is a landmark study,” says Holt, director of otolaryngology research at Boston Children’s Hospital, who was also a co-author on the second paper. “Here we show, for the first time, that by delivering the correct gene sequence to a large number of sensory cells in the ear, we can restore both hearing and balance to near-normal levels.”

    Unaffected mice, at left, have sensory hair bundles organized in 'V' formations with three rows of cilia (bottom left). This orderly structure falls apart in the mutant mice (middle column), but is dramatically restored after gene therapy treatment.

  • Practice makes perfect, and ‘overlearning’ locks it in

    {Want to learn something and then quickly make that mastery stick? A new Brown University study in which people learned visual perception tasks suggests that you should keep practicing for a little while even after you think you can’t get any better. Such “overlearning” locked in performance gains, according to the Nature Neuroscience paper that describes the effect and its underlying neurophysiology.}

    Everybody from actors learning lines, to musicians learning new songs, to teachers trying to impart key facts to students has observed that learning has to “sink in” in the brain. Both prior studies and the new one show, for example, that when people learn a new task and then learn a similar one soon afterward, the second instance of learning often interferes with and undermines the mastery achieved on the first.

    The new study shows that overlearning protects against such interference, cementing learning so well and so quickly, in fact, that the opposite kind of interference happens instead. For a time, overlearning the first task prevents effective learning of the second task, as if learning becomes locked down for the sake of preserving mastery of the first task. The underlying mechanism, the researchers discovered, appears to be a temporary shift in the balance of two neurotransmitters that control neural flexibility, or “plasticity,” in the part of the brain where the learning occurred.

    “These results suggest that just a short period of overlearning drastically changes a post-training plastic and unstable [learning state] to a hyperstabilized state that is resilient against, and even disrupts, new learning,” wrote the team led by corresponding author Takeo Watanabe, the Fred M. Seed Professor of Cognitive Linguistic and Psychological Sciences at Brown.

    {{Different ways to learn}}

    The findings arose from several experiments in which Watanabe, lead author Kazuhisa Shibata and their co-authors asked a total of 183 volunteers to learn to detect which of two successively presented images had a patterned orientation and which depicted just unstructured noise. After eight rounds, or “blocks,” of training, which lasted about 20 minutes total, the initial 60 volunteers seemed to master the task.

    With that established, the researchers then formed two new groups of volunteers. After a pre-test before any training, a first group practiced the task for eight blocks, waited 30 minutes, and then trained for eight blocks on a new similar task. The next day they were tested on both tasks to assess what they learned. The other group did the same thing, except that they overlearned the first task for 16 blocks of training.

    On the next day’s tests, the first group performed quite poorly on the first task compared to the pre-test but showed substantial progress on the second task. Meanwhile the overlearning group showed strong performance on the first task, but no significant improvement on the second. Regular learning subjects were vulnerable to interference by the second task (as expected) but overlearners were not.

    In the second experiment, again with new volunteers, the researchers lengthened the break between task training from 30 minutes to 3.5 hours. This time on the next day’s tests, each group — those who overlearned and those who didn’t — showed similar performance patterns in that they both demonstrated significant improvement on both tasks. Given enough time between learning tasks, people successfully learned both and neither kind of interference was evident.

    What was going on? The researchers sought answers in a third experiment by using the technology of magnetic resonance spectroscopy to track the balance of two neurotransmitters in volunteers as they learned. Focusing on the “early visual” region in each subject’s brain, the researchers tracked the ratio of glutamate, which promotes plasticity, and GABA, which inhibits it. One group of volunteers trained on a task for eight blocks while the other group trained on it for 16. Meanwhile they all underwent MRS scans before training, 30 minutes after, and 3.5 hours after, and took the usual pre-training and post-training performance tests.

    The overlearners and the regular learners revealed perfectly opposite patterns in how the ratio of their neurotransmitter levels changed. They all started from the same baseline, but for regular learners the ratio of glutamate to GABA increased markedly 30 minutes after training, before declining almost back to baseline by 3.5 hours. Meanwhile, the overlearners showed a sharp decline in the ratio of glutamate to GABA 30 minutes after training before it rose nearly back to baseline by 3.5 hours.

    In other words, at the stage when regular learners were at the peak of plasticity (leaving their first training vulnerable to interference from a second training), overlearners were hunkered down with inhibition (protecting the first training, but closing the door on the second). After 3.5 hours everyone was pretty much back to normal.

    In a final experiment, the researchers showed that the amount of decline in the glutamate to GABA ratio in each volunteer was proportional to the degree to which their first training interfered with their second training, suggesting that the link between the neurotransmitter ratio and the effects of overlearning was no coincidence.

    {{Timing is everything}}

    Though the study focused on a visual learning task, Watanabe said he is confident the effect will translate to other kinds of learning, such as motor tasks, where phenomena such as interference work similarly.

    If further studies confirm that overlearning’s effects indeed carry over to learning in general, the findings would suggest some advice for optimizing the timing of training:

    To cement training quickly, overlearning should help, but beware that it might interfere with similar learning that follows immediately.

    Without overlearning, don’t try to learn something similar in rapid succession because there is a risk that the second bout of learning will undermine the first.

    If you have enough time, you can learn two tasks without interference by leaving a few hours between the two trainings.

    “If you want to learn something very important, maybe overlearning is a good way,” Watanabe said. “If you do overlearning, you may be able to increase the chance that what you learn will not be gone.”

    A new study shows that learning a new task past the point of mastery helps protect that learning from interference that could undermine it. The study used a visual task, but may extend to other forms of learning, such as motor learning.

  • Sleep research: high-resolution images show how the brain resets during sleep

    {Striking electron microscope pictures from inside the brains of mice suggest what happens in our own brain every day: Our synapses — the junctions between nerve cells — grow strong and large during the stimulation of daytime, then shrink by nearly 20 percent while we sleep, creating room for more growth and learning the next day.}

    The four-year research project published today in Science offers a direct visual proof of the “synaptic homeostasis hypothesis” (SHY) proposed by Drs. Chiara Cirelli and Giulio Tononi of the Wisconsin Center for Sleep and Consciousness.

    This hypothesis holds that sleep is the price we pay for brains that are plastic and able to keep learning new things.

    When a synapse is repeatedly activated during waking, it grows in strength, and this growth is believed to be important for learning and memory. According to SHY, however, this growth needs to be balanced to avoid the saturation of synapses and the obliteration of neural signaling and memories. Sleep is believed to be the best time for this process of renormalization, since when asleep we pay much less attention to the external world and are free from the “here and now.”

    When synapses get stronger and more effective they also become bigger, and conversely they shrink when they weaken. Thus, Cirelli and Tononi reasoned that a direct test of SHY was to determine whether the size of synapses changes between sleep and wake. To do so, they used a method with extremely high spatial resolution called serial scanning 3-D electron microscopy.

    The research itself was a massive undertaking, with many research specialists working for four years to photograph, reconstruct, and analyze two areas of cerebral cortex in the mouse brain. They were able to reconstruct 6,920 synapses and measure their size.

    The team deliberately did not know whether they were analyzing the brain cells of a well-rested mouse or one that had been awake. When they finally “broke the code” and correlated the measurements with the amount of sleep the mice had during the six to eight hours before the image was taken, they found that a few hours of sleep led on average to an 18 percent decrease in the size of the synapses. These changes occurred in both areas of the cerebral cortex and were proportional to the size of the synapses.

    The scaling occurred in about 80 percent of the synapses but spared the largest ones, which may be associated with the most stable memory traces.

    “This shows, in unequivocal ultrastructural terms, that the balance of synaptic size and strength is upset by wake and restored by sleep,” Cirelli says. “It is remarkable that the vast majority of synapses in the cortex undergo such a large change in size over just a few hours of wake and sleep.”

    Tononi adds, “Extrapolating from mice to humans, our findings mean that every night trillions of synapses in our cortex could get slimmer by nearly 20 percent.”

    The study was published today in Science along with research from Dr. Richard Huganir’s laboratory at Johns Hopkins University in Baltimore. This study, using biochemical and molecular methods, confirms SHY’s prediction that synapses undergo a process of scaling down during sleep, and identifies genes important for this process.

    This picture shows 3-D reconstructions of electron microscope images of tree branch-like dendrites. At the end of the branches are cup-like structures called the spines, and in the tips of the spines are synapses. By studying thousands of images like these, the Wisconsin researchers showed that the synapses shrink after the mouse sleeps and grow again during the next wakeful period.

  • Modern parenting may hinder brain development, research suggests

    {Social practices and cultural beliefs of modern life are preventing healthy brain and emotional development in children, according to an interdisciplinary body of research presented recently at a symposium at the University of Notre Dame.}

    “Life outcomes for American youth are worsening, especially in comparison to 50 years ago,” says Darcia Narvaez, Notre Dame professor of psychology who specializes in moral development in children and how early life experiences can influence brain development.

    “Ill-advised practices and beliefs have become commonplace in our culture, such as the use of infant formula, the isolation of infants in their own rooms or the belief that responding too quickly to a fussing baby will ‘spoil’ it,” Narvaez says.

    This new research links certain early, nurturing parenting practices — the kind common in foraging hunter-gatherer societies — to specific, healthy emotional outcomes in adulthood, and has many experts rethinking some of our modern, cultural child-rearing “norms.”

    “Breast-feeding infants, responsiveness to crying, almost constant touch and having multiple adult caregivers are some of the nurturing ancestral parenting practices that are shown to positively impact the developing brain, which not only shapes personality, but also helps physical health and moral development,” says Narvaez.

    Studies show that responding to a baby’s needs (not letting a baby “cry it out”) has been shown to influence the development of conscience; positive touch affects stress reactivity, impulse control and empathy; free play in nature influences social capacities and aggression; and a set of supportive caregivers (beyond the mother alone) predicts IQ and ego resilience as well as empathy.

    The United States has been on a downward trajectory on all of these care characteristics, according to Narvaez. Instead of being held, infants spend much more time in carriers, car seats and strollers than they did in the past. Only about 15 percent of mothers are breast-feeding at all by 12 months, extended families are broken up and free play allowed by parents has decreased dramatically since 1970.

    Whether as a corollary of these modern practices or the result of other forces, research documents an epidemic of anxiety and depression among all age groups, including young children; rising rates of aggressive behavior and delinquency in young children; and decreasing empathy, the backbone of compassionate, moral behavior, among college students.

    According to Narvaez, however, other relatives and teachers also can have a beneficial impact when a child feels safe in their presence. Also, early deficits can be made up later, she says.

    “The right brain, which governs much of our self-regulation, creativity and empathy, can grow throughout life. The right brain grows through full-body experience like rough-and-tumble play, dancing or freelance artistic creation. So at any point, a parent can take up a creative activity with a child and they can grow together.”

    Mother breastfeeding her baby.

  • Evidence of 2 billion years of volcanic activity on Mars

    {Meteorite found in Africa provides clues to evolution of the red planet.}

    Analysis of a Martian meteorite found in Africa in 2012 has uncovered evidence of at least 2 billion years of volcanic activity on Mars. This confirms that some of the longest-lived volcanoes in the solar system may be found on the Red Planet.

    On Mars, shield volcanoes and lava plains formed from lava flowing over long distances, similar to the formation of the Hawaiian Islands. The largest Martian volcano, Olympus Mons, is nearly 17 miles high. That’s almost triple the height of Earth’s tallest volcano, Mauna Kea, at 6.25 miles.

    Tom Lapen, a geology professor at the University of Houston and lead author of a paper published Feb. 1 in the journal Science Advances, said the findings offer new clues to how the planet evolved and insight into the history of volcanic activity on Mars.

    Much of what we know about the composition of rocks from volcanoes on Mars comes from meteorites found on Earth. Analysis of different substances provides information about the age of the meteorite, its magma source, length of time in space and how long the meteorite was on Earth’s surface.

    Something slammed into the surface of Mars 1 million years ago, hitting a volcano or lava plain. This impact ejected rocks into space. Fragments of these rocks crossed Earth’s orbit and fell as meteorites.

    The meteorite, known as Northwest Africa 7635 and discovered in 2012, was found to be a type of volcanic rock called a shergottite. Eleven of these Martian meteorites, with similar chemical composition and ejection time, have been found.

    “We see that they came from a similar volcanic source,” Lapen said. “Given that they also have the same ejection time, we can conclude that these come from the same location on Mars.”

    Together, these meteorites provide information about a single location on Mars. Previously analyzed meteorites range in age from 327 million to 600 million years old. In contrast, the meteorite analyzed by Lapen’s research team was formed 2.4 billion years ago and suggests that it was ejected from one of the longest-lived volcanic centers in the solar system.

  • Low level of oxygen in Earth’s middle ages delayed evolution for two billion years

    {A low level of atmospheric oxygen in Earth’s middle ages held back evolution for 2 billion years, raising fresh questions about the origins of life on this planet.}

    New research by the University of Exeter explains how oxygen was trapped at such low levels.

    Professor Tim Lenton and Dr Stuart Daines of the University of Exeter Geography department created a computer model to explain how oxygen stabilised at low levels and failed to rise any further, despite oxygen already being produced by early photosynthesis. Their research helps explain why the ‘Great Oxidation Event’, which introduced oxygen into the atmosphere around 2.4 billion years ago, did not generate modern levels of oxygen.

    In their paper, published in Nature Communications, “Atmospheric oxygen regulation at low Proterozoic levels by incomplete oxidative weathering of sedimentary organic carbon,” the University of Exeter scientists explain how organic material — the dead bodies of simple lifeforms — accumulated in Earth’s sedimentary rocks. After the Great Oxidation, and once plate tectonics pushed these sediments to the surface, they reacted with oxygen in the atmosphere for the first time.

    The more oxygen in the atmosphere, the faster it reacted with this organic material, creating a regulatory mechanism whereby the oxygen was consumed by the sediments at the same rate at which it was produced.

    This mechanism broke down with the rise of land plants and a resultant doubling of global photosynthesis. The increasing concentration of oxygen in the atmosphere eventually overwhelmed the control on oxygen and meant it could finally rise to the levels we are used to today.

    This helped animals colonise the land, leading eventually to the evolution of humankind.

    The model suggests atmospheric oxygen was likely at around 10% of present day levels during the two billion years following the Great Oxidation Event, and no lower than 1% of the oxygen levels we know today.

    Professor Lenton said: “This time in Earth’s history was a bit of a catch-22 situation. It wasn’t possible to evolve complex life forms because there was not enough oxygen in the atmosphere, and there wasn’t enough oxygen because complex plants hadn’t evolved. It was only when land plants came about that we saw a more significant rise in atmospheric oxygen.

    “The history of life on Earth is closely intertwined with the physical and chemical mechanisms of our planet. It is clear that life has had a profound role in creating the world we are used to, and the planet has similarly affected the trajectory of life. I think it’s important people acknowledge the miracle of their own existence and recognise what an amazing planet this is.”

    Life on Earth is believed to have begun with the first bacteria evolving 3.8 billion years ago. Around 2.7 billion years ago the first oxygen-producing photosynthesis evolved in the oceans. But it was not until 600 million years ago that the first multi-celled animals such as sponges and jellyfish emerged in the ocean. By 470 million years ago the first plants grew on land, with the first land animals such as millipedes appearing around 428 million years ago. Mammals did not rise to ecological prominence until after the dinosaurs went extinct 65 million years ago. Humans first appeared on Earth 200,000 years ago.

    New research explains how atmospheric oxygen was trapped at low levels following the Great Oxidation.

  • Ancient DNA reveals genetic ‘continuity’ between Stone Age, modern populations in East Asia

    {In contrast to Western Europeans, new research finds contemporary East Asians are genetically much closer to the ancient hunter-gatherers that lived in the same region eight thousand years previously.}

    Researchers working on ancient DNA extracted from human remains interred almost 8,000 years ago in a cave in the Russian Far East have found that the genetic makeup of certain modern East Asian populations closely resemble that of their hunter-gatherer ancestors.

    The study, published in the journal Science Advances, is the first to obtain nuclear genome data from ancient mainland East Asia and compare the results to modern populations.

    The findings indicate that there was no major migratory interruption, or “population turnover,” for well over seven millennia. Consequently, some contemporary ethnic groups share a remarkable genetic similarity to Stone Age hunters that once roamed the same region.

    The high “genetic continuity” in East Asia is in stark contrast to most of Western Europe, where sustained migrations of early farmers from the Levant overwhelmed hunter-gatherer populations. This was followed by a wave of horse riders from Central Asia during the Bronze Age. These events were likely driven by the success of emerging technologies such as agriculture and metallurgy.

    The new research shows that, at least for part of East Asia, the story differs — with little genetic disruption in populations since the early Neolithic period.

    This has allowed an exceptional genetic proximity between the Ulchi people of the Amur Basin, near where Russia borders China and North Korea, and the ancient hunter-gatherers laid to rest in a cave close to the Ulchi’s native land, despite the vast expanse of history separating them.

    The researchers suggest that the sheer scale of East Asia and dramatic variations in its climate may have prevented the sweeping influence of Neolithic agriculture and the accompanying migrations that replaced hunter-gatherers across much of Europe. They note that the Ulchi retained their hunter-fisher-gatherer lifestyle until recent times.

    “Genetically speaking, the populations across northern East Asia have changed very little for around eight millennia,” said senior author Andrea Manica from the University of Cambridge, who conducted the work with an international team, including colleagues from Ulsan National Institute of Science and Technology in Korea, and Trinity College Dublin and University College Dublin in Ireland.

    “Once we accounted for some local intermingling, the Ulchi and the ancient hunter-gatherers appeared to be almost the same population from a genetic point of view, even though there are thousands of years between them.”

    The new study also provides further support for the ‘dual origin’ theory of modern Japanese populations: that they descend from a combination of hunter-gatherers and agriculturalists that eventually brought wet rice farming from southern China. A similar pattern is also found in neighbouring Koreans, who are genetically very similar to Japanese.

    However, Manica says that much more DNA data from Neolithic China is required to pinpoint the origin of the agriculturalists involved in this mixture.

    The team from Trinity College Dublin was responsible for extracting DNA from the remains, which were found in a cave known as Devil’s Gate. Situated in a mountainous area close to the far eastern coast of Russia that faces northern Japan, the cave was first excavated by a Soviet team in 1973.

    Along with hundreds of stone and bone tools, the carbonised wood of a former dwelling, and woven wild grass that is one of the earliest examples of a textile, the cave contained the incomplete bodies of five humans.

    If ancient DNA can be found in sufficiently preserved remains, sequencing it involves sifting through the contamination of millennia. The best samples for analysis from Devil’s Gate were obtained from the skulls of two females: one in her early twenties, the other close to fifty. The site itself dates back over 9,000 years, but the two women are estimated to have died around 7,700 years ago.

    Researchers were able to glean the most from the middle-aged woman. Her DNA revealed she likely had brown eyes and thick, straight hair. She almost certainly lacked the ability to tolerate lactose, but was unlikely to have suffered from ‘alcohol flush’: the skin reaction to alcohol now common across East Asia.

    While the Devil’s Gate samples show high genetic affinity to the Ulchi, fishermen from the same area who speak a Tungusic language, they are also close to other Tungusic-speaking populations in present-day China, such as the Oroqen and Hezhen.

    “These are ethnic groups with traditional societies and deep roots across eastern Russia and China, whose culture, language and populations are rapidly dwindling,” added lead author Veronika Siska, also from Cambridge.

    “Our work suggests that these groups form a strong genetic lineage descending directly from the early Neolithic hunter-gatherers who inhabited the same region thousands of years previously.”

    Exterior of Devil's Gate: the cave in the Primorye region, about 30km from the far eastern coast of Russia, where the human remains were found from which the ancient DNA used in the study was extracted.
  • Brain-computer interface allows completely locked-in people to communicate

    {Completely locked-in participants report being “happy”}

    {A computer interface that can decipher the thoughts of people who are unable to communicate could revolutionize the lives of those living with completely locked-in syndrome, according to a new paper publishing January 31st, 2017 in PLOS Biology. Counter to expectations, the participants in the study reported being “happy,” despite their extreme condition. The research was conducted by a multinational team, led by Professor Niels Birbaumer, at the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland.}

    Patients suffering from complete paralysis, but with preserved awareness, cognition, eye movements, and blinking, are classified as having locked-in syndrome. If eye movements are also lost, the condition is referred to as completely locked-in syndrome.

    In the trial, patients with completely locked-in syndrome were able to respond “yes” or “no” to spoken questions, by thinking the answers. A non-invasive brain-computer interface detected their responses by measuring changes in blood oxygen levels in the brain.
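
    The decoding protocol described above can be pictured with a toy sketch. This is not the study’s actual classifier (which relied on near-infrared spectroscopy signals and trained question sessions); the function, threshold, and values below are invented purely to illustrate the idea of mapping a measured oxygenation change to a binary answer:

```python
def decode_answer(oxygenation_window, baseline, threshold=0.5):
    """Toy yes/no decoder: return 'yes' if the mean blood-oxygenation
    level during the answer window rises clearly above the person's
    resting baseline, else 'no'. All numbers are illustrative only."""
    mean_level = sum(oxygenation_window) / len(oxygenation_window)
    return "yes" if mean_level - baseline > threshold else "no"

# Simulated measurements in arbitrary units:
print(decode_answer([2.1, 2.4, 2.2], baseline=1.0))  # yes
print(decode_answer([1.1, 0.9, 1.2], baseline=1.0))  # no
```

    In the real system, the mapping from brain signal to answer was learned per patient from training questions with known answers, rather than fixed by a hand-picked threshold.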

    The results overturn previous theories that postulate that people with completely locked-in syndrome lack the goal-directed thinking necessary to use a brain-computer interface and are, therefore, incapable of communication.

    Extensive investigations were carried out in four patients with ALS (amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease) — a progressive motor neuron disease that leads to complete destruction of the part of the nervous system responsible for movement.

    The researchers asked personal questions with known answers, as well as open questions, all requiring “yes” or “no” answers, including: “Your husband’s name is Joachim?” and “Are you happy?” They found the questions elicited correct responses in seventy percent of the trials.

    Professor Birbaumer said: “The striking results overturn my own theory that people with completely locked-in syndrome are not capable of communication. We found that all four patients we tested were able to answer the personal questions we asked them, using their thoughts alone. If we can replicate this study in more patients, I believe we could restore useful communication in completely locked-in states for people with motor neuron diseases.”

    The question “Are you happy?” resulted in a consistent “yes” response from the four people, repeated over weeks of questioning.

    Professor Birbaumer added: “We were initially surprised at the positive responses when we questioned the four completely locked-in patients about their quality of life. All four had accepted artificial ventilation in order to sustain their life, when breathing became impossible; thus, in a sense, they had already chosen to live. What we observed was that as long as they received satisfactory care at home, they found their quality of life acceptable. It is for this reason that, if we could make this technique widely clinically available, it could have a huge impact on the day-to-day life of people with completely locked-in syndrome.”

    In one case, a family requested that the researchers ask one of the participants whether he would agree for his daughter to marry her boyfriend ‘Mario’. The answer was “no,” nine times out of ten.

    Professor John Donoghue, Director of the Wyss Center, said: “Restoring communication for completely locked-in patients is a crucial first step in the challenge to regain movement. The Wyss Center plans to build on the results of this study to develop clinically useful technology that will be available to people with paralysis resulting from ALS, stroke, or spinal cord injury. The technology used in the study also has broader applications that we believe could be further developed to treat and monitor people with a wide range of neuro-disorders.”

    The brain-computer interface in the study used near-infrared spectroscopy combined with electroencephalography (EEG) to measure blood oxygenation and electrical activity in the brain. While other brain-computer interfaces have previously enabled some paralyzed patients to communicate, near-infrared spectroscopy is, so far, the only successful approach to restore communication to patients suffering from completely locked-in syndrome.

    The NIRS/EEG brain computer interface system shown on a model.
  • Prediction of large earthquake probability improved

    {As part of the “Research in Collaborative Mathematics” project run by the Obra Social “la Caixa,” researchers of the Mathematics Research Centre (CRM) and the UAB have developed a mathematical law to explain the size distribution of earthquakes, even in the cases of large-scale earthquakes such as those which occurred in Sumatra (2004) and in Japan (2011).}

    The probability of an earthquake occurring decreases exponentially as its magnitude increases. Fortunately, mild earthquakes are more probable than devastatingly large ones. This relation between probability and earthquake magnitude follows a mathematical curve called the Gutenberg-Richter law, and helps seismologists predict the probabilities of an earthquake of a specific magnitude occurring in some part of the planet.
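
    In its standard form, the Gutenberg-Richter law says the expected number of earthquakes of at least magnitude M follows log10 N = a − b·M, so each unit of magnitude makes events roughly ten times rarer when b is close to 1. A minimal sketch, with illustrative parameter values rather than ones fitted to any real earthquake catalogue:

```python
def expected_count(magnitude, a=5.0, b=1.0):
    """Gutenberg-Richter relation: expected number of earthquakes of
    at least the given magnitude, log10 N = a - b*M.
    The a and b values here are illustrative, not fitted."""
    return 10 ** (a - b * magnitude)

# With b = 1, each unit increase in magnitude makes events ~10x rarer:
ratio = expected_count(5.0) / expected_count(6.0)
print(ratio)  # ~10
```

    The b-value varies by region, which is one reason the law is fitted locally when building seismic risk estimates.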

    The law, however, lacks the tools needed to describe extreme situations. For example, although the probability of a magnitude-12 earthquake is zero, since technically this would imply the Earth breaking in half, the mathematics of the Gutenberg-Richter law do not treat a magnitude-14 earthquake as impossible.

    “The limitations of the law are determined by the fact that the Earth is finite, and the law describes ideal systems, in a planet with an infinite surface,” explains Isabel Serra, first author of the article, researcher at CRM and affiliate lecturer of the UAB Department of Mathematics.

    To overcome these shortcomings, the researchers studied a small modification of the Gutenberg-Richter law: an additional term that modifies the curve precisely in the region where probabilities are smallest. “This modification has important practical effects when estimating the risks or evaluating possible economic losses. Preparing for a catastrophe where the losses could be, in the worst of cases, very high in value is not the same as not being able to calculate an estimated maximum value,” clarifies co-author Álvaro Corral, researcher at the Mathematics Research Centre and the UAB Department of Mathematics.

    Obtaining the mathematical curve which best fits the registered data on earthquakes is not an easy task when dealing with large tremors. From 1950 to 2003 there were only seven earthquakes measuring higher than 8.5 on the Richter scale, and since 2004 there have been only six. Although we are now in a more active period following the Sumatra earthquake, such events are so rare that the statistics remain poor, and the mathematical treatment of the problem becomes much more complex than when there is an abundance of data. For Corral, “this is where the role of mathematics is fundamental to complement the research of seismologists and guarantee the accuracy of the studies.”

    According to the researcher, the approach currently used to analyse seismic risk is not fully correct and, in fact, many risk maps are downright incorrect, “which is what happened with the Tohoku earthquake of 2011, where the area contained an under-dimensioned risk.” “Our approach has corrected some things, but we are still far from being able to give correct results in specific regions,” Corral continues.

    The mathematical expression of the law in terms of the seismic moment, proposed by Serra and Corral, meets all the conditions needed to determine both the probability of smaller earthquakes and of large ones, adjusting itself to the most recent extreme cases of Tohoku, in Japan (2011), and Sumatra, in Indonesia (2004), while assigning negligible probabilities to earthquakes of disproportionate magnitudes.
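
    The article does not reproduce Serra and Corral’s exact expression, but the general idea of modifying the pure power law so that the probability of enormous events rolls off can be sketched with a tapered Gutenberg-Richter form for the seismic moment: a power-law tail multiplied by an exponential taper beyond a “corner” moment. All parameter values below are illustrative assumptions, not the paper’s fitted values:

```python
import math

def tapered_gr_survivor(m, m_threshold=1e17, beta=0.66, m_corner=1e21):
    """Fraction of events with seismic moment >= m (in N*m), under a
    tapered Gutenberg-Richter form: power law times exponential taper.
    Parameter values are illustrative, not taken from the paper."""
    return (m_threshold / m) ** beta * math.exp((m_threshold - m) / m_corner)

# The taper barely affects moderate events but makes the probability
# of disproportionately large ones vanishingly small:
moderate = tapered_gr_survivor(1e18)  # close to the pure power law
huge = tapered_gr_survivor(1e23)      # effectively negligible
```

    Near the threshold moment the curve is indistinguishable from the classic law; far beyond the corner moment the exponential factor dominates, which is the qualitative behaviour the modification described above is meant to capture.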

    The derived Gutenberg-Richter law has also been used to begin to explore its applications in the financial world. Isabel Serra worked in this field before beginning to study earthquakes mathematically. “The risk assessment of a firm’s economic losses is a subject insurance companies take very seriously, and the behaviour is similar: the probability of suffering losses decreases as the volume of losses increases, according to a law that is similar to that of Gutenberg-Richter, but there are limit values which these laws do not take into consideration, since no matter how big the amount, the probability of losses of that amount never reaches zero,” Serra explains. “That makes the ‘expected value of losses’ enormous. To solve this, changes would have to be made to the law similar to those we introduced to the law on earthquakes.”