Category: Science News

  • Helping pays off: People who care for others live longer

    {Older people who help and support others live longer. These are the findings of a study published in the journal Evolution and Human Behavior, conducted by researchers from the University of Basel, Edith Cowan University, the University of Western Australia, the Humboldt University of Berlin, and the Max Planck Institute for Human Development in Berlin.}

    Older people who help and support others are also doing themselves a favor. An international research team has found that grandparents who care for their grandchildren on average live longer than grandparents who do not. The researchers conducted survival analyses of over 500 people aged between 70 and 103 years, drawing on data from the Berlin Aging Study collected between 1990 and 2009.

    In contrast to most previous studies on the topic, the researchers deliberately did not include grandparents who were primary or custodial caregivers. Instead, they compared grandparents who provided occasional childcare with grandparents who did not, as well as with older adults who did not have children or grandchildren but who provided care for others in their social network.

    Emotional support

    The results of their analyses show that this kind of caregiving can have a positive effect on caregivers’ survival. Half of the grandparents who took care of their grandchildren were still alive about ten years after the first interview in 1990. The same applied to participants who did not have grandchildren but who supported their children — for example, by helping with housework. In contrast, about half of those who did not help others died within five years.

    The researchers were also able to show that this positive effect of caregiving on mortality was not limited to help and caregiving within the family. The data analysis showed that childless older adults who provided others with emotional support, for example, also benefited. Half of these helpers lived for another seven years, whereas non-helpers on average lived for only another four years.
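
    The figures above are median survival times: the point at which half of a group is still alive. A minimal sketch of how such medians are estimated with a Kaplan-Meier analysis, using synthetic follow-up data and the Python lifelines package rather than the study’s actual data or code:

```python
# Illustrative only: synthetic data, not the Berlin Aging Study dataset.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)

# Hypothetical follow-up times (years from first interview) and death indicators
# for two made-up groups: occasional helpers vs. non-helpers.
helpers_years = rng.exponential(scale=10.0, size=200)      # longer survival on average
nonhelpers_years = rng.exponential(scale=5.0, size=200)    # shorter survival on average
helpers_observed = rng.random(200) < 0.8                   # True = death observed, False = censored
nonhelpers_observed = rng.random(200) < 0.8

kmf = KaplanMeierFitter()
for label, years, observed in [("helpers", helpers_years, helpers_observed),
                               ("non-helpers", nonhelpers_years, nonhelpers_observed)]:
    kmf.fit(durations=years, event_observed=observed, label=label)
    # Median survival: the time by which half of the group has died.
    print(label, "median survival (years):", kmf.median_survival_time_)
```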

    Too intense involvement causes stress

    “But helping shouldn’t be misunderstood as a panacea for a longer life,” says Ralph Hertwig, Director of the Center for Adaptive Rationality at the Max Planck Institute for Human Development. “A moderate level of caregiving involvement does seem to have positive effects on health. But previous studies have shown that more intense involvement causes stress, which has negative effects on physical and mental health,” says Hertwig. As it is not customary for grandparents in Germany and Switzerland to take custodial care of their grandchildren, primary and custodial caregivers were not included in the analyses.

    The researchers think that prosocial behavior was originally rooted in the family. “It seems plausible that the development of parents’ and grandparents’ prosocial behavior toward their kin left its imprint on the human body in terms of a neural and hormonal system that subsequently laid the foundation for the evolution of cooperation and altruistic behavior towards non-kin,” says first author Sonja Hilbrand, doctoral student in the Department of Psychology at the University of Basel.

  • Sunlight offers surprise benefit: It energizes infection fighting T cells

    {Sunlight allows us to make vitamin D, credited with healthier living, but a surprise research finding could reveal another powerful benefit of getting some sun.}

    Georgetown University Medical Center researchers have found that sunlight, through a mechanism separate from vitamin D production, energizes T cells that play a central role in human immunity.

    Their findings, published today in Scientific Reports, suggest how the skin, the body’s largest organ, stays alert to the many microbes that can nest there.

    “We all know sunlight provides vitamin D, which is suggested to have an impact on immunity, among other things. But what we found is a completely separate role of sunlight on immunity,” says the study’s senior investigator, Gerard Ahern, PhD, associate professor in Georgetown’s Department of Pharmacology and Physiology. “Some of the roles attributed to vitamin D on immunity may be due to this new mechanism.”

    They specifically found that low levels of blue light, found in sun rays, make T cells move faster — the first report of a human cell responding to sunlight by speeding its pace.

    “T cells, whether they are helper or killer, need to move to do their work, which is to get to the site of an infection and orchestrate a response,” Ahern says. “This study shows that sunlight directly activates key immune cells by increasing their movement.”

    Ahern added that while production of vitamin D requires UV light, which can promote skin cancer and melanoma, blue light from the sun, as well as from special lamps, is safer.

    And while the T cells they studied in the laboratory were not specifically skin T cells — they were isolated from mouse cell cultures and from human blood — the skin holds a large share of the body’s T cells, he says, approximately twice the number circulating in the blood.

    “We know that blue light can reach the dermis, the second layer of the skin, and that those T cells can move throughout the body,” he says.

    The researchers further decoded how blue light makes T cells move more by tracing the molecular pathway activated by the light.

    What drove the motility response in T cells was synthesis of hydrogen peroxide, which then activated a signaling pathway that increases T cell movement. Hydrogen peroxide is a compound that white blood cells release when they sense an infection in order to kill bacteria and to “call” T cells and other immune cells to mount an immune response.

    “We found that sunlight makes hydrogen peroxide in T cells, which makes the cells move. And we know that an immune response also uses hydrogen peroxide to make T cells move to the damage,” Ahern says. “This all fits together.”

    Ahern says there is much work to do to understand the impact of these findings, but he suggests that if blue light T cell activation has only beneficial responses, it might make sense to offer patients blue light therapy to boost their immunity.

  • Children can ‘catch’ social bias through non-verbal signals expressed by adults

    {Most conscientious adults tend to avoid making biased or discriminatory comments in the presence of children.}

    But new research from the University of Washington suggests that preschool-aged children can learn bias even through nonverbal signals displayed by adults, such as a condescending tone of voice or a disapproving look.

    Published Dec. 21 in the journal Psychological Science, the research found that children can “catch” social bias by seeing negative signals expressed by adults and are likely to generalize that learned bias to others.

    “This research shows that kids are learning bias from the non-verbal signals that they’re exposed to, and that this could be a mechanism for the creation of racial bias and other biases that we have in our society,” said lead author Allison Skinner, a postdoctoral researcher in the UW’s Institute for Learning & Brain Sciences.

    “Kids are picking up on more than we think they are, and you don’t have to tell them that one group is better than another group for them to be getting that message from how we act.”

    The research involved an initial group of 67 children ages 4 and 5, an equal mix of boys and girls. The children were shown a video in which two different female actors displayed positive signals to one woman and negative signals to another woman. All people in the video were the same race, to avoid the possibility of racial bias factoring into the results.

    The actors greeted both women the same way and did the same activities with both (for example, giving each a toy) but the actors’ nonverbal signals differed when interacting with one woman versus the other. The actor spoke to one woman in a positive way — smiling, leaning toward her, using a warm tone of voice — and the other negatively, by scowling, leaning away and speaking in a cold tone.

    The children were then asked a series of questions — such as who they liked the best and who they wanted to share a toy with — intended to gauge whether they favored the recipient of positive nonverbal signals over the recipient of negative nonverbal signals.

    The results showed a consistent pattern of children favoring the recipient of positive nonverbal signals. Overall, 67 percent of children favored the recipient of positive nonverbal signals over the other woman — suggesting they were influenced by the bias shown by the actor.

    The researchers also wondered if nonverbal signals could lead to group bias or prejudice. To get at that question, they recruited an additional 81 children ages 4 and 5. The children were shown the same videos from the previous study, then a researcher introduced them to the “best friends” of the people in the video. The “friends” were described as members of the same group, with each wearing the same color shirt as their friend. The children were then asked questions to assess whether they favored one friend over the other.

    Strikingly, the results showed that children favored the friend of the recipient of positive nonverbal signals over the friend of the other woman. Taken together, the researchers say, the results suggest that biases extend beyond individuals to members of groups they are associated with.

    Skinner pointed out that many American preschoolers live in fairly homogenous environments, with limited ability to witness positive interactions with people from diverse populations. So even brief exposure to biased nonverbal signals, she said, could result in them developing generalized biases. The simulations created for the study represent just a small sample of what children likely witness in real life, Skinner said.

    “Children are likely exposed to nonverbal biases demonstrated by multiple people toward many different members of a target group,” she said. “It is quite telling that brief exposure to biased nonverbal signals was able to create a bias among children in the lab.”

    The study’s findings, she said, underscore the need for parents and other adults to be aware of the messages — verbal or otherwise — that they convey to children about how they feel about other people.

  • Why big brains are rare

    {Studies of electric fish support the idea that really big brains can evolve only if constraints on energy intake are lifted.}

    As a species we’re so brain-proud it doesn’t occur to most of us to ask whether a big brain has disadvantages as well as cognitive benefits.

    “We can think of tons of benefits to a larger brain, but the other side of that is brain tissue is incredibly ‘expensive’ and increasing brain size comes at a heavy cost,” said Kimberley V. Sukhum, a graduate student in biology in Arts & Sciences at Washington University in St. Louis.

    So evolving a large brain requires either a decrease in other demands for energy or an increase in overall energy consumption, said Bruce Carlson, Sukhum’s advisor and professor of biology in Arts & Sciences.

    Previous studies in primates, frogs and toads, birds and fish found support for both hypotheses, leaving the evolutionary path to a larger brain unclear.

    Carlson’s lab studies mormyrid electric fishes from Africa, which use weak electric discharges to locate prey and to communicate with one another.

    The mormyrids have a reputation as large-brained fish and indeed one species (the fish in the top photo) has a brain that constitutes 3 percent of its body size, comparable to human brains, which range from 2 to 2.5 percent. But it was unclear whether other mormyrids were equally brainy.

    Examining 30 of the more than 200 species in the mormyrid family, the scientists discovered that the fish have a wide range of brain sizes.

    “We realized this meant the fish presented a great opportunity to study the metabolic costs of braininess,” Sukhum said.

    Using oxygen consumption and the ability to tolerate hypoxia as proxies for energy use and energy demand, the scientists put the fish to the test. Sure enough, they found that the largest brained species had the highest demand for oxygen and the smallest brained species the lowest.

    The results, published in the Dec. 21 issue of Proceedings of the Royal Society B, make a provocative pairing with an article published in the May 19, 2016 issue of Nature finding that large-brained humans have a much higher metabolic rate than the great apes.

    What is the alternative?

    Coming to this problem for the first time, you might be excused for thinking that of course it is necessary to eat more to feed a big brain.

    Many studies, however, have shown that larger brains can be accommodated by skimping on other energetically expensive organs or processes.

    For example, a study of 30 species of frogs and toads published this September in The American Naturalist found that in these animals, the bigger the brain, the smaller the gut — another expensive organ.

    Early studies of humans also suggested — in part because the human basal metabolic rate is broadly similar to that of other primates — that the smaller human gut similarly accommodated the big human brain.

    (How could a smaller gut, while it might use less energy, also supply more energy? The argument was that gut shrinkage must have coincided with a switch to a more nutritious diet including meat, tubers, and cooked food.)

    In any case, more recent studies that looked not just at primates but rather at the great apes, our closest evolutionary relatives, found instead that basal metabolic rate and total energy expenditure scale with brain size.

    Carlson suggests that confusion arose in earlier studies of brain size because big brains entail different costs, and arise through different mechanisms, than medium-sized brains. It might be possible to accommodate moderate increases in brain size, he said, by skimping on another organ or modifying behavior. But really big brains demand increases in total energy intake.

    It’s not all upside

    Having a body that needs to be fed more just to exist is a risky strategy both for mormyrids and people.

    Carlson and Sukhum point out that the mormyrids’ ability to sense their environment by “electrolocation” helps them forage more efficiently. Some of the largest-brained mormyrids also have helpful appendages, such as the Schnauzenorgan or a tube snout that helps the fish extract invertebrates from crevices.

    Despite these adaptations, the extravagant energy needs of large-brained fish may restrict them to environments where oxygen concentrations are consistently high, such as large fast-moving rivers. Meanwhile their smaller-brained cousins may be generalists able to survive in many more areas, including low-oxygen swamps as well as fast-moving rivers.

    People, too, are remarkably vulnerable to any interruption in the food supply because of their big-brain energy budgets. We might mitigate this risk by efficient bipedal walking or cooking and sharing food — but we also do it by storing fat. Body fat, the Nature authors point out, provides an important buffer against food shortfalls.

    The brain of Gnathonemus petersii is larger in proportion to its body than a human's. To keep up with the energy demands of its big brain, it has evolved a Schnauzenorgan, a chin appendage covered with electroreceptors that helps it locate prey (calories).
  • Earliest evidence discovered of plants cooked in ancient pottery

    {An international team of scientists, led by the University of Bristol, has uncovered the earliest direct evidence of humans processing plants for food found anywhere in the world.}

    Researchers at the Organic Geochemistry Unit in the University of Bristol’s School of Chemistry, working with colleagues at Sapienza, University of Rome and the Universities of Modena and Milan, studied unglazed pottery dating from more than 10,000 years ago, from two sites in the Libyan Sahara.

    The invention of cooking has long been recognised as a critical step in human development.

    Ancient cooking would initially have involved the use of fires or pits; the invention of ceramic cooking vessels led to an expansion of food preparation techniques.

    Cooking would have allowed the consumption of previously unpalatable or even toxic foodstuffs and would also have increased the availability of new energy sources.

    Remarkably, until now, evidence of cooking plants in early prehistoric cooking vessels has been lacking.

    The researchers detected lipid residues of foodstuffs preserved within the fabric of unglazed cooking pots.

    Significantly, over half of the vessels studied were found to have been used for processing plants based on the identification of diagnostic plant oil and wax compounds.

    Detailed investigations of the molecular and stable isotope compositions showed a broad range of plants were processed, including grains, the leafy parts of terrestrial plants, and most unusually, aquatic plants.

    The interpretations of the chemical signatures obtained from the pottery are supported by abundant plant remains preserved in remarkable condition due to the arid desert environment at the sites.

    The plant chemical signatures from the pottery show that the processing of plants was practiced for over 4,000 years, indicating the importance of plants to the ancient people of the prehistoric Sahara.

    Dr Julie Dunne, a post-doctoral research associate in Bristol’s School of Chemistry and lead author of the paper, said: “Until now, the importance of plants in prehistoric diets has been under-recognised but this work clearly demonstrates the importance of plants as a reliable dietary resource.

    “These findings also emphasise the sophistication of these early hunter-gatherers in their utilisation of a broad range of plant types, and the ability to boil them for long periods of time in newly invented ceramic vessels would have significantly increased the range of plants prehistoric people could eat.”

    Co-author Professor Richard Evershed, also from Bristol’s School of Chemistry, added: “The finding of extensive plant wax and oil residues in early prehistoric pottery provides us with an entirely different picture of the way early pottery was used in the Sahara compared to other regions in the ancient world.

    “Our new evidence fits beautifully with the theories proposing very different patterns of plant and animal domestication in Africa and Europe/Eurasia.”

    The research was funded by the UK’s Natural Environment Research Council (NERC) and is published today in Nature Plants.

    Rock art painting showing a human figure collecting plants.
  • Research team sets new mark for ‘deep learning’

    {Image-processing system learns largely on its own, much like a human baby.}

    Neuroscience and artificial intelligence experts from Rice University and Baylor College of Medicine have taken inspiration from the human brain in creating a new “deep learning” method that enables computers to learn about the visual world largely on their own, much as human babies do.

    In tests, the group’s “deep rendering mixture model” largely taught itself how to distinguish handwritten digits using a standard dataset of 10,000 digits written by federal employees and high school students. In results presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, the researchers described how they trained their algorithm by giving it just 10 correct examples of each handwritten digit between zero and nine and then presenting it with several thousand more examples that it used to further teach itself. In tests, the algorithm was more accurate at correctly distinguishing handwritten digits than almost all previous algorithms that were trained with thousands of correct examples of each digit.

    “In deep-learning parlance, our system uses a method known as semisupervised learning,” said lead researcher Ankit Patel, an assistant professor with joint appointments in neuroscience at Baylor and electrical and computer engineering at Rice. “The most successful efforts in this area have used a different technique called supervised learning, where the machine is trained with thousands of examples: This is a one. This is a two.

    “Humans don’t learn that way,” Patel said. “When babies learn to see during their first year, they get very little input about what things are. Parents may label a few things: ‘Bottle. Chair. Momma.’ But the baby can’t even understand spoken words at that point. It’s learning mostly unsupervised via some interaction with the world.”

    Patel said he and graduate student Tan Nguyen, a co-author on the new study, set out to design a semisupervised learning system for visual data that didn’t require much “hand-holding” in the form of training examples. For instance, neural networks that use supervised learning would typically be given hundreds or even thousands of training examples of handwritten digits before they would be tested on the database of 10,000 handwritten digits in the Mixed National Institute of Standards and Technology (MNIST) database.

    The semisupervised Rice-Baylor algorithm is a “convolutional neural network,” a piece of software made up of layers of artificial neurons whose design was inspired by biological neurons. These artificial neurons, or processing units, are organized in layers, and the first layer scans an image and does simple tasks like searching for edges and color changes. The second layer examines the output from the first layer and searches for more complex patterns. Mathematically, this nested method of looking for patterns within patterns within patterns is referred to as a nonlinear process.
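
    As a rough illustration of that layered, nonlinear structure — not the Rice-Baylor deep rendering mixture model itself — a minimal convolutional network for 28×28 digit images might look like the following sketch in PyTorch; the layer sizes and names are arbitrary choices for the example:

```python
# A generic convolutional network sketch for 28x28 grayscale digits.
# This is NOT the Rice-Baylor deep rendering mixture model, only an
# illustration of layers that look for "patterns within patterns".
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # first layer: edges, simple patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # second layer: patterns of patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)            # each stage applies a nonlinear transformation
        return self.classifier(x.flatten(1))

# Example: a batch of four 28x28 images yields four 10-way score vectors.
scores = TinyConvNet()(torch.randn(4, 1, 28, 28))
print(scores.shape)  # torch.Size([4, 10])
```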

    “It’s essentially a very simple visual cortex,” Patel said of the convolutional neural net. “You give it an image, and each layer processes the image a little bit more and understands it in a deeper way, and by the last layer, you’ve got a really deep and abstract understanding of the image. Every self-driving car right now has convolutional neural nets in it because they are currently the best for vision.”

    Like human brains, neural networks start out as blank slates and become fully formed as they interact with the world. For example, each processing unit in a convolutional net starts out the same and becomes specialized over time as it is exposed to visual stimuli.

    “Edges are very important,” Nguyen said. “Many of the lower layer neurons tend to become edge detectors. They’re looking for patterns that are both very common and very important for visual interpretation, and each one trains itself to look for a specific pattern, like a 45-degree edge or a 30-degree red-to-blue transition.

    “When they detect their particular pattern, they become excited and pass that on to the next layer up, which looks for patterns in their patterns, and so on,” he said. “The number of times you do a nonlinear transformation is essentially the depth of the network, and depth governs power. The deeper a network is, the more stuff it’s able to disentangle. At the deeper layers, units are looking for very abstract things like eyeballs or vertical grating patterns or a school bus.”

    Nguyen began working with Patel in January as the latter began his tenure-track academic career at Rice and Baylor. Patel had already spent more than a decade studying and applying machine learning in jobs ranging from high-volume commodities trading to strategic missile defense, and he’d just wrapped up a four-year postdoctoral stint in the lab of Rice’s Richard Baraniuk, another co-author on the new study. In late 2015, Baraniuk, Patel and Nguyen published the first theoretical framework that could both derive the exact structure of convolutional neural networks and provide principled solutions to alleviate some of their limitations.

    Baraniuk said a solid theoretical understanding is vital for designing convolutional nets that go beyond today’s state-of-the-art.

    “Understanding video images is a great example,” Baraniuk said. “If I am looking at a video, frame by frame by frame, and I want to understand all the objects and how they’re moving and so on, that is a huge challenge. Imagine how long it would take to label every object in every frame of a video. No one has time for that. And in order for a machine to understand what it’s seeing in a video, it has to understand what objects are, the concept of three-dimensional space and a whole bunch of other really complicated stuff. We humans learn those things on our own and take them for granted, but they are totally missing in today’s artificial neural networks.”

    Patel said the theory of artificial neural networks, which was refined in the NIPS paper, could ultimately help neuroscientists better understand the workings of the human brain.

    “There seem to be some similarities about how the visual cortex represents the world and how convolutional nets represent the world, but they also differ greatly,” Patel said. “What the brain is doing may be related, but it’s still very different. And the key thing we know about the brain is that it mostly learns unsupervised.

    “What I and my neuroscientist colleagues are trying to figure out is, What is the semisupervised learning algorithm that’s being implemented by the neural circuits in the visual cortex? and How is that related to our theory of deep learning?” he said. “Can we use our theory to help elucidate what the brain is doing? Because the way the brain is doing it is far superior to any neural network that we’ve designed.”

    From left, Richard Baraniuk, Tan Nguyen and Ankit Patel.
  • Large, rare diamonds offer window into inner workings of Earth’s mantle

    {Breakthrough research led by GIA Postdoctoral Research Fellow Evan Smith examines diamonds of exceptional size and quality to uncover clues about Earth’s geology. The researchers studied the unique properties of diamonds with similar characteristics to famous stones such as the Cullinan, Constellation and Koh-i-Noor to advance our understanding of Earth’s deep mantle, hidden beneath tectonic plates and largely inaccessible for scientific observation. The study is published in the most recent issue of Science magazine.}

    “Some of the world’s largest and most valuable diamonds, like the Cullinan or Lesotho Promise, exhibit a distinct set of physical characteristics that have led many to regard them as separate from other, more common diamonds. However, exactly how these diamonds form and what they tell us about the Earth has remained a mystery until now,” explains Dr. Wuyi Wang, GIA’s director of research and development, and an author of the study.

    Large gem diamonds like the Cullinan have a set of physical characteristics that distinguish them from other kinds of diamonds. The new research shows these Cullinan-like gems sometimes have small metallic inclusions — or internal characteristics — trapped within them. The metallic inclusions coexist with traces of fluid methane and hydrogen. In addition to the metallic inclusions, some of these exceptional diamonds contain mineral inclusions that show the diamonds formed at extreme depths, likely at 360-750 km (approximately 224-466 miles) in the convecting mantle. This is much deeper than most other gem diamonds, which form in the lower part of continental tectonic plates at depths of 150-200 km (approximately 93-124 miles).

    “This new understanding of these large, type IIa diamonds resolves one of the major enigmas in the study of diamond formation — how the world’s largest and most valuable diamonds formed,” says Smith. “The composition of the inclusions, however, provides the story.”

    The metallic inclusions are a solidified mixture of iron, nickel, carbon and sulfur, also containing traces of fluid methane and hydrogen in the tiny space between the metallic phases and the encasing diamond. Pure carbon crystallized in this mix of molten metallic liquid in Earth’s deep mantle to form diamonds. Small droplets of this metallic liquid were occasionally trapped within the diamonds as they grew. During cutting and polishing, parts of the diamond that contain inclusions are often cut off or polished away to craft exquisite polished gems with minimal flaws. These cut diamond pieces are not normally available for research, but because of GIA’s unique position as an independent, nonprofit research organization, Dr. Smith and his coauthors were able to study the inclusions for this investigation.

    “Previous experiments and theory predicted for many years that parts of the deep mantle below about 250 km depth contain small amounts of metallic iron and have limited available oxygen. Now, the metallic inclusions and their surrounding methane and hydrogen jackets in these diamonds provide consistent, systematic physical evidence to support this prediction,” explains Smith.

    Though the extent of metal distribution is uncertain, this key observation has broad implications for understanding the behavior of the deep Earth, including the recycling of surface rocks into the convecting mantle, and the deep storage and cycling of carbon and hydrogen in the mantle through geologic time.

    This is an assortment of diamond offcuts used in the study. The largest is 9.6 carats. These diamonds could be analyzed by destructive means (polishing to expose inclusions) whereas many other diamonds studied were polished gemstones that were only borrowed and studied non-destructively.
  • Researchers uncover why morning people should not work at night

    {It has been known for a long time that early risers work less efficiently at night than night owls do. But researchers from the Higher School of Economics and Oxford University have uncovered new and distinctive features between the night activities of these two types of individuals. At night, early risers demonstrate a quicker reaction time when solving unusual attention-related tasks than night owls, but these early risers make more mistakes along the way.}

    Sleep deprivation and a relative increase in the time spent awake negatively impact the brain’s attention system. Nicola Barclay and Andriy Myachykov conducted a study that is the first experiment investigating the influence of sleep deprivation on people with different chronotypes. Specifically, the researchers found out how an increase in time spent awake affects the attention system of early risers and night owls. The study is available in the journal Experimental Brain Research.

    Twenty-six volunteers (13 male, 13 female) with an average age of 25 participated in the study. Participants were required to stay awake for 18 hours, from 8:00 a.m. to 2:00 a.m., and adhere to their normal routine. At the beginning and end of their time spent awake, the participants completed an Attention Network Test (ANT) and a Morningness-Eveningness Questionnaire to help assess their chronotype.

    The researchers did not find any notable differences between the morning ANT results of the early birds and the night owls, but the evening test showed a more pronounced contrast. The early birds completed the tests more quickly than the night owls — a rather unexpected outcome, which the researchers attribute to the different approaches the two groups took to the task. Evening people tended to take a more deliberate approach to tasks requiring time and attention during their favored hours, i.e., in the late evening or at night. ‘To deal with the most difficult test — resolving a conflict of attention — it was necessary not only to concentrate on the main visual stimulus, but at the same time to ignore accompanying stimuli that distract from the core task,’ Andriy Myachykov explains. Completing this task requires increased concentration. ‘An interesting fact is that although night owls spent more time finishing than early birds, their accuracy in completing the task was higher,’ the researcher added.

    Overall, the evening people turned out to be slower but more efficient compared to the early risers, according to the second ANT taken at 2:00 a.m. after 18 hours of being awake. ‘On the one hand, it’s known that night owls are more efficient in the late hours, but how this influences the speed and accuracy with which attention-related tasks are completed remains unclear. Our study demonstrated how night owls working late at night “sacrifice” speed for accuracy,’ explained Andriy Myachykov.
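
    The speed-accuracy comparison above reduces to two summary numbers per chronotype group: mean reaction time and proportion of correct responses on the late-night ANT. A minimal sketch of that summary, assuming made-up trial-level data and pandas rather than the study’s actual dataset or analysis code:

```python
# Illustrative only: invented trial-level data for two chronotype groups.
import pandas as pd

trials = pd.DataFrame({
    "chronotype": ["early"] * 4 + ["late"] * 4,
    "reaction_ms": [520, 540, 510, 530, 600, 590, 610, 605],  # late types: slower
    "correct":     [1,   0,   1,   0,   1,   1,   1,   0],    # late types: more accurate
})

# Speed-accuracy summary: mean reaction time and proportion correct per group.
summary = trials.groupby("chronotype").agg(
    mean_rt_ms=("reaction_ms", "mean"),
    accuracy=("correct", "mean"),
)
print(summary)
```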

    The results of this study could have implications for education and for human resources management in certain fields. For pilots, air traffic controllers, drivers and others, attention, the ability to deal with large sets of data, and reaction time are all very important. During emergencies, these abilities can play a vital role. The results of this study could also be very useful for people who work night shifts.

  • Runners’ brains may be more connected, research shows

    {MRI scans show that running may affect the structure and function of the brain in ways similar to complex tasks like playing a musical instrument.}

    If you’re thinking about taking up running as your New Year’s resolution and still need some convincing, consider this: MRI scans reveal that endurance runners’ brains have greater functional connectivity than the brains of more sedentary individuals.

    University of Arizona researchers compared brain scans of young adult cross country runners to young adults who don’t engage in regular physical activity. The runners, overall, showed greater functional connectivity — or connections between distinct brain regions — within several areas of the brain, including the frontal cortex, which is important for cognitive functions such as planning, decision-making and the ability to switch attention between tasks.

    Although additional research is needed to determine whether these physical differences in brain connectivity result in differences in cognitive functioning, the current findings, published in the journal Frontiers in Human Neuroscience, help lay the groundwork for researchers to better understand how exercise affects the brain, particularly in young adults.

    UA running expert David Raichlen, an associate professor of anthropology, co-designed the study with UA psychology professor Gene Alexander, who studies brain aging and Alzheimer’s disease as a member of the UA’s Evelyn F. McKnight Brain Institute.

    “One of the things that drove this collaboration was that there has been a recent proliferation of studies, over the last 15 years, that have shown that physical activity and exercise can have a beneficial impact on the brain, but most of that work has been in older adults,” Raichlen said.

    “This question of what’s occurring in the brain at younger ages hasn’t really been explored in much depth, and it’s important,” he said. “Not only are we interested in what’s going on in the brains of young adults, but we know that there are things that you do across your lifespan that can impact what happens as you age, so it’s important to understand what’s happening in the brain at these younger ages.”

    Along with their colleagues, Raichlen and Alexander compared the MRI scans of a group of male cross country runners to the scans of young adult males who hadn’t engaged in any kind of organized athletic activity for at least a year. Participants were roughly the same age — 18 to 25 — with comparable body mass index and educational levels.

    The scans measured resting state functional connectivity, or what goes on in the brain while participants are awake but at rest, not engaging in any specific task.
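
    Resting-state functional connectivity is typically summarized as the correlation between the activity time courses of pairs of brain regions. A minimal sketch of that calculation with NumPy, using synthetic signals rather than anything from the study:

```python
# Illustrative only: synthetic "time courses" for a handful of brain regions.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_regions = 200, 5

# Fake resting-state signals; region 1 is made to partly follow region 0
# so the two show up as "functionally connected".
signals = rng.standard_normal((n_timepoints, n_regions))
signals[:, 1] += 0.7 * signals[:, 0]

# Functional connectivity matrix: Pearson correlation between every pair of regions.
connectivity = np.corrcoef(signals, rowvar=False)
print(np.round(connectivity, 2))
```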

    The findings shed new light on the impact that running, as a particular form of exercise, may have on the brain.

    Previous studies have shown that activities that require fine motor control, such as playing a musical instrument, or that require high levels of hand-eye coordination, such as playing golf, can alter brain structure and function. However, fewer studies have looked at the effects of more repetitive athletic activities that don’t require as much precise motor control — such as running. Raichlen’s and Alexander’s findings suggest that these types of activities could have a similar effect.

    “These activities that people consider repetitive actually involve many complex cognitive functions — like planning and decision-making — that may have effects on the brain,” Raichlen said.

    Since functional connectivity often appears to be altered in aging adults, and particularly in those with Alzheimer’s or other neurodegenerative diseases, it’s an important measure to consider, Alexander said. And what researchers learn from the brains of young adults could have implications for the possible prevention of age-related cognitive decline later on.

    “One of the key questions that these results raise is whether what we’re seeing in young adults — in terms of the connectivity differences — imparts some benefit later in life,” said Alexander, who also is a professor of neuroscience and physiological sciences. “The areas of the brain where we saw more connectivity in runners are also the areas that are impacted as we age, so it really raises the question of whether being active as a young adult could be potentially beneficial and perhaps afford some resilience against the effects of aging and disease.”

  • Teen use of any illicit drug other than marijuana at new low, same true for alcohol

    {Teenagers’ use of drugs, alcohol and tobacco declined significantly in 2016 at rates that are at their lowest since the 1990s, a new national study showed.}

    But University of Michigan researchers cautioned that while these developments are “trending in the right direction,” marijuana use still remains high for 12th-graders.

    The results derive from the annual Monitoring the Future study, now in its 42nd year. About 45,000 students in some 380 public and private secondary schools have been surveyed each year in this national study, designed and conducted by research scientists at U-M’s Institute for Social Research and funded by the National Institute on Drug Abuse. Students in grades 8, 10 and 12 are surveyed.

    Overall, the proportion of secondary school students in the country who used any illicit drug in the prior year fell significantly between 2015 and 2016. The decline in narcotic drugs is of particular importance, the researchers say. This year’s improvements were particularly concentrated among 8th- and 10th-graders.

    Considerably fewer teens reported using any illicit drug other than marijuana in the prior 12 months — 5 percent, 10 percent and 14 percent in grades 8, 10 and 12, respectively — than at any time since 1991. These rates reflect a decline of about one percentage point in each grade in 2016, but a much larger decline over the longer term.

    In fact, the overall percentage of teens using any of the illicit drugs other than marijuana has been in a gradual, long-term decline since the last half of the 1990s, when their peak rates reached 13 percent, 18 percent and 21 percent, respectively.

    Marijuana, the most widely used of the illicit drugs, dropped sharply in use among 8th-graders in 2016, to 9.4 percent, or about one in every 11 indicating any use in the prior 12 months. Use declined among 10th-graders as well, though not by a statistically significant amount, to 24 percent, or about one in every four.

    The annual prevalence of marijuana use (referring to the percentage using any marijuana in the prior 12 months) has been declining gradually among 8th-graders since 2010, and more sharply among 10th-graders since 2013. Among 12th-graders, however, the prevalence of marijuana use is higher (36 percent) and has held steady since 2011. These periods of declining use (or in the case of 12th-graders, stabilization) followed several years of increasing use by each of these age groups.

    Daily or near-daily use of marijuana — defined as use on 20 or more occasions in the previous 30 days — also declined this year among the younger teens (significantly so in 8th grade to 0.7 percent and to 2.5 percent among 10th-graders). However, there was no change among 12th-graders in daily use, which remains quite high at 6 percent or roughly one in every 17 12th-graders — about where it has been since 2010.

    Prescription amphetamines and other stimulants used without medical direction have constituted the second-most widely used class of illicit drugs used by teens. Their use has fallen considerably, however. In 2016, 3.5 percent, 6.1 percent and 6.7 percent of 8th-, 10th- and 12th-graders, respectively, say they have used any in the prior 12 months — down from recent peak levels of 9 percent, 12 percent and 11 percent, respectively, reached during the last half of the 1990s.

    Prescription narcotic drugs have presented a serious problem for the country in recent years, with increasing numbers of overdose deaths and emergencies resulting from their use. Fortunately, the use of these drugs outside of medical supervision has been in decline, at least among high school seniors — the only ones for whom narcotics use is reported. In 2004, a high proportion of 12th-graders — 9.5 percent, or nearly one in 10 — indicated using a prescription narcotic in the prior 12 months, but today that percentage is down by half to 4.8 percent.

    “That’s still a lot of young people using these dangerous drugs without medical supervision, but the trending is in the right direction,” said Lloyd Johnston, the study’s principal investigator. “Fewer are risking overdosing as teenagers, and hopefully more will remain abstainers as they pass into their twenties, thereby reducing the number who become casualties in those high-risk years.”

    Users of narcotic drugs without medical supervision were asked where they get the drugs they use. About four in every 10 of the past-year users indicated that they got them “from a prescription I had.”

    “That suggests that physicians and dentists may want to consider reducing the number of doses they routinely prescribe when giving these drugs to their patients, and in particular to teenagers,” Johnston said.

    Heroin is another narcotic drug of obvious importance. There is no evidence in the study that the use of heroin has risen as the use of prescription narcotics has fallen — at least not in this population of adolescents still in school, who represent over 90 percent of their respective age groups.

    In fact, heroin use among secondary school students also has declined substantially since recent peak levels reached in the late 1990s. Among 8th-graders, the annual prevalence of heroin use declined from 1.6 percent in 1996 to 0.3 percent in 2016. And among 12th-graders, the decline was from 1.5 percent in 2000 to 0.3 percent in 2016.

    “So, among secondary school students, at least, there is no evidence of heroin coming to substitute for prescription narcotic drugs — a dynamic that apparently has occurred in other populations,” Johnston said. “Certainly there will be individual cases where that happens, but overall the use of heroin and prescription narcotics both have declined appreciably and largely in parallel among secondary school students.”

    The ecstasy epidemic, which peaked around 2001, was a substantial one for teens and young adults, Johnston said. Ecstasy is a form of MDMA (methylenedioxy-methamphetamine), as is the much newer form on the scene, “Molly.”

    “The use of MDMA has generally been declining among teens since about 2010 or 2011, and it continued to decrease significantly in 2016 in all three grades even with the inclusion of Molly in the question in more recent years,” Johnston said.

    MDMA’s annual prevalence now stands at about 1 percent, 2 percent and 3 percent in grades 8, 10 and 12, respectively.

    Synthetic marijuana (often sold over the counter as “K-2” or “Spice”) continued its rapid decline in use among teens since its use was first measured in 2011. Among 12th-graders, for example, annual prevalence has fallen by more than two-thirds, from 11.4 percent in 2011 to 3.5 percent in 2016. Twelfth-graders have been showing an increased appreciation of the dangers associated with these drugs. It also seems likely that fewer students have access to these synthetic drugs, as many states and communities have outlawed their sale by retail outlets.

    Bath salts constitute another class of synthetic drugs sold over the counter. Their annual prevalence has remained quite low — at 1.3 percent or less in all grades — since they were first included in the study in 2012. One of the very few statistically significant increases in use of a drug this year was for 8th-graders’ use of bath salts (which are synthetic stimulants), but their annual prevalence is still only 0.9 percent with no evidence of a progressive increase.

    A number of other illicit drugs have shown declining use, as well. Among them are cocaine, crack, sedatives and inhalants (the declining prevalence rates for these drugs may be seen in the tables and figures associated with this release).

    Alcohol

    The use of alcohol by adolescents is even more prevalent than the use of marijuana, but it, too, is trending downward in 2016, continuing a longer-term decline. For all three grades, both annual and monthly prevalence of alcohol use are at historic lows over the life of the study. Both measures continued to decline in all three grades in 2016.

    Of even greater importance, measures of heavy alcohol use are also down considerably, including self-reports of having been drunk in the previous 30 days and of binge drinking in the prior two weeks (defined as having five or more drinks in a row on at least one occasion).

    Binge drinking has fallen by half or more at each grade level since peak rates were reached at the end of the 1990s. Today, the proportions who binge drink are 3 percent, 10 percent and 16 percent in grades 8, 10 and 12, respectively.

    “Since 2005, 12th-graders have also been asked about what we call ‘extreme binge drinking,’ defined as having 10 or more drinks in a row or even 15 or more, on at least one occasion in the prior two weeks,” Johnston said. “Fortunately, the prevalence of this particularly dangerous behavior has been declining as well.”

    In 2016, 4.4 percent of 12th-graders reported drinking at the level of 10 or more drinks in a row, down by about two-thirds from 13 percent in 2006.

    Rates of daily drinking among teens have also fallen considerably over the same intervals. Flavored alcoholic beverages and alcoholic beverages containing caffeine have both declined appreciably in use since each was first measured — again, particularly among the younger teens, where significant declines in annual prevalence continued into 2016.

    Tobacco

    Declines in cigarette smoking and certain other forms of tobacco use also occurred among teens in 2016, continuing an important and now long-term trend in the use of cigarettes.
