Category: Science News

  • Maximum human lifespan has already been reached

    {A study published online today in Nature by Albert Einstein College of Medicine scientists suggests that it may not be possible to extend the human life span beyond the ages already attained by the oldest people on record.}

    Since the 19th century, average life expectancy has risen almost continuously thanks to improvements in public health, diet, the environment and other areas. On average, for example, U.S. babies born today can expect to live nearly until age 79 compared with an average life expectancy of only 47 for Americans born in 1900. Since the 1970s, the maximum duration of life — the age to which the oldest people live — has also risen. But according to the Einstein researchers, this upward arc for maximal lifespan has a ceiling — and we’ve already touched it.

    “Demographers as well as biologists have contended there is no reason to think that the ongoing increase in maximum lifespan will end soon,” said senior author Jan Vijg, Ph.D., professor and chair of genetics, the Lola and Saul Kramer Chair in Molecular Genetics, and professor of ophthalmology & visual sciences at Einstein. “But our data strongly suggest that it has already been attained and that this happened in the 1990s.”

    Dr. Vijg and his colleagues analyzed data from the Human Mortality Database, which compiles mortality and population data from more than 40 countries. Since 1900, those countries generally show a decline in late-life mortality: The fraction of each birth cohort (i.e., people born in a particular year) who survive to old age (defined as 70 and up) increased with their calendar year of birth, pointing toward a continuing increase in average life expectancy.

    But when the researchers looked at survival improvements since 1900 for people aged 100 and above, they found that gains in survival peaked at around 100 and then declined rapidly, regardless of the year people were born. “This finding indicates diminishing gains in reducing late-life mortality and a possible limit to human lifespan,” said Dr. Vijg.

    He and his colleagues then looked at “maximum reported age at death” data from the International Database on Longevity. They focused on people verified as living to age 110 or older between 1968 and 2006 in the four countries (the U.S., France, Japan and the U.K.) with the largest number of long-lived individuals. Age at death for these supercentenarians increased rapidly between the 1970s and early 1990s but reached a plateau around 1995 — further evidence for a lifespan limit. This plateau, the researchers note, occurred close to 1997 — the year of death of 122-year-old French woman Jeanne Calment, who achieved the maximum documented lifespan of any person in history.

    Using maximum-reported-age-at-death data, the Einstein researchers put the average maximum human life span at 115 years — a calculation allowing for record-oldest individuals occasionally living longer or shorter than 115 years. (Jeanne Calment, they concluded, was a statistical outlier.) Finally, the researchers calculated 125 years as the absolute limit of human lifespan. Expressed another way, this means that the probability in a given year of seeing one person live to 125 anywhere in the world is less than 1 in 10,000.
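    As a back-of-the-envelope illustration of that final figure (our own arithmetic, not a calculation from the study), a per-year probability below 1 in 10,000 stays small even when compounded over long horizons:

```python
# Illustrative only: assumes independent years and a per-year probability
# of exactly 1 in 10,000, the upper bound quoted by the researchers.
p_year = 1e-4

def prob_at_least_once(p, years):
    """Probability of seeing at least one 125-year lifespan in `years` years."""
    return 1 - (1 - p) ** years

# Even over a full century, the event remains a roughly 1-in-100 long shot:
print(round(prob_at_least_once(p_year, 100), 4))
print(round(prob_at_least_once(p_year, 1000), 4))
```

    Under these assumptions, a 125-year lifespan would be expected somewhere in the world less than once every 10,000 years on average.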

    “Further progress against infectious and chronic diseases may continue boosting average life expectancy, but not maximum lifespan,” said Dr. Vijg. “While it’s conceivable that therapeutic breakthroughs might extend human longevity beyond the limits we’ve calculated, such advances would need to overwhelm the many genetic variants that appear to collectively determine the human lifespan. Perhaps resources now being spent to increase lifespan should instead go to lengthening healthspan — the duration of old age spent in good health.”

  • How evolution has equipped our hands with five fingers

    {Have you ever wondered why our hands have exactly five fingers? Dr. Marie Kmita’s team certainly has. The researchers at the Institut de recherches cliniques de Montréal and Université de Montréal have uncovered a part of this mystery, and their remarkable discovery has just been published in the journal Nature.}

    {{A matter of evolution}}

    We have known for several years that the limbs of vertebrates, including our arms and legs, stem from fish fins. The evolution that led to the appearance of limbs, and in particular the emergence of fingers in vertebrates, reflects a change in the body plan associated with a change of habitat, the transition from an aquatic environment to a terrestrial environment. How this evolution occurred is a fascinating question that goes all the way back to the work of Charles Darwin.

    This August, researchers in Chicago, Dr. Neil Shubin and his team, demonstrated that two genes — hoxa13 and hoxd13 — are responsible for the formation of fin rays and our fingers. “This result is very exciting, because it clearly establishes a molecular link between fin rays and fingers,” said Yacine Kherdjemil, a doctoral student in Marie Kmita’s laboratory and first author of the article published in Nature.

    However, the transition from fin to limb was not accomplished overnight. The fossil record indicates that our ancestors were polydactyl, meaning that they had more than five fingers, which raises another key question. Through what mechanism did evolution favor pentadactyly (five fingers) among current species?

    One observation in particular caught the attention of Dr. Kmita’s team: “During development, in mice and humans, the hoxa11 and hoxa13 genes are activated in separate domains of the limb bud, while in fish, these genes are activated in overlapping domains of the developing fin,” said Marie Kmita, Director of the Institut de recherches cliniques de Montréal’s Genetics and Development research unit and Associate Research Professor in the Department of Medicine at the Université de Montréal.

    In trying to understand the significance of this difference, Yacine Kherdjemil demonstrated that by reproducing the fish-type regulation for the hoxa11 gene, mice develop up to seven digits per paw, i.e., a return to ancestral status. Dr. Marie Kmita’s team also discovered the sequence of DNA responsible for the transition between fish- and mouse-type regulation for the hoxa11 gene. “It suggests that this major morphological change did not occur through the acquisition of new genes but by simply modifying their activities,” added Dr. Marie Kmita.

    From a clinical point of view, this discovery reinforces the notion that malformations during fetal development are due not only to mutations in the genes themselves but may also arise from mutations in stretches of DNA known as regulatory sequences. “At present, technical constraints do not allow for identifying this type of mutation directly in patients, hence the importance of basic research using animal models,” said Marie Kmita.

  • How the performing arts can set the stage for more developed brain pathways

    {Endless hours at the barre. Long afternoons practising scales. All that time you spent in piano lessons and dance classes as a youngster may have seemed like a pain, but new research now confirms what your parents claimed: it’s good for mind and body.}

    In fact, a recent study published in NeuroImage by a team of researchers from the International Laboratory for Brain, Music and Sound Research shows that dance and music training have even stronger effects on the brain than previously understood — but in markedly different ways.

    The researchers used high-tech imaging techniques to compare the effects of dance and music training on the white matter structure of experts in these two disciplines. They then examined the relationship between training-induced brain changes and dance and music abilities.

    “We found that dancers and musicians differed in many white matter regions, including sensory and motor pathways, both at the primary and higher cognitive levels of processing,” says Chiara Giacosa, Concordia PhD candidate and the study’s lead author.

    In particular, dancers showed broader connections in the fibre bundles linking sensory and motor brain regions, as well as broader fibre bundles connecting the brain’s two hemispheres in the regions that process sensory and motor information. In contrast, musicians had stronger and more coherent fibre bundles in those same pathways.

    “This suggests that dance and music training affect the brain in opposite directions, increasing global connectivity and crossing of fibres in dance training, and strengthening specific pathways in music training,” Giacosa explains. “Indeed, while dancers train their whole body, which has a broader representation in the neural cortex, musicians focus their training on some specific body parts, such as hands, fingers or the mouth, which have a smaller cortical representation in the brain.”

    ‘This work has major potential’

    Interestingly, dancers and musicians differed more from each other than from the control subjects, who had no extensive formal training in either field.

    According to Giacosa, this can happen because a range of uncontrolled variables influenced the control subjects in different ways, making them more similar to one group or the other. “Contrary to that, our samples of dancers and musicians were specifically selected to be pure groups of experts, which makes it easier to differentiate between them.”

    Virginia Penhune is a professor and chair of Concordia’s Department of Psychology and the study’s senior author. She notes that this research deepens the current knowledge about how regions of the brain are connected in networks, and how these structural networks change with training.

    “This work has major potential for being applied to the fields of education and rehabilitation,” Penhune says. “Understanding how dance and music training differently affect brain networks will allow us to selectively use them to enhance their functioning or compensate for difficulties and diseases that involve those specific brain networks.”

    Some studies have already shown how music training at a young age can improve various cognitive skills, but dance has yet to be used in a similar way.

    “Recent research has started to show some improvements with dance and music therapy in patients affected by Parkinson’s disease and children with autism respectively, but much more can be done with these and other diseases,” says Penhune.

  • Here’s looking at you: Finding allies through facial cues

    {After being on the losing side of a fight, men seek out other allies with a look of rugged dominance about them to ensure backup in case of future fights. Women in similar situations, however, prefer to seek solace from allies whose faces suggest they can provide emotional support. There is an evolutionary root to these differences in how men and women seek out allies, driven by the need for social survival in the long run. This is according to UK researchers Christopher Watkins of Abertay University and Benedict Jones of the University of Glasgow, writing in Springer’s journal Behavioral Ecology and Sociobiology.}

    Alliance formation refers to the tendency among people to team up in pursuit of a common goal. It is an important facet of social intelligence among humans and other species. Not much is known however about the cognitive processes that come into play when people choose allies within different social settings — and whether ‘minimal information’, such as snap judgments made about someone based on how their face looks, is used in our assessments of suitable allies.

    Watkins and Jones tested how people associate specific facial cues with suitability as an ally in the aftermath of specific social experiences. To find out if there are gender differences in this, the researchers analyzed the responses of 246 young adults who completed an online experiment. Participants were first asked to visualize themselves either winning or losing one of two situations: a physical fight or a contest for promotion with a same-sex rival. They were then shown 20 pairs of male and female faces. These photographs were manipulated using computer graphics so that each pair consisted of a masculine and a feminine version of the same individual. On each trial, participants had to choose whom they judged to be the better ally from facial characteristics alone.

    In general, men preferred masculine men as allies, in contrast to women, who showed no preference for masculine- or feminine-looking faces when judging men as possible allies. However, feminine-looking women were preferred as allies by both men and women. According to Watkins and Jones, these general social preferences may have an evolutionary basis: alliances with dominant men might have benefited ancestral males when competing against rival groups, and improved the social rank of the male who selected a dominant ally.

    “Our results suggest that there are sex-specific responses to facial characteristics which are flexible and change in light of a recent experience of confrontation,” says Watkins. While men’s preferences for dominant-looking allies were stronger after a loss compared to a win in a violent confrontation with another male, women’s preferences for dominant-looking allies were weaker after a loss compared to a win in a violent confrontation with another female.

    “These findings suggest that intra-sexual selection, in part, has shaped the evolution of social intelligence in humans as revealed by flexibility in social preferences for allies,” say Watkins and Jones.

  • Brain waves can be used to detect potentially harmful personal information

    {Cyber security and authentication have been under attack in recent months as, seemingly every other day, a new report of hackers gaining access to private or sensitive information comes to light. Just recently, more than 500 million passwords were stolen when Yahoo revealed its security was compromised.}

    Securing systems has gone beyond simply coming up with a clever password that could prevent nefarious computer experts from hacking into your Facebook account. The more sophisticated the system, or the more critical and private the information it holds, the more advanced its identification system becomes.

    Fingerprint scans and iris identification are just two authentication methods, once thought of as science fiction, that are now in wide use by the most secure systems. But fingerprints can be stolen, iris scans can be replicated, and nothing has proven foolproof against computer hackers.

    “The principal argument for behavioral, biometric authentication is that standard modes of authentication, like a password, authenticate you once before you access the service,” said Abdul Serwadda, a cybersecurity expert and assistant professor in the Department of Computer Science at Texas Tech University.

    “Now, once you’ve accessed the service, there is no other way for the system to still know it is you. The system is blind as to who is using the service. So the area of behavioral authentication looks at other user-identifying patterns that can keep the system aware of the person who is using it. Through such patterns, the system can keep track of some confidence metric about who might be using it and immediately prompt for reentry of the password whenever the confidence metric falls below a certain threshold.”
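    A minimal sketch of that confidence-metric idea follows (the names, weights and threshold here are hypothetical illustrations, not taken from any real system): each new behavioral reading nudges a running confidence score, and the session forces a password re-entry once the score falls below the threshold.

```python
# Hypothetical sketch of continuous behavioral authentication.
# `match_score` values would come from a real biometric model; here they
# are just numbers between 0 (no match) and 1 (perfect match).

THRESHOLD = 0.5   # below this, the system re-prompts for the password
DECAY = 0.7       # weight given to the running confidence vs. new evidence

def update_confidence(confidence, match_score):
    """Blend the running confidence with the latest behavioral reading."""
    return DECAY * confidence + (1 - DECAY) * match_score

def session(match_scores, start=1.0):
    """Return the index at which re-authentication is triggered, or None."""
    confidence = start
    for i, score in enumerate(match_scores):
        confidence = update_confidence(confidence, score)
        if confidence < THRESHOLD:
            return i  # prompt for password re-entry here
    return None

# A run of poor matches (e.g., someone else at the keyboard) trips the check:
print(session([0.9, 0.8, 0.2, 0.1, 0.1]))
```

    In this toy run, a streak of poor matches, as when someone else takes over the session, drags the confidence below the threshold by the fifth reading, while consistently good matches never trigger a re-prompt.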

    One of those patterns that is growing in popularity within the research community is the use of brain waves obtained from an electroencephalogram, or EEG. Several research groups around the country have recently showcased systems which use EEG to authenticate users with very high accuracy.

    However, those brain waves can reveal more about a person than just his or her identity. They could expose medical, behavioral or emotional aspects of a person that, if brought to light, could be embarrassing or damaging. And with EEG devices becoming much more affordable, accurate and portable, and with applications being designed that allow people to more readily read an EEG scan, the likelihood of that happening is dangerously high.

    “The EEG has become a commodity application. For $100 you can buy an EEG device that fits on your head just like a pair of headphones,” Serwadda said. “Now there are apps on the market, brain-sensing apps where you can buy the gadget, download the app on your phone and begin to interact with the app using your brain signals. That led us to think: now we have these brain signals that were traditionally accessed only by doctors being handled by regular people. Now anyone who can write an app can get access to users’ brain signals and try to manipulate them to discover what is going on.”

    That’s where Serwadda and graduate student Richard Matovu focused their attention: attempting to see if certain traits could be gleaned from a person’s brain waves. They presented their findings recently to the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Biometrics.

    Brain waves and cybersecurity

    Serwadda said the technology is still evolving in terms of being able to use a person’s brain waves for authentication purposes. But it is a heavily researched field that has drawn the attention of several federal organizations. The National Science Foundation (NSF) funds a three-year project on which Serwadda and others from Syracuse University and the University of Alabama at Birmingham are exploring how several behavioral modalities, including EEG brain patterns, could be leveraged to augment traditional user authentication mechanisms.

    “There are no installations yet, but a lot of research is going on to see if EEG patterns could be incorporated into standard behavioral authentication procedures,” Serwadda said.

    Assume a system uses EEG as the modality for user authentication. Typically, such a system has all of its variables optimized to maximize authentication accuracy. Such variables include:

    • The features used to build user templates.

    • The signal frequency ranges from which features are extracted.

    • The regions of the brain on which the electrodes are placed, among other variables.

    Under this assumption of a finely tuned authentication system, Serwadda and his colleagues tackled the following questions:

    • If a malicious entity were to somehow access templates from this authentication-optimized system, would he or she be able to exploit these templates to infer non-authentication-centric information about the users with high accuracy?

    • In the event that such inferences are possible, which attributes of template design could reduce or increase the threat?

    It turns out they do: the researchers found that EEG authentication systems give away non-authentication-centric information. Using an authentication system from UC Berkeley and a variant of another from a team at Binghamton University and the University at Buffalo, Serwadda and Matovu tested their hypothesis, using alcoholism as the sensitive private information an adversary might want to infer from EEG authentication templates.

    In a study involving 25 formally diagnosed alcoholics and 25 non-alcoholic subjects, the lowest error rate obtained when identifying alcoholics was 25 percent, meaning a classification accuracy of approximately 75 percent.

    When they tweaked the system and changed several variables, they found that the ability to detect alcoholic behavior could be tremendously reduced at the cost of slightly reducing the performance of the EEG authentication system.

    Motivation for discovery

    Serwadda’s motivation for proving brain waves could be used to reveal potentially harmful personal information wasn’t to improve the methods for obtaining that information. It’s to prevent it.

    To illustrate, he gives an analogy using fingerprint identification at an airport. Fingerprint scans read ridges and valleys on the finger to determine a person’s unique identity, and that’s it.

    In a hypothetical scenario where such systems could only function accurately if the user’s finger was pricked and some blood drawn from it, this would be problematic because the blood drawn by the prick could be used to infer things other than the user’s identity, such as whether a person suffers from certain diseases, such as diabetes.

    Given the amount of extra information that EEG authentication systems are able to glean about the user, current EEG systems could be likened to the hypothetical fingerprint reader that pricks the user’s finger. Serwadda wants to drive research that develops EEG authentication systems which serve their intended purpose while revealing minimal information about traits other than the user’s identity.

    Currently, in the vast majority of studies on the EEG authentication problem, researchers primarily seek to outdo each other in terms of the system error rates. They work with the central objective of designing a system having error rates which are much lower than the state-of-the-art. Whenever a research group develops or publishes an EEG authentication system that attains the lowest error rates, such a system is immediately installed as the reference point.

    A critical question that has not seen much attention up to this point is how certain design attributes of these systems, in other words the kinds of features used to formulate the user template, might relate to their potential to leak sensitive personal information. If, for example, a system with the lowest authentication error rates comes with the added baggage of leaking a significantly higher amount of private information, then such a system might, in practice, not be as useful as its low error rates suggest. Users would only accept, and get the full utility of the system, if the potential privacy breaches associated with the system are well understood and appropriate mitigations undertaken.

    But, Serwadda said, while the EEG is still being studied, the next wave of invention is already beginning.

    “In light of the privacy challenges seen with the EEG, it is noteworthy that the next wave of technology after the EEG is already being developed,” Serwadda said. “One of those technologies is functional near-infrared spectroscopy (fNIRS), which has a much higher signal-to-noise ratio than an EEG. It gives a more accurate picture of brain activity given its ability to focus on a particular region of the brain.”

    The good news, for now, is that fNIRS technology is still quite expensive; however, there is every likelihood that prices will drop over time, potentially leading to civilian applications of this technology. Thanks to the efforts of researchers like Serwadda, minimizing the leakage of sensitive personal information through these technologies is beginning to gain attention in the research community.

    “The basic idea behind this research is to motivate a direction of research which selects design parameters in such a way that we not only care about recognizing users very accurately but also care about minimizing the amount of sensitive personal information it can read,” Serwadda said.

  • Sex before sport doesn’t negatively impact performance

    {Over the course of the Rio Olympics, 450,000 condoms were distributed around the athlete’s village. This may be surprising considering the common view that abstinence from sexual activity can boost athletic performance.}

    These long-standing views have now been challenged by a recent analysis of current scientific evidence, published in the open-access journal Frontiers in Physiology.

    “Abstaining from sexual activity before athletic competition is a controversial topic in the world of sport,” said Laura Stefani, an Assistant Professor of Sports Medicine at the University of Florence, Italy, and lead author of this review. “We show no robust scientific evidence to indicate that sexual activity has a negative effect upon athletic results.”

    The authors sifted through hundreds of studies with the potential to provide evidence, however big or small, on the impact of sexual activity upon sport performance. After setting a number of criteria to filter out the most reliable of these studies, only nine were included in the review.

    One of these found that the strength of female former athletes did not differ if they had sex the night before. Another actually observed a beneficial effect on marathon runners’ performance. While this small handful of studies provided some clues about the real effects of sex on sport performance, Dr. Stefani and her colleagues were disappointed with the research on this subject to date.

    “We clearly show that this topic has not been well investigated and only anecdotal stories have been reported,” explained Dr. Stefani. “In fact, unless it takes place less than two hours before, the evidence actually suggests sexual activity may have a beneficial effect on sports performance.”

    The review also revealed that males were more frequently investigated than females, with no comparison of effects across genders. In addition, it highlights that cultural differences in attitudes towards sexual activity may influence how much or how little impact it may have. Dr. Stefani emphasizes other factors that have been ignored.

    “No particular importance has been laid on the psychological or physical effects of sexual activity on sports performance, or upon the different kinds of sports.”

    This is an important point, given each sport’s different mental and physical challenges.

    This review demonstrates the need for proper scientific investigation into the impact of sexual activity on sport performance, clarifying any ethical, gender and sport differences.

    The authors conclude that because the current evidence debunks the long-held abstinence theories, athletes should not feel guilty when engaging in their usual sexual activity up to the day before competition.

  • Does meditation keep emotional brain in check?

    {Meditation can help tame your emotions even if you’re not a mindful person, suggests a new study from Michigan State University.}

    Reporting in the journal Frontiers in Human Neuroscience, psychology researchers recorded the brain activity of people looking at disturbing pictures immediately after meditating for the first time. These participants were able to tame their negative emotions just as well as participants who were naturally mindful.

    “Our findings not only demonstrate that meditation improves emotional health, but that people can acquire these benefits regardless of their ‘natural’ ability to be mindful,” said Yanli Lin, an MSU graduate student and lead investigator of the study. “It just takes some practice.”

    Mindfulness, a moment-by-moment awareness of one’s thoughts, feelings and sensations, has gained worldwide popularity as a way to promote health and well-being. But what if someone isn’t naturally mindful? Can they become so simply by trying to make mindfulness a “state of mind”? Or perhaps through a more focused, deliberate effort like meditation?

    The study, conducted in Jason Moser’s Clinical Psychophysiology Lab, attempted to find out.

    Researchers assessed 68 participants for mindfulness using a scientifically validated survey. The participants were then randomly assigned to engage in an 18-minute audio guided meditation or listen to a control presentation of how to learn a new language, before viewing negative pictures (such as a bloody corpse) while their brain activity was recorded.

    The participants who meditated — they had varying levels of natural mindfulness — showed similar levels of “emotion regulatory” brain activity as people with high levels of natural mindfulness. In other words, their emotional brains recovered quickly after viewing the troubling photos, essentially keeping their negative emotions in check.

    In addition, some of the participants were instructed to look at the gruesome photos “mindfully” (be in a mindful state of mind) while others received no such instruction. Interestingly, the people who viewed the photos “mindfully” showed no better ability to keep their negative emotions in check.

    This suggests that for non-meditators, the emotional benefits of mindfulness might be better achieved through meditation, rather than “forcing it” as a state of mind, said Moser, MSU associate professor of clinical psychology and co-author of the study.

    “If you’re a naturally mindful person, and you’re walking around very aware of things, you’re good to go. You shed your emotions quickly,” Moser said. “If you’re not naturally mindful, then meditating can make you look like a person who walks around with a lot of mindfulness. But for people who are not naturally mindful and have never meditated, forcing oneself to be mindful ‘in the moment’ doesn’t work. You’d be better off meditating for 20 minutes.”

  • How the brain makes new memories while preserving the old

    {Columbia scientists have developed a new mathematical model that helps to explain how the human brain’s biological complexity allows it to lay down new memories without wiping out old ones — illustrating how the brain maintains the fidelity of memories for years, decades or even a lifetime. This model could help neuroscientists design more targeted studies of memory, and also spur advances in neuromorphic hardware — powerful computing systems inspired by the human brain.}

    This work is published online in Nature Neuroscience.

    “The brain is continually receiving, organizing and storing memories. These processes, which have been studied in countless experiments, are so complex that scientists have been developing mathematical models in order to fully understand them,” said Stefano Fusi, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute, associate professor of neuroscience at Columbia University Medical Center and the paper’s senior author. “The model that we have developed finally explains why the biology and chemistry underlying memory are so complex — and how this complexity drives the brain’s ability to remember.”

    Memories are widely believed to be stored in synapses, tiny structures on the surface of neurons. These synapses act as conduits, transmitting the information housed inside electrical pulses that normally pass from neuron to neuron. In the earliest memory models, the strength of electrical signals that passed through synapses was compared to a volume knob on a stereo; it dialed up to boost (or down to lower) the connection strength between neurons. This allowed for the formation of memories.

    These models worked extremely well, as they accounted for enormous memory capacity. But they also posed an intriguing dilemma.

    “The problem with a simple, dial-like model of how synapses function was that it was assumed their strength could be dialed up or down indefinitely,” said Dr. Fusi, who is also a member of Columbia’s Center for Theoretical Neuroscience. “But in the real world this can’t happen. Whether it’s the volume knob on a stereo, or any biological system, there has to be a physical limit to how far it can turn.”

    When these limits were imposed, the memory capacity of these models collapsed. So Dr. Fusi, in collaboration with fellow Zuckerman Institute investigator Larry Abbott, PhD, an expert in mathematical modeling of the brain, offered an alternative: each synapse is more complex than just one dial, and instead should be described as a system with multiple dials.
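The collapse of bounded one-dial models can be seen in a toy simulation (a deliberately crude sketch, not any model from the paper): give each synapse a single hard-bounded, binary “dial,” so that every newly stored memory overwrites a random fraction of the synapses encoding older ones, and the trace of an old memory fades geometrically.

```python
import random

# Toy sketch (illustrative only): N binary synapses, each clipped to {0, 1}.
random.seed(0)
N = 10_000
weights = [random.randint(0, 1) for _ in range(N)]
memory = [random.randint(0, 1) for _ in range(N)]  # one pattern to store

q = 0.5  # probability that storing a memory modifies a given synapse
for i in range(N):
    if random.random() < q:
        weights[i] = memory[i]  # store the pattern

def overlap(w, m):
    """Correlation between the current weights and the stored pattern."""
    return sum(1 if wi == mi else -1 for wi, mi in zip(w, m)) / len(w)

trace = [overlap(weights, memory)]
for _ in range(20):                 # 20 newer memories arrive...
    for i in range(N):
        if random.random() < q:     # ...each one overwriting random synapses
            weights[i] = random.randint(0, 1)
    trace.append(overlap(weights, memory))
# Each new memory erases a fixed fraction of the old trace, so `trace`
# decays roughly geometrically toward zero: capacity collapses.
```

With unbounded, continuous weights the old trace could survive beneath the new ones; with a hard-bounded dial it is simply overwritten, which is the dilemma the multi-dial model was built to escape.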

    In 2005, Drs. Fusi and Abbott published research explaining this idea. They described how different dials (perhaps representing clusters of molecules) within a synapse could operate in tandem to form new memories while protecting old ones. But even that model, the authors later realized, fell short of what they believed the brain — particularly the human brain — could hold.

    “We came to realize that the various synaptic components, or dials, not only functioned at different timescales, but were also likely communicating with each other,” said Marcus Benna, PhD, an associate research scientist at Columbia’s Center for Theoretical Neuroscience and the first author of today’s Nature Neuroscience paper. “Once we added the communication between components to our model, the storage capacity increased by an enormous factor, becoming far more representative of what is achieved inside the living brain.”

    Dr. Benna likened the components of this new model to a system of beakers connected to each other through a series of tubes.

    “In a set of interconnected beakers, each filled with different amounts of water, the liquid will tend to flow between them such that the water levels become equalized. In our model, the beakers represent the various components within a synapse,” explained Dr. Benna. “Adding liquid to one of the beakers — or removing some of it — represents the encoding of new memories. Over time, the resulting flow of liquid will diffuse across the other beakers, corresponding to the long-term storage of memories.”
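The beaker picture lends itself to a minimal numerical sketch (illustrative only; the published model is far more detailed, with the sizes of the beakers and the widths of the connecting tubes tuned across timescales):

```python
# Minimal sketch of the beaker analogy: a synapse as a chain of connected
# beakers, where levels[0] is the fast-changing store and later beakers
# are progressively slower, longer-term stores.

def diffuse(levels, rate=0.1, steps=1):
    """Run diffusion steps: liquid flows from fuller to emptier neighbors."""
    for _ in range(steps):
        flows = [rate * (levels[i] - levels[i + 1])
                 for i in range(len(levels) - 1)]
        for i, f in enumerate(flows):
            levels[i] -= f       # liquid leaves the fuller beaker...
            levels[i + 1] += f   # ...and enters its neighbor
    return levels

beakers = [0.0, 0.0, 0.0, 0.0]
beakers[0] += 1.0            # encoding a new memory: pour into the first beaker
diffuse(beakers, steps=50)   # over time the trace spreads into slower stores
```

The total amount of liquid is conserved while it redistributes down the chain, which is the point of the analogy: a new memory perturbs the fast component immediately, and the slow components absorb it gradually for long-term storage.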

    Drs. Benna and Fusi are hopeful that this work can help neuroscientists in the lab, by acting as a theoretical framework to guide future experiments — ultimately leading to a more complete and more detailed characterization of the brain.

    “While the synaptic basis of memory is well accepted, in no small part due to the work of Nobel laureate and Zuckerman Institute codirector Dr. Eric Kandel, clarifying how synapses support memories over many years without degradation has been extremely difficult,” said Dr. Abbott. “The work of Drs. Benna and Fusi should serve as a guide for researchers exploring the molecular complexity of the synapse.”

    The technological implications of this model are also promising. Dr. Fusi has long been intrigued by neuromorphic hardware, computers that are designed to imitate a biological brain.

    “Today, neuromorphic hardware is limited by memory capacity, which can be catastrophically low when these systems are designed to learn autonomously,” said Dr. Fusi. “Creating a better model of synaptic memory could help to solve this problem, speeding up the development of electronic devices that are both compact and energy efficient — and just as powerful as the human brain.”

    This paper is titled: “Computational principles of synaptic memory consolidation.”

    This research was supported by the Gatsby Charitable Foundation, the Simons Foundation, the Swartz Foundation, the Kavli Foundation, the Grossman Foundation and Columbia’s Research Initiatives for Science and Engineering (RISE).

    The authors report no financial or other conflicts of interest.

  • Gene linked to autism in people may influence dog sociability

    {DNA differences affect beagles’ tendency to seek human help.}

    {Dogs may look to humans for help in solving impossible tasks thanks to some genes previously linked to social disorders in people.}

    Beagles with particular variants in a gene associated with autism were more likely to sidle up to and make physical contact with a human stranger, researchers report September 29 in Scientific Reports.

    That gene, SEZ6L, is one of five genes in a particular stretch of beagle DNA associated with sociability in the dogs, animal behaviorist Per Jensen and colleagues at Linköping University in Sweden say. Versions of four of those five genes have been linked to human social disorders such as autism, schizophrenia and aggression.

    “What we figure has been going on here is that there are genetic variants that tend to make dogs more sociable and these variants have been selected during domestication,” Jensen says.

    But other researchers say the results are preliminary and need to be confirmed by looking at other dog breeds. Previous genetic studies of dog domestication have not implicated these genes. But, says evolutionary geneticist Bridgett vonHoldt of Princeton University, genes that influence sociability are “not an unlikely target for domestication — as humans, we would be most interested in a protodog that was interested in spending time with humans.”

    Most dog studies take DNA samples from pets or village dogs and wild wolves. Jensen’s team instead studied beagles that had been raised in a lab. None of the dogs had been trained. To test sociability, the researchers gave the dogs an unsolvable problem in a room with a female human observer whom the beagles had never seen before. The puzzle was a device with three treats that the dogs could see and smell under sliding lids. One lid was sealed shut and could not be opened.

    After opening two lids, the dogs “get very confident that this is not a difficult task, but then they encounter the third lid and that’s where the problem gets impossible,” Jensen says. Wolves would have kept trying to solve the problem on their own (SN: 10/17/15, p. 10). But after some futile attempts, many of the beagles looked to the human observer for help. Some dogs tried to catch her eye, glancing back and forth between the woman and the stuck lid. Other dogs made physical contact with the woman or simply tried to stay close to her.

    The researchers then looked for places in the dogs’ DNA where the most and least human-friendly dogs differed. A region on chromosome 26 kept popping up, indicating that genes in that region could be involved in social interactions with people.
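The logic of such a scan can be sketched with invented numbers (this is not the study’s actual statistical method, and the genotype data below are made up for illustration): compare how often a variant appears among the most sociable dogs versus the least sociable ones.

```python
# Toy sketch of a genome-scan comparison with invented data:
# 1 = dog carries the variant at a given marker, 0 = it does not.
most_sociable = [1, 1, 1, 0, 1, 1]
least_sociable = [0, 1, 0, 0, 1, 0]

f_most = sum(most_sociable) / len(most_sociable)     # ~0.83
f_least = sum(least_sociable) / len(least_sociable)  # ~0.33
diff = f_most - f_least
# A consistently large frequency difference across nearby markers is what
# flags a region (like the one on chromosome 26) for closer inspection.
```

As Freedman’s caveat below notes, a frequency difference like this is only a statistical signal; it says nothing about what the flagged genes actually do.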

    The finding is a statistical signal, but doesn’t establish what the genes might be doing to influence the dogs’ behavior, says Adam Freedman, an evolutionary geneticist at Harvard University. And since the researchers only examined the beagles, it’s not clear that the same genes would affect behavior in other dogs, he says.

  • What’s in a face? Study shows puberty changes facial recognition

    {Faces are as unique as fingerprints and can reveal a great deal of information about our health, personalities, age, and feelings. Penn State researchers recently discovered adolescents begin to view faces differently as they prepare for the transition to adulthood.}

    Suzy Scherf, assistant professor of psychology and head of the Laboratory of Developmental Neuroscience at Penn State, and Giorgia Picci, graduate student in developmental psychology, published their findings recently in the journal Psychological Science. “We know that faces convey a lot of different social information, and the ability to perceive and interpret this information changes through development,” Scherf explained. “For the first time, we’ve been able to show how puberty, not age, shapes our ability to recognize faces as we grow into adults.”

    According to Scherf, the ability of adolescents to retune their face processing system, from showing a bias toward adult female faces as children, to preferring peer faces that match their own developmental stage in puberty, is part of the social metamorphosis that prepares them to take on adult social roles. “In other words, it literally changes the way people see faces. This had been shown previously in animal research, but not in humans.”

    The researchers developed an innovative experimental design showing that the bias to remember peer faces reflects the pubertal stage of the face rather than its age, which is how previous researchers had investigated these biases. “We were able to show that puberty shapes the subtle emergence of social behaviors that are important for adolescents’ transition to adulthood. This likely happens due to hormones influencing the brain and the nervous system reorganization that occurs during this time,” said Scherf.

    The researchers recruited 116 adolescents and young adults for the study and divided them into four groups according to pubertal stage. Importantly, the adolescents in the study were all the same age but differed in their stage of puberty, so any differences in the way they responded to faces were related to their pubertal status, not their age. Scherf and Picci determined the adolescents’ stage of development through self-assessments as well as assessments provided by parents.

    The researchers presented participants with 120 gray-scale photographs of male and female faces. The pubertal status of the faces in the pictures matched that of the participants. “In other words, there were images of pre-pubescent children, young adolescents in early puberty, young adolescents in later puberty, and sexually mature young adults,” Scherf explained.

    Participants were asked to look at faces from all four pubertal groups, and the researchers measured their face-recognition ability using a computerized game. After studying 10 target faces with neutral expressions, participants were shown another set of 20 faces with happy expressions and had to identify whether they had seen each face previously or if they were new.

    Scherf and Picci found that the pre-pubescent children had a bias to remember adult faces, which they call the caregiver bias. “This is interesting because these are school-age children who spend lots of time with other children, yet they are still biased to remember adult faces,” said Scherf.

    In contrast, adolescents had a bias to remember other adolescent faces, exhibiting a peer bias. According to Scherf, the most surprising finding was that among adolescents of the same age, those who were less mature in pubertal development had better recognition memory for similarly less mature adolescents, while those who were more mature had better recognition memory for similarly mature peers.

    “This shows that adolescents are very clued into each other’s pubertal status. They can literally see it in each other’s faces, perhaps implicitly, and this influences how they keep track of each other. This may explain a well-known finding that adolescents organize their peer groups according to pubertal status and is relevant for understanding how adolescents begin to think about each other as romantic partners for the first time.”

    This research will help scientists uncover how puberty impacts the developing human brain and help them understand the timetable of behavioral and brain changes during adolescence, which could guide mental health treatment and inform public health policy. In the future, Scherf and Picci plan to further investigate face processing changes that occur during puberty.
