Category: Science News

  • The science of baby’s first sight

    {When a newborn opens her eyes, she does not see well at all. You, the parent, are a blurry shape of light and dark. Soon, though, her vision comes online. Your baby will recognize you, and you can see it in her eyes. Then baby looks beyond you and that flash of recognition fades. She can’t quite make out what’s out the window. It’s another blurry world of shapes and light. But within a few months, she can see the trees outside. Her entire world is coming into focus.}

    UNC School of Medicine scientists have found more clues about what happens in the brains of baby mammals as they try to make visual sense of the world. The study in mice, published in the journal Nature Neuroscience, is part of an ongoing project in the lab of Spencer Smith, PhD, assistant professor of cell biology and physiology, to map the functions of the brain areas that play crucial roles in vision. Proper function of these brain areas is likely critical for vision restoration.

    “There’s this remarkable biological operation that plays out during development,” Smith said. “Early on, there are genetic programs and chemical pathways that position cells in the brain and help wire up a ‘rough draft’ of the circuitry. Later, after birth, this circuitry is actively sculpted by visual experience: simply looking around our world helps developing brains wire up the most sophisticated visual processing circuitry the world has ever known. Even the best supercomputers and our latest algorithms still can’t compete with the visual processing abilities of humans and animals. We want to know how neural circuitry does this.”

    If cures for partial or total blindness are to be developed through, say, gene therapy or retinal implants, researchers will need to understand the totality of visual brain circuitry to ensure people can recover useful visual function.

    “Most work on restoring vision has focused on the retina and the primary visual cortex,” Smith said. “Less work has explored the development of the higher visual areas of the brain, and their potential for recovery from early deficits. I want to understand how these higher visual areas develop. We need to know the critical time windows during which vision should be restored, and what occurs during these windows to ensure proper circuit development.”

    To understand the potential challenges that vision restoration later in life might entail, take the case of bilateral cataracts, when the lenses of both eyes are cloudy and vision is severely limited. In developed countries, such cataracts are commonly removed surgically very early in life. When they are, vision typically develops normally.

    “But in less developed, rural parts of the world, people often don’t get to a clinic until they are teens or older,” Smith said. “They’ve gone through life seeing light and dark, fuzzy things. That’s about it. When they have the cataracts removed, they recover a large amount of visual function, but it is not complete. They can learn to read and recognize their friends. But they have great difficulty perceiving some types of visual motion.” It’s the kind of visual perception needed during hand-eye coordination, or simply while navigating the world around you.

    There are two subnetworks of visual circuitry, called the ventral and dorsal streams, and the latter of these is important for motion perception.

    Smith wanted to know if visual experience is particularly essential for proper development of the dorsal stream. And he wanted to understand what could be changing at the individual neuron level during this early development.

    To explore these questions, Smith and his UNC colleagues conducted hundreds of painstaking, time-consuming experiments. In essence, Smith’s lab is reverse engineering complicated brain circuitry with the help of specialized two-photon imaging systems Smith and his team designed and built at the UNC Neuroscience Center, where he is a member.

    “If you want to reverse engineer a radio to know how it works, a good way to start would be to watch someone put together a radio,” Smith said. “Well, this is kind of what we’re doing. We’re using our imaging systems to watch how biology builds its visual processing circuitry.”

    In one series of experiments, Smith’s team reared mice in complete darkness for several weeks. Even the daily care of the mice was in darkness with the aid of night-vision goggles. Using his imaging system and precision surgical methods, Smith and colleagues could view specific areas of the brain with neuron-level resolution. They showed that the ventral visual stream in mice did indeed come online immediately, with individual neurons firing as the mice responded to visual stimuli. But the dorsal stream did not.

    “Keeping the mice in darkness significantly degraded the magnitude of visual responses in the dorsal stream — responses to what they were seeing,” Smith said. The neurons in the dorsal area weren’t firing as strongly as they did in mice raised with normal visual experience. “Interestingly, even after a recovery period in a normal light-dark cycle, the visual deficit in the dorsal stream persisted.”

    This is reminiscent of the persistent visual deficits seen in humans with bilateral cataracts that aren’t repaired until later in life.

    “Not only did the mice need visual experience to develop their dorsal stream of visual processing, but they needed it in an early developmental time window to refine the brain circuitry,” Smith said. “Otherwise, their vision never properly developed.”

    These experiments can help explain what happens in the human analogs of the ventral and dorsal streams when we are babies, as parts of our vision slowly develop and we try to make sense of the world moving around us during the first several months after birth.

    Smith added, “Now that we have a little bit of a feel for the lay of the land — how these two subnetworks develop — I really want to drill down into the actual computations that these different brain areas are performing. I want to analyze what information neurons in higher visual areas are encoding. What are they encoding better, or more efficiently, than neurons in the primary visual cortex? What, exactly, are they doing that allows us to analyze complex visual stimuli so quickly and efficiently?”

  • New apps designed to reduce depression, anxiety as easily as checking your phone

    {Speedy mini-apps are designed to address depression and anxiety.}

    Soon you may be able to seek mental health advice on your smartphone as quickly as you can find a good restaurant.

    In a new Northwestern Medicine study, participants who used a novel suite of 13 speedy mini-apps called IntelliCare on their smartphones up to four times a day reported significantly less depression and anxiety.

    The apps offer exercises to de-stress, reduce self-criticism and worrying, methods to help your life feel more meaningful, mantras to highlight your strengths, strategies for a good night’s sleep and more.

    Most apps designed for mental health offer a single strategy to feel better or provide so many features that they are difficult to navigate. Users may get bored or overwhelmed and stop using the apps after a few weeks.

    But participants used the IntelliCare interactive apps robustly, as many as four times daily, or an average of 195 times, over the eight weeks of the study. They spent an average of one minute using each app, with longer times for apps with relaxation videos.
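    As a rough sanity check, the usage figures reported above are mutually consistent; the snippet below is a back-of-the-envelope sketch using only the numbers from the article, not part of the study itself.

```python
# Rough check of the usage figures reported in the study.
study_days = 8 * 7                 # eight-week study
total_uses = 195                   # average app launches per participant
seconds_per_use = 60               # roughly one minute per session

uses_per_day = total_uses / study_days           # ~3.5 launches a day
total_minutes = total_uses * seconds_per_use / 60

print(f"{uses_per_day:.1f} uses per day")        # 3.5
print(f"{total_minutes:.0f} minutes of app use over eight weeks")  # 195
```

    About three and a half launches per day matches the "up to four times daily" figure, and the total time commitment works out to roughly three hours over two months.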

    The 96 participants who completed the study reported about a 50 percent decrease in the severity of depressive and anxiety symptoms. These short-term, study-related reductions are comparable to results expected in clinical practice with psychotherapy or antidepressant medication.

    The study will be published Jan. 5 in the Journal of Medical Internet Research.

    “We designed these apps so they fit easily into people’s lives and could be used as simply as apps to find a restaurant or directions,” said lead study author David Mohr, professor of preventive medicine and director of the Center for Behavioral Intervention Technologies at Northwestern University Feinberg School of Medicine.

    “Some of the participants kept using them after the study because they felt that the apps helped them feel better,” Mohr said. “There were many apps to try during the study, so there was a sense of novelty.”

    Participants had access to the 13 IntelliCare apps from Google Play and received eight weeks of coaching for the use of IntelliCare. Coaching included an initial phone call plus two or more text messages per week over the eight weeks. In the study, 105 participants were enrolled and 96 of them completed the study.

    The preliminary study did not include a control arm, so it’s possible that some people who enrolled in the trial would have improved anyway, partly because they may have been motivated to try something new, Mohr said. He now has launched a larger trial, recruiting 300 participants, with a control arm.

    Some of the IntelliCare apps include:

    Daily Feats: designed to motivate you to add worthwhile and rewarding activities into your day to increase your overall satisfaction in life.
    Purple Chill: designed to help you unwind with audio recordings that guide you through exercises to de-stress and worry less.
    Slumber Time: designed to ease you into a good night’s rest.
    My Mantra: designed to help you create motivating mantras to highlight your strengths and values.

    “Using digital tools for mental health is emerging as an important part of our future,” Mohr said. “These are designed to help the millions of people who want support but can’t get to a therapist’s office.”

    More than 20 percent of Americans have significant symptoms of depression or anxiety each year, but only around 20 percent of people with a mental health problem get adequate treatment.

    The IntelliCare algorithm recommends new apps each week to keep the experience fresh, provide new opportunities for learning skills and avoid user boredom. Although the apps are not validated, each one was designed by Northwestern clinicians and based on validated techniques used by therapists.

    IntelliCare is a national research study. Individuals can download the apps free with no financial obligation. But Northwestern researchers hope participants will provide confidential feedback, via four weekly questions, that will be used to further develop the system. The data will help the system make even better recommendations and provide more personalized treatment.

    People also may enroll in a study in which they will be paid to provide even more feedback. Some will also have access to an IntelliCare coach, available by text message and phone call to support them in using the apps.

    “We now have evidence these approaches will likely work,” Mohr said. “They are designed to teach many of the same skills therapists teach patients. Different apps are expected to work for different people. The goal is to find what’s right for you.”

  • New personality model sets up how we see ourselves, and how others see us

    {A new model for identifying personality traits may help organizations save money by improving the hiring process and the evaluation of employee performance.}

    The model, developed by Brian Connelly, an associate professor in U of T Scarborough’s Department of Management, is called Trait-Reputation-Identity (TRI). The model is unique in that it contrasts personality as seen by an individual versus how their personality is seen by others.

    “If someone believes they are very outgoing or more friendly than they actually are based on peer assessment, that’s important information to have about that person,” says Connelly, who recently became a Canada Research Chair.

    Past personality trait models have relied heavily on the way people typically behave, but TRI combines that with a focus on how individuals think about their own personality traits. While the model is useful for anyone studying and using personality measures, Connelly says its ability to better predict outcomes like performance, motivation, leadership, procrastination, and commitment to an organization makes it a strong gauge of a potential candidate’s effectiveness in the workplace.

    “It’s a bit of a departure from the way we’ve typically studied personality in the past,” says Connelly, who is an expert on organizational behaviour and human resources.

    The current system for evaluating job applications, which relies heavily on reference checks, is not an effective means of predicting job performance, says Connelly. He also adds that the problem with current personality tests is that they often have a narrow view of personality that leads organizations to choose manipulators and egoists over more suitable candidates.

    He hopes that more reliable tests can lead to more accurate, data-driven results that do a better job of weeding out the bias and fakery that cost organizations millions of dollars in retention and hiring costs every year.

    TRI uses a unique blend of self and peer ratings to gather feedback on an individual’s relationship to the Big Five personality traits: extraversion, agreeableness, conscientiousness, neuroticism and openness. What sets it apart from previous models is that it provides a robust method and analytical framework to determine whether there’s agreement or divergence about an individual’s personality traits.

    “This difference has been talked about in the past from a theoretical standpoint, but now we can assign a number to a trait or reputation score, and identify a score for a particular personality construct like extraversion,” he says.
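    As a loose illustration of the self-versus-peer comparison the model is built around: the ratings below are hypothetical, and the raw difference score is a simplification assumed for exposition; the published TRI model uses a more sophisticated analytical framework than this.

```python
# Illustrative self-vs-peer comparison for one Big Five trait (extraversion).
# Ratings are hypothetical; a raw difference score stands in here for the
# more sophisticated framework used in the published TRI model.
self_rating = 4.5                  # self-report on a 1-5 scale
peer_ratings = [3.0, 3.5, 2.5]     # independent peer reports

peer_mean = sum(peer_ratings) / len(peer_ratings)
self_peer_gap = self_rating - peer_mean   # > 0: sees self as more outgoing than peers do

print(f"peer consensus: {peer_mean:.2f}")      # 3.00
print(f"self-peer gap: {self_peer_gap:+.2f}")  # +1.50
```

    A large positive gap flags exactly the case Connelly describes: someone who believes they are more outgoing than peer assessment suggests.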

    As a follow-up study, Connelly is looking at the personality traits of Korean Air Force cadets, focusing on their self-perception and how they are perceived by their peers. The ongoing research will continue to evaluate the cadets once they enter the air force. He will also be doing another follow-up study with Co-op Management students at U of T Scarborough.

    He says having a reliable model to provide valuable and realistic feedback about personality is even more important as more millennials enter the workplace.

    “Much has been made about narcissism being an issue among millennials and the difficulty of older workers connecting with younger generations,” he says.

    “From a practical perspective I hope this model will help people learn something new about themselves from the assessment and to think about aspects of their personality they may not have otherwise considered.”

    The model, which was developed with Samuel McAbee, an assistant professor of psychology at the Illinois Institute of Technology, is outlined in an article published in the journal Psychological Review.

  • Development of face recognition entails brain tissue growth

    {People are born with brains riddled with excess neural connections. Those are slowly pruned back until early childhood when, scientists thought, the brain’s structure becomes relatively stable.}

    Now a pair of studies, published in the Jan. 6, 2017, issue of Science and Nov. 30, 2016, in Cerebral Cortex, suggests this process is more complicated than previously thought. For the first time, the group found that microscopic tissue growth in the brain continues in regions that also show changes in function.

    The work overturns a central assumption in neuroscience: that the amount of brain tissue moves in only one direction throughout our lives, from too much to just enough. The group made this finding by looking at the brains of an often-overlooked participant pool: children.

    “I would say it’s only in the last 10 years that psychologists started looking at children’s brains,” said Kalanit Grill-Spector, a professor of psychology at Stanford and senior author of both papers. “The issue is, kids are not miniature adults and their brains show that. Our lab studies children because there’s still a lot of very basic knowledge to be learned about the developing brain in that age range.”

    Grill-Spector and her team examined a region of the brain that distinguishes faces from other objects. In Cerebral Cortex, they demonstrate that brain regions that recognize faces have a unique cellular make-up. In Science, they find that the microscopic structures within the region change from childhood into adulthood over a timescale that mirrors improvements in people’s ability to recognize faces.

    “We actually saw that tissue is proliferating,” said Jesse Gomez, graduate student in the Grill-Spector lab and lead author of the Science paper. “Many people assume a pessimistic view of brain tissue: that tissue is lost slowly as you get older. We saw the opposite — that whatever is left after pruning in infancy can be used to grow.”

    Microscopic brain changes

    The group studied regions of the brain that recognize faces and places, respectively, because knowing who you are looking at and where you are is important for everyday function. In adults, these parts of the brain are close neighbors, but with some visible structural differences.

    “If you could walk across an adult brain and you were to look down at the cells, it would be like walking through different neighborhoods,” Gomez said. “The cells look different. They’re organized differently.”

    Curious about the deeper cellular structures not visible by magnetic resonance imaging (MRI), the Stanford group collaborated with colleagues in the Institute of Neuroscience and Medicine, Research Centre Jülich, in Germany, who obtained thin tissue slices of post-mortem brains. Over the span of a year, this international collaboration figured out how to match brain regions identified with functional MRI in living brains with the corresponding brain slices. This allowed them to extract the microscopic cellular structure of the areas they scanned with functional MRI, which is not yet possible to do in living subjects. The microscopic images showed visible differences in the cellular structure between face and place regions.

    “There’s been this pipe dream in the field that we will one day be able to measure cellular architecture in living humans’ brains and this shows that we’re making progress,” said Kevin Weiner, a Stanford social science research associate, co-author of the Science paper and co-lead author of the Cerebral Cortex paper with Michael Barnett, a former research assistant in the lab.

    Neighborhoods of the brain

    This work established that the two parts of the brain look different in adults, but Grill-Spector has been curious about these areas in brains of children, particularly because the skills associated with the face region improve through adolescence. To further investigate how development of these skills relates to brain development, the researchers used a new type of imaging technique.

    They scanned 22 children (ages 5 to 12) and 25 adults (ages 22 to 28) using two types of MRI, one that indirectly measures brain activity (functional MRI) and one that measures the proportion of tissue to water in the brain (quantitative MRI). This scan has been used to show changes in the fatty insulation surrounding the long neuronal wires connecting brain regions over a person’s lifetime, but this study is the first to use this method to directly assess changes in the cells’ bodies.

    What they found, published in Science, is that, in addition to seeing a difference in brain activity in these two regions, the quantitative MRI showed that a certain tissue in the face region grows with development. Ultimately, this development contributes to the tissue differences between face and place regions in adults. What’s more, tissue properties were linked with functional changes in both brain activity and face recognition ability, which they evaluated separately. There is no indication yet of which change causes the other or if they happen in tandem.

    A test bed

    Being able to identify familiar faces and places, while clearly an important skill, may seem like an odd choice for study. The reason these regions are worth special attention, said Grill-Spector, is that they can be identified in each person’s brain, even a 5-year-old child’s, which means research on these regions can include large pools of participants and produce results that are easy to compare across studies. This research also has health implications, as approximately 2 percent of the adult population is poor at recognizing faces, a disorder sometimes referred to as face blindness.

    What’s more, the fusiform gyrus, an anatomical structure in the brain that contains face-processing regions, is only found in humans and great apes (gorillas, chimps, bonobos and orangutans).

    “If you had told me five or 10 years ago that we’d be able to actually measure tissue growth in vivo, I wouldn’t have believed it,” Grill-Spector said. “It shows there are actual changes to the tissue that are happening throughout your development. I think this is fantastic.”

    This material relates to a paper that appeared in the Jan. 6, 2017, issue of Science, published by AAAS. The paper, by J. Gomez at Stanford University School of Medicine in Stanford, Calif., and colleagues, was titled "Microstructural proliferation in human cortex is coupled with the development of face processing."

  • Lack of joy from music linked to brain disconnection

    {Have you ever met someone who just wasn’t into music? They may have a condition called specific musical anhedonia, which affects three to five per cent of the population.}

    Researchers at the University of Barcelona and the Montreal Neurological Institute and Hospital of McGill University have discovered that people with this condition showed reduced functional connectivity between cortical regions responsible for processing sound and subcortical regions related to reward.

    To understand the origins of specific musical anhedonia, researchers recruited 45 healthy participants who completed a questionnaire measuring their level of sensitivity to music and divided them into three groups of sensitivity based on their responses. The test subjects then listened to music excerpts inside an fMRI machine while providing pleasure ratings in real-time. To control for their brain response to other reward types, participants also played a monetary gambling task in which they could win or lose real money.

    Using the fMRI data, the researchers found that while listening to music, specific musical anhedonics presented a reduction in the activity of the Nucleus Accumbens, a key subcortical structure of the reward network. The reduction was not related to a general improper functioning of the Nucleus Accumbens itself, since this region was activated when they won money in the gambling task.

    Specific musical anhedonics, however, did show reduced functional connectivity between cortical regions associated with auditory processing and the Nucleus Accumbens. In contrast, individuals with high sensitivity to music showed enhanced connectivity.

    The fact that subjects could be indifferent to music while still responsive to another stimulus, like money, suggests different pathways to reward for different stimuli. This finding may pave the way for detailed study of the neural substrates underlying other domain-specific anhedonias and, from an evolutionary perspective, help us understand how music acquired reward value.

    Lack of brain connectivity has been shown to be responsible for other deficits in cognitive ability. Studies of children with autism spectrum disorder, for example, have shown that their inability to experience the human voice as pleasurable may be explained by a reduced coupling between the bilateral posterior superior temporal sulcus and distributed nodes of the reward system, including the Nucleus Accumbens. This latest research reinforces the importance of neural connectivity in the reward response of human beings.

    “These findings not only help us to understand individual variability in the way the reward system functions, but also can be applied to the development of therapies for treatment of reward-related disorders, including apathy, depression, and addiction,” says Robert Zatorre, an MNI neuroscientist and one of the paper’s co-authors.

  • Think chicken: Think intelligent, caring and complex

    {Review looks at studies on chicken intelligence, social development and emotions.}

    Chickens are not as clueless or “bird-brained” as people believe them to be. They have distinct personalities and can outmaneuver one another. They know their place in the pecking order, and can reason by deduction, an ability that humans develop by the age of seven. Chicken intelligence is therefore unnecessarily underestimated and overshadowed by that of other avian groups. So says Lori Marino, senior scientist for The Someone Project, a joint venture of Farm Sanctuary and the Kimmela Center in the USA, who reviewed the latest research on the psychology, behavior and emotions of the world’s most abundant domestic animal. Her review is published in Springer’s journal Animal Cognition.

    “They are perceived as lacking most of the psychological characteristics we recognize in other intelligent animals and are typically thought of as possessing a low level of intelligence compared with other animals,” Marino says. “The very idea of chicken psychology is strange to most people.”

    Research has shown that chickens have some sense of numbers. Experiments with newly hatched domestic chicks showed they can discriminate between quantities. They also have an idea about ordinality, which refers to the ability to place quantities in a series. Five-day-old domestic chicks presented with two sets of objects of different quantities disappearing behind two screens were able to successfully track which one hid the larger number by apparently performing simple arithmetic in the form of addition and subtraction.

    Chickens are also able to remember the trajectory of a hidden ball for up to 180 seconds if they see the ball moving and up to one minute if the displacement of the ball is invisible to them. Their performance is similar to that of most primates under similar conditions.

    The birds possess self-control when it comes to holding out for a better food reward. They are able to self-assess their position in the pecking order. These two characteristics are indicative of self-awareness.

    Chicken communication is also quite complex, and consists of a large repertoire of different visual displays and at least 24 distinct vocalizations. The birds possess the complex ability of referential communication, which involves signals such as calls, displays and whistles to convey information. They may use this to sound the alarm when there is danger, for instance. This ability requires some level of self-awareness and being able to take the perspective of another animal, and is also possessed by highly intelligent and social species, including primates.

    Chickens perceive time intervals and can anticipate future events. Like many other animals, they demonstrate their cognitive complexity when placed in social situations requiring them to solve problems.

    The birds are able to experience a range of complex negative and positive emotions, including fear, anticipation and anxiety. They make decisions based on what is best for them. They also possess a simple form of empathy called emotional contagion. Not only do individual chickens have distinct personalities, but mother hens also show a range of individual maternal personality traits which appear to affect the behavior of their chicks. The birds can deceive one another, and they watch and learn from each other.

    “A shift in how we ask questions about chicken psychology and behavior will, undoubtedly, lead to even more accurate and richer data and a more authentic understanding of who they really are,” says Marino.

    This study concludes that chickens are behaviorally, cognitively and emotionally complex individuals.

  • The beating heart of solar energy

    {First real-life study to provide data on the potential of powering medical implants with solar cells.}

    The notion of using solar cells placed under the skin to continuously recharge implanted electronic medical devices is a viable one. Swiss researchers have done the math, and found that a 3.6 square centimeter solar cell is all that is needed to generate enough power during winter and summer to run a typical pacemaker. The study is the first to provide real-life data about the potential of using solar cells to power devices such as pacemakers and deep brain stimulators. According to lead author Lukas Bereuter of Bern University Hospital and the University of Bern in Switzerland, wearing power-generating solar cells under the skin will one day spare patients the discomfort of repeatedly undergoing procedures to change the batteries of such life-saving devices. The findings are set out in Springer’s journal Annals of Biomedical Engineering.

    Most electronic implants are currently battery powered, and their size is governed by the battery volume required for an extended lifespan. When the power in such batteries runs out, they must either be recharged or replaced. In most cases this means patients must undergo implant replacement procedures, which are not only costly and stressful but also carry the risk of medical complications.

    Various research groups have recently put forward prototypes of small electronic solar cells that can be carried under the skin and can be used to recharge medical devices. The solar cells convert the light from the sun that penetrates the skin surface into energy.

    To investigate the real-life feasibility of such rechargeable energy generators, Bereuter and his colleagues developed specially designed solar measurement devices that record the output power generated. The cells were only 3.6 square centimeters in size, small enough to be implanted if needed. For the test, each of the ten devices was covered with optical filters to simulate how the properties of skin influence how much sunlight penetrates it. The devices were worn on the arms of 32 volunteers in Switzerland for one week each during summer, autumn and winter.

    No matter what season, the tiny cells were always found to generate much more than the 5 to 10 microwatts of power that a typical cardiac pacemaker uses. The participant with the lowest power output still obtained 12 microwatts on average.
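    The power margin implied by those figures can be worked out directly; this sketch uses only the numbers quoted in the article, and the per-area power density is derived here for illustration rather than taken from the paper.

```python
# Power margin for the implanted solar cell, using figures from the article.
cell_area_cm2 = 3.6        # solar cell area
pacemaker_uw = 10.0        # upper end of a typical pacemaker's draw (5-10 microwatts)
worst_case_uw = 12.0       # lowest average output across the 32 participants

margin = worst_case_uw / pacemaker_uw               # 1.2x the maximum draw
density_uw_per_cm2 = worst_case_uw / cell_area_cm2  # derived here, not from the paper

print(f"worst-case margin: {margin:.1f}x a pacemaker's maximum draw")
print(f"~{density_uw_per_cm2:.1f} microwatts per square centimeter in the worst case")
```

    Even the weakest measured output exceeds the most power-hungry pacemaker in this range by 20 percent, which is why the authors argue the approach could also shrink device sizes.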

    “The overall mean power obtained is enough to completely power for example a pacemaker or at least extend the lifespan of any other active implant,” notes Bereuter. “By using energy-harvesting devices such as solar cells to power an implant, device replacements may be avoided and the device size may be reduced dramatically.”

    Bereuter believes that the results of this study can be scaled up and applied to any other mobile, solar powered application on humans. Aspects such as the catchment area of a solar cell, its efficiency and the thickness of a patient’s skin must be considered.

  • How long did it take to hatch a dinosaur egg? 3-6 months

    {A human typically gives birth after nine months. An ostrich hatchling emerges from its egg after 42 days. But how long did it take for a baby dinosaur to incubate?}

    Groundbreaking research led by a Florida State University professor establishes a timeline of anywhere from three to six months depending on the dinosaur.

    In an article in the Proceedings of the National Academy of Sciences, FSU Professor of Biological Science Gregory Erickson and a team of researchers break down the complicated biology of these prehistoric creatures and explain how embryonic dental records solved the mystery of how long dinosaurs incubated their eggs.

    “Some of the greatest riddles about dinosaurs pertain to their embryology — virtually nothing is known,” Erickson said. “Did their eggs incubate slowly like their reptilian cousins — crocodilians and lizards? Or rapidly like living dinosaurs — the birds?”

    Scientists had long theorized that dinosaur incubation duration was similar to that of birds, whose eggs hatch in periods ranging from 11 to 85 days. Reptilian eggs of comparable size typically take twice as long: weeks to many months.

    Because dinosaur eggs were so large (some weighed about 4 kilograms and were the size of a volleyball), scientists believed they must have experienced rapid incubation, with birds inheriting that characteristic from their dinosaur ancestors.

    Erickson, FSU graduate student David Kay and colleagues from University of Calgary and the American Museum of Natural History decided to put these theories to the test.

    To do that, they accessed some rare fossils — those of dinosaur embryos.

    “Time within the egg is a crucial part of development, but this earliest growth stage is poorly known because dinosaur embryos are rare,” said co-author Darla Zelenitsky, assistant professor of geoscience at University of Calgary. “Embryos can potentially tell us how dinosaurs developed and grew very early on in life and if they are more similar to birds or reptiles in these respects.”

    The two types of dinosaur embryos researchers examined were those from Protoceratops — a sheep-sized dinosaur found in the Mongolian Gobi Desert whose eggs were quite small (194 grams) — and Hypacrosaurus, an enormous duck-billed dinosaur found in Alberta, Canada, with eggs weighing more than 4 kilograms.

    Erickson and his team ran the embryonic jaws through a CT scanner to visualize the forming dentition. Then, they extracted several of the teeth to further examine them under sophisticated microscopes.

    Researchers found what they were looking for on those microscope slides. Growth lines on the teeth showed researchers precisely how long the dinosaurs had been growing in the eggs.

    “These are the lines that are laid down when any animal’s teeth develop,” Erickson said. “They’re kind of like tree rings, but they’re put down daily. We could literally count them to see how long each dinosaur had been developing.”

    Their results showed nearly three months for the tiny Protoceratops embryos and six months for those from the giant Hypacrosaurus.
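
    Because the growth lines are deposited daily, converting a line count into incubation time is simple division. The sketch below uses illustrative counts chosen to match the article’s “nearly three months” and “six months” results, not the study’s actual tallies:

```python
# One growth line per day, so line count == incubation days.
# Line counts below are illustrative, consistent with the reported results.

DAYS_PER_MONTH = 30.44  # mean Gregorian month length

def incubation_months(daily_line_count):
    """Convert a count of daily tooth growth lines into months."""
    return daily_line_count / DAYS_PER_MONTH

for species, lines in [("Protoceratops", 83), ("Hypacrosaurus", 171)]:
    print(f"{species}: ~{incubation_months(lines):.1f} months")
```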

    “Dinosaur embryos are some of the best fossils in the world,” said Mark Norell, Macaulay Curator for the American Museum of Natural History and a co-author on the study. “Here, we used spectacular fossil specimens collected by American Museum expeditions to the Gobi Desert, coupled them with new technology and new ideas, leading us to discover something truly novel about dinosaurs.”

    The implications of long dinosaur incubation are considerable.

    In addition to finding that dinosaur incubation was similar to primitive reptiles, the researchers could infer many aspects of dinosaurian biology from the results.

    Prolonged incubation put eggs and their parents at risk from predators, starvation and other environmental hazards. And theories that some dinosaurs nested in the more temperate lower latitudes of Canada and then traveled to the Arctic during the summer now seem unlikely, given the time required for hatching and migration.

    The biggest ramification from the study, however, relates to the extinction of dinosaurs. Given that these warm-blooded creatures required considerable resources to reach adult size, took more than a year to mature and had slow incubation times, they would have been at a distinct disadvantage compared to other animals that survived the extinction event.

    “We suspect our findings have implications for understanding why dinosaurs went extinct at the end of the Cretaceous period, whereas amphibians, birds, mammals and other reptiles made it through and prospered,” Erickson said.

    This research was supported by the National Science Foundation.

    Researchers examined a fossilized embryo of the dinosaur Hypacrosaurus.
  • Ash tree genome aids fight against disease

    {Researchers at Queen Mary University of London (QMUL) have successfully decoded the genetic sequence of the ash tree, to help the fight against the fungal disease, ash dieback.}

    Tens of millions of ash trees across Europe are dying from the fungus Hymenoscyphus fraxineus. The most visible signs that a tree is infected with ash dieback are cankers on the bark and dying leaves.

    Project leader Dr Richard Buggs from QMUL’s School of Biological and Chemical Sciences said: “This ash tree genome sequence lays the foundations for accelerated breeding of ash trees with resistance to ash dieback.”

    A small percentage of ash trees in Denmark show some resistance to the fungus and the reference genome is the first step towards identifying the genes that confer this resistance.

    The ash tree genome also contains some surprises. Up to a quarter of its genes are unique to ash. Known as orphan genes, these were not found in any of ten other plants whose genomes have been sequenced.

    Dr Buggs added: “Orphan genes present a fascinating evolutionary conundrum as we have no idea how they evolved.”

    This research is published today in the journal Nature. It involved a collaboration between scientists at: QMUL, the Earlham Institute, Royal Botanic Gardens Kew, University of York, University of Exeter, University of Warwick, Earth Trust, University of Oxford, Forest Research, Teagasc, John Innes Centre, and National Institute of Agricultural Botany.

    The reference genome from QMUL was used by scientists at University of York who discovered genes that are associated with greater resistance to ash dieback. They have used these to predict the occurrence of more resistant trees in parts of the UK not yet affected by the disease, which is spreading rapidly.

    The genome sequence will also help efforts to combat the beetle Emerald Ash Borer, which has killed hundreds of millions of ash trees in North America.

    Ash trees have huge cultural and social significance: they are among the most common trees in Britain, and over 1,000 species, from wildflowers to butterflies, rely on ash for shelter or sustenance. Ash timber has long been used for tool and sports handles, such as hammers and hockey sticks, and is often used for furniture.

    The work was funded by NERC, BBSRC, Defra, ESRC, the Forestry Commission, the Scottish Government, Marie Sklodowska-Curie Actions, Teagasc — the Agriculture and Food Development Authority.

    A young ash tree dying from ash dieback fungal disease. The disease has the potential to wipe out 90 per cent of the European ash tree population, which is one of the most common trees in Britain.
  • An extra second has been added to 2016 on Dec 31

    {On December 31, 2016, a “leap second” will be added to the world’s clocks at 23 hours, 59 minutes and 59 seconds Coordinated Universal Time (UTC). This corresponds to 6:59:59 pm Eastern Standard Time, when the extra second will be inserted at the U.S. Naval Observatory’s Master Clock Facility in Washington, DC.}

    Historically, time was based on the mean rotation of the Earth relative to celestial bodies and the second was defined in this reference frame. However, the invention of atomic clocks defined a much more precise “atomic” timescale and a second that is independent of Earth’s rotation.

    In 1970, international agreements established a procedure to maintain a relationship between Coordinated Universal Time (UTC) and UT1, a measure of the Earth’s rotation angle in space.

    The International Earth Rotation and Reference Systems Service (IERS) is the organization that monitors the difference between the two time scales and calls for leap seconds to be inserted in or removed from UTC when necessary to keep them within 0.9 seconds of each other. In order to create UTC, a secondary timescale, International Atomic Time (TAI), is first generated; it consists of UTC without leap seconds. When the system was instituted in 1972, the difference between TAI and UTC was determined to be 10 seconds. Since 1972, 26 additional leap seconds have been added at intervals varying from six months to seven years, with the most recent being inserted on June 30, 2015. After the insertion of the leap second in December, the cumulative difference between UTC and TAI will be 37 seconds.
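
    The bookkeeping described above can be sketched in a few lines of Python; the constants simply restate the figures in the text (a 10-second initial offset in 1972, plus one second per leap second since):

```python
# TAI - UTC offset: fixed 10-second offset at the system's start in 1972,
# plus one second for each leap second inserted since.

INITIAL_OFFSET_1972 = 10          # seconds, TAI - UTC when UTC was instituted
LEAP_SECONDS_THROUGH_2015 = 26    # inserted between 1972 and June 30, 2015

def tai_minus_utc(leap_seconds):
    """Cumulative TAI - UTC difference after a given number of leap seconds."""
    return INITIAL_OFFSET_1972 + leap_seconds

print(tai_minus_utc(LEAP_SECONDS_THROUGH_2015))      # 36, before the event
print(tai_minus_utc(LEAP_SECONDS_THROUGH_2015 + 1))  # 37, after Dec 31, 2016
```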

    Confusion sometimes arises from the misconception that the occasional insertion of leap seconds means the Earth will stop rotating within a few millennia; this mistakes leap seconds for a measure of the rate at which the Earth is slowing.

    The one-second increments are instead indications of the accumulated difference in time between the two systems.

    The decision as to when to add a leap second is determined by the IERS, for which the USNO serves as the Rapid Service/Prediction Center. Measurements show that the Earth, on average, runs slow compared to atomic time, at about 1.5 to 2 milliseconds per day. These data are generated by the USNO using the technique of Very Long Baseline Interferometry (VLBI). VLBI measures the rotation of the Earth by observing the apparent positions of distant objects near the edge of the observable universe. These observations show that after roughly 500 to 750 days, the difference between Earth rotation time and atomic time would be about one second.
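
    The arithmetic behind that estimate is straightforward: a daily lag of 1.5 to 2 milliseconds accumulates to a full second in roughly 500 to 670 days, consistent with the range above. A small sketch:

```python
# Days for a constant daily lag (relative to atomic time) to reach 1 second.

def days_to_one_second(ms_slow_per_day):
    """Days for a daily lag, given in milliseconds, to accumulate to 1000 ms."""
    return 1000.0 / ms_slow_per_day

print(round(days_to_one_second(2.0)))   # 500 days at 2 ms/day
print(round(days_to_one_second(1.5)))   # 667 days at 1.5 ms/day
```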

    Instead of allowing this to happen, a leap second is inserted to bring the two timescales closer together. We can easily change the time of an atomic clock, but it is not possible to alter the Earth’s rotational speed to match the atomic clocks.

    The U.S. Naval Observatory is charged with the responsibility for the precise determination and dissemination of time for the Department of Defense and maintains DoD’s Master Clock.

    The U.S. Naval Observatory, together with the National Institute of Standards and Technology (NIST), determines time for the United States.

    Modern electronic navigation and communications systems depend increasingly on the dissemination of precise time through such mechanisms as the Internet-based Network Time Protocol (NTP) and the satellite-based Global Positioning System (GPS).

    The U.S. Naval Observatory is the largest single contributor (at approximately 30 percent of the weighted average) to the international time scale UTC, which is computed in Paris, France, at the International Bureau of Weights and Measures.