Category: Science News

  • Rain, wind, and temperatures: What Rwanda can expect in January

    The Rwanda Meteorology Agency's forecast highlights the likelihood of average conditions in most regions, with a few areas expected to experience deviations from the Long-Term Mean (LTM).

    Rainfall Predictions

    Rainfall during January is expected to range between 0 and 180 mm.

    “The highest amounts, ranging between 150 and 180 mm, are anticipated in many parts of Rusizi and Nyamasheke Districts, as well as western parts of Nyaruguru and Nyamagabe Districts,” the agency stated.

    According to the agency, these regions, along with parts of southern Karongi District near Nyungwe National Park, could see slightly above-average rainfall.

    Moderate rainfall ranging between 120 and 150 mm is expected in areas such as Rutsiro, Rubavu, Nyabihu, and Huye Districts, while districts like Rulindo, Kamonyi, and Gicumbi will receive between 90 and 120 mm.

    Eastern parts of the country, including areas of Nyagatare, Gatsibo, and Rwamagana, are projected to experience significantly less rainfall, ranging between 0 and 30 mm.

    Temperature Outlook

    January’s maximum temperatures are set to range between 18°C and 30°C.

    The hottest areas include Bugesera District, several parts of Kigali City, and Nyagatare, with temperatures reaching up to 30°C. Cooler temperatures of 18°C to 20°C are expected in Nyabihu, Musanze, and western Rubavu Districts.

    Minimum temperatures will vary between 8°C and 16°C.

    “The lowest minimum temperatures, ranging from 8°C to 10°C, are expected in Nyabihu, Musanze, and Rubavu Districts, as well as parts of northern Burera District,” the report noted.

    Meanwhile, areas like Nyagatare, Gatsibo, and Kirehe are predicted to experience relatively higher nighttime temperatures of 14°C to 16°C.

    Wind Speed Forecast

    Moderate to strong winds ranging between 4 and 10 m/s are anticipated nationwide. Strong winds of 8 to 10 m/s are expected in Nyaruguru, Nyamagabe, and parts of Rusizi Districts, while most of the country will experience moderate winds of 6 to 8 m/s.

    The report highlights specific zones likely to see higher wind speeds, including parts of Kigali City, Rutsiro, and Nyamasheke Districts. Regions like western Kamonyi, Gicumbi, and Bugesera will experience lighter winds, averaging 4 to 6 m/s.

    Notably, while moderate winds can aid pollination and reduce heat stress, stronger gusts pose risks to infrastructure, especially in vulnerable rural areas.

    The detailed forecast by the Rwanda Meteorology Agency provides essential insights for farmers, planners, and citizens preparing for the month ahead.
  • COP29 unlocks carbon markets and commits to tripling public finance for developing countries

    The agreement, reached under Article 6 of the Paris Agreement, unlocked international carbon markets, a milestone that had eluded previous COPs for over a decade.

    “We have ended a decade-long wait and unlocked a critical tool for keeping 1.5 degrees in reach,” said COP29 President Mukhtar Babayev. “Climate change is a transnational challenge, and Article 6 will enable transnational solutions.”

    With the agreement, carbon markets are poised to drive substantial investment in developing countries, ensuring transparency and environmental integrity. The newly adopted rules will facilitate real, additional, and measurable emission reductions while respecting human rights and promoting sustainable development.

    COP29 also achieved a breakthrough agreement to triple public climate finance for developing countries, raising the annual target from USD 100 billion to USD 300 billion by 2035.

    Additionally, resolutions were adopted to ensure collaborative efforts among all stakeholders to scale up climate finance for developing nations from public and private sources, targeting USD 1.3 trillion annually by 2035.

    Known formally as the New Collective Quantified Goal on Climate Finance (NCQG), the finance goal was agreed after two weeks of intensive negotiations and several years of preparatory work, in a process that requires all nations to agree unanimously on every word of the text.

    “This new finance goal is an insurance policy for humanity, amid worsening climate impacts hitting every country,” said Simon Stiell, Executive Secretary of UN Climate Change. “But like any insurance policy – it only works if premiums are paid in full, and on time. Promises must be kept, to protect billions of lives.”

    “It will keep the clean energy boom growing, helping all countries to share in its huge benefits: more jobs, stronger growth, cheaper and cleaner energy for all.”

    The International Energy Agency expects global clean energy investment to exceed USD 2 trillion for the first time in 2024.

    The UN said the new finance goal at COP29 builds on significant strides forward in global climate action at COP27, which established a historic Loss and Damage Fund, and COP28, which delivered a global agreement to transition away from all fossil fuels in energy systems swiftly and fairly, triple renewable energy, and boost climate resilience.

    Stiell also acknowledged that the agreement reached in Baku did not meet all Parties’ expectations, and substantially more work is still needed next year on several crucial issues.

    “No country got everything they wanted, and we leave Baku with a mountain of work to do,” said Stiell. “The many other issues we need to progress may not be headlines, but they are lifelines for billions of people. So, this is no time for victory laps; we need to set our sights and redouble our efforts on the road to Belem.”

    The COP29 President Mukhtar Babayev bangs a gavel to signify the adoption of a rule during the 29th session of the Conference of the Parties to the United Nations Framework Convention on Climate Change (COP29) in Baku, Azerbaijan, November 23, 2024.
  • Video games can change your brain

    Scientists have collected and summarized studies looking at how video games can shape our brains and behavior. Research to date suggests that playing video games can change the brain regions responsible for attention and visuospatial skills and make them more efficient. The researchers also looked at studies exploring brain regions associated with the reward system, and how these are related to video game addiction.

    Do you play video games? If so, you aren’t alone. Video games are becoming more common and are increasingly enjoyed by adults. The average age of gamers has been increasing, and was estimated to be 35 in 2016. Changing technology also means that more people are exposed to video games. Many committed gamers play on desktop computers or consoles, but a new breed of casual gamers has emerged, who play on smartphones and tablets at spare moments throughout the day, like their morning commute. So, we know that video games are an increasingly common form of entertainment, but do they have any effect on our brains and behavior?

    Over the years, the media have made various sensationalist claims about video games and their effect on our health and happiness. “Games have sometimes been praised or demonized, often without real data backing up those claims. Moreover, gaming is a popular activity, so everyone seems to have strong opinions on the topic,” says Marc Palaus, first author on the review, recently published in Frontiers in Human Neuroscience.

    Palaus and his colleagues wanted to see if any trends had emerged from the research to date concerning how video games affect the structure and activity of our brains. They collected the results from 116 scientific studies, 22 of which looked at structural changes in the brain and 100 of which looked at changes in brain functionality and/or behavior.

    The studies show that playing video games can change how our brains perform, and even their structure. For example, playing video games affects our attention, and some studies found that gamers show improvements in several types of attention, such as sustained attention or selective attention. The brain regions involved in attention are also more efficient in gamers and require less activation to sustain attention on demanding tasks.

    There is also evidence that video games can increase the size and efficiency of brain regions related to visuospatial skills. For example, the right hippocampus was enlarged in both long-term gamers and volunteers following a video game training program.

    Video games can also be addictive, and this kind of addiction is called “Internet gaming disorder.” Researchers have found functional and structural changes in the neural reward system in gaming addicts, in part by exposing them to gaming cues that cause cravings and monitoring their neural responses. These neural changes are basically the same as those seen in other addictive disorders.

    So, what do all these brain changes mean? “We focused on how the brain reacts to video game exposure, but these effects do not always translate to real-life changes,” says Palaus. As video games are still quite new, the research into their effects is still in its infancy. For example, we are still working out what aspects of games affect which brain regions and how. “It’s likely that video games have both positive (on attention, visual and motor skills) and negative aspects (risk of addiction), and it is essential we embrace this complexity,” explains Palaus.

    Source: Science Daily

  • Cow herd behavior is fodder for complex systems analysis

    The image of grazing cows in a field has long conjured up a romantic nostalgia about a relaxed pace of rural life. With closer inspection, however, researchers have recognized that what appears to be a randomly dispersed herd peacefully eating grass is in fact a complex system of individuals in a group facing differing tensions. A team of mathematicians and a biologist has now built a mathematical model that incorporates a cost function to behavior in such a herd to understand the dynamics of such systems.

    Complex systems research looks at how systems display behaviors beyond those capable from individual components in isolation. This rapidly emerging field can be used to elucidate phenomena observed in many other disciplines including biology, medicine, engineering, physics and economics.

    “Complex systems science seeks to understand not just the isolated components of a given system, but how the individual components interact to produce ‘emergent’ group behaviour,” said Erik Bollt, director of the Clarkson Center for Complex Systems Science and a professor of mathematics and of electrical and computer engineering.

    Bollt conducted the work with his team, lead-authored by post-doctoral fellow Kelum Gajamannage, which was reported this week in the journal Chaos, from AIP Publishing.

    “Cows grazing in a herd is an interesting example of a complex system,” said Bollt. “An individual cow performs three major activities throughout an ordinary day. It eats, it stands while it carries out some digestive processes, and then it lies down to rest.”

    While this process seems simple enough, there is also a balancing of group dynamics at work.

    “Cows move and eat in herds to protect themselves from predators,” said Bollt. “But since they eat at varying speeds, the herd can move on before the slower cows have finished eating. This leaves these smaller cows facing a difficult choice: Continue eating in a smaller, less safe group, or move along hungry with the larger group. If the conflict between feeding and keeping up with a group becomes too large, it may be advantageous for some animals to split into subgroups with similar nutritional needs.”

    Bollt and his colleagues incorporate a cost function into their model to capture these tensions. This adds mathematical complexity to their work, but it became apparent that it was necessary after discussing cows’ behavior with their co-author, Marian Dawkins, a biologist with experience researching cows.

    “Some findings from the simulation were surprising,” Bollt said. “One might have thought there would be two static groups of cows — the fast eaters and the slow eaters — and that the cows within each group carried out their activities in a synchronized fashion. Instead we found that there were also cows that moved back and forth between the two.”

    “The primary cause is that this complex system has two competing rhythms,” Bollt also said. “The large-sized animal group had a faster rhythm and the small-sized animal group had a slower rhythm. To put it into context, a cow might find itself in one group, and after some time the group is too fast. Then it moves to the slower group, which is too slow, but while moving between the two groups, the cow exposes itself more to the danger of predators, causing a tension between the cow’s need to eat and its need for safety.”
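
    The trade-off the model formalizes can be sketched with a toy cost function. This is an illustrative simplification, not the model published in Chaos; the weights and the inverse-group-size risk term are assumptions made here for demonstration.

```python
# Toy sketch of a herd cost function: each cow weighs hunger against
# predation risk, and risk falls as group size grows. The weights and
# functional forms are illustrative assumptions, not the published model.

def cost(hunger, group_size, w_hunger=1.0, w_risk=4.0):
    """Total cost for a cow: hunger cost plus predation risk,
    where risk decreases with the size of the group it joins."""
    return w_hunger * hunger + w_risk / group_size

def choose_group(hunger_if_move, hunger_if_stay, big_group, small_group):
    """A slow eater picks the lower-cost option: move on hungry with
    the big herd, or keep eating in a smaller, less safe group."""
    move = cost(hunger_if_move, big_group)
    stay = cost(hunger_if_stay, small_group)
    return "move with herd" if move < stay else "stay and eat"

# A mildly hungry cow joins the safer big herd...
print(choose_group(hunger_if_move=1.0, hunger_if_stay=0.2,
                   big_group=20, small_group=3))   # move with herd
# ...but a very hungry one splits off to keep eating.
print(choose_group(hunger_if_move=5.0, hunger_if_stay=0.2,
                   big_group=20, small_group=3))   # stay and eat
```

    Because the same cow's hunger changes over time, its preferred option can flip back and forth, echoing the switching between fast and slow groups that the simulation revealed.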

    The existing model and cost function could be used as a basis for studying other herding animals. In the future, there may even be scope to incorporate it into studies about human behavior in groups. “The cost function is a powerful tool to explore outcomes in situations where there are individual and group-level tensions at play,” said Bollt.

    A herd of cattle grazing.

    Source: Science Daily

  • Forgetting can make you smarter

    For most people having a good memory means being able to remember more information clearly for long periods of time. For neuroscientists too, the inability to remember was long believed to represent a failure of the brain’s mechanisms for storing and retrieving information.

    But according to a new review paper from Paul Frankland, a senior fellow in CIFAR’s Child & Brain Development program, and Blake Richards, an associate fellow in the Learning in Machines & Brains program, our brains are actively working to forget. In fact, the two University of Toronto researchers propose that the goal of memory is not to transmit the most accurate information over time, but to guide and optimize intelligent decision making by only holding on to valuable information.

    “It’s important that the brain forgets irrelevant details and instead focuses on the stuff that’s going to help make decisions in the real world,” says Richards.

    The review paper, published this week in the journal Neuron, looks at the literature on remembering, known as persistence, and the newer body of research on forgetting, or transience. The recent increase in research into the brain mechanisms that promote forgetting is revealing that forgetting is just as important a component of our memory system as remembering.

    “We find plenty of evidence from recent research that there are mechanisms that promote memory loss, and that these are distinct from those involved in storing information,” says Frankland.

    One of these mechanisms is the weakening or elimination of synaptic connections between neurons in which memories are encoded. Another mechanism, supported by evidence from Frankland’s own lab, is the generation of new neurons from stem cells. As new neurons integrate into the hippocampus, the new connections remodel hippocampal circuits and overwrite memories stored in those circuits, making them harder to access. This may explain why children, whose hippocampi are producing more new neurons, forget so much information.

    It may seem counterintuitive that the brain would expend so much energy creating new neurons at the detriment of memory. Richards, whose research applies artificial intelligence (AI) theories to understanding the brain, looked to principles of learning from AI for answers. Using these principles, Frankland and Richards frame an argument that the interaction between remembering and forgetting in the human brain allows us to make more intelligent memory-based decisions.

    It does so in two ways. First, forgetting allows us to adapt to new situations by letting go of outdated and potentially misleading information that can no longer help us maneuver changing environments.

    “If you’re trying to navigate the world and your brain is constantly bringing up multiple conflicting memories, that makes it harder for you to make an informed decision,” says Richards.

    The second way forgetting facilitates decision making is by allowing us to generalize past events to new ones. In artificial intelligence this principle is called regularization and it works by creating simple computer models that prioritize core information but eliminate specific details, allowing for wider application.

    Memories in the brain work in a similar way. When we only remember the gist of an encounter as opposed to every detail, this controlled forgetting of insignificant details creates simple memories which are more effective at predicting new experiences.
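
    The regularization analogy can be made concrete with a minimal sketch. Ridge regression is a standard textbook example of regularization; the data, penalty strength, and closed-form one-variable fit below are illustrative assumptions, not anything from the review.

```python
# Minimal sketch of regularization: an L2 (ridge) penalty shrinks a
# model toward simplicity, trading exact recall of the training data
# for better generalization -- the AI analogue of forgetting detail.
# Pure-Python, one-variable closed form; the numbers are illustrative.

def ridge_slope(xs, ys, lam):
    """Closed-form ridge solution for y = w*x:
    w = sum(x*y) / (sum(x*x) + lam). lam=0 is an exact least-squares
    'memory' of the data; lam>0 'forgets' some of the noise."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

# Noisy observations of a true slope of 2.0.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.3, 3.8, 6.4, 7.9]   # roughly 2*x plus noise

w_exact = ridge_slope(xs, ys, lam=0.0)   # fits the noise too
w_ridge = ridge_slope(xs, ys, lam=3.0)   # smoother, simpler fit

print(round(w_exact, 3), round(w_ridge, 3))
```

    The penalized fit keeps the gist (a slope near 2) while discounting the idiosyncrasies of the particular sample, which is the sense in which controlled forgetting can improve prediction.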

    Ultimately, these mechanisms are cued by the environment we are in. A constantly changing environment may require that we remember less. For example, a cashier who meets many new people every day will only remember the names of her customers for a short period of time, whereas a designer who meets with her clients regularly will retain that information longer.

    “One of the things that distinguishes an environment where you’re going to want to remember stuff versus an environment where you want to forget stuff is this question of how consistent the environment is and how likely things are to come back into your life,” says Richards.

    Similarly, research shows that episodic memories of things that happen to us are forgotten more quickly than general knowledge that we access on a daily basis, supporting the old adage that if you don’t use it, you lose it. But in the context of making better memory-based decisions, you may be better off for it.

    Source: Science Daily

  • Mapping how words leap from brain to tongue

    When you look at a picture of a mug, the neurons that store your memory of what a mug is begin firing. But it’s not a pinpoint process; a host of neurons that code for related ideas and items — bowl, coffee, spoon, plate, breakfast — are activated as well. How your brain narrows down this smorgasbord of related concepts to the one word you’re truly seeking is a complicated and poorly understood cognitive task. A new study led by San Diego State University neuroscientist Stephanie Ries, of the School of Speech, Language, and Hearing Sciences, delved into this question by measuring the brain’s cortical activity and found that wide, overlapping swaths of the brain work in parallel to retrieve the correct word from memory.

    Most adults can quickly and effortlessly recall as many as 100,000 regularly used words when prompted, but how the brain accomplishes this has long boggled scientists. How does the brain nearly always find the needle in the haystack? Previous work has revealed that the brain organizes ideas and words into semantically related clusters. When trying to recall a specific word, the brain activates its cluster, significantly reducing the size of the haystack.

    To figure out what happens next in that process, Ries and colleagues asked for help from a population of people in a unique position to lend their brainpower to the problem: patients undergoing brain surgery to reduce their epileptic seizures. Before surgery, neurosurgeons monitor their brain activity to figure out which region of the brain is triggering the patients’ seizures, which requires the patients to wear a grid of dozens of electrodes placed directly on top of the cortex, the outermost folded layers of the brain.

    While the patients were hooked up to this grid in a hospital and waiting for a seizure to occur, Ries asked if they’d be willing to participate in her research. Recording brain signals directly from the cortical surface affords neuroscientists like Ries an unparalleled look at exactly when and where neurons are communicating with one another during tasks.

    “During that period, you have time to do cognitive research that’s impossible to do otherwise,” she said. “It’s an extraordinary window of opportunity.”

    For the recent study, nine patients agreed to participate. In 15-minute sessions, she and her team would show the patients an item on a computer screen — musical instruments, vehicles, houses — then ask them to name it as quickly as possible, all while tracking their brain activity.

    They measured the separate neuronal processes involved with first activating the item’s conceptual cluster, then selecting the proper word. Surprisingly, they discovered the two processes actually happen at the same time and activate a much wider network of brain regions than previously suspected. As expected, two regions known to be involved in language processing lit up, the left inferior frontal gyrus and the posterior temporal cortex. But so did several other regions not traditionally linked to language, including the medial and middle frontal gyri, the researchers reported in the Proceedings of the National Academy of Sciences.

    “This work shows the word retrieval process in the brain is not at all as localized as we previously thought,” Ries said. “It’s not a clear division of labor between brain regions. It’s a much more complex process.”

    Learning exactly how the brain accomplishes these tasks could one day help speech-language pathologists devise strategies for treating disorders that prevent people from readily accessing their vocabulary.

    “Word retrieval is usually effortless in most people, but it is routinely compromised in patients who suffer from anomia, or word retrieval difficulty,” Ries said. “Anomia is the most common complaint in patients with stroke-induced aphasia, but is also common in neurodegenerative diseases and normal aging. So it is critical to understand how this process works to understand how to help make it better.”

    Source: Science Daily

  • The story of music is the story of humans

    How did music begin? Did our early ancestors first start by beating things together to create rhythm, or use their voices to sing? What types of instruments did they use? Has music always been important in human society, and if so, why? These are some of the questions explored in a recent Hypothesis and Theory article published in Frontiers in Sociology. The answers reveal that the story of music is, in many ways, the story of humans.

    So, what is music? This is difficult to answer, as everyone has their own idea. “Sound that conveys emotion,” is what Jeremy Montagu, of the University of Oxford and author of the article, describes as his. A mother humming or crooning to calm her baby would probably count as music, using this definition, and this simple music probably predated speech.

    But where do we draw the line between music and speech? You might think that rhythm, pattern and controlling pitch are important in music, but these things can also apply when someone recites a sonnet or speaks with heightened emotion. Montagu concludes that “each of us in our own way can say ‘Yes, this is music’, and ‘No, that is speech’.”

    So, when did our ancestors begin making music? If we take singing, then controlling pitch is important. Scientists have studied the fossilized skulls and jaws of early apes, to see if they were able to vocalize and control pitch. About a million years ago, the common ancestor of Neanderthals and modern humans had the vocal anatomy to “sing” like us, but it’s impossible to know if they did.

    Another important component of music is rhythm. Our early ancestors may have created rhythmic music by clapping their hands. This may be linked to the earliest musical instruments, when somebody realized that smacking stones or sticks together doesn’t hurt your hands as much. Many of these instruments are likely to have been made from soft materials like wood or reeds, and so haven’t survived. What have survived are bone pipes. Some of the earliest ever found are made from swan and vulture wing bones and are between 39,000 and 43,000 years old. Other ancient instruments have been found in surprising places. For example, there is evidence that people struck stalactites or “rock gongs” in caves dating from 12,000 years ago, with the caves themselves acting as resonators for the sound.

    So, we know that music is old, and may have been with us from when humans first evolved. But why did it arise and why has it persisted? There are many possible functions for music. One is dancing. It is unknown if the first dancers created a musical accompaniment, or if music led to people moving rhythmically. Another obvious reason for music is entertainment, which can be personal or communal. Music can also be used for communication, often over large distances, using instruments such as drums or horns. Yet another reason for music is ritual, and virtually every religion uses music.

    However, the major reason that music arose and persists may be that it brings people together. “Music leads to bonding, such as bonding between mother and child or bonding between groups,” explains Montagu. “Music keeps workers happy when doing repetitive and otherwise boring work, and helps everyone to move together, increasing the force of their work. Dancing or singing together before a hunt or warfare binds participants into a cohesive group.” He concludes: “It has even been suggested that music, in causing such bonding, created not only the family but society itself, bringing individuals together who might otherwise have led solitary lives.”

    Source: Science Daily

  • Memory for stimulus sequences distinguishes humans from other animals

    Humans possess many cognitive abilities not seen in other animals, such as a full-blown language capacity as well as reasoning and planning abilities. Despite these differences, however, it has been difficult to identify specific mental capacities that distinguish humans from other animals. Researchers at the City University of New York (CUNY) and Stockholm University have now discovered that humans have a much better memory to recognize and remember sequential information.

    “The data we present in our study indicate that humans have evolved a superior capacity to deal with sequential information. We suggest that this can be an important piece of the puzzle to understand differences between humans and other animals,” says Magnus Enquist, head of the Centre for the Study of Cultural Evolution at Stockholm University.

    The new study collated data from 108 experiments on birds and mammals, showing that the surveyed species had great difficulties in distinguishing between certain sequences of stimuli.

    “In some experiments, animals had to remember the order in which a green and a red lamp were lit. Even this simple discrimination turned out to be very difficult, and the difficulties increase with longer sequences. In contrast, animals perform as well as humans in most cases in which they have to distinguish between single stimuli, rather than sequences,” says Johan Lind, a co-author of the study and an Associate Professor at Stockholm University.

    Recognizing sequences of stimuli is a prerequisite for many uniquely human traits, for instance language, mathematics, or strategic games such as chess. After establishing that non-human animals have trouble distinguishing stimulus sequences, the researchers proposed a theory for why it is so.

    “We found that the limited capacities of non-human animals can be explained by a simpler kind of memory that does not faithfully represent sequential information. Using a mathematical model, we show that this simpler memory explains the results from animal experiments,” says Stefano Ghirlanda, lead author of the study and Professor of psychology at Brooklyn College and the CUNY Graduate Center.

    This research can explain why no language-trained animal has successfully mastered sequential aspects of language, such as the difference between “the dog bit the lady” and “the lady bit the dog.” The researchers hypothesize that, sometime during human prehistory, the capacity to recognize and remember sequences of stimuli evolved, supporting the later evolution of human-level language, planning, and reasoning.
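
    The contrast the authors draw can be illustrated with a toy sketch of an order-free “trace” memory versus a faithful sequential memory. This is a simplified illustration of the general idea, not the mathematical model in the paper.

```python
# Toy contrast between an order-free 'trace' memory, like the simpler
# kind proposed for non-human animals, and a sequential memory.
# Illustrative sketch only, not the paper's published model.
from collections import Counter

def trace_memory(sequence):
    """Stores only which stimuli occurred and how often; order is lost."""
    return Counter(sequence)

def sequential_memory(sequence):
    """Faithfully stores the order of the stimuli."""
    return tuple(sequence)

a = ["green", "red"]   # green lamp lit first, then red
b = ["red", "green"]   # the reverse order

# The trace memory cannot tell the two lamp orders apart...
print(trace_memory(a) == trace_memory(b))            # True
# ...while a sequential memory distinguishes them.
print(sequential_memory(a) == sequential_memory(b))  # False
```

    Under such a trace memory, “the dog bit the lady” and “the lady bit the dog” contain exactly the same elements and so collapse into the same representation, which is the kind of limitation the authors' model formalizes.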

    The article “Memory for stimulus sequences: a divide between humans and other animals?” was published in Royal Society Open Science.

    Macaques, and other animals, have great difficulties in distinguishing between sequences of stimuli. This might be what separates humans from other animals.

    Source: Science Daily

  • How viewing cute animals can help rekindle marital spark

    One of the well-known challenges of marriage is keeping the passion alive after years of partnership, as passions tend to cool even in very happy relationships. In a new study, a team of psychological scientists led by James K. McNulty of Florida State University has developed an unconventional intervention for helping a marriage maintain its spark: pictures of puppies and bunnies.

    The study is published in Psychological Science, a journal of the Association for Psychological Science.

    Previous research has shown that, in many instances, marriage satisfaction declines even when day-to-day behaviors stay the same. This led McNulty and colleagues to hypothesize that an intervention focused on changing someone’s thoughts about their spouse, as opposed to one that targets their behaviors, might improve relationship quality.

    Specifically, the research team wanted to find out whether it was possible to improve marital satisfaction by subtly retraining the immediate, automatic associations that come to mind when people think about their spouses.

    “One ultimate source of our feelings about our relationships can be reduced to how we associate our partners with positive affect, and those associations can come from our partners but also from unrelated things, like puppies and bunnies,” McNulty explained.

    Repeatedly linking a very positive stimulus to an unrelated one can create positive associations over time — perhaps the most famous example of this kind of conditioned response is Pavlov’s dogs, who salivated at the sound of a bell after being exposed to multiple pairings of meat and the bell sound.

    McNulty and colleagues designed their intervention using a similar kind of conditioning called evaluative conditioning: Images of a spouse were repeatedly paired with very positive words or images (like puppies and bunnies). In theory, the positive feelings elicited by the positive images and words would become automatically associated with images of the spouse after practice.

    Participants in the study included 144 married couples, all under the age of 40 and married for less than 5 years. On average, participants were around 28 years old and around 40% of the couples had children.

    At the start of the study, couples completed a series of measures of relationship satisfaction. A few days later, the spouses came to the lab to complete a measure of their immediate, automatic attitudes toward their partner.

    Each spouse was asked to individually view a brief stream of images once every 3 days for 6 weeks. Embedded in this stream were pictures of their partner. Those in the experimental group always saw the partner’s face paired with positive stimuli (e.g., an image of a puppy or the word “wonderful”) while those in the control condition saw their partner’s face matched to neutral stimuli (e.g., an image of a button).
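
    The pairing schedule lends itself to a toy Rescorla-Wagner-style sketch of how repeated pairings could build an automatic association. The learning rate and valence values below are illustrative assumptions, not parameters estimated in the study.

```python
# Toy Rescorla-Wagner-style sketch of evaluative conditioning: each
# time the partner's face appears alongside a stimulus, the automatic
# association drifts toward that stimulus's valence. The learning rate
# and valences are illustrative assumptions, not study parameters.

def condition(valences, rate=0.2):
    """Return the association after repeated pairings, starting neutral.
    Each pairing moves the association a fraction of the way toward
    the paired stimulus's valence (+1 positive, 0 neutral)."""
    association = 0.0
    for v in valences:
        association += rate * (v - association)
    return association

# Experimental group: face always paired with positive stimuli (+1),
# such as a puppy or the word "wonderful".
positive = condition([1.0] * 12)
# Control group: face paired with neutral stimuli (0), such as a button.
neutral = condition([0.0] * 12)

print(round(positive, 3), round(neutral, 3))
```

    In this sketch the positively paired face accumulates a strongly positive automatic association while the neutrally paired face stays at zero, mirroring the direction of the difference the researchers measured between the two groups.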

    Couples also completed implicit measures of attitudes toward their partner every 2 weeks for 8 weeks. To measure implicit attitudes, each spouse was asked to indicate as quickly as possible the emotional tone of positive and negative words after quickly glimpsing a series of faces, which included their partner’s face.

    The data showed that the evaluative conditioning worked: Participants who were exposed to positive images paired with their partner’s face showed more positive automatic reactions to their partner over the course of the intervention compared with those who saw neutral pairings.

    More importantly, the intervention was associated with overall marriage quality: As in other research, more positive automatic reactions to the partner predicted greater improvements in marital satisfaction over the course of the study.

    “I was actually a little surprised that it worked,” McNulty explained. “All the theory I reviewed on evaluative conditioning suggested it should, but existing theories of relationships, and just the idea that something so simple and unrelated to marriage could affect how people feel about their marriage, made me skeptical.”

    It’s important to note that McNulty and colleagues are not arguing that behavior in a relationship is irrelevant to marital satisfaction. They note that interactions between spouses are actually the most important factor for setting automatic associations.

    However, the new findings suggest that a brief intervention focused on automatic attitudes could be useful as one component of marriage counseling, or as a resource for couples coping with difficult long-distance separations, such as military deployments.

    “The research was actually prompted by a grant from the Department of Defense — I was asked to conceptualize and test a brief way to help married couples cope with the stress of separation and deployment,” McNulty said. “We would really like to develop a procedure that could help soldiers and other people in situations that are challenging for relationships.”

    Do you think of puppies and bunnies when you think of your spouse?

    Source: Science Daily

  • Detecting social signals may have affected how we see colors

    The arrangement of the photoreceptors in our eyes allows us to detect socially significant color variation better than other types of color vision, a team of researchers has found. Specifically, our color vision is superior at spotting “social signaling,” such as blushing or other facial color changes — even when compared to the type of color vision that we design for digital cameras and other photographic devices.

    “Our color vision is very strange,” says James Higham, an assistant professor in NYU’s Department of Anthropology and one of the study’s co-authors. “Our green receptor and our red receptor detect very similar colors. One would think that the ideal type of color vision would look different from ours, and when we design color detection, such as for digital cameras, we construct a different type of color vision. However, we’ve now shown that when it comes to spotting changes in color linked to social cues, humans outshine the type of color vision we’ve designed for our technologies.”

    The study, which appears in the journal Proceedings of the Royal Society B: Biological Sciences, focuses on trichromatic color vision — that is, how we process the colors we see based on comparisons of how red, green, and blue they are.

    One particularly interesting aspect of our visual system is how significantly it differs from that of cameras. Notably, the green and red photoreceptors we use for color vision are placed very close together; by contrast, the equivalent components in cameras are situated with ample (and even) spacing among them. Given that cameras are designed to capture color optimally, many have concluded that their ability to detect an array of colors should be superior to that of humans and other primates — and wondered why our vision is the way it is.

    One idea that has been well studied relates to foraging. It hypothesizes that primate color vision allows us to distinguish between subtle shades of green and red, which is useful, for example, when fruit is ripening against green leaves in a tree. An alternative hypothesis relates to the fact that both humans and other primates must be able to spot subtle changes in facial color during social interactions. For instance, some species of monkeys display red signals on their faces and genitals that change color during mating and in social interactions. Similarly, humans exhibit facial color changes such as blushing, which are socially informative signals.

    In their study, the researchers had 60 human subjects view a series of digital photographs of female rhesus macaque monkeys. These primates’ facial color is known to change with reproductive status, with females’ faces becoming redder when they are ready to mate. This process, captured in the series of photographs, provides a good model for testing the ability not only to detect colors, but also to spot those linked to social cues — albeit across two species.

    For different sets of photographs, the scientists developed software that replicated how colors appear under different types of color vision, including several types of color blindness and the type of trichromatic vision used in many artificial systems, with even spacing of the green and red photoreceptors. Some of the study’s subjects viewed the transformation of the monkeys’ faces as a human or primate would see it, while others saw the pictures as a color-blind person would, and still others as a camera would. Throughout, the subjects had to discriminate between the different colors exhibited by the monkeys in the photos.
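The kind of rendering described above can be sketched with a simple linear color transform. The matrix below is a commonly used rough approximation of deuteranopia (red-green color blindness) chosen for illustration; it is an assumption of this sketch, not the model the researchers used, and the two face colors are likewise invented.

```python
# Illustrative approximation of deuteranopia as a 3x3 linear transform
# on RGB values in [0, 1]. Not the transform from the study.
DEUTERANOPIA = [
    [0.625, 0.375, 0.0],
    [0.700, 0.300, 0.0],
    [0.000, 0.300, 0.7],
]

def simulate(rgb, matrix=DEUTERANOPIA):
    """Apply a 3x3 color-vision matrix to an (r, g, b) triple."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in matrix)

def red_green_contrast(rgb):
    """Crude index of how red-versus-green a color looks: R minus G."""
    return rgb[0] - rgb[1]

flushed = (0.9, 0.4, 0.4)  # hypothetical redder (flushed) macaque face
pale = (0.7, 0.6, 0.5)     # hypothetical paler face

# To a trichromat the faces differ clearly in red-green contrast;
# after the red-green-blind transform the difference collapses.
normal_diff = red_green_contrast(flushed) - red_green_contrast(pale)
simulated_diff = (red_green_contrast(simulate(flushed))
                  - red_green_contrast(simulate(pale)))
```

Comparing `normal_diff` with `simulated_diff` shows why subjects viewing color-blind renderings would find the facial color changes much harder to discriminate.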

    Overall, the subjects viewing the images using the human/primate visual system more accurately and more quickly identified changes in the monkeys’ face coloring.

    “Humans and many other primates have an unusual type of color vision, and no one is sure why,” first author Chihiro Hiramatsu of Japan’s Kyushu University notes. “Here, we provide one of the first experimental tests of the idea that our unusual vision might be related to detecting social signals in the faces of others.”

    “But, perhaps more importantly, these results support a rarely tested idea that social signaling itself, such as the need to detect blushing and facial color changes, might have had a role in the evolution or maintenance of the unusual type of color vision shown in primates, especially those with conspicuous patches of bare skin, including humans, macaques, and many others,” concludes co-author Amanda Melin of the University of Calgary.

    Our color vision is superior at spotting "social signaling," such as blushing or other facial color changes, when compared to other types of color vision, including the type we design for digital cameras and other photographic devices. In their study, the researchers had 60 human subjects view a series of digital photographs of female rhesus macaque monkeys, above, whose facial color changes to give social cues.

    Source: Science Daily