Category: Science & Technology

  • The moon is older than scientists thought

    {Formation occurred 4.51 billion years ago, millions of years earlier than previously believed.}

    A UCLA-led research team reports that the moon is at least 4.51 billion years old, or 40 million to 140 million years older than scientists previously thought.

    The findings — based on an analysis of minerals from the moon called zircons that were brought back to Earth by the Apollo 14 mission in 1971 — are published Jan. 11 in the journal Science Advances.

    The moon’s age has been a hotly debated topic, even though scientists have tried for many years to settle the question using a wide range of scientific techniques.

    “We have finally pinned down a minimum age for the moon; it’s time we knew its age and now we do,” said Mélanie Barboni, the study’s lead author and a research geochemist in UCLA’s Department of Earth, Planetary and Space Sciences.

    The moon was formed by a violent, head-on collision between the early Earth and a “planetary embryo” called Theia, a UCLA-led team of geochemists and colleagues reported in 2016.

    The newest research would mean that the moon formed “only” about 60 million years after the birth of the solar system — an important point because it would provide critical information for astronomers and planetary scientists who seek to understand the early evolution of the Earth and our solar system.

    That has been a difficult task, Barboni said, because “whatever was there before the giant impact has been erased.” While scientists cannot know what occurred before the collision with Theia, these findings are important because they will help scientists continue to piece together major events that followed it.

    It’s usually difficult to determine the age of moon rocks because most of them contain a patchwork of fragments of multiple other rocks. But Barboni was able to analyze eight zircons in pristine condition. Specifically, she examined how the uranium they contained had decayed to lead (in a lab at Princeton University) and how the lutetium they contained had decayed to an element called hafnium (using a mass spectrometer at UCLA). The researchers analyzed those elements together to determine the moon’s age.
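
    For readers curious about the arithmetic behind the uranium-lead method, the sketch below applies the standard closed-system age equation for the decay of uranium-238 to lead-206; the isotope ratio used is an illustrative value, not a measurement from the study, which combined the uranium-lead and lutetium-hafnium systems.

        import math

        # Closed-system U-Pb age: t = ln(1 + 206Pb/238U) / lambda
        LAMBDA_U238 = 1.55125e-10   # decay constant of 238U, per year (half-life ~4.47 billion years)

        def u_pb_age(pb206_per_u238):
            """Age implied by a measured radiogenic 206Pb/238U ratio, assuming no lead loss."""
            return math.log(1.0 + pb206_per_u238) / LAMBDA_U238

        # an illustrative zircon with a radiogenic 206Pb/238U ratio of about 1.01
        print(f"{u_pb_age(1.01) / 1e9:.2f} billion years")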

    “Zircons are nature’s best clocks,” said Kevin McKeegan, a UCLA professor of geochemistry and cosmochemistry, and a co-author of the study. “They are the best mineral in preserving geological history and revealing where they originated.”

    The Earth’s collision with Theia created a liquefied moon, which then solidified. Scientists believe most of the moon’s surface was covered with magma right after its formation. The uranium-lead measurements reveal when the zircons first appeared in the moon’s initial magma ocean, which later cooled down and formed the moon’s mantle and crust; the lutetium-hafnium measurements reveal when its magma formed, which happened earlier.

    “Mélanie was very clever in figuring out the moon’s real age dates back to its pre-history before it solidified, not to its solidification,” said Edward Young, a UCLA professor of geochemistry and cosmochemistry and a co-author of the study.

    Previous studies concluded the moon’s age based on moon rocks that had been contaminated by multiple collisions. McKeegan said those rocks indicated the date of some other events, “but not the age of the moon.”

    The UCLA researchers are continuing to analyze zircons brought back by the Apollo astronauts in order to study the early history of the moon.

    This is astronaut Alan B. Shepard Jr. on the moon in 1971 with the Apollo 14 mission.
  • Drone-based blood deliveries in Tanzania to be funded by UK

    {The UK government is to fund a trial of drone-based deliveries of blood and other medical supplies in Tanzania.}

    The goal is to radically reduce the amount of time it takes to send stock to health clinics in the African nation by road or other means.

    The scheme involves Zipline, a Silicon Valley start-up that began running a similar service in Rwanda in October.

    Experts praised that initiative but cautioned that “cargo drones” are still of limited use to humanitarian bodies.

    The Department for International Development (Dfid) has not said how much money will be invested in the Tanzanian effort or for how long.

    It also announced plans to fund tests of drones in Nepal to map areas of the country prone to damage from extreme weather, to help prepare for future crises.

    “This innovative, modern approach ensures we are achieving the best results for the world’s poorest people and delivering value for money for British taxpayers,” commented the International Development Secretary Priti Patel.

    {{Parachute deliveries}}

    Zipline’s drones – called Zips – are small fixed-wing aircraft that are fired from a catapult and follow a pre-programmed path using GPS location data.

    The advantage of the design over multi-rotor models is that the vehicles can better cope with windy conditions and stay airborne for longer. In theory, they can fly up to about 180 miles (290km) before running out of power, although Zipline tries to keep round trips to about half that distance.

    Their drawback is that they require open space to land – in a Zip’s case, an area about the size of two car parking spaces.

    Zipline gets round this issue by having its drones descend to heights of about 5m (16.4ft) when they reach their destinations and then release their loads via paper parachutes. Afterwards, they regain altitude and return to base before coming to rest.

    The aircraft fly below 500ft (152m) to avoid the airspace used by passenger planes.

    Tanzania, Rwanda and Malawi – which uses a different type of drone for medical deliveries – all take a permissive approach to unmanned aerial vehicle (UAV) regulations, helping make them attractive places for such trials.

    Earlier in the year, Tanzania also authorised the use of drones in its Tarangire National Park as part of an effort to deter animal poachers.

    {{Saved lives}}

    Dfid estimates that flying blood and medical supplies by drone out of Tanzania’s capital, Dodoma, could save $58,000 (£47,400) a year compared with sending them by car or motorcycle.

    But a spokeswoman suggested that the time savings were more crucial.

    “Flights are planned to start in early 2017, and when they do it is estimated that [the] UAVs could support over 50,000 births a year, cutting down the time mothers and new-borns would have to wait for life-saving medicine to 19 minutes – reduced from the 110 minutes traditional transport methods would take,” she explained.

    The Ifakara Health Institute – which specialises in treatments for malaria, HIV and tuberculosis, as well as in neonatal health issues – will be the local partner.

    The Humanitarian UAV Network and other non-profit bodies recently surveyed the use of drones to carry out human welfare tasks.

    The study highlighted the work Zipline was doing, noting the firm was capable of setting up a new drone launch hub in as little as 24 hours, meaning it was well suited to rapid-response efforts as well as longer-term projects.

    But the study also noted that humanitarian cargos are often measured in tonnes rather than kilograms, and need to be transported across longer distances than a Zip can manage.

    “Given these current trade-offs relative to manned aviation, the specific cases in which cargo drones can currently add value are particularly narrow in the context of the universe of needs that humanitarian organisations typically face,” it said.

    And it added that more research was needed to properly evaluate whether existing schemes were as reliable as claimed.

    “Organisations considering the use of cargo drones need statistics on flights performed, hours logged, failure rates and other performance measures.”

  • One step closer to reality: Devices that convert heat into electricity

    {The same researchers who pioneered the use of a quantum mechanical effect to convert heat into electricity have figured out how to make their technique work in a form more suitable to industry.}

    In Nature Communications, engineers from The Ohio State University describe how they used magnetism on a composite of nickel and platinum to amplify the voltage output 10 times or more — not in a thin film, as they had done previously, but in a thicker piece of material that more closely resembles components for future electronic devices.

    Many electrical and mechanical devices, such as car engines, produce heat as a byproduct of their normal operation. It’s called “waste heat,” and its existence is required by the fundamental laws of thermodynamics, explained study co-author Stephen Boona.

    But a growing area of research called solid-state thermoelectrics aims to capture that waste heat inside specially designed materials to generate power and increase overall energy efficiency.

    “Over half of the energy we use is wasted and enters the atmosphere as heat,” said Boona, a postdoctoral researcher at Ohio State. “Solid-state thermoelectrics can help us recover some of that energy. These devices have no moving parts, don’t wear out, are robust and require no maintenance. Unfortunately, to date, they are also too expensive and not quite efficient enough to warrant widespread use. We’re working to change that.”
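
    To put rough numbers on the idea, the sketch below runs textbook figures for a conventional (charge-based) thermoelectric generator; the Seebeck coefficient, temperature difference and internal resistance are assumed for illustration and are not from the study, and the spin-based devices described below operate at far smaller voltages.

        # Conventional thermoelectric back-of-envelope (assumed values, for a sense of scale only)
        S = 200e-6     # Seebeck coefficient, volts per kelvin (typical of good thermoelectric materials)
        dT = 100.0     # temperature difference across the device, kelvin
        R_int = 0.01   # internal electrical resistance, ohms

        V = S * dT                   # open-circuit voltage
        P_max = V**2 / (4 * R_int)   # maximum power delivered to a matched load
        print(f"open-circuit voltage: {V * 1e3:.0f} mV, matched-load power: {P_max * 1e3:.0f} mW")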

    In 2012, the same Ohio State research group, led by Joseph Heremans, demonstrated that magnetic fields could boost a quantum mechanical effect called the spin Seebeck effect, and in turn boost the voltage output of thin films made from exotic nano-structured materials from a few microvolts to a few millivolts.

    In this latest advance, they’ve increased the output for a composite of two very common metals, nickel with a sprinkling of platinum, from a few nanovolts to tens or hundreds of nanovolts — a smaller voltage, but in a much simpler device that requires no nanofabrication and can be readily scaled up for industry.

    Heremans, a professor of mechanical and aerospace engineering and the Ohio Eminent Scholar in Nanotechnology, said that, to some extent, using the same technique in thicker pieces of material required that he and his team rethink the equations that govern thermodynamics and thermoelectricity, which were developed before scientists knew about quantum mechanics. And while quantum mechanics often concerns photons — waves and particles of light — Heremans’ research concerns magnons — waves and particles of magnetism.

    “Basically, classical thermodynamics covers steam engines that use steam as a working fluid, or jet engines or car engines that use air as a working fluid. Thermoelectrics use electrons as the working fluid. And in this work, we’re using quanta of magnetization, or ‘magnons,’ as a working fluid,” Heremans said.

    Until now, research in magnon-based thermodynamics has always been done in thin films — perhaps only a few atoms thick — and even the best-performing films produce very small voltages.

    In the 2012 paper, his team described hitting electrons with magnons to push them through thermoelectric materials. In the current Nature Communications paper, they’ve shown that the same technique can be used in bulk pieces of composite materials to further improve waste heat recovery.

    Instead of applying a thin film of platinum on top of a magnetic material as they might have done before, the researchers distributed a very small amount of platinum nanoparticles randomly throughout a magnetic material — in this case, nickel. The resulting composite produced enhanced voltage output due to the spin Seebeck effect. This means that for a given amount of heat, the composite material generated more electrical power than either material could on its own. Since the entire piece of composite is electrically conducting, other electrical components can draw the voltage from it with increased efficiency compared to a film.

    While the composite is not yet part of a real-world device, Heremans is confident the proof-of-principle established by this study will inspire further research that may lead to applications for common waste heat generators, including car and jet engines. The idea is very general, he added, and can be applied to a variety of material combinations, enabling entirely new approaches that don’t require expensive metals like platinum or delicate processing procedures like thin-film growth.

    Scanning transmission electron microscope image of a nickel-platinum composite material created at The Ohio State University. At left, the image is overlaid with false-color maps of elements in the material, including platinum (red), nickel (green) and oxygen (blue).
  • World’s smallest radio receiver has building blocks the size of two atoms

    {Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences have made the world’s smallest radio receiver — built out of an assembly of atomic-scale defects in pink diamonds.}

    This tiny radio — whose building blocks are the size of two atoms — can withstand extremely harsh environments and is biocompatible, meaning it could work anywhere from a probe on Venus to a pacemaker in a human heart.

    The research was led by Marko Loncar, the Tiantsai Lin Professor of Electrical Engineering at SEAS, and his graduate student Linbo Shao and published in Physical Review Applied.

    The radio uses tiny imperfections in diamonds called nitrogen-vacancy (NV) centers. To make NV centers, researchers replace one carbon atom in a diamond crystal with a nitrogen atom and remove a neighboring atom — creating a system that is essentially a nitrogen atom with a hole next to it. NV centers can be used to emit single photons or detect very weak magnetic fields. They have photoluminescent properties, meaning they can convert information into light, making them powerful and promising systems for quantum computing, photonics and sensing.

    Radios have five basic components — a power source, a receiver, a transducer to convert the high-frequency electromagnetic signal in the air into a low-frequency current, a speaker or headphones to convert the current into sound, and a tuner.

    In the Harvard device, electrons in diamond NV centers are powered, or pumped, by green light emitted from a laser. These electrons are sensitive to electromagnetic fields, including the waves used in FM radio, for example. When an NV center receives radio waves, it converts them and emits the audio signal as red light. A common photodiode converts that light into a current, which is then converted to sound through a simple speaker or headphone.

    An electromagnet creates a strong magnetic field around the diamond, which can be used to change the radio station, tuning the receiving frequency of the NV centers.
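
    For readers who want to see the transducer step in the abstract, the short sketch below modulates and then demodulates an FM signal in software; it is a generic illustration with assumed parameters, not a model of the diamond device.

        import numpy as np
        from scipy.signal import hilbert

        fs = 200_000                            # sample rate, Hz
        t = np.arange(0, 0.05, 1 / fs)
        audio = np.sin(2 * np.pi * 440 * t)     # a 440 Hz test tone standing in for the broadcast audio

        fc, dev = 20_000, 5_000                 # carrier frequency and frequency deviation, Hz
        phase = 2 * np.pi * np.cumsum(fc + dev * audio) / fs
        rf = np.cos(phase)                      # the frequency-modulated "radio wave"

        # Demodulate: the instantaneous frequency is the time derivative of the signal's phase
        inst_phase = np.unwrap(np.angle(hilbert(rf)))
        inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)
        recovered = (inst_freq - fc) / dev      # approximately reproduces the original tone

        n = len(recovered)
        print(f"correlation with the original audio: {np.corrcoef(recovered, audio[:n])[0, 1]:.3f}")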

    Shao and Loncar used billions of NV centers in order to boost the signal, but the radio works with a single NV center, emitting one photon at a time, rather than a stream of light.

    The radio is extremely resilient, thanks to the inherent strength of diamond. The team successfully played music at 350 degrees Celsius — about 660 Fahrenheit.

    “Diamonds have these unique properties,” said Loncar. “This radio would be able to operate in space, in harsh environments and even the human body, as diamonds are biocompatible.”

    This research was coauthored by Mian Zhang, Matthew Markham and Andrew M. Edmonds. It was supported in part by the STC Center for Integrated Quantum Materials.

  • New robot has a human touch

    {Most robots achieve grasping and tactile sensing through motorized means, which can be excessively bulky and rigid. A Cornell University group has devised a way for a soft robot to feel its surroundings internally, in much the same way humans do.}

    A group led by Robert Shepherd, assistant professor of mechanical and aerospace engineering and principal investigator of the Organic Robotics Lab, has published a paper describing how stretchable optical waveguides act as curvature, elongation and force sensors in a soft robotic hand.

    Doctoral student Huichan Zhao is lead author of “Optoelectronically Innervated Soft Prosthetic Hand via Stretchable Optical Waveguides,” which is featured in the debut edition of Science Robotics.

    “Most robots today have sensors on the outside of the body that detect things from the surface,” Zhao said. “Our sensors are integrated within the body, so they can actually detect forces being transmitted through the thickness of the robot, a lot like we and all organisms do when we feel pain, for example.”

    Optical waveguides have been in use since the early 1970s for numerous sensing functions, including tactile, position and acoustic sensing. Fabrication was originally a complicated process, but the advent over the last 20 years of soft lithography and 3-D printing has led to the development of elastomeric sensors that are easily produced and incorporated into soft robotic applications.

    Shepherd’s group employed a four-step soft lithography process to produce the core (through which light propagates) and the cladding (the outer surface of the waveguide), which also houses the LED (light-emitting diode) and the photodiode.

    The more the prosthetic hand deforms, the more light is lost through the core. That variable loss of light, as detected by the photodiode, is what allows the prosthesis to “sense” its surroundings.

    “If no light was lost when we bend the prosthesis, we wouldn’t get any information about the state of the sensor,” Shepherd said. “The amount of loss is dependent on how it’s bent.”
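
    As a rough illustration of how such a reading could be turned into a bend estimate (a sketch with made-up calibration values, not the method described in the paper): record the photodiode output at a handful of known bends once, then invert that monotonic calibration curve by interpolation.

        import numpy as np

        bend_deg   = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])   # calibration bends, degrees
        photodiode = np.array([1.00, 0.91, 0.80, 0.66, 0.51, 0.37, 0.25])  # normalized output power (assumed)

        def estimate_bend(reading):
            # np.interp needs increasing x values, so interpolate on the reversed arrays
            return np.interp(reading, photodiode[::-1], bend_deg[::-1])

        print(f"estimated bend for a reading of 0.58: {estimate_bend(0.58):.0f} degrees")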

    The group used its optoelectronic prosthesis to perform a variety of tasks, including grasping and probing for both shape and texture. Most notably, the hand was able to scan three tomatoes and determine, by softness, which was the ripest.

    (A) Schematic of hand structure and components; (B) image of the fabricated hand mounted on a robotic arm with each finger actuated at ΔP = 100 kPa.
  • Will Earth still exist 5 billion years from now?

    {Old star offers sneak preview of the future.}

    What will happen to Earth when, in a few billion years’ time, the Sun is a hundred times bigger than it is today? Using the most powerful radio telescope in the world, an international team of astronomers has set out to look for answers in the star L2 Puppis. Five billion years ago, this star was very similar to the Sun as it is today.

    “Five billion years from now, the Sun will have grown into a red giant star, more than a hundred times larger than its current size,” says Professor Leen Decin from the KU Leuven Institute of Astronomy. “It will also experience an intense mass loss through a very strong stellar wind. The end product of its evolution, 7 billion years from now, will be a tiny white dwarf star. This will be about the size of the Earth, but much heavier: one teaspoon of white dwarf material weighs about 5 tons.”

    This metamorphosis will have a dramatic impact on the planets of our Solar System. Mercury and Venus, for instance, will be engulfed in the giant star and destroyed.

    “But the fate of the Earth is still uncertain,” continues Decin. “We already know that our Sun will be bigger and brighter, so that it will probably destroy any form of life on our planet. But will the Earth’s rocky core survive the red giant phase and continue orbiting the white dwarf?”

    To answer this question, an international team of astronomers observed the evolved star L2 Puppis. This star is 208 light years away from Earth — which, in astronomy terms, means nearby. The researchers used the ALMA radio telescope, which consists of 66 individual radio antennas that together form a giant virtual telescope with a 16-kilometre diameter.

    “We discovered that L2 Puppis is about 10 billion years old,” says Ward Homan from the KU Leuven Institute of Astronomy. “Five billion years ago, the star was an almost perfect twin of our Sun as it is today, with the same mass. One third of this mass was lost during the evolution of the star. The same will happen with our Sun in the very distant future.”

    300 million kilometres from L2 Puppis — or twice the distance between the Sun and the Earth — the researchers detected an object orbiting the giant star. In all likelihood, this is a planet that offers a unique preview of our Earth five billion years from now.

    A deeper understanding of the interactions between L2 Puppis and its planet will yield valuable information on the final evolution of the Sun and its impact on the planets in our Solar System. Whether the Earth will eventually survive the Sun or be destroyed is still uncertain. L2 Puppis may be the key to answering this question.

    This is a schematic view of the candidate planet's orbit in the L2 Puppis disk.
  • Toyota to expand hybrid system development to cut emissions

    {Toyota Motor Corp on Tuesday said it would expand the development of its gasoline hybrid technology over the next five years to speed up the introduction of lower-emission engines in the face of stricter global emissions standards.}

    The announcement was the latest by the Japanese firm aimed at making “greener” cars as global automakers face tighter regulations in regions that will require more environment-friendly cars in the coming years.

    In addition to improving the efficiency of its hybrid systems and gasoline engines, Toyota is stepping up the development of longer-range battery-electric cars in a shift from its strategy of promoting hydrogen fuel-cell technology as the future of zero-emission vehicles.

    Toyota said it would expand personnel on its hybrid technology development team by 30 percent through 2021, by which time it aims to introduce 19 new lower-emission power train components made on its recently introduced common manufacturing platform.

    Toyota is speeding up the development of lower-emission cars, last month appointing President Akio Toyoda to lead a new electric car division to accelerate the development of battery-powered cars.

    Toyota has pledged to reduce global average CO2 emissions of its new vehicles by around 90 percent by 2050.

    To this end, Toshiyuki Mizushima, president of Toyota’s power train division, told reporters at a strategy briefing that he expected the take-up of hybrid vehicles to increase, accounting for around 20 percent of Toyota’s global annual vehicle sales by 2025, from around 10 percent now.

    While Toyota invests heavily in alternatives to conventional engines, it believes gasoline engines and hybrid systems will play a large role in the medium term as it will take “quite some time” for electric cars to become a common sight on roads and highways around the world, given their limited driving range and charging infrastructure.

  • No peeking: Humans play computer game using only direct brain stimulation

    {In the Matrix film series, Keanu Reeves plugs his brain directly into a virtual world that sentient machines have designed to enslave mankind.}

    The Matrix plot may be dystopian fantasy, but University of Washington researchers have taken a first step in showing how humans can interact with virtual realities via direct brain stimulation.

    In a paper published online Nov. 16 in Frontiers in Robotics and AI, they describe the first demonstration of humans playing a simple, two-dimensional computer game using only input from direct brain stimulation — without relying on any usual sensory cues from sight, hearing or touch.

    The subjects had to navigate 21 different mazes, each time choosing whether to move forward or down based on whether they sensed a visual stimulation artifact called a phosphene, which is perceived as a blob or bar of light. To signal which direction to move, the researchers generated a phosphene through transcranial magnetic stimulation, a well-known technique that uses a magnetic coil placed near the skull to directly and noninvasively stimulate a specific area of the brain.

    “The way virtual reality is done these days is through displays, headsets and goggles, but ultimately your brain is what creates your reality,” said senior author Rajesh Rao, UW professor of Computer Science & Engineering and director of the Center for Sensorimotor Neural Engineering.

    “The fundamental question we wanted to answer was: Can the brain make use of artificial information that it’s never seen before that is delivered directly to the brain to navigate a virtual world or do useful tasks without other sensory input? And the answer is yes.”

    The five test subjects made the right moves in the mazes 92 percent of the time when they received the input via direct brain stimulation, compared to 15 percent of the time when they lacked that guidance.

    The simple game demonstrates one way that novel information from artificial sensors or computer-generated virtual worlds can be successfully encoded and delivered noninvasively to the human brain to solve useful tasks. It employs a technology commonly used in neuroscience to study how the brain works — transcranial magnetic stimulation — to instead convey actionable information to the brain.

    The test subjects also got better at the navigation task over time, suggesting that they were able to learn to better detect the artificial stimuli.

    “We’re essentially trying to give humans a sixth sense,” said lead author Darby Losey, a 2016 UW graduate in computer science and neurobiology who now works as a staff researcher for the Institute for Learning & Brain Sciences (I-LABS). “So much effort in this field of neural engineering has focused on decoding information from the brain. We’re interested in how you can encode information into the brain.”

    The initial experiment used binary information — whether a phosphene was present or not — to let the game players know whether there was an obstacle in front of them in the maze. In the real world, even that type of simple input could help blind or visually impaired individuals navigate.
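
    A toy simulation makes that binary encoding concrete. Everything below is an illustration rather than the study’s code: the phosphene-to-move mapping is assumed, and a single reliability parameter, set to echo the 92 percent figure above, stands in for how well the cue is perceived.

        import random

        def run_maze(obstacles_ahead, p_correct_read=0.92):
            correct = 0
            for obstacle in obstacles_ahead:
                # with probability p_correct_read the player perceives the binary cue correctly
                sensed = obstacle if random.random() < p_correct_read else not obstacle
                move = "down" if sensed else "forward"   # assumed mapping: phosphene -> obstacle ahead -> move down
                correct += move == ("down" if obstacle else "forward")
            return correct / len(obstacles_ahead)

        steps = [random.random() < 0.5 for _ in range(10_000)]      # a long random maze
        print(f"fraction of correct moves: {run_maze(steps):.2f}")  # ~0.92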

    Theoretically, any of a variety of sensors on a person’s body — from cameras to infrared, ultrasound, or laser rangefinders — could convey information about what is surrounding or approaching the person in the real world to a direct brain stimulator that gives that person useful input to guide their actions.

    “The technology is not there yet — the tool we use to stimulate the brain is a bulky piece of equipment that you wouldn’t carry around with you,” said co-author Andrea Stocco, a UW assistant professor of psychology and I-LABS research scientist. “But eventually we might be able to replace the hardware with something that’s amenable to real world applications.”

    Together with other partners from outside UW, members of the research team have co-founded Neubay, a startup company aimed at commercializing their ideas and introducing neuroscience and artificial intelligence (AI) techniques that could make virtual-reality, gaming and other applications better and more engaging.

    The team is currently investigating how altering the intensity and location of direct brain stimulation can create more complex visual and other sensory perceptions which are currently difficult to replicate in augmented or virtual reality.

    “We look at this as a very small step toward the grander vision of providing rich sensory input to the brain directly and noninvasively,” said Rao. “Over the long term, this could have profound implications for assisting people with sensory deficits while also paving the way for more realistic virtual reality experiences.”

    Test subjects in a UW experiment navigated simple mazes based solely on inputs delivered to their brains by a magnetic coil placed at the back of the skull, demonstrating how humans can interact with virtual realities via direct brain stimulation.
  • Modeling offers new perspective on how Pluto’s ‘icy heart’ came to be

    {Heart’s location and Charon’s existence led to heart’s formation.}

    Pluto’s “icy heart” is a bright, two-lobed feature on its surface that has attracted researchers ever since its discovery by the NASA New Horizons team in 2015. Of particular interest is the heart’s western lobe, informally named Sputnik Planitia, a deep basin containing three kinds of ices — frozen nitrogen, methane and carbon monoxide — and appearing opposite Charon, Pluto’s tidally locked moon. Sputnik Planitia’s unique attributes have spurred a number of scenarios for its formation, all of which identify the feature as an impact basin, a depression created by a smaller body striking Pluto at extremely high speed.

    A new study led by Douglas Hamilton, professor of astronomy at the University of Maryland, instead suggests that Sputnik Planitia formed early in Pluto’s history and that its attributes are inevitable consequences of evolutionary processes. The study was published in the journal Nature on December 1, 2016.

    “The main difference between my model and others is that I suggest that the ice cap formed early, when Pluto was still spinning quickly, and that the basin formed later and not from an impact,” said Hamilton, who is lead author of the paper. “The ice cap provides a slight asymmetry that either locks toward or away from Charon when Pluto’s spin slows to match the orbital motion of the moon.”

    Using a model he developed, Hamilton found that the initial location of Sputnik Planitia could be explained by Pluto’s unusual climate and its spin axis, which is tilted by 120 degrees. For comparison, Earth’s tilt is 23.5 degrees. Modeling the dwarf planet’s temperatures showed that when averaged over Pluto’s 248-year orbit, the 30 degrees north and south latitudes emerged as the coldest places on the dwarf planet, far colder than either pole. Ice would have naturally formed around these latitudes, including at the center of Sputnik Planitia, which is located at 25 degrees north latitude.
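
    A simplified calculation makes the latitude result plausible. The sketch below averages the standard daily-mean insolation formula over a circular orbit for a body tilted by 120 degrees, ignoring the atmosphere, orbital eccentricity and thermal inertia; it is not Hamilton’s climate model, but its least-illuminated band falls near the 30-degree latitudes rather than at the poles.

        import numpy as np

        tilt = np.radians(120.0)                                       # Pluto-like axial tilt
        lat = np.radians(np.linspace(-89.0, 89.0, 357))[:, None]       # latitude grid
        L = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)[None, :]  # position along the orbit
        decl = np.arcsin(np.sin(tilt) * np.sin(L))                     # solar declination over the year

        # hour angle of sunset; the clip handles polar day (h0 = pi) and polar night (h0 = 0)
        h0 = np.arccos(np.clip(-np.tan(lat) * np.tan(decl), -1.0, 1.0))

        # standard daily-mean insolation formula, then averaged over the orbit
        Q = (h0 * np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.sin(h0)) / np.pi
        annual_mean = Q.mean(axis=1)

        coldest = np.degrees(lat[np.argmin(annual_mean), 0])
        print(f"least-illuminated latitude: about {abs(coldest):.0f} degrees north and south")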

    Hamilton’s model also showed that a small ice deposit naturally attracts more ices by reflecting away solar light and heat. Temperatures remain low, which attracts more ice and keeps the temperature low, and the cycle repeats. This positive feedback phenomenon, called the runaway albedo effect, would eventually lead to a single dominating ice cap, like the one observed on Pluto. However, Pluto’s basin is significantly larger than the volume of ice it contains today, suggesting that Pluto’s heart has been slowly losing mass over time, almost as if it was wasting away.
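
    The feedback loop itself can be caricatured in a few lines; the numbers below are arbitrary and purely illustrative, not drawn from Hamilton’s model.

        # Runaway albedo feedback: brighter ice reflects more sunlight, the patch cools,
        # more ice condenses, and the surface gets brighter still.
        ice = 0.01                       # initial fractional ice cover of a surface patch
        for step in range(100):
            albedo = 0.1 + 0.8 * ice                     # more ice -> more reflective surface
            temperature = 1.0 - albedo                   # proxy: absorbed sunlight sets the temperature
            condensation = 0.05 * (1.0 - temperature)    # a colder patch collects more ice
            ice = min(1.0, ice + condensation)
        print(f"ice cover after 100 steps: {ice:.2f}")   # the feedback runs away toward full cover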

    Even so, the single ice cap represents an enormous weight on Pluto’s surface, enough to shift the dwarf planet’s center of mass. Pluto’s rotation slowed gradually due to gravitational forces from Charon, just as Earth is slowly losing spin under similar forces from its moon. However, because Charon is so large and so close to Pluto, the process led to Pluto locking one face toward its moon in just a few million years. The large mass of Sputnik Planitia would have had a 50 percent chance of either facing Charon directly or turning as far away from the moon as possible.

    “It is like a Vegas slot machine with just two states, and Sputnik Planitia ended up in the latter position, centered at 175 degrees longitude,” said Hamilton.

    It would also be easy for the accumulated ice to create its own basin, simply by pushing down, according to Hamilton.

    “Pluto’s big heart weighs heavily on the small planet, leading inevitably to depression,” said Hamilton, noting that the same phenomenon happens on Earth: the Greenland Ice Sheet created a basin and pushed down the crust that it rests upon.

    While Hamilton’s model can explain both the latitude and longitude of Sputnik Planitia, as well as the fact that the ices exist in a basin, several other models were also presented in the December 1, 2016 issue of the journal Nature.

    In one of those papers, UC Santa Cruz Professor of Earth and Planetary Sciences Francis Nimmo, Hamilton and their co-authors modeled how Sputnik Planitia may have formed if its basin was caused by an impact, such as the one that created Charon. Their results showed that the basin may have formed after Pluto slowed its rotation, migrating only slightly to its present location. If this late formation scenario proves correct, the properties of Sputnik Planitia may hint at the presence of a subsurface ocean on Pluto.

    “Either model is viable under the right conditions,” said Hamilton. “While we cannot conclude definitively that there is an ocean under Pluto’s icy shell, we also cannot state that there is not one.”

    Although Pluto was stripped of its status as a planet, an ice cap is a surprisingly Earth-like property. In fact, Pluto is only the third body — Earth and Mars being the others — known to possess an ice cap. The ices of Sputnik Planitia may therefore offer hints relevant to more familiar ices here on Earth.

    Pluto, shown here in the front of this false-color image, has a bright ice-covered 'heart.' The left, roughly oval lobe is the basin provisionally named Sputnik Planitia. Sputnik Planitia appears directly opposite Pluto's moon, Charon (back).
  • Toddler robots help solve how children learn

    {Children learn new words using the same method as robots, according to psychologists.}

    This suggests that early learning is based not on conscious thought but on an automatic ability to associate objects which enables babies to quickly make sense of their environment.

    Dr Katie Twomey from Lancaster University, with Dr Jessica Horst from Sussex University and Dr Anthony Morse and Professor Angelo Cangelosi from Plymouth, wanted to find out how young children learn new words for the first time. They programmed a humanoid robot called iCub, designed to have proportions similar to those of a three-year-old child, with simple software that enabled the robot to hear words through a microphone and see with a camera. They then trained it to point at new objects in order to identify them.

    Dr Twomey said: “We know that two-year-old children can work out the meaning of a new word based on words they already know. That is, our toddler can work out that the new word “giraffe” refers to a new toy, when they can also see two others, called “duck” and “rabbit.” ”

    It is thought that toddlers achieve this through a strategy known as “mutual exclusivity” where they use a process of elimination to work out that because the brown toy is called “rabbit,” and the yellow toy is called “duck,” then the orange toy must be “giraffe.”

    What the researchers found is that the robot learned in exactly the same way when shown several familiar toys and one brand new toy.

    Dr Twomey said: “This new study shows that mutual exclusivity behaviour can be achieved with a very simple “brain” that just learns associations between words and objects. In fact, intelligent as iCub seems, it actually can’t say to itself “I know that the brown toy is a rabbit, and I know that the yellow toy is a duck, so this new toy must be giraffe,” because its software is too simple.

    “This suggests that at least some aspects of early learning are based on an astonishingly powerful association making ability which allows babies and toddlers to rapidly absorb information from the very complicated learning environment.”
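
    A rough illustration of how mutual-exclusivity-like behaviour can fall out of nothing more than word-object associations (a sketch, not the iCub software): keep co-occurrence counts, and for a new word pick the object least claimed by the words already learned.

        from collections import defaultdict

        counts = defaultdict(lambda: defaultdict(float))   # counts[word][object]: co-occurrence strength

        def assoc(word, obj):
            return counts[word].get(obj, 0.0) if word in counts else 0.0

        def observe(word, objects_in_view):
            # the learner simply strengthens the link between the heard word and every visible object
            for obj in objects_in_view:
                counts[word][obj] += 1.0

        def choose_referent(word, objects_in_view, smoothing=0.1):
            # favour the object for which this word is the relatively strongest label,
            # discounting objects already strongly claimed by other words
            def score(obj):
                claimed = sum(assoc(w, obj) for w in list(counts)) + smoothing
                return (assoc(word, obj) + smoothing) / claimed
            return max(objects_in_view, key=score)

        for _ in range(10):                      # training: the familiar toys are repeatedly named
            observe("duck", ["yellow_toy"])
            observe("rabbit", ["brown_toy"])

        # test: a brand-new word heard with two familiar toys and one novel toy in view
        print(choose_referent("giraffe", ["yellow_toy", "brown_toy", "orange_toy"]))   # -> orange_toy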

    A robot toddler used in this study.