Category: Science & Technology

  • Technology which makes electricity from urine also kills pathogens, researchers find

    A scientific breakthrough has taken an emerging biotechnology a step closer to being used to treat wastewater in the Developing World.

    Researchers at the University of the West of England (UWE Bristol) have discovered that technology they have developed, which has already been proven to generate electricity through the process of cleaning organic waste such as urine, also kills bacteria harmful to humans.

    Experts have shown that a special process they have developed in which wastewater flows through a series of cells filled with electroactive microbes can be used to attack and destroy a pathogen — the potentially deadly Salmonella.

    It is envisaged that the microbial fuel cell (MFC) technology could one day be used in the Developing World in areas lacking sanitation and installed in homes in the Developed World to help clean waste before it flows into the municipal sewerage network, reducing the burden on water companies to treat effluent.

    Professor Ioannis Ieropoulos, who is leading the research, said it was necessary to establish that the technology could tackle pathogens before it could be considered for use in the Developing World.

    The findings of the research have been published in the leading scientific journal PLOS ONE. Professor Ieropoulos, Director of the Bristol BioEnergy Centre, based in the Bristol Robotics Laboratory at UWE Bristol, said this was the first reported instance anywhere in the world of pathogens being destroyed using this method.

    He said: “We were really excited with the results — it shows we have a stable biological system in which we can treat waste, generate electricity and stop harmful organisms making it through to the sewerage network.”

    It had already been established that the MFC technology created by Professor Ieropoulos’ team could successfully clean organic waste, including urine, to the extent that it could be safely released into the environment. Through the same process, electricity is generated — enough in earlier trials to charge a mobile phone or power lighting.

    In the unique system, being developed with funding from the Bill & Melinda Gates Foundation, the organic content of the urine is consumed by microbes inside the fuel cells, breaking it down and creating energy.

    For the pathogen experiment, Salmonella enteritidis was added to urine flowing through the system, and the output was checked at the end of the process to determine whether bacteria numbers had been reduced. Results revealed that pathogen numbers had dropped significantly, beyond the minimum requirements used by the sanitation sector.
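    Reductions like this are usually expressed in the sanitation sector as a log10 reduction value (LRV). A minimal sketch of the calculation, using illustrative counts rather than the study’s own figures:

```python
import math

def log_reduction(initial_cfu_per_ml, final_cfu_per_ml):
    """Log10 reduction value (LRV), the standard way the sanitation
    sector reports how effectively a process kills pathogens."""
    return math.log10(initial_cfu_per_ml / final_cfu_per_ml)

# Illustrative counts only -- the study reports its own figures.
lrv = log_reduction(1e7, 1e3)
print(f"LRV = {lrv:.1f}")  # a 4-log reduction, i.e. 99.99% of cells removed
```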

    Other pathogens, including viruses, are now being tested and there are plans for experiments which will establish if the MFC system can eliminate pathogens completely.

    John Greenman, Emeritus Professor of Microbiology, said: “The wonderful outcome in this study was that tests showed a reduction in the number of pathogens beyond the minimum expectations in the sanitation world.

    “We have reduced the number of pathogenic organisms significantly but we haven’t shown we can bring them down to zero — we will continue the work to test if we can completely eliminate them.”

    Professor Ieropoulos said his system could be beneficial to the wastewater industry because MFC systems fitted in homes could result in wastewater being cleaner when it reaches the sewerage system.

    He said: “Water companies are under pressure to improve treatment and produce cleaner and cleaner water at the end of the process. This means costs are rising, energy consumption levels are high and chemicals that are not good for the environment are being used.”

    Enterobacteriaceae: Large family of Gram-negative bacteria that includes many of the more familiar pathogens, such as Salmonella and Escherichia coli.

    Source: Science Daily

  • New face-aging technique could boost search for missing people

    The method maps out the key features, such as the shape of the cheek, mouth and forehead, of a face at a certain age. This information is fed to a computer algorithm which then synthesises new features for the face to produce photographic quality images of the face at different ages.

    A key feature of the method is that it teaches the machine how humans age by feeding the algorithm facial feature data from a large database of individuals at various ages. Consequently, the method improves on existing techniques, achieving a greater level of accuracy.

    The findings will be presented at the International Conference on Missing Children and Adults at Abertay University, Dundee in June, and have been published in the Journal of Forensic Sciences.

    Professor Hassan Ugail, of Bradford’s Centre for Visual Computing, is leading the research. He said: “Each year around 300,000 missing person cases are recorded in the UK alone. This has been part of our motivation in endeavouring to improve current techniques of searching for missing people, particularly those who have been missing for some considerable time.”

    The technique developed by the team uses a method of predictive modelling and applies it to age progression. The model is further strengthened by incorporating facial data from a large database of individuals at different ages thus teaching the machine how humans actually age. In order to test their results the researchers use a method called de-aging whereby they take an individual’s picture and run their algorithm backwards to de-age that person to a younger age. The result is then compared with an actual photograph of the individual taken at the young age.
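    As a rough illustration of the de-aging check (not the paper’s actual non-linear model, which is not specified here), one can fit a population-average feature trajectory over age and verify that progressing a face forward and then backward recovers the original features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: facial feature vectors for 100 people at four ages.
ages = np.array([5.0, 15.0, 25.0, 35.0])
features = rng.normal(size=(100, len(ages), 3))

# Learn the mean feature trajectory across ages (quadratic fit per feature).
mean_traj = features.mean(axis=0)            # shape (4, 3)
coeffs = np.polyfit(ages, mean_traj, deg=2)  # shape (3, 3)

def trajectory(age):
    """Population-average features at a given age."""
    return np.array([age**2, age, 1.0]) @ coeffs

def progress(feat, age_from, age_to):
    """Shift an individual's features along the learned trajectory."""
    return feat + (trajectory(age_to) - trajectory(age_from))

# De-aging check: ageing forward then backward should recover the input.
f15 = features[0, 1]                    # person 0 at age 15
recovered = progress(progress(f15, 15.0, 25.0), 25.0, 15.0)
print(np.allclose(recovered, f15))  # True
```

    In the paper’s validation the comparison is against a real photograph of the subject at the younger age, not against the stored feature vector as in this toy round-trip.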

    As a test case, the researchers chose to work on the case of Ben Needham. Ben disappeared on the Greek island of Kos on 24th July 1991, when he was only 21 months old. He has never been found, but several images have been produced by investigators showing how Ben might look at ages 11-14 years, 17-20 years, and 20-22 years. The team used their method to progress the image of Ben Needham to the ages of 6, 14 and 22 years. The resulting images differ markedly from the earlier ones, and the researchers believe they more closely resemble what Ben might look like today.

    An effective method needs to do two things: the synthesised images need to fit the intended age, and they need to retain the identity of the subject. The results were evaluated using both machine and human methods, and in both, the images of Ben produced using this method were found to be more like the original picture of Ben than the images created as part of previous investigations.

    Professor Ugail added: “No criticism is implied of existing age progression work. Instead we are presenting our work as a development and improvement that could make a contribution to this important area of police work. We are currently working with the relevant parties to further test our method. We are also developing further research plans in order to develop this method so it can be incorporated as a biometric feature, in face recognition systems, for example.”

    “Our method generates more individualised results and hence is more accurate for a given face. This is because we have used large datasets of faces from different ethnicities as well as gender in order to train our algorithm. Furthermore, our model can take data from an individual’s relatives, if available, such as parents, grandparents and siblings. This enables us to generate more accurate and individualised ageing results. Current methods that exist use linear or one-dimensional methods whereas ours is non-linear, which means it is better suited for the individual in question.”

    Age-progressed images produced by University of Bradford researchers using the new technique.

    Source: Science Daily

  • Jupiter is one old-timer, scientist finds

    An international group of scientists has found that Jupiter is the oldest planet in our solar system.

    By looking at tungsten and molybdenum isotopes in iron meteorites, the team, made up of scientists from Lawrence Livermore National Laboratory and the Institut für Planetologie at the University of Münster in Germany, found that meteorites derive from two genetically distinct nebular reservoirs that coexisted but remained separated between 1 million and 3-4 million years after the solar system formed.

    “The most plausible mechanism for this efficient separation is the formation of Jupiter, opening a gap in the disc (the plane of gas and dust surrounding the young sun) and preventing the exchange of material between the two reservoirs,” said Thomas Kruijer, lead author of the paper appearing in the June 12 online issue of Proceedings of the National Academy of Sciences. Kruijer, formerly at the University of Münster, is now at LLNL. “Jupiter is the oldest planet of the solar system, and its solid core formed well before the solar nebula gas dissipated, consistent with the core accretion model for giant planet formation.”

    Jupiter is the most massive planet of the solar system and its presence had an immense effect on the dynamics of the solar accretion disk. Knowing the age of Jupiter is key for understanding how the solar system evolved toward its present-day architecture. Although models predict that Jupiter formed relatively early, until now, its formation has never been dated.

    “We do not have any samples from Jupiter (in contrast to other bodies like the Earth, Mars, the moon and asteroids),” Kruijer said. “In our study, we use isotope signatures of meteorites (which are derived from asteroids) to infer Jupiter’s age.”

    The team showed through isotope analyses of meteorites that Jupiter’s solid core formed within only about 1 million years after the start of the solar system history, making it the oldest planet. Through its rapid formation, Jupiter acted as an effective barrier against inward transport of material across the disk, potentially explaining why our solar system lacks any super-Earths (an extrasolar planet with a mass higher than Earth’s).

    The team found that Jupiter’s core grew to about 20 Earth masses within 1 million years, followed by a more prolonged growth to 50 Earth masses until at least 3-4 million years after the solar system formed.

    Earlier theories proposed that gas-giant planets such as Jupiter and Saturn formed through the growth of large solid cores of about 10 to 20 Earth masses, followed by the accumulation of gas onto these cores. The conclusion was that the gas-giant cores must have formed before dissipation of the solar nebula — the gaseous circumstellar disk surrounding the young sun — which likely occurred between 1 million years and 10 million years after the solar system formed.

    In the work, the team confirmed the earlier theories but was able to date Jupiter much more precisely, to within 1 million years, using the isotopic signatures of meteorites.

    Although this rapid accretion of the cores has been modeled, it had not been possible to date their formation.

    “Our measurements show that the growth of Jupiter can be dated using the distinct genetic heritage and formation times of meteorites,” Kruijer said.

    Most meteorites derive from small bodies located in the main asteroid belt between Mars and Jupiter. Originally these bodies probably formed at a much wider range of heliocentric distances, as suggested by the distinct chemical and isotopic compositions of meteorites and by dynamical models indicating that the gravitational influence of the gas giants led to scattering of small bodies into the asteroid belt.

    Jupiter is not only the largest planet in our solar system, but it’s also the oldest, according to new research from Lawrence Livermore National Laboratory.

    Source: Science Daily

  • Plastic made from sugar and carbon dioxide

    Some biodegradable plastics could in the future be made using sugar and carbon dioxide, replacing unsustainable plastics made from crude oil, following research by scientists from the Centre for Sustainable Chemical Technologies (CSCT) at the University of Bath.

    ◦ Polycarbonate is used to make drinks bottles, lenses for glasses and scratch-resistant coatings for phones, CDs and DVDs.

    ◦ Current manufacturing processes for polycarbonate use BPA (banned from use in baby bottles) and highly toxic phosgene, used as a chemical weapon in World War One.

    ◦ Bath scientists have made alternative polycarbonates from sugars and carbon dioxide in a new process that also uses low pressures and room temperature, making it cheaper and safer to produce.

    ◦ This new type of polycarbonate can be biodegraded back into carbon dioxide and sugar using enzymes from soil bacteria.

    ◦ This new plastic is bio-compatible, so it could in the future be used for medical implants or as scaffolds for growing replacement organs for transplant.

    Polycarbonates from sugars offer a more sustainable alternative to traditional polycarbonate from BPA; however, the standard process still uses a highly toxic chemical called phosgene. Now scientists at Bath have developed a much safer, even more sustainable alternative, which adds carbon dioxide to the sugar at low pressures and at room temperature.

    The resulting plastic has physical properties similar to those of plastics derived from petrochemicals, being strong, transparent and scratch-resistant. The crucial difference is that it can be degraded back into carbon dioxide and sugar using the enzymes found in soil bacteria.

    The new BPA-free plastic could potentially replace current polycarbonates in items such as baby bottles and food containers, and since the plastic is bio-compatible, it could also be used for medical implants or as scaffolds for growing tissues or organs for transplant.

    Dr Antoine Buchard, Whorrod Research Fellow in the University’s Department of Chemistry, said: “With an ever-growing population, there is an increasing demand for plastics. This new plastic is a renewable alternative to fossil-fuel based polymers, potentially inexpensive, and, because it is biodegradable, will not contribute to growing ocean and landfill waste.

    “Our process uses carbon dioxide instead of the highly toxic chemical phosgene, and produces a plastic that is free from BPA, so not only is the plastic safer, but the manufacture process is cleaner too.”

    Dr Buchard and his team at the Centre for Sustainable Chemical Technologies published their work in a series of articles in the journals Polymer Chemistry and Macromolecules.

    In particular, they used nature as inspiration for the process, taking thymidine, a sugar-containing unit of DNA, as a building block to make a novel polycarbonate plastic with a lot of potential.

    PhD student and first author of the articles, Georgina Gregory, explained: “Thymidine is one of the units that makes up DNA. Because it is already present in the body, it means this plastic will be bio-compatible and can be used safely for tissue engineering applications.

    “The properties of this new plastic can be fine-tuned by tweaking the chemical structure — for example we can make the plastic positively charged so that cells can stick to it, making it useful as a scaffold for tissue engineering.” Such tissue engineering work has already started in collaboration with Dr Ram Sharma from Chemical Engineering, also part of the CSCT.

    The researchers have also looked at using other sugars such as ribose and mannose. Dr Buchard added: “Chemists have 100 years’ experience with using petrochemicals as a raw material so we need to start again using renewable feedstocks like sugars as a base for synthetic but sustainable materials. It’s early days, but the future looks promising.”

    This work was supported by Roger and Sue Whorrod (Fellowship to Dr Buchard), EPSRC (Centre for Doctoral Training in Sustainable Chemical Technologies), and a Royal Society Research Grant.

    This graphic shows how sugar and carbon dioxide are converted to plastic.

    Source: Science Daily

  • 7 in 10 smartphone apps share your data with third-party services

    Our mobile phones can reveal a lot about ourselves: where we live and work; who our family, friends and acquaintances are; how (and even what) we communicate with them; and our personal habits. With all the information stored on them, it isn’t surprising that mobile device users take steps to protect their privacy, like using PINs or passcodes to unlock their phones.

    The research that we and our colleagues are doing identifies and explores a significant threat that most people miss: More than 70 percent of smartphone apps are reporting personal data to third-party tracking companies like Google Analytics, the Facebook Graph API or Crashlytics.

    When people install a new Android or iOS app, it asks the user’s permission before accessing personal information. Generally speaking, this is positive. And some of the information these apps collect is necessary for them to work properly: A map app wouldn’t be nearly as useful if it couldn’t use GPS data to get a location.

    But once an app has permission to collect that information, it can share your data with anyone the app’s developer wants to – letting third-party companies track where you are, how fast you’re moving and what you’re doing.

    The help, and hazard, of code libraries

    An app doesn’t just collect data to use on the phone itself. Mapping apps, for example, send your location to a server run by the app’s developer to calculate directions from where you are to a desired destination.

    The app can send data elsewhere, too. As with websites, many mobile apps are written by combining various functions, precoded by other developers and companies, in what are called third-party libraries. These libraries help developers track user engagement, connect with social media and earn money by displaying ads and other features, without having to write them from scratch.

    However, in addition to their valuable help, most libraries also collect sensitive data and send it to their online servers – or to another company altogether. Successful library authors may be able to develop detailed digital profiles of users. For example, a person might give one app permission to know their location, and another app access to their contacts. These are initially separate permissions, one to each app. But if both apps used the same third-party library and shared different pieces of information, the library’s developer could link the pieces together.
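    The cross-app linking risk described above can be sketched as two apps sending different fields to the same library backend, keyed by the same device identifier, which lets the library join them into one profile. The record fields below are hypothetical:

```python
# Hypothetical uploads from two different apps that embed the same
# third-party library, each keyed by the same device identifier.
uploads = [
    {"device_id": "abc123", "app": "maps_app", "location": "51.45,-2.59"},
    {"device_id": "abc123", "app": "chat_app", "contacts": ["alice", "bob"]},
]

# The library's backend can join the separate pieces into one profile.
profiles = {}
for u in uploads:
    profile = profiles.setdefault(u["device_id"], {})
    profile.update({k: v for k, v in u.items() if k not in ("device_id", "app")})

print(profiles)
# {'abc123': {'location': '51.45,-2.59', 'contacts': ['alice', 'bob']}}
```

    Neither app on its own holds both the location and the contacts; the join happens server-side, invisibly to the user.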

    Users would never know, because apps aren’t required to tell users what software libraries they use. And very few apps make public their policies on user privacy; if they do, it’s usually in long legal documents a regular person won’t read, much less understand.

    Developing Lumen

    Our research seeks to reveal how much data are potentially being collected without users’ knowledge, and to give users more control over their data. To get a picture of what data are being collected and transmitted from people’s smartphones, we developed a free Android app of our own, called the Lumen Privacy Monitor. It analyzes the traffic apps send out, to report which applications and online services actively harvest personal data.

    Because Lumen is about transparency, a phone user can see the information installed apps collect in real time and with whom they share these data. We try to show the details of apps’ hidden behavior in an easy-to-understand way. It’s about research, too, so we ask users if they’ll allow us to collect some data about what Lumen observes their apps are doing – but that doesn’t include any personal or privacy-sensitive data. This unique access to data allows us to study how mobile apps collect users’ personal data and with whom they share data at an unprecedented scale.

    In particular, Lumen keeps track of which apps are running on users’ devices, whether they are sending privacy-sensitive data out of the phone, what internet sites they send data to, the network protocol they use and what types of personal information each app sends to each site. Lumen analyzes app traffic locally on the device, and anonymizes these data before sending them to us for study: If Google Maps registers a user’s GPS location and sends that specific address to maps.google.com, Lumen tells us, “Google Maps got a GPS location and sent it to maps.google.com” – not where that person actually is.
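    The anonymization step described above amounts to keeping the metadata and dropping the value. A minimal sketch; the record fields are hypothetical, not Lumen’s actual format:

```python
def anonymise(record):
    """Keep *what* was sent and *where*, never the value itself."""
    return {
        "app": record["app"],
        "data_type": record["data_type"],       # e.g. "gps_location"
        "destination": record["destination"],
        # the actual coordinates / identifier are deliberately dropped
    }

event = {"app": "Google Maps", "data_type": "gps_location",
         "destination": "maps.google.com", "value": (51.5074, -0.1278)}
print(anonymise(event))
```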

    Trackers are everywhere

    More than 1,600 people who have used Lumen since October 2015 allowed us to analyze more than 5,000 apps. We discovered 598 internet sites likely to be tracking users for advertising purposes, including social media services like Facebook, large internet companies like Google and Yahoo, and online marketing companies under the umbrella of internet service providers like Verizon Wireless.

    We found that more than 70 percent of the apps we studied connected to at least one tracker, and 15 percent of them connected to five or more trackers. One in every four trackers harvested at least one unique device identifier, such as the phone number or its device-specific unique 15-digit IMEI number. Unique identifiers are crucial for online tracking services because they can connect different types of personal data provided by different apps to a single person or device. Most users, even privacy-savvy ones, are unaware of those hidden practices.
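    Figures such as “more than 70 percent of apps connected to at least one tracker” come from aggregating per-app connection logs. A toy sketch of that aggregation, with made-up records:

```python
from collections import defaultdict

# Hypothetical observations: (app, tracker domain or None) per connection.
connections = [
    ("weatherapp", "tracker-a.example"),
    ("weatherapp", "tracker-b.example"),
    ("gameapp", "tracker-a.example"),
    ("notesapp", None),  # no tracker traffic observed for this app
]

trackers_per_app = defaultdict(set)
for app, tracker in connections:
    if tracker is not None:
        trackers_per_app[app].add(tracker)

apps = {app for app, _ in connections}
share = sum(1 for a in apps if trackers_per_app[a]) / len(apps)
print(f"{share:.0%} of apps contacted at least one tracker")  # 67%
```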

    More than just a mobile problem

    Tracking users on their mobile devices is just part of a larger problem. More than half of the app-trackers we identified also track users through websites. Thanks to this technique, called “cross-device” tracking, these services can build a much more complete profile of your online persona.

    And individual tracking sites are not necessarily independent of others. Some of them are owned by the same corporate entity – and others could be swallowed up in future mergers. For example, Alphabet, Google’s parent company, owns several of the tracking domains that we studied, including Google Analytics, DoubleClick and AdMob, and through them collects data from more than 48 percent of the apps we studied.

    Users’ online identities are not protected by their home country’s laws. We found data being shipped across national borders, often ending up in countries with questionable privacy laws. More than 60 percent of connections to tracking sites are made to servers in the U.S., U.K., France, Singapore, China and South Korea – six countries that have deployed mass surveillance technologies. Government agencies in those places could potentially have access to these data, even if the users are in countries with stronger privacy laws such as Germany, Switzerland or Spain.

    Even more disturbingly, we have observed trackers in apps targeted to children. By testing 111 kids’ apps in our lab, we observed that 11 of them leaked a unique identifier, the MAC address of the Wi-Fi router they were connected to. This is a problem, because it is easy to search online for physical locations associated with particular MAC addresses. Collecting private information about children, including their location, accounts and other unique identifiers, potentially violates the Federal Trade Commission’s rules protecting children’s privacy.

    Just a small look

    Although our data include many of the most popular Android apps, it is a small sample of users and apps, and therefore likely a small set of all possible trackers. Our findings may be merely scratching the surface of what is likely to be a much larger problem that spans across regulatory jurisdictions, devices and platforms.

    It’s hard to know what users might do about this. Blocking sensitive information from leaving the phone may impair app performance or user experience: An app may refuse to function if it cannot load ads. Actually, blocking ads hurts app developers by denying them a source of revenue to support their work on apps, which are usually free to users.

    If people were more willing to pay developers for apps, that may help, though it’s not a complete solution. We found that while paid apps tend to contact fewer tracking sites, they still do track users and connect with third-party tracking services.

    Transparency, education and strong regulatory frameworks are the key. Users need to know what information about them is being collected, by whom, and what it’s being used for. Only then can we as a society decide what privacy protections are appropriate, and put them in place. Our findings, and those of many other researchers, can help turn the tables and track the trackers themselves.

    Source: Science Daily

  • Meet the most nimble-fingered robot ever built

    Grabbing the awkwardly shaped items that people pick up in their day-to-day lives is a slippery task for robots. Irregularly shaped items such as shoes, spray bottles, open boxes, even rubber duckies are easy for people to grab and pick up, but robots struggle with knowing where to apply a grip. In a significant step toward overcoming this problem, roboticists at UC Berkeley have built a robot that can pick up and move unfamiliar, real-world objects with a 99 percent success rate.

    Berkeley professor Ken Goldberg, postdoctoral researcher Jeff Mahler and the Laboratory for Automation Science and Engineering (AUTOLAB) created the robot, called DexNet 2.0. DexNet 2.0’s high grasping success rate means that this technology could soon be applied in industry, with the potential to revolutionize manufacturing and the supply chain.

    DexNet 2.0 gained its highly accurate dexterity through a process called deep learning. The researchers built a vast database of three-dimensional shapes — 6.7 million data points in total — that a neural network uses to learn grasps that will pick up and move objects with irregular shapes. The neural network was then connected to a 3D sensor and a robotic arm. When an object is placed in front of DexNet 2.0, it quickly studies the shape and selects a grasp that will successfully pick up and move the object 99 percent of the time. DexNet 2.0 is also three times faster than its previous version.
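    In outline, the selection step amounts to scoring candidate grasps with a learned quality function and executing the highest-scoring one. The sketch below uses a stand-in heuristic in place of DexNet’s trained neural network:

```python
import random

def grasp_quality(grasp):
    """Stand-in for a trained grasp-quality network: maps a candidate
    grasp (x, y, angle) to a predicted probability of success.
    Here, a toy heuristic favouring grasps near the object centre (0, 0)."""
    x, y, _angle = grasp
    return 1.0 / (1.0 + x * x + y * y)

def best_grasp(candidates):
    """Pick the candidate with the highest predicted success probability."""
    return max(candidates, key=grasp_quality)

random.seed(1)
candidates = [(random.uniform(-1, 1), random.uniform(-1, 1),
               random.uniform(0.0, 3.14)) for _ in range(100)]
print(best_grasp(candidates))
```

    The real system replaces the heuristic with a network trained on the 6.7-million-point dataset and draws candidates from a 3D sensor view of the object.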

    DexNet 2.0 was featured as the cover story of the latest issue of MIT Technology Review, which called DexNet 2.0 “the most nimble-fingered robot yet.” The complete paper will be published in July.

    DexNet 2.0.

    Source: Science Daily

  • Harnessing energy from glass walls

    A Korean research team has developed semi-transparent perovskite solar cells that could be great candidates for solar windows.

    Scientists are exploring ways to develop transparent or semi-transparent solar cells as a substitute for glass walls in modern buildings with the aim of harnessing solar energy. But this has proven challenging, because transparency in solar cells reduces their efficiency in absorbing the sunlight they need to generate electricity.

    Typical solar cells today are made of crystalline silicon, which is difficult to make translucent. By contrast, semi-transparent solar cells use, for example, organic or dye-sensitized materials. But compared to crystalline silicon-based cells, their power-conversion efficiencies are relatively low. Perovskites are hybrid organic-inorganic photovoltaic materials, which are cheap to produce and easy to manufacture. They have recently received much attention, as the efficiency of perovskite solar cells has rapidly increased to the level of silicon technologies in the past few years.

    Using perovskites, a Korean research team, led by Professor Seunghyup Yoo of the Korea Advanced Institute of Science and Technology and Professor Nam-Gyu Park of Sungkyunkwan University, has developed a semi-transparent solar cell that is highly efficient and functions very effectively as a thermal mirror.

    One key to achieving efficient semi-transparent solar cells is to develop a transparent electrode for the cell’s uppermost layer that is compatible with the photoactive material. The Korean team developed a ‘top transparent electrode’ (TTE) that works well with perovskite solar cells. The TTE is based on a multilayer stack consisting of a metal film sandwiched between a high refractive index layer and an interfacial buffer layer. This TTE, placed as a solar cell’s top-most layer, can be prepared without damaging the ingredients used in the development of perovskite solar cells. Unlike conventional transparent electrodes that only transmit visible light, the team’s TTE plays the dual role of allowing visible light to pass through while at the same time reflecting infrared rays.

    The semi-transparent solar cells made with the TTEs exhibited an average power conversion efficiency as high as 13.3%, reflecting 85.5% of incoming infrared light. Currently available crystalline silicon solar cells have up to 25% efficiency but are opaque.

    The team believes that if the semi-transparent perovskite solar cells are scaled up for practical applications, they can be used in solar windows for buildings and automobiles, which not only generate electrical energy but also allow smart heat management in indoor environments, thereby utilizing solar energy more efficiently and effectively.

    Prototype of a semi-transparent perovskite solar cell with thermal-mirror functionality.

    Source: Science Daily

  • Where rivers meet the sea: Harnessing energy generated when freshwater meets saltwater

    Penn State researchers have created a new hybrid technology that produces unprecedented amounts of electrical power where seawater and freshwater combine at the coast.

    “The goal of this technology is to generate electricity from where the rivers meet the ocean,” said Christopher Gorski, assistant professor in environmental engineering at Penn State. “It’s based on the difference in the salt concentrations between the two water sources.”

    That difference in salt concentration has the potential to generate enough energy to meet up to 40 percent of global electricity demands. Though methods currently exist to capture this energy, the two most successful methods, pressure retarded osmosis (PRO) and reverse electrodialysis (RED), have thus far fallen short.

    PRO, the most common system, selectively allows water to transport through a semi-permeable membrane, while rejecting salt. The osmotic pressure created from this process is then converted into energy by turning turbines.

    “PRO is so far the best technology in terms of how much energy you can get out,” Gorski said. “But the main problem with PRO is that the membranes that transport the water through foul, meaning that bacteria grows on them or particles get stuck on their surfaces, and they no longer transport water through them.”

    This occurs because the holes in the membranes are incredibly small, so they become blocked easily. In addition, PRO doesn’t have the ability to withstand the necessary pressures of super-salty waters.

    The second technology, RED, uses an electrochemical gradient to develop voltages across ion-exchange membranes.

    “Ion-exchange membranes only allow either positively charged ions to move through them or negatively charged ions,” Gorski explained. “So only the dissolved salt is going through, and not the water itself.”

    Here, the energy is created when chloride or sodium ions are kept from crossing ion-exchange membranes as a result of selective ion transport. Ion-exchange membranes don’t require water to flow through them, so they don’t foul as easily as the membranes used in PRO; however, the problem with RED is that it doesn’t have the ability to produce large amounts of power.

    A third technology, capacitive mixing (CapMix), is a relatively new method also being explored. CapMix is an electrode-based technology that captures energy from the voltage that develops when two identical electrodes are sequentially exposed to two different kinds of water with varying salt concentrations, such as freshwater and seawater. Like RED, the problem with CapMix is that it’s not able to yield enough power to be viable.

    Gorski, along with Bruce Logan, Evan Pugh Professor and the Stan and Flora Kappe Professor of Environmental Engineering, and Taeyoung Kim, post-doctoral scholar in environmental engineering, may have found a solution to these problems. The researchers have combined both the RED and CapMix technologies in an electrochemical flow cell.

    “By combining the two methods, they end up giving you a lot more energy,” Gorski said.

    The team constructed a custom-built flow cell in which two channels were separated by an anion-exchange membrane. A copper hexacyanoferrate electrode was then placed in each channel, and graphite foil was used as a current collector. The cell was then sealed using two end plates with bolts and nuts. Once built, one channel was fed with synthetic seawater, while the other channel was fed with synthetic freshwater. Periodically switching the water’s flow paths allowed the cell to recharge and further produce power. From there, they examined how the cutoff voltage used for switching flow paths, external resistance and salt concentrations influenced peak and average power production.
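The discharge-and-switch operation described above can be sketched as a control loop: the cell drives an external resistor, and when the cell voltage falls to the cutoff, the seawater and freshwater feed paths are swapped so the electrodes and membrane recharge with the opposite gradient. Everything below is a hypothetical toy model; the voltages, decay behavior, and resistance are invented for illustration.

```python
# Toy model of the flow cell's operating cycle: discharge through an
# external resistor until a cutoff voltage, then swap the two feeds.
def run_cycles(v_start=0.15, v_cutoff=0.05, r_ext=10.0, decay=0.9, n_cycles=3):
    """Log (feed configuration, voltage in V, power in mW) at each step."""
    feeds = ("seawater->A, freshwater->B", "freshwater->A, seawater->B")
    current_feed = 0
    log = []
    for _ in range(n_cycles):
        v = v_start
        while v > v_cutoff:
            power = v * v / r_ext        # instantaneous power into the load
            log.append((feeds[current_feed], round(v, 3), round(power * 1e3, 3)))
            v *= decay                   # stand-in for the real discharge curve
        current_feed = 1 - current_feed  # switching flow paths recharges the cell
    return log

for feed, v, p_mw in run_cycles()[:3]:
    print(feed, v, "V", p_mw, "mW")
```

The key design point the sketch captures is that, unlike a battery, the cell is "recharged" not by an external power supply but simply by reversing which channel sees the salty water.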

    “There are two things going on here that make it work,” said Gorski. “The first is you have the salt going to the electrodes. The second is you have the chloride transferring across the membrane. Since both of these processes generate a voltage, you end up developing a combined voltage at the electrodes and across the membrane.”

    To determine how the flow cell’s voltage gain depended on the type of membrane used and on the salinity difference, the team recorded open-circuit cell voltages while feeding the two solutions at 15 milliliters per minute. Through this method, they identified that stacking multiple cells influenced electricity production. At 12.6 watts per square meter, this technology delivers peak power densities far higher than previously reported for RED (2.9 watts per square meter) and on par with the maximum calculated values for PRO (9.2 watts per square meter), but without the fouling problems.

    “What we’ve shown is that we can bring that power density up to what people have reported for pressure retarded osmosis and to a value much higher than what has been reported if you use these two processes alone,” Gorski said.

    Though the results are promising, the researchers want to do more research on the stability of the electrodes over time and want to know how other elements in seawater — like magnesium and sulfate — might affect the performance of the cell.

    “Pursuing renewable energy sources is important,” Gorski said. “If we can do carbon neutral energy, we should.”

    Photograph of the concentration flow cell. Two plates clamp the cell together, which contains two narrow channels fed with either synthetic freshwater or seawater through the plastic lines.

    Source: Science Daily

  • Off-the-shelf, power-generating clothes are almost here

    Scientists introduce coating that turns fabrics into circuits

    A lightweight, comfortable jacket that can generate the power to light up a jogger at night may sound futuristic, but materials scientist Trisha Andrew at the University of Massachusetts Amherst could make one today. In a new paper this month, she and colleagues outline how they have invented a way to apply breathable, pliable, metal-free electrodes to fabric and off-the-shelf clothing so it feels good to the touch and also transports enough electricity to power small electronics.

    She says, “Our lab works on textile electronics. We aim to build up the materials science so you can give us any garment you want, any fabric, any weave type, and turn it into a conductor. Such conducting textiles can then be built up into sophisticated electronics. One such application is to harvest body motion energy and convert it into electricity in such a way that every time you move, it generates power.” Advanced fabrics that can monitor health data remotely are important to the military and increasingly valued by the health care industry, she notes.

    Generating small electric currents through relative movement of layers is called triboelectric charging, explains Andrew, who trained as a polymer chemist and electrical engineer. Materials can become electrically charged as they create friction by moving against a different material, like rubbing a comb on a sweater. “By sandwiching layers of different materials between two conducting electrodes, a few microwatts of power can be generated when we move,” she adds.
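The "few microwatts" figure is plausible from a simple capacitor model of the layered fabric: each motion cycle delivers roughly E = ½·C·V² of energy. The sketch below is a rough, illustrative estimate; the capacitance, voltage, and motion frequency are all assumed values, not measurements from the paper.

```python
# Order-of-magnitude estimate of triboelectric harvesting from fabric layers,
# modeled as a variable capacitor charged and discharged once per motion cycle.
C = 100e-12    # effective capacitance of the layered fabric, ~100 pF (assumed)
V = 50.0       # triboelectric voltage built up per cycle, volts (assumed)
f = 2.0        # motion frequency, e.g. arm swings per second (assumed)

power = 0.5 * C * V**2 * f     # average harvested power, watts
print(f"~{power * 1e6:.2f} microwatts")
```

Even with generous assumptions the result stays in the sub-microwatt to microwatt range, consistent with Andrew's description, which is why such harvesters target small, low-duty-cycle electronics rather than phones.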

    In the current early online edition of Advanced Functional Materials, she and postdoctoral researcher Lu Shuai Zhang in her lab describe the vapor deposition method they use to coat fabrics with a conducting polymer, poly(3,4-ethylenedioxythiophene), also known as PEDOT, to make plain-woven, conducting fabrics that are resistant to stretching and wear and remain stable after washing and ironing. The thickest coating they put down is about 500 nanometers, a small fraction of the diameter of a human hair, which retains a fabric’s hand feel.

    The authors report results of testing electrical conductivity, fabric stability, chemical and mechanical stability of PEDOT films and textile parameter effects on conductivity for 14 fabrics, including five cottons with different weaves, linen and silk from a craft store.

    “Our article describes the materials science needed to make these robust conductors,” Andrew says. “We show them to be stable to washing, rubbing, human sweat and a lot of wear and tear.” PEDOT coating did not change the feel of any fabric as determined by touch with bare hands before and after coating. Coating did not increase fabric weight by more than 2 percent. The work was supported by the Air Force Office of Scientific Research.

    Until recently, she and Zhang point out, textile scientists have tended not to use vapor deposition because of technical difficulties and high cost of scaling up from the laboratory. But over the last 10 years, industries such as carpet manufacturers and mechanical component makers have shown that the technology can be scaled up and remain cost-effective. The researchers say their invention also overcomes the obstacle of power-generating electronics mounted on plastic or cladded, veneer-like fibers that make garments heavier and/or less flexible than off-the-shelf clothing “no matter how thin or flexible these device arrays are.”

    “There is strong motivation to use something that is already familiar, such as cotton/silk thread, fabrics and clothes, and imperceptibly adapt it to a new technological application,” Andrew adds. “This is a huge leap for consumer products, if you don’t have to convince people to wear something different than what they are already wearing.”

    Test results were sometimes a surprise, Andrew notes. “You’d be amazed how much stress your clothes go through until you try to make a coating that will survive a shirt being pulled over the head. The stress can be huge, up to a thousand newtons of force. For comparison, one footstep is equal to about 10 newtons, so it’s yanking hard. If your coating is not stable, a single pull like that will flake it all off. That’s why we had to show that we could bend it, rub it and torture it. That is a very powerful requirement to move forward.”

    Andrew is director of wearable electronics at the Center for Personalized Health Monitoring in UMass Amherst’s Institute of Applied Life Sciences (IALS). Since the basic work reported this month was completed, her lab has also made a wearable heart rate monitor with an off-the-shelf fitness bra to which they added eight monitoring electrodes. They will soon test it with volunteers on a treadmill at the IALS human movement facility.

    She explains that a hospital heart rate monitor has 12 electrodes, while the wrist-worn fitness devices popular today have one, which makes them prone to false positives. They will be testing a bra with eight electrodes, alone and worn with leggings that add four more, against a control to see if sensors can match the accuracy and sensitivity of what a hospital can do. As the authors note in their paper, flexible, body-worn electronics represent a frontier of human interface devices that make advanced physiological and performance monitoring possible.

    For the future, Andrew says, “We’re working on taking any garment you give us and turning it into a solar cell so that as you are walking around, the sunlight that hits your clothes can be stored in a battery or be plugged in to power a small electronic device.”

    Zhang and Andrew believe their vapor coating sticks to fabrics by a process called surface grafting, in which free bonds dangling on the fabric’s surface chemically bond to one end of the polymer coating, but they have yet to investigate this fully.

    PEDOT-coated yarns that act as 'normal' wires transmit electricity from a wall outlet to an incandescent lightbulb. Materials scientist Trisha Andrew at UMass Amherst and colleagues outline in a new paper how they have invented a way to apply breathable, pliable, metal-free electrodes to fabric and off-the-shelf clothing so it feels good to the touch and also transports electricity to power small electronics. Harvesting body motion energy generates power.

    Source: Science Daily

  • Computer code that Volkswagen used to cheat emissions tests uncovered

    International team of researchers uncovered the system inside cars’ onboard computers

    An international team of researchers has uncovered the mechanism that allowed Volkswagen to circumvent U.S. and European emission tests over at least six years before the Environmental Protection Agency put the company on notice in 2015 for violating the Clean Air Act. During a year-long investigation, researchers found code that allowed a car’s onboard computer to determine that the vehicle was undergoing an emissions test. The computer then activated the car’s emission-curbing systems, reducing the amount of pollutants emitted. Once the computer determined that the test was over, these systems were deactivated.

    When the emissions curbing system wasn’t running, cars emitted up to 40 times the amount of nitrogen oxides allowed under EPA regulations.

    The team, led by Kirill Levchenko, a computer scientist at the University of California San Diego, will present their findings at the 38th IEEE Symposium on Security and Privacy in the San Francisco Bay Area on May 22 to 24, 2017.

    “We were able to find the smoking gun,” Levchenko said. “We found the system and how it was used.”

    Computer scientists obtained copies of the code running on Volkswagen onboard computers from the company’s own maintenance website and from forums run by car enthusiasts. The code was running on a wide range of models, including the Jetta, Golf and Passat, as well as Audi’s A and Q series.

    “We found evidence of the fraud right there in public view,” Levchenko said.

    During emissions standards tests, cars are placed on a chassis equipped with a dynamometer, which measures the power output of the engine. The vehicle follows a precisely defined speed profile that tries to mimic real driving on an urban route with frequent stops. The conditions of the test are both standardized and public, which makes it possible for manufacturers to intentionally alter the behavior of their vehicles during the test cycle. The code found in Volkswagen vehicles checks for a number of conditions associated with a driving test, such as distance, speed and even the position of the steering wheel. If those conditions are met, the code directs the onboard computer to activate the emissions-curbing mechanisms.
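The kind of check described above can be sketched in a few lines. This is a hypothetical illustration only: the real logic runs on the Engine Control Unit's firmware, and the function names and thresholds here are invented. The underlying idea is simply that a dynamometer test follows a known, fixed profile with the wheels pointed straight ahead.

```python
# Hypothetical sketch of a defeat-device condition check (names and
# thresholds invented; the real ECU code is far more complex).
def looks_like_emissions_test(distance_km, avg_speed_kmh, steering_angle_deg):
    """Crude telemetry heuristics: a short, moderate-speed trip with the
    steering wheel never moving matches a dynamometer test cycle."""
    on_known_profile = distance_km < 12 and 20 <= avg_speed_kmh <= 60
    wheel_never_turns = abs(steering_angle_deg) < 1.0
    return on_known_profile and wheel_never_turns

def emissions_curbing_enabled(telemetry):
    return looks_like_emissions_test(**telemetry)

# On the dynamometer: curbing on. On the open road: curbing off.
print(emissions_curbing_enabled({"distance_km": 8, "avg_speed_kmh": 35,
                                 "steering_angle_deg": 0.2}))   # True
print(emissions_curbing_enabled({"distance_km": 40, "avg_speed_kmh": 95,
                                 "steering_angle_deg": 15.0}))  # False
```

The sketch also shows why such a device is hard to catch with dynamometer testing alone: by construction, the vehicle behaves perfectly whenever the test conditions hold.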

    A year-long investigation

    It all started when computer scientists at Ruhr University, working with independent researcher Felix Domke, teamed up with Levchenko and the research group of computer science professor Stefan Savage at the Jacobs School of Engineering at UC San Diego.

    Savage, Levchenko and their team have extensive experience analyzing embedded systems, such as cars’ onboard computers, known as Engine Control Units, for vulnerabilities. The team examined 900 versions of the code and found that 400 of those included information to circumvent emissions tests.

    A specific piece of code was labeled as the “acoustic condition” — ostensibly, a way to control the sound the engine makes. But in reality, the label became a euphemism for conditions occurring during an emissions test. The code allowed for as many as 10 different profiles for potential tests. When the computer determined the car was undergoing a test, it activated emissions-curbing systems, which reduced the amount of nitrogen oxide emitted.

    “The Volkswagen defeat device is arguably the most complex in automotive history,” Levchenko said.

    Researchers found a less sophisticated circumventing ploy for the Fiat 500X. That car’s onboard computer simply allows its emissions-curbing system to run for the first 26 minutes and 40 seconds after the engine starts — roughly the duration of many emissions tests.
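The Fiat ploy amounts to nothing more than a timer: 26 minutes and 40 seconds is exactly 1600 seconds, roughly the length of a standard test cycle. A minimal sketch of that logic (function name invented for illustration):

```python
# The Fiat 500X circumvention reduced to its essence: a countdown from
# engine start. 26 min 40 s = 1600 s, about the length of many test cycles.
TIMER_LIMIT_S = 26 * 60 + 40   # = 1600 seconds

def emissions_curbing_active(seconds_since_start):
    """Curbing runs only for the first 1600 seconds after engine start."""
    return seconds_since_start < TIMER_LIMIT_S

print(emissions_curbing_active(900))    # mid-test: True
print(emissions_curbing_active(1700))   # after the timer expires: False
```

Compared with Volkswagen's multi-condition profiles, this is trivially simple, which is why the researchers describe it as a less sophisticated ploy.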

    Researchers note that for both Volkswagen and Fiat, the vehicles’ Engine Control Unit is manufactured by automotive component giant Robert Bosch. Car manufacturers then enable the code by entering specific parameters.

    Diesel engines pose special challenges for automobile manufacturers because their combustion process produces more particulates and nitrogen oxides than gasoline engines. To curb emissions from these engines, the vehicle’s onboard computer must sometimes sacrifice performance or efficiency for compliance.

    The study draws attention to the regulatory challenges of verifying software-controlled systems that may try to hide their behavior and calls for a new breed of techniques that work in an adversarial setting.

    “Dynamometer testing is just not enough anymore,” Levchenko said.

    The article is entitled: “How They Did It: An Analysis of Emission Defeat Devices in Modern Automobiles”

    The authors are: Guo Li, Kirill Levchenko and Stefan Savage from UC San Diego; Moritz Contag, Andre Pawlowski and Thorsten Holz from Ruhr University; and independent researcher Felix Domke.

    This work was supported by the European Research Council and by the U.S. National Science Foundation (NSF).

    Diagnostic sensor applied to the exhaust of a car.

    Source: Science Daily