Tag Archives: Science

First Marijuana-Based Medicine Is Approved for Sale in U.S.

The first-ever medical treatment derived from a marijuana plant will hit the U.S. market in a few months after regulators on Monday gave the epilepsy treatment the green light.

The Food and Drug Administration approved GW Pharmaceuticals Plc’s Epidiolex to treat two rare forms of childhood epilepsy, according to a statement from the agency. The liquid is made from a compound in the marijuana plant called cannabidiol, a different chemical from tetrahydrocannabinol, or THC, which gets users high.

GW Pharmaceuticals’ Epidiolex medication. (Photographer: Kathy Young/AP)

Epilepsy patients and doctors have long had interest in marijuana’s therapeutic potential. The approval marks the first time patients will have access in the U.S. to a cannabis-derived drug that has undergone a safety and efficacy review by the FDA.

“The same principles around any prescription medication can now be applied to cannabis-based medications,” GW Pharma Chief Executive Officer Justin Gover said in an interview before the FDA’s decision. “That underlies the whole value of this. We now remove ourselves from being a special case and now meet the standard criteria for prescription medications.”

FDA Commissioner Scott Gottlieb issued a separate statement stressing the importance of proper research on medical uses of marijuana and cautioning other companies that might try to push their pot treatments.

“This is an important medical advance,” Gottlieb said of Epidiolex. “But it’s also important to note that this is not an approval of marijuana or all of its components.”

GW Pharma’s American depositary receipts fell less than 1 percent to $149.85 at 1:03 p.m. in New York. They had gained 15 percent this year through Friday’s close.

GW Pharma has to wait to sell Epidiolex until the Drug Enforcement Administration decides what restrictions to place on the drug to ensure that it reaches only the patients for whom it is intended. The DEA, which classifies marijuana as an illegal drug, is required to make that determination in 90 days, Gover said. FDA staff said at an April meeting on the drug with outside advisers that cannabidiol, known as CBD, “does not appear to have abuse potential.”

Severe Forms

Epidiolex is approved to treat Lennox-Gastaut and Dravet syndromes in patients age 2 or older. Both are considered severe forms of epilepsy that begin in childhood. They’re resistant to many existing treatments, and as many as 20 percent of children with Dravet syndrome die before reaching adulthood, according to the National Institutes of Health.

GW Pharma will make Epidiolex in the U.K., where the company is based, Gover said, and export the finished product to the U.S. As of last week, the company hadn’t determined the price but was in preliminary talks with insurance companies to make them aware Epidiolex is coming, he said.

While Epidiolex is the first approved medicine that comes from a pot plant, the FDA has allowed the use of a few drugs made from synthetic cannabinoids, including Insys Therapeutics Inc.’s Syndros for loss of appetite in people with AIDS and nausea caused by chemotherapy. Insys is developing a cannabidiol oral solution for a severe type of epileptic seizure known as infantile spasms, as well as for a childhood epilepsy marked by staring spells during which the child isn’t aware or responsive.


    Read more: https://www.bloomberg.com/news/articles/2018-06-25/first-marijuana-based-medicine-wins-approval-for-sale-in-u-s

    It’s Time for the Next Wave of Ocean Exploration and Protection

    This week marks the 10th Annual World Oceans Day, a global confluence of ocean-awareness events intended to bring our oceans the level of public attention they deserve. As we both have had the opportunity to explore a fair amount of our globe’s seas, on this occasion we’d like to share our excitement and our vision for the future.

    WIRED OPINION

    Ray Dalio (@raydalio) is the founder of Bridgewater Associates and the OceanX initiative. Marc Benioff (@benioff) is CEO and chair of Salesforce, as well as founder of the Benioff Ocean Initiative at the University of California, Santa Barbara.

    To us, the ocean is humanity’s most important and most under-examined treasure. While the world below the ocean’s surface is more than twice the size of the world above it and contains an estimated 94 percent of the space where life can exist on Earth, only 5 percent of the world’s oceans have been fully explored.

    The ocean is critical to human life—more than 50 percent of the oxygen we breathe comes from it. It drives our weather, provides a nutritious food supply, and is a key source of commerce, supporting more than 28 million jobs in the United States alone. For those reasons, it deserves our reverence and protection. Instead, humans neglect it and treat it like a toilet that we overfish from.

    We believe ocean exploration is more exciting and more important than space exploration. Yet it receives only about one one-hundredth as much funding. We want to change that by showing people the ecosystems and underwater habitats across the globe that are brimming with species that have evolved in ways we cannot possibly imagine. By discovering and understanding these ecosystems, we can unlock cures to diseases, grow new foods, discover new medicines, create new industries, and fully understand our planet. The possibilities are endless, if we choose to open the door to them.

    Thanks to profound technological advancements in recent years, we now have the potential to open these doors like never before. Using new sensors, submarine technology, and autonomous vehicles, humans have the opportunity to advance the public's understanding and appreciation of the ocean, much as the development of scuba equipment and underwater cameras allowed ocean explorer Jacques Cousteau to captivate the world 50 years ago.

    Using many of these technologies, the BBC’s recent Blue Planet II inspired a new generation of explorers to turn toward the ocean and prodded public policymakers to introduce new measures to prevent plastics pollution. When it sets sail next year, the Alucia2, a new vessel funded by OceanX, will be the most advanced vessel designed for both media production and cutting-edge scientific research, bringing the excitement of ocean discovery to the world as broadcast and digital programs and in real time.

    Exploring our oceans is key to protecting them. As renowned oceanographer Sylvia Earle has said, “Far and away, the biggest threat to the ocean is ignorance.” Exploration is the key to ending that ignorance and making the oceans accessible, tangible, and exciting to the broader world so that people will understand and protect them.

    Creating such understanding will ensure we don’t lose the richness of the biological assets in our oceans before they are even discovered. Protected areas, such as Papahānaumokuākea and the Pacific Remote Islands Marine National Monuments, serve as savings accounts that protect strategic areas of our ocean for future exploration and discovery while simultaneously producing more fish, food, and income for fisheries.

    Even as we begin to explore the most remote reaches of our oceans, we are finding them sullied—with trash detected at the bottom of the Marianas Trench, the deepest part of our ocean, and floating in the most remote regions of Antarctica. This signals we must act now to ensure we discover more than plastic bottles during this next generation of ocean exploration.

    There are many great government and non-profit organizations, philanthropists, scientists, and entrepreneurs doing critical and unheralded work to protect our oceans—but they need more support. Not just in the form of funding, but in the form of public energy and momentum.

    One positive sign is that world leaders at the G7 Summit in Canada today are prioritizing ocean health, calling for aggressive measures to combat plastic pollution and climate change. We need this interest to translate into a firm G7 stance on oceans. And we need this ripple of leadership to turn into a tidal wave of public, private, and community support for securing the healthy future for our oceans upon which we all depend.

    We must embrace ocean exploration in the same way President Kennedy inspired the nation when he called for man to land on the moon. We’ve spent the nearly six decades since that moonshot pledge looking up at the stars, while the oceans and all the wonders and creations they hold are sitting right at our feet, waiting to be discovered.

    Our goal is to revive the Jacques Cousteau moment, creating one big wave of excitement and interest among the public in what lies beneath the waterline—because we know that if humans explore our oceans, we will love them, and if we love them, we will protect them.

    WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints.

    Read more: https://www.wired.com/story/forget-space-oceans-need-exploring/

    The Creepy Genetics Behind the Golden State Killer Case

    For the dozen years between 1974 and 1986, he rained down terror across the state of California. He went by many names: the East Side Rapist, the Visalia Ransacker, the Original Night Stalker, the Golden State Killer. And on Wednesday, law enforcement officials announced they think they finally have his real name: Joseph James DeAngelo. Police arrested the 72-year-old Tuesday; he’s accused of committing more than 50 rapes and 12 murders.

    In the end, it wasn’t stakeouts or fingerprints or cell phone records that got him. It was a genealogy website.


    Lead investigator Paul Holes, a retired Contra Costa County District Attorney inspector, told the Mercury News late Thursday night that his team used GEDmatch, a no-frills Florida-based website that pools raw genetic profiles shared publicly by their owners, to find the man believed to be one of California’s most notorious criminals. A spokeswoman for the Sacramento County District Attorney’s Office reached Friday morning would not comment or confirm the report.

    GEDmatch—a reference to the data file format GEDCOM, developed by the Mormon church to share genealogical information—caters to curious folks searching for missing relatives or filling in family trees. The mostly volunteer-run platform exists “to provide DNA and genealogy tools for comparison and research services,” the site’s policy page states. Most of its tools for tracking down matches are free; users just have to register and upload copies of their raw DNA files exported from genetic testing services like 23andMe and Ancestry. These two companies don’t allow law enforcement to access their customer databases unless they get a court order. Neither 23andMe nor Ancestry was approached by investigators in this case, according to spokespeople for the companies.
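
    For readers curious what one of those raw DNA exports actually contains, here is a minimal, illustrative Python sketch of loading one. It assumes the common 23andMe-style layout (tab-separated rsid, chromosome, position, and genotype, with '#' comment lines); real exports vary by vendor, and the function name is ours, not part of any GEDmatch tooling.

```python
# Illustrative only: parse a consumer raw-DNA export of the kind users upload
# to GEDmatch. Assumes the common 23andMe-style tab-separated layout
# (rsid, chromosome, position, genotype) with '#' comment lines.
import csv

def load_raw_dna(path):
    """Return a dict mapping SNP id (rsid) -> genotype string, e.g. 'rs4477212' -> 'AA'."""
    genotypes = {}
    with open(path) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            if not row or row[0].startswith("#"):
                continue  # skip header and comment lines
            rsid, chromosome, position, genotype = row[:4]
            if genotype != "--":  # '--' marks a no-call in this format
                genotypes[rsid] = genotype
    return genotypes
```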

    But no court order would be needed to mine GEDmatch’s open-source database of more than 650,000 genetically connected profiles. Using sequence data somehow wrung from old crime scene samples, police could create a genetic profile for their suspect and upload it to the free site. As the Sacramento Bee first reported, that gave them a pool of relatives who all shared some of that incriminating genetic material. Then they could use other clues—like age and sex and place of residence—to rule out suspects. Eventually the search narrowed down to just DeAngelo. To confirm their suspicions, police staked out his Citrus Heights home and obtained his DNA from something he discarded, then ran it against multiple crime scene samples. They were a match.

    “It’s fitting that today is National DNA Day,” said Anne Marie Schubert, the Sacramento district attorney, at a press conference announcing the arrest Wednesday afternoon. A champion of genetic forensics, Schubert convened a task force two years ago to re-energize the cold case with DNA technology. “We found the needle in the haystack, and it was right here in Sacramento.”

    After four decades of failure, no one could blame law enforcement officials for celebrating. But how they came to suspect DeAngelo, and eventually put him in cuffs, raises troubling questions about what constitutes due process and civil liberty amid the explosive proliferation of commercial DNA testing.


    DNA evidence has been a cornerstone of forensic science for decades, and rightly so. It’s way more accurate than hair or bite-mark analysis. But the routine DNA tests used by crime labs aren’t anything like what you get if you send your spit to a commercial testing company. Cops look at a panel of 20 regions of short, repeating DNA in the genome that don’t code for proteins. Because those repeating sections vary so much from individual to individual, they’re good for matching two samples—but only if the suspect is already in the criminal databases maintained by US law enforcement. Investigators in the Golden State Killer case had long had DNA, but there was no one in their files to match it against. And so the case went cold.
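
    To make the crime-lab side concrete, here is a toy Python sketch of the kind of STR-profile comparison described above. The locus names are real CODIS core loci, but the allele values are invented, and real forensic matching also weighs partial matches and random-match probabilities; treat this as an illustration of the idea, not lab software.

```python
# Toy sketch of forensic STR ("short tandem repeat") matching: each profile
# records how many times a repeat unit occurs at a panel of non-coding loci,
# and two samples match when the repeat counts agree at every locus.
# Locus names are real CODIS loci; allele values below are made up.
CODIS_STYLE_LOCI = ["D3S1358", "vWA", "FGA", "D8S1179", "D21S11"]  # full panel has 20

def str_match(profile_a, profile_b):
    """Return True if the two profiles share both alleles at every compared locus."""
    for locus in CODIS_STYLE_LOCI:
        if sorted(profile_a[locus]) != sorted(profile_b[locus]):
            return False
    return True

crime_scene = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24),
               "D8S1179": (12, 13), "D21S11": (29, 30)}
suspect     = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24),
               "D8S1179": (12, 13), "D21S11": (29, 30)}
print(str_match(crime_scene, suspect))  # True only for an exact panel match
```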

    Companies like 23andMe and Ancestry, on the other hand, probe the coding regions of DNA, to see what mysteries someone’s genes might be hiding—a heightened risk for cancer, or perhaps a long lost cousin. While those areas may be less prone to variation between individual samples, the number of customers who have taken these tests—more than 10 million between the two services—means that detectives can triangulate an individual. Maybe even a mass murderer. Thanks to the (biological) laws of inheritance, suspected criminals don’t have to have been tested themselves for bits of their DNA to be caught up in the dragnet of a criminal fishing investigation.
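
    The relative-matching side works differently: services report how much autosomal DNA two people share, usually in centimorgans (cM), and genealogists map that amount to a likely relationship. The sketch below is not GEDmatch's algorithm; the thresholds are rough, commonly cited averages, and in a case like this the inference would only produce a pool of candidate relatives to be narrowed with clues such as age, sex, and residence.

```python
# Rough sketch of the genealogical reasoning described above: the more DNA
# (measured in centimorgans, cM) two people share, the closer the likely
# relationship. Thresholds are approximate averages commonly cited by genetic
# genealogists, used here purely for illustration.
EXPECTED_SHARED_CM = [
    (3300, "parent/child"),
    (2300, "full sibling"),
    (1300, "half sibling, grandparent, or aunt/uncle"),
    (600,  "first cousin"),
    (150,  "second cousin"),
    (40,   "third cousin"),
]

def likely_relationship(shared_cm):
    """Map total shared cM to the closest plausible relationship class."""
    for threshold, label in EXPECTED_SHARED_CM:
        if shared_cm >= threshold:
            return label
    return "distant relative or no detectable relationship"

print(likely_relationship(870))  # -> "first cousin" (roughly)
```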

    So far, these leaders in the consumer DNA testing space have denied ever turning over any customer genetic data to the police. Not that they haven’t been asked for it. According to 23andMe’s self-reported data, law enforcement has requested information on a total of five American 23andMe customers. Ancestry’s published transparency reports state that it has provided some customer information—but it was in response to requests related to credit card fraud and identity theft, and none of it was genetic in nature.

    Representatives from both companies said that police can’t simply upload a DNA profile they have from old crime scenes and sign up for the company’s services, allowing them to find genetic relatives and compare detailed chromosome segment data. Not because impersonating someone necessarily constitutes a violation of their terms and conditions—people occasionally use fake names and email accounts to maintain privacy—but because they don’t accept digital files. The database entrance fee is a mandatory three milliliters of saliva.

    Cops have found ways around this before. In 2014, a New Orleans filmmaker named Michael Usry was arrested for the 1996 murder of an 18-year-old girl in Idaho Falls, after investigators turned up a “partial match” between semen found on the victim’s body and DNA from Usry’s father, a familial connection they found by sifting through DNA samples donated by Mormon churchgoers, including Usry’s father, for a genealogy project. Ancestry later purchased the database and made the genetic profiles (though not the names associated with them) publicly searchable. A search warrant compelled Ancestry to turn over the identity of the partial match.

    After 33 days in police custody, a DNA test cleared Michael Usry, and Ancestry has since shuttered the database. But the case highlighted two big potential problems with this kind of familial searching. On one hand are questions of efficacy: nongovernmental databases, whether public or private, haven’t been vetted for use by law enforcement, even as they’re increasingly being used as crime-fighting tools. More worrying, though, are the privacy concerns: most people who get their DNA tested for the fun of it don’t expect their genetic code might one day be scrutinized by cops. And people who’ve never been tested certainly don’t expect their genes to turn them into suspects.

    Those questions get even thornier as more and more people have their DNA tested and then liberate that information from the walled-off databases of private companies. GEDmatch’s policies don’t explicitly ask its users to contemplate the risks the wider network might incur on account of any one individual’s choices. “While the results presented on this site are intended solely for genealogical research, we are unable to guarantee that users will not find other uses,” it states. “If you find the possibility unacceptable, please remove your data from this site.” GEDmatch did not immediately respond to a request for comment.

    Legal experts say investigators wouldn’t break any laws in accessing a publicly available database like GEDmatch, which exists expressly to map that connectivity. “The tension though is that any sample that gets uploaded also is providing information that could lead to relatives that either haven’t consented to have their information made public, or even know it’s been done,” says Jennifer Mnookin, dean of the UCLA School of Law and a founder of its program on understanding forensic science evidence. “That’s not necessarily wrong, but it leads to a web of information that implicates a population well beyond those who made a decision themselves to be included.”

    That’s the same argument that critics have made against more traditional kinds of forensic familial searches—where a partial DNA match reveals any of a suspect’s relatives already in a criminal database. But those searches are at least regulated, to varying extents, by federal and state laws. In California, investigators have to get approval from a state Department of Justice committee to run a familial DNA search through a criminal database, which limits use of the technique to particularly heinous crimes. A similar search on a site like GEDmatch requires no such oversight.

    In the case of the Golden State Killer, the distinction doesn’t seem that important. But what if police started using these tools for much lesser crimes? “If these techniques became widely used there’s a risk a lot of innocent people would be caught in a web of genetic suspicion and subject to heightened scrutiny,” says Mnookin. While she’s impressed with the ingenuity of the investigators in this case to track down their suspect, she can’t help but see it as a step toward a genetic surveillance state. “That’s what’s hard about this,” she says. “We don’t have a blood taint in this country. Guilt shouldn’t travel by familial association, whether your brother is a felon or an amateur genealogist.”


    Read more: https://www.wired.com/story/detectives-cracked-the-golden-state-killer-case-using-genetics/

    Nuclear fusion on brink of being realised, say MIT scientists

    Carbon-free fusion power could be on the grid in 15 years

    The dream of nuclear fusion is on the brink of being realised, according to a major new US initiative that says it will put fusion power on the grid within 15 years.

    The project, a collaboration between scientists at MIT and a private company, will take a radically different approach to other efforts to transform fusion from an expensive science experiment into a viable commercial energy source. The team intend to use a new class of high-temperature superconductors they predict will allow them to create the world’s first fusion reactor that produces more energy than is needed to get the fusion reaction going.

    Bob Mumgaard, CEO of the private company Commonwealth Fusion Systems, which has attracted $50 million in support of this effort from the Italian energy company Eni, said: “The aspiration is to have a working power plant in time to combat climate change. We think we have the science, speed and scale to put carbon-free fusion power on the grid in 15 years.”

    Quick guide: What is fusion?

    Fusion is the fundamental energy source of the universe, powering our sun and the distant stars. The process involves light elements, such as hydrogen, smashing together to form heavier elements, like helium, releasing prodigious amounts of energy in the process.

    The promise of harnessing fusion energy is limitless, safe, zero-carbon energy. The problem is that the process only produces net energy at very high temperatures of hundreds of millions of degrees – too hot for any solid material to withstand. To get around that, fusion researchers use magnetic fields to hold in place the hot plasma, a gaseous soup of subatomic particles that fuels the process, to stop it melting through the metal reactor.

    The ultimate goal of fusion research, yet to be achieved, is creating a fusion reactor that produces more energy than it took to ignite and contain the process.

    The promise of fusion is huge: it represents a zero-carbon, combustion-free source of energy. The problem is that until now every fusion experiment has operated on an energy deficit, making it useless as a form of electricity generation. Decades of disappointment in the field have led to the joke that fusion is the energy of the future – and always will be.

    The just-over-the-horizon timeframe normally cited is 30 years, but the MIT team believe they can halve this by using new superconducting materials to produce ultra-powerful magnets, one of the main components of a fusion reactor.

    Prof Howard Wilson, a plasma physicist at York University who works on different fusion projects, said: “The exciting part of this is the high-field magnets.”

    Fusion works on the basic concept of forging lighter elements together to form heavier ones. When hydrogen atoms are squeezed hard enough, they fuse together to make helium, liberating vast amounts of energy in the process.
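
    As a back-of-the-envelope illustration (not drawn from the article), the size of that energy release can be computed directly from published atomic masses: in the deuterium-tritium reaction that most tokamak designs target, the products weigh slightly less than the reactants, and the missing mass appears as roughly 17.6 MeV of energy per reaction via E = mc². The short Python sketch below uses standard mass values and is ours, not the researchers'.

```python
# Back-of-the-envelope: energy released by one deuterium-tritium fusion event,
# from the mass deficit between reactants and products (standard atomic masses
# in unified atomic mass units, u).
M_DEUTERIUM = 2.014102   # u
M_TRITIUM   = 3.016049   # u
M_HELIUM4   = 4.002602   # u
M_NEUTRON   = 1.008665   # u
MEV_PER_U   = 931.494    # energy equivalent of 1 u

mass_deficit = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
energy_mev = mass_deficit * MEV_PER_U
print(f"Energy released per D-T fusion: {energy_mev:.1f} MeV")  # ~17.6 MeV
```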

    However, this process produces net energy only at extreme temperatures of hundreds of millions of degrees celsius – hotter than the centre of the sun and far too hot for any solid material to withstand.

    To get around this, scientists use powerful magnetic fields to hold in place the hot plasma – a gaseous soup of subatomic particles – to stop it from coming into contact with any part of the doughnut-shaped chamber.

    A newly available superconducting material – a steel tape coated with a compound called yttrium-barium-copper oxide, or YBCO – has allowed scientists to produce smaller, more powerful magnets. And this potentially reduces the amount of energy that needs to be put in to get the fusion reaction off the ground.

    “The higher the magnetic field, the more compactly you can squeeze that fuel,” said Wilson.

    The planned fusion experiment, called Sparc, is set to be far smaller – about 1/65th of the volume – than the International Thermonuclear Experimental Reactor project, an international collaboration currently being constructed in France.

    The experimental reactor is designed to produce about 100MW of heat. While it will not turn that heat into electricity, it will produce, in pulses of about 10 seconds, as much power as is used by a small city. The scientists anticipate the output would be more than twice the power used to heat the plasma, achieving the ultimate technical milestone: positive net energy from fusion.
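
    Putting the article's own figures together (and assuming "more than twice the power used to heat the plasma" implies a heating input of at most about 50 MW, which is our reading rather than a stated number), the quick arithmetic looks like this:

```python
# Quick arithmetic behind the Sparc figures quoted above. The 50 MW heating
# power is an assumed upper bound implied by "more than twice the power used
# to heat the plasma"; the 100 MW output and 10-second pulses are from the text.
fusion_heat_mw = 100
heating_power_mw = 50
pulse_seconds = 10

q_gain = fusion_heat_mw / heating_power_mw
energy_per_pulse_gj = fusion_heat_mw * 1e6 * pulse_seconds / 1e9

print(f"Fusion gain Q >= {q_gain:.0f}")                     # net-positive means Q > 1
print(f"Energy per pulse ~ {energy_per_pulse_gj:.0f} GJ")   # ~1 GJ per 10-second pulse
```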

    Prof Wilson was also cautious about the timeframe, saying that while the project was exciting he couldn’t see how it would achieve its goal of putting energy on the grid within 15 years.

    Unlike with fossil fuels, or nuclear fuel like uranium used in fission reactions, there will never be a shortage of hydrogen.

    The reaction also does not create greenhouse gases or produce hazardous radioactive waste of the sort made by conventional nuclear fission reactors.

    Prof Maria Zuber, MIT’s vice-president for research, said that the development could represent a major advance in tackling climate change. “At the heart of today’s news is a big idea – a credible, viable plan to achieve net positive energy for fusion,” she said.

    “If we succeed, the world’s energy systems will be transformed. We’re extremely excited about this.”

    Read more: https://www.theguardian.com/environment/2018/mar/09/nuclear-fusion-on-brink-of-being-realised-say-mit-scientists

    Stephen Hawking, a Physicist Transcending Space and Time, Passes Away at 76

    For arguably the most famous physicist on Earth, Stephen Hawking—who died Wednesday in Cambridge at 76 years old—was wrong a lot. He thought, for a while, that black holes destroyed information, which physics says is a no-no. He thought Cygnus X-1, an emitter of X-rays over 6,000 light years away, wouldn’t turn out to be a black hole. (It did.) He thought no one would ever find the Higgs boson, the particle indirectly responsible for the existence of mass in the universe. (Researchers at CERN found it in 2012.)

    But Hawking was right a lot, too. He and the physicist Roger Penrose described singularities, mind-bending physical concepts where relativity and quantum mechanics collapse inward on each other—as at the heart of a black hole. It’s the sort of place that no human will ever see first-hand; the event horizon of a black hole smears matter across time and space like cosmic paste. But Hawking’s mind was singular enough to see it, or at least imagine it.

    His calculations helped show that as the young universe expanded and grew through inflation, fluctuations at the quantum scale—the smallest possible gradation of matter—became the galaxies we see around us. No human will ever visit another galaxy, and the quantum realm barely waves at us in our technology, but Hawking envisioned them both. And he calculated that black holes could sometimes explode, an image that would vex even the best visual effects wizard.

    More than that, he could explain it to the rest of us. Hawking was the Lucasian Chair of Mathematics at Cambridge until his retirement in 2009, the same position held by Isaac Newton, Charles Babbage, and Paul Dirac. But he was also a pre-eminent popularizer of some of the most brain-twisting concepts science has to offer. His 1988 book A Brief History of Time has sold more than 10 million copies. His image—in an electric wheelchair and speaking via a synthesizer because of complications of the degenerative disease amyotrophic lateral sclerosis, delivering nerdy zingers on TV shows like The Big Bang Theory and Star Trek: The Next Generation—defined “scientist” for the latter half of the 20th century perhaps as much as Albert Einstein’s mad hair and German accent did in the first half.

    Possibly that’s because in addition to being brilliant, Hawking was funny. Or at least sly. He was a difficult student by his own account. Diagnosed with ALS in 1963 at the age of 21, he thought he’d have only two more years to live. When the disease didn’t progress that fast, Hawking is reported to have said, “I found, to my surprise, that I was enjoying life in the present more than before. I began to make progress with my research.” With his mobility limited by the use of a wheelchair, he sped in it, dangerously. He proved time travel didn't exist by throwing a party for time travelers, but not sending out invitations until the party was over. No one came. People learned about the things he got wrong because he’d bet other scientists—his skepticism that Cygnus X-1 was a black hole meant he owed Kip Thorne of Caltech a subscription to Penthouse. (In fact, as the terms of that bet hint, rumors of mistreatment of women dogged him.)

    Hawking became as much a cultural icon as a scientific one. For a time police suspected his second wife and one-time nurse of abusing him; the events became the basis of an episode of Law and Order: Criminal Intent. He played himself on The Simpsons and was depicted on Family Guy and South Park. Eddie Redmayne played Hawking in a biopic.

    In recent years he looked away from the depths of the universe and into humanity’s future, joining the technologist Elon Musk in warning against the dangers of intelligent computers. “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization,” Hawking reportedly said at a talk last year. “It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.” In an interview with WIRED UK, he said: “Someone will design AI that replicates itself. This will be a new form of life that will outperform humans.”

    In 2016 he said that he thought humanity only had about 1,000 years left, thanks to AI, climate change, and other (avoidable) disasters. Last year he slashed that horizon by an order of magnitude—100 years left, he warned, unless we changed our ways.

    Hawking was taking an unusual step away from cosmology, and it was easy, perhaps, to dismiss that fear—why would someone who’d helped define what a singularity actually was warn people against the pseudo-singularity of Silicon Valley? Maybe Hawking will be as wrong on this one as he was about conservation of information in black holes. But Hawking always did see into realms no one else could—until he described them to the rest of us.


    Read more: https://www.wired.com/story/stephen-hawking-a-physicist-transcending-space-and-time-passes-away-at-76/

    ‘Remember to look up at the stars’: the best Stephen Hawking quotes

    The British physicist and author had a way with words. Here is a collection of some of his greatest quotations.

    Stephen Hawking, who has died aged 76, combined a soaring intellect and a mischievous sense of humour that made him an icon of both academia and popular culture.

    Here is a collection of some of his greatest quotes:

    • For millions of years, mankind lived just like the animals. Then something happened which unleashed the power of our imagination. We learned to talk and we learned to listen. Speech has allowed the communication of ideas, enabling human beings to work together to build the impossible. Mankind’s greatest achievements have come about by talking, and its greatest failures by not talking. It doesn’t have to be like this. Our greatest hopes could become reality in the future. With the technology at our disposal, the possibilities are unbounded. All we need to do is make sure we keep talking.
    • My goal is simple. It is a complete understanding of the universe, why it is as it is and why it exists at all.
    • I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken-down computers; that is a fairy story for people afraid of the dark.
    • I believe the simplest explanation is, there is no God. No one created the universe and no one directs our fate. This leads me to a profound realisation that there probably is no heaven and no afterlife either. We have this one life to appreciate the grand design of the universe and for that, I am extremely grateful.
    • Remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious. And however difficult life may seem, there is always something you can do and succeed at. It matters that you don’t just give up.
    • Life would be tragic if it weren’t funny.
    • My expectations were reduced to zero when I was 21. Everything since then has been a bonus.
    • People who boast about their IQ are losers.
    • I have lived with the prospect of an early death for the last 49 years. I’m not afraid of death, but I’m in no hurry to die. I have so much I want to do first.
    • We are just an advanced breed of monkeys on a minor planet of a very average star. But we can understand the Universe. That makes us something very special.

    Read more: https://www.theguardian.com/science/2018/mar/14/best-stephen-hawking-quotes-quotations

    Scientists Discover Clean Water Ice Just Below Mars’ Surface

    Locked away beneath the surface of Mars are vast quantities of water ice. But the properties of that ice—how pure it is, how deep it goes, what shape it takes—remain a mystery to planetary geologists. Those things matter to mission planners, too: Future visitors to Mars, be they short-term sojourners or long-term settlers, will need to understand the planet's subsurface ice reserves if they want to mine it for drinking, growing crops, or converting into hydrogen for fuel.

    Trouble is, dirt, rocks, and other surface-level contaminants make it hard to study the stuff. Mars landers can dig or drill into the first few centimeters of the planet's surface, and radar can give researchers a sense of what lies tens of meters below the surface. But the ice content of the geology in between—the first 20 meters or so—is largely uncharacterized.

    Fortunately, land erodes. Forget radar and drilling robots: Locate a spot of land laid bare by time, and you have a direct line of sight on Mars' subterranean layers—and any ice deposited there.

    Now, scientists have discovered such a site. In fact, with the help of HiRISE, a powerful camera aboard NASA's Mars Reconnaissance Orbiter, they've found several.

    In this week's issue of Science, researchers led by USGS planetary geologist Colin Dundas present detailed observations of eight Martian regions where erosion has uncovered large, steep cross-sections of underlying ice. It’s not just the volume of water they found (it's no mystery that Mars harbors a lot of ice in these particular regions), it’s how mineable it promises to be. The deposits begin at depths as shallow as one meter and extend upwards of 100 meters into the planet. The researchers don't estimate the quantity of ice present, but they do note that the amount of ice near the surface is likely more extensive than the few locations where it's exposed. And what's more, the ice looks pretty damn pure.

    NASA calls the use of space-based resources “in-situ resource utilization,” and the agency thinks it will be essential to survival in deep space. Of particular interest to ISRU planners is the depth of the ice, and the ratio of pure ice to that mixed in with bits of Mars regolith. The more pristine the ice, and the closer it is to the surface, the less energy it takes to extract and use.
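
    As a rough illustration of why that matters (our numbers, not NASA's), even the thermodynamic floor for turning buried ice into liquid water is substantial, before accounting for excavation or for heating any regolith mixed into the deposit. The starting temperature below is an assumed mid-latitude subsurface value.

```python
# Back-of-the-envelope: minimum energy to turn clean buried ice into liquid
# water. The -70 C starting temperature is an assumption for illustration;
# digging deeper or melting ice-cemented soil only adds to this budget.
SPECIFIC_HEAT_ICE = 2.1     # kJ per kg per K, approximate
LATENT_HEAT_FUSION = 334.0  # kJ per kg to melt ice at 0 C

def melt_energy_kj_per_kg(start_temp_c=-70.0):
    """Energy to warm ice from start_temp_c to 0 C and then melt it."""
    warming = SPECIFIC_HEAT_ICE * (0.0 - start_temp_c)
    return warming + LATENT_HEAT_FUSION

print(f"{melt_energy_kj_per_kg():.0f} kJ per kg of clean ice")  # roughly 480 kJ/kg
```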

    The ice found this time isn’t crystal clear. Over years, observations showed that the ice is slowly surrendering water to the atmosphere through a process called sublimation, and signs suggest that boulders and sediment are dislodging from the ice as it recedes. But some debris is to be expected. Dundas and his colleagues hypothesize that the ice originated as snow, falling in waves over millions of years. Some rocky material probably found its way in, in between snow events—but the surrounding ice, the researchers think, is relatively clean.

    "On Mars, when you see something bright, it usually means ice,” says Richard Zurek, chief scientist for the Mars Program Office at NASA's Jet Propulsion Laboratory, who was unaffiliated with the study. Most of the material on Mars reflects little light, "but the albedo readings on these exposed sections show that this is very bright stuff," he says. "And the spectrometer readings support that this is water ice and not ice-cemented soil, which would be much harder to convert into water as a resource."

    Now, don't pack your bags for Mars just yet. The eight sites Dundas and his colleagues observed were all located at upper mid-latitudes, between 55 and 60 degrees north or south of the equator, where temperatures can drop extremely low. Most Mars missions, though, restrict their landing sites to within 30 degrees of the equator—as would future crewed missions to the planet's surface, most likely. As Zurek puts it: "If you wanna stay warm, it's better to be in Hawaii than Alaska."

    But that close to the equator, warmer temperatures could drive subsurface ice reserves deeper into the ground, where they'll be harder to get to. "So that's something you'll want to follow up on and investigate before you put your base down," Zurek says.

    Plans to do so are already in the works. "I'm sure we haven't found all of the exposures at this point," Dundas says, and more could certainly exist closer to the equator. NASA's Mars 2020 rover is equipped with a ground-penetrating radar that could allow it to probe the mysterious upper layers of the planet's surface. The European Space Agency's ExoMars rover, also slated for a 2020 launch, will come outfitted with a drill designed to sample geology at depths of up to two meters.

    Another option: artificial meteors. Scientists imagine sending spacecraft to hitch a ride through Mars' atmosphere on larger vehicles, only to break off at low altitude and collide with the planet. They'd land with enough impact to bury themselves a few meters into the planet's surface, detect the composition of the area around them, and relay their observations back to Earth by way of satellites in Mars orbit. "That one, the technology's not quite there yet, but it's rapidly developing," Zurek says.

    Fortunately, scientists still have some time to pinpoint Mars' reservoirs of water ice. Humans will likely return to the moon before they venture into deep space. Optimistic timelines put our arrival on the Red Planet some time during the 2030s. Where we land, how long we visit, and what we bring along will all depend on the resources that await us—and how hard we'll have to work to get them.

    Read more: https://www.wired.com/story/scientists-discover-clean-water-ice-just-below-mars-surface/

    Not Their Best Work: Boston Dynamics Has Engineered A Microwave That Can Get Tired

    Boston Dynamics—the company behind such amazing projects as the BigDog military robot and DI-Guy human-simulation software—is known for being at the forefront of the robotics field. But its latest offering is pretty underwhelming given its impressive history of cutting-edge achievements in robotic technology: Boston Dynamics announced earlier today that it has engineered a microwave that can get tired and sometimes goes to bed.

    Oh well. They can’t all be winners!

    The project, dubbed DrowsyWave, is the result of tireless research by a highly specialized team of Boston Dynamics engineers who set out to achieve one goal: to make a microwave oven that experiences the feeling of being tired and needs to sleep just like a human being does. After over a year of work, the engineers had their first major breakthrough. “Our test microwave groaned out a yawn midway through reheating a chimichanga and shut down,” said head researcher Elanor Nguyen. “It was clear from monitoring its systems that it had actually gotten tired and needed a little shut-eye. We are ecstatic to announce that this microwave is capable of feeling exhaustion, and can even start gasping for breath after completing strenuous microwaving tasks.”

    Uh, cool? This doesn’t really move humanity forward, but it’s nice Boston Dynamics wanted to share its iffy new invention with us.

    Nguyen went on to say that the DrowsyWave tired microwave is capable of begging for sleep if kept awake or working for too long and that it audibly groans in agony when a user puts something frozen in it because it doesn’t have the energy to cook anything for too long. All in all, it’s kind of an underwhelming accomplishment with unclear applications to the field of robotics, but even a company as cutting-edge as Boston Dynamics is allowed to swing and miss every now and then.

    There’s no doubt about it: The microwave that can get tired is not something that Boston Dynamics should feel particularly proud of. Maybe we expected a little more from the company who brought us a badass quadruped combat robot that can carry 350 pounds of gear over rough terrain, but cut Boston Dynamics some slack. They have a pretty impressive track record of making awesome stuff besides microwaves that need to rest all the time.

    Read more: http://www.clickhole.com/article/not-their-best-work-boston-dynamics-has-engineered-7341

    Google Is Giving Away AI That Can Build Your Genome Sequence

    Today, a teaspoon of spit and a hundred bucks is all you need to get a snapshot of your DNA. But getting the full picture—all 3 billion base pairs of your genome—requires a much more laborious process. One that, even with the aid of sophisticated statistics, scientists still struggle over. It’s exactly the kind of problem that makes sense to outsource to artificial intelligence.

    On Monday, Google released a tool called DeepVariant that uses deep learning—the machine learning technique that now dominates AI—to assemble full human genomes. Modeled loosely on the networks of neurons in the human brain, these massive mathematical models have learned how to do things like identify faces posted to your Facebook news feed, transcribe your inane requests to Siri, and even fight internet trolls. And now, engineers at Google Brain and Verily (Alphabet’s life sciences spin-off) have taught one to take raw sequencing data and line up the billions of As, Ts, Cs, and Gs that make you you.

    And oh yeah, it’s more accurate than all the existing methods out there. Last year, DeepVariant took first prize in an FDA contest promoting improvements in genetic sequencing. The open source version the Google Brain/Verily team introduced to the world Monday reduced the error rates even further—by more than 50 percent. Looks like grandmaster Ke Jie isn’t the only one getting bested by Google’s AI neural networks this year.

    DeepVariant arrives at a time when healthcare providers, pharma firms, and medical diagnostic manufacturers are all racing to capture as much genomic information as they can. To meet the need, Google rivals like IBM and Microsoft are all moving into the healthcare AI space, with speculation about whether Apple and Amazon will follow suit. While DeepVariant’s code comes at no cost, that isn’t true of the computing power required to run it. Scientists say that expense is going to prevent it from becoming the standard anytime soon, especially for large-scale projects.

    But DeepVariant is just the front end of a much wider deployment; genomics is about to go deep learning. And once you go deep learning, you don’t go back.

    It’s been nearly two decades since high-throughput sequencing escaped the labs and went commercial. Today, you can get your whole genome for just $1,000 (quite a steal compared to the $1.5 million it cost to sequence James Watson’s in 2008).

    But the data produced by today’s machines still yield only incomplete, patchy, and glitch-riddled genomes. Errors can get introduced at each step of the process, and that makes it difficult for scientists to distinguish the natural mutations that make you you from random artifacts, especially in repetitive sections of a genome.

    See, most modern sequencing technologies work by taking a sample of your DNA, chopping it up into millions of short snippets, and then using fluorescently-tagged nucleotides to produce reads—the list of As, Ts, Cs, and Gs that correspond to each snippet. Then those millions of reads have to be grouped into abutting sequences and aligned with a reference genome.

    That’s the part that gives scientists so much trouble. Assembling those fragments into a usable approximation of the actual genome is still one of the biggest rate-limiting steps for genetics. A number of software programs exist to help put the jigsaw pieces together. FreeBayes, VarDict, Samtools, and the most widely used, GATK, depend on sophisticated statistical approaches to spot mutations and filter out errors. Each tool has strengths and weaknesses, and scientists often wind up having to use them in conjunction.
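
    For a sense of what "statistical approaches to spot mutations and filter out errors" means in practice, here is a deliberately naive Python sketch: pile up the read bases covering one reference position and call a variant only if the disagreement is too common to be sequencing noise. Real callers such as GATK model base qualities, diploid genotypes, and much more; the threshold and helper function below are ours, for illustration only.

```python
# Naive pileup-based variant calling: decide whether disagreement between the
# reads and the reference at one position looks like a real variant or like
# sequencing error. Illustrative only; real callers are far more sophisticated.
from collections import Counter

def call_variant(ref_base, observed_bases, error_rate=0.01, min_fraction=0.2):
    """Return the most common non-reference base if it appears too often to be noise."""
    counts = Counter(observed_bases)
    depth = sum(counts.values())
    alt_base, alt_count = max(
        ((base, count) for base, count in counts.items() if base != ref_base),
        key=lambda item: item[1],
        default=(None, 0),
    )
    if alt_base and alt_count / depth >= max(min_fraction, 3 * error_rate):
        return alt_base
    return None  # position matches the reference, or the evidence is too weak

# 12 reads cover this position; 5 of them disagree with the reference 'A'.
print(call_variant("A", list("AAAAAAAGGGGG")))  # -> 'G', a candidate variant
```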

    No one knows the limitations of the existing technology better than Mark DePristo and Ryan Poplin. They spent five years creating GATK from whole cloth. This was 2008: no tools, no bioinformatics formats, no standards. “We didn’t even know what we were trying to compute!” says DePristo. But they had a north star: an exciting paper that had just come out, written by a Silicon Valley celebrity named Jeff Dean. As one of Google’s earliest engineers, Dean had helped design and build the fundamental computing systems that underpin the tech titan’s vast online empire. DePristo and Poplin used some of those ideas to build GATK, which became the field’s gold standard.

    But by 2013, the work had plateaued. “We tried almost every standard statistical approach under the sun, but we never found an effective way to move the needle,” says DePristo. “It was unclear after five years whether it was even possible to do better.” DePristo left to pursue a Google Ventures-backed start-up called SynapDx that was developing a blood test for autism. When that folded two years later, one of its board members, Andrew Conrad (of Google X, then Google Life Sciences, then Verily) convinced DePristo to join the Google/Alphabet fold. He was reunited with Poplin, who had joined up the month before.

    And this time, Dean wasn’t just a citation; he was their boss.

    As the head of Google Brain, Dean is the man behind the explosion of neural nets that now prop up all the ways you search and tweet and snap and shop. With his help, DePristo and Poplin wanted to see if they could teach one of these neural nets to piece together a genome more accurately than their baby, GATK.

    The network wasted no time in making them feel obsolete. After training it on benchmark datasets of just seven human genomes, DeepVariant was able to accurately identify those single nucleotide swaps 99.9587 percent of the time. “It was shocking to see how fast the deep learning models outperformed our old tools,” says DePristo. Their team submitted the results to the PrecisionFDA Truth Challenge last summer, where it won a top performance award. In December, they shared them in a paper published on bioRxiv.

    DeepVariant works by transforming the task of variant calling—figuring out which base pairs actually belong to you and not to an error or other processing artifact—into an image classification problem. It takes layers of data and turns them into channels, like the colors on your television set. In the first working model they used three channels: The first was the actual bases, the second was a quality score defined by the sequencer the reads came off of, the third contained other metadata. By compressing all that data into an image file of sorts, and training the model on tens of millions of these multi-channel “images,” DeepVariant began to be able to figure out the likelihood that any given A or T or C or G either matched the reference genome completely, varied by one copy, or varied by both.
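
    Here is a minimal sketch of that encoding idea: stack the reads covering a candidate site into an image-like tensor, one channel per signal, so that a standard image classifier can score the site. The channel contents, dimensions, and function name are simplified stand-ins rather than DeepVariant's actual feature encoding.

```python
# Sketch of encoding a read pileup as a multi-channel "image" for a CNN-style
# classifier, in the spirit described above. Channels and scaling are
# simplified stand-ins, not DeepVariant's real encoding.
import numpy as np

BASE_TO_VALUE = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0, "N": 0.0}

def encode_pileup(reads, qualities, strands, width=15):
    """Build a (num_reads, width, 3) tensor: base identity, base quality, strand."""
    image = np.zeros((len(reads), width, 3), dtype=np.float32)
    for row, (seq, quals, strand) in enumerate(zip(reads, qualities, strands)):
        for col in range(min(width, len(seq))):
            image[row, col, 0] = BASE_TO_VALUE.get(seq[col], 0.0)  # channel 1: base
            image[row, col, 1] = quals[col] / 60.0                 # channel 2: quality
            image[row, col, 2] = 1.0 if strand == "+" else 0.0     # channel 3: metadata
    return image

# Three toy reads over a 15-base window; a classifier would score the center
# column as matching the reference, heterozygous, or homozygous-alternate.
reads = ["ACGTACGTACGTACG", "ACGTACGTTCGTACG", "ACGTACGTACGTACG"]
quals = [[40] * 15, [35] * 15, [38] * 15]
strands = ["+", "-", "+"]
print(encode_pileup(reads, quals, strands).shape)  # (3, 15, 3)
```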

    But they didn’t stop there. After the FDA contest they transitioned the model to TensorFlow, Google's artificial intelligence engine, and continued tweaking its parameters by changing the three compressed data channels into seven raw data channels. That allowed them to reduce the error rate by a further 50 percent. In an independent analysis conducted this week by the genomics computing platform DNAnexus, DeepVariant vastly outperformed GATK, FreeBayes, and Samtools, sometimes reducing errors by as much as 10-fold.

    “That shows that this technology really has an important future in the processing of bioinformatic data,” says DNAnexus CEO Richard Daly. “But it’s only the opening chapter in a book that has 100 chapters.” Daly says he expects this kind of AI to one day actually find the mutations that cause disease. His company received a beta version of DeepVariant, and is now testing the current model with a limited number of its clients—including pharma firms, big health care providers, and medical diagnostic companies.

    To run DeepVariant effectively for these customers, DNAnexus has had to invest in newer generation GPUs to support its platform. The same is true for Canadian competitor, DNAStack, which plans to offer two different versions of DeepVariant—one tuned for low cost and one tuned for speed. Google’s Cloud Platform already supports the tool, and the company is exploring using the TPUs (tensor processing units) that connect things like Google Search, Street View, and Translate to accelerate the genomics calculations as well.

    DeepVariant’s code is open-source so anyone can run it, but to do so at scale will likely require paying for a cloud computing platform. And it’s this cost—computationally and in terms of actual dollars—that has researchers hedging on DeepVariant’s utility.

    “It’s a promising first step, but it isn’t currently scalable to a very large number of samples because it’s just too computationally expensive,” says Daniel MacArthur, a Broad/Harvard human geneticist who has built one of the largest libraries of human DNA to date. For projects like his, which deal in tens of thousands of genomes, DeepVariant is just too costly. And, just like current statistical models, it can only work with the limited reads produced by today’s sequencers.

    Still, he thinks deep learning is here to stay. “It’s just a matter of figuring out how to combine better quality data with better algorithms and eventually we’ll converge on something pretty close to perfect,” says MacArthur. But even then, it’ll still just be a list of letters. At least for the foreseeable future, we’ll still need talented humans to tell us what it all means.

    Read more: https://www.wired.com/story/google-is-giving-away-ai-that-can-build-your-genome-sequence/

    Dem politician sees #Science disaster brewing at the White House (others, not so much)

    Democrat Maryland legislator and candidate for U.S. Congress Aruna Miller sees a science emergency in the Trump administration:

    Read more: https://twitchy.com/dougp-3137/2017/11/25/dem-politician-sees-science-disaster-brewing-at-the-white-house-others-not-so-much/