Here is our annual list of technological advances that we believe will make a real difference in solving important problems. How do we pick? We avoid the one-off tricks, the overhyped new gadgets. Instead we look for those breakthroughs that will truly change how we live and work.
Unhackable internet
An internet based on quantum physics will soon enable inherently secure communication. A team led by Stephanie Wehner, at Delft University of Technology, is building a network connecting four cities in the Netherlands entirely by means of quantum technology. Messages sent over this network will be unhackable.
In the last few years, scientists have learned to transmit pairs of photons across fiber-optic cables in a way that absolutely protects the information encoded in them. A team in China used a form of the technology to construct a 2,000-kilometer network backbone between Beijing and Shanghai—but that project relies partly on classical components that periodically break the quantum link before establishing a new one, introducing the risk of hacking.
The Delft network, in contrast, will be the first to transmit information between cities using quantum techniques from end to end.
The technology relies on a quantum behavior of atomic particles called entanglement. Entangled photons can’t be covertly read without disrupting their content.
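To see why eavesdropping is detectable at all, here is a toy Python simulation of a simplified prepare-and-measure key exchange (in the spirit of the BB84 protocol, not the entanglement-based links Delft is building). The point it illustrates: an eavesdropper who measures photons in flight inevitably disturbs them, and the resulting errors reveal the intrusion.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # number of photons sent

# Sender encodes random bits in randomly chosen bases (0 = rectilinear, 1 = diagonal).
bits = rng.integers(0, 2, N)
send_bases = rng.integers(0, 2, N)

def measure(values, prep_bases, meas_bases, local_rng):
    """Measuring in the wrong basis yields a random result; the right basis returns the bit."""
    random_bits = local_rng.integers(0, 2, len(values))
    return np.where(prep_bases == meas_bases, values, random_bits)

def run(eavesdrop: bool) -> float:
    """Return the error rate on positions where sender and receiver used the same basis."""
    local_rng = np.random.default_rng(1)
    photons, bases_in_flight = bits, send_bases
    if eavesdrop:
        # An eavesdropper must measure (guessing bases) and re-send, disturbing the states.
        eve_bases = local_rng.integers(0, 2, N)
        photons = measure(photons, bases_in_flight, eve_bases, local_rng)
        bases_in_flight = eve_bases
    recv_bases = local_rng.integers(0, 2, N)
    received = measure(photons, bases_in_flight, recv_bases, local_rng)
    keep = send_bases == recv_bases  # sifting: keep only matching-basis rounds
    return float(np.mean(received[keep] != bits[keep]))

print(f"error rate without eavesdropper: {run(False):.3f}")  # ~0.00
print(f"error rate with eavesdropper:    {run(True):.3f}")   # ~0.25, so the tap is detectable
```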
But entangled particles are difficult to create, and harder still to transmit over long distances. Wehner’s team has demonstrated it can send them more than 1.5 kilometers (0.93 miles), and they are confident they can set up a quantum link between Delft and the Hague by
around the end of this year. Ensuring an unbroken connection over greater distances will require quantum repeaters that extend the network.
Such repeaters are currently in design at Delft and elsewhere. The first should be completed in the next five to six years, says Wehner, with a global quantum network following by the end of the decade.
Hyper-personalized medicine
Here’s a definition of a hopeless case: a child with a fatal disease so exceedingly rare that not only is there no treatment, there’s not even anyone in a lab coat studying it. “Too rare to care,” goes the saying.
That’s about to change, thanks to new classes of drugs that can be tailored to a person’s genes. If an extremely rare disease is caused by a specific DNA mistake—as several thousand are—there’s now at least a fighting chance for a genetic fix.
One such case is that of Mila Makovec, a little girl suffering from a devastating illness caused by a unique genetic mutation, who got a drug manufactured just for her. Her case made the New England Journal of Medicine in October, after doctors moved from a readout of her genetic error to a treatment in just a year. They called the drug milasen, after her.
The treatment hasn’t cured Mila. But it seems to have stabilized her condition: it has reduced her seizures, and she has begun to stand and walk with assistance.
Mila’s treatment was possible because creating a gene medicine has never been faster or had a better chance of working. The new medicines might take the form of gene replacement, gene editing, or antisense (the type Mila received), a sort of molecular eraser, which erases or fixes erroneous genetic messages. What the treatments have in common is that they can be programmed, in digital fashion and with digital speed, to correct or compensate for inherited diseases, letter for DNA letter.
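As a purely illustrative sketch of what "letter for DNA letter" programming means, the Python snippet below generates the complementary sequence for a made-up RNA target. Real antisense drugs such as milasen also require chemical modifications, splice-site analysis, and extensive safety testing that no short example can capture.

```python
# Illustrative only: the target sequence is invented, not Mila's actual mutation.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(rna_target: str) -> str:
    """Return the reverse complement of an RNA target, i.e. the strand that base-pairs with it."""
    return "".join(COMPLEMENT[base] for base in reversed(rna_target.upper()))

print(antisense("GAUUACA"))  # -> "UGUAAUC" for this hypothetical target
```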
Digital money
Last June Facebook unveiled a “global digital currency” called Libra. The idea triggered a backlash and Libra may never launch, at least not in the way it was originally envisioned. But it’s still made a difference: just days after Facebook’s announcement, an official from the People’s Bank of China implied that it would speed the development of its own digital currency in response. Now China is poised to become the first major economy to issue a digital version of its money, which it intends as a replacement for physical cash.
China’s leaders apparently see Libra, meant to be backed by a reserve that will be mostly US dollars, as a threat: it could reinforce America’s disproportionate power over the global financial system, which stems from the dollar’s role as the world’s de facto reserve currency. Some suspect China intends to promote its digital renminbi internationally.
Now Facebook’s Libra pitch has become geopolitical. In October, CEO Mark Zuckerberg promised Congress that Libra “will extend America’s financial leadership as well as our
democratic values and oversight around the world.” The digital money wars have begun.
Anti-aging drugs
The first wave of a new class of anti-aging drugs has begun human testing. These drugs won’t let you live longer (yet) but aim to treat specific ailments by slowing or reversing a fundamental process of aging.
The drugs are called senolytics—they work by removing certain cells that accumulate as we age. Known as “senescent” cells, they can create low-level inflammation that suppresses normal mechanisms of cellular repair and creates a toxic environment for neighboring cells.
In June, San Francisco–based Unity Biotechnology reported initial results in patients with mild to severe osteoarthritis of the knee. Results from a larger clinical trial are expected in the second half of 2020. The company is also developing similar drugs to treat age-related diseases of the eyes and lungs, among other conditions.
Senolytics are now in human tests, along with a number of other promising approaches targeting the biological processes that lie at the root of aging and various diseases.
A company called Alkahest injects patients with components found in young people’s blood and says it hopes to halt cognitive and functional decline in patients suffering from mild to moderate Alzheimer’s disease. The company also has drugs for Parkinson’s and dementia in human testing.
And in December, researchers at Drexel University College of Medicine even tried to see if a cream including the immune-suppressing drug rapamycin could slow aging in human skin.
The tests reflect researchers’ expanding efforts to learn if the many diseases associated with getting older—such as heart disease, arthritis, cancer, and dementia—can be hacked to delay their onset.
AI-discovered molecules
The universe of molecules that could be turned into potentially life-saving drugs is mind-boggling in size: researchers estimate the number at around 10⁶⁰. That’s more than all the atoms in the solar system, offering virtually unlimited chemical possibilities—if only chemists could find the worthwhile ones.
Now machine-learning tools can explore large databases of existing molecules and their properties, using the information to generate new possibilities. This could make it faster and cheaper to discover new drug candidates.
In September, a team of researchers at Hong Kong–based Insilico Medicine and the University of Toronto took a convincing step toward showing that the strategy works by synthesizing several drug candidates found by AI algorithms.
Using techniques like deep learning and generative models similar to the ones that allowed a
computer to beat the world champion at the ancient game of Go, the researchers identified some 30,000 novel molecules with desirable properties. They selected six to synthesize and test. One was particularly active and proved promising in animal tests.
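The property-filtering step of such a pipeline can be sketched in a few lines of Python. This assumes the open-source RDKit toolkit and uses hard-coded example molecules standing in for a generative model's output; it is not a description of Insilico Medicine's actual system.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# Toy stand-ins for molecules proposed by a generative model (SMILES strings).
candidates = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "CCCCCCCCCCCCCCCC"]

def drug_like(smiles: str) -> bool:
    """Keep molecules that parse and satisfy two simple rule-of-five style criteria."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return Descriptors.MolWt(mol) < 500 and Descriptors.MolLogP(mol) < 5

shortlist = [s for s in candidates if drug_like(s)]
print(shortlist)  # the candidates worth sending on to synthesis and testing
```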
Chemists in drug discovery often dream up new molecules—an art honed by years of experience and, among the best drug hunters, by a keen intuition. Now these scientists have a new tool to expand their imaginations.
Satellite mega-constellations
New fleets of small satellites can beam a broadband connection to internet terminals. As long as these terminals have a clear view of the sky, they can deliver internet to any nearby devices. SpaceX alone wants to send into orbit this decade more than 4.5 times as many satellites as humans have launched since Sputnik.
These mega-constellations are feasible because we have learned how to build smaller satellites and launch them more cheaply. During the space shuttle era, launching a satellite into space cost roughly $24,800 per pound. A small communications satellite that weighed four tons cost nearly $200 million to fly up.
Today a SpaceX Starlink satellite weighs about 500 pounds (227 kilograms). Reusable architecture and cheaper manufacturing mean we can strap dozens of them onto rockets to greatly lower the cost; a SpaceX Falcon 9 launch today costs about $1,240 per pound.
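A quick back-of-the-envelope check of those figures, assuming US tons and treating the published per-pound costs as rough averages:

```python
# Illustrative arithmetic only, using the cost figures cited above.
shuttle_cost_per_lb = 24_800        # USD per pound, shuttle era
falcon9_cost_per_lb = 1_240         # USD per pound, Falcon 9 today
four_ton_satellite_lb = 4 * 2_000   # 4 US tons = 8,000 lb
starlink_satellite_lb = 500

print(f"4-ton satellite in the shuttle era: ${shuttle_cost_per_lb * four_ton_satellite_lb:,.0f}")  # ~$198 million
print(f"one Starlink satellite on Falcon 9: ${falcon9_cost_per_lb * starlink_satellite_lb:,.0f}")  # ~$620,000
```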
The first 120 Starlink satellites went up last year, and the company planned to launch batches of 60 every two weeks starting in January 2020. OneWeb will launch over 30 satellites later this year. We could soon see thousands of satellites working in tandem to supply internet access for even the poorest and most remote populations on the planet.
But that’s only if things work out. Some researchers are livid because they fear these objects will disrupt astronomy research. Worse is the prospect of a collision that could cascade into a catastrophe of millions of pieces of space debris, making satellite services and future space exploration next to impossible. Starlink’s near-miss with an ESA weather satellite in September was a jolting reminder that the world is woefully unprepared to manage this much orbital traffic. What happens with these mega-constellations this decade will define the future of orbital space.
Quantum supremacy
Quantum computers store and process data in a way completely different from the computers we’re all used to. In theory, they could tackle certain classes of problems that even the most powerful classical supercomputer imaginable would take millennia to solve, like breaking today’s cryptographic codes or simulating the precise behavior of molecules to help discover new drugs and materials.
There have been working quantum computers for several years, but it’s only under certain conditions that they outperform classical ones, and in October Google claimed the first such
demonstration of “quantum supremacy.” A computer with 53 qubits—the basic unit of quantum computation—did a calculation in a little over three minutes that, by Google’s reckoning, would have taken the world’s biggest supercomputer 10,000 years, or 1.5 billion times as long. IBM challenged Google’s claim, saying the speedup would be a thousandfold at best; even so, it was a milestone, and each additional qubit roughly doubles the size of the computational state space the machine can explore.
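For the record, the arithmetic behind that factor, taking "a little over three minutes" as roughly 200 seconds, works out as follows:

```python
# Checking the speedup implied by Google's claim (200 seconds vs. 10,000 years).
seconds_per_year = 365.25 * 24 * 3600
classical_estimate_s = 10_000 * seconds_per_year  # Google's estimate for the supercomputer
quantum_runtime_s = 200                           # "a little over three minutes"

print(f"speedup: {classical_estimate_s / quantum_runtime_s:.2e}")  # ~1.6e9, i.e. about 1.5 billion times
```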
However, Google’s demo was strictly a proof of concept—the equivalent of doing random sums on a calculator and showing that the answers are right. The goal now is to build machines with enough qubits to solve useful problems. This is a formidable challenge: the more qubits you have, the harder it is to maintain their delicate quantum state. Google’s engineers believe the approach they’re using can get them to somewhere between 100 and 1,000 qubits, which may be enough to do something useful—but nobody is quite sure what.
And beyond that? Machines that can crack today’s cryptography will require millions of qubits; it will probably take decades to get there. But one that can model molecules should be easier to build.
Tiny AI
AI has a problem: in the quest to build more powerful algorithms, researchers are using ever greater amounts of data and computing power, and relying on centralized cloud services. This not only generates alarming amounts of carbon emissions but also limits the speed and privacy of AI applications.
But a countertrend of tiny AI is changing that. Tech giants and academic researchers are working on new algorithms to shrink existing deep-learning models without losing their capabilities. Meanwhile, an emerging generation of specialized AI chips promises to pack more computational power into tighter physical spaces, and train and run AI on far less energy.
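One of the simplest shrinking tricks, post-training 8-bit weight quantization, can be sketched in plain NumPy. This is illustrative only: production toolchains such as TensorFlow Lite or PyTorch add calibration data, per-channel scales, and quantized kernels, none of which is modeled here.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 values plus one scale factor (symmetric quantization)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the quantized representation."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
print(f"size: {w.nbytes} bytes -> {q.nbytes} bytes")                 # 4x smaller
print(f"max error: {np.abs(w - dequantize(q, scale)).max():.4f}")    # small reconstruction error
```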
These advances are just starting to become available to consumers. Last May, Google announced that it can now run Google Assistant on users’ phones without sending requests to a remote server. As of iOS 13, Apple runs Siri’s speech recognition capabilities and its QuickType keyboard locally on the iPhone. IBM and Amazon now also offer developer platforms for making and deploying tiny AI.
All this could bring about many benefits. Existing services like voice assistants, autocorrect, and digital cameras will get better and faster without having to ping the cloud every time they need access to a deep-learning model. Tiny AI will also make new applications possible, like mobile-based medical-image analysis or self-driving cars with faster reaction times. Finally, localized AI is better for privacy, since your data no longer needs to leave your device to improve a service or a feature.
But as the benefits of AI become distributed, so will all its challenges. It could become harder to combat surveillance systems or deepfake videos, for example, and discriminatory
algorithms could also proliferate. Researchers, engineers, and policymakers need to work together now to develop technical and policy checks on these potential harms.
Differential privacy
In 2020, the US government has a big task: collect data on the country’s 330 million residents while keeping their identities private. The data is released in statistical tables that policymakers and academics analyze when writing legislation or conducting research. By law, the Census Bureau must make sure that the data can’t be traced back to any individual.
But there are tricks to “de-anonymize” individuals, especially if the census data is combined with other public statistics.
So the Census Bureau injects inaccuracies, or “noise,” into the data. It might make some people younger and others older, or label some white people as black and vice versa, while keeping the totals of each age or ethnic group the same. The more noise you inject, the harder de-anonymization becomes.
Differential privacy is a mathematical technique that makes this process rigorous by measuring how much privacy increases when noise is added. The method is already used by Apple and Facebook to collect aggregate data without identifying particular users.
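At its core the idea is simple enough to sketch in a few lines of Python: add Laplace noise whose scale is set by a privacy parameter, epsilon. This is only a toy version of the Laplace mechanism, not the Census Bureau's production system.

```python
import numpy as np

def private_count(true_count: int, epsilon: float, rng=np.random.default_rng()) -> float:
    """Release a count with Laplace noise of scale 1/epsilon (a count query has sensitivity 1)."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

true_count = 1_234  # e.g., residents in some age group in a census block (made-up number)
print(private_count(true_count, epsilon=0.1))  # smaller epsilon: more noise, stronger privacy
print(private_count(true_count, epsilon=5.0))  # larger epsilon: less noise, weaker privacy
```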
But too much noise can render the data useless. One analysis showed that a differentially private version of the 2010 Census included households that supposedly had 90 people.
If all goes well, the method will likely be used by other federal agencies. Countries like Canada and the UK are watching too.
Climate change attribution
Ten days after Tropical Storm Imelda began flooding neighborhoods across the Houston area last September, a rapid-response research team announced that climate change almost certainly played a role.
The group, World Weather Attribution, had compared high-resolution computer simulations of worlds where climate change did and didn’t occur. In the former, the world we live in, the severe storm was as much as 2.6 times more likely—and up to 28% more intense.
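The statistic behind "2.6 times more likely" is a probability ratio, which can be illustrated with entirely made-up numbers: count how often an event of a given severity occurs in simulations with and without human-caused warming, then compare.

```python
import numpy as np

rng = np.random.default_rng(42)
threshold = 900  # e.g., mm of rainfall defining the extreme event (invented for illustration)

# Synthetic stand-ins for ensembles of climate-model output.
factual = rng.gumbel(loc=550, scale=120, size=100_000)         # simulated world with warming
counterfactual = rng.gumbel(loc=500, scale=110, size=100_000)  # simulated world without warming

p1 = np.mean(factual >= threshold)         # probability of the event with climate change
p0 = np.mean(counterfactual >= threshold)  # probability of the event without climate change

print(f"probability ratio: {p1 / p0:.1f}")                  # "the event was N times more likely"
print(f"fraction of attributable risk: {1 - p0 / p1:.2f}")  # share of the risk due to warming
```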
Earlier this decade, scientists were reluctant to link any specific event to climate change. But many more extreme-weather attribution studies have been done in the last few years, and rapidly improving tools and techniques have made them more reliable and convincing.
This has been made possible by a combination of advances. For one, the lengthening record of detailed satellite data is helping us understand natural systems. Also, increased computing power means scientists can create higher-resolution simulations and conduct many more virtual experiments.
These and other improvements have allowed scientists to state with increasing statistical certainty that yes, global warming is often fueling more dangerous weather events.
By disentangling the role of climate change from other factors, the studies are telling us what kinds of risks we need to prepare for, including how much flooding to expect and how severe heat waves will get as global warming becomes worse. If we choose to listen, they can help us understand how to rebuild our cities and infrastructure for a climate-changed world.
Source: MIT Technology Review