Ultrabooks demoed at CES show that Intel has put considerable effort into making the notebook fresh again.
At the Consumer Electronics Show in Las Vegas today, chip maker Intel refreshed the familiar notebook computer with ideas borrowed from more glamorous competitors.
See through: A prototype ultrabook called a Nikiski has a large, transparent touch pad that stretches the full width of the device.
Touch, voice control, and even gesture control—the latter popularized by Microsoft's Kinect gaming controller—will be coming to lightweight laptops dubbed "ultrabooks," said Mooly Eden, Intel's vice president for sales and marketing, at Intel's press conference this morning.
Intel dominates the market for desktop, laptop, and server processors, but has been a spectator to the rapid growth of smart phones and tablets. Worse for the Santa Clara, California, chip maker, high-powered smart-phone and tablet processors based on designs from U.K.-based ARM are beginning to show potential in Intel's traditional realm.
Smart phones, tablets, and Apple's super-lightweight MacBook Air have made conventional laptops look rather staid in recent years, threatening a major source of revenue for Intel. Eden's presentation made it clear that Intel has spent considerable effort in its labs developing new technologies to refresh the notebook. Touch, voice recognition, and novel hybrid tablet-laptop designs have all been developed and will be licensed to partners such as Asus, Acer, and HP, which make ultrabooks.
Eden also showed a brief demonstration of an ultrabook able to recognize hand and arm gestures made in front of its screen, using software developed by Intel. A simple game involved using a slingshot, operated by extending an arm into the space in front of the ultrabook, making a grasping motion in thin air, then pulling back and releasing to fire the catapult. "We believe that we'll see gestures even with our ultrabook," said Eden. He didn't explain how the technology worked but the ultrabook appeared to have a normal camera, suggesting it was using machine vision software to process video from its webcam.
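Eden didn't say how the recognition works, but if the ultrabook really is processing ordinary webcam video, the most basic building block would be detecting which pixels are in motion between frames. As a purely hypothetical illustration (all function names, thresholds, and frame sizes here are invented for the sketch, not Intel's method), a crude motion detector can be built from frame differencing alone:

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Flag pixels whose brightness changed by more than `threshold`
    between two consecutive grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def gesture_detected(prev_frame, frame, min_fraction=0.02):
    """Report a 'gesture' when a large enough share of the image is moving."""
    mask = motion_mask(prev_frame, frame)
    return mask.mean() > min_fraction  # mean of booleans = moving fraction

# Two synthetic 8-bit grayscale frames: a bright blob (a 'hand') appears.
before = np.zeros((120, 160), dtype=np.uint8)
after = before.copy()
after[40:80, 60:100] = 200  # simulated hand entering the scene

print(gesture_detected(before, after))   # True: large moving region
print(gesture_detected(before, before))  # False: nothing changed
```

A real system would need to track the moving region over time to tell a grasp from a pull-back and a release, but the raw signal it starts from is this kind of changed-pixel mask.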
Eden presented all those new twists on the notebook as logical moves enabled by more power-efficient processors and by a better appreciation of the importance of human-machine interaction. "In the last 30 years, the number of transistors went up a million percent, but we didn't do enough with the man-machine interface," he said.
There was no reason for touch to have appeared in phones and tablet devices but not laptops, said Eden. "Let me tell you something, it's not going to skip the ultrabook," he said. Trials in Europe, the U.S., China, and Brazil involving prototype ultrabooks with touch screens have found that people used the touch panel for around 70 percent of operations, he said. Eden also dismissed claims that people would find operating a touch panel in a notebook tiring: "People say that it's very easy."
Eden was dismissive of tablets, labeling them devices well suited to consuming movies and other content, but not to doing work or creating content. Intel's anthropologists had discovered that being able to do both is important, he said. "People don't buy the story that consumption is enough," Eden said. "We are people; consumption is for cows."
Eden also showed ultrabooks with designs halfway between phone and tablet. The screen of one design, which he described as a "slider," could be moved over the keyboard to become a tablet. A prototype called a Nikiski has a large, transparent touch pad that stretches the full width of the device. When a Nikiski laptop is closed, some of its screen is visible through that touchpad, providing easy access to notifications like calendar events or e-mails. When open, the panel can detect when a person places his hands down to type on the keyboard.
The Nikiski prototype was shown running the forthcoming Windows 8 operating system. It includes a special interface, known as Metro, which presents notifications and access to programs using a grid of tiles intended to be swiped and tapped. It was originally designed for touch devices. "We'll be able to get an even better [touch] experience with the tile experience," said Eden.
Peter Mahoney, chief marketing officer for Nuance, which develops voice recognition technology, joined Eden to announce that future ultrabooks will be able to recognize voice commands in a manner similar to Apple's Siri assistant, which is built into the iPhone 4S.
Nuance's technology is licensed by Apple for use in Siri, but Mahoney said that voice control could be more powerful in an ultrabook because the devices have more computing power than phones do. Unlike Google or Apple's voice recognition, there will be no need for speech data to be sent to cloud servers for analysis, he said, leading to quicker performance. The software can also adapt to a person's voice and even relatively thick accents, he said. The feature will initially support nine languages: English, French, German, Italian, Dutch, Spanish, Brazilian Portuguese, Japanese, and Mandarin.
Eden said that the voice feature could be used to compose social-media messages and updates, and to ask a closed laptop in a person's bag for information with questions such as "when is my next meeting?"
There are more than 75 ultrabook devices "in the design pipeline" for 2012, said Eden, and a handful will launch with touch screens before the year is out. Analysts at the Consumer Electronics Association, which organizes CES, estimate that Intel's various partners will launch more than 50 regular ultrabooks this week.
Although Intel says ultrabooks will be the company's main focus in 2012, the company is also working to gain a foothold in phones and tablets, and recently showed prototypes expected to be seen again at CES later this week.
BY TOM SIMONITE
Monday, January 16, 2012
Wednesday, January 4, 2012
Technological Healing
A leading researcher says digital technologies are about to make health care more effective. But is so much data really beneficial?
Nanosensors patrolling your bloodstream for the first sign of an imminent stroke or heart attack, releasing anticlotting or anti-inflammatory drugs to stop it in its tracks. Cell phones that display your vital signs and take ultrasound images of your heart or abdomen. Genetic scans of malignant cells that match your cancer to the most effective treatment.
In cardiologist Eric Topol's vision, medicine is on the verge of an overhaul akin to the one that digital technology has brought to everything from how we communicate to how we locate a pizza parlor. Until now, he writes in his upcoming book The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Health Care, the "ossified" and "sclerotic" nature of medicine has left health "largely unaffected, insulated, and almost compartmentalized from [the] digital revolution." But that, he argues, is about to change.
Digital technologies, he foresees, can bring us true prevention (courtesy of those nanosensors that stop an incipient heart attack), individualized care (thanks to DNA analyses that match patients to effective drugs), cost savings (by giving patients only those drugs that help them), and a reduction in medical errors (because of electronic health records, or EHRs). Virtual house calls and remote monitoring could replace most doctor visits and even many hospitalizations. Topol, the director of the Scripps Translational Science Institute, is far from alone: e-health is so widely favored that the 2010 U.S. health-care reform act allocates billions of dollars to electronic health records in the belief that they will improve care.
Anyone who has ever been sick or who is likely to ever get sick—in other words, all of us—would say, Bring it on. There is only one problem: the paucity of evidence that these technologies benefit patients. Topol is not unaware of that. The eminently readable Creative Destruction almost seems to have two authors, one of them a rigorous, hard-nosed physician/researcher who insightfully critiques the tendency to base treatments on what is effective for the average patient. This Topol cites study after study showing that much of what he celebrates may not benefit many individual patients at all. The other author, however, is a kid in the electronics store whose eyes light up at every cool new toy. He seems to dismiss the other Topol as a skunk at a picnic.
Much of the enthusiasm for bringing the information revolution to medicine reflects the assumption that more information means better health care. Actual data offer reasons for caution, if not skepticism. Take telemonitoring, in which today's mobile apps and tomorrow's nanosensors would measure blood pressure, respiration, blood glucose, cholesterol, and other physiological indicators. "Previously, we've been able to assess people's health status when they came in to a doctor's office, but mobile and wireless technology allow us to monitor and track important health indicators throughout the day, and get alerts before something gets too bad," says William Riley, program director at the National Heart, Lung & Blood Institute and chairman of a mobile health interest group at the National Institutes of Health. "Soon there won't be much that we can't monitor remotely."
Certainly, it is worthwhile to monitor blood pressure, glucose, and other indicators; if nothing else, having regular access to such data might help people make better choices about their health. But does turning the flow of data into a deluge lead to better results on a large scale? The evidence is mixed. In a 2010 study of 480 patients, telemonitoring of hypertension led to larger reductions in blood pressure than did standard care. And a 2008 study found that using messaging devices and occasional teleconferencing to monitor patients with chronic conditions such as diabetes and heart disease reduced hospital admissions by 19 percent. But a 2010 study of 1,653 patients hospitalized for heart failure concluded that "telemonitoring did not improve outcomes." Similarly, a recent review of randomized studies of mobile apps for smoking cessation found that they helped in the short term, but that there is insufficient research to determine the long-term benefits. Given the land rush into mobile health technologies, or "m-health," the lack of data on their helpfulness raises concerns. "People are putting out systems and technologies that haven't been studied," says Riley.
These concerns also apply to technologies we don't have yet, like those nanosensors in our blood. For instance, studies have reached conflicting conclusions about whether diabetics benefit from aggressive glucose control—something that could be provided by nanosensors paired with insulin delivery devices. Several studies have found that it can lead to hypoglycemia (dangerously low levels of blood glucose) and does not reduce mortality in severely ill diabetics. And sensors may be no better at detecting incipient cancers or heart attacks. If the ongoing debate about overdiagnosis of breast and prostate cancer has taught us anything, it should be that an abnormality that looks like cancer might not spread or do harm, and therefore should not necessarily be treated. For heart attacks, we need rigorous clinical trials establishing the rate of false positives and false negatives before we start handing out nanosensors like lollipops.
EHRs also seem like a can't-miss advance: corral a patient's history in easily searched electrons, rather than leaving it scattered in piles of paper with illegible scribbles, and you'll reduce medical errors, minimize redundant tests, avoid dangerous drug interactions (the system alerts the prescriber if a new prescription should not be taken with an existing one), and ensure that necessary exams are done (by reminding a physician to, say, test a diabetic's vision).
In practice, however, the track record is mixed. In one widely cited study, scientists led by Jeffrey Linder of Harvard Medical School reported in 2007 that EHRs were not associated with better care in doctors' offices on 14 of 17 quality indicators, including managing common diseases, providing preventive counseling and screening tests, and avoiding potentially inappropriate prescriptions to elderly patients. (Practices that used EHRs did do better at avoiding unnecessary urinalysis tests.) Topol acknowledges that there is no evidence that the use of EHRs reduces diagnostic errors, and he cites several studies that, for instance, found "no consistent association between better quality of care and [EHRs]." Indeed, one disturbing study he describes showed that the rate of patient deaths doubled in the first five months after a hospital computerized its system for ordering drugs.
Financial incentives threaten another piece of Topol's vision. Perhaps the most promising path to personal medicine is pharmacogenomics, or using genetics to identify patients who will—or will not—benefit from a drug. Clearly, the need is huge. Clinical trials have shown that only one or two people out of 100 without prior heart disease benefit from a certain statin, for instance, and one heart attack victim in 100 benefits more from tPA (tissue plasminogen activator, a genetically engineered clot-dissolving drug) than from streptokinase (a cheap, older clot buster). Genetic scans might eventually reveal who those one or two are. Similarly, as Topol notes, only half the patients receiving a $50,000 hepatitis C drug, and half of those taking rheumatoid arthritis drugs that ring up some $14 billion in annual sales, see their health improve on these medications. By preëmptively identifying who's in which half, genomics might keep patients, private insurers, and Medicare from wasting tens of billions of dollars a year.
Yet despite some progress in matching cancer drugs to tumors, pharmacogenomics "has had limited impact on clinical practice," says Joshua Cohen of the Tufts Center for the Study of Drug Development, who led a 2011 study of the field. Several dozen diagnostics are in use to assess whether patients would benefit from a specific drug, he estimates; one of the best-known analyzes breast cancers to see if they are fueled by a mutation in the HER2 protein, which means they are treatable with Herceptin. But insurers still doubt the value of most such tests. It's not clear that testing everyone who's about to be prescribed a drug would save money compared with giving it to all those patients and letting the chips fall where they may.
Genotyping is not even routine in clinical trials of experimental cancer drugs. As Tyler Jacks, an MIT cancer researcher, recently told me, companies "run big dumb trials" rather than test drugs specifically on patients whose cancer is driven by the mutation the drug targets. Why? Companies calculate that it is more profitable to test these drugs on many patients, not just those with the mutation in question. That's because although a new drug might help nearly all lung cancer patients with a particular mutation, a research trial might indicate that it helps—just to make up a number—30 percent of lung cancer patients as a whole. Even that less impressive number could be enough for Food and Drug Administration approval to sell the drug to everyone with lung cancer. Limiting the trial to those with the mutation would limit sales to those patients. The risk that the clinical trial will fail is more than balanced by the chance to sell the drug to millions more people.
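The arithmetic behind that calculation is straightforward. With entirely hypothetical numbers (the patient count, mutation rate, and price below are made up for illustration, not taken from any trial), the revenue gap between the two labeling strategies looks like this:

```python
# Hypothetical numbers to illustrate the incentive Jacks describes.
patients = 100_000        # annual lung-cancer patients (made up)
mutation_fraction = 0.10  # share whose tumors carry the target mutation
price = 50_000            # per-course drug price in dollars (made up)

# Label restricted to mutation carriers vs. approval for all comers.
targeted_market = patients * mutation_fraction * price
broad_market = patients * price

print(f"mutation-only label: ${targeted_market:,.0f}")
print(f"all-comers label:    ${broad_market:,.0f}")  # 10x larger market
```

Under these assumptions the all-comers label is worth ten times the targeted one, which is why a company might accept a riskier, less impressive trial result in exchange for the bigger market.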
Such financial considerations are not all that stands in the way of Topol's predictions. He and other enthusiasts need to overcome the lack of evidence that cool gadgets will improve health and save money. But though he acknowledges the "legitimate worry" about adopting technologies before they have been validated, his cheerleading hardly flags. "The ability to digitize any individual's biology, physiology, and anatomy" will "undoubtedly reshape" medicine, he declares, thanks to the "super-convergence of DNA sequencing, mobile smart phones and digital devices, wearable nanosensors, the Internet, [and] cloud computing." Only a fool wouldn't root for such changes, and indeed, that's why Topol wrote the book, he says: to inspire people to demand that medicine enter the 21st century. Yet he may have underestimated how much "destruction" will be required for that goal to be realized.
BY SHARON BEGLEY
Sharon Begley, a former science columnist at Newsweek and the Wall Street Journal, is a contributing writer for Newsweek and its website, the Daily Beast.
Tuesday, January 3, 2012
New Camera Captures Light in Motion
The system records 0.6 trillion frames a second—good enough to follow the path of a laser beam as it bounces off objects.
Hollywood has to resort to trickery to show moviegoers laser beams traveling through the air. That's because the beams move too fast to be captured on film. Now a camera that records frames at a rate of 0.6 trillion every second can truly capture the bouncing path of a laser pulse.
The system was developed by researchers led by Ramesh Raskar at MIT's Media Lab. Currently limited to a tabletop inside the group's lab, the camera can record what happens when very short pulses of laser light, lasting just 50 femtoseconds (50 quadrillionths of a second), hit objects in front of it. The camera captures the pulses bouncing between and reflecting off objects.
Raskar says the new camera could be used for novel kinds of medical imaging, tracking light inside body tissue. It could also enable novel kinds of photographic manipulation. In experiments, the camera has captured frames roughly 500 by 600 pixels in size.
The fastest scientific cameras on the market typically capture images at rates in the low millions of frames per second. They work much the way a consumer digital camera does, with a light sensor that converts light from the lens into a digital signal that's saved to disk.
The Media Lab researchers had to take a different approach, says Andreas Velten, a member of the research team. An electronic system's reaction time is inherently limited to roughly 500 picoseconds, he says, because it takes too long for electronic signals to travel along the wires and through the chips in such designs. "[Our shutter speed is] just under two picoseconds because we detect light with a streak camera, which gets around the electrical problem."
More typically used to measure the timing of laser pulses than for photography, a streak camera doesn't need any electronics to record light. Light entering the streak camera falls onto a specialized electrode—a photocathode—that converts the stream of photons into a matching stream of electrons. That electron beam hits a screen on the back of the streak camera that's covered with chemicals that light up wherever the beam falls. The same mechanism is at work in a traditional cathode ray tube TV set.
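The article's figures make the design constraint concrete: at 0.6 trillion frames per second, each frame lasts under two picoseconds, during which light itself travels only about half a millimeter. The unit conversions, as a quick sketch:

```python
# Back-of-the-envelope timing for the MIT streak-camera setup,
# using the figures quoted in the article.
FRAME_RATE = 0.6e12       # frames per second (0.6 trillion)
SPEED_OF_LIGHT = 2.998e8  # meters per second
PULSE_DURATION = 50e-15   # seconds (50 femtoseconds)

frame_time = 1 / FRAME_RATE                    # seconds per frame
light_per_frame = SPEED_OF_LIGHT * frame_time  # how far light moves per frame
pulse_extent = SPEED_OF_LIGHT * PULSE_DURATION # physical length of one pulse

print(f"{frame_time * 1e12:.2f} ps per frame")      # ~1.67 ps
print(f"{light_per_frame * 1e3:.2f} mm per frame")  # ~0.50 mm
print(f"{pulse_extent * 1e6:.1f} um pulse length")  # ~15.0 um
```

A pulse only 15 micrometers long advancing half a millimeter per frame is exactly the regime where a movie of light in flight becomes possible, and it also shows why Velten's two-picosecond streak-camera shutter matters: a 500-picosecond electronic shutter would smear the pulse across some 15 centimeters of travel.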
Monday, January 2, 2012
The Year in Materials
Vibrant displays head to market, invisibility cloaks become more practical, and batteries store more energy.
Light warp: This is the largest sheet ever made of a metamaterial that can bend near infrared light backward.
John Rogers
Tiny crystals called quantum dots emit intense, sharply defined colors. Now researchers have made LED displays that use quantum dots. Five years ago, QD Vision demonstrated its first, rudimentary one-color displays, using the nanoscale crystals. This year it demonstrated a full-color display capable of showing video. The company says it could be another five years before the technology appears in commercial displays. Samsung might get there first—it's also developing quantum-dot displays, and demonstrated a full-color one in February.
Quantum-dot displays could use far less energy than LCDs. Another ingenious way to reduce energy use is to make displays that emit no light at all, but instead reflect ambient light, an approach being taken by Qualcomm with its full-color Mirasol displays, which use only a tenth of the energy of an LCD. The technology has started to appear in tablet computers in South Korea.
No display looks good after it's covered with fingerprints. A new coating based on soot from a candle flame could provide a cheap oil-repelling layer that could eliminate smudges.
Novel nanostructured materials could greatly enhance the power output of solar panels and make them cheaper by capturing light that would have otherwise been reflected. They could also achieve these goals by converting near infrared light into colors that conventional silicon solar cells can absorb. Another material could render stealth aircraft invisible at night—and invisible to radar night and day.
Metamaterials offer another approach to invisibility: instead of absorbing light, metamaterials bend it around an object. Until this year, researchers have only been able to make metamaterials on a small scale—less than a millimeter across. Now they've made them big enough to be practical. They don't work yet for all wavelengths of light, but they could render objects invisible to night vision equipment.
Stanford researchers built a battery electrode that can be recharged 40,000 times—compared to the 1,000 charges you'd get with a typical laptop battery. Since the electrode lasts so long, and is made of abundant materials, it could provide an inexpensive way to store power from wind turbines and solar panels.
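To put that cycle count in perspective, a rough comparison helps (the one-full-cycle-per-day usage pattern below is an illustrative assumption, not a figure from the research):

```python
# Rough electrode lifetime at one full charge-discharge cycle per day.
# The one-cycle-per-day pattern is an assumption for illustration only.
CYCLES_PER_YEAR = 365

def lifetime_years(cycle_rating):
    """Years of service at one full cycle per day."""
    return cycle_rating / CYCLES_PER_YEAR

print(f"Typical laptop electrode (1,000 cycles): {lifetime_years(1_000):.1f} years")
print(f"Stanford electrode (40,000 cycles): {lifetime_years(40_000):.0f} years")
```

Under that assumption, the 1,000-cycle electrode wears out in under three years, while the 40,000-cycle electrode would outlast the wind turbine or solar panel it serves—which is why cycle life matters so much for grid storage.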
Other researchers have developed inexpensive materials that can store 10 times as much energy as conventional graphite electrodes in lithium-ion batteries. Paired with an equally high-capacity opposite electrode, these could transform portable electronics and electric vehicles. One technology in particular, from Lawrence Berkeley National Laboratory, seems promising because it uses a conductive polymer that can be incorporated into existing manufacturing lines, instead of requiring the expensive new technology for making nanostructures required by others.
New tools could speed the next materials breakthroughs. A modeling program developed at Harvard has led to one of the best organic semiconductors ever made. And a robotic system for making thousands of battery cells with unique electrode chemistries has discovered materials that could boost lithium-ion battery storage capacity by 25 percent.
BY KEVIN BULLIS
Hybrids versus Electric Cars
GM is on the hybrid bandwagon while other automakers continue to argue against it.
According to the Wall Street Journal, Honda, Nissan, and Renault are making the same arguments against hybrid vehicles that General Motors made several years ago, when it predicted that hybrids would fail. (See "Hybrid or Electric: Car Makers Take Sides" and "Honda Won't Pursue Plug-in Hybrids.") The difference is that this time the arguments are right--at least for some markets.
In the 1990s, U.S. automakers such as GM led the development of hybrid-vehicle technology. But GM elected to drop hybrids in favor of a much longer-term technology--hydrogen fuel-cell vehicles--arguing that hybrids were too expensive and didn't provide enough environmental benefit to be successful. Then Toyota's Prius proved GM wrong. And recently, GM has become a big promoter of hybrid technology, especially next-generation plug-in hybrids.
Not so Honda, Nissan, and Renault. According to the Journal report, Carlos Ghosn, the CEO of Renault and Nissan, complains that hybrids don't really do much to reduce petroleum consumption and pollution, arguing that it's better to build all-electric vehicles that have zero emissions. U.S. automakers such as GM made the same arguments, although they pushed for fuel-cell vehicles, not battery-powered vehicles. (GM, of course, already had a battery-electric vehicle, the EV1, which it scrapped.)
Honda currently sells hybrid vehicles, but the company's president and CEO, Takeo Fukui, is skeptical of next-generation plug-in hybrids. Such cars would still have both electric motors and a gasoline engine, but they could go much farther on electricity alone than today's hybrids can. GM has been touting its Volt concept, which would go 40 miles on electricity stored in lithium-ion batteries. For longer trips, a gas generator would kick on to recharge the battery, providing an additional 600 miles of range. Fukui's argument is that the gas engine in the Volt is unnecessary. Presumably, he is suggesting that it would be better, and cheaper, to use batteries alone, or to stick with gasoline.
His argument doesn't make sense in the United States, where there's probably not much of a market for a car that can go only 40 miles on a charge. But a couple of trends suggest that there may indeed be a growing market for relatively short-range electric cars. London has a congestion tax on vehicles driving in the city--and other cities are considering imposing similar fees--from which zero-emission vehicles are exempt. Such regulations could well evolve to keep non-zero-emission vehicles out of city centers entirely, Ghosn suggests. Meanwhile, the taxes make it more expensive to drive gas-powered cars. Climate-change legislation could also make it more expensive to drive conventional vehicles. As these costs rise, the people who will feel the pressure most are those who likely cannot afford a car with both an engine and an electric motor. For them, a 40-mile, zero-emission, all-electric commuter could be appealing, especially since (in the United States) most people drive less than 40 miles a day.
BY KEVIN BULLIS
Sunday, January 1, 2012
What If Electric Cars Were Better?
Improving the energy density of batteries is the key to mass-market electric vehicles.
Electric vehicles are still too expensive and have too many limitations to compete with regular cars, except in a few niche markets. Will that ever change? The answer has everything to do with battery technology. Batteries carrying more charge for a lower price could extend the range of electric cars from today's 70 miles to hundreds of miles, effectively challenging the internal-combustion engine.
To get there, many experts agree, a major shift in battery technology may be needed. Electric vehicles such as the all-electric Nissan Leaf and the Chevrolet Volt, a plug-in hybrid from GM, rely on larger versions of the lithium-ion batteries that power smart phones, iPads, and ultrathin laptops. Such gadgets are possible only because lithium-ion batteries have twice the energy density of the nickel–metal hydride batteries used in the brick-size mobile phones and other bulky consumer electronics of the 1980s.
Using lithium-ion batteries, companies like Nissan, which has sold 20,000 Leafs globally (the car is priced at $33,000 in the U.S.), are predicting that they've already hit upon the right mix of vehicle range and sticker price to satisfy many commuters who drive limited distances.
The problem, however, is that despite several decades of optimization, lithium-ion batteries are still expensive and limited in performance, and they will probably not get much better. Assembled battery packs for a vehicle like the Volt cost roughly $10,000 and deliver about 40 miles before an internal-combustion engine kicks in to extend the charge. The battery for the Leaf costs about $15,000 (according to estimates from the Department of Energy) and delivers about 70 miles of driving, depending on various conditions. According to an analysis by the National Academy of Sciences, plug-in hybrid electric vehicles with a 40-mile electric range are "unlikely" to be cost competitive with conventional cars before 2040, assuming gasoline prices of $4 per gallon.
Estimates of the cost of assembled lithium-ion battery packs vary widely (see "Will Electric Vehicles Finally Succeed?"). The NAS report put the cost at about $625 to $850 per kilowatt-hour of energy; a Volt-like car requires a battery capacity of 16 kilowatt-hours. But the bottom line is that batteries need to get far cheaper and provide far greater range if electric vehicles are ever to become truly popular.
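Multiplying the NAS per-kilowatt-hour figures by a Volt-class pack size lines up with the roughly $10,000 pack cost quoted earlier; a quick sanity check (treating the pack as 16 kilowatt-hours, the capacity the NAS analysis uses):

```python
# Sanity check on pack cost from the NAS per-kWh estimates.
# Assumes a 16 kWh Volt-class pack, as discussed in the text.
LOW_PER_KWH, HIGH_PER_KWH = 625, 850  # dollars per kilowatt-hour (NAS range)
PACK_KWH = 16                          # Volt-class pack capacity

low_cost = LOW_PER_KWH * PACK_KWH    # low end of the NAS range
high_cost = HIGH_PER_KWH * PACK_KWH  # high end of the NAS range

print(f"Estimated pack cost: ${low_cost:,} to ${high_cost:,}")
```

The low end of that range, $10,000, matches the Volt pack estimate above; the high end, $13,600, shows why pack cost dominates the sticker price of an electric car.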
Whether that's possible with conventional lithium-ion technology is a matter of debate. Though some involved in battery manufacturing say the technology still has room for improvement, the NAS report, for one, notes that although lithium-ion batteries have been getting far cheaper over the last decade, those reductions seem to be leveling off. It concludes that even under optimistic assumptions, lithium-ion batteries are likely to cost around $360 per kilowatt-hour in 2030.
The U.S. Department of Energy, however, has far more ambitious goals for electric-vehicle batteries, aiming to bring the cost down to $125 per kilowatt-hour by 2020. For that, radical new technologies will probably be necessary. As part of its effort to encourage battery innovation, the DOE's ARPA-E program has funded 10 projects, most of them involving startup companies, to find "game-changing technologies" that will deliver an electric car with a range of 300 to 500 miles.
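The gap between the NAS projection and the DOE target is easier to see when both are applied to a concrete pack size; the comparison below reuses the 16-kilowatt-hour Volt-class pack from the NAS figures purely as an illustration:

```python
# What per-kWh price targets imply for a Volt-class (16 kWh) pack.
# The pack size is reused from the NAS analysis as an illustration.
VOLT_PACK_KWH = 16
DOE_TARGET = 125   # $/kWh, DOE goal for 2020
NAS_2030 = 360     # $/kWh, NAS optimistic projection for 2030

doe_pack_cost = DOE_TARGET * VOLT_PACK_KWH
nas_pack_cost = NAS_2030 * VOLT_PACK_KWH

print(f"Pack at DOE 2020 target:    ${doe_pack_cost:,}")
print(f"Pack at NAS 2030 projection: ${nas_pack_cost:,}")
```

Hitting the DOE target would cut a $10,000-plus pack to about $2,000—roughly a third of what the NAS expects even a decade later—which is why the department is betting on radically new chemistries rather than incremental lithium-ion improvements.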
The department has put $57 million toward efforts to develop a number of very different technologies, including metal-air, lithium-sulfur, and solid-state batteries. Among the funding recipients is Pellion Technologies, a Cambridge, Massachusetts-based startup working on magnesium-ion batteries that could provide twice the energy density of lithium-ion ones; another ARPA-E-funded startup, Sion Power in Tucson, Arizona, promises a lithium-sulfur battery that has an energy density three times that of conventional lithium-ion batteries and could power electric vehicles for more than 300 miles.
The ARPA-E program is meant to support high-risk projects, so it's hard to know whether any of the new battery technologies will succeed. But if the DOE meets its ambitious goals, it will truly change the economics of electric cars. Improving the energy density of batteries has already changed how we communicate. Someday it could change how we commute.
BY DAVID ROTMAN