Hacker News

hackinfo delivers the latest news and updates on security breaches, cybercrime, vulnerabilities, cyber security, penetration-testing tools, and more.




Saturday, December 27, 2014

ET deals: Lenovo K450e Core i7 desktop with 16GB RAM, 2TB HDD for $700

One of the pros of owning a large form factor desktop is how easy it is to upgrade components at will. Start off with a great set of base specs and enjoy ample room to grow with Lenovo’s performance focused K450e desktop, on sale now for just $699.99 after a helpful $300 in savings.
Lenovo has equipped the K450e desktop with a strong set of specs to get you started, headlined by a 4th-gen Core i7-4790 quad-core processor. Also included off the bat are 16GB of RAM and a 2TB hybrid hard drive with an 8GB SSD cache. Together these specs give you everything you need to fly through both everyday and complex office and multimedia tasks, all while multitasking freely and enjoying snappy performance and quick boot times.
As mentioned above, the K450e is designed to be highly upgradeable, and features tool-less entry, easy-pull trays, clear access to drive bays, and more. This means it'll be easy to upgrade your memory even further if you want to, as well as to add a dedicated graphics card or even a new PSU and turn this into much more of a serious gaming rig. Aiding that cause, the case is conveniently built with side vents for more efficient cooling.
Even without a dedicated GPU, the K450e offers a large selection of ports, including HDMI, DVI, VGA, and six USB ports (four USB 3.0, two of them conveniently front-facing). There's also 7.1-channel surround-sound support, a DVD-RW drive, and built-in 802.11n WiFi. A keyboard and mouse are included with purchase. Snag this desktop and expand it to suit your needs – or not – from a great starting price after our coupon savings.
Lenovo K450e Core i7 desktop with 16GB RAM, 2TB Hybrid Hard Drive for $699.99. Apply coupon code USPK450E3124 at checkout for total $300 savings.
Have some interest in gaming, but don’t need a traditional desktop? We’ve picked out some of our other top options for you, including the console-like Alienware Alpha, a unique device with a custom UI and tight Steam integration that combines a console with a PC. We’ve also got a pair of quality overall laptops, each with dedicated graphics.
Alienware Alpha Core i3 PC gaming console with Xbox 360 controller & $50 eGift card for $499. Apply coupon code M8MR6FW$PP8$36 at checkout for $50 savings – effective price is just $449 assuming you use the entire gift card.
Lenovo IdeaPad Z710 17.3-inch Core i7 1080p laptop with 8GB RAM, 1TB HDD, 2GB Nvidia GeForce GT840M for $849. Get $400.99 savings instantly.
Toshiba Satellite S50-BBT2G22 15.6-inch Core i5 laptop with 6GB RAM, 750GB HDD, 2GB R7 M260 for $649.99. Get $310 savings instantly.

Viber launches games platform

After voice and video calling, the social messaging app is now adding social gaming to its platform.
As the popularity of social messaging apps grows, so does the need for the different companies behind the apps to offer new and exciting features in order to keep up with the competition and to keep people on their platforms.
Gaming is quickly becoming part and parcel of messaging apps that are big in Japan, Korea and China -- such as Line and WeChat. So, while Viber is by no means in uncharted territory, it is the first app with a large Western world user base to plug into the trend for smartphone gaming.
"We have been focusing on broadening the functionality of Viber and are really excited to unveil Viber Games," said Talmon Marco, Viber CEO. "This major expansion of our platform gives people another completely new way to connect with Viber. It's an important step for us and we are looking forward to the response from our users."
By hosting games within the app, Viber can bring a host of social, interactive, connected elements that are missing from standalone titles. Users can compete with their contacts and gift each other in-game purchases for example.
But perhaps the smartest move is putting quality above all else. When the feature goes live on Monday in five initial test regions -- Belarus, Malaysia, Israel, Singapore and Ukraine -- players will only have a choice of three games. All of the titles have been developed specifically for Viber and even feature characters based on the app's growing selection of customizable stickers.
The three games are Viber Candy Mania -- a platform/puzzle game featuring Violet and Legat (two popular Viber sticker characters); Viber Pop -- a bubble puzzle game; and Wild Luck Casino, which features virtual slot machines.

After a test run, Viber plans to roll the games out globally some time in 2015.

Tuesday, December 23, 2014

South Korea nuclear plant hit by hacker

The hacking comes in the wake of increased tension and trouble from North Korea, though the source has not been confirmed.


Computers at a nuclear power plant in South Korea have been compromised by a hacker, but the plant's operator says no critical data has been leaked.
The hacker was able to access blueprints, floor maps and other information on the plant, the South Korean Yonhap News Agency reported Sunday. Using a Twitter account called "president of anti-nuclear reactor group," the hacker has released a total of four postings of the leaked data since December 15, each one revealing internal designs and manuals of the Gori-2 and Wolsong-1 nuclear reactors run by Korea Hydro and Nuclear Power Co. (KHNP), Yonhap added. The hacker has threatened to leak further information unless the reactors are shut down.
KHNP has insisted that the leaked information is not critical and does not undermine the safety of the reactors. The company also played down the threat of any type of cyberattack, saying that the reactors' controllers are protected because they're not linked to any external networks, according to the Wall Street Journal.
The hacking against KHNP nuclear plants comes in the midst of a major hack against Sony Pictures over its movie "The Interview," a comedy about an assassination attempt against North Korean leader Kim Jong-un. The FBI has accused North Korea of orchestrating the Sony hack, though the country has denied any involvement. North Korea has since suggested a joint investigation into the hack with the US, while also accusing the US of being involved in the making of the film, according to The Guardian.
Despite the increased tension, no fingers have been pointed at North Korea for the hacking against the KHNP power plants. An official at KHNP told Reuters that the hacking appeared to be the work of "elements who want to cause social unrest," but added that he had no one specific in mind.
Government officials looking into the incident were able to trace the hacker's IP address to a PC located in a specific location, Yonhap said. Investigators have been sent to the location as well as to the plant's reactors to probe further.

Monday, December 22, 2014

Desalination out of Desperation

Even in drought-stricken California, San Diego stands out. It gets less rain than parched Los Angeles or Fresno. The region has less groundwater than many other parts of the state. And more than 80 percent of water for homes and businesses is imported from sources that are increasingly stressed. The Colorado River is so overtaxed that it rarely reaches the sea; water originating in the Sacramento River delta, more than 400 miles north, was rationed by state officials this year, cutting off some farmers in California’s Central Valley from their main source of irrigation. San Diego County, hot, dry, and increasingly populous, offers a preview of where much of the world is headed. So too does a recent decision by the county government: it is building the largest seawater desalination plant in the Western Hemisphere, at a cost of $1 billion.

The massive project, in Carlsbad, teems with nearly 500 workers in yellow hard hats. When it’s done next year, it will take in more than 100 million gallons of Pacific Ocean water daily and produce 54 million gallons of fresh, drinkable water. While this adds up to just 10 percent of the county’s water delivery needs, it will, crucially, be reliable and drought-proof—a hedge against potentially worse times ahead.

The county is betting on a combination of modern engineering and decades-old desalination technology. A pipe trench under construction leads to a nearby lagoon inlet; 18 house-size concrete tanks await loads of sand and charcoal to treat the salt water before it is ready for desalination; pressurizers lead to a stainless-steel pipe one meter in diameter. This final piece of gleaming hardware will convey water under high pressure into 2,000 fiberglass tubes, where it will be squeezed through semipermeable polymer membranes. What gets through will be fresh water, leaving brine behind.

The process is called reverse osmosis (RO), and it’s the mainstay of large-scale desalination facilities around the world. As water is forced through the membrane, the polymer allows the water molecules to pass while blocking the salts and other inorganic impurities. Global desalination output has tripled since 2000: 16,000 plants are up and running around the world, and the pace of construction is expected to increase while the technology continues to improve. Carlsbad, for example, has been outfitted with state-of-the art commercial membranes and advanced pressure-recovery systems. But the plants remain costly to build and operate.

Seawater desalination, in fact, is one of the most expensive sources of fresh water. The water sells—depending on site conditions—for between $1,000 and $2,500 per acre-foot (the amount used by two five-person U.S. households per year). Carlsbad’s product will sell for around $2,000, which is 80 percent more than the county pays for treated water from outside the area. One reason is the huge amount of energy required to push water through the membranes. And Carlsbad, like most desalination plants, is being built with extra pumps, treatment capacity, and membrane tubes, the better to guarantee uptime. “Because it is a critical asset for the region, there is a tremendous amount of redundancy to give high reliability,” says Jonathan Loveland, vice president at Poseidon Water, the owner of the plant. “If any piece fails, something else will pick up the slack.”
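For readers more used to metric or per-gallon pricing, the acre-foot figures above convert directly. A minimal sketch, using standard unit conversions and the article's quoted prices:

```python
# Unit-conversion sketch for the desalinated-water prices quoted above.
# The conversion factors are standard; the dollar figures are the article's.
ACRE_FOOT_M3 = 1233.48    # cubic meters per acre-foot
ACRE_FOOT_GAL = 325851    # US gallons per acre-foot

def price_per_m3(usd_per_acre_foot):
    return usd_per_acre_foot / ACRE_FOOT_M3

def price_per_kgal(usd_per_acre_foot):
    """Price per 1,000 US gallons."""
    return usd_per_acre_foot / ACRE_FOOT_GAL * 1000

for price in (1000, 2000, 2500):
    print(f"${price}/acre-ft = ${price_per_m3(price):.2f}/m3 = "
          f"${price_per_kgal(price):.2f}/kgal")
```

Carlsbad's roughly $2,000 per acre-foot thus works out to about $1.62 per cubic meter, or a little over $6 per thousand gallons.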

Already, some 700 million people worldwide suffer from water scarcity, but that number is expected to swell to 1.8 billion in just 10 years. Some countries, like Israel, already rely heavily on desalination; more will follow suit. In many places, “we are already at the limit of renewable water resources, and yet we continue to grow,” says John Lienhard, a mechanical engineer and director of the Center for Clean Water and Clean Energy at MIT. “On top of that we have global warming, with hotter and drier conditions in many areas, which will potentially further reduce the amount of renewable water available.” While conservation and recycling will help, you can’t recycle what you don’t have. “As coastal cities grow,” he says, “the value of seawater desalination is going to increase rapidly, and it’s likely we will see widespread adoption.”

Against this grim backdrop, there is some good news. In short, desalination is ripe for technological improvement. A combination of sensor-driven optimization and automation, plus new types of membranes, could eventually allow for desalination plants that are half the size and use commensurately less energy. Among other benefits, small, mobile desalination units could be used in agricultural regions hundreds of miles away from the ocean, where demand for water is great and growing.



Smart Water

Every two weeks, Yoram Cohen, a chemical engineer who heads the Water Technology Research Center at the University of California, Los Angeles, hits the road for the drought-blasted San Joaquin Valley. Part of the state's vast agricultural midsection that grows much of the country's produce, the region has suffered badly. Last year, 2014, was the third straight drought year—at a time when demand for water has reached an all-time high. I joined Cohen for a recent outing: a car ride from his labs at UCLA to the small valley town of Firebaugh, in one of the hardest-hit agricultural regions in the state. Along I-5, the highway that connects the cities of California's southern coast with its central valley, we saw huge water-engineering edifices built in the 1950s, including four giant pipes traversing the Tehachapi Mountains and the cement-lined California Aqueduct, which cuts a serpentine path through the valley floor. The state's water system—devoted roughly 80 percent to agriculture and 20 percent to cities—is still conveying water pumped all the way from the Sacramento River delta through the 444-mile California Aqueduct. The water infrastructure made Southern California what it is today.

But it’s a system under great stress. California’s persistent lack of precipitation means 80 percent of the state is now in “extreme” or “exceptional” drought, forcing water restrictions in urban areas and cutoffs to some farmers. The results are plain to see: tracts of parched farmland lie newly abandoned; road signs flash warnings of “extreme drought”; signs plead “Water = Jobs.” According to a recent study by the University of California, Davis, the drought inflicted $1.5 billion in agricultural losses in 2014 alone.

Still under construction, the desalination plant in Carlsbad, California, will be the largest such facility in the United States. Awaiting installation at the facility are stainless-steel turbine pumps, wrapped in protective Mylar, that will be used to pump the clean water.

The Israeli-born Cohen explains that despite these pressures, desalination hasn’t fundamentally changed since the 1980s. The time it takes to plan for big projects (Carlsbad took 14 years) makes it hard for investors to expect much payoff from new technologies, and U.S. federal research funding has gone to other priorities. Besides, it’s been possible to recycle or conserve water so that expensive desalination has been less necessary. The flip side of this, Cohen says, is that desalination is now in a position to be transformed by the same kinds of sensing, automation, and algorithm-controlled processes that are remaking other industries. I would soon see what he was talking about.

As the late-October sun set, long shadows threw the crusty ground into high relief. We exited I-5, drove nine miles, and turned right on a hard-packed dirt lane between pistachio trees. It was dusk, and the beams from headlights disappeared into the flat desert nothingness. Yet when I opened the window, I caught a whiff of something that smelled vaguely like the salty air at the coast. The headlights exposed the culprit: a pipe vomiting a brew of much-reused agricultural runoff. It had started in the Sacramento delta as fresh water. But it got progressively more concentrated by evaporation in the aqueduct system, and still more so as it was applied to crops, picked up minerals in the ground, and was applied to crops again. It was now almost as saline as seawater, and contaminated with a range of minerals and fertilizers as well.


Cohen led me to a nearby trailer inhabited by two graduate students and a vast collection of tanks, pipes, valves, tubes, and computers. It was a totally automated system, able to use any of the brackish or polluted stuff Firebaugh’s farmers produce and generate 30,000 drinkable gallons per day. A computer screen displayed a real-time black-and-white image that looked like a lunar landscape. It was a shot from a piece of the polyamide membrane at the center of the process. The image revealed a few white chunks: the beginning of mineral scaling, a bane of membranes. Image analysis software can detect this happening, and an algorithm can direct a valve to open and dispense an antiscaling solution into the system—keeping ahead of the problem. Other sensors and control systems can drive tweaks to avert other fouling problems, changing the pressure or the dosage of chemical additives used for pretreatment.
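The loop described above (image analysis flags the onset of mineral scaling, and a valve doses antiscalant before it spreads) can be sketched roughly. This is a hypothetical illustration, not the UCLA system's actual software; the function names, the brightness cutoff, and the 2% threshold are all assumptions:

```python
# Hypothetical sketch of the measure-decide-actuate loop described above.
# A grayscale frame of the membrane is reduced to a scaling estimate, and
# past a threshold the controller doses antiscalant. The names
# (scaling_fraction, dispense_antiscalant), the brightness cutoff (200),
# and the 2% threshold are illustrative assumptions.
def scaling_fraction(pixels):
    """Fraction of pixels bright enough to count as scale deposits."""
    bright = sum(1 for px in pixels if px > 200)
    return bright / len(pixels)

def control_step(pixels, dispense_antiscalant, threshold=0.02):
    """One pass of the loop: measure scaling, dose if it is spreading."""
    level = scaling_fraction(pixels)
    if level > threshold:
        dispense_antiscalant()   # e.g. open the dosing valve
    return level
```

The same pattern extends to the trailer's other sensors: swap the scaling estimate for a pressure or fouling measurement and the dosing valve for a pump setpoint.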

Cohen reached for a plastic tube and twisted a small tap. Clear water drooled out; he held his hand out to capture some, lifted it to his mouth, drank a bit, and rubbed the rest on his face. “If we can figure out a car that does not require a driver, why can’t we figure out how to run an RO plant without operators?” he said.

The savings could be significant: automated systems such as these could probably save between one-third and one-half the costs of conventional desalination plants, Cohen says. But more than that, a trailer-sized unit—able to adapt to different sites and conditions by the hour—could simply roll around and help farmers get fresh water no matter what they start with.

Magic Membranes

Even if systems get smarter, reverse osmosis is still an energy hog. Carlsbad will consume more than 35 megawatts of electricity (which could power around 30,000 homes), for an annual bill of $30 million. About two-thirds of that will go to the water pressure needed to make the technology work. (The other third will go mostly to pumping the water 10 miles uphill to a reservoir, as well as to pretreatment and intake pumping.) Carlsbad’s owners estimate that the plant will consume 2.8 kilowatt-hours per cubic meter for desalination alone. Some small reverse-osmosis systems, using differently configured processes (running water in batches rather than pumping continuously), are hitting 1.5 to 1.7 kilowatt-hours, says Lienhard. But the technology hasn’t been proved at larger scales.
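The Carlsbad figures above are internally consistent, as a quick unit-conversion check shows; nothing beyond the article's stated numbers is used:

```python
# Sanity check of the figures above: 54 million gallons of fresh water
# per day at 2.8 kWh per cubic meter for the RO step alone. Only unit
# conversions are added; all plant numbers come from the article.
GAL_TO_M3 = 0.00378541

output_m3_per_day = 54e6 * GAL_TO_M3            # ~204,000 m3/day
desal_kwh_per_day = output_m3_per_day * 2.8     # RO energy only
avg_desal_mw = desal_kwh_per_day / 24 / 1000    # average power draw

print(f"fresh water: {output_m3_per_day:,.0f} m3/day")
print(f"average RO power: {avg_desal_mw:.1f} MW")
```

That comes to roughly 24 MW of average draw for reverse osmosis alone, in line with "about two-thirds" of the plant's 35 MW.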

What’s the problem? It takes a lot of work to push water through the membranes—pressure that translates into high energy usage. Those relatively thick polyamide membranes, though far from ideal, are the best we’ve got right now. But a few groups are trying to come up with more efficient materials. At MIT, mechanical engineer Rohit Karnik’s team is building membranes a single atom thick, to help water molecules just pop through. The researchers blast graphene with ion beams and bathe it in chemicals to etch pores less than a nanometer across.

In theory, an essentially two-dimensional membrane like this one provides the least possible resistance. Computer models by Jeffrey Grossman’s materials science and engineering group at MIT showed that graphene membranes could cut the energy used in reverse osmosis by 15 to 46 percent. Even better, the high permeability could mean that far less surface area is needed to get the job done, so the entire plant could be half the size.

So far Karnik has fabricated a one-square-centimeter graphene membrane, punched holes in it, and shown that it can selectively hold back certain ions. But he’s not yet shown it can actually desalinate seawater, even on a lab bench. And once he or another group achieves that, the next challenge is to reliably make miles of membrane materials with consistent features. Karnik is optimistic that he’ll get there, but he says it will be years before graphene membranes are ready.

Existing membrane materials might get better thanks to other nanoengineering approaches. In a small section of the Firebaugh trailer, Cohen is running an experiment with a membrane of his group’s own devising. A base layer is made of polyamide. But then he adds a layer of tentacle-like brushes made of polymers that are hydrophilic, which means they attract water. Early research suggests these hybrid membranes may be far better at resisting fouling, because the brushes—which he likens to kelp swaying on an undersea rock—discourage things from sticking. This could mean less downtime, fewer replacements, and faster throughput. But Cohen, taking a swig of his ditch water, urges realism. “People have this fixation that somehow there will be a magic membrane that will reduce the cost of desalination to next to none, and I think that is a little bit misleading,” he says.

For now in California’s coastal municipalities, seawater is still the option of last resort, after conservation, recycling, and even treating and reusing sewage. While many are weighing desalination, the city most likely to follow in San Diego’s footsteps is Santa Barbara. That’s because it already built an RO plant in the early 1990s after a five-year drought, only to quickly shut it down when a couple of years of good winter rains refilled reservoirs. The city recently moved to start funding an expensive rehabilitation of the site so that it can be reactivated if needed. Other municipalities have decided it’s too expensive or environmentally problematic (the facilities inevitably kill fish eggs and other marine life, unless intake pipes are buried beneath sand at great cost).

But that assessment might get turned on its head. Water captured in reservoirs or pumped from faraway deltas is getting more expensive—and such alternatives come with their own environmental costs. As sources dry up and competition for water mounts from businesses, farmers, and cities, we will inevitably turn to seawater and other salty sources. It might not be a great solution, but the bottom line is that we are left with fewer and fewer choices in a water-starved world.

Can Japan Recapture Its Solar Power?

The way the Land of the Rising Sun built and lost its dominance in photovoltaics shows just how vulnerable renewables remain to changing politics and national policies.
  • By Peter Fairley on December 18, 2014

Why It Matters

The fate of solar power in Japan, which lost 30 percent of its electricity production after the Fukushima disaster, will be an important test of renewable technologies.
It’s 38 °C on the Atsumi Peninsula southwest of Tokyo: a deadly heat wave has been gripping much of Japan late this summer. Inside the offices of a newly built power plant operated by the plastics company Mitsui Chemicals, the AC is blasting. Outside, 215,000 solar panels are converting the blistering sunlight into 50 megawatts of electricity for the local grid. Three 118-meter-high wind turbines erected at the site add six megawatts of generation capacity to back up the solar panels during the winter.
Mitsui’s plant is just one of thousands of renewable-power installations under way as Japan confronts its third summer in a row without use of the nuclear reactors that had delivered almost 30 percent of its electricity. In Japan people refer to the earthquake and nuclear disaster at Tokyo Electric Power Company’s Fukushima Daiichi nuclear power plant on March 11, 2011, as “Three-Eleven.” Radioactive contamination forced more than 100,000 people to evacuate and terrified millions more. It also sent a shock wave through Japan’s already fragile manufacturing sector, which is the country’s second-largest employer and accounts for 18 percent of its economy.
Eleven of Japan’s 54 nuclear reactors shut down on the day of the earthquake. One year later every reactor in Japan was out of service; each had to be upgraded to meet heightened safety standards and then get in a queue for inspections. During my visit this summer, Japan was still without nuclear power, and only aggressive energy conservation kept the lights on. Meanwhile, the country was using so much more imported fossil fuel that electricity prices were up by about 20 percent for homes and 30 percent for businesses, according to Japan’s Ministry of Economy, Trade, and Industry (METI).
The post-Fukushima energy crisis, however, has fueled hopes for the country’s renewable-power industry, particularly its solar businesses. As one of his last moves before leaving office in the summer of 2011, Prime Minister Naoto Kan established potentially lucrative feed-in tariffs to stimulate the installation of solar, wind, and other forms of renewable energy. Feed-in tariffs set a premium rate at which utilities must purchase power generated from such sources.
The government incentive is what motivated Mitsui to finally make use of land originally purchased for an automotive plastics factory that was never built because carmakers moved manufacturing operations overseas. The site had sat idle for 21 years before Mitsui assembled a consortium to help finance a $180 million investment in solar panels and wind turbines. By moving fast, Mitsui and its six partners qualified for 2012 feed-in tariffs that promised industrial-scale solar facilities 40 yen (35 cents) per kilowatt-hour generated for 20 years. At that price, says Shin Fukuda, the former nuclear engineer who runs Mitsui’s energy and environment business, the consortium should earn back its investment in 10 years and collect substantial profits from the renewable facility for at least another decade.
Sanyo Electric’s so-called Solar Ark, built in 2001 during the heyday of the country’s initial solar boom, was designed to generate 630 kilowatts of power, making it one of the world’s largest solar facilities. It boasts 5,046 solar panels.
Overnight, Japan has become the world’s hottest solar market: in less than two years after Fukushima melted down, the country more than doubled its solar generating capacity. According to METI, developers installed nearly 10 gigawatts of renewable generating capacity through the end of April 2014, including 9.6 gigawatts of photovoltaics. (The nuclear reactors at Fukushima Daiichi had 4.7 gigawatts of capacity; overall, the country has around 290 gigawatts of installed electricity-generating capacity.) Three-quarters of the new solar capacity was in large-scale installations such as Mitsui’s.
Yet this explosion of solar capacity marks a bittersweet triumph for Japan’s solar-panel manufacturers, which had led the design of photovoltaics in the 1980s and launched the global solar industry in the 1990s. Bitter because most of the millions of panels being installed are imports made outside the country. Even some Japanese manufacturers, including early market leader Sharp, have taken to buying panels produced abroad and selling them in Japan.
How Japan­­—once the world’s most advanced semiconductor producer and a pioneer in using that technology to manufacture photovoltaic cells—gave away its solar industry is a story of national insecurity, monopoly power, and money-driven politics. It is also a tale with important lessons for those who believe that the strength of renewable technologies will provide sufficient incentives for countries to transform their energy habits.
In Japan, for most of the 2000s, impressive advances in photovoltaics were ignored because the country’s powerful utilities exerted their political muscle to favor nuclear power. And despite resurging consumer demand for solar power and strong public disdain for nuclear, the same thing could happen again. Will a country with few fossil-fuel resources and bleak memories of the Fukushima disaster take advantage of its technical expertise to recapture its position as a leading producer of photovoltaics, or will it turn away from renewable energy once more?
Riches
Longer than three football fields and over 37 meters tall, the Solar Ark is clearly visible from the Tokaido Shinkansen as the bullet train crosses central Japan. The structure, covered with photovoltaic panels, looks like a temple of energy from another era—a time when Japan owned the solar-power industry. Sanyo erected the Ark in 2001, arraying on it 5,046 solar panels capable of generating 630 kilowatts of pollution-free electricity.
An image from Japanese television captures smoke rising after a hydrogen explosion at Fukushima Daiichi’s unit 3 on March 14, 2011, days after the initial earthquake. Following the Fukushima disaster, all the country’s nuclear reactors were shut down.
The era that gave rise to this feat began with the energy crises of the 1970s, when spiking global petroleum prices pummeled Japan’s export-driven manufacturing economy. The country harnessed its dominance in the production of electronic semiconductor chips to pursue alternatives for cleaner, safer power in photovoltaics. And unlike other countries, such as the United States, it stuck with the resulting solar development programs even when oil prices dropped in the 1980s. Between 1985 and 2007, Japanese researchers filed for more than twice as many patents in solar technologies as rival U.S. and European inventors combined. Companies like Sharp, Sanyo Electric, Panasonic, and Kyocera became the clear leaders in solar technology. Japanese producers began ramping up sales and solar installations in the 1990s. By 2001 total solar-power output in Japan was 500 times higher than it had been a decade earlier—a decade in which U.S. solar generation edged up by a meager 15 percent.
Then it all came crashing to a halt a decade ago as the country staked its future on nuclear power.
The government’s nuclear plans were ambitious: by the time Fukushima Daiichi melted down, they would call for 14 additional reactors by 2030, which would have nearly doubled nuclear generation to account for 50 percent of Japan’s power supply. Meanwhile, photovoltaic sales in Japan declined during the mid-2000s, and by 2007 Japanese producers had ceded global market leadership to U.S., Chinese, and European manufacturers. In just a few years, the country had gone from industry leader to has-been.
What turned Japan away from the sun was a pernicious blend of perception, culture, and politics. Nuclear power had an aura of strength, while energy based on intermittent renewable power sources looked weak and unreliable—an impression encouraged by the country’s politically powerful utilities. Though Japan has numerous locations that are ideal for wind and solar power, power companies convinced the public that energy choices were limited. “We are really severely of the mind-set that we lack resources and that Japan has to depend on imported fuel,” says Mika Ohbayashi, director of the Tokyo-based Japan Renewable Energy Foundation.
The utilities’ view was colored by self-interest. Japan’s 10 utilities were (and remain) vertical monopolies. Each controls power generation, transmission, and distribution in its respective region, and its grids are designed to deliver electricity from centralized power plants—including large nuclear reactors. They lack, by design, the interconnections that facilitate the safe use of variable power generation. In most industrialized countries, governments have broken up the monopolies in power markets, freeing operators of transmission grids to build those interconnections, but Japan’s utilities have bucked the deregulation trend. The interconnection problem is further compounded by an artifact: two AC frequencies that split the country’s electrical system in half. Eastern Japan operates at 50 hertz, while western Japan uses 60-hertz power—a barrier that proved crippling in 2011, in the immediate aftermath of the Fukushima disaster, when a suddenly underpowered Tokyo could access little of Osaka’s surplus power.
Asked why Japan chose not to push solar power aggressively when it dominated the global industry, former prime minister Kan told me he puts the blame squarely on the country’s utilities: “The reason is very clear. The electric power companies, the people who wanted to promote nuclear power, were opposed.”
Revival
In a subdivision spreading over reclaimed land in the bay in Ashiya, a city between Osaka and Kobe, a 400-unit residential development called Smart City Shio-Ashiya (“Salty-Ashiya”) is taking shape, the brainchild of the Panasonic subsidiary PanaHome. On a Sunday in July, solar panels atop each of the 50 houses built to date are pumping surplus power into the local grid, and PanaHome salespeople are selling a couple with toddlers on the homes’ energy benefits and earthquake resistance.
Shio-Ashiya’s two-story homes include geothermal heating and cooling and other green design features to minimize power consumption, while the high-efficiency rooftop solar panels maximize power generation. The surplus power should, according to PanaHome saleswoman Saho Watanabe, earn residents roughly 100,000 yen ($825) each year. Watanabe touts another feature, which should be invaluable when the grid goes down—say, in an earthquake or typhoon. She opens a cupboard in the dining room of a model home to reveal a lithium battery that, working with an energy management system near the kitchen, can run the family’s AC/heat pumps, first-floor lighting, and refrigerator for about two days.

Panasonic’s solar hopes rest on a technology invented by researchers at Sanyo in the 1990s and acquired by Panasonic four years ago when the corporations merged. The solar cells combine conventional crystalline-silicon and thin-film amorphous-silicon technologies to achieve relatively high efficiency in converting sunlight to electricity. Called HIT, for heterojunction with intrinsic thin layer, the hybrid technology has become a mainstay of the company’s solar strategy.
Shingo Okamoto, a materials scientist who spent his career at Sanyo Electric before becoming director of solar R&D for Panasonic’s EcoSolutions business group, says the panels are earning premium pricing in domestic sales because they produce far more electricity from a given rooftop than the cheaper polycrystalline panels that dominate the market. Assuming that each household consumes electricity at the Japanese average of 1,400 kilowatt-hours per year during daylight hours, he says, a household with the Panasonic system will have 52 percent more surplus power to return to the grid than a home with an ordinary solar system.
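Okamoto’s 52 percent figure depends on assumptions the article doesn’t spell out, but the amplification mechanism behind it is simple: because daytime household consumption is fixed, a modest gain in generation translates into a much larger gain in surplus. A sketch with illustrative numbers (not Panasonic’s):

```python
def surplus_gain(consumption, gen_ordinary, efficiency_ratio):
    """Percentage increase in surplus power returned to the grid when a
    higher-efficiency array replaces an ordinary one on the same roof.
    Subtracting a fixed daytime consumption amplifies the generation gain."""
    gen_hit = gen_ordinary * efficiency_ratio
    surplus_ordinary = gen_ordinary - consumption
    surplus_hit = gen_hit - consumption
    return (surplus_hit / surplus_ordinary - 1) * 100

# Illustrative numbers: 1,400 kWh/yr consumed during daylight (the article's
# figure), a hypothetical 4,000 kWh/yr from an ordinary array, and 20 percent
# more generation from a higher-efficiency array on the same roof:
print(round(surplus_gain(1400, 4000, 1.2), 1))  # 30.8 -> percent more surplus
```

A 20 percent generation gain becomes a 30-plus percent surplus gain in this sketch; with different roof and consumption assumptions, the gap can widen to figures like Okamoto’s.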
Residential power in Japan is pricey—at 24.33 yen (20 cents) per kilowatt-hour in 2013, it was nearly double the U.S. average. And given that electricity prices are “sure to keep going up,” says Okamoto, the most efficient rooftop photovoltaic systems will have a strong advantage. When we met in July at Panasonic’s Shiga plant, east of Kyoto, the plant had just started shipping its newest and most powerful panel design. The advances behind the panel, which uses cells with an efficiency of 22.5 percent, include a light-scattering film on the backside to enhance light absorption. Assembly lines were running 24 hours a day to keep up with domestic demand.
Further advances are in the pipeline. In April, Okamoto’s group produced a silicon solar cell that reached 25.6 percent efficiency, breaking a 15-year-old world record of 25.0 percent. Though the record was set in the lab using a prototype device, Okamoto predicts that the group will ultimately be able to produce commercial cells whose efficiency is within a few percentage points of crystalline silicon’s theoretical limit, 29 percent.
Repowering
Across the coastal mountains from the smashed reactors at Fukushima Daiichi and the contaminated landscape they created, one of the world’s most advanced facilities dedicated to renewable-energy R&D is gearing up. The $100 million complex opened in April in Koriyama, Fukushima Prefecture’s commercial center, and pulls together previously disparate research by Japan’s science and technology agencies. The institute is not here by accident; its location is an explicit commitment to the emotionally and economically devastated region.
The verdant prefecture north of Tokyo remains depopulated after the earthquake, tsunami, and meltdowns of March 2011. Many of the more than 100,000 residents rendered homeless by the disasters will never return. Replacing lost residents and businesses in an area known for radioactive contamination is not easy. Solar-powered radioactivity monitors in Koriyama show that the air is safe, but 100 kilometers to the east, Tokyo Electric Power Company (TEPCO) still struggles to keep contamination from polluting both groundwater and the sea.
The Koriyama R&D facility boasts state-of-the-art labs for crystallizing, slicing, and patterning silicon wafers, and its production line can churn out up to 360 wafers an hour. Outside, a variety of photovoltaics are being tested, along with a modest-sized wind turbine and a large grid-connected battery. Its most ambitious program is directed by Makoto Konagai, one of Japan’s most celebrated solar scientists, who has moved to Koriyama from the Tokyo Institute of Technology. His goal is to smash through the theoretical efficiency limit of silicon cells, demonstrating rates of 30 percent by 2016 and up to 40 percent by 2021. It is an ambitious plan, but three large manufacturers, including Panasonic, have signed on.
Workers watched in October as a crane lifted a section of a radiation shroud that had been placed over a reactor at Fukushima after the earthquake. Lifting the cover exposed the debris inside the destroyed building for the first time since 2011.
While some other researchers seek more efficient alternatives to silicon, which accounts for 90 percent of current solar production, Konagai seeks to redesign the silicon cell from top to bottom. One of his teams, for example, is developing a casting method to produce higher-quality silicon ingots. Another team is rethinking the way semiconductor structures are patterned to turn silicon wafers into cells: Konagai’s plan is to etch or build vertical structures just a few nanometers across, almost 100,000 times narrower than the silicon wafer itself. If his simulations hold up, the resulting nanowires or nanowalls will alter the electrical behavior of the silicon within, boosting its potential to absorb light and gather electrical charge.
In June 2011, Fukushima’s previously pro-nuclear governor, Yuhei Sato, declared that Fukushima should pin its future on renewable energy. Community activists initiated dozens of projects across the prefecture, and in 2012 the prefecture set a goal of increasing renewable energy from 22 percent to 100 percent of its power supply by 2040.
The cold reality of Japan’s energy predicament, however, is that such bold ambitions are likely to fall short. The type of solar expansion that can be expected from feed-in tariffs alone isn’t likely to meet the prefecture’s goals—or even to replace the power that Japan’s nuclear fleet once delivered. And political and economic forces don’t seem to favor policies that would expand renewables more dramatically.
Projections by the Japan Photovoltaic Energy Association, a Tokyo-based trade group, suggest that annual solar installations will peak this year just shy of seven gigawatts. The group predicts that total installed solar capacity in Japan will reach 102 gigawatts by 2030, which would be enough to meet only a small fraction of the country’s electricity needs. Moderate deployment of wind power would provide some additional electricity. But Japan needs far more. While Japanese consumers and industry have cut power demand since 2011, utilities covered most of the nuclear shortfall by ramping up combustion of imported natural gas, petroleum, and coal. Fossil fuels accounted for some 89 percent of Japan’s electricity generation in 2012. As a result, its total greenhouse-gas emissions were 7 percent higher that year than in 2010.
The prospects for renewable power could get worse. To hedge against the possibility that they may be unable to restart nuclear reactors, utilities are building a new generation of coal-fired power stations. By Ohbayashi’s count, some 13 gigawatts of new coal-fired power generation are now in development.
Meanwhile, the relatively high cost of Japan’s solar power threatens to incite a backlash against renewable energy, encouraged by the pro-nuclear utilities. “There is no doubt that with the current photovoltaics, power generation is expensive,” says Okamoto, expressing his personal viewpoint rather than Panasonic’s. He fears negative reactions from ratepayers, whose rising power bills pay the tariffs that fund photovoltaic systems on rooftops and at power plants like Mitsui Chemicals’: “If we continue to expand our business with the current level of costs, we may have objections.”
What’s more, the old politics that favor nuclear power seem to be returning. Though opinion polls consistently show that a majority of Japanese oppose restarting the utilities’ idled reactors, Prime Minister Shinzo Abe vows to restart those deemed safe by Japan’s Nuclear Regulation Authority. In July the agency issued the first such certification, to a pair of reactors on the southern island of Kyushu—even though offsite emergency control centers mandated after Fukushima have yet to be completed and the reactors are dangerously close to an active volcano. Iodine pills were quickly distributed to the reactors’ neighbors, and the precedent-setting restart is expected soon, after getting the green light from the local governor and the plant’s host city, Satsumasendai, whose economy is crippled without the jobs, tax dollars, and business that the plant provides.
At the same time, utilities are delaying grid connections to renewable developments or imposing grid-upgrade fees that render renewable projects infeasible. The pushback is hitting wind power hardest. Japan’s meager market for wind turbines has actually slowed since Fukushima.
This summer METI launched a committee to manage the implementation of new energy policies. One topic: recent efforts by utilities and the government to restrain further solar installations. Ohbayashi says METI is backpedaling because it misjudged the commercial potential of renewables and their potential impact on the utilities. Says Ohbayashi, “They didn’t foresee the explosive growth of photovoltaics.”
The Japanese government has plans to radically overhaul the country’s balkanized wholesale market and power grid, preparing for a future in which producers compete for the right to deliver power. In that scenario, renewable energy could thrive.
The most critical step, however, is still years away: forcing the vertically integrated utilities to “unbundle” their power generation and transmission businesses. Unbundling is essential to create a level playing field for producers and a system optimized to deliver the cheapest and cleanest power available in real time.
Reëngineering the grid to accommodate massive flows of renewables such as wind and solar is a potentially expensive route for Japan. However, it’s not necessarily more costly than the path back to nuclear that the current government and the utilities are charting. Factoring in the cost of insurance against accidents and upgrades to prevent them could double the cost of nuclear energy.
As former prime minister Naoto Kan told me, the disaster at Fukushima Daiichi has forever altered the economics of nuclear power. “In the past, nuclear power was said to be able to supply power at a very cheap cost, but we know now that is not correct,” he said. “That calculation assumed that no accidents could occur. Now we know they can.”
Peter Fairley is a contributing editor for MIT Technology Review.

Drones of 2014: Quadcopters

Drones of 2014: Quadcopters that give you a view from above

Drones, drones, drones!

You've probably heard the word "drones" one too many times this year, but the hobby is definitely taking off (no pun intended).
You might think these are just radio-controlled quadcopters with cameras and, well, some of them are. But many of them pack an array of sensors that help them fly autonomously and be piloted from far out of sight.
All of the ones here are quadcopters we're currently reviewing (or will be when they're available), and they're targeted at those getting started in the hobby or just looking for a ready-to-fly experience instead of DIY.
DJI Phantom 2 Vision+

This update to the Phantom 2 Vision arrived in April and has the longest flight time of the current consumer quadcopter crop at 25 minutes. The Vision+ improvements include a three-axis gimbal to stabilize the camera, increased Wi-Fi range, and the ability to fly autonomously through up to 16 waypoints.
 

Blade 350 QX2 AP Combo RTF


The 350 QX2 might not look as polished as the Vision+, but it's a little less expensive and includes a two-axis brushless gimbal with optional pitch control and a 1080p camera with a 720p/30fps video downlink to mobile devices.
It has three different piloting modes depending on your experience level or to improve video quality, and it has the all-important return-to-home feature to help bail you out if you get turned around.
The QX2 is joined by the recently announced 350 QX3 AP Combo RTF, which offers improved GPS performance; a new 16-megapixel camera that can capture 1080p video at 60fps, paired with a three-axis brushless gimbal; an updated transmitter with a tilt control for the gimbal as well as spring-loaded sticks that bounce back to neutral when you release them; and user-definable flight boundaries.
 

Parrot Rolling Spider

The Rolling Spider showed up at CES 2014 and is the simplest quadcopter here. Controlled with your smartphone or tablet, it packs some pretty high-end electronics into its tiny body. However, it can't fly completely autonomously like the rest of the quads here. It is a lot of fun, though, despite a short flight time.
Parrot Bebop Drone

A follow-up of sorts to Parrot's AR.Drone 2.0, the Bebop has an f/2.2 fisheye lens with a 180-degree angle of view and a 14-megapixel camera sensor. It can capture video at 1080p full-HD resolution, and photos can be captured in JPEG or Adobe DNG raw format.
Though it can be controlled completely with a smartphone or tablet, those who want physical controls can get the Bebop with Parrot's new Skycontroller. This gives you two sticks for piloting; discrete controls for the camera; a button for taking off and landing and one for emergency motor cutoff; status lights for the batteries of the Bebop and the controller; and a return-to-home button. You can also wirelessly pair a tablet or phone with it for first-person-view (FPV) flying.
The Bebop will be available in December for $499 at Best Buy and Apple, in stores and online. The Bebop with the Skycontroller will sell for $899. Those in Australia will be able to buy from Apple and Harvey Norman in December, too, with presales starting November 20. Pricing there will be AU$699 (including GST) for the Bebop alone or AU$1,299 (including GST) with the Skycontroller. Pricing and availability for the UK are still being determined.
 

DJI Inspire 1

Billed by DJI as the world's first 4K flying camera, the Inspire 1 is a step-up model from the Phantom 2 Vision+. And it's a pretty big step up, too, with a matching price tag: $2,900. Pricing for the UK is £2,380, and in Australia you'll be paying AU$4,130.
You do get a lot for your money, though, and it's about as ready-to-fly as they come: just spin on the props, charge it up and start flying (assuming you already know how to fly one).
3D Robotics IRIS+

This is one we can't wait to get our hands on. Among its many other attributes, the IRIS+ has a Follow Me feature that lets you set it up to follow any GPS-enabled Android device.
Beyond that, 3DR's DroidPlanner app can be used to plan flights just by drawing a flight path on any Android tablet or phone, with as many waypoints as you want.
Honorable mention: Parrot Jumping Sumo

More robot than drone, perhaps, but the Jumping Sumo is nonetheless a blast to drive around. Its front camera gives you a view from the ground.
Sunday, December 21, 2014

Forget Hydrogen Cars, and Buy a Hybrid

Hybrids are a much more cost-effective way to reduce carbon emissions than newly released hydrogen fuel cell cars.


If you want to help cut greenhouse gas emissions, you should probably skip the hydrogen fuel cell cars now coming to market and buy a (much cheaper) hybrid instead.
After decades of research and small-scale demonstrations, hydrogen cars are finally rolling into view. These vehicles use electric motors, but their electricity comes not from a battery but from hydrogen, processed in a chemical reaction that takes place inside a fuel cell.
Researchers and engineers have greatly lowered the costs of fuel cells—by as much as 95 percent—in recent years. That, along with pressure to meet emissions regulations in California, means the technology is finally coming to market. Earlier this year, Hyundai started leasing hydrogen-powered Tucson Fuel Cell SUVs in California. Toyota plans to launch a newly designed compact hydrogen car called the Mirai in Japan this month and in the U.S. next year. Meanwhile, GM, Honda, and others are developing their own hydrogen vehicles.
Carmakers are keen to extol the environmental credentials of these new models. Hyundai advertises that its cars emit no carbon dioxide, while Toyota boasts that its hydrogen cars “leave nothing behind but water.”
But these ads are a little misleading.
The only thing that comes out of the cars’ tailpipes is, indeed, water vapor, but the hydrogen they run on is mostly made from natural gas via a process that releases significant amounts of greenhouse gases into the atmosphere.
Fuel-cell cars are still greener than some conventional cars. Based on an analysis by the Union of Concerned Scientists, producing hydrogen from natural gas for the Hyundai Tucson Fuel Cell vehicle emits about as much carbon dioxide as a car that gets 38 miles per gallon. That’s far better than the gasoline-powered version of the Tucson, which gets 25 miles per gallon. But you can buy a number of cars that get better than 38 miles per gallon. That’s relatively easy to do with small cars. But even the hybrid Toyota Prius v, which is slightly roomier than the Tucson, gets 42 miles per gallon. And it’s far cheaper than the Tucson Fuel Cell vehicle. The Tucson Fuel Cell leases for about $499 per month, which includes the cost of hydrogen fuel. In some areas, you can lease a Prius v for $159 per month.
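The mileage comparison above can be turned into emissions per mile with the EPA's standard conversion of roughly 8,887 grams of CO2 per gallon of gasoline (my assumption here; the Union of Concerned Scientists analysis may use its own factors):

```python
# EPA's standard figure: burning one gallon of gasoline emits
# roughly 8,887 grams of CO2.
GRAMS_CO2_PER_GALLON = 8887

def grams_co2_per_mile(mpg):
    """CO2 per mile for a given mpg rating. For the fuel-cell car this is
    the gasoline-equivalent emissions from making its hydrogen from
    natural gas, per the UCS-style comparison in the article."""
    return GRAMS_CO2_PER_GALLON / mpg

for name, mpg in [("Tucson Fuel Cell (38-mpg equivalent)", 38),
                  ("Gasoline Tucson (25 mpg)", 25),
                  ("Prius v hybrid (42 mpg)", 42)]:
    print(f"{name}: {grams_co2_per_mile(mpg):.0f} g CO2/mile")
```

By this rough accounting, the hybrid beats the fuel-cell SUV on emissions (about 212 versus 234 grams per mile) at a fraction of the monthly cost.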
Emerging technologies for producing hydrogen could eventually make fuel-cell cars cleaner and cheaper. For example, hydrogen can be made using renewable sources of electricity to power an electrolyzer, which splits water into its constituent hydrogen and oxygen atoms. The problem is that this is still far more costly than making hydrogen from natural gas.
Longer term, instead of using solar power to generate electricity and then using that electricity to split water, it may be possible to engineer catalysts that absorb sunlight and use its energy to split water directly. That would make hydrogen generation simpler and cheaper. But for now, the main advantage of hydrogen cars over electric cars is that they can be refueled more quickly. Even the fastest chargers available—for the Tesla Model S—take about 20 minutes to add 130 miles of charge. You can fill a Hyundai’s hydrogen tank, which holds enough for 265 miles, in 10 minutes.
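The refueling comparison comes down to range added per minute; using the article's figures:

```python
def miles_per_minute_of_refueling(miles_added, minutes):
    """Driving range gained per minute spent at the charger or pump."""
    return miles_added / minutes

# Figures from the article:
print(miles_per_minute_of_refueling(130, 20))  # Tesla fast charger: 6.5
print(miles_per_minute_of_refueling(265, 10))  # Hyundai hydrogen fill: 26.5
```

By this measure the hydrogen fill is about four times faster, which is the convenience case the article makes for road trips.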
Quick refills would be more convenient for long road trips. On the other hand, while there are plans to install 40 public hydrogen-fueling stations next year—mainly as a result of investment by the California government and some automakers—right now there are still only three public hydrogen stations in the entire United States.

Saturday, December 20, 2014

Router Vulnerability Puts 12 Million Home and Business Routers at Risk

More than 12 million routers in homes and businesses around the world, from a variety of manufacturers, are vulnerable to a critical software bug that hackers can exploit to remotely monitor users’ traffic and take administrative control of the devices.
The critical vulnerability actually resides in the “RomPager” web server made by a company called AllegroSoft, which is embedded in the firmware of routers, modems, and other “gateway devices” from nearly every leading manufacturer. The HTTP server provides the user-friendly web interface for configuring the products.
Researchers at the security software company Check Point have discovered that RomPager versions prior to 4.34 — software more than 10 years old — are vulnerable to a critical bug, dubbed Misfortune Cookie. The flaw is so named because it allows attackers to control the “fortune” of an HTTP request by manipulating cookies.
HOW MISFORTUNE COOKIE FLAW WORKS
The vulnerability, tracked as CVE-2014-9222 in the Common Vulnerabilities and Exposures database, can be exploited by sending a single specially crafted request to the affected RomPager server, corrupting the gateway device’s memory and giving the hacker administrative control over it. From there, the attacker can target any other device on the network.
“Attackers can send specially crafted HTTP cookies [to the gateway] that exploit the vulnerability to corrupt memory and alter the application and system state,” said Shahar Tal, malware and vulnerability research manager with Check Point. “This, in effect, can trick the attacked device to treat the current session with administrative privileges – to the misfortune of the device owner.”
Once attackers gain the control of the device, they could monitor victims’ web browsing, read plaintext traffic traveling over the device, change sensitive DNS settings, steal account passwords and sensitive data, and monitor or control Webcams, computers, or other network connected devices.
MAJOR ROUTERS & GATEWAY BRANDS VULNERABLE
At least 200 different models of gateway devices, or small office/home office (SOHO) routers from various manufacturers and brands are vulnerable to Misfortune Cookie, including kit from D-Link, Edimax, Huawei, TP-Link, ZTE, and ZyXEL.
The bug not only affects routers, modems, and other gateway devices, but anything connected to them, from PCs, smartphones, tablets, and printers to “smart home” devices such as toasters, refrigerators, security cameras, and more. This simply means that if a vulnerable router is compromised, every networked device within that LAN is at risk.
WORSE ATTACK SCENARIO
The Misfortune Cookie flaw can be exploited by an attacker anywhere in the world, even when gateway devices are not configured to expose their built-in web-based administration interface to the wider Internet, making the vulnerability more dangerous.
That's because many routers and gateway devices listen for connection requests publicly on port 7547 as part of a remote management protocol called TR-069 or CWMP (CPE WAN Management Protocol), which allows attackers to send a malicious cookie to that port from afar and hit the vulnerable server software.
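As a rough illustration (this is not Check Point's tool, and the banner strings below are hypothetical), a scanner that probes port 7547 might flag at-risk devices by parsing the RomPager version advertised in the HTTP Server header and comparing it to the fixed 4.34 release:

```python
import re

def rompager_vulnerable(server_header, fixed=(4, 34)):
    """Return True if the Server header reports a RomPager build older
    than the fixed release (4.34). As Check Point cautions, some vendor
    patches don't bump the version string, so a match is only a
    heuristic indicator, not proof of vulnerability."""
    m = re.search(r"RomPager/(\d+)\.(\d+)", server_header)
    if not m:
        return False  # not RomPager, or no version advertised
    version = (int(m.group(1)), int(m.group(2)))
    return version < fixed

# Example banners as a device on TCP port 7547 might present them:
print(rompager_vulnerable("Server: RomPager/4.07 UPnP/1.0"))  # True
print(rompager_vulnerable("Server: RomPager/4.51"))           # False
```

Version 4.07 is the build Check Point specifically calls out, which is why it appears in the example.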
12 MILLION DEVICES OPEN TO HIJACK
The critical vulnerability was introduced in 2002, and AllegroSoft apparently fixed the bug in its RomPager software back in 2005, but major companies such as Huawei, D-Link, and ZTE still sell products containing the vulnerable versions of RomPager, as demonstrated by Check Point’s finding that 12 million vulnerable gateway devices remain in homes, offices, and other locations.
“We believe that devices exposing RomPager services with versions before 4.34 (and specifically 4.07) are vulnerable. Note that some vendor firmware updates may patch RomPager to fix Misfortune Cookie without changing the displayed version number, invalidating this as an indicator of vulnerability.”
“Misfortune Cookie is a serious vulnerability present in millions of homes and small businesses around the world, and if left undetected and unguarded, could allow hackers to not only steal personal data, but control peoples’ homes,” Tal said.
So far, Check Point has not observed an attack involving Misfortune Cookie in the wild, but the company is taking a close look at older unresolved incidents in which routers and gateway devices were compromised in unexplained ways.

Friday, December 19, 2014

Chemical-Sensing Displays and Other Surprising Uses of Glass

An inside look at Corning’s labs suggests what’s next for the inventor of Gorilla Glass.


Someday your smartphone might be able to help you in a new way when you’re traveling: by telling you whether the water is safe to drink.
Although a water-testing app isn’t close yet, researchers at Corning and elsewhere recently discovered that they could use Gorilla Glass, the toughened glass made by Corning that’s commonly used on smartphone screens, to make extremely sensitive chemical and biological sensors. Such a sensor could detect, say, traces of sarin gas in the air or specific pathogens in water.
The sensors are just one project I learned about during a visit to Corning’s R&D labs in upstate New York. In the last few decades, Corning’s advances in glass-making have led to technologies such as fiber optics and flat-panel displays. Now, thanks to Gorilla Glass, it’s associated with the latest smartphones. But despite the remarkable success of that product, it is keen to catch the next high-tech boom.
Corning spends about 8 percent of its sales on R&D—which will amount to about $800 million this year. It’s a hedge against the very real possibility that one of its businesses could go dark—as has happened in the past. Between 2000 and 2002, Corning lost more than half of its revenue when its fiber-optics business collapsed with much of the rest of the telecom market. Its stock plummeted from $113 to just over $1. This year, it got another scare when one of its largest customers, Apple, came close to replacing Gorilla Glass in iPhones with sapphire (see “Why Apple Failed to Make Sapphire Phones”).
Displays, in one way or another, account for about half of Corning’s revenue, with roughly a third of that coming from Gorilla Glass. To expand this market and withstand challenges from other materials, Corning is trying to add capabilities to Gorilla Glass, such as the sensor application. And it’s looking for new markets for Gorilla Glass beyond displays.
The ability to turn your phone into a biological and chemical sensor is one of the earliest-stage projects in the lab. Researchers at Corning and Polytechnique Montreal discovered that they could make very high quality waveguides, which confine and direct light, in Gorilla Glass. The researchers were able to make these waveguides very near to the surface, which is essential for sensors. Doing so in ordinary glass would break it. Making the waveguide involves focusing a beam of intense laser light near the surface of the glass, then tracing it along the glass, which locally changes its optical properties.
To make a sensor, the researchers make a waveguide that splits into two identical pathways for light. Then the paths converge and recombine. One path serves as the sensing path, and the other as a reference. Even a tiny change to the light in the sensing path—such as its intensity—can be detected by observing how the light from the two paths interacts when it recombines, producing distinct interference patterns.
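The two-path layout described above is a standard interferometer. The textbook relationship between the phase difference the sensing path accumulates and the recombined output intensity (a general idealization, not Corning's specific device) can be sketched as:

```python
import math

def interference_intensity(delta_phi, i0=1.0):
    """Relative output intensity of an idealized two-path interferometer
    with equal-amplitude arms: I = I0 * cos^2(delta_phi / 2).
    A tiny phase shift in the sensing arm moves the output along this
    curve, which is what makes the readout so sensitive."""
    return i0 * math.cos(delta_phi / 2) ** 2

# No phase difference: the paths add constructively.
print(interference_intensity(0.0))      # 1.0
# Half-wave phase shift: the paths cancel.
print(interference_intensity(math.pi))  # ~0 (full cancellation)
```

Anything binding to the glass surface that perturbs the evanescent part of the light shifts delta_phi, and the detector sees the intensity slide along this cosine-squared curve.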
The researchers demonstrated a simple sensor that detects changes in temperature. Heating up the sensing path changes its shape, which changes the properties of the light passing through it. Because the waveguide is so close to the surface, part of the light actually extends out of the glass, and anything placed on the surface of the glass will interact with part of the light. This means that to make a chemical or biological sensor, you could prepare the surface of the glass so that a specific target will bind to it. For example, you might treat it with antibodies that latch onto E. coli or other contaminants; detecting their presence would be as simple as putting a drop of water on the phone.
The waveguides are microscopically thin, and therefore invisible, so they wouldn’t obscure a display. And because they’re quite small, sensors for several different biological or chemical targets could be incorporated into a smartphone.
Corning researchers have also discovered that Gorilla Glass has useful acoustic properties. The way it vibrates is different from that of conventional glass—it damps sound waves. The simplest application is noise insulation—it blocks sound better than ordinary glass.
But the same acoustic properties could also turn displays into speakers. I saw such a prototype in one of Corning’s labs. A wire in the display attaches to a small actuator that vibrates the glass to produce sound waves. Because of the way the waves propagate through the glass, they can be more precisely controlled than with ordinary glass, allowing for higher quality sound reproduction.
In another lab, researchers showed off a seemingly ordinary window. Then, with a flip of a switch on a circuit board, it turned into a display—one showing an old Coke commercial—and I could only barely make out what was behind the image. When the ad was over, I could see through the display again. Corning was particularly secretive about how it managed to make this technology work.
The most uncanny thing I saw was a Slinky-like glass toy. It’s made of thin Gorilla Glass cut in a spiral shape with a new laser manufacturing tool. As with a Slinky, if you hold one part and let go of the rest, it extends toward the floor. Ordinary glass would just shatter, but because it’s tougher, this glass springs back like plastic. The key to having glass this flexible is making it thin.
Corning recently developed Willow Glass, which is about 100 micrometers thick, one-fourth the thickness of the Gorilla Glass normally used for displays. It can be shipped to customers in rolls, making it easier and cheaper to use in manufacturing. Potential customers are still evaluating how to use it; one likely application is as a component inside displays. But already, an even more flexible kind of glass is in development, says Corning’s chief technology officer, David Morse. It can fold around the edge of something as thin as a reporter’s notebook, and do so millions of times without breaking. It could be important in future foldable electronic devices.
Founded in 1851, Corning survived in the past because of its ability to keep reinventing the possibilities of glass. At about the same time that the market for fiber optics collapsed, its business selling glass for cathode-ray-tube TVs also took a steep dive. It was saved by a process it had invented for making the high quality glass needed for the transistors that control pixels in LCD displays—the very display technology that was destroying its cathode-ray business. A few years later, the company got a call from Steve Jobs, who needed tough glass for the first iPhone. Corning just happened to have a technology sitting on the shelf—the toughened glass that came to be called Gorilla Glass. Corning hopes to be ready for the next call.

“Nanobuds” Could Turn Almost Any Surface into a Touch Sensor

Stretchy, conductive films made of novel nanobuds could bring touch sensors to more surfaces.

 

Transparent films containing carbon nanobuds—molecular tubes of carbon with ball-like appendages—could turn just about any surface, regardless of its shape, into a touch sensor.

The films were developed by a Finnish startup, Canatu, and could be used to add touch controls to curved automobile consoles and dashboards, for example. The films are rugged and can be repeatedly bent around something as thin as the cord for your earbuds, so they could be handy for adding buttons to flexible devices.
Touch screens are usually made by overlaying a display screen with a transparent sheet of indium tin oxide. This material is brittle, however, and can’t be used on anything other than a flat surface. Individual carbon nanotubes have long been seen as a promising alternative because they conduct electricity so well. But carbon nanotubes have performed badly in touch screens due to poor electrical connections between different nanotubes. Carbon nanobuds are better because the ball-like appendages are particularly good at emitting electrons, which improves those electrical connections.
A nanobud consists of a tube of carbon atoms with a bud-like appendage.
Canatu has 40 prototype products in the works. It recently built its first full-scale manufacturing equipment, which can produce enough film to cover hundreds of thousands of smartphone touch screens every month. Next year the company plans to install enough machines to supply millions of smartphones.
Films containing carbon nanotubes have previously been too expensive to produce commercially. Canatu's founders, researchers at Aalto University in Finland, improved the electrical connections between carbon nanotubes by modifying their shape, and also found a way to make nanobud films cheaply.
Making carbon nanotube films conventionally is a complex process that requires costly purification steps, which can sometimes damage the nanotubes. Canatu's approach starts with carbon-containing gases, which are converted directly into nanobuds and deposited as a transparent film in a single step, with no purification needed.
The material isn’t a good fit for all applications, though. The conductivity isn’t high enough for very large screens, for example.
But the nanobud films can stretch over a surface, sometimes by more than 200 percent, without losing much performance, says Erkki Soininen, Canatu's vice president of marketing and sales. Most other stretchable touch screens can stretch by only a few percent. The material stretches because the carbon nanobuds are able to slide past one another while maintaining good electrical contact.

 
