
Historic Photos Reveal a Mercury Never Seen Before

Of the four rocky planets in our solar system -- Earth, Mars and Venus are the other three -- Mercury is the smallest, the densest, the one with the oldest surface, the one with the largest daily variations in surface temperature, and the least explored. Of particular interest is whether Mercury might have some vestige of a magnetic core. "The only reason we have an atmosphere and don't die is because of our magnetic field," noted SLU professor Paul Czysz.




NASA's MESSENGER spacecraft on Tuesday and Wednesday captured and delivered to Earth the first photographs of Mercury ever taken from within the planet's orbit.

Taken at 5:20 am EDT Tuesday, the historic first photo was soon joined by 364 more of the solar system's innermost planet, and several of them were released on Wednesday. Photos were taken by MESSENGER's Mercury Dual Imaging System as the spacecraft sailed high above the planet's south pole, providing a glimpse of portions of Mercury's surface that had not previously been seen by humans.
"The entire MESSENGER team is thrilled that spacecraft and instrument checkout has been proceeding according to plan," said MESSENGER Principal Investigator Sean Solomon of the Carnegie Institution of Washington.

"The first images from orbit and the first measurements from MESSENGER's other payload instruments are only the opening trickle of the flood of new information that we can expect over the coming year," Solomon added. "The orbital exploration of the solar system's innermost planet has begun."
Orbiting Every 12 Hours
NASA's MESSENGER -- short for "MErcury Surface, Space ENvironment, GEochemistry, and Ranging" -- on March 17 became the first spacecraft ever to enter Mercury's orbit after completing more than a dozen laps within the inner solar system over the past 6.6 years.
The probe will continue to orbit the planet once every 12 hours for the duration of its primary mission. On April 4, the yearlong science phase of the mission will begin, and the first orbital science data from Mercury will be returned.
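The quoted 12-hour period can be sanity-checked with Kepler's third law, which turns an orbital period into a semi-major axis given Mercury's gravitational parameter. A minimal sketch in Python; the GM value is the standard published constant, and the orbit is treated as a simple ellipse:

```python
import math

# Standard gravitational parameter of Mercury (m^3/s^2), a published constant.
GM_MERCURY = 2.2032e13

def semi_major_axis(period_s: float, gm: float = GM_MERCURY) -> float:
    """Kepler's third law: a = (GM * T^2 / (4 * pi^2))^(1/3)."""
    return (gm * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

a = semi_major_axis(12 * 3600)  # 12-hour orbit, in seconds
print(f"semi-major axis ≈ {a / 1000:.0f} km")  # roughly 10,100 km
```

That figure is the average of a highly eccentric orbit: MESSENGER swings far closer to the surface at periapsis than the semi-major axis alone suggests.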
In the meantime, thousands more images will be captured and studied in order to better understand the planet.

Newly Imaged Terrain 
In addition to the first image taken on Tuesday, a series of several more were released on Wednesday, including a color version of that first photo. Visible in the upper portion of the historic image is a rayed crater known as Debussy. The smaller crater Matabei is visible to the west of Debussy and is notable for its unusual dark rays.
The bottom portion of the image is near Mercury's south pole and includes a region of Mercury's surface not previously seen by spacecraft. That newly imaged terrain can be seen by comparing the new image with the planned image footprint.
Mercury's diameter is 3,030 miles. Simulated views on the MESSENGER website provide a glimpse of the spacecraft's current position.


One of 4 Rocky Planets

"They've had pictures of Mercury before, but this is the first time they've gone completely around the planet," Paul Czysz, professor emeritus of aerospace engineering at Saint Louis University, told TechNewsWorld.
Mercury is of interest to scientists because it is one of only four rocky planets identified so far in our solar system. Joining it on that list is also Earth, of course, as well as Venus and Mars.
Among those planets, Mercury is the smallest, the densest (after correcting for self-compression), the one with the oldest surface, the one with the largest daily variations in surface temperature, and the least explored. So, developing a better understanding of Mercury is a key to understanding how the planets in our solar system formed and evolved.

'One Side Is Blasted Bare'

Making Mercury particularly interesting is its proximity to the sun, and the fact that one side of the planet faces the sun most of the time, Czysz noted.
"One side is blasted bare" by the sun, while "the other side is dark and cold," he explained. "It takes years for it to do a complete revolution."
In fact, the extremely high temperatures on the sun-facing side of the planet meant that MESSENGER had to be designed carefully to be able to withstand such heat, Czysz noted. "They've designed the craft so a lot of the sensitive stuff is shielded behind, in the shadow of the sun."
Whereas the Juno probe slated to visit Jupiter later this year had to be designed to withstand that planet's nuclear radiation, MESSENGER's challenge was radiation of the thermal kind, he pointed out.

Looking for a Magnetic Core


MESSENGER's wide-angle camera (WAC) is not a typical color camera. It can image in 11 colors, ranging from 430 to 1020 nm wavelength (visible through near-infrared). (Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington)
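Multiband cameras like the WAC produce one grayscale image per filter; color pictures are then rendered by assigning three of the bands to a display's red, green, and blue channels. A hypothetical sketch with NumPy; the band assignments and normalization here are illustrative, not the MESSENGER team's actual processing pipeline:

```python
import numpy as np

def false_color(bands: dict, r_nm: int, g_nm: int, b_nm: int) -> np.ndarray:
    """Stack three single-band images into one RGB array scaled to 0..1.

    `bands` maps filter center wavelength (nm) -> 2-D array of brightness values.
    """
    rgb = np.stack([bands[r_nm], bands[g_nm], bands[b_nm]], axis=-1).astype(float)
    rgb -= rgb.min()
    if rgb.max() > 0:
        rgb /= rgb.max()  # normalize to the displayable 0..1 range
    return rgb

# Toy 2x2 "images" standing in for three of the WAC's 11 filters.
bands = {wl: np.random.rand(2, 2) for wl in (430, 750, 1000)}
img = false_color(bands, r_nm=1000, g_nm=750, b_nm=430)
```

Mapping a near-infrared band to red, as above, is a common way to exaggerate subtle surface color differences the eye would otherwise miss.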
Insights that could be gained from a deeper understanding of Mercury include a better idea of the solar system's origins, Czysz said.

Our current understanding is that Earth is distinct among the solar system's four rocky planets in that it has a significant magnetic field, resulting from its molten nuclear core. Though neither Mars nor Venus has anything comparable today, there is evidence that suggests Mars may have had something similar at one time, he added.
Mercury, then, could be our last chance to find another rocky planet with at least the vestiges of a magnetic core like Earth's, Czysz explained.
"The only reason we have an atmosphere and don't die is because of our magnetic field," he pointed out. That's because the magnetic field is what deflects the solar winds and radiation that are constantly trying to bombard our planet.

'The Largest Nuclear Reactor Ever Conceived'

"We're trying to put the pieces together as to why some rocky planets have magnetic fields, or nuclear cores, and why some don't," Czysz noted.
Such information is particularly timely in the wake of the nuclear disaster currently facing Japan, Czysz added.
"We're all afraid of what happened in Japan," he pointed out.
In fact, Czysz said, "we're sitting on the largest nuclear reactor ever conceived -- it's called the core of the Earth."


Red Hat's New Java Alternative: From Coffee to Tea

"The only way Ceylon can kill Java is by driving developers to suicide from switching back and forth between the two languages," said Slashdot blogger Barbara Hudson. "I'd say Java's about as likely to be killed by Cylons as by Ceylon. Larry Ellison's Battlestar Oracle has nothing to worry about on this front."



When a FOSS company gets to be the size of Red Hat (NYSE: RHT), pretty much every move it makes is of interest to those of us here in the Linux community.
So when said company unveils plans to create an alternative to none other than Java, well, let's just say everyone sits up and starts listening.
Sure enough, that's just what leaked out into the Linux blogosphere last week, thanks first to one Marc Richards and then the rowdy crowds over at Slashdot.
In no time at all, Red Hat's own Gavin King was chiming in on the subject, which has inspired no end of discussion. 

'We're Frustrated'

"Why a new language?" King wrote. "Well, we've been designing and building frameworks and libraries for Java for ten years, and we know its limitations intimately. And we're frustrated."
Among the key limitations, King went on, is that "we simply can't solve to our satisfaction in Java -- or in any other existing JVM language -- the problem of defining user interfaces and structured data using a typesafe, hierarchical syntax," he explained. "Without a solution to this problem, Java remains joined at the hip to XML."
Then, too, there's the fact that "the extremely outdated class libraries that form the Java SE SDK are riddled with problems," King went on. "Developing a great SDK is a top priority of the project."
That new project, dubbed "Ceylon," is outlined in a .PDF King links to from his blog, which goes on to include two further posts -- here and here -- for clarification.
Bottom line? Linux bloggers have had plenty to chew on.

'They Could Just Use Pascal'

"Java is tainted by Sun's hesitancy to Free the software and Oracle's (Nasdaq: ORCL) attempts to prevent free use of Java," blogger Robert Pogson opined. "If Ceylon escapes those burdens and brings back the ':=' -- I am an old programmer who has used Algol, Modula-2 and Pascal -- then I am all for it."
Of course, "they could save some retooling and just use Pascal," Pogson added. "That's what I do. There is source code for the compiler and the runtime at http://FreePascal.org. It's GPL with an exception for static libraries."
Either way, Ceylon is unlikely to become the "Java killer" many have said it could be, opined Barbara Hudson, a blogger on Slashdot who goes by "Tom" on the site. 

'Oracle Has Nothing to Worry About'

"For years, I've been saying that Java would be better if it offered more of the features found in c++, such as multiple inheritance and operator overloading," Hudson offered. "Every time I bring this up, the javanistas freak out. So now we have another Java derivative, one that the author says *will* allow operator overloading, and nobody says 'boo.' Go figure. :-)"
Of course, "what the main story giveth, the footnotes taketh away," Hudson added. Namely, "*function* overloading, a very handy feature borrowed from c++, disappears."
Add in "all the other random changes, and the only way Ceylon can kill Java is by driving developers to suicide from switching back and forth between the two languages," Hudson concluded.
In other words, "I'd say Java's about as likely to be killed by Cylons as by Ceylon," she quipped. "Larry Ellison's Battlestar Oracle has nothing to worry about on this front."
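The distinction Hudson draws is easy to miss, so here it is in Python, which happens to line up with the Ceylon proposal she describes: operators can be overloaded, but two functions with the same name and different signatures cannot coexist. A minimal sketch:

```python
class Money:
    """Operator overloading: '+' is given a class-specific meaning."""

    def __init__(self, cents: int):
        self.cents = cents

    def __add__(self, other: "Money") -> "Money":
        return Money(self.cents + other.cents)

total = Money(150) + Money(250)  # works: '+' dispatches to Money.__add__

# Function overloading by signature, as in C++, is NOT supported: the
# second 'def describe' below simply rebinds the name, so the str
# version is gone rather than coexisting with the Money version.
def describe(s: str) -> str:
    return "string: " + s

def describe(m: Money) -> str:
    return f"money: {m.cents} cents"
```

After the second definition, `describe("hello")` would fail at runtime, which is exactly the kind of surprise C++ programmers coming to such languages complain about.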

Is There Room for Another Language?

In order for Ceylon to be any kind of Java killer, "the main thing Red Hat can do is make sure the code runs fast," Thoughts on Technology blogger and Bodhi Linux lead developer Jeff Hoogland told Linux Girl.
"Java's only real selling point is cross-platform, and python+QT does an equally good job of that," Hoogland explained. "At the same time, Java is fairly slow for larger applications, making C/C++ a better choice IMO."
Indeed, "it will be interesting to see if there is room for another language to take off in this day and age, where it seems like we have an announcement every other month for some new language that they tell us we all need to be using," consultant and Slashdot blogger Gerhard Mack mused.

'Java Is a Lot Like Windows'

Slashdot blogger hairyfeet wasn't so sure any such room will be found.
"How many 'Java Killers' are we up to now?" hairyfeet began. "A hundred?
"The enterprise guys LIKE Java the way it is -- the last thing they want is yet another version with new incompatibilities," hairyfeet said. "Hell, now that Mono is on Android and iOS, if you are looking for cross-platform you'd probably be better off with Mono/.NET, and I don't see many going that route either."
In the end, "Java is a lot like Windows," hairyfeet opined. "It is this big, huge bunch of code, where just as many apps were written to exploit its quirks as to follow the specs. Trying to replace that will be about as easy a time as ReactOS is having trying to recreate Windows from scratch. Yeah, good luck with that, buddy!"

'It Looks Promising'

Chris Travers, a Slashdot blogger who works on the LedgerSMB project, took a more measured approach.
"It looks promising, but it's too soon to tell," Travers told Linux Girl. "I tend to wait until languages are a little more mature before I'd suggest jumping on the band wagon; a lot of ideas look good on paper, but it will be interesting to see what details do not work as expected in the real world."
One thing Travers does like about Ceylon, however, "is the use of the PL/1 assignment operator (:=)," he added. "One of the real issues with many computer languages is the use of the = sign for assignment, when many of us are taught since grade school to treat it as the comparison operator.
"I suspect a lot of computer programmers will find this very disorienting at first, but there are places where this is already used -- for example, in PL/1-based extensions to SQL," he pointed out. "In the long run, I think this operator makes more sense than the way the industry does it currently."


The Rare-Earth Crisis

Today's electric cars and wind turbines rely on a few elements that are mined almost entirely in China. Demand for these materials may soon exceed supply. Will this be China's next great economic advantage?

Mighty mine: This 50-acre mine on the eastern edge of California’s Mojave Desert was once the world’s leading supplier of rare-earth metals. Water pooled at the bottom of the mine while it lay idle after being shut down a decade ago. Credit: Photography by Daniel Hennessy

On the eastern edge of the Mojave Desert, an hour's drive southwest of Las Vegas in Mountain Pass, California, lies a 1.4-billion-year-old deposit of cerium, neodymium, and other metals that is the richest source of rare-earth elements in the United States. Beside hills populated by cacti, Joshua trees, and wandering tortoises is a vast waste dump of tan and white rocks that was built up over more than 50 years of production at a 50-acre open-pit mine here. The mine was once the world's biggest producer of these metals, which are crucial to such diverse products as computer hard drives, compact fluorescent light bulbs, and the magnets used in electric vehicles' motors. And the site still holds enough of them to mine for at least another 30 years. But in 2002 it was shut down, owing to severe environmental problems and the emergence of Chinese producers that supplied the metals at lower cost. The mine sat idle for a decade.
With worldwide demand for the materials exploding, the site's owner, Molycorp Minerals, restarted mining at Mountain Pass last December. It is now the Western Hemisphere's only producer of rare-earth metals and one of just a handful outside of China, which currently produces 95 percent of the world's supply. Last September, after China stopped exporting the materials to Japan for two months, countries around the world began scrambling to secure their own sources. But even without Chinese restrictions and with the revival of the California mine, worldwide supplies of some rare earths could soon fall short of demand. Of particular concern are neodymium and dysprosium, which are used to make magnets that help generate torque in the motors of electric and hybrid cars and convert torque into electricity in large wind turbines. In a report released last December, the U.S. Department of Energy estimated that widespread use of electric-drive vehicles and offshore wind farms could cause shortages of these metals by 2015.

Sensors for Tracking Home Water Use

Sensors track devices' electricity, water, and gas consumption from one spot.
 Finding the flush: This sensor attaches to a water pipe and wirelessly communicates changes in pressure to a microcontroller that infers the use of specific fixtures. A Bluetooth transmitter streams the data to a personal computer.


When a cell phone or credit-card bill arrives, each call or purchase is itemized, making it possible to track trends in calling or spending, which is especially helpful if you use a phone plan with limited minutes or are trying to stick to a budget. Within the next few years, household utilities could be itemized as well, allowing residents to track their usage and see which devices utilize the most electricity, water, or gas. New sensor technology that consists of a single device for each utility, which builds a picture of household activity by tracing electrical wiring, plumbing, and gas lines back to specific devices or fixtures, could make this far simpler to implement.
Shwetak Patel, a professor of computer science and electrical engineering at the University of Washington, in Seattle, developed the sensors, which plug directly into existing infrastructure in buildings, thereby eliminating the need for an elaborate set of networked sensors throughout a structure. For example, an electrical sensor plugs into a single outlet and monitors characteristic "noise" in electrical lines that are linked to specific devices, such as cell-phone chargers, refrigerators, DVD players, and light switches. And a gas sensor attaches to a gas line and monitors pressure changes that can be correlated to turning on a stove or furnace, for instance.
Now, Patel and his colleagues have developed a pressure sensor that fits around a water pipe. The technology, called Hydrosense, can detect leaks and trace them back to their source, and can recognize characteristic pressure changes that indicate that a specific fixture or appliance is in use.
Patel hopes to incorporate electrical, gas, and water sensors into a unified technology and has cofounded a soon-to-be-named startup that he hopes will start offering combined smart meters to utility companies within the next year or so. The goal, says Patel, is to make a "smart home" universally deployable. "I looked at the existing infrastructures," he says, "and saw that they could be retrofitted."
Smart sensors have become increasingly popular over the past few years as more people have become interested in cutting their utility bills and minimizing the resources that they consume. A number of startups offer to connect utility providers and consumers so that resource use can be tracked over the Internet. So far, however, no company or utility has been able to provide the sort of fine-grain resource usage that Patel hopes to offer with his startup.
The idea behind the water sensor has its origins in Patel's original work with electrical lines. Rather than simply looking at the amount of power consumed by all the devices in a house, he decided to look at noise patterns--irregularities in the electrical signal--that propagated over household power lines as a result of electrical consumption. "Let's say you turn on a light switch in the bathroom and kitchen," he says. "We can tell the difference between the two" due to electrical impulses that resonate at a high frequency. "So if you have two different impulses you see originate from two different locations inside the home, you can trace them back to a particular device," Patel says, noting that location can be determined by the amount of time that it takes for a signal to reach the sensor, which is usually just plugged into a spare wall outlet. 
Likewise, Hydrosense consists of a single device attached to a cutoff valve or water bib that monitors the entire plumbing infrastructure. "When you open a valve, the pressure on the entire system goes down," says Patel. "And whenever you change the water flow from static to kinetic, you get a shock wave that propagates throughout the pipes." He explains that the shock wave, while relatively mild, has a characteristic shape that can be used to identify different fixtures--even the distinction between the toilets in different bathrooms.
Using data collected in nine homes of varying style and age and with a diversity of plumbing systems located in three different cities, Patel and his colleagues have shown that by monitoring these shock waves, it is possible to identify individual fixtures with 95.6 percent accuracy.
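The fixture-identification step can be pictured as template matching: record a reference pressure transient for each fixture, then label a new event with its nearest reference. The following toy sketch is not the actual Hydrosense algorithm, and the waveforms are made up, but it shows the shape of the idea:

```python
import math

# Hypothetical reference pressure transients (pressure drop samples) per fixture.
templates = {
    "kitchen faucet": [0.0, -1.2, -0.8, -0.3, -0.1],
    "toilet":         [0.0, -3.0, -2.1, -1.5, -0.9],
    "shower":         [0.0, -1.9, -1.6, -1.2, -0.8],
}

def distance(a: list, b: list) -> float:
    """Euclidean distance between two equal-length sample sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(event: list) -> str:
    """Label an observed transient with the closest reference template."""
    return min(templates, key=lambda name: distance(event, templates[name]))

observed = [0.0, -2.9, -2.0, -1.4, -1.0]  # closely resembles the toilet template
print(classify(observed))  # prints 'toilet'
```

A production system would work on features extracted from the raw waveform rather than raw samples, but nearest-template matching is a reasonable mental model for how one sensor can tell fixtures apart.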
"The idea of being able to plug one device into a home and build a picture of what's going on and off is really fascinating," says Adrien Tuck, CEO of Tendril Networks, a company that makes smart meters and plugs for homes. But he suspects that there will be some kinks to iron out before the technology is deployable at a large scale. "If it were easy, it would have been done already," he says, "and that probably means that there are some things that need to be teased out."
In addition to monitoring utility usage, Patel says that the sensors can track human activity within a home, which could be useful for elder care and reducing energy waste. He has also developed a fourth sensor that can be integrated into a home's heating and cooling systems. By monitoring pressure changes that occur when people open and close doors and when they enter and exit a room, a sensor within an air-conditioning unit can infer with relative accuracy where people are within a home or apartment, Patel says.


Microsoft Explores Privacy-Protecting Personalization

A researcher is experimenting with ways that a Web browser could tighten the limits on information provided to websites.

Today, many websites ask users to take a devil's deal: share personal information in exchange for receiving useful personalized services. New research from Microsoft, which will be presented at the IEEE Symposium on Security and Privacy in May, suggests the development of a Web browser and associated protocols that could strengthen the user's hand in this exchange. Called RePriv, the system mines a user's behavior via a Web browser but controls how the resulting information is released to websites that want to offer personalized services, such as a shopping site that automatically knows users' interests.
"The browser knows more about the user's behavior than any individual site," says Ben Livshits, a researcher at Microsoft who was involved with the work. He and colleagues realized that the browser could therefore offer a better way to track user behavior, while it also protects the information that is collected, because users won't have to give away as much of their data to every site they visit.
The RePriv browser tracks a user's behavior to identify a list of his or her top interests, as well as the level of attention devoted to each. When the user visits a site that wants to offer personalization, a pop-up window will describe the type of information the site is asking for and give the user the option of allowing the exchange or not. Whatever the user decides, the site doesn't get specific information about what the user has been doing—instead, it sees the interest information RePriv has collected.
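The release model described here, coarse interest categories gated behind explicit per-site permission, can be sketched in a few lines. The category names and site names below are hypothetical, and this is a simplification of the idea rather than RePriv's actual protocol:

```python
# Interest profile mined locally by the browser: category -> attention share.
profile = {"technology": 0.45, "cooking": 0.30, "travel": 0.25}

# Per-site permissions the user granted via the consent prompt.
granted = {"news.example.com": {"technology", "travel"}}

def interests_for(site: str) -> dict:
    """Return only the coarse categories the user approved for this site.

    The site never sees raw browsing history, only the approved summary.
    """
    allowed = granted.get(site, set())
    return {cat: share for cat, share in profile.items() if cat in allowed}

print(interests_for("news.example.com"))  # technology and travel only
print(interests_for("shop.example.com"))  # {} -- nothing was granted
```

The key property is that the raw behavioral data never leaves the browser; sites only ever see the filtered summary the user agreed to.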
Livshits explains that a news site could use RePriv to personalize a user's view of the front page. The researchers built a demonstration based on the New York Times website. It reorders the home page to reflect the user's top interests, also taking into account data collected from social sites such as Digg that suggests which stories are most popular within different categories.
Livshits admits that RePriv still gives sites some data about users. But he maintains that the user remains aware and in control. He adds that with cookies and other existing tracking techniques, sites already collect far more user data than RePriv supplies.
The researchers also developed a way for third parties to extend RePriv's capabilities. They built a demonstration browser extension that tracks a user's interactions with Netflix  to collect more detailed data about that person's movie preferences. The extension could be used by a site such as Fandango to personalize the movie information it presents—again, with user permission.
"There is a clear tension between privacy and personalized technologies, including recommendations and targeted ads," says Elie Bursztein, a researcher at the Stanford Security Laboratory, who is developing an extension for the Chrome Web browser that enables more private browsing. "Putting the user in control by moving personalization into the browser offers a new way forward," he says.
"In the medium term, RePriv could provide an attractive interface for service providers that will dissuade them from taking more abusive approaches to customization," says Ari Juels, chief scientist and director of RSA Laboratories, a corporate research center.
Juels says RePriv is generally well engineered and well thought out, but he worries that the tool goes against "the general migration of data and functionality to the cloud." Many services, such as Facebook, now store information in the cloud, and RePriv wouldn't be able to get at data there—an omission that could hobble the system, he points out.
Juels is also concerned that most people would be permissive about the information they allow RePriv to release, and he believes many sites would exploit this. And he points out that websites with a substantial competitive advantage in the huge consumer-preference databases they maintain would likely resist such technology. "RePriv levels the playing field," he says. "This may be good for privacy, but it will leave service providers hungry." Therefore, he thinks, big players will be reluctant to cooperate with a system like this.
Livshits argues that some companies could use these characteristics of RePriv to their advantage. He says the system could appeal to new services, which struggle to give users a personalized experience the first time they visit a site. And larger sites might welcome the opportunity to get user data from across a person's browsing experience, rather than only from when the user visits their site. Livshits believes they might be willing to use the system and protect user privacy in exchange.

Batteries that Recharge in Seconds

 
A new process could let your laptop and cell phone recharge a hundred times faster than they do now.

Foam power: This lithium-ion battery cathode can be used to make a battery that holds as much energy as a conventional one, but can recharge a hundred times faster.
Credit: Paul Braun

 
A new way of making battery electrodes based on nanostructured metal foams has been used to make a lithium-ion battery that can be 90 percent charged in two minutes. If the method can be commercialized, it could lead to laptops that charge in a few minutes or cell phones that charge in 30 seconds.
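For scale: charging speeds are usually quoted as "C-rates," where 1C fills a cell's capacity in one hour. Ninety percent charge in two minutes works out to about 27C, against the roughly 0.5C to 1C typical of consumer lithium-ion cells. A quick check:

```python
def c_rate(fraction_charged: float, minutes: float) -> float:
    """C-rate implied by charging `fraction_charged` of capacity in `minutes`."""
    hours = minutes / 60
    return fraction_charged / hours

rate = c_rate(0.90, 2)  # 90 percent of capacity in two minutes
print(f"{rate:.0f}C")   # 27C
```

That factor of roughly 30 to 50 over today's cells is where the "hundred times faster" headline figure for partial charges comes from.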
The methods used to make the ultrafast-charging electrodes are compatible with a range of battery chemistries; the researchers have also used them to make nickel-metal-hydride batteries, the kind commonly used in hybrid and electric vehicles.
How fast a battery can charge up and then release that power is primarily limited by the movement of electrons and ions into and out of the cathode, the electrode that is negative during recharging. Researchers have been trying to use nanostructured materials to improve the process, but there's usually a trade-off between total energy storage capacity (which determines how long a battery can run before needing a recharge) and charge rates. "People solved half the problem," says Paul Braun, professor of materials science and engineering at the University of Illinois at Urbana-Champaign.
Braun's group has made highly porous metal foams coated with a large amount of active battery materials. The metal provides high electrical conductivity, and even though it's porous, the structure holds enough active material to store a sufficient amount of energy. The pores allow for ions to move about unimpeded.
The first step in making the cathodes is to create a slurry of polymer spheres on the surface of a conductive substrate. Because of their shape and surface charge, the spheres self-assemble into a regular pattern. The Illinois researchers then use a common technique called electroplating to fill the space between the spheres with nickel. Next, they dissolve the polymer spheres, and most of the metal, to leave a nickel sponge that's about 90 percent open space. Finally, they grow the active material on top of the sponge.
"It's some distance to a product, but we have pretty good lab demos" with nickel-metal-hydride and lithium-ion batteries, says Braun. The Illinois group has made lithium-ion batteries that charge almost entirely in about two minutes. The method should be applicable to the cell sizes needed for laptops and electric cars, though the researchers have not made them yet.
"The performance they got is unprecedented," says Andreas Stein, a professor of chemistry at the University of Minnesota. Stein pioneered the polymer-particle templating method that Braun's group used. Braun's work is described in the journal Nature Nanotechnology.
Jeff Dahn, professor of physics at Dalhousie University, is skeptical that these electrodes will ever end up in products. "When you look at the flow chart for making this structure, it's pretty complicated, and that is going to be expensive," he says.
Braun acknowledges: "There are lots of people coming up with elegant [electrode] structures, but manufacturing them is tricky." He says, however, that his fabrication process combines existing methods that are currently widely used to make other products, if not to make batteries, and that it shouldn't be too difficult to adapt them. The process would add extra steps to making a battery, but these steps aren't particularly expensive or complex, Braun says.
Braun's group will next test the electrode structure with a wider range of battery chemistries and work on improving batteries' other half, the anode—a trickier project.

The Case for Moving U.S. Nuclear Fuel to Dry Storage


One of the lesser-noted facts of the Fukushima nuclear disaster—where loss of coolant in spent-fuel pools has resulted in massive radiation releases—is that some fuel at the plant was stored in so-called dry casks, and these casks survived the March 11 earthquake and tsunami intact.
This fact is likely to result in new calls to move some spent fuel out of water pools at reactor sites in the United States—where it is packed more densely than the fuel in the stricken Japanese pools—and into outdoor dry casks, experts say.
"What will likely happen very quickly is that the [Nuclear Regulatory Commission] and utilities will arrive at a consensus that moving fuel to dry storage needs to be accelerated to get as much spent fuel out of the pools as fast as possible," says Ron Ballinger, an MIT nuclear engineer. In Japan, he says, "the dry storage casks weathered the earthquake and tsunami with zero problems."
Until now, U.S. regulators have decided that keeping fuel in pools—and even allowing the fuel to be more densely packed—is safe. Most U.S. nuclear reactors have air-cooled, dry-cask storage for some reactor waste, but generally this is only because the pools cannot fit any more.  Older waste that has had a chance to cool for a few years in pools can be moved to dry casks.
The U.S. is home to at least 65,000 tons of nuclear reactor waste, more than in any other nation, and this figure grows by about 2,200 tons each year.
"In general, U.S. reactors have a great deal more fuel in their spent-fuel-pools than the reactors at Fukushima," says Richard Lester, who heads the Department of Nuclear Science and Engineering at MIT. If a Fukushima-scale event were to strike a typical U.S. nuclear plant fuel pool, he says, "I think you would potentially have a worse situation simply by virtue of there being more fuel—a lot more fuel in the cases of the pools at the U.S. reactors."
Spent uranium reactor fuel generates great quantities of heat even after it is removed from the core of a reactor. For that reason, spent rods must be immersed in deep pools of circulating water for several years in order to cool them enough. But after several years, dry casks become a feasible storage option. The casks—generally barrel-shaped steel-and-concrete structures that stand 20 feet high and sit outdoors—only need passive air cooling.
In a pool, by contrast, the proximity of fuel rods to one another causes heat buildup that requires water to be circulated continually. As Fukushima has demonstrated, pumps and their backup systems can fail, and water in spent fuel pools can leak out or boil away.
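The cooling timescale behind the pool-then-cask sequence can be roughed out with the Way-Wigner approximation for decay heat after reactor shutdown. The coefficient and exponent below are the textbook rule-of-thumb values, good only to tens of percent, and the three-year operating time is an assumption for illustration:

```python
def decay_heat_fraction(t_shutdown_s: float, t_operating_s: float) -> float:
    """Way-Wigner rule of thumb: decay power as a fraction of operating power.

    P/P0 ≈ 0.0622 * (t^-0.2 - (t + t_op)^-0.2), with t in seconds since shutdown.
    """
    return 0.0622 * (t_shutdown_s ** -0.2 - (t_shutdown_s + t_operating_s) ** -0.2)

YEAR = 3.156e7          # seconds in a year
t_op = 3 * YEAR         # assumed time the fuel spent in the core

# Freshly discharged fuel is far hotter than fuel cooled for years, which is
# why rods go into circulating-water pools first and passive dry casks later.
for years in (0.01, 1, 5):
    f = decay_heat_fraction(years * YEAR, t_op)
    print(f"{years:>5} yr after shutdown: {f:.2e} of operating power")
```

Even at these small fractions, a full core's worth of spent fuel still sheds kilowatts to megawatts of heat, which is why a pool that loses its water remains dangerous years after discharge.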
Over the past three decades, delays in opening a permanent repository for spent nuclear fuel in the United States have led the U.S. Nuclear Regulatory Commission to allow existing spent fuel pools to be "reracked" to increase the density of rods inside them.
Of 84 current or former U.S. reactor sites holding spent fuel—a figure that includes some sites with more than one power plant—63 already have dry casks, 10 are applying to build them, and 11 haven't yet announced plans, according to the Nuclear Regulatory Commission.
Spent fuel: In the United States, 63 current and former nuclear reactor sites (including power plant complexes and government facilities) already have dry-cask storage facilities. Another 10 are applying to build them, and 11 haven’t yet announced plans to do so. But these casks are only keeping pace with newly generated waste. At most locations, liquid pools for holding and cooling fuel are still full of waste, and in many cases these pools are packed more densely than is the case at the stricken Fukushima reactors.
Credit: Nuclear Regulatory Commission
"If there is a loss-of-coolant accident, you are going to be in big trouble, especially with these high-density racks and the pools being heavily loaded—and even more so if there happens to be freshly discharged fuel in the pool," says Allison Macfarlane, a geologist and associate professor of environmental science and policy at George Mason University, who was one of several coauthors of a 2003 report warning of the danger posed by dense reracking. "A lot of these pools are in upper stories at the power plant," meaning breaches or cracks could let water run out. "If there is a loss of water, you can have a release of radioactivity much larger than Chernobyl, because there is a lot more fuel in the pool than in the core of the reactor."
Last year, President Obama canceled plans to open the Yucca Mountain underground fuel repository 90 miles northwest of Las Vegas, and appointed a commission to come up with alternatives. The commission, due to issue its report in June, has not made any statements about Fukushima. Macfarlane, a commission member, says she could not discuss its possible suggestions. However, the body is scheduled to meet in a public session May 13 in Washington.
The 2003 report said that in the event of coolant loss in a densely packed pool, air cooling would not suffice. Temperatures could rise to 600 °C within an hour, causing the zirconium fuel cladding to rupture, and then increase to 900 °C, whereupon the cladding would burn, resulting in huge quantities of released radioactive material, the report said.
The report proposed immediate reversion to lower-density pool configurations, with more cooled fuel put in dry casks and moved to central sites. In looser-packed pools, the report said, airflow alone could be enough to prevent fire in the event of coolant loss. It said this could be done for no more than $7 billion nationally, which would work out to a wholesale electricity price increase of 0.06 cents per kilowatt-hour generated from the fuel.
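The report's surcharge figure can be sanity-checked with a little arithmetic. The sketch below works out how much electricity a 0.06-cent-per-kilowatt-hour surcharge would have to be spread over to recoup $7 billion; the U.S. nuclear output figure used for comparison is my own rough assumption, not a number from the report.

```python
# Sanity check of the 2003 report's cost figure.
TOTAL_COST_USD = 7e9               # one-time national cost cited by the report
SURCHARGE_USD_PER_KWH = 0.0006     # 0.06 cents per kilowatt-hour

# Generation implied by spreading the cost at that surcharge rate.
implied_kwh = TOTAL_COST_USD / SURCHARGE_USD_PER_KWH
print(f"Implied generation: {implied_kwh:.2e} kWh")  # ~1.17e13 kWh

# Assumed ballpark for annual U.S. nuclear generation (my estimate).
US_NUCLEAR_KWH_PER_YEAR = 8e11
print(f"Years of U.S. nuclear output: {implied_kwh / US_NUCLEAR_KWH_PER_YEAR:.1f}")
```

Roughly fifteen years' worth of nuclear-generated electricity, in other words—consistent with amortizing the cost over the remaining output of the affected fuel.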
These steps were not carried out. A subsequent National Research Council report also said the fire scenarios required more study, and suggested other measures while leaving dense configurations intact. "It appears to be feasible to reduce the likelihood of a zirconium cladding fire by rearranging spent fuel assemblies in the pool and making provision for water-spray systems that would be able to cool the fuel, even if the pool or overlying building were severely damaged," the report said. Fuel rearranging and backup cooling of pools are being implemented, a Nuclear Regulatory Commission spokesman says.
If the U.S. government had followed through on its 1982 commitment to open a spent-fuel repository— and its subsequent contracts with utilities to begin removing the fuel in 1998—the pressure on U.S. spent fuel pools would have been relieved, Lester says.  "There were schedules that described how the DOE [Department of Energy] was going to move the fuel, and which fuel would be moved," he says. "I think we can say, on the basis of all of that, that the pools would not be nearly as full as they are now."
He says it was crucial to begin establishing central sites for dry-cask storage as part of a comprehensive plan for waste storage and disposal, which he says should not rule out Yucca Mountain.  "One possible use for the site is for temporary storage," Lester says.

A Browser that Speaks Your Language


Early adopters can now get a sneak peek at the future of the Web by downloading the latest prerelease, or "beta," version of Chrome, Google's Web browser. One of the most interesting new features is the ability to convert speech to text—entirely via the Web.
The feature is the result of work Google has been doing with the World Wide Web Consortium's HTML Speech Incubator Group, the mission of which is "to determine the feasibility of integrating speech technology in HTML5," the Web's new, emerging standard language.
A Web page employing the new HTML5 feature could have an icon that, when clicked, initiates a recording through the computer's microphone, via the browser. Speech is captured and sent to Google's servers for transcription, and the resulting text is sent back to the website.
To experiment with the voice-to-text feature, download the latest beta version of Chrome here. Then go to this webpage, click on the microphone, and start talking. You'll probably find the results mixed, and sometimes hilarious. Using the finest elocution I could muster, I read the opening passage of Richard Yates's Revolutionary Road: "The final dying sounds of their dress rehearsal left the Laurel Players with nothing to do but stand there, silent and helpless." I got error messages several times in a row ("speech not recognized" or "connection to speech servers failed"). Once, I received this transcription: "9 sounds good restaurants on the world there's nothing to do with fam vans island."
The new feature derives in large part from experiments Google conducted through its Android operating system for mobile devices. For more than a year, says Vincent Vanhoucke, a member of Google's voice recognition team, Android app developers have been able to integrate voice recognition into their apps using technology provided by Google. This has provided Google with useful voice data with which to train its voice-recognition algorithms. Today, some 20 percent of searches on Android phones are conducted using voice recognition, says Vanhoucke: people use voice recognition to write texts, send emails, or conduct searches. "It has really opened up interesting new avenues," says Vanhoucke.
However, unlike desktop voice-to-text software, which first accustoms itself to a user's voice, Chrome is trying to churn out text from voice without prior training. 
"I suppose if they keep track of [the] IP address, they could adapt" to a given user's voice, says Jim Glass, a speech recognition expert at MIT. Glass notes that the mobile phone provides an acoustic environment very different from that of a laptop or desktop computer; for one thing, a phone's microphone is reliably placed right at the user's mouth, unlike computer microphone setups in homes or offices. "This is the beta version of Chrome," says Glass. "They'll be collecting data, and we can be sure they will be refining their models—that's the nature of the speech-recognition game."
Even if it's rough around the edges, sometimes the technology impresses. I tried once again and got back "the final warning sounds of the dress rehearsal at laurel players with nothing to do with stand there." Not so bad. And Chrome nailed it to the letter when all I said was "the quick brown fox jumps over the lazy dog."
Third-party programmers have also begun creating Web pages capable of using the new feature of Chrome. Already available for trial is a browser plugin called Speechify that lets you search Google, Hulu, YouTube, Amazon, and other sites using voice with Chrome.
Other inventive uses could soon follow. "Games could be taking keyboard, mouse, touch, accelerometer, and speech input together," says Karl Westin, an expert on HTML5 who works for Nerd Communications, based in Berlin, Germany. "Having an aeroplane game where you could actually scream 'up, UP, UUUPPP!' could be fantastic."
But the technology is more than just a toy—it also points the way to a much more capable Web. HTML4, the last major version of the HTML language, emerged in 1997. Since then, plugins like Silverlight and Flash have added media-processing capabilities to the Web. HTML5, by contrast, builds capabilities such as media playback and offline storage directly into the browser, no plugins required.
"The insight we had was that more and more people were spending all their time in the browser," says Google's Brian Rakowski, group product manager for Chrome. E-mail and instant messaging increasingly take place in browsers rather than in separate e-mail or AIM applications. "We'd like it to be the case that you never have to install a native application again," says Rakowski. "The Web should be able to do all of it."

Zotac GeForce GTX 580 AMP! Edition

The scaly green dragon spewing fire from its maw, depicted on both the box and the decal affixed to Zotac's GeForce GTX 580 AMP! Edition video card, is an apt symbol for this particular piece of hardware. Like the original GTX 580 reference version, this one is a killer: the most powerful single-GPU video card you can buy, featuring all of Nvidia's latest innovations and technologies and delivering an outstanding gaming experience. The AMP! is also overclocked, for even more performance. But, as it springs from Nvidia's standard design, you'll have to deal with the twin demons of high price ($529.99 list) and high power usage to get all those benefits, so make sure you're prepared before you drop a cool half-grand on this hot card.
Except for that dragon sticker, the GeForce GTX 580 AMP! Edition looks like an identical twin to Nvidia's reference model. It's 10.5 inches in length; long enough that it won't fit in every case out there, but not irresponsibly so. It has a beveled interior edge to aid in air circulation, especially in multicard Scalable Link Interface (SLI) setups. (You can connect up to three cards together.) And it's got three output ports on the back panel: two dual-link DVI and one mini HDMI. Because of the size of its fan and heat sink unit, this PCI Express (PCIe) x16 card will block an additional expansion slot.
Inside, the hardware hasn't changed much, either. The GPU is still the GF110, the fully stocked version of the top Fermi architecture's latest iteration. This means it's loaded with 16 Streaming Multiprocessors, and thus a towering total of 512 CUDA cores, 16 polymorph engines, four raster units, 64 texture units, and 48 ROPs. (The amount of memory, 1,536MB of GDDR5 working across a 384-bit memory interface, remains unchanged.) Video hardware this substantial requires a fair amount of power: Nvidia recommends a power supply of at least 600 watts (that's if you're just using one card, mind you), and you'll need two direct connections from that power supply: one six-pin and one eight-pin. Like all of the other cards that use Nvidia's silicon, the card supports the full range of CUDA parallel processing, PhysX physics processing, and 3D Vision stereoscopic 3D technologies.
Where the Zotac card differs from the baseline is in its clock rates, all of which have been bumped up slightly. The standard graphics clock of 772 MHz has been raised to 815 MHz, the memory clock from 4,008 MHz to 4,100 MHz, and the processor clock from 1,544 MHz to 1,630 MHz.
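Just how modest that factory overclock is becomes clear when the bumps are expressed as percentages. A minimal sketch, using only the clock figures quoted above:

```python
# Zotac AMP! Edition clocks versus Nvidia's GTX 580 reference clocks, in MHz.
clocks_mhz = {
    "graphics":  (772, 815),
    "memory":    (4008, 4100),
    "processor": (1544, 1630),
}

# Compute each overclock as a percentage of the stock clock.
for name, (stock, amped) in clocks_mhz.items():
    pct = (amped - stock) / stock * 100
    print(f"{name:9s}: {stock} -> {amped} MHz (+{pct:.1f}%)")
```

The graphics and processor clocks rise about 5.6 percent and the memory clock about 2.3 percent—small enough that single-digit frame-rate gains in the benchmarks below are exactly what you'd expect.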
As you might expect, this does make a difference in performance—if not a huge one. Across the board in our performance tests, we saw measurable, if minute, increases from the regular GTX 580's scores at the top resolutions (1,920 by 1,200 and 2,560 by 1,600) with all details maxed out. 3DMark 11, on the Extreme (1,920-by-1,200) preset, rose from 1,961 to 2,045. Aliens vs. Predator inched to 45.6 frames per second (fps) from 43.8 at the lower resolution, and from 27.7 to 28.8 on the higher. Ditto Far Cry 2 (103.85 to 107.48; 71.7 to 74.7), Heaven Benchmark 2.1 (31.5 to 32.9; 22.1 to 22.9, one of the lowest increases we saw), Just Cause 2 (36.7 to 38; 24.22 to 25.55), Lost Planet 2 (51.3 to 53.7; 36.8 to 38.8), Metro 2033 (32.33 to 33.67; 20.33 to 21.33), and S.T.A.L.K.E.R: Call of Pripyat (59.8 to 62.6; 37.9 to 39.7). (Technically, we saw the same thing on H.A.W.X. 2 as well, but the increases at these speeds—142 to 148 and 94 to 99—are effectively meaningless. The card can handle the game, trust us!)
There was a slight difference in power usage, too. We measured slightly lower values during idle (about 143 watts for the Zotac card, about 145 for the reference model), and slightly higher values under load in our Metro 2033 test using an Extech Datalogger (the Zotac pulling in about 360 watts, the reference model just under 359). Again, individual watts don't matter too much here—either way, the GTX 580 is going to gobble up power when you push it to its limits, so make sure you (and maybe your electricity provider) are prepared.
All this raises the question: Is the Zotac GeForce GTX 580 AMP! Edition worth the slight premium over a stock-clocked model? It's a close call, especially given how modest Zotac's factory overclock is—unless you're running a huge monitor and want to squeeze out every spare frame you can, most players won't see a substantial difference. A better argument is the inclusion of Prince of Persia: The Forgotten Sands, which gives you a fun way to flex your card's muscles right out of the box. But even if, the game aside, the Zotac card isn't dramatically better than another GTX 580 running at stock speeds, buying it will still net you by far the most capable single-GPU card on the market.

Droid Incredible 2 Pictures Leaked

A recently leaked image of the Droid Incredible 2 has established one thing for sure: this handset will come Verizon branded. Many rumors have circulated about the branding of the smartphone, which has yet to hit stores. The picture also reveals that the phone will closely resemble the Incredible S on the outside; the shared branding, other similarities aside, is perhaps a plain coincidence. Though we can't be certain whether Android 2.3 will be the final OS, the leaked image shows the phone running Android 2.2.1.
HTC is likely to give its upcoming Droid Incredible 2 smartphone a 4-inch WVGA SLCD capacitive display. Another noteworthy anticipated feature is wireless charging. Dual cameras are expected for the Incredible 2, with the 8-megapixel rear camera offering a dual-LED flash and 720p HD video recording.
According to the new leak, the Droid Incredible 2 will most probably list for approximately $199, with a launch expected around the end of this month, on the 28th.

Read more: Droid Incredible 2 Pictures Leaked | TechGadgetsWeb
