Monday, March 21, 2011

Engineers Make Breakthrough in Ultra-Sensitive Sensor Technology

The sensor, which is the most sensitive of its kind to date, relies on a completely new architecture and fabrication technique developed by the Princeton researchers. The device boosts faint signals generated by the scattering of laser light from a material placed on it, allowing the identification of various substances based on the color of light they reflect. The sample could be as small as a single molecule.

The technology is a major advance in a decades-long search to identify materials using Raman scattering, a phenomenon discovered in the 1920s by the Indian physicist Chandrasekhara Raman, in which light reflecting off an object carries a signature of its molecular composition and structure.

"Raman scattering has enormous potential in biological and chemical sensing, and could have many applications in industry, medicine, the military and other fields," said Stephen Y. Chou, the professor of electrical engineering who led the research team."But current Raman sensors are so weak that their use has been very limited outside of research. We've developed a way to significantly enhance the signal over the entire sensor and that could change the landscape of how Raman scattering can be used."

Chou and his collaborators, electrical engineering graduate students Wen-Di Li and Fei Ding and postdoctoral fellow Jonathan Hu, published a paper on their innovation in February in the journal Optics Express. The research was funded by the Defense Advanced Research Projects Agency.

In Raman scattering, a beam of pure one-color light is focused on a target, but the light reflected from the object contains two extra colors. The frequencies of these extra colors are unique to the molecular makeup of the substance, providing a potentially powerful way to determine its identity, analogous to the way a fingerprint or DNA signature helps identify a person.
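As a rough numerical illustration of where those two extra colors appear, the sketch below computes the shifted wavelengths for a hypothetical green laser and an assumed molecular vibration; the 532 nm excitation and 1000 cm^-1 shift are illustrative values, not figures from the Princeton work.

```python
# Illustrative example of the Stokes / anti-Stokes shift described above.
# The excitation wavelength and vibrational shift are assumed example values.

def raman_wavelengths(excitation_nm, shift_cm1):
    """Return (stokes_nm, anti_stokes_nm) for a given excitation wavelength
    and Raman shift expressed in wavenumbers (cm^-1)."""
    excitation_cm1 = 1e7 / excitation_nm          # convert nm to wavenumbers
    stokes_cm1 = excitation_cm1 - shift_cm1       # lower-energy extra color
    anti_stokes_cm1 = excitation_cm1 + shift_cm1  # higher-energy extra color
    return 1e7 / stokes_cm1, 1e7 / anti_stokes_cm1

stokes, anti_stokes = raman_wavelengths(532.0, 1000.0)
print(f"Stokes: {stokes:.1f} nm, anti-Stokes: {anti_stokes:.1f} nm")
# -> Stokes: ~561.9 nm, anti-Stokes: ~505.1 nm
```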

Since Raman first discovered the phenomenon -- a breakthrough that earned him the Nobel Prize -- engineers have dreamed of using it in everyday devices to identify the molecular composition and structure of substances, but for many materials the extra colors of reflected light were too weak to be seen even with the most sophisticated laboratory equipment.

Researchers discovered in the 1970s that Raman signals are much stronger if the substance to be identified is placed on a rough metal surface or on tiny particles of gold or silver. The technique, known as surface-enhanced Raman scattering (SERS), showed great promise, but even after four decades of research it has proven difficult to put to practical use. The strong signals appear only at a few random points on the sensor surface, making it hard to predict where to measure and leaving the sensor's overall signal weak.

Abandoning the previous methods for designing and manufacturing the sensors, Chou and his colleagues developed a completely new SERS architecture: a chip studded with uniform rows of tiny pillars made of metals and insulators.

One secret of the Chou team's design is that their pillar arrays are fundamentally different from those explored by other researchers. Their structure has two key components: a cavity formed by metal on the top and at the base of each pillar; and metal particles of about 20 nanometers in diameter, known as plasmonic nanodots, on the pillar wall, with small gaps of about 2 nanometers between the metal components.

The small particles and gaps significantly boost the Raman signal. The cavities serve as antennae, trapping light from the laser so it passes the plasmonic nanodots multiple times to generate the Raman signal rather than only once. The cavities also enhance the outgoing Raman signal.
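A back-of-the-envelope sketch of why such hot spots matter, assuming the standard rule of thumb that SERS enhancement scales roughly as the fourth power of the local field enhancement near the metal; the field value used is an illustrative assumption, not a measurement from the Princeton device.

```python
# Rough sketch using the common SERS approximation that the Raman enhancement
# goes as |E/E0|^4, where |E/E0| is the local field enhancement at a plasmonic
# hot spot. The value 180 below is an assumed illustrative number.

def sers_enhancement(field_ratio):
    """Approximate SERS enhancement factor for a local field ratio |E/E0|."""
    return field_ratio ** 4

# A local field only ~180 times the incident field already gives ~10^9 enhancement.
print(f"{sers_enhancement(180):.1e}")  # -> ~1.0e+09
```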

Chou's team named their new sensor "disk-coupled dots-on-pillar antenna-array," or D2PA for short.

So far, the chip is a billion times (10^9) more sensitive than was possible without SERS boosting of Raman signals, and its sensitivity is uniform across the surface, making it more reliable for use in sensing devices. That sensitivity is several orders of magnitude higher than previously reported.

Already, researchers at the U.S. Naval Research Laboratory are experimenting with a less sensitive chip to explore whether the military could use the technology pioneered at Princeton for detecting chemicals, biological agents and explosives.

In addition to being far more sensitive than its predecessors, the Princeton chip can be manufactured inexpensively at large sizes and in large quantities. This is due to the simple structure of the sensor and a new combination of two powerful nanofabrication technologies: nanoimprint, a method that allows tiny structures to be produced in cookie-cutter fashion, and self-assembly, a technique in which tiny particles form on their own. Chou's team has produced the sensors on 4-inch wafers (the basis of electronic chips) and can scale the fabrication to much larger wafer sizes.

"This is a very powerful method to identify molecules," Chou said."The combination of a sensor that enhances signals far beyond what was previously possible, that's uniform in its sensitivity and that's easy to mass produce could change the landscape of sensor technology and what's possible with sensing."


Source

Thursday, March 3, 2011

New Developments in Quantum Computing

At the Association for Computing Machinery's 43rd Symposium on Theory of Computing in June, associate professor of computer science Scott Aaronson and his graduate student Alex Arkhipov will present a paper describing an experiment that, if it worked, would offer strong evidence that quantum computers can do things that classical computers can't. Although building the experimental apparatus would be difficult, it shouldn't be as difficult as building a fully functional quantum computer.

Aaronson and Arkhipov's proposal is a variation on an experiment conducted by physicists at the University of Rochester in 1987, which relied on a beam splitter, a device that takes an incoming beam of light and splits it into two beams traveling in different directions. The Rochester researchers demonstrated that if two identical light particles -- photons -- reach the beam splitter at exactly the same time, they will both go either right or left; they won't take different paths. It's another quantum behavior of fundamental particles that defies our physical intuitions.
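A minimal sketch of that two-photon behavior, using the textbook quantum-optics rule that the amplitude for a given assignment of identical photons to output ports is the permanent of the corresponding submatrix of the beam splitter's unitary; the 50/50 beam splitter matrix below is the standard convention, not anything specified in the article.

```python
# Two identical photons at a 50/50 beam splitter: the coincidence amplitude
# (one photon exiting each port) vanishes, so the photons always bunch.
import itertools
import numpy as np

def permanent(m):
    """Brute-force matrix permanent (fine for tiny matrices)."""
    n = m.shape[0]
    return sum(np.prod([m[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

bs = np.array([[1, 1],
               [1, -1]]) / np.sqrt(2)   # standard 50/50 beam splitter unitary

# One photon enters each input port; the amplitude for them to exit from
# different ports is the permanent of the full 2x2 matrix.
coincidence_amp = permanent(bs)
print(abs(coincidence_amp) ** 2)        # -> 0.0: they never take different paths
```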

The MIT researchers' experiment would use a larger number of photons, which would pass through a network of beam splitters and eventually strike photon detectors. The number of detectors would be somewhere in the vicinity of the square of the number of photons -- about 36 detectors for six photons, 100 detectors for 10 photons.

For any run of the MIT experiment, it would be impossible to predict how many photons would strike any given detector. But over successive runs, statistical patterns would begin to build up. In the six-photon version of the experiment, for instance, it could turn out that there's an 8 percent chance that photons will strike detectors 1, 3, 5, 7, 9 and 11, a 4 percent chance that they'll strike detectors 2, 4, 6, 8, 10 and 12, and so on, for any conceivable combination of detectors.

Calculating that distribution -- the likelihood of photons striking a given combination of detectors -- is a hard problem. The researchers' experiment doesn't solve it outright, but every successful execution of the experiment does take a sample from the solution set. One of the key findings in Aaronson and Arkhipov's paper is that, not only is calculating the distribution a hard problem, but so is simulating the sampling of it. For an experiment with more than, say, 100 photons, it would probably be beyond the computational capacity of all the computers in the world.
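The source of that hardness is that, for photons passing through a linear-optical network described by a unitary matrix, the probability of a collision-free detection pattern is the squared magnitude of the permanent of a submatrix of that unitary, and permanents become exponentially expensive to compute as the photon number grows. The sketch below illustrates this for a small, randomly generated network; the random matrix and the specific mode choices are illustrative assumptions, not parameters of the proposed experiment.

```python
# Probability of one detector pattern in a small, hypothetical photon network.
import itertools
import numpy as np

def permanent(m):
    """Brute-force matrix permanent (n! terms; practical only for tiny n)."""
    n = m.shape[0]
    return sum(np.prod([m[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

n_photons, n_modes = 3, 9                # detectors ~ square of the photon count
rng = np.random.default_rng(0)
z = rng.normal(size=(n_modes, n_modes)) + 1j * rng.normal(size=(n_modes, n_modes))
U, _ = np.linalg.qr(z)                   # random unitary standing in for the network

inputs = [0, 1, 2]                       # photons injected into the first three modes
outputs = [0, 3, 7]                      # one possible pattern of detector clicks
sub = U[np.ix_(outputs, inputs)]         # 3x3 submatrix for this input/output pattern
print(abs(permanent(sub)) ** 2)          # probability of seeing exactly that pattern
```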

The question, then, is whether the experiment can be successfully executed. The Rochester researchers performed it with two photons, but getting multiple photons to arrive at a whole sequence of beam splitters at exactly the right time is more complicated. Barry Sanders, director of the University of Calgary's Institute for Quantum Information Science, points out that in 1987, when the Rochester researchers performed their initial experiment, they were using lasers mounted on lab tables and getting photons to arrive at the beam splitter simultaneously by sending them down fiber-optic cables of different lengths. But recent years have seen the advent of optical chips, in which all the optical components are etched into a silicon substrate, which makes it much easier to control the photons' trajectories.

The biggest problem, Sanders believes, is generating individual photons at predictable enough intervals to synchronize their arrival at the beam splitters. "People have been working on it for a decade, making great things," Sanders says. "But getting a train of single photons is still a challenge."

Sanders points out that even if the problem of getting single photons onto the chip is solved, photon detectors still have inefficiencies that could make their measurements inexact: in engineering parlance, there would be noise in the system. But Aaronson says that he and Arkhipov explicitly consider the question of whether simulating even a noisy version of their optical experiment would be an intractably hard problem. Although they were unable to prove that it was, Aaronson says that "most of our paper is devoted to giving evidence that the answer to that is yes." He's hopeful that a proof is forthcoming, whether from his research group or others'.


Source