Friday, January 21, 2011

Single Photon Management for Quantum Computers

In principle, quantum computers can perform calculations that are impossible or impractical using conventional computers by taking advantage of the peculiar rules of quantum mechanics. To do this, they need to operate on things that can be manipulated into specific quantum states. Photons are among the leading contenders.

The new NIST papers address one of the many challenges to a practical quantum computer: the need for a device that produces photons in ready quantities, but only one at a time, and only when the computer's processor is ready to receive them. Just as garbled data will confuse a standard computer, an information-bearing photon that enters a quantum processor together with other particles -- or when the processor is not expecting it -- can ruin a calculation.

The single-photon source has been elusive for nearly two decades, in part because no method of producing these particles individually is ideal. "It's a bit like playing a game of whack-a-mole, where solving one problem creates others," says Alan Migdall of NIST's Optical Technology Division. "The best you can do is keep all the issues under control somewhat. You can never get rid of them."

The team's first paper addresses the need to be certain that a photon is indeed coming when the processor is expecting it, and that none shows up unexpectedly. Many kinds of single-photon sources create a pair of photons and send one of them to a detector, which tips off the processor to the fact that the second, information-bearing photon is on its way. But since detectors are not completely accurate, sometimes they miss the "herald" photon -- and its twin zips into the processor, gumming up the works.

The team effort, in collaboration with researchers from the Italian metrology laboratory L'Istituto Nazionale di Ricerca Metrologica (INRIM), handled the issue by building a simple gate into the source. When a herald photon reaches the detector, the gate opens, allowing the second photon past. "You get a photon when you expect one, and you don't get one when you don't," Migdall says. "It was an obvious solution; others proposed it long ago, we were just the first ones to build it. It makes the single photon source better."
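The benefit of gating can be illustrated with a toy Monte Carlo model. This is only an illustrative sketch, not the NIST device: it assumes a pair source with a herald detector of limited efficiency (70% here, an arbitrary figure) and counts how many photons reach the processor without warning, with and without a gate.

```python
import random

def run_source(n_pairs, detector_efficiency, gated, rng):
    """Toy model of a heralded pair source feeding a processor.

    Each trial emits a photon pair; the herald detector fires with
    probability `detector_efficiency`.  Without a gate, the signal
    photon always reaches the processor; with a gate, it passes only
    when its herald was actually detected.
    """
    expected = 0    # photons the processor was told to expect
    unexpected = 0  # photons that arrived with no herald warning
    for _ in range(n_pairs):
        heralded = rng.random() < detector_efficiency
        if heralded:
            expected += 1       # herald click: processor is ready
        elif not gated:
            unexpected += 1     # missed herald, twin slips through
        # with the gate closed, the unheralded photon is blocked
    return expected, unexpected

exp_open, unexp_open = run_source(100_000, 0.7, gated=False,
                                  rng=random.Random(42))
exp_gated, unexp_gated = run_source(100_000, 0.7, gated=True,
                                    rng=random.Random(42))
print(unexp_open)   # roughly 30% of pairs arrive unannounced
print(unexp_gated)  # 0: the gate admits only heralded photons
```

The gate does not make the detector any better -- it simply converts "missed herald" events from surprise arrivals into no arrival at all, which a quantum processor can tolerate far more easily.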

In a second paper, the NIST team describes a photon source that addresses two other requirements. Quantum computers will need many such sources working in parallel, so sources must be easy to build in large numbers and operate reliably; and so that the computer can tell the photons apart, the sources must create multiple individual photons, each at a distinct wavelength. The team outlines a way to create just such a source out of silicon, the well-understood material from which the electronics industry has built standard computer chips for decades.

"Ordinarily a particular material can produce only pairs in a specific pair of wavelengths, but our design allows production of photons at a number of regular and distinct wavelengths simultaneously, all from one source," Migdall says."Because the design is compatible with microfabrication techniques, this accomplishment is the first step in the process of creating sources that are part of integrated circuits, not just prototype computers that work in the hothouse of the lab."


Source

Wednesday, January 19, 2011

Better Than the Human Eye: Tiny Camera With Adjustable Zoom Could Aid Endoscopic Imaging, Robotics, Night Vision

The"eyeball camera" has a 3.5x optical zoom, takes sharp images, is inexpensive to make and is only the size of a nickel. (A higher zoom is possible with the technology.)

While the camera won't be appearing at Best Buy any time soon, the tunable camera -- once optimized -- should be useful in many applications, including night-vision surveillance, robotic vision, endoscopic imaging and consumer electronics.

"We were inspired by the human eye, but we wanted to go beyond the human eye," said Yonggang Huang, Joseph Cummings Professor of Civil and Environmental Engineering and Mechanical Engineering at Northwestern's McCormick School of Engineering and Applied Science."Our goal was to develop something simple that can zoom and capture good images, and we've achieved that."

The tiny camera combines the best of both the human eye and an expensive single-lens reflex (SLR) camera with a zoom lens. It has the simple lens of the human eye, allowing the device to be small, and the zoom capability of the SLR camera without the bulk and weight of a complex lens. The key is that both the simple lens and photodetectors are on flexible substrates, and a hydraulic system can change the shape of the substrates appropriately, enabling a variable zoom.

The research is being published the week of Jan. 17 by the Proceedings of the National Academy of Sciences (PNAS).

Huang, co-corresponding author of the PNAS paper, led the theory and design work at Northwestern. His colleague John Rogers, the Lee J. Flory Founder Chair in Engineering and professor of materials science and engineering at the University of Illinois, led the design, experimental and fabrication work. Rogers is a co-corresponding author of the paper.

Earlier eyeball camera designs are incompatible with variable zoom because these cameras have rigid detectors. The detector must change shape as the in-focus image changes shape with magnification. Huang and Rogers and their team use an array of interconnected and flexible silicon photodetectors on a thin, elastic membrane, which can easily change shape. This flexibility opens up the field of possible uses for such a system. (The array builds on their work in stretchable electronics.)

The camera system also has an integrated lens constructed by putting a thin, elastic membrane on a water chamber, with a clear glass window underneath.

Initially both detector and lens are flat. Beneath both the membranes of the detector and the simple lens are chambers filled with water. By extracting water from the detector's chamber, the detector surface becomes a concave hemisphere. (Injecting water back returns the detector to a flat surface.) Injecting water into the chamber of the lens makes the thin membrane become a convex hemisphere.

To achieve an in-focus and magnified image, the researchers actuate the hydraulics to change the curvatures of the lens and detector in a coordinated manner. The shape of the detector must match the varying curvature of the image surface to accommodate continuously adjustable zoom, and this is easily done with this new hemispherical eye camera.
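The connection between pumped water and optical power can be sketched with standard thin-lens optics. This is a simplified model under assumptions not stated in the article: the bulged membrane is treated as a plano-convex water lens (refractive index about 1.333), and the lensmaker's equation with one flat surface, 1/f = (n - 1)/R, gives its focal length.

```python
def planoconvex_focal_length(radius_m, n=1.333):
    """Thin-lens focal length of a plano-convex lens of refractive
    index n whose curved surface has radius of curvature `radius_m`.
    Lensmaker's equation with one flat surface: 1/f = (n - 1)/R.
    """
    return radius_m / (n - 1.0)

# Pumping water into the chamber bulges the membrane further: a
# smaller radius of curvature means a shorter focal length, i.e.
# a more powerful lens.  The radii below are illustrative only.
for r_mm in (12.0, 6.0, 3.0):
    f = planoconvex_focal_length(r_mm / 1000)
    print(f"R = {r_mm:4.1f} mm -> f = {f * 1000:5.1f} mm")
```

Changing the lens curvature alone would throw the image out of focus on a fixed detector; that is why the hydraulics must deform the detector membrane in lockstep, keeping its shape matched to the curved image surface at every zoom setting.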

In addition to Huang and Rogers, other authors of the paper are Chaofeng Lu and Ming Li, from Northwestern; Inhwa Jung, Jianliang Xiao, Viktor Malyarchuk and Jongseung Yoon, from the University of Illinois; and Zhuangjian Liu, from the Institute of High Performance Computing, Singapore.


Source

Thursday, January 13, 2011

Fastest Movie in the World Recorded: Method to Film Nanostructures Developed

A"molecular movie" that shows how a molecule behaves at the crucial moment of a chemical reaction would help us better understand fundamental processes in the natural sciences. Such processes are often only a few femtoseconds long. A femtosecond is a millionth of a billionth of a second.

While it is possible to record a single femtosecond picture using an ultra-short flash of light, it has never been possible to take a sequence of pictures in such rapid succession. On a detector that captures the image, the pictures would overlap and "wash out." An attempt to swap or refresh the detector between two images would simply take too long, even if it could be done at the speed of light.

In spite of these difficulties, members of the joint research group "Functional Nanomaterials" of HZB and the Technische Universität Berlin have now managed to take ultrafast image sequences of objects mere micrometres in size using pulses from the X-ray laser FLASH in Hamburg, Germany. Furthermore, they chart a path for scaling their approach to nanometre resolution in the future. Together with colleagues from DESY and the University of Münster, they have published their results in the journal Nature Photonics.

The researchers came up with an elegant way to descramble the information superimposed by the two subsequent X-ray pulses: they encoded both images simultaneously in a single X-ray hologram. It takes several steps to obtain the final image sequence. First, the scientists split the X-ray laser beam into two separate beams. Using multiple mirrors, they force one beam to take a short detour, which causes the two pulses to reach the object under study at ever so slightly different times -- the two pulses arrive only 50 femtoseconds (0.00000000000005 seconds) apart. Due to a specific geometric arrangement of the sample, the pulses generate a "double hologram" that encodes the structure of the object at the two times at which the X-ray pulses hit. Using a mathematical reconstruction procedure, the researchers can then simply associate the images with the respective X-ray pulses and thus determine the image sequence in correct temporal order.
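The size of the mirror detour follows directly from the delay: the extra path length is just the speed of light times the time difference. A quick back-of-the-envelope calculation (the specific delay is the one quoted in the article; the mirror geometry itself is not described there):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def detour_length(delay_s):
    """Extra path length one beam must travel to arrive `delay_s`
    later than the other: path difference = c * delay."""
    return C * delay_s

delta_t = 50e-15  # 50 femtoseconds, the pulse separation in the experiment
extra_path_um = detour_length(delta_t) * 1e6
print(f"{extra_path_um:.1f} micrometres of extra path")
```

The answer, about 15 micrometres, also confirms the comparison quoted below: in 50 femtoseconds, light covers less than the width of a typical human hair.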

Using their method, the scientists recorded two pictures of a micro-model of the Brandenburg Gate, separated by only 50 femtoseconds. "In this short time interval, even a ray of light travels no further than the width of a human hair," says PhD student Christian Günther, the first author of the publication. The short-wavelength X-rays used allow extremely small details to be revealed: the shorter the wavelength of light you use, the smaller the objects you can resolve.

"The long-term goal is to be able to follow the movements of molecules and nanostructures in real time," says project head Prof. Dr. Stefan Eisebitt. The extremely high temporal resolution in conjunction with the possibility to see the tiniest objects was the motivation to develop the new technique. A picture may be worth a thousand words, but a movie made up of several pictures can tell you about an object's dynamics.


Source