
Toward optical quantum computing

Ordinarily, light particles — photons — don’t interact. If two photons collide in a vacuum, they simply pass through each other.

An efficient way to make photons interact could open new prospects for both classical optics and quantum computing, an experimental technology that promises large speedups on some types of calculations.

In recent years, physicists have enabled photon-photon interactions using atoms of rare elements cooled to very low temperatures.

But in the latest issue of Physical Review Letters, MIT researchers describe a new technique for enabling photon-photon interactions at room temperature, using a silicon crystal with distinctive patterns etched into it. In physics jargon, the crystal introduces “nonlinearities” into the transmission of an optical signal.

“All of these approaches that had atoms or atom-like particles require low temperatures and work over a narrow frequency band,” says Dirk Englund, an associate professor of electrical engineering and computer science at MIT and senior author on the new paper. “It’s been a holy grail to come up with methods to realize single-photon-level nonlinearities at room temperature under ambient conditions.”

Joining Englund on the paper are Hyeongrak Choi, a graduate student in electrical engineering and computer science, and Mikkel Heuck, who was a postdoc in Englund’s lab when the work was done and is now at the Technical University of Denmark.

Photonic independence

Quantum computers harness a strange physical property called “superposition,” in which a quantum particle can be said to inhabit two contradictory states at the same time. The spin, or magnetic orientation, of an electron, for instance, could be both up and down at the same time; the polarization of a photon could be both vertical and horizontal.
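
In standard textbook notation (an illustration, not drawn from the paper itself), a qubit in superposition is a weighted combination of its two basis states:

```latex
% A single qubit in superposition of the basis states |0> and |1>,
% with complex amplitudes alpha and beta.
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
% Measuring the qubit yields 0 with probability |alpha|^2
% and 1 with probability |beta|^2.
```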

If a string of quantum bits — or qubits, the quantum analog of the bits in a classical computer — is in superposition, it can, in some sense, canvass multiple solutions to the same problem simultaneously, which is why quantum computers promise speedups.

Most experimental qubits use ions trapped in oscillating electric fields, superconducting circuits, or — like Englund’s own research — defects in the crystal structure of diamonds. With all these technologies, however, superpositions are difficult to maintain.

Because photons aren’t very susceptible to interactions with the environment, they’re great at maintaining superposition; but for the same reason, they’re difficult to control. And quantum computing depends on the ability to send control signals to the qubits.

That’s where the MIT researchers’ new work comes in. If a single photon enters their device, it will pass through unimpeded. But if two photons — in the right quantum states — try to enter the device, they’ll be reflected back.

The quantum state of one of the photons can thus be thought of as controlling the quantum state of the other. And quantum information theory has established that simple quantum “gates” of this type are all that is necessary to build a universal quantum computer.
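
To make the “gate” picture concrete, here is the textbook form of one such gate (an illustration, not a result from the paper): a two-photon interaction of this general kind can realize a controlled-phase (CZ) gate, which flips the sign of the joint state only when both qubits are 1. Combined with arbitrary single-qubit rotations, CZ suffices for universal quantum computation.

```latex
% Controlled-Z (CZ) gate, written in the two-qubit basis
% {|00>, |01>, |10>, |11>}.
\[
  \mathrm{CZ} =
  \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & -1
  \end{pmatrix}
\]
```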

Computer system predicts products of chemical reactions

When organic chemists identify a useful chemical compound — a new drug, for instance — it’s up to chemical engineers to determine how to mass-produce it.

There could be 100 different sequences of reactions that yield the same end product. But some of them use cheaper reagents and lower temperatures than others, and perhaps most importantly, some are much easier to run continuously, with technicians occasionally topping up reagents in different reaction chambers.

Historically, determining the most efficient and cost-effective way to produce a given molecule has been as much art as science. But MIT researchers are trying to put this process on a more secure empirical footing, with a computer system that’s trained on thousands of examples of experimental reactions and that learns to predict what a reaction’s major products will be.

The researchers’ work appears in the American Chemical Society’s journal Central Science. Like all machine-learning systems, theirs presents its results in terms of probabilities. In tests, the system was able to predict a reaction’s major product 72 percent of the time; 87 percent of the time, it ranked the major product among its three most likely results.
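
For readers curious how such figures are tallied, here is a minimal sketch (hypothetical data and function names, not the authors’ code): top-k accuracy simply checks whether the recorded major product appears among a model’s k highest-ranked candidates.

```python
def top_k_accuracy(ranked_predictions, true_products, k):
    """Fraction of reactions whose recorded major product appears
    among the model's k highest-ranked candidate products."""
    hits = sum(
        1
        for ranked, truth in zip(ranked_predictions, true_products)
        if truth in ranked[:k]
    )
    return hits / len(true_products)

# Hypothetical example: each inner list is ranked most-likely-first.
ranked = [["C", "A", "B"], ["A", "B", "C"], ["B", "C", "A"]]
truth = ["C", "B", "A"]

print(top_k_accuracy(ranked, truth, k=1))  # top-1 accuracy: 1/3
print(top_k_accuracy(ranked, truth, k=3))  # top-3 accuracy: 3/3
```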

“There’s clearly a lot understood about reactions today,” says Klavs Jensen, the Warren K. Lewis Professor of Chemical Engineering at MIT and one of four senior authors on the paper, “but it’s a highly evolved, acquired skill to look at a molecule and decide how you’re going to synthesize it from starting materials.”

With the new work, Jensen says, “the vision is that you’ll be able to walk up to a system and say, ‘I want to make this molecule.’ The software will tell you the route you should make it from, and the machine will make it.”

With a 72 percent chance of identifying a reaction’s chief product, the system is not yet ready to anchor the type of completely automated chemical synthesis that Jensen envisions. But it could help chemical engineers more quickly converge on the best sequence of reactions — and possibly suggest sequences that they might not otherwise have investigated.

Jensen is joined on the paper by first author Connor Coley, a graduate student in chemical engineering; William Green, the Hoyt C. Hottel Professor of Chemical Engineering, who, with Jensen, co-advises Coley; Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science; and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science.

New 3-D chip combines computing and data storage

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature, by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on each chip, there is not enough room to place computing and storage side by side, even as the devices themselves continue to shrink (the miniaturization trend known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate they maintained for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, yielding the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 degrees Celsius. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Because sensing, data storage, and computing are layered on top of one another, the chip was able to measure each of the sensors in parallel and then write the results directly into its memory, providing enormous internal bandwidth, Shulaker says.
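
As a toy illustration of that data flow (invented numbers and names, not the chip’s actual interface), each gas produces a characteristic response pattern across the sensor array, and classification amounts to matching one parallel readout against known signatures:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_sensors = 1_000  # stand-in; the actual chip has over 1 million sensors

# Hypothetical reference signatures: each gas's mean response pattern
# across the sensor array.
signatures = {gas: rng.random(n_sensors) for gas in ("nitrogen", "helium", "argon")}

def classify(readout):
    """Match one parallel readout of the whole array to the nearest signature."""
    return min(signatures, key=lambda gas: np.linalg.norm(readout - signatures[gas]))

# Simulate one noisy parallel readout of the array while argon is present.
readout = signatures["argon"] + 0.05 * rng.standard_normal(n_sensors)
print(classify(readout))  # expected: argon
```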

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

HSA Connections

HSA Q&A with Dr. John Glossner

HSA computing standards have progressed significantly since the HSA Foundation (HSAF) was established in 2012. Today, for instance, not only are royalty-free open specifications available, but so are fully operational production systems.

Pictured: Representatives from newly joined HSA Foundation members in China

In this Q&A, Dr. John Glossner, HSA Foundation president, provides additional insights on HSA-specific trends and issues:

What are the connections/differences between heterogeneous computing, general-purpose computing, and specialized computing? If heterogeneous computing is the future, what will happen to general-purpose and specialized computing?

General-purpose computing is what you find in a CPU. It is meant to be able to process any function, but streaming data, as in artificial intelligence (AI) workloads, might not always be processed efficiently on a CPU.

Specialized computing would be a design made for one particular application, such as AI, but it would not be intended to run general-purpose code (sometimes called control code). A specialized accelerator typically has the advantage of much lower power consumption when executing its target application (e.g., AI).

Heterogeneous computing combines the best of both. It specifies how a CPU can talk to an accelerator, and it often finds both integrated onto the same silicon die. So heterogeneous processors of different types, such as CPUs, GPUs, DSPs, and specialized accelerators, are all integrated together and cooperate to achieve an ideal balance of performance and power consumption for a given application.

What is the ultimate goal for the HSAF? What needs to be done to achieve it?

The goal of the HSA Foundation is to make heterogeneous programming easier. That means creating standards that allow different types of processors to be programmed in the same language, from a single source file, with parts of the application automatically distributed to whichever processor is best suited to the computation.
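
As a conceptual sketch of that idea (toy names only; this is not the HSA runtime API), one program can hold both kinds of work while a small runtime routes each task to the processor best suited to it:

```python
# Toy model of single-source heterogeneous dispatch.

def run_on_cpu(task):
    return f"CPU ran: {task}"            # branchy, general-purpose control code

def run_on_accelerator(task):
    return f"accelerator ran: {task}"    # data-parallel streaming work

def dispatch(task, kind):
    """Route each task to the best-suited processor."""
    return run_on_cpu(task) if kind == "control" else run_on_accelerator(task)

program = [
    ("parse configuration", "control"),
    ("matrix multiply", "parallel"),
    ("branch on result", "control"),
    ("neural-network inference", "parallel"),
]

for task, kind in program:
    print(dispatch(task, kind))
```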

If research institutions and companies participate in establishing and promoting the standards of heterogeneous computing, will it affect their current development and solutions?

With open specifications and open source implementations of standards and tools, the Foundation’s hope is that it accelerates the pace of development and adoption of the technology. Corporations participating in HSAF enjoy royalty free access to all technologies developed.

The Foundation announced the formation of the China Regional Committee (CRC) in May. What were the motivations and goals in establishing the CRC and what is the connection/differences between CRC standards and HSA standards?

While the HSA Foundation has made a lot of progress, there are always regional considerations and research opportunities to improve current systems. Recently, China has become a leader in AI and other semiconductor technologies. With the emergence of low-latency applications such as AI and virtual reality (VR), the Foundation anticipates improvements to current specifications. As this is an area of research and development being led by China, it is natural to invite key scientists and companies from China to adopt and adapt the technologies and specifications.

How many local organizations have joined the CRC? What are members’ perspectives?

More than 30 members have joined the CRC to date. They comprise semiconductor companies, research universities and institutes (e.g., the Chinese Academy of Sciences), tool and algorithm designers, test and verification companies, and Chinese standardization groups.