
Toward optical quantum computing

Ordinarily, light particles — photons — don’t interact. If two photons collide in a vacuum, they simply pass through each other.

An efficient way to make photons interact could open new prospects for both classical optics and quantum computing, an experimental technology that promises large speedups on some types of calculations.

In recent years, physicists have enabled photon-photon interactions using atoms of rare elements cooled to very low temperatures.

But in the latest issue of Physical Review Letters, MIT researchers describe a new technique for enabling photon-photon interactions at room temperature, using a silicon crystal with distinctive patterns etched into it. In physics jargon, the crystal introduces “nonlinearities” into the transmission of an optical signal.

“All of these approaches that had atoms or atom-like particles require low temperatures and work over a narrow frequency band,” says Dirk Englund, an associate professor of electrical engineering and computer science at MIT and senior author on the new paper. “It’s been a holy grail to come up with methods to realize single-photon-level nonlinearities at room temperature under ambient conditions.”

Joining Englund on the paper are Hyeongrak Choi, a graduate student in electrical engineering and computer science, and Mikkel Heuck, who was a postdoc in Englund’s lab when the work was done and is now at the Technical University of Denmark.

Photonic independence

Quantum computers harness a strange physical property called “superposition,” in which a quantum particle can be said to inhabit two contradictory states at the same time. The spin, or magnetic orientation, of an electron, for instance, could be both up and down at the same time; the polarization of a photon could be both vertical and horizontal.

If a string of quantum bits — or qubits, the quantum analog of the bits in a classical computer — is in superposition, it can, in some sense, canvass multiple solutions to the same problem simultaneously, which is why quantum computers promise speedups.

Most experimental qubits use ions trapped in oscillating electromagnetic fields, superconducting circuits, or — like Englund’s own research — defects in the crystal structure of diamonds. With all these technologies, however, superpositions are difficult to maintain.

Because photons aren’t very susceptible to interactions with the environment, they’re great at maintaining superposition; but for the same reason, they’re difficult to control. And quantum computing depends on the ability to send control signals to the qubits.

That’s where the MIT researchers’ new work comes in. If a single photon enters their device, it will pass through unimpeded. But if two photons — in the right quantum states — try to enter the device, they’ll be reflected back.

The quantum state of one of the photons can thus be thought of as controlling the quantum state of the other. And quantum information theory has established that simple quantum “gates” of this type are all that is necessary to build a universal quantum computer.

Computer system predicts products of chemical reactions

When organic chemists identify a useful chemical compound — a new drug, for instance — it’s up to chemical engineers to determine how to mass-produce it.

There could be 100 different sequences of reactions that yield the same end product. But some of them use cheaper reagents and lower temperatures than others, and perhaps most importantly, some are much easier to run continuously, with technicians occasionally topping up reagents in different reaction chambers.

Historically, determining the most efficient and cost-effective way to produce a given molecule has been as much art as science. But MIT researchers are trying to put this process on a more secure empirical footing, with a computer system that’s trained on thousands of examples of experimental reactions and that learns to predict what a reaction’s major products will be.

The researchers’ work appears in the American Chemical Society’s journal Central Science. Like all machine-learning systems, theirs presents its results in terms of probabilities. In tests, the system was able to predict a reaction’s major product 72 percent of the time; 87 percent of the time, it ranked the major product among its three most likely results.
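
As a point of reference for how such figures are scored, the short Python sketch below shows one conventional way to compute top-1 and top-3 accuracy for a system that ranks candidate products by probability. The function name and the toy reaction data are illustrative assumptions, not material from the paper.

    # Illustrative sketch (not the authors' code): scoring a model that ranks
    # candidate products from most to least likely.
    def top_k_accuracy(ranked_predictions, true_products, k):
        """ranked_predictions: one ranked list of candidates per reaction.
        true_products: the experimentally observed major product for each."""
        hits = sum(1 for ranked, truth in zip(ranked_predictions, true_products)
                   if truth in ranked[:k])
        return hits / len(true_products)

    # Hypothetical toy data standing in for real reaction outcomes.
    predictions = [["ester", "acid", "ether"], ["amide", "amine", "imine"]]
    observed = ["ester", "amine"]

    print(top_k_accuracy(predictions, observed, k=1))  # top-1 accuracy: 0.5
    print(top_k_accuracy(predictions, observed, k=3))  # top-3 accuracy: 1.0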

“There’s clearly a lot understood about reactions today,” says Klavs Jensen, the Warren K. Lewis Professor of Chemical Engineering at MIT and one of four senior authors on the paper, “but it’s a highly evolved, acquired skill to look at a molecule and decide how you’re going to synthesize it from starting materials.”

With the new work, Jensen says, “the vision is that you’ll be able to walk up to a system and say, ‘I want to make this molecule.’ The software will tell you the route you should make it from, and the machine will make it.”

With a 72 percent chance of identifying a reaction’s chief product, the system is not yet ready to anchor the type of completely automated chemical synthesis that Jensen envisions. But it could help chemical engineers more quickly converge on the best sequence of reactions — and possibly suggest sequences that they might not otherwise have investigated.

Jensen is joined on the paper by first author Connor Coley, a graduate student in chemical engineering; William Green, the Hoyt C. Hottel Professor of Chemical Engineering, who, with Jensen, co-advises Coley; Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science; and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science.

New 3-D chip combines computing and data storage

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature, by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate they sustained for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 degrees Celsius. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

HSA Connections

HSA Q&A with Dr. John Glossner

HSA computing standards have progressed significantly since the HSA Foundation (HSAF) was established in 2012. Today, for instance, there are not only royalty-free open specifications available but also fully operational production systems.

Pictured: Representatives from newly joined HSA Foundation members in China

In this Q&A, Dr. John Glossner, HSA Foundation president, provides additional insights on HSA-specific trends and issues:

What are the connections/differences between heterogeneous computing, general purpose computing and specialized computing? If heterogeneous computing is the future, what will happen to general purpose computing and specialized computing?

General purpose computing is what you find in a CPU. It is meant to be able to process any function, but streaming data, such as artificial intelligence (AI) workloads, might not always be processed efficiently on a CPU.

Specialized computing would be a design made for one particular application, such as AI, but it would not be intended to run general-purpose code (sometimes called control code). The specialized accelerator typically has the advantage of much lower power consumption when executing its special-purpose application (e.g., AI).

Heterogeneous computing combines the best of both. It specifies how a CPU can talk to an accelerator, and both are often integrated onto the same silicon die. So heterogeneous processors of different types, such as CPUs, GPUs, DSPs, and specialized accelerators, are all integrated together and cooperate to achieve an ideal balance of performance and power consumption for a given application.

What is the ultimate goal for the HSAF? What needs to be done to achieve it?

The goal of the HSA Foundation is to make heterogeneous programming easier. That means creating standards that allow different types of processors to be programmed in the same language, using one single source file, and then automatically distributing parts of the application to the best processor to do the computing.

If research institutions and companies participate in establishing and promoting the standards of heterogeneous computing, will it affect their current development and solutions?

With open specifications and open source implementations of standards and tools, the Foundation’s hope is that it accelerates the pace of development and adoption of the technology. Corporations participating in HSAF enjoy royalty-free access to all technologies developed.

The Foundation announced the formation of the China Regional Committee (CRC) in May. What were the motivations and goals in establishing the CRC and what is the connection/differences between CRC standards and HSA standards?

While the HSA Foundation has made a lot of progress, there are always regional considerations and research opportunities to improve current systems. Recently China has become a leader in AI and other semiconductor technologies. With the emergence of low-latency applications such as AI and virtual reality (VR), the Foundation anticipates improvements to current specifications. As this is an area of research and development being led by China, it is natural to invite key scientists and companies from China to adopt and adapt technologies and specifications.

How many local organizations have joined the CRC? What are members’ perspectives?

More than 30 members have joined the CRC to date. They include semiconductor companies, research universities and institutes (e.g., the Chinese Academy of Sciences), tool and algorithm designers, test and verification organizations, and Chinese standardization groups.

Careers in Emerging Technologies

For this ComputingEdge issue, we focus on emerging technologies as they relate to an increasingly popular career transition for computing professionals—the shift from industry to academia. Prior to obtaining a full professorship in information systems at California State University Fullerton, Sorel Reisman held senior management positions at IBM, Toshiba, and EMI in the US and Canada. He served as 2011 IEEE Computer Society president and is currently a member of both the IEEE Publications Services and Products Board and the IEEE Education Activities Board.
ComputingEdge: You spent considerable time in industry working for multinational companies, starting as an engineer and rising to vice president of development. Why did you leave for academia?
Reisman: When I finished graduate school, I fully intended to pursue an academic career. But academic positions were in short supply at the time, so I looked for a job in industry. And once there, I got caught up in the dynamic of raises and promotions. However, work in industry proved unstable, and unsuccessful company campaigns and projects encouraged me to change jobs several times.
Eventually, a friend who was a physics professor told me about a tenure-track position in Cal State Fullerton’s business school. Having had enough of the insecurity of everything related to the computer industry, I applied for and was hired as an associate professor. Seven years later, the university gave me tenure and promoted me to full professor.
ComputingEdge: What are the opportunities in academia for pursuing an interest in emerging technologies?
Reisman: An academic can pursue an interest in emerging technologies in three areas, which align with the three criteria used to assess a professor’s work performance: research, teaching, and service.
When I entered academia, I didn’t have a specific research agenda. However, I’d been involved with multimedia computing in industry, and it seemed reasonable to work in that area again as a professor. How much real research you can do in emerging technologies depends on the funding available for that kind of work. In industry, companies typically fund their own projects. In academia, on the other hand, external grants typically fund research. Your success in obtaining a grant determines your ability to pursue your interest in emerging technologies.
In terms of instruction, undergraduate courses are limited in opportunities to teach emerging technologies, but there’s more freedom to do this with graduate-level courses. At the undergraduate level, accredited academic departments must adhere to a prescribed set of topics to provide students with foundational knowledge. You can introduce newer, advanced topics in some courses, but you don’t have time to deal with them in depth.
You can do as I did and propose optional courses related to an emerging technology, but there might not be enough interested students to justify the university offering the class. I created a course called “personal computer systems and architectures” that became very popular, just as PCs were being widely adopted. I also designed a class on e-commerce systems—which was just starting to become an emerging topic of interest—that was popular with graduate students.
Service means volunteer personal or professional community work. I chose to invest my service time with the IEEE Computer Society, which has paid off immeasurably. I’ve learned about many new technologies, and my involvement with the IEEE and Computer Society digital libraries has enabled me to bring new ideas and concepts to the work I do at the university.
ComputingEdge: Can you describe one or two major differences between working in industry versus working in an academic environment?
Reisman: The environments are completely different. For example, they don’t work with the same decision-making time frames. Industry tends to make decisions and act on them much more quickly than academia.
Also, in industry, you usually work on common objectives as a team with others. As a professor, you typically don’t work in teams, and your objectives may be completely different from those of your colleagues.
ComputingEdge: Are you glad to have moved from industry to academia?
Reisman: Absolutely! Probably the best thing about academia is the freedom to pursue your own interests, whether personal or professional. Also, unlike industry, whose projects go on for a long time, academic work is divided into shorter segments: semesters. If you’re teaching a class you don’t like, it ends with the semester. And you can change courses over time to make the experience better. In addition, breaks between semesters and school years enable you to recharge your batteries.
ComputingEdge: What important advice would you give colleagues considering moving from industry to academia?
Reisman: Don’t assume that because you held senior management positions in industry, your academic colleagues will value your industry accomplishments. The kinds of skills that helped you achieve those positions are mostly irrelevant in academia, unless you take an administrative job.

The Real Future of Quantum Computing?

Instead of creating quantum computers based on qubits that can each adopt only two possible options, scientists have now developed a microchip that can generate “qudits” that can each assume 10 or more states, potentially opening up a new way to create incredibly powerful quantum computers, a new study finds.

Classical computers switch transistors either on or off to symbolize data as ones and zeroes. In contrast, quantum computers use quantum bits, or qubits, which, because of the bizarre nature of quantum physics, can be in a state of superposition where they simultaneously act as both 1 and 0.

The superpositions that qubits can adopt let them each help perform two calculations at once. If two qubits are quantum-mechanically linked, or entangled, they can help perform four calculations simultaneously; three qubits, eight calculations; and so on. As a result, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the known universe, solving certain problems much faster than classical computers. However, superpositions are extraordinarily fragile, making it difficult to work with multiple qubits.
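
A quick back-of-the-envelope check in Python makes the 300-qubit comparison concrete; the 10^80 figure for atoms in the observable universe is a commonly cited rough estimate, not a number taken from the article.

    # 300 entangled qubits span 2**300 basis states, which dwarfs the roughly
    # 10**80 atoms commonly estimated for the observable universe.
    n_qubits = 300
    state_space = 2 ** n_qubits
    atoms_in_universe = 10 ** 80  # rough order-of-magnitude estimate

    print(state_space)                      # an integer of about 2.0e90
    print(state_space > atoms_in_universe)  # True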

Most attempts at building practical quantum computers rely on particles that serve as qubits. However, scientists have long known that they could in principle use qudits, each of which can occupy more than two states simultaneously. In principle, a quantum computer with two 32-state qudits, for example, would be able to perform as many operations as 10 qubits while skipping the challenges inherent in working with 10 qubits together.
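
The arithmetic behind that equivalence is easy to verify, since two 32-level qudits address 32^2 = 1,024 basis states, exactly the 2^10 states addressed by ten qubits; a one-line Python check:

    # Two 32-state qudits vs. ten 2-state qubits: same size of state space.
    print(32 ** 2, 2 ** 10, 32 ** 2 == 2 ** 10)  # 1024 1024 True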

Problems With Current Ticketing Systems

Ticketing systems (or issue tracking systems) are a convenient way to help your customers with tough problems, and help your development team find and address bugs faster. For example, you may use an email ticketing system to automatically notify your team when a user submits a potential issue; from there, you can have an individual address the issue, and mark it as resolved in a central database, along with notes on what they fixed (if they fixed anything) and how.
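
To make that workflow concrete, here is a minimal sketch of a ticket lifecycle in Python; the class, field names, and sample data are hypothetical and merely stand in for whatever schema a real ticketing platform exposes.

    # A toy model of the flow described above: a ticket is opened, claimed by a
    # team member, and resolved with notes in a central record.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Ticket:
        ticket_id: int
        description: str
        status: str = "open"            # open -> assigned -> resolved
        assignee: Optional[str] = None
        resolution_notes: str = ""

        def assign(self, person: str) -> None:
            self.assignee = person
            self.status = "assigned"

        def resolve(self, notes: str) -> None:
            self.resolution_notes = notes
            self.status = "resolved"

    # Usage: a report comes in, someone claims it, then closes it with notes.
    ticket = Ticket(ticket_id=101, description="Login page returns 500 error")
    ticket.assign("dev-on-call")
    ticket.resolve("Fixed session handling; deployed in release 1.4.2")
    print(ticket.status, ticket.assignee)  # resolved dev-on-call
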
However, like all modern technologies, current ticketing systems aren’t perfect and can cause headaches if you aren’t prepared for their potential downsides.
Biggest Problems With Modern Ticketing Systems
These are some of the most common issues that development teams and customer service representatives face:

Documenting the ticket flow. Let’s say you have a new issue tracking system in place, and it automatically notifies everyone on your development team when there’s a ticket. What happens then? Is someone supposed to log into the platform and claim the issue as their own? Should there be a discussion over chat? If your ticket flow process isn’t clear, you’ll likely end up duplicating efforts or you’ll have a host of unresolved tickets that never see any further action. To this end, you’ll need to create and document a standard operating procedure that everyone can follow. Documentation is important because it gives all members of the team a consistent resource to reference; that way, if there’s ever an argument or discrepancy, you can check the document for clarity. It’s also useful for training purposes.

Bad UI. Some ticketing systems’ user interfaces (UI) are downright abysmal. Once logged in, you’re left in a dashboard with dozens of unclear options, and no intuitive tools to tell you what to do next or how to do it. Obviously, you’ll need to train your employees on how to use the system the way you intend them to, but overall, it should be fairly intuitive. If nothing else, the system should be customizable enough for you to remove some of the features that you don’t immediately need, and/or add some of the features that aren’t already present.

Poor descriptions from customers. Most issue tracking systems only do the grunt work of bringing you the issues that customers are reporting—and sometimes, customers aren’t articulate or specific about what they’re noticing. If you want your issue tracking system to be more efficient, and worthwhile for your employees to use, you’ll need to prompt your customers for more specific information, and give your developers tools they can use to deal with tickets that don’t immediately make sense.

Inconsistent training. Another problem with ticketing systems comes into play when you have too many team members working on the same platform—and some newbies thrown into the mix. Different people will likely have different preferences and different intuitive drives, and on top of that, they’ll have different styles of training. Some might leave detailed notes with their tickets while others leave none at all. There are many feasibly effective approaches to ticket management, but you need to be consistent if you want yours to work—and that consistency can only come from consistent training.

Feedback holes. Do you have a plan in place to collect feedback from customers submitting issues? Are you listening to feedback from your team? Chances are, your issue tracking setup won’t be perfect on the first go; you’ll need to carefully and attentively listen to your customers and employees alike if you want to find the holes and patch them with alternative workflows and ongoing changes. You can do this by creating anonymous feedback submission forms, or simply by having open conversations with your team members. Don’t continue using a platform that continues to cause headaches for your team.

Is the Problem Really With Ticketing Systems?
As you’ve undoubtedly noticed, the majority of the problems listed above aren’t inherent to ticketing systems; instead, they’re flaws in the way that companies implement and use ticketing systems. It’s important for you to take your time considering different issue tracking systems, and choose the best option for your team, but beyond that, you need to understand that no ticket system will be effective on its own. You need to have the right people and processes in place to make the most of that system, or its benefits will be minimal.

Quantum Computing Secret

You may not need a quantum computer of your own to securely use quantum computing in the future. For the first time, researchers have shown how even ordinary classical computer users could remotely access quantum computing resources online while keeping their quantum computations securely hidden from the quantum computer itself.

Tech giants such as Google and IBM are racing to build universal quantum computers that could someday analyze millions of possible solutions much faster than today’s most powerful classical supercomputers. Such companies have also begun offering online access to their early quantum processors as a glimpse of how anyone could tap the power of cloud-based quantum computing. Until recently, most researchers believed that there was no way for remote users to securely hide their quantum computations from prying eyes unless they too possessed quantum computers. That assumption is now being challenged by researchers in Singapore and Australia through a new paper published in the 11 July issue of the journal Physical Review X.

“Frankly, I think we are all quite surprised that this is possible,” says Joseph Fitzsimons, a theoretical physicist at the Centre for Quantum Technologies at the National University of Singapore and principal investigator on the study. “There had been a number of results showing that it was unlikely for a classical user to be able to hide [delegated quantum computations] perfectly, and I think many of us in the field had interpreted this as evidence that nothing useful could be hidden.”

The technique for helping classical computer users hide their quantum computations relies upon a particular approach known as measurement-based quantum computing. Quantum computing’s main promise relies upon leveraging quantum bits (qubits) of information that can exist as both 1s and 0s simultaneously—unlike classical computing bits that exist as either 1 or 0. That means qubits can simultaneously represent and process many more states of information than classical computing bits.

In measurement-based quantum computing, a quantum computer puts all its qubits into a particular state of quantum entanglement so that any changes to a single qubit affect all the qubits. Next, the qubits are measured one by one, in an order that specifies the program being run on the quantum computer. A remote user can provide step-by-step instructions for each qubit’s measurement that encode both the input data and the program being run. Crucially, each measurement depends on the outcome of previous measurements.
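
To illustrate only the classical control flow described here, the Python sketch below measures "qubits" in a fixed order and adapts each measurement setting to the previous outcome. It does not simulate any quantum mechanics; the placeholder measure() function returns random outcomes in place of real hardware, and the angles are arbitrary.

    # Schematic of measurement-based feedforward: each measurement setting is
    # corrected using earlier outcomes. This is NOT a quantum simulation.
    import math
    import random

    def measure(qubit_index: int, angle: float) -> int:
        """Placeholder for a measurement on real hardware; returns 0 or 1."""
        return random.randint(0, 1)

    def run_pattern(base_angles):
        """base_angles encodes the 'program'; later settings depend on outcomes."""
        outcomes = []
        for i, angle in enumerate(base_angles):
            # Adapt the angle using the previous outcome (classical feedforward).
            correction = math.pi if outcomes and outcomes[-1] == 1 else 0.0
            outcomes.append(measure(i, angle + correction))
        return outcomes

    print(run_pattern([0.0, math.pi / 4, math.pi / 2]))  # e.g. [1, 0, 1]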

Fitzsimons and his colleagues figured out how to exploit this step-wise approach to quantum computing and achieve a new form of “blind quantum computation” security. They showed how remote users relying on classical computers can hide the meaning behind each step of the measurement sequence from the quantum computer performing the computation. That means the owner of the quantum computer cannot tell the role of each measurement step and which qubits were used for inputs, operations, or outputs.

The finding runs counter to previous assumptions that it was impossible to guarantee data privacy for users relying on ordinary classical computers to remotely access quantum computers. But Fitzsimons says that early feedback to the group’s work has been “very positive” because the proposed security mechanism—described as the “flow ambiguity effect”—is fairly straightforward.

Protecting Your Computer with Free Software

Q. Are those free PC antivirus programs safe to use?

A. The web is full of choices, but if you are looking for free protection for your computer, go with a program from an established security software company. You can find roundups and reviews online, and the AV-Test.org site has a list of well-known software creators. Programs that pepper your screen with pop-ups or try to convince you that your computer is full of worms and viruses are often spyware or scams themselves.

Several companies offer free basic versions of their more complete security suites to home users — including Avast, AVG, Bitdefender, Sophos and ZoneAlarm. As the range of malicious software has expanded to other computing platforms, some companies now offer free tools for the Mac and mobile platforms as well; Malwarebytes Anti-Malware for Mac is among the options. Free apps that specifically protect against ransomware (like Bitdefender’s Anti-Ransomware Tool for Windows) can also be found.

When browsing for software, make sure you are actually getting a copy of the company’s free antivirus tool — and not just the free trial version of a more comprehensive paid program. Depending on the program, you may be asked to share user data for research or see ads and upgrade offers within the free software. Paid versions typically provide more comprehensive protections, like network or game scanning.

Microsoft makes its own antivirus software for its Windows systems. If it is not already installed, Windows 7 users can download the Microsoft Security Essentials program from the company’s site. The current version of Windows 10 comes with the Windows Defender Security Center for blocking viruses and other threats; go to the Settings app and open the Update & Security icon to check your coverage. (Apple builds protective features like app screening and anti-phishing alerts into its Mac OS software, but a third-party program goes further.)

Security software can help block malicious code from invading your computer, but be on guard for more socially engineered attacks from email and other online sources. StaySafeOnline.org has a guide to spam and phishing lures, and other threats to avoid.

The Computer Memory Terminal

COMMUNITY MEMORY is the name we give to this experimental information service. It is an attempt to harness the power of the computer in the service of the community. We hope to do this by providing a sort of super bulletin board where people can post notices of all sorts and can find the notices posted by others rapidly.

We are Loving Grace Cybernetics, a group of Berkeley people operating out of Resource One Inc., a non-profit collective located in Project One in S.F. Resource One grew out of the San Francisco Switchboard and has managed to obtain control of a computer (XDS 940) for use in communications.

Pictured above is one of the Community Memory teletype terminals. The first was installed at Leopold’s Records, a student-run record store in Berkeley. The terminal connected by modem to a time-sharing computer in San Francisco, which hosted the electronic bulletin-board system. Users could exchange brief messages about a wide range of topics: apartment listings, music lessons, even where to find a decent bagel. Reading the bulletin board was free, but posting a listing cost a quarter, payable by the coin-op mechanism. The terminals offered many users their first interaction with a computer.

Among the volunteers who made up Loving Grace Cybernetics and Resource One was Lee Felsenstein, who would go on to help establish the Homebrew Computer Club and who played a number of other pioneering roles in the nascent personal computing industry. For Felsenstein, Community Memory was important for, among other things, opening “the door to cyberspace.”