A replacement for the ordinary transistor may make it to market by the end of this decade, an event that would herald a radical redesign of traditional computer architectures. The memristor, the subject of much study over the past six years, could become the basic building block for an array of new devices, from the sensors and memory chips being built into the “Internet of Things” (connected, sensor-embedded devices) to the giant computers used for big data applications by scientists, engineers and Wall Street.
Today, and for the past 50 years, computers have worked by processing data in fast dynamic memory and pushing it down wires—input/output channels—to slower-speed permanent disk storage. Memristors may combine into a single device the best characteristics of both dynamic memory (the RAM in a desktop computer) and hard drives or flash memory chips, which retain data when the electricity goes off.
The original idea dates back to the late 1990s, when Senior HP Fellow Stan Williams set up Hewlett-Packard's Information and Quantum Systems Laboratory to scope out the next two decades of computing. For 40 years the industry has relied on its ability to manufacture ever-shrinking, ever-cheaper transistors in line with Moore's Law (the observation by Intel co-founder Gordon Moore that the number of transistors that can fit on a chip doubles about every two years).
Williams' team accordingly began by studying ever smaller transistors, which led them to consider what would happen when the devices shrank to the size of individual molecules, at which point the movement of a single atom would affect performance. At that scale the researchers encountered an effect they did not understand until 2008, when one of the team's members read a paper written more than 35 years earlier by Leon Chua, a professor of electrical engineering and computer sciences at the University of California, Berkeley.
In it Chua argued that the memristor belongs alongside the resistor, capacitor and inductor as a fourth fundamental circuit element. Williams recognized that his team was seeing Chua's prediction materialize in a thin film of titanium dioxide. Subsequently, others have joined the search. In 2012 HRL Laboratories, a research facility jointly owned by General Motors and Boeing, announced the first successfully functioning memristor array, built with the complementary metal-oxide-semiconductor (CMOS) manufacturing process used for most electronic devices.
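Chua's reasoning can be sketched with the four basic circuit variables: charge $q$, flux $\varphi$, voltage $v$ and current $i$. The definitions $\mathrm{d}q = i\,\mathrm{d}t$ and $\mathrm{d}\varphi = v\,\mathrm{d}t$ already link two pairs of variables, and each of the three classical elements relates one more pair; the memristor supplies the one pairing left over (the notation here follows the standard textbook presentation, not Chua's paper verbatim):

```latex
\begin{align*}
  \mathrm{d}v       &= R\,\mathrm{d}i  && \text{resistor}\\
  \mathrm{d}q       &= C\,\mathrm{d}v  && \text{capacitor}\\
  \mathrm{d}\varphi &= L\,\mathrm{d}i  && \text{inductor}\\
  \mathrm{d}\varphi &= M\,\mathrm{d}q  && \text{memristor: the missing relation}
\end{align*}
```

One consequence is worth noting: because $\mathrm{d}\varphi = M\,\mathrm{d}q$, the memristance $M$ depends on the total charge that has flowed through the device, which is why a memristor "remembers" its history even after power is removed.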
The old and the new electronics function in fundamentally different ways. Transistors toggle between two states, on and off, whereas memristors, like other analog devices, can occupy a range of states in between. Developers had expected memristor development to proceed more quickly than it has. In 2010 HP predicted that memristor devices would reach the market this year. Not likely, according to Kirk Bresniker, HP Labs chief architect and HP fellow: the devices still need more work before they are ready for commercial release. HP and its development partners are still scouring the periodic table for the precise combination of elements, and the specific manufacturing processes, that will produce the strongest memristive effect and preserve data reliably. They also want to incorporate the technology into standard CMOS chips that can be mass-manufactured at a reasonable cost.
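The in-between states follow from the physics of the titanium dioxide film. In the linear dopant-drift model the HP team published in 2008, the device's resistance depends on how far the boundary between its doped and undoped regions has moved, which in turn depends on the charge that has flowed through it. The sketch below simulates that model under a sine-wave drive; the parameter values are illustrative assumptions, not measured device data:

```python
# Minimal sketch of the linear dopant-drift memristor model (after the HP
# team's 2008 description). All parameter values are illustrative guesses.
import math

D     = 10e-9   # device thickness (m)
R_ON  = 100.0   # resistance when fully doped (ohms)
R_OFF = 16e3    # resistance when undoped (ohms)
MU_V  = 1e-14   # assumed dopant mobility (m^2 / (V s))

def simulate(v_amp=2.0, freq=1.0, steps=20000, t_end=2.0, w0=0.1):
    """Drive the device with a sine voltage; return the span of resistances seen."""
    dt = t_end / steps
    w = w0 * D                  # position of the doped/undoped boundary
    resistances = []
    for n in range(steps):
        t = n * dt
        # Memristance: a weighted blend of the fully-on and fully-off values.
        m = R_ON * (w / D) + R_OFF * (1 - w / D)
        i = v_amp * math.sin(2 * math.pi * freq * t) / m
        w += MU_V * (R_ON / D) * i * dt   # linear drift of the boundary
        w = min(max(w, 0.0), D)           # keep the boundary inside the device
        resistances.append(m)
    return min(resistances), max(resistances)

r_lo, r_hi = simulate()
```

Running it shows the resistance sweeping continuously between its starting value and a much lower one as charge flows back and forth, rather than snapping between two fixed levels the way a transistor does.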
Meanwhile the concept of what can be built with memristors has continued to evolve. At HP's Discover conference in mid-June, company chief technology officer Martin Fink outlined a new architecture he calls, simply, “The Machine.” It consists of a pool of memory circuits linked by optical fibers, rather than copper wires, to highly efficient special-purpose processors.
The industry has several goals in making the shift. Memristors can vastly improve the energy efficiency of electronic components, and they are better able to cope with the floods of data expected from the Internet of Things, whose devices monitor or control equipment and systems in factories, office buildings and homes. Memristors are also seen as essential to continuing the exponential growth in computing power and storage density that has driven prices down over the past 40 years. For similar reasons, IBM has just announced that it will spend $3 billion to pursue experimental “post-silicon” architectures and chips, predicting a fundamental revamping of existing systems within 10 years.
These changes will force a fundamental overhaul of computer operating systems to accommodate hardware that no longer differentiates between dynamic memory and long-term storage. Bresniker sees the change as an opportunity to jettison layers of cumbersome operating system code adopted over the years to work around the limitations of older hardware.
HP’s current development timetable has memristors going into the earliest stage of production in 2015 and launching as DIMMs (dual in-line memory modules) for computer memory in 2016. The operating system for “The Machine” will go into wider public beta testing in 2017, and the new architecture is intended to be integrated into actual products in 2019. Even if none of this pans out, Bresniker believes the attempt is worth it: “Each of the elements is interesting…[on its own]. Pulling out that copper and dropping in that piece of fiber will be more efficient, even with a traditional computing and memory regime all around it…. We need a replacement memory technology. If it does nothing else than drop in where my DIMMs drop in today, that will be a useful thing.”