|An optical neural network is a physical implementation of an artificial neural network with optical components / Photo Credit: Sebastien Decoret via 123rf|
Since the dawn of digital computing, scientists have strived to build artificial neural networks, machine-learning models loosely inspired by biological brains that can solve difficult problems. These models are used for a wide range of tasks, including powering driverless cars, medical imaging, drug development, natural language processing, and robotic object identification.
These neural networks grow more complex every day and consume enormous amounts of power. Researchers and major tech companies have therefore developed chips called "AI accelerators" that significantly improve the speed and efficiency of training and testing neural networks. Recently, researchers from MIT developed a novel "photonic chip" that can process massive neural networks millions of times more efficiently than today's computers.
The MIT researchers have also begun developing photonic accelerators for optical neural networks, physical implementations of artificial neural networks built from optical components. The Robot Report, an online site that provides robotics news, research, analysis, and investment tracking for engineering, technology, and business professionals, noted that these new AI accelerators can drastically reduce both power consumption and chip area by using more compact optical components and optical signal-processing techniques.
The MIT photonic accelerator can process neural networks while consuming about 1,000 times less energy than the limit of existing photonic accelerators, and more than 10 million times less than the energy-consumption limit of traditional, electrical-based accelerators. In a statement, Ryan Hamerly, a postdoc in the Research Laboratory of Electronics, said, “People are looking for technology that can compute beyond the fundamental limits of energy consumption. Photonic accelerators are promising…but our motivation is to build a [photonic accelerator] that can scale up to large neural networks.”
|Scientists nowadays are both devoted and eager to build artificial neural networks / Photo Credit: whitehoune via 123rf|
Multilayer All-Optical Artificial Neural Network
In risk management, pattern recognition, and other complex tasks, the human brain is still considered more capable than the most powerful computers. Researchers have tried to create neural networks that could run such tasks at the speed of light, but they have found it difficult to translate nonlinear activation functions, key mathematical components of artificial neurons, from the electronic to the optical realm. To overcome this, researchers developed an all-optical artificial neural network and applied it to a complex simulation.
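The difficulty described above comes down to the role of the nonlinear activation function. A minimal NumPy sketch (with illustrative made-up weights, not values from the research) shows why the nonlinearity is indispensable: without it, stacked layers collapse into a single linear map, no matter how many there are.

```python
import numpy as np

# Two linear layers with no activation in between collapse into one:
W1 = np.array([[1.0, -1.0],
               [2.0,  0.0]])
W2 = np.array([[1.0, 1.0]])
x = np.array([1.0, 2.0])

stacked = W2 @ (W1 @ x)      # two-layer "network", purely linear
collapsed = (W2 @ W1) @ x    # equivalent single linear layer
assert np.allclose(stacked, collapsed)

# A nonlinearity between the layers breaks this equivalence, which is
# why artificial neurons need a nonlinear activation such as ReLU:
relu = lambda z: np.maximum(z, 0.0)
nonlinear = W2 @ relu(W1 @ x)
print(float(stacked[0]), float(nonlinear[0]))  # → 1.0 2.0
```

An optical implementation must reproduce this in-between nonlinear step with light alone, which is exactly what the EIT-based approach described below attempts.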
The all-optical artificial neural network combines linear functions, powered by spatial light modulators, with nonlinear activation functions based on the quantum interference effect known as electromagnetically induced transparency (EIT). Science Daily, an American website that aggregates and publishes lightly edited press releases about science, said that this first-of-its-kind multilayer, all-optical artificial neural network can tackle complex problems that are beyond the reach of traditional computational approaches.
The researchers, from the Hong Kong University of Science and Technology, designed the multilayer all-optical artificial neural network to be faster and consume less power than traditional computers, which rely on extensive computational resources that are both time-consuming and energy-intensive. Because conventional nonlinear optics typically requires high-power lasers that are hard to implement in an optical neural network, they instead used cold atoms with electromagnetically induced transparency to perform the nonlinear functions.
Shengwang Du, a member of the research team, explained that the light-induced effect is based on nonlinear quantum interference, which is why it can be achieved even with weak laser power. He added that the system could potentially be extended into a quantum neural network, which could solve problems that are hard to handle with classical methods.
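One way to picture such an optical nonlinearity is as a saturable transmission curve applied after a linear mixing step. The toy sketch below assumes a simple saturating function and made-up weights; it is a qualitative stand-in for the idea, not the team's actual EIT response.

```python
import numpy as np

def eit_like_activation(p, p_sat=1.0):
    # Toy saturable transmission curve standing in for an optical
    # nonlinearity: output grows with input power, then saturates.
    # The shape and the p_sat parameter are illustrative assumptions,
    # not measured physics.
    return p / (p + p_sat)

# One "optical layer": a linear mixing step (the kind of operation a
# spatial light modulator can implement) followed by the element-wise
# nonlinear transmission.
W = np.array([[0.6, 0.4],
              [0.3, 0.7]])
p_in = np.array([2.0, 0.5])

out = eit_like_activation(W @ p_in)
print(out.round(3))  # → [0.583 0.487]
```

Stacking several such layers gives a multilayer network in which every step, linear and nonlinear alike, could in principle be carried out by light.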
"Our all-optical scheme could enable a neural network that performs optical parallel computation at the speed of light while consuming little energy. Large-scale, all-optical neural networks could be used for applications ranging from image recognition to scientific research,” said Junwei Liu, a member of the research team.
A Device that Processes Information at the Speed of Light
Humans are often said to process visual information better than any machine: the brain reportedly processes images 60,000 times faster than text, and 90 percent of the information transmitted to the brain is visual. On a larger scale, we generate an estimated 2.5 quintillion bytes of data every single day. Researchers from the UCLA Samueli School of Engineering took this as inspiration for improving optical neural networks that identify and process information at the speed of light.
According to Science Daily, the researchers took advantage of the parallelization and scalability of optical computational systems to develop optical neural networks that could lead to intelligent camera systems. These could also be used in systems like self-driving cars or robots, helping them make decisions faster while using less power than computer-based systems. Optical neural networks could also enable intelligent camera designs that extract information simply from the patterns of light passing through a 3D-engineered material structure.
The researchers substantially increased the device's accuracy by adding a second set of detectors to the system, which improved prediction accuracy for objects the optical neural network had not seen before. To test the system's accuracy, they used image datasets of handwritten digits, items of clothing, and a broader set of vehicles and animals known as the CIFAR-10 image dataset, obtaining image-recognition accuracies of 98.6 percent, 91.1 percent, and 51.4 percent, respectively.
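For readers unfamiliar with how such accuracy figures are obtained, the sketch below shows the standard calculation: the fraction of test images whose predicted class matches the true label. The predictions and labels here are hypothetical, made up purely for illustration.

```python
import numpy as np

def accuracy(predicted_labels, true_labels):
    # Fraction of test images whose predicted class equals the true
    # label; this is the metric behind figures like 98.6% or 51.4%.
    predicted_labels = np.asarray(predicted_labels)
    true_labels = np.asarray(true_labels)
    return float((predicted_labels == true_labels).mean())

# Hypothetical toy run: 8 of 10 test images classified correctly.
preds  = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
labels = [3, 1, 4, 1, 5, 9, 2, 6, 3, 5]
print(accuracy(preds, labels))  # → 0.8
```

The wide spread between the digit result (98.6 percent) and the CIFAR-10 result (51.4 percent) reflects how much harder natural-image categories are than handwritten digits under the same metric.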
"Such a system performs machine-learning tasks with light-matter interaction and optical diffraction inside a 3D fabricated material structure, at the speed of light and without the need for extensive power, except the illumination light and a simple detector circuitry," said Aydogan Ozcan, Chancellor's Professor of electrical and computer engineering and the principal investigator on the research.