Whether it’s helping neural networks learn how to learn or getting them to work with pseudo-labelled data, most of the advances in deep learning and artificial intelligence have happened in research labs.
Researchers at the University of California at Berkeley, Switzerland’s IDSIA and the University of Tokyo have used the DGX-1 to take their deep learning to the next level.
Attendees of this week’s International Conference on Machine Learning in Sydney, Australia, can hear from these three NVIDIA AI Labs (NVAIL) partners. They’re all presenting papers on their research at ICML.
The NVAIL programme helps keep AI pioneers ahead of the curve with support for students, assistance from our researchers and engineers, and access to the industry’s most advanced GPU computing power.
Imagine if robots and other AI-infused devices could learn more like humans. That’s what Assistant Professor Sergey Levine and his students at NVAIL partner UC Berkeley want to make a reality.
By teaching deep neural networks to learn how to learn, Levine’s team wants to help intelligent agents learn faster with less training.
“Look at how people do it,” said Levine, an assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences. “We never learn things entirely from scratch. We draw on our past experience to help us learn new skills quickly. So we’re trying to get our learning algorithms to do the same.”
With current AI methods, robots have to experience things over and over again to learn how to best respond to stimuli. Levine’s thinking is that by enabling robots to learn without all that repetition, they’ll not only be more adaptive, they’ll also be able to learn much more.
“If a robot can learn one skill from a thousand times less experience, it can learn a thousand skills in the same time it would have otherwise taken it to learn one,” Levine said. “We’re unlikely to ever build machines that never make mistakes, but we can try to build machines that learn from their mistakes quickly and don’t have to make them more than a few times.”
Levine and his team have been using an NVIDIA DGX-1 system to train their algorithms to coordinate movement and visual perception. Chelsea Finn, a Ph.D. student advised by Levine and fellow Berkeley professor Pieter Abbeel, is presenting a research paper on this work at ICML. Levine and Finn are also giving a tutorial on “Deep Reinforcement Learning, Decision Making, and Control.”
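One widely cited formulation of this “learning to learn” idea is gradient-based meta-learning: train a model’s initial parameters so that just a few gradient steps on a new task already perform well. Below is a minimal first-order sketch on a toy one-parameter regression family; the task setup, model, and learning rates are illustrative assumptions, not the paper’s method or code.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_task():
    """A toy task family: regress y = a * x for a random slope a.
    Each task provides a support set (for adapting) and a query set."""
    a = rng.uniform(-2.0, 2.0)
    x = rng.normal(size=20)
    return x[:10], a * x[:10], x[10:], a * x[10:]

def grad(w, x, y):
    """Gradient of mean squared error for the one-parameter model y_hat = w * x."""
    return np.mean(2.0 * (w * x - y) * x)

w = 0.0                          # the meta-learned initialisation
inner_lr, outer_lr = 0.05, 0.01
for _ in range(2000):
    xs, ys, xq, yq = sample_task()
    w_adapted = w - inner_lr * grad(w, xs, ys)   # adapt to this task (inner loop)
    w -= outer_lr * grad(w_adapted, xq, yq)      # first-order meta-update (outer loop)
```

After meta-training, handling a new task means taking one or two inner gradient steps from `w`, rather than training from scratch — which is the sense in which the model has learned how to learn.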
The powerful combination of recurrent neural networks and long short-term memory (LSTM) has been a boon to those working on handwriting and speech recognition.
Unlike feedforward networks, which push each input straight through to an output, RNNs can tap internal memory to process arbitrary sequences (such as different pronunciations or variations in handwriting), using previous decisions as well as current stimuli to learn on the fly.
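The difference fits in a few lines: a feedforward layer maps each input to an output independently, while a recurrent step also feeds the previous hidden state back in, so earlier inputs influence later outputs. A minimal sketch, with illustrative sizes and random weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 3-dimensional inputs, 4-dimensional hidden state (illustrative).
W_xh = rng.normal(scale=0.1, size=(4, 3))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(4, 4))   # hidden -> hidden: the internal memory
b_h = np.zeros(4)

def feedforward_step(x):
    """A feedforward layer: the output depends only on the current input."""
    return np.tanh(W_xh @ x + b_h)

def rnn_step(x, h_prev):
    """A recurrent step: the new state also mixes in the previous state,
    so earlier inputs in the sequence influence later outputs."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

sequence = [rng.normal(size=3) for _ in range(5)]
h = np.zeros(4)
for x in sequence:
    h = rnn_step(x, h)   # h carries information from every earlier step
```

Feeding the same five inputs to `feedforward_step` would produce five independent outputs; the recurrent version’s final state depends on the whole sequence and its order.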
That said, RNNs have had a weakness: they become much harder to train as their recurrent transitions grow deeper, slowing down the deep learning process. But researchers at the Swiss AI lab and NVAIL partner IDSIA think they’ve found an answer: recurrent highway networks.
“Until now, it was extremely difficult to train recurrent networks with even two layers in the sequential transition,” said Rupesh Srivastava, an AI researcher at IDSIA and one of the co-authors of a research paper on the topic being presented at ICML. “Now, with recurrent highway networks, we can train recurrent networks with tens of layers in the recurrent transition.”
Srivastava said this advance allows for more efficient models for attacking sequential processing tasks, and enables the use of more complicated models.
“These early experiments indicate that we may be able to tackle much more complex tasks without requiring the training of gigantic models in the future,” he said.
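The core of a recurrent highway network is a recurrent transition that is itself several “highway” layers deep, each with a transform gate that decides how much of the state to update versus carry through unchanged. A rough sketch of one such transition, using a coupled carry gate (carry = 1 − transform); the sizes, depth, and initialisation here are illustrative, not the paper’s implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4        # state size (illustrative)
DEPTH = 5    # highway layers inside one recurrent transition

# Per-layer recurrent weights; the input x feeds only the first layer.
W_h = rng.normal(scale=0.1, size=(D, D))   # input -> candidate (layer 0 only)
W_t = rng.normal(scale=0.1, size=(D, D))   # input -> gate (layer 0 only)
R_h = [rng.normal(scale=0.1, size=(D, D)) for _ in range(DEPTH)]
R_t = [rng.normal(scale=0.1, size=(D, D)) for _ in range(DEPTH)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rhn_transition(x, s):
    """One time step: the state s is refined by DEPTH highway layers.
    Each layer's transform gate t picks how much of the new candidate h
    to take versus how much of the incoming state to carry through."""
    for layer in range(DEPTH):
        inp_h = W_h @ x if layer == 0 else 0.0
        inp_t = W_t @ x if layer == 0 else 0.0
        h = np.tanh(inp_h + R_h[layer] @ s)     # candidate update
        t = sigmoid(inp_t + R_t[layer] @ s)     # transform gate
        s = h * t + s * (1.0 - t)               # coupled carry gate: 1 - t
    return s

s = np.zeros(D)
for x in [rng.normal(size=D) for _ in range(3)]:
    s = rhn_transition(x, s)
```

The carry path (`s * (1 - t)`) gives gradients a near-unimpeded route through every layer of the transition, which is what makes such deep recurrent transitions trainable in the first place.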
Srivastava’s team has been using NVIDIA Tesla K40, K80, TITAN X and GeForce GTX 1080 GPUs to speed up training, along with CUDA and cuDNN for deep learning. But the arrival of the DGX-1 AI supercomputer, he said, “significantly accelerated the experimental cycle, allowing all lab projects to progress faster.”
He also said he’s excited by the prospect of using the DGX-1 to speed up the parallel training of recurrent network models. Eventually, he hopes that recurrent highway networks will lead to better reinforcement learning.
At the very least, the research will help to make deep learning models deeper.
“It is an important development,” said Srivastava, “because the ability to utilise the efficiency brought by deep models in different ways is a cornerstone of deep learning.”
Deep Learning Trickery

Deep learning isn’t always a tidy process. When training a model to perform, say, large-scale speech recognition, it’s important that it be able to account for variations such as background noise or accents.
This capability, known as domain adaptation, is where much of the intelligence in artificial intelligence comes from. It’s easy to be intelligent in the simpler setting of a training lab. It’s another thing to be intelligent in the unsupervised and unpredictable real world.
Researchers at the University of Tokyo believe they’ve developed a method for getting around many of the challenges of unsupervised domain adaptation. They’ve tapped the power of the DGX-1 to assign “pseudo-labels” to unlabelled data in target domains.
This enables deep learning models to apply what they’ve learned about a source domain — such as the ability to categorise book reviews — to a different target domain, such as movie reviews, without having to train a new model.
To do this, the University of Tokyo team proposed a concept they call “asymmetric tri-training,” which assigns different roles to three classifiers built on three separate neural networks. Two networks label unlabelled target samples; the third is trained on those pseudo-labelled samples. So far, the results have been encouraging.
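The selection step can be sketched roughly as follows: the two labelling networks vote on each unlabelled target sample, and only samples where they agree confidently receive a pseudo-label for training the third network. The linear stand-in models, threshold, and exact agreement rule here are simplifying assumptions, not the paper’s architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for the two labelling networks (the paper uses deep
# networks sharing a feature extractor; linear models keep this short).
w1 = rng.normal(size=3)   # labelling network 1
w2 = rng.normal(size=3)   # labelling network 2

def predict_proba(w, x):
    """Probability of the positive class under a linear model."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def pseudo_label(target_samples, threshold=0.9):
    """Keep a target sample only when both labelling networks agree on its
    label and both are confident; those samples become the training set
    for the third, target-specific network."""
    labelled = []
    for x in target_samples:
        p1, p2 = predict_proba(w1, x), predict_proba(w2, x)
        y1, y2 = int(p1 > 0.5), int(p2 > 0.5)
        confident = min(max(p1, 1 - p1), max(p2, 1 - p2)) > threshold
        if y1 == y2 and confident:
            labelled.append((x, y1))
    return labelled

target = [rng.normal(size=3) for _ in range(50)]
pseudo = pseudo_label(target)
```

The asymmetry is that only the third network ever trains on these pseudo-labels, so noisy agreements between the first two don’t feed straight back into the networks that produced them.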
“Transferring knowledge from a simple or synthesized domain to a diverse or realistic domain is a practical and challenging problem,” says Tatsuya Harada, a professor in the University of Tokyo Graduate School of Information Science and Technology. “We believe that our method showed a significant step for realizing adaptation from the simple to the diverse domain.”
Harada is one of the authors of a research paper on this work that’s being presented this week at ICML. It’s a complicated undertaking. Harada acknowledges it will likely need parallel efforts to achieve its potential. He’s hopeful that sharing his team’s research will speed up that process. “The research on fusing deep learning and pseudo-labels is ongoing,” he said. “We expect our research to stimulate more such research.”
ICML continues in Sydney through Friday.