Researchers from Carnegie Mellon University’s College of Engineering share what they have learned about artificial intelligence while working in the field—from what led to the explosion of AI applications, to where it could have the biggest impact in the future, to areas still ripe for discovery.
Three things came together to allow the widespread implementation of AI: increased computational power, storage capacity, and data.
Artificial intelligence research began in the 1950s with theoretical exploration, leading up to the advent of deep neural networks: AI systems, loosely based on the way our brains work, in which computations pass through a series of interconnected nodes arranged in tens or even hundreds of layers. Three key advances in computing enabled these deep neural networks to work remarkably well, and AI exploded.
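The layered computation described above can be sketched in a few lines. This is a toy illustration only: the weights below are made up for the example, whereas a real network learns its weights from data and has far more nodes and layers.

```python
# Toy forward pass through a tiny "deep" network: each layer multiplies
# its inputs by a weight matrix, then applies a nonlinearity (ReLU here).

def relu(values):
    return [max(0.0, v) for v in values]

def layer(inputs, weights):
    # Each output node sums weighted connections from every input node.
    return relu([sum(w * x for w, x in zip(row, inputs)) for row in weights])

def forward(x, all_weights):
    # Stack layers: the output of one becomes the input of the next.
    for w in all_weights:
        x = layer(x, w)
    return x

# Two layers of interconnected nodes: 3 inputs -> 2 hidden nodes -> 1 output.
weights = [
    [[0.5, -0.2, 0.1],
     [0.3, 0.8, -0.5]],  # layer 1: two nodes, three inputs each
    [[1.0, 0.5]],        # layer 2: one node, two inputs
]
print(forward([1.0, 2.0, 3.0], weights))
```

Deep networks are this same pattern repeated with many more layers; the GPUs mentioned below are what make those much larger matrix multiplications fast.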
“After deep layer neural networks were introduced, it was like flood gates—AI research and applications abounded,” says Kumar Bhagavatula, director of CMU-Africa and an electrical and computer engineering professor who has worked in AI for more than 30 years. “Three technological breakthroughs converged at once: computational power in the form of GPUs, increased storage capacity that enabled cloud computing, and the collection of tons of data through sensors and IoT devices. The availability of hardware is really what enabled AI to be implemented in such a broad way.”
AI is not magic.
Artificial intelligence does not magically discover what isn’t already there. AI is based on concepts that engineers, mathematicians, and computer scientists already know, such as math, statistics, signal processing, and optimization, put together in a way that can handle bigger data and a broader scope.
“AI is not magic. It cannot create something from nothing and is built on concepts we already know,” says Liz Holm, a professor of materials science and engineering. “The results are also not magic—the information is already in the data and AI is a way of getting it out. Sometimes it does that better than humans because we think differently, but it is not making anything up; it’s only finding things that are hard for us to see.”
Big does not always mean better for data.
Access to more data is one reason why AI has been able to solve many problems that humans cannot. But an abundance of data does not always mean better data. There are times when data doesn’t exist, when it is costly to obtain and label, or when noise overwhelms signal and renders much of the data useless. Researchers are finding ways to make small data meaningful by designing algorithms that extract more from less.
“More data is not always better,” says ECE Assistant Professor Yuejie Chi. “It is if the data quality is good, but one issue with big data is that it can be very messy, and you might have a lot of missing data. Big data problems also involve a lot of computation, so we want to minimize the computational complexity of the algorithm by doing more with less data.”
In a fast-paced field, there’s potential to have an immediate impact on a lot of people.
Mechanical Engineering Assistant Professor Amir Barati Farimani sees AI as a way to rapidly improve quality of life for many people because the period from research to product can be very short. Right now, applications in robotics and healthcare are some of the fastest-growing areas of AI research. For researchers, one of the biggest challenges is keeping up with the field’s rapid methodological advances.
“It’s exciting because the time from developing the technology to the product stage is really short for AI products,” says Barati Farimani. “This is really interesting, especially for engineers, to think about creating products that are meaningful and have an immediate impact on people’s lives.”
On-device AI will bring the biggest impact.
One way that technology will work to improve quality of life is through on-device AI. AI systems currently rely on powerful machines or the cloud to run, so applications like Siri or Alexa only work by sending data to and from the cloud. But researchers like ECE Professor Radu Marculescu are busy making on-device AI a reality. Rather than relying on a giant supercomputer or a round trip to the cloud, AI computations can take place locally on your device, at higher speed and with your privacy preserved.
“The biggest impact of AI research will come when mobile and wearable devices will run AI applications,” he says. “This will completely change the way we interact among ourselves and with the environment. On-device AI is also an essential evolution in a world where demand for privacy and security from consumers is growing exponentially.”
We need to understand how and why AI systems predict and decide.
Many AI systems that make high-stakes decisions about credit, insurance, and other factors operate as black boxes—meaning there’s no way to tell how and why they make their decisions, or which variables influence those decisions most heavily. For high-stakes applications, where a wrong answer could unjustly deny someone credit, there is a great need to understand what happens inside the black box to ensure fairness and trust.
“My perspective is that it is incredibly important for us to look inside black box AI systems and understand the rationale behind their predictions and decisions,” says Anupam Datta, an ECE professor whose research seeks to peer inside these black box systems. “It’s important in order to ensure that models continue to be performant in production and unjust bias is mitigated.”
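One simple, model-agnostic way to probe which variables drive a black box’s decisions is permutation importance: shuffle one input feature and measure how much the model’s accuracy drops. This is an illustrative sketch with a made-up “model” and synthetic data, not the specific technique Datta’s group uses.

```python
import random

# Stand-in "black box": in reality this would be an opaque trained model.
# Here it (secretly) decides credit based only on income, ignoring age.
def black_box(income, age):
    return 1 if income > 50 else 0

random.seed(0)
data = [(random.uniform(0, 100), random.uniform(18, 80)) for _ in range(200)]
labels = [black_box(inc, age) for inc, age in data]

def accuracy(rows):
    hits = sum(black_box(inc, age) == y for (inc, age), y in zip(rows, labels))
    return hits / len(rows)

def permutation_importance(feature_index):
    # Shuffle one feature's column across all rows. A large accuracy drop
    # means the model leans heavily on that feature for its decisions.
    column = [row[feature_index] for row in data]
    random.shuffle(column)
    shuffled = []
    for k, row in enumerate(data):
        new_row = list(row)
        new_row[feature_index] = column[k]
        shuffled.append(tuple(new_row))
    return accuracy(data) - accuracy(shuffled)

print("income importance:", permutation_importance(0))  # large drop
print("age importance:   ", permutation_importance(1))  # no drop
```

Because the toy model ignores age entirely, shuffling age leaves accuracy unchanged, while shuffling income degrades it noticeably; the same probe applied to a real credit model would reveal which inputs its decisions actually depend on.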