AI News and Accomplishments
January 22, 2020
Four experts in diverse aspects of artificial intelligence have joined Rensselaer Polytechnic Institute as part of the Artificial Intelligence Research Collaboration (AIRC), a recently formed joint initiative of Rensselaer and IBM Research.
“The addition of these faculty is expanding our interdisciplinary cohort of AI researchers across the entire campus. We expect these four outstanding faculty members to be the first wave of hires who will increase our capabilities for AI and machine learning research across all five of Rensselaer’s schools,” said James Hendler, director of the AIRC and a Rensselaer Tetherless World Professor of Computer, Web, and Cognitive Science.
The Rensselaer-IBM AIRC is dedicated to advancing the science of artificial intelligence and enabling the use of AI and machine learning in research investigations, innovations, and applications of joint interest to both Rensselaer and IBM. The collaboration fosters the growth of AI and machine learning capabilities through faculty hires, by funding specific research initiatives, and through funding top graduate students as IBM AI Horizons fellows.
The appointments add expertise in machine learning algorithms for text prediction, intelligent systems that collaborate with humans for problem solving, machine learning in materials discovery, and algorithms for understanding the social linguistics of disinformation.
“The AIRC is looking at a very wide range of problems that includes building trust in these systems, mathematical optimization of key algorithms using new techniques from math and data analytics, and really applying those to the kinds of problems that engineers and scientists work on,” Hendler said. “By bringing on new faculty through the AIRC, we’re able to get immediately involved in larger multidisciplinary projects.”
Tianyi Chen, an assistant professor in the Department of Electrical, Computer, and Systems Engineering, is developing machine learning algorithms that improve the learning accuracy of applications such as image classification and text prediction, by leveraging datasets from multiple users without compromising user privacy. His work touches on areas of machine learning and artificial intelligence, mathematical optimization, signal processing, and communication networks.
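Chen's privacy-preserving, multi-user setup is in the spirit of federated learning. As a rough illustration (a minimal sketch, not his actual algorithm, with made-up numbers), clients can fit a shared model on their own private data and send only model weights, never raw data, to a central server that averages them:

```python
# Minimal federated-averaging sketch (illustrative only; not Chen's algorithm).
# Each client fits a shared model y = w * x on its own private data; only the
# updated weight, never the raw data, is sent to the server for averaging.

def local_step(w, data, lr=0.1):
    """One gradient step for least squares on a single client's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, clients, lr=0.1):
    """Server broadcasts w; each client trains locally; server averages the results."""
    updates = [local_step(w, data, lr) for data in clients]
    return sum(updates) / len(updates)

# Two clients whose private data both follow y = 3 * x (hypothetical numbers).
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = federated_round(w, clients)   # w converges to the shared slope, 3.0
```

The server never sees either client's (x, y) pairs, only the averaged weight, which is the basic privacy argument behind this family of methods.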
Lydia Manikonda, an assistant professor in the Lally School of Management, applies machine learning and AI techniques to unstructured social media data, including images, to understand the offline behaviors of users through their online footprints. Her current projects seek to use social media data to investigate the behavioral traits and dynamics in domains such as finance and public health (such as mental health, obesity, or personal health goals).
“Broadly speaking, my research interests include modeling and building technologies where humans and machines can collaborate and work together to solve real-world everyday problems. The network that surrounds me as part of the AIRC helps to nourish that work,” Manikonda said.
Trevor Rhone, an assistant professor in the Department of Physics, Applied Physics, and Astronomy, uses machine learning tools for materials discovery and knowledge discovery, searching for new 2D materials with exotic properties, developing predictive capabilities for industrially relevant catalytic reactions, and tackling other compelling problems.
Tomek Strzalkowski, a professor in the Department of Cognitive Science, conducts research in natural language processing, disinformation, and computational sociolinguistics. His work uses algorithms to understand how individuals use language to influence others, particularly as it relates to disinformation.
“How can language be used to manipulate human behavior and social relations? What techniques are being used? What makes someone influential? How can we tell in a group of people who are talking that someone is influencing others? Can we also see that these people are being influenced?” Strzalkowski said. “Pursuing my work through the structure of the AIRC, being able to work together, across disciplines, no boundaries, is a wonderful opportunity.”
June 10, 2019
TROY, N.Y. — Machine learning has the potential to vastly advance medical imaging, particularly computerized tomography (CT) scanning, by reducing radiation exposure and improving image quality.
Those new research findings were just published in Nature Machine Intelligence by engineers at Rensselaer Polytechnic Institute and radiologists at Massachusetts General Hospital and Harvard Medical School.
According to the research team, the results published in this high-impact journal make a strong case for harnessing the power of artificial intelligence to improve low-dose CT scans.
“Radiation dose has been a significant issue for patients undergoing CT scans. Our machine learning technique is superior, or, at the very least, comparable, to the iterative techniques used in this study for enabling low-radiation dose CT,” said Ge Wang, the Clark & Crossan Endowed Chair Professor of biomedical engineering at Rensselaer, and a corresponding author on this paper. “It’s a high-level conclusion that carries a powerful message. It’s time for machine learning to rapidly take off and, hopefully, take over.”
Low-dose CT imaging techniques have been a significant focus over the past several years in an effort to alleviate concerns about patient exposure to X-ray radiation associated with widely used CT scans. However, decreasing radiation can decrease image quality.
To solve that, engineers worldwide have designed iterative reconstruction techniques to help sift through and remove interferences from CT images. The problem, Wang said, is that those algorithms sometimes remove useful information or falsely alter the image.
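In its classic forms, iterative reconstruction repeatedly corrects an image estimate against the measured projections. A minimal sketch using the textbook Landweber iteration (an illustrative stand-in, not the scanner vendors' proprietary algorithms) on a toy linear system A x = b:

```python
import numpy as np

def landweber(A, b, steps=500, tau=None):
    """Iteratively refine an image estimate x so that A @ x matches the data b."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2  # step size small enough to converge
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + tau * A.T @ (b - A @ x)  # back-project the residual and correct x
    return x

# Toy "scanner": 3 measurements of a 2-pixel image (hypothetical numbers).
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true          # noiseless measured projections
x = landweber(A, b)     # reconstructed image, close to x_true
```

Real iterative CT reconstruction adds regularization terms to suppress noise, and it is exactly those hand-designed terms that can remove useful detail, which is the shortcoming the deep learning approach targets.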
The team set out to address this persistent challenge using a machine learning framework. Specifically, they developed a dedicated deep neural network and compared their best results to the best of what three major commercial CT scanners could produce with iterative reconstruction techniques.
This work was performed in close collaboration with Dr. Mannudeep Kalra, a professor of radiology at Massachusetts General Hospital and Harvard Medical School, who was also a corresponding author on the paper.
The researchers were looking to determine how the performance of their deep learning approach compared to the selected representative iterative algorithms currently being used clinically.
Several radiologists from Massachusetts General Hospital and Harvard Medical School assessed all of the CT images. The deep learning algorithms developed by the Rensselaer team performed as well as, or better than, those current iterative techniques in an overwhelming majority of cases, Wang said.
Researchers found that their deep learning method is also much quicker, and allows the radiologists to fine-tune the images according to clinical requirements, Dr. Kalra said.
These positive results were realized without access to the original, or raw, data from all the CT scanners. Wang pointed out that if original CT data is made available, a more specialized deep learning algorithm should perform even better.
“This has radiologists in the loop,” Wang said. “In other words, this means that we can integrate machine intelligence and human intelligence together in the deep learning framework, facilitating clinical translation.”
He said that these results confirm that deep learning could help produce safer, more accurate CT images while also running more rapidly than iterative algorithms.
“We are excited to show the community that machine learning methods are potentially better than the traditional methods,” said Wang. “It sends the scientific community a strong signal. We should go for machine learning.”
This research by Wang’s team is among the significant advancements consistently being made by faculty in the Biomedical Imaging Center within the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer.
“Professor Wang’s work is an excellent example of how advances in artificial intelligence, and machine and deep learning, can improve biomedical tools and practices by addressing hard problems—in this case helping to provide high-quality CT images using a lower radiation dose. Transformative developments from these collaborative teams will lead to more precise and personalized medicine,” said Deepak Vashishth, director of CBIS.
Hongming Shan, a postdoctoral researcher at Rensselaer, is the first author of the paper. Uwe Kruger, professor of practice in biomedical engineering at Rensselaer, was instrumental when it came to statistical analysis in this project. Radiologists from Massachusetts General Hospital in Boston and Ramathibodi Hospital in Bangkok are also coauthors on this research. This work was supported in part by a grant from the National Institute of Biomedical Imaging and Bioengineering within the National Institutes of Health.
April 10, 2019
TROY, N.Y. — A wide-eyed, soft-spoken robot named Pepper motors around the Intelligent Systems Lab at Rensselaer Polytechnic Institute. One of the researchers tests Pepper, making various gestures as the robot accurately describes what he’s doing. When he crosses his arms, the robot identifies from his body language that something is off.
Pepper’s ability to pick up on non-verbal cues is a result of the enhanced “vision” the lab’s researchers are developing. Using advanced computer vision and artificial intelligence technology, the team is enhancing the ability of robots like this one to naturally interact with humans.
“What we have been doing so far is adding visual understanding capabilities to the robot, so it can perceive human action and can naturally interact with humans through these non-verbal behaviors, like body gestures, facial expressions, and body pose,” said Qiang Ji, professor of electrical, computer, and systems engineering, and the director of the Intelligent Systems Lab.
With the support of government funding over the years, researchers at Rensselaer have mapped the human face and body so that computers, with the help of cameras built into the robots and machine-learning technologies, can perceive non-verbal cues and identify human action and emotion.
Among other things, Pepper can count how many people are in a room, scan an area to look for a particular person, estimate an individual’s age, recognize facial expressions, and maintain eye contact during an interaction.
Another robot, named Zeno, looks more like a person and has motors in its face, making it capable of closely mirroring human expression. The research team has been honing Zeno’s ability to mimic human facial communication in real time, right down to eyebrow – and even eyeball – movement.
Ji sees computer vision as the next step in developing technologies that people interact with in their homes every day. Currently, most popular AI-enabled virtual assistants rely almost entirely on vocal interactions.
“There’s no vision component. Basically, it’s an audio component only,” Ji said. “In the future, we think it’s going to be multimodal, with both verbal and nonverbal interaction with the robot.”
The team is working on other vision-centered developments, like technology that would be able to track eye movement. Tools like that could be applied to smart phones and tablets.
Ji said the research being done in his lab is currently being supported by the National Science Foundation and Defense Advanced Research Projects Agency. In addition, the Intelligent Systems Lab has received funding over the years from public and private sources including the U.S. Department of Defense, the U.S. Department of Transportation, and Honda.
What Ji’s team is developing could also be used to make roads safer, he said, by installing computer-vision systems into cars.
“We will be able to use this technology to ultimately detect if the driver is fatigued, or the driver is distracted,” he said. “The research that we’re doing is more human-centered AI. We want to develop AI, machine-learning technology, to extend not only humans’ physical capabilities, but also their cognitive capabilities.”
That’s where Pepper and Zeno come in. Ji envisions a time when robots could keep humans company and improve their lives. He said that is the ultimate goal.
“This robot could be a companion for humans in the future,” Ji said, pointing to Pepper. “It could listen to humans, understand human emotion, and respond through both verbal and non-verbal behaviors to meet humans’ needs.”
March 6, 2019
TROY, N.Y. — Comprehensive molecular images of organs and tumors in living organisms can be generated at ultra-fast speed using a new deep learning approach to image reconstruction developed by researchers at Rensselaer Polytechnic Institute.
The research team’s new technique has the potential to vastly improve the quality and speed of imaging in live subjects and was the focus of an article recently published in Light: Science and Applications, a Nature journal.
Compressed sensing-based imaging is a signal processing technique that can be used to create images based on a limited set of point measurements. Recently, a Rensselaer research team proposed a novel instrumental approach to leverage this methodology to acquire comprehensive molecular data sets, as reported in Nature Photonics. While that approach produced more complete images, processing the data and forming an image could take hours.
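As a toy illustration of recovering an image from a limited set of point measurements (the actual instrument and solver in the Nature Photonics work differ), here is compressed-sensing recovery of a sparse 50-dimensional "scene" from 25 random measurements, using iterative soft thresholding (ISTA):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: shrink values toward zero, promoting sparsity."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, steps=2000):
    """Iterative soft thresholding for the lasso: min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((25, 50)) / np.sqrt(25)   # 25 random point measurements
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [1.0, -2.0, 1.5]            # a sparse "scene": 3 bright spots
y = A @ x_true
x_hat = ista(A, y)                                # recovers the 3 spots from 25 samples
```

The sparsity assumption is what makes recovery from fewer measurements than unknowns possible; iterative solvers like this one are also why classical compressed-sensing reconstruction is slow, motivating the learned reconstruction described next.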
This latest methodology developed at Rensselaer builds on the previous advancement and has the potential to produce real-time images, while also improving the quality and usefulness of the images produced. This could facilitate the development of personalized drugs, improve clinical diagnostics, or identify tissue to be excised.
In addition to providing an overall snapshot of the subject being examined, including the organs or tumors that researchers have visually targeted with the help of fluorescence, this imaging process can reveal information about the successful intracellular delivery of drugs by measuring the decay rate of the fluorescence.
To enable almost real-time visualization of molecular events, the research team has leveraged the latest developments in artificial intelligence. The vastly improved image reconstruction is accomplished using a deep learning approach. Deep learning is a complex set of algorithms designed to teach a computer to recognize and classify data. Specifically, this team developed a convolutional neural network architecture that the Rensselaer researchers call Net-FLICS, which stands for fluorescence lifetime imaging with compressed sensing.
“This technique is very promising in getting a more accurate diagnosis and treatment,” said Pingkun Yan, co-director of the Biomedical Imaging Center at Rensselaer. “This technology can help a doctor better visualize where a tumor is and its exact size. They can then precisely cut off the tumor instead of cutting a larger part and spare the healthy, normal tissue.”
Yan developed this approach with corresponding author Xavier Intes, the other co-director of the Biomedical Imaging Center at Rensselaer, which is part of the Rensselaer Center for Biotechnology and Interdisciplinary Studies. Doctoral students Marien Ochoa and Ruoyang Yao supported the research.
“At the end, the goal is to translate these to a clinical setting. Usually when you have clinical systems you want to be as fast as possible,” said Ochoa, as she reflected on the speed with which this new technique allows researchers to capture these images.
Further development is required before this groundbreaking new technology can be used in a clinical setting. However, its progress has been accelerated by incorporating simulated data based on modeling, a particular specialty for Intes and his lab.
“For deep learning usually you need a very large amount of data for training, but for this system we don’t have that luxury yet because it’s a very new system,” said Yan.
He said that the team’s research also shows that modeling can be used innovatively in imaging, with the model extending accurately to real experimental data.
February 6, 2019
TROY, N.Y. — Researchers at Rensselaer Polytechnic Institute who developed a blood test to help diagnose autism spectrum disorder have now successfully applied their distinctive big data-based approach to evaluating possible treatments.
The findings, recently published in Frontiers in Cellular Neuroscience, have the potential to accelerate the development of successful medical interventions. One of the challenges in assessing the effectiveness of a treatment for autism is how to measure improvement. Currently, diagnosis and evaluating the success of an intervention rely heavily on observations by professionals and caretakers.
“Having some kind of a measure that measures something that’s happening inside the body is really important,” said Juergen Hahn, systems biologist, professor, and head of the Rensselaer Department of Biomedical Engineering.
Hahn and his team use machine-learning algorithms to analyze complex data sets. That is how he previously discovered patterns with certain metabolites in the blood of children with autism that can be used to successfully predict diagnosis.
In this most recent analysis, the team used a similar set of measurements from three different clinical trials that examined potential metabolic interventions. The researchers were able to compare data from before and after treatment, and look for correlations between those results and any observed changes of adaptive behavior.
“What we did here is showed that if you actively try to change concentrations of these metabolites that are being measured, then you will also see changes in the behavior,” Hahn said.
Hahn said that this approach was unique in that it analyzed multiple medical markers at the same time, unveiling correlations not seen in the data if each measurement is investigated individually.
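The value of analyzing markers jointly can be seen in a synthetic toy example (invented numbers, not the clinical metabolite data): two markers whose individual distributions overlap between groups, while a simple combination of the two separates the groups almost perfectly.

```python
import numpy as np
# Synthetic toy example (not the clinical data): marker 1 alone carries no group
# signal, but the relationship between markers 1 and 2 separates the groups.

rng = np.random.default_rng(1)
n = 200
t = rng.uniform(-1, 1, n)            # shared subject-to-subject variation
# Group A lies near the line marker2 = marker1; group B near marker2 = marker1 + 1.
a = np.column_stack([t, t + 0.1 * rng.standard_normal(n)])
b = np.column_stack([t, t + 1 + 0.1 * rng.standard_normal(n)])

# One marker at a time: marker 1 has identical group means (zero signal)...
gap_marker1 = abs(a[:, 0].mean() - b[:, 0].mean())
# ...but the joint combination marker2 - marker1 separates the groups cleanly.
score_a, score_b = a[:, 1] - a[:, 0], b[:, 1] - b[:, 0]
separated = (score_a < 0.5).mean() > 0.99 and (score_b > 0.5).mean() > 0.99
```

Univariate tests would discard marker 1 here as uninformative, yet it is essential to the combined score, which is the kind of correlation structure a multivariate analysis can exploit.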
“It can speed up the development process because you now have an additional tool that tells you how well a treatment has worked,” he said.
Hahn expects this type of approach to become an important component of clinical trials for autism in the future. “Having medical tests that measure quantities directly related to the physiology is important and we hope that they get incorporated into future trials,” he said.
Hahn, a member of the Rensselaer Center for Biotechnology and Interdisciplinary Studies, worked on this study with Rensselaer graduate student Troy Vargason, undergraduate student Emily Roth, and Uwe Kruger, who is a professor of practice in the biomedical engineering department.
In addition to developing and successfully testing the first physiological test for autism and this recent work, Hahn has also worked with colleagues to apply his method to determining a pregnant mother’s relative risk for having a child with autism spectrum disorder.
Rensselaer News Feed
Faculty from Rensselaer Polytechnic Institute served as experts in an exchange of information about developments in the field of sustainable energy, large-scale environmental change, and innovative and interdisciplinary research into energy storage and smart systems in the built environment on a recent visit by two members of the U.S. Congress.
In the middle of the COVID-19 pandemic, social distancing protocols did not slow Rensselaer Polytechnic Institute students Wyatt Delans and Jake Szottfried down when they needed to design and develop code for a robotic system capable of assembling a model trophy for a class project. Using new simulation and virtual reality lab capabilities at the university, they were able to design most of the project in their dorm rooms and then test it in a virtual environment before physically manufacturing it in the Manufacturing Innovation Learning Lab (MILL) at Rensselaer.
TROY, N.Y. — Researchers from Rensselaer Polytechnic Institute will study whether body heat, or even humidity from a person’s breath, for instance, may impact the effectiveness of the porous fibers that are used to make protective technologies, like face masks. With the support of a National Science Foundation grant, the team will use its expertise in fluid and solid mechanics to study the mechanical performance of fibrous materials when they are exposed to warm temperatures and humidity.
TROY, N.Y. — A novel experiment aimed at studying the mechanics of amyloid fibrils — a type of protein aggregation associated with diseases like diabetes, Alzheimer’s, and Parkinson’s — started today aboard the International Space Station (ISS), led by a team at Rensselaer Polytechnic Institute.
Like many other cells and organs within the body, cardiac cells possess a type of asymmetry that may play an important role in healthy heart formation and could serve as the basis for interventions to prevent congenital heart defects.