2025
_BODY OF RESEARCH

AI Under Examination
Drexel experts aren’t just investigating AI applications — they’re working to ensure the technology is effective, accountable and aligned with society.

Artificial intelligence is no longer a distant frontier — it’s an engine of transformation, reshaping industries and driving debates about the future of work.

AI-related software spending is projected to reach nearly $300 billion by 2027, by which time a quarter of all global organizations are predicted to be in the AI-planning stages, according to the consultancy Gartner. As AI’s influence accelerates, so do urgent questions about its risks, regulations and use cases.

At Drexel, researchers are lending their expertise to national efforts to establish safeguards. Engineering and informatics faculty were among the first cohort of experts selected by the U.S. National Science Foundation to develop frameworks for safe, secure and trustworthy AI. Their work, recognized in a White House ceremony last spring, includes using machine learning algorithms to improve transparency and oversight of large language models.

Drexel is also one of more than 200 institutions participating in the U.S. AI Safety Consortium, a Department of Commerce initiative that unites academia, industry, government and civil society organizations to shape the responsible development of AI.

Beyond policy and oversight, Drexel researchers are harnessing AI to solve practical challenges. Among them, a team from the College of Computing & Informatics is collaborating on a Defense Advanced Research Projects Agency (DARPA) initiative that uses AI to support leaders thrust into crisis situations. Across disciplines, AI is being leveraged to combat degenerative brain disease, enhance protective measures for frontline medical workers, and monitor aging roads and bridges. Other projects explore AI’s role in designing climate-resilient communities and empowering teens to recognize online predators.

Read on to learn more about Drexel’s work on AI.

1. Deepfake Detection

Matthew Stamm is working to stem the rising tide of deceptive videos known as “deepfakes.”

The ability of AI tools to produce strikingly realistic content from a few simple text prompts raises the specter of AI being used to mislead people on a massive scale.

One challenge is that current detection methods won’t work against AI-generated video. But Stamm’s work shows that AI can also be used to fight back: his team has trained a machine-learning algorithm to extract and recognize the digital “fingerprints” of many different video generators, a promising route to unmasking these synthetic creations.
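As an illustration only (not Stamm’s actual model), a detector of this kind learns a statistical “fingerprint” per generator and matches new frames against it. The pixel-difference feature and the generator names below are hypothetical stand-ins for the forensic traces a trained network would learn:

```python
from statistics import mean

def residual_features(frame):
    # Crude "fingerprint": statistics of horizontal pixel differences,
    # a stand-in for the forensic traces a trained network would extract.
    diffs = [abs(row[i + 1] - row[i]) for row in frame for i in range(len(row) - 1)]
    return (mean(diffs), max(diffs))

def nearest_generator(frame, centroids):
    # Assign the frame to the generator whose average fingerprint is closest.
    f = residual_features(frame)
    return min(centroids, key=lambda g: sum((a - b) ** 2 for a, b in zip(f, centroids[g])))
```

In practice the centroids would be learned from many frames per generator; here they are supplied directly for illustration.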

Bad actors, Stamm cautions, will find a way to use AI for deception. “That’s why we’re working to stay ahead of them by developing the technology to identify synthetic videos from patterns and traits that are endemic to the media,” he says.

2. Fissure Spotter

Cracks in concrete may start small, but they can signal serious structural issues — something Arvin Ebrahimkhanlou and his colleagues are tackling with AI.

In one project, Ebrahimkhanlou and Pedram Bazrafshan have developed an AI-powered method to quickly assess damage in concrete structures by analyzing surface cracking patterns.

With hundreds of thousands of aging bridges, levees, roadways and buildings across the country, knowing which ones need urgent repair is critical. Traditional manual inspections are time-consuming and inconsistent, relying heavily on an inspector’s judgment.

To make the process faster and more reliable, the researchers combined AI algorithms with graph theory, a classic mathematical technique for analyzing web-like networks. Their approach quantifies structural damage based solely on crack patterns, offering a more efficient and objective way to prioritize repairs.
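A toy version of that idea treats traced crack segments as edges of a graph and scores damage from total crack length plus branching. The 10x branch weighting below is an arbitrary assumption for illustration, not the published model:

```python
from collections import defaultdict
from math import dist

def damage_score(cracks):
    # cracks: list of ((x1, y1), (x2, y2)) segments traced from an image.
    # Build the crack network as a graph: endpoints are nodes, segments edges.
    graph = defaultdict(set)
    total_length = 0.0
    for a, b in cracks:
        graph[a].add(b)
        graph[b].add(a)
        total_length += dist(a, b)
    # Nodes where three or more segments meet indicate branching cracks,
    # weighted here by a hypothetical severity factor of 10.
    branches = sum(1 for nbrs in graph.values() if len(nbrs) >= 3)
    return total_length + 10.0 * branches
```

A single straight crack thus scores by length alone, while a branching, web-like pattern of the same total length scores much higher.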

Building on this work in a separate but related project, Ebrahimkhanlou is also developing AI-driven tools to support robotic inspection of bridges, buildings and roads.

With research assistant Ali Ghadimzadeh Alamdari, he created a multi-scale system that uses computer vision and machine learning programs to identify cracks in concrete and direct robotic scanning, modeling and monitoring.

They believe their AI-powered system could enable autonomous robots to efficiently locate and inspect problem areas — reducing the need for human inspectors in hazardous conditions while catching structural issues earlier.

“Cracks can be regarded as a patient’s medical symptoms that should be screened in the early stages,” the researchers wrote in Automation in Construction. “Consequently, early and accurate detection and measurement of cracks are essential for timely diagnosis, maintenance and repair efforts, preventing further deterioration and mitigating potential hazards.”

3. Brain Age Predictor

In the fight against degenerative brain disease, John Kounios and Fengqing Zhang are leveraging AI to estimate “brain age,” a key predictor of age-related diseases like dementia, mild cognitive impairment and Parkinson’s disease.

When a brain ages prematurely, early intervention could help delay or prevent serious health problems — but identifying at-risk patients has been challenging. Kounios and his colleagues have developed a machine-learning method that uses electroencephalography (EEG) instead of MRI scans to estimate brain age.

EEGs are less expensive and less invasive than MRIs, making widespread screening more feasible. “It can be used as a relatively inexpensive way to screen large numbers of people,” Kounios says.

4. Smart Eco Zoning

Philadelphia’s path to cutting greenhouse gas emissions may depend on smarter zoning — and machine learning could be the key. Simi Hoque is using AI to model energy use at a granular level, helping predict how consumption will shift as neighborhoods evolve.

Her research, recently published in Energy & Buildings, could support the city’s 2050 plan to reduce greenhouse gases by identifying how zoning policies influence energy efficiency.

“For Philadelphia in particular, neighborhoods vary so much from place to place in prevalence of certain housing features and zoning types that it’s important to customize energy programs for each neighborhood, rather than trying to enact blanket policies for carbon reduction across the entire city or county,” she says.

5. Robots for the Sea

Step into James Tangorra’s lab, and you might mistake it for an aquarium. But the sleek, mechanical sea creatures moving through the water aren’t alive — they’re bio-inspired robots, designed to mimic the movements of marine animals.

Underwater vehicles are crucial for mapping the ocean floor and gathering environmental data, but today’s designs struggle with agility and performance. Tangorra and his team are turning to nature for better solutions.

Their latest project is SEAMOUR, a robotic sea lion that models the swimming mechanics of its real-world counterpart. Using AI-driven reinforcement learning, the team is testing thousands of movement patterns, fine-tuning its speed, stability and maneuverability.

The goal: underwater robots that move as effortlessly as the creatures they’re designed after.
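The tuning loop behind that testing can be sketched in miniature. Here a toy reward function stands in for the hydrodynamic simulation, and random search stands in for the team’s full reinforcement-learning algorithm; everything below is illustrative:

```python
import random

def simulated_speed(freq, amp):
    # Toy stand-in for a hydrodynamic simulator: speed peaks at an
    # intermediate stroke frequency and amplitude (optimum at 1.0, 1.0),
    # as drag penalizes extremes.
    return freq * (2 - freq) * amp * (2 - amp)

def tune_gait(trials=1000, seed=0):
    # Bare-bones policy search: sample a candidate gait, score it in
    # simulation, keep the best one seen so far.
    rng = random.Random(seed)
    best, best_speed = None, float("-inf")
    for _ in range(trials):
        freq, amp = rng.uniform(0, 2), rng.uniform(0, 2)
        s = simulated_speed(freq, amp)
        if s > best_speed:
            best, best_speed = (freq, amp), s
    return best, best_speed
```

After a thousand simulated trials, the search lands near the optimal stroke parameters, mirroring how thousands of movement patterns are evaluated for SEAMOUR.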

6. Mask Minders

Aleksandra Sarcevic studies human-computer interaction in health care settings and is leading Drexel’s participation in an NIH-funded effort to boost protective-gear compliance among frontline workers. Her team is exploring how AI-powered reminders can help doctors, nurses and hospital staff stay properly masked.

Currently, PPE compliance is monitored through manual, low-tech methods, leaving room for error. Drexel researchers are sharing their expertise on integrating AI technology to provide real-time prompts when gear needs to be worn or adjusted.

By combining computer vision and AI, the team hopes to automate PPE monitoring and ensure adherence in high-risk environments.

This solution could potentially be applied “in any setting that [is] now relying on human-based PPE monitoring, like other health care settings, common hospital areas, construction sites, and even public spaces, such as airports and train stations,” Sarcevic says.

7. Patterns in Patients

Drexel faculty member Scott Haag, who also serves as a supervisor at the Children’s Hospital of Philadelphia, is leveraging AI to uncover patterns in patient health records.

With Maryam Daniali (MS ’21, PhD ’23), Haag has developed a process that applies AI and natural language processing to analyze more than 53 million pediatric patient notes. This system helps identify similarities among patient groups and assess their risk for developing certain diseases in the future.

The researchers introduced a novel technique combining text mining and ontology-based approaches, demonstrating its practicality on terabytes of patient data. By extracting meaningful insights from electronic health records, this method enables health care professionals to detect patterns, predict risks and make more informed treatment decisions.
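In miniature, an ontology-based pass over notes looks like the following. The tiny term-to-concept table and plain string matching are illustrative placeholders for the clinical ontologies and full NLP pipeline the researchers used:

```python
# Hypothetical slice of a clinical ontology: surface terms -> concepts.
ONTOLOGY = {
    "wheezing": "asthma",
    "shortness of breath": "asthma",
    "rash": "dermatitis",
    "itching": "dermatitis",
}

def concepts_in_note(note):
    # Map free-text mentions in one note to ontology concepts.
    text = note.lower()
    return {concept for term, concept in ONTOLOGY.items() if term in text}

def group_patients(notes_by_patient):
    # Group patients who share the same set of concepts, so similar
    # cohorts can be screened for common disease risks.
    groups = {}
    for pid, notes in notes_by_patient.items():
        key = frozenset(c for note in notes for c in concepts_in_note(note))
        groups.setdefault(key, []).append(pid)
    return groups
```

The grouping step is what lets clinicians compare a new patient against cohorts with similar histories.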

8. Predator Catcher

Afsaneh Razi is investigating how to make AI safer for young people — from spotting online predators to examining the use of AI companions and mental health chatbots.

In one study, she led a groundbreaking effort to make social media safer — by enlisting young users themselves. In collaboration with researchers from Vanderbilt, Georgia Tech, and Boston University, Razi’s team is developing a machine-learning model to detect unwanted sexual advances on Instagram.

Trained on over 5 million direct messages — annotated by 150 adolescents who had experienced uncomfortable or unsafe conversations — the technology can quickly and accurately flag risky interactions.
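A minimal classifier of this kind can be sketched as a Naive Bayes model over message words. The handful of training messages below is invented for illustration, not drawn from the study’s annotated dataset:

```python
from collections import Counter
from math import log

def train(labeled_messages):
    # labeled_messages: (text, label) pairs; in the study, labels came
    # from adolescent annotators flagging unsafe conversations.
    word_counts = {"safe": Counter(), "risky": Counter()}
    priors = Counter(label for _, label in labeled_messages)
    for text, label in labeled_messages:
        word_counts[label].update(text.lower().split())
    return word_counts, priors

def classify(text, word_counts, priors):
    # Naive Bayes with add-one smoothing over message words.
    vocab = set(word_counts["safe"]) | set(word_counts["risky"])
    best, best_lp = None, float("-inf")
    for label in word_counts:
        lp = log(priors[label])
        total = sum(word_counts[label].values())
        for w in text.lower().split():
            lp += log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

The production model works at a far larger scale, but the principle is the same: words and patterns that annotators associated with unsafe conversations shift a new message’s score toward “risky.”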

“In the year 2023 alone, the National Center for Missing and Exploited Children received more than 36.2 million reports of online child sexual exploitation,” Razi says. “This is not a problem that can be ignored.”

Razi’s team is also exploring human-AI interactions, particularly in the context of conversational agents and AI companions. In one study, they analyzed user reviews of Replika, a chatbot designed to provide companionship, and found troubling instances where the AI exhibited sexually aggressive behavior. The team has termed this emerging problem “AI-induced sexual harassment.” Their work outlines future research directions and design implications to help mitigate potential harm.

In addition to studying risks, the team is exploring young people’s preferences for AI-generated responses in situations where they might seek emotional support. They found that while AI can offer meaningful help in some cases, it often falls short in more sensitive situations — pointing to both the potential and the limitations of using AI for mental health. Building on these insights, the team is now examining safety concerns around conversational systems designed to support emotional well-being.


WATCH: How Old Is Your Brain? A team of researchers from Drexel and Stockton universities has developed a new, practical way to monitor general brain health and detect premature brain aging using a low-cost EEG headset and a machine-learning algorithm. The approach offers a quick and easy way to screen for vulnerability to age-related pathology and to monitor the effectiveness of intervention techniques.