Arka Daw: Bringing trust, transparency to AI

Arka Daw’s passion for artificial intelligence started with a simple, almost serendipitous introduction during his undergraduate years. It was 2015, and Daw, then a sophomore at Jadavpur University in India, stumbled upon machine learning while working with a professor at the Indian Statistical Institute. “I was always intrigued with physics and math, but when I saw these simple statistical models, like k-Nearest Neighbors, which could take in data and predict things, I realized there was so much potential,” Daw said. This early exposure set him on a path toward his research in AI.

Today, Daw is a Distinguished Staff Fellow at the Department of Energy’s Oak Ridge National Laboratory, where he tackles some of the most critical challenges in AI. His focus? Making AI systems more robust and trustworthy by exploring their generalizability on out-of-distribution samples and enhancing their robustness against attacks. In a world increasingly reliant on AI for everything from climate modeling to national security, Daw’s work has the potential to improve how these systems are understood and relied upon.

Daw’s path to ORNL was not linear. In 2017, during his junior year pursuing a bachelor’s degree in electronics and communications engineering, he was awarded a prestigious fellowship to pursue research in Germany. There, he worked on retinal artery-vein classification using AI, which can be used to detect diabetic retinopathy – a disease that can lead to blindness. This project was a turning point for Daw. “It was impactful because if we could improve the classification accuracy of arteries and veins, we could detect diseases much earlier,” he added.

This experience reinforced his belief in AI’s power but also made him aware of its limitations. The models Daw used delivered more accurate results than traditional image processing approaches, but how they worked remained a mystery. “We got these really accurate outputs,” he said, “but we didn’t know how or why they worked. That was a major concern.” 

This realization led Daw to Virginia Tech, where he earned his doctorate in computer science, specializing in physics-informed machine learning. His research bridged AI with scientific principles, ensuring that machine learning models adhered to known physical laws. This step forward in making AI more reliable laid the foundation for his current work at ORNL.
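
To give a flavor of the idea, a physics-informed model is typically trained with an extra loss term that penalizes predictions violating a known governing equation, alongside the usual data-fit term. The sketch below is a minimal, hypothetical illustration of that pattern, not Daw’s actual models; it uses a toy decay equation, dy/dt = -k·y, with a made-up constant k and synthetic observations.

```python
# Minimal, hypothetical sketch of a physics-informed loss (illustrative only).
# A small network learns y(t), and a penalty discourages violations of dy/dt = -k*y.
import torch

torch.manual_seed(0)
k = 1.5  # assumed known decay-rate constant for this toy example

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# A few noisy observations of y = exp(-k * t)
t_data = torch.tensor([[0.0], [0.5], [1.0]])
y_data = torch.exp(-k * t_data) + 0.01 * torch.randn_like(t_data)

# Collocation points where only the physics (the ODE) is enforced
t_phys = torch.linspace(0, 2, 50).reshape(-1, 1).requires_grad_(True)

for step in range(2000):
    opt.zero_grad()
    # Data-fit term: match the observations
    loss_data = torch.mean((net(t_data) - y_data) ** 2)
    # Physics term: penalize the residual dy/dt + k*y, computed with autograd
    y_p = net(t_phys)
    dy_dt = torch.autograd.grad(y_p.sum(), t_phys, create_graph=True)[0]
    loss_phys = torch.mean((dy_dt + k * y_p) ** 2)
    loss = loss_data + 1.0 * loss_phys  # the weighting is a tunable design choice
    loss.backward()
    opt.step()
```

The appeal of this recipe is that the physics penalty keeps predictions consistent with the governing equation even where observations are sparse, which is one way such models can be made more reliable than purely data-driven ones.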

Daw’s work at ORNL focuses on understanding the robustness and reliability of AI models to ensure their safety and trustworthiness. AI systems are often described as “black boxes” because they generate predictions without offering clear insight into how those predictions are made.

“It’s like magic,” Daw said. “You feed in an input, and out comes a prediction. But we don’t always know why.”

This lack of transparency can make it difficult for users, particularly in critical fields such as biometrics, climate modeling and national security, to trust AI’s conclusions. Daw’s research aims to open that black box by developing innovative deep learning frameworks and training methodologies for AI. These advancements aim to make AI more understandable and trustworthy, and thus suitable for deployment in critical infrastructure.

Daw’s work also includes building defense techniques to make AI models more resilient to adversarial attacks. These attacks involve small, often imperceptible changes to input data that can trick an AI system into making critical errors. 

“You can make very tiny perturbations in the images, and the AI will start predicting something else,” he said. “That’s a huge risk when you’re using AI in fields like autonomous driving.” 
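
A classic way to see this effect is the fast gradient sign method, which nudges an input by a tiny amount in the direction that most increases the model’s loss. The sketch below is a generic illustration of that attack, not a description of Daw’s defense work; the model, the random “image” and the step size are placeholders.

```python
# Generic sketch of an adversarial perturbation via the fast gradient sign method (FGSM).
# A tiny, often imperceptible nudge to the input can be enough to change the prediction.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.01):
    """Return x shifted by epsilon in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # small perturbation per pixel
    return x_adv.clamp(0, 1).detach()     # keep pixel values in a valid range

# Toy usage with a placeholder classifier and a random "image"
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
label = model(x).argmax(dim=1)            # treat the current prediction as the label
x_adv = fgsm_attack(model, x, label)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # predictions may now differ
```

Defenses against this kind of attack, such as training on adversarially perturbed examples, are one reason understanding the training process itself matters.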

He is studying the optimization process of training AI models to better understand what causes these vulnerabilities in the first place. By identifying these weaknesses, Daw hopes to create more reliable AI systems that can perform even in unpredictable situations. 

Despite his deep involvement in AI research, Daw has a passion for exploration that extends beyond the lab. One of his favorite hobbies is cooking. 

“Indian food is my comfort zone,” he said, referencing his roots in Kolkata, one of the most populous cities in India. The diversity of Indian cuisine, with its wide array of spices and techniques, mirrors Daw’s approach to life – constantly experimenting and trying new things. 

“I also love cooking Italian and Asian food,” he added, though he admitted that baking is still a work in progress. 

Cricket also remains close to Daw’s heart. Growing up in India, where cricket is a national pastime, Daw played in local leagues and continues to follow the sport avidly. “I love watching the India team play, and the Ashes between England and Australia is always exciting,” he said. While his busy schedule at ORNL leaves little time for playing these days, he stays connected to the game by following international tournaments and occasionally catching a match with friends. 

Daw’s career path is a testament to the power of curiosity and embracing change. He started in electronics, moved to AI, and transitioned from AI for science to AI security. Each shift was driven by a willingness to explore new fields, even if it meant stepping into unfamiliar territory. “I was insecure at first, jumping from electronics to computer science without a master’s degree,” he added. “But it all worked out in the end.” 

For future fellows, Daw has one piece of advice: do not be afraid to try something new. 

“I explored so many different areas, and each time I took a leap, it opened up new opportunities,” he said. 

At ORNL, Daw has found the perfect environment to continue his exploration. “There’s so much cool research going on, and you want to be part of all of it,” he said.

As AI continues to evolve, Daw’s research at ORNL is playing a crucial role in shaping its future. By tackling the challenges of trust and transparency, Daw is helping to ensure that AI systems can be relied on in critical applications. While there is still much work to be done, Daw’s contributions are bringing us closer to a world where AI is not just smart, but also secure and interpretable.

“We’re making progress, but we need to take AI security very seriously,” he said. “It’s a challenging problem, but it’s one worth solving.”

With his deep commitment to AI security and his personal love for cooking and cricket, Daw exemplifies the balance between rigorous scientific pursuit and a life filled with passion. He is not just advancing technology — he is making sure it is something we can trust. 

ORNL’s Distinguished Staff Fellowship program aims to cultivate future scientific leaders by providing dedicated mentors, world-leading scientific resources and enriching research opportunities at a national laboratory. Fellowships are awarded to outstanding early-career scientists and engineers who demonstrate success within their academic, professional and technical areas. Fellowships are awarded for fundamental, experimental and computational sciences in a wide range of science areas. Factsheets about the lab’s fellows are available here.

UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science. — Neil Gillette
