About me
I am a PhD student at The Ohio State University in the Department of Computer Science and Engineering (CSE), working in the Audio, Speech and Perceptually-Inspired Research (ASPIRE) lab and the Perception and Neurodynamics Laboratory (PNL). My research interests span several areas of speech processing using deep learning and signal processing approaches, with a focus on robust speech enhancement, separation, and localization in environments with moving sources, as well as multi-modal target (attended) speaker extraction using EEG. I am currently advised by Professor Donald Williamson, and I was previously advised by Professor DeLiang Wang. Before joining Ohio State, I earned my Bachelor of Technology (B.Tech.) with a major in Electrical Engineering and a minor in Entrepreneurship from the Indian Institute of Technology, Hyderabad.
With five years of industry experience, I have developed expertise in end-to-end solutions for real-time speech enhancement on edge devices such as Alexa and smart glasses, spanning both high-level research prototyping and low-level implementation for system optimization. I have hands-on experience designing, implementing, and debugging both standalone deep learning systems and hybrid systems that combine deep learning with signal processing approaches. Additionally, I have experience with multi-node, multi-GPU, and mixed-precision training for large models. My goal is to bridge the gap between research and application by developing efficient solutions for real-time and offline auditory processing tasks.
