Welcome!

I am an assistant professor in the School of Informatics, Computing, and Engineering (SICE) at Indiana University. I am also a part of the Data Science program, the Cognitive Science program, and the Center for Algorithms and Machine Learning (CAML). My research broadly addresses ways to enable computers to process, understand, and respond to sound. I have specific interests in speech separation/enhancement, speech evaluation, and speech recognition, among other areas, and I am interested in applying these methods to real-world devices such as cell phones, hearing aids, and robots. My work combines machine learning, signal processing, and statistical techniques.

I completed my Ph.D. in the Computer Science and Engineering department at The Ohio State University under the supervision of Prof. DeLiang Wang. Prior to that, I was fortunate to be a Member of the Engineering Staff at Lockheed Martin for a few years. I received a Master of Science degree from the Department of Electrical and Computer Engineering at Drexel University and a Bachelor's degree from the Department of Electrical and Computer Engineering at the University of Delaware. I recently became a GT Scholar as part of GT-IDEA.

Latest News

4/30/2019

I'm extremely excited to now be a part of SICE's Data Science program!

3/13/2019

Our abstract on the "Impact of Amplification on Speech Enhancement Algorithms using an Objective Evaluation Metric" was accepted to the International Congress on Acoustics (ICA) 2019 conference! I look forward to writing the full-paper version.

2/13/2019

Very excited to be a Grant Thornton (GT) Scholar and to be collaborating with GT, SPEA, and Kelly as part of GT-IDEA. #GTScholar #GT-IDEA

2/1/2019

Congratulations to our group member, Zhuohuang Zhang, on his first ICASSP publication! His paper is titled "Objective Comparison of Speech Enhancement Algorithms with Hearing Loss Simulation."

11/20/2018

Our joint paper on "Building a Common Voice Corpus for Laiholh (Hakha Chin)" was accepted to ComputEL-3. This is just the beginning for addressing an extremely important problem. [PDF]

10/28/2018

The future is bright for STEM. There were so many wonderful and intelligent young women at the OurCS #HelloResearch workshop. I'm so glad that I co-led one of the projects. [OurCS]

7/8/2018

Our paper on phase-aware denoising was accepted to MLSP, which will be held in Aalborg, Denmark! [PDF]

3/23/2018

Congratulations to Xuan Dong on his first publication! His work on long-term SNR estimation was accepted to the LVA/ICA conference, which will be held in the UK! [PDF]

2/2018

I'm excited to announce that the National Science Foundation (NSF) has decided to fund our project through the CISE CRII program! This grant provides ~$175,000, which will help support graduate students and allow us to make progress toward our research goals. Thanks, NSF!

9/11/2017

Prof. Williamson received an NVIDIA GPU grant valued at ~$2,000. The grant provides two NVIDIA TITAN Xp GPUs that will be installed in our private server.

6/23/2017

Prof. Williamson gave a poster talk on complex masking at the Midwest Music and Audio Day (MMAD) at Northwestern University.

6/6/2017

Our paper on the "Impact of Phase Estimation on Single-Channel Speech Separation Based on Time-Frequency Masking" was accepted for publication in the Journal of the Acoustical Society of America (JASA). [PDF]

4/19/2017

Prof. Williamson gave a talk to IU's Data Science Club about work on "Separating Speech from Background Noise using a Deep Neural Network and a Complex Mask".

4/9/2017

Our paper on "Time-Frequency Masking in the Complex Domain for Speech Dereverberation and Denoising" was accepted for publication in the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP). [PDF]

12/12/2016

Our paper on "Speech Dereverberation and Denoising Using Complex Ratio Masks" was accepted to the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2017. [PDF]

11/7/2016

Prof. Williamson gave a talk at IU's Intelligent & Interactive Systems (IIS) Talk Series today about our recent work on "Separating Speech from Background Noise using a Deep Neural Network and a Complex Mask." [Video Link]