About Me

I am a Senior Research Fellow at the School of Computing, National University of Singapore. I currently work in the Augmented Human Lab with A/Prof Suranga Nanayakkara on assistive technologies. In my previous post-doc position, I worked with A/Prof Lonce Wyse on audio generative models, and in my first post-doc position, with Prof Haizhou Li, I worked on applications of ASR in music.

My goal as a researcher is to build evidence-based technologies that improve people's lives. My research interests include generative models for creative and assistive applications, and music information retrieval.

I founded MuSigPro, a music-tech company that aims to bring the power of music closer to people. It offers a singing competition platform built on an AI-driven singing quality evaluation technology that I designed during my PhD, and an automatic music-to-lyrics aligner that I designed during my first post-doc.


Download CV Google Scholar Github


   Publications

    2024

  1.  SonicVista: Towards Creating Awareness of Distant Scenes through Sonification
    Chitralekha Gupta, Shreyas Sridhar, Denys Matthies, Christophe Jouffrais, and Suranga Nanayakkara
    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) 2024.
  2.  Example-Based Framework for Perceptually Guided Audio Texture Generation
    Purnima Kamath, Chitralekha Gupta, Lonce Wyse, and Suranga Nanayakkara
    IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024.
  3.  VR.net: A Real-world Dataset for Virtual Reality Motion Sickness Research - Best Paper Award 🏆
    Elliot Wen, Chitralekha Gupta, Prasanth Sasikumar, Mark Billinghurst, James Wilmott, Emily Skow, Arindam Dey, and Suranga Nanayakkara
    IEEE VR, 2024.

    2023

  1.  EMO-KNOW: A Large Scale Dataset on Emotion-Cause
    Mia Nguyen, Yasith Samaradivakara, Prasanth Sasikumar, Chitralekha Gupta, and Suranga Nanayakkara
    Findings of the Association for Computational Linguistics: EMNLP 2023.
  2.  Can AI Models Summarize Your Diary Entries? Investigating Utility of Abstractive Summarization for Autobiographical Text
    Chitralekha Gupta*, Shamane Siriwardhana*, Tharindu Kaluarachchi, Vipula Dissanayake, Suveen Ellawela, and Suranga Nanayakkara
    International Journal of Human–Computer Interaction (IJHCI), 2023.
  3.  Towards Controllable Audio Texture Morphing
    Chitralekha Gupta*, Purnima Kamath*, Yize Wei, Zhuoyao Li, Suranga Nanayakkara, and Lonce Wyse*
    In Proceedings of ICASSP, 2023.
  4.  Evaluating Descriptive Quality of AI-Generated Audio Using Image-Schemas
    Purnima Kamath, Zhuoyao Li, Chitralekha Gupta, Suranga Nanayakkara, and Lonce Wyse
    In Proceedings of ACM IUI, 2023.

    2022

  1.  Parameter Sensitivity of Deep-Feature based Evaluation Metrics for Audio Textures
    Chitralekha Gupta, Yize Wei, Zequn Gong, Purnima Kamath, Zhuoyao Li, and Lonce Wyse
    In Proceedings of ISMIR, 2022.
  2.  Deep Learning Approaches in Topics of Singing Information Processing (Overview Paper)
    Chitralekha Gupta, Haizhou Li, and Masataka Goto
    IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.
  3.  Automatic Lyrics Transcription of Polyphonic Music With Lyrics-Chord Multi-Task Learning
    Xiaoxue Gao, Chitralekha Gupta, and Haizhou Li
    IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.
  4.  Genre-conditioned Acoustic Models for Automatic Lyrics Transcription of Polyphonic Music
    Xiaoxue Gao, Chitralekha Gupta, and Haizhou Li
    In Proceedings of ICASSP, 2022.
  5.  Music-robust Automatic Lyrics Transcription of Polyphonic Music
    Xiaoxue Gao, Chitralekha Gupta, and Haizhou Li
    In Proceedings of SMC, 2022.
  6.  Sound Model Factory: An Integrated System Architecture for Generative Audio Modelling
    Lonce Wyse, Purnima Kamath, and Chitralekha Gupta
    In Springer LNCS, EvoMusART, 2022.
  7.  PoLyScribers: Joint Training of Vocal Extractor and Lyrics Transcriber for Polyphonic Music
    Xiaoxue Gao, Chitralekha Gupta, and Haizhou Li
    Submitted to TASLP, 2022.

  17. 2021

  18.  Training Explainable Singing Quality Assessment Network with Augmented Data
    Jinhu Li, Chitralekha Gupta, and Haizhou Li
    in proceedings of APSIPA 2021.
  19.  Towards Reference-Independent Rhythm Assessment of Solo Singing
    Chitralekha Gupta, Jinhu Li, and Haizhou Li
    in proceedings of APSIPA 2021.
  20.  Signal Representations for Synthesizing Audio Textures with Generative Adversarial Networks
    Chitralekha Gupta, Purnima Kamath, and Lonce Wyse
    in proceedings of SMC 2021.

    2020

  1.  Spectral Features and Pitch Histogram for Automatic Singing Quality Evaluation with CRNN
    Huang Lin, Chitralekha Gupta, and Haizhou Li
    In Proceedings of APSIPA, 2020.
  2.  Automatic Rank Ordering of Singing Vocals with Twin Neural Network
    Chitralekha Gupta, Huang Lin, and Haizhou Li
    In Proceedings of ISMIR, 2020.
  3.  Automatic Leaderboard: Evaluation of Singing Quality without a Standard Reference
    Chitralekha Gupta, Haizhou Li, and Ye Wang
    IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020.
  4.  Automatic Lyrics Alignment and Transcription in Polyphonic Music: Does Background Music Help?
    Chitralekha Gupta, Emre Yilmaz, and Haizhou Li
    In Proceedings of ICASSP, 2020.

    2019

  1.  Acoustic Modeling for Automatic Lyrics-to-Audio Alignment
    Chitralekha Gupta, Emre Yilmaz, and Haizhou Li
    In Proceedings of Interspeech, 2019.
  2.  Automatic Lyrics-to-Audio Alignment on Polyphonic Music using Singing-Adapted Acoustic Models
    Chitralekha Gupta*, Bidisha Sharma*, Haizhou Li, and Ye Wang
    In Proceedings of ICASSP, 2019 (*equal contributors).

    2018

  1.  Automatic Evaluation of Singing Quality without a Reference
    Chitralekha Gupta, Haizhou Li, and Ye Wang
    In Proceedings of APSIPA ASC, 2018.
  2.  A Technical Framework for Automatic Perceptual Evaluation of Singing Quality
    Chitralekha Gupta, Haizhou Li, and Ye Wang
    APSIPA Transactions on Signal and Information Processing, 2018.
  3.  Semi-supervised lyrics and solo-singing alignment
    Chitralekha Gupta, Rong Tong, Haizhou Li, and Ye Wang
    In Proceedings of ISMIR, 2018.
  4.  Automatic Pronunciation Evaluation of Singing
    Chitralekha Gupta, Haizhou Li, and Ye Wang
    In Proceedings of Interspeech, 2018.
  5.  Empirically weighing the importance of decision factors when selecting music to sing
    Michael Mustaine, Karim Ibhrahim, Chitralekha Gupta, and Ye Wang
    In Proceedings of ISMIR, 2018.

    2017

  1.  Perceptual Evaluation of Singing Quality - Best Student Paper Award 🏆
    Chitralekha Gupta, Haizhou Li, and Ye Wang
    In Proceedings of the Asia-Pacific Signal and Information Processing Association (APSIPA), Kuala Lumpur, Dec. 2017.
  2.  Using Music Technology to Motivate Foreign Language Learning
    Douglas Turnbull, Chitralekha Gupta, Dania Murad, Michael Barone, and Ye Wang
    In Proceedings of the International Conference on Orange Technologies (ICOT), Singapore, Dec. 2017.
  3.  Towards automatic mispronunciation detection in singing
    Chitralekha Gupta, David Grunberg, Preeti Rao, and Ye Wang
    In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Suzhou, Oct. 2017.
  4.  Intelligibility of Sung Lyrics: A Pilot Study
    Karim Magdi, David Grunberg, Kat Agres, Chitralekha Gupta, and Ye Wang
    In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Suzhou, Oct. 2017.
  5.  SECCIMA: Singing and Ear Training for Children with Cochlear Implants via a Mobile Application
    Zhiyan Duan, Chitralekha Gupta, Graham Percival, David Grunberg, and Ye Wang
    In Proceedings of Sound and Music Computing (SMC), Helsinki, July 2017.

    Earlier Publications

  1.  Spectral Estimation of Clutter for Matched Illumination
    Chitralekha Gupta, Kaushal Jadia, Avik Santra, and Rajan Srinivasan
    In Proceedings of the International Radar Symposium India (IRSI), Bangalore, Dec. 2013.
  2.  Objective Assessment of Ornamentation in Indian Classical Singing
    Chitralekha Gupta and Preeti Rao
    S. Ystad et al. (Eds.): CMMR/FRSM 2011, Springer Lecture Notes in Computer Science 7172, pp. 1-25, 2012 (Master's thesis work).
  3.  An objective evaluation tool for ornamentation in singing
    Chitralekha Gupta and Preeti Rao
    In Proceedings of the International Symposium on Computer Music Modelling and Retrieval (CMMR) and Frontiers of Research on Speech and Music (FRSM), Bhubaneswar, India, March 2011.
  4.  Context-aware features for singing voice detection in polyphonic music
    Vishweshwara Rao, Chitralekha Gupta, and Preeti Rao
    In Proceedings of the 9th International Workshop on Adaptive Multimedia Retrieval, Barcelona, July 2011.
  5.  Evaluating Vowel Pronunciation Quality: Formant Space Matching versus ASR Confidence Scoring
    Ashish Patil, Chitralekha Gupta, and Preeti Rao
    In Proceedings of the 16th National Conference on Communications, IIT Madras, Chennai, Jan. 2010.

   Ongoing Projects

 SonicVista

We are building SonicVista, a wearable device for people with visual impairments that conveys awareness and experience of distant environmental scenes (beyond audible range) through generative sounds. We are exploring the use of Meta's ARIA glasses in this project.

April 2024: Our paper titled SonicVista: Towards Creating Awareness of Distant Scenes through Sonification was accepted to the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) 2024.
March 2024: My colleague Shreyas and I were invited to present our work at Meta's ARIA Summit in Redmond, WA, USA.



 Controlled Audio Gen

Video games and movies often require custom sound effects. We are investigating methods for generating environmental sounds with control over various aspects of those sounds, using both supervised and unsupervised methods of control. For example, we would like to generate wind sounds at low or high wind strength, or the sound of rain ranging from a drizzle to a downpour.

April 2024: Our paper titled Example-Based Framework for Perceptually Guided Audio Texture Generation was accepted to IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024.

Demo videos

MuSigPro

MuSigPro Product Demo

MuSigPro's singing karaoke app is now available on the Google Play Store.

Try it out now! -- Download App

MuSigPro

Automatic Leaderboard Generation of Singers using Reference-Independent Singing Quality Evaluation Methods

This technology was awarded the NUS Graduate Research Innovation Program (GRIP) start-up grant to establish the company MuSigPro Pte. Ltd.

Try it out yourself! -- https://musigpro.com

AutoLyrixAlign

Automatic lyrics-to-audio alignment system for polyphonic music audio

Demo submitted to ICASSP 2020 Show and Tell

This system outperformed all other systems on the Music Information Retrieval Evaluation eXchange platform at MIREX 2019, with a mean absolute word alignment error of less than 200 ms across all test datasets (MIREX results).

Try it out yourself! -- https://autolyrixalign.hltnus.org

Speak-to-Sing

A Personalized Speech-to-Singing Conversion System

Presented at Interspeech 2019 Show and Tell, Graz, Austria. Poster link

Try it out yourself! -- https://speak-to-sing.hltnus.org