About Me

I am a Senior Research Fellow at the National University of Singapore.

My research interests are neural audio synthesis, singing voice analysis and evaluation, applications of ASR in music such as lyrics alignment and transcription, music information retrieval, and applications of music in education and health therapy.

I am also the founder of MuSigPro, a music tech company that provides a singing competition platform powered by our AI-driven singing quality evaluation technology.


Download CV


   Publications

    2023

  1.  Example-Based Framework for Perceptually Guided Audio Texture Generation
    Purnima Kamath, Chitralekha Gupta, Lonce Wyse, and Suranga Nanayakkara
    IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023 (under review).
  2.  Towards Controllable Audio Texture Morphing
    Chitralekha Gupta*, Purnima Kamath*, Yize Wei, Zhuoyao Li, Suranga Nanayakkara, and Lonce Wyse
    In Proceedings of ICASSP 2023.
  3.  Evaluating Descriptive Quality of AI-Generated Audio Using Image-Schemas
    Purnima Kamath, Zhuoyao Li, Chitralekha Gupta, Suranga Nanayakkara, and Lonce Wyse
    In Proceedings of ACM IUI 2023.

    2022

  1.  Parameter Sensitivity of Deep-Feature based Evaluation Metrics for Audio Textures
    Chitralekha Gupta, Yize Wei, Zequn Gong, Purnima Kamath, Zhuoyao Li, and Lonce Wyse
    In Proceedings of ISMIR 2022.
  2.  Deep Learning Approaches in Topics of Singing Information Processing (Overview Paper)
    Chitralekha Gupta, Haizhou Li, and Masataka Goto
    IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.
  3.  Automatic Lyrics Transcription of Polyphonic Music With Lyrics-Chord Multi-Task Learning
    Xiaoxue Gao, Chitralekha Gupta, and Haizhou Li
    IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.
  4.  Genre-conditioned Acoustic Models for Automatic Lyrics Transcription of Polyphonic Music
    Xiaoxue Gao, Chitralekha Gupta, and Haizhou Li
    In Proceedings of ICASSP 2022.
  5.  Music-robust Automatic Lyrics Transcription of Polyphonic Music
    Xiaoxue Gao, Chitralekha Gupta, and Haizhou Li
    In Proceedings of SMC 2022.
  6.  Sound Model Factory: An Integrated System Architecture for Generative Audio Modelling
    Lonce Wyse, Purnima Kamath, and Chitralekha Gupta
    In Proceedings of EvoMusART 2022, Springer LNCS.
  7.  PoLyScribers: Joint Training of Vocal Extractor and Lyrics Transcriber for Polyphonic Music
    Xiaoxue Gao, Chitralekha Gupta, and Haizhou Li
    Submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.

    2021

  1.  Training Explainable Singing Quality Assessment Network with Augmented Data
    Jinhu Li, Chitralekha Gupta, and Haizhou Li
    In Proceedings of APSIPA 2021.
  2.  Towards Reference-Independent Rhythm Assessment of Solo Singing
    Chitralekha Gupta, Jinhu Li, and Haizhou Li
    In Proceedings of APSIPA 2021.
  3.  Signal Representations for Synthesizing Audio Textures with Generative Adversarial Networks
    Chitralekha Gupta, Purnima Kamath, and Lonce Wyse
    In Proceedings of SMC 2021.

    2020

  1.  Spectral Features and Pitch Histogram for Automatic Singing Quality Evaluation with CRNN
    Huang Lin, Chitralekha Gupta, and Haizhou Li
    In Proceedings of APSIPA 2020.
  2.  Automatic Rank Ordering of Singing Vocals with Twin Neural Network
    Chitralekha Gupta, Huang Lin, and Haizhou Li
    In Proceedings of ISMIR 2020.
  3.  Automatic Leaderboard: Evaluation of Singing Quality without a Standard Reference
    Chitralekha Gupta, Haizhou Li, and Ye Wang
    IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020.
  4.  Automatic Lyrics Alignment and Transcription in Polyphonic Music: Does Background Music Help?
    Chitralekha Gupta, Emre Yilmaz, and Haizhou Li
    In Proceedings of ICASSP 2020.

    2019

  1.  Acoustic Modeling for Automatic Lyrics-to-Audio Alignment
    Chitralekha Gupta, Emre Yilmaz, and Haizhou Li
    In Proceedings of Interspeech 2019.
  2.  Automatic Lyrics-to-Audio Alignment on Polyphonic Music using Singing-Adapted Acoustic Models
    Chitralekha Gupta*, Bidisha Sharma*, Haizhou Li, and Ye Wang
    In Proceedings of ICASSP 2019 (*equal contributors).

    2018

  1.  Automatic Evaluation of Singing Quality without a Reference
    Chitralekha Gupta, Haizhou Li, and Ye Wang
    In Proceedings of APSIPA ASC 2018.
  2.  A Technical Framework for Automatic Perceptual Evaluation of Singing Quality
    Chitralekha Gupta, Haizhou Li, and Ye Wang
    APSIPA Transactions on Signal and Information Processing, 2018.
  3.  Semi-supervised lyrics and solo-singing alignment
    Chitralekha Gupta, Rong Tong, Haizhou Li, and Ye Wang
    In Proceedings of ISMIR 2018.
  4.  Automatic Pronunciation Evaluation of Singing
    Chitralekha Gupta, Haizhou Li, and Ye Wang
    In Proceedings of Interspeech 2018.
  5.  Empirically weighing the importance of decision factors when selecting music to sing
    Michael Mustaine, Karim Ibhrahim, Chitralekha Gupta, and Ye Wang
    Accepted for ISMIR 2018.

    2017

  1.  Perceptual Evaluation of Singing Quality (Best Student Paper Award)
    Chitralekha Gupta, Haizhou Li, and Ye Wang
    In Proceedings of the Asia-Pacific Signal and Information Processing Association (APSIPA) Annual Summit and Conference, Kuala Lumpur, Dec. 2017.
  2.  Using Music Technology to Motivate Foreign Language Learning
    Douglas Turnbull, Chitralekha Gupta, Dania Murad, Michael Barone, and Ye Wang
    In Proceedings of the International Conference on Orange Technologies (ICOT), Singapore, Dec. 2017.
  3.  Towards automatic mispronunciation detection in singing
    Chitralekha Gupta, David Grunberg, Preeti Rao, and Ye Wang
    In Proceedings of the International Society for Music Information Retrieval (ISMIR) Conference, Suzhou, Oct. 2017.
  4.  Intelligibility of Sung Lyrics: A Pilot Study
    Karim Magdi, David Grunberg, Kat Agres, Chitralekha Gupta, and Ye Wang
    In Proceedings of the International Society for Music Information Retrieval (ISMIR) Conference, Suzhou, Oct. 2017.
  5.  SECCIMA: Singing and Ear Training for Children with Cochlear Implants via a Mobile Application
    Zhiyan Duan, Chitralekha Gupta, Graham Percival, David Grunberg, and Ye Wang
    In Proceedings of Sound and Music Computing (SMC), Helsinki, July 2017.

    Earlier Publications

  1.  Spectral Estimation of Clutter for Matched Illumination
    Chitralekha Gupta, Kaushal Jadia, Avik Santra, and Rajan Srinivasan
    In Proceedings of the International Radar Symposium India (IRSI), Bangalore, Dec. 2013.
  2.  Objective Assessment of Ornamentation in Indian Classical Singing
    Chitralekha Gupta and Preeti Rao
    In S. Ystad et al. (Eds.): CMMR/FRSM 2011, Springer Lecture Notes in Computer Science 7172, pp. 1-25, 2012 (Master's thesis work).
  3.  An objective evaluation tool for ornamentation in singing
    Chitralekha Gupta and Preeti Rao
    In Proceedings of the International Symposium on Computer Music Modelling and Retrieval (CMMR) and Frontiers of Research on Speech and Music (FRSM), Bhubaneswar, India, March 2011.
  4.  Context-aware features for singing voice detection in polyphonic music
    Vishweshwara Rao, Chitralekha Gupta, and Preeti Rao
    In the 9th International Workshop on Adaptive Multimedia Retrieval, Barcelona, July 2011.
  5.  Evaluating Vowel Pronunciation Quality: Formant Space Matching versus ASR Confidence Scoring
    Ashish Patil, Chitralekha Gupta, and Preeti Rao
    In Proceedings of the 16th National Conference on Communications, IIT Madras, Chennai, Jan. 2010.

   News

Organizing ICASSP 2022 at Singapore, 22-27 May 2022

ICASSP 2022 will be a hybrid event this year, with the physical event held in Singapore. I am serving as the local co-chair on the organizing committee. It is so nice to host a physical conference and meet researchers in person after so long!

MuSigPro has launched its Singing Competition Platform, May 2022

MuSigPro, a singing competition platform powered by AI singing quality assessment technology, launches singing competitions every month hosted by influencers and brands. Try it out now! DEMO VIDEO

ISMIR 2020, Oct 11-15, 2020

Here's our 4-minute presentation video:


ICASSP 2020, May 4-8, 2020

Attended the first-ever virtual ICASSP 2020! Here are our paper and Show and Tell presentation videos:


Organizing ASRU 2019 at Singapore, 15-18 Dec 2019

The speech groups in Singapore have come together to organize the Automatic Speech Recognition and Understanding (ASRU) Workshop 2019. I am serving as the local logistics chair on the organizing committee.

MIREX 2019 Automatic Lyrics-to-Audio Alignment, Nov 2019

The "Automatic Lyrics-to-Audio Alignment" system developed by the team from HLT-NUS (Chitralekha Gupta, Emre Yilmaz, Haizhou Li) has outperformed all other systems in MIREX 2019.

Finalist at the 3-Minute Thesis competition, Jul 2018

I presented a part of my PhD thesis at the 3-Minute Thesis competition.


SoC Innovation Prize, 2018

Our team won this award for SLIONS (Singing and Listening to Improve Our Natural Speaking), an application for language learning through singing.

Organizing ISMIR 2017 at Suzhou, China, 23-28 Oct 2017

Our research group at the Sound and Music Computing Lab, NUS, organized the international conference on music information retrieval in China. It was a delightful first experience to serve on the organizing committee of an international conference, as the website co-chair.

Demo videos

MuSigPro

MuSigPro Product Demo

MuSigPro's singing karaoke app is now available on the Google Play Store.

Try it out now! -- Download App

MuSigPro

Automatic Leaderboard Generation of Singers using Reference-Independent Singing Quality Evaluation Methods

This technology has been awarded the NUS Graduate Research Innovation Program (GRIP) start-up grant, to establish the company MuSigPro Pte. Ltd.

Try it out yourself! -- https://musigpro.com

AutoLyrixAlign

Automatic lyrics-to-audio alignment system for polyphonic music audio

Demo submitted to ICASSP 2020 Show and Tell

This system outperformed all other systems on the Music Information Retrieval Evaluation eXchange (MIREX) 2019 platform, with a mean absolute word alignment error of less than 200 ms across all test datasets (MIREX results).

Try it out yourself! -- https://autolyrixalign.hltnus.org

Speak-to-Sing

A Personalized Speech-to-Singing Conversion System

Presented at the Interspeech 2019 Show and Tell, Graz, Austria. Poster link

Try it out yourself! -- https://speak-to-sing.hltnus.org