Imperial College London researchers claim they've developed a voice analysis method that supports applications like speech recognition and identification while removing sensitive attributes such as emotion, gender, and health status. Their framework takes voice data along with the user's privacy preferences as auxiliary information, and uses those preferences to filter out sensitive attributes that could otherwise be extracted from recorded speech.
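The article does not detail the researchers' actual architecture, but the idea of a preference-driven filter can be sketched in a few lines. Everything below is an illustrative assumption: the feature names, the attribute-to-feature mapping, and the suppression rule (zeroing out cues) are hypothetical stand-ins, not the method itself.

```python
# Hypothetical sketch: filter voice features according to user privacy
# preferences. The mapping from sensitive attributes to acoustic features
# below is an illustrative assumption, not the researchers' design.
ATTRIBUTE_FEATURES = {
    "gender": ["pitch_mean", "pitch_range"],
    "emotion": ["energy_var", "speaking_rate"],
    "health": ["jitter", "shimmer"],
}

def filter_features(features, preferences):
    """Return a copy of `features` in which every dimension linked to an
    attribute the user marked private is replaced with a neutral value,
    leaving task-relevant features (e.g. for speech recognition) intact."""
    filtered = dict(features)
    for attribute, allow in preferences.items():
        if allow:
            continue  # user permits this attribute to remain in the signal
        for name in ATTRIBUTE_FEATURES.get(attribute, []):
            if name in filtered:
                filtered[name] = 0.0  # suppress the sensitive cue
    return filtered

# Example: hide gender and emotion cues, keep health-related ones.
raw = {"pitch_mean": 180.0, "pitch_range": 40.0,
       "energy_var": 0.7, "speaking_rate": 4.2,
       "jitter": 0.01, "mfcc_1": -3.4}
prefs = {"gender": False, "emotion": False, "health": True}
print(filter_features(raw, prefs))
```

In this toy version, `mfcc_1` (standing in for recognition-relevant features) passes through untouched while the gender- and emotion-linked dimensions are neutralized; a real system would learn such a transformation rather than hard-code it.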

Voice signals are a rich source of data, carrying both linguistic and paralinguistic information, including age, likely gender, health status, personality, mood, and emotional state. This raises privacy concerns when raw audio is transmitted to servers: attribute-inference attacks can reveal characteristics the speaker never intended to share. In fact, the researchers assert that attackers could use a speech recognition model to learn further attributes from users…
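To make the threat concrete, here is a deliberately minimal toy of an attribute-inference attack. The embeddings, labels, and threshold rule are all synthetic assumptions for illustration: the point is only that an attacker receiving speech representations can fit a simple predictor for a sensitive attribute that correlates with some dimension of those representations.

```python
# Toy attribute-inference attack on synthetic speech embeddings.
# The data and the attacker's rule are fabricated for illustration;
# real attacks would train a classifier on learned representations.
embeddings = [
    ([0.9, 0.1], "A"), ([0.8, 0.3], "A"),
    ([0.2, 0.7], "B"), ([0.1, 0.9], "B"),
]

def infer_attribute(vec, threshold=0.5):
    # Attacker's rule: first dimension above threshold -> class "A".
    return "A" if vec[0] > threshold else "B"

correct = sum(infer_attribute(v) == label for v, label in embeddings)
print(f"attack accuracy: {correct}/{len(embeddings)}")  # prints "attack accuracy: 4/4"
```

A filtering framework like the one described above aims to break exactly this correlation, so that the same threshold (or a trained classifier) performs no better than chance.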
