Spoken language enables us to convey our emotional states, explicitly or implicitly. Prosody, the variation in voice pitch, plays a key role in this transmission. Measuring prosody has clinical value, as it can be largely automated and integrated into telemedicine or telepsychology platforms. Today, artificial intelligence models can automatically detect affective dimensions in speech: valence (pleasant or unpleasant tone), arousal (intensity of the emotion), and dominance (sense of control). However, most of these models were trained on English-language data, and none has yet been adapted to non-Québécois French-language data. Given linguistic and cultural differences, it is crucial to adapt AI models to their target populations. The goal of the DIVA project is to collect mean valence, arousal, and dominance ratings for a French-language (non-Québécois) audio corpus (GEMEP), from a general population aged 18 and over, in order to fine-tune a pretrained machine learning model for emotion recognition in vocal recordings.
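As an illustration only, the adaptation step described above can be sketched as fitting a regression head that maps a pooled speech embedding to the three affective dimensions. Everything in this sketch is a placeholder: the embeddings are random stand-ins for what a pretrained speech encoder would produce, and the targets stand in for the mean valence, arousal, and dominance ratings that DIVA aims to collect on GEMEP; it is not the project's actual pipeline.

```python
# Hypothetical sketch: "fine-tuning" reduced to training a linear regression
# head on fixed embeddings. In a real setup the embeddings would come from a
# pretrained speech encoder and the targets from human VAD annotations.
import numpy as np

rng = np.random.default_rng(0)

N, D = 200, 64                          # clips, embedding size (placeholders)
X = rng.normal(size=(N, D))             # stand-in pooled encoder embeddings
true_W = rng.normal(size=(D, 3))
y = X @ true_W + 0.1 * rng.normal(size=(N, 3))  # stand-in mean VAD ratings

W = np.zeros((D, 3))                    # regression-head weights
lr = 0.1
for _ in range(500):                    # gradient descent on mean squared error
    grad = 2 * X.T @ (X @ W - y) / N
    W -= lr * grad

pred = X @ W                            # predicted (valence, arousal, dominance)
mse = float(np.mean((pred - y) ** 2))
print(pred.shape, round(mse, 4))
```

In practice one would typically also unfreeze some encoder layers rather than train only the head, but the head-only version keeps the idea of reusing a pretrained representation visible in a few lines.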