Institute for Telecommunication Sciences
The research laboratory of the National Telecommunications and Information Administration

Margaret H. Pinson; Lucjan Janowski; Romuald Pépion; Quan Huynh-Thu; Christian Schmidmer; Philip J. Corriveau; Audrey Younkin; Patrick Le Callet; Marcus Barkowsky; William Ingram

Abstract: Traditionally, audio quality and video quality are evaluated separately in subjective tests. Best practices within the quality assessment community were developed before many modern audiovisual devices and services, such as internet video, smartphones, tablets, and connected televisions, came into use. These devices and services raise unique questions that require jointly evaluating the audio and the video within a subjective test. However, audiovisual subjective testing is a relatively under-explored field. In this paper, we address how best to conduct audiovisual subjective testing across a wide range of audiovisual quality. Six laboratories from four countries conducted a systematic study of audiovisual subjective testing. The stimuli and rating scale were held constant across experiments and laboratories; only the environment of the subjective test varied. Some subjective tests were conducted in controlled environments and some in public environments (a cafeteria, patio, or hallway). The audiovisual stimuli spanned a wide range of quality. Results show that these audiovisual subjective tests were highly repeatable from one laboratory and environment to the next. The number of subjects was the most important factor: based on this experiment, 24 or more subjects are recommended for Absolute Category Rating (ACR) tests, and in public environments 35 subjects were required to obtain the same Student’s t-test sensitivity. The second most important factor was individual differences between subjects. Other factors, such as language, country, lighting, background noise, wall color, and monitor calibration, had minimal impact. Analyses indicate that Mean Opinion Scores (MOS) are relative rather than absolute. Our analyses show that the results of experiments conducted in pristine laboratory environments are highly representative of results obtained with the same devices in actual use, in a typical user environment.
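The abstract's recommendation of 24 subjects (35 in public environments) rests on Student's t-test sensitivity: with more raters, the same MOS difference produces a larger t statistic and thus becomes easier to detect. The following sketch, which is an illustration and not taken from the report (the rating means, noise level, and function names are assumed for the example), shows how the two-sample t statistic is computed from per-subject ratings:

```python
# Hedged illustration: how the number of subjects affects the
# sensitivity of a two-sample t-test on MOS ratings.
# The MOS values and noise level below are assumptions for the example,
# not data from the report.
import math
import random

def t_statistic(a, b):
    """Welch's two-sample t statistic for two lists of ratings."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Sample variances (n - 1 denominator).
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def simulated_t(n, seed=0):
    """Simulate n subjects rating two stimuli whose true quality differs
    by 0.5 MOS on the 5-point ACR scale, with Gaussian rating noise,
    and return the magnitude of the resulting t statistic."""
    rng = random.Random(seed)
    clip_a = [min(5.0, max(1.0, rng.gauss(3.5, 0.8))) for _ in range(n)]
    clip_b = [min(5.0, max(1.0, rng.gauss(3.0, 0.8))) for _ in range(n)]
    return abs(t_statistic(clip_a, clip_b))

# The standard error shrinks roughly as 1/sqrt(n), so the same quality
# gap yields a larger |t| with 35 subjects than with a handful.
print(simulated_t(24), simulated_t(35))
```

This is only a sketch of the underlying statistics; the report's subject-count recommendations come from its cross-laboratory experimental data, not from a simulation like this one.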

Keywords: audiovisual quality; environment effect; language


To request a reprint of this report, contact:

Lilli Segre, Publications Officer
Institute for Telecommunication Sciences
(303) 497-3572
LSegre@ntia.gov

For technical information concerning this report, contact:

Margaret H. Pinson
Institute for Telecommunication Sciences
(303) 497-3579
mpinson@ntia.doc.gov


Disclaimer: Certain commercial equipment, components, and software may be identified in this report to specify adequately the technical aspects of the reported results. In no case does such identification imply recommendation or endorsement by the National Telecommunications and Information Administration, nor does it imply that the equipment or software identified is necessarily the best available for the particular application or uses.
