Video Quality Experts Group (VQEG)

Audiovisual HD Quality (AVHD)

The AVHD group is the direct successor of the former individual projects HDTV 2 and Multimedia 2. These were merged into AVHD because the scopes of the two projects gradually became very similar and many synergies were found between them. The new group AVHD was therefore established with a broadened scope and is currently working on three main topics:

  1. AVHD-AS / P.NATS Phase 2 Project
  2. Advanced Subjective Methods (AVHD-SUB)
  3. Audiovisual Quality Integration (AVHD-AV)

#1) AVHD-AS/P.NATS Phase 2 Project

AVHD-AS/P.NATS Phase 2 is a joint project of VQEG and ITU Study Group 12 Question 14. The main goal is to develop a set of objective models, varying in complexity, type of input, and target use cases, for the assessment of video quality in HTTP/TCP-based adaptive bitrate streaming services (e.g., YouTube, Vimeo, Amazon Video, Netflix). For these services, the quality experienced by the end user is affected by video coding degradations and by delivery degradations due to initial buffering, re-buffering, and media adaptations caused by changes in bitrate, resolution, and frame rate.

In general, three categories of models will be developed for a variety of H.264/H.265/VP9-coded adaptive transmissions of HD/UHD videos of up to 5 minutes in length. The target resolution and bitrate parameters reflect the ranges typically available in modern streaming services.

  1. Bitstream-based models: These models mainly target continuous mid-point and continuous mobile video monitoring use cases, where parametric input derived from the bitstream is used as the model input.
  2. Pixel-based models: Full-reference, reduced-reference, and no-reference models will be evaluated, targeting continuous geographically distributed monitoring and drive testing.
  3. Hybrid models: A hybrid no-reference model taking both bitstream-based parametric information and pixels as input.

Tentative Time Plan:

Important Details:

To receive communication related to AVHD-AS/P.NATS Phase 2 (weekly conference calls, project follow-ups)

All communication is sent to the 3 lists:

To access the document server (guest account needed), visit

Questions relating to this project should be sent to Shahid Mahmood Satti and Silvio Borer

#2) Advanced Subjective Methods (AVHD-SUB) Project

The AVHD group investigates improved audiovisual subjective quality testing methods. This effort may lead to a revision of ITU-T Rec. P.911. Presentations on this topic are encouraged at all VQEG meetings.

Novel Scene Experiment Design Validation 

The AVHD group has begun investigating alternative experiment designs for subjective tests. The goal is to validate subjective testing of long video sequences that are viewed only once by each subject. Adaptive streaming requires the use of long video sequences (e.g., 5 min), which prohibits the traditional full-matrix design in which each source scene is viewed 10 to 25 times.

An effort is underway to evaluate experiment designs that avoid scene reuse. The subjective test was prepared and distributed at the Glasgow 2015 VQEG meeting. Multiple labs will run subjects through this experiment to test the repeatability and stability of the alternative experiment designs. The goal is to run all subjects by the March 2016 VQEG meeting.

To get involved, contact Lucjan Janowski and Margaret Pinson.

Subjective Test Environment (Finished)

The AVHD group conducted a joint investigation into the impact of environment on mean opinion scores (MOS). The results of that study were published in the following paper:

"The Influence of Subjects and Environment on Audiovisual Subjective Tests: An International Study," IEEE Journal of Selected Topics in Signal Processing, Vol. 6, No. 6, October 2012, pp. 640-651.

Margaret H. Pinson, Lucjan Janowski, Romuald Pépion, Quan Huynh-Thu, Christian Schmidmer, Phillip Corriveau, Audrey Younkin, Patrick Le Callet, Marcus Barkowsky, and William Ingram

#3) Audiovisual Quality Integration (AVHD-AV) Project

Audiovisual Combining Function (Dormant)

The goal of this project is to compare different methods of combining an audio-only MOS and a video-only MOS into an audio-visual MOS. Several proposals presented in the past were collected and validated by Margaret Pinson (NTIA/ITS), and one particular method seemed most suitable:

audio-only MOS × video-only MOS = audio-visual MOS
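As an illustration, the multiplicative combination above can be sketched in a few lines of Python. The normalization of the MOS values to the [0, 1] range and the rescaling back to the original scale are assumptions made here for illustration only; in practice, the mapping between the product and a MOS scale would be fitted to subjective data.

```python
def combine_av_mos(audio_mos: float, video_mos: float,
                   lo: float = 1.0, hi: float = 5.0) -> float:
    """Sketch of a multiplicative audiovisual quality model.

    Normalizes audio-only and video-only MOS from an assumed
    [lo, hi] scale to [0, 1], multiplies them, and maps the
    product back to [lo, hi]. The normalization and rescaling
    are illustrative assumptions, not part of the published model.
    """
    a = (audio_mos - lo) / (hi - lo)
    v = (video_mos - lo) / (hi - lo)
    return lo + (hi - lo) * (a * v)

# The product penalizes a degradation in either modality:
# poor audio drags down an otherwise excellent video, and vice versa.
print(combine_av_mos(5.0, 5.0))  # 5.0 (both excellent)
print(combine_av_mos(2.0, 5.0))  # 2.0 (audio limits the result)
```

The multiplicative form captures the intuition that overall audiovisual quality is bounded by the weaker of the two modalities, which is one reason this method was judged most suitable among the collected proposals.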

The results of that study were published in the following paper:

"Audiovisual Quality Components: An Analysis," IEEE Signal Processing Magazine, vol.28, no.6, pp.60-67, Nov. 2011.

Margaret H. Pinson, William J. Ingram, and Arthur A. Webster

The original plan was to conduct a competitive benchmark, but at the moment this seems obsolete and the project is dormant.

Questions should be addressed to the AVHD Chair, Christian Schmidmer, and Vice Chair, Quan Huynh-Thu.

VQEG is Co-Chaired by: Margaret Pinson, NTIA/ITS and Kjell Brunnström, RISE Research Institute of Sweden AB
The VQEG website is hosted by ITS