4.0 Presentation Layer: Data Conversion and Formatting
The Presentation Layer is responsible for the conversion and formatting of the broadcast information defined at the Application Layer. Its primary functions are to manage the audio source encoding process and to structure the associated Service Information (SI) for presentation to the user. This layer effectively translates the broadcaster’s content into a standardized digital format suitable for transmission.
4.1 Audio Source Encoding
The audio source encoding method specified for Digital System A is ISO/IEC MPEG-Audio Layer II, as defined in ISO/IEC 11172-3.
The encoding process begins with pulse code modulation (PCM) audio signals, which are sampled at 48 kHz. The encoder then generates a compressed digital bit stream at a selectable bit rate. For each monophonic channel, the available bit rates are 32, 48, 56, 64, 80, 96, 112, 128, 160, or 192 kbit/s.
The selection of a bit rate involves a direct trade-off between audio quality and bandwidth efficiency. For high-quality broadcasting purposes, a bit rate of 192 kbit/s per stereo programme is recommended to achieve fully transparent audio quality, providing a margin for multiple encoding/decoding processes. A rate of 96 kbit/s for a monophonic programme provides sound quality roughly comparable to that of a conventional AM broadcast. For speech-only programmes, a bit rate in the range of 32-48 kbit/s may be sufficient, allowing a greater number of services to be carried within the system multiplex.
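The trade-off described above can be sketched numerically. The bit rates below are those listed in the text; the multiplex capacity figure is purely illustrative (an assumption for this sketch, not a value from this section), and protection and SI overhead are ignored.

```python
# Per-monophonic-channel bit rates (kbit/s) available for MPEG-Audio
# Layer II source encoding, as listed in the text above.
MONO_BIT_RATES_KBPS = (32, 48, 56, 64, 80, 96, 112, 128, 160, 192)

def services_per_multiplex(service_kbps: int, multiplex_kbps: int) -> int:
    """Rough count of audio services fitting an available multiplex
    capacity, ignoring error protection and SI overhead (an assumption)."""
    if service_kbps <= 0:
        raise ValueError("bit rate must be positive")
    return multiplex_kbps // service_kbps

# A hypothetical usable capacity of 1152 kbit/s (illustrative only):
capacity = 1152
stereo_hq = 192   # recommended per-stereo-programme rate from the text
speech = 48       # upper end of the speech-only range from the text
print(services_per_multiplex(stereo_hq, capacity))  # → 6
print(services_per_multiplex(speech, capacity))     # → 24
```

The integer division makes the point of the text concrete: lowering the per-service bit rate directly multiplies the number of programmes a given multiplex can carry.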
The core of the audio encoder utilizes a polyphase filter bank to divide the digital audio signal into 32 sub-bands. A sophisticated psycho-acoustic model, which mimics the characteristics of the human ear, is then used to control the quantization and coding of these sub-band samples. This perceptual coding strategy is fundamental to the system’s efficiency, as it intelligently discards acoustically irrelevant information, thereby maximizing perceived audio quality for a given bit rate.
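The 32-band split can be illustrated with the cosine matrixing step of the Layer II analysis filter bank. This is a simplified sketch: the real encoder defined in ISO/IEC 11172-3 first applies a 512-tap prototype window whose tabulated coefficients are omitted here, so this shows only the matrixing of 64 (already windowed) samples into 32 sub-band samples.

```python
import math

N_SUBBANDS = 32

# Cosine matrixing coefficients from ISO/IEC 11172-3:
#   M[k][n] = cos((2k + 1)(n - 16) * pi / 64)
M = [[math.cos((2 * k + 1) * (n - 16) * math.pi / 64)
      for n in range(2 * N_SUBBANDS)]
     for k in range(N_SUBBANDS)]

def analyse(frame):
    """Map 64 windowed input samples to 32 sub-band samples."""
    assert len(frame) == 2 * N_SUBBANDS
    return [sum(M[k][n] * frame[n] for n in range(2 * N_SUBBANDS))
            for k in range(N_SUBBANDS)]

# 64 samples of a 1 kHz test tone at the 48 kHz sampling rate:
frame = [math.sin(2 * math.pi * 1000 * t / 48000) for t in range(64)]
subbands = analyse(frame)
print(len(subbands))  # → 32
```

In the full encoder, the psycho-acoustic model then decides, per sub-band, how coarsely these samples may be quantized before the distortion becomes audible.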
4.2 Service Information (SI) Presentation
In addition to audio, the system makes various elements of Service Information available for presentation on a receiver’s display. This enhances the user experience by providing context and supplementary data. The available SI elements include:
- Basic programme label (i.e., the name of the programme)
- Time and date
- Cross-reference to the same or a similar programme being transmitted in another ensemble or being simulcast by an AM or FM service
- Extended service label for programme-related services
- Programme information (e.g., the names of performers)
- Language
- Programme type (e.g., news, sport, music, etc.)
- Transmitter identifier
- TMC (Traffic Message Channel), which may use a speech synthesizer in the receiver
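A receiver might gather the SI elements listed above into a single record for display. The sketch below is a hypothetical container: the field names are illustrative and are not taken from the standard.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ServiceInformation:
    """Hypothetical receiver-side view of the presented SI elements."""
    programme_label: str                          # basic programme label
    time_and_date: Optional[datetime] = None
    cross_references: list = field(default_factory=list)   # other ensembles, AM/FM simulcasts
    extended_service_label: Optional[str] = None  # programme-related services
    programme_info: Optional[str] = None          # e.g. names of performers
    language: Optional[str] = None
    programme_type: Optional[str] = None          # e.g. "news", "sport", "music"
    transmitter_id: Optional[str] = None
    tmc_messages: list = field(default_factory=list)       # Traffic Message Channel

si = ServiceInformation(programme_label="Radio 1", programme_type="music")
print(si.programme_label)  # → Radio 1
```

Grouping the elements this way mirrors the text: the audio service is the primary payload, while SI travels alongside it as optional, display-oriented metadata.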
The selection of and access to this presented information are managed by the Session Layer, the next level down in the architecture.