Wednesday 9 March 2011

SAF, the aggregation of LASeR and audiovisual material

SAF (Simple Aggregation Format) is the part of the LASeR standard that defines tools to fulfill the requirements of rich-media service design at the interface between scene representation and transport mechanisms. SAF offers the following functionality:
- simple aggregation of any type of media stream (MPEG or non-MPEG), producing a SAF stream with a low-overhead multiplexing scheme suited to low-bandwidth networks, and
- the possibility to cache SAF streams.
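As a rough illustration of this aggregation, the following Python sketch interleaves the access units of several elementary streams into one SAF-like stream ordered by decoding time; the class and field names are simplified stand-ins, not the normative SAF packet syntax:

# Illustrative only: a simplified model of SAF-style aggregation,
# not the normative SAF bitstream syntax.
from dataclasses import dataclass
from typing import List

@dataclass
class AccessUnit:
    stream_id: int      # which elementary stream this AU belongs to
    timestamp_ms: int   # decoding time of the AU
    payload: bytes      # coded media or scene data

def aggregate_to_saf(streams: List[List[AccessUnit]]) -> List[AccessUnit]:
    """Interleave the AUs of several elementary streams into a single
    stream ordered by decoding time; each AU keeps only a small header
    (stream id and timestamp), which keeps the multiplexing overhead low."""
    merged = [au for stream in streams for au in stream]
    merged.sort(key=lambda au: au.timestamp_ms)
    return merged

# Example: one scene stream and one audio stream multiplexed together.
scene = [AccessUnit(1, 0, b"<scene>"), AccessUnit(1, 2000, b"<update>")]
audio = [AccessUnit(2, 0, b"frame0"), AccessUnit(2, 1000, b"frame1")]
saf_stream = aggregate_to_saf([scene, audio])

Because the merged stream is a single, time-ordered resource, a receiver can also cache it as a whole and replay it later.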

Multiplexing media streams produces a SAF stream, which can be delivered over any delivery mechanism: download-and-play, progressive download, streaming or broadcasting.
The purpose of the LASeR systems decoder model is to provide an abstract view of the behavior of the terminal. The sender may use it to predict how the receiving terminal will behave in terms of buffer management and synchronization when decoding data received in the form of elementary streams. The model includes a timing model and a buffer model, and specifies:
- the conceptual interface for accessing data streams (Delivery Layer),
- decoding buffers for coded data for each elementary stream,
- the behavior of elementary stream decoders,
- composition memory for decoded data from each decoder, and
- the output behavior of composition memory towards the compositor.

Each elementary stream is attached to a single decoding buffer.
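A minimal Python sketch of this pipeline, assuming one decoding buffer and one composition memory per elementary stream as stated above; the class and method names are illustrative, not part of the standard:

from collections import deque

class DecodingBuffer:
    """Holds coded access units received from the delivery layer."""
    def __init__(self):
        self._queue = deque()
    def push(self, coded_au):
        self._queue.append(coded_au)
    def pop(self):
        return self._queue.popleft()

class CompositionMemory:
    """Holds decoded units until the compositor consumes them."""
    def __init__(self):
        self._units = deque()
    def store(self, decoded_unit):
        self._units.append(decoded_unit)
    def next_for_compositor(self):
        return self._units.popleft()

def decode_step(buffer, decoder, memory):
    """Move one access unit through the model:
    decoding buffer -> decoder -> composition memory."""
    coded_au = buffer.pop()
    memory.store(decoder(coded_au))

# Example with a trivial 'decoder' that just tags the data as decoded.
buf, mem = DecodingBuffer(), CompositionMemory()
buf.push(b"coded-frame")
decode_step(buf, lambda au: ("decoded", au), mem)
print(mem.next_for_compositor())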
A multimedia presentation is a collection of a scene description and media (zero, one or more). A media item is an individual piece of audiovisual content of one of the following types: image (still picture), video (moving pictures), audio and, by extension, font data. A scene description is composed of text, graphics, animation, interactivity, and spatial, audio and temporal layout. The sequence of a scene description and its timed modifications is called a scene description stream; a scene description stream is also called a LASeR stream.
Modifications to the scene are called LASeR Commands. A command acts on elements or attributes of the scene at a given instant in time. LASeR Commands that need to be executed at the same time are grouped into one LASeR Access Unit (AU).
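As a sketch of how such updates might be grouped, the following Python fragment collects the commands that apply at the same instant into one access unit; the command names mirror typical LASeR update commands (Insert, Replace, Delete), but the data structures are illustrative, not the normative encoding:

from dataclasses import dataclass, field
from typing import List

@dataclass
class LaserCommand:
    name: str        # e.g. "Insert", "Replace", "Delete"
    target: str      # id of the scene element or attribute acted upon
    value: str = ""  # new content or attribute value, if any

@dataclass
class LaserAccessUnit:
    time_ms: int                                        # instant at which all commands execute
    commands: List[LaserCommand] = field(default_factory=list)

# All commands that must execute at t = 5000 ms go into the same AU.
au = LaserAccessUnit(time_ms=5000, commands=[
    LaserCommand("Replace", target="headline", value="Goal!"),
    LaserCommand("Insert", target="scoreboard", value="<text>1 - 0</text>"),
])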

A scene description specifies four aspects of a presentation:
- how the scene elements (media or graphics) are organised spatially, e.g. the spatial layout of the visual elements;
- how the scene elements (media or graphics) are organised temporally, i.e. if and how they are synchronised, when they start or end;
- how to interact with the elements in the scene (media or graphics), e.g. when a user clicks on an image;
- and if the scene is changing, how the scene changes happen.
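The toy Python model below tags a single scene element with each of these four aspects; the attribute names are invented for illustration and do not reflect LASeR's actual (SVG Tiny based) syntax:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SceneElement:
    element_id: str
    spatial: Dict[str, int]          # where it is drawn, e.g. x, y, width, height
    begin_ms: int                    # temporal: when it appears
    end_ms: int                      # temporal: when it disappears
    on_click: str = ""               # interactivity: action triggered by the user
    updates: List[str] = field(default_factory=list)  # how the element changes over time

banner = SceneElement(
    element_id="ad-banner",
    spatial={"x": 0, "y": 200, "width": 320, "height": 40},
    begin_ms=0, end_ms=30000,
    on_click="open:http://example.com/offer",
    updates=["Replace text at t=10s"],
)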
Mattia Donna Bianco

Friday 4 March 2011

LASeR Standard

LASeR (Lightweight Application Scene Representation) is the MPEG rich-media standard dedicated to the mobile, embedded and consumer-electronics industries. LASeR provides a fluid user experience of enriched content, including audio, video, text and graphics, on constrained networks and devices.
The LASeR standard is specified in MPEG-4 Part 20 (ISO/IEC 14496-20).

The LASeR standard specifies the coded representation of multimedia presentations for rich-media services. In the LASeR specification, a multimedia presentation is a collection of a scene description and media (zero, one or more). A media item is an individual piece of audiovisual content of one of the following types: image (still picture), video (moving pictures), audio and, by extension, font data. A scene description is composed of text, graphics, animation, interactivity and spatial and temporal layout.
A LASeR scene description specifies four aspects of a presentation:
  • how the scene elements (media or graphics) are organized spatially, e.g. the spatial layout of the visual elements;
  • how the scene elements (media or graphics) are organized temporally, i.e. if and how they are synchronized, when they start or end;
  • how to interact with the elements in the scene (media or graphics), e.g. when a user clicks on an image;
  • and if the scene is changing, how these changes happen.
The sequence of a scene description and its timed modifications is called a LASeR stream.
LASeR handles access units, i.e. self-contained chunks of data, which may be adapted for transmission over a variety of protocols. LASeR streams may be packaged with some or all of their related media into files of the ISO base media file format family (e.g. MP4) and delivered over reliable protocols.
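A structural sketch, in Python, of how LASeR access units could be mapped to the timed samples of a file-format track; no real muxer API is shown, and the field names are assumptions made for illustration:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Sample:
    decode_time_ms: int
    data: bytes
    is_sync: bool   # True when the AU is self-contained (a random access point)

def access_units_to_samples(aus: List[Tuple[int, bytes, bool]]) -> List[Sample]:
    """Map (time_ms, payload, is_random_access_point) tuples to samples,
    kept in decoding order as a file track expects."""
    return [Sample(t, data, rap) for t, data, rap in sorted(aus)]

samples = access_units_to_samples([
    (0, b"NewScene ...", True),       # a self-contained scene, usable as a sync sample
    (2000, b"Replace ...", False),    # an incremental update depending on the scene above
])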

LASeR:
  • Brings smart and pleasurable navigation within streamed and real-time AV content,
  • Is compliant with existing business models, and
  • Allows increased ARPU (Average Revenue Per User) by boosting service subscriptions through interactivity.
Thanks to the LASeR standard, operators can enrich their service offers and build lasting user engagement with next-generation rich-media technology, leveraging existing infrastructures and deploying easily across multiple devices and networks.
Mattia Donna Bianco