On March 19 and 20, 2012, the AR Standards Community will gather in Austin, Texas. In the weeks leading up to this, the fifth International AR Standards Community meeting, sponsored by the Khronos Group and the Open Geospatial Consortium, experts are preparing their position papers and planning contributions.
I am pleased to be able to share a recent interview with one of the participants of the upcoming meeting, Marius Preda. Marius is Associate Professor with the Institut TELECOM, France, and the founder and head of GRIN – Graphics and Interactive Media. He is currently the chairperson of the MPEG 3D Graphics group. He has been actively involved in MPEG since 1998, focusing especially on video and 3D graphics coding, and is the main contributor to the new animation tools dedicated to generic synthetic objects. More recently, he has led the MPEG-V and MPEG AR groups. He is also the editor of several MPEG standards.
Marius Preda’s research interests include 3D graphics compression, virtual characters, rendering, interactive rich multimedia and multimedia standardization. He also leads several research projects at the institutional, national, European and international levels.
Spime Wrangler: Where did you first learn about the work going on in the AR Standards community?
MP: In July 2011, during the preparations for the 97th MPEG meeting, held in Turin, Italy, I had the pleasure of meeting Christine Perey. She came to the meeting of the MPEG-V AhG, a group that is creating, under the umbrella of MPEG, a series of standards dealing with sensors, actuators and, in general, the frontier between the physical and virtual worlds.
Spime Wrangler: What sorts of activities are going on in MPEG (ISO/IEC JTC1 SC29 WG11) that are most relevant to AR and visual search? Is there a document or white paper you have written on this topic?
MP: Since 1998, when the first edition of MPEG-4 was published, the concept of mixed – natural and synthetic – content has been possible in an open and rich standard, relatively advanced for its time. MPEG-4 not only advanced the compression of audio and video, but also introduced, for the first time, compression of graphics assets. Later on, MPEG revisited the framework for 3D graphics compression and grouped into Part 16 of MPEG-4 several tools allowing compact representation of 3D assets.
Separately, MPEG published in 2011 the first edition of the MPEG-V specification, a standard defining the representation format for sensors and actuators. Using this standard, it is possible to deal with data from the simplest sensors, such as temperature, light, orientation and position, up to very complex ones such as biosensors and motion cameras. Similarly for actuators: from the simple vibration effect embedded today in almost all mobile phones to complex motion chairs such as the ones used in 4D theatres, these can all be specified in standard-compliant libraries.
Finally, several years ago, MPEG standardized MPEG-7, a framework for attaching descriptors to media content. This work is currently being extended: with a set of compact descriptors for natural objects, we are working on Visual Search. MPEG also has ongoing work on compression of 3D video, a key technology for rendering realistic augmentation of the captured image in real time.
Building on these specifications and significant know-how in domains highly relevant to AR, MPEG decided in December 2011 to publish an application format for Augmented Reality, grouping together the relevant standards in order to build a deterministic, solid and useful model for AR applications and services.
More information related to MPEG standards is available here.
Spime Wrangler: Why are you going to attend the meeting in Austin? I mean, what are your motivations and what do you hope to achieve there?
MP: The objective of the AR Standards Community is laudable but, at the same time, relatively difficult to achieve. There are currently several, probably too many, standardization bodies claiming to deliver AR-relevant standards to the industry. Our duty, as standards organizations, is to provide interoperable solutions. This is not new: organizations, including standards development bodies, have long used mechanisms such as liaisons to cooperate rather than compete.
A recent, very successful example of this is the work on video coding jointly created by ISO/IEC MPEG and ITU-T VCEG and published separately under the names MPEG-4 AVC and H.264, respectively. In fact, it is exactly the same document, and a product compliant with one is implicitly compliant with the other. My motivation for participating in the Austin meeting is to see whether such a collaborative approach is possible in the field of AR as well.
Spime Wrangler: Can you please give us a sneak peek into what you are going to present and share with the community on March 19-20?
MP: I’ll present two aspects of MPEG’s work related to AR. In the first presentation, I’ll talk about MPEG-4 Part 16 and Part 25. The first proposes a set of tools for 3D graphics compression; the second, an approach for applying these tools to scene graph representations other than the one proposed by MPEG-4, e.g. COLLADA and X3D. So, as you can see, there are several AR-related activities going on in parallel.
In the second presentation I’ll talk about the MPEG Augmented Reality Application Format (ARAF), and MARBle, an MPEG browser for AR developed by the TOTEM project (currently available for use on Android phones). ARAF is an ongoing activity in MPEG, and early contact with other standards bodies may help us all work towards the same vision of providing a one-stop solution for AR applications and services.