Monday, 10 September 2012

WimLive – live streaming that makes economic sense


With WimLive, the WimTV platform makes another service available to its users: the ability to offer live streaming with a simple way to get paid, possibly in combination with other parties.

Offline monetisation of events has never been a problem, and neither is online monetisation of large-scale events. Online monetisation of medium-to-small-scale live events, however, is an uphill battle because:
1. Setting up and maintaining a live streaming service may entail significant costs
2. The cost of collecting payments may easily offset the revenues
3. If more parties are involved, it may be difficult to establish trust between them.

WimLive has solved these problems. Let’s follow a simple yet quite realistic walkthrough by looking at the picture below.




As you can see, everyone can stream their own live events independently, deciding whether to transmit to their audience for free or in pay-per-view mode. In addition, the system offers the possibility of live streaming in cooperation with other entities, dividing the proceeds among all participants. In particular:


1. An Event Organiser holding rights to an event makes an agreement with an Event Reseller to promote and distribute the event to end users, and the two agree on revenue sharing (this step is not needed if there is no Event Reseller)
2. The event takes place, and a cameraman shoots the scene and sends the digital stream to WimTV
3. An end user clicks on the event, pays and watches it
4. The payment is split between WimTV and the Event Organiser
5. The payment is also split with the Event Reseller in case promotion and distribution are done by a third party.
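As an illustration, the split in steps 4 and 5 can be sketched in a few lines of code. The party names and percentages below are hypothetical; the source does not specify the actual shares.

```python
def split_payment(amount_cents, shares):
    """Split a pay-per-view payment among the parties holding rights.

    `shares` maps a party name to its fraction of the payment; the
    fractions must sum to 1. Working in integer cents avoids float
    currency values in the result.
    """
    if abs(sum(shares.values()) - 1.0) > 1e-9:
        raise ValueError("shares must sum to 1")
    split = {party: round(amount_cents * fraction)
             for party, fraction in shares.items()}
    # Any rounding remainder goes to the first party (an arbitrary choice)
    first = next(iter(split))
    split[first] += amount_cents - sum(split.values())
    return split

# Hypothetical shares for a 5.00 EUR (500-cent) event ticket
print(split_payment(500, {"WimTV": 0.2, "EventOrganiser": 0.5, "EventReseller": 0.3}))
```

Because every cent of the payment is assigned to some party, the shares always add up exactly to the amount paid, which is what makes the trusted-third-party role workable.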

WimLive is a unified solution for on-demand and live video where
- An arbitrary number of parties may claim rights to an event
- Revenues are accredited to each party as soon as payments are effected
- WimTV plays the role of trusted third party
- WimLive entails very low administrative costs
- It can be easily integrated with other application platforms (Moodle…)

Who are the typical WimLive users?
1. Hotel chains
2. Organisers of cultural, musical and sport events
3. Companies offering training courses
4. Small to medium size film makers
5. Local TVs
6. And many more...

To run a WimLive event you need to
1. Register on WimTV as a WebTV
2. Send an email to sales@wimlabs.com to get a URL
3. Input a few details (date, time, duration, price to watch the event etc.)
4. And go!

Wednesday, 13 June 2012

Split payment opportunity with WimTV


By using Wim.tv version 1.0 (http://wim.tv/), a distributor of “on demand” video content (a WebTV) can offer individual pay and subscription content and see each revenue immediately accredited to the WebTV’s PayPal account, with no minimum revenue obligation.

In July 2012 Wim.tv will offer the possibility to transmit “live” events and, with the split payment technology, split revenues automatically and immediately between the organiser and the distributor of the event.

Wim.tv offers the manifold components of the audio-visual world at all levels – amateur, professional and corporate – an environment where new business relationships can be easily created and exploited in a rewarding way. There is no need to invest in devices and technologies, and no need to delegate a share of their business to other players, often with a conflict of interest.

Wim.tv is a neutral entity offering services with which operators can interact more easily and on an equal footing, augmenting the value of their offer and reaching the end user.

The optimisation of those relationships allows operators, on the one hand, to obtain remuneration for their professional role and, on the other, to provide the end user with the best choice of content.


Leonardo Chiariglione

Monday, 4 June 2012

Earnings for all with WimTV


WimTV is an ICT platform designed to support an ecosystem of players dealing with digital content.
In such an ecosystem you seldom find a relationship confined to two players, bearing in mind that WimLabs, the company operating WimTV, is itself a platform player.

So far WimTV has supported multi-party relationships using WimCent, the “currency” of its Local Exchange Trading System (LETS). With the recently released v1.0, however, WimTV is taking a step towards supporting business based on real currencies.
Do you own valuable video content that you want to monetise in a secure environment? WimTV v1.0 is the solution for you.

You can upload your videos to your private space “MyMedia” and attach descriptions to make them more marketable. Then you can open your WebTV on WimTV – at the entry level with a limited degree of customisation or at the advanced level with a high degree of customisation – and post your “streaming-ready” videos on MyStreamingMedia, attaching a price for Pay-per-View consumption or defining your own bundles of videos for subscription consumption.

Your customers will enjoy your content with the very robust WimTV player, a plug-in that works on Linux, Mac and Windows for the Chrome, Firefox, IE and Safari browsers, and 50% of each Pay-per-View or subscription payment received by WimTV will immediately be accredited to your PayPal account.
Playing the role of video publisher is not for everyone. You can be an excellent producer of video content but poor at marketing it.
Conversely, you may be an excellent publisher but a poor content producer. Can WimTV help?

Well, not yet, at least not if you want to play with real money. This is what WimLabs has reserved for WimTV v1.1. Creators of video content will be able to post their videos for sale and WebTVs will be able to purchase videos posted for sale on a Creator’s shop.
Whenever a video posted by Creator A and purchased by WebTV B at, say, 30% revenue sharing is watched by End User C, WimTV will automatically split the payment made by End User C among Creator A, WebTV B and WimLabs.
WimTV v1.0 is just the first of a series of steps that will convert the idea of content monetisation on the web from words to reality.
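To make the arithmetic of that three-way split concrete, here is a minimal sketch. The 50% platform share and the 30% revenue-sharing figure come from the text above, but the order in which the shares are deducted is my assumption.

```python
def settle(payment, platform_share=0.5, creator_share=0.3):
    """Sketch of the Creator/WebTV/WimLabs split described above.

    Assumption: WimLabs first takes its platform share; the Creator
    then receives the agreed revenue-sharing fraction of the
    remainder, and the WebTV keeps the rest.
    """
    platform = payment * platform_share
    creator = (payment - platform) * creator_share
    webtv = payment - platform - creator
    return {"WimLabs": platform, "CreatorA": creator, "WebTVB": webtv}
```

Under these assumptions, a 4.00 EUR payment from End User C would yield 2.00 for WimLabs, 0.60 for Creator A and 1.40 for WebTV B.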

Leonardo Chiariglione

Thursday, 12 April 2012

MPEG Celebrates the 100th Meeting in Geneva, May 2, 2012

The Emmy-award-winning Moving Picture Experts Group (MPEG), the committee that has developed the MP3, MPEG-2, MPEG-4 and a host of other standards that have transformed and enriched the way humans interact with media, will hold its 100th meeting in Geneva, Switzerland from 30 April to 4 May 2012.

On the 2nd of May MPEG will hold the “MPEG 100 Event” to celebrate close to a quarter of a century of intense activity that has seen thousands of digital media experts from dozens of countries and hundreds of companies working collaboratively to advance the frontiers of technology.

The event will be attended by top-ranking officials from the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), the organizations co-sponsoring the Joint ISO/IEC Technical Committee (JTC 1) on Information Technology under which MPEG operates, the International Telecommunication Union (ITU), with which MPEG has developed two video compression standards and is currently developing the High Efficiency Video Coding (HEVC) standard, and the World Intellectual Property Organization.

Digital media have brought users a revolution in the way media are created, distributed and consumed, with profound ramifications for industry, society and individuals. Digital media are now an integral part of billions of people’s lives, making them better, more interconnected and more social.

The MPEG 100 event will be an important opportunity to confirm that international organisations maintain close cooperation in charting the future of digital media.

Monday, 5 March 2012

Where will the media industry be in the next 10 to 15 years?

Already today the media industry is shaped by the plurality of systems employed to deliver its content. For a given type of content there is usually a primary delivery system, typically inherited by the media company from another age, but that main delivery system is more and more supplemented by other delivery systems that serve the purpose of making the same or modified content available with different features (resolution, on demand, pay, interactive etc.). In 10-15 years more delivery systems will appear, further decreasing the effect of this cultural legacy. Content will be prepared for distribution based on its features and delivered over the systems that are most convenient for those specific features.
Media companies should simply continue their work, but with a mindset geared toward the expected outcome and a constant ability to fine-tune their behaviour based on the signals coming from the market.

A second area of attention is the constant appearance of new media technologies enhancing and extending the user experience. An example of the former is higher-resolution pictures made possible by new presentation devices and new compression standards, and an example of the latter is full-blown 3D pictures that will offer new experiences blending synthetic and natural content.
Media companies should tread a fine line balancing actual experience of new technology without over-investing in any.

Leonardo Chiariglione

Tuesday, 26 April 2011

New transport protocols for a better user experience

In the 1980s the telecom industry decided it needed a “broadband standard” and started defining protocols under the project name “Asynchronous Transfer Mode”. Today people know ATM as something completely different, because the project, which actually led to significant deployment in the telecom gear of various countries, came to a stop in the face of a competing technology called Internet Protocol.
A decade later IP, which did not require an access speed of 155 Mbit/s like ATM, started being deployed at bitrates that on average did not even reach three orders of magnitude less than ATM’s. IP seemed to provide the level of speed that could make customers happy while keeping the Plain Old Telephone System (POTS) in place, whereas ATM required reaching millions of subscribers’ homes with optical fibre.

No one can blame telcos for trying to save trillions of dollars of optical fibre by choosing the less costly Asymmetric Digital Subscriber Line (ADSL). The reality, though, is that our society is more and more video-dependent, while the fixed telecommunication infrastructure cannot provide the bandwidth that its users require. There is a lot of talk these days around the “Next Generation Network” (NGN) acronym and, in due time, something is bound to come of it, but very few prospects exist for the mobile network, which is squeezed by a terrestrial broadcasting industry that sticks to its Ultra High Frequency (UHF) legacy while the need to carry video on mobile networks multiplies by the day.

Video is a strange beast. From time to time I receive the question: how many bit/s are required to transmit video? My regular answer is: as many as you want, even no bits at all. I agree that some may see this answer as unhelpful, but it contains a profound truth, namely that video is a really flexible beast because you can decide how many bit/s you use to transmit it. No matter how few bits you use, your correspondent will always see “something”.

Operators have exploited this feature to cope with the wide dynamics of network characteristics. If the transmitter is informed that the receiver is unable to receive all the bits it needs to decode a video, it can switch to a version of the video encoded at a lower bitrate. The user at the receiving side will see a less crisp picture, and he may complain about the short-sightedness of telcos that did not invest in ATM (if they had done so, what would the phone bill be today?), but that is still better than a picture that keeps freezing.

The problem is that operators have independently decided to use their own transmitter-receiver protocols. This was acceptable at a time when video was the pastime of a few, but it is no longer a solution today, when video is so pervasive.

MPEG has spotted this problem and is close to releasing a new standard called DASH. The acronym stands for Dynamic Adaptive Streaming over HTTP and is almost self-explanatory: the pervasive HyperText Transfer Protocol is used to stream video, but the bitrate is dynamically adapted to network conditions using a standard protocol that any implementer can use to build interoperable solutions.
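The client-side adaptation just described can be sketched in a few lines. The representation bitrates and the 0.8 safety margin below are illustrative assumptions; DASH standardises the manifest and segment formats, not the adaptation heuristic itself.

```python
# Hypothetical bitrates (kbit/s) of the representations listed in a DASH manifest
REPRESENTATIONS = [250, 500, 1000, 2000, 4000]

def pick_representation(measured_kbps, safety=0.8):
    """Pick the highest-bitrate representation the client can sustain.

    The client measures its own download throughput and requests the
    next segment from the best representation that fits within a
    safety margin of that throughput, falling back to the lowest
    bitrate when even that does not fit.
    """
    budget = measured_kbps * safety
    candidates = [r for r in REPRESENTATIONS if r <= budget]
    return max(candidates) if candidates else min(REPRESENTATIONS)
```

Because the decision is taken per segment, the picture gets less crisp when the network degrades, rather than freezing, which is exactly the behaviour the previous paragraphs argue for.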

See a technical explanation at http://mpeg.chiariglione.org/technologies/mpeg-b/mpb-dash/index.htm

Leonardo Chiariglione

Wednesday, 9 March 2011

SAF, the aggregation of LASeR and audiovisual material

SAF (Simple Aggregation Format) is the part of the LASeR standard defining tools to fulfil the requirements of rich-media service design at the interface between scene representation and transport mechanisms. SAF provides the following functionality:
- simple aggregation of any type of media stream (MPEG or non-MPEG), resulting in a SAF stream with a low-overhead multiplexing scheme for low-bandwidth networks,
- and the possibility to cache SAF streams.

The result of the multiplexing of media streams is a SAF stream which can be delivered over any delivery mechanism: download-and-play, progressive download, streaming or broadcasting.
The purpose of the LASeR Systems decoder model is to provide an abstract view of the behaviour of the terminal. It may be used by the sender to predict how the receiving terminal will behave in terms of buffer management and synchronization when decoding data received in the form of elementary streams. The LASeR systems decoder model includes a timing model and a buffer model. The LASeR systems decoder model specifies:
- the conceptual interface for accessing data streams (Delivery Layer),
- decoding buffers for coded data for each elementary stream,
- the behavior of elementary stream decoders,
- composition memory for decoded data from each decoder, and
- the output behavior of composition memory towards the compositor.

Each elementary stream is attached to a single decoding buffer.
A multimedia presentation is a collection of a scene description and media (zero, one or more). A medium is an individual piece of audiovisual content of one of the following types: image (still picture), video (moving pictures), audio and, by extension, font data. A scene description consists of text, graphics, animation, interactivity and spatial, audio and temporal layout. The sequence of a scene description and its timed modifications is called a scene description stream. A scene description stream is called a LASeR stream.
Modifications to the scene are called LASeR Commands. A command is used to act on elements or attributes of the scene at a given instant in time. LASeR Commands that need to be executed at the same time are grouped into one LASeR Access Unit (AU).
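The grouping of commands into access units can be illustrated with a small sketch. The (time, command) pair representation is mine; the standard defines a binary syntax for commands and AUs.

```python
from collections import defaultdict

def group_into_access_units(commands):
    """Group LASeR commands into access units by execution time.

    `commands` is a list of (time, command) pairs; commands that must
    execute at the same instant land in the same AU, and the AUs are
    returned in composition-time order.
    """
    aus = defaultdict(list)
    for time, command in commands:
        aus[time].append(command)
    return [(t, aus[t]) for t in sorted(aus)]
```

For example, two commands scheduled at time 0 and one at time 40 would produce two AUs, the first carrying both time-0 commands.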

A scene description specifies four aspects of a presentation:
- how the scene elements (media or graphics) are organised spatially, e.g. the spatial layout of the visual elements;
- how the scene elements (media or graphics) are organised temporally, i.e. if and how they are synchronised, and when they start or end;
- how to interact with the elements in the scene (media or graphics), e.g. when a user clicks on an image;
- and, if the scene is changing, how the scene changes happen.
Mattia Donna Bianco