
TAG & Skyline DataMiner

Joint Webinar


Webinar Transcript:

Paul Briscoe:

Hello. My name’s Paul Briscoe. I’m with TAG Video Systems. It’s my pleasure to welcome you this morning, afternoon, evening or night, wherever you might be, to our webinar. We have a very interesting webinar today. We have our wonderful partner Thomas Gunkel from DataMiner with us, and we’re going to talk about OTT delivery and how you can get control of, and have confidence in, your OTT delivery. I’d like to invite you to submit questions as we go along. We’re not going to do them in line. What we’d like to do is accumulate them for discussion at the end of the presentation. So if you could enter them into the chat box in the Zoom meeting, they will be accumulated by one of our wonderful webinar assistants here, and at the end we will go through the questions and answer what we can. We may get an awful lot of questions.

Paul Briscoe:

These things seem to happen that way. And if we don’t get to your question live on the broadcast, don’t despair: all questions will be answered, and we’ll publish them out to everybody after the fact so that any question that was submitted gets answered and everybody can see it. I just want to mention, we had talked about remote production, COVID and things like that, and we don’t want to really belabor that topic. But I will say, when I first started doing this kind of thing many, many years ago, somebody said to me, “Are you nervous?” And I said, “Yeah, sure. A little.” And they said, “Well, always remember, close your eyes for a second and, while you’re presenting, remember your audience is all sitting out there in their underwear.” And yeah, that’s a funny old joke.

Paul Briscoe:

Fast forward to today, and I’m willing to bet there’s more than one attendee of this webinar actually sitting in their underwear, and congratulations to you for working at home. So we’re going to talk today about how you get control of your service delivery, and TAG and DataMiner have solutions in that space that we think are interesting. I would like to introduce Thomas Gunkel, Market Director of Broadcast at Skyline. Hello Thomas, and welcome.

Thomas Gunkel:

Thanks Paul. Thanks for the introduction, and welcome everybody. Before we start, let me just say thank you once more to you, Paul, and to your whole TAG team for the possibility to do this joint webinar together today. We have recently been working together on a few large projects in the OTT world, and this is what we will cover today: sharing a bit of our experience and how we tackle the challenges people have in the OTT world.

Paul Briscoe:

Fantastic. So on the next slide here, I think we have-

Thomas Gunkel:

Just very briefly, a brief introduction to the company, Skyline Communications. What are we doing? Who are we? We’re really a software house building monitoring and orchestration solutions. That’s all we do. We have a clear focus on the media and broadband industry, and we’ve been out there since 1985, so quite some background already. We are privately held, which is pretty important these days as well, during corona times; it gives us a large degree of freedom to adapt to new challenges. A few numbers here: we have about 300 people, and our headquarters is in Belgium, in Izegem, close to Brussels. We also have a presence in different countries, with local offices and people like me working remotely. We have about 1,000 customers out there in more than 100 countries, so really a global presence from a customer point of view, and around 6,000 systems deployed. And with that, I guess you’ll also give a quick intro on TAG.

Paul Briscoe:

I guess I can make my pitch. TAG Video Systems is an amazing company. I’ve joined it recently and it’s just the most incredible place. It’s been leading in the IP industry since 2008, when a couple of guys in their garage came up with our first IP-based product, and Tomer and Gal, the T and G in TAG, are our leads today and have driven our technology to where it is and where it’s going to go. We’re monitoring, and this number is I think stale, well over 50,000 channels across the world with the world’s largest broadcasters and OTT providers. We provide the greatest flexibility in asset utilization and monitoring because we’re software based and thus dynamically resourced. We run pure software. There’s no hardware involved in our solution. We ship you software.

Paul Briscoe:

And we can run on a COTS server, we can run in the cloud. We really don’t care. You can build a hybrid system across both and it looks like one big homogeneous system. And we do deep, real-time probing of video, audio, metadata and all of the transports below. And we serve all four broadcast markets: production, of course, with uncompressed; playout in master control environments; distribution in legacy methods; and of course OTT. And the TAG product provides the greatest selection of input and output formats for both compressed and uncompressed workflows, by virtue of the fact that we’re software. On the next slide, we can talk about what we’re going to discuss today, and that’s how you deliver your content with confidence. It’s very important that you get your content to each and every viewer reliably and at very high quality. The challenges, however, are many, right? Content has to go through a whole bunch of paths to get to all of your endpoints. And this could be through multiple types of distribution, including OTT, including legacy distribution and so on. And these are complex system architectures.

Paul Briscoe:

They may be hybrid ground and cloud deployments. And in fact, today I can guarantee you that just about everybody’s running a hybrid ground and cloud deployment because of the current work situation. This also involves multiple transport formats, multiple coding formats, and in fact, often multiple encryption formats as well that you’re using to protect your content in the last mile. This requires varying priorities of monitoring and visualization, because some content is extremely high value, while other content is perhaps more run-rate and has less value; still a value, but some stuff you don’t need to monitor as aggressively as other things. Live events, of course, require the ultimate in watching and monitoring and keeping track of things working. Resilience and recovery is a very important aspect of your whole content delivery process. Monitoring of all your paths, and actioning based upon path circumstances, is important, and dynamic and agile service management is the more modern challenge to this whole thing, where an event comes up and you need to stand up a production and distribution system very quickly.

Paul Briscoe:

So what’s required here is end-to-end monitoring and management that can meet these requirements. So on the next slide, I’ll just mention the TAG solution. TAG, as I mentioned, is 100% IP based, is 100% software and runs 100% on COTS platforms, including compute instances in various clouds. We provide the lowest latency for uncompressed and compressed video up to UHD 4K. When somebody asks for 8K, we’ll be there to do it. We have extremely high density and we monitor a very large number of parameters. The number is actually more like 350 right now, I think, and we can go dense and we can go deep at any point in your system. There is no hard drive. There is no hardware to speak of that’s anywhere specific to TAG. The hardware version of it boots directly from a USB dongle and that’s the entire product.

Paul Briscoe:

The cloud image is the same, except there’s no USB dongle. We combine TS and ABR monitoring in one single solution. We can do end-to-end monitoring across the entire system. And the entire system is accessible through our open API. This API drives our GUIs and it drives interfaces to control systems, monitoring systems and other allied systems, including of course DataMiner. And DataMiner provides very deep integration: they integrate tightly with us, and they provide path-based correlation, visualization and very, very rich control, in addition to artificial intelligence applied to the monitoring and decision making. On the next slide, Thomas, tell us about DataMiner.

Thomas Gunkel:

Yeah. Let me also give you a quick introduction, not only on the company but also on the product, which is called DataMiner. What is it? It’s actually, as we say, an end-to-end multi-vendor network management, orchestration and OSS platform. Very important is the end-to-end aspect: we want to be able to integrate the complete operational ecosystem, from really the service origin, your service feed in the stadium for example, down to the destination, which might be the end customer, really the OTT clients that we monitor for each and every customer. And also important: across any vendor, any technology boundary. What does that mean? It means any kind of protocol. It doesn’t have to be SNMP or RESTful APIs. It can be anything. Industry-standard or even proprietary protocols are supported there. In terms of technology, that means we support legacy technology, for example SDI, or IP; it all goes together in a single platform. It can be hardware and software.

Thomas Gunkel:

Certainly now more and more software these days, as you can see with TAG, and on-premises or off-premises. It doesn’t really matter where your products or equipment that need to be controlled sit; it can be in the cloud, it can be on-prem. You put it all together in a single platform. A very important aspect here is a proven, standard, excuse me, proven off-the-shelf platform. What does that mean? Each of our customers, each single deployment, those 6,000 systems out there, they run on the exact same core software, the same software version. On the right-hand side, you see a few screenshots I took from different customers. This is really the way you configure the system, but under the hood everything is exactly the same. How can we do that? We have a very open architecture that enables anybody to configure the system the way you want it. We offer turnkey systems where we do everything and provide exactly what you want, or we also have customers who say, I just go for a license and I do everything on my own.

Thomas Gunkel:

We always say: whatever we can do to configure the system on top of the off-the-shelf platform, everybody else can do as well. To design it, to evolve it, to do your wishlist, the logic, everything you can imagine. That can even mean that you do your own driver. If TAG wants to provide its own driver, or connector as we say, that’s perfectly possible, and the same goes for an end customer. And with that you get a lot of flexibility at the end, and also, not to forget, agility in those systems. You might’ve heard about DevOps environments. What does that mean? It’s really a way to automate as much as possible. A simple example: TAG comes up with a new firmware, which probably has some new features, new metrics, new controls. We will update our connector, our driver. The customers will automatically be informed that there is a new driver.

Thomas Gunkel:

They can download it. That happens automatically through a cloud platform in the background. And the operators using the DataMiner platform wouldn’t even realize. In the background, we update the driver. All they see is the benefit of new features, new metrics, and that’s really it. They don’t have to log off, log on, nothing like that; it just happens in the background, fully automatic. And with that, I guess, Paul, back to you.

Paul Briscoe:

Yeah. That’s brilliant, in fact. It’s funny you mentioned it’s the same application running everywhere. We have the same story. It’s a piece of code and everything the product does is in that code, and it’s surprisingly not a very big chunk of code for the functionality it has. But the beauty of that, of course, is that we have one deployment across the world, and when we bring features to the product, we bring them uniformly across the board. And the beauty of our open API, as you’ve just nicely advertised for me, is the fact that when we update, you can add features on your side and they become transparently available both to the direct users of the TAG system as well as the DataMiner users. So that’s just a brilliant relationship. And that’s the beauty of doing all of this in a software environment, right?

Paul Briscoe:

This slide is just a simple drawing to show you who lives in the ecosystem, just to level the conversation a little bit here. We have different kinds of people. Some of them span many different spots in the ecosystem, but you can see the kinds of people involved in this content creation, content consumption and distribution chain. So, for example, we have the studio people, and I could pick on a whole bunch of them but we’ll just mention Disney, Warner, HBO. Beside these content creators are the people who do live production, the live production venues, like sports from NEP and so on. These all come out as live feeds. This is high-value content typically. This is very important stuff. We also have the broadcasters. These are all the people we know and love: TV stations, call-sign stations, broadcast networks.

Paul Briscoe:

They take content from various places and they of course have to distribute it as well. And then we get to the fun part on the right here, with the OTT operators and the pay TV operators, the distribution service providers. They’re different people typically; you often find different people playing in different parts of this business, but they’re all doing the same thing, using different formats, using different distribution technologies, and doing things to make TAG’s life interesting with all these stream formats, many of them running encryption and so on. So we have a large number of players across this ecosystem and they all have a stake in the game. They all want quality for their viewers, they all want to deliver their product with confidence. So on the next slide, this is broken down a little differently. Let’s look at it now in terms of the block diagrams.

Paul Briscoe:

So we have these players doing what they do. We have the content creators delivering direct to consumer now through OTT. We also have content creators delivering to broadcasters, with episodics and things like that. Live production delivers to broadcasters for live distribution. It also delivers, of course, to OTT. So these things no longer just go through a big transmitter on a hill for the broadcasters; broadcasters are now beginning to deliver these things out as well. Pay TV and cable TV, of course, has always been happening, and OTT providers distribute through that channel too, but the big one today, of course, is OTT itself. So we have all these different pieces in the chain, many formats, many points of monitoring and many ways to view the content. So it’s an interesting and complex ecosystem, and it’s very difficult for a single product to accommodate all of this.

Paul Briscoe:

But TAG has done that in terms of the monitoring, probing and visualization, and DataMiner seems to have done that in terms of their ability to integrate a much larger system view. Thomas. Oh, I cued Thomas and it’s my darn slide, and it’s an animated slide. Okay, let’s step through this horrible animated slide. In terms of TAG’s monitoring, and these are all resources DataMiner can take advantage of, of course: we monitor at the transport layer. We monitor things like PCR timing, packet loss and arrival times, missing files, file delivery times, manifests and so on. All the stuff that’s the underpinnings of the transport of your data, we monitor in real time. At the next level, we’re monitoring important metadata, things like SCTE. For distribution, SCTE triggers are very important, and reporting them, delivering triggers, logging them and so on is very important, as well as alarming on missed SCTE events.

Paul Briscoe:

And this includes other metadata as well: closed captioning, subtitling and so on. At the next layer we also look at content. We do video and audio essence monitoring. So we’re looking for things that are quality-of-experience related: freezing, blacking, loudness issues, a burst of color bars, presence of certain metadata, validity of certain metadata and so on at the essence layer. On top of that, we do template matching. This is where we can take a stream and look at all the parametrics of that stream, from the transport up through the essence, and create a template that says: here is the expected behavior of this stream, this is what it should look like. Then we can watch against that template and flag and deliver events, errors and alarms based upon divergence from what’s expected. And then finally, what TAG is well known for, of course, is visualization. All of the sources we receive, in addition to probing and monitoring them, we can visualize in a multi-viewer or stream back out for consumption. Thomas.
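
To make the template-matching idea concrete, here is a minimal sketch of what checking a stream against an expected template might look like; all field names and thresholds are illustrative assumptions, not TAG’s actual API:

```python
# Sketch of template matching: compare a stream's observed parameters
# against an expected "template" and report every divergence.
# Field names and thresholds are hypothetical, not TAG's API.

EXPECTED = {
    "video_codec": "h264",
    "resolution": (1920, 1080),
    "audio_channels": 2,
    "loudness_lkfs": (-25.0, -21.0),   # acceptable range, as floats
    "scte35_present": True,
}

def check_against_template(observed: dict) -> list[str]:
    """Return a list of divergences between observed state and the template."""
    alarms = []
    for key, expected in EXPECTED.items():
        value = observed.get(key)
        # Float pairs are treated as (low, high) acceptance ranges.
        if isinstance(expected, tuple) and all(isinstance(x, float) for x in expected):
            lo, hi = expected
            if value is None or not (lo <= value <= hi):
                alarms.append(f"{key}={value} outside [{lo}, {hi}]")
        elif value != expected:
            alarms.append(f"{key}={value}, expected {expected}")
    return alarms

print(check_against_template({
    "video_codec": "h264", "resolution": (1280, 720),
    "audio_channels": 2, "loudness_lkfs": -18.2, "scte35_present": True,
}))  # flags the resolution and loudness divergences
```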

Thomas Gunkel:

All right. Yeah, how does this all work under the hood? How do we interface with the TAG probes, the TAG multi-viewers? What we need is a powerful API. As we said before, we can interface with any kind of API, but it must be fully featured. That’s important. Then we build what we call the DataMiner driver, or DataMiner connector. In terms of technology, what we need can be any kind of data. We do polling for some metrics, and it could also be eventing: we get information on certain events, like a video issue or an audio issue, from a TAG probe. Other information, something like the firmware, we might poll every hour or so. That can be structured data, but it can also be unstructured data. We integrate it in one single connector. Sometimes those are log files with additional information.

Thomas Gunkel:

And once we have all the data, it’s not just that we present the data to the operator or to the system. Very often there is some parsing required, or logical processing, or sometimes just the need to make some metrics more human readable. When you have a logarithmic value, that’s something I personally cannot really read properly. We might transform that into a decimal value, much easier, at least for me, to read. Then we extract all those metrics, and we can also index data. There are a lot of possibilities under the hood. And then what we have is a so-called data aggregation engine.
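
As a small sketch of the two ingest patterns Thomas contrasts, polling for slow-moving data versus pushed events, plus the normalization step of turning a logarithmic level into a linear one; the endpoint and field names are assumptions:

```python
# Sketch of a connector's two ingest paths: timer-driven polling for
# slow data, event callbacks for alarms, plus unit normalization.
# The probe URL and JSON field names are hypothetical.
import threading
import time

import requests

PROBE = "https://probe.example.com/api"   # hypothetical probe endpoint

def dbm_to_milliwatts(dbm: float) -> float:
    """Convert a logarithmic dBm level into linear milliwatts."""
    return 10 ** (dbm / 10.0)

def poll_firmware(interval_s: int = 3600):
    """Slow-moving data such as a firmware version is polled hourly."""
    while True:
        fw = requests.get(f"{PROBE}/firmware", timeout=5).json()["version"]
        print("firmware:", fw)
        time.sleep(interval_s)

def on_event(event: dict):
    """Fast data (video/audio alarms) arrives as pushed events."""
    if "signal_dbm" in event:
        # Make the logarithmic reading human readable, as described above.
        event["signal_mw"] = round(dbm_to_milliwatts(event["signal_dbm"]), 3)
    print("event:", event)

threading.Thread(target=poll_firmware, daemon=True).start()
on_event({"type": "video_freeze", "channel": "A", "signal_dbm": -3.0})
```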

Thomas Gunkel:

Once we have the data available, and that does not only apply to TAG but to each and every other vendor, we can do some actionable items, and you’ll see on the next slide what we can do with that. Very important, again, is the foundation, the data foundation as we always say. If you don’t have the right data available, everything else is kind of useless. Everything else you see on top, all the features and applications, really relies on a proper data foundation. We have that connector, that protocol, that gives us every kind of data we need, all the data we need to read and write, to control. This is something we call a standard managed object. And the important thing here is that the standard managed object is really the same object for every kind of product: it can be the TAG probe, the TAG multi-viewer, it can be an encoder, a decoder, an OTT client, all the features, everything you see on top here. Today we will focus on service monitoring and service orchestration, but all the other features rely on a proper set of data as well.

Thomas Gunkel:

To give you a few more examples before we go into the monitoring and orchestration part: you can interface with ticketing systems. When you have a critical service alert, something is really bad, you want to automatically open a ticket. And for a popup channel, you might want to collect the resource utilization and send it to a billing system. Security is a big topic. You have single sign-on, and having a single application like the DataMiner platform on top of a lot of other applications made our customers’ lives, especially the IT people’s lives, a lot easier when people had to work from home. All they need to do is open one VPN connection to one platform instead of opening plenty of VPN connections.

Paul Briscoe:

Can I interject a question?

Thomas Gunkel:

Sure.

Paul Briscoe:

And I’m sorry to interrupt you, but I can’t let you leave this slide without spotting artificial intelligence on it and asking you to elaborate on that a little bit. That’s kind of scary and very cool.

Thomas Gunkel:

Yeah, I did not talk about this at all yet. That’s something we started already a few years ago: using AI or machine learning algorithms to really improve the DataMiner system as a network management platform. Maybe to give you one example here: correlation. That’s a feature we’ve had for many, many years. Correlation is nothing but setting a certain set of rules, and in the past we have done this in a manual way. Say, on a service chain, maybe on my channel, when I have a loss of video at the ingest and I see a lot of loss-of-video alarms downstream, very likely I have the same root cause somewhere upstream; something goes wrong at the beginning. So you say, okay, instead of getting 500 alarms telling me loss of video all over, I want to see a single alarm.

Thomas Gunkel:

You could do that manually, but that’s time consuming. You have to think of those rules. You have to set them up. You have to maintain them. Over time we wanted to give up on those manual correlations, and that’s a good example for artificial intelligence. We use machine learning algorithms to automatically detect those correlated events, to detect the root cause. So right now with AI, all you do is click on a tick box and say, “Activated.” That’s all you do as an operator. You don’t even realize that this is AI under the hood, and the correlation rule is built fully automatically, so you don’t have to set it up anymore. Another quick example before we go into the details here is forecasting.
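
A minimal sketch of the correlation idea just described: collapse a flood of downstream loss-of-video alarms into one root-cause alarm at the most upstream hop. The chain layout is illustrative:

```python
# Sketch of alarm correlation: the alarm furthest upstream in the
# service chain is the root cause; everything downstream is a symptom.
CHAIN = ["ingest", "processing", "playout", "encoder", "packager", "cdn"]

def root_cause(alarms: list[dict]) -> dict:
    """Reduce correlated loss-of-video alarms to one root-cause alarm."""
    affected = [a for a in alarms if a["type"] == "loss_of_video"]
    affected.sort(key=lambda a: CHAIN.index(a["hop"]))   # upstream first
    cause, symptoms = affected[0], affected[1:]
    return {"root_cause": cause, "suppressed": len(symptoms)}

alarms = [{"type": "loss_of_video", "hop": h} for h in ("cdn", "packager", "ingest")]
print(root_cause(alarms))
# {'root_cause': {'type': 'loss_of_video', 'hop': 'ingest'}, 'suppressed': 2}
```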

Thomas Gunkel:

We use AI and machine learning to forecast the behavior of a system, to forecast the network behavior, to forecast the service behavior: to say, if you go on like this, with that behavior, in two hours you’ll have a problem. Instead of saying, right now you have a problem, when it’s already too late. So with AI we improve a lot of things. It’s really mainly about being more proactive than we could be without AI.
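
The forecasting idea can be sketched with nothing more than a line fit over a metric’s recent history, estimating when it will cross an alarm threshold; a toy stand-in for the machine-learning models Thomas mentions:

```python
# Sketch of trend-based forecasting: fit a line to a metric's history
# and estimate when it will cross an alarm threshold, so the operator
# is warned hours before the failure instead of at it.
import numpy as np

def hours_until_threshold(history: list[float], threshold: float, step_h: float = 0.25):
    """history: samples taken every step_h hours. Returns hours to breach, or None."""
    t = np.arange(len(history)) * step_h
    slope, intercept = np.polyfit(t, history, 1)
    if slope <= 0:
        return None                      # not trending toward the threshold
    eta = (threshold - intercept) / slope
    remaining = eta - t[-1]
    return remaining if remaining > 0 else 0.0

# Buffer fill (or jitter, CPU, ...) measured every 15 minutes:
samples = [40, 42, 45, 44, 48, 51, 53, 56]
print(f"predicted breach in ~{hours_until_threshold(samples, 70):.1f} h")
```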

Paul Briscoe:

That is seriously cool. Thank you. That is really cool. Thanks.

Thomas Gunkel:

Thank you. Let’s probably go to the next slide. It’s on service monitoring and orchestration towards, yeah, the OTT business. What are we doing when we say service monitoring? It’s really about monitoring your service chain end to end. We say East-West and North-South. What is that? East-West is, for us, left to right. Start somewhere; when we talk about end to end, at the service origin, it could be again your stadium, a few cameras here, and we’ve just drawn a very simplified service chain. You have your SDI-to-IP conversion, maybe point-to-point redundancy, it arrives in the MCR, your all-IP switch fabric, you do audio and video processing, you go into the playout with your video servers and graphics.

Thomas Gunkel:

Everything you all know. It goes into the distribution, which can be, again, private or public cloud: encoding, packaging, CDN. And all products that are involved here, no matter whether that’s a hardware product or a software product, we monitor and put together in one service. But now, with more and more software-based applications, we have the challenge that we have to monitor not only the software application itself, where we have a connector, but also the underlying infrastructure. And that’s what we call North-South monitoring. You see in the middle one example of that, a video processing function. To get a full picture, you also have to monitor the hardware, which might be your Dell or HP server, your Windows or Linux operating system, your virtual machine, or maybe your Docker service, your Docker container, to get a full picture of the health state of that processing function.

Thomas Gunkel:

There’s still one piece missing that you see on the slide. DataMiner is not an analyzer, so we need to have probes, we need to have multi-viewers to really look into the actual stream and the data, in the SDI stream, in the IP stream, to get the full picture, what we call a 360-degree service overview, and probes. And this is why we integrate with TAG. And we have a few examples here. Probes are all over the service chain: at the origin, for example a 2022-6 probe, a 2110 probe, somewhere on-prem, and then all the ABR probes. Those probes can be in the cloud or on-prem, running on commercial off-the-shelf hardware. Altogether, what we do here is aggregate thousands of data points, and again, those probes are essential. They give all the insights on the actual essence, into a single, what we say, service alarm. That’s what we do under the hood. On to the next slide.

Paul Briscoe:

So on this next slide, yeah, we give a little picture of that. I’m glad you like our probes, because we certainly love your integration, and this is something we do and do really well. Here’s another way to look at a system like this. Upstream, we have sources of content, and throughout the system we have various value-adds of commercial insertion or targeted advertising or whatever stuff goes on in this middle zone. And ultimately we then deliver it through various means. All along the way, TAG of course can visualize and monitor streams and so on. And we can pull all this information in different ways from different points in the chain. We can, of course, through our API, deliver these events, alarms and statistics to DataMiner for them to do their decision making and their monitoring on, and they of course can control and configure the TAG systems as well.

Paul Briscoe:

On the next slide is probably a slightly better view of all this. A little bit different: we’ve sort of condensed the pieces a bit, but this talks a little more now about how you monitor. And this is, I think, kind of important to understand. Channel A, at the very top here in red, is high value. This is a live sporting event, this is a Super Bowl, this is some global sporting event that is of extremely high value. You need to monitor this very aggressively. Channel B is the normal-value channel in green. This is perhaps a broadcast channel or something that is high value to you, but it’s not as big as a once-in-a-lifetime sporting event. And then of course you have channel X: 200, 300, maybe more channels.

Paul Briscoe:

These are less critical. There’s still value to you, but you don’t need to monitor them in the same way, for a couple of reasons. One is they’re probably very reliable, because they’re stood up and running all the time. They’re not a standup channel that requires care. But also, just to be honest, the price of a problem in that path is probably lower than the price of a problem in the topmost paths. You monitor according to your risk. So if we look at these little purple and green circles: a green circle is a probe point, a purple circle is a visualization point, a multi-viewer tile. We’re also able to do this by exception, and this is where DataMiner comes in. So if we look at channel A and go across, we monitor with probes all along the way, and these are all live probes.

Paul Briscoe:

Every one of them is a live probe. We’re monitoring every point right up through the transcodes. At the transcode, we monitor every rendition from the transcode, and at the packager, we’re monitoring every rendition going out. And for that matter, we can even go out beyond the CDN to look at things and report back. But we do all of this real time, full time, the reason being the extremely high value of this content. Visualization, however, we only do at the head. There’s no need to do visualization downstream unless something happens, and that’s where we do visualization by exception; that’s marked as Ve on this drawing. Channel B we handle a little differently. We probe it at the origin, of course, and then we probe it at a couple of key middle points. So we’re probing it here in the middle of distribution, we’re probing it here near the edge, but we only monitor elsewhere upon exception.

Paul Briscoe:

So if a problem is detected, we can then quickly spin up and turn on the probes and the visualizations at other points in the path, based on the need. And then for the more run-rate channels, we can do a similar thing. We probably don’t necessarily need to monitor them out on the other side of the CDN. That’s a value choice. If your CDN trust level is high and you have monitoring of the CDN operation, you may not need to do that. But here again, we do probing at the head, we do probing at key points, and then we visualize and probe according to what’s going on in the system.
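
A minimal sketch of this tiered, monitor-by-exception policy: the high-value tier is probed everywhere all the time, lower tiers get the head end plus key midpoints, and extra points are activated only on alarm. Tier names and probe points are illustrative:

```python
# Sketch of tiered monitoring-by-exception. "always" points run live
# probes full time; "on_exception" points are spun up only on alarm.
TIERS = {
    "A": {"always": ["origin", "transcode", "packager", "cdn_edge"],
          "on_exception": []},                      # high value: probe everything
    "B": {"always": ["origin", "mid_distribution", "edge"],
          "on_exception": ["transcode", "packager"]},
    "X": {"always": ["origin", "packager"],
          "on_exception": ["mid_distribution", "edge"]},
}

def active_probe_points(channel_tier: str, in_alarm: bool) -> list[str]:
    """Return the probe points that should currently be running."""
    cfg = TIERS[channel_tier]
    return cfg["always"] + (cfg["on_exception"] if in_alarm else [])

print(active_probe_points("B", in_alarm=False))  # steady state: 3 probes
print(active_probe_points("B", in_alarm=True))   # escalated: 5 probes
```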

Paul Briscoe:

These visualizations and these probes are all delivered, on this orange line here, to DataMiner, and this is how DataMiner acquires the data they need in order to see the system and to provide end-to-end visibility, end-to-end understanding of the data, and then apply artificial intelligence to what is going on in the system: to dynamically turn things on, to figure out better what’s going on, and once they’ve figured it out, to look at taking remedial action. I think, Thomas, you have a little more about this on the next slide.

Thomas Gunkel:

Absolutely. So it’s not only about, yeah, monitoring the channels and the services. An important aspect is also orchestrating what we call the service lifecycle: to set the channels up, to spin up the probes, to make sure that a channel is available at the right time. And how do we do that? We have something called DataMiner Service and Resource Management. You see that on the right-hand side of the slide. It consists of seven different building blocks: a block to manage the resources, your encoders, your probes, your multi-viewers; to attach profiles to them, sets of templates; to establish connectivity between all those. Automation is certainly very important when you put things together. And there’s a scheduling engine, because there are always channels you want to spin up or manage right now, but more and more we talk about scheduled operation, where you want to plan your resources, the availability of resources, and your events, your popup channels, upfront.

Thomas Gunkel:

There are multiple steps that are required. First of all, we model your service within DataMiner. Services don’t look the same all the time. Then we need to start those services, then we have to manage a service. Something could go wrong; you may have to go to a failover. And certainly, once a service is up and running, you’ve seen that we monitor the service. And then once the service is down, to use the example of the popup channel again, you want to archive it and maybe reuse it another weekend for the next soccer game, once more. So how do we do that under the hood? We reserve all these resources from resource pools. We have a TAG probe resource pool, we have a TAG multi-viewer resource pool, and we configure those, and then we have to establish connectivity between the resources. That can be classic connectivity, RF from a satellite, ASI signals, or IP, 2022-6, 2110.

Thomas Gunkel:

No matter which signal format is under the hood, we put everything together into that single service to automate the whole service lineup. And on the next slide, I guess we see a real example. How does it look in reality? On the right-hand side we see a very simple service definition. My OTT service has receivers, decoders, transcoders, packaging, DRM, plus all the TAG multi-viewing and probing again. That’s all controlled by DataMiner. So what do we do first? We spin up the infrastructure. I know I have to do my popup channel for the next weekend, there’s a sports game going on, so I spin up my infrastructure: the multi-viewers, the probing points, in a very dynamic way. And specifically for the probing, there are multiple ways we attach probes to a service from our resource pool. That can be permanent, say for the full duration of that service.
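
To illustrate the resource-pool idea under stated assumptions (pool and unit names are invented), a service definition lists the functions it needs, and each one is satisfied by reserving a unit from the matching pool:

```python
# Sketch of service lineup from resource pools: a service definition
# names the functions it needs; each is reserved from a named pool.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    name: str
    free: list
    reserved: dict = field(default_factory=dict)

    def reserve(self, service: str) -> str:
        """Take one free unit and record which service holds it."""
        unit = self.free.pop()
        self.reserved[unit] = service
        return unit

POOLS = {
    "tag_probe": ResourcePool("tag_probe", ["probe-1", "probe-2", "probe-3"]),
    "tag_multiviewer": ResourcePool("tag_multiviewer", ["mv-1"]),
    "transcoder": ResourcePool("transcoder", ["xc-1", "xc-2"]),
}
OTT_SERVICE_DEF = ["transcoder", "tag_probe", "tag_probe", "tag_multiviewer"]

lineup = [POOLS[kind].reserve("popup-soccer-saturday") for kind in OTT_SERVICE_DEF]
print("service lineup:", lineup)   # e.g. ['xc-2', 'probe-3', 'probe-2', 'mv-1']
```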

Thomas Gunkel:

Sometimes we have a situation where customers spin up 20 services, 20 channels, but they only have 10 probes left. What can you do? You do a bit of a round-robin across many channels (see the sketch below), or you do it on operator demand: the operator can decide and overrule all the rules you have set in DataMiner. Or, certainly also a use case, it’s triggered by a service alarm, and the service alarm could be that something goes wrong with my packager. Even if there’s just a faulty power supply, that service can be in alarm, and we bring that service to the operator’s attention by routing the service to a multi-viewer, for example, and by adding maybe more probes in a very dynamic way to dig deeper into the service chain, into the service lineup. What’s most important? We always say: be schedule-aware. What does that mean?
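
The round-robin case mentioned above, 20 channels sharing 10 probes, can be sketched as a rotating assignment plan; the dwell time and naming are assumptions:

```python
# Sketch of round-robin probe sharing: fewer probes than channels,
# so each probe cycles across channels on a fixed dwell time.
from itertools import cycle

def round_robin_plan(channels: list, probes: list, dwell_s: int = 60):
    """Yield (time_offset_s, probe, channel) assignments, cycling forever."""
    chan = cycle(channels)
    slot = 0
    while True:
        for probe in probes:
            yield slot * dwell_s, probe, next(chan)
        slot += 1

plan = round_robin_plan([f"ch{i}" for i in range(1, 21)],
                        [f"probe{i}" for i in range(1, 11)])
for _ in range(4):
    print(next(plan))   # (0, 'probe1', 'ch1'), (0, 'probe2', 'ch2'), ...
```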

Thomas Gunkel:

Schedule awareness starts with knowing what’s happening on the channel, and that originates somewhere in the playout system, where you know which primary and secondary events should be played. That’s important when it comes to probing. It could be my subtitles, my SCTE-35 triggers: we know when a certain piece of metadata should be available in the service chain, so we can dynamically attach the right probe, in the right place, on a time basis, to the right service, to avoid false positives. To really avoid a situation where a probe monitors against something, maybe subtitles, giving an alarm saying there are no subtitles, when in the end we know there should not be subtitles, because right now there is a commercial break playing.

Thomas Gunkel:

So that alarm is suppressed. Instead of getting those false positives, alarms operators will ignore over time because they get used to them, with that approach of being schedule-aware, being content-aware, we avoid all those false positives and really focus on the true alarms. Maybe, Paul, you have different examples, SCTE-35 or any of those metrics you typically alarm against, which are important when it comes to probing and multi-viewing.
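
Here is a small sketch of that schedule-aware suppression: a “no subtitles” event only becomes an alarm if the playout schedule says subtitles should be on air at that moment. The schedule format is invented for illustration:

```python
# Sketch of schedule-aware alarming: probe events are checked against
# the playout schedule before they are raised as alarms.
from datetime import datetime, timedelta

now = datetime(2020, 6, 1, 20, 14)
SCHEDULE = [   # (start, end, subtitles_expected), from the playout system
    (now - timedelta(minutes=30), now - timedelta(minutes=2), True),   # programme
    (now - timedelta(minutes=2),  now + timedelta(minutes=1), False),  # ad break
]

def subtitles_expected(at: datetime) -> bool:
    """True if any scheduled event covering 'at' should carry subtitles."""
    return any(start <= at < end and expected for start, end, expected in SCHEDULE)

def handle_probe_event(event_type: str, at: datetime) -> str:
    if event_type == "subtitles_missing" and not subtitles_expected(at):
        return "suppressed (commercial break, no subtitles scheduled)"
    return "ALARM: " + event_type

print(handle_probe_event("subtitles_missing", now))
```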

Paul Briscoe:

Well, it’s interesting, because there is so much to monitor. SCTE is a complicated one. And of course monitoring SCTE is important, because that’s where you’re controlling the insertion of what brings money into your broadcast facility, right? What I find interesting actually about monitoring in IP is how it’s a little bit different, and I’ll just give you an example. In a legacy distribution system, let’s talk about back in the old, old days: video at the receiver could go black for many, many reasons between the origin and the end. And so black detection is an important thing in probing. But when you’re in a pure IP system, after the encoder there’s really no opportunity for black to come from anywhere.

Paul Briscoe:

If the encoder isn’t making encoded black and you’re losing packets in the network, you don’t need to detect black; there’s no black to detect. Instead you need to be interested in other things. And this is where some of the stats that we monitor and can provide to DataMiner can give some forensic insight into what might be happening. By monitoring, for example, the packets on a given path: if you see the jitter increasing, if you see latency moving around, you can have some sense that there may be something wrong with that path, and you can now forecast that a problem might be getting ready to occur. You can look at trends, and then DataMiner can take action, for example, to build up another path and transfer the service, or to identify the equipment or the service that’s giving the problem.
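
As a concrete sketch of that forensic angle, interarrival jitter can be estimated from packet timestamps with the smoothing used in RFC 3550, so a path whose jitter trend creeps up gets flagged before pictures ever break:

```python
# Sketch of path forensics: RFC 3550-style smoothed interarrival
# jitter over a packet arrival series. A rising value hints at a
# degrading path long before any visible picture failure.
def interarrival_jitter(arrivals_ms: list, expected_gap_ms: float) -> float:
    """Smoothed jitter estimate: J += (|deviation| - J) / 16 per packet."""
    jitter = 0.0
    for prev, cur in zip(arrivals_ms, arrivals_ms[1:]):
        deviation = abs((cur - prev) - expected_gap_ms)
        jitter += (deviation - jitter) / 16.0    # 1/16 smoothing, per RFC 3550
    return jitter

steady = [i * 10.0 for i in range(100)]                   # perfect 10 ms pacing
wobbly = [i * 10.0 + (i % 7) * 1.5 for i in range(100)]   # path starting to degrade
print(f"steady path jitter: {interarrival_jitter(steady, 10.0):.2f} ms")
print(f"wobbly path jitter: {interarrival_jitter(wobbly, 10.0):.2f} ms")
```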

Paul Briscoe:

So it’s very interesting how different it is in the IP world. Some of the things we used to monitor, while still important, don’t exist in the same way. At the same time, we need to monitor other things, because the path is no longer a simple one. You’ve got to remember, originally television was the same all the way from the camera to the TV screen at home, all the way through the system. It was the same electrical signal, and so it was easy to monitor. Today, we’re packetizing this stuff, with a bunch of compression formats, with encryption wrapped around it, with metadata associated with it, both essence metadata as well as transport metadata, and all of this stuff has to be looked at.

Paul Briscoe:

And in fact, it’s easy for TAG to provide all that data, and it’s especially important that we can provide that data in a meaningful manner, through the API, to a higher-level intelligence, if I can call it that, I almost don’t like that term, but a higher-level system view that DataMiner provides. And as we evolve, and as we continue to monitor different things and more things, TAG will of course monitor and visualize those things, and DataMiner will naturally, automatically evolve through their ability to use our API, providing ongoing evolution of functionality in the product.

Thomas Gunkel:

Okay. Before we go to the next slide, maybe just one more use case here. It’s not only that we use the TAG multi-viewers and probes to receive information; in many scenarios, we also actively control them. Just to give one example: the penalty box. A lot of our clients use the penalty box approach: by default, every service is in good shape, and I don’t want to see that. I don’t want to see 500 tiles in my multi-viewer anymore, as long as all services are in good shape. And whenever there is something wrong, or a service goes into an alarm state from a service point of view, DataMiner automatically routes that critical service, maybe via different routing points, into that penalty box. And on top of that, and we did this for a project in the US, we often also push additional alarm information to the multi-viewer.
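
A minimal sketch of the penalty-box pattern: the mosaic stays empty while services are healthy, and only services in an alarm state get routed to a tile, optionally with extra context pushed as a label. The two routing calls here stand in for real multi-viewer API calls:

```python
# Sketch of penalty-box routing: only alarmed services get a tile.
# route_to_tile / set_tile_label are placeholders for real API calls.
def route_to_tile(service: str, tile: int):
    print(f"tile {tile} <- {service}")

def set_tile_label(tile: int, text: str):
    print(f"tile {tile} label: {text}")

def update_penalty_box(services: dict, tiles: int = 8):
    """services: name -> {'state': 'ok'|'alarm', 'detail': str}"""
    in_alarm = [name for name, s in services.items() if s["state"] == "alarm"]
    for tile, name in enumerate(in_alarm[:tiles], start=1):
        route_to_tile(name, tile)
        set_tile_label(tile, services[name]["detail"])   # extra context pushed

update_penalty_box({
    "ch1": {"state": "ok", "detail": ""},
    "ch2": {"state": "alarm", "detail": "packager PSU fault"},
    "ch3": {"state": "alarm", "detail": "server clip XYZ stalled"},
})
```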

Thomas Gunkel:

It’s not only about the information TAG has available within the probe anyway; that could also be information from the video server about which clip is actually playing and causing the issue. Just additional information, again, to make the operator’s life easier. One last part I want to mention here. When we talk about spinning up probes dynamically, there’s a big difference when you compare on-prem systems against probes in the cloud, or equipment in the cloud. Very often we have equipment we can run in the cloud that does not even exist at the time when you plan for the next show, for the next popup event. When I plan for a popup event for the next weekend and I need ten probes, I can reserve those probes already in DataMiner, but they don’t exist yet, and only just before we go live do we spin up those probes. And spinning up those probes means we need to interface with a licensing server to know if the customer still has enough credits, because there is always enough compute power available in the cloud.

Thomas Gunkel:

It’s not about that. It’s really about having enough credits or not, for the DataMiner system to make the decision whether it can spin up all those cloud-based instances. Now, on the next slide, something that is very powerful. We just finished a big platform. It’s actually a new streaming platform in the States that works with a lot of 24/7 channels, live channels, popup channels, also video on demand. To give you some numbers: it’s about 250 linear channels and about 80,000 assets. I guess they add about 5,000 additional ones per week nowadays, and then they do live popup channels, mainly for sports. Unfortunately, not as many as they had foreseen, because right now, as we know, there are no Olympics, so they missed a few popup channels, but it’s a big system.
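
The licensing gate Thomas describes reduces to a simple check, since cloud compute is effectively unlimited and credits are the real constraint; the costs and names here are assumptions:

```python
# Sketch of the licensing gate before cloud spin-up: compute is
# plentiful, so the decision hinges on remaining license credits.
CREDIT_COST = {"probe": 1, "multiviewer": 2}   # hypothetical credit costs

def can_spin_up(requested: dict, credits_left: int) -> bool:
    """requested: e.g. {'probe': 10, 'multiviewer': 1}"""
    needed = sum(CREDIT_COST[kind] * n for kind, n in requested.items())
    return needed <= credits_left

request = {"probe": 10, "multiviewer": 1}
if can_spin_up(request, credits_left=15):
    print("reserving instances for next weekend's popup channel")
else:
    print("insufficient credits: defer the event or free up licenses")
```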

Thomas Gunkel:

And on the right-hand side, you see two screenshots I took last week from the system. The upper one shows all the TAG probes we control. So we have the pool of resources, plenty of probes handling tons of ABR streams in that ecosystem. And the second screenshot, bottom right, that’s the view of a single channel, where you see about 30, 40 different probes attached to that single channel. And imagine this times 250: with all those probing points, 20, 30, 40 probing points for a single service, you get millions or billions of data points and metrics. At the end, our customer is not interested in those screenshots on the right-hand side; that’s just too much data. Instead, we give the customer very simple dashboards. On the left-hand side you see one screenshot, a very, very simple landing page.

Thomas Gunkel:

It looks into the CDN part of the overall system, and you see our TAG probes involved, and also systems from other vendors like Touchstream or Conviva, Conviva doing the quality-of-experience measurement of each and every OTT client for them. And we aggregate all the data; as long as the service is healthy, it’s all green. You see TAG: we have assigned the probes, again dynamically, to the different origin servers in the West and in the East. This is how the customer delivers, and they have different CDNs: Akamai, Limelight, Comcast. And imagine something goes wrong, or the quality goes down in one of those CDNs: we would immediately flag that. And as an operator, when you’re interested in the details, you drill down with two or three clicks. You end up in the details, and you can even see each and every single user watching that single stream right now.

Thomas Gunkel:

Maybe it’s something wrong with the end user’s OTT client, or it’s related to one of the CDNs. You will be able to drill down to the root cause. This is important: start very high level, simple to understand, and still have all the data under the hood available to go into the details and find the root cause of the problem. As I already said, there’s plenty more that we monitor against. It’s not a single stream anymore. Those are thousands of ABR files and packets, and tons of monitoring points we need to set up in a very dynamic way, to be able to monitor an OTT system and to make sure that the quality of experience, because this is really what you’re after as a company, your customer’s experience, is the same as it was in the old linear days.

Paul Briscoe:

I was just going to jump in and say this really highlights the value of our little marriage here, right? Because it’s fine for us, and honestly, I’m glad I’m on this side of the equation, Thomas. I wouldn’t like to be the guy who has to deal with all this data. I’m happy to be the guy who provides it. That’s our strength, and your ability to aggregate it, to automate it and to put such power behind it is super important. I wanted to mention, underpinning this and speaking to what you talked about, popups, and what I spoke about earlier, monitoring by exception: you can deploy TAG instances anywhere you want, on the ground or in the cloud. You can have one, you can have many, they can be geographically dispersed. The system doesn’t really care.

Paul Briscoe:

And the beauty of that is combined with our licensing, which basically says you buy so many licenses and you can use them for whatever TAG can do. So today they could be multi-viewing, tomorrow they could be probing, next week they could be streaming transcodes. A license is a license for a function, and you can use it across the TAG functionality. So what that means is that you can stand up a TAG instance and stand up TAG services very, very quickly in the cloud. And what you talked about, dynamic licensing, is exactly how we work. So the ability for a new channel to be stood up, and for the probes we put in place and DataMiner to lock onto it, is a very, very fast and very, very robust process, and considering all the data that’s being handled, actually a little scary and kind of impressive. Sorry to interrupt you, Thomas. Go on.

Thomas Gunkel:

We still have a few minutes left. I guess the next slide is kind of our summary. Why do we succeed, right? Why can we work together? How do we work together? It’s really the foundation again: we need to have an open and fully featured API. This is what TAG provides to us. Only with that do we have the right data foundation, and we can put the DataMiner platform on top, to have a single pane of glass for the operation. And also the fact that both systems run on software: DataMiner and TAG can run on-prem or in the cloud, it doesn’t really matter. There are no restrictions anymore. This is really the foundation to be able to automate the service lineup, as we have seen, to make sure that the probes and multi-viewers are always in sync with the service. So what do we actually want to achieve?

Thomas Gunkel:

It’s also about using your probes and your assets in a more efficient way, in that DataMiner always knows: how many probes do I have, how many licenses do I have, as you said. I have maybe 10 credits; today I want to use one as a multi-viewer because I need to view something, the next day I use it as a probe. It’s about efficiency, and what that leads to is better service uptime. To really be in line with, and you see that in the pyramid on top, your overall company goal: to improve the quality of experience, to own the customer’s experience. This is what everybody’s after. And at the same time to optimize OPEX and CAPEX. What can we do when we automate? We can do more with the same amount of people, and we can also do more with the same amount of assets, and you can do it in a very dynamic way.

Thomas Gunkel:

And that’s probably the most important message I want to give today: to enable agility in your systems. It doesn’t help you a lot to be proud of having a system that was set up once and runs and is stable. Nowadays you have to be more flexible, you have to adapt to your customers’ needs to compete, and, by the way, to be able to adopt new business models too. To be able to do that, you have to automate those workflows. When you do those manually, they will be error prone. They might fail, and you won’t feel very comfortable changing things anymore. And that’s really what we always point out: okay, you do it once, that’s great, but then you have to adapt over time, and you have to adapt constantly. Only when you automate everything, and once more, when you have the right data foundation, can this work.

Paul Briscoe:

So if I can just add something here, I would like to focus on two things Thomas mentioned. Agility is a very important thing. Agility was a thing before COVID, and COVID made agility a real thing, and agility will never go away, because after everybody got over the inconvenience and the difficulty and the misery of what we had to go through in our customers’ transition to an agile world, they’re discovering huge benefits. So this agility is not going to go back to where it was in the old world. The other important thing that goes hand in hand with agility is OPEX. This has always been an industry of CAPEX. You buy equipment, you amortize it, you buy equipment, you write it down, you buy equipment. And your utilization of that equipment is never 100%, unless it’s a distribution amplifier somewhere in the signal chain or something like that.

Paul Briscoe:

But if you build a new studio, it’s a new studio, and that equipment sits dark between shows, right? So as we move from CAPEX to OPEX models, this becomes even more important because of scalable solutions like TAG. You can take your quantity of licenses, you can buy as many as you need, you can use them for what you want. If you need more, you can get more. And you can utilize that same license in the news studio during the news show, then flip that same license to a studio that’s producing a soap opera in the middle of the afternoon, and flip it again to something else.

Paul Briscoe:

So that agility fits beautifully in an OPEX environment, and that’s where DataMiner and TAG are very strong together, and that’s where we see the industry going, and we’re really pleased to be partnered with DataMiner. And with that, I would like to thank Thomas for taking the time to do this with us. Thomas, it’s been a real pleasure having this conversation. And I see from some little notes coming by on the screen that there may be a few questions posed through the chat box. So why don’t I turn it over to you, and you can start taking some questions and we’ll figure out who answers which one?

Shannon:

Great guys. So we have a couple of questions and the first is, how does monitoring differ with OTT as compared with uncompressed or transport stream distribution?

Paul Briscoe:

Oh my, maybe I’ll take that one. Okay. So there really are three corners of a triangle, although they have many commonalities. Obviously essence is common to all of them. The signal formats will differ between them. Uncompressed is uncompressed. Uncompressed requires extremely low latency monitoring. It requires things like very fast tally response and stuff you would find in a live production environment. Legacy distribution deals with continuous streams, continuous services; it deals with more traditional compression methods and packaging methods. But OTT is where it gets interesting, because there are numerous flavors of OTT. And we have some compression formats within OTT, but moreover we have these OTT mechanisms, these over-the-top chunked distribution mechanisms. There’s a number of them, and they’re all complicated and all slightly different, but basically the same. So they’re handled one at a time: as they come up, we implement to accommodate them.

Paul Briscoe:

But that’s where the most complexity lies today, in OTT, because OTT also brings things like multiple points of distribution edges. So monitoring is required at many geographical points globally. It brings along things like DRM, and DRM is a very important thing that protects your assets. But in the course of protecting your assets, it means that we have to execute DRM very carefully, both to protect your DRM, because you don’t want to open up your DRM to anybody, while at the same time we have to deeply monitor what you’re doing, so we need to get through the DRM to get to it. So DRM is one of the more interesting aspects of OTT.

Paul Briscoe:

And today we accommodate, I think, just about every OTT format out there that’s significant at the moment, and most of the DRM solutions as well. But this is the evolving front we see right now: OTT, with new formats and new transport methods; ultra low latency is something that’s coming, and so on. So OTT is where the complexity lies, both for probing as well as for system monitoring. Because, Thomas, when you look at an OTT system, it’s not a simple left-to-right drawing.

Shannon:

Great, thanks. Well that leads perfectly into the next question that someone had and that’s how is encryption and DRM handled by the TAG system?

Paul Briscoe:

So we stay away from DRM as much as we can. What I mean by that is we don’t want to get involved with customer authentication and the whole customer permission side of that. What we do instead is a little bit unique, and it works very, very well. With the collaboration and cooperation of the DRM provider and the customer, we establish a relationship with the key server used in the DRM system. So TAG has direct access to the keys used for encryption, and we basically receive those keys directly and decrypt the content. This avoids us having to deal with an inconvenience on our side, which is the whole customer authentication and authorization process and all that aspect of DRM, and it also takes that away from the customer side, because we’re not another viewer that has to be configured in some funny way; we’re actually a piece of infrastructure. And doing it the direct way, with the key management server, we found to be a very effective and very reliable way to do it.
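
As a hedged sketch of that key-server approach, using the common HLS AES-128 case rather than TAG’s actual implementation: the probe fetches the content key directly from a pre-authorized key-server endpoint (URL and key ID hypothetical) and decrypts segments itself, with no per-viewer authorization flow:

```python
# Sketch of direct key-server access for monitoring: fetch the content
# key by key ID from a pre-authorised endpoint, then decrypt segments
# locally (HLS-style AES-128-CBC shown). Names are illustrative.
import requests
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY_SERVER = "https://keys.example.com/v1/key"   # hypothetical endpoint

def fetch_content_key(key_id: str) -> bytes:
    """Retrieve the raw 16-byte AES-128 content key for a stream."""
    resp = requests.get(KEY_SERVER, params={"kid": key_id}, timeout=5)
    resp.raise_for_status()
    return resp.content

def decrypt_segment(data: bytes, key: bytes, iv: bytes) -> bytes:
    """Decrypt one media segment with AES-128-CBC, as used by HLS AES-128."""
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    return decryptor.update(data) + decryptor.finalize()

# clear = decrypt_segment(segment_bytes, fetch_content_key("channelA-0042"), iv_bytes)
```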

Shannon:

Great. Thanks, Paul. Here’s another question: can DataMiner results be integrated with TAG visualization?

Paul Briscoe:

I think they can. Thomas, you can manage the visualization of TAG fairly completely from DataMiner, right?

Thomas Gunkel:

Yeah, absolutely. Again, we have all the metrics, all the data. We can read them, and then we can visualize them the way you want. As I talked about, under the hood, in our DataMiner Cube client we are using Microsoft Visio, and I always say you can draw in Visio whatever you want to display. Feel free to do that, and it can be interactive. Or we use something outside of Visio which is HTML5 based, if you have a different client. So any kind of visualization with the data is perfectly possible.

Paul Briscoe:

I have a follow-up question. It’s a personal one. So what do you do for Mac users in terms of Visio?

Thomas Gunkel:

For Mac users, there is no Visio. Then we go to the HTML5.

Paul Briscoe:

Oh, okay. All right.

Thomas Gunkel:

That might be the future of Visio. We’ve used it for many, many years, but more and more we see that we’re going towards HTML5, which is multi-OS capable, so no restrictions there.

Paul Briscoe:

The only thing I miss from the old days of Windows is Visio. All right, next question Shannon.

Shannon:

Yeah. Great. So we have two more questions. I think we can fit in here. Can DataMiner integrate TAG monitoring data and visualization with status and alarming from other third party equipment?

Thomas Gunkel:

Yeah. The same as with all the other infrastructure. It doesn’t matter if it’s some other infrastructure or other third-party probes. As you’ve seen from that customer example, we integrate with probing systems or customer experience systems like Conviva or Touchstream, and we bring it all together. So it’s not limited to a certain type of device or a certain aspect of the operations. It doesn’t really matter.

Paul Briscoe:

So that’s actually an important one too, because with TAG probes, you’ll discover, for example, that the video coming out of a server has disappeared in the playout system. You probably have visibility of that server’s health and its operations. So this is what brings the customer the real strength: the loss of video tells them something upstream is broken, and your ability to talk to third-party equipment means they have great depth of view into what the heck upstream might’ve broken. So that’s a really, really strong point for the DataMiner integration. Thank you. Any more, Shannon?

Shannon:

Yeah, we have one more question that I think we can fit into the hour. What happens if TAG comes up with a new release with many new features? Will Skyline charge to adapt the driver? I think that’s for you Thomas.

Thomas Gunkel:

No, we don’t. As long as our customers have an active service level agreement with us, and that’s the case for 99% of our customers, whenever there are updates to our own software, or to third-party drivers we have developed, let’s say TAG comes up with new firmware features, we will update those drivers free of charge. We will deploy them automatically onto the customer’s system, if they want, and that’s all included. Also important to know: we can handle different versions of third-party systems. So let’s say the customer already runs 50 TAG probes on the new version and still has a few running on a different firmware; that can be handled as well, so it doesn’t have to be the very same software version on all the probes.

Paul Briscoe:

Fantastic. So I’m seeing from the notes I’m getting in a little chat here that we’re about out of time. We’ve had a good conversation. We’ve overshot our time limit a little bit, but that’s been great. It’s been a lot of fun talking to you. I would like to thank all our visitors for attending. I know your time is important, and I know wasting it is not a good thing. The fact that you’ve chosen to spend it with Thomas and me here today is very, very important to us. We’re very grateful. Thank you so much for that. And a large number of questions have apparently been posed, I’ve been told. We will answer these after the call, and all questions posed, and their answers, will go out in an email to everybody attending.

Paul Briscoe:

You do have contact info for us. Please contact us, please contact Thomas, if we can help you out. TAG does free demos. We can have you up and running with a cloud instance of TAG in a matter of minutes, and you can sit at your desk with your web browser and learn all about the TAG system. You can configure a multi-viewer, you can look at your multi-viewer mosaic output. And Thomas, I’m not actually intimate with your process, but I’m sure you guys have the ability to let customers engage with and experience your product as well.

Thomas Gunkel:

Sure. Feel free to reach out, either to myself or via our website. There’s always a way to try the system. We have remote capabilities, we can give demos. Feel free to talk to us and we’ll find a good solution for how to test a DataMiner system.

Paul Briscoe:

Brilliant. Thank you, Thomas, so much. And to our attendees particularly, thank you for giving us your time and attention, and I hope it was of value to you. Have a great day, take care.

Thomas Gunkel:

Thanks so much. Thank you everybody.