
What Low Latency Is and How to Measure It

Learn more about streaming latency at Streaming Media East.

See complete videos and other highlights from Streaming Media West Connect on Streaming Media's YouTube channel.

Read the complete transcript of this video:

Robert Reinhardt: If you need under six seconds of latency, regular old HLS might get you there. I've done plenty of two-second chunk, three-segment manifest setups where you're getting around six seconds, maybe 10; it can drift up to 10. Depending on what kind of edge-origin architecture you set up, getting six might be difficult to pull off, but you can actually get much lower than the 30 seconds of latency that is the "standard" for HLS. I've done a lot of custom Wowza setups where we're using two-second chunk sizes and three-segment manifests, and that shaves enough latency off. A lot of live broadcasts don't need super-low latency. If you need under three seconds, there's the new low-latency HLS spec that Apple has been working on since the end of last year; Roger Pantos talked about it in a Streaming Media keynote.
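For reference, here is a minimal sketch of that kind of HLS tuning, assuming an ffmpeg-based packager: two-second segments and a three-segment sliding-window playlist. The ingest URL and output path are placeholders, and this is not a specific Wowza configuration from the talk; real media servers expose equivalent chunk-size and playlist-length settings in their own config.

```python
# Sketch: package a live input as HLS with 2-second segments and a
# 3-segment sliding-window playlist, roughly the tuning described above.
import subprocess

SRC = "rtmp://localhost/live/stream"  # hypothetical ingest URL

subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
    "-c:a", "aac",
    "-f", "hls",
    "-hls_time", "2",                 # target 2-second segments
    "-hls_list_size", "3",            # keep only 3 segments in the playlist
    "-hls_flags", "delete_segments",  # drop old segments from disk
    "out/stream.m3u8",
], check=True)
```

Shorter segments and a shorter playlist reduce how much media a player must buffer before it starts, which is where most of the latency savings come from.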

You might be able to pull that off with low-latency HLS. If you need sub-one-second or sub-500-millisecond latency, then WebRTC is pretty much your only option, especially when you're going with the browser. So you've gotta work with what you've got. And again, WebRTC has come a long way. It just doesn't move as quickly as some past runtime architectures like Flash, right, because you've gotta get all these stakeholders moving together, all these behemoth browsers all moving forward together.

So let's talk about latency real quick. How do you measure glass-to-glass latency? There are different ways to do it. Sometimes you can get timestamps embedded in your WebRTC outbound and your publish ingest, so you can look at it at the server and look at it on the client and do some very calculated, measurable WebRTC round-trip times. That can be an option.
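As a rough illustration of the timestamp approach, here is a minimal sketch. It assumes the publisher stamps each frame with its wall-clock capture time (for example over a WebRTC data channel or in-band metadata) and that both machines are NTP-synchronized; the function name and call site are illustrative, not a real player API.

```python
# Minimal sketch of timestamp-based glass-to-glass measurement, assuming the
# publisher embeds a wall-clock capture timestamp with each frame and that
# publisher and viewer clocks are NTP-synced.
import time

def glass_to_glass_ms(capture_ts_ms: float) -> float:
    """Latency from camera capture to on-screen display, in milliseconds."""
    display_ts_ms = time.time() * 1000.0  # viewer's clock when the frame renders
    return display_ts_ms - capture_ts_ms

# Example: a frame stamped 850 ms ago measures roughly 850 ms of latency.
print(round(glass_to_glass_ms(time.time() * 1000.0 - 850)))
```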

What I usually do--and it's very quick and dirty--is just to use a burned-in timecode on a test stream. It works great for testing, but you're not going to be able to measure that in production, because not everyone's going to be streaming with burned-in timecode. And I'll show you an example of that in just a moment.
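One hedged way to generate such a test stream is sketched below: ffmpeg's built-in test source with the local wall-clock time drawn onto the picture, pushed to a placeholder RTMP ingest. You then compare the burned-in clock on the encoder's screen with what the player shows and read off the difference.

```python
# Sketch of a burn-in timecode test stream using ffmpeg's test source and
# drawtext filter; the RTMP URL is a placeholder for your own ingest.
import subprocess

subprocess.run([
    "ffmpeg", "-re",
    "-f", "lavfi", "-i", "testsrc=size=1280x720:rate=30",
    # drawtext needs fontconfig or an explicit fontfile= on some builds
    "-vf", "drawtext=text='%{localtime}':fontsize=64:fontcolor=white:"
           "box=1:boxcolor=black@0.6:x=40:y=40",
    "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
    "-f", "flv", "rtmp://localhost/live/timecode-test",  # hypothetical ingest
], check=True)
```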

Counting tests are a very quick and easy way to see how good your latency is. What that basically means is, if I'm in a Zoom call or any kind of videoconferencing app or proof of concept, I can try to do a rapid count. So I'll say one, whoever's joining the conference with me will say two, and we'll try to quickly follow with the next number as we count up. And if there's a long delay between when I say a number and when I hear the next one come back from the other person, then I get a really good sense right away of how bad that latency is. If we can almost talk on top of each other, then that's fantastic. Again, that's quick and dirty, so it's not as measurable as some of this other stuff that I mentioned.

Related Articles

Limelight Brings Even Lower Latency to Realtime Streaming

Award-winning Limelight Realtime Streaming now offers increased global scalability and bi-directional data sharing, enabling new, innovative online video business models.

Low Latency Video Streaming - How Low Can You Go?

Advances in chipsets allow for more processing at the edge, so we're going to see latency times drop even further in the near future.

Low-Latency Streaming Checklist

VideoRx CTO leads viewers through a checklist for prioritizing and achieving low-latency streaming in this clip from his presentation at Streaming Media West Connect 2020.
