
I'm building a scalable infrastructure for my nginx RTMP server. I have nginx + the arut nginx-rtmp-module + ffmpeg on the server. Below is the scheme of the first architecture, inside a managed instance group.
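For context, the ingest side is roughly this (just a sketch: the input file, application name, and stream key are placeholders, not my exact values):

```
# Push a local test source to the nginx-rtmp ingest application.
# nginx-rtmp (with HLS enabled) then writes the .m3u8 playlist and
# the numbered .ts segments into its configured hls_path.
ffmpeg -re -i input.mp4 \
  -c:v libx264 -preset veryfast -c:a aac \
  -f flv rtmp://localhost/live/streamkey
```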

First scheme. The problem with this scheme is simple: the input stream arrives only at Server 1, so viewers connected to Server 2 cannot watch it.

EDIT2: In this case my browser works fine: it fetches all the .ts files directly from the server and it works! Obviously, as already said, this solution is not scalable.

So... I think we need something shared by all the instances. I've introduced a Google Cloud Storage bucket mounted with gcsfuse on each instance (still using a managed instance group). Second scheme.
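For reference, each instance mounts the bucket roughly like this (a sketch; the bucket name and mount point are placeholders):

```
# Assumes gcsfuse is installed and the instance's service account
# has read/write access to the bucket.
mkdir -p /mnt/hls
gcsfuse --implicit-dirs my-hls-bucket /mnt/hls

# nginx-rtmp's hls_path is then pointed at /mnt/hls, so every
# instance in the group sees the same playlist and segments.
```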

The problem in this scenario: while the server receiving the input stream is creating the .ts segments, each time a segment is created the bucket shows a 0-byte .ts file. Only when the server finishes writing the .ts does the bucket receive the complete file. So it's not really "shared"...

EDIT2: In this case my browser loads only the first 1-2 .ts segments, then it gets stuck reloading the m3u8 in a loop.

So, I've tested these solutions but they don't work. Am I doing something wrong here? Is a bucket not the right thing to use?

Thanks

P.S. I've added Cloud CDN in front of the bucket, so my application can get the .ts segments directly from the CDN. The problem is that after the 2nd/3rd .ts my application keeps getting the m3u8 but can't get the other .ts files (the CDN has the .ts files!).

EDIT1: It's as if, after the first load of the m3u8, it takes the first .ts segments already listed... but after that it can't get the next .ts!

EDIT3: This is what my browser loads:

(screenshot: Chrome console)

The m3u8 loaded in the loop is:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:10
#EXT-X-DISCONTINUITY
#EXTINF:4.167,
0.ts
#EXTINF:4.167,
1.ts
#EXTINF:6.666,
2.ts
#EXTINF:4.167,
3.ts

The real m3u8 on the bucket is:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:10
#EXT-X-DISCONTINUITY
#EXTINF:4.167,
0.ts
#EXTINF:4.167,
1.ts
#EXTINF:6.666,
2.ts
#EXTINF:4.167,
3.ts
#EXTINF:6.133,
4.ts
#EXTINF:4.734,
5.ts
#EXTINF:10.016,
6.ts
#EXTINF:6.450,
7.ts
#EXTINF:8.317,
8.ts
#EXTINF:7.183,
9.ts
#EXTINF:5.850,
10.ts
#EXTINF:2.050,
11.ts

EDIT3: Hmm... maybe it's a CDN cache problem? In that case, where can I edit the cache configuration?

EDIT4: This is the current scheme that gives me the problem of the m3u8 file not being correctly updated in my player:

(diagram: current case)

UPDATE:

I tried both options concurrently (just changing the source my player points to):

  1. Pointing to the m3u8 directly on the server (where I have the bucket mounted with gcsfuse)
  2. Pointing to the m3u8 on the CDN

The 1st case works: the m3u8 is updated every time. In the 2nd case the m3u8 loads the first version (tried, for example, after 20 .ts segments had already been created). It loads the first m3u8... then reloads the same version of the file in a loop.

The stream, like I said, was the same: only the m3u8 source was fetched in different ways, directly from the bucket mounted on the server or directly from the CDN (which sits on the same bucket).

UPDATE2: I retried everything: if I download the m3u8 from my CDN... I also get the stale file. If I take the same file from the bucket, it's updated!! I tried to point my player directly at storage.googleapis.com, but from my website I get a CORS error... I tried to change the CORS settings (with gsutil), but no luck. How can I prevent caching on the CDN? I've already set the headers to no-cache and no-store :/
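For the CORS part, what I ran was along these lines (the bucket name and origin below are placeholders, not my real values):

```
# cors.json: allow the web player's origin to GET the playlist and segments.
cat > cors.json <<'EOF'
[
  {
    "origin": ["https://www.example.com"],
    "method": ["GET", "HEAD"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]
EOF

# Apply it to the bucket.
gsutil cors set cors.json gs://my-hls-bucket
```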

Here is my Cloud CDN cache hit ratio:


  • You can definitely use a Managed Instance Group with Cloud Storage. The Cloud Storage bucket will centralize your data across your [managed instance groups](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-managed-instances). Could you provide me with more context on the .ts files? You are receiving a .ts file which is saved in Cloud Storage, and every time a new .ts file is received it replaces the existing one. When you say it is not working, do you mean that the .ts file is not being replaced? – Milad Tabrizi Feb 28 '20 at 16:41
  • The problem is that when I start watching the live stream I get 1.ts and 2.ts (for example)... after that my browser just loads the .m3u8 of the variant that matches my bandwidth (for example the directory _480/file.m3u8) but gets stuck on it, just loading that m3u8 in a loop. P.S. all the .ts segments are in storage, but the browser can't get them after the first 2-3 segments – tidpe Feb 28 '20 at 16:43
  • To verify whether the CDN is the cause of this issue, try disabling it. If the issue persists, let us know. However, Cloud CDN is not going to cache everything; the details can be found [here](https://cloud.google.com/cdn/docs/caching). – dany L Feb 29 '20 at 01:46
  • Sorry, I was wrong in my previous comment. If I point the source directly at the load balancer in front of my managed instance group... it works! So the problem is the CDN cache, I suppose! Obviously I'd prefer to use the CDN (I think it's smarter and faster...) – tidpe Feb 29 '20 at 10:34
  • Does no one know why the CDN is serving me the stale file? :/ – tidpe Mar 02 '20 at 13:26

1 Answer


The issue is complex, but you can use the commands indicated in the CDN troubleshooting page to find out if the responses received were served from cache.
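For example, comparing the response headers from the CDN with those from the origin can show whether the playlist is coming out of the cache (a response served from Cloud CDN cache typically carries an Age header). The hostnames and paths below are placeholders:

```
# Headers as returned through the HTTPS load balancer / Cloud CDN.
curl -sI https://cdn.example.com/hls/stream.m3u8 | grep -iE 'age|cache-control|etag'

# Headers straight from the bucket, bypassing the CDN.
curl -sI https://storage.googleapis.com/my-hls-bucket/hls/stream.m3u8 | grep -iE 'age|cache-control|etag'
```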

You can also try cache invalidation, which removes an object from the cache before its normal expiration time. You can force an object, or a set of objects, to be ignored by the cache by requesting an invalidation. Check whether this gives you the expected results; if it does, you may want to review the caching details and adjust the expiration times.
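An invalidation is requested against the load balancer's URL map, for example (the URL map name and path pattern are placeholders):

```
# Invalidate every cached object under /hls/ on the CDN-enabled URL map.
gcloud compute url-maps invalidate-cdn-cache my-url-map --path "/hls/*"
```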

If you want to stop content from being cached, you can follow the GCP article on preventing caching:

Preventing caching: to prevent private information from being cached in Cloud CDN caches, do the following:

  • Include a Cache-Control: private header in responses that should not be stored in Cloud CDN caches, or a Cache-Control: no-store header in responses that should not be stored in any cache, even a web browser's cache.
  • Do not sign URLs that provide access to private information. When content is accessed using a signed URL, it is potentially eligible for caching regardless of any Cache-Control directives in the response.
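For objects that are already in the bucket, the Cache-Control metadata can be adjusted with gsutil setmeta, along these lines (the bucket and paths are placeholders; objects written later still need the header set at upload time):

```
# The playlist changes every few seconds, so it must not be cached.
gsutil setmeta -h "Cache-Control:no-store" "gs://my-hls-bucket/hls/*.m3u8"

# Segments never change once written, so a short public TTL is safe.
gsutil setmeta -h "Cache-Control:public, max-age=60" "gs://my-hls-bucket/hls/*.ts"
```

For a live HLS playlist the key point is that the .m3u8 must not be cached at all, while the .ts segments are immutable once written and can be cached freely.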

  • Is there a way to set the cache expiration to 0 for the whole bucket **BEFORE** the files are uploaded? I want to avoid running a cron job every second to re-upload files without caching! – tidpe Mar 04 '20 at 10:41
  • You can try to use [gsutil setmeta](https://cloud.google.com/storage/docs/gsutil/commands/setmeta#description_1) to modify the Cache-Control for all of the files in a bucket. – Alioua Mar 05 '20 at 04:45
  • Thank you Alioua, I've already seen gsutil setmeta. The problem is that I'd have to run the same command every second? :| – tidpe Mar 05 '20 at 09:21
  • This has to be done for each new file you upload. However, if the file is being constantly modified, you should avoid using Cloud CDN in that case, since it won't work with your configuration. You could also look into the Google Live Streaming [solution](https://cloud.google.com/solutions/media-entertainment/use-cases/live-streaming) – Alioua Mar 13 '20 at 03:51