
I'm using WMWriterAdvanced and the WriteStreamSample function to write video data to an ASF file. If there is a power failure while samples are being written, I lose roughly the last 20 seconds that had already been written to the file. After inspecting the file with ASFView I noticed that the last 500 packets are just filled with zero bytes. I understand that it's possible to lose some data during a power failure, but 20 seconds of video seems like too much.

Why are samples that were already written corrupted, and is it possible to decrease the amount of data that gets lost?
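
For reference, here is a minimal sketch of how compressed samples are typically pushed through IWMWriterAdvanced::WriteStreamSample. The writer is assumed to be already configured (profile and output file set, BeginWriting called); function and variable names are placeholders for this illustration, not the actual application code.

```cpp
// Minimal sketch: pushing one compressed video sample with
// IWMWriterAdvanced::WriteStreamSample. pWriter/pWriterAdvanced are assumed
// to be already configured; streamNumber is the stream number from the profile.
#include <windows.h>
#include <wmsdk.h>
#include <string.h>

HRESULT PushSample(IWMWriter* pWriter,
                   IWMWriterAdvanced* pWriterAdvanced,
                   WORD streamNumber,
                   const BYTE* pData, DWORD cbData,
                   QWORD sampleTime100ns,       // presentation time, 100 ns units
                   QWORD sampleDuration100ns)
{
    INSSBuffer* pSample = NULL;
    HRESULT hr = pWriter->AllocateSample(cbData, &pSample);
    if (FAILED(hr))
        return hr;

    BYTE* pBuffer = NULL;
    hr = pSample->GetBuffer(&pBuffer);
    if (SUCCEEDED(hr))
    {
        memcpy(pBuffer, pData, cbData);
        hr = pSample->SetLength(cbData);
    }
    if (SUCCEEDED(hr))
    {
        // The writer buffers and interleaves samples internally before they
        // ever reach the ASF file on disk.
        hr = pWriterAdvanced->WriteStreamSample(streamNumber,
                                                sampleTime100ns,
                                                0,                   // send time (ms)
                                                sampleDuration100ns,
                                                0,                   // flags
                                                pSample);
    }
    pSample->Release();
    return hr;
}
```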

  • Because they probably WEREN'T written to disk - the actual bits were probably floating around in a buffer or cache somewhere, waiting to get committed to the actual storage media. 20 seconds of video doesn't mean much: a 1x1 @ 0.000001fps video can be 20 seconds long, just like a 4K @ 120fps 3D video can be 20 seconds. – Marc B Apr 24 '14 at 16:59
  • I'm losing about 2 MB of data. It's an H.264 10fps FHD stream. But it seems the data was written to disk, because I can see that the file size did not change between before and after the power failure - I just got 2 MB of zero bytes at the end. – Taras.Igorovich Apr 24 '14 at 17:06
  • Doesn't mean much. The encoder app could have trivially generated a full-length file and zero-filled it. That could have happened right when the encoding started, long before the power failure, and WOULD have gotten committed to disk before the power blew. – Marc B Apr 24 '14 at 17:08
  • I'm pushing live samples, so the writer doesn't know the file size in advance, and write caching on the HDD is disabled. – Taras.Igorovich Apr 24 '14 at 17:15

1 Answer


The likely reason is that, with the file still open and being actively written, internal file buffers had not yet been flushed to disk, and the power failure lost that buffered data along with a portion of the structure/index data. A damaged file structure can also hide data that is physically present in the file but no longer properly linked to the rest of the content - hence the unexpectedly large amount of video lost.

It is typical for a file backed by the NTFS file system to contain zeros in the fragment where a power failure prevented the data from reaching the persistent media.
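
If you open and write the file yourself (which is not the case when the ASF writer manages its own file sink), a write-through handle and an explicit flush are what force data onto the persistent media. A minimal sketch of that mechanism, with placeholder names, is below:

```cpp
// Minimal sketch (Win32): opening a file so that writes bypass the OS write
// cache. This only applies to a file you create and write yourself; the ASF
// writer opens its own output file, so this illustrates the mechanism rather
// than being a drop-in fix for the writer.
#include <windows.h>

HANDLE OpenWriteThrough(const wchar_t* path)
{
    return CreateFileW(path,
                       GENERIC_WRITE,
                       FILE_SHARE_READ,
                       NULL,
                       CREATE_ALWAYS,
                       // FILE_FLAG_WRITE_THROUGH asks the system to write
                       // through any intermediate cache to the device.
                       FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
                       NULL);
}

BOOL AppendChunk(HANDLE hFile, const void* data, DWORD cb)
{
    DWORD written = 0;
    if (!WriteFile(hFile, data, cb, &written, NULL) || written != cb)
        return FALSE;
    // FlushFileBuffers additionally flushes cached file metadata; with a
    // write-through handle it is mostly belt-and-braces.
    return FlushFileBuffers(hFile);
}
```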

Roman R.
  • Write caching on the HDD is disabled, and I can see that the file size is increasing as soon as I start pushing the data... That's why I'm asking: is it possible to somehow decrease the amount of damaged data? It's critical for me because this is CCTV software. – Taras.Igorovich Apr 24 '14 at 17:12
  • I would say that some data loss is inevitable here. If you open/manage the file yourself, then you might want to try [write-through mode](http://msdn.microsoft.com/en-us/library/windows/desktop/aa364218%28v=vs.85%29.aspx). It does not make much sense to me though, since the performance impact is hardly worth the partial data safety. In my opinion the best strategy is writing into a custom format file incrementally, then finalizing into a well known format such as ASF (a rough sketch of that idea follows these comments). – Roman R. Apr 24 '14 at 17:30
  • I think "writing into custom format file incrementally, then finalize to ASF" is good solution for me. Thanks for help! – Taras.Igorovich Apr 24 '14 at 17:37