
I would like to copy the contents of a file twice (append the file to itself) from the terminal, without creating a separate file, so I typed `cat a >> a`, but the job never stops. When I kill the job and open file `a`, I find it now has a huge number of rows.

I know `cat a > b ; cat a >> b` works, but I wonder why `cat a >> a` does not. Can anyone explain what is happening?

Thanks.

crazyeyes

3 Answers


`cat a >> a` will not work because you have opened file `a` for both reading and appending. You keep reading a file to which you are adding more data, so you never reach the end of the file.
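
A safe way to see this for yourself without filling the disk (a minimal sketch, assuming GNU coreutils `timeout` is available and `a` is a throwaway test file):

printf 'hello\n' > a          # start with a single short line
timeout 1 sh -c 'cat a >> a'  # let the runaway copy run for one second only
wc -l a                       # the file now has far more than one line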

unxnut

While `cat` is reading lines from the beginning of the file, those lines are already being appended to the end of the file by `>> a`.

When `cat` reaches the former end of the file, it is not the end anymore. New lines have already been appended, and `cat` continues to read those lines. Those lines will also be appended by `>> a`, later read by `cat` and appended again, and read again, and so on until you abort the process or run out of disk space.
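
This is also why the two-command version mentioned in the question terminates: it reads from `a` but writes to a different file `b`, so `cat` reaches a fixed end of file (a minimal sketch):

printf 'one\ntwo\n' > a
cat a > b      # the copy finishes because b grows, not a
cat a >> b     # append a second copy of a to b
wc -l a b      # a still has 2 lines, b has 4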

sth
  • And this works even if the file originally contains a single byte. `cat` tries to read a few hundred or few thousand bytes, gets 1, writes it to the output, and then tries again since it didn't read zero bytes (denoting EOF), and sure enough, there's an extra byte to read, so it gets it and writes it and ... it doesn't stop until you run out of disk space. – Jonathan Leffler Sep 04 '14 at 03:54

Here is the correct way to do what you want:

echo "$(cat a)" >>a

Because `cat` reads and writes at the same time (which is what lets you `cat` very big files), we need to capture the whole output before appending it to file `a`.
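
A quick check of how this behaves (a minimal sketch): the substitution `$(cat a)` is expanded completely before `echo` writes anything, so the whole file is already captured in memory before any data is appended, and the command terminates.

printf 'one\ntwo\n' > a
echo "$(cat a)" >> a   # substitution finishes before the append begins
cat a                  # prints one, two, one, two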

Onlyjob
  • For reasonably sized files, it will work; for really large files, you end up with 'argument list too large' errors. The limit is platform dependent. – Jonathan Leffler Sep 04 '14 at 03:55
  • It should be fairly obvious that this method is not suitable for large files. Yet it is commonly used for trimming log files, as in the following example, which rewrites the last 1000 lines of the log file and drops the rest: `echo "$(tail -1000 large.log)" > large.log` – Onlyjob Sep 09 '14 at 02:58
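
As the comments note, the command-substitution trick is not suitable for very large files. A common workaround in that case (a sketch, not taken from this thread) is to go through a temporary file, so the read of `a` is finished before anything is appended to it:

tmp=$(mktemp)
cat a > "$tmp"    # full copy of a; reading a is complete here
cat "$tmp" >> a   # appending to a is now safe
rm -f "$tmp"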