My case seems simple, but I couldn't do it in a straightforward way, and I need to because the real files are very large.
I have two txt files and I would like to generate a new file containing the combined content of both, without duplicate lines. Something like this:
file1.txt
192.168.0.100
192.168.0.101
192.168.0.102
file2.txt
192.168.0.100
192.168.0.101
192.168.1.200
192.168.1.201
I would like to merge these files above and generate another one like this:
result.txt
192.168.0.100
192.168.0.101
192.168.0.102
192.168.1.200
192.168.1.201
Any simple suggestions? Thank you.
If changing the order is not an issue:
sort -u file1.txt file2.txt > result.txt
This sorts the lines of both files together (spilling to temporary files on disk if they don't fit in memory), then outputs each unique line only once (the -u flag).
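As a quick sanity check, the command can be run against the sample files from the question (the file names and contents below are taken directly from it):

```shell
# Recreate the two sample files from the question.
printf '192.168.0.100\n192.168.0.101\n192.168.0.102\n' > file1.txt
printf '192.168.0.100\n192.168.0.101\n192.168.1.200\n192.168.1.201\n' > file2.txt

# Sort both files together and keep each distinct line once.
sort -u file1.txt file2.txt > result.txt
cat result.txt
# 192.168.0.100
# 192.168.0.101
# 192.168.0.102
# 192.168.1.200
# 192.168.1.201
```

Here the sorted order happens to match the desired result.txt, but in general the output order is lexical, not the order the lines appeared in the inputs.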
There's a semi-standard idiom in awk for removing duplicates:
awk '!a[$0]++ {print}' file1.txt file2.txt
The array a counts occurrences of each line; a line is printed only the first time it is seen (i.e., when a[$0] is 0 before it is incremented, so !a[$0]++ is true).
This is asymptotically faster than sorting the input (and preserves the input order), but requires more memory.
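A minimal sketch of the awk approach on the question's sample files, showing that it keeps the first-seen order of the input rather than sorting:

```shell
# Recreate the two sample files from the question.
printf '192.168.0.100\n192.168.0.101\n192.168.0.102\n' > file1.txt
printf '192.168.0.100\n192.168.0.101\n192.168.1.200\n192.168.1.201\n' > file2.txt

# a[$0]++ evaluates to 0 (false) the first time a line is seen,
# so !a[$0]++ is true only on the first occurrence of each line.
awk '!a[$0]++ {print}' file1.txt file2.txt > result.txt
cat result.txt
# 192.168.0.100
# 192.168.0.101
# 192.168.0.102
# 192.168.1.200
# 192.168.1.201
```

For these inputs the result coincides with the sort -u output, because the sample lines happen to arrive in sorted order; with unsorted inputs the awk version would preserve the original order while sort -u would not.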