I'm new to programming, Python, and networkx (ouch!), and I'm trying to merge four graphml files into a single one and remove the duplicate nodes, following the excellent instructions here.
However, I can't figure out how to keep track of the duplicate nodes when there are FOUR files to compare instead of two. The code I've written below won't work, but hopefully you can see where my thinking goes wrong and help me.
# Script to merge several graphml files into a single graphml file
import networkx as nx

# First, read the graphml files into NetworkX graphs (add or remove variables as necessary)
A = nx.read_graphml("file1.graphml")
B = nx.read_graphml("file2.graphml")
C = nx.read_graphml("file3.graphml")
D = nx.read_graphml("file4.graphml")
# Create a new graph variable containing all the previous graphs
H = nx.union_all([A, B, C, D], rename=('1-', '2-', '3-', '4-'))  # nx.union only joins two graphs, so use union_all for several
# Check which nodes appear in two or more of the original graphml files
duplicate_nodes_a_b = [n for n in A if n in B]
duplicate_nodes_b_c = [n for n in B if n in C]
duplicate_nodes_c_d = [n for n in C if n in D]
all_duplicate_nodes = # How should I get this? (my best guess is in the sketch after the script)
# Remove the duplicate nodes
for n in all_duplicate_nodes:
    n1 = '1-' + str(n)
    n2 = '2-' + str(n)
    n3 = '3-' + str(n)
    n4 = '4-' + str(n)
    H.add_edges_from([(n1, nbr) for nbr in H[n2]])  # How can I take care of duplicate_nodes_b_c and duplicate_nodes_c_d?
    H.remove_node(n2)
# Write the merged graph into a new graphml file
nx.write_graphml(H, "merged_file.graphml", encoding="utf-8", prettyprint=True)
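In case it helps to see what I've tried, here is a rough, untested sketch of what I guessed the duplicate-handling part of the script should look like: compare every pair of graphs with itertools.combinations instead of only neighbouring pairs, collect the duplicates in a set, and then for each duplicated node keep the copy from the first file that contains it and move the other copies' edges onto it (assuming undirected graphs, and ignoring any attributes on the copies that get deleted). I have no idea whether this is the right approach, though.

import itertools

graph_list = [A, B, C, D]
prefixes = ['1-', '2-', '3-', '4-']

# Collect nodes that appear in at least two of the original graphs,
# checking every pair of graphs rather than only consecutive ones
all_duplicate_nodes = set()
for g1, g2 in itertools.combinations(graph_list, 2):
    all_duplicate_nodes.update(n for n in g1 if n in g2)

# For each duplicated node, keep the copy from the first graph that has it,
# move the edges of every other copy onto that one, then delete the extra copy
for n in all_duplicate_nodes:
    copies = [p + str(n) for p, g in zip(prefixes, graph_list) if n in g]
    keep = copies[0]
    for extra in copies[1:]:
        H.add_edges_from([(keep, nbr) for nbr in H[extra]])
        H.remove_node(extra)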
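I also stumbled across nx.compose_all and wondered whether simply doing H = nx.compose_all([A, B, C, D]) would sidestep the renaming and de-duplicating altogether, since it seems to merge nodes that share the same id automatically, but I'm not sure what happens when the same node has different attributes in different files.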