I haven't had to do any heavy lifting with Pandas until now, but I've run into a bit of a situation and could use some guidance.
I've got some code that generates the following dataframe:
   ID_x  HOST_NM   IP_ADDRESS_x  SERIAL_x  ID_y   IP_ADDRESS_y     COST  PURCHASE_DATE  ID     IP_ADDRESS  SERIAL_y                   OS
0  62bf   philip  192.168.1.115     12345    32  192.168.1.115    36.78     2018-05-05   2  192.168.1.115       NaN      Debian 11 Linux
1  3a73      vic  192.168.1.145     17B0P    33  192.168.1.145   749.64     2018-07-26   3  192.168.1.145     17B0P        DSM 7.1-42661
2  4237     BILL   192.168.1.99       NaN    31   192.168.1.99  3584.83     2018-03-15   1   192.168.1.99     38174      Windows 10 LTSC
3  3027      tim   192.168.1.96     C02G7    34   192.168.1.96  1289.00     2021-10-13   4  192.168.1.100     C02G7  macOS Monterey 12.4
This dataframe is generated via an outer merge of three other dataframes, so the duplicated columns are intentional. The idea is to fill in a missing serial number on a row when that row has a hostname and a consistent IP address across the redundant columns. My plan was to collect all of the IP addresses in a row and, if at least 80% of them agree, treat the row as one device and use that address to fill in the serial; otherwise leave it as NaN. Then, at a later stage, I'll drop the redundant columns. A rough sketch of the 80% idea follows below.
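Here is a minimal sketch of what I mean. It assumes the suffixed column names from the merged frame above (IP_ADDRESS_x, IP_ADDRESS_y, IP_ADDRESS, SERIAL_x, SERIAL_y); consolidate_row and the 0.8 threshold are just placeholder names I made up:

import numpy
import pandas

def consolidate_row(row, ip_cols, serial_cols, threshold=0.8):
    # Gather the row's non-null IP addresses; if at least `threshold`
    # of them agree, treat the row as one device and return its first
    # non-null serial, otherwise return NaN.
    ips = row[ip_cols].dropna()
    if len(ips) and (ips == ips.mode()[0]).mean() >= threshold:
        serials = row[serial_cols].dropna()
        if len(serials):
            return serials.iloc[0]
    return numpy.nan

ip_cols = ["IP_ADDRESS_x", "IP_ADDRESS_y", "IP_ADDRESS"]
serial_cols = ["SERIAL_x", "SERIAL_y"]
df["SERIAL"] = df.apply(consolidate_row, axis=1,
                        ip_cols=ip_cols, serial_cols=serial_cols)

With only three IP columns in this example, 80% effectively means all three have to agree; the threshold would need tuning against the real data set.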
This is a PoC. The above is a small example data set, but the actual data set contains around 35K unique devices (rows) and 112 columns (4 sets of redundant data).
I'm not seeing anything in Pandas that looks tailor-made for this kind of situation. Am I wrong about that?
Further examples and the code used here can be found on GitHub here. The code in question follows.
import logging
from functools import reduce
# import numpy
import pandas
log = logging.getLogger("merge")
log.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
log.addHandler(ch)
r1 = [
    {
        'ID': '62bf',
        'HOST_NM': 'philip',
        'IP_ADDRESS': '192.168.1.115',
        'SERIAL': '12345',
    },
    {
        'ID': '3a73',
        'HOST_NM': 'vic',
        'IP_ADDRESS': '192.168.1.145',
        'SERIAL': '17B0P',
    },
    {
        'ID': '4237',
        'HOST_NM': 'BILL',
        'IP_ADDRESS': '192.168.1.99',
        'SERIAL': '',
    },
    {
        'ID': '3027',
        'HOST_NM': 'tim',
        'IP_ADDRESS': '192.168.1.96',
        'SERIAL': 'C02G7',
    },
]
r2 = [
    {
        'ID': '34',
        'HOST_NM': 'tim',
        'IP_ADDRESS': '192.168.1.96',
        'COST': '1289.00',
        'PURCHASE_DATE': '2021-10-13',
    },
    {
        'ID': '33',
        'HOST_NM': 'vic',
        'IP_ADDRESS': '192.168.1.145',
        'COST': '749.64',
        'PURCHASE_DATE': '2018-07-26',
    },
    {
        'ID': '31',
        'HOST_NM': 'BILL',
        'IP_ADDRESS': '192.168.1.99',
        'COST': '3584.83',
        'PURCHASE_DATE': '2018-03-15',
    },
    {
        'ID': '32',
        'HOST_NM': 'philip',
        'IP_ADDRESS': '192.168.1.115',
        'COST': '36.78',
        'PURCHASE_DATE': '2018-05-05',
    },
]
r3 = [
    {
        'ID': '2',
        'HOST_NM': 'philip',
        'IP_ADDRESS': '192.168.1.115',
        'SERIAL': '',
        'OS': 'Debian 11 Linux',
    },
    {
        'ID': '3',
        'HOST_NM': 'vic',
        'IP_ADDRESS': '192.168.1.145',
        'SERIAL': '17B0P',
        'OS': 'DSM 7.1-42661',
    },
    {
        'ID': '1',
        'HOST_NM': 'BILL',
        'IP_ADDRESS': '192.168.1.99',
        'SERIAL': '38174',
        'OS': 'Windows 10 LTSC',
    },
    {
        'ID': '4',
        'HOST_NM': 'tim',
        'IP_ADDRESS': '192.168.1.100',
        'SERIAL': 'C02G7',
        'OS': 'macOS Monterey 12.4',
    },
]
def unique(l: list) -> list:
    # Return the elements of l in order, with duplicates removed
    # (not used yet in this PoC).
    u = []
    for e in l:
        if e not in u:
            u.append(e)
    return u
df1 = pandas.DataFrame(r1)
df2 = pandas.DataFrame(r2)
df3 = pandas.DataFrame(r3)
df_list = [df1, df2, df3]
# Per-frame key columns and the columns of interest
# (not used yet in this PoC).
df_keys = {
    0: ["ID", "SERIAL"],
    1: ["HOST_NM"],
    2: ["HOST_NM", "SERIAL"],
}
target_columns = ["HOST_NM", "SERIAL", "IP_ADDRESS"]
# Outer-merge the three frames on HOST_NM; pandas suffixes the
# colliding column names with _x and _y.
df = reduce(lambda left, right: pandas.merge(
    left, right, on=["HOST_NM"], how="outer"), df_list)
log.debug(df)
# Replace empty strings with NaN. The commented-out regex variant
# below would also catch whitespace-only strings.
# df = df.replace(r"^\s*$", numpy.NaN, regex=True)
df = df.mask(df == '')
log.debug(f'\n\n{df}')
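For completeness, the later clean-up step I mentioned would look something like this (the column list is assumed from the merged frame above, and presumes SERIAL and a canonical IP_ADDRESS have already been consolidated):

# Drop the redundant suffixed columns once their data has been
# consolidated into the canonical SERIAL and IP_ADDRESS columns.
redundant = ["ID_x", "ID_y", "IP_ADDRESS_x", "IP_ADDRESS_y",
             "SERIAL_x", "SERIAL_y"]
df = df.drop(columns=redundant)
log.debug(f'\n\n{df}')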