In Python, if there is a duplicate, use the date column to choose which duplicate to keep

I have code that runs 16 test cases against a CSV, checking for anomalies from poor data entry. A new column, 'Test Case Failed', is created. When a row fails a test, a number corresponding to that test is added to this column. These failed rows are separated from the passed rows and sent back to be corrected before they are uploaded into a database.

There are duplicates in my data, and I would like to add code that checks for duplicates, then decides which fields to keep based on the date columns, selecting the most recently updated values.
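Before deciding which fields to keep, the duplicates themselves can be surfaced with pandas' `duplicated`. A minimal sketch, assuming a frame with an `ID` column like the sample below (the data here is a hypothetical reconstruction, not the real CSV):

```python
import pandas as pd

# Hypothetical rows mirroring the sample: ID 123 appears twice
df = pd.DataFrame({
    "ID": [123, 123, 456],
    "Address": ["123 place", "789 road", "1 main st"],
    "AddrChange": ["11/22/2022", "05/06/2022", "01/01/2022"],
})

# keep=False flags every row whose ID appears more than once
dupes = df[df.duplicated("ID", keep=False)]
print(dupes["ID"].tolist())  # → [123, 123]
```

`keep=False` marks all copies rather than all-but-the-first, which is what you want when the "best" row still has to be assembled from several duplicates.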

Here is my data with a duplicated ID: the first row has the most recent address, while the second row has the most recent name.

ID   MnLast  MnFist  MnDead?  MnInactive?  SpLast  SpFirst  SPInactive?  SpDead  Addee         Sal       Address    NameChanged  AddrChange
123  Doe     John    No       No           Doe     Jane     No           No      Mr. John Doe  Mr. John  123 place  05/01/2022   11/22/2022
123  Doe     Dan     No       No           Doe     Jane     No           No      Mr. John Doe  Mr. John  789 road   11/01/2022   05/06/2022

Here is a snippet of my code showing the 5th test case, which checks for the following: the record has name information, the spouse has name information, and no one is marked deceased, but the addressee or salutation doesn't contain "&" or "AND". The addressee or salutation needs to be corrected; this record is married.

import pandas as pd
import numpy as np

data = pd.read_csv("C:/Users/file.csv", encoding='latin-1')

# Create a column to store which test number(s) the row failed
data['Test Case Failed'] = ''
data = data.replace(np.nan, '', regex=True)
data.insert(0, 'ID', range(0, len(data)))

# There are several test cases, but they function primarily the same
# Testcase 1
# Testcase 2
# Testcase 3
# Testcase 4

# Testcase 5 - comparing strings in columns
df = data[(data['FirstName'] != '') & (data['LastName'] != '') &
          (data['SRFirstName'] != '') & (data['SRLastName'] != '') &
          (data['SRDeceased'].str.contains('Yes') == False) &
          (data['Deceased'].str.contains('Yes') == False)]
df1 = df[df['PrimAddText'].str.contains("AND|&") == False]
data_5 = df1[df1['PrimSalText'].str.contains("AND|&") == False]
ids = data_5.index.tolist()

# Append 5 for each failed row
for i in ids:
    data.at[i, 'Test Case Failed'] += ', 5'


# Failed if column 'Test Case Failed' is not empty, Passed if empty
# (.copy() avoids SettingWithCopyWarning on the assignments below)
failed = data[data['Test Case Failed'] != ''].copy()
passed = data[data['Test Case Failed'] == ''].copy()

# Strip the leading ", " separator left by the += above
failed['Test Case Failed'] = failed['Test Case Failed'].str.lstrip(', ')
failed = failed[failed['Test Case Failed'] != '']

# Clean up
del failed["ID"]
del passed["ID"]

print(failed['Test Case Failed'].value_counts())

# Print summary to console
print(f"There was a total of {data.shape[0]} rows: "
      f"{data.shape[0] - failed.shape[0]} rows passed and "
      f"{failed.shape[0]} rows failed at least one test case.")

# output two files
failed.to_csv("C:/Users/Failed.csv", index = False)
passed.to_csv("C:/Users/Passed.csv", index = False)

What is the best approach to check for duplicates, keep the most updated fields, drop the outdated fields/rows, and then run my tests?

asked Nov 15 '25 15:11 by feelsgood
1 Answer

First, set up a mapping that associates each update-date column with its corresponding value columns.

date2val = {"AddrChange": ["Address"], "NameChanged": ["MnFist", "MnLast"], ...}

Then, convert the date columns to datetime format so they can be compared (argmax is used on them later).

for key in date2val.keys():
    failed[key] = pd.to_datetime(failed[key])

Then, group the duplicates by ID (since ID is the value that decides whether a row is a duplicate). For each date column, find the row with the maximum value in the group (i.e., the most recent update) and look up which columns to take from it via the initial mapping. The last row of each group is updated with those values and kept as the final, corrected result (by appending it to the corrected list).

corrected = list()
for _, grp in failed.groupby("ID"):
    for key in date2val.keys():
        recent = grp[key].argmax()
        for col in date2val[key]:
            # Positional .iloc assignment so the write lands on grp itself;
            # grp.iloc[-1][col] = ... is chained indexing and is silently lost
            grp.iloc[-1, grp.columns.get_loc(col)] = grp.iloc[recent][col]
    corrected.append(grp.iloc[-1])
corrected = pd.DataFrame(corrected)
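Applied to the two duplicate rows from the question, the approach can be sketched end to end. This is a self-contained reconstruction with the column set trimmed to the fields involved; positional `.iloc` assignment is used so the write actually sticks:

```python
import pandas as pd

# Hypothetical reconstruction of the two duplicate rows from the question
failed = pd.DataFrame({
    "ID": [123, 123],
    "MnLast": ["Doe", "Doe"],
    "MnFist": ["John", "Dan"],
    "Address": ["123 place", "789 road"],
    "NameChanged": ["05/01/2022", "11/01/2022"],
    "AddrChange": ["11/22/2022", "05/06/2022"],
})
date2val = {"AddrChange": ["Address"], "NameChanged": ["MnFist", "MnLast"]}

# Convert date columns so argmax can compare them
for key in date2val:
    failed[key] = pd.to_datetime(failed[key])

corrected = []
for _, grp in failed.groupby("ID"):
    for key, cols in date2val.items():
        recent = grp[key].argmax()  # position of the most recent update
        for col in cols:
            # Positional assignment so the write lands on grp (no chained indexing)
            grp.iloc[-1, grp.columns.get_loc(col)] = grp.iloc[recent][col]
    corrected.append(grp.iloc[-1])
corrected = pd.DataFrame(corrected)

print(corrected[["ID", "MnFist", "Address"]])
# One row per ID, combining the most recent name ("Dan", NameChanged 11/01)
# with the most recent address ("123 place", AddrChange 11/22)
```

The deduplicated `corrected` frame can then be fed back through the test cases in place of the duplicated rows.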
answered Nov 18 '25 04:11 by meh
