I have a large list which includes duplicate values, and I wish to subset a data frame using the list values. Usually I would use the .isin method, but I want to keep the duplicate rows. Here is some example code:
import pandas as pd
import numpy as np

df = pd.DataFrame(np.array([[1, 2, 'car'], [4, 5, 'bike'], [1, 2, 'train'], [1, 2, 'car'], [1, 2, 'train']]), columns=['a', 'b', 'c'])
lst = ['car', 'bike', 'car', 'car']
So I want to return a data frame that includes the matching rows each time they occur: every time an item appears in the list, the corresponding rows should be returned again.
On a simple dataset such as the one above I can loop through the list and append the returned rows to a new data frame, but on a large dataset this takes an extremely long time. Any suggestions?
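For reference, a minimal sketch of the loop-and-append approach described above might look like this (illustrative only; names such as df_slow are not from the original post):
pieces = []
for x in lst:
    # collect the rows whose 'c' matches the current list item;
    # duplicates in lst produce duplicate blocks of rows
    pieces.append(df[df['c'] == x])
df_slow = pd.concat(pieces, ignore_index=True)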
EDIT: Chris' suggestion works and provides the expected output using:
pd.concat([df[df['c'].eq(x)] for x in lst])
However, as with using a loop, this is extremely slow compared to something like the .isin method when working with much larger data. I added this edit so that the expected output can be reproduced.
IIUC, use pandas.concat with a list comprehension:
df_new = pd.concat([df[df['c'].eq(x)] for x in lst], ignore_index=True)
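For the sample df and lst above, this gives seven rows: the two 'car' rows repeated once per occurrence of 'car' in lst, plus the single 'bike' row (output shown approximately; note that the np.array constructor makes every column a string):
print(df_new)
#    a  b     c
# 0  1  2   car
# 1  1  2   car
# 2  4  5  bike
# 3  1  2   car
# 4  1  2   car
# 5  1  2   car
# 6  1  2   car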
An alternative approach is to create a helper Series with value_counts on your list, and first reduce the size of the original DataFrame by filtering with the .isin method:
s = pd.Series(lst).value_counts()  # how many times each value occurs in lst
df = df[df['c'].isin(set(lst))]    # keep only rows whose 'c' appears in lst
idx = np.concatenate([df[df['c'].eq(i)].index.repeat(r) for i, r in s.items()])
df_new = df.loc[idx]               # repeat each matching row as many times as its value occurs in lst
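Note that with this second approach the rows come back grouped by value (in value_counts order) and keep their original index, rather than following the order of lst. For the sample data, the repeated index would be:
print(idx)
# [0 0 0 3 3 3 1]  -> the two 'car' rows (index 0 and 3) repeated three times each, then the 'bike' row (index 1)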