I am trying to add new DataFrame columns as simple calculations on existing DataFrames, but when I run the script I get warnings from Pandas. Here is the main code:
data_join['Ele_total'] = data_ele.sum(axis=1)
data_join['PV_total'] = data_pv.sum(axis=1)
data_join['SC'] = np.where(data_join['PV_total']>data_join['Ele_total'], data_join['Ele_total'], data_join['PV_total'])
data_join['SC%'] = np.where(data_join['PV_total']!= 0,round((data_join['SC']/data_join['PV_total'])*100,0),0)
data_join['SS%'] = np.where(data_join['Ele_total']!= 0,round((data_join['SC']/data_join['Ele_total'])*100,0),0)
data_join['LOLP'] = data_join['Ele_total']>data_join['PV_total']
data_join['E_tg'] = data_join['PV_total']-data_join['SC']
data_join['E_fg'] = data_join['Ele_total']-data_join['SC']
data_join['Ei'] = data_join['E_tg']-data_join['E_fg']
data_join['NGIP'] = data_join['Ei'].abs()<(GRID_LIM*n_build)
data_join['PAL'] = data_join['Ei'].abs()>(PEAK_LIM*n_build)
data_join['CO2'] = data_CO2['GWP']
data_join['CO2_net'] = data_CO2['GWP']*data_join['SC']
data_join['CO2_tot'] = data_CO2['GWP']*(data_join['E_tg']+data_join['SC'])
cash_flow = 0
npv = []
data_join_npv = pd.DataFrame()
for i in range(0, 25):
    if i == 0:
        data_join_npv['PV_total_res_{}'.format(i)] = data_join_res['PV_total']
        data_join_npv['PV_total_ind_{}'.format(i)] = data_join_ind['PV_total']
    else:
        data_join_npv['PV_total_res_{}'.format(i)] = data_join_npv['PV_total_res_{}'.format(i-1)]*(1-d)
        data_join_npv['PV_total_ind_{}'.format(i)] = data_join_npv['PV_total_ind_{}'.format(i-1)]*(1-d)
    data_join_npv['SC_res_{}'.format(i)] = np.where(data_join_npv['PV_total_res_{}'.format(i)]>data_join_res['Ele_total'], data_join_res['Ele_total'], data_join_npv['PV_total_res_{}'.format(i)])
    data_join_npv['SC_ind_{}'.format(i)] = np.where(data_join_npv['PV_total_ind_{}'.format(i)]>data_join_ind['Ele_total'], data_join_ind['Ele_total'], data_join_npv['PV_total_ind_{}'.format(i)])
    data_join_npv['E_tg_res_{}'.format(i)] = data_join_npv['PV_total_res_{}'.format(i)]-data_join_npv['SC_res_{}'.format(i)]
    data_join_npv['E_tg_ind_{}'.format(i)] = data_join_npv['PV_total_ind_{}'.format(i)]-data_join_npv['SC_ind_{}'.format(i)]
    data_join_npv['E_fg_res_{}'.format(i)] = data_join_res['Ele_total']-data_join_npv['SC_res_{}'.format(i)]
    data_join_npv['E_fg_ind_{}'.format(i)] = data_join_ind['Ele_total']-data_join_npv['SC_ind_{}'.format(i)]
    cash = (float(data_join_npv['SC_res_{}'.format(i)].sum())*COST_OF_ENERGY_RES
            + float(data_join_npv['E_tg_res_{}'.format(i)].sum())*VALUE_OF_ENERGY
            - float(data_join_npv['E_fg_res_{}'.format(i)].sum())*COST_OF_ENERGY_RES
            + float(data_join_npv['SC_ind_{}'.format(i)].sum())*COST_OF_ENERGY_IND
            + float(data_join_npv['E_tg_ind_{}'.format(i)].sum())*VALUE_OF_ENERGY
            - float(data_join_npv['E_fg_ind_{}'.format(i)].sum())*COST_OF_ENERGY_IND
            - OM_COST*total_pv)
    cash_flow += cash/((1+DISC_RATE)**(i+1))
    npv.append(-in_inv+cash_flow)
These are the warnings that I get:
C:\Users\Giacomo\Desktop\150\insert_data.py:342: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
data_join_npv['E_tg_res_{}'.format(i)] = data_join_npv['PV_total_res_{}'.format(i)]-data_join_npv['SC_res_{}'.format(i)]
C:\Users\Giacomo\Desktop\150\insert_data.py:343: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
data_join_npv['E_tg_ind_{}'.format(i)] = data_join_npv['PV_total_ind_{}'.format(i)]-data_join_npv['SC_ind_{}'.format(i)]
C:\Users\Giacomo\Desktop\150\insert_data.py:344: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
data_join_npv['E_fg_res_{}'.format(i)] = data_join_res['Ele_total']-data_join_npv['SC_res_{}'.format(i)]
C:\Users\Giacomo\Desktop\150\insert_data.py:345: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
data_join_npv['E_fg_ind_{}'.format(i)] = data_join_ind['Ele_total']-data_join_npv['SC_ind_{}'.format(i)]
C:\Users\Giacomo\Desktop\150\insert_data.py:337: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
data_join_npv['PV_total_res_{}'.format(i)] = data_join_npv['PV_total_res_{}'.format(i-1)]*(1-d)
C:\Users\Giacomo\Desktop\150\insert_data.py:338: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
data_join_npv['PV_total_ind_{}'.format(i)] = data_join_npv['PV_total_ind_{}'.format(i-1)]*(1-d)
C:\Users\Giacomo\Desktop\150\insert_data.py:340: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
data_join_npv['SC_res_{}'.format(i)] = np.where(data_join_npv['PV_total_res_{}'.format(i)]>data_join_res['Ele_total'], data_join_res['Ele_total'], data_join_npv['PV_total_res_{}'.format(i)])
C:\Users\Giacomo\Desktop\150\insert_data.py:341: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
data_join_npv['SC_ind_{}'.format(i)] = np.where(data_join_npv['PV_total_ind_{}'.format(i)]>data_join_ind['Ele_total'], data_join_ind['Ele_total'], data_join_npv['PV_total_ind_{}'.format(i)])
C:\Users\Giacomo\Desktop\150\insert_data.py:342: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
I don't call frame.insert() anywhere, as the warning suggests, so I don't understand why I get this fragmentation warning. The results are correct, but since I have to run the code many times inside an optimizer, I suspect the flood of warnings is stopping the optimizer at some point during the analysis, and I would like to resolve them.
You are getting these warnings because you repeatedly insert columns into your DataFrame data_join_npv one at a time (each column assignment on a DataFrame goes through frame.insert under the hood, which is why the warning mentions it). Collecting the columns first and concatenating them all at once, after and outside the for-loop, is much more efficient memory-wise.
For instance, running this toy code:
import pandas as pd

df = pd.DataFrame({f"col{i}": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] for i in range(1_000)})
new_df = pd.DataFrame()
for i in range(1_000):  # insert one thousand columns, one at a time
    new_df[f"new_df_col{i}"] = df[f"col{i}"] + i
print(new_df)
You will get this output:
PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
  new_df[f"new_df_col{i}"] = df[f"col{i}"] + i
new_df_col0 new_df_col1 new_df_col2 ... new_df_col997 new_df_col998 new_df_col999
0 0 1 2 ... 997 998 999
1 1 2 3 ... 998 999 1000
2 2 3 4 ... 999 1000 1001
3 3 4 5 ... 1000 1001 1002
4 4 5 6 ... 1001 1002 1003
5 5 6 7 ... 1002 1003 1004
6 6 7 8 ... 1003 1004 1005
7 7 8 9 ... 1004 1005 1006
8 8 9 10 ... 1005 1006 1007
9 9 10 11 ... 1006 1007 1008
[10 rows x 1000 columns]
Whereas, if you instead collect the columns in a plain dictionary and build the DataFrame once with pd.concat:
import pandas as pd

df = pd.DataFrame({f"col{i}": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] for i in range(1_000)})
data = {}
for i in range(1_000):
    data[f"new_col{i}"] = df[f"col{i}"] + i
new_df = pd.concat(data.values(), axis=1, ignore_index=True)
new_df.columns = data.keys()  # since Python 3.7, dicts preserve insertion order
print(new_df)
You will get the same dataframe without any warning:
new_col0 new_col1 new_col2 new_col3 ... new_col996 new_col997 new_col998 new_col999
0 0 1 2 3 ... 996 997 998 999
1 1 2 3 4 ... 997 998 999 1000
2 2 3 4 5 ... 998 999 1000 1001
3 3 4 5 6 ... 999 1000 1001 1002
4 4 5 6 7 ... 1000 1001 1002 1003
5 5 6 7 8 ... 1001 1002 1003 1004
6 6 7 8 9 ... 1002 1003 1004 1005
7 7 8 9 10 ... 1003 1004 1005 1006
8 8 9 10 11 ... 1004 1005 1006 1007
9 9 10 11 12 ... 1005 1006 1007 1008
[10 rows x 1000 columns]
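An equivalent shortcut, assuming every collected value is a Series sharing the same index, is to pass the dictionary straight to the pd.DataFrame constructor, which allocates the frame in one shot:

```python
import pandas as pd

df = pd.DataFrame({f"col{i}": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] for i in range(1_000)})

# Build all columns first, then construct the DataFrame once.
data = {f"new_col{i}": df[f"col{i}"] + i for i in range(1_000)}
new_df = pd.DataFrame(data)  # single construction, no fragmentation warning
print(new_df.shape)
```

This also spares you the separate `new_df.columns = data.keys()` step, since the dict keys become the column names directly.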
So, try refactoring your code like this:
cash_flow = 0
npv = []
data_join_npv = {}  # a dict instead of pd.DataFrame()
for i in range(0, 25):  # loop body unchanged
    ...
df = pd.concat(data_join_npv.values(), axis=1, ignore_index=True)
df.columns = data_join_npv.keys()
One caveat: np.where returns a NumPy array, not a Series, so wrap those results in pd.Series(..., index=...) before storing them in the dict, otherwise pd.concat will not accept them.
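As a self-contained sketch of that refactor (with made-up placeholder data for data_join_res and the degradation rate d, and only the residential columns, to keep it short):

```python
import numpy as np
import pandas as pd

d = 0.005  # placeholder annual degradation rate
data_join_res = pd.DataFrame({"PV_total": [5.0, 3.0], "Ele_total": [4.0, 6.0]})

cols = {}  # collect every column here instead of inserting into a DataFrame
for i in range(25):
    if i == 0:
        cols[f"PV_total_res_{i}"] = data_join_res["PV_total"]
    else:
        cols[f"PV_total_res_{i}"] = cols[f"PV_total_res_{i-1}"] * (1 - d)
    # np.where returns a NumPy array, so wrap it in a Series to keep the index
    cols[f"SC_res_{i}"] = pd.Series(
        np.where(cols[f"PV_total_res_{i}"] > data_join_res["Ele_total"],
                 data_join_res["Ele_total"], cols[f"PV_total_res_{i}"]),
        index=data_join_res.index,
    )

# One concat after the loop: dict keys become the column names
data_join_npv = pd.concat(cols, axis=1)
print(data_join_npv.shape)
```

Passing the dict itself to pd.concat (rather than `cols.values()` plus a separate column assignment) uses the keys as column labels in one step, so the single concat is the only place a DataFrame is built.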