MemoryError when concatenating dataframes in Python

I have a large 680 MB csv file that I have to read into a dataframe.

I split the file into chunks and append those chunks to a list.

Then I try to build a single merged dataframe using pd.concat().

I am using the code below to do this:

    import pandas as pd

    temp_list = []
    chunksize = 10 ** 5
    for chunk in pd.read_csv('./data/properties_2016.csv', chunksize=chunksize, low_memory=False):
        temp_list.append(chunk)

    properties_df = temp_list[0]
    for df in temp_list[1:]:
        properties_df = pd.concat([properties_df, df], ignore_index=True)

I am trying to do this by running a Docker image.

I get the memory error below:

    Traceback (most recent call last):
      File "dataIngestion.py", line 53, in <module>
        properties_df = pd.concat([properties_df, df], ignore_index=True)
      File "/usr/local/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 206, in concat
        copy=copy)
      File "/usr/local/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 266, in __init__
        obj._consolidate(inplace=True)
      File "/usr/local/lib/python3.6/site-packages/pandas/core/generic.py", line 3156, in _consolidate
        self._consolidate_inplace()
      File "/usr/local/lib/python3.6/site-packages/pandas/core/generic.py", line 3138, in _consolidate_inplace
        self._protect_consolidate(f)
      File "/usr/local/lib/python3.6/site-packages/pandas/core/generic.py", line 3127, in _protect_consolidate
        result = f()
      File "/usr/local/lib/python3.6/site-packages/pandas/core/generic.py", line 3136, in f
        self._data = self._data.consolidate()
      File "/usr/local/lib/python3.6/site-packages/pandas/core/internals.py", line 3573, in consolidate
        bm._consolidate_inplace()
      File "/usr/local/lib/python3.6/site-packages/pandas/core/internals.py", line 3578, in _consolidate_inplace
        self.blocks = tuple(_consolidate(self.blocks))
      File "/usr/local/lib/python3.6/site-packages/pandas/core/internals.py", line 4525, in _consolidate
        _can_consolidate=_can_consolidate)
      File "/usr/local/lib/python3.6/site-packages/pandas/core/internals.py", line 4548, in _merge_blocks
        new_values = new_values[argsort]
    MemoryError

Please help!

Concatenating DataFrames doesn't work like that. I think this link will help.

Here is the right way to do it:

    import pandas as pd

    temp_list = []
    chunksize = 10 ** 5
    for chunk in pd.read_csv('./data/properties_2016.csv', chunksize=chunksize, low_memory=False):
        temp_list.append(chunk)

    frames = []
    for df in temp_list:
        frames.append(df)

    properties_df = pd.concat(frames, ignore_index=True)
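The difference matters because concatenating inside a loop copies everything accumulated so far on every pass, so peak memory grows roughly quadratically, while a single pd.concat over the whole list copies each chunk only once. A small self-contained sketch with synthetic data (standing in for the question's CSV chunks) showing the two approaches produce the same frame:

```python
import pandas as pd

# Ten synthetic "chunks" standing in for the CSV chunks.
chunks = [pd.DataFrame({"a": range(1000)}) for _ in range(10)]

# Quadratic pattern: every concat re-copies the accumulated frame.
acc = chunks[0]
for df in chunks[1:]:
    acc = pd.concat([acc, df], ignore_index=True)

# Linear pattern: one concat over the list copies each chunk once.
combined = pd.concat(chunks, ignore_index=True)

print(len(combined))  # 10000
print(acc.equals(combined))  # True
```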

I tried it with a small file and it worked. Please let me know if you still get the same error.
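As a side note, the intermediate list isn't strictly needed: the reader that pd.read_csv returns when chunksize= is set is itself iterable, so it can be passed straight to pd.concat. A minimal sketch using an in-memory CSV in place of the question's file:

```python
import io
import pandas as pd

# In-memory CSV standing in for the 680 MB file from the question.
csv_data = io.StringIO("a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(250)))

# The chunked reader is iterable, so no intermediate list is required.
reader = pd.read_csv(csv_data, chunksize=100)
properties_df = pd.concat(reader, ignore_index=True)

print(len(properties_df))  # 250
```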
