Hi Lars,
This helps. The best approach, then (at least for the initial load phase), is to break the data into chunks and load and merge incrementally, probably via a DS job that also triggers an in-memory delta merge at the end of the transaction.
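A minimal sketch of that flow, assuming a generic DB-API connection to HANA and a placeholder table name and chunk size (both hypothetical, for illustration only); the final statement uses HANA's MERGE DELTA syntax to fold the delta store into the main store once loading completes:

```python
from itertools import islice

def chunks(rows, size):
    """Yield successive lists of at most `size` rows from any iterable."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

def chunked_load(conn, rows, table, size=100_000):
    """Insert rows chunk by chunk, committing per chunk, then trigger one merge.

    `conn` is assumed to be a DB-API 2.0 connection (e.g. via hdbcli);
    `table` and the two-column INSERT are placeholders for illustration.
    """
    cur = conn.cursor()
    for batch in chunks(rows, size):
        cur.executemany(f"INSERT INTO {table} VALUES (?, ?)", batch)
        conn.commit()  # keep each transaction, and the delta store, small
    # Fold the accumulated delta into the main (compressed) column store.
    cur.execute(f"MERGE DELTA OF {table}")
```

Committing per chunk keeps the delta store from growing unbounded during the initial load, and deferring the merge to the end avoids repeated recompression.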
On your suggestion of larger machine/license:
Frankly, it's not really a concern today, as the current system has over 3 TB of free memory (though each node is only 512 GB). My analysis was that although the table was partitioned, it was hitting the cut-off limit on one of the nodes and hence erroring out.
So I think that if the nodes were 1 TB each, we would not have faced this issue, as the required memory would have been available on one node or another. Please correct my analysis if I'm wrong.
Regards,
Rahul