Rather than attempting to parse every line from the URL directly into rows of a CSV file, you can dump the whole response into a text file to clean up the formatting, and then read it back. It may seem like a bit more work, but this is generally my approach to comma-delimited data from a URL.
import requests

URL = "http://www.cftc.gov/dea/newcot/FinFutWk.txt"
r = requests.get(URL, stream=True)

with open('file.txt', 'w') as W:
    W.write(r.text)

with open('file.txt', 'r') as f:
    lines = f.readlines()

for line in lines:
    print(line.split(','))
You can take what is in that for loop and swap the print for an append, saving each list into a list of lists so you can actually use the rows rather than just print them.
content = []
for line in lines:
    content.append(line.split(','))
Also note that after splitting, some fields still carry quite a large amount of trailing white space. You could run through the entire list and remove all white space from every field, but that would ruin the first element of each row; alternatively, just convert the numeric values that carry the white space into actual integers, since they were read in as strings. That is your preference. If you have any questions, feel free to add a comment below.
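A minimal sketch of that clean-up, using made-up rows in place of the real file data: stripping only leading/trailing white space leaves the multi-word names intact, and fields that look like integers get converted.

```python
# Sample rows in the shape produced by line.split(',') above —
# the values here are illustrative, not real CFTC data.
content = [
    ['Example Market Name - Exchange', '123  ', '  -45', '2024-01-02  '],
]

cleaned = []
for row in content:
    converted = []
    for field in row:
        field = field.strip()  # drop leading/trailing white space only
        # Convert fields that look like (possibly negative) integers;
        # names, dates, and codes stay as strings.
        if field.lstrip('-').isdigit():
            converted.append(int(field))
        else:
            converted.append(field)
    cleaned.append(converted)
```

After this, `cleaned` holds one list per row with padded numbers turned into `int`s.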
EDIT 1:
On a side note, if you do not wish to keep the saved file, import the os library and, after you have read the lines into the lines list, remove the file.
import os
os.remove('file.txt')
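Putting the edit together with the earlier snippets, the write/read/remove round trip looks like this; the download is replaced with a placeholder string so the sketch stands on its own without hitting the URL.

```python
import os

# Placeholder standing in for r.text from the request above.
text = "row one,1,2\nrow two,3,4\n"

with open('file.txt', 'w') as w:
    w.write(text)

with open('file.txt', 'r') as f:
    lines = f.readlines()

# The file has served its purpose once the lines are in memory.
os.remove('file.txt')

content = [line.split(',') for line in lines]
```

Note that the trailing newline from each line survives the split in the last field, which is another reason to strip the fields afterwards.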