The Python GIL prevents Python bytecode from being executed in parallel; threads running Python code can only execute concurrently, taking turns on a single core. For the same amount of CPU-bound work, threading takes the same amount of time or more.
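A minimal sketch of this (the function and numbers are illustrative, not from the question): splitting pure-Python CPU work across four threads still runs the bytecode one thread at a time, so the wall-clock time is about the same as doing it sequentially, or slightly worse due to GIL hand-offs:

```python
import threading

def count_down(n, results):
    # Pure-Python CPU work; the GIL lets only one thread
    # execute this bytecode at any given instant.
    while n > 0:
        n -= 1
    results.append(n)  # list.append is atomic under the GIL

results = []
# Four threads each do a quarter of the work, but interleaved,
# not in parallel.
threads = [threading.Thread(target=count_down, args=(250_000, results))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```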

To add to the same data structure from multiple threads, you'd also have to add locking, slowing the threaded version down even further.
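As a sketch of that overhead (names invented for illustration): any compound check-then-add on a shared set is two operations, so it has to be wrapped in a lock, which serializes the threads on top of the GIL:

```python
import threading

shared = set()
lock = threading.Lock()

def add_items(items):
    for item in items:
        # "check then add" is not atomic, so it must be guarded
        # by a lock to stay consistent across threads.
        with lock:
            if item not in shared:
                shared.add(item)

threads = [threading.Thread(target=add_items, args=([1, 2, 3, 4],))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

While one thread holds the lock, the other two simply wait, so the "parallel" section effectively runs one thread at a time.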

Your code is slow because it wastes cycles: you recreate the set object on each iteration, only to discard it again. That soaks up all the time as proxies keeps growing, so in the end you have built a set for every intermediate length of proxies, from 1 element all the way up to 70k, doing a quadratic amount of work just to throw roughly 70k sets away.

You should produce the set just once. You can do so with a set comprehension:

    with open('proxy.txt') as f:
        proxies = {tuple(line.strip().split(':')) for line in f}
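Once built, the set gives average O(1) membership tests. A hypothetical usage sketch, with sample lines invented in place of the real proxy.txt:

```python
# Hypothetical lines as they might appear in proxy.txt
lines = ['1.2.3.4:8080\n', '5.6.7.8:3128\n']

# Same comprehension as above, built from any iterable of lines
proxies = {tuple(line.strip().split(':')) for line in lines}

# Membership checks are O(1) on average
('1.2.3.4', '8080') in proxies   # True
('9.9.9.9', '80') in proxies     # False
```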
