Solving Captchas with Simulated GAN

Sep 4, 2017

With Simulated+Unsupervised (S+U) learning, breaking captchas has never been easier. There is no need to label any captchas manually for the convnet. By using a captcha synthesizer and a refiner trained with a GAN, it's feasible to generate synthesized training pairs for captcha classification.

Decompression

Each challenge file is actually a JSON object containing 1000 base64-encoded JPEG images. So for each of these challenge files, we decode each base64 string into a JPEG and put it under a separate folder.
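Here is a minimal decompression sketch. It assumes each challenge file looks like {"images": ["<base64 jpg>", ...]}; the file name and the JSON key are hypothetical.

import base64
import json
import os

# hypothetical challenge file name and key
with open('challenge_0.json') as f:
    challenge = json.load(f)

out_dir = 'challenge_0'
if not os.path.exists(out_dir):
    os.makedirs(out_dir)

for i, b64_str in enumerate(challenge['images']):
    with open(os.path.join(out_dir, '{}.jpg'.format(i)), 'wb') as f:
        f.write(base64.b64decode(b64_str))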

Convert to black and white

Compared to RGB, a binarized image saves significant compute. Here we hardcode a threshold and iterate over each pixel to obtain a binary image.

from PIL import Image

def gray(img_path):
    # convert to grayscale, then binarize
    img = Image.open(img_path).convert("L")
    img = img.point(lambda x: 255 if x > 200 or x == 0 else x)  # value found through T&E
    img = img.point(lambda x: 0 if x < 255 else 255, "1")
    img.save(img_path)

for img_path in IMG_FNAMES:
    gray(img_path)

im = Image.open(example_image_path)
im

Find mask

As you may have noticed, all the captchas share the same horizontal lines. Since this was a contest, the noise pattern was a function of the participant's username. In the real world, this kind of noise can be filtered out using morphological transformations with OpenCV.
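A sketch of the OpenCV route (not what I used here), assuming the binarized images from the previous step, i.e. dark glyphs and lines on a white background:

import cv2

img = cv2.imread('captcha.jpg', cv2.IMREAD_GRAYSCALE)
inv = cv2.bitwise_not(img)  # make the ink the (white) foreground
# opening with a 1x3 vertical kernel erodes structures under 3 px tall,
# i.e. thin horizontal lines, while taller character strokes survive
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 3))
opened = cv2.morphologyEx(inv, cv2.MORPH_OPEN, kernel)
cleaned = cv2.bitwise_not(opened)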

We will extract and save the lines (the noise) for later use. Here we average all 20,000 captchas and threshold the result as above. Another method is to use a bit mask (&=) to iteratively filter out the surrounding black pixels, i.e.

# assuming ink pixels are True, ANDing keeps only the pixels that are inked
# in every captcha, i.e. the shared lines
mask = np.ones((height, width), dtype=bool)  # bool dtype: &= is undefined for float
for im in ims:
    mask &= im

The effectiveness of the bit mask depends on how clean the binarized data is. With the averaging method, some error is tolerated.
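A sketch of the averaging method, assuming HEIGHT and WIDTH as defined for the models later and the binarized IMG_FNAMES from above; the darkness threshold is, once again, found through trial and error:

import numpy as np
from PIL import Image

acc = np.zeros((HEIGHT, WIDTH), dtype=np.float64)
for img_path in IMG_FNAMES:
    acc += np.asarray(Image.open(img_path).convert("L"), dtype=np.float64)
mean = acc / len(IMG_FNAMES)

# pixels that are dark in (nearly) every captcha belong to the shared lines
mask = mean < 30  # hypothetical threshold
np.save('mask.npy', mask)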

def get_image_batch(generator):
    """keras generators may generate an incomplete batch for the last batch"""
    img_batch = generator.next()
    if len(img_batch) != BATCH_SIZE:
        img_batch = generator.next()
    assert len(img_batch) == BATCH_SIZE
    return img_batch

What happened next?

Plug all the data in an MNIST-like classifier and call it a day. Unfortunately, it’s not that simple.

I actually spent a long time fine-tuning the network, but accuracy plateaued around 55% (estimated by sampling). The passing requirement was 10,000 correct out of 15,000 submitted: about 66% accuracy per captcha, or roughly 90% per character. I was facing a dilemma: tune the model even further, or manually label x captchas:

0.55 * (15000 - x) + x = 10000
0.45x = 1750
x ≈ 3889

Obviously I am not going to label 4000 captchas and break my neck in the process.

Meanwhile, there happened to be a burnt-out guy who had decided to label all 10,000 captchas. This diligent dude was 2,000 in. I asked if he was willing to collaborate on a solution. It's almost like he didn't want to label captchas anymore: he agreed immediately.

Using the same model, accuracy immediately shot up to 95% and we both qualified for HackMIT.


After the contest, I perfected the model and got 95% without labelling a single image. Here is the model for SimGAN:

Model Definition

There are three components to the network:

Refiner

The refiner network, Rθ, is a residual network (ResNet). It modifies the synthetic image on a pixel level, rather than holistically modifying the image content, preserving the global structure and annotations.

Discriminator

The discriminator network, Dφ, is a simple ConvNet that contains 5 conv layers and a max-pooling layer. It's a binary classifier that outputs whether a captcha is synthesized or real.

Combined

Pipe the refined image into the discriminator.

import tensorflow as tf
from keras import layers, models, optimizers

def refiner_network(input_image_tensor):
    """
    :param input_image_tensor: Input tensor that corresponds to a synthetic image.
    :return: Output tensor that corresponds to a refined synthetic image.
    """
    def resnet_block(input_features, nb_features=64, nb_kernel_rows=3, nb_kernel_cols=3):
        """
        A ResNet block with two `nb_kernel_rows` x `nb_kernel_cols` convolutional layers,
        each with `nb_features` feature maps.
        See Figure 6 in https://arxiv.org/pdf/1612.07828v1.pdf.

        :param input_features: Input tensor to ResNet block.
        :return: Output tensor from ResNet block.
        """
        y = layers.Convolution2D(nb_features, nb_kernel_rows, nb_kernel_cols, border_mode='same')(input_features)
        y = layers.Activation('relu')(y)
        y = layers.Convolution2D(nb_features, nb_kernel_rows, nb_kernel_cols, border_mode='same')(y)
        y = layers.merge([input_features, y], mode='sum')
        return layers.Activation('relu')(y)

    # an input image of size w × h is convolved with 3 × 3 filters that output 64 feature maps
    x = layers.Convolution2D(64, 3, 3, border_mode='same', activation='relu')(input_image_tensor)

    # the output is passed through 4 ResNet blocks
    for _ in range(4):
        x = resnet_block(x)

    # the output of the last ResNet block is passed to a 1 × 1 convolutional layer producing
    # 1 feature map corresponding to the refined synthetic image
    return layers.Convolution2D(1, 1, 1, border_mode='same', activation='tanh')(x)

def discriminator_network(input_image_tensor):
    """
    :param input_image_tensor: Input tensor corresponding to an image, either real or refined.
    :return: Output tensor that corresponds to the probability of whether an image is real or refined.
    """
    x = layers.Convolution2D(96, 3, 3, border_mode='same', subsample=(2, 2), activation='relu')(input_image_tensor)
    x = layers.Convolution2D(64, 3, 3, border_mode='same', subsample=(2, 2), activation='relu')(x)
    x = layers.MaxPooling2D(pool_size=(3, 3), border_mode='same', strides=(1, 1))(x)
    x = layers.Convolution2D(32, 3, 3, border_mode='same', subsample=(1, 1), activation='relu')(x)
    x = layers.Convolution2D(32, 1, 1, border_mode='same', subsample=(1, 1), activation='relu')(x)
    x = layers.Convolution2D(2, 1, 1, border_mode='same', subsample=(1, 1), activation='relu')(x)

    # here one feature map corresponds to `is_real` and the other to `is_refined`,
    # and the custom loss function is then `tf.nn.sparse_softmax_cross_entropy_with_logits`
    return layers.Reshape((-1, 2))(x)

# Refiner
synthetic_image_tensor = layers.Input(shape=(HEIGHT, WIDTH, 1))
refined_image_tensor = refiner_network(synthetic_image_tensor)
refiner_model = models.Model(input=synthetic_image_tensor, output=refined_image_tensor, name='refiner')

# Discriminator
refined_or_real_image_tensor = layers.Input(shape=(HEIGHT, WIDTH, 1))
discriminator_output = discriminator_network(refined_or_real_image_tensor)
discriminator_model = models.Model(input=refined_or_real_image_tensor, output=discriminator_output,
                                   name='discriminator')

# Combined
refiner_model_output = refiner_model(synthetic_image_tensor)
combined_output = discriminator_model(refiner_model_output)
combined_model = models.Model(input=synthetic_image_tensor, output=[refiner_model_output, combined_output],
                              name='combined')

def self_regularization_loss(y_true, y_pred):
    delta = 0.0001  # FIXME: need to figure out an appropriate value for this
    return tf.multiply(delta, tf.reduce_sum(tf.abs(y_pred - y_true)))

# define custom local adversarial loss (softmax for each image section) for the discriminator
# the adversarial loss function is the sum of the cross-entropy losses over the local patches
def local_adversarial_loss(y_true, y_pred):
    # y_true and y_pred have shape (batch_size, # of local patches, 2), but really we just want to average
    # over the local patches and batch size so we can reshape to (batch_size * # of local patches, 2)
    y_true = tf.reshape(y_true, (-1, 2))
    y_pred = tf.reshape(y_pred, (-1, 2))
    loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
    return tf.reduce_mean(loss)

# compile models
BATCH_SIZE = 512
sgd = optimizers.RMSprop()

refiner_model.compile(optimizer=sgd, loss=self_regularization_loss)
discriminator_model.compile(optimizer=sgd, loss=local_adversarial_loss)
discriminator_model.trainable = False
combined_model.compile(optimizer=sgd, loss=[self_regularization_loss, local_adversarial_loss])

Pre-training

It is not necessary to pre-train GANs, but pre-training seems to make them converge faster. Here we pre-train both models. The refiner is pre-trained on the identity mapping: it learns to reproduce its input. The discriminator is pre-trained on correctly labeled real and synthetic pairs.

# the target labels for the cross-entropy loss layer are 0 for every yj (real) and 1 for every xi (refined)
y_real = np.array([[[1.0, 0.0]] * discriminator_model.output_shape[1]] * BATCH_SIZE)
y_refined = np.array([[[0.0, 1.0]] * discriminator_model.output_shape[1]] * BATCH_SIZE)

assert y_real.shape == (BATCH_SIZE, discriminator_model.output_shape[1], 2)
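The pre-training loops themselves aren't shown above; a minimal sketch, where the step counts are placeholders and the generators are defined in the sketch after the training loop below:

# pre-train the refiner on the identity mapping: its output should match its input
for _ in range(100):  # placeholder step count
    synthetic_image_batch = get_image_batch(synth_generator())
    refiner_model.train_on_batch(synthetic_image_batch, synthetic_image_batch)

# pre-train the discriminator on correctly labeled real and refined batches
for _ in range(100):  # placeholder step count
    real_image_batch = get_image_batch(real_generator)
    discriminator_model.train_on_batch(real_image_batch, y_real)

    synthetic_image_batch = get_image_batch(synth_generator())
    refined_image_batch = refiner_model.predict_on_batch(synthetic_image_batch)
    discriminator_model.train_on_batch(refined_image_batch, y_refined)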

Training

This is the most important training step: we refine a synthesized captcha, pass it through the discriminator, and backprop the gradients.

import os

import numpy as np

from image_history_buffer import ImageHistoryBuffer

# LOG_INTERVAL and MODEL_DIR are assumed to be defined elsewhere
k_d = 1  # number of discriminator updates per step
k_g = 2  # number of generative network updates per step
nb_steps = 1000

# TODO: what is an appropriate size for the image history buffer?
image_history_buffer = ImageHistoryBuffer((0, HEIGHT, WIDTH, 1), BATCH_SIZE * 100, BATCH_SIZE)

combined_loss = np.zeros(shape=len(combined_model.metrics_names))
disc_loss_real = np.zeros(shape=len(discriminator_model.metrics_names))
disc_loss_refined = np.zeros(shape=len(discriminator_model.metrics_names))

# see Algorithm 1 in https://arxiv.org/pdf/1612.07828v1.pdf
for i in range(nb_steps):
    print('Step: {} of {}.'.format(i, nb_steps))

    # train the refiner
    for _ in range(k_g * 2):
        # sample a mini-batch of synthetic images
        synthetic_image_batch = get_image_batch(synth_generator())

        # update θ by taking an SGD step on mini-batch loss LR(θ)
        combined_loss = np.add(combined_model.train_on_batch(synthetic_image_batch,
                                                             [synthetic_image_batch, y_real]),
                               combined_loss)

    for _ in range(k_d):
        # sample a mini-batch of synthetic and real images
        synthetic_image_batch = get_image_batch(synth_generator())
        real_image_batch = get_image_batch(real_generator)

        # refine the synthetic images w/ the current refiner
        refined_image_batch = refiner_model.predict_on_batch(synthetic_image_batch)

        # use a history of refined images
        half_batch_from_image_history = image_history_buffer.get_from_image_history_buffer()
        image_history_buffer.add_to_image_history_buffer(refined_image_batch)
        if len(half_batch_from_image_history):
            refined_image_batch[:BATCH_SIZE // 2] = half_batch_from_image_history

        # update φ by taking an SGD step on mini-batch loss LD(φ)
        disc_loss_real = np.add(discriminator_model.train_on_batch(real_image_batch, y_real),
                                disc_loss_real)
        disc_loss_refined = np.add(discriminator_model.train_on_batch(refined_image_batch, y_refined),
                                   disc_loss_refined)

    if not i % LOG_INTERVAL:
        # log loss summary
        print('Refiner model loss: {}.'.format(combined_loss / (LOG_INTERVAL * k_g * 2)))
        print('Discriminator model loss real: {}.'.format(disc_loss_real / (LOG_INTERVAL * k_d * 2)))
        print('Discriminator model loss refined: {}.'.format(disc_loss_refined / (LOG_INTERVAL * k_d * 2)))

        combined_loss = np.zeros(shape=len(combined_model.metrics_names))
        disc_loss_real = np.zeros(shape=len(discriminator_model.metrics_names))
        disc_loss_refined = np.zeros(shape=len(discriminator_model.metrics_names))

        # save model checkpoints
        model_checkpoint_base_name = os.path.join(MODEL_DIR, '{}_model_step_{}.h5')
        refiner_model.save(model_checkpoint_base_name.format('refiner', i))
        discriminator_model.save(model_checkpoint_base_name.format('discriminator', i))
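The loop above leans on synth_generator and real_generator, which aren't defined in these snippets. A sketch of one way to build them with Keras; the directory names are hypothetical, and flow_from_directory expects one level of subfolders (e.g. synth/captchas/*.jpg):

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)

synth_iter = datagen.flow_from_directory('synth/', target_size=(HEIGHT, WIDTH),
                                         color_mode='grayscale', class_mode=None,
                                         batch_size=BATCH_SIZE)
real_generator = datagen.flow_from_directory('real/', target_size=(HEIGHT, WIDTH),
                                             color_mode='grayscale', class_mode=None,
                                             batch_size=BATCH_SIZE)

def synth_generator():
    # matches the call style used above: synth_generator() returns the iterator
    return synth_iter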

Results of SimGAN:

As you can see below, we no longer have the cookie-cutter fonts. There are quite a few artifacts that did not exist before refinement. The edges are blurred and noisy, which would be nearly impossible to simulate heuristically. And it is exactly these tiny things that render an MNIST-like convnet useless.