Remote sensing imagery has been widely applied to many tasks such as earth resource investigation, natural disaster prediction, and environmental monitoring.

However, because of atmospheric conditions and changing cloud cover, many remote sensing images are partially occluded by clouds.

The cloud layer attenuates the ground-object information captured in the image, which negatively affects further analysis.

For meteorological analysis, studying the distribution of clouds makes it possible to identify extreme climate phenomena and characterize their patterns of change.

Cloud detection is regarded as the key to remote sensing image recognition, classification, and analysis. It is also an important basis for remote sensing image restoration.

Once cloud locations are obtained through cloud detection, the image can be improved by restoring the occluded area from the surrounding ground-object information.

The reconstruction of remote sensing ground information can save substantial satellite resources and avoid repeated image acquisition.

In addition, for areas covered only by thin cloud (haze), eliminating its influence on visualization and enhancing the contrast of ground objects is equally significant for improving the utilization of remote sensing images.

Although the above problems have been extensively studied in the remote sensing field, issues such as low accuracy and weak generalization ability remain.

With the rise of deep learning, researchers have begun applying these methods to remote sensing imagery.

Although some research results have been obtained, several key technical challenges remain.
Therefore, this article constructs deep-learning-based neural networks to address the difficulties of cloud segmentation and cloud removal in remote sensing image processing. The main work and contributions of this article are as follows:
1. This paper proposes a segmentation model based on a convolutional neural network and applies it to extracting cloud regions from remote sensing images.

In terms of remote sensing data, we build a dataset with training, validation, and testing splits by annotating a large number of cloud regions.

To reduce misclassification caused by blurred cloud boundaries, an edge detection branch is added to the model structure to strengthen the weight of cloud boundaries.

To address the training divergence caused by the diverse ground surfaces in remote sensing images, this paper proposes an easy-to-hard training strategy, which starts from simple samples and gradually increases sample difficulty.

In this way, difficult samples can be fitted more effectively.

Comparative experiments show that the proposed method converges faster and achieves higher cloud-region extraction accuracy.
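The boundary weighting and easy-to-hard ideas above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weighting factor `beta`, the boundary-fraction difficulty proxy, and the cumulative staging scheme are all assumptions chosen for clarity.

```python
import numpy as np

def boundary_map(mask):
    """Pixels where the binary cloud mask differs from its up/left
    neighbour, i.e. an approximate one-pixel-wide cloud boundary."""
    up = np.zeros_like(mask);   up[1:, :]   = mask[:-1, :]
    left = np.zeros_like(mask); left[:, 1:] = mask[:, :-1]
    return ((mask != up) | (mask != left)).astype(float)

def boundary_weighted_bce(prob, mask, beta=4.0, eps=1e-7):
    """Binary cross-entropy with extra weight on boundary pixels,
    mimicking an edge branch that strengthens cloud-boundary weights.
    `beta` is a hypothetical emphasis factor."""
    w = 1.0 + beta * boundary_map(mask)
    bce = -(mask * np.log(prob + eps) + (1 - mask) * np.log(1 - prob + eps))
    return np.sum(w * bce) / np.sum(w)

def easy_to_hard_stages(masks, n_stages=3):
    """Order samples by a difficulty proxy (boundary-pixel fraction) and
    return cumulative index lists: stage k trains on the easiest samples,
    with harder ones entering gradually in later stages."""
    scores = np.array([boundary_map(m).mean() for m in masks])
    order = np.argsort(scores)  # easiest (fewest boundary pixels) first
    return [order[: int(np.ceil(len(order) * k / n_stages))]
            for k in range(1, n_stages + 1)]
```

Under this loss, prediction errors on the blurred boundary cost more than errors in a cloud's interior, so gradient updates concentrate on exactly the pixels that cause misjudgement.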
2. This paper designs a network structure based on a generative adversarial model and applies it to cloud removal from remote sensing images.

By enhancing ground-object contrast and restoring covered areas, the utilization of remote sensing images can be improved.

In terms of data, this paper synthetically simulates cloud and haze over natural scenes, producing a dataset with training, validation, and testing splits.

To obtain better results, this paper exploits the contextual semantic information of ground features, while a discriminator evaluates the quality of the generated images. During training, the two components compete against and improve each other.

Experimental results show that the proposed method is more effective than an auto-encoder baseline.
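The adversarial setup above can be sketched as two competing objectives. This is only an illustrative sketch: the paper does not specify its loss, so a least-squares GAN formulation, an L1 reconstruction term weighted toward the cloud-covered region, and the balance weight `lam` are all assumptions.

```python
import numpy as np

def generator_loss(pred, target, cloud_mask, d_fake, lam=0.01):
    """Combined cloud-removal objective (hypothetical formulation):
    - L1 reconstruction, with cloud-covered pixels weighted double so the
      generator focuses on filling the occluded region from context;
    - a least-squares adversarial term pushing D(fake) toward 'real' (=1).
    `lam` is an assumed balance between the two terms."""
    covered = cloud_mask.astype(float)
    recon = np.mean(np.abs(pred - target) * (1.0 + covered))
    adv = np.mean((d_fake - 1.0) ** 2)
    return recon + lam * adv

def discriminator_loss(d_real, d_fake):
    """LSGAN-style discriminator objective: score real images toward 1
    and generated (cloud-removed) images toward 0."""
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)
```

Alternating minimization of these two losses is the "playing together" dynamic: the discriminator learns to spot restoration artifacts, and the generator learns to remove them.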