What can we learn about clouds and their representation in models from the synergy of radar and lidar observations? Robin Hogan, Julien Delanoë, Nicky Chalmers, Thorwald Stein, Nicola Pounder, Anthony Illingworth.



1 What can we learn about clouds and their representation in models from the synergy of radar and lidar observations?
Robin Hogan, Julien Delanoë, Nicky Chalmers, Thorwald Stein, Nicola Pounder, Anthony Illingworth
University of Reading
Thanks to Richard Forbes, Steve Woolnough, Alessandro Battaglia, Doug Parker

5 CloudSat and CALIPSO sensitivity
In July 2006, cloud occurrence in the sub-zero troposphere was 13.3%
- The fraction observed by radar was 65.9%
- The fraction observed by lidar was 65.0%
- The fraction observed by both was 31.0%
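These fractions also determine, by inclusion-exclusion, the fraction of cloud detected by at least one of the two instruments; a quick check using the slide's numbers:

```python
# Fractions of sub-zero-troposphere cloud (July 2006) detected by each
# instrument, taken from the slide above (in percent).
radar = 65.9   # seen by the CloudSat radar
lidar = 65.0   # seen by the CALIPSO lidar
both = 31.0    # seen by both instruments

# Inclusion-exclusion: fraction seen by at least one instrument
either = radar + lidar - both
print(f"Detected by radar or lidar: {either:.1f}%")
```

So almost all of the cloud (99.9%) is seen by at least one of the two instruments, even though each alone sees only about two thirds of it.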

6 Ingredients of a variational retrieval
Aim: to retrieve an optimal estimate of the properties of clouds, aerosols and precipitation by combining these measurements
To make use of integral constraints, the components must be retrieved together
For each ray of data, define an observation vector y:
- Radar reflectivity values
- Lidar backscatter values
- Infrared radiances
- Shortwave radiances
- Surface radar echo (provides the two-way attenuation)
Define a state vector x of properties to be retrieved:
- Ice cloud extinction, number concentration and lidar-ratio profile
- Liquid water content profile and number concentration
- Rain rate profile and number concentration
- Aerosol extinction coefficient profile and lidar ratio
Forward model H(x) to predict the observations:
- Microphysical component: particle scattering properties
- Radiative transfer component
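The assembly of y and x for one ray can be sketched as follows; this is a toy illustration only, with hypothetical field names, sizes and values, not the operational code (which is in C++):

```python
import numpy as np

# Hypothetical assembly of the observation vector y and state vector x for
# one ray of data, following the lists above. Sizes and values are
# illustrative placeholders, not real instrument data.
n_gates = 4

y = np.concatenate([
    np.full(n_gates, -20.0),   # radar reflectivity at each gate (dBZ)
    np.full(n_gates, 1e-6),    # lidar backscatter at each gate (m-1 sr-1)
    np.array([260.0, 255.0]),  # infrared radiances (two channels)
    np.array([0.4]),           # shortwave radiance
    np.array([45.0]),          # surface radar echo (two-way attenuation)
])

x = np.concatenate([
    np.full(n_gates, 1e-4),    # ice extinction coefficient profile (m-1)
    np.full(n_gates, 20.0),    # ice lidar extinction-to-backscatter ratio (sr)
    np.full(n_gates, 0.1),     # liquid water content profile (g m-3)
])

print(y.size, x.size)  # 12 observations, 12 state elements in this toy setup
```

Stacking all observation types into a single vector is what lets integral constraints such as the surface echo inform the whole retrieved profile at once.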

7 The cost function
The essence of the method is to find the state vector x that minimizes a cost function:
- Each observation yi is weighted by the inverse of its error variance
- The forward model H(x) predicts the observations from the state vector x
- Some elements of x are constrained by a prior estimate
- A smoothness constraint can be added to penalize curvature in the retrieved profile
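The equation itself was a figure on the original slide; a standard variational form consistent with the three terms described above (observation misfit, prior constraint, curvature penalty, with symbols following the usual convention) is:

```latex
J(\mathbf{x}) = \sum_i \frac{\left[y_i - H_i(\mathbf{x})\right]^2}{\sigma_i^2}
  + (\mathbf{x}-\mathbf{x}_a)^{\mathrm{T}}\,\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_a)
  + \lambda \sum_j \left(\frac{\partial^2 x}{\partial z^2}\right)_j^2
```

Here x_a is the prior state with error covariance B, sigma_i is the error standard deviation of observation y_i, and lambda sets the strength of the smoothness (curvature) penalty.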

8 Retrieval framework
1. New ray of data: define the state vector
   Use the classification to specify variables describing each species at each gate:
   - Ice: extinction coefficient, N0', lidar extinction-to-backscatter ratio
   - Liquid: liquid water content and number concentration
   - Rain: rain rate and normalized number concentration
   - Aerosol: extinction coefficient, particle size and lidar ratio
2. Convert the state vector to radar-lidar resolution
   Often the state vector will contain a low-resolution description of the profile
3. Forward model
   3a. Radar model, including surface return and multiple scattering
   3b. Lidar model, including HSRL channels and multiple scattering
   3c. Radiance model: solar and IR channels
4. Compare to observations; check for convergence
5. If not converged: convert the Jacobian/adjoint to state-vector resolution
   Initially it will be at the radar-lidar resolution
6. Iteration method: derive a new state vector and return to step 2
   Either a Gauss-Newton or a quasi-Newton scheme
7. Once converged: calculate the retrieval error
   Error covariances and averaging kernel; then proceed to the next ray of data
(Legend on the original slide: ingredients developed before / in progress / not yet developed)

25 Radiative transfer forward models
Infrared radiances: Delanoë and Hogan (2008) model; currently testing RTTOV (widely used, can do microwave, has an adjoint)
Solar radiances: currently testing LIDORT
Radar and lidar:
- The simplest model is single scattering with attenuation: β' = β exp(−2τ), where τ is the one-way optical depth to the gate
- The problem from space is multiple scattering: it contains extra information on cloud properties (particularly optical depth), but no one has previously been able to rigorously make use of data subject to pulse stretching
- Use a combination of the fast "Photon Variance-Covariance" method and the "Time-Dependent Two-Stream" method
- Adjoints for these models have recently been coded
- A forward model for lidar depolarization is in progress
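The single-scattering model with attenuation can be sketched in a few lines; this is a minimal illustration of the β' = β exp(−2τ) relation with made-up profile values, not the operational forward model:

```python
import numpy as np

def attenuated_backscatter(beta, alpha, dz):
    """Single-scattering lidar forward model: beta' = beta * exp(-2 * tau),
    where tau is the one-way optical depth accumulated down to each gate."""
    # Cumulative one-way optical depth to the top of each gate
    # (exclude the gate's own contribution)
    tau = np.cumsum(alpha * dz) - alpha * dz
    return beta * np.exp(-2.0 * tau)

# Toy profile: uniform extinction and backscatter over 5 gates of 100 m
alpha = np.full(5, 1e-3)   # extinction coefficient (m-1)
beta = np.full(5, 1e-5)    # unattenuated backscatter (m-1 sr-1)
print(attenuated_backscatter(beta, alpha, dz=100.0))
```

The first gate is unattenuated, and each subsequent gate is reduced by the two-way transmittance of everything above it; multiple scattering from space adds a return beyond this simple decay, which is what the PVC and TDTS methods model.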

28 Time-dependent two-stream approximation
Describe the diffuse flux in terms of an outgoing stream I+ and an incoming stream I−, and numerically integrate a pair of coupled PDEs
These can be discretized quite simply in time and space (no implicit methods or matrix inversion required)
The terms of the equations:
- Time derivative: remove this and we recover the time-independent two-stream approximation
- Spatial derivative: transport of radiation from upstream
- Loss by absorption or scattering: some of the lost radiation will enter the other stream
- Gain by scattering: radiation scattered from the other stream
- Source: scattering from the quasi-direct beam into each of the streams
Hogan and Battaglia (J. Atmos. Sci., 2008)
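The equations were shown as a figure on the original slide; schematically, matching the term-by-term description above (see Hogan and Battaglia 2008 for the exact coefficients), they take the form:

```latex
\frac{1}{c}\frac{\partial I^{\pm}}{\partial t}
  \pm \frac{\partial I^{\pm}}{\partial z}
  = -\gamma_1 I^{\pm} + \gamma_2 I^{\mp} + S^{\pm}
```

The 1/c time-derivative term carries the pulse-stretching information; dropping it recovers the time-independent two-stream approximation. The γ1 term is the loss by absorption or scattering, the γ2 term is the gain by scattering from the other stream, and S± is the source from the quasi-direct beam.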

31 Results for a sine profile
Simulated test with 200-m sinusoidal structure in extinction
- With one FOV, only the first 2 optical depths are retrieved
- With three FOVs, the structure of the extinction profile is retrieved down to 6 optical depths
- Beyond that the information is smeared out
Nicola Pounder

32 Optical depth from multiple-FOV lidar
Despite the vertical smearing of information, the total optical depth can be retrieved to ~30 optical depths
The limit is closer to 3 for a single narrow field-of-view lidar
Nicola Pounder

35 Unified algorithm: progress
Done:
- A functioning algorithm framework exists
- C++: object orientation allows the code to be completely flexible; observations can be added and removed without needing to keep track of matrix indices, so the same code can be applied to different observing systems
- Code to generate particle scattering libraries in NetCDF files
- Adjoint of the radar and lidar forward models, with multiple scattering and HSRL/Raman support
- Interface to the L-BFGS quasi-Newton algorithm in the GNU Scientific Library
In progress / future work:
- Implement full ice, liquid, aerosol and rain constituents
- Estimate and report the error in the solution and the averaging kernel
- Interface to radiance models
- Test on a range of ground-based, airborne and spaceborne instruments, particularly the A-Train and EarthCARE satellites

36 Outlook
Use of radiances in the retrieval should make the retrieved profiles consistent with broadband fluxes (this can be tested with the A-Train and EarthCARE)
EarthCARE will take this a step further:
- Use the imager to construct a 3D cloud field beneath the satellite
- Use 3D radiative transfer to test consistency with broadband radiances looking at the cloud field in three directions (overcoming the earlier 3D problem)
How can we use these retrievals to improve weather forecasts?
- Assimilate cloud products, or radar and lidar observations directly?
- Assimilation experiments are being carried out by ECMWF
- It is still an open problem how to assimilate clouds such that the dynamics and thermodynamics of the model are modified to be consistent with the presence of the cloud
How can we use these retrievals to improve climate models?
- We will have retrieved global cloud fields consistent with radiation
- So we can diagnose in detail not only which aspects of clouds are wrong in models, but also the radiative error associated with each error in the representation of clouds

39 Minimizing the cost function
Gauss-Newton method:
- Uses the gradient of the cost function (a vector) and its second derivative (the Hessian matrix)
- Rapid convergence (instant for linear problems)
- Get the solution error covariance "for free" at the end
- Levenberg-Marquardt is a small modification to ensure convergence
- Needs the Jacobian matrix H of every forward model: this can be expensive for larger problems, as the forward model may need to be rerun with each element of the state vector perturbed
Gradient-descent methods:
- A fast adjoint method to calculate ∇xJ means the Jacobian need not be calculated
- Disadvantage: more iterations are needed, since we don't know the curvature of J(x)
- A quasi-Newton method gives the search direction (e.g. L-BFGS, used by ECMWF): it builds up an approximate inverse Hessian A for improved convergence
- Scales well for large x
- Poorer estimate of the error at the end
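As a concrete illustration, the Gauss-Newton iteration for a toy scalar problem with one observation and a prior can be written in a few lines; the forward model, values and error variances here are invented for illustration, not taken from the retrieval described above:

```python
# Toy variational problem: retrieve scalar state x from one observation
# y = H(x), with forward model H(x) = x**2 and a Gaussian prior x_a.
y, sigma_y = 4.0, 0.1          # observation and its error std. dev.
x_a, sigma_a = 1.5, 1.0        # prior estimate and its error std. dev.

def H(x):  return x**2         # forward model
def Hp(x): return 2.0 * x      # its Jacobian (a scalar derivative here)

x = x_a                        # start the iteration from the prior
for _ in range(20):
    # Gradient and Gauss-Newton Hessian of the cost function
    # J(x) = (y - H(x))**2 / sigma_y**2 + (x - x_a)**2 / sigma_a**2
    grad = -2 * Hp(x) * (y - H(x)) / sigma_y**2 + 2 * (x - x_a) / sigma_a**2
    hess = 2 * Hp(x)**2 / sigma_y**2 + 2 / sigma_a**2
    step = -grad / hess        # Newton step with the approximate Hessian
    x += step
    if abs(step) < 1e-10:      # converged
        break

print(f"Retrieved x = {x:.4f}")
```

Because the observation error (0.1) is much smaller than the prior error (1.0), the solution sits very close to the observation-only answer sqrt(4) = 2, pulled only slightly toward the prior; the inverse of the final Hessian is the "for free" error covariance mentioned above.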