NWP basics

Numerical Weather Prediction (NWP) is performed by solving mathematical equations which describe the behaviour of the atmosphere. The principal equations involved, and the purposes for which they are evaluated, are:-

  • Three equations of motion: under Newtonian laws, the acceleration of a particle is proportional to the forces acting upon it. These equations handle the 'balance' between the pressure gradient force (horizontal & vertical), the Coriolis 'force' (primarily horizontal) & the acceleration due to gravity (principally in the vertical). Fundamental to the analysis & forecast of wind flow, hence the advection of air, convergence / divergence, vorticity development, vertical motion (in concert with the continuity equation) etc.
  • Continuity equation: a defined 'parcel' of air retains the same mass, regardless of how it moves about or how it is deformed - 'conservation of mass'. Using this statement, a link is established between vertical and horizontal convergence. Used to derive vertical motion (not measured directly) from fields of horizontal motion ('the wind'), and hence key to describing the formation / dispersal of cloud, precipitation etc.; indirectly involved in the diagnosis of development due to vorticity changes.
  • Thermodynamic (or heat) equation: relates heat transformation due to atmospheric processes to 'parcel' volume & internal energy changes as air ascends / descends - the 'bicycle pump' analogy. Required to describe the various adiabatic processes mathematically, and key to understanding the degree of stability of an air mass.
  • Moisture equation: in similar fashion to the continuity equation (above), this describes the fact that moisture is not gained or lost, but can change phase (with consequent latent heat exchange) or be diffused via mixing actions (i.e. exchange with water / land surfaces). Describes the changes in the humidity content of an air mass (hence dew point, vapour pressure & indirectly the wet-bulb variants), which in turn are used implicitly to handle cloud / precipitation processes in the 'model' atmosphere.
  • Equation of state: relates the density, pressure & temperature of a 'parcel' of perfect gas; the atmosphere is not a perfect gas, but can be assumed to behave as one. Used to describe mathematically the changes in these fundamental properties of a parcel as it moves vertically and, in concert with the thermodynamic equation, its behaviour under adiabatic transformation.
  • Hydrostatic equation: in the vertical, there are two (main) forces which are roughly in balance: the upward-directed force ("buoyancy") due to the pressure gradient [ pressure falls with height ] & the downward force due to gravity. Any disturbance of this balance following atmospheric development leads to vertical accelerations. Describes the relationship between height & pressure (the 'barometric equation'), is used in the algorithms to diagnose atmospheric stability, and is also important when considering the explicit modelling of small-scale vertical motion.
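
For reference, these equations can be written compactly in one common textbook form (v = wind vector, ρ = density, p = pressure, T = temperature, q = humidity mixing ratio, Ω = the Earth's rotation vector, g = gravity, F = friction, Q = diabatic heating, S = moisture sources / sinks):

    \begin{aligned}
    \frac{D\mathbf{v}}{Dt} &= -\frac{1}{\rho}\nabla p - 2\,\boldsymbol{\Omega}\times\mathbf{v} + \mathbf{g} + \mathbf{F} && \text{(motion)}\\
    \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) &= 0 && \text{(continuity)}\\
    c_p\,\frac{DT}{Dt} - \frac{1}{\rho}\,\frac{Dp}{Dt} &= Q && \text{(thermodynamic)}\\
    \frac{Dq}{Dt} &= S && \text{(moisture)}\\
    p &= \rho R T && \text{(state)}\\
    \frac{\partial p}{\partial z} &= -\rho g && \text{(hydrostatic)}
    \end{aligned}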

And the variables usually 'carried' within the model schemes are:-

  • Three components of wind velocity: two horizontal & one vertical;
  • Air density;
  • Temperature (or Potential temperature);
  • Pressure;
  • Humidity (usually in terms of the humidity mixing ratio).
    [ In addition, some models (certainly small-scale models) will also 'hold' an explicit representation of liquid water and ice fraction, surface wetness / character (i.e. ice/snow cover), vegetation development etc. ]

We can't possibly analyse each field at the level of detail the real atmosphere exhibits: it is difficult enough to describe the broader (or synoptic) scale motions. This means that instabilities are 'built-in' from the point of analysis, and if not 'damped' mathematically these can feed through to the ensuing forecast in an unstable fashion. This used to be a common problem with early NWP, but is a rare occurrence now.

All analyses are approximate (human or model); the key is obtaining the most consistent such starting point. NWP 'runs' don't start with a 'blank chart': they integrate short-period forecasts from a previous solution (either the 'main' run or a special 'update' calculation), and by careful injection of new data a "best solution" starting point (the initialisation stage) is achieved.
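
In its simplest schematic form, that 'careful injection of new data' amounts to a weighted correction of the previous short-period forecast:

    x_a = x_b + K ( y - H x_b )

where x_b is the 'background' (the short-period forecast), y the new observations, H the operator that converts model values into the observed quantities, K the weight given to the data, and x_a the resulting analysis. Real assimilation schemes are vastly more elaborate, but this is the essential idea.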

There are broadly two types of atmospheric computer model in use around the globe:

In a grid-point model, the variables above are held at the intersections of a regular matrix, or at fixed points between the intersections, and it is at these points that the equations outlined above are applied. Finite-difference calculations are used to solve the equations, i.e. the solution for each variable is 'nudged' forward by a small interval of time (a "time-step") at each iteration. A grid-point model (such as the UKMO Global) is usually defined in terms of the number of vertical levels and the (approximate) horizontal grid-length.
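
To make the 'time-step' idea concrete, here is a minimal sketch (in Python, with invented numbers) of about the simplest finite-difference calculation possible: nudging a 1-D temperature field downwind by one step using the 'upwind' scheme. Real model numerics are far more sophisticated; this is illustrative only.

    # A minimal sketch of one finite-difference 'time-step': 1-D upwind
    # advection of a temperature field by a constant (positive) wind.
    def advect_upwind(field, wind, dx, dt):
        """Nudge a 1-D gridded field forward one time-step.
        Stability requires wind * dt / dx <= 1 (the CFL criterion)."""
        c = wind * dt / dx                 # Courant number
        new = field[:]                     # copy; boundary point left fixed
        for i in range(1, len(field)):
            # each grid point is updated from its upstream neighbour
            new[i] = field[i] - c * (field[i] - field[i - 1])
        return new

    # one 25-minute step on a toy field: 50 km grid-length, 10 m/s wind
    temps = [280.0] * 10 + [290.0] * 10
    temps = advect_upwind(temps, wind=10.0, dx=50_000.0, dt=1_500.0)
    print(temps[9:12])                     # the temperature 'front' has moved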

In a spectral model, the variables are represented by periodic functions which are the sum of several waves, the calculations being performed via methods such as Fourier series, spherical harmonics etc. The fields are therefore represented by wave functions of differing wavelengths. To take a 'real world' example, it is possible to model the behaviour of a cork bobbing on the surface of a river by considering the individual energy strands carried within that flow, then integrating them at one point - the cork. Spectral models (such as the GFS, NGP & EC) are regarded as computationally more efficient, and the solutions are available for every point on the globe, rather than tied to a regular grid array. These models are usually defined by the number of vertical levels and the wave-number truncation: thus T382L64 would indicate 'triangular' truncation at wave-number 382, with 64 levels in the vertical. To convert the wave-number 'horizontal resolution' to an approximate grid-length, divide 360 by the wave-number, divide by 3 (it takes 3 grid-points to define a wave), then multiply by 111.1 km (per degree of latitude).
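
As a worked example of that rule of thumb (a tiny Python helper applying exactly the formula above):

    # The wave-number to grid-length rule of thumb from the paragraph above.
    def truncation_to_grid_km(wavenumber):
        """Approximate grid-length (km): 360 / wave-number, divided by 3
        (three grid-points to define a wave), times 111.1 km per degree."""
        return 360.0 / wavenumber / 3.0 * 111.1

    print(round(truncation_to_grid_km(382)))   # T382 -> roughly 35 km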

Grid spacing (or wave numbers) will determine the computational resources required - the shorter the grid-length (or the more waves represented), the greater the amount of calculating power needed; also remember that the calculations are performed both horizontally and vertically. Vertical levels are not evenly spaced: there are more levels near the ground (the boundary layer) & around the tropopause, to capture the often high degree of variability in these altitude bands - but even then, there are never enough!
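
One simple way to see the idea of uneven level spacing is to stretch a uniform coordinate so that levels crowd towards the ground. The sketch below is only a cartoon (operational models use hybrid sigma-pressure schemes and also pack extra levels around the tropopause); all the numbers are invented:

    # Cartoon of uneven vertical spacing: crowd levels towards the surface by
    # raising a uniform 0..1 coordinate to a power. Purely schematic.
    def stretched_levels(n_levels, top_km=40.0, power=3.0):
        """Return n_levels heights in km, densest near the ground."""
        return [top_km * (i / (n_levels - 1)) ** power for i in range(n_levels)]

    print([round(z, 2) for z in stretched_levels(10)])
    # the spacing between successive levels grows steadily with height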

Some elements are not even attempted, nor can they be - e.g. individual CB / TS (cumulonimbus / thunderstorm) cells, turbulence, the fine detail of precipitation processes etc. To use computing time more efficiently, approximations and assumptions are employed [ parameterization, or parametrisation ], both in the application of the primary equations and in the mathematical methods used.
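
To give a feel for what a parameterization looks like, here is a deliberately crude sketch of 'convective adjustment': rather than modelling individual cumulonimbus, the scheme simply mixes any pair of levels whose lapse rate has become unstable. The threshold and profile below are invented for illustration.

    # A crude 'convective adjustment' parameterization: wherever the lapse
    # rate between adjacent levels exceeds the dry-adiabatic value, mix the
    # pair back towards that limit instead of modelling the convection itself.
    DRY_ADIABATIC = 0.0098                 # K per metre

    def convective_adjust(temps_k, dz_m):
        """temps_k: level temperatures from the ground up; dz_m: spacing (m)."""
        t = temps_k[:]
        for i in range(len(t) - 1):
            lapse = (t[i] - t[i + 1]) / dz_m
            if lapse > DRY_ADIABATIC:      # super-adiabatic: unstable
                mean = 0.5 * (t[i] + t[i + 1])
                t[i] = mean + 0.5 * DRY_ADIABATIC * dz_m
                t[i + 1] = mean - 0.5 * DRY_ADIABATIC * dz_m
        return t

    print(convective_adjust([300.0, 288.0, 280.0], dz_m=1000.0))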

The numerical simulation, both in its analysis and in its forecast output, can NEVER be a perfect representation of the real atmosphere - and it is difficult to conceive of a time when it might be. Users should be aware that small-scale processes (e.g. thunderstorms, sea-breeze circulations) cannot be explicitly handled in the modern generation of global models. However, these same important processes can have a significant effect on the broader-scale evolution; errors generated via the parameterization (or parametrisation) routines can have significant 'downstream' consequences.

The real atmosphere is highly complex: it is fluid flow performing in four dimensions, and numerical models can only approximate the processes involved. Some of the motions / processes (such as the micro-scale movement of the air itself) cannot be handled at all, and even fairly large 'blocks' of weather, such as individual thunderstorms or local turbulent flow, can only be 'parametrized' (or 'fudged', if you prefer).

But this introduces problems, much as when you 'nudge' a supermarket trolley: it is not always possible to predict the outcome! It is no wonder that the trough-disruption process, for example - which takes place through a large vertical slice of the atmosphere, and involves a skewing of the thermal profile at differing rates - is often poorly handled. And if the disruption is got wrong, everything following on will be in serious error.

You'll read old soaks like me banging on about how the basis of a good forecast is a good, detailed analysis. This adage applies particularly to NWP routines: a human can sometimes 'correct', or allow for, a poor or inadequate analysis through experience & 'gut instinct'; a computer will simply use the data it has and employ the routines written for it without mercy! Consider the analogy below:-

I sometimes think that predicting the various weather variables numerically is like trying to work out the EXACT time of arrival of a bus on a typical 'town-circular' route, just as it is about to leave the terminus (T). You would need to model a large number of variables, as below:-

To predict the time of arrival at stop 1 is fairly straightforward; the distance will only be a matter of a few hundred metres, and a knowledge of the average speed, instantaneous traffic flow etc. between T & 1 will yield a prediction within 60 seconds of the actual time, perhaps within 30 seconds. The short time-step, plus the restricted impact of 'external' variables, allows a fairly accurate prediction. [ In meteorology, this is analogous to a 'nowcast' routine, with results over half-an-hour to 6-hour periods. ]

To predict the time at stop 2 is complicated by several factors: we need to predict the number of passengers getting on / off at the previous stop & whether they tender cash, the exact fare, or some form of pre-pay card / concession; we also need to predict the traffic flow between 1 & 2, and make allowance for being held (or not) at the light-controlled junction. All these variables can be predicted in advance, but require a much greater amount of information than for the previous time-step. The accuracy of the prediction at stop 2 could drop dramatically, though in the majority of cases it will still be within 60-90 seconds. [ In meteorology, this simulates the problem of the short-range forecast - say the first 48 hours of a run. ]

Predicting the time of arrival at stop 6 (and later stops) to any sort of accuracy (before the bus has even left T, remember) is now fraught with difficulty. Passenger loading (on / off), ticket types / difficulties, road-traffic density, the timing of light-controlled crossings etc. all need to be 'built in' to this model, either explicitly (by monitoring actual traffic / passenger behaviour) or implicitly, by using 'notional' numbers based on previous experience (analogous to parametrisation). To achieve an accuracy of +/- 3 minutes for stops 6, 7 and 8 on a typical town-circular route on any given trip is very difficult. [ In meteorology, this typifies the problems of trying to forecast 5 or more days ahead. ]

So it is with the weather: a computer model of the atmosphere must have information about a wide variety of variables that might impact upon the outcome - and the longer down-stream the forecast, the more information & mathematical processing is required.

. . . and there is the added problem that whereas the bus tends not to modify its environment as it runs its course, in meteorology developments can / will significantly modify the broadscale pattern, which in turn affects the way the forecast 'pans out'!


So, NWP models are trying to predict an outcome given the information they have (which can be limited) & using the current understanding of atmospheric physics / thermodynamics (which is developing all the time); at short 'lead times', they will do well in the main, but further 'down stream', errors can be large.

To try to handle the uncertainty that is almost inevitable in modern-day computer output, the concept of the 'ensemble' is used: a control run is performed (same model physics as the 'main run', but at lower spatial & temporal resolution), then the initial field is 'perturbed' (or nudged) slightly and a series of further forecast runs is performed. Again, reference to a 'real-world' example might help. Imagine a ski slope, where the skier starts a downhill run from point 'A' and ends up at point 'D'. Each time he or she pushes off from the top, the starting point is very slightly different.

From almost the same starting point (the analysis), and using the same physical structure (the basic equations), we get a different result (the forecast). At the top of the slope (B), such deviations are of little consequence - this might represent the first 6 to 12 hours of a weather forecast; but at 'C' & 'D' the deviation becomes rather more significant. It may be that upper-height contours (or mean-sea-level pressure) are the parameters to be forecast: early model deviation at B, & perhaps C, would have little impact upon the final forecast, but at 'D' considerable variation can be found - which might make the difference between a roaring 'Storm Force 10' and just a good 'Force 6/7' blow.

In an ensemble prediction system (EPS), several 'runs' are made using the 'control' as the core 'driver', and the results can be clustered together into like patterns. The human analyst will have high confidence in a solution where a large number of ensemble members (the individual ski-slope runs) give a similar solution; confidence will be poor where the results are peppered across a wide span of outcomes.
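
The ski-slope picture can be put into (toy) code. Below is a sketch using the Lorenz (1963) equations - the classic 'toy atmosphere' of chaos theory - run from ten very slightly perturbed starting points: the spread between members stays tiny at short 'lead times' and blows up at long ones. All the numbers are invented for illustration.

    # The ski-slope idea in miniature: integrate the Lorenz (1963) 'toy
    # atmosphere' from ten slightly perturbed starts and watch the ensemble
    # spread grow with lead time. Illustrative only.
    import random

    def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
        """One crude (Euler) time-step of the Lorenz equations."""
        return (x + dt * s * (y - x),
                y + dt * (x * (r - z) - y),
                z + dt * (x * y - b * z))

    def run(start, n_steps):
        x, y, z = start
        for _ in range(n_steps):
            x, y, z = lorenz_step(x, y, z)
        return x                           # track a single variable

    members = [(1.0 + random.uniform(-1e-3, 1e-3), 1.0, 1.0) for _ in range(10)]

    for steps in (100, 500, 2000):         # short, medium & long 'lead times'
        xs = [run(m, steps) for m in members]
        print(f"steps={steps:5d}  ensemble spread={max(xs) - min(xs):8.4f}")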


The ECMWF ensemble prediction system (EPS) is regarded as one of the best in the world. It uses 51 members in its scheme: one 'control' run (at half the horizontal resolution & two-thirds the vertical resolution of the main operational run) & 50 members with analyses slightly perturbed from the control. Unfortunately, the full output isn't freely available on the Internet!


One question that is often asked is: if all models use the same data, why do they come up with different results?

There are so many variables involved in atmospheric modelling that I'm surprised when models do agree! Some of the factors that distinguish one centre's output from another are:

  • Most importantly, the initial condition (the 'analysis') on which the subsequent forecast calculations are based may be assessed in different ways. This is much as if two forecasters, faced with the same set of observations, produced slightly different analyses - the location of a front, the depth of a low, the sharpness of a trough etc.
  • The various centres will (usually) have available all the same raw data, but what they do with it may be different; they may not assimilate all the data, or may be selective about which datasets are used - this applies particularly to satellite-derived data.
  • A different 'weighting' may be applied to data: for example, an isolated ship in a data-sparse area might have greater influence in one centre's scheme than another.
  • The methods used in numerical analysis are now highly complex and, as has been noted elsewhere, data are not just used at the 'primary' data times (00, 06, 12 & 18 UTC) but asynoptically as well; the methods of doing this differ from centre to centre, hence different analyses on any one occasion, and therefore a different forecast outcome.
  • Then of course there are different grid lengths (or wave-numbers for spectral models), and variation in the number of vertical levels used: these latter will also be spaced differently.
  • It is impossible for every nuance of the world's orography (depth / shape of valleys, height profiles of hills & mountains etc.) to be 'carried' within a numerical simulation: even coastlines & islands must be smoothed - think of the ins & outs of the Norwegian fjords or the clusters of small islands in the Aegean. So each centre will have a slightly different representation of the physical earth - it may not matter when output are averaged over lengthy periods, but for an individual situation, the interaction between land surface and the atmosphere may be crucial.
  • Some models are 'tuned' to produce results at higher definition for specific purposes - e.g. the tropical models, GCM's used for climate simulation work or mesoscale models used for short-range, high spatial resolution work.
  • The time-steps (and the mathematical procedures used) will differ from model to model, even if the actual equations [ see above ] are the same.
  • The way some processes are parameterised will be different - some may have an explicit (and 'near-real-time') representation of the surface type, whilst others will use climatology, or have a mixture of methods.
  • Models will have different ways of coping with 'polar convergence' - the physical distance for a given degree of longitude will vary from equator to pole - not only at the surface but aloft as well. Some centres will skew the grid they use to make sure that the primary area of interest is over them, so 'notional' poles will result which may affect, even if only slightly, the forecast over the domain further away.
  • Another variable that is not always obvious is that modellers may add 'fudge' factors (or correction factors) to offset known model biases. For example, no matter how many levels are squeezed in at the top of the troposphere / lower stratosphere, capturing the 'top-strength' of a jetstream (comparatively shallow vertically) is difficult. Once identified, a percentage correction may be added to the output at the post-processing stage, but in some situations this may not give an appropriate result.

All centres will have available the same raw data, but what they do with it will be different - hence the large variations on occasion, particularly at the longer lead-times.



For a lot more on numerical modelling - and in particular the way the UK national weather provider goes about it - visit the Met Office web site.