
Contents

Chapter 1 – Introduction
1.1 What is WRF?
1.2 What is meant by Mesoscale?
1.3 What are the output variables of WRF?
1.4 System Information
1.5 Document Key

Chapter 2 – Installation of WRF
2.1 Checking the compilers and installing
2.2 System Environment Tests
2.3 Required Libraries
2.4 Installation of Libraries
2.5 Library Compatibility Test
2.6 Installing WRF
2.7 Installing WPS
2.8 Configuration of Static Geographic Data

Chapter 3 – Running the WRF Model
3.1 Geogrid
3.2 Ungrib
3.3 Metgrid

Chapter 4 – Configuring the Runtime Parameters
4.1 Configuring the namelist.wps file
4.1.1 Section A
4.1.2 Section B
4.1.3 Section C
4.1.4 Section D
4.1.5 Section E
4.2 Configuring the namelist.input file
4.2.1 Section A
4.2.2 Section B
4.3 Downloading Meteorological Data
4.3.1 Global Forecasting System (GFS)

Chapter 1 – Introduction

1.1 What is WRF?


The Weather Research and Forecasting (WRF) model is a mesoscale numerical
weather prediction system designed for both atmospheric research and operational
weather forecasting.

It supports two main modes of operation:

a. Real cases – Initial and boundary conditions for the model are supplied in the form
of real meteorological data (GFS, FNL, etc.)
● Used for real-time weather forecasting (up to 16 days if GFS data is used).
● Analysis of historical meteorological data archives.
● Research related to climatic changes and trends.

b. Ideal cases – Initial and boundary conditions for the model are supplied by the
user.
● Used for simulation of mesoscale atmospheric processes, e.g. sea breezes
and squall lines.

1.2 What is meant by Mesoscale?


Meteorological phenomena occur over a wide range of space and time scales. The
mesoscale is one such range, covering meteorological behavior at horizontal scales of
roughly 2 – 2000 km. The importance of mesoscale weather prediction is that the effect
of baroclinic instability is not dominant, unlike in synoptic-scale weather prediction.
(Synoptic scale – horizontal scales greater than mesoscale.)

1.3 What are the output variables of WRF?


There are 55 surface variables and 5 three-dimensional variables that can be predicted
by the WRF model. Some of them are given below.

a) Precipitation
b) Surface pressure
c) Mean sea level pressure
d) 2-meter specific humidity
e) Daily 10-meter wind speed
f) Sunshine duration
g) Upward latent heat flux at surface
h) Surface evaporation
i) Soil frozen water content
j) Total soil moisture content
k) Surface runoff
l) Snow depth

1.4 System Information

The following system specifications were used in installing WRF V3.8. The minimum
system requirements depend on the availability of resources and the expected
processing load in WRF. It is therefore always recommended to allocate generous
system resources for WRF, since running the model consumes a lot of memory.

Processor: Intel Core i7
Graphics: Gallium 0.4 on NV117
Memory: 16 GB
OS: Ubuntu 16.04 LTS

1.5 Document Key


The following fonts and formatting are used throughout the document for ease of
understanding.

Main headings

Command Lines

*Important notes

Chapter 2 – Installation of WRF

2.1 Checking the compilers and installing.

1_The following compilers must be available to compile WRF, so the first step is to
check whether they already exist on the system:
gcc
gfortran
cpp

Check for each of them using the which command, for example:

which gfortran

2_If the command outputs a path similar to the one given below, the compiler is
already available on the computer.

/usr/bin/gfortran
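To check all three compilers at once, a small shell loop such as the one below can be
used (a sketch; it simply reports any compiler that is not found):

for comp in gcc gfortran cpp; do
    which $comp > /dev/null || echo "$comp is missing"
done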

3_If any of the three components are not available, download and install them using
the following commands. (You must be connected to the Internet and have root
privileges.)

sudo apt-get update


sudo apt-get upgrade
sudo apt-get install gfortran (install any missing packages in this way)

4_Check the installed version by using the command

gfortran --version

The output should be something similar to this, but it may change with newer versions:

GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609


Copyright (C) 2015 Free Software Foundation, Inc.

GNU Fortran comes with NO WARRANTY, to the extent permitted by law.


You may redistribute copies of GNU Fortran
under the terms of the GNU General Public License.

2.2 System Environment Tests

1_Create a new directory for WRF Tests in the home directory using the following
command

mkdir TEST

2_Go inside the directory using the following command

cd TEST

3_From inside the TEST directory, download the test files using the following
command line
wget http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/Fortran_C_tests.tar

4_Unpack the tar file by using the following command

tar -xf Fortran_C_tests.tar

Test 1 – Fixed Format Fortran Test

1_Type the following in the command prompt one after the other

gfortran TEST_1_fortran_only_fixed.f

./a.out

2_If the test is successful, the following message should be displayed on the screen.

SUCCESS test 1 fortran only fixed format

Test 2 – Free Format Fortran Test

1_Type the following in the command prompt one after the other

gfortran TEST_2_fortran_only_free.f90

./a.out

2_If the test is successful, the following message should be displayed on the screen.

Assume Fortran 2003: has FLUSH, ALLOCATABLE, derived type, and ISO C
Binding
SUCCESS test 2 fortran only free format

Test 3 – C Test

1_Type the following in the command prompt one after the other

gcc TEST_3_c_only.c

./a.out

2_If the test is successful, the following message should be displayed on the screen

SUCCESS test 3 c only

Test 4 – Fortran Calling a C Function Test

1_Type the following in the command prompt one after the other

gcc -c -m64 TEST_4_fortran+c_c.c

gfortran -c -m64 TEST_4_fortran+c_f.f90

gfortran -m64 TEST_4_fortran+c_f.o TEST_4_fortran+c_c.o

./a.out

2_If the test is successful, the following message should be displayed on the screen.

C function called by Fortran


Values are xx = 2.00 and ii = 1
SUCCESS test 4 fortran calling c

*Before proceeding to the next tests, it is required to check whether perl, csh and sh are available
in the system. The following steps explain how to check for their availability and to install them if
they are not available.

1_The perl and sh interpreters are included by default in Ubuntu 16.04 LTS (csh
usually is not), but the availability of all three can be checked using the following
commands.

which perl
which csh
which sh

The output should display the path where each interpreter is installed.

For example:
/usr/bin/perl

2_If perl or csh is missing, use the following commands to download and install
them. (sh itself is provided by default on Ubuntu through dash, so no separate
package is needed.)

sudo apt-get install perl
sudo apt-get install csh

Test 5 – csh Test

1_Type the following in the command prompt.

./TEST_csh.csh

2_If the test is successful, the following message should be displayed on the screen.

SUCCESS csh test

Test 6 – perl Test

1_Type the following in the command prompt.

./TEST_perl.pl

2_If the test is successful, the following message should be displayed on the screen.

SUCCESS perl test

Test 7 – sh Test

1_Type the following in the command prompt.

./TEST_sh.sh

2_If the test is successful, the following message should be displayed on the screen.

SUCCESS sh test

2.3 Required Libraries

*Please note that this section only introduces the libraries required for the WRF model.
The links provided can be used to learn more about the libraries and to download them;
no installation step is involved here. In the next section (2.4 Installation of Libraries),
every step, including the downloads, is carried out on the command line.

a. NetCDF
This is the most important library which needs to be installed for WRF. NetCDF
(network Common Data Form) is a set of interfaces for array-oriented data access and
a freely distributed collection of data access libraries for C, Fortran, C++, Java, and
other languages. The netCDF libraries support a machine-independent format for
representing scientific data.

For more information about NetCDF -


http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#whatisit

Although many newer versions of NetCDF are available at present, it is recommended
to use NetCDF 4.1.3 for compiling WRF v3.8.

A link to download the recommended NetCDF version is given below.
http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/netcdf-4.1.3.tar.gz

b. MPICH
MPICH is a high-performance and widely portable implementation of the
MPI-3.1 standard from the Argonne National Laboratory. (MPI = Message Passing
Interface.) This application allows the user to select the number of cores that should be
allocated to a WRF task. When WRF is run using MPICH (explained in a later chapter),
the WRF job is distributed across the processors as individual tasks, which reduces the
overall runtime of the model. If you are not interested in running WRF in parallel, in
other words using more than one processor, this library need not be installed.

For more information about MPICH -


https://www.mpich.org/documentation/guides/

Although many newer versions of MPICH are available at present, it is recommended to
use MPICH 3.0.4 for compiling WRF v3.8.

A link to download the recommended version of MPICH is given below.
http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/mpich-3.0.4.tar.gz

c. JasPer
JasPer is a software toolkit for handling image data. The software provides
means for representing images and facilitates the manipulation of image data, as well
as the import/export of such data in numerous formats. This library is vital for the WPS
component of the model, since it supports the ungribbing of GRIB2 files.

For more information about JasPer​ ​-


https://www.ece.uvic.ca/~frodo/jasper/

A link to download the recommended version of JasPer for WRF is given below.
http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/jasper-1.900.1.tar.gz

d. libpng
libpng is the open-source reference library for reading, creating and
manipulating PNG (Portable Network Graphics) raster image files. For WRF, it supports
GRIB2 data handling when compiling the WPS component.

For further information about libpng​ ​-


https://libpng.sourceforge.io/

A link to download the recommended version of libpng for WRF is given below.
http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/libpng-1.2.50.tar.gz

e. zlib
zlib is a lossless data-compression library that can be used on almost any
hardware and operating system. It is vital for WRF because many datasets used by the
model come in compressed formats such as GRIB2, and it is especially important when
compiling the WPS component of the model.

For more information about zlib -


https://zlib.net/

Although many newer versions of zlib are available at present, it is recommended to
use zlib 1.2.7 for compiling WRF v3.8.

A link to download the recommended version of zlib is given below.
http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/zlib-1.2.7.tar.gz

2.4 Installation of Libraries

1- It is recommended to do the installation in the home directory, but you can
choose to compile WRF in a different location if required.

2- Open a terminal in the home directory. On a freshly installed Ubuntu 16.04,
open the file manager, go to the home directory, right-click and select "Open in
Terminal". This is the easiest way to open a terminal at a specific location. If you
are familiar with the Linux command prompt, you can instead open a terminal
window (hotkey Ctrl+Alt+T) and change to your home directory with the
command given below.

cd ~

3- Create a new folder for the installation of WRF in the home directory by
right-clicking and creating a new folder, or use the following command at the
command prompt

mkdir WRF

4- Copy all the tar files you have already downloaded from the Downloads
directory into this folder.
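For example, assuming the archives were saved to ~/Downloads (adjust the path if
yours differs):

cp ~/Downloads/*.tar.gz /home/WRF/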

NetCDF

1_Download the tar file using the following command

wget http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/netcdf-4.1.3.tar.gz

2_Untar the NetCDF tar file using the following command. (If wget saved the file with
a numeric suffix such as .tar.gz.1 because a copy already existed, adjust the filename
accordingly.)

tar xzvf netcdf-4.1.3.tar.gz

3_If the .gz extension is not present in your downloaded file, use the command

tar -xf netcdf-4.1.3.tar

4_To go inside the unpacked directory, use the command

cd netcdf-4.1.3

*Important note on changing directories at the Linux command prompt – It is easy to change
directories by typing cd followed by the folder name. To avoid typing mistakes and to make
sure you reach the correct directory, try pressing the Tab key after typing the first few letters of
the folder name. The shell will auto-complete the directory name, which also confirms that
such a folder actually exists and that your command is correct.

Before installing NetCDF, it is very important to follow the steps below to set the
environment variables.

If you followed the instructions above on a similar operating system (Ubuntu 16.04
LTS), you will be working in a bash (Bourne-again shell) session, which is the most
widely used default at present. If you are using another shell or operating system,
check which shell you have and adjust the syntax as appropriate. This manual shows
the bash syntax and the csh (C shell) equivalent, and also how to find out which shell
you are using.

5_Type the following command to check which shell you are currently using.

echo $0

6_If you want to check the default shell you use whenever you log in, type the
following command to display the default shell, along with its path, from the SHELL
environment variable.

echo $SHELL

7_Once you find out which shell you are using, you can proceed with the next steps
given below.

For bash shell (which is the default)

Type the following commands at the command prompt with a terminal opened inside
the unpacked NetCDF folder, which in this case is the netcdf-4.1.3 folder created by
the untar command. The prompt should display something close to this, depending on
your user and host names.

Example:
user@user-714-150l:~/WRF3.8/netcdf-4.1.3#

*If you are not the root user, the hash symbol will probably be replaced with a $
symbol. It is always recommended to have root privileges when compiling WRF.

8_Type the following commands

export CC=gcc
export CXX=g++
export FC=gfortran
export FCFLAGS=-m64
export F77=gfortran
export FFLAGS=-m64
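If you are working in csh instead, the equivalent commands use setenv; a sketch
mirroring the bash lines above:

setenv CC gcc
setenv CXX g++
setenv FC gfortran
setenv FCFLAGS -m64
setenv F77 gfortran
setenv FFLAGS -m64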

9_Configure NetCDF using the following command. These flags are very important,
so make sure the --disable options are typed correctly before running the
configuration.

./configure --disable-dap --disable-netcdf-4 --disable-shared

10_After configuring NetCDF, the next step is to build and install the library. The
'make' command is used, followed by the 'make check' command to check for any
errors in the compilation. Type the following commands in the given sequence; if
there is any error in the build of this library, it will be displayed on the screen while
'make check' executes.

make

make check

make install
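As an optional sanity check, assuming the default install prefix of /usr/local, verify
that the NetCDF utilities were installed:

which ncdump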

MPICH

1_Move back to the WRF directory using the following command.

cd ..

2_Download the tar file using the following command

wget http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/mpich-3.0.4.tar.gz

3_Untar the mpich-3.0.4.tar.gz file using the following command.

tar xzvf mpich-3.0.4.tar.gz

4_Change the working directory to inside the mpich directory using the following command.

cd mpich-3.0.4

5-Configure MPICH using the following command.

./configure

6_After the configuration of MPICH, install the library using the following three
commands in sequence.

make

make check

make install
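Optionally, verify that the MPICH compiler wrappers are now on your PATH
(assuming the default install prefix):

which mpicc mpif90 mpirun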

zlib

1_Move back to the WRF directory using the following command in the terminal
window.

cd ..

2_Download the tar file using the following command

wget http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/zlib-1.2.7.tar.gz

3_Untar the zlib-1.2.7 file using the following command.

tar xzvf zlib-1.2.7.tar.gz

4_Move inside the zlib directory using the following command.

cd zlib-1.2.7

5_Configure zlib using the following command.

./configure

6_After the configuration of zlib, install the library using the following commands in
sequence.

make

make check

make install

libpng

1_Move back to the WRF directory using the following command in the terminal
window.

cd ..

2_Download the tar file using the following command

wget http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/libpng-1.2.50.tar.gz

3_Untar the libpng-1.2.50 file using the following command

tar xzvf libpng-1.2.50.tar.gz

4_Move inside the libpng directory using the following command.

cd libpng-1.2.50

5_Configure libpng using the following command.

./configure

6_After the configuration of libpng, install the library using the following commands in
sequence.

make

make check

make install

7_While installing the libraries, and before configuring each one, it is advisable to
list the files inside the library directory using the following command

ls

Jasper

1_Move back to the WRF directory using the following command in the terminal
window.

cd ..

2_Download the tar file using the following command

wget http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/jasper-1.900.1.tar.gz

3-Untar the jasper-1.900.1 file using the following command

tar xzvf jasper-1.900.1.tar.gz

4_Move inside the jasper directory using the following command.

cd jasper-1.900.1

5_Configure Jasper using the following command.

./configure

6_After the configuration of Jasper, install the library using the following commands in
sequence.

make

make check

make install

m4

m4 is a macro processor that supports both built-in and user-defined macros. It is
similar to cpp (the C preprocessor), but considerably more advanced and powerful,
and it is a required component for building WRF.

1-Go to the WRF directory and type the following command

sudo apt-get install m4

2.5 Library Compatibility Test

1_The library compatibility tests verify that the libraries work with the compilers that
will be used to compile the WRF and WPS components. Go to the TEST folder in the
home directory and type the following at the command prompt to download the
required test files.

wget http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/Fortran_C_NETCDF_MPI_tests.tar

2_Unpack the tar file using the following command line.

tar -xf Fortran_C_NETCDF_MPI_tests.tar

Test 1 – Fortran + C + NetCDF

1_Type the following command to copy the netcdf.inc file from the default installation
location to the current directory which is TEST.

cp /usr/local/include/netcdf.inc .

*Notice the dot (.) placed after the filename, separated by a space. This is very important to
make sure the file gets copied to your current directory. If NetCDF was not installed in the
default location, find netcdf.inc (for example with: find /usr -name netcdf.inc) and adjust the
command as necessary.

2_Type the following in the command prompt one after the other

gfortran -c 01_fortran+c+netcdf_f.f

gcc -c 01_fortran+c+netcdf_c.c

gfortran 01_fortran+c+netcdf_f.o 01_fortran+c+netcdf_c.o -L${NETCDF}/lib -lnetcdff -lnetcdf

./a.out

*These commands assume the NETCDF environment variable points to your NetCDF
installation. With the default location used in this manual, set it first with: export NETCDF=/usr/local

3_If the test is successful, the following message should be displayed on the screen.

C function called by Fortran


Values are xx = 2.00 and ii = 1
SUCCESS test 1 fortran + c + netcdf

Test 2 – Fortran + C + NetCDF + MPI Test

1_Type the following command to copy the netcdf.inc file from the default installation
location to the current directory which is TEST.

cp /usr/local/include/netcdf.inc .

2_Type the following in the command prompt one after the other

mpif90 -c 02_fortran+c+netcdf+mpi_f.f

mpicc -c 02_fortran+c+netcdf+mpi_c.c

mpif90 02_fortran+c+netcdf+mpi_f.o 02_fortran+c+netcdf+mpi_c.o -L${NETCDF}/lib -lnetcdff -lnetcdf

mpirun ./a.out

3_If the test is successful, the following message should be displayed on the screen.

C function called by Fortran


Values are xx = 2.00 and ii = 1
status = 2
SUCCESS test 2 fortran + c + netcdf + mpi

2.6 Installing WRF

1_Download the WRF3.8 files using the following command from the WRF directory

wget http://www2.mmm.ucar.edu/wrf/src/WRFV3.8.TAR.gz

2_Untar the file using the following commands

gunzip WRFV3.8.TAR.gz
tar -xf WRFV3.8.TAR

Setting up environment variables for NetCDF to install WRF3.8

3_Change the directory from WRF to the WRFV3 directory using the following
command

cd WRFV3/

4_Type the following lines on the command prompt

export WRFIO_NCD_LARGE_FILE_SUPPORT=1
export NETCDF=/usr/local
export NETCDF_LIB=/usr/local/lib
export NETCDF_INC=/usr/local/include
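These export commands only affect the current terminal session. If you expect to
recompile WRF later, you can optionally persist them (a sketch for bash; adjust for
your shell):

echo 'export NETCDF=/usr/local' >> ~/.bashrc
echo 'export NETCDF_LIB=/usr/local/lib' >> ~/.bashrc
echo 'export NETCDF_INC=/usr/local/include' >> ~/.bashrc
source ~/.bashrc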

5_Then configure WRF according to your desired usage using the following
command

./configure

6_A list of options will be displayed from which you can choose one depending on your
compiler and mode of processing

Compilers

There are several compiler options, and it is vital to use the same compiler that was
used to install the libraries. In this documentation we recommend and use
gfortran/gcc.

Mode of processing

The WRF model can be run in four modes

● serial = single processor


This option is best suited if you do not possess multiple processor resources and
intend to use only one processor.

● smpar = shared memory option (OpenMP)

This option uses shared memory, where parallelism is achieved by granting each
parallel thread access to all of the data.

● dmpar = distributed memory option (MPI)

This option is best suited and most recommended if you are planning to use
multiple processor cores for running WRF. Most general-purpose computers today
have multi-core Intel i3/i5/i7 processors, so this is the most commonly used
option.

● dm+sm = distributed memory and shared memory


This option is a combination of dmpar and smpar.

According to our system specifications and compiler used, the option recommended to
choose is 34

34. (dmpar) GNU (gfortran/gcc)

After the selection of the compiler and mode, another selection will be displayed with 4
options for nesting.

● The most general two options are the basic (=1) and the vortex following (=3).

● The preset moves (=2) option was initially used for testing WRF, and it is
therefore recommended to avoid this option.

● For basic weather forecasting using meteorological data, it is recommended to
use the basic (=1) option.

● For cyclone tracking applications, the vortex following (=3) option is the most
suited.

If the configuration is successful, a message will be displayed along with a test of the
NetCDF, C and Fortran compilers.

7_After this step, type the following command to compile WRF3.8

./compile

8_This would result in another list of options for which WRF can be compiled as given
below.

Usage:
compile [-j n] wrf compile wrf in run dir (NOTE: no real.exe,
ndown.exe, or ideal.exe generated)
or choose a test case (see README_test_cases for details) :
compile [-j n] em_b_wave
compile [-j n] em_convrad
compile [-j n] em_esmf_exp
compile [-j n] em_fire

compile [-j n] em_grav2d_x
compile [-j n] em_heldsuarez
compile [-j n] em_hill2d_x
compile [-j n] em_les
compile [-j n] em_quarter_ss
compile [-j n] em_real
compile [-j n] em_scm_xy
compile [-j n] em_seabreeze2d_x
compile [-j n] em_squall2d_x
compile [-j n] em_squall2d_y
compile [-j n] em_tropical_cyclone
compile [-j n] exp_real
compile [-j n] nmm_real
compile [-j n] nmm_tropical_cyclone
compile -j n parallel make using n tasks if
supported (default 2)
compile -h help message

The most basic and widely used capability of WRF is the em_real test case, which
allows the user to forecast using real meteorological datasets.

The ideal cases simulate a climatic condition with user-defined initial and boundary
conditions, whereas in the real cases the initial and boundary conditions are supplied
to WRF through actual meteorological data.

9_Type the following command with your choice of test case. Here, the test case
compiled is em_real.

./compile em_real >& log.compile

The >& log.compile redirection writes the compiler output to a log file, which will help
you troubleshoot in case of an unsuccessful attempt.

10_To check for the successful compilation of WRFV3.8, type the following command.

cat log.compile

If the compilation was successful, the following lines will be displayed at the bottom of
the log file.

==========================================================================
build started: Wed Nov 15 15:00:14 +07 2017
build completed: Wed Nov 15 15:06:37 +07 2017
---> Executables successfully built <---
-rwxr-xr-x 1 root root 38340768 Nov 15 15:06 main/ndown.exe
-rwxr-xr-x 1 root root 38217784 Nov 15 15:06 main/real.exe
-rwxr-xr-x 1 root root 37857816 Nov 15 15:06 main/tc.exe
-rwxr-xr-x 1 root root 42000344 Nov 15 15:06 main/wrf.exe
=========================================================================
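You can also list the built executables directly; their presence with non-zero sizes
indicates a successful build:

ls -l main/*.exe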

This is the end of the first component of WRFV3.8. The next step is to compile WPS.
The WPS compilation is only required for real cases; if you intend to use WRF only
for ideal cases, it is not required.

2.7 Installing WPS

1_Go back to the WRF directory by using the following command

cd ..

2_Download WPSV3.8 using the following command line

wget http://www2.mmm.ucar.edu/wrf/src/WPSV3.8.TAR.gz

3_Unzip the downloaded file using the following two command lines one after the other.

gunzip WPSV3.8.TAR.gz
tar -xf WPSV3.8.TAR

4_Go inside the WPS directory created by using the following command.

cd WPS/

5_Export the Jasper library paths for the installation of WPS with GRIB2 support.

export JASPERLIB=/usr/local/lib
export JASPERINC=/usr/local/include/jasper

6_Configure WPS using the following command

./configure

If the NetCDF and Jasper libraries are configured properly, the following lines will be
displayed at the top.

will use NETCDF in dir: /usr/local
Found Jasper environment variables for GRIB2 support...
$JASPERLIB = /usr/local/lib
$JASPERINC = /usr/local/include/jasper

7_Then select the working platform from the list of supported platforms given below.
Here the selection is 3.

Please select from among the following supported platforms.

1. Linux x86_64, gfortran (serial)


2. Linux x86_64, gfortran (serial_NO_GRIB2)
3. Linux x86_64, gfortran (dmpar)
4. Linux x86_64, gfortran (dmpar_NO_GRIB2)
5. Linux x86_64, PGI compiler (serial)
6. Linux x86_64, PGI compiler (serial_NO_GRIB2)
7. Linux x86_64, PGI compiler (dmpar)
8. Linux x86_64, PGI compiler (dmpar_NO_GRIB2)
9. Linux x86_64, PGI compiler, SGI MPT (serial)
10. Linux x86_64, PGI compiler, SGI MPT (serial_NO_GRIB2)
11. Linux x86_64, PGI compiler, SGI MPT (dmpar)
12. Linux x86_64, PGI compiler, SGI MPT (dmpar_NO_GRIB2)
13. Linux x86_64, IA64 and Opteron (serial)
14. Linux x86_64, IA64 and Opteron (serial_NO_GRIB2)
15. Linux x86_64, IA64 and Opteron (dmpar)
16. Linux x86_64, IA64 and Opteron (dmpar_NO_GRIB2)
17. Linux x86_64, Intel compiler (serial)
18. Linux x86_64, Intel compiler (serial_NO_GRIB2)
19. Linux x86_64, Intel compiler (dmpar)
20. Linux x86_64, Intel compiler (dmpar_NO_GRIB2)
21. Linux x86_64, Intel compiler, SGI MPT (serial)
22. Linux x86_64, Intel compiler, SGI MPT (serial_NO_GRIB2)
23. Linux x86_64, Intel compiler, SGI MPT (dmpar)
24. Linux x86_64, Intel compiler, SGI MPT (dmpar_NO_GRIB2)
25. Linux x86_64, Intel compiler, IBM POE (serial)
26. Linux x86_64, Intel compiler, IBM POE (serial_NO_GRIB2)
27. Linux x86_64, Intel compiler, IBM POE (dmpar)
28. Linux x86_64, Intel compiler, IBM POE (dmpar_NO_GRIB2)
29. Linux x86_64 g95 compiler (serial)
30. Linux x86_64 g95 compiler (serial_NO_GRIB2)
31. Linux x86_64 g95 compiler (dmpar)
32. Linux x86_64 g95 compiler (dmpar_NO_GRIB2)
33. Cray XE/XC CLE/Linux x86_64, Cray compiler (serial)
34. Cray XE/XC CLE/Linux x86_64, Cray compiler (serial_NO_GRIB2)

35. Cray XE/XC CLE/Linux x86_64, Cray compiler (dmpar)
36. Cray XE/XC CLE/Linux x86_64, Cray compiler (dmpar_NO_GRIB2)
37. Cray XC CLE/Linux x86_64, Intel compiler (serial)
38. Cray XC CLE/Linux x86_64, Intel compiler (serial_NO_GRIB2)
39. Cray XC CLE/Linux x86_64, Intel compiler (dmpar)
40. Cray XC CLE/Linux x86_64, Intel compiler (dmpar_NO_GRIB2)
Enter selection [1-40] : 3
----------------------------------------------------------------------

If the configuration is successful, the following message will be displayed.

----------------------------------------------------------------------
Configuration successful. To build the WPS, type: compile
----------------------------------------------------------------------

Testing for NetCDF, C and Fortran compiler

This installation NetCDF is 64-bit
C compiler is 64-bit
Fortran compiler is 64-bit

8_After the configuration of WPS, compile WPS using the following command line.

./compile &> compile.log

9_To check whether the compilation of WPS is successful, type the following command.

ls

10_Check that the following three executables are present and have non-zero file
sizes:

a) geogrid.exe
b) ungrib.exe
c) metgrid.exe

If these three .exe files are available, you can now proceed to the configuration of static
geographic data for WRF.

2.8 Configuration of Static Geographic Data

1_Go back to the WRF directory by using the following command

cd ..

2_Download the geographical input dataset using the following command line. Make
sure to choose the dataset suitable for your task, since the dataset size increases
with the desired resolution of the forecast.

For example, if 1 km resolution is sufficient for your forecast, you can use the
following dataset, which contains the lowest resolution of each mandatory field for WRF.

wget http://www2.mmm.ucar.edu/wrf/src/wps_files/geog_minimum.tar.bz2

3_If you are interested in running the model with a much greater resolution, it is always
recommended to use the complete dataset which accommodates the highest resolution
with the following command.

wget http://www2.mmm.ucar.edu/wrf/src/wps_files/geog_complete.tar.gz

The difference between these two datasets lies in the resolutions they can cater for
and in the download size: although the complete dataset accommodates higher
resolutions, it comes at the expense of a larger file size, and the WPS run time will
also be longer depending on your hardware resources.
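After downloading, unpack the archive into the folder (commonly named geog) that
geog_data_path will later point to. A sketch, assuming the filenames above:

tar -xjf geog_minimum.tar.bz2

or, for the complete dataset:

tar -xzf geog_complete.tar.gz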

Chapter 3 – Running the WRF Model

There are several components that need to be combined in order to run a successful
WRF real-case scenario.

3.1 Geogrid

The geogrid component uses the static geographical data previously downloaded by
the user and prepares the required information to match the specifications and
attributes listed by the user in the namelist.wps file.

It is important to make sure you have installed enough static geographical data to
cater for your resolution needs. If any data is missing when geogrid runs, it will
display an error message stating which files are missing. These files can then be
downloaded from the following link and copied to your static geographical data
folder.

http://www2.mmm.ucar.edu/wrf/src/wps_files/

After geogrid.exe has run successfully, the information required by metgrid is ready
to be processed together with the ungribbed data.

3.2 Ungrib

The ungrib component extracts fields from meteorological (GRIB) data, which must
be downloaded separately, and feeds them into metgrid so that the geogrid and
ungrib outputs can be combined.

3.3 Metgrid

The metgrid component combines the static geographical data and the
meteorological data and outputs files in a format conforming to the WRF I/O API.
This is the last step of WRF preprocessing, as the metgrid operation produces the
final input files required for running WRF.

*Note that the configuration of namelist.wps and namelist.input and the downloading
of meteorological data are explained in detail in the next chapter.

WRF Pre Processing System (WPS)

1_Go to the WPS directory and type the following command in a terminal window.

sudo gedit /home/WRF/WPS/namelist.wps

2_This will open a text file containing several parameters for different WPS
configurations. The required editing will be explained in the next chapter.

In the text file there is one attribute, geog_data_path, which must be configured when
running WPS for the first time. Set this attribute to the location of your geog data, as
in the excerpt below.

geog_data_res = '10m','10m','10m',
dx = 9000,
dy = 9000,
map_proj = 'mercator',
ref_lat = 7.5,
ref_lon = 80.3,
truelat1 = 7.4,
stand_lon = 80.2,
geog_data_path = '/home/gic/WRF/geog/'

3_Save the file and close it.

4_Type the following command to run geogrid.exe

./geogrid.exe

5_Once geogrid.exe has run successfully, type the following command line to link
the meteorological data. Assuming the meteorological data has already been
downloaded into a directory named METDATA and the format is GFS,

./link_grib.csh /home/WRF/METDATA/gfs*

6_Link the variable table matching the data format you use (AWIP or GFS).
Assuming you are using GFS data,

ln -sf /home/WRF/WPS/ungrib/Variable_Tables/Vtable.GFS Vtable

7_Run the ungrib.exe

./ungrib.exe

8_After the ungrib process completes successfully, proceed to metgrid.exe using the
following command.

./metgrid.exe

9_After the successful completion of metgrid.exe, you can proceed to real.exe,
assuming that namelist.input is configured properly. Type the following command
line in a terminal window to change to the em_real directory.

cd /home/WRF/WRFV3/test/em_real

10_Link the met data into the em_real directory

ln -sf /home/WRF/WPS/met_em.d0* .

11_Run the real.exe

mpirun -np 1 ./real.exe

12_After the completion of real.exe, run wrf.exe

mpirun -np 4 ./wrf.exe

*Note that the number following '-np' in the above two commands determines the number of
cores you dedicate to each task. The number of cores available depends on your hardware
resources. Usually real.exe does not require as much processing power as wrf.exe, so in the
command lines above only one core is allocated to it. Since wrf.exe requires much more
processing power, it is recommended to allocate the maximum possible number of cores to
it. Keep in mind that this capability is only available if you chose the 'dmpar' mode of
processing. By utilizing more cores in parallel for a WRF task, the run time can be reduced.
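To see how many cores your machine has, so you can choose a sensible '-np' value, run:

nproc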

Chapter 4 – Configuring the Runtime Parameters

As discussed in the previous chapter, the model run itself is straightforward and
systematic once all the runtime parameters are set up. The nontrivial part of a WRF
model run is setting up the runtime parameters according to the user's requirements.

There are three basic tasks that you need to focus on before running a WRF
real-case scenario.

1. Configuring the namelist.wps file.
2. Configuring the namelist.input file.
3. Downloading a sufficient amount of meteorological data.

Only once these three tasks are completed can the WRF model be executed.

4.1 Configuring the namelist.wps file

The namelist.wps file contains many attributes and parameters relevant to the WPS
run that can be adjusted by the user. A sample namelist.wps file is given below with
detailed explanations.

&share
wrf_core = 'ARW',
max_dom = 2,
start_date = '2006-08-16_12:00:00','2006-08-16_12:00:00',
end_date = '2006-08-16_18:00:00','2006-08-16_12:00:00',
interval_seconds = 21600
io_form_geogrid = 2,
/

&geogrid
parent_id = 1, 1,
parent_grid_ratio = 1, 3,
i_parent_start = 1, 31,
j_parent_start = 1, 17,
e_we = 74, 112,
e_sn = 61, 97,

!!!!!!!!!!!!!!!!!!!!!!!!!!!! IMPORTANT NOTE !!!!!!!!!!!!!!!!!!!!!!!!!!!!
! This namelist is specific for use with the lowest resolution option for
! each field in the static geographic tar file. It is mandatory to use
! the below settings for geog_data_res.
!!!!!!!!!!!!!!!!!!!!!!!!!!!! IMPORTANT NOTE !!!!!!!!!!!!!!!!!!!!!!!!!!!!

geog_data_res = '10m','10m','10m',
dx = 9000,
dy = 9000,
map_proj = 'mercator',
ref_lat = 7.5,
ref_lon = 80.3,
truelat1 = 7.4,
stand_lon = 80.2,
geog_data_path = '/home/gic/WRF/geog/'
/

&ungrib
out_format = 'WPS',
prefix = 'FILE',
/

&metgrid
fg_name = 'FILE'
io_form_metgrid = 2,
/

4.1.1 Section A

These parameters are common for all three components of WPS (geogrid, ungrib,
metgrid).

max_dom – This defines the number of domains used for the model run. The default
value is 1, which means the model requires at least one domain (the mother domain).

start_date – The UTC (Coordinated Universal Time) date of the simulation start, in
the format below.

end_date – The UTC (Coordinated Universal Time) date of the simulation end, in the
same format.

'xxxx-xx-xx_xx:xx:xx'

interval_seconds – The gap between two successive time-varying meteorological
input files. This depends on the type of data you use; it is generally 3 hours (10800 s)
or 6 hours (21600 s).

io_form_geogrid – This defines the format in which the domain files are created by
geogrid.exe. Possible formats are given below.

● Binary (suffix .int)
● NetCDF (default; suffix .nc)
● GRIB1 (suffix .gr1)

4.1.2 Section B

These parameters are only defined for the geogrid.exe

parent_id – This defines the id of each domain's parent. For the coarsest domain
(the mother domain) the parent is itself and the value assigned is 1. For every other
nested domain, the id of the adjacent coarser domain becomes its parent id.

parent_grid_ratio – This parameter defines the ratio of the parent domain's grid
spacing to that of the nested domain. Since the mother domain is its own parent, its
grid ratio value is one.

*Note that there are two values for some parameters, such as start_date, end_date, parent_id
and parent_grid_ratio. This is because 2 domains were selected via the max_dom parameter.
Similarly, if 'x' domains are used, there will be 'x' values for several parameters. The columns
separated by commas (,) belong to separate domains.

&geogrid
parent_id = 1, 1,
parent_grid_ratio = 1, 3,
i_parent_start = 1, 31,
j_parent_start = 1, 17,
e_we = 74, 112,
e_sn = 61, 97,

The i_parent_start and j_parent_start parameters define the lower-left corner of the
domain in relation to its parent domain. Hence the i,j coordinates of the mother
domain are 1,1, and each nested domain's starting i,j coordinates are given relative
to its parent domain.

The e_we and e_sn parameters define the nest's full west-east dimension and
south-north dimension respectively.

*Note that the selection of the grid size (i.e. e_sn and e_we) for the mother domain is easy
and straightforward, but the determination of e_sn and e_we for nested domains is tricky. An
easy method to calculate them is given below.

The starting coordinates of the nested domain (D2) in the above example are given as
(31, 17), which refers to the bottom-left corner of the grid. You then decide the top-right
corner of the grid in mother-domain (D1) coordinates, which in this case is chosen as
(68, 49). Then;

e_we = (parent grid ratio) × (ending i coordinate – starting i coordinate) + 1
     = (3 × (68 – 31)) + 1
     = 111 + 1
     = 112

Similarly,

e_sn = (parent grid ratio) × (ending j coordinate – starting j coordinate) + 1
     = (3 × (49 – 17)) + 1
     = 96 + 1
     = 97
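If you want to double-check such calculations quickly, shell arithmetic works; the
corner coordinates below are the ones from the example above:

echo $(( 3 * (68 - 31) + 1 ))   # e_we = 112
echo $(( 3 * (49 - 17) + 1 ))   # e_sn = 97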
4.1.3 Section C

These parameters are only defined for the geogrid.exe

geog_data_res – This parameter defines which geog data resolution files are to be
used in the geogrid.exe process. Based on this string, geogrid.exe searches for the
required static data under your geog data path and reports any required files that are
missing.

dx – This value determines the grid distance in the x direction where the map scale
factor is 1. For the 'polar', 'lambert' and 'mercator' projections this value is a distance
in meters; for the 'lat-lon' projection in WRF-ARW it is in degrees longitude.

dy – This value determines the grid distance in the y direction where the map scale
factor is 1. For the 'polar', 'lambert' and 'mercator' projections this value is a distance
in meters; for the 'lat-lon' projection in WRF-ARW it is in degrees latitude.

map_proj – The map projection used, from the four possible options below. We
recommend the Mercator projection, since most modern maps (e.g. Google Maps)
use the web-Mercator projection.

● Polar
● Mercator
● Lambert (default)
● Lat-lon

ref_lat - This defines the latitude of the centre of the coarse domain (mother domain).

ref_lon - This defines the longitude of the centre of the coarse domain (mother domain).

4.1.4 Section D

These parameters are only defined for the ungrib.exe

out_format – This defines the output format of ungrib.exe. The possible output
formats are:

● WPS (Default)
● SI
● MM5

prefix – This defines a string to be prepended to the names of the intermediate files
produced by ungrib.exe. It helps to differentiate between intermediate files from
multiple sources of GRIB data covering the same dates and times.
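For illustration, assuming prefix = 'FILE' and the 6-hourly sample dates used in the
namelist above, ungrib produces intermediate files named like the following:

FILE:2006-08-16_12
FILE:2006-08-16_18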

4.1.5 Section E

These parameters are only defined for the metgrid.exe

fg_name – A string defining the path and prefix of the ungribbed intermediate files. It
is recommended to keep this the same as the 'prefix' string in Section D.

io_form_metgrid – This defines the output format by the metgrid.exe from the following
possible options.

● 1 for Binary format
● 2 for NetCDF format (default)
● 3 for GRIB1 format

4.2 Configuring the namelist.input file

The namelist.input file contains many attributes and parameters relevant to the WRF
run that can be adjusted by the user. A sample namelist.input file is given below with
detailed explanations.

&time_control
run_days = 0,
run_hours = 24,
run_minutes = 0,
run_seconds = 0,
start_year = 2016, 2000, 2000,
start_month = 03, 01, 01,
start_day = 23, 24, 24,
start_hour = 00, 12, 12,
start_minute = 00, 00, 00,
start_second = 00, 00, 00,
end_year = 2016, 2000, 2000,
end_month = 03, 01, 01,
end_day = 24, 25, 25,
end_hour = 00, 12, 12,
end_minute = 00, 00, 00,
end_second = 00, 00, 00,
interval_seconds = 21600
input_from_file = .true.,.true.,.true.,
history_interval = 180, 60, 60,
frames_per_outfile = 1, 1000, 1000,
restart = .false.,
restart_interval = 720,
io_form_history = 2
io_form_restart = 2
io_form_input = 2
io_form_boundary = 2
debug_level = 0
/

&domains
time_step = 180,
time_step_fract_num = 0,
time_step_fract_den = 1,
max_dom = 1,
e_we = 75, 112, 94,
e_sn = 70, 97, 91,
e_vert = 35, 30, 30,
p_top_requested = 5000,
num_metgrid_levels = 27,
num_metgrid_soil_levels = 4,
dx = 30000, 10000, 3333.33,
dy = 30000, 10000, 3333.33,

grid_id = 1, 2, 3,
parent_id = 0, 1, 2,
i_parent_start = 1, 31, 30,
j_parent_start = 1, 17, 30,
parent_grid_ratio = 1, 3, 3,
parent_time_step_ratio = 1, 3, 3,
feedback = 1,
smooth_option = 0
/

&physics
physics_suite = 'CONUS'
radt = 30, 30, 30,
bldt = 0, 0, 0,
cudt = 5, 5, 5,
icloud = 1,
num_soil_layers = 4,
num_land_cat = 24,
sf_urban_physics = 0, 0, 0,
/

&fdda
/

&dynamics
w_damping = 0,
diff_opt = 1, 1, 1,
km_opt = 4, 4, 4,
diff_6th_opt = 0, 0, 0,
diff_6th_factor = 0.12, 0.12, 0.12,
base_temp = 290.
damp_opt = 0,
zdamp = 5000., 5000., 5000.,
dampcoef = 0.2, 0.2, 0.2
khdif = 0, 0, 0,
kvdif = 0, 0, 0,
non_hydrostatic = .true., .true., .true.,
moist_adv_opt = 1, 1, 1,
scalar_adv_opt = 1, 1, 1,
gwd_opt = 1,
/

&bdy_control

spec_bdy_width = 5,
spec_zone = 1,
relax_zone = 4,
specified = .true., .false.,.false.,
nested = .false., .true., .true.,
/

&grib2
/

&namelist_quilt
nio_tasks_per_group = 0,
nio_groups = 1,
/

4.2.1 Section A

● This section includes all the parameters related to time control of the model run.
● From run_days through end_second, all these parameters should be consistent
with the values given in namelist.wps; every variable that appears in both files
must match before running wrf.exe.
● interval_seconds should be set to the same value as in namelist.wps.
● It is recommended to use the default values for the other parameters, such as
io_form_history, io_form_input, etc., unless you are using WRF for advanced
purposes such as coupling with other numerical models.

4.2.2 Section B

● This section includes all the parameters related to the domain control of the
model run.
● It is vital to match the values given for the domain parameters that appear in
both namelist.wps and namelist.input.
● However, in the namelist.input file you must explicitly allocate values for dx and
dy for each domain, including nested domains.
● This calculation of dx and dy for the nested domain should be done according to
the parent grid ratio.

● For example, if the parent grid ratios are 1, 2, 3 and the mother-domain dx given
in namelist.wps is 1200, then the dx values of the other two domains will be
600 (= 1200/2) and 200 (= 600/3).

Sections C, D and E contain the physics, dynamics and boundary-control
parameters. Configuring them requires further knowledge of mesoscale weather
patterns and of the physical and dynamical schemes that most accurately represent
rapidly changing weather.

4.3 Downloading Meteorological Data

The meteorological data used for WRF real cases can be downloaded from the
Internet. A good understanding of the different datasets and of how to access them
before starting any case study is important. In this document we explain the dataset
most commonly used by WRF users worldwide.

4.3.1 Global Forecasting System (GFS)

● The Global Forecasting System (GFS) is a weather forecast model produced by
the National Centers for Environmental Prediction (NCEP), United States of
America.
● GFS covers the entire globe, and its forecast products predict the weather up to
16 days ahead.
● These GFS products are freely available on the National Operational Model
Archive and Distribution System (NOMADS) website.
● The GFS data for forecasting can be downloaded at 0.25°, 0.5° and 1°
resolutions. The data output time steps vary with the resolution.

● The model is run 4 times per day, and results are available each day for the 00h,
06h, 12h and 18h (UTC) cycles.

Data resolution and time step summary

Resolution   Time Step   Time Duration   No. of Files   Average File Size
0.25°        1 hour      000h to 120h    121            206 MB
0.25°        3 hours     123h to 240h    40             206 MB
0.25°        12 hours    252h to 384h    12             186 MB
0.5°         3 hours     000h to 240h    81             65 MB
0.5°         12 hours    252h to 384h    12             62 MB
1°           3 hours     000h to 240h    81             20 MB
1°           12 hours    252h to 384h    12             19 MB

The data files can be accessed freely from the following link

http://www.ftp.ncep.noaa.gov/data/nccf/com/gfs/prod/

Once the required date is selected, a list of downloadable files will be displayed. The
following naming convention will help you identify and download the data files you
require.

gfs.t06z.pgrb2.0p25.f135

Here 't06z' denotes the model cycle (the 06 UTC run), 'pgrb2' denotes pressure-level
GRIB2 data, '0p25' is the 0.25° resolution, and 'f135' is the forecast hour (135 hours
ahead).
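As a sketch, a single file could then be fetched with wget; note that the
gfs.2017111506 date directory here is a hypothetical example, so browse the link
above for the directories actually available:

wget http://www.ftp.ncep.noaa.gov/data/nccf/com/gfs/prod/gfs.2017111506/gfs.t06z.pgrb2.0p25.f135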

62 | ​Page