Lib-gmtsar » History » Version 1

Herve Caumont, 2013-06-19 18:05

h1. GMTSAR tutorial

{{>toc}}

h2. Sandbox pre-requisites

*We assume that you already have a Sandbox ready and that the following items are completed:*
* *You have accessed your sandbox as described in the Getting Started guide*

> It is not mandatory but strongly recommended to follow the [[Getting started]] tutorial before starting this one.
h2. 1. Application concepts and terminology

In order to ease the execution of the tutorial, it is important to understand the concept of an application and its terminology. This section describes the example application used throughout this guide. When a word is put in *+underlined bold+*, it is a terminology keyword that always designates the same concept.
h3. 1.1 The application workflow

Our example in this tutorial is an interferometry application that processes SAR data (Envisat ASAR Image Mode level 0) to generate interferograms between a master product and one or more slave products. It is composed of 6 steps that run independently, but in a specific order, and produce results that are inputs for the remaining steps.

The following figure illustrates the *+Workflow+* of our application as a directed acyclic graph (DAG). This is also how the CIOP framework handles the execution of processes in terms of parallel computing and orchestration of the processing steps.

!! _<<insert here the DAG that represents the workflow >>_
Each box represents a *+Job+*, which is a step of our application process. The arrows represent the data flow between the *+jobs+*. When a *+job+* is connected to another, it means that the *+output+* of this *+job+* is passed as *+input+* to the other.

It is important to keep in mind that in the CIOP framework, *+input+* and *+output+* are text references (e.g. to data). Indeed, when a *+job+* processes its *+input+*, it actually reads the references *line by line*, as described in the next figure.
!https://ciop.eo.esa.int/attachments/40/focus-align.png!

It is therefore important to precisely define the inter-*+job+* references.
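Since *+input+* is consumed line by line, a *+processing trigger+* is typically structured as a read loop over stdin. The sketch below is purely illustrative (the function name and the references are hypothetical, not part of the CIOP API):

```shell
#!/bin/sh
# Minimal sketch of a processing trigger loop (illustrative only):
# the framework pipes the input references to stdin, one per line;
# the trigger processes each and writes one output reference per line.
process_inputs() {
  while read -r ref; do
    # real work (data copy, processing) would go here
    echo "processed/${ref}"
  done
}

# demo with two hypothetical references
out=$(printf '%s\n' "ref1" "ref2" | process_inputs)
echo "$out"
```

Whatever the trigger writes to stdout becomes the *+input+* lines of the next *+job+* in the chain.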
h3. 1.2 The job

Each *+job+* has a set of basic characteristics:

* a unique *+Job name+* in the workflow (e.g. 'PreProc')
* zero, one or several *+sources+* that define the *+jobs+*' interdependency. In the example, the *+job+* 'Interfere' has 2 dependencies: 'AlignSlave' and 'MakeTropo'.
* a maximum number of simultaneous *+tasks+* into which it can be forked. This is further explained in section [[Sandbox Application Integration Tutorial#1.3 The processing task|1.3 The processing task]].
* a *+processing trigger+*, which is the software executable of the *+job template+* that handles the *+input+*/*+output+* streaming process; practically, the executable that reads the *+input+* lines and writes the *+output+* lines.
The job characteristics above are mandatory in the *+workflow+* definition.
If incomplete, the CIOP framework reports an error in the workflow.

For our tutorial example, here are the characteristics of the 'PreProc', 'AlignSlave' and 'Interfere' *+jobs+*:

* *PreProc*

PreProc is the first job in the workflow. It takes both SAR products, master and slave, and pre-processes them:
> It is a job with a single task: the _defaultJobconf_ _property_ (a CIOP property, not an application property) _*ciop.job.max.tasks*_ is set to *1*
> Its executable is located at /application/preproc/run
> It has a number of application parameters to run: SAT, master, num_patches, near_range, earth_radius, fd1, stop_on_error. The parameter values are not set in this job template.
<pre><code class="xml">
<jobTemplate id="preproc">
	<streamingExecutable>/application/preproc/run</streamingExecutable>	<!-- processing trigger -->
	<defaultParameters>	<!-- default parameters of the job -->
		<!-- Default values are specified here -->
		<parameter id="SAT"></parameter>	<!-- no default value -->
		<parameter id="master"></parameter>	<!-- no default value -->
		<parameter id="num_patches"></parameter>	<!-- no default value -->
		<parameter id="near_range"></parameter>	<!-- no default value -->
		<parameter id="earth_radius"></parameter>	<!-- no default value -->
		<parameter id="fd1">1</parameter>	<!-- default value: 1 -->
		<parameter id="stop_on_error">false</parameter>	<!-- don't stop on error by default -->
	</defaultParameters>
	<defaultJobconf>
		<property id="ciop.job.max.tasks">1</property>	<!-- maximum number of parallel tasks -->
	</defaultJobconf>
</jobTemplate>
</code></pre>
* *+Job name+*: *'AlignSlave'*
* *+sources+*: 'PreProc'
* maximum number of simultaneous *+tasks+*: unlimited
* *+processing trigger+*: /application/align/run

* *+Job name+*: *'Interfere'*
* *+sources+*: 'AlignSlave' and 'MakeTropo'
* maximum number of simultaneous *+tasks+*: 1
* *+processing trigger+*: /application/interfere/run
h3. 1.3 The processing task

To exploit the parallelism offered by the CIOP framework, a *+job+* may process its *+input+* in several *+tasks+*. In principle, the CIOP framework will run those *+tasks+* in parallel. This is an important and sometimes complex paradigm that can be addressed in different ways.
The following questions & answers describe the parallelism paradigm of the CIOP framework.

* Is it *+task+* parallelism or *+job+* parallelism?

> In this section, we definitely speak about task parallelism. Job parallelism is one level above. In the example, 'MakeTropo' and 'AlignSlave' are two *+jobs+* that run in parallel, besides the number of *+tasks+* each of them may trigger.
> 'AlignSlave' may be forked into an unlimited number of *+tasks+*. Practically, the framework automatically calculates the number *n* of available processing slots in the computing resource and starts the *+processing trigger+* *n* times (based on the number of *+inputs+*).
* How to divide a *+job+* into *+tasks+*?

> It is actually the application developer who chooses the granularity of the *+job+* division. The computing framework simply divides the *+input+* flow (*k* lines) into the *n* *+tasks+*. In the example provided in this tutorial, if the *+job+* 'PreProc' produces an *+output+* of 11 lines and the computing resources divide the *+job+* 'AlignSlave' into 4 *+tasks+*, the following division is done:

!https://ciop.eo.esa.int/attachments/41/parallelism.png!

* Where does the processing loop stand?

> The processing loop stands in the *+processing trigger+*. As shown in the example of this tutorial, the *+processing trigger+* implements a loop that reads the *+task+* input *line by line*.
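The division of input lines among tasks can be pictured with a simple round-robin split. This sketch is purely illustrative: the real assignment of lines to tasks is decided by the framework, not by the application.

```shell
#!/bin/sh
# Illustrative only: divide k=11 input lines among n=4 tasks
# in round-robin fashion, mimicking the framework's split.
out=$(printf 'line%d\n' 1 2 3 4 5 6 7 8 9 10 11 |
  awk -v n=4 '{ print "task" (NR - 1) % n ": " $0 }')
echo "$out"
```

Each of the 4 tasks then runs the same processing trigger, each receiving its own subset of the 11 lines on stdin.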
h2. 2. Home Directory

The *+home+* directory of your sandbox is attached to a separate savable disk and is thus persistent. This disk is mounted on the /home/{USERNAME} folder. In this folder you may upload, compile, manually test, etc. all your data. This is a free space where you can do anything BUT:

> *+Be careful to never link any element (e.g. executable, auxiliary data) that is critical for the application from the application directory to the home directory+*. Indeed, the *+home+* directory is not present when using CIOP in Runtime Environment mode; any linked elements therefore won't be available, causing the processing phase to fail.
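One way to catch such mistakes before deployment is to scan the application directory for symbolic links that resolve into /home. The check below is only a sketch: the directory it scans is a temporary stand-in for /application, and the link it creates is a deliberately bad example.

```shell
#!/bin/sh
# Sketch: detect symlinks under an application directory that point
# into /home; such links break in the Runtime Environment, where
# the home disk is not mounted. APPDIR is a temporary stand-in.
APPDIR=$(mktemp -d)
ln -s /home/user/GMTSAR/bin/sometool "$APPDIR/sometool"   # a bad link
bad=$(find "$APPDIR" -type l -exec readlink {} \; | grep -c '^/home/')
echo "links into /home: $bad"
```

A count greater than zero means some element of the application would silently disappear in Runtime Environment mode.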
h2. 3. Application Directory

The *+application directory+* of your sandbox is attached to a separate savable disk and is thus persistent. This disk is mounted on the /application folder.
The application directory is the place where the integrated application resides.
It should be a clean environment and thus +*SHOULD NOT*+ contain any temporary files or be used for compilation and/or manual testing. Instead, it is used for the simulation of the application *+jobs+* and +*workflow*+.

At the instantiation of your sandbox, the *+application directory+* contains the sample application example, unless you configured the sandbox with one of your application disks previously saved in your application library.
In the next sections the elements of the application directory are described.
h3. 3.1 Files and folders structure

The application directory follows some best practices in its folders and files structure to ease the subsequent deployment of the application to the CIOP Runtime Environment.
The folder structure of the application example, with the description of each item, is shown below:

!https://ciop.eo.esa.int/attachments/44/application_folder.png!

> Even if the names are quite similar in our tutorial example, *+job+* and +*job template*+ are not the same concept. A *+job+* is an instance of a +*job template*+ in a given +*workflow*+. This paradigm allows several *+jobs+* in a +*workflow*+ to point to the same *+job template+*. This is explained in more detail in the next section.
h3. 3.2 The Application XML definition file

The Application XML definition file is the reference of your application for the CIOP computing framework. It contains all the characteristics of the *+job templates+* and the *+workflows+*.

The Application XML definition file is described in the page [[Application XML definition file]].
h2. 4 -- Using sample datasets -- (section under revision)

This section guides you through the tutorial example to introduce the data manipulation tools.

There are mainly two command line tools to discover and access the data previously selected in your sandbox ([[Getting_Started#2.1 Sandbox EO Data Services|see here]]):
* *ciop-catquery* to query the sandbox catalogue containing all the metadata of the selected sample dataset
* *ciop-copy* to copy the data from a logical or physical location to a local directory of the sandbox

Use
<pre>ciop-<command> -h</pre>
to display the CLI reference.

These commands can be used in the processing triggers of a +*job*+.
h3. 4.1 Query sandbox catalogue

For the purpose of this tutorial, the first test is to ensure that the test dataset needed for the application integration and testing is complete.

(to be updated)
h3. 4.2 Copy data

To copy data from a reference link as displayed in the previous section, just use the following command:

<pre><code class="ruby">[user@sb ~]$ ciop-copy http://localhost/catalogue/sandbox/ASA_IM__0P/ASA_IM__0CNPDE20040602_091147_000000152027_00222_11799_1335.N1/rdf</code></pre>
output:

<pre>
[INFO   ][ciop-copy][starting] url 'http://localhost/catalogue/sandbox/ASA_IM__0P/ASA_IM__0CNPDE20040602_091147_000000152027_00222_11799_1335.N1/rdf' > local '/application/'
[INFO   ][ciop-copy][success] got URIs 'https://eo-virtual-archive4.esa.int/supersites/ASA_IM__0CNPDE20040602_091147_000000152027_00222_11799_1335.N1 '
[INFO   ][ciop-copy][starting] url 'https://eo-virtual-archive4.esa.int/supersites/ASA_IM__0CNPDE20040602_091147_000000152027_00222_11799_1335.N1' > local '/application/'
[INFO   ][ciop-copy][success] url 'https://eo-virtual-archive4.esa.int/supersites/ASA_IM__0CNPDE20040602_091147_000000152027_00222_11799_1335.N1' > local '/application/ASA_IM__0CNPDE20040602_091147_000000152027_00222_11799_1335.N1'
/home/user/ASA_IM__0CNPDE20040602_091147_000000152027_00222_11799_1335.N1
</pre>
The command displays information on _stderr_ by default and returns on _stdout_ the path of the copied data.

Many other URL schemes are supported by the ciop-copy CLI, such as http, https, hdfs, etc.
There are also many other options to specify the output directory or to unpack compressed data.
The complete reference is available here [[ciop-copy CLI reference|ciop-copy usage]] or by using the inline help:
<pre><code class="ruby">[user@sb ~]$ ciop-copy -h</code></pre>
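The stderr/stdout split is what makes ciop-copy easy to use in scripts: command substitution captures only the local path, while the log lines go to stderr. The stub below imitates that contract for illustration only (fake_ciop_copy is not a real tool, and the URL is made up):

```shell
#!/bin/sh
# Stub illustrating the ciop-copy I/O contract (not the real tool):
# informational messages go to stderr; the only thing written to
# stdout is the local path, so $(...) captures just the path.
fake_ciop_copy() {
  echo "[INFO   ][fake-copy][starting] url '$1'" >&2
  echo "/tmp/$(basename "$1")"
}
localFile=$(fake_ciop_copy "http://example.org/data/product.N1")
echo "$localFile"
```

This is the same pattern used later in the processing trigger script, where the path returned by ciop-copy is captured into a variable and checked before use.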
h3. 4.3 Using other sources of data in a job

So far we have introduced two types of data sources:
* data coming from a catalogue series
* data coming from a previous job in the workflow.

In the first case, we define the workflow for the job imager:
<pre><code class="xml"><workflow id="testVomir">							<!-- Sample workflow -->
		<workflowVersion>1.0</workflowVersion>
		<node id="Vimage">							<!-- workflow node unique id -->
			<job id="imager"></job>					<!-- job defined above -->
			<sources>
				<source refid="cas:serie" >ATS_TOA_1P</source>
			</sources>
			<parameters>							<!-- parameters of the job -->
				<parameter id="volcano_db"></parameter>
			</parameters>
		</node>
		<node id="Quarc">
			<job id="quarcXML"/>
			<sources>
				<source refid="wf:node" >Vimage</source>
			</sources>
		</node></code></pre>
In the second case, we define the workflow for the Quarc job:

<pre><code class="xml"><workflow id="testVomir">							<!-- Sample workflow -->
		<workflowVersion>1.0</workflowVersion>
		<node id="Vimage">							<!-- workflow node unique id -->
			<job id="imager"></job>					<!-- job defined above -->
			<sources>
				<source refid="cas:serie" >ATS_TOA_1P</source>
			</sources>
			<parameters>							<!-- parameters of the job -->
				<parameter id="volcano_db"></parameter>
			</parameters>
		</node>
		<node id="Quarc">
			<job id="quarcXML"/>
			<sources>
				<source refid="wf:node" >Vimage</source>
			</sources>
		</node></code></pre>
It may be that the input data does not come from an EO catalogue, in which case there is the need to define another source of data.

<pre><code class="xml"><workflow id="someworkflow">							<!-- Sample workflow -->
		<workflowVersion>1.0</workflowVersion>
		<node id="somenode">							<!-- workflow node unique id -->
			<job id="somejobid"></job>					<!-- job defined above -->
			<sources>
				<source refid="file:urls" >/application/test.urls</source>
			</sources>
		</node>
</code></pre>

where the file test.urls contains the input lines that will be piped to the processing trigger executable.
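For illustration, a hypothetical test.urls could be built as follows; the product names are made up, and each line is passed verbatim to the trigger's stdin:

```shell
#!/bin/sh
# Build a hypothetical test.urls file: one input reference per line.
# In a real sandbox this file would live at /application/test.urls;
# here it is written to the current directory for illustration.
cat > test.urls << 'EOF'
http://localhost/catalogue/sandbox/ASA_IM__0P/PRODUCT_A/rdf
http://localhost/catalogue/sandbox/ASA_IM__0P/PRODUCT_B/rdf
EOF
lines=$(wc -l < test.urls)
echo "$lines"
```

The trigger of 'somenode' would then receive exactly these lines, one per read of its input loop.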
h2. 5. Job integration

In this section, the *+job template+* 'align' and its instance, the *+job+* 'AlignSlave', are integrated using the tools previously introduced in this tutorial.
h3. 5.1 Installation and configuration of the GMTSAR toolbox on the Sandbox

The steps below install the GMTSAR toolbox on the sandbox. They are specific to GMTSAR, but they illustrate the common approach to follow when installing software on the sandbox.
> The steps below are done in the +*home directory*+
* Step - Download GMTSAR

The GMTSAR software is available on the University of California web server:

<pre><code class="ruby">
[user@sb ~]$ wget http://topex.ucsd.edu/gmtsar/tar/GMTSAR.tar
</code></pre>

Then the GMTSAR.tar archive is unpacked:

<pre><code class="ruby">
[user@sb ~]$ tar xvf GMTSAR.tar
</code></pre>
GMTSAR relies on GMT, which depends on netCDF; GMT is installed via yum:

<pre><code class="ruby">
[user@sb ~]$ cd GMTSAR
[user@sb GMTSAR]$ sudo yum search gmt
[user@sb GMTSAR]$ sudo yum install GMT-devel
[user@sb GMTSAR]$ sudo yum install netcdf-devel
[user@sb GMTSAR]$ make
</code></pre>

The steps above compile GMTSAR in the +*home directory*+. The required files (binaries, libraries, etc.) are copied to the /application environment (remember that the +*home directory*+ is only available in the CIOP Sandbox mode and not in the Runtime Environment).
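The copy step can be sketched as below. All paths are placeholders: the actual GMTSAR build tree layout and the job folder names depend on your application, and the sketch uses temporary directories so it can run anywhere.

```shell
#!/bin/sh
# Sketch (hypothetical paths): stage binaries and libraries from a
# build tree in the home directory into a job folder that would live
# under /application, so they are present in the Runtime Environment.
SRC=$(mktemp -d)    # stand-in for the build tree, e.g. the GMTSAR folder
DEST=$(mktemp -d)   # stand-in for /application/preproc
mkdir -p "$SRC/bin" "$DEST/bin" "$DEST/lib"
touch "$SRC/bin/pre_proc_batch.csh"       # example binary to stage
cp "$SRC"/bin/* "$DEST/bin/"
staged=$(ls "$DEST/bin" | wc -l)
echo "staged $staged file(s) into $DEST/bin"
```

Staging into the job folder is what makes the trigger's PATH/LD_LIBRARY_PATH setup (shown in section 5.3) work in the Runtime Environment.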
h3. 5.2 Job template definition in the application.xml

The application.xml file has two main blocks: the job template section and the workflow template section.

The first part is to define the *+job templates+* in the workflow XML application definition file.
Each processing block of the GMTSAR workflow needs a *+job template+*.

Here is the *+job template+* for 'preproc':
<pre><code class="xml">
<jobTemplate id="preproc">
	<streamingExecutable>/application/preproc/run</streamingExecutable>	<!-- processing trigger -->
	<defaultParameters>	<!-- default parameters of the job -->
		<!-- Default values are specified here -->
		<parameter id="SAT"></parameter>	<!-- no default value -->
		<parameter id="master"></parameter>	<!-- no default value -->
		<parameter id="num_patches"></parameter>	<!-- no default value -->
		<parameter id="near_range"></parameter>	<!-- no default value -->
		<parameter id="earth_radius"></parameter>	<!-- no default value -->
		<parameter id="fd1">1</parameter>	<!-- default value: 1 -->
		<parameter id="stop_on_error">false</parameter>	<!-- don't stop on error by default -->
	</defaultParameters>
	<defaultJobconf>
		<property id="ciop.job.max.tasks">1</property>	<!-- maximum number of parallel tasks -->
	</defaultJobconf>
</jobTemplate>
</code></pre>
To test this +*job*+ with +*ciop-simjob*+, we need to fill the second part of the +*application.xml*+ to add a *+node+*:

<pre><code class="xml">
<node id="PreProc">	<!-- workflow node unique id -->
	<job id="preproc"></job> <!-- job template defined before -->
	<sources>
		<!-- Source is the series of the data selection -->
		<source refid="cas:serie">ASA_IM__0P</source>
	</sources>
	<parameters>	<!-- parameters of the job -->
		<parameter id="SAT">ENV</parameter>
		<parameter id="master">http://localhost/catalogue/sandbox/ASA_IM__0P/ASA_IM__0CNPDE20040602_091147_000000152027_00222_11799_1335.N1/rdf</parameter>
		<parameter id="near_range">978992.922</parameter>
		<parameter id="earth_radius">6378000</parameter>
		<parameter id="stop_on_error">true</parameter>	<!-- during integration, preferably stop on error -->
	</parameters>
</node>
</code></pre>
> The complete application.xml is available here: TBW
> The application.xml and all its elements are described in detail in the page [[Application XML definition file]].

Since this processing is the first in the *+workflow+* chain, it has a special source, which is the series 'ASA_IM__0P'. Practically, it means that when the *+job+* is submitted for execution, the computing framework will query the sandbox catalogue for the ASA_IM__0P data registered as sample dataset and will prepare a list of data references as *+input+* for the job 'PreProc'. In our example, the resulting list is:
<pre>
http://localhost/catalogue/sandbox/ASA_IM__0P/ASA_IM__0CNPDE20090412_092436_000000162078_00079_37207_1556.N1/rdf
http://localhost/catalogue/sandbox/ASA_IM__0P/ASA_IM__0CNPAM20080427_092430_000000172068_00079_32197_3368.N1/rdf
</pre>
h3. 5.3 Processing trigger script

In section 1.2, we have seen that each +*job*+ must have a processing trigger, which is specified in the <streamingExecutable> element of the +*job template*+. In our example, this executable shall be a shell script:
<pre><code class="ruby">
# FIRST OF ALL, LOAD CIOP INCLUDES
source ${ciop_job_include}

# If you want to have complete debug information during implementation
ciop-enable-debug

# All return codes are predefined
SUCCESS=0
ERR_BADARG=2
ERR_MISSING_PREPROC_BIN=3
ERR_MISSING_NEAR_PARAM=4
ERR_MISSING_RADIUS_PARAM=5
ERR_MISSING_MASTER_PARAM=6
ERR_MISSING_SAT_PARAM=7
ERR_MISSING_FD1_PARAM=8
ERR_MISSING_NUMPATCH_PARAM=9
ERR_INPUT_DATA_COPY=18
ERR_PREPROC_ERROR=19
ERR_NOOUTPUT=20
DEBUG_EXIT=66

# This function handles the exit of the executable
# with the corresponding error code and returns a short message
# with the termination reason. It is important to have a synthetic and
# brief message because it will be raised to many upper levels of the
# computing framework, up to the user interface
function cleanExit ()
{
   local retval=$?
   local msg=""
   case "$retval" in
		$SUCCESS)
			msg="Processing successfully concluded";;
		$ERR_BADARG)
			msg="function checklibs called with non-directory parameter, returning $retval";;
		$ERR_MISSING_PREPROC_BIN)
			msg="binary 'pre_proc' not found in path, returning $retval";;
		$ERR_MISSING_NEAR_PARAM)
			msg="parameter 'near_range' missing or empty, returning $retval";;
		$ERR_MISSING_RADIUS_PARAM)
			msg="parameter 'earth_radius' missing or empty, returning $retval";;
		$ERR_MISSING_MASTER_PARAM)
			msg="parameter 'master' missing or empty, returning $retval";;
		$ERR_MISSING_FD1_PARAM)
			msg="parameter 'fd1' missing or empty, returning $retval";;
		$ERR_MISSING_SAT_PARAM)
			msg="parameter 'sat' missing or empty, returning $retval";;
		$ERR_MISSING_NUMPATCH_PARAM)
			msg="parameter 'num_patch' missing or empty, returning $retval";;
		$ERR_INPUT_DATA_COPY)
			msg="Unable to retrieve an input file";;
		$ERR_PREPROC_ERROR)
			msg="Error during processing, aborting task [$retval]";;
		$ERR_NOOUTPUT)
			msg="No output results";;
		$DEBUG_EXIT)
			msg="Breaking at debug exit";;
		*)
			msg="Unknown error";;
   esac
   [ "$retval" != 0 ] && ciop-log "ERROR" "Error $retval - $msg, processing aborted" || ciop-log "INFO" "$msg"
   exit "$retval"
}

# trap the exit signal to exit properly
trap cleanExit EXIT

# Use ciop-log to log messages at different levels: INFO, WARN, DEBUG
ciop-log "DEBUG" '##########################################################'
ciop-log "DEBUG" '# Set of useful environment variables                    #'
ciop-log "DEBUG" '##########################################################'
ciop-log "DEBUG" "TMPDIR                  = $TMPDIR"                  # The temporary directory for the task
ciop-log "DEBUG" "_JOB_ID                 = ${_JOB_ID}"               # The job id
ciop-log "DEBUG" "_JOB_LOCAL_DIR          = ${_JOB_LOCAL_DIR}"        # The job-specific shared scratch space
ciop-log "DEBUG" "_TASK_ID                = ${_TASK_ID}"              # The task id
ciop-log "DEBUG" "_TASK_LOCAL_DIR         = ${_TASK_LOCAL_DIR}"       # The task-specific scratch space
ciop-log "DEBUG" "_TASK_NUM               = ${_TASK_NUM}"             # The number of tasks
ciop-log "DEBUG" "_TASK_INDEX             = ${_TASK_INDEX}"           # The id of the task within the job

# Get the processing trigger directory to link binaries and libraries
PREPROC_BASE_DIR=`dirname $0`
export PATH=$PREPROC_BASE_DIR/bin:$PATH
export LD_LIBRARY_PATH=$PREPROC_BASE_DIR/lib:$LD_LIBRARY_PATH

${_CIOP_APPLICATION_PATH}/GMTSAR/gmtsar_config

# Test that all the necessary binaries are accessible;
# if not, exit with the corresponding error.
PREPROC_BIN=`which pre_proc_batch.csh`
[ -z "$PREPROC_BIN" ] && exit $ERR_MISSING_PREPROC_BIN

# Processor environment:
# definition and creation of the input/output directories
OUTPUTDIR="$_TASK_LOCAL_DIR/output"		# results directory
INPUTDIR="$_TASK_LOCAL_DIR/input"		# data input directory
MASTERDIR="$_TASK_LOCAL_DIR/master"		# master data directory
mkdir -p $OUTPUTDIR $INPUTDIR $MASTERDIR

# Processing variables:
# retrieve the job parameters
NUMPATCH=`ciop-getparam num_patches`
[ $? != 0 ] && exit $ERR_MISSING_NUMPATCH_PARAM
NEAR=`ciop-getparam near_range`
[ $? != 0 ] && exit $ERR_MISSING_NEAR_PARAM
RADIUS=`ciop-getparam earth_radius`
[ $? != 0 ] && exit $ERR_MISSING_RADIUS_PARAM
FD1=`ciop-getparam fd1`
[ $? != 0 ] && exit $ERR_MISSING_FD1_PARAM
SAT=`ciop-getparam SAT`
[ $? != 0 ] && exit $ERR_MISSING_SAT_PARAM
MASTER=`ciop-getparam master`
[ $? != 0 ] && exit $ERR_MISSING_MASTER_PARAM
STOPONERROR=`ciop-getparam stop_on_error`
[ $? != 0 ] && STOPONERROR=false

# Create the batch.config parameter file
cat >${_TASK_LOCAL_DIR}/batch.config << EOF
num_patches = $NUMPATCH
earth_radius = $RADIUS
near_range = $NEAR
fd1 = $FD1
EOF

# the parameter 'master' is a reference to a data file;
# we need to copy it for the rest of our processing.
# This parameter is at job level, so if another parallel task on the same
# computing resource has already copied it, we save a useless copy
masterFile=`ciop-copy -c -o "$MASTERDIR" -r 10 "$MASTER"`
[[ -s $masterFile ]] || {
	ciop-log "ERROR" "Unable to retrieve master input at $MASTER" ; exit $ERR_INPUT_DATA_COPY ;
}
ciop-log "INFO" "Retrieved master input at $masterFile"

echo $masterFile >${_TASK_LOCAL_DIR}/data.in

# Begin the processing loop:
# read the input line by line into the url variable
while read url
do
	# First we copy the data into the input directory
	ciop-log "INFO" "Copying data $url" "preproc"
	# ciop-copy $url into $INPUTDIR, retrying 10 times in case of failure;
	# the local path of the copied file is returned in the $tmpFile variable
	tmpFile=`ciop-copy -o "$INPUTDIR" -r 10 "$url"`
	[[ -s $tmpFile ]] || {
		ciop-log "ERROR" "Unable to retrieve inputfile at $url" ;
		[[ $STOPONERROR == true ]] && exit $ERR_INPUT_DATA_COPY ;
	}
	ciop-log "INFO" "Retrieved inputfile $tmpFile"

	echo $tmpFile >>${_TASK_LOCAL_DIR}/data.in

done

# here we start the processing of the stack of data
ciop-log "INFO" "Processing stack of data" "preproc"
ciop-log "DEBUG" "$PREPROC_BIN $SAT data.in batch.config"

$PREPROC_BIN $SAT ${_TASK_LOCAL_DIR}/data.in ${_TASK_LOCAL_DIR}/batch.config >$OUTPUTDIR/preproc.log 2>&1
rcpp=$?

if [ "$rcpp" != 0 ]; then
	ciop-log "ERROR" "$PREPROC_BIN failed to process, return code $rcpp" "preproc"
	cat $OUTPUTDIR/preproc.log >&2
	exit $ERR_PREPROC_ERROR
fi

ciop-log "INFO" "Processing complete" "preproc"

# The results are "published" for the next job.
# Practically, the output shall be published to a job shared space
# and the directory referenced as a URL for the next job
ciop-publish $OUTPUTDIR/

exit 0
</code></pre>
> /\ !!! Keep in mind that the execution shall take place in a non-interactive environment, so +error catching+ and +logging+ are very important. They enforce the robustness of your application and avoid losing time later in debugging !!!

Here is a summary of the framework tools used in this script, with their online help where available:
* *source ${ciop_job_include}* --> includes the library providing functions such as ciop-log, ciop-enable-debug and ciop-getparam
* *ciop-enable-debug* --> enables the DEBUG level of the logging system; otherwise only INFO and WARN messages are displayed
* *ciop-log* --> logs messages both in the interactive computing framework and in the processing stdout/err files. [[ciop-log CLI reference|ciop-log usage]]
* *ciop-getparam* --> retrieves a job parameter. [[ciop-getparam CLI reference|ciop-getparam usage]]
* *ciop-catquery* --> queries the EO catalogue of the sandbox. [[ciop-catquery CLI reference|ciop-catquery usage]]
* *ciop-copy* --> copies a remote file to a local directory. [[ciop-copy CLI reference|ciop-copy usage]]
* *ciop-publish* --> copies *+task+* result files into the +*workflow*+ shared space. [[ciop-publish CLI reference|ciop-publish usage]]
h3. 5.4 Simulating a single job of the workflow

*ciop-simjob* --> simulates the execution of one processing +*job*+ of the +*workflow*+. [[ciop-simjob CLI reference|ciop-simjob usage]]

We will use it to test the first processing block of GMTSAR:

<pre><code class="ruby">ciop-simjob -f PreProc</code></pre>

This will output to _stdout_ the URL of the Hadoop Map/Reduce job. Open the link to check if the processing is correctly executed.
The command will show the progress messages:

<pre>
Deleted hdfs://sb-10-10-14-24.lab14.sandbox.ciop.int:8020/tmp/sandbox/sample/input.0
rmr: cannot remove /tmp/sandbox/sample/PreProc/logs: No such file or directory.
mkdir: cannot create directory /tmp/sandbox/sample/PreProc: File exists
Deleted hdfs://sb-10-10-14-24.lab14.sandbox.ciop.int:8020/tmp/sandbox/sample/workflow-params.xml
Submitting job 25764 ...
12/11/21 12:26:56 WARN streaming.StreamJob: -jobconf option is deprecated, please use -D instead.
packageJobJar: [/var/lib/hadoop-0.20/cache/emathot/hadoop-unjar5187515757952179540/] [] /tmp/streamjob7738227981987732817.jar tmpDir=null
12/11/21 12:26:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/11/21 12:26:58 WARN snappy.LoadSnappy: Snappy native library not loaded
12/11/21 12:26:58 INFO mapred.FileInputFormat: Total input paths to process : 1
12/11/21 12:26:58 INFO streaming.StreamJob: getLocalDirs(): [/var/lib/hadoop-0.20/cache/emathot/mapred/local]
12/11/21 12:26:58 INFO streaming.StreamJob: Running job: job_201211101342_0045
12/11/21 12:26:58 INFO streaming.StreamJob: To kill this job, run:
12/11/21 12:26:58 INFO streaming.StreamJob: /usr/lib/hadoop-0.20/bin/hadoop job  -Dmapred.job.tracker=sb-10-10-14-24.lab14.sandbox.ciop.int:8021 -kill job_201211101342_0045
12/11/21 12:26:58 INFO streaming.StreamJob: Tracking URL: http://sb-10-10-14-24.lab14.sandbox.ciop.int:50030/jobdetails.jsp?jobid=job_201211101342_0045
12/11/21 12:26:59 INFO streaming.StreamJob:  map 0%  reduce 0%
12/11/21 12:27:06 INFO streaming.StreamJob:  map 17%  reduce 0%
12/11/21 12:27:07 INFO streaming.StreamJob:  map 33%  reduce 0%
12/11/21 12:27:13 INFO streaming.StreamJob:  map 67%  reduce 0%
12/11/21 12:27:18 INFO streaming.StreamJob:  map 83%  reduce 0%
12/11/21 12:27:19 INFO streaming.StreamJob:  map 100%  reduce 0%
12/11/21 12:27:24 INFO streaming.StreamJob:  map 100%  reduce 33%
12/11/21 12:27:27 INFO streaming.StreamJob:  map 100%  reduce 100%
12/11/21 12:28:02 INFO streaming.StreamJob:  map 100%  reduce 0%
12/11/21 12:28:05 INFO streaming.StreamJob:  map 100%  reduce 100%
12/11/21 12:28:05 INFO streaming.StreamJob: To kill this job, run:
12/11/21 12:28:05 INFO streaming.StreamJob: /usr/lib/hadoop-0.20/bin/hadoop job  -Dmapred.job.tracker=sb-10-10-14-24.lab14.sandbox.ciop.int:8021 -kill job_201211101342_0045
12/11/21 12:28:05 INFO streaming.StreamJob: Tracking URL: http://sb-10-10-14-24.lab14.sandbox.ciop.int:50030/jobdetails.jsp?jobid=job_201211101342_0045
12/11/21 12:28:05 ERROR streaming.StreamJob: Job not successful. Error: NA
12/11/21 12:28:05 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
[INFO   ][log] All data, output and logs available at /share//tmp/sandbox/sample/PreProc
</pre>
At this point you can use your browser to open the _*Tracking URL*_:

!https://ciop.eo.esa.int/attachments/46/single_job_debug_1.png!

This is a single-thread job; see the reduce line in the table.
You can click on the kill job link in the reduce line. The page shows the task attempts, usually one.

!https://ciop.eo.esa.int/attachments/47/single_job_debug_2.png!

In this case, the job ended with exit code 19.

Click on the task link; the same info as before is shown, but with more details (e.g. the log in the last column).

You can click on _*all*_.

_stdout_ and _stderr_ appear, and you can debug the processing job with this information.
h3. 5.6 -- Simulating a complete workflow -- (section under revision)

h2. 6. -- Application deployment -- (section under revision)

This section describes the procedure to deploy your application once it is ready and successfully integrated.

h3. 6.1 Deploy as a service

h3. 6.2 Test the application in pre-operation

h3. 6.3 Plan the production