Herve Caumont, 2013-06-19 18:05
Understanding the Sandbox¶
The Sandbox dashboard¶
The Sandbox dashboard is a web user interface exposing the Sandbox information and functionality.
The Sandbox dashboard is accessible at the address:
http://<sandbox address>/dashboard
The Sandbox dashboard main tabs are described below.
Dashboard¶
The Dashboard tab contains information about the Sandbox itself:
- Sandbox Information:
- Name
- Type
- Virtual Machine ID
- Status
- Owner
- Sandbox Controls
- ...
- Security Information
- Get private key
- Network Information
- Host Name
- IP address
- Mac Address
Application¶
The Application dashboard tab allows editing the Application Descriptor file as you would in a graphical XML editor (the editor features code completion).
The Save button will save the changes to the Application Descriptor file.
The Reload button will reload the Application Descriptor file, e.g. if it has been edited in the Sandbox shell.
The Deploy Service button packages the application to be deployed as a processing Service on the Web Portal (requires support from the Web Portal team).
The Start Workflow Sample button triggers the execution of the workflow (it is equivalent to executing the ciop-simwf command from the Sandbox shell).
Data¶
Compute¶
The Compute dashboard tab shows the Sandbox computing resources information and health:
- Sandbox total nodes
- Sandbox Service State
- CPU Usage
- Memory Usage
- Home Disk Usage
- Application Disk Usage
Support¶
The Support dashboard tab allows submitting support issues and listing the submitted issues.
The issues are managed on the Redmine support portal.
The Sandbox filesystems¶
In the context of the application life cycle in CIOP, the Sandbox has three filesystems (or directories):
- /home/<user> that we refer to as HOME
- /application that we refer to as APPLICATION
- /share that we refer to as SHARE
HOME directory¶
A user's home directory is intended to contain that user's files; including text documents, music, pictures or videos, etc. It may also include their configuration files of preferred settings for any software they have used there and might have tailored to their liking: web browser bookmarks, favorite desktop wallpaper and themes, passwords to any external services accessed via a given software, etc. The user can install executable software in this directory, but it will only be available to users with permission to this directory. The home directory can be organized further with the use of sub-directories.
Source: Wikipedia
As such, in CIOP, the HOME is used to store the user's files. It can be used to store source files (the compiled programs would then go to APPLICATION).
At job or workflow execution time, CIOP uses a system user to execute the application. This system user cannot read files in HOME.
When the application is run on the CIOP Runtime Environment, the HOME directory is not available in any of the computing nodes.
APPLICATION filesystem¶
The APPLICATION filesystem contains all the files required to run the application.
The APPLICATION filesystem is available on the Sandbox as /application.
Whenever an application wrapper script needs the APPLICATION value (/application), it should use the variable $_CIOP_APPLICATION_PATH, for example:
export BEAM_HOME=$_CIOP_APPLICATION_PATH/common/beam-4.11
The APPLICATION contains:
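A wrapper script can derive all of its paths from this variable. A minimal sketch (on a real Sandbox $_CIOP_APPLICATION_PATH is set by the CIOP runtime; the fallback value below is only so the snippet is self-contained):

```shell
# Wrapper-script fragment (sketch). On a real Sandbox,
# $_CIOP_APPLICATION_PATH is set by the CIOP runtime; the default
# below is only for illustration.
export _CIOP_APPLICATION_PATH=${_CIOP_APPLICATION_PATH:-/application}
export BEAM_HOME=$_CIOP_APPLICATION_PATH/common/beam-4.11
export PATH=$BEAM_HOME/bin:$PATH
```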
- the Application Descriptor File, named application.xml and described here: Application descriptor
- a folder for each job template
- the streaming executable, a script that deals with the stdin managed by CIOP (e.g. EO data URLs to be passed to ciop-copy). There isn't a defined naming convention although it is often called run.
Tip: The streaming executable will read its inputs via stdin managed by the CIOP Hadoop Map Reduce streaming underlying layer
- a set of folders such as:
- /application/<job template name>/bin standing for "binaries" and containing the job utilities needed by the job wrapper script
- /application/<job template name>/etc containing job-wide configuration files
- /application/<job template name>/lib containing the job libraries
- ...
There aren't any particular rules for the folders in the job template folder.
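The streaming executable described above can be sketched as a simple stdin loop (written here as a shell function so it can be exercised directly; the loop body is hypothetical, a real job would typically stage each input with ciop-copy):

```shell
# Minimal sketch of a streaming executable (often named "run"). It
# reads one input reference per line from stdin, as delivered by the
# CIOP Hadoop MapReduce streaming layer, and processes each line.
run() {
  while read -r input; do
    # A real job would stage and process the product here,
    # e.g. with: ciop-copy "$input"
    echo "processing: $input"
  done
}
# Feed two hypothetical EO data URLs through the function:
printf '%s\n' 'http://host/product_1' 'http://host/product_2' | run
```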
The APPLICATION of a workflow with two jobs can then be represented as:
/application/
application.xml
/job_template_1
run
/bin
/etc
/job_template_2
run
/bin
/lib
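The layout above can be reproduced as follows; this sketch scaffolds it in a scratch directory (on the Sandbox the real root is /application, and the job template names are the hypothetical ones from the listing):

```shell
# Scaffold the two-job APPLICATION layout shown above in a scratch
# directory; on the Sandbox the real root would be /application.
APP=$(mktemp -d)
touch "$APP/application.xml"
mkdir -p "$APP/job_template_1/bin" "$APP/job_template_1/etc"
mkdir -p "$APP/job_template_2/bin" "$APP/job_template_2/lib"
touch "$APP/job_template_1/run" "$APP/job_template_2/run"
chmod +x "$APP/job_template_1/run" "$APP/job_template_2/run"
```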
SHARE filesystem¶
The SHARE filesystem is the Sandbox distributed filesystem mount point. It is an HDFS filesystem used to store the application's job outputs generated by the execution of ciop-simjob and/or ciop-simwf.
The SHARE filesystem is available on the Sandbox as /share, and the HDFS distributed filesystem access point is /tmp; thus, on the Sandbox, /share/tmp is the root of the distributed filesystem.
SHARE for ciop-simjob¶
When ciop-simjob is invoked to run a node of the workflow, the outputs are found in:
/share/tmp/sandbox/<workflow name>/<node name>
A job can be executed several times, but the results of the previous execution are deleted at each run.
Tip: the workflow and node names are found in the Application Descriptor File, named application.xml and described here: Application descriptor
Tip: ciop-simjob -n will list the workflow node name(s), check the ciop-simjob reference page here: ciop-simjob
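Putting the two tips together, the output location can be built from the workflow and node names; the names below are hypothetical:

```shell
# Hypothetical names; on a real Sandbox take them from
# application.xml, or list the node names with: ciop-simjob -n
WORKFLOW=my_workflow
NODE=my_node
OUTPUTS=/share/tmp/sandbox/$WORKFLOW/$NODE
echo "$OUTPUTS"
# On the Sandbox one would then inspect the results with: ls "$OUTPUTS"
```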
SHARE for ciop-simwf¶
When ciop-simwf is invoked to run the complete application workflow, the outputs are found in a dedicated folder under SHARE:
/share/tmp/sandbox/run/<run identifier>/<node name>/data
Unlike ciop-simjob, ciop-simwf keeps all workflow execution runs. This allows, for example, comparing the results obtained with different sets of parameters.
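One way to compare two runs is to diff the same node's data folders. The sketch below simulates the layout in a scratch directory (on the Sandbox the real root is /share/tmp/sandbox/run, the run identifiers are assigned at execution time, and the node name and file contents here are hypothetical):

```shell
# Simulated SHARE layout for two workflow runs of the same node.
RUNS=$(mktemp -d)   # stands in for /share/tmp/sandbox/run
mkdir -p "$RUNS/run_a/my_node/data" "$RUNS/run_b/my_node/data"
echo "param=10" > "$RUNS/run_a/my_node/data/result.txt"
echo "param=20" > "$RUNS/run_b/my_node/data/result.txt"
# Compare the outputs of the two runs for the node:
diff -r "$RUNS/run_a/my_node/data" "$RUNS/run_b/my_node/data" || true
```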
Tip: check the Application descriptor page to define default parameter values and how to override these in the workflow