Mudit Goel, Edward A. Lee, Xiaowen Xin
<p>PN Directors are natural candidates for managing workflows that require parallel processing on distributed computing systems. PN workflows are powerful because they have few restrictions. On the other hand, they can be very inefficient.</p>
<p>The Process Network (PN) Director is similar to the SDF Director in that it has no notion of time. Unlike the SDF Director, however, the PN Director does not statically calculate firing schedules. Instead, a PN workflow is driven by data availability: tokens are created on output ports whenever input tokens are available and the outputs can be calculated. Output tokens are passed to connected actors, where they are held in a buffer until the receiving actor has collected all required inputs and can fire. The PN Director finishes executing a workflow only when no actor anywhere in the workflow can produce new data tokens.</p>
<p>The same execution process that gives the PN Director its flexibility can also lead to unexpected results: a workflow may never terminate on its own because tokens are always being generated and made available to downstream actors, for example. If one actor fires at a much higher rate than another, a downstream actor's memory buffer may overflow, causing workflow execution to fail.</p>
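<p>The buffering behavior described above can be sketched in a few lines of plain Python. This is an illustrative model only, not Kepler or Ptolemy code: the <code>Channel</code> class below is a hypothetical stand-in for a PN communication channel with a hard capacity limit, and the loop shows how a source that fires much faster than its sink exhausts that capacity.</p>

```python
# Illustrative model of a bounded PN channel (not Kepler code).
# Writes beyond the capacity raise an error, mimicking the
# queue-overflow failure described above.
class Channel:
    def __init__(self, capacity):
        self.capacity = capacity
        self.tokens = []

    def put(self, token):
        if len(self.tokens) >= self.capacity:
            raise OverflowError("channel capacity exceeded")
        self.tokens.append(token)

    def get(self):
        # A downstream actor consumes the oldest buffered token.
        return self.tokens.pop(0)

# A source that fires 10 times while the sink never fires
# overflows a channel with capacity 4.
ch = Channel(capacity=4)
try:
    for i in range(10):  # source fires 10 times in a row
        ch.put(i)        # sink consumes nothing in between
except OverflowError as e:
    print("workflow failed:", e)
```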
<p>There are at least three ways for a PN model to terminate itself:
<ol>
<li>Have the model starve itself. Typically, a boolean switch is used.
See the PN OrderedMerge demo at
<code>ptolemy/domains/pn/demo/OrderedMerge/OrderedMerge.xml</code></li>
<li>Have the model call the Stop actor. See the PN RemoveNilTokens demo at
<code>ptolemy/domains/pn/demo/RemoveNilTokens/RemoveNilTokens.xml</code></li>
<li>Set the <i>firingCountLimit</i> actor
parameter to the number of iterations desired. Actors such as Ramp
extend LimitedFiringSource and have the <i>firingCountLimit</i> parameter.</li>
</ol></p>
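<p>The third strategy can be sketched generically: a source with a firing-count limit eventually stops producing tokens, starving the rest of the model so that execution ends. The Python below is an illustrative model of that idea, not Ptolemy's actual Ramp or LimitedFiringSource implementation.</p>

```python
# Sketch of the firingCountLimit idea (illustrative, not Ptolemy code):
# a Ramp-like source that fires only a fixed number of times.
def ramp(firing_count_limit, init=0, step=1):
    value = init
    for _ in range(firing_count_limit):
        yield value
        value += step

# A downstream actor fires once per available token; when the
# source is exhausted, no tokens remain and execution terminates.
outputs = [token * 2 for token in ramp(firing_count_limit=5)]
print(outputs)  # [0, 2, 4, 6, 8]
```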
<ul>
<li>The initial size of the queues for each communication channel. The value is an integer that defaults to 1. This is an advanced parameter that can usually be left at its default value.</li>
<li>The maximum size of the queues for each communication channel. The value is an integer that defaults to 65536. To specify unbounded queues, set the value to 0. This is an advanced parameter that can usually be left at its default value.</li>
</ul>
Yuhong Xiong
<p>A Composite actor is an aggregation of actors. It may have a local director that is responsible for executing the contained actors. A Composite actor with a local director is called an opaque actor. Composite actors do not require a local director; those with no local director "inherit" the director from the containing workflow and are called non-opaque.</p>
<p>To create a composite actor, drag and drop the Composite actor onto the Workflow canvas. Right-click the actor and select Open Actor from the drop-down menu. A new Kepler application window will open for designing the composite.</p>
Frankie Kwok, Chandrika Sivaramakrishnan, Jared Chase
$Id: GenericJobLauncher.xml 30999 2012-10-31 22:15:59Z jianwu $
<p>The GenericJobLauncher actor is a generic actor that can create, submit, and manage a job on a remote machine accessible through SSH. The user may choose to wait until the job has reached a specific status in the queue - for example, Running, Complete, or Not in Queue.</p>
<p>This actor is based on the JobCreator, JobManager, JobSubmitter, and JobStatus actors. It abstracts the actions of these actors and the control flow required to combine them in a job-launching workflow.</p>
<p>Five more parameters are configurable in 'expert' mode:
<ul>
<li>"job submit options": optional parameters to pass when submitting a job.</li>
<li>"binary path": the full path to the job manager commands on the target machine.</li>
<li>"executable file": the executable file's name parameter and port.</li>
<li>"use given workdir": if selected, use the value of the 'workdir' parameter directly and do not create a unique subdirectory.</li>
<li>"Use default fork script": set this flag if you want the actor to stage the default fork script.</li>
</ul>
To enable expert mode, double-click the actor, click the 'Preferences' button, and choose 'expert mode' in the dialog that appears.</p>
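<p>The "binary path" parameter is prefixed to the scheduler commands, which are constructed as <i>binPath/command</i> on the target machine. The sketch below illustrates that construction; the scheduler-to-command mapping here is an assumption for illustration (Kepler actually resolves schedulers via <code>org.kepler.job.JobSupport{scheduler}</code> classes), and <code>build_submit_command</code> is a hypothetical helper, not part of the actor.</p>

```python
import posixpath

# Illustrative mapping from scheduler name to its native submit
# command; in Kepler this comes from the JobSupport classes.
SUBMIT_COMMANDS = {
    "PBS": "qsub",
    "SGE": "qsub",
    "Condor": "condor_submit",
    "LSF": "bsub",
}

def build_submit_command(scheduler, bin_path, cmd_file):
    """Construct '<binPath>/<command> <cmdFile>' for the target machine.

    An empty bin_path means the command is expected on the PATH.
    """
    command = SUBMIT_COMMANDS[scheduler]
    if bin_path:
        command = posixpath.join(bin_path, command)
    return f"{command} {cmd_file}"

print(build_submit_command("PBS", "/opt/torque/bin", "job.pbs"))
# /opt/torque/bin/qsub job.pbs
```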
<ul>
<li>The submit file to be used at job submission. An absolute file path (or one relative to the current directory of the Java process) should be provided. The job file must be provided here, or its contents can be specified in cmdText.</li>
<li>Logging information about job status and any error messages.</li>
<li>The name of the job scheduler to be used. It should be a name for which a support class exists as org.kepler.job.JobSupport{scheduler}.class.</li>
<li>The machine to be used at job submission, in user@host:port format. If user is not provided, the local username will be used. If port is not provided, the default port 22 will be used. If the target is "local" or empty, all commands will be executed locally.</li>
<li>The text of the job specification. The job specification must be provided either in this parameter or as the file named in cmdFile.</li>
<li>One or more jobs that must successfully complete before this job can run.</li>
<li>The string array of local input files that will be copied to the working directory. Absolute path names, or paths relative to the current directory of the running Java virtual machine, should be provided.</li>
<li>Logging information from the job status query. Useful for informing the user about problems when a status query fails; on a successful query it also prints the job status and job id. This token can be used (delayed with a Sleep actor) to query the status repeatedly until the job is finished or aborted. This port is an output port of type Object.</li>
<li>Boolean flag indicating whether the job launch was successful.</li>
<li>The working directory in which the actual job submission command will be executed. It should be an absolute path or a relative one; in the latter case, on a remote machine the path is relative to the user's home directory (because ssh is used). By default, a new unique subdirectory is created within this workdir, based on the job id created by Kepler, and the job is run from that subdirectory. This behavior can be overridden by setting the parameter "use given workdir".</li>
<li>The string array of remote input files that will be copied to the working directory. Absolute path names, or paths relative to the user's home directory on the remote host, should be provided.</li>
<li>The submit file to be used at job submission. An absolute file path (or one relative to the current directory of the Java process) should be provided. The job file must be provided here, or its contents can be specified in cmdText.</li>
<li>By default, Kepler creates a unique subdirectory within workdir based on the job id it creates for the job, and the job is run from that subdirectory. Set this flag to true if you want the job to be run directly from workdir instead of a subdirectory.</li>
<li>The name of the job scheduler to be used. Multiple job schedulers are currently supported - Condor, PBS, SGE, Fork, NCCS, LoadLeveler, and LSF - and support can be extended to more. It should be a name for which a support class exists as org.kepler.job.JobSupport{scheduler}.class.</li>
<li>Wait until the job has reached a specific status. The available statuses are: any, wait, running, not in queue, and error.</li>
<li>The number of tasks for the job; used in a task-parallel job.</li>
<li>The executable file to be used at job submission. Absolute path names, or paths relative to the current directory of the running Java virtual machine, should be provided. If it is "", the file is assumed to already be at the remote site; otherwise the actor will look for it locally and stage it to the <i>workdir</i> before job submission.</li>
<li>Specifies whether the cmdFile is stored locally or on the remote target.</li>
<li>The machine to be used at job submission. It should be null, "", or "local" for the local machine, or [user@]host to denote a remote machine accessible with ssh.</li>
<li>The text of the job specification. The job specification must be provided either in this parameter or as the file named in cmdFile.</li>
<li>The string array of input files. Absolute path names, or paths relative to the current directory of the running Java virtual machine, should be provided.</li>
<li>Boolean flag indicating whether the default fork script should be staged. If a bin path is provided, the default script is uploaded to the bin path; otherwise it is uploaded to the working directory.</li>
<li>Amount of time (in seconds) to sleep between job status checks.</li>
<li>The path to the job manager commands on the target machine. Commands are constructed as <i>binPath/command</i> and must be executable that way.</li>
<li>The working directory in which the actual job submission command will be executed. It should be an absolute path or a relative one; in the latter case, on a remote machine the path is relative to the user's home directory (because ssh is used). By default, a new unique subdirectory is created within this workdir, based on the job id created by Kepler, and the job is run from that subdirectory. This behavior can be overridden by setting the parameter "use given workdir".</li>
<li>The string array of remote input files. Absolute path names, or paths relative to the user's home directory on the remote host, should be provided.</li>
</ul>
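<p>The wait-until-status behavior described above amounts to a polling loop: query the status, sleep, and repeat until the desired status (or an error) is reached. The sketch below illustrates that loop; <code>wait_for_status</code> and <code>check_status</code> are hypothetical stand-ins for the actor's internal status query, not Kepler APIs.</p>

```python
import time

def wait_for_status(check_status, wanted, sleep_seconds=1, max_checks=100):
    """Poll check_status() until it returns the wanted status or 'error',
    sleeping between checks as the sleep-between-status-checks
    parameter describes."""
    for _ in range(max_checks):
        status = check_status()
        if status == wanted or status == "error":
            return status
        time.sleep(sleep_seconds)
    raise TimeoutError("job did not reach status " + wanted)

# Simulated queue states for a job: waiting, then running, then finished.
states = iter(["wait", "wait", "running", "not in queue"])
print(wait_for_status(lambda: next(states), "not in queue", sleep_seconds=0))
# not in queue
```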