Vitas Laniauskas Greenville County School District Greenville, SC
Windows Server LSF 9.0.1.5, S3 Apps 9.0.1.5, PFI 9.0.1, LBI Landmark 10.0.4.7, LTM 10.1.0.6, IPA 10.0.3.10
Hi Vitas -
I can provide some guidance, but it's been a while since I've touched these processes. Unfortunately, I can't provide a copy of our script due to company policy.
This is not for the faint of heart - you'll want to involve someone with strong skills in your operating system's scripting language. Note that we are on AIX, so the details may differ if you are on Windows.
In short, we use the "dataimport" command to update and then start a PfiTrigger record. The dataimport command takes an xml file as input and inserts it into the async cycle so it performs just like a call from the application.
Prep: If you run the commands "dataimport -da {prodline} -layout PfiTrigger Update" and "dataimport -da {prodline} -layout PfiTrigger Start", they will dump out XML templates for the Update and Start actions. These templates are the centerpiece of our script.
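As a rough sketch of that prep step (assuming a POSIX shell; whether the templates land on stdout, as redirected here, or go straight to a file is a guess - check on your system):

```shell
# Dump the two PfiTrigger XML templates once, up front (run on the Landmark box).
# Wrapped in a function so the prodline is a parameter rather than hardcoded.
dump_templates() {
    prodline="$1"
    # Per the post, these dump the Update and Start layouts. The stdout
    # redirection is an assumption about how the output is delivered.
    dataimport -da "$prodline" -layout PfiTrigger Update > pfitrigger_update.xml || return 1
    dataimport -da "$prodline" -layout PfiTrigger Start  > pfitrigger_start.xml
}
```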
Script: Our script takes three parameters: 1) the PfiTrigger number that you want to trigger 2) the flow name 3) an expected runtime in minutes.
The first step is that the script appends a current datetime string to PARAM2 to form a unique work title, then builds an XML file from the template above, filling in the PfiTrigger value from PARAM1 and the PARAM2_datetime work title. NOTE: those are the only two fields included in our XML file - we excluded all the other fields.
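A minimal sketch of that first step, assuming a POSIX shell on AIX. The XML element names and the /tmp path below are placeholders of mine - copy the real layout from the template that dataimport dumps, keeping only the trigger key and WorkTitle fields:

```shell
# build_trigger_xml TRIGGER FLOWNAME -> writes the Update XML and prints
# the unique work title. Element names are assumptions; use the dumped template.
build_trigger_xml() {
    trigger="$1"                        # PARAM1: PfiTrigger number
    flow="$2"                           # PARAM2: flow name
    stamp=$(date +%Y%m%d%H%M%S)         # current datetime string
    worktitle="${flow}_${stamp}"        # unique WorkTitle for this run
    xmlfile="/tmp/pfitrigger_${trigger}.xml"

    cat > "$xmlfile" <<EOF
<PfiTrigger>
  <PfiTrigger>${trigger}</PfiTrigger>
  <WorkTitle>${worktitle}</WorkTitle>
</PfiTrigger>
EOF
    echo "$worktitle"
}
```

The script would capture the printed work title and use it both for the dataimport calls and for the later dbdisplay grep.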
The second step is that the script calls command "dataimport -da {prodline} -f {xmlfile} -action com.lawson.apps.pfi.PfiTrigger Update", which updates the PfiTrigger record with the new work title (so that the spawned PfiWorkunit has a unique WorkTitle).
The third step is that the script calls "dataimport -da {prodline} -f {xmlfile} -action com.lawson.apps.pfi.PfiTrigger Start" to actually start a new WorkUnit.
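Steps two and three then boil down to two dataimport invocations against that XML file (a sketch; the function wrapper and variable names are mine, the commands are as described above):

```shell
# trigger_flow PRODLINE XMLFILE -> runs the Update then the Start action.
trigger_flow() {
    prodline="$1"
    xmlfile="$2"
    # Step 2: rewrite the trigger's WorkTitle so the spawned PfiWorkunit
    # is uniquely identifiable. Bail out if the update fails.
    dataimport -da "$prodline" -f "$xmlfile" -action com.lawson.apps.pfi.PfiTrigger Update || return 1
    # Step 3: actually start a new WorkUnit from the retitled trigger.
    dataimport -da "$prodline" -f "$xmlfile" -action com.lawson.apps.pfi.PfiTrigger Start
}
```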
The last step is that the script monitors the spawned PfiWorkunit until it finishes. It does this in a "while" loop that calls "dbdisplay -h -F WorkTitle,Status {prodline} pfiworkunit|grep {PARAM2_datetime}". It repeats this check periodically (the wait time is derived from the expected runtime in PARAM3) until the PfiWorkunit record completes or fails.
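The monitoring loop could be sketched like this. The "Completed"/"Failed" status strings and the interval formula are assumptions of mine - match whatever your dbdisplay output actually shows:

```shell
# Site-specific piece: fetch the status line for one workunit by its
# unique work title, via the dbdisplay call described above.
workunit_status() {
    dbdisplay -h -F WorkTitle,Status "$PRODLINE" pfiworkunit | grep "$1"
}

# wait_for_workunit WORKTITLE EXPECTED_MINUTES -> 0 on completion, 1 on failure.
wait_for_workunit() {
    worktitle="$1"
    expected_min="$2"
    interval=$(( expected_min * 60 / 10 ))   # ~10 checks over the expected runtime
    [ "$interval" -lt 5 ] && interval=5      # but never hammer the database
    while :; do
        status=$(workunit_status "$worktitle")
        case "$status" in
            *Completed*) return 0 ;;         # status values are assumptions;
            *Failed*)    return 1 ;;         # match your dbdisplay output
        esac
        sleep "$interval"
    done
}
```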
WARNING: For long-running flows, we frequently have situations where the connection is lost between our scheduler system and our Landmark system. When this happens, the scheduler returns an error and we have to follow up manually. We have talked about separating the monitoring from the triggering, and scheduling the monitoring as a recurring job instead, but we haven't gone there yet.
PfiTrigger: For each flow that you want to schedule, you will need to create a PfiTrigger record (just like you do to manually run a flow or schedule a flow in Landmark). In our case, we create triggers specifically for our external scheduler because we have the script change the trigger title every time it runs for status tracking purposes.
Obviously there is lots of error handling and logging that we have included in our script, but that is something that your team will need to figure out for yourselves.
I hope this points you in the right direction.
Good Luck! Kelly