3 Replies Latest reply on Sep 7, 2007 7:24 AM by (Tom_Warfield)

    one file - multi output processing

    Andreas Lenzinger
  I have the following issue: I have an input file that I am able to split into multiple output files. But the next job step only picks up JfServer.TFA and creates the PCL output. The files JfServer1.TFA and JfServer2.TFA are sitting in the data folder and don't get processed.
      How can I get JetForm to process these files too?

      This is how my config looks:

      # **** Job table ****
      !f SPOTII * c:\JetForm\Central\Server\forms\SPOTII.tdf * 1 T JFTRANS * A C *
      !f SPOTII * c:\JetForm\Central\Server\forms\SPOTII.mdf * 1 T PCLFNSYS A * C *

      # **** Task table ****
      !x PCLFNSYS * JFMERGE "@MDFName @InFile -l -apr@PreambleName -all@LogFileName -asl0 -amq@ManagedMem -ams@MSTName -m@Macro#.@LoadFlag -zc:\jetform\central\server\Postbox,u.pcl @OtherJobTokens. -aii@IniFileName -anf\169 -anl\174" *
        • 1. Re: one file - multi output processing
          Level 1
          What you describe is normal operating procedure for the software. It expects a single output file from one step to be passed to the next step, named JFSERVER.TFA, JFSERVER.TFB, etc. The third character of the "extension" part of the file name is what is specified within the job definition; the base name itself is fixed.

          I don't know how you are getting multiple output files from JFTRANS (I use it but I'm not an expert) but I don't think you can get the software to process (pass along to the next step) the files as they are currently named. At best, I'd expect you to have to use the standard naming (JFSERVER.TF_) and then run the PCLFNSYS task multiple times within the same job with each one referencing the appropriate JFSERVER.TF_ file as its input. Of course, this requires a fixed number of output files, not a variable number.

          Other than that, the files would need to be named with the DAT extension and include the appropriate ^job statement and form references so that the software would see them as "normal" files to be processed.
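
          For example, a split file renamed with a .dat extension might start something like this (a hedged sketch: the job name SPOTII comes from the poster's config, while the ^form and ^field lines are illustrative assumptions about a field-nominated data file, not taken from the poster's actual data):

          ```
          ^job SPOTII
          ^form SPOTII.mdf
          ^field CUSTNAME
          Example Customer
          ```

          Dropped into Central's collector directory, a file like this would be picked up as a new job in its own right rather than as an intermediate file of the original job.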
          • 2. Re: one file - multi output processing
            Andreas Lenzinger Level 1
            Reading the "Datatrans" manual, I found the following:
            Creating Multiple Output Files
            Typically, Transformation Agent will process the input and create one output file. However, by defining file boundaries, you can direct Transformation Agent to create multiple output files, each containing a particular record from the input file. This feature applies to data files in overlay and fixed record formats. It is useful in circumstances where you want to divide the input file into a separate file for each record, for example:
            - Each customer record is faxed to a different fax number.

            You define a file boundary based on a search string. When Transformation Agent encounters the search string within the input file, it directs all subsequent output to a different output file until it encounters another file boundary. Transformation Agent generates the output files with the same file name as the input file and then appends a sequential number to the file name. For each output file, Transformation Agent calls the head script, if one exists, before any output begins, and the tail script, if one exists, after all of the data is processed.

            All of the above is working, and I get the multiple files JFSERVER.TFA, JFSERVER1.TFA, JFSERVER2.TFA, and so on, but the next job step does not pick them up.
            • 3. Re: one file - multi output processing
              Level 1
              OK. I found that in the manual, too. Now I know that multiple output files are possible.

              While I was looking through my manuals, in the "Advanced Transformations" manual I found the following:

              >Transformation Agent generates the output files with the same file name as the input file, and then appends a sequential number to the filename.

              >Specify the extension in the Task Table entry of the Central Job Management Database or on the ^job command. For example, if the input file name is test.dat, and the Task Table entry specifies the output file extension as .out, Transformation Agent names the output files: test.out, test1.out, test2.out, and so on.

              >Note: If the output files will be processed by another task using Central, the file extension should be .dat, which is the default file extension for Central.

              That "note" makes me think that the files are not expected to be processed within the same job but by different jobs. This would require the scripting to produce the necessary ^job statement at the beginning of each file so that the appropriate task for each output file would be run (as I previously stated).

              The alternative would be to define each of the necessary tasks for each of the output files within the same job, "hard coding" the input file name instead of using the @InFile variable reference.
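
              As a sketch of that alternative (hedged: the task names PCLFN1 and PCLFN2 and the data-folder path are assumptions, not from the poster's setup; the JFMERGE options are copied from the original PCLFNSYS entry), the task table could define one entry per split file with the input file hard-coded in place of @InFile:

              ```
              # **** Task table ****
              !x PCLFN1 * JFMERGE "@MDFName c:\JetForm\Central\Server\Data\JfServer1.TFA -l -apr@PreambleName -all@LogFileName -asl0 -amq@ManagedMem -ams@MSTName -m@Macro#.@LoadFlag -zc:\jetform\central\server\Postbox,u.pcl @OtherJobTokens. -aii@IniFileName -anf\169 -anl\174" *
              !x PCLFN2 * JFMERGE "@MDFName c:\JetForm\Central\Server\Data\JfServer2.TFA -l -apr@PreambleName -all@LogFileName -asl0 -amq@ManagedMem -ams@MSTName -m@Macro#.@LoadFlag -zc:\jetform\central\server\Postbox,u.pcl @OtherJobTokens. -aii@IniFileName -anf\169 -anl\174" *
              ```

              Presumably the job table would also need a matching !f line per task so each one runs as a step of the same job; and, as noted above, this only works for a fixed, known number of split files.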