Task step input processing setting: Row by row vs. Bulk

Hey there,

I’ve gotten somewhat lost with the task step input processing setting → Row by row and Bulk.
What is the difference between these two settings, and how much does this setting actually matter?
I recently built some Excel files from external database data, serialized them, and wanted to write them out to files. This failed when the file writer was set to Row by row, but worked fine once I switched to Bulk.
Does anyone know the exact impact of this setting on the output?
Thanks in advance.

Hi there,

The input processing type, bulk vs. row-by-row, makes a big difference in how a task processes data.

  • Row-by-row: in the majority of cases you will use this processing type. If the step receives 4 rows on input, it is executed once for each row, i.e. 4 times in this case. The output data produced by the step are appended to the data schema coming from the previous step, so row-by-row lets you build up a data schema where each step adds some additional data.

  • Bulk: is used for large data manipulations, such as SQL database connectors. It is also useful in combination with the JS mapper, where you can process all data rows in one script. With bulk processing, the produced data are placed in a single row of the task data schema.
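To make the difference concrete, here is a minimal sketch in plain JavaScript (hypothetical helper names, not the platform's actual API) contrasting the two modes: row-by-row runs the step once per input row and merges its output into each row, while bulk runs the step exactly once over the whole input array.

```javascript
// Row-by-row: the step function is called once per input row, and its
// output columns are appended to that row's data (mirroring how row-by-row
// extends the data schema of the previous step).
function runRowByRow(rows, step) {
  return rows.map(row => ({ ...row, ...step(row) }));
}

// Bulk: the step function is called exactly once with the whole input
// array, so its produced data end up in a single output row.
function runBulk(rows, step) {
  return [step(rows)];
}

const input = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }];

// Row-by-row: the step executes 4 times, once per row.
const perRow = runRowByRow(input, row => ({ doubled: row.id * 2 }));
// perRow → [{ id: 1, doubled: 2 }, { id: 2, doubled: 4 },
//           { id: 3, doubled: 6 }, { id: 4, doubled: 8 }]

// Bulk: the step executes once over all rows, e.g. a single aggregate.
const bulk = runBulk(input, rows => ({
  total: rows.reduce((sum, r) => sum + r.id, 0),
}));
// bulk → [{ total: 10 }]
```

Note how the row-by-row output keeps the original `id` column alongside the new one, whereas the bulk output is one row whose shape is entirely defined by the step.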

Some connectors support only one processing type, row-by-row or bulk, while others allow both. Visit our Help center > Connector academy for more details.

Regards, Tomas.
