How to combine data from two API endpoints

Recently I had to combine data from two different API endpoints into one result. One endpoint returns a list of entities, and the second returns the IDs that meet a specific condition. My goal was to produce the list of entities with a final column containing a Yes/No flag derived from the second endpoint.

Here is my solution for combining the two data sources into one input schema and processing them in the JS mapper using the map and find functions.

  • The first step is a REST API connector reading the endpoint with the list of entities.
  • The second step is another REST API connector reading the data needed for the flagging.
  • The third step is a JS mapper combining the data together (sample payloads are sketched after this list).
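
To keep the walkthrough concrete, here is roughly the shape of the data I was working with. The variable names connectorsResponse and backgroundResponse are just illustrative, and the fields match the script further down; your endpoints will of course differ:

// Assumed shape of the step #1 response (the entity list)
var connectorsResponse = {
  Connectors: [
    { Name: 'SQL Server', Description: 'Reads SQL data', Guid: 'a1', Versions: [{ Version: '2.1' }] },
    { Name: 'Salesforce', Description: 'Reads CRM data', Guid: 'b2', Versions: [{ Version: '1.0' }] }
  ]
};

// Assumed shape of the step #2 response (GUIDs that meet the condition)
var backgroundResponse = [
  { Guid: 'a1' }
];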

In order to pass the results of step #1 and step #2 into step #3, I have to build up the data schema flow. This can be done by defining an input schema that is not used anywhere in the step or connector configuration, but that creates the relation we need:

In this case I’m using the simple schema “Text” and linking it to any string field within the output of step #1. This allows me to use the following mapping for the JS mapper step:
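
Based on this mapping, the JS mapper should receive a single input row carrying both API outputs as JSON strings. This is a minimal sketch of what I assume inputData looks like; the field names Connectors and BackgroundFlag come from my mapping, so substitute your own:

var inputData = [
  {
    // Stringified output of step #1 (the entity list)
    Connectors: '{"Connectors":[{"Name":"SQL Server","Description":"Reads SQL data","Guid":"a1","Versions":[{"Version":"2.1"}]}]}',
    // Stringified output of step #2 (the flag source)
    BackgroundFlag: '[{"Guid":"a1"}]'
  }
];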

This way I was able to pass the text output of the first API as well as the output of the second API. Here is the script that combines the data:

// Both inputs arrive as stringified JSON, so parse them first
var inDataConn = JSON.parse(inputData[0].Connectors);
var inDataBack = JSON.parse(inputData[0].BackgroundFlag);

// For each connector, look up its Guid in the second data set;
// the Yes/No flag depends on whether a match exists
var outData = inDataConn.Connectors.map(function(conn) {
    var match = inDataBack.find(obj => obj.Guid === conn.Guid);
    return {
      Name: conn.Name,
      Description: conn.Description,
      Guid: conn.Guid,
      LastVersion: conn.Versions[0].Version,
      Background: match ? 'Yes' : 'No'
    };
});

return outData;
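
One thing to watch: find inside map does a linear scan for every entity, so with large data sets this becomes O(n·m). As a small variation of my own (assuming the JS mapper engine supports the ES2015 Set; a plain object used as a lookup works too), you can precompute the flagged GUIDs:

// Precompute the set of flagged GUIDs for O(1) lookups
var flagged = new Set(inDataBack.map(function(obj) { return obj.Guid; }));

var outData = inDataConn.Connectors.map(function(conn) {
    return {
      Name: conn.Name,
      Description: conn.Description,
      Guid: conn.Guid,
      LastVersion: conn.Versions[0].Version,
      Background: flagged.has(conn.Guid) ? 'Yes' : 'No'
    };
});

return outData;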

I hope this helps you link the data schema when needed for further processing. Keep in mind that all data coming as input into the current step has to come from the same data branch. If a connector doesn’t use any input, or runs in “bulk” processing mode, it always creates a new data branch.