How to Parameterize a Dataset for an API Request Using Postman in Jenkins

Valliappan Thenappan
3 min read · Jul 3, 2020

As software engineers, we occasionally run into the use case of a batch job that fetches data from a resource via an API using a bulk dataset. The traditional approach is to write a simple shell script or a quick Python script that reads data from a CSV, frames the API request, and calls it repeatedly in a loop.

Is there a way to simplify this even more? The answer is ‘Yes’.

Postman has a tool called Collection Runner that lets you repeat a particular collection of requests 'n' times based on a data file. It is very useful for data-driven API testing. But suppose our dataset is pretty huge: why would anyone want to run an Electron-based app for a long period of time, eating up significant CPU resources? Thankfully, Postman has a command-line counterpart to the Collection Runner called 'newman', which consumes far less memory than the Postman UI.

So let's get into action. We want Jenkins to orchestrate this long-running job, so let's do some pre-setup.

Things we need:

Postman
Node.js (v10.x or later)
Jenkins
A text editor or Excel to create the CSV file

(1) Form the API request with parameters in Postman:
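In Postman, the parameters are written as `{{variable}}` placeholders that the data file will fill in on each iteration. For example, a parameterized GET request might look like this (the URL and variable names are hypothetical):

```
GET https://api.example.com/users?id={{userId}}&email={{email}}
```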

(2) Export this request and save it to a directory called ‘batchscript’.

export to json collection

(3) Prepare the CSV Datafile with Param name matching the request in step 1 and save it to the same directory as the exported collection:
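The CSV's header row must match the `{{variable}}` names used in the request; each subsequent row becomes one iteration of the run. A minimal example (column names hypothetical, matching the request above):

```csv
userId,email
1,a@example.com
2,b@example.com
```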

data.csv

(4) Next, we will prepare a dependency file that will make our life easier in the CI environment. Open a terminal, navigate to the 'batchscript' directory, and type:

npm init

Fill in the basic info. For most of the questions, you can just press Enter to accept the defaults.

Once done, we will add newman as a dev dependency with this command:

npm install newman --save-dev
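After this, the generated `package.json` should contain newman under `devDependencies`, so that a plain `npm install` in CI pulls it in. A sketch of what it might look like (name and version numbers will vary with your setup):

```json
{
  "name": "batchscript",
  "version": "1.0.0",
  "devDependencies": {
    "newman": "^5.1.2"
  }
}
```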

(5) The next step is to push this directory to Git.

(6) If you have made it this far, great! Next, we will create a freestyle Jenkins job with the repository from step 5 as the source repo.
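If you prefer a Pipeline job over freestyle, the same steps can be sketched in a Jenkinsfile (the repository URL is a placeholder):

```groovy
pipeline {
    agent any
    stages {
        stage('Run collection') {
            steps {
                git 'https://github.com/your-org/batchscript.git'
                sh 'npm install'
                sh 'newman run Medium_Api_Batch.postman_collection.json -d data.csv'
            }
        }
    }
}
```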

(7) In the 'Execute shell' part of the job, add these two lines:

npm install
newman run Medium_Api_Batch.postman_collection.json -d data.csv

Basically, we are instructing the CI job to install the dependencies and then invoke the newman client with the collection name and the data file.
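The newman CLI also accepts flags that are handy in CI; for instance, `--bail` stops the run on the first failure, and the JSON reporter writes a machine-readable summary (the report filename below is a placeholder):

```shell
newman run Medium_Api_Batch.postman_collection.json \
  -d data.csv \
  --bail \
  --reporters cli,json \
  --reporter-json-export newman-report.json
```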

(8) Save and run the job. Voila! You now have an API Collection Runner that can orchestrate bulk data files from here on. If you run into the same problem in the future, all you have to do is replace the request in the collection JSON and the CSV file.

But why use Postman for this? Because at the end of the run, you get a neat report of how many requests passed or failed, along with the iteration number, without writing any code for it.

Hope you liked this post. If you have any questions, please leave them in the comments section.
