
The ALF Agent

Don't know about our ALF project? Read about it here.

Introduction

In the planning stages of the Application Lifecycle Foundation (ALF) project, we discussed how to architect the individual agents that would live silently on each Application Lifecycle Management (ALM) server, waiting for instructions. Our first thought was a Java/Spring application running on a Tomcat web server to expose web services; that application would then execute the bash scripts used to configure each ALM service. However, we soon realized that this would be overkill, so we started looking into Node.js.

Background

Since the mid-1990s, JavaScript has helped create rich web applications by running on the client side in the browser. Node.js now allows us to run JavaScript on the server and interact with the operating system in ways that are impossible with standard browser-based JavaScript: we can work with the file system, communicate with other servers over the network, and read from and write to databases directly. Node.js is built on Google's V8 JavaScript engine, the same engine that runs in Google Chrome, and the libuv platform abstraction layer. Its entire premise is that it lets you write event-driven, asynchronous, non-blocking I/O applications that have low overhead and are very fast while running on a single thread.

Application Stages

The ALF Agent has the following five main stages:

1. Upon startup, the Agent reads and processes a configuration file.

2. The Agent sends that configuration data to the ALF Control Server.

3. The Agent continually sends heartbeat pings letting the Control Server know that it is online.

4. The Agent exposes a set of web services that the Control Server can use to communicate with the Agent.

5. The Agent provides a way to execute the correct bash scripts that are responsible for creating or destroying a service.  

After the initial installation of an Agent on an ALM server, we wanted the Agent to be responsible for knowing the information about itself and for telling the Control Server who it is and which services it offers. To that end, we created a JSON configuration file that is passed to the Agent as a command line argument. This file contains information about the Control Server, the services the host offers, and the location of the create and delete bash scripts for each ALM service. Once parsed, it is turned into a standard JavaScript object that is passed around the application.
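
A minimal sketch of how that startup step might look; the configuration fields noted in the comments are the ones used later in this post, while the example file path is illustrative:

var fs = require('fs');

// The path to the JSON configuration file is passed as a command line argument,
// e.g. `node agent.js /opt/alf/agent-config.json` (path is illustrative).
var configPath = process.argv[2];

// Read and parse the file into a plain JavaScript object that is passed around
// the rest of the application.
var properties = JSON.parse(fs.readFileSync(configPath, 'utf8'));

// properties then holds values such as properties.agentName, properties.port,
// properties.alfServer, properties.alfServerPort, properties.baseDir, and the
// properties.services array referenced in the code below.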

Once the Agent has its configuration information, it needs a way to tell the Control Server that it is alive. To do this, we used Node's built-in HTTP module to make web service calls to the Control Server. The Control Server exposes two web services for the Agent to consume. The first is called when the Agent comes online: the Agent sends its hostname, port, and the list of services that are available, and the Control Server stores that information in its database. Once the initial communication is established, the Agent sends a heartbeat message to the Control Server every 10 minutes to let the Control Server know that it is still online and accepting new service requests.

Code for sending the initial heartbeat:

// Assumed setup: http is Node's built-in module and hostName is resolved once at startup.
var http = require('http');
var os = require('os');
var hostName = os.hostname();

function heartbeatInit(properties) {

    console.log('HeartbeatService.init.heartbeatInit: Sending initial heartbeat to ALF server @ ' + properties.alfServer + ':' + properties.alfServerPort + properties.PATH_TO_ALF_HEARTBEAT_SERVICE + ' for host "' + hostName + '"...');

    // Set up the data that is sent to the ALF server when the agent initializes
    var initData = {
        agentName: properties.agentName,
        hostName: hostName,
        port: properties.port,
        services: properties.services
    };

    // Configure the HTTP request options
    var options = {
        host: properties.alfServer,
        port: properties.alfServerPort,
        path: properties.PATH_TO_ALF_HEARTBEAT_SERVICE,
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        }
    };

    // Set up the HTTP request with the options above, and register success/error handlers
    var request = http.request(options, function(response) {
        console.log('HeartbeatService.init.heartbeatInit: Successfully sent heartbeat.');
    });
    request.on('error', function(e) {
        console.log('HeartbeatService.init.heartbeatInit: Unable to establish a connection to the ALF server ' + properties.alfServer + '. Error Message: ' + e.message);
    });

    // Send the initData from above as the HTTP request body
    request.write(JSON.stringify(initData));
    request.end();
}
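
The recurring 10-minute heartbeat can be scheduled with a simple timer. A minimal sketch, assuming a hypothetical heartbeatPing function that issues the same kind of HTTP POST as heartbeatInit:

var TEN_MINUTES = 10 * 60 * 1000;

function startHeartbeat(properties) {
    // Register with the Control Server once on startup...
    heartbeatInit(properties);

    // ...then ping every 10 minutes so the Control Server knows the agent is
    // still online. heartbeatPing is a hypothetical helper that performs the
    // same http.request call as heartbeatInit, with a smaller payload.
    setInterval(function() {
        heartbeatPing(properties);
    }, TEN_MINUTES);
}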

 

Fully functional web server in 30 lines of code:

var http = require('http');
var path = require('path');
var fs = require('fs');
var base = __dirname;

http.createServer(function(req, res) {
    var pathname = path.join(base, req.url);
    console.log(pathname);

    fs.exists(pathname, function(exists) {
        if (!exists) {
            res.writeHead(404);
            res.write('404 Not Found');
            res.end();
        } else {
            res.setHeader('Content-type', 'text/html');
            res.statusCode = 200;
            var file = fs.createReadStream(pathname);
            file.on('open', function() {
                file.pipe(res);
            });
            file.on('error', function(err) {
                res.writeHead(404);
                res.write('404 Not Found');
                res.end();
            });
        }
    });
}).listen(8881);
console.log('Server started: Listening on port 8881');

 

With Node.js, a very basic but fully functional web server can be written in just a few lines of code (see above). For our REST services, however, we used Express.js, a third-party web framework. Express.js handles all of the little nuances that we take for granted in a web server, such as content types and 404 pages, while offering an easy-to-use, well-documented API for creating REST services. The ALF Agent exposes two web services: one for creating an ALM service and another for deleting an ALM service. Adhering to RESTful best practices, one uses the POST HTTP method and the other uses DELETE.
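
For context, a minimal sketch of the Express.js setup that the route code below assumes (the middleware and port wiring here are illustrative, not the exact ALF code):

var express = require('express');
var app = express();

// Parse JSON request bodies so request.body.name and request.body.users are
// available in the route handlers. (On Express 3.x this middleware was exposed
// as express.bodyParser(); on Express 4 before 4.16 it lived in the separate
// body-parser package.)
app.use(express.json());

// properties comes from the agent's JSON configuration file.
app.listen(properties.port, function() {
    console.log('ALF Agent listening on port ' + properties.port);
});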

To create a service such as Jenkins or SVN, the Control Server calls
POST /jenkins
POST /svn
and passes the project name within the body of the request.

Code for creating the POST route:

app.post(new RegExp(routePatterns.post), function(request, response) {
    var service = request.params[0];
    
    console.log('ServicesController: Received request to create new ' + service + ' instance with body=' + JSON.stringify(request.body) + '...');
    
    // Validate the request
    if (!validate(request)){
        return errorResponse(response, 'Missing or invalid query parameters');
    }

    if (properties.testMode){
        return testModeResponse(response);
    }
    
    var projectName = request.body.name;
    var users = request.body.users;
    var createScript = properties[service].createScript;
    
    console.log('ServicesController: Creating ' + service + ' instance for "' + projectName + '" with users ' + users);
    
    var cmd = properties.baseDir 
        + '/' + createScript 
        + ' ' + projectName
        + ' ' + users;

    console.log('Creating ' + service + ' command: '  + cmd);
    
    executeCommand({
        projectName: projectName,
        cmd: cmd,
        service: service,
        response: response
    });
   
});

 

To delete a service, we specify the project name in the URL:

DELETE /jenkins/<project name>
DELETE /svn/<project name>

The code for the DELETE looks very similar to the POST code above.
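
A minimal sketch of what that DELETE route could look like, reusing the same helpers as the POST handler above (deleteScript is assumed to be the configured counterpart of createScript):

app.delete(new RegExp(routePatterns.delete), function(request, response) {
    var service = request.params[0];      // e.g. "jenkins" or "svn"
    var projectName = request.params[1];  // taken from the URL instead of the body

    if (properties.testMode) {
        return testModeResponse(response);
    }

    // Look up the configured delete script for this service and build the command
    var deleteScript = properties[service].deleteScript;
    var cmd = properties.baseDir + '/' + deleteScript + ' ' + projectName;

    console.log('Deleting ' + service + ' command: ' + cmd);

    executeCommand({
        projectName: projectName,
        cmd: cmd,
        service: service,
        response: response
    });
});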

Express.js allows us to use a regular expression to decide whether a web service request should be accepted or rejected with a 404. On startup, when the agent configuration file is first read, the Agent determines which ALM services are available on that server and then generates the regular expressions used to create the web service endpoints.

Code to generate the Regular Expression:

routePatterns: function() {
    var services = alfAgent.properties.services;
    var s = [];
    for (var i = 0; i < services.length; i++) {
        s.push(services[i].name.toLowerCase());
    }

    // Build one pattern for POST /<service> and one for DELETE /<service>/<project>
    return {
        post: "^\\/(" + s.join("|") + ")$",
        delete: "^\\/(" + s.join("|") + ")\\/(.+)$"
    };
}
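
For example, with Jenkins and SVN configured on a host, the generated patterns would match as follows (the handler names here are placeholders for the route functions shown earlier):

// routePatterns() returns:
//   post:   "^\/(jenkins|svn)$"        -> matches POST /jenkins and POST /svn
//   delete: "^\/(jenkins|svn)\/(.+)$"  -> matches DELETE /svn/myproject
app.post(new RegExp(routePatterns.post), createServiceHandler);
app.delete(new RegExp(routePatterns.delete), deleteServiceHandler);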

 

This is possible because each service endpoint does the same thing. First, it performs some validation to make sure a valid project name was passed in the request body. Then it looks up the correct bash script filename and path and generates the appropriate bash command to be executed via Node.js's child_process.exec method. The exec function returns the stdout, stderr, and any error message for that process, which are returned to the Control Server so it can display a status message on screen. These responses are also saved in log files on the server.

Code to execute the bash script and send the response message:

// exec comes from Node's child_process module: var exec = require('child_process').exec;
var executeCommand = function(obj) {
    exec(obj.cmd, function(err, stdout, stderr) {
        var data = {
            error: err,
            stdout: stdout,
            stderr: stderr
        };

        if (!writeToLog(obj.projectName, obj.service, data)) {
            return errorResponse(obj.response, 'Unable to write to log directory, check permissions');
        }

        // Return the script output to the Control Server as JSON
        obj.response.setHeader('Content-type', 'application/json');
        obj.response.status(200).end(JSON.stringify(data));
    });
};

 

Code to write the log files to the file system:

var writeToLog = function(project, service, data) {

    var date = new Date();
    var directory = properties.baseDir + '/logs';

    // Make sure the log directory exists; report failure to the caller if it
    // cannot be created (e.g. bad permissions)
    try {
        if (!fs.existsSync(directory)) {
            fs.mkdirSync(directory);
        }
    } catch (e) {
        return false;
    }

    // Write one log file per output stream (error, stdout, stderr)
    Object.keys(data).forEach(function(key) {
        var file = project + '_' + service + '_' + key + '.log';
        var logPathFile = directory + '/' + file;

        if (data[key]) {
            var consoleMsg = key + '\t[' + service.toUpperCase() + ']\n' + data[key] + '\n\n';

            if (properties.logToConsole) {
                console.log(consoleMsg);
            }

            var msg = date + '\t[' + service.toUpperCase() + ']\n' + data[key] + '\n\n';

            // Append asynchronously; failures are reported to the console
            fs.appendFile(logPathFile, msg, function(err) {
                if (err) {
                    console.log('Unable to write to log file: ' + logPathFile);
                    console.log(err);
                }
            });
        }
    });
    return true;
};

 

Pros, Cons, and the Future of Node

Using Node.js allowed us to create the Agent in a minimal amount of time. It has a smaller overall footprint and uses fewer system resources than the Java/Spring/Tomcat solution would have. We were also able to write an agent installation bash script that exports the latest version of the Agent from our SVN repository, configures the file permissions, and downloads any third-party dependencies, such as Express.js, using Node's package manager, NPM.

One major advantage of Node.js is that anyone who knows JavaScript in the browser already knows the syntax for creating Node.js applications. There are some subtle differences, but Node.js's website does a great job of explaining them and documenting the built-in modules for interacting with the operating system and the different types of I/O.

Despite the advantages that Node.js has to offer, there are some pitfalls. Because Node.js is relatively new, there were no documented best practices for structuring a large application, and most of the examples we found were for relatively small projects that didn't apply to ours. However, we were able to break pieces of the application out into individual files and then expose that functionality to other parts of the application via Node's module.exports mechanism, as sketched below. This allowed us to keep the MVC feel that we were used to.
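
A minimal sketch of that structure; the file names are illustrative, not the actual ALF layout:

// --- heartbeatService.js (illustrative file name) ---
// Each piece of the application lives in its own file and exposes its public
// functions through module.exports.
module.exports = {
    init: heartbeatInit
};

// --- agent.js (illustrative) ---
// Other parts of the application pull that functionality in with require().
var heartbeatService = require('./heartbeatService');
heartbeatService.init(properties);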

One aspect of Node.js that is difficult to get used to is that almost every function call is asynchronous and requires a callback. In other words, if one call depends on another completing before it can run, that call has to be placed inside the callback of the first function; if a third call has to run after the first and second are complete, it needs to be placed inside the callback of the second. Soon you will find yourself in callback hell. There are third-party Node modules that help deal with this; one such module, Async, provides roughly 20 different patterns for handling asynchronous flow control. Writing asynchronous functions and applications is something one needs to be aware of before starting a Node.js project, but while callback hell may seem like a pain, it is what allows Node to stay on a single thread.
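
As an illustration, a minimal sketch of flattening nested callbacks with the Async module's series helper; the step functions are hypothetical stand-ins for the agent's startup tasks:

var async = require('async');

// Each step runs only after the previous one calls back; any error short-circuits
// straight to the final callback instead of being handled at every nesting level.
async.series([
    function(callback) { readConfig(callback); },      // hypothetical step 1
    function(callback) { sendHeartbeat(callback); },   // hypothetical step 2
    function(callback) { startWebServer(callback); }   // hypothetical step 3
], function(err, results) {
    if (err) {
        return console.log('Agent startup failed: ' + err.message);
    }
    console.log('Agent startup complete.');
});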

It is easy to see that Node.js has a very bright future. It has thousands of active developers creating new open source modules that are easy to find and download thanks to Node's package manager, NPM. It is also used and backed by some of the biggest names on the web, such as Google, Walmart, Yahoo, and LinkedIn, and now by Cardinal Solutions Inc.

 


About The Author

Senior Consultant
Terry is a Senior Consultant in Raleigh's Application Development group focusing on enterprise Java and JavaScript applications.