Xcode 5.1/iOS 7.1 requires upgrade and patch to Cordova

I discovered yesterday that Cordova doesn’t play too nicely when you upgrade Xcode to 5.1 so you can deploy to iOS 7.1 devices.

That said, there are several steps you need to follow to get Cordova working again with Xcode 5.1 until the Cordova 3.5.0 update is released:

  1. Update your local version of Cordova using npm install -g cordova (this should upgrade you to 3.4 if you were on a previous release; I was on 3.3.0).
  2. Then upgrade the Cordova version in each of your projects by issuing cordova platform update [ios | android] (run it separately for each platform you use).
  3. Once upgraded, you’ll want to add the diffs from this post. They’re basically patches for the time being until Cordova 3.5.0 is released. Thanks much to @shazron for these!
  4. After that you can follow these directions from Shazron to ensure you have Xcode configured properly. It’s very important that you remove ALL the extra settings (e.g., Any iOS SDK) under Architectures->Debug (and Release) and add “arm64” to Valid Architectures for your CordovaLib project settings (then target).
  5. To get it working on Android, you have to re-add the platforms/android/CordovaLib directory so the new JAR is introduced into your project. I use IntelliJ, so all you have to do is remove the old one under Project Structure and add the new lib. Then add the resulting lib to your Android project and move it to the top of your dependency list.

Hopefully this gets you up and running quickly again!

2013 in review

The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 35,000 times in 2013. If it were a concert at Sydney Opera House, it would take about 13 sold-out performances for that many people to see it.

Click here to see the complete report.


Grunt: Targeted environment builds for Node.js + JSHint + Closure Compiler

I’ve been using Node.js and was looking for a way to target my Node.js app at different environments (e.g., local, NONPROD, PROD). This is especially useful for configuring different URLs and app names, and even for setting up NewRelic, which I use to monitor my application in different environments. Up to now I’ve been making these changes manually, but I discovered Grunt, which fit the bill. I come from a Maven build background in Java, so I was looking for a similar tool for JavaScript. Not only can I do filtering, but I can also add plugins for things like JSHint, JS compression, unit testing, and a bunch of other nifty tasks.

To use Grunt for your Node app, follow the getting started guide. After you install and configure it, you can refer to my Gruntfile.js below, which I use to set up different environment variables, run JSHint and use Google’s Closure Compiler to minify it all. My basic approach is to create a “dist” directory and copy the pertinent files into it. In my base directory, the files I want updated contain $ placeholder variables for the values that change per environment. I then use grunt-string-replace to swap those in based on the environment I’m targeting. Let’s take a look.
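To make the placeholder idea concrete, here is a minimal sketch of what a newrelic.js with those $ tokens might look like before the build runs (only the $ tokens come from the replacements in the Gruntfile below; the license key and the rest of the structure are illustrative):

    // newrelic.js (sketch) -- the $ tokens are swapped in by the string-replace task
    exports.config = {
        app_name: ['$APPNAME ($ENV)'],             // becomes e.g. "services-people (DEV)"
        license_key: 'your-newrelic-license-key',  // illustrative placeholder
        logging: {
            level: '$NEWRELIC_TRACE_LVL'           // "trace" for dev, "info" for prod
        }
    };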

The clean task clears the “dist” directory of any previous builds, and the copy task copies the files I want from the root directory into “dist” (NOTE: use “!” to exclude files). The string-replace task then updates the placeholder variables depending on whether I’m running a dev or prod build. The jshint task makes sure my code is in order, the nodeunit task runs my unit tests, and the closure-compiler task minifies my JavaScript using Advanced Optimizations for peak performance.

In the registerTask calls at the bottom of the file, you’ll see how I call out different task lists depending on the target environment I’m after, in this case dev or prod. On the CLI, you can kick off the default, which targets the DEV environment, just by running “grunt”; for prod, just add that modifier: “grunt prod”. That’s all there is to it!

(function () {
    'use strict';
    module.exports = function (grunt) {
        grunt.initConfig({
            pkg: grunt.file.readJSON('package.json'),
            clean: ["dist"],
            copy: {
                build: {
                    files: [
                        {src: ['./**/*.js', './*.json', './stackato.yml', './README.md', '!./nunit.js', './test/**/*', '!./dist/**/*', '!./node_modules/**/*', '!./Gruntfile.js'], dest: 'dist/'}
                    ]
                }
            },
            'string-replace': {
                dev: {
                    files: {
                        "dist/": ["newrelic.js", "stackato.yml", "package.json"]
                    },
                    options: {
                        replacements: [
                            {
                                pattern: '$APPNAME',
                                replacement: "services-people"
                            },
                            {
                                pattern: '$VERSION',
                                replacement: "1.0.6"
                            },
                            {
                                pattern: 'server.js',
                                replacement: "server.min.js"
                            },
                            {
                                pattern: '$ENV',
                                replacement: "DEV"
                            },
                            {
                                pattern: '$PDS_PWD',
                                replacement: "xxx!"
                            },
                            {
                                pattern: '$INSTANCES',
                                replacement: "1"
                            },
                            {
                                pattern: '$NEWRELIC_TRACE_LVL',
                                replacement: "trace"
                            },
                            {
                                pattern: '$URL1',
                                replacement: "xxx1-dev.com"
                            },
                            {
                                pattern: '$URL2',
                                replacement: "xxx2-dev.com"
                            },
                            {
                                pattern: '$URL3',
                                replacement: "xxx3-dev.com"
                            }
                        ]
                    }
                },
                prod: {
                    files: {
                        "dist/": ["newrelic.js", "stackato.yml", "package.json"]
                    },
                    options: {
                        replacements: [
                            {
                                pattern: '$APPNAME',
                                replacement: "services-people"
                            },
                            {
                                pattern: '$VERSION',
                                replacement: "1.0.6"
                            },
                            {
                                pattern: 'server.js',
                                replacement: "server.min.js"
                            },
                            {
                                pattern: '$ENV',
                                replacement: "PROD"
                            },
                            {
                                pattern: '$PDS_PWD',
                                replacement: "xxx!"
                            },
                            {
                                pattern: '$INSTANCES',
                                replacement: "2"
                            },
                            {
                                pattern: '$NEWRELIC_TRACE_LVL',
                                replacement: "info"
                            },
                            {
                                pattern: '$URL1',
                                replacement: "xxx1.com"
                            },
                            {
                                pattern: '$URL2',
                                replacement: "xxx2.com"
                            },
                            {
                                pattern: '$URL3',
                                replacement: "xxx3.com"
                            }
                        ]
                    }
                }
            },
            jshint: {
                options: {
                    curly: true,
                    eqeqeq: true,
                    eqnull: true,
                    strict: true,
                    globals: {
                        jQuery: true
                    },
                    ignores: ['dist/test/**/*.js']
                },
                files: ['Gruntfile.js', 'dist/**/*.js']
            },
            nodeunit: {
              all: ['dist/test/*-tests.js']
            },
            'closure-compiler': {
                build: {
                    closurePath: '.',
                    js: 'dist/**/*.js',
                    jsOutputFile: 'dist/server.min.js',
                    maxBuffer: 500,
                    options: {
                        compilation_level: 'ADVANCED_OPTIMIZATIONS',
                        language_in: 'ECMASCRIPT5_STRICT',
                        debug: false
//                        formatting: 'PRETTY_PRINT'
                    }
                }
            },
            // Uglify is somewhat the defacto standard for minifying Node.js but Closure compiler yields better perf (ops/sec)
            // http://jsperf.com/testing-code-performance-by-compression-type/3
            uglify: {
                options: {
                    banner: '/*! <%= pkg.name %> <%= grunt.template.today("yyyy-mm-dd") %> */\n'
                },
                build: {
                    src: 'dist/**/*.js',
                    dest: 'dist/server.min.js'
                }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-uglify');
        grunt.loadNpmTasks('grunt-closure-compiler');
        grunt.loadNpmTasks('grunt-contrib-copy');
        grunt.loadNpmTasks('grunt-contrib-clean');
        grunt.loadNpmTasks('grunt-contrib-jshint');
        grunt.loadNpmTasks('grunt-contrib-nodeunit');
        grunt.loadNpmTasks('grunt-string-replace');

        // Default task(s).
        grunt.registerTask('default', ['clean', 'copy:build', 'string-replace:dev', 'jshint', 'closure-compiler:build']);
        grunt.registerTask('prod', ['clean', 'copy:build', 'string-replace:prod', 'closure-compiler:build']);
    };
})();

A better way to implement Back Button in Sencha Touch’s NavigationView

If you’re building a Sencha Touch 2 app and have deployed it to Android, expect that you’re going to get hit up about making the device’s hardware back button work. Sencha will try to push you towards using routes, as evidenced in their docs when discussing history, but to me that’s invasive and goes against the event-driven UI that has made this framework so popular. Moreover, it gets more complicated when using NavigationView as your main mechanism to move back and forth on a given tab, especially in my case, where the application has multiple navigation tabs.

To that end, I decided it would be much easier to use the browser’s history to manage the back button. In this way, you push the browser’s state as the user moves forward in your app and pop it as the user moves backward. It was also important that different tabs not interfere with each other’s state. Let’s take a look at how I did that, assuming an MVC-based application.
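Stripped of the Sencha specifics, the mechanism is just the standard HTML5 history API: push a state as the user drills forward and react to popstate when the browser or device back button fires. A bare-bones sketch of the idea:

    // Plain-browser sketch (no Sencha): record forward navigation with pushState,
    // then react to popstate when the browser or Android hardware back button fires.
    function drillIntoDetail() {
        history.pushState({ view: 'detail' }, '');  // record the forward navigation
        // ...show the detail view...
    }

    window.addEventListener('popstate', function () {
        // ...pop the detail view and show the previous one...
    }, false);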

Step 1: Add refs and controls to your navigation view’s controller for your view and your navbar with your application’s back button (different from your browser or device back button):

        refs: {
            portal: 'portal',  //Portal Tab Panel
            myContainer: 'MyContainer',  //NavigationView
            navBar: 'myContainer #navbar'  //itemId for navigationBar on NavigationView
        },
        control: {
            myContainer: {
                push: 'onPush'
            },
            navBar: {
                back: 'onBack'  //trap the back event for the app's back button
            }
        }

Here we’re just establishing references to the view components and binding the push and back events to methods we will implement in the next step. Notice that it’s important to trap your app’s back button so it pops the state similar to how the browser or device back button would.

Step 2: Add the implementation for onPush and onBack:

    onPush: function (view, item) {
        history.pushState({}, ''); //push the state (pushState expects state and title arguments)
    },

    onBack: function () {
        history.back();  //pop the state to trigger listener in step 3
        return false;  // return false so listener will take care of this
    },

Here we leverage the JavaScript history object to push a new state on the stack as the user moves forward in the app’s NavigationView and pop the state from the stack as the user moves back.

Step 3: Add a “popstate” event listener to the window in your Controller’s launch method:

    launch: function () {
        var that = this;
        window.addEventListener('popstate', function () {
            var portal = that.getPortal();  // won't have portal until app is initialized
            if (portal) {
                var container = getTabContainer(portal.getActiveItem());
                if (container && container.getItemId() === "MyTab"
                    && container.getActiveItem().getItemId() !== "baseView") {
                    container.pop();
                }
            }
        }, false);
    },

Here we add a “popstate” event listener to the window so that we can pop the last view off the stack as the user moves back. Notice I do a few checks: one to be sure the portal has been instantiated, and another to check that the container I’m on is the one for this NavigationView (i.e., the “MyTab” check). In an app with multiple tabs, you want to make sure the other tab controllers aren’t responding to the event when the user presses the device back button (which is NOT tied to a controller; just the popstate event). The final check is whether I’m on the “baseView”, because there is no need to pop the container if I’m already at the root of a particular NavigationView.
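The getTabContainer call in the listener is a small helper that isn’t shown above; here is a hypothetical sketch of what it might look like, assuming the active tab item either is the NavigationView or wraps one:

    // Hypothetical helper: given the portal's active tab item, return the
    // NavigationView for that tab (or null if there isn't one).
    function getTabContainer(activeItem) {
        if (!activeItem) {
            return null;
        }
        // The tab item may itself be the NavigationView, or it may contain one.
        return activeItem.isXType('navigationview')
            ? activeItem
            : activeItem.down('navigationview');
    }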

That’s all there is to it. No need to re-architect your app to use Sencha routes and no complicated code to manage each NavigationView’s state. All you need to do is implement this same code in each of your NavigationView tabs and you’re all set.

Thanks to Greg from Sencha Touch support for pointing out that this is a viable alternative!

Using SSL Self Signed Certs with Node.js

If you plan to proxy sites or services that are SSL enabled and are signed with self-signed certs, then you need to be aware that you have to configure a few extra parameters to make sure the SSL handshake happens properly. Otherwise, the request goes through without validating the self-signed certs (which is a strange default behavior IMO).

Namely, you have to do the following:

  1. Use the https module (API docs here)
  2. Set the agent to false (unless you plan to provide one)
  3. Set ca to the self-signed cert, read from a location relative to your node file
  4. Set rejectUnauthorized to true so that an error is emitted upon failure

Here is a snippet of code that you can use as an example:

var https = require('https'),
        fs = require('fs'),
        host = 'localhost',
        port = 443;

    exports.getTest = function (req, res, next) {
        var url = '/login.html';

        processRequest(req, res, next, url);
    };

    function processRequest (req, res, next, url) {
        var httpOptions = {
            hostname: host,
            path: url,
            port: port,
            method: 'GET',
            agent: false,
            ca: [fs.readFileSync('ssl/myroot_cert.crt')],
            rejectUnauthorized: true
        };

        var reqGet = https.request(httpOptions, function (response) {
            var content = '';
            response.on('data', function (chunk) {
                content += chunk;
            });
            response.on('end', function () {
                try {
                        res.send("Successful SSL Handshake");
                }
                catch (e) {
                    res.send(500);
                }
            });
        });

        reqGet.on('error', function (e) {
            res.send("Unable to SSL Handshake", 401);
        });

        reqGet.end();

        return next();
    }

Restart Node.js without restarting node.js

Found this awesome little npm package called nodemon that lets you develop your node application continuously without having to restart it every time you make a change to your code. It basically watches the files in your dev directory and restarts the node.js process for you. All you need to do is install it globally and then use nodemon to start your app:

npm install nodemon -g 

nodemon server.js

I realize this might be a little lazy, but so be it. Thanks @rem for this!

Complementing MongoDB with Real-time Solr Search

Overview

I’ve been a long-time user and evangelist of Solr, given its amazing ability to full-text index large amounts of structured and unstructured data. I’ve successfully used it on a number of projects to add both Google-like search and faceted (or filtered) search to our applications. I was quite pleased to find out that MongoDB has a connector for Solr to allow that same type of searching against my application that is backed by MongoDB. In this blog post, we’ll explore how to configure MongoDB and Solr together and demonstrate their usage with the MongoDB application I wrote several months back that’s outlined in my blog post Mobile GeoLocation App in 30 minutes – Part 1: Node.js and MongoDB.

Mongo-Connector: Realtime access to MongoDB with Solr

I stumbled upon this connector during my research: mongo-connector. This was exactly the sort of thing I was looking for, namely because it hooks into MongoDB’s oplog (somewhat similar to a transaction log in Oracle) and updates Solr in real time based on any create-update-delete operations made to the system. The oplog is critical to MongoDB’s master-slave replication, so MongoDB must be set up as a replica set (one primary and n secondaries; two in my case). Basically, I followed the instructions here to set up a development replica set. Once established, I started each mongod instance as follows so they would run in the background (--fork) and use minimal space due to my disk space limitation (--smallfiles).

% mongod --port 27017 --dbpath /srv/mongodb/rs0-0 --replSet rs0 --smallfiles --fork --logpath /srv/mongodb/rs0-0.log

% mongod --port 27018 --dbpath /srv/mongodb/rs0-1 --replSet rs0 --smallfiles --fork --logpath /srv/mongodb/rs0-1.log

% mongod --port 27019 --dbpath /srv/mongodb/rs0-2 --replSet rs0 --smallfiles --fork --logpath /srv/mongodb/rs0-2.log
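With the three mongod processes running, the replica set still has to be initiated from the mongo shell before the oplog exists; a minimal sketch (the host names simply mirror the ports above):

    // From the mongo shell connected to the instance on port 27017:
    rs.initiate({
        _id: "rs0",
        members: [
            { _id: 0, host: "localhost:27017" },
            { _id: 1, host: "localhost:27018" },
            { _id: 2, host: "localhost:27019" }
        ]
    });
    rs.status();  // verify all three members come up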

Once you have MongoDB configured and running, you need to install mongo-connector separately. It relies on Python, so if you don’t have it installed, you will want version 2.7 or 3. To install mongo-connector, I simply ran this command to install it as a package:

% pip install mongo-connector

After it is installed, you can run it as follows so that it also runs in the background using nohup (hold off on running this until after the next section):

% nohup sudo python mongo_connector.py -m localhost:27017 -t http://solr-pet.xxx.com:9650/solr-pet -d ./doc_managers/solr_doc_manager.py > mongo-connector.out 2>&1

A couple of things to note here: the -m option points to the host and port of the primary node in the MongoDB replica set. The -t option is the URL of the Solr server and context; in my case, it was a remote instance of Solr. The -n option takes the namespace of the Mongo database and collection I wish to have indexed by Solr (without it, the entire database would be indexed). Finally, the -d option indicates which doc manager I wish to use, which of course, in my case, is Solr. There is a doc manager for Elasticsearch as well, if you choose to use that instead.

With this in place, your MongoDB instance is configured to start pushing updates to Solr in real time. Before running it, though, let’s take a look at the next section to see what we need to do on the Solr side of things.

Configuring Solr to work with Mongo-Connector

Before we run mongo-connector, there are a few things we need to do in Solr to get it to work properly. First, to get mongo-connector to post documents to Solr, you must be sure that the Solr REST service is available for update operations. Second, you must configure schema.xml with the specific fields that are required, as well as any fields that are being stored in Mongo. On the first point, we need to be sure that the following line exists in the solrconfig.xml config:

<requestHandler name="/update" class="solr.UpdateRequestHandler"/>

As of version 4.0 of Solr, this request handler supports XML, JSON, CSV and javabin. It allows mongo-connector to send data to the REST interface for incremental indexing. Regarding the schema, you must include a field for each entry you have (or are going to add) in your Mongo schema. Here’s an example of what my schema.xml looks like:

<schema name="solr-suggest-box" version="1.5">
        <types>
                <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
                <fieldType name="long" class="solr.TrieLongField" precisionStep="0" omitNorms="true" positionIncrementGap="0" />
                <fieldType name="text_wslc" class="solr.TextField" positionIncrementGap="100">
                        <analyzer type="index">
                                <tokenizer class="solr.WhitespaceTokenizerFactory"/>
                                <filter class="solr.LowerCaseFilterFactory"/>
                        </analyzer>
                        <analyzer type="query">
                                <tokenizer class="solr.WhitespaceTokenizerFactory"/>
                                <filter class="solr.LowerCaseFilterFactory"/>
                        </analyzer>
                </fieldType>
                <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0"/>
                <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
                <fieldType name="tdate" class="solr.TrieDateField" omitNorms="true" precisionStep="6" positionIncrementGap="0"/>
        </types>

        <fields>
                <field name="_id" type="string" indexed="true" stored="true" required="true" />
                <field name="name" type="text_wslc" indexed="true" stored="true" />
                <field name="description" type="text_wslc" indexed="true" stored="true" />
                <field name="date" type="tdate" indexed="true" stored="true" />
                <field name="nmdsc" type="text_wslc" indexed="true" stored="true" multiValued="true" />
                <field name="coordinate" type="location" indexed="true" stored="true"/>
                <field name="_version_" type="long" indexed="true" stored="true"/>
                <field name="_ts" type="long" indexed="true" stored="true"/>
                <field name="_ns" type="string" indexed="true" stored="true"/>
                <field name="ns" type="string" indexed="true" stored="true"/>
                <field name="coords" type="string" indexed="true" stored="true" multiValued="true" />
                <dynamicField name="*" type="string" indexed="true" stored="true"/>
        </fields>

        <uniqueKey>_id</uniqueKey>

        <defaultSearchField>nmdsc</defaultSearchField>

        <!-- we don't want too many results in this usecase -->
        <solrQueryParser defaultOperator="AND"/>

        <copyField source="name" dest="nmdsc"/>
        <copyField source="description" dest="nmdsc"/>
</schema>

I found that all the underscore fields (_id, _version_, _ts and _ns) were required to get this working correctly. To future-proof this, I added a dynamicField so that the schema could change without affecting the Solr configuration; a tenet of MongoDB is to have a flexible schema. Finally, I use copyField to include only those fields I wish to search against; name and description were the only ones of interest for my use case. The “nmdsc” field is set as the default search field for the UI via defaultSearchField, which I will go into next.

After your config is in place and you start the Solr server, you can launch mongo-connector successfully, and it will continuously update Solr with any changes saved to Mongo in real time. I used nohup to kick it off in the background, as shown above.
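As a quick sanity check that documents are actually flowing into Solr, you can hit the select handler directly. Here is a small Node.js sketch of such a check (the host, port and core name mirror the ones used elsewhere in this post; adjust them to your own setup):

    // Quick check that mongo-connector is pushing documents into Solr.
    var http = require('http');

    var options = {
        host: 'solr-pet.xxx.com',
        port: 9650,
        path: '/solr-pet/select?q=*:*&wt=json&rows=5'
    };

    http.get(options, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            var result = JSON.parse(body);
            console.log('Docs indexed: ' + result.response.numFound);
        });
    }).on('error', function (e) {
        console.error('Solr query failed: ' + e.message);
    });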

Using Solr in the DogTags Application

To tie this all together, we need to alter the UI of the original application to allow for Solr searching. See my original blog post for a refresher: Mobile GeoLocation App in 30 minutes – Part 2: Sencha Touch. Recall that this is a Sencha Touch MVC application and so all I needed to do was add a new store for the Solr REST/JSONP service that I will call for searching and update the UI to provide a control for the user to conduct a search. Let’s take a look at each of these:

Ext.define('MyApp.store.PetSearcher', {
    extend: 'Ext.data.Store',
    requires: [
        'MyApp.model.Pet'
    ],
    config: {
        autoLoad: true,
        model: 'MyApp.model.Pet',
        storeId: 'PetSearcher',
        proxy: {
            type: 'jsonp',
            url: 'http://solr-pet.xxx.com:9650/solr-pet/select/',
            callbackKey: 'json.wrf',
            limitParam: 'rows',
            extraParams: {
                wt: 'json',
                'json.nl': 'arrarr'
            },
            reader: {
                root: 'response.docs',
                type: 'json'
            }
        }
    }
});

Above is the new store I’m using to call Solr and map its results back to the original model that I used before. Note the differences from the original store that are specific to Solr, namely the URL and some of the proxy parameters (callbackKey, limitParam and the extraParams). The collection of docs is a bit buried in the response, so I set the reader’s root to response.docs accordingly.

The next thing I need to do is add a control to my view so the user can interact with the search service. In my case I chose to use a search field docked at the top and have it update the list based on the search term. In my view, the code looks as follows:

Ext.define('MyApp.view.PetPanel', {
    extend: 'Ext.Panel',
    alias: 'widget.petListPanel',
    config: {
        layout: {
            type: 'fit'
        },
        items: [
            {
                xtype: 'toolbar',
                docked: 'top',
                title: 'Dog Tags'
            },
            {
                xtype: 'searchfield',
                docked: 'top',
                name: 'query',
                id: 'SearchQuery'
            },
            {
                xtype: 'list',
                store: 'PetTracker',
                id: 'PetList',
                itemId: 'petList',
                emptyText: "<div>No Dogs Found</div>",
                loadingText: "Loading Pets",
                itemTpl: [
                    '<div>{name} is a {description} and is located at {latitude} (latitude) and {longitude} (longitude)</div>'
                ]
            }
        ],
        listeners: [
            {
                fn: 'onPetsListItemTap',
                event: 'itemtap',
                delegate: '#PetList'
            },
            {
                fn: 'onSearch',
                event: 'change',
                delegate: '#SearchQuery'
            },
            {
                fn: 'onReset',
                event: 'clearicontap',
                delegate: '#SearchQuery'
            }
        ]
    },
    onPetsListItemTap: function (dataview, index, target, record, e, options) {
        this.fireEvent('petSelectCommand', this, record);
    },
    onSearch: function (dataview, newValue, oldValue, eOpts) {
        this.fireEvent('petSearch', this, newValue, oldValue, eOpts);
    },
    onReset: function() {
        this.fireEvent('reset', this);
    }
});

The searchfield item adds the control, and the listeners array defines the listeners I’m using to fire events in my controller. The controller supports those events as follows:

    onPetSearch: function(view, value, oldvalue, opts) {
        if (value) {
            var store = Ext.getStore('PetSearcher');
            var list = this.getPetList();
            store.load({
                params: {q:value},
                callback: function() {
                    console.log("we searched");
                    list.setData(this._proxy._reader.rawData.response.docs);
                }
            });
            list.setStore(store);
        }
    },

    onReset: function (view) {
        var store = Ext.getStore('PetTracker');
        var list = view.down("#petList");
        store.getProxy().setUrl('http://nodetest-loutilities.rhcloud.com/dogtag/');
        store.load();
        list.setStore(store);
    },

Since the model is essentially the same between Mongo and Solr, all I have to do is swap the stores and reload them to get the results updated accordingly. In onPetSearch, you can see where I pass in the dynamic search term so that it loads the PetSearcher store with that value. When the search value is cleared, onReset goes back to the original PetTracker store to reload the full results. In both, I set the list component to the corresponding store so that the list shows the results from whichever store it has been given.

Conclusion

In this short example, we established that we could provide real-time search with Solr against MongoDB and augment an existing application with a search control to use it. This has the potential to be a great complement to Mongo because it keeps us from having to add additional indexes to MongoDB for searching, which carries a performance cost, especially as the record set grows. Solr removes this burden from Mongo and leverages an incremental index that can be updated in real time for extremely fast queries. I see this approach being very powerful for modern applications.

RabbitMQ, Node.js and Groovy: Making Messaging Easy

Overview

If you’ve been following my blog, you’re probably well aware of my penchant for node.js and the few blog posts I’ve already posted on the subject. More recently, I wanted to explore RabbitMQ, a messaging platform similar to JMS, that can be easily leveraged with node.js as an alternative to saving messages with HTTP POST using a REST interface. In this blog post, I’m going to extend the DogTag application I blogged about in DogTag App in 30 minutes – Part 1: Node.js and MongoDB and show you how you can use RabbitMQ to push messages into MongoDB using Node.js on the server and a simple Groovy client as the publisher of those messages.

What is RabbitMQ?

RabbitMQ provides robust messaging that is easy to use and supported by a large number of developer platforms as well as operating systems. Its low barrier to entry makes it quite suitable to use with node.js, and you will be amazed at how remarkably little code is required to establish a RabbitMQ exchange hosted within a node.js application. RabbitMQ is similar to JMS in that it supports many of the same paradigms you may have seen in this or other messaging platforms (e.g., Queues, Topics, Routing, Pub/Sub, RPC, etc.). Primary language support includes Python and Java, but many others are supported as well.

In this blog post, I’ll be using an NPM package aptly named node-amqp. AMQP stands for “Advanced Message Queuing Protocol”. AMQP is different from API-level approaches, like JMS, because it is a binary wire-level protocol that defines how data will be formatted for transmission on the network. This results in better interoperability between systems, based on a standard supported through the OASIS standards body. AMQP started off as a beta but released a 1.0 version in 2011 and has since grown in popularity (see here for a comparison).

RabbitMQ and Node.js

In the example I’m about to show you, I’m basically providing a simple means to batch several messages that save new “dog tags” in the DogTags application I referenced earlier. I am going to post the messages to a queue and have the server save them into the database. Let’s take a look at the JavaScript code necessary to do this in node.js:


    var amqp = require('amqp');
    var conn = amqp.createConnection({url: mq_url});
    conn.on('ready', function () {
        var exchange = conn.exchange('queue-dogtag', {'type': 'fanout', durable: false}, function () {
            var queue = conn.queue('', function () {
                console.log('Queue ' + exchange.name + ' is open');
                
                queue.bind(exchange.name, '');
                queue.subscribe(function (msg) {
                    console.log('Subscribed to msg: ' + JSON.stringify(msg));
                    api.saveMsg(msg);
                });
            });
            queue.on('queueBindOk', function () {
                console.log('Queue bound successfully.');
            });
        });
    });

At the top, you can see how I include the amqp package from NPM and establish a connection with a provided URL (e.g., amqp://guest:guest@localhost:5672). Once the connection is ready, we create what’s called an exchange. Here you define the type of exchange, the name and whether or not it is durable. For further details on this and other settings, I will refer you to the docs here. You may be asking why not just send directly to a queue? While this is possible, it’s not something that is practiced given the messaging model of RabbitMQ; namely, you should never send messages directly to a queue. Instead, the producer should send messages to an exchange, which knows exactly what to do with them. In my case, I chose “fanout” as my exchange type, which simply broadcasts all the messages to all the queues it is aware of, which in my case is just the one queue created in the callback. Other exchanges you may want to look into include Direct, Headers and Topic exchanges, which are described in detail here.

Once I have defined the exchange and the queue, it is time to bind the queue to the exchange name and subscribe to any messages published to it, which is what the queue.bind and queue.subscribe calls do. It is then a simple matter of processing each message and using the api object I’ve established to save the message to MongoDB and to record an appropriate log message. As an added measure, I also think it is worthwhile to confirm that the queue is bound to the exchange prior to any processing, which is what the ‘queueBindOk’ handler does by reporting the result to the log.
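The api.saveMsg call is a thin persistence wrapper of my own; here is a hypothetical sketch of what it might look like using the node MongoDB driver (the database, collection and field names are assumptions for illustration):

    // Hypothetical sketch of the api object's saveMsg, using the node mongodb driver.
    var mongodb = require('mongodb');

    exports.saveMsg = function (msg) {
        mongodb.MongoClient.connect('mongodb://localhost:27017/dogtags', function (err, db) {
            if (err) {
                return console.error('Mongo connect failed: ' + err.message);
            }
            db.collection('dogtag').insert(msg, function (err) {
                if (err) {
                    console.error('Insert failed: ' + err.message);
                } else {
                    console.log('Saved dog tag for: ' + msg.name);
                }
                db.close();
            });
        });
    };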

The Groovy Publisher

So now that we have the server-side component in place ready to subscribe to any messages, we need to create a publisher to publish messages to the exchange. To do this, I chose to write a CLI tool in Groovy and leverage Spring AMQP to publish the messages. Groovy is a great language for creating CLI tools and, since it is maintained by the folks at SpringSource, it was just a natural fit to use the Spring AMQP framework, which can generate AMQP messages without much fuss. Of course, as a prerequisite, you’ll have to grab the latest Groovy binaries so you can run the script. Let’s take a look at the code:

@Grab(group = 'org.springframework.amqp', module = 'spring-amqp', version = '1.1.1.RELEASE')
@Grab(group = 'org.springframework.amqp', module = 'spring-rabbit', version = '1.1.1.RELEASE')
@Grab(group = 'com.rabbitmq', module = 'amqp-client', version = '2.8.7')
@Grab(group = 'org.codehaus.jackson', module = 'jackson-mapper-asl', version = '1.9.9')

import org.springframework.amqp.rabbit.connection.*
import org.springframework.amqp.rabbit.core.RabbitTemplate
import org.springframework.amqp.support.converter.JsonMessageConverter

class RabbitMqUploader {
    static void main(args) {
        println("Starting...")
        def cli = new CliBuilder(usage: 'uploader.groovy [filename.csv]')

        def factory = new CachingConnectionFactory();
        factory.setUsername("username");
        factory.setPassword("password");
        factory.setVirtualHost("ve2b460e7e6b24b5da88c935ff63c4e86");
        factory.setHost("hostname")
        factory.setPort(10001)

        RabbitTemplate templ = new RabbitTemplate(factory);
        templ.setExchange("queue-dogtag");
        templ.setMessageConverter(new JsonMessageConverter());

        def options = cli.parse(args)
        if (!options) {
            // Means no parameters were provided
            cli.usage()
            return
        }

        def extraArguments = options.arguments()
        def filename = ''
        if (extraArguments) {
            if (extraArguments.size() > 0) {
                filename = extraArguments[0]
            }
            //load and split the file
            new File(filename).splitEachLine("\n") { row ->
                def field = row[0].split(',')
                def dt = new DogTag()
                dt.name = field[0]
                dt.description = field[1]
                dt.longitude = field[2]
                dt.latitude = field[3]
                dt.phone = field[4]
                templ.convertAndSend(dt)
                println('Sent >>>' + row)
            }
        }

        factory.destroy()
    }
}

So after our discussion in the previous section, you can follow the code here to see how simple it is to publish a message to the RabbitMQ exchange we set up. One thing I simply love about Groovy is that I don’t need to grab any dependencies or build any jars or define any classpaths. For dependencies, I simply use the @Grab annotation to pull in those required for Spring AMQP, RabbitMQ and the JSON parser, so the client can run the Groovy script without any set-up. The CachingConnectionFactory configuration points to my host (I recommend using a free aPaaS like OpenShift or Nodejitsu to host your Node.js app). I then define the RabbitTemplate based on that config, set the exchange name and establish a JSON converter since MongoDB likes JSON format. Finally, in the main loop, I parse a CSV file provided as a command line parameter and send each line as an AMQP message to our server.

To run this program, I simply run: groovy RabbitMqUploader.groovy ./dogtags.csv.

Conclusion

With a few simple changes to my original application, it was easy to augment it to support RabbitMQ and define an exchange to consume messages. With Groovy, it was easy to write an autonomous, concise client to establish a connection to my node.js server and broadcast messages over the AMQP wire protocol for the server to process. I expect to use this approach in the future where perhaps a REST interface may not be best suited and using well established messaging paradigms make more sense. I encourage you to explore this interesting messaging protocol for your future needs as well.

QConSF 2012: Mobile Web Tooling

Pete LePage (@petele) gives a great overview of all the different tools you should consider when embarking on mobile web application development.

Sublime – text editor for Mac

Codekit
  • Minifies and compiles
  • live browser reloads
  • JSHint, JSLint
  • Watches all files in directory for reloads

Great sites for good design patterns for mobile dev:

Great HCI guidelines from Android, iOS, Safari, etc. can be found on their respective websites

Start with boilerplates from SenchaTouch, bootstrap, and maybe even jqueryMobile

github.com/ftlabs/fastclick for improving performance of button clicks (usually delayed by 300ms for pinch zoom, slide, etc.)
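Wiring FastClick up is a one-liner once the DOM is ready; a quick sketch:

    // Attach FastClick to the whole document so taps fire immediately
    // instead of waiting out the ~300ms synthetic click delay.
    window.addEventListener('load', function () {
        FastClick.attach(document.body);
    }, false);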

To debug, jsconsole.com can be used for remote debugging on other devices and will show the output on its site… coolio.

Hammer.js is cool for gestures and lets you listen for those events
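A minimal Hammer.js sketch, assuming its basic constructor-and-listener API (the element id is illustrative):

    // Sketch: listen for swipe gestures on a list element with Hammer.js.
    var el = document.getElementById('petList');
    var hammer = new Hammer(el);
    hammer.on('swipeleft', function (ev) {
        console.log('swiped left', ev);
    });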

brian.io/lawnchair is good for storing transient data by abstracting away IndexedDB, WebSQL or LocalStorage

@media queries are cool for high-density resolutions, or use CSS -webkit-image-set to provide both 1x and 2x image densities for hi-res icons

Charles Proxy is good for testing network latency, as is Network Link Conditioner on Mac

Chrome Dev Tools have cool features to override geolocation, user agent, and device orientation

Android and Safari remote debugging possible but need USB cable to connect.

m.chromeexperiments.com to show off your skills.

new.mcrbug.com for filing any bugs

When deciding whether to go native or not, plot User Experience on the x-axis and Platforms on the y-axis: the lower-right favors native, the upper-left favors hybrid. Left of the line is better for hybrid because of hardware acceleration and browser improvements.

PhoneGap Perf Tips:
  1. Keep small footprint by limiting libs
  2. Use touch events, not click events (see the sketch after this list)
  3. Use hardware accelerated CSS transitions
  4. Avoid network access and cache locally.
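For the touch-versus-click tip, the idea is to respond to touchend (falling back to click on non-touch devices) so the handler runs without the synthetic click delay; a small sketch (the element id is illustrative):

    // Respond to touchend on touch devices (click elsewhere) so the handler
    // runs without waiting out the ~300ms synthetic click delay.
    var btn = document.getElementById('saveButton');
    var eventName = ('ontouchend' in window) ? 'touchend' : 'click';
    btn.addEventListener(eventName, function (e) {
        e.preventDefault();  // avoid the follow-up synthetic click
        // ...do the work...
    }, false);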

zepto.js is a lightweight version of jquery that doesn’t worry about legacy browsers

The PhoneGap API explorer is a reference application to look at from the App Store.
Check out PhoneGap Build to target up to 6 platforms without having to load Xcode, Eclipse, etc. to build all the individual platforms by hand.

QConSF 2012: Real Time MVC with Geddy

Daniel Erickson from Yammer presents on mixing MVC with real-time apps.

MVC is here to stay because…

  • It provides structure to a project
  • Allows multiple people to easily jump into a project
  • Easier to get started with

Realtime is great for:

  • Getting instant feedback from your users
  • Works well on mobile (Voxer)
  • WebSockets over long/short polling

How can MVC and Real Time be mixed?

Enter Geddy… if you know Rails, NoSQL/SQL ORM layers and templating languages, then Geddy is a natural progression. It has Postgres, Riak and MongoDB adapters, plus EJS and Jade template languages, and supports real-time OOTB.

Geddy gives you a reach ORM/MVC framework that you can readily work into your Node.js applications.  I’ll have to take a look when I get back!