Xcode 5.1/iOS 7.1 requires upgrade and patch to Cordova

I discovered yesterday that Cordova doesn’t play too nicely when you upgrade Xcode to 5.1 in order to deploy to iOS 7.1 devices.

That said, there are several steps you need to follow in order to get Cordova working again under 5.1 until the Cordova 3.5.0 update is released:

  1. Update your local version of Cordova using npm install -g cordova (this should upgrade you to 3.4 if you were on a previous release; I was on 3.3.0)
  2. You then need to upgrade Cordova for your projects by issuing cordova platform update [ios | android] (run it separately for each platform you use; see the consolidated commands after this list)
  3. Once upgraded, you’ll want to add the diffs from this post. They’re basically patches for the time being until Cordova 3.5.0 is released. Thanks much to @shazron for these!
  4. After that, you can follow these directions from Shazron to ensure you have Xcode configured properly. It’s very important that you remove ALL of these extra settings (e.g., Any iOS SDK, etc.) under Architectures->Debug (and Release) and add “arm64” to Valid Architectures for your CordovaLib project settings (then target).
  5. To get it working on Android, you have to re-add the platforms/android/CordovaLib directory to be sure the new JAR is introduced into your project. I use IntelliJ, so all you have to do is remove the old lib under Project Structure and add the new one. Then add the resulting lib to your Android project and move it to the top of your dependency list.
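For reference, steps 1 and 2 boil down to the following commands, run from your project root (repeat the platform update for any other platforms you target):

    $ npm install -g cordova
    $ cordova platform update ios
    $ cordova platform update android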

Hopefully this gets you up and running quickly again!

RabbitMQ, Node.js and Groovy: Making Messaging Easy

Overview

If you’ve been following my blog, you’re probably well aware of my penchant for node.js and the few blog posts I’ve already written on the subject. More recently, I wanted to explore RabbitMQ, a messaging platform similar to JMS that can be easily leveraged with node.js as an alternative to saving messages via HTTP POST against a REST interface. In this blog post, I’m going to extend the DogTag application I blogged about in DogTag App in 30 minutes – Part 1: Node.js and MongoDB and show you how you can use RabbitMQ to push messages into MongoDB, using Node.js on the server and a simple Groovy client as the publisher of those messages.

What is RabbitMQ?

RabbitMQ provides robust messaging that is easy to use and is supported on a large number of developer platforms and operating systems. Its low barrier to entry makes it quite suitable for use with node.js; you will be amazed at how remarkably little code is required to establish a RabbitMQ exchange hosted within a node.js application. RabbitMQ is similar to JMS in that it supports many of the same paradigms you may have seen in this or other messaging platforms (e.g., queues, topics, routing, pub/sub, RPC, etc.). Primary language support includes Python and Java, but many others are supported as well.

In this blog post, I’ll be using an NPM package aptly named node-amqp. AMQP stands for “Advanced Message Queuing Protocol”. AMQP differs from API-level approaches like JMS because it is a binary wire-level protocol that defines how data is formatted for transmission on the network. This results in better interoperability between systems, based on a standard supported through the OASIS standards body. AMQP started off in beta but released a 1.0 version in 2011 and has since grown in popularity (see here for a comparison).

RabbitMQ and Node.js

In the example I’m about to show you, I’m basically providing a simple means of batching several messages that save new “dog tags” in the DogTags application I referenced earlier. I am going to post the messages to a queue and have the server save them into the database. Let’s take a look at the JavaScript code necessary to do this in node.js:


    var amqp = require('amqp');
    var conn = amqp.createConnection({url: mq_url});
    conn.on('ready', function () {
        var exchange = conn.exchange('queue-dogtag', {'type': 'fanout', durable: false}, function () {
            var queue = conn.queue('', function () {
                console.log('Queue ' + exchange.name + ' is open');
                
                queue.bind(exchange.name, '');
                queue.subscribe(function (msg) {
                    console.log('Subscribed to msg: ' + JSON.stringify(msg));
                    api.saveMsg(msg);
                });
            });
            queue.on('queueBindOk', function () {
                console.log('Queue bound successfully.');
            });
        });
    });

On lines 1-2, you can see how I include the amqp package from NPM and establish a connection with a provided URL (e.g., amqp://guest:guest@localhost:5672). Once the connection is established, we create what’s called an exchange on line 4. Here you define the exchange’s type, its name and whether or not it is durable. For further details on these and other settings, I’ll refer you to the docs here. You may be asking: why not just send directly to a queue? While that is possible, it goes against RabbitMQ’s messaging model: a producer should never send messages directly to a queue. Instead, it should send them to an exchange, which knows exactly what to do with them. In my case, I chose “fanout” as the exchange type, which simply broadcasts all messages to all the queues the exchange is aware of; here, that is just the one queue defined on line 5. Other exchange types you may want to look into include Direct, Headers and Topic, which are described in detail here.

Once I have defined the exchange and the queue, it is time to bind the queue to the exchange name and subscribe to any messages published to the queue. I do this on lines 8-11. It is then a simple matter of processing each message, using the api object I’ve established to save it to MongoDB, and recording an appropriate log message. As an added measure, I also think it is worthwhile to verify that the queue is bound to the exchange prior to any processing and report the result to the log. I do this on lines 14-16.
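Incidentally, the publishing side in node.js is just as small. We’ll use Groovy for the publisher below, but for completeness, here is a minimal sketch of a producer using the same node-amqp package (same mq_url as above; the message payload is illustrative):

    var amqp = require('amqp');
    var conn = amqp.createConnection({url: mq_url});
    conn.on('ready', function () {
        // declare the same fanout exchange and publish to it, never to a queue
        conn.exchange('queue-dogtag', {'type': 'fanout', durable: false}, function (exchange) {
            // a fanout exchange ignores the routing key, so an empty string will do
            exchange.publish('', {name: 'Rex', description: 'Black lab'});
        });
    });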

The Groovy Publisher

So now that we have the server-side component in place, ready to subscribe to any messages, we need to create a publisher to publish messages to the exchange. To do this, I chose to write a CLI tool in Groovy and leverage Spring AMQP to publish the message. Groovy is a great language for creating CLI tools, and since it is maintained by the folks at SpringSource, it was a natural fit to use the Spring AMQP framework, which can generate AMQP messages without much fuss. Of course, as a prerequisite, you’ll have to grab the latest Groovy binaries so you can run the script. Let’s take a look at the code:

@Grab(group = 'org.springframework.amqp', module = 'spring-amqp', version = '1.1.1.RELEASE')
@Grab(group = 'org.springframework.amqp', module = 'spring-rabbit', version = '1.1.1.RELEASE')
@Grab(group = 'com.rabbitmq', module = 'amqp-client', version = '2.8.7')
@Grab(group = 'org.codehaus.jackson', module = 'jackson-mapper-asl', version = '1.9.9')

import org.springframework.amqp.rabbit.connection.*
import org.springframework.amqp.rabbit.core.RabbitTemplate
import org.springframework.amqp.support.converter.JsonMessageConverter

class RabbitMqUploader {
    static void main(args) {
        println("Starting...")
        def cli = new CliBuilder(usage: 'uploader.groovy [filename.csv]')

        def factory = new CachingConnectionFactory();
        factory.setUsername("username");
        factory.setPassword("password");
        factory.setVirtualHost("ve2b460e7e6b24b5da88c935ff63c4e86");
        factory.setHost("hostname")
        factory.setPort(10001)

        RabbitTemplate templ = new RabbitTemplate(factory);
        templ.setExchange("queue-dogtag");
        templ.setMessageConverter(new JsonMessageConverter());

        def options = cli.parse(args)
        if (!options) {
            // Means no parameters were provided
            cli.usage()
            return
        }

        def extraArguments = options.arguments()
        def filename = ''
        if (extraArguments) {
            if (extraArguments.size() > 0) {
                filename = extraArguments[0]
            }
            //load and split the file
            new File(filename).splitEachLine("\n") { row ->
                def field = row[0].split(',')
                def dt = new DogTag()
                dt.name = field[0]
                dt.description = field[1]
                dt.longitude = field[2]
                dt.latitude = field[3]
                dt.phone = field[4]
                templ.convertAndSend(dt)
                println('Sent >>>' + row)
            }
        }

        factory.destroy()
    }
}

So after our discussion in the previous section, you can follow the code here to see how simple it is to publish a message to the RabbitMQ exchange we set up. One thing I simply love about Groovy is that I don’t need to download dependencies, build any JARs or define any classpaths. I simply use the @Grab annotation to pull the dependencies required for Spring AMQP, RabbitMQ and the JSON parser down to the client, so anyone can run the Groovy script without any set-up. On lines 15-20, I set up the configuration to point to my host (I recommend using a free aPaaS like OpenShift or Nodejitsu to host your Node.js app). On lines 22-24, I define the RabbitMQ template based on that config, set the exchange name and establish a JSON converter, since MongoDB likes JSON. Finally, on lines 34-55, I parse a CSV file provided as a command-line parameter and send each line as an AMQP message to our server.

To run this program, I simply run: groovy RabbitMqUploader.groovy ./dogtags.csv.
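For reference, the script expects one dog tag per line, with fields in the order the code reads them: name, description, longitude, latitude, phone. A hypothetical dogtags.csv:

    Rex,Black lab,-122.4194,37.7749,555-1234
    Daisy,Golden retriever,-122.2711,37.8044,555-5678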

Conclusion

With a few simple changes to my original application, it was easy to augment it to support RabbitMQ and define an exchange to consume messages. With Groovy, it was easy to write an autonomous, concise client that establishes a connection to my node.js server and broadcasts messages over the AMQP wire protocol for the server to process. I expect to use this approach in the future where a REST interface may not be best suited and well-established messaging paradigms make more sense. I encourage you to explore this interesting messaging protocol for your future needs as well.

QConSF 2012: Mobile Web Tooling

Pete LePage (@petele) gives a great overview of all the different tools you should consider when embarking on mobile web application development.

Sublime – text editor for Mac

CodeKit
  • Minifies and compiles
  • Live browser reloads
  • JSHint, JSLint
  • Watches all files in the directory for reloads

Great design patterns for mobile dev: HCI guidelines from Android, iOS, Safari, etc. can be found on their respective websites

Start with boilerplates from Sencha Touch, Bootstrap, and maybe even jQuery Mobile

github.com/ftlabs/fastclick for improving performance of button clicks (usually delayed by 300ms to allow for pinch zoom, slide, etc.)
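A minimal sketch of wiring it up (attaching to document.body is FastClick’s documented usage):

    window.addEventListener('load', function () {
        // FastClick removes the 300ms delay by synthesizing immediate clicks
        FastClick.attach(document.body);
    }, false);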

To debug, jsconsole.com can be used for remote debugging on other devices and will show the output right on the site…coolio.

Hammer.js is cool for gestures, letting you listen for those events

brian.io/lawnchair is good for storing transient data by abstracting away IndexedDB, WebSQL or localStorage
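A quick sketch of what that looks like (assuming Lawnchair’s callback-style API; the store name and record are illustrative):

    // open a store; Lawnchair picks the best available adapter behind the scenes
    new Lawnchair({name: 'dogtags'}, function (store) {
        // save a record, then read it back by key
        store.save({key: 'tag1', name: 'Rex'}, function () {
            store.get('tag1', function (tag) {
                console.log('stored: ' + tag.name);
            });
        });
    });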

@media queries are cool for high-density displays, or use CSS -webkit-image-set to provide both 1x and 2x image densities for hi-res icons

Charles Proxy is good for testing network latency, as is Network Link Conditioner on the Mac

Chrome Dev Tools have cool features to override geolocation, user agent, and device orientation

Android and Safari remote debugging is possible, but you need a USB cable to connect.

m.chromeexperiments.com to show off your skills.

new.mcrbug.com for filing any bugs

Plot User Experience on the x-axis and Platforms on the y-axis when trying to decide whether to go native or not: lower-right is native, upper-left is hybrid. Left of the line is better for hybrid because of hardware acceleration and browser improvements.

PhoneGap Perf Tips:
  1. Keep small footprint by limiting libs
  2. Use touch events, not click events (see the sketch after this list)
  3. Use hardware accelerated CSS transitions
  4. Avoid network access and cache locally.
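A minimal sketch of tip 2, binding touchend instead of click (the element id and handler are hypothetical):

    var btn = document.getElementById('save-btn'); // hypothetical button
    btn.addEventListener('touchend', function (e) {
        e.preventDefault(); // suppress the delayed synthetic click
        saveDogTag();       // hypothetical handler
    }, false);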

Zepto.js is a lightweight jQuery-compatible library that doesn’t worry about legacy browsers

PhoneGap API Explorer is a reference application from the App Store worth looking at.
Check out PhoneGap Build to target up to 6 platforms without having to load Xcode, Eclipse, etc. to build all the individual platforms by hand.

QConSF 2012: Real Time MVC with Geddy

Daniel Erickson from Yammer presents on mixing MVC with real-time apps.

MVC is here to stay because…

  • It provides structure to a project
  • Allows people to easily jump into a project, and multiple people at that
  • Easier to get started with

Realtime is great for:

  • Getting instant feedback from your users
  • Works well on mobile (Voxer)
  • WebSockets over long/short polling

How can MVC and Real Time be mixed?

Enter Geddy… if you know Rails, NoSQL/SQL ORM layers and templating languages, then Geddy is a natural progression. It has Postgres, Riak and MongoDB adapters, plus EJS and Jade templating languages. Supports real time out of the box.

Geddy gives you a rich ORM/MVC framework that you can readily work into your Node.js applications.  I’ll have to take a look when I get back!

QConSF 2012: Graph Database Overview from Emil Eifrem

@emileifrem gives a great introduction to NoSQL databases, especially graph databases.  He also gives a quick overview of use cases for graph databases and highlights some of the benefits of using Neo4j in this endeavor.  Here are my notes from this session:

Trends in BigData & NoSQL:

  • Increasing data size
  • Increasingly connected data
  • Semi-structured data
  • Architecture – a facade over multiple services

Categories of NoSQL

Key/Value stores (heritage from Amazon Dynamo)

  • Riak, Redis, and Voldemort are implementation examples
  • Strength is its simplicity, but that is also its weakness

Columnar Stores

  • BigTable heritage – every row can potentially have its own columns
  • Examples are HBase, Cassandra and Hypertable
  • Supports semi-structured data, but the weakness is nested or connected data

Document Database

  • Collections of documents: JSON that could potentially be nested
  • 90% of the NoSQL uptick is on MongoDB, but also CouchDB
  • Strength is the simplicity of the database, but it is hard to do connected data

Graph Database

  • Nodes and Relationships
  • Examples are Neo4j, InfiniteGraph, OrientDB, etc.
  • Great for connectedness and complexity of data, but harder to scale to size

Graph databases can provide statistical data on how likely a node is to be related to other nodes.  Relationships are first-class citizens in graph databases, giving color to how nodes are related.  Nodes and relationships have simple key/value properties.  Indexes are also available.  Types are being considered for post-production control of data.

The speed comparison for the social graph example in the presentation shows MySQL at 2000ms and Neo4j at 2ms.  Going from 2k to 1 million people, checking whether they’re connected is still about 2ms.  In SQL, the JOIN clauses explode due to the combinatorial issue.  Neo4j can visit about 1-2 million nodes per second.

Graph queries:

Cypher gives you higher abstraction and ease of use, but slower performance. It uses graph patterns to define nodes and relationships, represented like:

    START A=node:person(name="A")
    MATCH (A)-[:LOVES]->(B)
    RETURN B as lover

A pattern-matching query language: declarative grammar with aggregation, ordering and limits… results come back in tabular form.

Neo4j is fully ACID-compliant, as opposed to the eventual consistency of most other NoSQL systems.

Use your Maven build to auto deploy to WebLogic 10.3

Ever wonder how you can deploy to WebLogic Server using Maven? In the 10.3 version of WebLogic you can use wldeploy (an Ant task) to do this in Maven via the antrun plugin. The XML is provided below; I wrapped it in a profile called “wlsDeploy”. On our CI build, I simply added -PwlsDeploy to the command line set-up so that it would deploy to WebLogic.

In the code below, you simply need to have the WebLogic JAR available on the build server, update the path property variable, and replace the other variables with properties in your Maven build or pass them as -D parameters on the command line, especially for things like passwords.

Once complete, your builds will auto-deploy to WebLogic on your build schedule, assuming you’re using a CI server for continuous integration. Otherwise, you can run this locally as well.

<plugins>
    <plugin>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
            <execution>
                <id>local-deploy</id>
                <phase>package</phase>
                <goals>
                    <goal>run</goal>
                </goals>
                <configuration>
                    <tasks>
                        <taskdef name="wldeploy" classname="weblogic.ant.taskdefs.management.WLDeploy">
                            <classpath>
                                <pathelement path="${path.to.weblogic.jar.local}"/>
                            </classpath>
                        </taskdef>
                        <wldeploy action="stop"
                                  name="${deploy.name}"
                                  failonerror="false"
                                  user="${wls.dev.username}"
                                  password="${wls.dev.password}"
                                  verbose="true"
                                  adminurl="http://${wls.dev.hostname}:${wls.dev.port}"
                                  targets="${wls.deploy.target}"/>
                        <wldeploy action="undeploy"
                                  name="${deploy.name}"
                                  failonerror="false"
                                  user="${wls.dev.username}"
                                  password="${wls.dev.password}"
                                  verbose="true"
                                  usenonexclusivelock="true"
                                  adminurl="http://${wls.dev.hostname}:${wls.dev.port}"
                                  targets="${wls.deploy.target}"/>
                        <wldeploy action="deploy"
                                  name="${deploy.name}"
                                  source="${deploy.source}"
                                  remote="true"
                                  upload="true"
                                  user="${wls.dev.username}"
                                  password="${wls.dev.password}"
                                  verbose="true"
                                  usenonexclusivelock="true"
                                  adminurl="http://${wls.dev.hostname}:${wls.dev.port}"
                                  targets="${wls.deploy.target}"/>
                    </tasks>
                </configuration>
            </execution>
        </executions>
    </plugin>
</plugins>
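Since this block lives in the “wlsDeploy” profile mentioned above, the surrounding POM structure looks roughly like this (a sketch; the <plugins> element above slots in where the comment sits):

<profiles>
    <profile>
        <id>wlsDeploy</id>
        <build>
            <plugins>
                <!-- maven-antrun-plugin configuration from above -->
            </plugins>
        </build>
    </profile>
</profiles>

The CI invocation then looks something like this (values are illustrative; have your CI server inject real credentials rather than committing them):

mvn clean package -PwlsDeploy -Dwls.dev.password=secret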

Setting up Git with Apache Smart HTTP/S and LDAP

I recently was put on a project to explore how we could use Git over HTTP and integrate it with our existing LDAP for authnz.  The appeal of HTTP is that it is pretty easy to set up, you can encrypt the content transfer with SSL, and HTTP/S is firewall friendly.  The downside is that HTTP is a “dumb” protocol.  The information here consolidates what I found on the web to accomplish this.  I am using RHEL 6, Apache 2.2, OpenLDAP and msysgit as my Git client on my Windows machine.

First off, HTTP wasn’t necessarily the fastest protocol to use with Git until git-http-backend, also known as Smart HTTP, was added in Git 1.6.6. This article from the Pro Git author, @chacon, details this, and in my experience I cut my download times by two-thirds using this approach.  Moreover, GitHub also supports it.  Basically, what you need to do is as follows:

  1. Confirm you have Apache 2.2 installed: rpm -q httpd (install it with yum otherwise)
  2. Clone your Git repo to Apache by doing the following (as per the Pro Git book):
    $ cd /var/www/html/git (mkdir if necessary)
    $ git clone --bare /path/to/git_project gitproject.git
    $ cd gitproject.git
    $ mv hooks/post-update.sample hooks/post-update
    $ chmod a+x hooks/post-update
  3. Update your httpd.conf to include this:
    SetEnv GIT_PROJECT_ROOT /var/www/html/git
    SetEnv GIT_HTTP_EXPORT_ALL
    ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/
  4. Add LDAP Authentication (in this case, any valid LDAP user will have access to the git location) as follows:
    <LocationMatch "^/git/.*/git-receive-pack$">
            SSLRequireSSL
            Order deny,allow
            Deny from All
            AuthName "GIT Repo"
            AuthType Basic
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative off
            AuthLDAPURL "ldap://ldap-server.company.com:389/ou=users,o=company?uid"
            Require valid-user
    </LocationMatch>
  5. Restart httpd: /etc/init.d/httpd restart
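At this point it’s worth a quick smoke test from a client before layering on the LDAP group rules below (repo name from step 2; the hostname is illustrative):

$ git clone http://gitserver.company.com/git/gitproject.git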

For LDAP authorization, of course, you may have several different repos running off the same host, all of which require certain users or groups to have access to a given location. This site explains it in detail, but here is an example I used to bind a particular repo location to an LDAP group with SSL in place:

<LocationMatch "/git/gitproject*">
        SSLRequireSSL
        Order deny,allow
        Deny from All
        AuthName "GIT Repo"
        AuthType Basic
        AuthBasicProvider ldap
        AuthzLDAPAuthoritative on
        LDAPTrustedGlobalCert CA_BASE64 /etc/pki/tls/http/rootCA.crt
        AuthLDAPURL "ldaps://ldap-server.company.com:636/ou=users,o=company?uid"
        AuthLDAPGroupAttribute member
        AuthLDAPGroupAttributeIsDN on
        Require ldap-group cn=my.group,ou=groups,o=company
        Satisfy any
</LocationMatch>

In this example, I’m binding gitproject.git (use a wildcard in LocationMatch in case users forget to add the .git extension) to any member of the LDAP group “my.group”. Note that you may need to define a different LDAP group attribute to match the field that contains the DN of your users.  If you are not storing DNs, then you can set AuthLDAPGroupAttributeIsDN to off.

The last step is to enable SSL on your Apache server.  We use self-signed certs internally, so you’re going to have to add those certs to your Git clients unless, of course, you’re using a well-known root CA like Verisign.  To do that with msysgit, open $MSYSGIT_INSTALL/bin/curl-ca-bundle.crt and add the base-64 encoded text of your certs to the end of this file.  Then run the following command from the Git Bash:

$ git config --global http.sslcainfo c:\\apps\\Git\\bin\\curl-ca-bundle.crt

That’s pretty much it!  Now you can simply connect to your repo from msysgit with SSL (no SSH keys req’d) and LDAP authorization:

$ git clone https://mygitserver.company.com/git/project.git
Cloning Project...
Username: [put user name that's in the LDAP group]
Password: [password]
remote: Counting objects: 17, done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 17 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (17/17), done.