Adding Enterprise Features to a Application with Kaazing WebSockets

As we discussed in my prior blog post Meeting WebSockets for real, WebSockets enable development of distributed event-driven applications, particularly ones that exchange messages.  Kaazing’s high-level client libraries substantially simplify this effort.  It is possible to add these capabilities to a new Web application in a matter of minutes.

However, what if you already have a NodeJS application and your application already exchanges messages using very popular library? is a fantastic open source technology that is very easy to use and adopt. However, many critical requirements of enterprise applications may present some challenges when using  Most enterprises require advanced security, enhanced scalability, fault tolerance, etc.

In addition, ‘out of the box’ does not have support for mobile clients that many enterprises mandate.  In these cases there is a need to transition to an enterprise-class platform and library for using Websocket.

In this article I want to discuss how easy it is to modify your existing code written with to work with Kaazing WebSockets using a publish/subscribe model.


Let’s First Start with a Working App

Let’s start with my AngularJS implementation of the canonical TodoMVC application that uses   I had a lot of fun building that app; you can find it here.  We are going to make the simple Todo application into a shared app so one user can see the changes of other users).  In addition we can add a NodeJS server component to store the current state of the items; so new users will receive an initial update.  And to prevent potential race conditions when items are being edited by a user, we will simply disable the updating of items by other users; other more sophisticated code can be added later.

Let’s look at several sections of my application’s main controller (complete source available on GitHub).


// Connect to pub-sub service

// Load initial data on connect
$scope.socket.on('connect', $scope.loadData);

// Setup receive callback
$scope.socket.on('todomvc-snd', $scope.processReceivedCommand);


The first thing is to connect to


Once a connection is established, we send the request to the server to load any initial data .  We do this by sending the command “init” to “todomvc-rcv” socket together with the attached to the $scope client ID (server will send the reply to this command to all the clients – clients with different IDs will ignore it):

$scope.socket.on('connect', $scope.loadData);


        var msg={
        $scope.socket.emit('todomvc-rcv', msg);
        $"Sent initialization!");

Now, we need to setup the callback method to receive messages from the server:

$scope.socket.on('todomvc-snd', $scope.processReceivedCommand);

$scope.processReceivedCommand is straightforward and just responds to individual commands.

        $"Received command "+cmd.command);
        if (cmd.command==='insert'){
        else if (cmd.command==='remove'){
        else if (cmd.command==='update'){
        else if (cmd.command==='initdata'){

Note, we are setting up callback on a todomvc-snd topic, while sending the messages to todomvc-rcv topic. I could not find the way to have send the data from one client to the others directly. To overcome this issue, the client will send the data to a server on the topic A and the server will retransmit the message to the other clients on the topic B.
Sending the data is pretty trivial. For example, to add the item:

var msg={
$scope.socket.emit('todomvc-rcv', msg);

That was the client side of the application.  Let’s now look at the server side NodeJS component.  Besides the code to process the received message and serve static pages it also contains the part that connects to

io.on('connection', function(s){
      console.log('a user connected');
      s.on('disconnect', function(){
          console.log('user disconnected');
      s.on('todomvc-rcv', processMessage);

Function processMessage contains the code to store the items that also includes send message calls to retransmit.

function processMessage(cmd) {
        console.log('Command: '+ cmd);
        if (cmd.command === 'insert') {
        else if (cmd.command === 'remove') {
        else if (cmd.command === 'update') {
        else if (cmd.command === 'init') {

Similar to the client side code, sending messages is very simple.  If we want to send the message retCmd to everyone:

var retCmd = {
  command: "initdata",
  client: cmd.client,
  items: todos

socket.emit("todomvc-snd", retCmd);

If we want the message to be received to all the sockets except the one that sent it:


Now, we can run and test the application.  This page will give you detailed instructions.

Now… Adding the Facade to Kaazing WebSockets

To make things easy, we are going to employ the Kaazing Gateway AMQP Edition since NodeJS has robust libraries for AMQP.  While the Kaazing Gateway comes pre-packaged with the Apache Qpid AMQP broker, it can easily be RabbitMQ or another AMQP-compliant broker.  Refer to for the instructions for the installation of the Gateway.

Modifying client code

We are going to use Kaazing Universal Client Library for AngularJS to communicate with WebSocket Gateway.  This client library provides a high-level messaging abstraction that is independent of the underlying AMQP or JMS implementations.
Here are the steps we need to take

  • Modify app.js to include the Kaazing Universal Client Library service and create connection parameters contstants (see )
    angular.module('todomvc', ['ngRoute', 'ngResource', 'KaazingClientService']).
        .config(function ($routeProvider) {
        .constant('todoMvcWebSocketConfig', {
            URL: "ws://localhost:8001/amqp",
            TOPIC_PUB: 'todomvc',
            TOPIC_SUB: 'todomvc',
            username: 'guest',
            password: 'guest';
  • In the main controller source code
    • Connect to Gateway and establish all callbacks (replace $scope.socket=io() and all $scope.socket.on calls)
      // Connect to WebSocket
            'amqp, - use AMQP protocol
            todoMvcWebSocketConfig.URL,  // connect URL (default is ws://localhost:8001/amqp)
            todoMvcWebSocketConfig.username,// use guest account
            todoMvcWebSocketConfig.password,// use guest pswd
            todoMvcWebSocketConfig.TOPIC_PUB, // use 'todomvc' as publishing topic
            todoMvcWebSocketConfig.TOPIC_SUB, // use 'todomvc' as subscription topic
              true, // No Local flag set to true which prevents the client to receive messages that it sent
              $scope.processReceivedCommand, // function for processing received messages
                   function(e){alert(e);}, // function to process WebSockets errors
            $scope.logWebSocketMessage, // function to process WebSocket logging messages
            $scope.loadData // function that will be called when the connection is established
    • Replace $scope.socket.emit with AngularUniversalClient.sendMessage
      var msg={
          command: 'insert',

Note that we no longer need to use different topics for publishing and subscription.  The Kaazing WebSocket Gateway will send published messages to all subscribed clients without the need to retransmit them in the server code!

Server code changes

To preserve our server side code we are going to use the Kaazing wrapper located in the node subdirectory of the todomvc implementation.
The wrapper hides the details of the use of AMQP protocol (which is used for communication with NodeJS) while exposing interface that mimics the one used in

With our wrapper, we only need to make two changes on the server:

  • Change both send and receive topics to “todomvc”
  • Remove the code that retransmits messages

Here is the final server code implementation.

Summary of Changes

If you were to run the diff:

  • app.js vs app-socketio.js:
    • Added declaration of the Kaazing Universal Client Service
    • Added constant for the connection parameters
  • todoCtrl.js vs todoCtrl-socketio.js
    • Added reference to Kaazing Universal Client Service
    • Added reference to the connection parameters constant
    • Replaced connection and callbacks for with a single call to initialize Kaazing Unviersal Client
    • Replaced socket.emit functions with Kaazing Universal Client sendMessage cals
  • serverws.js vs serversocketio.js
    • Replaced declaration of io:
      var io = require('')(http);

      was changed to

      var io=require('./node/socketioalt.js')('amqp://localhost:5672');
    • Removed all retransmit calls socket.broadcast.emit(“todomvc-snd”,cmd);
    • Changed the topics “todomvc-snd” and “todomvc-rcv” to “todomvc


Its very easy to upgrade a application to use the enterprise-class Kaazing WebSocket Gateway.  There are barely any source changes in your existing code.  Not only do you now get high-performance, advanced security and scalability features, Kaazing technology will extend your former applications to work with mobile clients and many B2B and inside-the-firewall applications.

It’s a huge win.

Posted in html5, Kaazing, WebSocket | Tagged , , | Leave a comment

How to Replace Your Legacy VPN in Minutes

KWICies #007 – When You Gotta Go, You Gotta Go

“Intelligence is the ability to adapt to change”
– Stephen Hawking

“The lion and the calf shall lie down together but the calf won’t get much sleep”
– Woody Allen



An Enterprise Parable
The alluring sound of the cloud siren beckons you to write a killer service using the latest server-side technologies to solve a huge problem in your industry. You haul out the buzzword-compliant big guns, i.e., big-data, machine intelligence and Docker microservices. Your team builds this awesome, mission-critical, revenue-generating web service. All this gleaming power is now ready to be used by your Fortune 500 customers.

You pause and secretly sneer with glee.

The management gods smile upon you. It’s an evil smile, but what the heck, at least those bastards are smiling.

You present to your first customers.

“… and it’s scalable and secure. Its accessed with a simple HTTP call. You merely upload your users’ credentials into our repository in the cloud and we take care of authentication.   Thank you.  Payments to ACME Software are due 30 days ARO”.

But the customers don’t rejoice. In fact they remain stone-faced, stand up silently and exit single-file.

The management gods drop their smiles and their lit cigars.   You reach into your pocket and solemnly crinkle that envelope with that down payment on your new digs in the Hamptons.

Cue the sad music.

sad+clown                                     But I like the Beach…


What Goes On in the Datacenter Stays in the Datacenter
Most companies have legal constraints, regulatory restrictions or corporate mandates on what types of systems can physically exist in an off-premises cloud. Very often, critical user credentials and other types of “family jewel” information cannot leave the premises of the enterprise. In many geographic locations, in particular Europe, there are stringent rules on where data must reside. Data protection directives and privacy laws can be quite strict and heavily regulated

In addition to data protection and privacy constraints, many corporations simply may not want their highly sensitive services in a public cloud provider. It may not even be for data sensitivity reasons; companies are very reluctant to create subsets of data since it involves costly data synchronization and maintenance. So there are a variety of reasons that certain systems and services must remain on-premises.

Caribbean_Map                              Preferable off-premises islands of information

However, these powerful SaaS-style systems of engagement running in a public cloud commonly require access to the on-premises systems of record.

For example, a CRM system running in Microsoft Azure may require authentication from an on-premises LDAP service. A portfolio reconciliation system running in Google GCP needs real-time market data feeds originating from several on-premises sources. A managed machine intelligence service running in Amazon AWS requires event-based data from your supply chain partners.

These are not unusual on-prem/off-prem scenarios.  Au contraire mon ami, most SaaS services have this requirement.

Historically there were three basic solutions for this type of requirement:

  1. You ask your customers to create a conventional REST-style web service that allows your external cloud service to call them. Your champions at your customer hear you, feel a stabbing sensation in their stomachs and wince politely. This solution is quite painful and very costly for their IT departments. Designing and deploying an application server that requires several months to develop and incurs maintenance expenses is painful for your customers.
  1. You ask your customers to open incoming non-standard TCP ports for your SaaS cloud service.This is potentially a humongous security hole. You are destined to be a case study in the future. Prepare your CV.
  1. You ask your customers to install a legacy VPN.Your customers are familiar with VPNs. They know they can install an expensive hardware VPN device from a large networking company with a lengthy maintenance agreement. Or they can deploy a software SSL VPN, perhaps even an open-source one with the fugly user interface and confusing administrative dashboard. All your customers need is to get approval from their InfoSec and Operations teams. And signoff from your own InfoSec team. And from their Managing Director. And their CTO. And their CIO. Should be simple, right?

The usual 30-year old answer to this scenario is to setup a legacy VPN to connect the two systems.

shrug                                                        Do I really have a choice?


However there are many downsides to setting up traditional or cloud-based VPNs:

  • The on-boarding process can be onerous especially between external organizations, despite the straightforward technology setup.
  • They are not easy to manage in an agile, constantly changing federated environment, which is the norm.
  • VPNs may require additional infrastructure for mobile devices that experience disconnects, cross-application network connection retries, additional security, etc.
  • Even one VPN can be quite difficult for a business unit to deploy, maintain and understand the security issues. In a business-driven cloud services world, this reduces agility for the revenue generators in an enterprise.
  • They typically allow low-level potentially dangerous access especially if home computers are used to access corporate assets.
  • VPN Access control commonly uses the hard-to-manage, black list security model.
  • They present huge surface areas with many attack vectors for hackers to exploit. Some researchers have even discovered many VPN vendors leak low-level IP data.
  • VPN vendor hardware and software are not always interoperable or compatible. A particular VPN architecture may not be suitable across multiple VPN vendors.
  • VPN products typically offer poor user experiences.
  • TCP and Web VPN requirements are not necessarily the same. This drives up costs. In terms of security,
  • Do legacy VPNs fit in a multi-cloud, on-demand and microservices world? All connectivity must be uber convenient and on-demand.

And as the Internet-of-Things (IoT) and Web-of-Things (WoT) wave matures over the next 5-10 years, VPNs are simply too clumsy, inconvenient and heavyweight to handle agile remote connectivity for the many billions of devices to come.

And these devices will arrive in huge waves. The connectivity and data volumes are large now, but when IP is implemented over Bluetooth LE, the connectivity fabric will spread faster than a Lady Gaga video on YouTube. You don’t have to be Nostradamus to predict a future discontinuity in the increased data volumes, the increased number of machine intelligence applications and the necessary secure connectivity.

Customers definitely want secure connectivity with all these apps, but they also want convenience.


convenience                                    Is Elon Musk driving that Really Smart Car?


Enter WebSocket
We’ve talked about WebSocket in detail back in KWICies #003. To quickly review, our fearless hero WebSocket is an official IETF wire protocol (Dec 2011) and an (essentially) official W3C JavaScript API to use it (note: the W3C only specifies a JavaScript API). WebSocket is a peer protocol to HTTP; in other words both HTTP and WebSocket (and their TLS/SSL encrypted versions) are physically implemented “on top of” TCP.


wshttp                       The Web is now a humongous collection of APIs and Services


Unlike HTTP, WebSocket is a persistent (and full-duplex) connection between two endpoints. A persistent connection means event-based programming is now finally possible over the web. Btw if you really want to be hip, replace “event-based” with “reactive”; you’ll make the application server developers swoon during your next corporate presentation.

HTTP is clearly an excellent protocol for document up/download and we certainly have tweaked it over the past 5-7 years to do things it was never intended.  And HTTP remains the protocol of choice if you need caching of static entities. But it was never intended for asynchronous distributed computing.

On the other hand, WebSocket can be thought of as a “TCP for the web” (certainly not physically true). As a persistent and full-duplex connection, WebSocket allows all sorts of additional protocols to be implemented over the web, e.g., messaging, events, telemetry, data acquisition, et al.  And like TCP, WebSocket is a low-level transport; many other types of higher-level application protocols and APIs can be implemented over WebSocket.  As a matter of fact, any TCP-based application protocol can use WebSocket as a transport to traverse the web.

Similar to any other wire protocol, WebSocket does not have to be used with a browser (e.g., Slack’s native client uses WebSocket under the hood), but there are certainly a lot of examples of WebSocket use in a browser (Google docs, Trello, BrowserQuest, etc.).

And since WebSocket is like a TCP, you can envision other non-browser use cases like… wait for it… replacing many VPN scenarios.  The Kaazing KWIC software leverages this new communication model by securely converting TCP to WebSocket from one side and reversing the process on the other side. Literally in a few minutes you can have secure hybrid cloud services connectivity using the WebSocket-powered KWIC software without the pain and administrative headaches of a legacy VPN.

If you need on-demand, program-to-program service connectivity for your modern applications, why are you still dealing with old-school, 30 year old VPN technology?

Frank Greco

Posted in cloud, Kaazing, Security, WebSocket | Tagged , , , , | Leave a comment

Kaazing and NGINX – The Best of Both Worlds

Jesse Selitham

Kaazing is well-known for a multitude of products based on the WebSocket IETF and W3C standard.  Using WebSocket allows us to extend different types of asynchronous, message-oriented infrastructure such as JMS and AMQP easily and securely over the web.

NGINX is another awesome tool.  It is one of the world’s most popular, high-powered web servers.  Many of the large websites on the planet prefer NGINX as their webserver of choice.

Wouldn’t it be really cool to combine the secure messaging features of the Kaazing Gateway together with the high performance Nginx web server?  Your hyper fast website would have additional security measures to protect your internal network from unauthenticated users and you’d be able to automagically connect to internal enterprise messaging systems without learning brand-new APIs.

Now that’s a powerful combination.  Let’s setup a deployment to get the best of both.  

We can have incoming WebSocket-based messaging requests passed through an Nginx webserver (proxy) instance to a Kaazing WebSocket Gateway server instance.  Normal incoming HTTP/S traffic would be directly handled by Nginx.


This topology also allows us to avoid exposing the internal URL or private infrastructure information that hosts the Kaazing WebSocket Gateway server (which is a simple Java process btw).



The first step is to download the Kaazing WebSocket Gateway if you haven’t already.  I’m using the standalone JMS Edition 4.0 on Windows that you can find here (Make sure you download the “Gateway + demos” version).  Once you’ve unzipped the file, we’ll want to make one change to the default gateway.config file (gateway-config.xml).  This file is located in the “…\conf” subdirectory below the your install directory.


Down the middle of the page of the gateway-config.xml file, within the JMS <service> tag, we’ll need to change this (parameterized) line here


This property is enabled for security reasons by default.  This setting will block incoming websocket connections if the request is NOT originating from the specified host (${gateway.hostname}) and port (${gateway.extras.port}).  In a non-proxy topology, you would typically keep the origin URL (typically coming from a browser) set for protection.  But in our case, Nginx will be proxying the connection coming from a different port than the actual origin.  For the purposes of development simplicity, let’s change this setting:


Using “*”  allows websocket connections from any origin through to the JMS service.  In production, you will definitely want to strengthen security.  The security documentation for the Kaazing Gateway has detailed information for enhanced security.



The JMS Edition of the Kaazing Gateway was designed to work out-of-the-box with any JMS 1.1 compliant message broker.   For convenience the binary version of the  popular message broker ActiveMQ is included in the JMS Edition distribution.  In addition, the gateway-config.xml configuration file has a simple, default configuration for ActiveMQ.

Let’s start the Kaazing WebSocket Gateway and the ActiveMQ message broker.  The startup scripts for both are located in the …\bin subdirectory.





The last step is to install and configure Nginx.  Setup is very easy.  The installation documentation on the Nginx website can help you.    Below is my configuration for my nginx.conf file.

events {
    worker_connections 1024;

http {
    server {
        listen 9000;

        location /jms {
            proxy_pass http://localhost:8001/jms;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        location / {
            proxy_pass http://localhost:8001;

I’ve stripped the Nginx configuration down to the bare minimum.  This configuration simply proxies requests through to the Kaazing Gateway server.  

Now we simply start up Nginx.   

At this point, we have an instance of Nginx acting as a normal webserver handling HTTP requests and also acting as a WebSocket proxy.  The JMS-over-WebSocket requests (identified by “http://localhost:8001/jms” are passed through Nginx and handled by an instance of the Kaazing WebSocket Gateway.  The Gateway takes that connection request and handles it as it would have if there were no proxy and completes the connection to the instance of the ActiveMQ message broker.   Everything is now in place for a full integration test.



Included with the JMS Edition of the Kaazing Gateway is a simple JavaScript publish/subscribe messaging demonstration.  We can use this demo to test our full setup.  

Start your favorite browser and visit this URL:


If you look in the Location textfield, you will see the URL: ws://localhost:9000/jms

Notice the port we are using here, i.e., 9000.  This is the port that we configured our nginx server to listen on, which means Nginx served up the webpage.  Next click on the Connect button.  If we’ve done everything correctly, we should see “CONNECTED” in the “Log messages” text area to the right.


Voila!  We have successfully allowed a WebSocket connection to pass through Nginx, connect with the Kaazing Gateway and subsequently connect to the back-end ActiveMQ message broker.  I will leave the subscribe and publish fun of the demo app up to you!   

Posted in cloud, html5, IoT, JMS, Kaazing, Uncategorized, WebSocket | Tagged , , , | Leave a comment

Deploying the Kaazing WebSocket Gateway Using Docker

Many of our customers deploy our WebSocket Gateway using traditional operations procedures.  Works perfectly fine, but like typical enterprise-grade middleware products, there’s still a number of moving parts to get correctly configured with other systems.

Enter Docker.

In case you haven’t checked it out yet, Docker is an exciting management platform that makes setting up tech infrastructure a snap.  It allows you to manage container deployments which are self-contained services that share the operating system.  You should think of containers as mini-virtual-machines but significantly lighter weight.  And these days, to help deploy multi-container applications Docker now comes with the “docker-compose” (Compose) utility which makes deployment ultra easy.

To get you caught up, I will walk you through the basics of getting Docker installed and running a short example.  Let’s use the popular echo demo application from and deploy it in a single command.  Then I’ll show you how to leverage Compose to easily define and run a sample application that connects the Websocket gateway to a backend JMS message bus.  We’ll keep it brief and point you to the useful Docker docs.  All of these examples are hosted on github.

Sounds good?  Let’s get started.

First, what really is a container?  A container wraps an implementation of a software service in a complete filesystem and OS-like environment that contains everything it needs to run.  Docker provides an API and container management platform that allows people to deploy, run and share them with others.  It’s very similar to traditional virtual machine (VM) management that we’ve used for the past few years.  But Docker has all the bells and whistles that make container management simple and significantly more performant.  

Let’s install Docker by following the Docker installation docs.  After you have installed and started Docker, you can run a single command to deploy and run the WebSocket demo.

docker run -name websocket.container -h -p 8000:8000 kaazing/gateway

So how does this work?  

On your machine you are running a docker daemon which is listening to commands.  The ‘run’ command tells the docker daemon to launch a docker container on your behalf, specifically the kaazing/gateway container.  But how did you get the kaazing/gateway container?  Well, it is made available to you in the form of an image via a trusted and official docker repository that is accessible to the public called docker hub.  The next question you’ll ask is “Where is my container running”?  Containers are a Linux feature, if you are running on a Linux machine then it is running in an isolated namespace on your OS.  If you have a Mac or Windows machine, then your container needs a Linux VM on your Mac/Windows box as a host environment to run. Luckily, docker distributions now include a Virtual Machine manager like VirtualBox.

So now you have a docker container running.   Cool.

To connect to it you will need to add an entry into /etc/hosts for “” that points to the IP address of the machine the docker container is running on.  If you are using the docker installation referenced above you can get this by running docker-machine ip your-docker-machinename.  On my machine that returns  So I would add the following to my /etc/hosts.

Once you configure your /etc/hosts file, you can run a quick test.  Open your browser and visit:

Perfect, step one is done!

But setting up a Kaazing Gateway in this way still requires configuring the Gateway to interact with other components in your infrastructure.  Rather than setting up each component separately, you can use Compose to launch and configure them all at once.  For example, a common setup is to use the Kaazing Gateway with a messaging broker that allows internal publish/subscribe messages to be delivered over the firewall.

That’s why we need Compose.

Compose now comes pre-installed with the latest Docker releases.  With Compose you specify the components in your infrastructure via a docker-compose.yml file.  Below is an example:

  build: dockerized.gateway
    - "8000:8000"
    - "8001:8001"

In this file two containers are defined: broker, and gateway.  Each points to a directory that defines where the container is built from; dockerized.gateway and respectively.  The gateway container is linked to the broker container via, which means the gateway container can reach the broker on that address.

To run this we will need to checkout the example github repository and run docker-compose up, which launches the configuration.  Here are the commands:

git clone .
docker-compose -f up

Now point your brower to and you can see the jms demo.

Docker has a fast-growing image repository.  The Docker team recently announced they were 5.6 million docker images pulled every day.  Its immense popularity is based on its ability to enable developers to easily build, package, and deploy sophisticated enterprise applications that consist of a collection of powerful high-performance components.  Now the Kaazing WebSocket Gateway joins the Docker ecosystem as a valuable tool for application developers.

Posted in Uncategorized | Leave a comment

Wild Week of Web Wonderment – HTML5DevConf

KWICies #006: Thoughts on HTML5 and HTML5DevConf 2015

“We live in a web of ideas, a fabric of our own making.”
– Joseph Chilton Pearce

“Roses are red, violets are blue, I’m schizophrenic, and so am I.”
– Oscar Levant

The Web is Your Oyster, Now Watch Out for the Clams…




Let’s face it. HTML5 was originally designed as a weapon.

HTML5 is the culmination of a political movement to overthrow Microsoft Internet Explorer (IE) as the web platform for all of humanity.

If you recall from your readings of the ancient dead (ANSI) C scrolls of the Internet circa mid to late 1990’s, Microsoft’s IE (a browser licensed from Spyglass) was the only browser allowed by most companies. And many large companies wrote apps specifically for IE, effectively shutting out any other browser vendor. This was tough to take for the other browser vendors at that time such as Mozilla, Apple and Opera. The browser was the portal to knowledge and Microsoft was the gatekeeper. Not only that, the pace of innovation with Microsoft’s browser back then was glacially slow (and that’s being kind). Microsoft also leveraged this control to constrain the growth of the Java browser plugin to run applets (anyone remember any useful Java applets? Me neither). When you have a monopoly, why rush to innovate? It’s no different than in any other business sector.


“I’m certainly not interested in forking a shell”


But this situation was only good for Microsoft and gave enterprises a false sense of security. Yes, you can be sure there’s a browser running on your user’s desktop so you can deploy your web app, but its frickin’ IE with its own proprietary implementation of HTML and related technologies. Didn’t we learn portability as a trait of good software back in Computer Science 101?

Since the browser is the portal to knowledge and information, and just an indispensible tool (to buy incredibly bad country music, to rent movies from the 70’s with a laugh track funnier than the movie, and to watch videos of cute kitties riding on Roombas), the other browser vendors were at a disadvantage. They could not run those non-portable, IE-specific apps.

And quite honestly, the W3C as an organization back then was no zippy roadrunner either. It moved quite slowly especially with the evolution of XML-flavored page markup, which thankfully died on the vine. Hopefully the vine died a painful death too.


What were they smoking?


Because of the slow evolution of this markup language by the W3C (and because they needed to stop the IE monopoly), Apple, Mozilla and Opera proposed a new unofficial web standards group called the WHATWG (Web Hypertext Application Technology Working Group) in 2004. These three proposed HTML5 to the W3C as the basis of the next generation of web applications, not web documents. Effectively this turned the web into a programmatic platform rather than a document storage platform. HTML5 was not just an upgrade of HTML4; it proposed sophisticated graphics, animation and a collection of useful ECMAScript (note JavaScript is an Oracle trademark) APIs such as File I/O, Geolocation, Database, Messaging, Threading, Touch Events, Audio, MIDI, Speech Recognition, et al.

Cool HTML5 stuff.


Eventually Google recognized the pervasive, far-reaching, accessible power of HTML5 and joined the programming plumbing party with the ever-growing list of HTML5 goodies.   And HTML5 is providing all of these features without plugins. Since plugins were attack vectors for hackers, every additional plugin meant another possible security breach. HTML5 only allows a single way to connect the browser to the web. This significantly reduces the possible ways hackers can break in. It certainly doesn’t eliminate breaches, but HTML5’s no-plugin philosophy dramatically reduces the attack surface area (just love that jargon from the security boys).


The only time more Doors is better

So now we have a large number of modern browsers (amazing what competition does, right?) all with varying compliance to the list of really cool HTML5 features.




And many of these features were on display at the 2015 HTML5DevConf in San Francisco. The conference chairperson Ann Burkett put on a wild week of web wonderment at the Yerba Buena Center for the Arts in the city by the bay.   There were speakers from Netflix, Google, Meteor, Microsoft, Yelp, Dolby, PayPal, Adobe, Wal-Mart, Yahoo, Couchbase, Qualcomm and of course Kaazing.


I particularly enjoyed Jennifer Tong’s talk on “Prototyping the Internet of Things with Firebase”. Jennifer, a Mountain View Googler, did an incredible job explaining simple electronics to the web-savvy audience. She brought everyone up-to-speed on simple hardware hacking in the first 20 minutes of her talk and setup her live software demos using Google’s Firebase, the Johnny Five library and the Raspberry Pi small computer. Excellent presentation.


The original Johnny “Five”… Johnny Bench


Steve Souders is no stranger to the web world; he’s well known in high performance web circles and the web in general. Steve had successful stints at Google and Yahoo! as the lead performance expert. His session on Design+Performance was certainly very informative. Essentially users want both a media-heavy website but a very fast user experience, but how do you design an optimal site that satisfies both criteria. Steve talked about gathering metrics using new tools and employing an interesting development process that joins both requirements at a project’s outset.


Peter Moskovits and I delivered a well-attended session on novel uses of WebSocket using our own Kaazing Gateway server. Historically WebSocket (and its technically incorrect but more popular moniker “WebSockets”) has been used to push data from the server to the browser. But now there have been several advances in alternative mechanisms for simple data push such as the Notification specification and the Push API. There is also the new HTTP/2 standard, where multiple HTTP connections can share a single, underlying TCP connection for 20-50% more performance. The use of WebSocket specifically for browsers is now more suitable for certain high-performance or highly reliable messaging use cases.


Kaazing’s Peter Moskovits Talks WebSocket


As we pointed out in our session, the overwhelming majority of web usage is not via the browser. Most of the web is consumed via APIs. Currently the dominant API model is REST (there were a few people in the audience actually admitting they used SOAP, poor souls). REST is a very easy synchronous API model that typically uses request/response HTTP as its transport, which means REST calls have to wait for a response.


But as streaming and reactive services mature and continue to proliferate (especially with the IoT wave growing exponentially), the need for higher-level asynchronous mechanisms and APIs for developers to use will grow significantly. The world is asynchronous and event-driven; many applications in the future just cannot use REST, which was never truly designed for events. WebSocket is perfectly suitable for these types of use cases.


We also proposed a novel application for WebSocket as an alternative to an old-fashioned VPN. Since WebSocket is a bidirectional, web-friendly software tool, why not use it to create an on-demand connection between applications or server processes? Since WebSocket is effectively a “TCP for the web”, let’s use it like a TCP. That’s the basis of our KWIC software, which provides on-demand VPN-like connectivity using WebSocket under the hood.

There were certainly many other sessions with interesting topics and excellent speakers that you can check out at their website. The HTML5DevConf just gets better every year!

Frank Greco



Posted in cloud, Events, html5, Kaazing, Security, WebSocket

Build an Enterprise Mobile Real Time App in under 30 Minutes

In the mobile world, there are no excuses for any user experience that isn’t instantaneous, dynamic, and safe.

A cool way to develop these types of apps is with the use of a growing technology standard, WebSocket.  This standard has been around since 2011 and allows you to add nifty real-time features to a mobile app.

Let’s use the Kaazing WebSocket Gateway and build our first real-time mobile app.  Download the JMS Edition of the Gateway to get started.  Included is a collection of Web, native and hybrid JMS demo apps for both iOS and Android to learn from.  But why not just build one yourself?

All of the demo apps involve the same major programming steps (and the model–view–controller pattern).  Simply import the Kaazing WebSocket Gateway client libraries and then add the following methods:

  1. Event listeners for the user actions in the Touch User Interface (TUI).
  2. Connect and disconnect methods for connecting the app to the Gateway and updating the TUI.
  3. A method for creating JMS topics and queues.
  4. A message listener to manage JMS messages.
  5. An exception listener to handle when a JMS provider detects an exception.
  6. Methods for when the app is paused, resumed, and closed.

That’s all you need.
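
The listener pattern behind steps 4 and 5 can be sketched in plain JavaScript. To be clear, the connection object and method names below are illustrative stand-ins, not the actual Kaazing client API:

```javascript
// Illustrative sketch of the message/exception listener pattern (steps 4-5).
// The "connection" object here is a stand-in, NOT the real Kaazing API.
function createStubConnection() {
  let messageListener = null;
  let exceptionListener = null;
  return {
    setMessageListener(fn) { messageListener = fn; },
    setExceptionListener(fn) { exceptionListener = fn; },
    // Hooks that simulate the broker pushing events to the client:
    _deliver(msg) { if (messageListener) messageListener(msg); },
    _fail(err) { if (exceptionListener) exceptionListener(err); },
  };
}

const received = [];
const errors = [];
const conn = createStubConnection();

// Step 4: a message listener hands incoming JMS messages to the UI layer.
conn.setMessageListener((msg) => received.push(msg.text));

// Step 5: an exception listener handles provider-detected failures.
conn.setExceptionListener((err) => errors.push(err.message));

conn._deliver({ text: 'price update' });
conn._fail(new Error('broker unreachable'));
```

The real demo apps follow the same shape: the listeners are registered once at connect time, and every UI update flows through them.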

Thirty minutes from now you can have your own enterprise-level WebSocket JMS mobile app to experiment with, extend, and impress with. Excited? Well, off you go:

  1. Go get the JMS Edition Gateway and start it. For information on starting the Gateway, see the Setup Guide.
  2. Download and install any JMS-compliant message broker. Or better yet, use the Apache ActiveMQ JMS broker included in the JMS Edition of the Gateway. See the Setup Guide for how to start ActiveMQ.  It’s dead simple.
  3. Pick a walkthrough to build your app:
    1. Native iOS or Android.
    2. Hybrid iOS or Android.
    3. Web JavaScript for mobile browsers.
    4. There’s even a Microsoft .NET/Silverlight hybrid for iOS and Android using Xamarin.

Told you it was easy!


Posted in Uncategorized

Meeting WebSockets for Real

Years ago I was developing mission-critical applications that required updates based on incoming real-time events.  At that time I was obsessed with the notion of an Enterprise Service Bus (ESB) and Service Oriented Architecture.  It all looked so cool; you create small atomic services and have them process incoming events and exchange messages with each other and the client.

There was one problem.

In the era of Web applications, I could not figure out a really good way to send the messages back to the browser.  Of course there was the obvious solution: create a facade web service and call it repeatedly from the browser in a request-response manner while using the ESB to orchestrate it all on the server side.

But that just did not sound cool in an ‘event-driven world’.  For starters, what if there were no events?  Do I just keep calling just to get nothing back?  That seemed wasteful.  Secondly, service ‘orchestration’ works really well when there is an event to process.  It does not necessarily work well when you are calling it every 100 ms just to retrieve an event and start its processing.  That’s a lot of latency in my architecture that I wanted to avoid.  Anyway, the whole deal was kinda falling apart.

Later I moved to developing more typical business applications using REST APIs with jQuery, AngularJS and similar frameworks.  It all seemed to work rather well.

Then one day, I had an issue with one of my apps.  I needed to update different parts of the page (and quite a few of them) to reflect different changes happening in the system (results of long operations, other users’ activities, etc). Creating one ‘big’ REST call to capture all the changes did not sound like a good idea.  Our team decided it was better to create a REST call for every possible type of update.

For a while it worked.  Unfortunately we soon found ourselves with 100+ timers and REST calls going on at the same time.  Performance of the page decreased dramatically and maintenance became a huge nightmare.

At that time somebody mentioned “WebSockets”.

One would expect me to say that we started using them.  We did not.  Why?  Mostly because we had no idea what they were. I looked online, found a nice Wikipedia article (great source of information – yeah right, don’t get me started) and thought “Wow!  WebSockets are a great thing for the event-driven systems that I used to work on.  The next time I need to develop a stock market streaming app with real-time position updates or something similar, I will use it!”.  And I had tremendous misconceptions about WebSockets.

  • I thought that not all browsers supported them since they were something new.
    Wrong!!! The WebSocket protocol and standard APIs are very mature.  Both the IETF and W3C formally standardized WebSockets back in 2011, and they are fully supported by all modern browsers!
  • I thought that the learning curve would be too steep and we plainly did not have time to deal with it.
    Wrong again!!!  With the client libraries provided by a WebSockets vendor it takes very little time (an hour or less) to get familiar with the technology and start developing an application.
  • I thought that we would have to rewrite all our great REST beauty entirely to accommodate WebSockets.
    Wrong #3!  All it takes is to move the very same code from the $http(…).then() (for AngularJS) or $.ajax(…).then() callback function into the callback function that is called when a WebSocket message is received.
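
That last point deserves a sketch: the UI-update callback stays identical, and only its trigger changes. The handler and data shape below are illustrative, not taken from any particular app:

```javascript
// The same UI-update callback can be reused unchanged; only the
// trigger differs. Names and data shape here are illustrative.
function makeTodoHandler(todos) {
  // The identical callback body that previously lived in $http(...).then()
  return function onTodos(data) {
    todos.length = 0;
    data.forEach(function (item) { todos.push(item); });
  };
}

var todos = [];
var handler = makeTodoHandler(todos);

// Before -- polling with REST (AngularJS):
//   $http.get('/api/todos').then(function (res) { handler(res.data); });
// After -- the same handler wired to a WebSocket message event:
//   socket.onmessage = function (evt) { handler(JSON.parse(evt.data)); };

// Simulated incoming WebSocket frame:
handler(JSON.parse('[{"title":"buy milk","done":false}]'));
```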

Later I was asked by a friend of mine to play with the Kaazing WebSocket Gateway (I was not working for Kaazing at that time). I tried to pick the most ‘trivial’ app (that was not domain specific such as data streaming, gaming, etc.) and decided on a good-old TODO list. Except, mine had to be shared between multiple users.

Even with such a simple app I immediately realized the benefits that WebSockets offered compared to doing it with REST calls.

If we were to implement shared TODO using REST we would at least have to deal with several issues:

  • Server Load. A shared TODO application with REST clients has to continuously query the server for changes. Needless to say these REST calls impact overall performance regardless of whether anything has changed or not.  If I had 100,000 clients, that means 100,000 calls to the server and database, etc.
  • Server Logic to Detect Changes. Clearly we do not want to send the whole list of TODO items to everyone.  There is a need to implement the logic to detect changes and notify interested client apps about these changes.  Not too trivial.
  • Race conditions.  REST implementations require timers to fire rather often to handle the situation where multiple users are updating the same record. Ideally, I would want to disable the record for everyone else once a user is working on it.  Using REST could potentially result in a seriously high load on both the servers and the clients, and the browser may not be fond of JavaScript code that issues a REST call every 100 or so ms. The server gets less and less happy as more and more clients come on board.  Think of the extreme case: 100,000 clients @ 100 ms each = 1,000,000 calls/sec., which may, by the way, simply report that no changes occurred!

Then the lightbulb went off.  Using WebSockets addresses all of these concerns!

  • Server load is not an issue anymore.  Performance now depends on the number of changes, not on the number of clients.  As a change occurs, all the interested clients are notified.  The rest of the time, nothing is happening.  No computing resources are wasted.
  • There is no need for any server logic to track the changes at all! Once a user changes a TODO item, a message is sent to all the interested clients to simply update their UI. We did have to add another listener on the server to update the database with the changes. But the database is not overloaded at all.  It just has to work a little bit for the initial load to provide the current state to newly connecting clients.
  • With a high-throughput gateway (such as Kaazing), clients can easily send messages when the user’s mouse hovers over a certain item (either in or out).  Clients that are not interested in these events can simply unsubscribe from them.  Certainly it would be incorrect to say that race conditions will never happen, but the possibility is far more remote.
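
The change-driven model in these bullets boils down to topic-based fan-out: work scales with changes, not with connected clients. A simplified illustration (not Kaazing’s actual server code; topic names are made up):

```javascript
// Simplified sketch of change-driven fan-out: work scales with the
// number of changes, not the number of connected clients.
const subscribersByTopic = new Map();

function subscribe(topic, fn) {
  if (!subscribersByTopic.has(topic)) subscribersByTopic.set(topic, new Set());
  subscribersByTopic.get(topic).add(fn);
  return () => subscribersByTopic.get(topic).delete(fn); // unsubscribe handle
}

function publish(topic, change) {
  const subs = subscribersByTopic.get(topic) || new Set();
  for (const fn of subs) fn(change); // notify only interested clients
}

// Two clients subscribe to item updates; only one cares about hover events.
const log = [];
subscribe('todo/update', (c) => log.push('A:' + c.id));
const offB = subscribe('todo/update', (c) => log.push('B:' + c.id));
subscribe('todo/hover', (c) => log.push('A-hover:' + c.id));

publish('todo/update', { id: 7 }); // both A and B are notified
offB();                            // B unsubscribes from updates
publish('todo/update', { id: 8 }); // only A is notified
publish('todo/hover', { id: 8 });  // only the hover subscriber is notified
```

When no one publishes, nothing runs, which is exactly why the idle cost of this model is zero compared to REST polling.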

The sample app I created resulted in a tutorial that can be found at:

I also learned a critical fact that I had somehow missed in that earlier Wikipedia article.  WebSocket can and should be used as a low-level transport layer that allows any application protocol, such as a publish/subscribe model (e.g., JMS, AMQP or some custom protocol), to run over the web. While it may not sound too exciting for front-end developers, it actually opens up a whole world of features that enterprise developers have been successfully using for years.
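
Layering an application protocol on top of WebSocket mostly comes down to agreeing on a frame format. A minimal sketch, assuming a simple JSON envelope (this is not JMS or AMQP, just an illustration of the layering idea):

```javascript
// Minimal sketch: a pub/sub-style envelope carried over WebSocket frames.
// Each frame is JSON: { type: 'publish' | 'subscribe', topic, body }.
function encodeFrame(type, topic, body) {
  return JSON.stringify({ type, topic, body });
}

function decodeFrame(raw) {
  const frame = JSON.parse(raw);
  if (!frame.type || !frame.topic) throw new Error('malformed frame');
  return frame;
}

// In a real app this string would travel through socket.send(...)
// and arrive in socket.onmessage; here we just round-trip it.
const wire = encodeFrame('publish', 'todo/update', { id: 42, title: 'ship it' });
const frame = decodeFrame(wire);
```

Real protocols such as JMS-over-WebSocket or AMQP add acknowledgements, durability and security on top, but the transport-plus-envelope structure is the same.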

Now that I’ve laid to rest my initial misconceptions about WebSocket and had my “Aha!” moment with this cool technology, I am going to start creating samples for different use cases to compile a ‘library’ of WebSocket architectural patterns to share with all of you.

Stay tuned!

Posted in Uncategorized