Deploying the Kaazing WebSocket Gateway Using Docker

Many of our customers deploy our WebSocket Gateway using traditional operations procedures.  That works perfectly fine, but as with most enterprise-grade middleware products, there are still a number of moving parts to configure correctly alongside other systems.

Enter Docker.

In case you haven’t checked it out yet, Docker is an exciting management platform that makes setting up tech infrastructure a snap.  It lets you manage container deployments: self-contained services that share the host operating system.  Think of containers as mini virtual machines, but significantly lighter weight.  And to help deploy multi-container applications, Docker now ships with the “docker-compose” (Compose) utility, which makes deployment ultra easy.

To get you caught up, I will walk you through the basics of getting Docker installed and running a short example.  Let’s take the popular echo demo application and deploy it in a single command.  Then I’ll show you how to leverage Compose to easily define and run a sample application that connects the WebSocket Gateway to a backend JMS message bus.  We’ll keep it brief and point you to the useful Docker docs.  All of these examples are hosted on GitHub.

Sound good?  Let’s get started.

First, what really is a container?  A container wraps an implementation of a software service in a complete filesystem and OS-like environment that contains everything it needs to run.  Docker provides an API and container management platform that lets people deploy, run and share containers with others.  It’s very similar to the traditional virtual machine (VM) management we’ve used for the past few years, but Docker has all the bells and whistles that make container management simple and significantly more performant.

Let’s install Docker by following the Docker installation docs.  After you have installed and started Docker, you can run a single command to deploy and run the WebSocket demo (replace <gateway-hostname> with the hostname you want the Gateway to answer on):

docker run --name websocket.container -h <gateway-hostname> -p 8000:8000 kaazing/gateway

So how does this work?  

On your machine you are running a Docker daemon which is listening for commands.  The ‘run’ command tells the daemon to launch a container on your behalf, specifically the kaazing/gateway container.  But how did you get the kaazing/gateway container?  It is made available to you in the form of an image via a trusted, official and publicly accessible Docker repository called Docker Hub.  The next question you’ll ask is “Where is my container running?”  Containers are a Linux feature: if you are running on a Linux machine, the container runs in an isolated namespace on your OS.  If you have a Mac or Windows machine, then your container needs a Linux VM on your Mac/Windows box as a host environment.  Luckily, Docker distributions now include a virtual machine manager such as VirtualBox.

So now you have a Docker container running.  Cool.

To connect to it you will need to add an entry to /etc/hosts for the hostname you chose above that points to the IP address of the machine the Docker container is running on.  If you are using the Docker installation referenced above, you can get this address by running docker-machine ip <your-docker-machine-name>.  On my machine that returns the Docker VM’s address (192.168.99.100 is the typical default), so I would add a line like this to my /etc/hosts:

    192.168.99.100  <gateway-hostname>

Once you configure your /etc/hosts file, you can run a quick test.  Open your browser and visit the Gateway’s welcome page at http://<gateway-hostname>:8000.
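If you’d rather test from code than from a browser page, here is a minimal sketch using the standard browser WebSocket API.  It assumes the Gateway’s bundled echo service is listening on the /echo path (as in the Gateway’s default sample configuration); adjust the URL if your setup differs.

    // Connect to the Gateway's echo service (path assumed from the default config).
    var ws = new WebSocket("ws://<gateway-hostname>:8000/echo");

    ws.onopen = function () {
        console.log("Connected, sending a test message...");
        ws.send("Hello, Gateway!");
    };

    // The echo service sends back whatever we sent it.
    ws.onmessage = function (event) {
        console.log("Echoed back: " + event.data);
        ws.close();
    };

    ws.onerror = function () {
        console.log("Connection failed - check your /etc/hosts entry and the container.");
    };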

Perfect, step one is done!

But setting up a Kaazing Gateway in this way still requires configuring the Gateway to interact with other components in your infrastructure.  Rather than setting up each component separately, you can use Compose to launch and configure them all at once.  For example, a common setup is to use the Kaazing Gateway with a messaging broker that allows internal publish/subscribe messages to be delivered over the firewall.

That’s why we need Compose.

Compose now comes pre-installed with the latest Docker releases.  With Compose you specify the components of your infrastructure in a docker-compose.yml file.  Below is an example; <broker-directory> is a placeholder for the directory holding your broker’s Dockerfile:

gateway:
  build: dockerized.gateway
  ports:
    - "8000:8000"
    - "8001:8001"
  links:
    - broker

broker:
  build: <broker-directory>

In this file two containers are defined: broker and gateway.  Each points to a directory that defines where the container is built from: <broker-directory> and dockerized.gateway respectively.  The gateway container is linked to the broker container, which means the gateway container can reach the broker using its container name as a hostname.

To run this we will need to check out the example GitHub repository and run docker-compose up, which launches the configuration.  The commands look like this (substitute the actual repository URL and compose file path):

git clone <example-repo-url>
docker-compose -f <path-to-docker-compose.yml> up

Now point your browser at the Gateway’s demo page and you can see the JMS demo in action.

Docker has a fast-growing image repository.  The Docker team recently announced there were 5.6 million Docker images pulled every day.  Its immense popularity is based on its ability to let developers easily build, package, and deploy sophisticated enterprise applications composed of powerful, high-performance components.  Now the Kaazing WebSocket Gateway joins the Docker ecosystem as a valuable tool for application developers.


Wild Week of Web Wonderment – HTML5DevConf

KWICies #006: Thoughts on HTML5 and HTML5DevConf 2015

“We live in a web of ideas, a fabric of our own making.”
– Joseph Chilton Pearce

“Roses are red, violets are blue, I’m schizophrenic, and so am I.”
– Oscar Levant

The Web is Your Oyster, Now Watch Out for the Clams…




Let’s face it. HTML5 was originally designed as a weapon.

HTML5 is the culmination of a political movement to overthrow Microsoft Internet Explorer (IE) as the web platform for all of humanity.

If you recall from your readings of the ancient dead (ANSI) C scrolls of the Internet, circa the mid-to-late 1990s, Microsoft’s IE (a browser licensed from Spyglass) was the only browser allowed by most companies. And many large companies wrote apps specifically for IE, effectively shutting out every other browser vendor. This was tough to take for the other browser vendors of the time, such as Mozilla, Apple and Opera. The browser was the portal to knowledge, and Microsoft was the gatekeeper. Not only that, the pace of innovation in Microsoft’s browser back then was glacially slow (and that’s being kind). Microsoft also leveraged this control to constrain the growth of the Java browser plugin that ran applets (anyone remember any useful Java applets? Me neither). When you have a monopoly, why rush to innovate? It’s no different than in any other business sector.


“I’m certainly not interested in forking a shell”


But this situation was only good for Microsoft, and it gave enterprises a false sense of security. Yes, you could be sure there was a browser running on your users’ desktops so you could deploy your web app, but it’s frickin’ IE with its own proprietary implementation of HTML and related technologies. Didn’t we learn back in Computer Science 101 that portability is a trait of good software?

Since the browser was the portal to knowledge and information, and just an indispensable tool (to buy incredibly bad country music, to rent movies from the 70s with a laugh track funnier than the movie, and to watch videos of cute kitties riding on Roombas), the other browser vendors were at a disadvantage. They could not run those non-portable, IE-specific apps.

And quite honestly, the W3C as an organization back then was no zippy roadrunner either. It moved quite slowly especially with the evolution of XML-flavored page markup, which thankfully died on the vine. Hopefully the vine died a painful death too.


What were they smoking?


Because of the slow evolution of this markup language at the W3C (and because they needed to stop the IE monopoly), Apple, Mozilla and Opera formed a new unofficial web standards group called the WHATWG (Web Hypertext Application Technology Working Group) in 2004. These three proposed HTML5 to the W3C as the basis of the next generation of web applications, not web documents. Effectively this turned the web into a programmatic platform rather than a document storage platform. HTML5 was not just an upgrade of HTML4; it proposed sophisticated graphics, animation and a collection of useful ECMAScript (note that “JavaScript” is an Oracle trademark) APIs such as File I/O, Geolocation, Database, Messaging, Threading, Touch Events, Audio, MIDI, Speech Recognition, et al.

Cool HTML5 stuff.


Eventually Google recognized the pervasive, far-reaching, accessible power of HTML5 and joined the programming plumbing party with the ever-growing list of HTML5 goodies.  And HTML5 provides all of these features without plugins. Since plugins were attack vectors for hackers, the more plugins you had, the more possible security breaches. HTML5 allows only a single way to connect the browser to the web, which significantly reduces the possible ways a hacker can break in. It certainly doesn’t eliminate breaches, but HTML5’s no-plugin philosophy dramatically reduces the attack surface area (just love that jargon from the security boys).


The only time more Doors is better

So now we have a large number of modern browsers (amazing what competition does, right?), all with varying compliance with the list of really cool HTML5 features.




And many of these features were on display at the 2015 HTML5DevConf in San Francisco. Conference chairperson Ann Burkett put on a wild week of web wonderment at the Yerba Buena Center for the Arts in the city by the bay.  There were speakers from Netflix, Google, Meteor, Microsoft, Yelp, Dolby, PayPal, Adobe, Wal-Mart, Yahoo, Couchbase, Qualcomm and, of course, Kaazing.


I particularly enjoyed Jennifer Tong’s talk on “Prototyping the Internet of Things with Firebase”. Jennifer, a Mountain View Googler, did an incredible job explaining simple electronics to the web-savvy audience. She brought everyone up to speed on simple hardware hacking in the first 20 minutes of her talk and set up her live software demos using Google’s Firebase, the Johnny-Five library and a Raspberry Pi. Excellent presentation.


The original Johnny “Five”… Johnny Bench


Steve Souders is no stranger to the web world; he’s well known in high-performance web circles and the web in general. Steve had successful stints at Google and Yahoo! as the lead performance expert. His session on Design+Performance was very informative. Essentially, users want both a media-heavy website and a very fast user experience; so how do you design an optimal site that satisfies both criteria? Steve talked about gathering metrics with new tools and employing an interesting development process that joins both requirements at a project’s outset.


Peter Moskovits and I delivered a well-attended session on novel uses of WebSocket using our own Kaazing Gateway server. Historically, WebSocket (and its technically incorrect but more popular moniker “WebSockets”) has been used to push data from the server to the browser. But there have now been several advances in alternative mechanisms for simple data push, such as the Notification specification and the Push API. There is also the new HTTP/2 standard, where multiple HTTP connections can share a single underlying TCP connection for 20-50% more performance. The use of WebSocket in browsers is therefore now most suitable for certain high-performance or highly reliable messaging use cases.


Kaazing’s Peter Moskovits Talks WebSocket


As we pointed out in our session, the overwhelming majority of web usage is not via the browser. Most of the web is consumed via APIs. Currently the dominant API model is REST (there were a few people in the audience actually admitting they used SOAP, poor souls). REST is a very easy synchronous API model that typically uses request/response HTTP as its transport, which means REST calls have to wait for a response.


But as streaming and reactive services mature and continue to proliferate (especially with the IoT wave growing exponentially), the need for higher-level asynchronous mechanisms and APIs for developers to use will grow significantly. The world is asynchronous and event-driven; many applications in the future just cannot use REST, which was never truly designed for events. WebSocket is perfectly suitable for these types of use cases.


We also proposed a novel application for WebSocket as an alternative to an old-fashioned VPN. Since WebSocket is a bidirectional, web-friendly software tool, why not use it to create an on-demand connection between applications or server processes? Since WebSocket is effectively a “TCP for the web”, let’s use it like a TCP. That’s the basis of our KWIC software, which provides on-demand VPN-like connectivity using WebSocket under the hood.

There were certainly many other sessions with interesting topics and excellent speakers that you can check out at the conference website. The HTML5DevConf just gets better every year!

Frank Greco




Build an Enterprise Mobile Real Time App in under 30 Minutes

In the mobile world, there are no excuses for any user experience that isn’t instantaneous, dynamic, and safe.

A cool way to develop these types of apps is with the use of a growing technology standard, WebSocket.  This standard has been around since 2011 and allows you to add nifty real-time features to a mobile app.

Let’s use the Kaazing WebSocket Gateway and build our first real-time mobile app.  Download the JMS Edition of the Gateway to get started.  Included is a collection of Web, native and hybrid JMS demo apps for both iOS and Android to learn from.  But why not just build one yourself?

All of the demo apps involve the same major programming steps (and model–view–controller pattern).  All you have to do is simply import the Kaazing WebSocket Gateway client libraries and then add the following methods:

  1. Event listeners for the user actions in the Touch User Interface (TUI).
  2. Connect and disconnect methods for connecting the app to the Gateway and updating the TUI.
  3. A method for creating JMS topics and queues.
  4. A message listener to manage JMS messages.
  5. An exception listener to handle when a JMS provider detects an exception.
  6. Methods for when the app is paused, resumed, and closed.

That’s all you need.
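To make that concrete, here is a minimal JavaScript sketch of steps 2 through 5, modeled loosely on the pattern in Kaazing’s JavaScript JMS tutorials.  The Gateway URL, topic name and callback details are illustrative assumptions; the client libraries bundled with the JMS Edition are the authoritative reference.

    // Assumes the Kaazing JavaScript JMS client library has been imported
    // and the Gateway is reachable at the URL below (illustrative).
    var factory = new JmsConnectionFactory("ws://localhost:8001/jms");

    var connectionFuture = factory.createConnection(null, null, function () {
        var connection = connectionFuture.getValue();

        // Step 5: exception listener for errors reported by the JMS provider.
        connection.setExceptionListener(function (exception) {
            console.log("JMS exception: " + exception.message);
        });

        // Step 3: create a session and a topic (name is illustrative).
        var session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        var topic = session.createTopic("/topic/myDemoTopic");

        // Step 4: message listener that would update the TUI.
        var consumer = session.createConsumer(topic);
        consumer.setMessageListener(function (message) {
            console.log("Received: " + message.getText());
        });

        // Step 2: start the connection so messages begin to flow.
        connection.start(function () {
            console.log("Connected to the Gateway.");
        });
    });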

Thirty minutes from now you can have your own Enterprise-level WebSocket JMS mobile app to experiment with, extend, and impress with. Excited? Well, off you go:

  1. Go get the JMS Edition Gateway and start it. For information on starting the Gateway, see the Setup Guide.
  2. Download and install any JMS-compliant message broker. Or better yet, use the Apache ActiveMQ JMS broker included in the JMS Edition of the Gateway. See the Setup Guide for how to start ActiveMQ.  It’s dead simple.
  3. Pick a walkthrough to build your app:
    1. Native iOS or Android.
    2. Hybrid iOS or Android.
    3. Web JavaScript for mobile browsers.
    4. There’s even a Microsoft .NET/Silverlight hybrid for iOS and Android using Xamarin.

Told you it was easy!



Meeting WebSockets for Real

Years ago I was developing mission-critical applications that required updates based on incoming real-time events.  At that time I was obsessed with the notion of an Enterprise Service Bus (ESB) and Service Oriented Architecture.  It all looked so cool; you create small atomic services and have them process incoming events and exchange messages with each other and the client.

There was one problem.

In the era of Web applications, I could not figure out a really good way to send the messages back to the browser.  Of course there was the obvious solution: create a facade WebService and call it repeatedly from the browser in a request-response manner while using the ESB to orchestrate all that on the server side.

But that just did not sound cool in an ‘event-driven world’.  For starters, what if there were no events?  Do I keep calling just to get nothing back?  That seemed wasteful.  Secondly, service ‘orchestration’ works really well when there is an event to process.  It does not necessarily work well when you are calling every 100 ms just to retrieve an event and start processing it.  That’s a lot of latency in my architecture that I wanted to avoid.  Anyway, the whole deal was kinda falling apart.

Later I moved to developing more typical business applications using REST APIs with jQuery, AngularJS and similar frameworks.  It all seemed to work rather well.

Then one day, I had an issue with one of my apps.  I needed to update different parts of the page (and quite a few of them) to reflect different changes happening in the system (results of long operations, other users’ activities, etc.). Creating one ‘big’ REST call to capture all the changes did not sound like a good idea.  Our team decided it was better to create a REST call for every possible type of update.

For a while it worked.  Unfortunately we soon found ourselves with 100+ timers and REST calls going on at the same time.  Performance of the page decreased dramatically and maintenance became a huge nightmare.

At that time somebody mentioned “WebSockets”.

One would expect me to say that we started using them.  We did not.  Why?  Mostly because we had no idea what they were. I looked online, found a nice Wikipedia article (great source of information – yeah right, don’t get me started) and thought, “Wow!  WebSockets are a great thing for the event-driven systems I used to work on.  The next time I need to develop a stock market streaming app with real-time position updates or something similar, I will use them!”  And I had tremendous misconceptions about WebSockets.

  • I thought that not all browsers supported them, since they were something new.
    Wrong!!! The WebSocket protocol and standard APIs are very mature.  Both the IETF and W3C formally standardized WebSockets back in 2011, and they are fully supported by all modern browsers!
  • I thought that the learning curve would be too steep and we plainly did not have time to deal with it.
    Wrong again!!!  With the client libraries provided by a WebSocket vendor, it takes very little time (an hour or less) to get familiar with the technology and start developing an application.
  • I thought that we would have to rewrite all our great REST beauty entirely to accommodate WebSockets.
    Wrong #3!  All it takes is moving the very same code from the $http(…).then() (for AngularJS) or $.ajax(…).then() callback function into the callback function that is called when a WebSocket message is received, as the sketch below shows.
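Here is a minimal sketch of that move; the /api/updates endpoint, the ws:// URL and the render() helper are made up for the example:

    // Before: poll a REST endpoint on a timer (AngularJS style, inside a
    // controller where $http has been injected).
    setInterval(function () {
        $http.get("/api/updates").then(function (response) {
            render(response.data);   // repaint the affected part of the page
        });
    }, 100);

    // After: exactly the same rendering code, now driven by pushed messages.
    var socket = new WebSocket("ws://example.com/updates");
    socket.onmessage = function (event) {
        render(JSON.parse(event.data));   // repaint the affected part of the page
    };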

Later I was asked by a friend of mine to play with the Kaazing WebSocket Gateway (I was not working for Kaazing at the time). I tried to pick the most ‘trivial’ app I could (one that was not domain-specific, like data streaming or gaming) and decided on a good old TODO list. Except mine had to be shared between multiple users.

Even with such a simple app I immediately realized the benefits that WebSockets offer compared to doing it with REST calls.

If we were to implement a shared TODO list using REST, we would have to deal with at least these issues:

  • Server Load. A shared TODO application with REST clients has to continuously query the server for changes. Needless to say, these REST calls impact overall performance regardless of whether anything has changed.  If I had 100,000 clients, that would mean 100,000 calls to the server and database on every polling cycle.
  • Server Logic to Detect Changes. Clearly we do not want to send the whole list of TODO items to everyone.  There is a need to implement logic that detects changes and notifies the interested client apps about them.  Not too trivial.
  • Race Conditions.  REST implementations require timers to go off rather often to address the situation where multiple users are updating the same record. Ideally, I would want to disable the record for everyone else once some user is working on it.  Using REST could potentially result in a seriously high load on both the servers and the clients. The browser may not be fond of JavaScript code that issues a REST call every 100 or so ms, and the server will get less and less happy as more and more clients come on board.  Think of the extreme case: 100,000 clients polling every 100 ms = 1,000,000 calls/sec, which may, by the way, simply report that no changes occurred!

Then the lightbulb went off.  Using WebSockets addresses all of these concerns!

  • Server load is not an issue anymore.  Performance now depends on the number of changes, not on the number of clients.  As a change occurs, all the interested clients are notified.  The rest of the time, nothing is happening; no computing resources are wasted.
  • There is no need for any server logic to track changes at all! Once a user changes a TODO item, a message is sent to all the interested clients, which simply update their UI. We did also have another listener on the server to update the database with the changes, but the database is not overloaded at all; it only has to do a little work on the initial load, to supply the current state to newly arriving clients.
  • With a high-throughput gateway (such as Kaazing), clients can easily send messages when the user’s mouse hovers in or out of a certain item, and clients that are not interested in these events can simply ignore them.  Certainly it would be incorrect to say that race conditions will never happen, but the possibility is far more remote.
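As a sketch of how little client code this takes, here is a minimal illustration using a plain browser WebSocket against a hypothetical endpoint that broadcasts every message to all connected clients (the URL, message format and element IDs are invented for the example):

    // Each client holds one connection to the gateway/broker (URL is illustrative).
    var socket = new WebSocket("ws://example.com/todos");

    // Publishing: when the local user edits an item, broadcast the change.
    function publishChange(itemId, newText) {
        socket.send(JSON.stringify({ type: "update", id: itemId, text: newText }));
    }

    // Subscribing: every client receives the change and updates only the affected row.
    socket.onmessage = function (event) {
        var msg = JSON.parse(event.data);
        if (msg.type === "update") {
            document.getElementById("todo-" + msg.id).textContent = msg.text;
        }
    };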

The sample app I created turned into a set of tutorials that you can find on the Kaazing website.

I also learned a critical fact that I had somehow missed in that earlier Wikipedia article.  WebSocket can and should be used as a low-level transport layer that allows any application protocol, such as publish/subscribe ones (e.g., JMS, AMQP or some custom protocol), to run over the web. While that may not sound too exciting to front-end developers, it actually opens up a whole world of features that enterprise developers have been successfully using for years.

Now that I’ve laid to rest my initial misconceptions about WebSocket and had my “Aha!” moment with this cool technology, I am going to start creating samples for different use cases to compile a ‘library’ of WebSocket architectural patterns to share with all of you.

Stay tuned!


KWICies #005: It Definitely Will Be More Cloudy Tomorrow

Thoughts on AWS re:Invent 2015

“Nature is a mutable cloud which is always and never the same.”
― Ralph Waldo Emerson

“I never said most of the things I said.”
― Yogi Berra




Let’s Get Real
Just a few years ago, many industry pundits (aka “talking heads”) proclaimed in-person conferences were dead. And trade shows and industry conferences would only be experienced in Second Life by your avatar who would learn more than you and eventually take over your job. What we didn’t know is that these pundits were from Colorado and Oregon [pause here for cognitive exercise].  And most are still living in Second Life selling virtual adult toys to socially challenged people with Walter Mitty fantasies.

Instead of holding a virtual conference and exchanging high performance networking tips with a talking frog, the Amazon “AWS re:Invent” conference was a traditional and effective analog one. The show was buzzing with 19,000 living, breathing devops hounds sniffing out all the cool and useful truffles in the AWS service forest. It was indeed an impressive show at the Sands Expo Convention Center in Las Vegas. Kudos goes to the Amazon organizing staff for pulling it off so successfully.



Kaazing’s Peter Moskovits Explaining KWIC


There were many excellent presentations on the new AWS services such as Kinesis Streams, Inspector, Docker integration and the new IoT Cloud. And the vendors on the exhibit floor seemed to be constantly crowded with inquisitive visitors. Certainly the team at the Kaazing booth was extremely busy during the entire show. Our new devops tool, Kaazing WebSocket Intercloud Connect (KWIC), which allows SaaS applications to easily and securely connect back to on-premises services for things like LDAP/AD authentication, databases and streaming data feeds, was quite popular. It seems there’s a huge dislike for old-fashioned, legacy VPNs when connecting application services on demand.


Very Much In-demand Kaazing T-Shirt Schwag


Mind the Gap
As the cloud infrastructure market continues its amazing growth rate, AWS maintains its position as the big gorilla in the business. Maybe more like King Kong than the average gorilla you see on the street [just making sure you’re paying attention].   And at re:Invent, Amazon used the opportunity to announce dozens of new cloud services and enhancements to their existing services to further widen the already-huge gap from its nearest (and distant) competitor.

Competitors Find AWS Hard to Swallow


There were dozens of new and updated AWS services for databases, analytics, security inspection, API management, virtual desktops and EC2 (their IaaS computing platform). You can find out details on these from the re:Invent website and the AWS YouTube Channel.


Personally I found the Kinesis updates, Docker integration, Lambda enhancements and their IoT Cloud most interesting.

Streams Turn Into Rivers
Kinesis is a collection of services that make it easy to manage real-time streaming data in the AWS cloud.  The era of connected devices and intelligent things is starting to generate a mind-boggling amount of streaming data. This data needs to be collected, persisted and analyzed. Synchronous distributed computing and polling are not going to cut the mustard for streaming data; Kinesis addresses these new types of applications.  Hundreds of sensor types, wearables, industrial machinery and many sophisticated big-data systems will use these services. The current trifecta of Kinesis services is: Kinesis Firehose for loading streaming data into AWS, Kinesis Analytics for analyzing that data with SQL queries, and Kinesis Streams for building your own custom streaming applications. In true AWS fashion, you can easily integrate Kinesis with other AWS services.


Their Ploy of Deploy
Amazon’s EC2 Container Service is based on Docker and container deployment of microservices. They have now integrated the Docker registry with their Identity and Access Management (IAM) for authorization and access control. Developers understand the agility advantages that containers (Docker and non-Docker) bring to the table. And in typical Amazon fashion, there is now a command-line interface (CLI) for this style of agile deployment.   Very cool.


Code on Demand
The AWS Lambda service is quite innovative and a natural evolution of cloud infrastructure. As a developer, you are quite good at building server-side code but you may not want to deal with provisioning or managing any servers. You just want to upload your code and have it run when a certain condition occurs or perhaps when you want to manually invoke your service. Pretty nifty. And you only pay when your code runs.


Pre-IoT Era Cash Register

Left to Our Own Devices
The AWS IoT Cloud is a major first step for Amazon. IMO, it’s going to grow beyond anyone’s imagination (except probably Amazon’s). It’s a managed cloud infrastructure that connects devices for data collection, storage and analytics. Given the recent advances in device CPUs (ARM, Intel Edison, Apple) and low-power pervasive Internet connectivity, there will be an explosion of data soon after the current hacker era of IoT matures (the usual prelude to a technology wave). An interesting core component of the AWS IoT Cloud is an MQTT message broker, which clearly indicates Amazon feels strongly that publish/subscribe, not request-response, is the right approach to IoT communication. This subsystem will surely grow and evolve quickly.



What’s Next?
Amazon is stepping up big time with their cloud services.  They have done an amazing job so far.  I’m sure if they continue their torrid growth, we’ll probably start hearing from their competitors that Amazon has a monopolistic hold on customers.  It’ll be the same story we heard during the IBM (’70s), Microsoft (’90s) and Google (2000s) eras.  No one cried monopoly during the Facebook era because everyone was too busy posting videos of dancing kittens and singing dogs, angrily venting about politics, or taking selfies.


Btw, clouds and truffles share pricing models. Discuss.



Frank Greco


KWICies #004 – The Stream Police, They Live Inside of my Head

Securing Streaming Data Over the Web

“Security is the chief enemy of mortals.” ― William Shakespeare

“The user’s going to pick dancing pigs over security every time.” ― Bruce Schneier


Take Me to the River
It’s a real-time world. Enterprises live in real-time. Business processes happen in real-time with live, streamed information passing from app to app, server to server.

These types of business-critical streaming systems apply to a vast number of use cases.  Today’s data analytics doesn’t wait for overnight crunching or hours of offline study. Mention the word “batch” and you’ll get raised eyebrows and derogatory comments about your old-fashioned taste for classic rock music and intense hatred for selfie sticks.

Many of these on-demand, streaming processes occur in and outside the firewall, among on-premises and off-premises cloud infrastructures. Your enterprise, partners, customers and entire ecosystem depend on many of these real-time events.

Historically we have seen a huge trend toward programmatic ecosystem integration to address this cross-firewall connectivity. The hipsters have proclaimed this wave the “API Economy” (btw, anyone want to buy my “Web Properties”? They’re near a soothing digital stream). Major enterprises are rushing to extend their businesses over the firewall with programmatic interfaces, or APIs.  This approach has potentially rewarding business implications for additional revenue streams and deeper customer engagement. There’s no question this is a valuable evolutionary trend.


The trend is your friend?


Go With the Flow
Thousands of these public and private B2B APIs from such companies as Amazon, Twitter, Facebook, Bloomberg, British Airways, the NY Times and the US Government are now available. A quick visit to the very popular ProgrammableWeb shows the rapidly growing number of APIs that connect to very useful services.

However, many of these APIs primarily use a heavyweight, non-reactive communication model called “request-response” initiated by the client application using the traditional, legacy network plumbing of the web, HTTP.


Full duplex it’s not. Thankfully.


Alternatively, some of these companies and others have recently begun offering modern streaming APIs for browsers, mobile devices and embedded “Internet of Things” applications. We are seeing applications in logistics, health-monitoring, smart government, risk management, surveillance, management dashboards and other areas offering real-time distributed business logic that provides a significantly higher level of customer and partner engagement.


Cry Me a River
However, there are huge security and privacy issues to consider when deploying streaming interfaces.


They hacked the doughnuts server!


Offering these real-time and non-real-time APIs seems risky despite their business potential. Not only do they have to be reliable, robust and efficient in a mobile environment, they have to be encrypted, fully authenticated, compliant with corporate privacy entitlements and deployed in a multi-DMZ environment where no business logic should reside. And certainly no enterprise wants any open ports on the internal, private network to be compromised by the “black hats”. If we could just solve this last one, we could avoid creating replicant and expensive systems in the DMZ for the purposes of security and privacy.


Here’s just a short list of deployment concerns to be aware of for offering streaming (and even conventional synchronous) APIs.


Device Agnostic
Streaming data must be able to be sent to (or received from) all types of mobile devices, desktops and browsers.   It may be an iOS, Android, Linux, Windows or Web-endowed device. Your target device may even be a television, car or perhaps some type of wearable. Services and data collection/analytics must use consistent APIs to provide coherent and pervasive functionality for all endpoints. Using different APIs for different devices is inelegant, which we computer geeks know means more complexity and more glitch potential. And it means more wasted weekends lost to debugging on your Linux box instead of having a few of those great margaritas and tequila shots at that hip TexMex bar downtown.


Go Jira!


HTTP was not really designed for the persistent connections that are needed for streaming data. Yes, you can twist and fake out HTTP for long-lived connections, use Comet-style pushes from the server, and get something to work.  But let’s face it: after you’re done hacking, you feel good as an engineer but really lousy as an architect… and real nervous if you’re the CTO.

The typical networking solution for streaming and persistent connections in general is either to create and manage a legacy-style VPN or to open non-standard ports over the Internet. Since most operations people enjoy the comfort of employment, asking them to open a non-standard port will either have them laughing hysterically or pretending you don’t exist.  Installing yet another old-fashioned low-level VPN doesn’t seem fun either. You have to get many more management signoffs than you originally thought, and deal with mind-numbing political and administrative constraints. Soon you start to question your own sanity.

“And what about our IoT requirements?” bellows your CIO during your weekly status meeting (and it’s a lovely deep bellow too). Remember, streaming needs to be bidirectional. While enterprise streaming connectivity is primarily about sending to endpoints, IoT connectivity is primarily about sending from the endpoints. A unified architecture needs to handle both enterprise and IoT use cases in a high-performance manner.


DMZ Deployment
As with most business-critical networking topologies, any core streaming services deployment must be devoid of business logic and capable of installation into a DMZ or series of DMZ protection layers. You need to assume the black hats will break into your outermost DMZ, so there shouldn’t be any valuable business or security intelligence resident in your DMZ layers. At the very least, you should avoid read-only replicas of back-end services in the DMZ as much as possible… because it’s yet another management time and money sink.


As your ecosystem grows (and shrinks), connectivity must adapt on-demand and take place over a very reliable connection.


Crossing the Chasm


Leveraging the economies and agility of the web and WebSocket is phenomenally useful, but automatic reconnection between critical partners over the Web is even more so.


Just for the record, production conversations that traverse open networks must be secured via TLS/SSL encryption. So always use secure WebSocket (wss://) and secure HTTP (https://) for business purposes. Nuff said.
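In client code the difference is just the URL scheme, which makes it an easy thing to get wrong; a quick illustration (hostnames invented):

    // Insecure: plaintext WebSocket - fine for local experiments only.
    var dev = new WebSocket("ws://localhost:8000/stream");

    // Secure: TLS-encrypted WebSocket - the only option for production traffic.
    var prod = new WebSocket("wss://stream.example.com/stream");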


Of course, users must be checked to confirm they are allowed to connect. Instead of dealing with low-level access via legacy VPNs that potentially grant open access at the network layer, it is significantly more secure to allow application-service access only.  This Application-to-Application (A2A) services connectivity (using standard URIs) presents a tiny surface area to the black hats, which, btw, becomes microscopic with Kaazing’s Enterprise Shield feature. Enterprise Shield shuts down 100% of all incoming ports and further masks internal service and topology data. Yes, I did say 100%.


Once a user is fully authenticated and connected to your streaming service, what operations are they entitled to perform? In other words, their access-control rights need to be confirmed. Again, ideally this type of control should not live in the DMZ. Telling the operations team to incur several weeks of headaches getting corporate signoffs because you need a replicant Identity subsystem in the DMZ will not be easy. Don’t expect an invitation to their holiday party after that request.


Stream Protocol Validation
Real-time data needs to be inspected for conformance to A2A protocol specifications so that injection of insecure code is avoided. Any streaming infrastructure needs to guarantee that application protocols follow the rules of the conversation. Any data in an unexpected format or in violation of a framing specification must be immediately discarded and the connection terminated.


There are certainly additional issues to consider for streaming data for your B2B ecosystem. Performance, scalability, monitoring, logging, et al, are equally important. We’ll cover those in a future KWICie soon.


Watching the tide roll away indeed!


Sittin’ on the Dock of the Bay
If you’re attending AWS re:Invent 2015, please stop by the Kaazing booth (K24) to say hello. I’m always interested to chat with customers and colleagues about the future of cloud computing, containers, autonomous computing, microservices, IoT and the unfortunate state of real music.


Frank Greco


AWS Re:Invent 2015 – Peter’s Cloud Security Talk Picks

With AWS Re:Invent approaching fast, I started reviewing the talks I absolutely want to see this year. Given our recent work with KWIC (Kaazing WebSocket Intercloud Connect), my focus this year is geared towards security- and connectivity-related topics. Here they are:

ARC344 – How Intuit Improves Security and Productivity with AWS Virtual Networking, Identity, and Account Services
Brett Weaver – Software Architect, Intuit Inc
Don Southard – AWS Senior Solutions Architect Manager, Amazon Web Services
Abstract: Intuit has an “all in” strategy in adopting the AWS cloud. We have already moved some large workloads supporting some of our flagship products (TurboTax, Mint) and are expecting to launch hundreds of services in AWS over the coming years. To provide maximum flexibility for product teams to iterate on their services, as well as provide isolation of individual accounts from logical errors or malicious actions, Intuit is deploying every application into its own account and virtual private cloud (VPC). This talk discusses both the benefits and challenges of designing to run across hundreds or thousands of VPCs within an enterprise. We discuss the limitations of connectivity, sharing data, strategies for IAM access across accounts, and other nuances to keep in mind as you design your organization’s migration strategy. We share our design patterns that can help guide your team in developing a plan for your AWS migration. This talk is helpful for anyone who is planning or in the process of moving a large enterprise to AWS and facing the difficult decisions and tradeoffs in structuring your deployment.

DVO206 – Lessons from a CISO: How to Securely Scale Teams, Workloads, and Budgets
James Hoover – VP, Chief Information Security Officer, Infor
Adam Boyle – Director of Product Management, Cloud Workload Security, Trend Micro
Abstract: Are you a CISO in cloud or security operations and architecture? The decisions you make when migrating and securing workloads at scale in the AWS cloud have a large impact on your business. This session will help you jump-start your migration to AWS or, if you’re already running workloads in AWS, teach you how your organization can secure and improve the efficiency of those deployments.
Infor’s Chief Information Security Officer will share what the organization learned tackling these issues at scale. You’ll hear how managing a traditional large-scale infrastructure can be simplified in AWS. You’ll understand why designing around the workload can simplify the structure of your teams and help them focus. Finally, you’ll see what these changes mean to your CxOs and how better visibility and understanding of your workloads will drive business success. Session sponsored by Trend Micro.

DVO312 – Sony: Building At-Scale Services with AWS Elastic Beanstalk
Sumio Okada – Cloud Engineer, Sony Corporation
Shinya Kawaguchi – Software Engineer, Sony Corporation
Abstract: Learn about Sony’s efforts to build a cloud-native authentication and profile management platform on AWS. Sony engineers demonstrate how they used AWS Elastic Beanstalk (Elastic Beanstalk) to deploy, manage, and scale their applications. They also describe how they use AWS CloudFormation for resource provisioning, Amazon DynamoDB for the main database, and AWS Lambda and Amazon Redshift for log handling and analysis. This discussion focuses on best practices, security considerations, tradeoffs, and final architecture and implementation. By the end of the session, you will clearly understand how to use Elastic Beanstalk as a platform to quickly and easily build at-scale web applications on AWS, and how to use Elastic Beanstalk with other AWS services to build cloud-native applications.

If you’re in Vegas for re:Invent, be sure to stop by the Kaazing booth (K24) to have a chat! See you there…
