Build an Enterprise Mobile Real-Time App in Under 30 Minutes

In the mobile world, there are no excuses for any user experience that isn’t instantaneous, dynamic, and safe.

A cool way to develop these types of apps is with WebSocket, a growing technology standard. It has been around since 2011 and lets you add nifty real-time features to a mobile app.

Let’s use the Kaazing WebSocket Gateway and build our first real-time mobile app. Download the JMS Edition of the Gateway to get started. Included is a collection of Web, native, and hybrid JMS demo apps for both iOS and Android to learn from. But why not just build one yourself?

All of the demo apps involve the same major programming steps (and model–view–controller pattern). All you have to do is import the Kaazing WebSocket Gateway client libraries and then add the following methods (a JavaScript sketch follows below):

  1. Event listeners for the user actions in the Touch User Interface (TUI).
  2. Connect and disconnect methods for connecting the app to the Gateway and updating the TUI.
  3. A method for creating JMS topics and queues.
  4. A message listener to manage JMS messages.
  5. An exception listener to handle exceptions detected by the JMS provider.
  6. Methods for when the app is paused, resumed, and closed.

That’s all you need.
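
Here’s a condensed JavaScript sketch of steps 2 through 5, modeled on the pattern the Kaazing JavaScript JMS client uses; treat the exact names, the Gateway URL, and the updateUi() helper as placeholders for your own setup:

```javascript
// Sketch only: assumes the Kaazing JavaScript JMS client library is loaded
// and the Gateway runs locally; URL, topic name, and updateUi() are placeholders.
var connection;

function connect() {
  var factory = new JmsConnectionFactory("ws://localhost:8001/jms");
  // createConnection() is asynchronous; the callback fires on completion.
  var future = factory.createConnection(null, null, function () {
    connection = future.getValue();

    // Step 5: exception listener for errors reported by the JMS provider.
    connection.setExceptionListener(function (e) {
      console.error("JMS exception:", e);
    });

    var session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

    // Step 3: create a topic and a consumer for it.
    var topic = session.createTopic("/topic/demo");
    var consumer = session.createConsumer(topic);

    // Step 4: message listener for incoming JMS messages.
    consumer.setMessageListener(function (message) {
      updateUi(message.getText()); // hook your TUI updates in here
    });

    // Step 2: start the connection and update the TUI.
    connection.start(function () {
      console.log("Connected");
    });
  });
}
```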

Thirty minutes from now you can have your own enterprise-level WebSocket JMS mobile app to experiment with, extend, and show off. Excited? Well, off you go:

  1. Go get the JMS Edition Gateway and start it. For information on starting the Gateway, see the Setup Guide.
  2. Download and install any JMS-compliant message broker. Or better yet, use the Apache ActiveMQ JMS broker included in the JMS Edition of the Gateway. See the Setup Guide for how to start ActiveMQ.  It’s dead simple.
  3. Pick a walkthrough to build your app:
    1. Native iOS or Android.
    2. Hybrid iOS or Android.
    3. Web JavaScript for mobile browsers.
    4. There’s even a Microsoft .NET/Silverlight hybrid for iOS and Android using Xamarin.

Told you it was easy!

 


Meeting WebSockets for Real

Years ago I was developing mission-critical applications that required updates based on incoming real-time events.  At that time I was obsessed with the notion of an Enterprise Service Bus (ESB) and Service Oriented Architecture.  It all looked so cool; you create small atomic services and have them process incoming events and exchange messages with each other and the client.

There was one problem.

In the era of Web applications, I could not figure out a really good way to send the messages back to the browser. Of course there was the obvious solution: create a facade web service and call it repeatedly from the browser in a request-response manner, while using the ESB to orchestrate all that on the server side.

But that just did not sound cool in an ‘event-driven world’. For starters, what if there were no events? Do I just keep calling just to get nothing back? That seemed wasteful. Secondly, service ‘orchestration’ works really well when there is an event to process. It does not necessarily work well when you are calling it every 100 ms just to retrieve an event and start its processing. That’s a lot of latency in my architecture that I wanted to avoid. Anyway, the whole deal was kinda falling apart.

Later I moved to developing more typical business applications using REST APIs with jQuery, AngularJS and similar frameworks.  It all seemed to work rather well.

Then one day, I had an issue with one of my apps. I needed to update different parts of the page (and quite a few of them) to reflect different changes happening in the system (results of long operations, other users’ activities, etc.). Creating one ‘big’ REST call to capture all the changes did not sound like a good idea. Our team decided it was better to create a REST call for every possible type of update.

For a while it worked.  Unfortunately we soon found ourselves with 100+ timers and REST calls going on at the same time.  Performance of the page decreased dramatically and maintenance became a huge nightmare.

At that time somebody mentioned “WebSockets”.

One would expect me to say that we started using them. We did not. Why? Mostly because we had no idea what they were. I looked online, found a nice Wikipedia article (great source of information – yeah right, don’t get me started) and thought “Wow! WebSockets are a great thing for the event-driven systems that I used to work on. The next time I need to develop a stock market streaming app with real-time position updates or something similar, I will use it!”. And I had tremendous misconceptions about WebSockets.

  • I thought that not all browsers support them, since it is something new.
    Wrong!!! The WebSocket protocol and standard APIs are very mature. Both the IETF and W3C formally standardized WebSockets back in 2011, and they are fully supported by all modern browsers!
  • I thought that the learning curve would be too steep and we plainly did not have time to deal with it.
    Wrong again!!! With the client libraries provided by the WebSocket vendor, it takes very little time (an hour or less) to get familiar with the technology and start developing an application.
  • I thought that we would have to rewrite all of our great REST beauty entirely to accommodate WebSockets.
    Wrong #3! All it takes is to move the very same code from the $http(…).then() (for AngularJS) or $.ajax(…).then() callback function into the callback function that is called when a WebSocket message is received, as the snippet below shows.
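
For example (a sketch; updateView stands in for whatever handler you already wrote for the REST response):

```javascript
// Before: a polling-style AngularJS call.
$http.get('/api/updates').then(function (response) {
  updateView(response.data); // your existing handler
});

// After: the very same handler, now driven by a pushed WebSocket message.
var socket = new WebSocket('wss://example.com/updates');
socket.onmessage = function (event) {
  updateView(JSON.parse(event.data)); // identical view logic, zero polling
};
```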

Later I was asked by a friend of mine to play with the Kaazing WebSocket Gateway (I was not working for Kaazing at that time). I tried to pick the most ‘trivial’ app (one that was not domain specific, such as data streaming, gaming, etc.) and decided on a good old TODO list. Except mine had to be shared between multiple users.

Even with such a simple app, I immediately realized the benefits that WebSockets offered compared to doing it with REST calls.

If we were to implement the shared TODO list using REST, we would have to deal with at least the following issues:

  • Server Load. A shared TODO application with REST clients has to continuously query the server for changes. Needless to say, these REST calls impact overall performance regardless of whether anything has changed. If I had 100,000 clients, that means 100,000 calls to the server and database, etc.
  • Server Logic to Detect Changes. Clearly we do not want to send the whole list of TODO items to everyone.  There is a need to implement the logic to detect changes and notify interested client apps about these changes.  Not too trivial.
  • Race Conditions. REST implementations will require timers to go off rather often to address the situation when multiple users are updating the same record. Ideally, I would want to disable the record for everyone else once some user is working on it. Using REST could potentially result in a seriously high load on both the servers and the clients. And the browser may not be fond of JavaScript code that issues a REST call every 100 or so ms. The server will get less and less happy as more and more clients come on board. Think of the extreme case: 100,000 clients polling every 100 ms = 1,000,000 calls/sec, which may, by the way, simply report that no changes occurred!

Then the lightbulb went off. Using WebSockets addresses all of these concerns! (A small sketch follows the list below.)

  • Server load is not an issue anymore. Performance now depends on the number of changes, not on the number of clients. As a change occurs, all the interested clients are notified. The rest of the time, nothing is happening, so no computing resources are wasted.
  • There is no need for any server logic to track the changes at all! Once a user changes a TODO item, a message is sent to all the interested clients to simply update their UI. We did have to add another listener on the server to update the database with the changes, but the database is not overloaded at all. It only has to do a little work on the initial load to provide the current state to newly arriving clients.
  • With a high-throughput gateway (such as Kaazing), clients can easily send messages when the user’s mouse hovers over a certain item (either in or out). Clients that are not interested in these events can simply unsubscribe from them. Certainly it would be incorrect to say that race conditions will never happen, but the possibility is far more remote.
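
Here’s a minimal sketch of that idea using the raw browser WebSocket API; the endpoint, message format, and the renderTodo/disableTodo helpers are all invented for illustration (a JMS or AMQP client over WebSocket would give you real pub/sub destinations instead):

```javascript
// Every client holds one connection; nothing happens until something changes.
var socket = new WebSocket('wss://example.com/todos');

socket.onmessage = function (event) {
  var change = JSON.parse(event.data);
  if (change.type === 'updated') renderTodo(change.item);    // redraw one row
  if (change.type === 'locked')  disableTodo(change.itemId); // another user is editing
};

// When this user edits an item, notify everyone else -- no timers anywhere.
function onItemEdited(item) {
  socket.send(JSON.stringify({ type: 'updated', item: item }));
}
```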

The sample app I created resulted in a tutorial that can be found at:

https://github.com/kaazing/tutorials

I also learned a critical fact that I somehow missed in that earlier Wikipedia article. WebSocket can be, and should be, used as a low-level transport layer that allows any application protocol, such as the publish/subscribe model (e.g., JMS, AMQP, or some custom protocol), to run over the web. While it may not sound too exciting to front-end developers, it actually opens up a whole world of features that enterprise developers have been using successfully for years.

Now that I’ve laid to rest my initial misconceptions about WebSocket and had my “Aha!” moment with this cool technology, I am going to start creating samples for different use cases to compile a ‘library’ of WebSocket architectural patterns to share with all of you.

Stay tuned!


KWICies #005: It Definitely Will Be More Cloudy Tomorrow

Thoughts on AWS re:Invent 2015

“Nature is a mutable cloud which is always and never the same.”
― Ralph Waldo Emerson

“I never said most of the things I said.”
― Yogi Berra

 


 

Let’s Get Real
Just a few years ago, many industry pundits (aka “talking heads”) proclaimed in-person conferences were dead. And trade shows and industry conferences would only be experienced in Second Life by your avatar who would learn more than you and eventually take over your job. What we didn’t know is that these pundits were from Colorado and Oregon [pause here for cognitive exercise].  And most are still living in Second Life selling virtual adult toys to socially challenged people with Walter Mitty fantasies.

Instead of holding a virtual conference and exchanging high performance networking tips with a talking frog, the Amazon “AWS re:Invent” conference was a traditional and effective analog one. The show was buzzing with 19,000 living, breathing devops hounds sniffing out all the cool and useful truffles in the AWS service forest. It was indeed an impressive show at the Sands Expo Convention Center in Las Vegas. Kudos goes to the Amazon organizing staff for pulling it off so successfully.

 

 

[Image: Kaazing’s Peter Moskovits explaining KWIC]

 

There were many excellent presentations on the new AWS services such as Kinesis Streams, Inspector, Docker integration, and the new IoT Cloud. The vendor booths on the exhibit floor always seemed incredibly crowded with inquisitive visitors. Certainly the team at the Kaazing booth was extremely busy during the entire show. Our new devops tool, Kaazing WebSocket Intercloud Connect (KWIC), which allows SaaS applications to easily and securely connect back to on-premises services for things like LDAP/AD authentication, databases, and streaming data feeds, was quite popular. It seems there’s a huge dislike for old-fashioned, legacy VPNs to connect application services on-demand.

 

[Image: Very much in-demand Kaazing t-shirt schwag]

 

Mind the Gap
As the cloud infrastructure market continues its amazing growth rate, AWS maintains its position as the big gorilla in the business. Maybe more like King Kong than the average gorilla you see on the street [just making sure you’re paying attention].   And at re:Invent, Amazon used the opportunity to announce dozens of new cloud services and enhancements to their existing services to further widen the already-huge gap from its nearest (and distant) competitor.

[Image: Competitors find AWS hard to swallow]

 

There were dozens of new and updated AWS services for databases, analytics, security inspection, API management, virtual desktops and EC2 (their IaaS computing platform). You can find out details on these from the re:Invent website and the AWS YouTube Channel.

 

Personally I found the Kinesis updates, Docker integration, Lambda enhancements and their IoT Cloud most interesting.

Streams Turn Into Rivers
Kinesis is a collection of services that makes it easy to manage real-time streaming data in the AWS cloud. The era of connected devices and intelligent things is starting to generate a mind-boggling amount of streaming data, and that data needs to be collected, persisted, and analyzed. Synchronous distributed computing and polling are not going to cut the mustard for streaming data; Kinesis addresses these new types of applications. Hundreds of sensor types, wearables, industrial machinery, and many sophisticated big-data systems will use these services. The current trifecta of Kinesis services is: Kinesis Firehose for loading streaming data into AWS, Kinesis Analytics for analyzing the data using SQL queries, and Kinesis Streams for building your own custom streaming applications. In true AWS fashion, you can easily integrate Kinesis with other AWS services.
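
For a taste of Kinesis Streams, here’s a hedged sketch of loading one record into a stream with the AWS SDK for JavaScript; the stream name, region, and record contents are made-up examples:

```javascript
var AWS = require('aws-sdk');
var kinesis = new AWS.Kinesis({ region: 'us-east-1' });

// Push one sensor reading into a stream (assumes a stream named
// "sensor-stream" already exists in this account and region).
kinesis.putRecord({
  StreamName: 'sensor-stream',
  PartitionKey: 'sensor-42', // determines which shard receives the record
  Data: JSON.stringify({ temp: 21.7, ts: Date.now() })
}, function (err, resp) {
  if (err) console.error(err);
  else console.log('Stored in shard', resp.ShardId);
});
```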

 

Their Ploy of Deploy
Amazon’s EC2 Container Service is based on Docker and container deployment of microservices. They have now integrated the Docker registry with their Identity and Access Management (IAM) for authorization and access control. Developers understand the agility advantages that containers (Docker and non-Docker) bring to the table. And in typical Amazon fashion, there is now a command-line interface (CLI) for this style of agile deployment.   Very cool.

 

Code on Demand
The AWS Lambda service is quite innovative and a natural evolution of cloud infrastructure. As a developer, you are quite good at building server-side code but you may not want to deal with provisioning or managing any servers. You just want to upload your code and have it run when a certain condition occurs or perhaps when you want to manually invoke your service. Pretty nifty. And you only pay when your code runs.
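
As a sketch, a minimal Node.js Lambda handler of that era looks like this; the shape of the event depends entirely on the trigger you configure:

```javascript
// A minimal Node.js Lambda handler: no servers to provision or manage.
// AWS invokes this function whenever the configured event occurs.
exports.handler = function (event, context) {
  console.log('Triggered by:', JSON.stringify(event));
  // ... your business logic here ...
  context.succeed({ status: 'done' }); // report success back to Lambda
};
```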

 

[Image: Pre-IoT era cash register]

Left to Our Own Devices
The AWS IoT Cloud is a major first step for Amazon. IMO, it’s going to grow beyond anyone’s imagination (except probably Amazon’s). It’s a managed cloud infrastructure for connected devices covering data collection, storage, and analytics. Given the recent advances in device CPUs (ARM, Intel Edison, Apple) and low-power pervasive Internet connectivity, there will be an explosion of data soon after the current hacker era of IoT matures (the usual prelude to a technology wave). An interesting core component of the AWS IoT Cloud is an MQTT message broker, which clearly indicates Amazon feels strongly that publish/subscribe, not request-response, is the right approach to IoT communication. This subsystem will surely grow and evolve quickly.

 

 

What’s Next?
Amazon is stepping up big time with their cloud services. They have done an amazing job so far. I’m sure if they continue their torrid growth, we’ll start hearing from their competitors that Amazon has a monopolistic hold on customers. It’ll be the same thing we heard during the IBM (’70s), Microsoft (’90s) and Google (’00s) eras. No one cried monopoly during the Facebook era because they were too busy posting videos of dancing kittens and singing dogs, angrily venting about politics, or taking selfies.

 

Btw, Clouds and truffles share pricing models. Discuss.


 

Frank Greco


KWICies #004 – The Stream Police, They Live Inside of my Head

Securing Streaming Data Over the Web

“Security is the chief enemy of mortals.” ― William Shakespeare

“The user’s going to pick dancing pigs over security every time.” ― Bruce Schneier

 

Take Me to the River
It’s a real-time world. Enterprises live in real-time. Business processes happen in real-time with live, streamed information passing from app to app, server to server.

These types of business-critical streaming systems apply to a vast number of use cases.  Today’s data analytics doesn’t wait for overnight crunching or hours of offline study. Mention the word “batch” and you’ll get raised eyebrows and derogatory comments about your old-fashioned taste for classic rock music and intense hatred for selfie sticks.

Many of these on-demand, streaming processes occur in and outside the firewall, among on-premises and off-premises cloud infrastructures. Your enterprise, partners, customers and entire ecosystem depend on many of these real-time events.

Historically, we have seen a huge trend of programmatic ecosystem integration to address this cross-firewall connectivity. The hipsters have proclaimed this wave the “API Economy” (btw, anyone want to buy my “Web Properties”? They’re near a soothing digital stream). Major enterprises are rushing to extend their businesses over the firewall with programmatic interfaces, or APIs. This approach has potentially rewarding business implications for additional revenue streams and deepening customer engagement. There’s no question this is a valuable evolutionary trend.

 

[Image: The trend is your friend?]

 

Go With the Flow
Thousands of these public and private B2B APIs from such companies as Amazon, Twitter, Facebook, Bloomberg, British Airways, NY Times and the US Government are now available. A quick visit to the very popular ProgrammableWeb indicates the rapidly growing numbers of APIs that connect to very useful services.

However, many of these APIs primarily use a heavyweight, non-reactive communication model called “request-response”, initiated by the client application using the traditional, legacy network plumbing of the web: HTTP.

 

[Image: Full duplex it’s not. Thankfully.]

 

Alternatively, some of these companies (and others) have recently been offering modern streaming APIs for browsers, mobile devices, and embedded “Internet of Things” applications. We are seeing applications in logistics, health monitoring, smart government, risk management, surveillance, management dashboards, and other areas offering real-time distributed business logic that provides a significantly higher level of customer or partner engagement.

 

Cry Me a River
However, there are huge security and privacy issues to consider when deploying streaming interfaces.

 

[Image: They hacked the doughnuts server!]

 

Offering these real-time (and non-real-time) APIs seems risky despite their business potential. Not only do they have to be reliable, robust, and efficient in a mobile environment, they also have to be encrypted, fully authenticated, compliant with corporate privacy entitlements, and deployable in a multi-DMZ environment where no business logic should live. And certainly no enterprise wants any open ports on the internal, private network to be compromised by the “black hats”. If we could just solve this last one, we could avoid creating replicant and expensive systems in the DMZ for the purposes of security and privacy.

 

Here’s just a short list of deployment concerns to be aware of for offering streaming (and even conventional synchronous) APIs.

 

Device Agnostic
Streaming data must be able to be sent to (or received from) all types of mobile devices, desktops and browsers.   It may be an iOS, Android, Linux, Windows or Web-endowed device. Your target device may even be a television, car or perhaps some type of wearable. Services and data collection/analytics must use consistent APIs to provide coherent and pervasive functionality for all endpoints. Using different APIs for different devices is inelegant, which we computer geeks know means more complexity and more glitch potential. And it means more wasted weekends lost to debugging on your Linux box instead of having a few of those great margaritas and tequila shots at that hip TexMex bar downtown.

 

[Image: Go Jira!]

 

Connectivity
HTTP was not really designed for the persistent connections that are needed for streaming data. Yes, you can twist and fake out HTTP for long-lived connections, use Comet-style pushes from the server, and get something to work. But let’s face it: after you’re done hacking, you feel good as an engineer but really lousy as an architect… and really nervous if you’re the CTO.

The typical networking solution for streaming and persistent connections in general is either to create and manage a legacy-style VPN or to open non-standard ports over the Internet. Since most operations people enjoy the comfort of employment, asking them to open a non-standard port will either have them laughing hysterically or pretending you don’t exist. Installing yet another old-fashioned, low-level VPN doesn’t seem fun either. You have to get many more management signoffs than you originally thought, and deal with mind-numbing political and administrative constraints. Soon you start to question your own sanity.

“And what about our IoT requirements?” bellows your CIO during your weekly status meeting (and it’s a lovely deep bellow too). Remember, streaming needs to be bidirectional. While enterprise-streaming connectivity is primarily sending to endpoints, IoT connectivity is primarily sending from the endpoints. A unified architecture needs to handle both enterprise and IoT use cases in a high-performance manner.

 

DMZ Deployment
As with most business-critical networking topologies, any core streaming services deployment must be devoid of business logic and capable of installation into a DMZ or series of DMZ protection layers. You need to assume the black hats will break into your outermost DMZ, so there shouldn’t be any valuable business or security intelligence resident in your DMZ layers. At the least, you should avoid read-only replicated copies of back-end services in the DMZ as much as possible… because it’s yet another management time and money sink.

 

Reliability
As your ecosystem grows (and shrinks), connectivity must adapt on-demand and take place over a very reliable connection.

 

[Image: Crossing the Chasm]

 

Leveraging the economies and agility of the web and WebSocket is phenomenally useful, but automatic reconnection between critical partners over the Web is even more so.

 

Encryption
Just for the record, production conversations that traverse open networks must be secure via TLS/SSL encryption. So always use secure WebSocket (wss://) and secure HTTP (https://) for business purposes. Nuff said.

 

Authentication
Of course, users must be checked to confirm they are allowed to connect. Instead of dealing with low-level access via legacy VPNs, which potentially grant open access at the network layer, it is significantly more secure to allow only application-service access. This Application-to-Application (A2A) services connectivity (using standard URIs) presents a tiny surface area for the black hats, which, btw, becomes microscopic with Kaazing’s Enterprise Shield feature. This feature shuts down 100% of all incoming ports and further masks internal service and topology data. Yes, I did say 100%.

 

Entitlements/Authorization
Once a user is fully authenticated and connected to your streaming service, what operations are they entitled to perform? In other words, their access-control rights need to be confirmed. Again ideally this type of control should not be in the DMZ. Telling the operations team to incur several weeks of headaches getting corporate signoffs because you need a replicant Identity subsystem in the DMZ will not be easy. Don’t expect an invitation to their holiday party after that request.

 

Stream Protocol Validation
Real-time data needs to be inspected for conformance to A2A protocol specifications to prevent injection of insecure code. Any streaming infrastructure needs to guarantee that application protocols follow the rules of the conversation. Any data in an unexpected format or in violation of a framing specification must be immediately discarded and the connection terminated.
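
As an illustrative sketch only (not Kaazing’s actual enforcement logic), here is the shape of such a check using the Node “ws” library; ALLOWED_TYPES and handle() stand in for your real protocol rules:

```javascript
// Illustrative: validate each incoming frame against the expected
// application-protocol shape and terminate the connection on any violation.
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });
var ALLOWED_TYPES = new Set(['create', 'update', 'delete']); // your protocol's verbs

wss.on('connection', function (socket) {
  socket.on('message', function (data) {
    var msg;
    try {
      msg = JSON.parse(data);
    } catch (e) {
      return socket.close(1003, 'unparseable frame');  // 1003: unsupported data
    }
    if (typeof msg.type !== 'string' || !ALLOWED_TYPES.has(msg.type)) {
      return socket.close(1008, 'protocol violation'); // 1008: policy violation
    }
    handle(msg); // only well-formed protocol messages get this far
  });
});
```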

 

There are certainly additional issues to consider for streaming data for your B2B ecosystem. Performance, scalability, monitoring, logging, et al, are equally important. We’ll cover those in a future KWICie soon.

 

[Image: Watching the tide roll away indeed!]

 

Sittin’ on the Dock of the Bay
If you’re attending AWS re:Invent 2015, please stop by the Kaazing booth (K24) to say hello. I’m always interested to chat with customers and colleagues about the future of cloud computing, containers, autonomous computing, microservices, IoT and the unfortunate state of real music.

 

Frank Greco


AWS Re:Invent 2015 – Peter’s Cloud Security Talk Picks

With AWS re:Invent approaching fast, I started reviewing the talks I absolutely want to see this year. Given our recent work with KWIC (Kaazing WebSocket Intercloud Connect), my focus this year is geared toward security- and connectivity-related topics. Here they are:

ARC344 – How Intuit Improves Security and Productivity with AWS Virtual Networking, Identity, and Account Services
Brett Weaver – Software Architect, Intuit Inc
Don Southard – AWS Senior Solutions Architect Manager, Amazon Web Services
Abstract: Intuit has an “all in” strategy in adopting the AWS cloud. We have already moved some large workloads supporting some of our flagship products (TurboTax, Mint) and are expecting to launch hundreds of services in AWS over the coming years. To provide maximum flexibility for product teams to iterate on their services, as well as provide isolation of individual accounts from logical errors or malicious actions, Intuit is deploying every application into its own account and virtual private cloud (VPC). This talk discusses both the benefits and challenges of designing to run across hundreds or thousands of VPCs within an enterprise. We discuss the limitations of connectivity, sharing data, strategies for IAM access across accounts, and other nuances to keep in mind as you design your organization’s migration strategy. We share our design patterns that can help guide your team in developing a plan for your AWS migration. This talk is helpful for anyone who is planning or in the process of moving a large enterprise to AWS with the difficult decisions and tradeoffs in structuring your deployment.

DVO206 – Lessons from a CISO: How to Securely Scale Teams, Workloads, and Budgets
James Hoover – VP, Chief Information Security Officer, Infor
Adam Boyle – Director of Product Management, Cloud Workload Security, Trend Micro
Abstract: Are you a CISO in cloud or security operations and architecture? The decisions you make when migrating and securing workloads at scale in the AWS cloud have a large impact on your business. This session will help you jump-start your migration to AWS or, if you’re already running workloads in AWS, teach you how your organization can secure and improve the efficiency of those deployments.
Infor’s Chief Information Security Officer will share what the organization learned tackling these issues at scale. You’ll hear how managing a traditional large-scale infrastructure can be simplified in AWS. You’ll understand why designing around the workload can simplify the structure of your teams and help them focus. Finally, you’ll see what these changes mean to your CxOs and how better visibility and understanding of your workloads will drive business success. Session sponsored by Trend Micro.

DVO312 – Sony: Building At-Scale Services with AWS Elastic Beanstalk
Sumio Okada – Cloud Engineer, Sony Corporation
Shinya Kawaguchi – Software Engineer, Sony Corporation
Abstract: Learn about Sony’s efforts to build a cloud-native authentication and profile management platform on AWS. Sony engineers demonstrate how they used AWS Elastic Beanstalk (Elastic Beanstalk) to deploy, manage, and scale their applications. They also describe how they use AWS CloudFormation for resource provisioning, Amazon DynamoDB for the main database, and AWS Lambda and Amazon Redshift for log handling and analysis. This discussion focuses on best practices, security considerations, tradeoffs, and final architecture and implementation. By the end of the session, you will clearly understand how to use Elastic Beanstalk as a platform to quickly and easily build at-scale web applications on AWS, and how to use Elastic Beanstalk with other AWS services to build cloud-native applications.

If you’re in Vegas for re:Invent, be sure to stop by the Kaazing booth (K24) to have a chat! See you there…


KWICies #003: WebSocket is Hot and Cool – You Dig?

How Fat-Pipe Connectivity over the Web Changes Everything

“We are our choices.”  ― Jean-Paul Sartre

“As a child my family’s menu consisted of two choices: take it or leave it.”  ― Buddy Hackett

Giant Steps
We all know about the hip HTML5 apps and ritzy sites out there. There are a lot of swell and keen web applications that leverage the mind-blowing array of new functionality being added to the web tool chest. I’ll level with you: it’s amazing for me, as a long-time tech cat, to watch the back seat bingo, the handcuff, and the tying of the knot between Hypertext and the Internet. They just got goofy and now the combination is just nifty. Back in the 80’s, Hypertext needed the Internet and the Internet wanted Hypertext. The two just jammed, natch. And after 25-plus years, the Web is hip to the jive and is just smokin’.

During the courtship, Sir Tim (“TBL” to those who know him well) decided on a simplified document-sharing distributed computing protocol that is known and loved/liked/sorta-liked as HTTP. For the kiddies out there, HTTP is HyperText Transfer Protocol; btw, note the historic “hypertext” connection (for extra credit, Google “Vannevar Bush”, “Ted Nelson” and “Douglas Engelbart” and take notice of their visionary contributions to our everyday lives). For the past two-plus decades, the World Wide Web has been primarily based on a synchronous (make a request and wait for a response, regardless of how long it takes), document-centric sharing application protocol designed in 1989. Yep, the same year that Lotus Notes launched, the 25MHz Intel 80486 with 1.2 million transistors came out, and Baywatch was on TV (sigh… they don’t make quality programs like that anymore).

 

[Image: Technology state of the art when HTTP was invented]

 

All Blues
As more and more users of the web demanded more interactivity, some de facto and rather loose standards started to emerge. DHTML, or Dynamic HTML (hey… don’t blame me for the name), was an inconsistent collection of technologies introduced by Microsoft in the late 90’s in an attempt to create customized, interactive web sites. And in an attempt to add more functionality between Internet Explorer and Outlook, Microsoft added XMLHttpRequest (XHR for the hipsters) to improve the communication between client and server. As the memetic evolution to increase the bandwidth for more user interactivity continued, other technologies appeared, such as AJAX, Asynchronous JavaScript And XML (which doesn’t have to be asynchronous, nor JavaScript, nor XML, btw), and Comet, or “reverse-AJAX” (server push to the browser), moving all of us towards a more dynamic world at the expense of more complexity. After all, the very essence of the web (as with most hypertext systems) was not focused on interactivity, just document sharing.

As is the case with increasing complexity in any dynamic system, there comes a point where chaos gets out of control and the need for stability becomes critical for evolution and continued existence. With its lack of standards… heck, lack of a real definition, DHTML and related technologies created a morass for all the web developers on the planet.

 

[Image: You guessed it. A morass.]

 

In addition, simulating real-time interactivity using HTTP-centric tools such as AJAX and Comet is clumsy and heavyweight. These workarounds used HTTP in a way that it was never intended. Our hats are off to the developers who originated these hacks since there was no good alternative, but they are indeed hacks for missing functionality.

In other words, when the Hack Quotient (or chaos index) is high, the time is ripe for innovation. Now that I think about it, this is a corollary of [enable canyon echo] Greco’s Law of Scalability (hey if Gordon Moore can call his thing a law and not an observation, why not me?).

 

[Image: Greco’s Law]

 

I apply this heuristic (ok, at least I’m humble) to complex distributed systems, but it seems to apply to innovation as well. When growing complexity, perhaps from hacks and workarounds, overwhelms a system’s ability to grow, it’s time for a major overhaul, i.e., what’s known in the industry as a “do-over”.

 

Unforgettable
For the web, this do-over was the HTML5 initiative. HTML5 gives us a renewed focus on standardized APIs, enhanced graphics, offline capabilities, and enhanced browser security. However, the early HTML5 specification continued to use only HTTP as the communications protocol. Yes, the same synchronous, request-response distributed computing model used in the late 70’s and 80’s, which was proven to be brittle, difficult to manage, and clumsy to scale for many use cases. I can’t even mutter the word “CORBA” or else I’ll get nauseous.

About 10 years ago, a spirited discussion started within the HTML5, IETF, and WHATWG standards communities about a new lower-latency communication model for the web. Google’s Ian Hickson, the HTML5 specification lead, had a personal interest in such a model. Ian is a model train aficionado and was quite interested in controlling his model trains through a browser. Simulating real-time control by tricking out HTTP was just not cutting the mustard for Ian. He thought saving milliseconds of latency when two locomotives were speeding at each other could be considered quite valuable.

Several proposals for a more modern communications substrate were submitted. One proposal was to have a real TCP connection that used ports other than 80 or 443, known blandly as “TCPConnection”. Another proposal came from Kaazing’s co-founder (and current CTO) John Fallows, along with another Kaazing colleague. They submitted a proposal for a modern communications technology called “WebSocket” that recommended the initial connection be made via HTTP on a standard port, with the client then requesting the server to upgrade the protocol to WebSocket on the same port. This clever scheme guaranteed that every web server on the planet would be capable of speaking WebSocket in the future. The majority of the Fallows proposal is now part of the official WebSocket specification.
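
Trimmed to its essentials, that upgrade handshake looks like this (header values taken from the RFC 6455 example):

```
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```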

 

So What
So we now have an official, standards-blessed, browser-enabled, web-friendly mechanism for a “TCP for the Web” (yes, we all know physically HTTP runs over TCP as does WebSocket). WebSocket is something we’ve wanted ever since the dawn of the web. We can now perform enterprise-style distributed computing in a web-safe manner. We don’t need the browser to poll for data, use hanging-GETs, forever frames, HTTP streaming or even need to have the web server to poll for data from another service to eventually relay to the browser (using another poll).

Most of these techniques have high overhead due to the HTTP meta-data that is sent with every HTTP call. For document-centric applications, where the data to meta-data ratio is relatively large, this is usually acceptable. For event-based messaging applications, such as in finance, enterprise infrastructure, and now IoT, this data-to-meta-data ratio is very small; additionally, much of the time the meta-data is as redundant as hitting the elevator button multiple times.

For these use cases, there’s clearly a high price to pay for using HTTP as the underlying protocol. To make it worse, you’re polling, which in and of itself is resource-intensive. WebSocket does not have the same HTTP meta-data overhead, so it’s very lightweight. And since it’s lightweight and simple, it can be quite fast and scalable. For critical event-based applications, WebSocket is the obvious technology of choice. But note it’s not a replacement for HTTP, which is great network plumbing for synchronous communications, especially those that serve lots of static elements and have caching requirements.

The W3C has standardized the WebSocket JavaScript API, so there is one official API for the browser. It has been mimicked in other languages such as Java, Ruby, and C#, but only the JavaScript API is formally standardized. We’ll review the API further in a future blog, along with how to handle legacy or minimal browsers that do not have WebSocket capability.
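
In the meantime, here is essentially the whole of it in one breath (the endpoint URL is invented):

```javascript
// The standard browser WebSocket API in a nutshell.
var socket = new WebSocket('wss://example.com/feed'); // wss:// = TLS-encrypted

socket.onopen    = function ()      { socket.send('hello'); };    // full duplex: send any time
socket.onmessage = function (event) { console.log(event.data); }; // pushed, never polled
socket.onerror   = function (err)   { console.error(err); };
socket.onclose   = function ()      { console.log('closed'); };
```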

 

How High the Moon
As we’ve discussed, HTTP has been used by a browser to have a conversation with a webserver to get and put full documents or parts of documents. It’s a useful and network friendly protocol that allows people to extend the reach of their documents beyond the corporate firewall. This type of access is exceptionally valuable for offering request/response services. The vast majority of all such (non-video) web usage is hidden via these synchronous (request and wait for a response) APIs.

 

[Image: The programmatic web is hidden underneath the viewable web]

 

There’s no surprise the popular REST API architecture for accessing these services uses HTTP.  Because of the ubiquity and language neutrality of HTTP, these services can be reachable by any client application regardless of programming language or operating system.

But think about WebSocket. The IETF standardized the WebSocket protocol as an independent application protocol based on TCP. As an efficient network substrate, it is designed to support any application protocol that would typically use TCP. There is even a mechanism in the specification (“sub-protocols”) that allows the client to identify the higher-order protocol to the server, so the server can manage that protocol conversation effectively and efficiently. For many use cases, WebSocket is more appropriate than REST, particularly for streaming, persistent connections, or event-based applications.
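
From JavaScript, the client advertises the higher-order protocol in the WebSocket constructor’s second argument, and the server echoes back the one it agrees to speak; ‘amqp’ below is purely illustrative:

```javascript
// The client lists the application protocols it can speak; 'amqp' is
// illustrative -- the token is whatever protocol the server registers.
var socket = new WebSocket('wss://example.com/messaging', ['amqp']);

socket.onopen = function () {
  console.log('Negotiated subprotocol:', socket.protocol); // e.g. "amqp"
};
```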

 

What a Wonderful World
By implementing well-known TCP-based application protocols over WebSocket, many applications can traverse the web as easily as HTTP does. Most importantly, the protocols remain unchanged, avoiding any semantic loss from translating to extremely coarse-grained HTTP semantics. This implies that messaging and datastore systems, notification mechanisms, monitoring frameworks, hybrid cloud dynamic connectivity, IoT aggregation, risk management, trading systems, logistics architectures, etc., can now all traverse the web, on-demand and in real-time, with no polling.

Pretty cool Daddy-o.  Now fix me a drink.

Btw, if anyone is attending AWS re:Invent, please stop by the Kaazing booth (#K24) to say hello, ask some WebSocket questions, talk about the future of hybrid clouds and containers, see some cool KWIC demos and all that jazz.

Frank Greco


KWICies #002 – To B2B, or Not B2B, That’s a Question?

On-demand App-to-App Cloud Connectivity

“Innovation distinguishes between a leader and a follower.” ― Steve Jobs

“Life doesn’t imitate art, it imitates bad television.”― Woody Allen

Free “Enterprise”
In the Information Technology (IT) world, the word “enterprise” is bandied about quite often. First, I really have no idea how to bandy about. If it involves incense and oils, I may have an idea, but let’s talk about enterprises instead.

If you do some serious digging, you’ll find out that an enterprise is a federation; in other words, a collection of related business units with a common goal of profitability. It is an aggregate, dynamic, yet unified entity that provides a product or service to benefit customers in return for revenue and profit. You’ll probably hear “enterprise” used loosely, and actually incorrectly, as interchangeable with “company” or “business”.

Federation of Plan Its
As most of us are well aware, the success of this type of federation is very dependent on its network of vendors, service providers, partners and even the IT systems of its customers. In other words, most enterprises rely on their supply chain network (tip: instead say “cooperative cloud ecosystem” at your next evening event with tragically hip baristas to get smiling nods of approval). This cooperative ensemble usually includes information management, purchasing, inventory, manufacturing, process-flow, logistics, research/development, distribution and customer service. This is true regardless of whether you are a large retailer, a telecom provider, an investment bank or a television network.

This means a spectrum of enterprise applications needs connectivity with other applications and services. Trading systems, real-time inventory, big data analytics, complex event processing, systems monitoring and management, mobile notifications, social media sentiment analysis, et al, increasingly require traversal across multiple organizational boundaries. And today, many of these applications reside in IT systems off-premises within a cloud provider outside of the traditional firewall.

A Storm in Any Port
So the success of the “enterprise” now depends on a federation of organizations, integrating multiple external applications over multiple firewalls, opening multiple ports (and maintaining friendships with your poker-playing buds in the InfoSec group). It is an atmosphere where the art of negotiation becomes more critical than mandated localized governance. And it’s an environment that clearly demonstrates and reinforces why agility and technology standards are truly useful.

Summarized, an enterprise is in the B2B2B2B2B2B [take a breath] B2B2B2B2B2B business with A2A2A2A2A2A connectivity needs.

Make it Sew
The usual answer for application-to-application (A2A) connectivity is a traditional Virtual Private Network (VPN), which has been around since the mid-90’s, i.e., before Clinton/Lewinsky and stained blue dresses. Heck, VPNs were invented at a time when Google didn’t even exist, Amazon was called Cadabra, and AltaVista was your Google.

Over the past decade, VPNs have done an excellent job of connecting data centers, cloud infrastructures and other large networks. Large cloud vendors such as Amazon even offer virtual private clouds (VPC) along with hardware Gateways to create a VPN. There are clear use cases for traditional VPNs.

But there are some significant downsides to traditional and cloud-based VPNs for modern, on-demand A2A communication.

  • The on-boarding process can be onerous especially between external organizations, despite the straightforward technology setup.
  • They typically allow low-level potentially dangerous access especially if home computers are used to access corporate assets.
  • VPN access control usually uses the hard-to-manage blacklist model.
  • They present huge surface areas with many attack vectors for hackers to exploit.
  • VPN vendor hardware and software are not always interoperable or compatible. A particular VPN architecture may not be suitable across multiple VPN vendors.
  • They are not easy to manage in an agile, constantly changing federated environment.
  • VPNs may require additional infrastructure for mobile devices that experience disconnects, cross-application network connection retries, additional security, etc.
  • Even one VPN can be quite difficult for a business unit to deploy, maintain and understand the security issues. In a business-driven cloud services world, this reduces agility for the revenue generators in an enterprise.
  • VPN products typically offer poor user experiences.
  • TCP and Web VPN requirements are not necessarily the same, and this drives up costs.
  • Do legacy VPNs fit in a multi-cloud, on-demand, microservices world?

Certainly feels time for a makeover, doesn’t it?

Standard Orbit with KWIC
As I mentioned in the last KWICies, the web standards bodies (IETF and W3C) blessed the WebSocket standard back in 2011. And right after those standards came out, we saw simple web push applications with WebSocket replacing Comet/reverse-AJAX on some websites. But we need to recall that WebSocket is not just a formally standardized API; it is also an application protocol, similar to HTTP. It provides on-demand, fat-pipe connectivity that’s web-friendly. Think about that for a few milliseconds (btw, about the same time it takes a message to flow over a WebSocket across the web). It’s a full-throttle, TCP-like connection that is web-friendly. And it’s an excellent foundational substrate for agile A2A connectivity in the modern enterprise. This is the basis of KWIC and why it’s perfectly suited for today’s A2A connectivity.

“I have spent my whole life trying to figure out crazy ways of doing things. I’m telling ya, as one engineer to another – I can do this.”  -[any non-Googlian guesses?]

Frank Greco
