KWICies #005: It Definitely Will Be More Cloudy Tomorrow

Thoughts on AWS re:Invent 2015

“Nature is a mutable cloud which is always and never the same.”
― Ralph Waldo Emerson

“I never said most of the things I said.”
― Yogi Berra




Let’s Get Real
Just a few years ago, many industry pundits (aka “talking heads”) proclaimed that in-person conferences were dead, and that trade shows and industry conferences would only be experienced in Second Life by your avatar, who would learn more than you and eventually take over your job. What we didn’t know is that these pundits were from Colorado and Oregon [pause here for cognitive exercise]. And most are still living in Second Life selling virtual adult toys to socially challenged people with Walter Mitty fantasies.

Instead of holding a virtual conference and exchanging high performance networking tips with a talking frog, the Amazon “AWS re:Invent” conference was a traditional and effective analog one. The show was buzzing with 19,000 living, breathing devops hounds sniffing out all the cool and useful truffles in the AWS service forest. It was indeed an impressive show at the Sands Expo Convention Center in Las Vegas. Kudos goes to the Amazon organizing staff for pulling it off so successfully.



Kaazing’s Peter Moskovits Explaining KWIC


There were many excellent presentations on the new AWS services such as Kinesis Streams, Inspector, Docker integration and the new IoT Cloud. And the vendors on the exhibit floor always seemed incredibly crowded with inquisitive visitors. Certainly the team at the Kaazing booth was extremely busy during the entire show. Our new devops tool, Kaazing WebSocket Intercloud Connect (KWIC), which allows SaaS applications to easily and securely connect back to on-premises services for things like LDAP/AD authentication, databases and streaming data feeds, was quite popular. It seems there’s a huge dislike for old-fashioned, legacy VPNs as the way to connect application services on-demand.


Very Much In-demand Kaazing T-Shirt Schwag


Mind the Gap
As the cloud infrastructure market continues its amazing growth rate, AWS maintains its position as the big gorilla in the business. Maybe more like King Kong than the average gorilla you see on the street [just making sure you’re paying attention]. And at re:Invent, Amazon used the opportunity to announce dozens of new cloud services and enhancements to existing ones, further widening the already-huge gap from its nearest (and distant) competitor.

Competitors Find AWS Hard to Swallow


There were dozens of new and updated AWS services for databases, analytics, security inspection, API management, virtual desktops and EC2 (their IaaS computing platform). You can find out details on these from the re:Invent website and the AWS YouTube Channel.


Personally I found the Kinesis updates, Docker integration, Lambda enhancements and their IoT Cloud most interesting.

Streams Turn Into Rivers
Kinesis is a collection of services that makes it easy to manage real-time streaming data in the AWS cloud. The era of connected devices and intelligent things is starting to generate a mind-boggling amount of streaming data. This data needs to be collected, persisted and analyzed, and synchronous distributed computing and polling are not going to cut the mustard for it. Kinesis addresses these new types of applications; hundreds of sensor types, wearables, industrial machinery and many sophisticated big-data systems will use these services. The current trifecta of Kinesis services is: Kinesis Firehose for loading streaming data into AWS, Kinesis Analytics for analyzing that data with SQL queries and Kinesis Streams for building your own custom streaming applications. In true AWS fashion, you can easily integrate Kinesis with other AWS services.
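For a flavor of what feeding Kinesis Streams looks like, here’s a minimal sketch in Python. The record layout (a Data blob plus a PartitionKey) follows the Kinesis API; the sensor readings, the stream name and the shard-by-sensor choice are made-up illustrations, and the actual boto3 call is left as a comment since it needs AWS credentials.

```python
import json

def build_kinesis_records(readings):
    """Package sensor readings as Kinesis records.

    Each record carries a Data blob and a PartitionKey; records sharing
    a partition key land on the same shard, preserving their order.
    """
    return [
        {
            "Data": json.dumps(r).encode("utf-8"),
            "PartitionKey": str(r["sensor_id"]),  # shard by sensor (illustrative)
        }
        for r in readings
    ]

readings = [
    {"sensor_id": 7, "temp_c": 21.4},
    {"sensor_id": 7, "temp_c": 21.6},
]
records = build_kinesis_records(readings)

# With boto3 installed and AWS credentials configured, you would then call:
#   boto3.client("kinesis").put_records(
#       StreamName="sensor-stream", Records=records)   # stream name is hypothetical
```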


Their Ploy of Deploy
Amazon’s EC2 Container Service is based on Docker and supports container deployment of microservices. They have now integrated their Docker registry with Identity and Access Management (IAM) for authorization and access control. Developers understand the agility advantages that containers (Docker and non-Docker) bring to the table. And in typical Amazon fashion, there is now a command-line interface (CLI) for this style of agile deployment. Very cool.


Code on Demand
The AWS Lambda service is quite innovative and a natural evolution of cloud infrastructure. As a developer, you are quite good at building server-side code but you may not want to deal with provisioning or managing any servers. You just want to upload your code and have it run when a certain condition occurs or perhaps when you want to manually invoke your service. Pretty nifty. And you only pay when your code runs.
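As a sketch of the model, here’s a toy Lambda-style handler in Python. The handler(event, context) signature matches Lambda’s Python programming model; the order-event shape and the pricing math are invented purely for illustration.

```python
import json

def handler(event, context):
    """Entry point that Lambda invokes; no server to provision or manage.

    This sketch reacts to a (hypothetical) order event and returns a
    response the caller, or an API Gateway trigger, can consume.
    """
    order = event.get("order", {})
    total = sum(item["qty"] * item["price"] for item in order.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order.get("id"), "total": total}),
    }

# Invoked manually or by a trigger (an S3 upload, a Kinesis record, an API call):
result = handler({"order": {"id": "A1", "items": [{"qty": 2, "price": 5.0}]}}, None)
```

You upload the function, wire it to an event source, and pay only while it runs.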


Pre-IoT Era Cash Register

Left to Our Own Devices
The AWS IoT Cloud is a major first step for Amazon. IMO, it’s going to grow beyond anyone’s imagination (well, probably not Amazon’s). It’s a managed cloud infrastructure for connected devices, covering data collection, storage and analytics. Given the recent advances in device CPUs (ARM, Intel Edison, Apple) and low-power pervasive Internet connectivity, there will be an explosion of data soon after the current hacker era of IoT matures (the usual prelude to a technology wave). An interesting core component of the AWS IoT Cloud is an MQTT message broker, a clear indication that Amazon feels strongly that publish/subscribe, not request-response, is the right approach to IoT communication. This subsystem will surely grow and evolve quickly.
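To make the publish/subscribe shape concrete, here’s a small Python sketch of what a device-side publish might look like. The topic scheme and thing name are hypothetical, and the actual MQTT publish (e.g., via the paho-mqtt package) is left as a comment.

```python
import json
import time

def device_message(thing_name, reading):
    """Build the topic and JSON payload a device would publish.

    An MQTT broker routes by topic: devices publish readings, back-end
    services subscribe -- publish/subscribe, not request-response.
    The topic scheme here is a made-up example.
    """
    topic = "sensors/{}/telemetry".format(thing_name)
    payload = json.dumps({"reading": reading, "ts": int(time.time())})
    return topic, payload

topic, payload = device_message("thermostat-42", 21.5)

# With the paho-mqtt package you would then publish it to the broker:
#   client.publish(topic, payload, qos=1)
```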



What’s Next?
Amazon is stepping up big time with their cloud services. They have done an amazing job so far. I’m sure if they continue their torrid growth, we’ll start hearing from their competitors that Amazon has a monopolistic hold on customers. It’ll be the same refrain we heard during the IBM (70’s), Microsoft (80’s) and Google (90’s) eras. No one cried monopoly during the Facebook era because they were too busy posting videos of dancing kittens and singing dogs, angrily venting about politics or taking selfies.


Btw, clouds and truffles share pricing models. Discuss.



Frank Greco

Posted in cloud, Events, html5, IoT, JMS, Kaazing, Security, WebSocket

KWICies #004 – The Stream Police, They Live Inside of my Head

Securing Streaming Data Over the Web

“Security is the chief enemy of mortals.” ― William Shakespeare

“The user’s going to pick dancing pigs over security every time.” ― Bruce Schneier


Take Me to the River
It’s a real-time world. Enterprises live in real-time. Business processes happen in real-time with live, streamed information passing from app to app, server to server.

These types of business-critical streaming systems apply to a vast number of use cases.  Today’s data analytics doesn’t wait for overnight crunching or hours of offline study. Mention the word “batch” and you’ll get raised eyebrows and derogatory comments about your old-fashioned taste for classic rock music and intense hatred for selfie sticks.

Many of these on-demand, streaming processes occur in and outside the firewall, among on-premises and off-premises cloud infrastructures. Your enterprise, partners, customers and entire ecosystem depend on many of these real-time events.

Historically we have seen a huge trend of programmatic ecosystem integration to address this cross-firewall connectivity. The hipsters have proclaimed this wave the “API Economy” (btw, anyone want to buy my “Web Properties”? They’re near a soothing digital stream). Major enterprises are rushing to extend their businesses over the firewall with programmatic interfaces, or APIs. This approach holds potentially rewarding business implications: additional revenue streams and deeper customer engagement. There’s no question this is a valuable evolutionary trend.


The trend is your friend?


Go With the Flow
Thousands of these public and private B2B APIs from such companies as Amazon, Twitter, Facebook, Bloomberg, British Airways, NY Times and the US Government are now available. A quick visit to the very popular ProgrammableWeb indicates the rapidly growing numbers of APIs that connect to very useful services.

However, many of these APIs primarily use a heavyweight, non-reactive communication model called “request-response” initiated by the client application using the traditional, legacy network plumbing of the web, HTTP.
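To see why request-response is heavyweight for streaming, consider the raw bytes a polling client re-sends on every single request. A rough Python sketch (the host, path, token and header set are all made up for illustration):

```python
def poll_request_bytes(host, path, token):
    """Raw bytes a client re-sends on *every* poll, even when nothing changed."""
    return (
        "GET {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Authorization: Bearer {}\r\n"
        "Accept: application/json\r\n"
        "\r\n"
    ).format(path, host, token).encode("ascii")

req = poll_request_bytes("api.example.com", "/v1/quotes", "abc123")

# Polling once a second re-ships these headers every second, whether or not
# there is new data; a streaming connection pays this cost once, at setup.
```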


Full duplex it’s not. Thankfully.


Alternatively, some of these companies and others have recently been offering modern, streaming APIs for browsers, mobile devices and embedded “Internet of Things” applications. We are seeing applications in logistics, health-monitoring, smart government, risk management, surveillance, management dashboards and other areas offering real-time distributed business logic that provides a significantly higher level of customer or partner engagement.


Cry Me a River
However, there are huge security and privacy issues to consider when deploying streaming interfaces.


They hacked the doughnuts server!


Offering these real-time and non-real-time APIs seems risky despite their business potential. Not only do they have to be reliable, robust and efficient in a mobile environment, they have to be encrypted, fully authenticated, compliant with corporate privacy entitlements and deployable in a multi-DMZ environment where no business logic should reside. And certainly no enterprise wants any open ports on the internal, private network to be compromised by the “black hats”. If we could just solve this last one, we could avoid creating replicant and expensive systems in the DMZ for the purposes of security and privacy.


Here’s just a short list of deployment concerns to be aware of for offering streaming (and even conventional synchronous) APIs.


Device Agnostic
Streaming data must be able to be sent to (or received from) all types of mobile devices, desktops and browsers.   It may be an iOS, Android, Linux, Windows or Web-endowed device. Your target device may even be a television, car or perhaps some type of wearable. Services and data collection/analytics must use consistent APIs to provide coherent and pervasive functionality for all endpoints. Using different APIs for different devices is inelegant, which we computer geeks know means more complexity and more glitch potential. And it means more wasted weekends lost to debugging on your Linux box instead of having a few of those great margaritas and tequila shots at that hip TexMex bar downtown.


Go Jira!


HTTP was not really designed for the persistent connections that streaming data needs. Yes, you can twist and fake out HTTP for long-lived connections, use Comet-style pushes from the server and get something to work. But let’s face it, after you’re done hacking, you feel good as an engineer but you feel really lousy as an architect… and really nervous if you’re the CTO.

The typical networking solution for streaming and persistent connections in general is either to create and manage a legacy-style VPN or to open non-standard ports over the Internet. Since most operations people enjoy the comfort of employment, asking them to open a non-standard port will either have them laughing hysterically or pretending you don’t exist. Installing yet another old-fashioned low-level VPN doesn’t seem fun either. You have to get many more management signoffs than you originally thought, and deal with mind-numbing political and administrative constraints. Soon you start to question your own sanity.

“And what about our IoT requirements?” bellows your CIO during your weekly status meeting (and it’s a lovely deep bellow too). Remember, streaming needs to be bidirectional. While enterprise-streaming connectivity is primarily sending to endpoints, IoT connectivity is sending from the endpoints. A unified architecture needs to handle both enterprise and IoT use cases in a high-performance manner.


DMZ Deployment
As with most business-critical networking topologies, any core streaming services deployment must be devoid of business logic and capable of installation into a DMZ or series of DMZ protection layers. You need to assume the black hats will break into your outer-most DMZ, so there shouldn’t be any valuable business or security intelligence resident in your DMZ layers. At the least, you should avoid read-only replicated copies of back-end services in the DMZ as much as possible… because it’s yet another management time and money sink.


As your ecosystem grows (and shrinks), connectivity must adapt on-demand and take place over a very reliable connection.


Crossing the Chasm


Leveraging the economies and agility of the web and WebSocket is phenomenally useful, but automatic reconnection between critical partners over the Web is even more so.


Just for the record, production conversations that traverse open networks must be secured via TLS/SSL encryption. So always use secure WebSocket (wss://) and secure HTTP (https://) for business purposes. Nuff said.
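That rule is easy to enforce mechanically. A minimal Python sketch (the URLs are placeholders) that refuses anything but wss:// or https://:

```python
from urllib.parse import urlparse

SECURE_SCHEMES = {"wss", "https"}

def require_secure(url):
    """Refuse to dial any endpoint that is not TLS-protected."""
    scheme = urlparse(url).scheme
    if scheme not in SECURE_SCHEMES:
        raise ValueError("insecure scheme %r; use wss:// or https://" % scheme)
    return url

require_secure("wss://example.com/feed")     # fine
# require_secure("ws://example.com/feed")    # raises ValueError
```

Wiring a check like this into your connection factory means a plaintext endpoint never slips into production by accident.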


Of course, users must be checked to confirm they are allowed to connect. Instead of dealing with low-level access via legacy VPNs that potentially grant open access at the network layer, it is significantly more secure to allow only application-service access. This Application-to-Application (A2A) services connectivity (using standard URIs) presents a tiny surface area for the black hats, which, btw, becomes microscopic with Kaazing’s Enterprise Shield feature. This feature shuts down 100% of all incoming ports and further masks internal service and topology data. Yes, I did say 100%.


Once a user is fully authenticated and connected to your streaming service, what operations are they entitled to perform? In other words, their access-control rights need to be confirmed. Again ideally this type of control should not be in the DMZ. Telling the operations team to incur several weeks of headaches getting corporate signoffs because you need a replicant Identity subsystem in the DMZ will not be easy. Don’t expect an invitation to their holiday party after that request.


Stream Protocol Validation
Real-time data needs to be inspected for conformance to A2A protocol specifications to avoid injection of insecure code. Any streaming infrastructure needs to guarantee that an application protocol follows the rules of the conversation. Any data in an unexpected format or in violation of a framing specification must be immediately discarded and the connection terminated.
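A protocol validator can be sketched in a few lines of Python. The message shape and allowed types below are a made-up A2A protocol, just to show the discard-and-terminate policy:

```python
import json

ALLOWED_TYPES = {"quote", "trade", "heartbeat"}  # hypothetical A2A protocol

def validate_frame(raw):
    """Return the parsed message, or None to signal 'discard and close'.

    Anything that is not well-formed JSON carrying a known message type
    violates the (made-up) protocol spec and terminates the session.
    """
    try:
        msg = json.loads(raw)
    except ValueError:
        return None
    if not isinstance(msg, dict) or msg.get("type") not in ALLOWED_TYPES:
        return None
    return msg

assert validate_frame('{"type": "trade", "px": 101.5}') is not None
assert validate_frame('DROP TABLE frames;') is None  # discarded, connection closed
```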


There are certainly additional issues to consider for streaming data for your B2B ecosystem. Performance, scalability, monitoring, logging, et al, are equally important. We’ll cover those in a future KWICie soon.


Watching the tide roll away indeed!


Sittin’ on the Dock of the Bay
If you’re attending AWS re:Invent 2015, please stop by the Kaazing booth (K24) to say hello. I’m always interested to chat with customers and colleagues about the future of cloud computing, containers, autonomous computing, microservices, IoT and the unfortunate state of real music.


Frank Greco

Posted in cloud, IoT, Kaazing, Security, WebSocket

AWS Re:Invent 2015 – Peter’s Cloud Security Talk Picks

With AWS Re:Invent approaching fast, I started reviewing the talks I absolutely wanted to see this year. Given our recent work with KWIC (Kaazing WebSocket Intercloud Connect), my focus this year is geared towards security and connectivity related topics. Here they are:

ARC344 – How Intuit Improves Security and Productivity with AWS Virtual Networking, Identity, and Account Services
Brett Weaver – Software Architect, Intuit Inc
Don Southard – AWS Senior Solutions Architect Manager, Amazon Web Services
Abstract: Intuit has an “all in” strategy in adopting the AWS cloud. We have already moved some large workloads supporting some of our flagship products (TurboTax, Mint) and are expecting to launch hundreds of services in AWS over the coming years. To provide maximum flexibility for product teams to iterate on their services, as well as provide isolation of individual accounts from logical errors or malicious actions, Intuit is deploying every application into its own account and virtual private cloud (VPC). This talk discusses both the benefits and challenges of designing to run across hundreds or thousands of VPCs within an enterprise. We discuss the limitations of connectivity, sharing data, strategies for IAM access across accounts, and other nuances to keep in mind as you design your organization’s migration strategy. We share our design patterns that can help guide your team in developing a plan for your AWS migration. This talk is helpful for anyone who is planning or in the process of moving a large enterprise to AWS with the difficult decisions and tradeoffs in structuring your deployment.

DVO206 – Lessons from a CISO: How to Securely Scale Teams, Workloads, and Budgets
James Hoover – VP, Chief Information Security Officer, Infor
Adam Boyle – Director of Product Management, Cloud Workload Security, Trend Micro
Abstract: Are you a CISO in cloud or security operations and architecture? The decisions you make when migrating and securing workloads at scale in the AWS cloud have a large impact on your business. This session will help you jump-start your migration to AWS or, if you’re already running workloads in AWS, teach you how your organization can secure and improve the efficiency of those deployments.
Infor’s Chief Information Security Officer will share what the organization learned tackling these issues at scale. You’ll hear how managing a traditional large-scale infrastructure can be simplified in AWS. You’ll understand why designing around the workload can simplify the structure of your teams and help them focus. Finally, you’ll see what these changes mean to your CxOs and how better visibility and understanding of your workloads will drive business success. Session sponsored by Trend Micro.

DVO312 – Sony: Building At-Scale Services with AWS Elastic Beanstalk
Sumio Okada – Cloud Engineer, Sony Corporation
Shinya Kawaguchi – Software Engineer, Sony Corporation
Abstract: Learn about Sony’s efforts to build a cloud-native authentication and profile management platform on AWS. Sony engineers demonstrate how they used AWS Elastic Beanstalk (Elastic Beanstalk) to deploy, manage, and scale their applications. They also describe how they use AWS CloudFormation for resource provisioning, Amazon DynamoDB for the main database, and AWS Lambda and Amazon Redshift for log handling and analysis. This discussion focuses on best practices, security considerations, tradeoffs, and final architecture and implementation. By the end of the session, you will clearly understand how to use Elastic Beanstalk as a platform to quickly and easily build at-scale web applications on AWS, and how to use Elastic Beanstalk with other AWS services to build cloud-native applications.

If you’re in Vegas for re:Invent, be sure to stop by at the Kaazing booth (K24) to have a chat! See you there…

Posted in cloud, Events, Kaazing, Security, Uncategorized, WebSocket

KWICies #003: WebSocket is Hot and Cool – You Dig?

How Fat-Pipe Connectivity over the Web Changes Everything

“We are our choices.”  ― Jean-Paul Sartre

“As a child my family’s menu consisted of two choices: take it or leave it.”  ― Buddy Hackett

Giant Steps
We all know about the hip HTML5 apps and ritzy sites out there. There are a lot of swell and keen web applications that leverage the mind-blowing array of new functionality being added to the web tool chest. I’ll level with you, it’s amazing for me as a long time tech cat to watch the back seat bingo, handcuff and tying of the knot of Hypertext to the Internet. They just got goofy and now the combination is just nifty. Back in the 80’s, Hypertext needed the Internet and the Internet wanted Hypertext.  The two just jammed, natch.  And after 25 plus years, the Web is hip to the jive and is just smokin.

During the courtship, Sir Tim (“TBL” to those who know him well) decided on using a simplified document-sharing distributed computing protocol that is known and loved/liked/sorta-liked as HTTP.   For the kiddies out there, HTTP is HyperText Transfer Protocol, btw, note the historic “hypertext” connection (for extra credit Google “Vannevar Bush”, “Ted Nelson” and “Douglas Engelbart” and take notice of their visionary contributions to our everyday lives).  For the past two-plus decades, the World Wide Web has been primarily based on a synchronous (make a request and wait for a response regardless of how long it takes), document-centric sharing application protocol designed in 1989. Yep, the same year as Lotus Notes was launched, the 25MHz Intel 80486 with 1.2 million transistors came out and when Baywatch was on TV (sigh… they don’t make quality programs like that anymore).


Technology state of the art when HTTP was invented.


All Blues
As more and more users of the web demanded more interactivity, some de facto and rather loose standards started to emerge. DHTML or Dynamic HTML (hey… don’t blame me for the name) was an inconsistent collection of technologies introduced by Microsoft in the late 90’s in an attempt to create customized or interactive web sites. And in an attempt to add more functionality between Internet Explorer and Outlook, Microsoft added XMLHttpRequest (XHR for the hipsters) to improve the communication between client and server. As the memetic evolution to increase the bandwidth for more user interactivity continued, other technologies such as AJAX, i.e., Asynchronous JavaScript And XML (which doesn’t have to be asynchronous, nor JavaScript, nor XML, btw), and Comet or “reverse-AJAX” (server push to the browser) appeared, moving all of us towards a more dynamic world at the expense of more complexity. After all, the very essence of the web (as with most hypertext systems) was focused not on interactivity, just document sharing.

As is the case with increasing complexity in any dynamic system, there comes a point where chaos gets out of control and the need for stability becomes critical for evolution and continued existence. With its lack of standards… heck, lack of a real definition, DHTML and related technologies created a morass for all the web developers on the planet.


You guessed it. A morass.


In addition, simulating real-time interactivity using HTTP-centric tools such as AJAX and Comet is clumsy and heavyweight. These workarounds used HTTP in a way that it was never intended. Our hats are off to the developers who originated these hacks since there was no good alternative, but they are indeed hacks for missing functionality.

In other words, when the Hack Quotient (or chaos index) is high, the time is ripe for innovation. Now that I think about it, this is a corollary of [enable canyon echo] Greco’s Law of Scalability (hey if Gordon Moore can call his thing a law and not an observation, why not me?).




I apply this heuristic (ok, at least I’m humble) to complex distributed systems, but it seems to apply to innovation as well. When growing complexity, perhaps from hacks and workarounds, overwhelms a system’s ability to grow, it’s time for a major overhaul, i.e., what’s known in the industry as a “do-over”.


For the web, this do-over was the HTML5 initiative. HTML5 gives us a renewed focus on standardized APIs, enhanced graphics, offline capabilities, and enhanced browser security. However, the early HTML5 specification continued to use only HTTP as the communications protocol. Yes, the same synchronous, request-response distributed computing model used in the late 70’s and 80’s, which proved to be brittle, difficult to manage and clumsy to scale for many use cases. I can’t even mutter the word “CORBA” or else I’ll get nauseous.

About 10 years ago, a spirited discussion started on a new lower-latency communication model for the web within the HTML5, IETF and WHATWG standards communities. Google’s Ian Hickson, the HTML5 specification lead, had a personal interest in such a model. Ian is a model-train aficionado and was quite interested in controlling his model trains through a browser. Simulating real-time control by tricking out HTTP was just not cutting the mustard for Ian. He thought saving milliseconds of latency when two locomotives were speeding at each other could be considered quite valuable.

Several proposals for a more modern communications substrate were submitted. One proposal was to have a real TCP connection that used ports other than 80 or 443 and was known blandly as “TCPConnection”. Another proposal came from Kaazing’s co-founder (and current CTO) John Fallows along with another Kaazing colleague. They submitted a proposal for a modern communications technology called “WebSocket” that recommended the initial connection be made via HTTP on a standard port, with the client then requesting the server upgrade the protocol to WebSocket on the same port. This clever scheme guaranteed every web server on the planet would be capable of speaking WebSocket in the future. The majority of the Fallows proposal is now part of the official WebSocket specification.
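The upgrade handshake is simple enough to sketch. Per RFC 6455, the server proves it understood the client’s Sec-WebSocket-Key by hashing it with a fixed GUID and echoing the digest back in Sec-WebSocket-Accept; in Python:

```python
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed by RFC 6455

def websocket_accept(sec_websocket_key):
    """Compute the Sec-WebSocket-Accept value for an Upgrade response.

    The client sends an HTTP GET with 'Upgrade: websocket' and a random
    Sec-WebSocket-Key; the server replies with this digest, and from then
    on both sides speak WebSocket over the same port.
    """
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The worked example from RFC 6455 itself:
assert websocket_accept("dGhlIHNhbXBsZSBub25jZQ==") == "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
```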


So What
So we now have an official, standards-blessed, browser-enabled, web-friendly mechanism for a “TCP for the Web” (yes, we all know physically HTTP runs over TCP as does WebSocket). WebSocket is something we’ve wanted ever since the dawn of the web. We can now perform enterprise-style distributed computing in a web-safe manner. We don’t need the browser to poll for data, use hanging-GETs, forever frames, HTTP streaming or even need to have the web server to poll for data from another service to eventually relay to the browser (using another poll).

Most of these techniques have high overhead due to the HTTP metadata that is sent with every HTTP call. For document-centric applications, where the data-to-metadata ratio is relatively large, this is usually acceptable. For event-based messaging applications such as in finance, enterprise infrastructure and now IoT, this ratio is very small; additionally, much of the time the metadata is as redundant as hitting the elevator button multiple times.

For these use cases, there’s clearly a high price to pay for using HTTP as the underlying protocol. To make it worse, you’re polling, which in and of itself is resource-intensive. WebSocket does not have the same HTTP metadata overhead, so it’s very lightweight. And since it’s lightweight and simple, it can be quite fast and scalable. For critical event-based applications, WebSocket is the obvious technology of choice. But note it’s not a replacement for HTTP, which is great network plumbing for synchronous communications, especially those that serve lots of static elements and have caching requirements.
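The lightweight claim is easy to quantify. For payloads under 126 bytes, an unmasked server-to-client WebSocket text frame carries just two bytes of framing overhead per RFC 6455; a Python sketch of that short-payload case:

```python
def server_text_frame(message):
    """Encode a small server-to-client WebSocket text frame (RFC 6455).

    For payloads under 126 bytes the entire frame overhead is 2 bytes:
    one byte for FIN+opcode, one for the length. Compare that with the
    hundreds of bytes of headers re-sent on every polled HTTP request.
    """
    payload = message.encode("utf-8")
    assert len(payload) < 126  # this sketch covers the short-payload case only
    return bytes([0x81, len(payload)]) + payload  # 0x81 = FIN bit + text opcode

frame = server_text_frame('{"px": 101.5}')
assert len(frame) - len('{"px": 101.5}') == 2  # two bytes of overhead, total
```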

The W3C has standardized the WebSocket JavaScript API, so there is an official JavaScript API. This API has been mimicked in other languages such as Java, Ruby and C#, but only the JavaScript API is formally standardized. We’ll review the API in a future blog and how to handle legacy or minimal browsers that do not have WebSocket capability.


How High the Moon
As we’ve discussed, HTTP has been used by a browser to have a conversation with a webserver to get and put full documents or parts of documents. It’s a useful and network-friendly protocol that allows people to extend the reach of their documents beyond the corporate firewall. This type of access is exceptionally valuable for offering request/response services. The vast majority of all such (non-video) web usage is hidden behind these synchronous (request and wait for a response) APIs.


The programmatic web is hidden underneath the viewable web


There’s no surprise the popular REST API architecture for accessing these services uses HTTP.  Because of the ubiquity and language neutrality of HTTP, these services can be reachable by any client application regardless of programming language or operating system.

But think about WebSocket. The IETF standardized the WebSocket protocol as an independent application protocol based on TCP. As an efficient network substrate, it is designed to support any other application protocol that typically would use TCP. There is even a mechanism in the specification (“sub-protocols”) that allows the client to identify the higher-order protocol to the server, so the server can manage that protocol conversation effectively and efficiently. For many use cases, WebSocket is more appropriate than REST, particularly for streaming, persistent connections or event-based applications.
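The spec leaves the server’s choice of sub-protocol open; one common policy is to take the first client-offered protocol the server also speaks. A tiny Python sketch with made-up protocol names:

```python
def choose_subprotocol(client_offers, server_supported):
    """Pick the first client-offered sub-protocol the server also speaks.

    The client lists its offers in the Sec-WebSocket-Protocol header;
    the server echoes back the one it selected, so both ends agree on
    the application protocol riding on top of WebSocket.
    """
    for proto in client_offers:
        if proto in server_supported:
            return proto
    return None  # no common protocol; server omits the header

assert choose_subprotocol(["amqp", "mqtt"], {"mqtt"}) == "mqtt"
```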


What a Wonderful World
By implementing well-known TCP-based application protocols over WebSocket, many applications can subsequently traverse the web as easily as HTTP. Most importantly, the protocols remain unchanged, avoiding any semantic loss in translating to extremely coarse-grained HTTP semantics. This implies messaging and datastore systems, notification mechanisms, monitoring frameworks, hybrid cloud dynamic connectivity, IoT aggregation, risk management, trading systems, logistics architectures, etc., can now all traverse the web, on-demand and in real-time with no polling.

Pretty cool Daddy-o.  Now fix me a drink.

Btw, if anyone is attending AWS re:Invent, please stop by the Kaazing booth (#K24) to say hello, ask some WebSocket questions, talk about the future of hybrid clouds and containers, see some cool KWIC demos and all that jazz.

Frank Greco

Posted in Uncategorized

KWICies #002 – To B2B, or Not B2B, That’s a Question?

 On-demand App-to-App Cloud Connectivity

“Innovation distinguishes between a leader and a follower.” ― Steve Jobs

“Life doesn’t imitate art, it imitates bad television.”― Woody Allen

Free “Enterprise”
In the Information Technology (IT) world, the word “enterprise” is bandied about quite often. First, I really have no idea how to bandy about. If it involves incense and oils, I may have an idea, but let’s talk about enterprises instead.

If you do some serious digging, you’ll find out an enterprise is a federation, in other words, a collection of related business units with a common goal of profitability. It is an aggregate, dynamic yet unified entity that provides a product or service to benefit customers in return for revenue and profit. You’ll probably hear “enterprise” loosely (and actually incorrectly) used interchangeably with “company” or “business”.

Federation of Plan Its
As most of us are well aware, the success of this type of federation is very dependent on its network of vendors, service providers, partners and even the IT systems of its customers. In other words, most enterprises rely on their supply chain network (tip: instead use “cooperative cloud ecosystem” at your next evening event with tragically hip baristas to get smiling nods of approval). This cooperative ensemble usually includes information management, purchasing, inventory, manufacturing, process-flow, logistics, research/development, distribution and customer service. This is true regardless of whether you are a large retailer, a telecom provider, an investment bank or a television network.

This means a spectrum of enterprise applications needs connectivity with other applications and services. Trading systems, real-time inventory, big data analytics, complex event processing, systems monitoring and management, mobile notifications, social media sentiment analysis, et al, increasingly require traversal across multiple organizational boundaries. And today, many of these applications reside in IT systems off-premises within a cloud provider outside of the traditional firewall.

A Storm in Any Port
So the success of the “enterprise” now depends on a federation of organizations, integrating multiple external applications over multiple firewalls and opening multiple ports (and maintaining friendships with your poker-playing buds in the InfoSec group). It is an atmosphere where the art of negotiation becomes more critical than mandated localized governance. And it’s an environment that clearly demonstrates and reinforces why agility and technology standards are truly useful.

Summarized, an enterprise is in the B2B2B2B2B2B [take a breath] B2B2B2B2B2B business with A2A2A2A2A2A connectivity needs.

Make it Sew
The usual answer for application-to-application (A2A) connectivity is a traditional Virtual Private Network (VPN), which has been around since the mid-90’s, i.e., before Clinton/Lewinsky and stained blue dresses. Heck, VPNs were invented in a time when Google didn’t even exist, Amazon was called Cadabra, and Altavista was your Google.

Over the past decade, VPNs have done an excellent job of connecting data centers, cloud infrastructures and other large networks. Large cloud vendors such as Amazon even offer virtual private clouds (VPC) along with hardware Gateways to create a VPN. There are clear use cases for traditional VPNs.

But there are some significant downsides to traditional and cloud-based VPNs for modern, on-demand A2A communication.

  • The on-boarding process can be onerous, especially between external organizations, even though the technical setup is straightforward.
  • They typically allow low-level, potentially dangerous access, especially if home computers are used to reach corporate assets.
  • VPN access control usually uses the hard-to-manage blacklist model.
  • They present huge surface areas with many attack vectors for hackers to exploit.
  • VPN vendor hardware and software are not always interoperable or compatible; a particular VPN architecture may not work across multiple VPN vendors.
  • They are not easy to manage in an agile, constantly changing federated environment.
  • VPNs may require additional infrastructure for mobile devices that experience disconnects, cross-application network connection retries, additional security, etc.
  • Even a single VPN can be quite difficult for a business unit to deploy, maintain and secure. In a business-driven cloud services world, this reduces agility for the revenue generators in an enterprise.
  • VPN products typically offer poor user experiences.
  • TCP and Web VPN requirements are not necessarily the same, which drives up costs.
  • And the big question: do legacy VPNs even fit in a multi-cloud, on-demand, microservices world?

Certainly feels like time for a makeover, doesn’t it?

Standard Orbit with KWIC
As I mentioned in the last KWICies, the web standards bodies (IETF and W3C) blessed the WebSocket standard back in 2011. And right after those standards came out, we saw simple web push applications where WebSocket replaced Comet/Reverse-AJAX on some websites. But recall that WebSocket is not just a formally standardized API; it is also an application protocol, a sibling of HTTP. It provides on-demand, fat-pipe connectivity that’s web-friendly. Think about that for a few milliseconds (btw, about the same time it takes a message to flow over a WebSocket across the web). It’s a full-throttle, TCP-like connection for the web. And it’s an excellent foundational substrate for agile A2A in the modern enterprise. This is the basis of KWIC and why it’s perfectly suited for today’s A2A connectivity.
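Since WebSocket is a protocol and not just an API, you can see its web-friendliness right in the opening handshake: it begins life as a plain HTTP request on port 80 or 443. As a minimal sketch (Python standard library only), here is the Sec-WebSocket-Accept computation RFC 6455 requires a server to perform, using the example key from the RFC itself:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_MAGIC_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value a server must return
    for a client's Sec-WebSocket-Key, per RFC 6455 section 1.3."""
    digest = hashlib.sha1(
        (sec_websocket_key + WS_MAGIC_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The sample key/accept pair straight out of RFC 6455:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this one HTTP exchange, the connection simply stays open, which is exactly what makes the protocol firewall-friendly.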

“I have spent my whole life trying to figure out crazy ways of doing things. I’m telling ya, as one engineer to another – I can do this.”  -[any non-Googlian guesses?]

Frank Greco

Posted in cloud

KWICies #001 – Life in the Fast Lane

The Evolution of Cloud Connectivity

“Intelligence is based on how efficient a species became at doing the things they need to survive.” ― Charles Darwin

“My theory of evolution is that Darwin was adopted.” ― Steven Wright

In case you missed it, the first phase of cloud computing has left the building. Thousands of companies are in the cloud. Practically all organizations regardless of size already have production applications in a public, off-premises cloud or a private cloud. Yep. Been there, done that.

And the vast majority of these applications use the classic “SaaS-style” public cloud model. Someone develops a useful service and hosts it on Amazon Web Services (AWS), Microsoft Azure, IBM Cloud Marketplace, Google Cloud Platform (GCP) or one of several other cloud vendors. Accessing this external service is typically performed via a well-defined API, usually a simple REST call (or a convenient library wrapper around one). The request originates from a web browser, a native app on a mobile device or some server-side application and traverses the web. Using only port 443 or 80, it connects through a series of firewalls to the actual service running in the external cloud environment. The request is serviced by a process running in the service provider’s computing environment, which returns a result to the client application.
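As a sketch of just how simple that invocation is, here is what a typical REST call looks like before it hits the wire. The endpoint and API key below are hypothetical placeholders (not a real service), and the request is built with Python’s standard library but never actually sent:

```python
import urllib.request

# Hypothetical SaaS endpoint -- a placeholder, not a real service.
API_BASE = "https://api.example-saas.com/v1"

def build_request(resource: str, api_key: str) -> urllib.request.Request:
    """Build (but don't send) a typical SaaS REST invocation: plain
    HTTPS on port 443, a bearer token and JSON -- nothing a corporate
    firewall hasn't seen a million times before."""
    return urllib.request.Request(
        url=f"{API_BASE}/{resource}",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Accept": "application/json",
        },
        method="GET",
    )

req = build_request("inventory/items", api_key="not-a-real-key")
print(req.full_url)      # https://api.example-saas.com/v1/inventory/items
print(req.get_method())  # GET
```

Everything rides on ordinary HTTP semantics, which is precisely why this hello-world model works so smoothly — and why richer connectivity patterns need something more.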


Conventional SaaS-style Access


Only the Beginning
However, this scenario greatly oversimplifies real-world service access. Quite honestly, this is a very basic, hello-world cloud connectivity model.

Today’s enterprise is a federation of companies with vast collections of dynamic services that are enabled/disabled frequently with ever-changing sets of authentication and access control. To survive in this environment, a modern enterprise needs to develop an intimate yet secure ecosystem of partners, suppliers and customers. So unlike the rudimentary connectivity case, the typical production application is composed of many dozens and perhaps hundreds of services, some internal to an enterprise and some residing in a collection of external cloud infrastructures or data centers. For example, the incredibly successful Amazon ecommerce website performs 100-150 internal service calls just to get data to build a personalized web experience.

Many of these external services, whether in an external cloud vendor or another company’s data center, often need to reach back to the originating infrastructure to access internal services and data to complete their tasks. Some services may go further still, needing access to information across cloud, network and company boundaries.

This ain’t your father’s cloud infrastructure.


Get off My Cloud
A particular use case is a service running in a cloud environment, e.g., AWS, that needs to authenticate its users. One solution is to place a duplicate or subset of the internal authentication credentials (usually housed in some LDAP repository, e.g., Active Directory) directly in the public cloud. However this is redundant and introduces potentially dangerous authentication-synchronization and general data management issues. Unsurprisingly, this scenario of accessing authentication or entitlements information residing in an internal directory turns out to be quite common for practically all service access.
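The alternative is to leave the credentials where they are and tunnel the authentication traffic back on demand. Stripped of all the security and framing a real product provides, the core idea is just byte shuttling between the cloud-side application and the internal service. Here is a toy, loopback-only sketch (the ports, the echo “service” standing in for LDAP, and the one-shot relay are all illustrative stand-ins, not how KWIC itself is implemented):

```python
import socket
import threading
import time

# Stand-in for an internal, on-premises service (think LDAP/AD):
# a one-shot localhost echo server.
def internal_service(port: int) -> None:
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(b"internal says: " + conn.recv(1024))
    conn.close()
    srv.close()

# A one-shot relay: the tunnel endpoint the cloud-side application
# talks to instead of VPN-ing into the whole LAN.
def relay(listen_port: int, target_port: int) -> None:
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection(("127.0.0.1", target_port))
    upstream.sendall(client.recv(1024))  # forward the request inward
    client.sendall(upstream.recv(1024))  # forward the reply outward
    for s in (client, upstream, srv):
        s.close()

def connect_retry(addr, attempts=100):
    """Retry until the background listeners are up."""
    for _ in range(attempts):
        try:
            return socket.create_connection(addr)
        except ConnectionRefusedError:
            time.sleep(0.05)
    raise ConnectionRefusedError(f"could not reach {addr}")

threading.Thread(target=internal_service, args=(9390,), daemon=True).start()
threading.Thread(target=relay, args=(8390, 9390), daemon=True).start()

# The "cloud application" only ever sees the relay's single port.
with connect_retry(("127.0.0.1", 8390)) as cloud_app:
    cloud_app.sendall(b"bind request")
    reply = cloud_app.recv(1024).decode()

print(reply)  # internal says: bind request
```

The point of the sketch: the cloud side gets exactly one narrow, application-level path, not low-level access to the whole network.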

Another example involves powerful cloud-based analytics or business intelligence services. In many cases such off-premises analytics-as-a-service providers need access to internal real-time data feeds that reside on the premises of a customer. That customer may not want to put that private real-time stream into the cloud environment for a variety of reasons, e.g., security, unnecessary data synchronization, additional management, etc.

The architectural solutions for both of these use cases involve either negotiating with the enterprise customer to create a REST API and deploy a family of application servers (extremely complex and highly improbable), or more typically, setting up a virtual private network (VPN) to achieve a real-time, “fat-pipe” connection.


Old-School Approach to Application Connectivity


Nothing Else Matters
While the technical aspects of setting up a legacy-style VPN are relatively straightforward, there is often a lengthy period of corporate signoffs and inter-company negotiations that precedes the technical work. For some companies this period can be many weeks. For some large corporations, getting approvals for yet another VPN can take several months. This painfully long lead time negatively impacts business agility and the all-important time-to-revenue.

In addition, VPN access operates at the low-level TCP layer of the network stack. Despite various access control systems, the open nature of a VPN represents a security risk, potentially giving unauthorized (and authorized) users free rein over many internal enterprise services. Also, VPN implementations vary; some are proprietary and may cause interoperability issues among VPN vendors, especially for VPNs that extend access to mobile devices.


What a Wonderful World
Ideally you would want to completely eliminate any legacy VPN requirement to remove unnecessary friction from the sales and deployment process. And you’d want an agile, on-demand connection that links Application-to-Application (A2A) via a “white list” approach. To help future-proof your infrastructure and accelerate operations, a container deployment approach based on the popular Docker platform would be more than useful, and attractive to your developers.


Do You Believe in Magic
As of December 2011, the Internet standards bodies (IETF and W3C) formally approved a mechanism for a persistent connection over the web without using any additional ports and consequently maintaining your friendships in the InfoSec group. This standard is called “WebSocket” and effectively is a “TCP for the Web”.

Like most innovations being used for the first time, WebSocket was initially used as a mere replacement for inelegant browser push (AJAX) mechanisms to send data from a server to a user.

But by using the WebSocket protocol and its standardized API as a foundation for wide-area, TCP-style distributed computing, we get a phenomenally powerful innovation. By enhancing basic WebSocket functionality with the necessary enterprise-grade security and reliability envelope, applications can now easily and, most importantly, securely access services on-demand through the firewall. This enhanced approach to WebSocket avoids the awkward conversion of an enterprise application protocol to coarse-grained HTTP semantics. And performance is rarely an issue with WebSocket.
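To make “TCP for the Web” concrete, consider what a small application message actually costs on the wire once the connection is up. Here is a sketch (Python standard library only) of encoding a client-to-server text frame per RFC 6455: six bytes of framing overhead for a short payload, and no per-message HTTP request/response ceremony:

```python
import os
import struct

def client_text_frame(message: str, mask_key: bytes = None) -> bytes:
    """Encode a client-to-server WebSocket text frame (RFC 6455).
    Client frames must be masked with a 4-byte key; payloads under 126
    bytes need only a 2-byte header, so framing costs just 6 bytes."""
    payload = message.encode("utf-8")
    assert len(payload) < 126  # keep the short-header case for clarity
    key = mask_key or os.urandom(4)
    # 0x81 = FIN + text opcode; 0x80 sets the MASK bit on the length byte.
    header = struct.pack("!BB", 0x81, 0x80 | len(payload))
    masked = bytes(b ^ key[i % 4] for i, b in enumerate(payload))
    return header + key + masked

# With an all-zero mask (illustration only!) the payload shows through:
frame = client_text_frame("hello", mask_key=b"\x00\x00\x00\x00")
print(frame.hex())  # 81850000000068656c6c6f
```

Five payload bytes, eleven bytes on the wire — that economy is what makes WebSocket a plausible substrate for tunneling chatty enterprise protocols.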


WebSocket for App-to-App (A2A) Communication


This LAN is Your LAN
If you’re looking for a way for an external cloud application to access an internal, on-premises service in an on-demand Application-to-Application manner, the Kaazing Websocket Intercloud Connect (KWIC… yep, yet another caffeine-induced acronym) provides this functionality. It’s based on the open-source Kaazing Gateway and works with any TCP-based protocol. You can see an example of KWIC used for LDAP access in the AWS Marketplace (if you don’t need support, KWIC is totally free…).

Frank Greco

Posted in cloud

Real-Time Tic Tac Toe Light

I had the great privilege of being a speaker at HTML5 Developer Conference in San Francisco recently.  It was the second HTML5 Dev Conf I have presented at, with the first one being October 2013.  This time, I paired with Frank Greco to present a session entitled “WebSockets: Past, Present and Future”.  Frank took the stage for the first half of the session, and I followed up with some hands-on Internet of Things (IoT) demonstrations that were integrated with Kaazing Gateway.

Introducing the Tic Tac Toe Light

My personal favorite demonstration was a project I called the “Tic Tac Toe Light”.  I called it this because the custom-built enclosure houses nine (9) Adafruit NeoPixels in a three-by-three (3×3) grid.  The enclosure, made using foam core board and a hot knife, also contained an Arduino Yun.  I have grown to be a big fan of the Arduino Yun for real-time IoT/web projects.  The board is the same profile as an Arduino Uno, but includes integrated wireless (802.11 b/g/n), an ATmega32u4 (similar to the Arduino Leonardo), and a Linux system on a chip (SoC).



Using a web-based user interface, attendees of the HTML5 Dev Conf session could use their laptop, tablet or smartphone to control each NeoPixel (RGB LED) in the enclosure.  At the same time, the web user interface kept in sync with all the attendees’ selections – across all screens.  The Arduino Yun was also listening on a real-time connection for color change messages, which is how it knew what lights to change to what colors.

Why Kaazing Gateway

I think the bigger question here is “Why real-time?”  Although I do not know the exact count, I would say that the session had nearly 200 attendees.  The ATmega32u4 runs at 16 MHz with just 2.5 KB of SRAM (and 32 KB of flash).  If all those attendees were selecting light colors at anywhere near the same time using HTTP, the Arduino would be crushed under the load.  In a real-time scenario however, there is but one connection, and about twenty (20) bytes of data for each color change.  The end result was a far more scalable solution.
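The real demo’s messages were around twenty bytes each. Purely as an illustration (this is a hypothetical wire format, not the one the demo actually used), here is how small a color-change message can get when packed as binary: the cell index of the 3×3 grid plus an RGB value in four bytes:

```python
import struct

# Hypothetical wire format for one color change: grid cell (0-8)
# plus red/green/blue, each one unsigned byte.
FMT = "!B3B"

def encode_color_change(cell: int, r: int, g: int, b: int) -> bytes:
    assert 0 <= cell <= 8  # 3x3 grid
    return struct.pack(FMT, cell, r, g, b)

def decode_color_change(msg: bytes):
    return struct.unpack(FMT, msg)

msg = encode_color_change(4, 255, 0, 128)  # center cell, pinkish
print(len(msg))                 # 4
print(decode_color_change(msg)) # (4, 255, 0, 128)
```

When each update is this small and rides a single persistent connection, even a 16 MHz microcontroller can keep up with a room full of attendees.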


And it had to scale too!  The lights on the Tic Tac Toe box were blinking wildly the whole time I had it plugged in (before I had to move on to my next demonstration).

Can you imagine the user experience over HTTP, even if the 16 MHz chip could handle the load?  You would select a color, and at some interval later, the color would be set.  That lag would leave you wondering, “Was that my color selection?”  Compare that to an instant response using Kaazing Gateway, even over conference wireless.  Not to mention keeping all the other connected users in sync; the additional HTTP polling load for that would make the whole project come to a crawl (or just crash).

What Next

The 3×3 grid was actually happenstance – I happened to have ten (10) NeoPixels on hand in my component drawer.  I wanted a square, so 3×3 it was.  This led to the name Tic Tac Toe.  But then I started to wonder.  What if this was the physical manifestation of two players in an actual game of tic-tac-toe?  Or better yet, maybe artificial intelligence (AI) on the server could play the other side in real-time!

This is where I would like to take the project next.  If you want to see the code for the project, you can hop on over to my GitHub account where I have posted more details, as well as the code itself for the Arduino Yun and the web client.  The fabrication plans are also posted there should you want to take on a project like this yourself.  If you have any questions, feel free to hit me up on Twitter, or drop a comment below.


Thanks to Matthias Schroeder for the Vine video of Tic Tac Toe in action during the session.

Posted in Events, html5, IoT