KWICies #003: WebSocket is Hot and Cool – You Dig?

How Fat-Pipe Connectivity over the Web Changes Everything

“We are our choices.”  ― Jean-Paul Sartre

“As a child my family’s menu consisted of two choices: take it or leave it.”  ― Buddy Hackett

Giant Steps
We all know about the hip HTML5 apps and ritzy sites out there. There are a lot of swell and keen web applications that leverage the mind-blowing array of new functionality being added to the web tool chest. I’ll level with you, it’s amazing for me as a long-time tech cat to watch the back-seat bingo, handcuffing and tying of the knot of Hypertext to the Internet. They just got goofy over each other, and now the combination is just nifty. Back in the ’80s, Hypertext needed the Internet and the Internet wanted Hypertext. The two just jammed, natch. And after 25-plus years, the Web is hip to the jive and just smokin’.

During the courtship, Sir Tim (“TBL” to those who know him well) decided on a simplified document-sharing distributed computing protocol that is known and loved/liked/sorta-liked as HTTP. For the kiddies out there, HTTP is HyperText Transfer Protocol, btw; note the historic “hypertext” connection (for extra credit, Google “Vannevar Bush”, “Ted Nelson” and “Douglas Engelbart” and take notice of their visionary contributions to our everyday lives). For the past two-plus decades, the World Wide Web has been primarily based on a synchronous (make a request and wait for a response, regardless of how long it takes), document-centric sharing application protocol designed in 1989. Yep, the same year Lotus Notes launched, the 25MHz Intel 80486 with 1.2 million transistors came out, and Baywatch was on TV (sigh… they don’t make quality programs like that anymore).


Technology state of the art when HTTP was invented.


All Blues
As more and more users of the web demanded more interactivity, some de facto and rather loose standards started to emerge. DHTML or Dynamic HTML (hey… don’t blame me for the name) was an inconsistent collection of technologies introduced by Microsoft in the late 90’s in an attempt to create customized or interactive web sites. And in an attempt to add more functionality between Internet Explorer and Outlook, Microsoft added XMLHttpRequest (XHR for the hipsters) to improve the communication between client and server. As the memetic evolution to increase the bandwidth for more user interactivity continued, other technologies appeared, such as AJAX, Asynchronous JavaScript And XML (which doesn’t have to be asynchronous, nor JavaScript, nor XML, btw) and Comet or “reverse-AJAX” (server push to the browser), moving all of us towards a more dynamic world at the expense of more complexity. After all, the very essence of the web (as with most hypertext systems) was not focused on interactivity, just document sharing.

As is the case with increasing complexity in any dynamic system, there comes a point where chaos gets out of control and the need for stability becomes critical for evolution and continued existence. With its lack of standards… heck, lack of a real definition, DHTML and related technologies created a morass for all the web developers on the planet.


You guessed it. A morass.


In addition, simulating real-time interactivity using HTTP-centric tools such as AJAX and Comet is clumsy and heavyweight. These workarounds used HTTP in a way that it was never intended. Our hats are off to the developers who originated these hacks since there was no good alternative, but they are indeed hacks for missing functionality.

In other words, when the Hack Quotient (or chaos index) is high, the time is ripe for innovation. Now that I think about it, this is a corollary of [enable canyon echo] Greco’s Law of Scalability (hey if Gordon Moore can call his thing a law and not an observation, why not me?).




I apply this heuristic (ok, at least I’m humble) to complex distributed systems, but it seems to apply to innovation as well. When growing complexity, perhaps from hacks and workarounds, overwhelms a system’s ability to evolve, it’s time for a major overhaul, i.e., what’s known in the industry as a “do-over”.


For the web, this do-over was the HTML5 initiative. HTML5 gives us a renewed focus on standardized APIs, enhanced graphics, offline capabilities, and enhanced browser security. However, the early HTML5 specification continued to use only HTTP as the communications protocol. Yes, the same synchronous, request-response distributed computing model used in the late 70’s and 80’s, which proved to be brittle, difficult to manage and clumsy to scale for many use cases. I can’t even mutter the word “CORBA” or else I’ll get nauseous.

About 10 years ago, a spirited discussion started within the HTML5, IETF and WHATWG standards communities on a new lower-latency communication model for the web. Google’s Ian Hickson, the HTML5 specification lead, had a personal interest in such a model. Ian is a model train aficionado and was quite interested in controlling his model trains through a browser. Simulating real-time control by tricking out HTTP was just not cutting the mustard for Ian. He thought saving milliseconds of latency when two locomotives were speeding at each other could be considered quite valuable.

Several proposals for a more modern communications substrate were submitted. One proposal was to have a real TCP connection that used ports other than 80 or 443, known blandly as “TCPConnection”. Another proposal came from Kaazing’s co-founder (and current CTO) John Fallows along with another Kaazing colleague. They submitted a proposal for a modern communications technology called “WebSocket” that recommended the initial connection be made via HTTP on a standard port, with the client then requesting the server to upgrade the protocol to WebSocket on the same port. This clever scheme guaranteed that every web server on the planet could one day be capable of speaking WebSocket. The majority of the Fallows proposal is now part of the official WebSocket specification.


So What
So we now have an official, standards-blessed, browser-enabled, web-friendly mechanism for a “TCP for the Web” (yes, we all know physically HTTP runs over TCP, as does WebSocket). WebSocket is something we’ve wanted ever since the dawn of the web. We can now perform enterprise-style distributed computing in a web-safe manner. We don’t need the browser to poll for data, use hanging-GETs, forever frames or HTTP streaming, or even need the web server to poll for data from another service to eventually relay to the browser (using another poll).

Most of these techniques have high overhead due to the HTTP metadata that is sent with every HTTP call. For document-centric applications, where the data-to-metadata ratio is relatively large, this is usually acceptable. For event-based messaging applications such as in finance, enterprise infrastructure and now IoT, this data-to-metadata ratio is very small; additionally, much of the time the metadata is as redundant as hitting the elevator button multiple times.

For these use cases, there’s clearly a high price to pay for using HTTP as the underlying protocol. To make it worse, you’re polling, which in and of itself is resource-intensive. WebSocket does not have the same HTTP metadata overhead, so it’s very lightweight. And since it’s lightweight and simple, it can be quite fast and scalable. For critical event-based applications, WebSocket is the obvious technology of choice. But note it’s not a replacement for HTTP, which is great network plumbing for synchronous communications, especially those that serve lots of static elements and have caching requirements.
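To put rough numbers on that, here is a back-of-the-envelope sketch (Node.js). The header sizes are illustrative assumptions, not measurements; WebSocket’s 6-byte figure is the 2-byte frame header plus the 4-byte client-side masking key for small payloads.

```javascript
// Back-of-the-envelope per-message overhead comparison (illustrative
// numbers assumed, not measurements). A polling client pays full HTTP
// request + response headers on every poll; a WebSocket peer pays only
// a few bytes of framing per message.
const payload = 100;          // bytes of actual application data
const httpHeaders = 700;      // assumed typical request + response header bytes
const wsFrameOverhead = 6;    // 2-byte header + 4-byte mask (client-to-server,
                              // payloads under 126 bytes)

const httpTotal = payload + httpHeaders;
const wsTotal = payload + wsFrameOverhead;

console.log(`HTTP poll: ${httpTotal} bytes on the wire`);
console.log(`WebSocket: ${wsTotal} bytes on the wire`);
console.log(`overhead ratio: ${(httpHeaders / wsFrameOverhead).toFixed(0)}x`);
```

The smaller the message, the more lopsided the ratio gets, which is exactly the event-based messaging case described above.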

The W3C has standardized the WebSocket JavaScript API, so there is an official JavaScript API. This API has been mimicked in other languages such as Java, Ruby and C#, but only the JavaScript API is formally standardized. We’ll review the API in a future blog and how to handle legacy or minimal browsers that do not have WebSocket capability.


How High the Moon
As we’ve discussed, HTTP has been used by a browser to have a conversation with a web server to get and put full documents or parts of documents. It’s a useful and network-friendly protocol that allows people to extend the reach of their documents beyond the corporate firewall. This type of access is exceptionally valuable for offering request/response services. The vast majority of all such (non-video) web usage happens behind the scenes via these synchronous (request and wait for a response) APIs.


The programmatic web is hidden underneath the viewable web


There’s no surprise the popular REST API architecture for accessing these services uses HTTP. Because of the ubiquity and language neutrality of HTTP, these services can be reached by any client application regardless of programming language or operating system.

But think about WebSocket. The IETF standardized the WebSocket protocol as an independent application protocol based on TCP. As an efficient network substrate, it is designed to support any other application protocol that typically would use TCP. There is even a mechanism in the specification (“sub-protocols”) that allows the client to identify the higher-order protocol to the server, so the server can manage that protocol conversation effectively and efficiently. For many use cases, WebSocket is more appropriate than REST, particularly for streaming, persistent connections or event-based applications.
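Server-side, that sub-protocol negotiation is a one-line decision. A sketch (Node.js; the protocol names here are purely illustrative): the client offers a list in its Sec-WebSocket-Protocol header, and the server echoes back the first offer it actually supports.

```javascript
// Sketch of sub-protocol negotiation: the client offers the higher-order
// protocols it speaks; the server picks the first one it supports (or none).
// Protocol names below are illustrative, not a registry.
function selectSubprotocol(offered, supported) {
  for (const p of offered) {
    if (supported.has(p)) return p;  // echoed back in Sec-WebSocket-Protocol
  }
  return null;                       // no agreement: talk plain WebSocket
}

const offered = ['amqp', 'stomp'];            // client's Sec-WebSocket-Protocol list
const supported = new Set(['stomp', 'wamp']); // what this server implements

console.log(selectSubprotocol(offered, supported)); // -> stomp
```

From the browser, the client half is simply new WebSocket(url, ['amqp', 'stomp']); once the connection opens, the negotiated choice appears in the socket’s protocol attribute.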


What a Wonderful World
By implementing well-known TCP-based application protocols over WebSocket, many applications can subsequently traverse the web as easily as HTTP. Most importantly, the protocols remain unchanged, avoiding any semantic loss in translating to extremely coarse-grained HTTP semantics. This implies messaging and datastore systems, notification mechanisms, monitoring frameworks, hybrid cloud dynamic connectivity, IoT aggregation, risk management, trading systems, logistics architectures, etc., can now all traverse the web, on-demand and in real-time with no polling.

Pretty cool Daddy-o.  Now fix me a drink.

Btw, if anyone is attending AWS re:Invent, please stop by the Kaazing booth (#K24) to say hello, ask some WebSocket questions, talk about the future of hybrid clouds and containers, see some cool KWIC demos and all that jazz.

Frank Greco


KWICies #002 – To B2B, or Not B2B, That’s a Question?

 On-demand App-to-App Cloud Connectivity

“Innovation distinguishes between a leader and a follower.” ― Steve Jobs

“Life doesn’t imitate art, it imitates bad television.”― Woody Allen

Free “Enterprise”
In the Information Technology (IT) world, the word “enterprise” is bandied about quite often. First, I really have no idea how to bandy about. If it involves incense and oils, I may have an idea, but let’s talk about enterprises instead.

If you do some serious digging, you’ll find out an enterprise is a federation, in other words, a collection of related business units with a common goal of profitability. It is an aggregate, dynamic yet unified entity that provides a product or service to benefit customers in return for revenue and profit. You’ll probably hear “enterprise” loosely (and actually incorrectly) used interchangeably with “company” or “business”.

Federation of Plan Its
As most of us are well aware, the success of this type of federation is very dependent on its network of vendors, service providers, partners and even the IT systems of its customers. In other words, most enterprises rely on their supply chain network (tip: instead use “cooperative cloud ecosystem” at your next evening event with tragically hip baristas to get smiling nods of approval). This cooperative ensemble usually includes information management, purchasing, inventory, manufacturing, process-flow, logistics, research/development, distribution and customer service. This is true regardless of whether you are a large retailer, a telecom provider, an investment bank or a television network.

This means a spectrum of enterprise applications needs connectivity with other applications and services. Trading systems, real-time inventory, big data analytics, complex event processing, systems monitoring and management, mobile notifications, social media sentiment analysis, et al, increasingly require traversal across multiple organizational boundaries. And today, many of these applications reside in IT systems off-premises within a cloud provider outside of the traditional firewall.

A Storm in Any Port
So the success of the “enterprise” now depends on a federation of organizations, integrating multiple external applications across multiple firewalls by opening multiple ports (and maintaining friendships with your poker-playing buds in the InfoSec group). It is an atmosphere where the art of negotiation becomes more critical than mandating localized governance. And it’s an environment that clearly demonstrates and reinforces why agility and technology standards are truly useful.

Summarized, an enterprise is in the B2B2B2B2B2B [take a breath] B2B2B2B2B2B business with A2A2A2A2A2A connectivity needs.

Make it Sew
The usual answer for application-to-application (A2A) connectivity is a traditional Virtual Private Network (VPN), which has been around since the mid-90’s, i.e., before Clinton/Lewinsky and stained blue dresses. Heck, VPNs were invented in a time when Google didn’t even exist, Amazon was called Cadabra, and Altavista was your Google.

Over the past decade, VPNs have done an excellent job of connecting data centers, cloud infrastructures and other large networks. Large cloud vendors such as Amazon even offer virtual private clouds (VPC) along with hardware Gateways to create a VPN. There are clear use cases for traditional VPNs.

But there are some significant downsides to traditional and cloud-based VPNs for modern, on-demand A2A communication.

  • The on-boarding process can be onerous especially between external organizations, despite the straightforward technology setup.
  • They typically allow low-level, potentially dangerous access, especially if home computers are used to access corporate assets.
  • VPN access control usually uses the hard-to-manage blacklist model.
  • They present huge surface areas with many attack vectors for hackers to exploit.
  • VPN vendor hardware and software are not always interoperable or compatible. A particular VPN architecture may not be suitable across multiple VPN vendors.
  • They are not easy to manage in an agile, constantly changing federated environment.
  • VPNs may require additional infrastructure for mobile devices that experience disconnects, cross-application network connection retries, additional security, etc.
  • Even a single VPN can be quite difficult for a business unit to deploy, maintain and secure. In a business-driven cloud services world, this reduces agility for the revenue generators in an enterprise.
  • VPN products typically offer poor user experiences.
  • TCP and Web VPN requirements are not necessarily the same, which drives up costs.
  • Do legacy VPNs fit in a multi-cloud, on-demand, microservices world?

Certainly feels time for a makeover, doesn’t it?

Standard Orbit with KWIC
As I mentioned in the last KWICies, the web standards bodies (IETF and W3C) blessed the WebSocket standard back in 2011. And right after those standards came out, we saw simple web push applications with WebSocket replacing Comet/reverse-AJAX on some websites. But recall that WebSocket is not just a formally standardized API; it is also an application protocol, similar to HTTP. It provides on-demand, fat-pipe connectivity that’s web-friendly. Think about that for a few milliseconds (btw, about the same time it takes a message to flow over a WebSocket across the web). It’s a full-throttle, TCP-like connection that is web-friendly. And it’s an excellent foundational substrate for agile A2A in the modern enterprise. This is the basis of KWIC and why it’s perfectly suited for today’s A2A connectivity.

“I have spent my whole life trying to figure out crazy ways of doing things. I’m telling ya, as one engineer to another – I can do this.”  -[any non-Googlian guesses?]

Frank Greco


KWICies #001 – Life in the Fast Lane

The Evolution of Cloud Connectivity

“Intelligence is based on how efficient a species became at doing the things they need to survive.” ― Charles Darwin

“My theory of evolution is that Darwin was adopted.” ― Steven Wright

In case you missed it, the first phase of cloud computing has left the building. Thousands of companies are in the cloud. Practically all organizations regardless of size already have production applications in a public, off-premises cloud or a private cloud. Yep. Been there, done that.

And the vast majority of these applications use the classic “SaaS-style” public cloud model. Someone develops a useful service and hosts it on Amazon Web Services (AWS), Microsoft Azure, IBM Cloud Marketplace, Google Cloud Platform (GCP) or one of several other cloud vendors. Accessing this external service is typically performed via a well-defined API, usually a simple REST call (or a convenient library wrapper around one). This request originates from a web browser, a native app on a mobile device or some server-side application and traverses the web. Using only port 443 or 80, it connects through a series of firewalls to the actual service running in the external cloud environment. The request is serviced by a process running in the service provider’s computing environment, which returns a result to the client application.


Conventional SaaS-style Access


Only the Beginning
However, this scenario is a greatly simplified version of real-world service access. Quite honestly, it is a very basic, hello-world cloud connectivity model.

Today’s enterprise is a federation of companies with vast collections of dynamic services that are enabled/disabled frequently with ever-changing sets of authentication and access control. To survive in this environment, a modern enterprise needs to develop an intimate yet secure ecosystem of partners, suppliers and customers. So unlike the rudimentary connectivity case, the typical production application is composed of many dozens and perhaps hundreds of services, some internal to an enterprise and some residing in a collection of external cloud infrastructures or data centers. For example, the incredibly successful Amazon ecommerce website performs 100-150 internal service calls just to get data to build a personalized web experience.

Many of these external services that exist either in an external cloud vendor or another company’s data center often need to reach back to the originating infrastructure to access internal services and data to complete their tasks. Some services may even go further and also need access to information across cloud, network and company boundaries.

This ain’t your father’s cloud infrastructure.


Get off My Cloud
A particular use case is when a service running in a cloud environment, e.g., AWS, needs to authenticate its users. One solution is to provide a duplicate or subset of the internal authentication credentials (usually housed in some LDAP repository, e.g., Active Directory) directly in the public cloud. However, this is redundant and introduces potentially dangerous authentication-synchronization and general data-management issues. Unsurprisingly, this scenario of accessing authentication or entitlements information residing in an internal directory turns out to be quite common for practically all service access.

Another example involves powerful cloud-based analytics or business intelligence services. In many cases such off-premises analytics-as-a-service providers need access to internal real-time data feeds that reside on the premises of a customer. That customer may not want to put that private real-time stream into the cloud environment for a variety of reasons, e.g., security, unnecessary data synchronization, additional management, etc.

The architectural solutions for both of these use cases involve either negotiating with the enterprise customer to create a REST API and deploy a family of application servers (extremely complex and highly improbable), or more typically, setting up a virtual private network (VPN) to achieve a real-time, “fat-pipe” connection.


Old-School Approach to Application Connectivity


Nothing Else Matters
While the technical aspects of setting up a legacy-style VPN are relatively straightforward, there is often a lengthy period of corporate signoffs and inter-company negotiations that precede the technical work.  For some companies this period of time can be many weeks. For some other large corporations, getting approvals for yet another VPN can take several months. This painfully long lead-time negatively impacts business agility and the all-important time-to-revenue.

In addition, VPN access is at the low-level TCP layer of the network stack. Despite various access control systems, the open nature of a VPN represents a security risk by potentially granting unauthorized (and authorized) users free rein over many internal enterprise services. Also, VPN implementations vary. Some are proprietary and may cause issues when interfacing among various VPN vendors, especially VPNs that extend access to mobile devices.


What a Wonderful World
Ideally you would want to completely eliminate any legacy VPN requirement to significantly reduce unnecessary friction from the sales and deployment process. And you’d want an agile, on-demand connection that connects Application-to-Application (A2A) via a “white list” approach. To help future-proof your infrastructure and accelerate operations, a container deployment approach based on the popular Docker would be more than useful and attractive to your developers.


Do You Believe in Magic
As of December 2011, the Internet standards bodies (IETF and W3C) formally approved a mechanism for a persistent connection over the web without using any additional ports and consequently maintaining your friendships in the InfoSec group. This standard is called “WebSocket” and effectively is a “TCP for the Web”.

Like most innovations being used for the first time, WebSocket was initially used as a mere replacement for inelegant browser push (Comet/reverse-AJAX) mechanisms to send data from a server to a user.

But by using the WebSocket protocol and its standardized API as a powerful foundation for wide-area, TCP-style distributed computing, we get a phenomenally powerful innovation. By enhancing basic WebSocket functionality with the necessary enterprise-grade security and reliability envelope, applications can now easily and, most importantly, securely access services on-demand through the firewall. This enhanced approach to WebSocket avoids the awkward conversion of any enterprise application protocol to coarse-grained HTTP semantics. Performance is rarely an issue with WebSocket.


WebSocket for App-to-App (A2A) Communication


This LAN is Your LAN
If you’re looking for a way for an external cloud application to access an internal, on-premises service in an on-demand, Application-to-Application manner, the Kaazing WebSocket Intercloud Connect (KWIC… yep, yet another caffeine-induced acronym) provides this functionality. It’s based on the open-source Kaazing Gateway and works with any TCP-based protocol. You can see an example of KWIC used for LDAP access in the AWS Marketplace (if you don’t need support, KWIC is totally free…).

Frank Greco


Real-Time Tic Tac Toe Light

I had the great privilege of being a speaker at HTML5 Developer Conference in San Francisco recently.  It was the second HTML5 Dev Conf I have presented at, with the first one being October 2013.  This time, I paired with Frank Greco to present a session entitled “WebSockets: Past, Present and Future”.  Frank took the stage for the first half of the session, and I followed up with some hands-on Internet of Things (IoT) demonstrations that were integrated with Kaazing Gateway.

Introducing the Tic Tac Toe Light

My personal favorite demonstration was a project I called the “Tic Tac Toe Light”.  I called it this because the custom-built enclosure houses nine (9) Adafruit NeoPixels in a three-by-three (3×3) grid.  The enclosure, made using foam core board and a hot knife, also contained an Arduino Yun.  I have grown to be a big fan of the Arduino Yun for real-time IoT/web projects.  The board has the same profile as an Arduino Uno, but includes integrated wireless (802.11 b/g/n), an ATmega32u4 (similar to the Arduino Leonardo), and a Linux system on a chip (SoC).



Using a web-based user interface, attendees of the HTML5 Dev Conf session could use their laptop, tablet or smartphone to control each NeoPixel (RGB LED) in the enclosure.  At the same time, the web user interface kept in sync with all the attendees’ selections – across all screens.  The Arduino Yun was also listening on a real-time connection for color change messages, which is how it knew which lights to change to which colors.

Why Kaazing Gateway

I think the bigger question here is “Why real-time?”  Although I do not know the exact count, I would say that the session had nearly 200 attendees.  The ATmega32u4 has a clock speed of 16 MHz, with 32 KB of flash and a mere 2.5 KB of SRAM.  If all those attendees were selecting light colors at anywhere near the same time using HTTP, the Arduino would be crushed under the load.  In a real-time scenario however, there is but one connection, and about twenty (20) bytes of data for each color change.  The end result was a far more scalable solution.


And it had to scale too!  The lights on the Tic Tac Toe box were blinking wildly for the duration of the time I had it plugged in (before I had to move on to my next demonstration).

Can you imagine the user experience over HTTP, even if the 16 MHz chip could handle the load?  You would select a color, and at some interval later, the color would be set.  That lag, however, would leave you wondering “Was that my color selection?”  Compare that to an instant response using Kaazing Gateway, even over conference wireless.  Not to mention keeping all the other connected users in sync.  The additional HTTP polling load for that would make the whole project come to a crawl (or just crash).
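For the curious, the rough arithmetic behind that claim looks like this (Node.js sketch; only the ~20-byte message size comes from the demo itself, the polling figures are assumptions for illustration).

```javascript
// Rough arithmetic behind the scalability claim above. The ~20-byte message
// size comes from the demo; the polling figures are assumptions only.
const attendees = 200;
const pollsPerSecond = 1;    // assume each client polls once a second over HTTP
const httpExchange = 700;    // assumed header bytes per poll (request + response)
const wsMessage = 20;        // bytes per color-change message (from the post)
const changesPerSecond = 5;  // assumed rate of actual color changes

// Polling: every client pays the full HTTP exchange every second, change or not.
const httpBytesPerSec = attendees * pollsPerSecond * httpExchange;  // 140000

// Push: bytes flow only when something actually changes.
const wsBytesPerSec = attendees * changesPerSecond * wsMessage;     // 20000
const arduinoBytesPerSec = changesPerSecond * wsMessage;            // 100

console.log(`HTTP polling fan-in:  ${httpBytesPerSec} bytes/s`);
console.log(`WebSocket fan-out:    ${wsBytesPerSec} bytes/s`);
console.log(`Seen by the Yun:      ${arduinoBytesPerSec} bytes/s on one connection`);
```

Even with generous assumptions, the 16 MHz board only ever sees a trickle of bytes on its single WebSocket connection instead of 200 simultaneous HTTP conversations.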

What Next

The 3×3 grid was actually happenstance – I happened to have ten (10) NeoPixels on hand in my component drawer.  I wanted a square, so 3×3 it was.  This led to the name of Tic Tac Toe.  But then I started to wonder.  What if this was the physical manifestation of two players in an actual game of tic-tac-toe?  Or even better yet, maybe artificial intelligence (AI) on the server was playing the other side in real-time!

This is where I would like to take the project next.  If you want to see the code for the project, you can hop on over to my GitHub account where I have posted more details, as well as the code itself for the Arduino Yun and the web client.  The fabrication plans are also posted there should you want to take on a project like this yourself.  If you have any questions, feel free to hit me up on Twitter, or drop a comment below.


Thanks Matthias Schroeder for the Vine video of Tic Tac Toe in action during the session.


KAAZING Leads Education in Modern Web and Mobile App Development

KAAZING was the first organization to deliver formal, instructor-led HTML5 and WebSocket training globally, back in 2008. Since then, we have trained thousands of web developers and engineers on the latest best practices and techniques for building state-of-the-art Web and mobile applications.

Today, we still remain very active in educating the web development community at all skill levels. Earlier this year, we launched two new intermediate to advanced level courses for web developers focused on new best practices and techniques for modern Web and mobile app development using open-source tools, and developing secure, real-time networked Web and mobile applications.

Richard Clark, Head of Global Training at KAAZING, will be teaching a 1-day, beginner-level JavaScript course at the HTML5 Developers Conference in San Francisco on May 19 from 9am-4pm. The course is called Leveling up in JavaScript. For more information and to register, click here.

For those looking to learn core foundational web development skills, you can pick up the fundamentals of JavaScript in less than a day. This course is for those who want to go beyond static design and into the world of interactive programming. Once you know how JavaScript works, you can take advantage of it to make your web pages and apps come alive.

Richard Clark is the Head of Global Training at KAAZING. He is an experienced software developer and instructor. He has authored commercial web applications and blends the practical knowledge of a developer with extensive experience as an instructional designer and trainer. Richard has taught for Apple and Hewlett-Packard, written immersive simulations, developed multiple high-performance web applications for the Fortune 100, and published Apple iOS applications. He is an evangelist for new technologies with a focus on their practical uses.


Why Should The Internet Of Things Be Central To Your IT Strategy?

Not since the Web began has there been an era of disruption like the one ushered in by the Internet of Things. An IoT-connected world is fast becoming a reality that promises to link our homes and businesses and improve efficiency.

By the year 2020, the number of Things connected to the Internet will be 6X the number of humans. With the explosive growth of connected things, an IoT world brings both tremendous risks and enormous opportunities.

Join KAAZING and Gigaom Research for “The Internet of Things: Making it Happen in Your Business,” a free analyst roundtable webinar on Tuesday, April 22, 2014 at 10:00 a.m. PT.

Our expert panelists, Craig Foster (freelance analyst, writer and consultant), Rich Morrow (founder and head geek, quicloud LLC) and Jonas Jacobi (Co-Founder & President, KAAZING), will discuss the risks and benefits of implementing a secure IT strategy to help you adopt and leverage the Internet of Things in your business.

This webinar will introduce a spectrum of IoT use cases through current examples that will help you identify the process and technology changes required to support IoT-based initiatives.

Prepare your business to succeed in an IoT connected world. Learn what our experts have to say and gain valuable insights on how to make the IoT central to your IT strategy. Register now. 


KAAZING Is Expanding, And So We Are Moving…

Unless you have been hiding under a rock since January 1st, you will have noticed that in 2014 KAAZING has entered a period of rapid expansion.

Whether it is measured in terms of participation in industry events worldwide, high-profile customer wins, bold new hires, or improvements to and additional flavors of the KAAZING Gateway, the company is visibly, energetically and relentlessly expanding in lockstep with the explosive growth of the Internet of Things.

No surprise, then, that we have also outgrown our current office space. Which is why, on February 10, 2014, we are relocating our Worldwide HQ.

While remaining in the heart of Silicon Valley, our new headquarters will henceforth be in San Jose – in the America Center, hailed when completed in 2009 as one of the “greenest” office projects in the Valley, back when LEED-certified office projects were still a rarity.

They say that moving on is not about never looking back, it’s about taking a glance at yesterday and noticing how much you’ve grown since then. All at KAAZING have been doing exactly that. We loved Mountain View, where the company was born and raised. But we are ready for this bigger and better facility, where Zingers will henceforth have access to a fitness center. The new location also provides easy access to jogging and bike trails.

“It’s all about growth and execution,” said Vikram Mehta, KAAZING’s CEO. “We’re a hot business in a hot industry. The move gives us the space to execute better, faster, and with a larger team.”

Mehta noted that the entire Enterprise IT & Infrastructure industry has already been asked to update its records with KAAZING’s new information and added that the company “looks forward to seeing all our partners and associates at our new location.”


Here is our new address:

Kaazing Corporation
6001 America Center Drive, Suite 250
San Jose, CA 95002

Our phone numbers have not changed:

T +1 (877) KAAZING
T +1 (877) 522-9464
F +1 (650) 960-8145
