Tuesday, February 18, 2014

The High Cost of Cross Platform Standards

Recently Jason Bock posed a question to our company: where should validation happen in an application, on the client, on the server, or both?  In the past the pendulum in the world of application development has swung back and forth between thin and thick clients, and as it has, the location of validation has shifted.  Green screen terminals put heavy logic on the back end; one and two tier applications put the majority of validation in the client.  Then came the rise of COM, with the ability to create n-tier applications where the same business logic could be written on the client and the server (I'll ignore the fact that so many people did this poorly).

How did this all happen?  Companies were able to unilaterally innovate, and proprietary systems reigned supreme.  With the rise of the PC, companies made dBase, FoxPro, and VB; each was independently successful on the PC platform.  Several years later MS could unilaterally put COM in place, and later .Net.  These were more than just languages, frameworks, and syntactic sugar; they were designed to allow new paradigms.

Since the late '90s we have also been living in the world of HTML and JavaScript.  These represent a cross platform standard, which is great.  Theoretically we can write applications once and they will run anywhere.  Of course, in practice this is more like write once and tweak everywhere, but the promise is there.  When web applications first started being written they were amazingly primitive: mostly just simple validation on the client, with much of the complex validation and business logic happening on the server.  But, as has happened in the past, the people using the applications wanted a richer experience.  Over time, in a glacially slow progression, we started getting tools to allow this, such as AJAX, jQuery, and eventually a whole plethora of new JavaScript frameworks like Knockout and Angular.

Now that we have all kinds of cool things like TypeScript and SPA frameworks, all is good in the world, right?  Maybe, but maybe not.  The agreement on and adoption of HTML5 has been glacially slow.  Without a change to JavaScript or the HTML standard, any new innovations must be layered on top of what is already there.  With something like HTML and JavaScript it is very hard to make fundamental changes on the scale of creating COM or the .Net framework.  For example, Google has been working on native object observation in JavaScript with Object.observe(), but of course it's not that easy to get something like that into the standard.  What do they have to do instead?  Build data binding into Angular, which of course is written in JavaScript and is much slower than native data binding would have been.  How long do we wait for new JavaScript extensions and HTML 6?
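To see why binding layered on top of the language costs more than native change notification would, here is a minimal sketch of dirty-checking in plain JavaScript.  The names are illustrative only, not Angular's actual internals:

```javascript
// Minimal dirty-checking sketch: on every digest pass, each watcher's
// getter is re-run and compared to its last known value, the way
// Angular 1.x binds data in plain JavaScript.  Illustrative names only.
function Scope() {
  this.watchers = [];
}

Scope.prototype.watch = function (getter, listener) {
  this.watchers.push({ getter: getter, listener: listener, last: undefined });
};

Scope.prototype.digest = function () {
  var dirty;
  do {
    dirty = false;
    for (var i = 0; i < this.watchers.length; i++) {
      var w = this.watchers[i];
      var value = w.getter();
      if (value !== w.last) {
        w.listener(value, w.last);
        w.last = value;
        dirty = true;
      }
    }
  } while (dirty); // keep looping until a full pass finds no changes
};

// Usage: the framework cannot be told that model.name changed, so every
// digest must re-scan all watchers, which is the cost of building binding
// on top of the language instead of inside it.
var model = { name: "" };
var scope = new Scope();
var rendered = "";
scope.watch(function () { return model.name; },
            function (value) { rendered = "Hello, " + value; });

model.name = "world";
scope.digest();
```

With native observation the runtime would notify the framework of the change directly; with dirty-checking, the work grows with the number of watchers whether or not anything changed.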

In answer to Jason's question: users will continue to demand richer and richer applications.  This means that business logic will have to be written in the browser, in JavaScript or something layered on top of JavaScript.  To make sure that the information being sent to the web server is correct, we will end up writing the validation again in C#, emitting simple JavaScript based on logic written in C#, or running to Node.js to write our server side validation so that we only write the business logic once, in JavaScript.  What is not easily possible is to quickly make a major change in the underlying technologies of HTML and JavaScript to create a better, easier language to meet these demands.
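As a sketch of the write-it-once-in-JavaScript option, the same plain JavaScript file can run in the browser and under Node.js.  The rules, function names, and field names here are invented for illustration:

```javascript
// Validation logic written once in plain JavaScript so the same file can
// run in the browser and on a Node.js server.  The rules and names are
// invented for illustration, not from any particular framework.
function required(value) {
  // Reject missing, empty, or whitespace-only required fields.
  return typeof value === "string" && value.trim().length > 0;
}

function looksLikeEmail(value) {
  // Deliberately loose shape check, illustrative only.
  return typeof value === "string" && /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

function validateOrder(order) {
  var errors = [];
  if (!required(order.customer)) errors.push("customer is required");
  if (!looksLikeEmail(order.email)) errors.push("email is invalid");
  return errors;
}

// Under Node.js this export lets the server reuse the exact same rules;
// in a browser the functions are simply globals (or wrapped by a bundler).
if (typeof module !== "undefined" && module.exports) {
  module.exports = { validateOrder: validateOrder };
}

// Usage: the same call runs on either side of the wire.
var errors = validateOrder({ customer: "", email: "not-an-email" });
```

The client calls validateOrder before submitting, and the server calls the identical function on the posted data, so the two sides cannot drift apart.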

For better or for worse, we have the cross platform advantages that JavaScript represents while we work around the slow pace of change that cross platform standards also represent.  The people we write applications for are not going to be willing to wait for the language to change on its own, so there will continue to be a place for Node.js, Angular, jQuery, Knockout, and TypeScript.  Constructs written on top of JavaScript will do their best to make up for what the language can't do or isn't optimized to do.  A cross platform language like JavaScript cannot directly keep up with the pace of change that our industry demands.