Back when I started this whole software thing, the primary interface for users was what was known as the "smart terminal". Glass screen, keyboard, serial port with a DB-25 connector stuck in the back. They had sophisticated elements like bold, blinking, and underlined text, as well as line graphics. A rainbow of colors: white, green, and amber. VT100s and Wyse 50s were the most popular and representative of the lot, but anyone who has ever looked at a modern termcap file can see that there were hundreds of these things floating around, from every manufacturer.
While the different terminals and their capabilities were novel (I had one with TWO, count 'em, TWO serial ports, making it easy to flip back and forth between sessions), what's more relevant at this point is the idiom of computing they represented.
At the time, it was called timesharing. Monolithic computers sharing time across several different applications and users. Many were billed by the CPU time they consumed, and computer utilization was a big deal because of the cost of the main computer. Batch processing was more popular because it gave operators better control over that utilization and ensured that the computer was used to capacity. Idle computer time is wasted computer time, and wasted computer time is unbilled computer time.
As computers became cheaper, they became more interactive, since it became more affordable to let a computer sit idle, waiting for the user's next action.
The primary power, though, was that you had this large central computer that could be shared. Sally on one terminal could enter data that Bob could then see on his terminal, because all of the data lived on the single, central computer.
The users interacted with the system; it would think, crunch, and grind, and then spit out the results on the screen. Dumb Terminals would just scroll, but Smart Terminals had addressable cursors and function keys, and some even had crude form languages. You could download a simple form spec describing static text along with fields the user could fill in. The computer sends the codes, the terminal paints the form, the user interacts, locally, with the form, then ships back the whole kit with a single SEND key. This differs from how many folks today are used to interacting with terminals, especially if they're only used to something like vi on the Linux command line, where you send a key and the computer responds directly.
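For the curious, here's roughly what that byte stream looked like. The escape sequences below are real ANSI/VT100 codes; the JavaScript around them is just a convenient modern notation for building the string, not anything a 1988 host would have run.

```javascript
// Sketch: the sort of stream a host might send to paint a crude form.
var ESC = "\x1b";
var paintForm =
    ESC + "[2J"    +   // clear the screen
    ESC + "[5;10H" +   // move the cursor to row 5, column 10
    ESC + "[1m"    +   // bold on
    "Name:"        +
    ESC + "[0m"    +   // attributes back off
    ESC + "[5;17H";    // park the cursor at the entry field
// The terminal renders all of this locally; nothing goes back up the
// wire until the user hits SEND.
```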
Now, this was all late '80s and early '90s. During that time, the world shifted a bit with the widespread introduction of the PC. Now folks were using PCs for personal work, and maybe running "Green Screen" apps through a terminal emulator.
It didn't take long for people to want to leverage those PCs for shared work, and with the aid of networking and database servers, the era of Client/Server applications was born. Large applications, locally installed and running on individual computers, while the central computer was relegated to serving up data from its database. Visual Basic, PowerBuilder, and a plethora of other Client/Server "4GLs" took the market by storm, and every back office application coder was pointing and clicking their way to GUI glory.
Of course C/S programming is still with us today, "fat apps" we call them now. The 4GLs have kind of lost their luster. Make no mistake, there are a zillion lines of VB and Java being written today for back office applications, but the more specialized tools are no longer as popular as they once were. The general purpose tools seem to be doing the job nicely.
However, the buzz of application development hasn't been with the fat C/S app. Fat apps have all sorts of deployment, compatibility, resource, and portability issues. Having to roll an update of a C/S application out to 1,000 users is a headache for everyone involved.
No, today it's the web. Everything on the web. Universal access, pretty GUIs, fast deployments, centralized control. We all know how the web works, right?
Sure. The computer sends the codes, the browser paints the form, the user interacts, locally, with the form, then ships back the whole kit with a single SUBMIT key.
Where have we heard that before? Central computer, bunch of "smart clients". Of course, we use TCP/IP today instead of RS-232, and the clients are much more interesting. The browser is vastly more capable and offers higher bandwidth interfaces than a Green Screen ever could. But the principle is pretty much the same: everything on the central computer, even if the "central computer" is now several computers and a bunch of networking gear.
If you've been paying attention, you may have noticed over the past couple of years that the browser is getting much, much smarter.
It's not that this is new; it's been happening for quite some time. More and more web applications are pushing more and more logic down into the browser. AJAX is the buzzword, and GMail is the poster child.
The JavaScript engines within the browsers, along with other resources, are going to change how you as an application developer are going to develop your back office applications. Not today, not this second, but it's always good to look ahead and down the road.
First, we have good old Microsoft. Who'd have thought the thing that threatens Microsoft the most would come from within Microsoft itself. Yes, this is all their fault.
Microsoft, in their brilliance and genius, gave the world the ubiquitous "XHR", the XmlHttpRequest. XmlHttpRequest is the little nugget of logic that enables browsers to easily talk back to the server through some mechanism other than the user clicking a link or a submit button. There were other "hacks" that offered similar abilities, but they were, well, hacks. Ungainly and difficult to use. But XmlHttpRequest, that's easy.
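For anyone who hasn't seen it, the classic round trip looks something like this (the URL and element id are made up for illustration; the ActiveXObject branch covers the older IEs, where XHR shipped as an ActiveX control):

```javascript
// Create an XHR object, old IE or otherwise.
function createXHR() {
    if (window.XMLHttpRequest) {
        return new XMLHttpRequest();
    }
    return new ActiveXObject("Microsoft.XMLHTTP");
}

var xhr = createXHR();
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // Talk to the server without a link click or form submit.
        document.getElementById("result").innerHTML = xhr.responseText;
    }
};
xhr.open("GET", "/some/server/resource", true);  // true = asynchronous
xhr.send(null);
```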
From the introduction of XHR, we get the rise of the AJAX libraries. Things like Prototype and jQuery. JavaScript libraries that mostly give ready access to the HTML DOM within the browser, but also provide ready access to servers via XHR. There's a lot you can do with JavaScript and DOM tricks, and pretty animations, and whatnot. But it gets a lot more interesting when you can talk to a server as a dynamic data source. So, while DOM wrangling was fun and flashy for menus and the like, XHR is why more folks are interested in these libraries today than before.
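In jQuery, for instance, the flashy DOM trick and the server conversation each collapse to a line or two (the selectors and URL here are invented for the sketch):

```javascript
// The DOM trick: animate a menu open.
$("#menu").slideDown("slow");

// The interesting part: the server as a dynamic data source.
$.get("/orders/recent", function (data) {
    $("#orders").html(data);
});
```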
These first generation JS libraries provide a foundation that folks can start to build upon. They basically open up the primitives and elements used to construct browser pages. And once you open that up, folks, being programmers, are going to want to make that easier to use.
Enter the next generation: JavaScript component libraries. Dojo, YUI, ExtJS. From primitive DOM and node shuffling to high level widgets. Widgets that start to work across the different browsers (cross-browser compatibility always bringing tears of joy to any coder who has had to deal with it... well, tears at least).
With a widget library, you start getting into the world that Windows and Mac OS coders started with. You end up with a clean slate of a page, an event model of some kind to handle the keyboard and mouse, and high level widgets that you can place at will upon that clean slate to do a lot of the heavy lifting. On top of that, you have simple network connectivity.
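A hedged sketch in the Ext JS 2 idiom shows the shift: no nodes, no markup shuffling, just a widget dropped onto the blank slate and wired to an event (treat the config details as illustrative rather than gospel):

```javascript
Ext.onReady(function () {
    // A high level widget placed on the clean slate of the page.
    new Ext.Panel({
        renderTo: document.body,
        title: "Customer",
        width: 300,
        html: "The widget draws itself; no node shuffling required.",
        buttons: [{
            text: "Save",
            handler: function () {
                // The library's event model handles the mouse for us.
                Ext.Msg.alert("Saved", "The widget handled the click.");
            }
        }]
    });
});
```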
This is where we were as an industry in the late 80's and early 90's. This kind of technology was becoming cheap, available, and commonplace.
And what did we get when this happened in the '90s? The 4GLs. VB, PowerBuilder, etc. Higher level language systems that made combining widgets and data easier for everyday coders.
That's the third generation for the browser, and that's where we are today. On the one hand, you have the server centric component frameworks, like JSF, .NET, and Wicket. Really, these aren't yet quite as rich as what the modern browser can provide. They have "AJAX components", but in truth developers are still coding up forms with dynamic bits that use JavaScript, rather than JavaScript applications that run essentially purely in the browser.
There's GWT, a clever system that lets you write your client code in Java and download it into a browser to run, after the toolkit compiles the Java into JavaScript. Here, you can create "fat" JavaScript applications.
But we also have the recent announcements of Apple's work with SproutCore, as well as the 280 North folks with their "Objective-J". These are not simply widget frameworks. They're entire programming systems where the premise is that the browser is the runtime environment, while the server is simply there for data services. Classic Client/Server computing, ca. 1992.
Of course, today the server protocol is different. Back then we were shoving SQL up the wire and getting data back. Today, nobody in their right mind is pushing SQL from the browser (well, somebody is, but there's that "right mind" qualifier). Rather, they're talking some kind of higher level web service API (REST, SOAP, POX, JSON, it's not really important which). Today we have app servers that are more powerful, complicated, and robust than DBMSs and stored procedures.
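Concretely, the client side of that conversation looks something like this (the /api/customers URL and the appendRow helper are invented; JSON.parse assumes something like Crockford's json2.js, since native JSON support is still rolling out in the browsers):

```javascript
var xhr = createXHR();  // the same XHR factory sketched earlier
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // The app server hands back structured data, not a result set.
        var customers = JSON.parse(xhr.responseText);
        for (var i = 0; i < customers.length; i++) {
            // No SQL, no table names, no stored procedure plumbing here.
            appendRow(customers[i].name, customers[i].balance);
        }
    }
};
xhr.open("GET", "/api/customers?status=active", true);
xhr.send(null);
```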
Take note of the work of Apple and Mozilla in their efforts to speed up and improve JavaScript. Because like it or not, JavaScript is becoming the lingua franca of the modern "Fat App". The language is getting better, meeting the desires of the dynamic language wonks as well as the "programming in the large" folks, with better modularization giving us more flexibility and expressiveness. JavaScript is also getting faster, and the modern browsers are gearing up to be able to download 1MB of JS source, compile it, and execute it efficiently over a long period of time (which means fast code, good garbage collectors, etc.).
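One "programming in the large" idiom you can use today is the module pattern: a closure standing in for the private/public split the language doesn't yet give us directly (the names here are invented):

```javascript
var Billing = (function () {
    var taxRate = 0.07;  // private: invisible outside the closure

    function total(amount) {
        return amount * (1 + taxRate);
    }

    // Only what's returned here is public.
    return {
        invoice: function (amount) {
            return "Due: $" + total(amount).toFixed(2);
        }
    };
})();

Billing.invoice(100);   // "Due: $107.00"
Billing.taxRate;        // undefined -- the module keeps its secrets
```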
You'll note that this work isn't in the area of Flash or Silverlight. Programming Flash or Silverlight is no different than programming Java applets: you create some compiled application that's downloaded to an installed runtime on the client's computer. By promoting JavaScript and the HTML DOM, even as more effort is being made to hide them from the day to day coder, Apple and Mozilla are promoting more open standards. IE, Firefox, Opera, Safari: four different JS and HTML "runtimes", not to mention the bazillion phones and other implementations.
Of course, once you start doing all of your client logic in JavaScript, it won't take long for folks to want to do the same thing on the server side. Why learn two languages when one is powerful and expressive enough for most everything you would want to do?
With the rise of client side JavaScript, I think we'll see a rise of server side JavaScript as well. It will be slower going; it's hard to fight the tide of PHP and Java. But, at least in Java's case, JavaScript runs fine on Java, so it's not difficult to start using JS for server side logic. Heck, the old Netscape web server offered server side JS 10 years ago; I don't know if the Sun Web Server (the Netscape heir) still maintains it or not. Running JS via CGI with SpiderMonkey is trivial right now as well, but I doubt you'll find many $5/month hosts with a handy SpiderMonkey install.
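That's less exotic than it sounds. Under Rhino, plain JavaScript can call straight into the Java class library, so "server side JS" is a few lines away (a sketch, runnable with the Rhino shell's js.jar; the file name is made up):

```javascript
// Server side logic in JavaScript, leaning on Java's libraries via Rhino.
var file = new java.io.File("orders.log");
var writer = new java.io.FileWriter(file, true);  // true = append mode
writer.write("order received at " + new Date() + "\n");
writer.close();
print("logged to " + file.getAbsolutePath());  // print() is a Rhino shell global
```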
So, no, not quite prime time yet... but soon. Soon.
Of course, maybe not. Perhaps JavaScript will end up relegated to being a runtime language for the GWTs and Objective-Js of the world.
The biggest impact will be on the web designers. Static web pages won't be going away any time soon, but more and more web designers are going to have to become programmers. They won't like it. Web Design != Programming; different mindsets, different skill sets.
Anyway, get your party hat on and get ready to welcome back the "Fat App". Oh, and perhaps polish up on your JavaScript if you think you'd like to play in this arena. Yes, you too, server side folks.
Thursday, June 5, 2008
Java the next GAE language? That's probably half right.
Chris Herron pointed me to the article where Michael Podrazik suggests that Java will be the next language for the Google App Engine runtime.
I think he's half right.
By that I mean: if there's any Java in the next GAE language, it will be JavaScript.
Why is that?
It's pretty clear that Google has a lot of in-house experience with JavaScript. The GWT runtime is entirely in JavaScript. They have their own XSLT processor in JavaScript (for browsers that don't support XSLT natively). Also, they have their Rhino on Rails project, which is a port of Ruby on Rails to JavaScript.
Next, JavaScript fits nicely into the existing GAE infrastructure. It can be run just like Python is now. Also, there are several OSS JavaScript interpreters available to be used, of varying quality. Mozilla's Tamarin runtime, based on Adobe's ActionScript VM, is one candidate; the recently announced SquirrelFish runtime from WebKit could be another.
The GAE API would fit well into a JavaScript world, with less of the "square peg, round hole" work than using Java would entail.
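To be clear about what I'm imagining, here's a purely hypothetical sketch of a GAE handler in JavaScript, mirroring the Python guestbook idiom. None of these names exist; this is speculation written down as code:

```javascript
// Hypothetical: a GAE datastore query and response, if Google exposed
// its Python APIs to JavaScript. Every identifier here is invented.
var greetings = datastore.query("Greeting")
                         .order("-date")
                         .fetch(10);

response.write("<html><body>");
for (var i = 0; i < greetings.length; i++) {
    response.write("<p>" + greetings[i].content + "</p>");
}
response.write("</body></html>");
```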
JavaScript, with its push toward JavaScript 2.0, is rapidly growing up. It's always been an elegant language with its prototypal inheritance scheme (some would argue it's a blight on the planet, but that's more a paradigm complaint, I think). The 2.0 changes will make it a bit more mainstream, make it faster, and make it even more powerful. So JavaScript is powerful today, and getting more so. The tooling surrounding it is getting better as well.
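For flavor, here's roughly what the 2.0 draft adds, in its ActionScript 3 styled syntax. The spec is still moving, so treat the details as tentative:

```javascript
// JavaScript 2.0 (ES4 draft) flavor: classes and type annotations.
class Account {
    var balance: double = 0;

    function deposit(amount: double): double {
        balance += amount;
        return balance;
    }
}

var a: Account = new Account();
a.deposit(42.5);   // balance is now 42.5
```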
Finally, there are a bazillion web developers who are becoming, whether they like it or not, conversational in JavaScript. Before, there was a clean separation between client side and server side developers. Client side did HTML and CSS, while server side did scripting and logic.
But with the modern browsers having powerful JavaScript engines, and UI demands requiring fancier client side scripting for effects etc., not to mention Ajax, the client side developer has had the world of scripting and programming logic thrust upon them.
Some take to it well and become adept at leveraging JavaScript and its powers. Others simply cut and paste their way to success using the multitude of examples on the web. Either way, whether novice or expert, the client side developer is learning the fundamentals of programming and the nuances of the runtime through JavaScript.
If the client side developer were able to leverage that JavaScript knowledge on the server side, that would empower them even more.
JavaScript has had a mixed history on the server side. Netscape's server has supported server side JavaScript since forever, but when someone thinks about the server, JavaScript is far from their mind. It has almost no mindshare.
Yet we have, for example, the Phobos project, which is a JavaScript back end, as well as the previously mentioned Rhino on Rails project internal to Google. These are recent, though, without a lot of public history.
Now, to be fair, these are both Java systems operating as the host for a JavaScript based system. But there's no reason they have to be Java. The major browsers certainly don't use a Java runtime for their JavaScript engines; they use C/C++ implementations.
With a C/C++ implementation, Google could readily launch a JavaScript runtime for GAE that would fit quite well with their current infrastructure. Also, since there's very little momentum on the JavaScript server side, there's no real competition. No "why can't it operate like Project X". This gives Google even more freedom to shape the runtime the way they think it should be done.
So, I think that if there is any Java in GAE's near-term future, it will be in name only, in the form of JavaScript.
You heard it here first.