Sunday, July 10, 2011

So, jdubray at Carnets de bord has thrown down the gauntlet, I guess. He proposes "unREST". What is unREST? It's, umm, not REST. Simply put, it's RPC. Pretty bold proposition, don't you think?

Here's to hoping that unREST sweeps the internet so that the Restafarians can focus on working out the details of trying to apply REST as given rather than cleaning up everyone's misconception that REST is using GET over HTTP for RPC.

Let us put an end to "RESTful" web services and "RESTful" URLs, all things REST doesn't really have much of a say over or a care for. Let's hear it for "unRESTful" web services and "unRESTful" URLs. +1 to that.

J (can I call jdubray J? Hope so...) starts off with:

REST was designed with an end-user in mind, operating a user agent: the browser (I know Roy is going to get mad if he reads this). Everything that has been written to apply REST principles to a software agent-to-server scenario is simply wrong.

Actually, no. REST has nothing to do with browsers, but unfortunately, the browser model is so pervasive, and happens to use hypermedia, that it's a reasonable go-to artifact for explaining the concepts. A common ground between theory and practice, especially over the limited bandwidth of things like blogs and forum posts. But, naturally, folks take the example given as law and miss the forest for the trees.

He next cites three of Roy's REST premises, and damns them because "nobody" actually implements these concepts.

Finally, he boils it down to this: nobody does these things because there's no tooling for them, whereas all of the other RPC mechanisms give us the mechanism of exchange "for free".

SOAP gives us WSDL "for free", CORBA gives us IDL "for free", and Java RMI uses Java classes "for free".

SOAP, CORBA, and RMI are all popular, especially behind the firewalls. On the public internet, eh, not so much.

Why is that? Why are these protocols not pervasive in the wild, cage free, free range internet when they give us all this functionality "for free"?

Because the underlying complexity of these protocols inhibits casual adoption. There are rules to follow. Folks hate rules. Because of their complexity, the universal "we" are effectively forced to use tooling to participate. When you have to rely on the tools to do the work, as a participant you tend to ignore the details on the wire. Let the tools enforce the rules.

Consider Java RMI or Remote EJB. As developers, we would create our Java interfaces, methods, parameters, etc. and publish them far and wide using the built-in tooling. In use, did we ever actually look at the byte stream going back and forth? No. It's an opaque binary blob of unintelligible rubbish. And if there was some conflict or problem in the protocol, it certainly wasn't our problem, it was the tooling's problem. We coded to the high level interface, and left the nitty gritty to the tooling.
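To make that concrete, here's a hedged sketch of the kind of thing we'd publish (the service name and methods are invented for illustration):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// We wrote interfaces like this; the RMI tooling generated the stubs
// and owned everything that actually went over the wire.
public interface AccountService extends Remote {
    double getBalance(String accountId) throws RemoteException;
    void credit(String accountId, double amount) throws RemoteException;
}
```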

Just like we do today with TCP packets. We don't care about the bits and bytes, the network byte order, the packet buffers, the SYN handshakes, or whatever. Open socket, socket.write(stuff), close socket. Shazam.
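In Java terms, something like this minimal sketch (host, port, and payload all invented), where every TCP detail is somebody else's problem:

```java
import java.io.OutputStream;
import java.net.Socket;

public class Shazam {
    public static void main(String[] args) throws Exception {
        // Open socket, socket.write(stuff), close socket.
        // Handshakes, buffering, and byte order are the stack's problem.
        Socket socket = new Socket("example.com", 7777);
        OutputStream out = socket.getOutputStream();
        out.write("stuff".getBytes("UTF-8"));
        socket.close();
    }
}
```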

SOAP, however, is "just XML". It bares its wire protocol to all. If anything, the SOAP stacks overcomplicate things because they give the false hope that we actually have some control over the wire format, when in truth, we really don't. XML it may be, but what goes over the wire is STILL the tooling's problem.

And that's the rub. When tooling implementation A interprets the specification one way, and implementation B interprets it another, then it's happy fun time for the poor, powerless end developers who need to make it work.

SOAP has matured greatly over the years, and interoperability is always getting better, but developers are still relying on tooling to do the heavy lifting. For J, that's a good thing. If the REST APIs had tooling where he could drag and drop the interfaces, we'd likely not be having this discussion.

However, if REST APIs had the tooling to do all that, then there would be effectively no difference between HTTP REST implementations and SOAP implementations. Much of the complexity that's baked into SOAP is there for a reason. On its surface, SOAP is really (really!) simple. It's an XML header and payload. It's really trivial. All of the specifications go into details about what's IN the payload; the SOAP envelope itself is straightforward.
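For illustration (the body content here is invented), an envelope with nothing fancy in it really is this small:

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <getBalance xmlns="urn:example:accounts">
      <accountId>12345</accountId>
    </getBalance>
  </soap:Body>
</soap:Envelope>
```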

(Here's a secret, did you know you could implement a REST system on top of SOAP?? Really, it's true! REST doesn't care about the protocol. Does. Not. Care. It's an architecture.)

J mentions agent-to-server, and when we talk about agent-to-server and both the agents and the server are machines, we talk about Machine-to-Machine, or M2M communications.

REST represents the server architecture, and client expectations. A "well behaved" REST client can be quite complicated, since it has to handle and support the flexibility that the server can leverage in order to get value out of REST. But here's the news. A BAD REST client, a client that ignores all of the precepts and guidelines that are part of the contract of the REST server, does NOT invalidate the REST architecture. Just because one side does not play by the rules does not mean the design or implementation of the server is wrong.

What is a bad REST client? It's something that makes presumptions about everything. Normally, a REST client starts at the beginning of a graph that represents the REST server: the initial entry point. From there, the server lets the client know where it can go, what it can do, and what it should do next. A bad REST client may simply go "yea, yea, yea, I've been here before" and go stomping through the server's endpoints as if it owns the place.

That's what many systems do today by hard-coding endpoint addresses, and making assumptions about payload, and format, and next steps. That's what we do when we write code. "Open socket, send this, get that, send this other thing, get that other thing, close socket". We code the recipe into our clients, because "we know the way".

Have you ever gone someplace you've been before, made the right, turned left, went down two streets, and turned into the driveway to find what you were looking for wasn't there any more? "Wa..wa..wait a second, it was just here!" That's a bad REST client.

Ideally, you would have called ahead (like your wife said you should). Perhaps the phone number changed, and you would have called that number instead. Then you would have asked "Hi, where are you located?", and gotten the new address. That's what a "good" REST client does. Every. Single. Time. That's a major part of HATEOAS, in a nutshell.
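As code, the difference looks roughly like this hedged Java sketch (the entry point, link relation, and markup conventions are all invented for illustration):

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GoodClient {
    public static void main(String[] args) throws Exception {
        // A bad client hard-codes the deep URL it remembered from last time:
        // fetch(new URL("http://api.example.com/stores/42/hours"));

        // A good client starts at the entry point. Every. Single. Time...
        String home = fetch(new URL("http://api.example.com/"));

        // ...and follows the link the server advertises right now.
        String hoursUrl = linkFor(home, "store-hours");
        System.out.println(fetch(new URL(hoursUrl)));
    }

    static String fetch(URL url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/xhtml+xml");
        InputStream in = conn.getInputStream();
        StringBuilder sb = new StringBuilder();
        byte[] buf = new byte[8192];
        for (int n; (n = in.read(buf)) > 0; ) sb.append(new String(buf, 0, n, "UTF-8"));
        in.close();
        return sb.toString();
    }

    static String linkFor(String body, String rel) {
        // Crude, for illustration only; a real client parses the media type.
        Matcher m = Pattern.compile("rel=\"" + rel + "\"[^>]*href=\"([^\"]+)\"").matcher(body);
        if (!m.find()) throw new IllegalStateException("server no longer offers: " + rel);
        return m.group(1);
    }
}
```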

We don't do that in normal life, because we "know" that, as a rule, things simply don't change that often. There's no way I'm calling the grocery store every week to check on their hours or make sure they're still in the same shopping center I left them in last week.

And that's why bad REST clients can "get by", and why folks see perhaps less value in REST. Because most people who write systems write small systems. They write systems where they likely control both the server and the client side of the equation. When the server moves, the pain of updating the clients to use the new locations is minor, one-off integration work.

Because what happens to a bad REST client when the server changes? It breaks. It gets to the parking lot and cries "Oh no, Disneyland burned down!". Then it fills your logs with stack traces all weekend, and nothing gets done.

Folks seem to feel there is some magic in REST clients, that since "REST is all about browsers and humans" that the clients should be some Skynet powered super cyber-being. Hogwash. Just code, people, they're just code.

Code that knows to ask for what it wants, code that knows to ask for directions, code that knows how to parse and process the payloads as it sees them. What happens when the code gets stuff it doesn't understand? Why, it breaks! Imagine that.

Just like in real life, with humans. You know what happens when you reorganize a web page and move the BUY NOW button to the other side of the screen? The phone rings. People decry "What have you done?" "Where's the BUY NOW button?" "How do I ...". In other words, the agent breaks. Most people are resilient enough to adapt to the change. Most people.

Proper REST clients are more resilient to change. REST clients are kind of like the Unit Tests of RPC. Unit tests help enable development teams to move quickly because the unit tests can inform the team when something unexpected breaks. With resilient REST clients, the REST server can move forward, quickly, with some confidence that the clients will "keep up". They may not understand the newest features, or additional data elements, but they won't necessarily break on the changes to the old ones. When you can count on the phone not ringing every time you push a change, you tend to be able to move more quickly in advancing the system. Because we all know how much we hate it when the phone rings. At 3am. On your honeymoon...

If you have an in-house server with two in-house clients, this kind of resilience is not a value add for you. The cost of implementing a proper REST client may well not pay back any time soon; it's simply cheaper to implement, keep track of, and fix bad clients.

But recall, a bad client doesn't invalidate REST. A bad client works just fine with a solid REST server, at least for some length of time.

If the server maintains some backward compatibility, it buys more time for the clients to be recoded/retrained to take advantage of the new features. With cooperative, compliant clients, clients that actually use the information given to them rather than just making stuff up as they go, the server has more flexibility in the changes it can make without the phone ringing. That's why the server controls the URL space, and the clients don't. They're not the clients' URLs, they're the server's URLs.

Giving the clients control over the URLs is like giving the clients control over what office you work in. "Yea, I'd like to move to the office with the window over the beach volleyball court, but, 8 months ago I told UPS what office I was in -- so I guess I'm stuck here."

L O L you say. Absurd you say. But that's what happens when you give the power of the location of your services over to your clients. No beach volleyball for you. Ensure that your clients can look stuff up, and this isn't a problem. Forget UPS, they can call me. Now I just need a set of binoculars.

This is also why the use of ubiquitous media types is important. It's the same process. The media types in a REST system DO NOT carry semantics. The media type is rules about parsing and locating information within the media type. GIF is a media type. Blurry cell phone photos of scantily clad volleyball players is semantics (it means I have a better office than you do). The server defines what goes into the media type, in terms of actual, interesting data, and the application defines the semantics. Using ubiquitous media types means that there is a good chance other systems will ALREADY KNOW how to process it. "That's good, one less thing." -- F. Gump.

In Healthcare IT, there is solid movement behind representing clinical information. One of those efforts is the Clinical Document Architecture, which is based on the HL7 v3 RIM model. There's enough documentation behind these documents to make a nice home leaf and flower press. But once understood, and once widespread, implementors and architects will all be able to leverage all the work being done classifying disease, medications, procedures, and preserving daisies and poppies, to enable the flexibility that the ubiquity of the data formats brings. Much like we say today "do you have a fax machine" or "do you have email", we'll be able to say "do you understand a CDA C32" and send a great amount of interesting information. "For free."

Why isn't XML or JSON a "valid" REST format? Even though everyone and their sister's cell phone can process it? Same reason XML isn't a Health IT format. Because REST wants hypermedia formats, not just fancy CSV. It wants something more. HTML is a hypermedia format; XHTML is one that just so happens to be XML as well. So is Atom. XML and JSON, "in the raw", are not. They're just fancy CSV. Others are trying to build better, REST-happy hypermedia formats on top of XML and JSON.
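A hedged illustration of the difference (the field names and URLs are made up). This is "fancy CSV" -- you need out-of-band knowledge to know what to do next:

```json
{ "orderId": 42, "status": "open", "total": 19.95 }
```

The same data in a hypothetical hypermedia-style format carries its next steps with it:

```json
{ "orderId": 42, "status": "open", "total": 19.95,
  "links": [
    { "rel": "payment", "href": "http://api.example.com/orders/42/payment" },
    { "rel": "cancel",  "href": "http://api.example.com/orders/42" }
  ] }
```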

Once you get the media type out of the way, you can dwell on the details of the content that are important for the transaction. And from this you can train your systems to work with the interesting data that really matters.

That's why REST promotes using standard formats. So we don't have to learn Yet One More thing. Let's create tools around the existing specifications rather than recreating them, and retooling, and re-documenting, and re-training EVERYONE (and every thing) to use the new ones.

There's this notion, again, that M2M clients are some kind of super powered, semantic inferring, cognitive creation. The whole "out of band information" issue.

Humans (mostly) have this power, but this inference and deduction process of figuring stuff out on the fly isn't what REST is talking about. Consider a web site. Arguably the most RESTy thing we have today. Common formats, hypermedia, caching, the whole kit.

But, see, this web site is in Japanese. Know what? I can't understand Japanese. At all. Such a website would be total, utter gibberish to me. Does that make it not REST? Hardly.

What about a DNA sequencing system? Do you know anything about the process of DNA sequencing? I don't know anything about DNA sequencing. Even in English I wouldn't know what such a system did, what I would do with it, or why I would do something with it. Does that make the system not REST? Hardly again.

Gonna need some training here. Probably a PhD would help as well. REST doesn't fix this issue. This may be a revelation to some. Amazing, I know.

So drop this notion that REST is only for web sites. It's not. It's an architecture for systems using coarser data than we'd perhaps like, capable of being more fluid than we'd like to code for, using data formats we might rather not use. REST tooling is ubiquitous, but at the same time still more in the hand tool stage than the megabytes of frameworks and code generators out there. More integration would be better.

But the true thing that's missing from REST development is the habits and experience of using it. Getting some solid, good REST clients out in the wild, with a code set amenable to reuse and to building other REST clients. Most folks don't get that far. Software is about abstractions, and REST systems present some very nice ones. But habitually, right now, it's simply easier to write bad clients against harsh servers and deal with the realities of not choosing the more flexible path. For most folks, the value that REST provides is simply unnecessary.

All of the issues and problems that existed with RPC then, before REST was ever written down, are still wrong with it today. So, unREST away, please. With your RPC and HTTP tunnels and XML payloads honed to the keenest edge, but do it over there please.

Can I create an unREST tag on SO now?


Friday, June 17, 2011

It's Friday the Grosjeanth!

Well, unrelated to anything at all that's important: I'm having a strange Grosjean Numbers day.

On StackOverflow, of all things, my score, rep, cred, karma, whatever it is they call it there reached 23,456.


At the same time, my Google inbox is showing 3456 unread messages (don't ask).
It's stupid enough to be worth capturing for all eternity on my little non-corner of the internet.

I don't dare query a Numerologist. It probably has some long term ramifications such as catching all the red lights on my way home tonight. Why taunt fate?

Wednesday, July 14, 2010

Couch 1.0 retrospect

To my surprise, CouchDB 1.0 hit today.

A call went out for folks to chime in about their experiences with CouchDB.

I'm a casual participant, follower, and non-user of CouchDB since around 0.9. I have dabbled with it, but have not been able to employ it anywhere in my projects. The tool is in the box; it just doesn't fit anything I need right now.

I didn't head over to the Couch project trying to fill a need; rather, when a friend started using it, I went over to see what the buzz was about. I learned a bit and started to linger in IRC and on the mailing list.

I made an effort to understand CouchDB. I have a solid understanding of RDBMS systems, so I really wanted to understand this "new" thing. With the understanding I have, I post random stuff on things like Stack Overflow to help answer Couch questions, and offer other bits of support.

As a "non-user", I found the community to be great. The principals live on the mailing lists and IRC. They don't just talk, they listen, and discuss the finest nuances of the systems with random strangers that just happen to show up.

For me, the Couch community is the center of the NoSQL world. I think they have a unique perspective and are doing things differently from what the other DB projects are trying to do. Because of its uniqueness, it's a great place to start and radiate out into the larger DB world.

CouchDB 1.0 is exciting as it "finishes" the first leg of their journey. But the journey will continue and I wish them the best.

Congrats to the CouchDB team and un-team.

Friday, April 9, 2010

The first rule of Apple iPhone OS Development


The first rule of Apple iPhone OS development, is never talk about Apple iPhone OS development. So says Tyler^H^H^H^H^HSteve.

Apple previewed and released their iPhone OS v4 yesterday. Along with the new features comes a new development agreement. The development agreement is under NDA, but that didn't last long.

The big news tweeting up the servers is this clause from the agreement:

3.3.1 Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

What this basically says is that you can no longer write iPhone OS applications in whatever toolkit you want; rather, you must use Obj-C, C, C++, or JavaScript.

It's an interesting limitation, and I've not heard of such a limitation on any other platform in the history of computing. It may well have happened before, perhaps back in the dark ages when IBM ruled the mainframe world with an iron fist. But certainly not at all recently.

Currently, there are several toolkits that let you create iPhone apps using languages other than C. A popular one is the Unity Framework, used in many games; it lets you write your programs in C#. MonoTouch is another C# framework.

But this isn't just about what language you can use. The second clause about linking and compatibility layers is equally important, if not more so.

The compatibility layer clause eliminates toolkits that offer a single API to run across platforms, such as Qt or GTK. With those toolsets you can write a C program that runs the same on Windows, Mac, or Linux.

From the two clauses, it's clear what Apple wants. To quote Steve Ballmer, "Developers, Developers, Developers, Developers". The key here, however, is that they don't want Mobile Platform developers. They want iPhone developers. They want developers specializing in and writing specifically for the iPhone.

This clause does a couple of things.

First, it strangles other mobile platforms. No longer can you write common code that will potentially run on all of the major mobile platforms, notably Android, which is Java based, and the upcoming Windows Phone 7, which is Silverlight based.

By requiring developers to use its tools and libraries directly, Apple ensures they can't (easily) write code that will directly compile on other platforms, and will therefore face a somewhat significant porting effort if a company wishes to distribute an application onto another mobile device.

That means that a company wishing to create an application for mobile devices will have to dedicate a team specifically for the iPhone. This is just like what happens today in the desktop world. A company has to choose whether their application is going to run on Windows, Linux, or Macintosh.

You can observe the effect this has had over the years: Windows, being the most popular desktop platform, by far has the most desktop applications running on it. Microsoft development tools have no reason to make it easy for developers to port applications written for Windows on to other platforms.

Other providers, however, have reached out to meet that need for cross platform code bases, and many toolkits and development environments have appeared to fill the need.

Java is the dominant leader in cross platform runtimes today. "Write once, run everywhere" so the t-shirt says. This is mostly true. There's some "Write once, debug everywhere" involved, but Java is mostly portable. And this portability along with other aspects of the Java system have pushed Java to the top of the charts, especially in the server and enterprise space.

However, Java has not had the success in the desktop arena that they'd like to have. In fact, Java has its own mini-platform battle being waged within its own ranks. Without digging deep into the sordid history of why desktop Java isn't ruling the world, let me boil it down to two words: User Experience.

The User Experience, especially earlier in Java's life, was simply not as good as native applications. The major reason was that Java had to reinvent all of the buttons, windows, sliders, etc. that user applications use so that they can use them on all of their supported platforms. So, out the gate, Java applications looked like, well, Java applications. Not Windows apps, not Mac apps, not even Linux apps (which is saying something, what does a Linux app look like anyway?). No, they look and behave like Java apps.

The basics worked, but the nuances weren't there. The speed wasn't there, the perfect integration wasn't there. Functional apps indeed, just not particularly desirable apps.

If you have worked with an iPhone, it may have occurred to you that Apple has very strong views on how the UI looks, and on how it functions. They spend a lot of time on the "look and feel" of the user interface. Spectacular UIs are something that Apple is famous for, and they're part of the overall identity of their products.

So, now imagine if something like Java came along to try and create applications on the iPhone. Basically the folks creating the application would have to "reinvent" all of the widgets that the iPhone provides. As history has shown, since it would be a black box reverse engineering effort, the result would be functional, but imperfect. It wouldn't quite work the same. You could just tell that the app is a little bit off. Making a cross platform application that works like the targeted platforms is hard work.

How many hours do you think Apple spent tweaking how fast a panel slides, how far the scrolling list bounces, or tuning all of the beeps and boops? And how important do you think all of those subtle tweaks are to the overall effect and feel of the UI? I'm pretty sure it wasn't all decided in a Friday meeting after a wet lunch.

Apple has done some cross platform work. A prime example is the iPhone Users Guide. What many do not realize is that this iPhone application is really an embedded web application that runs on the internal Safari browser. 100% JavaScript.

The trick is, that all of the animations and such within the app, all of the look and feel that makes it feel like an iPhone app, is coded, by hand, within the app itself. Within JavaScript. A glaring example is the scrolling.

In a normal web app, you let the browser do the scrolling. You make a large list, print it out, and the browser pops up a scroll bar, and away it goes. Within the Users Guide, all of the scrolling is done in JavaScript. They rewrote that fundamental behavior to ensure that it looks and behaves properly. That's a lot of work for basically a bunch of webpages for something as mundane as a Users Guide.

But it demonstrates the drive within Apple on how apps should present themselves on the iPhone.

So, a side effect of forcing developers to use their toolkit is that it helps preserve the User Experience on the device. If Apple has proven anything, it's that while applications empower a device, the experience they provide defines the device. We have all seen GUI horrors that take a perfectly good computer and make it miserable to use. Apple wants to prevent that from happening with apps on the App Store.

Mind, this is a secondary benefit of the policy, a side effect, albeit a desirable one for Apple, but it's not the primary goal.

The primary goal is to make iPhone OS the premier mobile platform. Developers have been flocking to the iPhone already, and the iPad just works them up into an even frothier frenzy. By restricting the toolset, Apple wants the iPhone OS to be like Windows OS. Which means they want it to be the primary platform developers use when they want to target the mobile space.

If a developer does so, then they immediately give the iPhone OS a potential time-to-market advantage, because, at the moment, it's not trivial to port an iPhone application to another device. Most mobile applications are 90% UI, and with the iPhone, all of that UI will be tied up within Apple specific code. To port to another platform, you get to take your 10% of logic and rewrite the entire user experience on the other device.

That will take a lot of time and resources. If the iPhone market is Big Enough, many developers simply won't bother to port the application, much like many Windows developers don't port their applications to Linux or Macintosh. Now, if they had chosen a cross platform dev kit instead of, likely, a Microsoft specific dev kit in the first place, they may well port to the other platforms. But they didn't, so they don't.

The new Apple clause helps ensure that modern iPhone OS developers don't have that option of a cross platform dev kit. Well, specifically, they don't have the option of using a higher level, generic dev kit that then targets multiple platforms. We have yet to see whether someone decides to create something that lets an iPhone OS source base in Obj-C compile and deploy against another platform. It could happen.

All of these limitations buy Apple time. The mobile space is very hot now, and Android and Microsoft are very credible. Apple WILL, in time, "lose" this war, much like Microsoft is "losing" its war today with the rise in popularity of Linux and Macintosh desktops.

But as we all know, software development is the building of abstractions upon underlying foundations. This process gives code and development a momentum. A project once started on a specific development stack can be difficult to turn.

Apple has the market presence and momentum right now to attract developers. Developers are expensive to collect, so Apple is taking this opportunity to make the attractiveness of the iPhone platform a Faustian bargain, building even more momentum for the platform. Doing this will help ensure, for the time being, that the iPhone OS will get the apps, and get them first.


Thursday, April 16, 2009

Google broke Java -- Boo Hoo!

Seems to be a lot of teeth gnashing over Google's incomplete Java support in their new Java Google App Engine.

Talk about looking a gift horse in the mouth.

This new limited JGAE environment has single-handedly advanced Java's online presence farther than anyone, including Sun, ESPECIALLY Sun, could have.

Up till now, if you wanted to host a Java application, you couldn't do it cheap, much less free, without being severely hamstrung in terms of the actual environment you were forced to live in. Both stability and resources were severely limited on the "cheap" plans. If you wanted to host a Java application before now, you were pretty much relegated to finding a VPS for it.

VPS's are wonderful, but they're expensive, and you get the great joy of effectively being responsible for every aspect of the underlying system. You get to become admin, deployer, programmer, the whole kit. 

Sure, it's fun!

Sorta...

Well...once. Maybe. 

Actually...Yea. 

Actually it's not fun at all. Cron jobs, backups, init scripts. No thanx.

Coding is better.

Now, with JGAE, you get to focus pretty solely on your code. The administration is handled for you. You don't get a "machine" any more, you get a folder, on some cloud, in some unmarked, nameless data center. You don't even have to maintain the database. How cool is that?
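To be concrete, here's a hedged sketch of the kind of thing that lives in that "folder" (the entity kind and field are made up): just code against the provided APIs, with no admin work in sight.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;

// No machine, no database to administer: store an entity, return a response.
public class GuestbookServlet extends HttpServlet {
    public void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        Entity greeting = new Entity("Greeting");
        greeting.setProperty("content", req.getParameter("content"));
        DatastoreServiceFactory.getDatastoreService().put(greeting);
        resp.getWriter().println("Saved.");
    }
}
```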

Oh, and it's also Free. Gratis. Even with the upcoming reductions in the Free quotas, you get a LOT of capacity for nothing more than a login.

So, now Java has a truly cheap hosting platform, which cannot help but accelerate the adoption of Java as a server programming language, especially for the hobbyists, hackers, and other practitioners that have made PHP what it is today (for good and bad).

But are you coding Java? Really? With the whitelist, and other limitations? Of course you are. Despite these limitations, there is an enormous amount of existing Java code that will run on this platform. The Java code itself won't have to change to run here, though your designs most certainly will.

Python programmers certainly had to go through the same process as JGAE coders will, adapting to the new platform. Now, I don't frequent the Python community, so I don't know what kind of backlash the original GAE might have brought. No doubt, someone, somewhere, was cranky about something (it IS the Internet, after all).

But, even so, the Python community doesn't have the standard base that the Java community has. I think the Java folks have a fair gripe about the whitelist, as, like it or not, the Java library IS part of what makes Java, well, Java. But, that said, that doesn't mean Google couldn't have offered a similarly limited platform while still staying within the Java specification. "Sure, we have all these classes, you simply can't use them."

If Google had released a fully compliant Java library, but locked it down hard using the security policies, would the outcry be the same? Are folks really arguing pedantic nits about The Standard when a compliant, but equally restricted, library could have offered the same lack of functionality?
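A hedged sketch of the idea (this is not what Google actually did): the standard sandbox machinery can already express "all the classes exist, you simply can't use them".

```java
public class SandboxDemo {
    public static void main(String[] args) throws Exception {
        // Install the standard security manager with the default policy:
        // a fully compliant library is present, but most of it is off limits.
        System.setSecurityManager(new SecurityManager());

        // Compiles fine, loads fine -- and throws AccessControlException,
        // because the default policy grants no file-write permission.
        new java.io.FileWriter("/tmp/out.txt").close();
    }
}
```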

Perhaps Google simply didn't want to take the time to vet the entire library for their platform in order to get something out (it is Beta, after all, but then, so is most everything Google releases). Rather, they started with the base assumption of what they wanted to offer, a Java Servlet environment, and worked forward from there to the point where they felt they had a large enough chunk of the platform to pull that off.

See, someone like Sun could NOT have done what Google did. The Sun way would be trying to make the whole of the Java platform workable, to keep it as "unlimited" as practical. They really have such a commitment to and investment in Java that if they did what Google did, it would have hurt far more than it would have helped. It would be like hobbling your own children for expediency's sake. Imagine the screams if Sun had released this rather than Google.

But as for Google, Java isn't their "baby", and they can be more pragmatic. They can take what they need, toss aside the rest, and shrug off the criticism. They're not the flag holder for the Java banners. Google isn't making a Java platform that runs Servlets, they're making a Servlet platform with all the Java they need to support it.

The complaints about missing APIs will be addressed, to a point, in time. "Here are the Swing classes, but, no, you can't render anything."

In the meantime, though, even with the limitations, I think that the benefit of what Google offers to Java programmers, particularly new programmers, is a greater good than the damage they may cause by offering that opportunity on a limited platform.

Tuesday, July 1, 2008

VM's getting scared?

Over on java.net, they're commenting on whether Google has chosen poorly by backing RIA via JavaScript/CSS/DOM/HTML rather than the VM model using Java/Flash/Silverlight.

I look at it from an opposite point of view. Originally I was going to post this as a comment on their site, but for some reason they wouldn't let me.

But I have my OWN voice! Squeak in the wind it may be, I shall NOT BE SILENCED by whimsical corporate overlords. Or, something like that -- geez...not changing the world here.

Anyway.

You can argue that Google and Apple soldiering on with the web platform can be likened to those early pioneers that did the same with the Java Platform, back when it WAS slow, had platform issues, and other teething problems.

If it weren't for the likes of GMail and other RIA browser apps, the browser makers would have less incentive to push the browser as a platform. Yet, now we see that, while not perfect, the browser as RIA runtime is viable for a large class of applications, and it's just getting better.

Witness the improvements to the runtimes, both via Adobe's new JavaScript runtime as well as Apple's. Plus the new version of JS as a language. Also, we have the DOM changes, with things like the new CANVAS tag to handle graphics, as well as improved SVG support.

All of these changes are there to drive the platform farther, to become more flexible and more performant in order to handle more advanced applications.

Is it perfect? No, of course not. If you want something more robust and fluid than what a browser RIA can provide today, then by all means go the VM route. But there are a lot of valid reasons to stay out of the VM.

VMs add more overhead to an already big system. You still need the browser to launch the application, and when you load that browser, you pretty much get its entire runtime as well. Heck, you can barely launch Flash properly today without JavaScript. So now you pay for both runtimes.

Of course, there's Apple's iPhone, which supports neither Java nor Flash, but DOES have a full boat Safari implementation. So, GMail yay, Flex/FX/Silverlight nay.

Finally, you simply have the fragmentation effect: Flash, Java, and Silverlight cutting up the developer pie, while JS/HTML remains a cohesive and reasonably standard cross platform solution.

The number of applications that the browser runtime can support is expanding with every release of the various browsers. The momentum is for browser providers to provide a robust application environment while at the same time improving their unique UI elements for those standard browser tasks. You cannot have a successful browser today that doesn't handle the large JS RIA applications.

The browser. It's not just for surfing any more.

Friday, June 27, 2008

Party like it's 1992

Back when I started this whole software thing, the primary interface for users was what was known as the "smart terminal". Glass screen, keyboard, serial port with a DB-25 connector stuck in the back. They had sophisticated elements like bold, blinking, and underlined text, as well as line graphics. Rainbow of colors: white, green, and amber. VT100s and Wyse 50s were the most popular and representative of the lot, but anyone who has ever looked at a modern termcap file can see that there were hundreds of these things floating around from every manufacturer.

While the different terminals and their capabilities were novel (I had one that had TWO, count 'em, serial ports making it easy to flip back and forth between sessions), what's more relevant at this point is the idiom of computing they represented.

At the time, it was called time sharing. Monolithic computers sharing time across several different applications and users. Many were billed by the CPU time they consumed, and computer utilization was a big deal because of the cost of the main computer. Batch processing was more popular because it gave operators better control over that utilization, and ensured that the computer was used to capacity. Idle computer time is wasted computer time, and wasted computer time is unbilled computer time.

As computers became cheaper, they became more interactive, since now it's more affordable to let a computer sit idle waiting for a user to do something next.

The primary power, though, was that you had this large central computer that could be shared. Sally on one terminal can enter data that Bob can then see on his terminal, because all of the data was contained on the single, central computer.

The users interacted with the system, it would think, crunch and grind, and then spit out the results on the screen. Dumb Terminals would just scroll, but Smart Terminals had addressable cursors, function keys, some even had crude form languages. You could download a simple form spec showing static text along with fields that can be entered by the user. The computer sends the codes, the terminal paints the form, the user interacts, locally, with the form, then ships back the whole kit with a single SEND key. This differs from how many folks today are used to interacting with terminals, especially if they're only used to something like vi on the Linux command line. Send a key and the computer responds directly.

Now, this was all late 80's and early 90's. During that time, the world shifted a bit with the widespread introduction of the PC. Now folks were using PCs for personal work, and maybe running "Green Screen" apps through a terminal emulator.

It didn't take long for people to want to leverage those PCs for shared work, and with the aid of networking and database servers, the era of Client Server Applications was born. Large applications, locally installed and running on individual computers while the central computer was delegated to serving up data from its database. Visual Basic, Power Builder, and a plethora of other Client Server "4GLs" took the market by storm, and every back office application coder was pointing and clicking their way to GUI glory.

Of course C/S programming is still with us today, "fat apps" we call them now. The 4GLs have kind of lost their luster. Make no mistake, there are a zillion lines of VB and Java being written today for back office applications, but the more specialized tools are no longer as popular as they once were. The general purpose tools seem to be doing the job nicely.

However, the Buzz of application development hasn't been with the Fat C/S app. Fat apps have all sorts of deployment, compatibility, resource, and portability issues. Having to roll an update of a C/S application out to 1000 users is a headache for everyone involved.

No, today it's the web. Everything on the web. Universal access, pretty GUIs, fast deployments, centralized control. We all know how the web works, right?

Sure. The computer sends the codes, the browser paints the form, the user interacts, locally, with the form, then ships back the whole kit with a single SUBMIT key.

Where have we heard that before? Central computer, bunch of "smart clients". Of course we use TCP/IP today instead of RS-232, and the clients are much more interesting. The browser is vastly more capable and offers higher bandwidth interfaces than a Green Screen ever could. But the principle is pretty much the same. Everything on the central computer, even if the "central computer" is now several computers and a bunch of networking gear.

If you've been paying attention, you may have noticed over the past couple of years that the browser is getting much, much smarter.

It's not that this is new, it's been happening for quite some time. More and more Web Applications are pushing more and more logic down in to the browser. AJAX is the buzzword, and GMail is the poster child.

The JavaScript engines within the browsers, along with other resources, are going to change how you as an application developer are going to develop your back office applications. Not today, not this second, but it's always good to look ahead and down the road.

First, we have good old Microsoft. Who'd have thought the thing that threatens Microsoft the most comes from within Microsoft Labs itself. Yes, this is all their fault.

Microsoft, in their brilliance and genius, gave the world the ubiquitous "XHR", the XmlHttpRequest. XmlHttpRequest is the little nugget of logic that enables browsers to easily talk back to the server through some mechanism other than the user clicking a link or a submit button. There were other "hacks" that offered similar abilities, but they were, well, hacks. Ungainly and difficult to use. But XmlHttpRequest, that's easy.

From the introduction of XHR, we get the rise of the AJAX libraries. Things like Prototype and jQuery. JavaScript libraries that mostly give ready access to the HTML DOM within the browser, but also provide ready access to servers via XHR. There's a lot you can do with JavaScript, doing DOM tricks, and pretty animations, and what not. But it gets a lot more interesting when you can talk to a server as a dynamic data source. So, while DOM wrangling was fun and flashy for menus and what not, XHR is why more folks are interested in it today than before.

These first generation JS libraries provide a foundation that folks can start to build upon. They basically open up the primitives and elements used to construct browser pages. And once you open that up, folks, being programmers, are going to want to make that easier to use.

Enter the next generation. JavaScript Component Libraries. Dojo, YUI, ExtJS. From primitive DOM and node shuffling to high level widgets. Widgets that start to work across the different browsers (cross browser compatibility always bringing tears of joy to any coder who has had to deal with it...well, tears at least).

With a Widget Library, you start getting in to the world that Windows and Mac OS coders started with. You end up with a clean slate of a page, an event model of some kind to handle the keyboard and mouse, and high level widgets that you can place at will upon that clean slate, to do a lot of the heavy lifting. On top of that, you have simple network connectivity.

This is where we were as an industry in the late 80's and early 90's. This kind of technology was becoming cheap, available, and commonplace.

And what did we get when this happened in the 90s? The 4GLs. VB, Power Builder, etc. Higher level language systems that made combining widgets and data together easier for more mundane users.

That's the 3rd generation for the browser. That's where we are today. On the one hand, you have the server centric component frameworks, like JSF, .NET, Wicket, etc. Really, these aren't yet quite as rich as what the modern browser can provide. They have "AJAX Components", but in truth the developers are still coding up forms with dynamic bits that use JavaScript rather than JavaScript Applications that run essentially purely in the browser.

There's GWT, a clever system that lets you write your client code in Java and download it into a browser to run, after the toolkit compiles the Java into JavaScript. Here, you can create "fat" JavaScript applications.
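A hedged sketch of the flavor of it (module configuration omitted): plain Java in, JavaScript out.

```java
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// Ordinary Java source; the GWT compiler translates it to JavaScript
// that runs in the browser.
public class HelloModule implements EntryPoint {
    public void onModuleLoad() {
        Button button = new Button("Click me");
        button.addClickHandler(new ClickHandler() {
            public void onClick(ClickEvent event) {
                Window.alert("Hello from compiled Java");
            }
        });
        RootPanel.get().add(button);
    }
}
```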

But, also, we have the recent announcements of Apple's work with SproutCore, as well as the 280 North folks with their "Objective-J". These are not simply widget frameworks. They're entire programming systems where the premise is that the browser is the runtime environment, while the server is simply there for data services. Classic Client/Server computing ca. 1992.

Of course today, the server protocol is different. Back then we were shoving SQL up and getting data back. Today, nobody in their right mind is pushing back SQL (well, somebody is, but there's that "right mind" qualifier). Rather they're talking some kind of higher level web service API (REST, SOAP, POX, JSON, not really important what). Today we have App servers that are more powerful, complicated, and robust than DBMSs and Stored Procedures.

Take note of the work of Apple and Mozilla in their efforts to speed up and improve JavaScript. Because like it or not, JavaScript is becoming the lingua franca of the modern "Fat App". The language is getting better, meeting the desires of the dynamic language wonks, as well as the "programming in the large" folks, with better modularization, giving us more flexibility and expressiveness. JavaScript is also getting faster, and the modern browsers are gearing up to be able to download 1MB of JS source code, compile it, and execute it efficiently, and for a long period of time (which means fast code, good garbage collectors, etc.).

You'll note that this work isn't in the area of Flash or Silverlight. Programming Flash or Silverlight is no different than programming Java Applets. You create some compiled application that's downloaded to an installed runtime on the client's computer. By promoting JavaScript and the HTML DOM, even though more effort is being made to hide that from the day to day coder, Apple and Mozilla are promoting more open standards. IE, Firefox, Opera, Safari: four different JS and HTML "runtimes", not to mention the bazillion phones and other implementations.

Of course, once you start doing all of your client logic in JavaScript, it won't take long for folks to want to do the same thing on the server side. Why learn two languages when one is powerful and expressive enough for most everything you would want to do?

With the rise of client side JavaScript, I think we'll see a rise of server side JavaScript as well. It will be slower. Hard to fight the tide of PHP and Java, but, at least in Java's case, JavaScript runs fine on Java. So, it's not difficult to start using JS for server side logic. Heck, the old Netscape web server offered server side JS 10 years ago; I don't know if the Sun Web server maintains it any more or not (SWS is the NS heir). Running JS via CGI with SpiderMonkey is trivial right now as well, but I doubt you'll find many $5/month hosts with a handy SpiderMonkey install.
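For example, here's a minimal sketch using the script engine that ships with Java 6 (the script itself is invented):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ServerSideJs {
    public static void main(String[] args) throws Exception {
        // Java 6 bundles Rhino behind the javax.script API, so "JS on
        // the server" is one engine lookup away.
        ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
        js.put("user", "world");
        Object result = js.eval("'hello, ' + user");
        System.out.println(result); // hello, world
    }
}
```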

So, no, not quite prime time yet..but soon. Soon.

Of course, maybe not. Perhaps it will end up being relegated to a runtime language for the GWTs and Objective-J's of the world.

The biggest impact will be to the web designers. Static web pages won't be going away any time soon, but more and more web designers are going to have to become programmers. They won't like it. Web Design != Programming, different mindsets, different skill sets.

Anyway, get your party hat on and get ready to welcome back the "Fat App". Oh, and perhaps polish up on your JavaScript if you think you'd like to play in this arena. Yes, you too you server side folks.