

Hate To Break It To You, But The Browser Is A Tier

08.13.2010

“3-tier architecture.” Those words sounded like hogwash to me as I sat in my cubicle 13 years ago, and they still sound that way now, mainly because I have never actually seen a 3-tier architecture. I have only seen 2-tier and n-tier architectures with a mish-mash of strange layers and pseudo-service components.

Before I say anything more, I want to get one thing out of the way. The rough difference between a layer and a tier is this: a tier is the physical isolation of software components, such that those components can be distributed over the network (to other servers, for example), whereas a layer is a logical grouping of software components, organized in patterns that suit a particular level or depth of the application (UI versus business logic versus database, say, which might be managed as isolated projects or DLLs). If you disagree with that, I suggest you go Google it for yourself. Tonight I’m somewhat merging the two terms, speaking mainly of physical tiers while noting that, in most computing situations, these tiers are also logically grouped as layers for practical purposes.

There is a strange belief among some developers and their managers that web servers represent the UI tier (and layer) of a web application. I am not sure where this belief comes from, but I suspect it has something to do with prior experience with less-mature web technologies, where HTML was static and the browser was more of a “dumb terminal,” if you will: Lynx, IE2, Netscape 2. These days, however, web sites can, often do, and in my opinion often should perform all dynamic view-layer functions in the browser via dynamic HTML and AJAX. A “good” web application should be built to execute entirely within the web browser, with the web server used only to access data and to perform proprietary business logic and algorithms.

Consider, for example, a web application that consists of nothing but static HTML, Javascript, CSS, and image file resources, plus isolated REST/JSON web services, perhaps on some alternate host. Such a web app can be as powerful and as functional as almost any web application out there, including the one I have to work with every day. In that scenario, the combination of HTML and Javascript acts as a client-side workstation application—just like a Windows Forms application or a Visual Basic 6 app of the previous decade—that happens to integrate with server resources via web services, those web services being the “proprietary tier and layer for processing shared data.”
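To make this concrete, here is a minimal sketch of such a page’s entire “application,” in circa-2010 Javascript. The /services/customers URL and the response fields are hypothetical; the point is that the browser alone fetches raw JSON and renders the view.

```javascript
// The static page's entire "application": fetch JSON from a REST service
// and render the view in the browser. No server-side rendering involved.
// (The endpoint URL and response shape are hypothetical.)
function getJson(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(JSON.parse(xhr.responseText)); // JSON.parse is native in ES5 browsers
    }
  };
  xhr.send(null);
}

getJson("/services/customers", function (customers) {
  var list = document.getElementById("customer-list");
  for (var i = 0; i < customers.length; i++) {
    var item = document.createElement("li");
    item.appendChild(document.createTextNode(customers[i].name));
    list.appendChild(item);
  }
});
```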

By the way, this is actually not bad design, architecturally. I’ve heard a lot of whining and complaining from compiled-code developers over the years that Javascript is a crappy kiddie language that should never be used for application logic, but as one who has had more experience with scripting languages than every last one of these whiners, as well as on-par experience with strongly typed languages, I am quite convinced that Javascript is an excellent and mature language, one that truly supports most of the fundamentals of OOP and one that is worth betting on for professional web development, particularly if modern browsers are targeted with ECMAScript 5. Unfortunately, it does lack a few things, such as design-time/compile-time semantics validation (no puking at compile time if you misspell an object’s member reference, since there is no compilation process). But you can certainly unit test against it; that’s why there are Javascript frameworks like JSUnit, jsunittest, FireUnit, et al., not to mention a unit testing framework of my own that I wrote at a previous job. Unfortunately, tooling is still inadequate. It’s difficult, for example, to follow deeply nested “class” declarations and the declarations of their deeply nested members, and it’s true that unit and integration testing the view is a bit harder in Javascript than testing, say, business objects in C#. It’s still doable, though.
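For the skeptics, here is a small sketch of the kind of OOP plain Javascript supports: a prototypal “class,” an ECMAScript 5 property accessor, and the kind of bare-bones assertion that frameworks like JSUnit formalize. The Account example is invented for illustration.

```javascript
// A prototypal "class" with an ECMAScript 5 read-only property accessor.
function Account(owner) {
  this.owner = owner;
  this._balance = 0;
}
Account.prototype.deposit = function (amount) {
  this._balance += amount;
};
Object.defineProperty(Account.prototype, "balance", {
  get: function () { return this._balance; }
});

// The essence of what JSUnit/FireUnit-style frameworks formalize.
function assertEquals(expected, actual, message) {
  if (expected !== actual) {
    throw new Error(message + ": expected " + expected + ", got " + actual);
  }
}

var acct = new Account("jon");
acct.deposit(50);
acct.deposit(25);
assertEquals(75, acct.balance, "deposits should accumulate");
```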

So let’s take this tier breakdown a step further. Microsoft has actually supported HTTP access to SQL Server since, I believe, version 2005, if not version 2000 or even 7, alongside the native XML support. It is a poor, insecure practice, but it is nonetheless fully supported: you can actually expose this on the front end. Consider, then, that one could technically create a rich, dynamic web site consisting of nothing but static HTML, Javascript, CSS, and image files, plus native HTTP access to SQL Server via SQL Server’s HTTP endpoints. The idea is horrible, but I hope it proves a point. While you would need to either download the HTML+script files to your hard drive and run them locally, or else serve them from a static HTTP server, this is truly a 2-tier application: one tier in the browser, one tier in SQL Server. Don’t like the idea of downloading the files and running them locally? Take it a step further: store the HTML and script files as database column values within SQL Server, and serve them via SQL Server’s HTTP support. It still comes complete with multi-user support and distributed computing potential (you can rig SQL Server to run in a farm). But it remains a 2-tier application, not 1-tier; the actual CPU cycles that process the application are spent in two distinct places—within the web browser and within SQL Server.
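To show how direct that browser-to-database conversation would be, here is a schematic sketch of the browser invoking a stored procedure through such an endpoint. SQL Server’s native HTTP endpoints spoke SOAP, so the envelope below is illustrative only; the endpoint path, namespace, and GetOrders method are all hypothetical.

```javascript
// Browser tier talking straight to the database tier: a schematic SOAP
// request to a hypothetical SQL Server HTTP endpoint exposing a stored
// procedure named GetOrders as a web method.
var envelope =
  '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soap:Body>' +
      '<GetOrders xmlns="http://example.com/sql-endpoint">' + // hypothetical namespace
        '<CustomerId>42</CustomerId>' +
      '</GetOrders>' +
    '</soap:Body>' +
  '</soap:Envelope>';

function renderOrders(responseXml) {
  // Walk the SOAP response's DOM and build the view here.
}

var xhr = new XMLHttpRequest();
xhr.open("POST", "/sql/orders", true); // hypothetical endpoint path
xhr.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
xhr.setRequestHeader("SOAPAction", "http://example.com/sql-endpoint/GetOrders");
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // Two tiers total: this script in the browser, and SQL Server itself.
    renderOrders(xhr.responseXML);
  }
};
xhr.send(envelope);
```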

As you tear down the moving parts of a web server and its kin, something begins to happen: everything becomes blazingly fast. Why? The more endpoints there are in the life cycle of an application session, the more I/O, unpackaging, repackaging, and potential fail points there are. The more layers you add, the greater the struggle to keep up with performance requirements and maintenance tasks becomes. Consider the possibility of rich business logic support within SQL Server stored procedures; imagine all your business logic tucked away in sprocs. It’s doable, folks, and a number of companies hire application developers who are SQL Server developers first and C# or web developers as an afterthought, with C# code used only for UI. (I know this because I was interviewed by one such company not too long ago.) Every development team has a completely different perspective on how application development should work. Usually these differences are due to differences in the business domains. Sometimes, however, they are due to variations in management experience (i.e. cluelessness).

One of the applications I work on—I won’t say whether it’s my day job or a side project being co-developed with a friend, but let’s just say that tonight, as with most of this year since I heard about these circumstances, it has had me flabbergasted—is a painful 5-layer, 4-tier architecture for a small-to-medium-sized application. The 5 layers consist of browser UI, server-side app logic, a DAL accessible via WCF web services, CRUD sprocs which are the only means to DB data, and finally the DB data itself. (Plus some Windows services, each with multiple tiers.) It grew to 4 tiers this year because a “privacy committee” believed that the web server, being “accessed by the customer,” needed another layer and tier to sit between it and the database where other customers’ data is stored. The application literally could not be migrated to new servers, or be given access to its own database, until it met this requirement.

I won’t get into how painful it was to inject another tier—the months one developer took just to get the DAL proxied via WCF to the isolated web services servers. Nor will I blabber on about the complaining from the leaders, who seem to want to blame the developers for wasting time on a server upgrade that brought no performance improvements, while at the same time insisting that the switch to a multi-tier architecture was somehow good for the application. Nor about their nerve in continuing to call this “3-tier” when it is clearly at least four tiers.

Introducing tiers to a web application is immensely costly to its performance. If n-tier architecture means “scaling out,” let’s put it this way: scaling out a legacy application that wasn’t built to scale out is an extremely hard computer-science problem, and it has no business being mandated by people who are not experienced software developers at their core, unless they are willing to set aside at least six to twelve months for a rewrite. The non-leader developers in this case only had the authority and permission to patch the existing codebase to meet the requirement of getting the data proxied. To build an n-tier, multi-layered application correctly, you have to design much of it that way from the ground up. Most architectures that work best in a two- or three-tier configuration will not scale well at all in an n-tier setting. And that proved true in this case: some features in the app now take minutes to perform what used to take a few seconds, because the data must undergo an extra hop via unoptimized XML, and the leadership refuses to allow binary+TCP serialization.

This is my challenge to you, the reader, who for all I know is only myself, but it is a challenge nonetheless. Create a rich web application consisting of nothing but static HTML and script. Mock the web services using static XML or JS(ON) files, as sketched below. Create unit tests for all business and data points. At the end of it all, ask yourself, “Why do I need RoR/ASP.NET/PHP?” I’m sure the answer will not be “I don’t,” but the point of the exercise is to prove out the significance and self-dependence of the browser as a tier, and the power of client-side application programming for the web. Doing this should also help with server-side unit testability. Web scripting languages and ASP.NET tend to be painfully untestable because they require a browser to render the results before any functionality can be observed; but if the server-side operations have been stripped of rendering logic, then unit testing what remains, the business logic, is a cakewalk.
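As a starting point for that challenge, here is one way to wire the mocks (a rough sketch; the file layout and the USE_MOCKS toggle are made up). The same getJson helper from the earlier sketch can point at a static JSON file during development, so the application code never knows whether a real service is behind the URL.

```javascript
// Toggle between static mock files and real services without touching
// application code. During the challenge, only the mocks exist.
var USE_MOCKS = true;

function serviceUrl(name) {
  // e.g. "orders" -> "mocks/orders.json", a static file shipped with the site
  return USE_MOCKS ? "mocks/" + name + ".json" : "/services/" + name;
}

function getJson(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(JSON.parse(xhr.responseText));
    }
  };
  xhr.send(null);
}

// The caller can't tell mock from real, which is exactly the point.
getJson(serviceUrl("orders"), function (orders) {
  // All business and data-point logic under test lives here, in the browser.
});
```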

This exercise can also prove out the importance and value (and pay scale) of strong front-end developers. As one who has worked hard to be strong across all the layers of web development (front-end, middle-tier, and DB), it makes me sick to my stomach when I hear a boss shrug off a resume for being strong on the front end, or praise a resume for lacking front-end talent. (I am not speaking of my own resume, by the way; let’s just say that I am within earshot of occasional interviews for other teams.) The truth is that every layer and every tier is important and can have equal impact on the success of an application. Leaders need to learn and understand this, as well as the costs and frustrations of adding more of these tiers and layers to an existing architecture.

Published at DZone with permission of Jon Davis, author and DZone MVB.


Comments

Ron Richins replied on Tue, 2010/08/17 - 5:40pm

I applologize. But you are full of shit. Ron

Michael Smith replied on Thu, 2010/09/02 - 10:20am in response to: Ron Richins

Spelled "apologize."

Michael Smith replied on Thu, 2010/09/02 - 11:46am

Wonderful, eye-opening article!!!

As Jon Davis accurately notes, the "browser tier" has always represented not only a web application's direct interface with users, but also the optimal location to execute front-end logic, hosted by the browser and the client machine's resources (CPU, file system, and so on).

Before the WWW era came on the scene, standalone, self-contained "shrink-wrapped" programs ran efficiently on newly emergent personal computers. These could be considered the original "single-tier" applications, where all resources resided in a single location: on the PC. The advent of the global Internet revolution transformed content distribution and access, yet relegated the client tier to "static" HTML pages in terms of client-level intelligence. Nevertheless, client-side web pages would over time gain the functionality and power of their shrink-wrapped PC brethren, while retaining the power and connectivity afforded by the server-side tiers.

Initially, client-side scripting languages were developed to promote downloaded web pages from merely book-like, read-only articles to more interactive pages featuring built-in logic and primitive multimedia capabilities. Javascript, as mentioned in Jon's article, is the primary example. Utilizing Javascript, web pages could integrate "mini programs" such as calculators, audio, and simple animation. By the end of the 20th century, DHTML and related scripting mechanisms further enhanced the flexibility and functionality of web pages as they executed within the browser. By the early 2000s, browsers themselves had developed complete Document Object Models (DOMs), which functioned as ecosystems for the development and execution of scripting logic. On the static side, HTML was enhanced by the rise of CSS, along with newer versions of the HTML standard. Of course, web server logic also noticed these improvements and began integrating client-side functionality into its own environment. For example, ASP.NET encapsulates much of the logic of complex "controls" such as calendars, balloons, and other gadgets into Javascript snippets, which are ultimately downloaded to the client tier and run by the browser's DOM.

Eventually, as pointed out by Mr. Davis, the overbearing complexity and overhead inherent in ever-growing n-tier architectures and schemes have become a liability in terms of reliability, efficiency, and performance, especially for the "browser tier," which is the user's actual interface with the multi-tier monster the client logic interacts with.

Becoming impatient at functioning as "second-class citizens," browser-tier logic finally began to break free of the multi-tier tyranny. Utilizing a few underused features of the HTTP protocol, Javascript began leveraging HTTP with XML to short-circuit the traditional page lifecycle and connect directly with various tiers on the server, using a new paradigm known as AJAX (Asynchronous JavaScript and XML). By doing so, the client logic gained power by directly requesting and accessing resources on the server end. As also noted by Mr. Davis, the rise of JSON-based functionality further enhanced the autonomy of the "browser tier," enabling browser logic to interact directly with web services. Other enhancements, as Mr. Davis notes, provided for connectivity with SQL data sources and related resources.

Server side technologies such as ASP.NET have also noticed the paradigm shift whereby client applications (the browser tier) are gaining more and more autonomy, flexibility and functionality, thus making them less dependent on increasingly bloated server-side logic and its associated n-tier morass of complexity.  The development of ASP.NET MVC highlights part of the effort server-side entities have engaged in to compete with the growing influence of the browser tier.

Of course, many functions will always be relegated to server-side logic (database, business logic, and other services mentioned above by Mr. Davis). He is simply making the logical argument that, after being short-changed for many years, the browser tier, via client logic, is finally staking its claim as a powerful, first-class web tier fully capable of exploiting the latest web technologies, while simultaneously delivering a rich interactive front-end experience to the end user. Move over, server... The Browser Tier has arrived!!!

 Great article, Jon!

Emma Watson replied on Fri, 2012/03/30 - 5:56am

I've also been thinking along these lines lately. I'm now convinced that prototyping should more often than not be done completely in the browser tier, using static JSON files and maybe some kind of browser storage for persistence. Then, when the application is mostly done, the backend is implemented in a suitable technology. This keeps the project as agile as possible while the inevitable spec changes are streaming in.
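A rough sketch of what I mean (the storage key and data shape are made up): seed state from a static JSON file once, then read and write browser storage from then on.

```javascript
// Prototype persistence entirely in the browser tier: seed state from a
// static JSON file on first run, then use localStorage afterward.
// (The "app.todos" key and the data shape are made up.)
function loadTodos(callback) {
  var cached = localStorage.getItem("app.todos");
  if (cached) {
    callback(JSON.parse(cached));
    return;
  }
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "mocks/todos.json", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      localStorage.setItem("app.todos", xhr.responseText);
      callback(JSON.parse(xhr.responseText));
    }
  };
  xhr.send(null);
}

function saveTodos(todos) {
  localStorage.setItem("app.todos", JSON.stringify(todos));
}
```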

