
ASP.Net Web Forms dying a slow death?

Being a web programmer for the last 10 years, I've seen a few different paradigms come and go in the web world. Looking around today, I can't help but think that the writing is on the wall for the ASP.Net "webforms" web application model. When it debuted, it was revolutionary to have a fully typed, compiled OO framework that was virtually portable across mobile and desktop. Since then, however, ASP.Net has not done much to improve on that initial offering. Yes, the 2.0 framework was much more robust, and VS 2005 had lots of IDE help for webmasters, but fundamentally, the same problems that held back developers of 1.0 sites are holding back developers of 3.0 sites -- namely, the ASP.Net page lifecycle.

Viewstate and lifecycle events are still very confusing for new ASP.Net programmers. There is no easy way for them to immediately "get it" -- they have to spend time in the trenches, watching their data disappear on postback, or double-bind, and flail around building and rendering their own web controls. They have to see the DataGrid spew its html diarrhea and spend hours customizing it. They have to have a client ask them "what the hell is all this javascript, and why are the pages so big?" and figure out just what the heck all that gibberish on their pages is.

Moving up the experience chain, Viewstate and lifecycle events still eat a significant amount of design and front-end time on a web application, even for those who have been doing it for a while. You have to balance your state management against your html optimization, caching, and application maintainability. Often you have to build your own state-tracking structures or extend existing ones. You have to carefully consider whether to roll your own custom controls or buy third-party interfaces and rely on their javascript and state programming (or maybe their support team!). And for very simple "read-only" web sites, viewstate just gets in the way.

So in summary, Viewstate and lifecycle are still a pain in the butt, and still represent hurdles for new programmers. Years ago, it was worth the annoyances and problems, because you could write strongly typed, object-oriented, portable code without getting lost in folders and folders of scripted sites. So why might it be dying? Well…

The first item on the agenda is Silverlight. ASP.Net offered significant advantages for writing portable code that felt closer to classic desktop programming, since it abstracted away much of the xhtml. But now I've seen web application developers swoon over Silverlight. If their reaction, and the amount of momentum Microsoft is putting behind WPF, are any indication, many shops are going to give up programming their internal or non-public projects in ASP.Net and move to Silverlight instead. I'm personally still skeptical about Silverlight's market penetration, as there are many gaps it doesn't fill for content publishers. But it's not a minor consideration.

Second is the new ASP.Net MVC framework. MVC is an old model that's rapidly gaining traction in the web world, and with the introduction of the ASP.Net MVC framework, it is hard to see any clear advantage to the older webforms model. From Scott Guthrie's Introductory MVC post a couple months ago:

To help enforce testability, the MVC framework today does not support postback events directly to server controls within your Views. Instead, ASP.NET MVC applications generate hyperlink and AJAX callbacks to Controller actions -- and then use Views (and any server controls within them) solely to render output. This helps ensure that your View logic stays minimal and solely focused on rendering, and that you can easily unit test your Controller classes and verify all Application and Data Logic behavior independent of your Views.

And finally, we come to REST, another model that is gaining traction. True REST and webforms are mutually exclusive, since REST relies heavily on mapping resource locators to known states of an application. This model has lots of advantages for web services and data-driven applications, especially for testing, while ASP.Net Webform applications are often built around one URL for many states, using Viewstate and Session as the state map -- which are, of course, lost between sessions and server restarts.
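As a rough sketch of the difference (the routes and handlers here are invented for the example), a RESTful design maps each resource locator to a known application state, which is exactly what makes individual requests reproducible and testable:

```javascript
// Rough sketch (routes and handlers invented for the example): each URL
// pattern names a known application state, so any request can be
// reproduced -- and unit tested -- in isolation, with no hidden state map.
const routes = [
  { pattern: /^\/orders\/(\d+)$/, action: (id) => `show order ${id}` },
  { pattern: /^\/orders\/(\d+)\/edit$/, action: (id) => `edit order ${id}` },
];

function dispatch(path) {
  for (const { pattern, action } of routes) {
    const match = path.match(pattern);
    if (match) return action(...match.slice(1));
  }
  return "404";
}

console.log(dispatch("/orders/42"));      // "show order 42"
console.log(dispatch("/orders/42/edit")); // "edit order 42"
```

Contrast this with a single postback URL whose meaning depends on an opaque Viewstate blob: the URL alone no longer tells you, or a test harness, which state you are in.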

So long story short… I think we're seeing the beginning of a new paradigm. In 5 years, will anyone still be developing with true ASP.Net Webforms? It will be interesting to see!


10 Responses to “ASP.Net Web Forms dying a slow death?”

  1. Daniel says:

    Excellent article!

    I remember myself a couple of years ago, when I developed my first fully rendered web components. I tried to figure out the view state and IPostBackDataHandler stuff, and I was extremely annoyed by the fact that the events did not come in the order I preferred :) etc.

    Also I had the chance to see that if you leave the default values for viewState, you sometimes end up with a hidden field ten times bigger than the rest of the page.

    I spent some time reading a lot and trying a lot about that. Fortunately, although the topic was complicated, it was very well documented.

    Now that I understand it, I believe it is an invaluable tool. You just have to know how to use it and yes, I agree, it is not easy.

    I don't know what the future of the web will be, but I believe sites that have a server-side technology (php, java, .net etc) and a "normal" browser on the other side are here to stay. See Java Applets, ActiveX, Laszlo -- they are there, but did they catch on more than the "regular" web approach? No…

  2. Dag Johansen says:

    Reading this makes me think about how symptomatic this is of Microsoft technologies. Admittedly they have put out some things that I, at least, would not consider proper tools for making software, notably VB6. But how many times have they shipped things that are actually quite capable, while simultaneously lowering the threshold for beginners to the point where it comes back to bite them in their behinds?

    Here's what I mean: If you went around looking at companies and their databases a few years ago, and I suspect largely today, you'd find that practically anyone who used Oracle had a DBA. Many who had SQL Server didn't. Why? Because SQL Server shipped with administrative tools that were sufficiently easy to use that a whole lot of non-DBAs were able to get something up and working. Of course these people had much less understanding of how to maintain a database, define the right indexes and rebuild them periodically, update statistics, shrink transaction logs, and so on. And so a lot of SQL Server-based systems' performance degraded over time, more so than Oracle-based ones.

    A similar thing happened with ASP.NET. I've replied to countless forum messages that demonstrated the person who had made a web page had never reflected on the fact that they were building a client-server application, much less an actual web application! I've seen the question asked many times: how can I open a new window in the button_click handler? This was in 1.x, where no client-click handler was supported "out of the box" (though one could of course attach one with client-side script, whether included statically or emitted dynamically).

    I think it is fundamentally unfair to blame the technology in cases where it's plainly the developer's fault. Here it's viewstate, and sure, if you actually depended on it to use the controls, I would agree. But you do not. And if you were to implement a page that can be submitted, render again, and still show the same table of data, what would your alternatives be in, say, php? Ultimately it comes down to the same basic possibilities: load it from a db or file on each request, cache it in session, or cache it on the client and include it in the submitted data -- which of course is what viewstate does. Since viewstate can be disabled both at the page and individual control levels at the (double-)click of a mouse button, all it really does is offer the developer the "cache at the client" option with zero programming effort. All the other options are still wide open and require no more work to implement in .net than in other technologies, and arguably quite a bit less (databinding to files and dbs would typically require moving one line in page_load from inside to outside the usually-needed if (!IsPostback) block!).
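    The "cache it on the client" option described above can be sketched generically. Here is a minimal Node.js illustration (the function names are invented for the example) of essentially all that ViewState automates: the server serializes the data into an opaque blob, renders it into a hidden field, and restores it from the posted-back value without touching the database or session.

```javascript
// Minimal sketch (invented names) of the "cache it on the client" option:
// serialize the data into an opaque blob, render it into a hidden form
// field, and restore it on postback -- no db or session lookup required.
function encodeState(state) {
  // Stand-in for ViewState's encoded blob: base64-encoded JSON.
  return Buffer.from(JSON.stringify(state)).toString("base64");
}

function decodeState(blob) {
  return JSON.parse(Buffer.from(blob, "base64").toString("utf8"));
}

const table = [{ id: 1, name: "Widget" }, { id: 2, name: "Gadget" }];

// Rendered into the page as <input type="hidden" name="state" value="...">.
const hiddenField = encodeState(table);

// On postback the browser echoes the field, and the server restores it.
const restored = decodeState(hiddenField);
console.log(restored[1].name); // "Gadget"
```

    This also shows where the "hidden field ten times bigger than the page" complaint comes from: the whole blob travels with every request.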

    .NET is incredibly rich, and for developers who bother to learn it and who are capable programmers, I remain convinced there's rather more you can do with it than with alternatives such as PHP. Admittedly it's been a few years since I used PHP, but back then at least there was no way I could have done ANY of the following (all of which I have done in .net, and all of which are in production at various companies in Europe):

    – Dynamic code generation and compilation (creating compiled JSON-serializers for our .NET types and caching them for reuse, incurring the compile overhead only for the first instance serialized, just like compiled regex, my inspiration).

    – Multi-threaded queueing system allowing the app to limit the number of concurrent tasks executing, with limits per task type.

    – Scheduler service running as a windows service and depending on the same business layer as the web application.

    – Programmatically log on to other computers to write files there, so that the application as a whole does not need to run under an identity with network access privileges.

    – RSA/SHA-1 encryption and digital signatures.
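    The first item in that list, compile-once-and-cache, is the same trick in any language with runtime code generation. A JavaScript sketch of the idea (hypothetical names, with `new Function` standing in for .NET's runtime compilation): generate a serializer for a given field list once, then reuse the cached function, just like a compiled regex.

```javascript
// Sketch (hypothetical names): build a JSON serializer for a fixed list of
// fields once, cache it, and reuse it -- paying the generation cost only
// for the first object serialized, like a compiled regex.
const serializerCache = new Map();

function serializerFor(fields) {
  const key = fields.join(",");
  if (!serializerCache.has(key)) {
    // Generate the function body once per distinct field list.
    const body =
      "return '{' + " +
      fields
        .map((f) => `'"${f}":' + JSON.stringify(obj.${f})`)
        .join(" + ',' + ") +
      " + '}';";
    serializerCache.set(key, new Function("obj", body));
  }
  return serializerCache.get(key);
}

const toJson = serializerFor(["id", "name"]);
console.log(toJson({ id: 7, name: "Dag" })); // {"id":7,"name":"Dag"}
console.log(serializerFor(["id", "name"]) === toJson); // true (cached)
```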

    Even the much simpler stuff, like having a common base class for all pages in an app, and perhaps some sub-hierarchy below it depending on the part of the site, was impossible back when I used PHP. And because the coding model was just server-side tags, the execution order of your code basically depended on the layout of the web page, making it rather fragile should you want to move the markup around! I'm sure PHP has improved greatly since then, but I am also quite sure it isn't really an alternative to .NET, certainly not for building anything that may one day be more than just a web site.

  3. Bob Dickow says:

    I use PHP and ASP.NET, depending on the client's host, my designer's preference, the size of the job, etc. PHP has plenty of warts. Maintaining security is harder. Versions need to be watched. Keeping too much script out of the markup is time-consuming. The object model is weak. Tracing the logic of a full project is sometimes complex (ever thread through something like Zencart???!!!), and getting the code needed to do sophisticated tasks (encryption) takes extra research. Maintaining page state takes more work, and I often find myself doing it all over again from scratch for each project. PHP doesn't display in my designer's DreamWeaver any better than ASPX pages do. PHP can be more fragile; my designer is frequently mangling the script tags, etc. PHP doesn't have Intellisense support (one thing I love about MS VS) that I know of, and PHP can be more verbose in code. On and on. I could make parallel complaints about C# and ASP.NET. I can make complaints about any of the 12 computing languages I know. Will any of these languages be around in 5 years? But using either PHP or C#/.NET or whatever properly, knowledgeably and efficiently takes experience. As for accessibility for beginners, maybe PHP is simpler. It is more like BASIC. But when I look at my first PHP projects now, even though I was an experienced programmer generally, they were spaghetti code. It all comes out in the wash.


  4. Dag Johansen says:

    @Daniel: Regarding server-side versus client-side.

    I think this will very much depend on to what extent the W3C manages to keep evolving the standards, and get solid browser support for them, in a manner that allows the creation of web applications that meet people's needs.

    In many ways, the current state of affairs is a mess. HTML was not designed for making applications, but for publishing (hyperlinked) documents. While that is a very useful capability, it has its limits. Even tagging DHTML and JavaScript onto this model to make for dynamic documents leaves us quite far from a suitable general application platform.

    To see what I mean, just ponder this whole ViewState thing and why it is there. Imagine a page is requested that contains a TABLE. Even if this TABLE is contained by a FORM, the data in the table is not sent by the browser when the browser submits the form -- because HTML's vision of forms was never anything as sophisticated as that. Only INPUT elements contained in the form are posted, and what's more, each input is posted simply as a name-value pair, such as "firstNameTextBox=Daniel".

    This works beautifully for publishing documents, but it creates a lot of headache for someone trying to create applications. Say you want users to edit the data directly in the table and save it (on the server side). DHTML and JavaScript let you do that, because you can dynamically change the document to put textboxes inside the table cells to put a row into editing mode. But to get the data to the server, you have to either use AJAX, or put invisible INPUT elements that *are* posted to the server inside the form. Of course, this latter is exactly what ViewState does.
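    What the invisible-INPUT workaround boils down to can be sketched in a few lines (the helper name here is invented for illustration): the edited table data has to be flattened into the name-value pairs that an HTML form submit will actually carry.

```javascript
// Sketch (invented helper name): flatten table rows into the hidden
// name-value inputs that a plain HTML form submit will actually carry,
// since the TABLE markup itself is never posted.
function toHiddenInputs(rows) {
  return rows.flatMap((row, i) =>
    Object.entries(row).map(
      ([field, value]) =>
        `<input type="hidden" name="row${i}_${field}" value="${value}">`
    )
  );
}

const inputs = toHiddenInputs([{ id: 1, name: "Daniel" }]);
console.log(inputs[1]); // <input type="hidden" name="row0_name" value="Daniel">
```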

    To take another example, say a user can change the color or font of some element on a web page. This too can be done relatively easily on the client, but again the problem comes when the form is submitted, and again you have to write a lot of infrastructure code yourself if you want to get this information to the server.

    Nowadays there are sufficiently stable and well-understood abstractions of these techniques that a developer can get such tasks done without necessarily knowing much about what the HTTP requests and responses actually look like or contain -- at least as long as they do not want to develop controls themselves, but are content to build on the abstractions others have created for them. However, the solutions are still workarounds for a technology ill-conceived for the task, and as such they will never become very efficient.

    Something like Silverlight, where all of a sudden most of the .NET Framework becomes available client-side and you can use binary formatting to transfer objects between client and server, is inherently far more powerful, both in terms of development effort and in terms of run-time performance, which ultimately determines the limits of what it can do.

    What I personally would have liked to see was a non-proprietary technology that otherwise was much like Silverlight in terms of allowing the client to run (JITted) native code and thus do lots of heavy lifting, but combined with open standard protocols for communication between client and server. A plug-in is the way to do the migration, because there is "so much web on the internet" already (Flash or Silverlight isn't really "web technology" in the W3C sense) that one couldn't just replace the browser with something like a "silverlight client". But a plug-in of course transforms the browser into such a client as well.

    The problem with a proprietary standard is that it allows only two possible outcomes, both of which are deeply problematic: Either one standard completely dominates, in which case the owner of the standard will have far too much market power. This would hurt everyone but that owner. Or else we'd have many competing standards, spurring innovation and all that, but trashing interoperability and severely limiting what sort of mashups could be easily made.

    In any case, I think anyone who has used both the web and native versions of virtually *any* software (Gmail being the only thing close to an exception that I can think of) will quickly realize what a big difference there still is in user experience. I see this especially on the iPhone, where there are often "mobile web" and native versions, but I am fairly sure I wouldn't want to use a web version of PhotoShop, to put it that way.

    Of course, there are other possibilities. Many desktop applications stay native but add client-server behaviors such as storing data in the cloud and syncing between locations. Again the problem with this approach is that it zaps interoperability. I can sync my Opera bookmarks between all my Opera clients, and I can do the same in Chrome or in FireFox (plug-in required). But to have it all work across these products I must either give up the convenience and power of native apps (though I can access my Opera bookmarks by signing on to Opera's web site, but it's surely less convenient than having them just appear in my bookmarks in Chrome or FireFox) or manually resort to import/export processes which are difficult at best given the constraints of the real world (things like network barriers between our corporate WLAN and my home computer or iPhone).

    My conclusion, therefore, is that the world needs a client-server standard as interoperable and widely accepted as current web standards, but designed to be a true application development platform, not just an interlinked-document publishing system with a bunch of tacked-on layers of inefficiency to "hack" the system into doing things it was never made for and will never be good at. It is my hope that the W3C would seriously consider adopting those (considerable) bits of .NET that *are* open (ECMA) standards, so that something like it can rescue us from a scenario where we get lots of capable client-server applications that cannot interoperate.

    I have at least as much to say on the related topic of bringing *data* into the picture, but I'll save you the pain, as this is quite long enough already and I fear few will even read this far. I'll instead point those who wonder what I'm on about to Hans Rosling's organization Gapminder, where the idea is better expressed than I could hope to achieve anyway.

  5. Dag Johansen says:

    BTW, I just realized something immediately after posting that: I'm going fairly far off topic here – after all the posting is about PHP vs .NET. My apologies!

  6. Bob Dickow says:

    Dag, your analysis is quite on target. As for Silverlight-like technology and www evolution, I hear you. Another side of all this, though, is the user-friendliness of web publishing. On some level, HTML was for simple folks to have access to publishing stuff on the web. Such a technology as HTML should always be available. Silverlight, after all, is NOT for the faint of heart. Also, what about my designer? Talk about interoperability! She does beautiful work, but she is not happy with the direction I want to go. She can't deal with my ASP.NET master pages. She can't deal with my dynamic controls. She barely understands how this magic works, and doesn't know what I'm talking about when I mention ViewState. She thinks MS is evil when she sees all the ViewState-encoded stuff. I tell her PHP would have to do it another way, using more bytes. She doesn't believe me. She is appalled that a web page source shows up with weird javascript and long ID strings. She works on a Mac. I work on a PC. Now I'm going off topic!


  7. Dag Johansen says:

    Thanks Bob. I think this is more a matter of tools than of standards. If you think back to the beginnings of the web and its early development, everyone (rightly) thought about making sure it was accessible, so that most anyone could publish their ideas to the world. But in my view it was a mistake to do so by making standards that were not strict, and browsers that tolerated invalid documents and had to second-guess the intentions of the author. How much pain hasn't this led to, in terms of pages not looking the same, or even not working, in some browsers?

    Of course it was difficult back then to imagine that the web would become what it has. But in hindsight it's easy to see that if there had been strict standards, and browsers showed nothing but an error message when a document contained the slightest syntax error, that would only have led to publishing tools that truly respected the standards. As it was, we instead got a bunch of tools that produced invalid documents that looked fine in Netscape Navigator (and later in Internet Explorer). As an Opera user I sometimes amuse myself by using the "validate page" feature available straight from the context menu. This uploads the document to W3C's online validator, and even now in 2010 I almost never encounter documents that are actually error-free.

    When people have better tools than notepad to create their HTML and CSS files the leniency becomes a liability rather than an asset. In a similar way, I think it would be a mistake to put much emphasis on a graphical designer's understanding of how Silverlight or something like it works. The makers of the technology must keep such things in mind of course, but it's my understanding that this was one important design goal of WPF (closely related to Silverlight). I haven't got the experience to say if it works as of today, but tools like Expression Blend are intended for designers and supposed to give them the power to control look & feel and even behavior to some extent (navigation and animation at least) using a purpose-specific tool and without knowledge of how it all works under the hood. And this is but the first attempt at graphical design tools in the WPF context.

    With ASP.NET, I'd agree it's difficult to cleanly divide these responsibilities unless great care is taken not to just use the tools in the most straightforward ways. But it is possible, and it requires the same sort of work as doing it with plain HTML. For example, setting properties like fonts and colors and borders directly on elements places style attributes on those elements, and thus becomes redundant all over the place. But there are at least two alternatives. User controls are one: even a standard unmodified button might make sense as a user control, because it gives you a central place to change it and apply the change across a site. As far as look is concerned, though, CSS is a better alternative. Well-written ("semantic") CSS files are understood by web designers and their tools alike, and can give an ASP.NET site great flexibility in terms of at least layout and look, if not so much behavior. The same result as the well-known CSS showcase sites (one well-designed HTML document, with astonishingly different results depending on which CSS document you choose to apply to it) can be achieved in ASP.NET, but it requires both knowledge (not of graphical design, but of how to separate layout and design from content and behavior, enabling a designer to do their job) and not a little effort. WPF aims to make this kind of separation the default, so you can achieve it without studying the matter and spending a lot of time thinking about it when building a system.

    I hope we are forgiven for this (imo) interesting but rather off-topic discussion. Alas (for my ego if nothing else) I doubt my thoughts on these matters make any difference to what actually comes to pass!

  8. very good stuff. Do you have an RSS feed? And would it be cool if I included your feed to a site of mine? I have a website that pulls content via RSS feeds through a several sites and I'd like to include yours, most folks really don't mind given that I link back and everything but I like to get authorization 1st. Anyway let me know if you could, thank you.

  9. Bob Dickow says:

    The tendency for ASP.NET pages to generate verbose and redundant code is certainly true, and a valid criticism. I have a hunch they knew that there was a tradeoff there. It's great for a simple page to be able to put an attribute just on one control, or two… very quick and convenient. But more difficult to maintain because there is no central Css. User Controls are better, but if you have pages with, say, a couple of dozen different ones, they too carry a lot of overhead for other reasons. ASP.NET 4.0 looks like a great improvement in the area of Css treatment, and an avoidance of the use of tables for laying controls out. Now, I've posted… gotta go do a gig for somebody. In PHP this time!
