Developing Situational Applications with Web 2.0 Mashups

Wring out the advantage of Web 2.0

•   Pages are personalized and remember who the user is and what he or she did last, but users don't follow a predefined process.

•   Applications stitch pages together on the fly, generating the page in the user's browser from multiple sources. Instead of rendering pages via a single trip to a server, applications rely on content from multiple services that may reside on separate virtual or physical servers, which means multiple round trips between the browser and the various content sources. And because the dynamically generated content depends on the individual user, the frequency and order of those trips are highly unpredictable.

•   Popular content is apt to be viral, which means that the volume of traffic can jump exponentially with little warning. And the more features and user-generated content you have, the faster it can happen.

Creating an application that can deal with these characteristics for a known number of users is challenging enough. But if your site suddenly becomes hugely popular, the complexity of managing your application increases exponentially. Ultimately, having your application succeed beyond your expectations isn't a bad problem to have. The need for new features and the need to scale to accommodate unpredictable loads, however, put enormous demands on you and your application. You need an architecture that's agile enough to accommodate both growth and sudden surges.

Achieving such an architecture is no simple task. Page rendering in applications that incorporate user-generated content is substantially more complex than traditional data-driven Web applications. Additionally, in the race to deliver new features as quickly as possible, developers often turn to tools designed to accelerate development, many of which aren't optimized for performance at scale. When you combine more complex development processes with unpredictable use patterns and a rush to get applications to market as quickly as possible, programming for scalability inevitably takes a back seat.

Fundamentally, to ensure optimal performance even under extreme swings in demand, you need a system that can intelligently distribute load and lets you scale individual components of the application as needed. To achieve that, you need to be thinking about two key strategies: granular distribution and specialization.

Architecting More Granular Distribution
The traditional solution to increasing the scalability of an application has been basic distribution - throwing more hardware at the problem and distributing the application load among more servers. The problem with this strategy is that it's only effective if your entire application scales symmetrically, and in a Web 2.0 world, that's extremely unlikely. Your image demands may be rising much faster than your page computation demands, for example, but adding servers and having them all do the same work doesn't take that into account. What you need is a system that distributes intelligently at a more granular level - that's organized to scale individual components of the environment as needed. And that requires both a more intelligent approach to distribution and greater use of specialization.

The key to effective distribution is the ability not only to replicate servers but to manage all of those servers as a group. But the biggest impediment to doing that (and to responding dynamically to rapid changes in demand) is hidden resource affinities. The most common affinity is session, but there are a number of others in ASP.NET. Session affinity cripples the ability to distribute load between servers because a given user must always work (or "stick") with the same server where the session data resides. The theory of distribution is that you can double the number of users you can support by doubling the number of servers. An affinity, like session, undermines that behavior, so doubling the number of servers may only support 50% more users. Over time, that ratio continues to degrade until you get virtually no additional load support for additional servers.

As an application is developed, developers focus primarily on features and performance, and affinity issues rarely have a high priority. As long as a relatively small number of users are using the application, those affinities don't present a significant problem. When the application grows in popularity and requires more resources, however, they can significantly impair the application's ability to scale. Ultimately, they can make it impossible to load balance effectively, undermining the entire distribution strategy.

To get rid of session affinity, you must move from an in-process session to an out-of-process session. ASP.NET includes out-of-process options for handling session state. Without any additional application coding, you can configure the Web server to store session data in a separate database. However, developers typically avoid this solution because the additional processing tends to sap performance. The two extra trips across the internal network (reading session data from the database at the beginning of each request and writing it back at the end) can make an out-of-process session take as much as six times longer than an in-process session - a huge impact on overall application performance.
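To make that concrete, here is a minimal sketch (not from the article itself) of ordinary ASP.NET session usage. The web.config fragment in the comment is the kind of setting that switches session storage to a database, and the ShoppingCart class is an illustrative assumption; the one code-level consequence of going out-of-process is that whatever you put in session must be serializable.

// Minimal sketch: application code that uses session state looks the same
// whether session is in-process or out-of-process. The difference is
// configuration, e.g. in web.config:
//
//   <sessionState mode="SQLServer"
//                 sqlConnectionString="Data Source=sessionDb;Integrated Security=SSPI"
//                 timeout="20" />
//
// Once session data leaves the worker process, everything stored in it must be
// serializable, because it is persisted and reloaded around each request.

using System;
using System.Collections.Generic;
using System.Web.UI;

[Serializable]
public class ShoppingCart
{
    public List<string> Items = new List<string>();
}

public partial class CartPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Read: with an out-of-process store, this costs a round trip to the
        // session database (or state server) at the start of the request.
        var cart = (ShoppingCart)Session["cart"] ?? new ShoppingCart();

        var item = Request.QueryString["item"];
        if (!string.IsNullOrEmpty(item))
        {
            cart.Items.Add(item);
        }

        // Write: flushed back to the out-of-process store at the end of the request.
        Session["cart"] = cart;
    }
}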

Fortunately, these out-of-process options aren't the only way to solve the session state affinity problem. One of the great things about ASP.NET is its broad support for third-party tools, components, and services. Session state management, in particular, uses a standard set of interfaces for storing and retrieving data, which means that many steps in the request processing pipeline can be handled by code from third-party vendors and solutions. This opens the door to third-party software and hardware products that address affinity.

Software solutions are available that provide distributed in-memory caching of session state and other workload data, partitioned across a Web server farm. There are also hardware solutions, such as the Strangeloop AS1000 Application Scaling Appliance, which centrally manages session state from an appliance. Because hardware solutions are deployed in-line, between the network load balancer and the application servers, they can manage session information out-of-process without a performance penalty. In Figure 1, you can see where the acceleration appliance sits in the Web farm, so that it can provide out-of-process session data while minimizing performance impact.
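The partitioning idea behind those software solutions can be sketched generically. The following is an illustrative toy, not the API of any particular product: it shows only how a session id can be mapped deterministically to one cache node, so every server in the farm looks for a given user's data in the same place and no request has to stick to the server that created the session.

using System;
using System.Collections.Generic;

// Illustrative toy, not any vendor's API: map each session id to one cache
// "node" so the session data has a single, predictable home in the farm.
// In a real product each node would be an out-of-process cache on the network;
// here the nodes are in-memory dictionaries to keep the sketch self-contained.
public class PartitionedSessionCache
{
    private readonly List<Dictionary<string, object>> _nodes = new List<Dictionary<string, object>>();

    public PartitionedSessionCache(int nodeCount)
    {
        for (int i = 0; i < nodeCount; i++)
        {
            _nodes.Add(new Dictionary<string, object>());
        }
    }

    // Any web server computes the same node for the same session id.
    private Dictionary<string, object> NodeFor(string sessionId)
    {
        int bucket = (sessionId.GetHashCode() & 0x7fffffff) % _nodes.Count;
        return _nodes[bucket];
    }

    public void Put(string sessionId, object data)
    {
        NodeFor(sessionId)[sessionId] = data;
    }

    public object Get(string sessionId)
    {
        object data;
        return NodeFor(sessionId).TryGetValue(sessionId, out data) ? data : null;
    }
}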

Specialization
Distributing load more intelligently is the first step to creating a more agile application, but a second, equally important, requirement is specialization. Fundamentally, specialization is the process of taking specific elements that the application reuses and isolating them from other elements. By doing that, you can distribute the workload more evenly and scale individual elements independently, as needed. Three immediate targets to consider for specialization are image handling, encryption, and caching.

IMAGES
Images are fundamentally different from the rest of an ASPX page and are handled by an entirely different part of IIS. So why put the additional load of image handling on servers that are primarily geared toward ASPX processing when you can move that work somewhere else? You can handle images with separate IIS servers inside your data center that are configured and optimized for image retrieval. You can also use third-party image services, such as Akamai, and take image handling out of your environment entirely.
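One simple way to keep that flexibility in your own code is to generate image URLs through a single helper, so pointing images at dedicated image servers or a third-party service becomes a configuration change. The helper and the "ImageBaseUrl" app setting below are illustrative assumptions, not something prescribed by the article.

using System;
using System.Configuration;

// Illustrative helper (the "ImageBaseUrl" app setting is an assumption for this
// sketch): route all image references through one place, so images can be served
// by dedicated image servers or a third-party service without touching markup
// throughout the application.
public static class ImageUrl
{
    private static readonly string BaseUrl =
        ConfigurationManager.AppSettings["ImageBaseUrl"] ?? "/images";

    public static string For(string relativePath)
    {
        // e.g. For("products/123.jpg") -> "https://images.example.com/products/123.jpg"
        // when ImageBaseUrl is "https://images.example.com"
        return BaseUrl.TrimEnd('/') + "/" + relativePath.TrimStart('/');
    }
}

In ASPX markup the helper might be used as <img src='<%= ImageUrl.For("header/logo.png") %>' />, so switching image hosts requires changing only one setting.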

Of course, distributing image management isn't without its challenges. It's code-intensive, and it can make the management of your application more complicated. When you're updating your site, for example, you have to update image servers as well as Web servers.


More Stories By Kent Alstad

Kent Alstad, CTO of Strangeloop Networks, is a principal or contributing author on all of Strangeloop's pending patents. Before helping create Strangeloop, he served as CTO at IronPoint Technology. Kent also founded Eclipse Software, a Microsoft Certified Solution Provider, which he sold to Discovery Software in 2001. In more than 20 years of professional development experience, Kent has served as architect and lead developer for successful production solutions with The Active Network, ADP, Lucent, Microsoft, and NCS. Kent holds a Bachelor of Science in psychology from the University of Calgary.
