Niraj Bhatt – Architect's Blog

Ruminations on .NET, Architecture & Design

Category Archives: Performance Tuning

ASP.NET Session Timeout Not Working

Many of us are familiar with ASP.NET session state and the various options surrounding it. Applications are either aggressive or relaxed about their session lifetime, and in most cases you will be inclined to change the default session timeout of 20 minutes. The obvious place to do this is web.config's sessionState element, as shown below:

<system.web>
  <sessionState timeout="40" /> <!-- timeout is in minutes -->
</system.web>

Unfortunately there are caveats. If you are using forms authentication, there is a separate timeout for the underlying forms authentication cookie. Its default value is 30 minutes, so users may find themselves thrown out of the system before their session expires. Hence you want to ensure that the cookie lifetime matches the session timeout.

<system.web>
  <authentication mode="Forms">
    <forms loginUrl="~/Login.aspx" timeout="40" /> <!-- timeout is in minutes -->
  </authentication>
</system.web>

Finally, you might also want to check your application pool's idle timeout in IIS. If a sole user is browsing your site (a rare case, admittedly) and that user's idle time exceeds the application pool idle timeout, the worker process can be shut down and in-process session state lost, so you again see unexpected behavior. Ensure the application pool idle timeout also matches (or exceeds) your session timeout.
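As a hedged illustration only: on IIS 7 or later the same setting can also be changed programmatically through the Microsoft.Web.Administration API (the pool name MyAppPool below is just a placeholder; on IIS 6 / Windows 2003 you would set it through the application pool's properties in IIS Manager instead):

// Sketch: requires a reference to Microsoft.Web.Administration.dll (IIS 7+) and admin rights
using System;
using Microsoft.Web.Administration;

using (ServerManager manager = new ServerManager())
{
    ApplicationPool pool = manager.ApplicationPools["MyAppPool"]; // placeholder pool name
    pool.ProcessModel.IdleTimeout = TimeSpan.FromMinutes(40);     // match the session timeout
    manager.CommitChanges();
}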

Hope this helps!!!

Load Balancing vs. Failover Clustering

A post pending for a long time!!! From a distance both look quite similar and are a point of confusion for many. The rationale, though, is that Load Balancing addresses scalability while Failover Clustering addresses high availability. Load Balancing is all about improving performance (scale), while Failover Clustering is about improving uptime by mitigating system failures. Another difference is that you would typically find Load Balancing applied to web/application servers (hopefully stateless) and failover clustering to database servers (stateful). The industry seems to use the word "Cluster" (a set of connected nodes) for both – but with the different intents of load and failover.

Both are also separate things in terms of configuration & setup. For instance, Windows 2003 (currently what we have in production) has separate options for Load Balancing & Clustering. Windows' recommendation is not to mix the two, i.e. you shouldn't cluster machines for failover that are already configured for load balancing.

Setting up load balancing is simple – you need a couple of machines connected to a common network and an additional IP that clients connect to. This virtual IP, against which clients make their requests, is in turn used to balance load across the nodes that are part of this cluster (the load-balancing cluster).

Setting up failover clustering, on the other hand, is a little more complex. You need two networks – a public one and a private one (heartbeat) – a shared drive (called the Quorum), and an additional public IP (in addition to the minimum of 2 public and 2 private IPs that the 2 systems will have). Remember, creating a failover cluster at the Windows level is a primary requirement for building a failover SQL Server cluster. The reason to create a Windows-level cluster is to install the required cluster services and create cluster groups (logical collections of nodes). You can select a cluster group (obviously at least 2 nodes should be part of this group) and configure a SQL Server cluster, or anything else, on top of it.

A SQL Server cluster in turn requires an additional IP, another shared disk for the installation & database files (this disk is a shared resource for the chosen cluster group), a domain account (that has administrative privileges on all nodes) & a group on that domain to which that account has complete access. The shared Quorum and shared disk are normally part of SAN storage. I have also come across quite a few implementations using StarWind or similar tools to create these shared iSCSI targets in the form of virtual disks (.img). It might be helpful to know that Windows 2003 doesn't have the iSCSI initiator built in; you can download the same from here.

Hope the above helps to some extent 🙂 .

.NET Worker Threads, I/O Threads And Asynchronous Programming

Before I talk about asynchronous programming, I will outline the types of threads available in the .NET environment. The CLR maintains a pool of threads (to amortize thread creation cost), and it should be the first place to look when you need more threads. The thread pool maintains 2 types of threads – worker & I/O. As the name implies, worker threads are computational threads, while I/O threads are used for waits (blocking) of long duration (e.g. when invoking a remote WCF service). A good rule to follow is to ensure all your waits are on I/O threads, provided they are long enough – otherwise you would end up degrading performance due to the extra context switch. Let's understand what the above statement means with a bit of code. Your entry point to the CLR's thread pool is the ThreadPool class. The code below shows how many threads are available in the thread pool:

int wt, iot;
ThreadPool.GetAvailableThreads(out wt, out iot); // how many more worker / I/O threads can still be used
Console.WriteLine("Worker = " + wt);
Console.WriteLine("I/O = " + iot);

One can request a worker thread via ThreadPool.QueueUserWorkItem. If all worker threads are occupied, your work item simply waits in the queue until a worker thread becomes available. The sample below shows how you can consume & monitor worker threads.

int wt, iot;
for (int i = 0; i < 10; i++)
{
    ThreadPool.QueueUserWorkItem(Dummy); // each queued item occupies a worker thread
}
Console.ReadLine(); // give the queued items a moment to start before measuring
ThreadPool.GetAvailableThreads(out wt, out iot);
Console.WriteLine("Worker = " + wt); // 10 fewer worker threads are now available
Console.WriteLine("I/O = " + iot);   // I/O threads remain untouched

static void Dummy(object o)
{
    Thread.Sleep(50000000); // keep the worker thread blocked
}

Unlike worker threads, there is no direct API to request an I/O thread. But .NET leverages I/O threads automatically when we use asynchronous programming. The code below uses asynchronous operations on the client side to invoke a WCF service:

int wt, iot;
for (int i = 0; i < 10; i++)
{
    /* Invoke a WCF service via a proxy, asynchronously */
    client.BeginGetData(10, null, null); /* put some Thread.Sleep in the server-side code */
}
ThreadPool.GetAvailableThreads(out wt, out iot);
Console.WriteLine("Worker = " + wt);
Console.WriteLine("I/O = " + iot);

Contrary to the expectations of many, the output of the above program shows that only one I/O thread is in use instead of 10. This optimization is what makes I/O threads so important and something you should care about. As a good designer of your application, you want to ensure that a minimal number of threads are blocked, and I/O threads give you exactly that. Also note that when the call returns, an I/O thread is picked from the pool and the callback method executes on it – hence we should avoid touching the UI directly from the callback method.
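To make the callback flow concrete, here is a minimal sketch of the same call made with a callback (the proxy type ServiceClient and its BeginGetData/EndGetData operations are placeholders carried over from the snippet above, not a specific generated proxy):

// Issue the call with a callback; the calling thread is not blocked
client.BeginGetData(10, OnGetDataCompleted, client);

static void OnGetDataCompleted(IAsyncResult ar)
{
    // Runs on a thread-pool I/O thread, not on the thread that started the call
    ServiceClient proxy = (ServiceClient)ar.AsyncState; // placeholder proxy type
    int result = proxy.EndGetData(ar);                  // completes the asynchronous call
    Console.WriteLine("Result = " + result);
    // Marshal back to the UI thread (Dispatcher / Control.Invoke) before updating any UI elements
}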

So why should one use asynchronous programming? There are quite a few reasons, but what I also see is that many programmers miss server-side asynchronous programming & focus just on the client aspect of it. Also, irrespective of whether you are working on the client or the server side, I strongly recommend you set timeouts for your I/O operations (WCF defaults this to 60 seconds).

Client Side:

1) You don't want to block the UI while the server processes the request (though I have found that this creates other issues like handling UI parts that should be activated only after the request completes, accessing the UI from the callback thread, exception handling, etc.)

2) You want to issue simultaneous requests to your server so that all of them are processed in parallel and your effective wait time is the wait time of the longest request (though if you are making requests to the same server you can try the DTO pattern to create one chunky request & break it up on the server for parallel processing)

3) You just want to queue up a request (message queue)

Server Side:

1) Normally there is a throttle placed on server-side request handling. If your server-side code is I/O oriented (calling a long-running DB query / remote web service), you are better off doing that work asynchronously so the request thread is freed up. For more information you can refer to the ASP.NET link and WCF link.
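As a hedged sketch of what this can look like on the WCF side, an operation can be declared with AsyncPattern so the service does not hold a request thread while waiting on I/O (IOrderService and GetData are placeholder names, not from the original post):

// Sketch: requires System.ServiceModel
[ServiceContract]
public interface IOrderService
{
    // AsyncPattern=true exposes the Begin/End pair as a single GetData operation,
    // letting the service start the I/O work and release the thread until it completes
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetData(int value, AsyncCallback callback, object state);

    int EndGetData(IAsyncResult result);
}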

I haven't covered asynchronous ADO.NET in this post; you can find more information about it here.

So what are your scenarios for using asynchronous programming?

Performance Testing – Response vs. Latency vs. Throughput vs. Load vs. Scalability vs. Stress vs. Robustness

Normally I find quite a bit of ambiguity when people talk about performance tests: some restrict the term to response time, whereas others use it to cover the whole gamut of things they are testing or measuring. In this post, I will put across a few thoughts contrasting them. A lot depends on what you are trying to measure. The terms that you will frequently hear in this arena are Response Time, Latency, Throughput, Load, Scalability, Stress, Robustness, etc. I will try explaining these terms below, also throwing some light on how you can measure them.

Response Time – the amount of time the system takes to process a request after it has received one. For instance, if you have an API and you want to find how much time that API takes to execute once invoked, you are in fact measuring response time. So how do we measure it? Simple: use a Stopwatch (System.Diagnostics) – start it before calling the API & stop it after the API returns. The duration arrived at for a single call is usually quite small, so a preferred practice is to call the API in a sequential loop, say 1000 times, or pass variable load to the API if possible (input/output varying from KBs to MBs to GBs, e.g. returning customer arrays of varied lengths).
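A minimal sketch of that measurement (CallApi below is just a placeholder for whatever operation you are timing):

var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 1000; i++)   // loop because a single call is usually too fast to time reliably
{
    CallApi();                   // placeholder for the API under test
}
sw.Stop();
Console.WriteLine("Average response time (ms) = " + sw.ElapsedMilliseconds / 1000.0);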

Latency – in simplest terms, this is remote response time. For instance, you want to invoke a web service or access a web page. Apart from the processing time needed on the server to handle your request, there is a delay involved for your request to reach the server (and for the response to come back). When we refer to latency, it's that delay we are talking about. This becomes a big issue when a remote data center is hosting your service/page. Imagine your data center is in the US and you are accessing it from India. If ignored, latency can cause you to breach your SLAs. Though it's quite difficult to improve latency, it's important to measure it. How do we measure latency? There are network simulation tools out there that can help you – one such tool can be found here.

Throughput – the number of transactions per second your application can handle (the motivation for / result of load testing). A typical enterprise application will have lots of users performing lots of different transactions. You should ensure that your application meets the required capacity of the enterprise before it hits production, and load testing is the way to do that. The strategy here is to pick a mix of transactions (frequent, critical, and intensive) and see how many pass successfully in an acceptable time frame governed by your SLAs. How to measure it? You normally require a high-end professional tool here, like Visual Studio Team System (its Load Testing feature). Of course, you can try to simulate load through custom-made applications/code, but my experience is that custom code is good for testing response times, whereas writing custom code for load testing is too much work. A good load-testing tool like VSTS allows you to pick a mix of transactions, simulate network latency, incorporate user think times, run test iterations, etc. I would also strongly recommend this testing be as close as possible to the real world, with live data.

Scalability – the measure of how your system responds when additional hardware is added. Does it take on increased load by making use of the added resources? This becomes quite important when taking into consideration the growth projections for your application. Here we have two options – scale vertically/up (a better machine) or horizontally/out (more machines); the latter is usually the preferred one. A challenge in scaling out is ensuring that your design doesn't have any server affinity, so that a load balancer can spread load across servers. Measuring scalability can be done with the help of load-testing tools with a software/hardware NLB in place, verifying that the system is able to take on new load without any issues. One can monitor performance counters to see whether the actual request load has been balanced/shared across servers (I plan to cover NLB in a future post).

Stress testing – many people confuse this with, or relate it to, load testing. My take, which I have found easy to explain, is: if you find yourself running tests for more than 24 hours, you are doing a stress test (more precisely, for your production window, i.e. the duration before you take your machine offline for a patch, etc.). The motivation behind a stress test is to find out how easily your system recovers from overloaded (stressed) conditions. Does it limp back to normalcy or give up completely? Robustness, an attribute that is measured as part of stress testing, relates to long-running systems with almost negligible downtime. A simple example here could be a memory leak: does your system release memory after working at peak loads? Another: what happens if a disk fails due to constant heavy I/O load – does your system lose data? Finding and addressing such concerns is the motivation behind stress testing.

I look forward to reading your thoughts on the above 🙂 .

Cost Based Optimization (CBO) vs. Rule Based Optimization (RBO)

These terms were brought up in a recent meeting, so I decided to dig into them. They are the optimization strategies used by database engines for executing a query or a stored procedure. They come into the picture after a query or stored procedure is compiled and is just about to execute (most databases also cache the generated execution plans). The topic of optimization strategies & their differences can be a huge one (one could probably write a book on it), but in this post I will try to keep things simple, at a definition level. (An analogy: you want to travel from point A to B, & you have several routes to pick from.)

Rule Based Optimization: this is an older technique. Basically, the RBO uses a fixed set of rules to determine how to execute a query, e.g. if an index is available on a table, the rule can be to always use that index (an RBO rule for our travel analogy could be to avoid all routes with speed breakers). As it turns out, this is simpler to implement but not always the best strategy, and it can backfire. A classic example of indexing a gender column is shown here in a similar post. RBO was supported in earlier versions of Oracle. (SQL Server supports table hints, which in a way can be compared to RBO, as they force the optimizer to follow a certain path.)

Cost Based Optimization: the motivation behind CBO is to come up with the cheapest execution plan available for each SQL statement. The cheapest plan is the one that will use the least amount of resources (CPU, memory, I/O, etc.) to get the desired output (in relation to our travel analogy, this could be petrol, time, etc.). This can be a daunting task for the DB engine, as complex queries can have thousands of possible execution paths and selecting the best one can itself be quite expensive. For more information on CBO I suggest you go through "Inside MS SQL Server 2005: T-SQL Querying". CBO is supported by most databases, including Oracle, SQL Server, etc.

(N.B. If you find the execution plan selected by the DB engine is not the optimal one, you can try breaking your query into smaller chunks or changing the query logic.)

As a programmer you should strive to ensure that cached query plans are reused as much as possible. One technique that can get you going is using parameterized queries, & this turns out to be important even if you are using an O/R mapper like NHibernate, as shown in this post. A related topic with CBO is that of statistics. Statistics determine the selectivity of indexes: if an indexed column has unique values then the selectivity of that index is high, as opposed to an index over non-unique values. The query optimizer uses these statistics in determining whether or not to choose an index while executing a query. Some situations under which you should update statistics: a) there is a significant change in the key values of the index; b) a large amount of data in an indexed column has been added, changed, or removed, or the table has been truncated using the TRUNCATE TABLE statement and then repopulated; c) the database has been upgraded from a previous version. One can use UPDATE STATISTICS / sp_updatestats to update statistics for a table or an index.
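As a minimal sketch of a parameterized query in ADO.NET (assuming a hypothetical Customers table and an existing connectionString; all names here are placeholders), which lets the database reuse the cached plan across different parameter values:

using System;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Name FROM Customers WHERE City = @City", conn))
{
    cmd.Parameters.AddWithValue("@City", "Pune"); // the parameter keeps the plan reusable across values
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine(reader.GetString(0));
    }
}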

I look forward to hearing your thoughts on the above.

(P.S. TOAD from Quest is a very useful tool if you want to deep dive into execution plans; just feed your query / SP to it and it will provide many alternative plans, suggesting optimizations & indexes.)

Getting ANTS Profiler 4.x out of VS.NET 2008 Menu

This is bad, and I would prefer that a company like RedGate take care of it. Recently I had to profile my XBAP application & was looking for a suitable memory profiler. I installed the ANTS Profiler 4.3 trial edition and discovered it doesn't support memory profiling of XBAP applications (I guess it offers only performance profiling for them). With a sigh I had to uninstall it, as the profiler had added a few menus in VS.NET which I no longer needed. To my surprise, though the ANTS Profiler uninstall was successful, the menu items were still hanging around. This bothered me, as those disabled menu items were occupying a lot of unnecessary space. After some experimentation I was finally able to fix it. The steps are outlined below:

In VS.NET 2008, go to Tools -> Customize. That brings up the Customize dialog box.

[Screenshot: the Customize dialog box]

Click the 'Rearrange Commands' button to get another dialog box, as shown below. Select the Toolbar radio button and remove any menus you think are not required.

[Screenshot: the Rearrange Commands dialog box]

And yeah, .NET Memory Profiler supports XBAP applications 🙂 .