Niraj Bhatt – Architect's Blog

Ruminations on .NET, Architecture & Design

Category Archives: .NET

NuGet Package Restore, Content Folder and Version Control

I was recently explaining this nuance to a Dev on my team, and he suggested I should capture this in a blog post. So here we go. First some NuGet background.

NuGet is the de facto standard for managing dependencies in the .NET world. Imagine you have some reusable code – rather than sharing that functionality with your team via a DLL, you can create a NuGet package for them. What's the benefit, you may ask?

1) Firstly, NuGet can do a lot more than add a DLL to your project references. Your NuGet package can add configuration entries as part of the installation, execute scripts, or create new folders / files within the Visual Studio project structure, all of which greatly simplifies the adoption of your reusable code.

2) Secondly, as a package owner you can declare dependencies on other packages or DLLs, so when a team member installs your package, she gets all the required dependencies in one go.

3) Finally, the NuGet package is local to your project; the assemblies are not installed into your system's GAC. This not only keeps development clean, but also helps at build time. Packages don't have to be checked into version control; instead, you can restore them on your build server at build time – no more shared lib folders.

Creating NuGet packages is quite a simple process. Download the NuGet command line utility, organize the artifacts (DLLs, scripts, source code templates, etc.) you want to include into their respective folders, create the package metadata (nuget spec), and pack them (nuget pack) to get your .nupkg file. You can then install / restore the package through Visual Studio or through the command line (nuget install / restore).

Typically NuGet recommends four folders to organize your artifacts – 'lib' contains your binaries, 'content' contains the folder structure and files that will be added to your project root, 'tools' contains scripts such as init.ps1 and install.ps1, and 'build' contains custom build targets / props.

Now let's get to the crux of this post – the restore aspect and what you should check into your version control. When you add a NuGet package to your project, NuGet does two things – it creates a packages.config file and a packages folder. The config file keeps a list of all the added packages, and the packages folder contains the actual packages (basically an unzip of your .nupkg file). The recommended approach is to check in your packages.config file but not the packages folder. As part of NuGet restore, NuGet brings all the packages back into the packages folder (see the workflow image below).

(Figure: NuGet package restore workflow)

The subtle catch is that NuGet restore doesn't restore content files, nor does it perform the transformations that are part of the package. Those changes are applied the first time you install the package, and they should be checked into version control. It also follows that you shouldn't put any DLLs inside the content folder (they belong in the lib folder anyway); if you must, you will have to check even those DLLs into your version control.


In summary, NuGet restore just restores the package files; it doesn't perform any tokenization, transformation or script execution. Those activities are performed at package installation time, and the corresponding changes must be checked into version control.


Dependency Inversion, Dependency Injection, DI Containers and Service Locator

Here is a post I have been planning to write for years 🙂 . Yes, I am talking about Dependency Injection, which by this time has most likely made its way into your code base. A recent discussion made me recollect a few thoughts around it, so I am writing them down.

Dependencies are the norm in object oriented programming, helping us adhere to software principles like Single Responsibility, Encapsulation, etc. Rather than establishing dependencies through direct references between classes or components, they are better managed through an abstraction. Most of the GoF patterns are based on this principle, commonly referred to as 'Dependency Inversion'. Using the Dependency Inversion principle we can create flexible object oriented designs, making our code base reusable and maintainable.

To further enhance the value proposition of dependency inversion, you can pass the dependencies in via a constructor or a property from the root of your application, instead of instantiating them within your class. This allows you to mock / stub your dependencies and makes your code easily testable (read: unit testing).

E.g.

IA ob = new A(); // direct instantiation
public D(IA ob) { … } // constructor injection
public IA Ob { get; set; } // property injection – create the object and set the property

The entire software industry sees value in the above approach, and most people refer to this overall structure (pattern) as Dependency Injection. But beyond this, things get a little murkier.

The confusion starts with programmers who see the above as the standard way of programming. To them, Dependency Injection (DI) is the automated way of injecting dependencies using a DI container. (A DI container is at times also referred to as an IoC (Inversion of Control) container. As Martin Fowler points out in his classic article, IoC is a generic term used in quite a few contexts – e.g. the callback approach of the Win32 programming model. To avoid confusion and keep this approach discernible, the term DI was coined; here, IoC refers to inverting the creation of dependencies, which are created outside of the dependent class and then injected.) If you are surprised by this statement, let me tell you that 90% of the people who have walked up to me to ask whether I was using DI really wanted to know about the DI container and my experience with it.

So what the heck is this 'DI container'? A DI container is a framework which can identify the constructor arguments or properties of the objects being created and automatically inject them as part of object creation. Coming from the .NET world, the containers I use include StructureMap, Autofac and Unity. DI containers can be wired up with a few lines of code at the start of your program, or you can even specify the configuration in an XML file. Beyond that, containers are transparent to the rest of your code base. Most containers also provide AOP (Aspect Oriented Programming) functionality and its variants. This allows you to bundle cross-cutting concerns like database transactions, logging, caching, etc. as aspects and avoid boilerplate code throughout the system (I have written a CodeProject article on those lines). Before you feel I am over-simplifying things, let me state that if you haven't worked with DI containers in the past you are likely to face a learning curve. As is the case with most other frameworks, a pilot is strongly recommended. As a side note, the preferred injection rule is – unless your constructor requires too many parameters (ensure you haven't violated SRP), you should stick to constructor injection and avoid property injection (see Fowler's article for a detailed comparison between the two injection types).

Finally, let's talk about Service Locator, an alternative to DI. A Service Locator holds all the services (dependencies) required by your system (registered in code or via a configuration file) and returns a specific service instance on request. A Service Locator can come in handy in scenarios where a DI container is not compatible with a given framework (e.g. WCF, ASP.NET Web API) or where you want more control over object creation (e.g. creating the object late in the cycle). While you can mock a service locator, doing so is a little more cumbersome than with DI, and Service Locator is generally seen as an anti-pattern in the DI world. Interestingly, most DI containers offer APIs that let you use them as a Service Locator (look for the Resolve / GetInstance methods on the container).
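To make the contrast concrete, here is a minimal, hand-rolled sketch (the ServiceLocator, IFormatter and ReportGenerator types below are hypothetical, not from any framework): the dependent class pulls its dependency out of the locator instead of receiving it through its constructor.

using System;
using System.Collections.Generic;

// A naive service locator; real containers expose similar Resolve / GetInstance APIs
public static class ServiceLocator
{
    private static readonly Dictionary<Type, Func<object>> _factories = new Dictionary<Type, Func<object>>();

    public static void Register<T>(Func<T> factory) where T : class
    {
        _factories[typeof(T)] = () => factory();
    }

    public static T GetService<T>() where T : class
    {
        return (T)_factories[typeof(T)]();
    }
}

public interface IFormatter
{
    string Format(string input);
}

public class ReportGenerator
{
    public string Generate()
    {
        // The dependency is looked up from inside the class (service location),
        // unlike DI where it would arrive via the constructor or a property
        IFormatter formatter = ServiceLocator.GetService<IFormatter>();
        return formatter.Format("Annual Report");
    }
}

The hidden dependency on the locator itself is what makes this style harder to unit test than constructor injection: every test first has to remember to register the right fakes with the locator.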

The sample below shows StructureMap – a DI container – in action (use NuGet to add the StructureMap dependency).

using System;
using StructureMap;

class Program
{
    static void Main(string[] args)
    {
        // Scan the calling assembly and map IDependency -> Dependency via default conventions
        var container = new Container(registry =>
        {
            registry.Scan(x =>
            {
                x.TheCallingAssembly();
                x.WithDefaultConventions();
            });
        });

        var ob = container.GetInstance<DependentClass>();
        ob.CallDummy();
    }
}

public interface IDependency
{
    void Dummy();
}

public class Dependency : IDependency
{
    public void Dummy()
    {
        Console.WriteLine("Hi There!");
    }
}

public class DependentClass
{
    private readonly IDependency _dep;

    public DependentClass(IDependency ob)
    {
        _dep = ob;
    }

    public void CallDummy()
    {
        _dep.Dummy();
    }
}

I will try to post about some subtle issues around DI containers in the future. I hope the above helps anyone looking for a quick start on DI and its associated terms 🙂 .

Controlling Windows Azure VM lifetimes

Most cloud computing resources are billed on an hourly basis, and it's important that you release them when you no longer need them. This typically applies to Windows Azure Virtual Machines not running 24×7 – for example, a business workload that requires an application to be available only twelve hours on weekdays. To limit running costs most users stop their VMs, only to realize that Windows Azure bills for VMs even in a stopped state. So your only option to control costs is to delete the VMs. But wouldn't deleting a VM cause issues?

The answer is both no and yes. When you delete a VM you are just deleting the VM instance; the underlying OS and data disks are still intact (in fact, you keep paying for their storage, which luckily is quite negligible). Hence, you can easily resurrect your VM without much harm. It's important to note that when you delete the VM, you still retain the underlying Cloud Service container and its associated site URL. The issue you might face when you delete and re-create the VM, though, is that the public IP address changes. I work in an organization with strict IT security rules and locked-down access; a static IP was necessary for me to raise an outbound RDP access request with my IT team, and with the IP changing every day this was turning into a challenge. In the end, the solution I adopted was to create an extra small VM running 24×7 and bounce from there to the other VMs.

To delete a VM you can use the Azure PowerShell cmdlets, which allow you to export your VM configuration, delete the VM, and later re-create it from the exported configuration.

Export-AzureVM -ServiceName '<service name>' -Name '<vm name>' -Path 'c:\vmconf.xml'

Remove-AzureVM -ServiceName '<service name>' -Name '<vm name>'

Import-AzureVM -Path 'c:\vmconf.xml' | New-AzureVM -ServiceName '<service name>' -VNetName '<vnet name>' -DnsSettings '<dns settings>'

The exported configuration, as shown above, is stored in an XML file. It contains various properties of the VM, including endpoints, disks, VM size, subnet, etc. The snapshot below from the Azure Portal shows an empty Cloud Service container after deletion of the VM. Currently there is no cost associated with an empty cloud service, and it's important to note that when you retain the Cloud Service you also retain its underlying DNS URL.

Creating PivotTable and PivotChart using VSTO

Extending the previous discussion on PivotTables, in this blog post I will show how you can create a PivotTable and PivotChart using Visual Studio Tools for Office (VSTO). After populating data into an Excel worksheet, generating PivotTables is quite a common task for VSTO developers. It can be broken down into the steps below:

a) Create the table range where the PivotTable will be placed
b) Create the chart range where the PivotChart will be placed
c) Determine the data range that will act as input for the pivot (generally you may want to create many pivot tables out of this range)
d) Create a PivotCache object using the data range as input
e) Determine the destination range for the PivotTable
f) Generate the PivotTable and set the orientation of the pivot fields
g) Create the chart using the PivotTable as input

Let’s have a quick look at the C# source code below:


// Assumes: using Excel = Microsoft.Office.Interop.Excel; (the alias is used throughout below)
// Step A, B - ranges where the PivotChart and PivotTable will be placed
Worksheet worksheet = Globals.Factory.GetVstoObject(
    (Excel.Worksheet)Globals.ThisAddIn.Application.ActiveWorkbook.ActiveSheet);

var chartRange = worksheet.Range["G3", "J15"];
var pivotTableRange = worksheet.Range["G20", "J30"];

// Step C - data range that feeds the pivot, derived from an existing list object
// (FindListObject is a helper defined elsewhere in the add-in)
var listObject = FindListObject("demoDataList");
var end = listObject.DataBodyRange.get_End(Excel.XlDirection.xlDown).Row;
var dataRangeForPivot = worksheet.Range["A1", "W" + end.ToString()];

// Step D - create the PivotCache using the data range as input
Excel.PivotCache pivotCache = this.Application.ActiveWorkbook.PivotCaches().Add(Excel.XlPivotTableSourceType.xlDatabase, dataRangeForPivot);

// Step E - destination range for the PivotTable
Excel.Range dataRangeForPivotTable = worksheet.Range["H35", "H" + end.ToString()];

// Step F - generate the PivotTable and set the orientation of the pivot fields
Excel.PivotTable pivotTable = pivotCache.CreatePivotTable(dataRangeForPivotTable, @"DTable", Type.Missing, Type.Missing);
Excel.PivotField demoField = (Excel.PivotField)pivotTable.PivotFields(8);
demoField.Orientation = Excel.XlPivotFieldOrientation.xlRowField;
demoField.Orientation = Excel.XlPivotFieldOrientation.xlDataField;
demoField.Function = Excel.XlConsolidationFunction.xlCount;

// Step G - create the chart using the PivotTable range as its source
Chart chart = worksheet.Controls.AddChart(chartRange, "DChart");
chart.ChartType = Excel.XlChartType.xlColumnClustered;
chart.SetSourceData(pivotTableRange);
chart.ChartTitle.Text = "Demo Analysis";

Confusing? I hope not 🙂

ConnectionTimeout vs CommandTimeout

ConnectionTimeout, a property of the Connection class in ADO.NET, is the time you are willing to wait for a connection to a given database to be established before flagging a connection failure. The default value is 30 seconds.

// ConnectionTimeout is read-only; it is driven by the 'Connect Timeout' key of the connection string
int timeout = new SqlConnection("Data Source=.;Connect Timeout=10").ConnectionTimeout; // 10

CommandTimeout, a property of the Command class in ADO.NET, is the time you are willing to wait for a command (query, stored procedure, etc.) to return a result set before flagging an execution failure. The default value for CommandTimeout is also 30 seconds.

new SqlConnection().CreateCommand().CommandTimeout = 10;

Unlike ConnectionTimeout, which is part of the connection string, command timeouts are defined separately (hardcoded or as appSettings). Setting either of them to 0 (zero) results in an indefinite wait, which is generally considered bad practice. Ideally you want to set both of them in accordance with your performance SLAs, though at times, while running data-heavy background jobs / workflows, you may want to set your CommandTimeout to a larger value.
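Putting the two together, here is a minimal sketch (the connection string, table name and timeout values are placeholders): the connection timeout comes from the 'Connect Timeout' key of the connection string, while the command timeout is set on the command object.

using System.Data.SqlClient;

public static class TimeoutDemo
{
    public static int CountRows()
    {
        // 'Connect Timeout=10' drives SqlConnection.ConnectionTimeout (10 seconds instead of the default 30)
        using (var connection = new SqlConnection(@"Data Source=.;Initial Catalog=MyDb;Integrated Security=True;Connect Timeout=10"))
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "SELECT COUNT(*) FROM SomeLargeTable"; // placeholder query
            command.CommandTimeout = 120; // allow a longer-running query, e.g. for a background job

            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}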

What are Encodings?

A few days back I got an email query on one of my blog posts related to WCF encodings, asking for help to understand things better, especially around the encoding part. While I am not an SME on encodings, I thought of sharing a few thoughts.

Encoding is a word I am sure most of us have come across. If you are a developer this is even more true, with terms like Base64 encoding / HTML encoding popping up quite often. In the simplest terms, encoding converts / maps your input data (character set) to another character set; the output character set varies depending on the encoding you choose. While there can be many motivations for using encodings, the most frequent one is to prevent unintentional loss of data. Let's see a couple of examples.

a) I had given an example in one of my earlier blog posts on ASP.NET security which used HTML encoding to prevent string data from being interpreted by the browser as script. The second line of code below keeps the string data intact, while the former is interpreted as script:

Response.Write("<script>alert('Niraj')</script>");
Response.Write(Server.HtmlEncode("<script>alert('Niraj')</script>")); // or HttpUtility.HtmlEncode

b) Consider another scenario where we want to insert the string below into SQL Server:

string Key = "S" + Convert.ToChar(0x0) + "!@#$";

The problem with the above is the NULL character (Convert.ToChar(0x0)), which databases generally treat as the end of the string. So in order to insert the string into the database you may as well encode it. The lines below show how to encode / decode strings using the Base64 .NET APIs:

// To encode; UTF8 is backward compatible with ASCII
Convert.ToBase64String(Encoding.UTF8.GetBytes(Key))
// To decode
Encoding.UTF8.GetString(Convert.FromBase64String(base64))

As most of you are aware, ASP.NET ViewState is also Base64 encoded; to experiment, just copy the view state of your page and try to decode it using the above APIs.

Coming back to my blog post on WCF encodings, I had mentioned that Base64 encoding (used by text encoding) inflates the size of string data. The Wikipedia article explains this quite clearly (see the Examples section): Base64 generally inflates the size of data by about 33%. That's why you might prefer binary encoding in non-interoperable scenarios, and MTOM when transferring large chunks of binary data.
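To see the inflation for yourself, here is a small sketch (the input string is arbitrary): every 3 input bytes become 4 output characters, which is where the ~33% figure comes from.

using System;
using System.Text;

class Base64Inflation
{
    static void Main()
    {
        byte[] input = Encoding.UTF8.GetBytes("Hello World, Hello World!!!"); // 27 bytes
        string encoded = Convert.ToBase64String(input);

        Console.WriteLine("Input bytes   : " + input.Length);   // 27
        Console.WriteLine("Base64 length : " + encoded.Length); // 36 characters (27 / 3 * 4)
    }
}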

Hope this helps 🙂 .

PDB and ClickOnce

A bit of history first. A PDB (Program Database) file is sometimes essential during debugging. By default, a stack trace points to the function where the problem lies but doesn't include the line numbers at which errors were thrown, and sometimes those are quite critical for a serious production issue. Best practice is to build PDBs as part of your build process (both Debug and Release), exclude them while creating your installers, and ship them to production when you need to diagnose your code. Another related thing to keep in mind while debugging a critical issue: you can create a build with code optimization turned off (project properties -> Build tab -> Optimize code (uncheck)). This helps you get an accurate stack trace, free of code optimizations like inlined functions.

Let's get back to PDBs, the topic of this post. Normally, enabling them is just a matter of turning them on (Project Properties -> Build -> Advanced -> Debug Info = pdb-only) and copying the generated PDBs to the deployment directory. But when you are using ClickOnce, things are different. In ClickOnce, the assemblies are downloaded to the client's local machine and then executed (which is how you get the benefit of auto update). So how do you ensure that the client downloads the PDBs along with the assemblies? Fortunately VS.NET simplifies this for us. The steps are below:

1) Go to the Publish tab of your project properties and click on the "Application Files" button.

2) By default, PDB files are not bundled for publish. You need to check the "Show All Files" check box and then you will see the PDBs. They are again excluded by default, so include them. And you are all set to get line numbers and file names in your stack trace.
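As a quick illustration of what you gain, here is a minimal sketch (the file name and line number in the comment are hypothetical): with the matching PDB deployed next to the assembly, the exception's stack trace carries file and line information; without it you only get method names.

using System;

class PdbDemo
{
    static void Fail()
    {
        throw new InvalidOperationException("boom");
    }

    static void Main()
    {
        try
        {
            Fail();
        }
        catch (Exception ex)
        {
            // With the PDB: "at PdbDemo.Fail() in C:\src\PdbDemo.cs:line 7"
            // Without it:   "at PdbDemo.Fail()"
            Console.WriteLine(ex.ToString());
        }
    }
}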

Hope this helps 🙂 .

.NET Worker Threads, I/O Threads And Asynchronous Programming

Before I talk about asynchronous programming, I will outline the types of threads available in the .NET environment. The CLR maintains a pool of threads (to amortize the cost of thread creation), and it should be the first place to look when you need more threads. The thread pool maintains two types of threads – worker and I/O. As the name implies, worker threads are computational threads, while I/O threads are meant for waits (blocking) of long duration (e.g. when invoking a remote WCF service). A good rule to follow is to ensure all your waits happen on I/O threads, provided they are long enough; otherwise you end up degrading performance due to the extra context switch. Let's understand what the above statement means with a bit of code. Your entry point to the CLR's thread pool is the ThreadPool class. The code below shows how many threads are available in the thread pool:

int wt, iot;
ThreadPool.GetAvailableThreads(out wt, out iot);
Console.WriteLine("Worker = " + wt);
Console.WriteLine("I/O = " + iot);

You can only request worker threads directly, via ThreadPool.QueueUserWorkItem. If all worker threads are occupied, your work item stays queued until a worker thread becomes available. The sample below shows how you can consume and monitor worker threads.

int wt, iot;
for (int i = 0; i < 10; i++)
{
    ThreadPool.QueueUserWorkItem(Dummy);
}
Console.ReadLine();
ThreadPool.GetAvailableThreads(out wt, out iot);
Console.WriteLine("Worker = " + wt);
Console.WriteLine("I/O = " + iot);

static void Dummy(object o)
{
    Thread.Sleep(50000000);
}

Unlike worker threads, there is no direct API to request an I/O thread. But .NET leverages I/O threads automatically when you use asynchronous programming. The code below uses asynchronous operations on the client side to invoke a WCF service:

for (int i = 0; i < 10; i++)
{
    /* Invoke a WCF service via a proxy, asynchronously */
    client.BeginGetData(10, null, null); /* Put some Thread.Sleep in the server-side code */
}
ThreadPool.GetAvailableThreads(out wt, out iot);
Console.WriteLine("Worker = " + wt);
Console.WriteLine("I/O = " + iot);

Contrary to the expectations of many, the output of the above program shows that only one I/O thread is in use instead of 10. This optimization is what makes I/O threads so important and something you should care about: as a good designer of your application you want to ensure that a minimal number of threads are blocked, and I/O threads give you exactly that. Also note that when the call returns, an I/O thread is picked from the pool and the callback method is executed on it; hence we should avoid touching the UI from the callback method.
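To illustrate the callback flow, here is a hedged continuation of the snippet above (assuming the proxy exposes the standard generated BeginGetData / EndGetData pair returning a string): the callback runs on a thread-pool I/O thread, so any UI work must be marshalled back to the UI thread.

// Continuing with the same hypothetical WCF proxy as above
client.BeginGetData(10, asyncResult =>
{
    // This callback executes on a thread-pool (I/O) thread, not on the caller's thread
    string result = client.EndGetData(asyncResult);

    // Don't touch UI controls from here; marshal back via Control.Invoke (WinForms)
    // or Dispatcher.Invoke (WPF) if the result has to be shown on screen
    Console.WriteLine("Service returned: " + result);
}, null);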

So why should one use asynchronous programming? There are quite a few reasons, but what I also see is that many programmers miss server-side asynchronous programming and focus just on the client aspect of it. Also, irrespective of whether you are working on the client or the server side, I strongly recommend you set timeouts for your I/O operations (WCF defaults these to 60 seconds); a sketch of setting those timeouts appears towards the end of this post.

Client Side:

1) You don't want to block the UI while the server processes the request (though I have found this creates other issues, like handling UI parts that should only be activated after the request completes, accessing the UI from the callback thread, exception handling, etc.)

2) You want to issue simultaneous requests to your server so that all of them are processed in parallel and your effective wait time is the wait time of the longest request (though if you are making requests to the same server you can try the DTO pattern to create one chunky request and break it up on the server for parallel processing)

3) You just want to queue up a request (message queue)

Server Side:

1) Normally there is a throttle placed on server-side request handling. If your server-side code is I/O oriented (calling a long-running DB query / remote web service), you are better off doing that work asynchronously. For more information you can refer to the ASP.NET link and WCF link.

I haven't covered asynchronous ADO.NET in this post; you can find more information about it here.
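Coming back to the timeout recommendation above, here is a minimal client-side sketch (the binding type, values and proxy name are illustrative) of tightening WCF's default timeouts in code; the same values can of course be set in the binding configuration instead.

using System;
using System.ServiceModel;

public static class WcfTimeouts
{
    public static BasicHttpBinding CreateBinding()
    {
        // Tighten the client-side timeouts instead of living with the defaults
        return new BasicHttpBinding
        {
            OpenTimeout = TimeSpan.FromSeconds(15),
            SendTimeout = TimeSpan.FromSeconds(30),    // covers the whole request / reply of a call
            ReceiveTimeout = TimeSpan.FromMinutes(10), // idle time the channel may stay unused
            CloseTimeout = TimeSpan.FromSeconds(15)
        };
    }
}

// Usage with a hypothetical generated proxy:
// var client = new ServiceClient(WcfTimeouts.CreateBinding(), new EndpointAddress("http://server/Service.svc"));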

So what are your scenarios for using asynchronous programming?

Loading 2 Versions of the same .NET Assembly

This can be an interview question 🙂 . Let’s say I have a class

public class Class1 // version 1.0.0.0 - in C:\
{
    public string SayHello()
    {
        return "Hello World";
    }
}

and I version it to 2.0.0.0

public class Class1 // version 2.0.0.0 - in the C:\2 folder
{
    public string SayHello()
    {
        return "Hello India";
    }
}

So you have two assemblies. Now write a program in .NET to print "Hello World" & "Hello India". Simple – use Assembly.LoadFrom (LoadFrom takes a path, while Load goes through the assembly resolver & loader).

class Program
{
    static void Main(string[] args)
    {
        Assembly assm = Assembly.LoadFrom(@"C:\ClassLibrary1.dll");
        Type[] types = assm.GetTypes();
        foreach (var t in types)
        {
            Object obj = assm.CreateInstance(t.FullName);
            Console.WriteLine(t.GetMethod("SayHello").Invoke(obj, null));
        }

        Assembly assm2 = Assembly.LoadFrom(@"C:\2\ClassLibrary1.dll");
        Type[] types2 = assm2.GetTypes();
        foreach (var t in types2)
        {
            Object obj = assm2.CreateInstance(t.FullName);
            Console.WriteLine(t.GetMethod("SayHello").Invoke(obj, null));
        }
    }
}

But to your surprise the output remains "Hello World" & "Hello World". If you put the second block first, the output changes to "Hello India" & "Hello India". Neither is what we expected. Conclusion – it's not possible to load two assemblies with the same name inside a single AppDomain. So you will have to create a separate AppDomain & load the second version of the assembly into it, as shown below:

class Program
{
    static void Main(string[] args)
    {
        Assembly assm = Assembly.LoadFrom(@"C:\ClassLibrary1.dll");
        Type[] types = assm.GetTypes();
        foreach (var t in types)
        {
            Object obj = assm.CreateInstance(t.FullName);
            Console.WriteLine(t.GetMethod("SayHello").Invoke(obj, null));
        }

        AppDomain app = AppDomain.CreateDomain("Child", AppDomain.CurrentDomain.Evidence, new AppDomainSetup { ApplicationBase = @"C:\2" });
        Assembly assm2 = app.Load(@"ClassLibrary1");
        Type[] types2 = assm2.GetTypes();
        foreach (var t in types2)
        {
            Object obj = assm2.CreateInstance(t.FullName);
            Console.WriteLine(t.GetMethod("SayHello").Invoke(obj, null));
        }
    }
}

Do you have any better ways?

Logging Best Practices

I have been using Log4Net for all my projects, though the Logging Application Block from Enterprise Library seems to be equally good. There is a full-fledged war out there between the two, but I don't see a major motivation to move to the latter (in spite of .NET Framework logging being built around trace sources). Log4Net has several logging levels which you can flip through your config file – FATAL, ERROR, WARN, INFO & DEBUG – ordered from the most restrictive logging to the most verbose.

Normally I find myself using only ERROR & DEBUG, as most of the time it's hard to get developers to draw the line between the other levels. My minimum instructions (though not enough – see the postscript at the end of this post) are:

1) Use the ERROR level for all exception logging (ensure that you hook into unhandled exceptions)
2) Use the DEBUG level for activity logging (this is a bit tricky and there are no firm guidelines for it). Activity logging needs to be contextual, with information relevant to your domain. Some programmers also use it to find performance bottlenecks in their system in production.

I will briefly elaborate the steps to get started with Log4Net for the above (skip them if you aren't looking for them):

1) Add a reference to log4net.dll in your project (this is required per project).

2) Modify your config file to add a log4net config section & specify the appenders. This configuration goes into the main project's config file, not the class library projects' – e.g. if you have a Presentation layer (ASP.NET) & a Data Access Layer (C# class library), only the web.config of your presentation project should be altered.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <!-- This is a must addition -->
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net" />
  </configSections>
  <log4net>
    <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender,log4net">
      <param name="File" value="C:\\HRESWeb.log" /> <!-- File where the log should reside -->
      <param name="AppendToFile" value="true" />
      <!-- For date based rolling backups -->
      <param name="rollingStyle" value="Date" />
      <param name="datePattern" value="yyyyMMdd" />
      <layout type="log4net.Layout.PatternLayout,log4net">
        <!-- Pattern to add timestamp, type, etc. -->
        <param name="ConversionPattern" value="%d [%t] %-5p %c [%x] - %m%n" />
      </layout>
    </appender>
    <root>
      <level value="ERROR" /> <!-- Just change the level to DEBUG for informative logging -->
      <appender-ref ref="RollingFileAppender" />
    </root>
  </log4net>
</configuration>

3) Add an entry in the AssemblyInfo.cs of the project whose config you edited in the second step.

[assembly: log4net.Config.XmlConfigurator(ConfigFile = "ConsoleApplication7.exe.config")]
or, for Web.config:
[assembly: log4net.Config.XmlConfigurator(ConfigFile = "Web.config")]

4) Start logging

Normally you would create one static ILog object per class by specifying the class type as a parameter & then use the Debug & Error methods for logging:

static readonly ILog log = LogManager.GetLogger(typeof(Program));

log.Debug("Debug Message"); // No need to check if the Debug level is enabled; Log4Net does that internally
log.Error("Error Message");

// A sample method with the minimum features
public void SomeActivity()
{
    try
    {
        log.Debug("Entering"); // Activity logging; add context information here, e.g. the patient or claim you are processing
        // ... do something
        log.Debug("Exiting"); // Activity logging
    }
    catch (Exception ex)
    {
        log.Error("Description : ", ex); // This also captures inner exceptions, if any
        throw; // Don't forget this
    }
}

Logging is also normally implemented via AOP (I had posted a similar article using Unity on CodeProject, though this one deals with transactions).

A recommendation is that exceptions should also be posted to the Windows Event Log (you can create a source / category for your application), the motivation being that this increases visibility & makes it easy for the production folks to locate them instead of wading through your text logs; Udi captures the same thought here.
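A minimal sketch of doing that (the source name below is a placeholder; creating an event source needs admin rights and is usually done once by the installer, not at runtime):

using System;
using System.Diagnostics;

public static class EventLogWriter
{
    private const string Source = "MyApplication"; // placeholder source name for your app

    public static void WriteError(Exception ex)
    {
        // CreateEventSource requires admin rights - typically performed by the installer
        if (!EventLog.SourceExists(Source))
        {
            EventLog.CreateEventSource(Source, "Application");
        }

        EventLog.WriteEntry(Source, ex.ToString(), EventLogEntryType.Error);
    }
}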

I have seen a few applications that prefer database logging. The same can be enabled via log4net (it's just another appender). The thought process behind selecting a database is normally easier searching (say you want to retrieve all errors that occurred during the last peak load, or run correlation queries). Another benefit is security, as DB entries are more secure than the file system. But sometimes this additional security may cause delays in getting to the logs or make the process too rigid. On top of this, you need to consider purging logging records, the performance impact on the DB server, license cost (if required), etc. Hence, it's good to stick with flat-file logging when creating custom applications, and to provide multiple choices in case you are architecting a product.

Finally, if you are creating RIA / thin client apps (Silverlight, XBAP, ClickOnce), you may not be able to create a log locally on the client machine. I recommend that you implement logging for such applications too, by sending messages to the web server / app server, as in a few cases it becomes quite necessary to know what's going wrong on these clients. Checking whether activity logging is enabled (assuming exception logging is always on) remains a challenge here; a rough sketch of such a client-side sender follows.
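Here is that sketch (the URL and message format are hypothetical; Silverlight would need WebClient's asynchronous methods instead):

using System;
using System.Net;

public static class RemoteLogger
{
    // Hypothetical logging endpoint exposed by your web / app server
    private const string LogUrl = "http://myserver/logging/submit";

    public static void Error(string message, Exception ex)
    {
        try
        {
            using (var client = new WebClient())
            {
                client.Headers[HttpRequestHeader.ContentType] = "text/plain";
                client.UploadString(LogUrl, "ERROR|" + message + "|" + ex);
            }
        }
        catch
        {
            // Never let a logging failure take the client down
        }
    }
}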

P.S. An effective way to find out how good your logging is: ask developers to debug without breakpoints (IDE) during development. If they can do it just by looking at the log file, you are on your way; else you might have to get ready for post-production nightmares 🙂 .