NuGet Package Restore, Content Folder and Version Control

I was recently explaining this nuance to a Dev on my team, and he suggested I should capture this in a blog post. So here we go. First some NuGet background.

NuGet is the de facto standard for managing dependencies in the .NET world. Imagine you have some reusable code – rather than sharing that functionality with your team via a DLL, you can create a NuGet package for your team. What’s the benefit, you may ask?

1) Firstly, NuGet can do a lot more than add a DLL to your project references. Your NuGet package can add configuration entries as part of the installation, execute scripts, or create new folders / files within the Visual Studio project structure, which greatly simplifies the adoption of your reusable code.

2) Secondly, as a package owner you can include dependencies on other packages or DLLs. So when a team member installs your package, she gets all the required dependencies in one go.

3) Finally, the NuGet package is local to your project; the assemblies are not installed in your system’s GAC. This not only keeps development clean, but also helps at build time. Packages don’t have to be checked into version control – instead, you can restore them on your build server at build time. No more shared lib folders.

It’s quite a simple process to create NuGet packages. Download the NuGet command line utility, organize the artifacts (DLLs, scripts, source code templates, etc.) you want to include into their respective folders, create the package metadata (nuget spec), and pack them (nuget pack) to get your nupkg file. You can now install / restore the package through Visual Studio or through the command line (nuget install / restore).
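As a rough command-line sketch of that flow (the package name and solution file below are hypothetical):

nuget spec MyUtilities
nuget pack MyUtilities.nuspec
nuget install MyUtilities
nuget restore MySolution.sln

The spec command generates the .nuspec metadata file, pack produces the .nupkg, and install / restore bring the package into a project or solution.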

Typically NuGet recommends four folders to organize your artifacts – ‘lib’ contains your binaries, ‘content’ contains the folder structure and files that will be added to your project root, ‘tools’ contains scripts (e.g. init.ps1, install.ps1), and ‘build’ contains custom build targets / props.
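For illustration, a package organized along those conventions might look like this (the file and package names are hypothetical):

MyUtilities.nuspec
lib\net45\MyUtilities.dll
content\Scripts\MyUtilities.js
tools\install.ps1
build\MyUtilities.targets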

Now let’s get to the crux of this post – the restore aspect and what you should check into your version control. When you add a NuGet package to your project, NuGet does two things – it creates a packages.config file and a packages folder. The config file keeps a list of all the added packages, and the packages folder contains the actual packages (basically an unzip of your nupkg file). The recommended approach is to check in your packages.config file but not the packages folder. As part of NuGet restore, NuGet brings back all the packages into the packages folder (see the workflow image below).

[workflow image]
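For reference, packages.config is just a small XML file listing package ids and versions; the entries below are only examples:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Newtonsoft.Json" version="6.0.4" targetFramework="net45" />
  <package id="StructureMap" version="3.1.4" targetFramework="net45" />
</packages>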

The subtle catch is that NuGet restore doesn’t restore content files, or perform the transformations that are part of them. These changes are applied the first time you install the NuGet package, and they should be checked into version control. This also goes to say: don’t put any DLLs inside the content folder (they should go to the lib folder anyway). If you must, you will have to check even those DLLs into your version control.

[nugetpackage image]
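To make the transformation point concrete: a package can ship a content\web.config.transform file whose elements are merged into the project’s web.config at install time – for example (a hypothetical appSetting):

<configuration>
  <appSettings>
    <add key="MyUtilities.Enabled" value="true" />
  </appSettings>
</configuration>

Restore will not re-apply this merge, which is why the modified web.config itself must live in version control.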

In summary, NuGet restore just restores the package files; it doesn’t perform any tokenization, transformation or execution as part of it. Those activities are performed at package installation, and the corresponding changes must be checked into version control.


Dependency Inversion, Dependency Injection, DI Containers and Service Locator

Here is a post I have been planning to do for years 🙂 . Yes, I am talking about Dependency Injection, which by this time has most likely made its way into your code base. I got to recollect a few thoughts around it in a recent discussion, and hence I am writing them down.

Dependencies are the norm in object oriented programming, helping us adhere to software principles like Single Responsibility, Encapsulation, etc. Instead of establishing dependencies through direct references between classes or components, they are better managed through an abstraction. Most of the GoF patterns are based on this principle, which is commonly referred to as ‘Dependency Inversion’. Using the Dependency Inversion principle we can create flexible object oriented designs, making our code base reusable and maintainable.

To further enhance the value proposition of dependency inversion, you can pass the dependencies via a constructor or a property from the root of your application, instead of instantiating them within your class. This allows you to mock / stub your dependencies and makes your code easily testable (read: unit testing).

E.g.

IA ob = new A(); //direct instantiation
public D(IA ob) { … } //constructor injection
public IA Ob { get; set; } //property injection – create the object and set the property

The entire software industry sees value in the above approach, and most people refer to this structure (pattern) as Dependency Injection. But beyond this, things get a little murkier.

The confusion starts with programmers who see the above as just a standard way of programming. To them, Dependency Injection (DI) is the automated way of injecting dependencies using a DI container. (A DI container is at times also referred to as an IoC (Inversion of Control) container. As Martin Fowler points out in his classic article, IoC is a generic term used in quite a few cases – e.g. the callback approach of the Win32 programming model. The term DI was coined to avoid that confusion and keep this approach discernible. Here, IoC refers to inverting the creation of dependencies: they are created outside the dependent class and then injected.) If you are surprised by this statement, let me tell you that 90% of the people who have walked up to me to ask if I was using DI wanted to know about the DI container and my experience with it.

So what the heck is this ‘DI container’? A DI container is a framework which can identify constructor arguments or properties on the objects being created and automatically inject them as part of object creation. Coming from the .NET world, the containers I use include StructureMap, Autofac and Unity. DI containers can be wired up with a few lines of code at the start of your program, or you can even specify the configuration in an XML file. Beyond that, containers are transparent to the rest of your code base. Most containers also provide AOP (Aspect Oriented Programming) functionality and its variants. This allows you to bundle cross-cutting concerns like database transactions, logging, caching, etc. as aspects and avoid boilerplate code throughout the system (I have written a CodeProject article on those lines). Before you feel I am over simplifying things, let me state that if you haven’t worked with DI containers in the past, you are likely to face a learning curve. As is the case with most other frameworks, a pilot is strongly recommended. As a side note, the preferred injection rule is: unless your constructor requires too many parameters (ensure you haven’t violated SRP), you should stick to constructor injection and avoid property injection (see Fowler’s article for a detailed comparison between the two injection types).

Finally, let’s talk about Service Locator, an alternative to DI. A Service Locator holds all the services (dependencies) required by your system (registered in code or via a configuration file) and returns a specific service instance on request. A Service Locator can come in handy in scenarios where a DI container is not compatible with a given framework (e.g. WCF, ASP.NET Web API) or where you want more control over object creation (e.g. creating the object late in the cycle). While you can mock a service locator, mocking it is a little more cumbersome compared to DI. Service Locator is generally seen as an anti-pattern in the DI world. Interestingly, most DI containers offer APIs which allow you to use them as a Service Locator (look for the Resolve / GetInstance methods on the container).
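To make the difference concrete, here is a minimal sketch using the IDependency type from the sample below; the ServiceLocator class here is a hypothetical static wrapper over a StructureMap container, not a standard API:

//Dependency Injection – the dependency is pushed in from outside
public class DiConsumer
{
    private readonly IDependency _dep;
    public DiConsumer(IDependency dep) { _dep = dep; } //easy to pass a mock in a unit test
    public void Process() { _dep.Dummy(); }
}

//Hypothetical static wrapper over the container, set up at application start
public static class ServiceLocator
{
    public static IContainer Container { get; set; }
    public static T GetInstance<T>() { return Container.GetInstance<T>(); }
}

//Service Locator – the class pulls the dependency itself
public class LocatorConsumer
{
    public void Process()
    {
        var dep = ServiceLocator.GetInstance<IDependency>(); //hidden dependency – harder to mock
        dep.Dummy();
    }
}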

The sample below shows StructureMap – a DI container – in action (use NuGet to add the StructureMap dependency).

using System;
using StructureMap;

class Program
{
    static void Main(string[] args)
    {
        //Wire up the container – scan the calling assembly and map interfaces to
        //implementations using the default conventions (IDependency -> Dependency)
        var container = new Container(registry =>
        {
            registry.Scan(x =>
            {
                x.TheCallingAssembly();
                x.WithDefaultConventions();
            });
        });

        //Resolve DependentClass; the container injects IDependency via the constructor
        var ob = container.GetInstance<DependentClass>();
        ob.CallDummy();
    }
}

public interface IDependency
{
    void Dummy();
}

public class Dependency : IDependency
{
    public void Dummy()
    {
        Console.WriteLine("Hi There!");
    }
}

public class DependentClass
{
    private readonly IDependency _dep;

    public DependentClass(IDependency ob)
    {
        _dep = ob;
    }

    public void CallDummy()
    {
        _dep.Dummy();
    }
}

I will try to post some of the subtle issues around DI containers in the future. I hope the above helps anyone looking for a quick start on DI and the associated terms 🙂 .

Controlling Windows Azure VM lifetimes

Most cloud computing resources are billable on an hourly basis, and it’s important that you release these resources when you no longer need them. This typically applies to Windows Azure virtual machines not running 24×7 – for example, there could be business workloads which require an application to be available only twelve hours a day on weekdays. To limit running costs, most users stop their VMs, only to realize that Windows Azure bills for VMs that are in a stopped state. So your only option to control costs is to delete the VMs. But wouldn’t deleting a VM cause any issues?

The answer is both no and yes. When you delete a VM you are just deleting the VM instance; the underlying OS and data disks are still intact (in fact, you keep paying for their storage, which luckily is quite negligible). Hence, you can easily resurrect your VM without much harm. It’s important to note that when you delete the VM, you still retain the underlying Cloud Service container and its associated site URL. The issue you might face when you delete and re-create the VM, though, is that the public IP address changes. I work in an organization with strict IT security rules and locked-down access. A static IP was necessary for me to raise an outbound RDP access request with my IT team, but with the IP changing every day it was definitely turning into a challenge. In the end, the solution I adopted was to create an extra small VM running 24×7 and bounce from there to the other VMs.

To delete a VM you can use PowerShell cmdlets. The cmdlets allow you to export your VM configuration, delete the VM, and then re-create the VM using the exported configuration:

Export-AzureVM -ServiceName '<service name>' -Name '<vm name>' -Path 'c:\vmconf.xml'

Remove-AzureVM -ServiceName '<service name>' -Name '<vm name>'

Import-AzureVM -Path 'c:\vmconf.xml' | New-AzureVM -ServiceName '<service name>' -VNetName '<vnet name>' -DnsSettings '<dns settings>'

The exported configuration is stored in an XML file. It contains various properties of the VM, including endpoints, disks, VM size, subnet, etc. The snapshot from the Azure Portal below shows an empty Cloud Service container after deletion of the VM. Currently there is no cost associated with an empty cloud service. It’s important to note that when you retain the Cloud Service, you also retain the underlying DNS URL.

Creating PivotTable and PivotChart using VSTO

Extending the previous discussion on PivotTables, in this blog post I will show how you can create a PivotTable and PivotChart using Visual Studio Tools for Office (VSTO). Generating PivotTables after populating data into an Excel worksheet is quite a common task for VSTO developers. It can be broken down into the steps below:

a) Create the range where the PivotTable output will be placed
b) Create the range where the PivotChart will be placed
c) Determine the data range that will act as input for the pivot (generally you might want to create many pivot tables out of this range)
d) Create a PivotCache object using the data range as input
e) Determine the data range for the individual pivot table
f) Generate the PivotTable and set the orientation of the pivot fields
g) Create a chart using the PivotTable as input

Let’s have a quick look at the C# source code below:


//Step A, B – define where the PivotTable and PivotChart will be placed
Worksheet worksheet = Globals.Factory.GetVstoObject(
    (Microsoft.Office.Interop.Excel.Worksheet)Globals.ThisAddIn.Application.ActiveWorkbook.ActiveSheet);

var chartRange = worksheet.Range["G3", "J15"];
var pivotTableRange = worksheet.Range["G20", "J30"];

//Step C – the data range that feeds the pivot (FindListObject is a helper that locates the list object by name)
var listObject = FindListObject("demoDataList");
var end = listObject.DataBodyRange.get_End(Microsoft.Office.Interop.Excel.XlDirection.xlDown).Row;
var dataRangeForPivot = worksheet.Range["A1", "W" + end.ToString()];

//Step D – create the PivotCache from the data range
Excel.PivotCache pivotCache = this.Application.ActiveWorkbook.PivotCaches().Add(Excel.XlPivotTableSourceType.xlDatabase, dataRangeForPivot);

//Step E – the data range for this particular pivot table
Microsoft.Office.Interop.Excel.Range dataRangeForPivotTable = worksheet.Range["H35", "H" + end.ToString()];

//Step F – generate the PivotTable and set the orientation of the pivot field
Microsoft.Office.Interop.Excel.PivotTable pivotTable = pivotCache.CreatePivotTable(dataRangeForPivotTable, @"DTable", Type.Missing, Type.Missing);
Microsoft.Office.Interop.Excel.PivotField demoField = ((Microsoft.Office.Interop.Excel.PivotField)pivotTable.PivotFields(8));
demoField.Orientation = Microsoft.Office.Interop.Excel.XlPivotFieldOrientation.xlRowField;
demoField.Orientation = Microsoft.Office.Interop.Excel.XlPivotFieldOrientation.xlDataField;
demoField.Function = Microsoft.Office.Interop.Excel.XlConsolidationFunction.xlCount;

//Step G – create a chart fed by the PivotTable range
Chart chart = worksheet.Controls.AddChart(chartRange, "DChart");
chart.ChartType = Microsoft.Office.Interop.Excel.XlChartType.xlColumnClustered;
chart.SetSourceData(pivotTableRange);
chart.HasTitle = true;
chart.ChartTitle.Text = "Demo Analysis";

Confusing? I hope not 🙂

ConnectionTimeout vs CommandTimeout

ConnectionTimeout, a property of the Connection class in ADO.NET, is the time you will wait to connect to a given database before flagging a connection failure. The default value is 15 seconds.

//ConnectionTimeout is read-only; it is set via the connection string (the values below are placeholders)
var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Connect Timeout=10");

CommandTimeout, a property of the Command class in ADO.NET, is the time you will wait for a command (query, stored procedure, etc.) to return a result set before flagging an execution failure. The default value for CommandTimeout is 30 seconds.

new SqlConnection().CreateCommand().CommandTimeout = 10;

Unlike ConnectionTimeout, which is part of the connection string, command timeouts are defined separately (hardcoded or as appSettings). Setting either of them to 0 (zero) results in an indefinite wait, which is generally considered a bad practice. Ideally you want to set both of them in accordance with your performance SLAs, though at times, while running data-heavy background jobs / workflows, you may want to set CommandTimeout to a larger value.
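Putting the two together, here is a minimal sketch (the connection string, query and timeout values are placeholders; requires System.Data.SqlClient):

using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=true;Connect Timeout=10"))
using (var cmd = conn.CreateCommand())
{
    cmd.CommandText = "SELECT COUNT(*) FROM Orders"; //hypothetical query
    cmd.CommandTimeout = 60; //longer wait for a heavier command
    conn.Open();
    var count = cmd.ExecuteScalar();
}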

What are Encodings?

A few days back I got an email query on one of my blog posts related to WCF encodings. The query asked for help to understand things better, especially around the encoding part. While I am not an SME on encodings, I thought of sharing a few thoughts.

Encoding is a word I am sure most of us have come across. If you are a developer this sounds even more true, with terms like Base64 encoding / HTML encoding popping up more often than not. In simplest terms, encoding converts / maps your input data (character set) to another character set. The output character set varies depending on the encoding style you choose. While there can be many motivations for using encodings, the most frequent is to prevent unintentional loss or misinterpretation of data. Let’s see a couple of examples.

a) I had given an example in one of my earlier blog posts on ASP.NET security which used HTML encoding to prevent string data from being interpreted by the browser as script. The second line of code below keeps the string data intact, while the former is interpreted as script:
Response.Write("<script>alert('Niraj')</script>");
Response.Write(Server.HtmlEncode("<script>alert('Niraj')</script>")); // Or HttpUtility.HtmlEncode

b) Consider another scenario where we want to insert the below string into SQL Server:
string key = "S" + Convert.ToChar(0x0) + "!@#$";
The problem with the above is the NULL character (Convert.ToChar(0x0)), which databases generally treat as the end of the string. So in order to insert the string into the database you may as well encode it. The lines below show how to encode / decode strings using the Base64 .NET APIs:
//To encode (UTF8 is backward compatible with ASCII)
string encoded = Convert.ToBase64String(Encoding.UTF8.GetBytes(key));
//To decode
string decoded = Encoding.UTF8.GetString(Convert.FromBase64String(encoded));
As most of you are aware, ASP.NET ViewState too is encoded using Base64; to experiment, just copy the view state of your page and try to decode it using the above APIs.

Coming to my blog post on WCF encodings, I had mentioned that Base64 encoding (used by text encoding) inflates the size of string data. I found the wiki explains this quite clearly (see the Examples section) – in general, Base64 inflates the size of data by about 33%. That’s why you might prefer binary encoding in non-interoperable scenarios, and MTOM while transferring large chunks of binary data.
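You can verify the ~33% figure with a couple of lines of C# (the 3,000-byte buffer is arbitrary):

var data = new byte[3000]; //any binary payload
string base64 = Convert.ToBase64String(data);
Console.WriteLine(base64.Length); //prints 4000 – every 3 bytes become 4 characters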

Hope this helps 🙂 .

PDB and ClickOnce

A bit of history first. PDB (Program Database) files are essential at times during debugging. By default, a stack trace points to the function where the problem lies but doesn’t include the line numbers at which errors are thrown. Sometimes this becomes quite critical for a serious production issue. The best practice seems to be to build PDBs during your build process (both Debug / Release), exclude them while creating your installers, and ship them to production when you need to diagnose your code.

Another related thing you might want to keep in mind while debugging a critical issue – you can create a build with code optimization turned off (project properties -> Build tab -> Optimize code (uncheck)). This helps you get an accurate stack trace, devoid of code optimizations like inlined functions.

Let’s get back to PDBs, the topic of this post. Normally, enabling PDBs is done by turning them on (Project Properties -> Build -> Advanced -> Debug Info = pdb-only) and copying the generated PDBs to the deployment directory. But when you are using ClickOnce, things are different. With ClickOnce, the assemblies are downloaded to the client’s local machine and then executed (deriving the benefit of auto update). So how do you ensure that the client downloads the PDBs along with the assemblies? Fortunately VS.NET simplifies this for us. The steps are below:

1) Go to the Publish tab of your project properties and click the “Application Files” button.

2) By default, PDB files are not bundled for publishing. You need to check the “Show All Files” check box, and then you will see the PDBs. The PDBs again are excluded by default, so include them. And you are all set to get line numbers and file names in your stack traces.
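As a side note, the build settings mentioned earlier map to a couple of MSBuild properties in the csproj file – a minimal sketch (the configuration condition is illustrative):

<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
  <DebugType>pdbonly</DebugType>
  <Optimize>false</Optimize>
</PropertyGroup>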

Hope this helps 🙂 .