Niraj Bhatt – Architect's Blog

Ruminations on .NET, Architecture & Design


Demystifying Access Control Service

Having presented quite a few sessions on Claims-Based Identity and the Access Control Service, I still see quite a few participants confused about how to get started. While most of them are able to understand the underlying business motivation, they are lost amidst all the new terms like SAML, SWT, OAuth, WRAP, WIF, ACS, ADFS, Claims, Active / Passive Federation, etc. Let’s take them in turn.

At the heart of these offerings is the simple block diagram below, which captures the key concept: relying on a trusted external entity (Identity Provider) for authenticating users and providing user attributes (claims), which saves our services / applications from identity nightmares. So, for your service or application, you establish trust with an identity provider by sharing an X.509 certificate or a shared secret. Clients or users of your service / application no longer connect to you for authentication; instead, they authenticate with the identity provider and get back a claims token. That token is then presented to your service / application. As your service / application has trust established with that identity provider, it can validate the incoming token and then use the claims bundled in it to authorize the level of access for the client / user.

The above flow becomes a little more complex when the Relying Party (RP) needs to trust multiple identity providers (for instance, you are offering a multi-tenant service for individuals and corporates, with each of them having a different identity provider). Not only would the RP have to establish trust with all these providers, but the identity providers too would have to register this RP in order to issue tokens on request. This is where a mediator called a federation provider comes into the picture. The RP is registered only with the federation provider, and the federation provider in turn is registered with and trusted by the various identity providers. Hence it simplifies a many-to-many relationship into an easily manageable one-to-one. Access Control Service (ACS) is a federation provider hosted on Windows Azure.

You will also find two federation terms used quite frequently – passive & active federation. Passive federation is associated with web applications (rather, web browsers), where authentication happens via a set of redirects. Active federation is associated with web services and clients that explicitly get authenticated.

Once a client authenticates with an identity provider it gets a token back. There are two token formats supported by Access Control Service – SAML and SWT. SAML exchange happens over WS-* protocols, while SWT tokens are usually transferred over the OAuth WRAP / OAuth 2.0 protocols (details here). You are most likely to use SWT tokens for RESTful services hosted in Azure. You can find more details about these token formats explained here.

ACS accepts either of the token formats as input and can return either of them as output. For instance, there could be scenarios where you get a SAML token from ADFS (a corporate identity provider), use it to authenticate with ACS, and ACS returns you a SWT token, which in turn is used to access a protected REST service (offered by a business partner).

Let’s start with SWT tokens. To see SWT tokens and OAuth WRAP in action protecting a WCF REST service (active federation), I would recommend you have a look at the ACS samples. Establishing trust between the RP & ACS is quite simple – you exchange a shared secret. When a client authenticates with ACS it gets back a SWT token which includes an HMACSHA256 hash. The relying party checks the authenticity of the hash and the claims inside incoming tokens before allowing access to the requesting client (a sketch of that check follows the token request below). Retrieving SWT tokens from ACS using the WRAP protocol and sending them to an RP is quite simple; in fact, you would hardly need anything beyond an HTTP client library.

//Requesting a SWT token with username / password from Access Control Service

var client = new WebClient();
client.BaseAddress = string.Format("https://{0}.{1}", serviceNamespace, acsHostName);

var values = new NameValueCollection();
values.Add("wrap_name", unamepass.Username); /*Service Identity to be specified in Access Control Service*/
values.Add("wrap_password", unamepass.Password);
values.Add("wrap_scope", ConfigurationManager.AppSettings["relyingpartyname"]);

byte[] responseBytes = client.UploadValues("WRAPv0.9/", "POST", values); /*SWT token is received in raw format*/
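The WRAP response carries the token URL-encoded in the wrap_access_token field of the form-encoded body. On the relying party side, the check mentioned above essentially means recomputing the HMACSHA256 hash over the token using the shared secret. The following is only a minimal sketch (not the ACS sample code); trustedSigningKey stands for the Base64-encoded token signing key you configured for the relying party in ACS:

using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;

public static class SwtValidator
{
    //Returns true if the HMACSHA256 signature at the end of the raw SWT token
    //matches the hash computed with the shared secret configured in ACS
    public static bool IsSignatureValid(string rawToken, string trustedSigningKey)
    {
        const string hmacParameter = "&HMACSHA256=";
        int signatureIndex = rawToken.LastIndexOf(hmacParameter, StringComparison.Ordinal);
        if (signatureIndex < 0) return false;

        string signedPortion = rawToken.Substring(0, signatureIndex);
        string signature = HttpUtility.UrlDecode(rawToken.Substring(signatureIndex + hmacParameter.Length));

        using (var hmac = new HMACSHA256(Convert.FromBase64String(trustedSigningKey)))
        {
            byte[] computedHash = hmac.ComputeHash(Encoding.ASCII.GetBytes(signedPortion));
            return Convert.ToBase64String(computedHash) == signature;
        }
    }
}

A real relying party would of course also inspect the ExpiresOn, Issuer and Audience values inside the token before trusting its claims.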

Handling WS-*/SAML, on the other hand, is a little more complex. Luckily for us, Microsoft provides nice Visual Studio integrated tooling (FedUtil – Add STS Reference) for working with WS-*/SAML tokens, along with the WIF (Windows Identity Foundation) SDK. This tooling can be leveraged by both services and applications. In fact, all you need to establish trust with identity providers is federation metadata, and the tooling generates that for you in the form of a file called FederationMetadata.xml. The WIF SDK, in addition, provides web controls for federated sign-in and sign-in status (look at this article in case you want to extend WIF for handling SWT tokens).

The tooling also sets you up by plugging the necessary modules into your RP’s web.config, namely WSFederationAuthenticationModule and SessionAuthenticationModule. The former helps you validate the authenticity of the incoming token, while the latter establishes a session between the client and the relying party (FedAuth cookie) so that token validation doesn’t become an overhead for every operation invoked. You can also add an additional module called ClaimsAuthorizationModule which lets you invoke your custom ClaimsAuthorizationManager class, as shown below (a sketch of the module registration itself follows the configuration).

<microsoft.identityModel>
  <service>
    <claimsAuthorizationManager type="WebApplication4.CustomAuthorizationManager"/>
  </service>
</microsoft.identityModel>
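For completeness, the modules themselves also have to be registered in web.config. FedUtil generates this wiring for you; the snippet below is only a rough sketch of the IIS 7 integrated-pipeline registration, with the assembly version / public key token attributes omitted (yours will depend on the installed WIF version):

<system.webServer>
  <modules>
    <add name="WSFederationAuthenticationModule"
         type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, Microsoft.IdentityModel"
         preCondition="managedHandler" />
    <add name="SessionAuthenticationModule"
         type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, Microsoft.IdentityModel"
         preCondition="managedHandler" />
    <add name="ClaimsAuthorizationModule"
         type="Microsoft.IdentityModel.Web.ClaimsAuthorizationModule, Microsoft.IdentityModel"
         preCondition="managedHandler" />
  </modules>
</system.webServer>

With the module in place, the custom manager itself looks like this: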

public class CustomAuthorizationManager : ClaimsAuthorizationManager
{
    public override bool CheckAccess(AuthorizationContext context)
    {
        //...
        return base.CheckAccess(context);
    }
}
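As an illustration of what the elided body might do (this is my assumption, not part of the FedUtil output), CheckAccess could gate the call on a role claim issued by the identity provider; the “Manager” value below is purely hypothetical, and the method needs System.Linq plus the Microsoft.IdentityModel.Claims namespace:

public override bool CheckAccess(AuthorizationContext context)
{
    //context.Resource and context.Action carry the requested URL and HTTP verb as claims
    //Allow the call only if the incoming token contained a Role claim of "Manager"
    return context.Principal.Identities.Any(identity =>
        identity.Claims.Any(claim =>
            claim.ClaimType == ClaimTypes.Role && claim.Value == "Manager"));
}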

HTH!!!

Dummy vs. Stub vs. Spy vs. Fake vs. Mock

One of the fundamental requirements of making unit testing work is isolation. Isolation is hard in the real world as there are always dependencies (collaborators) across the system. That’s where the concept of something generically called a ‘Test Double’ comes into the picture. A ‘Double’ allows us to break the original dependency, helping isolate the unit (or System Under Test (SUT), as it’s commonly referred to). As this Double is used to pass a unit test, it’s generally referred to as a ‘Test Double’. There are variations in the types of Test Doubles depending on their intent (reminds me of the GoF Proxy pattern).

Test doubles are not only useful in state verification but also in behavior verification, helping us enhance the code coverage of our unit tests. While demarcating the various test doubles may not provide exceptional value add, knowing about them can definitely organize our thinking process around unit testing. Interestingly, the mock frameworks available today allow us to seamlessly create all the variations of test doubles. I will be using Moq for this blog post. The variations of Test Doubles described below are taken from xUnitPatterns.com. Below are the various test doubles along with examples:

a) Dummy is the simplest of all. It’s a placeholder required to pass the unit test. The unit in context (SUT) doesn’t exercise this placeholder. A Dummy can be something as simple as passing ‘null’, or a void implementation that throws exceptions to ensure it’s never leveraged.

[TestMethod]
public void PlayerRollDieWithMaxFaceValue()
{
    var dummyBoard = new Mock<IBoard>();
    var player = new Player(dummyBoard.Object, new Die()); //null too would have been just fine
    player.RollDie();
    Assert.AreEqual(6, player.UnitsToMove);
}

While the above test would work just fine, it won’t throw any exceptions if the RollDie implementation is invoking the Board object. To ensure that the Board object isn’t exercised at all, you can leverage a strict mock. A strict mock will throw an exception if no expectation is set for a member.

[TestMethod]
public void PlayerRollDieWithMaxFaceValueStrictTest()
{
    var dummyBoard = new Mock<IBoard>(MockBehavior.Strict); //Ensure Board class is never invoked
    var player = new Player(dummyBoard.Object, new Die());
    player.RollDie();
    Assert.AreEqual(6, player.UnitsToMove);
}

b) Fake is used to simplify a dependency so that the unit test can pass easily. There is a very thin line between a Fake and a Stub, which is best described here as – “a Test Stub acts as a control point to inject indirect inputs into the SUT; the Fake Object does not. It merely provides a way for the interactions to occur in a self-consistent manner. These interactions (between the SUT and the Fake Object) will typically be many and the values passed in as arguments of earlier method calls will often be returned as results of later method calls“. A common place where you would use a Fake is database access. The sample below shows this by creating a FakeProductRepository instead of using a live database.

public interface IProductRepository
{
    void AddProduct(IProduct product);
    IProduct GetProduct(int productId);
}

public class FakeProductRepository : IProductRepository
{
    List<IProduct> _products = new List<IProduct>();

    public void AddProduct(IProduct product)
    {
        //...
    }

    public IProduct GetProduct(int productId)
    {
        //...
    }
}

[TestMethod]
public void BillingManagerCalcuateTax()
{
    var fakeProductRepository = new FakeProductRepository();
    BillingManager billingManager = new BillingManager(fakeProductRepository);
    //...
}

Fakes can also be implemented with Moq using callbacks.
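A minimal sketch of that approach, assuming IProduct exposes an Id property (it is not shown in the interface above): the callbacks keep the interactions self-consistent, so products added through the fake earlier are returned by later GetProduct calls.

var products = new List<IProduct>();
var fakeRepository = new Mock<IProductRepository>();

//Record whatever the SUT adds...
fakeRepository.Setup(r => r.AddProduct(It.IsAny<IProduct>()))
              .Callback<IProduct>(p => products.Add(p));

//...and hand it back when the SUT asks for it later
fakeRepository.Setup(r => r.GetProduct(It.IsAny<int>()))
              .Returns<int>(id => products.FirstOrDefault(p => p.Id == id));

var billingManager = new BillingManager(fakeRepository.Object);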

c) Stub is used to provide indirect inputs to the SUT coming from its collaborators / dependencies. These inputs could be in the form of objects, exceptions or primitive values. Unlike a Fake, stubs are exercised by the SUT. Going back to the Die example, we can use a Stub to return a fixed face value. This simplifies our tests by taking out the randomness associated with rolling a Die.

[TestMethod]
public void PlayerRollDieWithMaxFaceValue()
{
    var stubDie = new Mock<IDie>();
    stubDie.Setup(d => d.GetFaceValue()).Returns(6).Verifiable();
    IDie die = stubDie.Object;
    Assert.AreEqual(6, die.GetFaceValue()); //Exercise the return value
}

d) Mock – Just as indirect inputs flow into the SUT from its collaborators, there are also indirect outputs. Indirect outputs are tricky to test as they don’t return to the SUT and are encapsulated by the collaborator. Hence it becomes quite difficult to assert on them from the SUT’s standpoint. This is where behavior verification kicks in. Using behavior verification we can set expectations for the SUT to exhibit the right behavior during its interactions with collaborators. A classic example of this is logging. When a SUT invokes a logger it might be quite difficult for us to assert on the actual log store (file, database, etc.). But what we can do is assert that the logger is invoked by the SUT. Below is an example that shows a typical mock in action.

[TestMethod]
public void ModuleThrowExceptionInvokesLogger()
{
    var mock = new Mock<ILogger>();
    Module module = new Module();
    ILogger logger = mock.Object;
    module.SetLogger(logger);
    module.ThrowException("Catch me if you can");
    mock.Verify(m => m.Log("Catch me if you can"));
}

e) Spy – A Spy is a variation of behavior verification. Instead of setting up behavior expectations, a Spy records the calls made to the collaborator. The test can then assert on the Spy’s recordings. Below is a variation of the Logger example shown for Mock. The focus of this test is to count the number of times Log is invoked on the Logger. It doesn’t care about the inputs passed to Log; it just records the Log calls and asserts on them. More complex Spy objects can also leverage the callback features of the Moq framework (see the sketch after the test below).

[TestMethod]
public void ModuleThrowExceptionInvokesLoggerOnlyOnce()
{
    var spyLogger = new Mock<ILogger>();
    Module module = new Module();
    ILogger logger = spyLogger.Object;
    module.SetLogger(logger);
    module.ThrowException("Catch me if you can");
    module.ThrowException("Catch me if you can");
    spyLogger.Verify(m => m.Log(It.IsAny<string>()), Times.Exactly(2));
}
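A minimal sketch of such a callback-based spy: instead of only verifying the call count, it records every message passed to Log so the test can assert on the captured values afterwards.

[TestMethod]
public void ModuleThrowExceptionRecordsLoggedMessages()
{
    var recordedMessages = new List<string>();
    var spyLogger = new Mock<ILogger>();
    //Record each Log call rather than setting an expectation up front
    spyLogger.Setup(l => l.Log(It.IsAny<string>()))
             .Callback<string>(message => recordedMessages.Add(message));

    Module module = new Module();
    module.SetLogger(spyLogger.Object);
    module.ThrowException("Catch me if you can");

    Assert.AreEqual(1, recordedMessages.Count);
    Assert.AreEqual("Catch me if you can", recordedMessages[0]);
}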

Hope this helps!!!

Connecting to TFS 2010 using Team Explorer 2008 / Visual Studio 2008

This has been quite a common scenario, especially around Business Intelligence projects, which are still only supported in Visual Studio 2008. So you often want these BI projects to connect to TFS 2010, which could be your centralized version control system, among other things. Below are the steps, assuming you have already installed Visual Studio 2008:

a) Install Team Explorer 2008

b) Install VS.NET 2008 SP1

c) Install the forward compatibility update for Team Explorer 2008 to make it work with TFS 2010

d) Go to your VS.NET 2008 Team Explorer and add a server by specifying the full URL, as shown below
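(The screenshot is not reproduced here; the important detail is that TFS 2010 expects the full URL including the virtual directory and team project collection, along the lines of the placeholder below – server, port and collection name are yours.)

http://yourtfsserver:8080/tfs/DefaultCollection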

Hope that helps!!!

PDB and ClickOnce

A bit of history first. PDB (Program Database) files are sometimes essential during debugging. By default, a stack trace points to the function where the problem lies but doesn’t include the line numbers at which errors are thrown. Sometimes this becomes quite critical for a serious production issue. The best practice seems to be to build PDBs during your build process (both Debug / Release), exclude them while creating your installers, and ship them to production when you need to diagnose your code. Another related thing you might want to keep in mind while debugging a critical issue – you can create a debug-friendly build by turning off code optimization (project properties -> Build tab -> Optimize Code (uncheck)). This helps you get an accurate stack trace devoid of code optimizations like inlined functions. Let’s get back to PDBs, the topic of this post. Enabling PDBs is normally done by turning them on (Project Properties -> Build -> Advanced -> Debug Info = pdb-only) and copying the generated PDBs to the deployment directory. But when you are using ClickOnce things are different. In ClickOnce, the assemblies are downloaded to the client’s local machine and then executed (deriving the benefit of auto update). So how do you ensure that the client downloads PDBs along with the assemblies? Fortunately VS.NET simplifies this for us. The steps are below:

1) Go to the Publish tab of your project properties and click the “Application Files” button.

2) By default PDB files are not bundled for publish. You need to check the “Show All Files” check box and then you will see the PDBs. The PDBs are again excluded by default, so include them. And you are all set to get line numbers and file names with your stack traces (for reference, the project-file settings behind the IDE options mentioned above are sketched below).
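The IDE options above map to MSBuild properties in the project file. This is only a sketch of what the relevant PropertyGroup might contain; adjust it per build configuration:

<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
  <!-- Debug Info = pdb-only (Build -> Advanced) -->
  <DebugType>pdbonly</DebugType>
  <!-- set to false (uncheck "Optimize Code") when you need an accurate stack trace -->
  <Optimize>true</Optimize>
</PropertyGroup>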

Hope this helps :) .

Enhancing Team’s Productivity – .NET, VS.NET

Software projects normally have a scary deadline. One way to meet it is to increase your team’s productivity. I am going to jot down a few practices I have been following, and I look forward to reading about the ones you follow:

1) Using Macros for repetitive VS.NET tasks: I normally have a complex VS.NET folder structure, and this structure gets repeated for every module added. For instance, creating a new module in my current solution requires you to create about 10 folders, starting from DTO, Factories, Domain Model, Exceptions, etc. This took developers quite a bit of time whenever they started a new module, so I created a VS.NET macro to automate this task.


2) Code Generators: I have seen many companies with a framework wherein they take care of common tasks, from transactions to workflows, etc. Their implementation normally revolves around the Factory Method or Template Method patterns. If you follow this approach, a code generator lets you create a “Fill-In-The-Blanks” template for your developers. I don’t have a framework like this, but I have still created a few code generators. I normally use NHibernate for my projects. While it’s a fantastic tool, it takes some time to write the mappings & corresponding classes, which again can be error prone. I have created a code generator which not only lets you create the mappings & classes but also creates Factories & Repositories for the entities that are generated. (N.B. You can find one alternative approach to mapping files here).

3) Extracting Code Snippets: I use Snippet Designer a lot to extract common code patterns & insert them with zero effort. This definitely saves a lot of valuable time.


4) Tools: Though they cost money, we can’t program without them :) – ReSharper, CodeRush, etc. I personally use ReSharper. You can also check out the plugins available for these tools. You can find a good plugin here.

5) Design Techniques: I normally prefer making cross-cutting concerns oblivious to developers. This includes things like Security, Transactions, Logging, etc. They help in reducing a lot of developers’ keystrokes. You can catch my recent article on the same here. You can also use techniques like Visual Inheritance, though it’s tough to get it right with WPF (for a way out you can look here).

6) Keyboard Shortcuts: Although minor, this can save a lot of mouse clicks. The ones I use most frequently are CTRL + K + D (format document), CTRL + K + C (comment), CTRL + K + U (uncomment), and F12 (navigate to a type), though I prefer the ReSharper shortcuts over the above. An awesome way to speed up XAML editing in VS.NET is this.

7) Proper Training: I guess all of us understand the importance of this, but the amount we actually get is always less than needed. I would also recommend productivity trainings once you master the basics of a technology / framework. I have written about one such technique here.

8) Appropriate Hardware: I pity the developers running VS.NET 2008, SQL Server, Oracle, etc. on 1 GB of RAM (even 2 GB is too little).

9) Shared knowledge base: When working with a team you find many issues are recurring. An issue solved today by one developer is faced tomorrow by another. Keeping a shared knowledge base for the team definitely boosts productivity.

10) Builds that always work: Integrating local working copies can consume a lot of your project’s valuable time, and developers normally have a tendency to put off their check-ins. Have an hour per day where developers integrate their work (or a couple of slots per week). You can automate the build to save more time, but I leave that as a personal choice considering the competency & cost involved.

11) Holidays / Working hours / Recreation: Hmm… maybe I am getting into aspects which I shouldn’t, not to mention salary. So I will stop here.

Let me know what you do to gear up your team’s productivity :) .

Resolving Cyclic Dependency among VS.NET Projects

I normally prefer keeping all projects inside a single solution. This enables easy code navigation and, mainly, simplifies debugging. But sometimes this strategy hits a roadblock. For instance, recently I needed WCF & WF projects to reference each other. VS.NET doesn’t permit this, as it would create a cyclic dependency. We could resolve the issue by moving WCF / WF into a single project (using folders for logical separation), but unfortunately the WF templates are not visible inside the Web Application project template, which I was using to host the WCF services. Hence, as a way out, I had to create 2 different solutions & split the WCF / WF projects across them. Once you create separate solutions, the projects they contain can reference each other’s built assemblies, even cyclically. This might be the only way out in certain scenarios :( . Let me know if you have resolved this with other alternatives.
