What is DevOps?

Though the notion of DevOps has been around for years, I still see folks struggling to articulate what DevOps really is. Just recently in our user group meeting, participants hijacked the session for 30 minutes debating what DevOps stands for, without really arriving at a conclusion. Similarly, I have seen architects, sales teams and developers struggle to explain to their management, in simple terms, what DevOps is. In this post, I will share my thought process, and would like to hear from you if you have a simpler way of communicating the DevOps intent.

Traditionally, software development has been a complex process involving multiple teams. You have developers, testers, project managers, operations, architects, UX designers, business analysts and others, collaborating to create value for their business. The collaboration among these teams requires handshakes, which often cause friction (leading to non-value adds). For instance, the handshake where a UX designer develops the UI and then developers add code, or the handshake where an analyst captures requirements and the project team builds the desired features, or the traditional handshake between developers and testers for code validation, and so on. One such critical handshake is between developers and operations, where developers typically toss software over to operations to deploy it in upstream environments (outside of the developer’s workstation). Unfortunately, members of these two teams have been ignoring each other’s concerns for decades. Developers assume the code will just run (after all, it ran on their machine), but it rarely does. And that’s where DevOps comes to the rescue.

Considering the above, it would be safe to summarize that DevOps is any tool, technology or process that can reduce friction during the handshake between Developers and Operations (thereby creating more value for the business, e.g. faster time to market). This could be app containerization, bundling all dependencies into a single image; setting up Continuous Integration and Continuous Delivery pipelines to allow for robust, consistent deployments; adopting a microservices architecture for rapid, loosely coupled deployments; infrastructure as code to allow for reliable, version-controlled infrastructure setup; AI-enabled app monitoring tools to proactively mitigate app issues; or even reorganizing teams and driving cultural change within the IT organization. Once the DevOps objectives are clear, it’s easy to devise your strategy and approach.

Does this resonate with you? Look forward to your comments.


TOGAF – Quick Reference Guide

TOGAF is The Open Group Architecture Framework – a framework for Enterprise Architecture. In this post I am going to provide a summary of this framework to create a quick, easy reference for fellow architects. The Open Group reports that TOGAF is employed by 80% of Global 50 companies and 60% of Fortune 500 companies, if that motivates you to adopt / learn this framework.

So what’s enterprise architecture? Who can be an enterprise architect (EA)? Should every IT architect aspire to be one? The word architecture in enterprise architecture still conveys ‘organization of a system’, just that you are now elevating from system-level organization (typically tiers / layers) to an enterprise-level organization of business capabilities & the supporting IT ecosystem. If you are a system architect (SA) you can certainly move up to being an enterprise architect, though you need to weigh it carefully. EAs are different beasts. They certainly have more visibility, are close to the business, and are higher up the corporate ladder (due to alignment with the overall enterprise), but at the same time they aren’t too close to technology. So if you relish technology, enjoy being close to code, trying out cool stuff, being hands-on with technology innovations, etc., EA may not be the right thing for you. An EA’s role is a lot more involved with business, people and processes. Now I am not saying EAs don’t mess around with technology, or SAs don’t deal with processes, but the ratios are mostly skewed.

Now let’s understand what TOGAF has to offer. TOGAF essentially provides guidance for going about enterprise architecture. To start, it recommends a methodology for doing enterprise architecture called the ADM (Architecture Development Method). It then goes on to describe the typical deliverables produced throughout the ADM (Content Framework), how to logically organize them (Enterprise Continuum) and store them (Architecture Repository). For enterprises starting fresh there is guidance on creating an architecture capability and on how to adopt / adapt the ADM for a given organization. TOGAF aims at being comprehensive, and often gets cited as being bloated (or impractical, as critics call it). This is true to a large extent. I haven’t come across an organization that follows TOGAF verbatim; at the same time it’s hard to find an organization’s EA practice that hasn’t been influenced by TOGAF. Hence it helps to look at TOGAF as a reference guide rather than a prescription.

Let me answer one last frequently asked question – is TOGAF dated / dead? After all, version 9.1 was released in December 2011. Isn’t that quite long in today’s rapidly changing world? Is TOGAF still relevant? What’s the use of TOGAF in a digital world? As mentioned earlier, TOGAF is not bound to technology advancements such as Cloud, AI or Digital. The TOGAF framework holds and will work with any underlying technology. For instance, when SOA (Service Oriented Architecture) emerged there was no modification made to the TOGAF ADM; rather, TOGAF provided additional perspectives on how SOA would map to the ADM phases (maybe the Open Group could have kept a separate SOA guidance, rather than integrating that guidance into the TOGAF publication). The same applies to other technology initiatives. As an EA you should have a handle on market innovations, and on how your business could leverage them, but how you go about aligning those two – that stays the same.

With this background let me provide you with a quick overview of the TOGAF components. At the heart of TOGAF is the ADM, consisting of phases A-H, along with a preliminary phase and centralized requirements management.


  • The Preliminary Phase is used to identify business drivers and requirements for architecture work, capturing the outcome in a Request for Architecture Work document. Depending on the organization’s EA maturity, this phase will also be used for defining architecture principles, establishing the enterprise architecture team / tools, tailoring the architecture process, and determining integration with other management frameworks (ITIL, COBIT, etc.).
  • The Vision phase (Phase A) focuses on elaborating how the new proposed capability will meet business needs and address stakeholder concerns. There are various techniques, like Business Scenarios, which TOGAF recommends for assisting in this process. Along with the capability, one needs to identify business readiness (organizational), risks involved (program level) and mitigation activities. The outcome of this phase is a Statement of Architecture Work and a high-level view of the baseline and target architectures (including business, information systems & technology).
  • The next three phases, B, C and D, develop these high-level baseline and target architectures for each of business, information systems (apps + data) and technology (infrastructure), identify gaps between them and define the roadmap components (building blocks) which will bridge those gaps. These architectures and gaps are then captured in the architecture definition document, along with measurable criteria (must do to comply) captured in the architecture requirements specification. At the end of each phase a stakeholder review is conducted to validate outcomes against the Statement of Architecture Work.
  • Phase E is Opportunities and Solutions. The goal here is to consolidate gaps across the B, C and D phases, identify possible solutions, determine dependencies across them and ensure interoperability. Accordingly, EAs create work packages, group these packages into portfolios and projects, and identify transition architectures wherever an incremental approach is needed. The outcome of this phase is a draft architecture roadmap and migration plan.
  • The next phase, F, is Migration Planning. While Phase E is largely driven by EAs, Phase F requires them to collaborate with portfolio and project managers. Here the draft migration plan is further refined by assigning business value (priority) to each work package, adding cost estimates, validating risks, and finalizing the migration plan. At the end of this phase both the architecture definition and architecture requirements documents are completed.
  • In Phase G (Implementation Governance) the project implementation kicks in, and EAs need to ensure that implementations are in accordance with the target architecture. This is done by drawing up architecture contracts and getting those signed by the developing and sponsoring organizations. EAs conduct reviews throughout this phase and close it out once the solutions are fully deployed.
  • What’s guaranteed after Phase G is ‘change’. Organizational drivers do change, either top-down or bottom-up, leading to changes in the enterprise architecture. Managing this change is what Phase H is all about. Here EAs perform an analysis of each change request, and determine if the change warrants an architecture cycle of its own. This often requires architecture board (a cross-organization architecture body) approval. The typical outcome of this phase would be a new Request for Architecture Work.
  • Central to these phases is requirements management. Requirements management is a dynamic process where requirements flow between phases and, at times, between ADM cycles.

In addition to the ADM, TOGAF offers guidelines and techniques which will help you adopt / adapt the ADM. For instance, there might be cases where you skip phases or change the order of phases. Consider an enterprise committed to adopting a packaged solution, where you might do business architecture after the information systems and technology architectures. Another is where you develop the target architecture before the baseline architecture to ensure a more effective transition (not getting boxed into the existing capability). In both these cases you are adapting the ADM to your specific enterprise needs.

Next let’s discuss the architecture deliverables for each phase. We spoke about the architecture definition and architecture requirements documents. Wouldn’t it be nice if there was a meta-model which would dictate the structure of these documents, ensuring there is consistency across ADM cycles? This is where the Architecture Content Framework (ACF) comes in. The metamodel for individual artifacts can be thought of as a viewpoint from which a view (a perspective for a related set of stakeholder concerns) is created. TOGAF categorizes all viewpoints into catalogs, matrices or diagrams. Furthermore, these viewpoints are used to describe architecture building blocks (ABBs), which are then used to build systems (ABBs are mapped to solution building blocks (SBBs) in Phase E).


So now we have an enterprise architecture development methodology and a way to define its deliverables to ensure consistency. What else? How about classifying and storing these artifacts? If you look at a large enterprise, there could be hundreds of ADM cycles operating at any given point in time. Each of these cycles would generate tons of deliverables. Storing all the deliverables in a single bucket would lead to chaos and minimal reuse. This is where the Enterprise Continuum (EC), along with the Architecture and Solutions Continuums, comes in. The continuum is used to establish an evolving classification model, from Foundation to Common Systems to Industry to Organization-Specific, for all the architecture artifacts. These artifacts, along with the content metamodel, governance log, standards, etc., are stored in the architecture repository. There are two reference architecture models included in the TOGAF documentation – the TRM (Technical Reference Model), a foundation-level architecture model, and the III-RM (Integrated Information Infrastructure Reference Model), a common-systems-level architecture model.

Finally, the framework gets into the details of establishing an architecture capability within an organization. It talks about the need for an Architecture Board, its roles and responsibilities, including architecture governance (controls & objectives) and compliance (audit). For the latter two, TOGAF includes a governance framework and a compliance review process. The guidance also touches upon maturity models (based on CMM techniques) and the necessary role-skill mapping.

That was the TOGAF summary for you. I strongly encourage you to read the Open Group publication. Many find it a dry and lengthy read, but it’s the best way to learn TOGAF and a must-read if you want to clear the certification. Though it’s highly unlikely that getting certified in TOGAF will overnight establish you as an enterprise architect, it’s a good first step in that direction.

All the best for your exam, if you are planning for one. Hope you found this post useful!

NuGet Package Restore, Content Folder and Version Control

I was recently explaining this nuance to a Dev on my team, and he suggested I should capture this in a blog post. So here we go. First some NuGet background.

NuGet is the de facto standard for managing dependencies in the .NET world. Imagine you have some reusable code – rather than sharing that functionality with your team via a DLL, you can create a NuGet package for your team. What’s the benefit, you may ask?

1) Firstly, NuGet can do a lot more than adding a DLL to your project references. Your NuGet package can add configuration entries as part of the installation, execute scripts or create new folders / files within the Visual Studio project structure, which can greatly simplify the adoption of your reusable code.

2) Secondly, as a package owner you can include dependencies on other packages or DLLs. So when a team member installs your package, she will get all the required dependencies in one go.

3) Finally, the NuGet package is local to your project; the assemblies are not installed in your system’s GAC. This not only helps keep development clean, but also helps at build time. Packages don’t have to be checked into version control; rather, at build time you can restore them on your build server – no more shared lib folders.

It’s quite a simple process to create NuGet packages. Download the NuGet command line utility, organize the artifacts (DLLs, scripts, source code templates, etc.) you want to include into their respective folders, create the package metadata (nuget spec), and pack them (nuget pack) to get your nupkg file. You can now install / restore the package through Visual Studio or through the command line (nuget install / restore).

Typically NuGet recommends 4 folders to organize your artifacts – ‘lib’ contains your binaries, ‘content’ contains the folder structure and files which will be added to your project root, ‘tools’ contains scripts, e.g. init.ps1 and install.ps1, and ‘build’ contains custom build targets / props.

Now let’s get to the crux of this post – the restore aspect and what you should check into your version control. When you add a NuGet package to your project, NuGet does two things – it creates a packages.config file and a packages folder. The config file keeps a list of all the added packages, and the packages folder contains the actual packages (it’s basically an unzip of your nupkg file). The recommended approach is to check in your packages.config file but not the packages folder. As part of NuGet restore, NuGet brings back all the packages in the packages folder (see the workflow image below).


The subtle catch is that NuGet restore doesn’t restore content files, or perform the transformations that are part of them. These changes are applied the first time you install the NuGet package, and they should be checked into version control. This also means: don’t put any DLLs inside the content folder (they should anyway go in the lib folder). If you must, you will have to check even those DLLs into your version control.


In summary, NuGet restore just restores the package files; it doesn’t perform any tokenization, transformation or execution as part of it. These activities are performed at package installation, and the corresponding changes must be checked into version control.

WS-Fed vs. SAML vs. OAuth vs. OpenID Connect

Identity protocols are more pervasive than ever. Almost every enterprise you come across will have an identity product incubated, tied to a specific identity protocol. While the initial idea behind these protocols was to help enterprise employees use a single set of credentials across applications, new use cases have shown up since then. In this post, I am going to provide a quick overview of the major protocols and the use cases they are trying to solve. Hope you will find it useful.

WS-Fed & SAML are the old boys in the market. Appearing in the early 2000s, they are widespread today. Almost every major SSO COTS product supports one of these protocols. WS-Fed (WS-Federation) is a protocol from the WS-* family, primarily supported by IBM & Microsoft, while SAML (Security Assertion Markup Language) has been adopted by Computer Associates, Ping Identity and others for their SSO products. The premise of both WS-Fed and SAML is similar – decouple the applications (relying party / service provider) from the identity provider. This decoupling allows multiple applications to use a single identity provider through a predefined protocol, without caring about the implementation details of the identity provider per se.

For web applications, this works via a set of browser redirects and message exchanges. The user tries to access the web application, and the application redirects the user to the identity provider. The user authenticates himself, and the identity provider issues a claims token and redirects the user back to the application. The application then validates the token (trust needs to be established out of band between the application and the IdP), authorizes user access by asserting claims, and allows the user to access protected resources. The token is then stored in a session cookie in the user’s browser, ensuring the process doesn’t have to be repeated for every access request.
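To make the redirect leg concrete, here is a minimal Python sketch of how a SAML request travels in that first redirect (the HTTP-Redirect binding: raw DEFLATE, then base64, then URL-encoding). The IdP URL and the AuthnRequest XML below are made-up placeholders; a real implementation would also sign the request and validate the response.

```python
import base64
import zlib
from urllib.parse import urlencode, urlsplit, parse_qs

def build_saml_redirect_url(idp_sso_url, authn_request_xml):
    """SAML HTTP-Redirect binding: raw DEFLATE -> base64 -> URL-encode."""
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 => raw DEFLATE, no zlib header
    deflated = compressor.compress(authn_request_xml.encode("utf-8")) + compressor.flush()
    saml_request = base64.b64encode(deflated).decode("ascii")
    return idp_sso_url + "?" + urlencode({"SAMLRequest": saml_request})

def decode_saml_request(redirect_url):
    """Reverse the encoding - roughly what the IdP does on receiving the redirect."""
    query = parse_qs(urlsplit(redirect_url).query)
    deflated = base64.b64decode(query["SAMLRequest"][0])
    return zlib.decompress(deflated, -15).decode("utf-8")
```

The round trip (encode on the application side, decode on the IdP side) is what keeps the request intact through the browser redirect.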

At a high level there isn’t much separating the flows of these two protocols, but they are different specifications, each with its own lingo. WS-Fed is perceived to be less complex and lightweight (certainly an exception for the WS-* family), while SAML, being more complex, is also perceived to be more secure. In the end you have to look at your ecosystem, including existing investments, partners, in-house expertise, etc., and determine which one will provide higher value. The diagram below, taken from Wikipedia, depicts the SAML flow.


OAuth (Open Standard for Authorization) has a different intent (the current version is OAuth 2.0). Its driving force isn’t SSO but access delegation (a type of authorization). In the simplest terms, it means giving your access to someone you trust, so that they can perform a job on your behalf – e.g. updating your status across Facebook, Twitter, Instagram, etc. with a single click. The option you have is either to go to these sites manually, or to delegate your access to an app which can implicitly connect to these platforms and update the status on your behalf. The flow is pretty simple: you ask the application to update your status on Facebook, and the app redirects you to Facebook. You authenticate yourself with Facebook, and Facebook throws up a consent page stating you are about to give this app rights to update your status on your behalf. You agree, the app gets an opaque access token from Facebook and caches it, then sends the status update with the access token to Facebook. Facebook validates the access token (easy in this case, as the token was issued by Facebook itself) and updates your status.
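The first redirect in that flow amounts to building an authorization request URL. Here is a minimal Python sketch of that step of the OAuth2 authorization code grant; the endpoint and parameter values are hypothetical illustrations, not Facebook’s actual ones.

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def build_authorization_url(authorize_endpoint, client_id, redirect_uri, scope, state):
    """Step 1 of the OAuth2 authorization code grant: send the resource
    owner to the authorization server's consent page."""
    params = {
        "response_type": "code",   # ask for an authorization code
        "client_id": client_id,    # identifies the delegated app
        "redirect_uri": redirect_uri,
        "scope": scope,            # what the app is allowed to do, e.g. post updates
        "state": state,            # CSRF protection, echoed back on the redirect
    }
    return authorize_endpoint + "?" + urlencode(params)
```

After consent, the authorization server redirects back with a code, which the client exchanges at the token endpoint for the opaque access token described above.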

OAuth refers to the parties involved as Client, Resource Owner (end-user), Resource Server, and Authorization Server. Mapping these to our Facebook example: the Client is the application trying to do work on your behalf, the Resource Owner is you (you own the Facebook account), the Resource Server is Facebook (holding your account), and the Authorization Server is also Facebook (in our case Facebook issues the access token with which the client can update the status on your Facebook account). It’s perfectly ok for the Resource Server and Authorization Server to be managed by separate entities; it just means more work to establish common ground for protocols and token formats. The screenshot below depicts the OAuth2 protocol flow.


The web community liked the lightweight approach of OAuth. And hence the question came – can OAuth do authentication as well, providing an alternative to heavyweight protocols like WS-Fed and SAML? Enter OpenID Connect, which is about adding authentication to OAuth. It aims at making the Authorization Server do more – i.e. issue not only an access token, but also an ID token. The ID token is a JWT (JSON Web Token) containing information about the authentication event, like when it occurred, etc., and also about the subject / user (the specification talks of a UserInfo Endpoint to obtain user details). Going back to the Facebook example, here the client not only relies on Facebook to provide an opaque access token for status updates, but also an ID token which the client can consume to validate that the user actually authenticated with Facebook. It can also fetch any additional user details it needs via Facebook’s UserInfo Endpoint. The diagram below, from the OpenID Connect spec, indicates the protocol flow.


OP in the above case is the OpenID Provider. All OpenID Providers have their discovery details published via a JSON document, found by concatenating the provider URL with /.well-known/openid-configuration. This document has all the provider details, including the Authorization, Token and UserInfo Endpoints. Let’s see a quick example with a Microsoft offering called Azure Active Directory (Azure AD). Azure AD, being an OpenID Provider, has the OpenID configuration for its tenant demoad2.onmicrosoft.com available at https://login.microsoftonline.com/demoad2.onmicrosoft.com/.well-known/openid-configuration.
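Both ideas can be sketched in a few lines of Python – building a provider’s discovery URL, and peeking inside an ID token’s claims. Note the decoder below deliberately skips signature verification; a real client must validate the token signature against the keys published at the provider’s jwks_uri.

```python
import base64
import json

def discovery_url(provider_url):
    """OpenID Connect discovery: provider metadata lives at a
    well-known path under the issuer URL."""
    return provider_url.rstrip("/") + "/.well-known/openid-configuration"

def decode_id_token_payload(id_token):
    """Decode the claims segment of a JWT ID token.
    WARNING: no signature verification - illustration only."""
    payload_b64 = id_token.split(".")[1]            # JWT = header.payload.signature
    payload_b64 += "=" * (-len(payload_b64) % 4)    # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

For the Azure AD tenant mentioned above, `discovery_url("https://login.microsoftonline.com/demoad2.onmicrosoft.com")` yields exactly the well-known URL from the post.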

Fairly digestible, isn’t it 🙂 ?

Evolution of software architecture

Software architecture has been an evolutionary discipline, from monolithic mainframes to recent microservices. It’s easier to understand these software architectures from an evolution standpoint than to try to grasp them independently. This post is about that evolution. Let’s start with mainframes.

The mainframe era was one of expensive hardware, with a powerful server capable of processing large numbers of instructions and clients connecting to it via dumb terminals. This evolved as hardware became cheaper and dumb terminals paved the way for smart terminals. These smart terminals had reasonable processing power, leading to client-server models. There are many variations of the client-server model around how much processing the client should do versus the server. For instance, the client could do all the processing, with the server just acting as a centralized data repository. The primary challenge with that approach was maintenance and pushing client-side updates to all the users. This led to browser clients, where the UI is essentially rendered by the server in response to an HTTP request from the browser.

Mainframe Client Server

As servers took on multiple responsibilities in this new world – serving the UI, processing transactions, storing data and others – architects broke down the complexity by grouping these responsibilities into logical layers: UI Layer, Business Layer, Data Layer, etc. Specific products emerged to support these layers, like web servers, database servers, etc. Depending on the complexity, these layers were physically separated into tiers. The word tier indicates a physical separation, where the web server, database server and business processing components run on their own machines.

3 Tier Architecture

With layers and tiers around, the next big question was how do we structure them, and what are the ideal dependencies across these layers, so that we can manage change better? Many architecture styles showed up as recommended practices, most notably Hexagonal (ports and adapters) architecture and Onion architecture. These styles aimed to support development approaches like Domain Driven Design (DDD), Test Driven Development (TDD), and Behavior Driven Development (BDD). The theme behind these styles and approaches is to isolate the business logic – the core of your system – from everything else. Not having your business logic dependent on UI, database, web services, etc. allows for more participation from business teams, simplifies change management, minimizes dependencies, and makes the software easily testable.
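Here is a minimal Python sketch of the ports-and-adapters idea. The names (OrderService, OrderRepository) are illustrative, not from any particular framework: the core depends only on an abstract port, so swapping the in-memory adapter for a SQL or REST one wouldn’t touch the business logic.

```python
from abc import ABC, abstractmethod

# Port: the core's view of persistence, owned by the business logic
class OrderRepository(ABC):
    @abstractmethod
    def save(self, order): ...
    @abstractmethod
    def get(self, order_id): ...

# Core business logic: depends only on the port, never on a concrete database
class OrderService:
    def __init__(self, repository: OrderRepository):
        self.repository = repository

    def place_order(self, order_id, amount):
        if amount <= 0:                      # the business rule lives in the core
            raise ValueError("order amount must be positive")
        self.repository.save({"id": order_id, "amount": amount})
        return order_id

# Adapter: one concrete implementation of the port; others plug in the same way
class InMemoryOrderRepository(OrderRepository):
    def __init__(self):
        self._orders = {}
    def save(self, order):
        self._orders[order["id"]] = order
    def get(self, order_id):
        return self._orders.get(order_id)
```

Because the dependency arrow points inward (adapter implements the core’s port), the core can be tested with a fake repository and the persistence technology can change without rippling through the business rules.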


The next challenge was scale. As compute became cheaper, technology became a way of life, causing disruption and challenging the status quo of established players across industries. The problems are different now: we are no longer talking of apps that are internal to an organization, or of mainframes where users are ok with longer wait times. We are talking of a global user base with sub-second response times. The simpler approaches to scale were better hardware (scale up) or more hardware (scale out). Better hardware is simple but expensive; more hardware is affordable but complex. More hardware means your app runs on multiple machines. That’s great for stateless compute, but not so for storage, where the data is now distributed across machines. This distribution leads us to the famous CAP (Consistency, Availability and Partition tolerance) theorem. While there are many articles on CAP, essentially it boils down to this: network partitions are unavoidable and we have to accept them, and that is going to require us to choose between availability and consistency. You can choose to be available and return stale data, being eventually consistent, or you can choose to be strongly consistent and give up on availability (i.e. returning an error for the missing data – e.g. when you are trying to read from a different node than where you wrote your data). Traditional database servers are consistent and available (CA) with no tolerance for partitions (an active DB server catering to all requests). Then there are NoSQL databases with master-slave relationships, configurable to support strong or eventual consistency.
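The availability-vs-consistency tradeoff can be illustrated with a toy two-replica store in Python (a deliberately simplified model, not a real database): a read served from a node that hasn’t replicated yet returns stale or missing data.

```python
class Replica:
    """A single node holding its own copy of the data."""
    def __init__(self):
        self.data = {}

class EventuallyConsistentStore:
    """Toy model of choosing availability: writes land on one node and
    replicate later, so reads from the other node may be stale."""
    def __init__(self):
        self.primary = Replica()
        self.secondary = Replica()
        self._pending = []               # writes queued for async replication

    def write(self, key, value):
        self.primary.data[key] = value
        self._pending.append((key, value))

    def read_secondary(self, key):
        # Always answers (available), but may return stale/missing data
        return self.secondary.data.get(key)

    def replicate(self):
        """The 'eventual' part: apply queued writes to the secondary."""
        for key, value in self._pending:
            self.secondary.data[key] = value
        self._pending.clear()
```

A strongly consistent system would instead refuse the secondary read (give up availability) until replication caught up.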

CAP Theorem

Apart from scale challenges, today’s technology systems often have to deal with contention – e.g. selecting an airline seat, or a high-discount product that everyone wants to buy on Black Friday. As a multitude of users try to get access to the same piece of data, it leads to contention. Scaling can’t solve this contention; it can only make it worse (imagine having multiple records of the same product inventory within your system). This led to specific architecture styles like CQRS (Command Query Responsibility Segregation) & Event Sourcing. CQRS, in simple terms, is about separating writes (commands) from reads (queries). With writes and reads having separate stores and models, both can be optimally designed. Write stores in such scenarios typically use Event Sourcing to capture entity state changes for each transaction. Those transactions are then played back to the read store, making writes and reads eventually consistent. This model of being eventually consistent has implications and needs to be worked through with the business to keep the customer experience intact. E.g. Banana Republic recently allowed me to order an item. They took my money, and later during fulfillment they realized they were out of stock (that is when things became eventually consistent). They refunded my money, sent me a sorry email and gave me a 10% discount on my next purchase to show they value me as a customer. As you can see, CQRS and Event Sourcing come with their own set of tradeoffs. They should be used wisely for specific scenarios rather than as an overarching style.
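A stripped-down sketch of the two sides in Python (illustrative names, not a production event store): the write side appends events, and the read side is a projection rebuilt by replaying them – stale until the next replay.

```python
class EventStore:
    """Write side (Event Sourcing): state changes are captured as an
    append-only sequence of events instead of overwriting current state."""
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

class InventoryReadModel:
    """Read side (CQRS query model): a projection rebuilt by replaying
    events; it lags the write side until the next replay."""
    def __init__(self):
        self.stock = {}

    def project(self, events):
        self.stock = {}
        for event in events:
            qty = self.stock.get(event["sku"], 0)
            if event["type"] == "stock_added":
                self.stock[event["sku"]] = qty + event["qty"]
            elif event["type"] == "stock_sold":
                self.stock[event["sku"]] = qty - event["qty"]
```

Because the full event history is retained, the read model can be rebuilt at any time, and the same events can feed other projections (reports, caches) optimized for their own queries.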


Armed with the above knowledge, you are now probably thinking: can we use these different architecture styles within a single system? For instance, have parts of your system use 2-tier, others use CQRS, and others use Hexagonal architecture. While this might sound counterproductive, it actually isn’t. I remember building a system for healthcare providers, where every use case was so different – appointments, patient registration, health monitoring, etc. Using a single architecture style across the system was definitely not helping us. Enter Microservices. The microservices architecture style recommends breaking your system into a set of services. Each service can then be architected independently, scaled independently, and deployed independently. In a way, you are now dealing with vertical slices of your layers. Having these slices evolve independently allows you to adopt a style that’s more befitting to the slice in context. You might ask: while this makes sense for the architecture, won’t you just have more infrastructure to provision, more units to deploy and more stuff to manage? You are right, and what really makes microservices feasible is the agility ecosystem comprising cloud, DevOps, and continuous delivery. They bring automation and sophistication to your development processes.


So does this evolution make sense? Are there any gaps in here which I could fill? As always, I will look forward to your comments.

Image Credits: Hexagonal Architecture, Microservices Architecture, 3-Tier architecture, client server architecture, CQRS architecture, Onion architecture, CAP Theorem

Azure ExpressRoute Primer

What is Azure ExpressRoute?
ExpressRoute is a Microsoft Azure service that lets you create private connections between Microsoft datacenters and infrastructure that’s on your premises or in a colocation facility. ExpressRoute connections do not go over the public Internet, and offer higher security, reliability and speeds, with lower latencies, than typical connections over the Internet.

How to set up an ExpressRoute circuit?
ExpressRoute circuits are resources within Azure subscriptions. But before you set up an ExpressRoute connection (or circuit, as it’s normally referred to), you need to make decisions about the setup parameters.
1) Connectivity Option – You can establish a connection with the Azure cloud either by extending your MPLS VPN (WAN), by leveraging your colocation provider and its cloud exchange, or by rolling out a point-to-point Ethernet connection yourself. Most large enterprises would use the first option, a medium-sized enterprise running in a COLO would go with the second, and the last is a more specialized scenario warranting higher-level security.
2) Port Speed – Bandwidth for your circuit
3) Service tier / SKU – standard or premium (more on this later)
4) Location – You might get multiple options for this depending on your choice of connectivity (#1). E.g. MPLS providers have multiple peering locations from which you can pick the one closest to you.
5) Data Plan – A limited plan with pay-as-you-go egress charges, or an unlimited plan with a higher cost irrespective of egress volume.

After you make these five choices, you can fire up PowerShell to execute ‘New-AzureDedicatedCircuit’. Remember to select the right Azure subscription (Add-AzureAccount / Select-AzureSubscription) where you want the circuit to be created. Please note you will need to import the ExpressRoute module if you haven’t already (Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\ExpressRoute\ExpressRoute.psd1').

New-AzureDedicatedCircuit -CircuitName $CircuitName -ServiceProviderName $ServiceProvider -Bandwidth $Bandwidth -Location $Location -sku Standard

As soon as this completes you will get a service key, which is a kind of unique identifier for your circuit. Your billing starts at this step, even though we have only completed the Azure side of things. Now you need to work with your network service provider, provide your service key, and ask them to complete their side of the configuration to Azure. This will also involve setting up a BGP session at your end. Once this is done you are all set to leverage ExpressRoute and connect the circuit to Azure virtual networks – with the traffic flowing over the private connection.

Connecting ExpressRoute Circuit to Azure Virtual Network
Once the circuit is configured, it's relatively straightforward to connect it to a virtual network. Once again, PowerShell is your friend. But before firing the below command, ensure your VNET and its virtual network gateway are created.
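If the gateway doesn't exist yet, one way to create it with the classic cmdlets is sketched below (the VNET name is illustrative; ExpressRoute requires a dynamic routing gateway):

```powershell
# Create a dynamic routing gateway on the VNET - required before linking a circuit
New-AzureVNetGateway -VNetName 'MyVNet' -GatewayType DynamicRouting
```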

New-AzureDedicatedCircuitLink -ServiceKey "***" -VNetName "MyVNet"

The ServiceKey parameter uniquely identifies your circuit. As circuits are part of an Azure subscription (I wish there was a way to view them in the portal), your VNET should be part of the same subscription. This leads to the question – can we connect ExpressRoute circuits to VNETs across subscriptions? The answer is yes.

Connecting ExpressRoute Circuit to Azure Virtual Network across subscriptions
As we know, a circuit is part of a subscription, so as the subscription admin you will have to grant rights to other subscription admins so that they can link their VNETs to your circuit. Here's the PowerShell cmdlet to do that.

New-AzureDedicatedCircuitLinkAuthorization -ServiceKey "***" -Description "AnotherProdSub" -Limit 2 -MicrosoftIds 'devtest@contoso.com'

This command allows 2 VNETs from AnotherProdSub to connect to the ExpressRoute circuit. You might see the last parameter, MicrosoftIds, replaced by an Azure AD ID (I am not sure which IDs are supported right now).
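As the circuit owner, you can later review the authorizations you have handed out; a sketch, assuming the classic cmdlets:

```powershell
# List the link authorizations granted on this circuit
Get-AzureDedicatedCircuitLinkAuthorization -ServiceKey "***"
```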

Once you have the authorization, you can query the service key from your subscription and link your VNET as appropriate.

Get-AzureAuthorizedDedicatedCircuit # This will get details of the circuit, including the ServiceKey

New-AzureDedicatedCircuitLink -ServiceKey "***" -VNetName 'APSVNET' # Link a VNET in another subscription

Remember you can only connect 10 VNETs per circuit. Though this is a soft limit, you can only grow it a few fold. If you need to connect something like 100 VNETs to a circuit, you need to look at ExpressRoute Premium.

What is ExpressRoute Premium?
Premium is a tier for enterprises that need more VNETs per circuit, need their circuit to span geo-political regions, or have more than 4,000 route prefixes. You will pay around 3,000 USD more for the premium features, compared to the standard tier at the same bandwidth.

How much does it cost?
ExpressRoute costs boil down to what you pay to Microsoft and what you pay to your network service provider.
To Microsoft it's:
1) A monthly fee depending on the port speed
2) Bandwidth consumed (unless you are on the unlimited data plan, where you pay a flat fee of around 300 USD irrespective of egress volume)
3) The virtual network gateway you provision in your VNET (mandatory for both ExpressRoute and S2S VPN)

To the network service provider it's:
1) A one-time setup fee for the circuit
2) Bandwidth charges (based on how much data flows through their cloud to Microsoft)

How long does it take to setup connection?
Well, it depends. If you already have a network service provider or an exchange provider supporting Azure, it shouldn't take more than a day (excluding paperwork). Otherwise, this can turn out to be a project in itself.

Can we use ExpressRoute to connect to Office 365?
The answer is yes, but it depends on your provider. Apart from connecting to Azure VNETs, ExpressRoute allows you to establish public peering and Microsoft peering to route your Azure PaaS (public services) and Office 365 traffic over the private network. For more details refer to this link. Public peering allows you to route traffic to public services like Azure SQL Database over the private tunnel.
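Setting up a peering on the circuit is again a PowerShell affair; a sketch using the classic cmdlets, where the ASN, VLAN and /30 subnets are illustrative values you would agree with your provider:

```powershell
# Configure public peering on the circuit - ASN, VLAN and subnets are placeholders
New-AzureBGPPeering -ServiceKey "***" `
    -AccessType Public `
    -PeerAsn 65001 `
    -PrimaryPeerSubnet '203.0.113.0/30' `
    -SecondaryPeerSubnet '203.0.113.4/30' `
    -VlanId 200
```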

Thou shalt cause no harm, Thou art SAFe!

Yup, got another certification on a software development methodology 🙂 . It's called the Scaled Agile Framework, or SAFe for short. In case you haven't heard of it, this post should provide you some context as to where it fits into the larger scheme of things. At the same time, I have penned down my journey over the last few years with respect to development processes. Hope you find it useful.

Software development processes have been a fascinating aspect of the software industry, often leading to debates among their proponents. My first exposure to these processes was through the Rational Unified Process (RUP) – a comprehensive process covering various facets of software development. There was a big RUP cardboard chart in the center of my workplace, with a few colleagues of mine swearing by it and taking pride in their in-depth understanding of the process. At the same time, the industry was evolving towards a lean approach to software development, working only on those activities that created value for the customer. The lean approach, coupled with the agile manifesto for embracing change, led to a set of new development processes like Scrum, Kanban and XP, among others. Agile processes were an instant hit with the developer community, as they were largely created by practitioners, not managers, and believed in empowering individuals by cutting out layers of management above.


Adopting agile, though, can be a journey in itself. Let's take Scrum. Scrum is a lean process with just three roles (Product Owner – a business representative, the development team, and Scrum Master – a facilitator) and a two-week incremental development cycle called a sprint, which covers planning, a demo and a retrospective (continuous improvement). But most organizations got the implementation wrong. For instance, my team religiously adopted agile, even changing our workplace to reflect it – moving from isolated cubicles to collaborative benches, etc. We ran two-week sprints, occasionally allowing business to re-prioritize work at the start of a sprint. Surprisingly, though, when we measured the business impact of all this, it turned out to be almost zero. The issue was that while we internally had a two-week development cycle, our release to business was still every quarter, as we left activities like regression testing, production deployment, etc. for the last sprint – water-scrum-fall, as most experts call it. And it took quite a bit of churn before we leaped from a two-week development cycle to a two-week release cycle, as desired by business.


Another stumbling block with agile processes is management's struggle with them. CXOs and senior management find it tough to tie their budgets, forecasts, strategy and portfolios to things like burndown charts, velocity, story points, etc. At my workplace, we tried marrying different tools like Microsoft Team Foundation Server and Microsoft Project Server, with the former catering to developers and the latter to executives managing portfolios. We informally adopted processes like scrum of scrums, where the respective scrum masters met every two weeks with their progress reports. Execs, though, were still looking for prescriptive guidance to manage a large portfolio of agile projects, and kept treating agile as a developer-only framework.


This is where methodologies like SAFe come in. SAFe builds on the Scrum process, providing a framework for the agile enterprise. The SAFe framework breaks the process into three broad areas – team, program and portfolio. At the team level, nothing changes much: SAFe prescribes Scrum for teams to deliver incremental development in two-week sprints. In addition to the sprint meetings, SAFe recommends a joint planning session (described later) involving all members, which happens at the end of every program increment (PI).


The value of the SAFe framework starts above the team level, at the program level. SAFe maps many Scrum (team) level concepts to program-level ones: the team (product) backlog becomes the program backlog, user stories become features, sprints become a program increment (PI), the Scrum Master becomes the Release Train Engineer (RTE), the Product Owner becomes the Product Manager, and agile teams come together to form a long-lived Agile Release Train (ART). The ART is at the heart of SAFe. It is chartered to drive development of program-level features on an 8-10 week cadence called a program increment (PI); each PI is thus an aggregation of team-level sprints, and PI objectives are an aggregate of the teams' objectives. The ART also has roles shared across the sprint teams, such as the System Architect, RTE and UX, among others. Unlike most agile processes, SAFe explicitly defines an architect role to create an architectural runway, ensuring teams have the necessary foundation to start building PI features. At the end of each PI there is a retrospective and a release planning meeting for the next PI. At this time, business owners measure the PI objectives by comparing planned value with actual delivered value. None of this should sound too different to individuals and teams practicing scrum or scrum of scrums.

The top level is the portfolio level, intended to cater to the needs of senior leadership. Here the strategic themes of the organization (e.g. enhance our digital channels to improve customer experience by 30%) are translated into portfolio epics. Epics are business or technology oriented, with the intent of the latter supporting the former. To limit epics and work in progress, SAFe recommends a Kanban process. Each epic in turn gets assigned to an ART, which breaks it down further (into program epics, features and user stories) and delivers incremental value with each PI iteration. SAFe recommends budgeting around the ART and not around projects, to ensure there are no impediments to value creation. It's interesting to see SAFe offer guidance around expenditure and capitalization (you know why CFOs prefer OPEX vs. CAPEX).

So, would I recommend SAFe? Or is Ken Schwaber right in calling it 'unSAFe at any speed'? From my perspective, SAFe has a unique appeal to senior leadership – the people who write the cheques. The good (or bad) part is that SAFe doesn't really offer anything substantially new, and hence I don't see it creating much friction with existing development teams already practicing agile. And while most organizations have a similar process in place, SAFe does provide a good reference framework. I also like the ubiquitous language and acronyms it brings to the table – agile release train, architectural runway, etc. At the same time, like any other framework, SAFe isn't one size fits all; you will find it bloated in specific areas (e.g. roles, planning, etc.), and hence you will have to customize it to ensure it doesn't impede your organization's agility. But overall, it appears that SAFe is here to stay.

If your developer face is still frowning, just remember: it has always been more about practices and less about processes. I am embarking on my first release train. Let's see if the title of this post holds true – 'it won't cause harm, it's SAFe :)'.