Evolution of software architecture

Software architecture has evolved steadily, from monolithic mainframes through to today's microservices. In my view, it's easier to understand these architectures as points along that evolution than to digest each one independently. This post traces that evolution. Let's start with mainframes.

The mainframe era was one of expensive hardware: a powerful server capable of processing large numbers of instructions, with clients connecting to it via dumb terminals. This evolved as hardware became cheaper and dumb terminals gave way to smart terminals. These smart terminals had reasonable processing power, leading to client-server models. There are many variations of the client-server model around how much processing the client should do versus the server. For instance, the client could do all the processing, with the server acting only as a centralized data repository. The primary challenge with that approach was maintenance: pushing client-side updates to all users. This led to browser clients, where the UI is rendered by the server in response to an HTTP request from the browser.

Mainframe Client Server

As the server took on multiple responsibilities in this new world (serving UI, processing transactions, storing data, and more), architects broke down the complexity by grouping these responsibilities into logical layers: UI layer, business layer, data layer, etc. Specific products emerged to support these layers, like web servers and database servers. Depending on the complexity, these layers were physically separated into tiers. The word tier indicates a physical separation, where the web server, database server, and business processing components each run on their own machines.

3 Tier Architecture

With layers and tiers in place, the next big question was how to structure code, and what the ideal dependencies are, so that we can test and manage change better. Many architecture styles emerged as recommended practice, most notably Hexagonal (ports and adapters) architecture and Onion architecture. These styles aimed to support development approaches like Domain Driven Design (DDD), Test Driven Development (TDD), and Behavior Driven Development (BDD). The theme behind these styles and approaches is to isolate the business logic, the core of your system, from everything else. Not having your business logic depend on the UI, database, web services, etc. allows more participation from business teams, simplifies change management, minimizes dependencies, and makes the software easily testable.
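To make that isolation concrete, here's a minimal Python sketch of the ports-and-adapters idea. All class and method names are illustrative, not from any particular framework: the business core depends only on an abstract port, and storage plugs in from the outside.

```python
from abc import ABC, abstractmethod

# Port: the interface the business core depends on
class OrderRepository(ABC):
    @abstractmethod
    def save(self, order_id: str, amount: float) -> None: ...

# Business core: no knowledge of UI, database, or web services
class OrderService:
    def __init__(self, repo: OrderRepository):
        self.repo = repo

    def place_order(self, order_id: str, amount: float) -> float:
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.repo.save(order_id, amount)
        return amount

# Adapter: an in-memory stand-in; a SQL-backed adapter would slot in the same way
class InMemoryOrderRepository(OrderRepository):
    def __init__(self):
        self.orders = {}

    def save(self, order_id, amount):
        self.orders[order_id] = amount

repo = InMemoryOrderRepository()
service = OrderService(repo)
service.place_order("o-1", 49.99)
```

Swapping InMemoryOrderRepository for a real database adapter requires no change to OrderService, which is precisely what makes the core easy to test.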


The next challenge was scale. As compute became cheaper, technology became a way of life, causing disruption and challenging the status quo of established players across industries. The problems are different now: we are no longer talking about apps internal to an organization, or mainframes where users are okay with longer wait times. We are talking about a global user base expecting sub-second response times. The simple approaches to scale were better hardware (scale up) or more hardware (scale out). Better hardware is simple but expensive; more hardware is affordable but complex. More hardware means your app runs on multiple machines. That's great for stateless compute, but not so for storage, where the data now gets distributed across machines. This distribution leads us to the famous CAP (Consistency, Availability and Partition tolerance) theorem. While there are many articles on CAP, it essentially boils down to this: network partitions (due to latency, communication failures, etc.) are unavoidable and we have to accept them, which requires us to choose between availability and consistency. You can choose to be available (i.e. run multiple instances / copies of the database) and return stale data, being eventually consistent, or you can choose to be strongly consistent and give up availability (i.e. run a single instance / copy of the data so everyone gets the same query results). Traditional database servers were designed to be consistent and available (CA), with no tolerance for partitions (a single active DB server catering to all requests). But with data growth, partitioning became mainstream, and NoSQL databases emerged with master-slave replication, configurable to support either strong or eventual consistency.
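To illustrate the availability-versus-consistency choice, here's a toy Python sketch (deliberately simplified, not a real database) of asynchronous replication: the primary acknowledges writes immediately, so a read served by the follower can be stale until replication catches up.

```python
# Toy illustration: a primary ships writes to a follower asynchronously,
# so reads served by the follower can lag behind (eventual consistency).
class Replica:
    def __init__(self):
        self.data = {}

class PrimaryStore:
    def __init__(self, follower: Replica):
        self.data = {}
        self.follower = follower
        self.pending = []          # writes not yet shipped to the follower

    def write(self, key, value):
        # Acknowledged before replication completes: the system stays available
        self.data[key] = value
        self.pending.append((key, value))

    def replicate(self):
        # In a real system this would run in the background, after some delay
        for key, value in self.pending:
            self.follower.data[key] = value
        self.pending.clear()

follower = Replica()
primary = PrimaryStore(follower)
primary.write("stock", 5)
stale = follower.data.get("stock")   # None: follower hasn't caught up yet
primary.replicate()
fresh = follower.data.get("stock")   # 5: consistent, eventually
```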

CAP Theorem

Apart from scale, today's technology systems often have to deal with contention, e.g. selecting an airline seat, or a deeply discounted product that everyone wants to buy on Black Friday. As multitudes of users try to access the same piece of data, contention results. Scaling can't solve contention; it can only make it worse (imagine having to reconcile multiple inventory records for the same product within your system). This led to specific architecture styles like CQRS (Command Query Responsibility Segregation) and Event Sourcing. CQRS, in simple terms, is about separating writes (commands) from reads (queries). With writes and reads having separate stores and models, both can be optimally designed. Write stores in such scenarios typically use Event Sourcing to capture the entity state change for each transaction. Those transactions are then played back to the read store, making writes and reads eventually consistent. This eventual consistency has implications and needs to be worked through with the business to keep the customer experience intact. E.g. Banana Republic recently allowed me to order an item. They took my money, and later, during fulfillment, they realized they were out of stock (that is when things became eventually consistent). They refunded my money, sent me an apology email, and gave me a 10% discount on my next purchase to retain me as a customer. As you can see, CQRS and Event Sourcing come with their own set of tradeoffs. They should be used wisely for specific scenarios rather than as an overarching style.
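A minimal Python sketch of the idea, with illustrative names: commands append events to a log (event sourcing), and the read model is a projection built by replaying those events. In a real system the projection runs asynchronously, which is exactly where the eventual consistency described above comes from.

```python
# Write side: an append-only event log (event sourcing)
event_log = []

def handle_command(command, payload):
    # e.g. command = "SeatReserved", payload = {"seat": "12A", "user": "u1"}
    event_log.append({"type": command, "payload": payload})

# Read side: a denormalized view, rebuilt by replaying the events.
# Run asynchronously, this projection lags the write side slightly.
def project_seat_map(events):
    seats = {}
    for event in events:
        if event["type"] == "SeatReserved":
            seats[event["payload"]["seat"]] = event["payload"]["user"]
    return seats

handle_command("SeatReserved", {"seat": "12A", "user": "u1"})
handle_command("SeatReserved", {"seat": "12B", "user": "u2"})
read_model = project_seat_map(event_log)
```

Because the write side stores events rather than current state, the read store can be rebuilt from scratch at any time, and new projections can be added later without touching the write path.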


Armed with the above knowledge, you are probably wondering: can we use these different architecture styles within a single system? For instance, have parts of your system use 2-tier, others use CQRS, and others use Hexagonal architecture. While this might sound counterproductive, it actually isn't. I remember building a system for healthcare providers where every use case was different: appointments, patient registration, health monitoring, etc. Using a single architecture style across the system was definitely not helping us. Enter microservices. The microservices architecture style recommends breaking your system into a set of services. Each service can then be architected, scaled, and deployed independently. In a way, you are now dealing with vertical slices of your layers. Having these slices evolve independently allows you to adopt the style that best fits the slice in context. You might ask: while this makes sense for the architecture, won't you just have more infrastructure to provision, more units to deploy, and more stuff to manage? You are right, and what really makes microservices feasible is the agility ecosystem comprising cloud, DevOps, and continuous delivery, which brings the necessary automation and sophistication to your development processes.


So does this evolution make sense? Are there any gaps here I could fill? As always, I look forward to your comments.

Image Credits: Hexagonal Architecture, Microservices Architecture, 3-Tier architecture, client server architecture, CQRS architecture, Onion architecture, CAP Theorem

Azure ExpressRoute Primer

What is Azure ExpressRoute?
ExpressRoute is a Microsoft Azure service that lets you create private connections between Microsoft datacenters and infrastructure that's on your premises or in a colocation facility. ExpressRoute connections do not go over the public Internet, and offer higher security, reliability, and speeds, with lower latencies than typical connections over the Internet.

How to setup ExpressRoute Circuit?
ExpressRoute circuits are resources within Azure subscriptions. But before you set up an ExpressRoute connection (or circuit, as it's normally referred to), you need to make a few decisions about setup parameters.
1) Connectivity Option – You can establish a connection with the Azure cloud by extending your MPLS VPN (WAN), by leveraging your colocation provider and its cloud exchange, or by rolling out a point-to-point Ethernet connection yourself. Most large enterprises use the first option, medium-sized enterprises running in a colo go with the second, and the last is a more specialized scenario warranting higher security.
2) Port Speed – Bandwidth for your circuit
3) Service tier / SKU – standard or premium (more on this later)
4) Location – You might get multiple options here depending on your connectivity choice (#1). E.g. MPLS providers have multiple peering locations from which you can pick the one closest to you
5) Data Plan – Limited plan with pay-as-you-go egress charges, or an unlimited plan with a higher flat cost irrespective of egress volume

After you make these five choices, you can fire up PowerShell and execute New-AzureDedicatedCircuit. Remember to select the right Azure subscription (Add-AzureAccount / Select-AzureSubscription) where you want the circuit to be created. Please note you will need to import the ExpressRoute module if you haven't already (Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\ExpressRoute\ExpressRoute.psd1').

New-AzureDedicatedCircuit -CircuitName $CircuitName -ServiceProviderName $ServiceProvider -Bandwidth $Bandwidth -Location $Location -sku Standard

As soon as this completes you will get a service key, a unique identifier for your circuit. Your billing starts at this point, even though only the Azure side of things is complete. Now you need to work with your network service provider: give them your service key and ask them to complete their side of the configuration to Azure. This will also involve setting up a BGP session at your end. Once this is done you are all set to leverage ExpressRoute and connect the circuit to Azure virtual networks, with the traffic flowing over the private connection.

Connecting ExpressRoute Circuit to Azure Virtual Network
Once the circuit is configured, it's relatively straightforward to connect it to a virtual network. Once again, PowerShell is your friend. But before firing the below command, ensure your VNET and its virtual network gateway are created.

New-AzureDedicatedCircuitLink -ServiceKey "***" -VNetName "MyVNet"

The ServiceKey parameter uniquely identifies your circuit. As circuits are part of an Azure subscription (I wish there were a way to view them in the portal), your VNET should be part of the same subscription. This leads to the question: can we connect ExpressRoute circuits to VNETs across subscriptions? The answer is yes.

Connecting ExpressRoute Circuit to Azure Virtual Network across subscriptions
As we know, a circuit is part of a subscription, so as a subscription admin you will have to grant rights to other subscription admins so that they can link their VNETs to your circuit. Here's the PowerShell cmdlet to do that.

New-AzureDedicatedCircuitLinkAuthorization -ServiceKey "***" -Description "AnotherProdSub" -Limit 2 -MicrosoftIds 'devtest@contoso.com'

This command allows 2 VNETs from AnotherProdSub to connect to the ExpressRoute circuit. You might see the last parameter, MicrosoftIds, replaced by an Azure AD ID (I'm not sure which IDs are supported right now).

Once you have the authorization, you can query the service key from your subscription and link your VNET as appropriate.

Get-AzureAuthorizedDedicatedCircuit # Gets details of the circuit, including the ServiceKey

New-AzureDedicatedCircuitLink -ServiceKey "***" -VNetName 'APSVNET' # Link a VNET in another subscription

Remember you can only connect 10 VNETs per circuit. Though this is a soft limit, it can only be raised a few fold. If you need to link on the order of 100 VNETs, you need to look at ExpressRoute Premium.

What is ExpressRoute Premium?
Premium is the tier for enterprises that need more VNETs per circuit, need their circuit to span geopolitical regions, or have more than 4,000 route prefixes. You will pay around 3,000 USD more for the premium features compared to the standard tier at the same bandwidth.

How much does it cost?
ExpressRoute costs boil down to what you pay to Microsoft and what you pay to your service provider.
To Microsoft it's:
A monthly fee depending on the port speed
Bandwidth consumed (unless you are on an unlimited data plan, where you pay a flat fee of around 300 USD)
The virtual network gateway you provision in your VNET (mandatory for ExpressRoute and S2S VPN)

To the network service provider:
A one-time setup fee for the circuit
Bandwidth charges (how much data goes through their network to Microsoft)

How long does it take to setup connection?
Well, it depends. If you already have a network service provider or an exchange provider supporting Azure, it shouldn't take more than a day (excluding paperwork). Otherwise this can turn into a project in itself.

Can we use ExpressRoute to connect to Office 365?
The answer is yes, but it depends on your provider. Apart from connecting to an Azure VNET, ExpressRoute allows you to establish public peering and Microsoft peering to route your Azure PaaS (public services) and Office 365 traffic over the private network. For more details, refer to this link. Public peering allows you to route traffic to public services like Azure Storage and Azure SQL Database over the private tunnel.

Thou shalt cause no harm, Thou art SAFe!

Yup, I got another certification in a software development methodology 🙂. It's called the Scaled Agile Framework, or SAFe for short. In case you haven't heard of it, this post should provide some context as to where it fits into the larger scheme of things. At the same time, I have penned down my journey over the last few years with development processes. Hope you find it useful.

Software development processes have been a fascinating aspect of the software industry, often leading to debates among their proponents. My first exposure to these processes was through the Rational Unified Process (RUP), a comprehensive process covering various facets of software development. There was a big RUP cardboard display in the center of my workplace, with a few colleagues of mine swearing by it and taking pride in their in-depth understanding of the process. At the same time, the industry was evolving towards a lean approach to software development, working only on activities that created value for the customer. The lean approach, coupled with the agile manifesto's embrace of change, led to a set of new development processes like Scrum, Kanban, and XP, among others. Agile processes were an instant hit with the developer community, as they were largely created by practitioners rather than managers, and believed in empowering individuals by cutting the layers of management above.


Adopting agile, though, can be a journey in itself. Let's take Scrum. Scrum is a lean process with just three roles (the Product Owner, a business representative; the development team; and the Scrum Master, a facilitator) and a two-week incremental development cycle called a sprint, which covers planning, a demo, and a retrospective (continuous improvement). But most organizations got the implementation wrong. For instance, my team religiously adopted agile, even changing our workplace to reflect it, moving from isolated cubicles to collaborative benches. We ran two-week sprints, occasionally allowing the business to re-prioritize work at the start of a sprint. Surprisingly, though, when we measured the business impact of all this, it turned out to be almost zero. The issue was that while we had a two-week development cycle internally, our release to the business was still quarterly, as we left activities like regression testing and production deployment for the last sprint (water-scrum-fall, as most experts call it). And it took quite a bit of churn before we leaped from a two-week development cycle to the two-week release cycle the business desired.


Another stumbling block with agile processes is management's struggle with them. CXOs and senior management find it tough to tie their budgets, forecasts, strategy, and portfolios to things like burn-down charts, velocity, and story points. At my workplace, we tried marrying tools like Microsoft Team Foundation Server and Microsoft Project Server, the former catering to developers and the latter to executives managing portfolios. We informally adopted processes like scrum of scrums, where the respective scrum masters met every two weeks with their progress reports. Execs, though, were still looking for prescriptive guidance on managing a large portfolio of agile projects, and kept treating agile as a developer-only framework.


This is where methodologies like SAFe come in. SAFe builds on the Scrum process, providing a framework for the agile enterprise. The SAFe framework breaks the process into three broad areas: team, program, and portfolio. At the team level, not much changes: SAFe prescribes Scrum for teams to deliver incremental development in two-week sprints. In addition to sprint meetings, SAFe recommends a joint planning session (described later) involving all members, which happens at the end of every program increment (PI).


The value of the SAFe framework starts above the team level, at the program level. SAFe maps many Scrum (team) level concepts to program level ones: the team (product) backlog becomes the program backlog, user stories become features, sprints become a program increment (PI), the scrum master becomes the Release Train Engineer (RTE), the product owner becomes the product manager, and agile teams come together to form a long-lived Agile Release Train (ART). The ART is at the heart of SAFe; it is chartered to drive development of program-level features on an 8-10 week cadence called a program increment (PI). Each PI is thus an aggregation of team-level sprints, and PI objectives are an aggregate of the teams' objectives. The ART also has roles shared across sprint teams, like the system architect, RTE, and UX, among others. Unlike other agile processes, SAFe explicitly defines an architect role to create an architectural runway, ensuring teams have the necessary foundation to start building PI features. At the end of each PI, there is a retrospective and a release planning meeting for the next PI. At this time, business owners measure the PI objectives by comparing planned value with actual delivered value. None of this should sound too different to individuals and teams practicing Scrum or scrum of scrums.

The top level is the portfolio level, intended to cater to the needs of senior leadership. Here the strategic themes of the organization (e.g. enhance our digital channels to improve customer experience by 30%) are translated into portfolio epics. Epics are business- and technology-oriented, with the intent of the latter supporting the former. To limit epics and work in progress, SAFe recommends a Kanban process. Each epic in turn gets assigned to an ART, which breaks it down further (program epics, features, and user stories) and delivers incremental value with each PI iteration. SAFe recommends budgeting around the ART rather than around projects, to ensure there are no impediments to value creation. It's interesting to see SAFe offer guidance around expensing and capitalization (you know why CFOs prefer OPEX vs. CAPEX).

So, would I recommend SAFe? Or is Ken Schwaber right in calling it 'unSAFe at any speed'? From my perspective, SAFe has a unique appeal to senior leadership, the people who write the cheques. The good (or bad) part is that SAFe doesn't really offer anything substantially new, so I don't see it creating much friction with existing development teams already practicing agile. And while most organizations already have a similar process in place, SAFe does provide a good reference framework. I also like the ubiquitous language and acronyms it brings to the table: agile release train, architectural runway, etc. At the same time, like any other framework, SAFe isn't one size fits all; you will find it bloated in specific areas (e.g. roles, planning) and will have to customize it to ensure it doesn't impede your organization's agility. But overall, it appears that SAFe is here to stay.

If your developer face is still frowning, just remember: it has always been more about practices and less about processes. I am embarking on my first release train. Let's see if the title of this post holds true: 'it won't cause harm, it's SAFe :)'.

Managing Access to Cloud Resources

As you start your cloud incubation journey, one of the very first hurdles you will run into is access management. How do you secure access to your cloud provider? Whom do you allow to provision resources? Do you want to centralize provisioning, or empower project teams with self-service capability? Can you leverage on-premise identity stores for cloud access? Needless to say, these aspects can get quite tricky. In this post, I will talk about different options for managing access to cloud services, and as always I would love to hear your feedback.

No Self Service: Many organizations look at the cloud as an extension of their data center and want to enforce similar controls over their cloud environment. Their IT team provisions and de-provisions cloud resources as necessary, but end users have no direct access. Users raise tickets through tools like ServiceNow, which are then fulfilled by IT Ops through automation or manual setup.

Self Service via Custom Portal: This is standard practice across many organizations. Instead of providing direct access via the cloud service provider's portal, they create a layer of abstraction: a custom portal for managing access. This is feasible because most cloud service providers expose APIs for controlling access to cloud resources. A typical custom portal can help drive governance. An example use case: someone requests a VM image, and a request approval email is automatically sent to her manager. Further, custom portals can provide a unified view across cloud platforms, i.e. a single UI to provision workloads on AWS, Azure, or Google Cloud. But the challenge with such an initiative is keeping pace with new cloud services. Most cloud platforms introduce new features biweekly, making a custom portal a never-ending project. One solution is to limit the feature scope of the custom portal, e.g. cater to just the IaaS services: compute, network, storage, and security.

Controlled Access to Provider Portal with Extensions: Many enterprises don't want to reinvent the wheel. Their intent is to add only delta functionality to the cloud provider's existing self-service portal. For instance, most cloud provider portals have no context of the consuming enterprise: its projects, its policies, etc. In such cases, it makes sense to augment the provider portal with an additional project view and build an ecosystem to enforce organizational policies. E.g. when user A logs into the extended portal, she can view the list of projects (a project can map directly to a cloud subscription or account) and her role / rights on each. But provisioning any cloud resource would be carried out through the provider portal (perhaps via SSO with the provider portal). Depending on the rights the user has, she will be able to provision only those cloud resources.

Let’s understand the last option from Microsoft Azure perspective, though similar features are available in other cloud platforms like AWS as well.

Single Sign On:
To set up single sign-on you will need an Azure Active Directory domain configuration and an ADFS setup. You can find more details here. This ensures that only employees of the organization have access to the Azure portal and resources.

Controlling access to resources:
SSO is great, but you don't want every user of the organization to have unrestricted access to Azure resources; only authorized users should have access. That's where role-based access control (RBAC) comes in. A role in RBAC terms is a collection of actions that can be performed on an Azure resource or a group of Azure resources (groups of resources, referred to as 'resource groups' in Azure, are containers holding the resources for a given application). RBAC is currently supported in the Azure preview portal only. You can also configure access through PowerShell.

Azure RBAC

Subscription, Administrators & Azure AD:
While RBAC is the preferred way of setting up access control, knowing the different Azure administrative roles is necessary for a comprehensive understanding. Once you sign up for an Azure EA, Microsoft sets up an account for you called the Enterprise Administrator. As an enterprise admin you can create different accounts and subscriptions. Each account has an Account Administrator, who in turn can create multiple subscriptions, with each subscription having its own Service Administrator. The Service Administrator is the super user with complete access to the subscription and can provision resources (VMs, databases, etc.) as required. The Service Administrator can also create co-administrators to support them with administrative tasks.
Coming to Azure AD: you can create, rename, and delete Azure AD directories from the Azure portal. Every Azure subscription can trust only one Azure AD directory, and only the Service Administrator has the rights to choose the trusted directory for a given subscription (Settings -> Subscriptions -> Edit Directory).

Azure Subscription & Azure AD

Hope that provided some good perspective. As always, drop a note below on how you are managing access to cloud resources.

Dealing with Resourcing Constraints

As part of my current role I engage with top execs of Fortune 500 companies to discuss cloud transformation and strategy. While the discussions are very engaging, they invariably stall around resourcing. After all, everyone wants an 'A player', and they want to onboard that person the very next week. The common approach here (unless you have the budget to maintain good bench strength for your practice) is to get into an endless loop of interviewing candidates, first internally and then with the customer. And even if you are lucky enough to find the right candidate, you are still on the hook until he or she actually joins the organization. Such situations can derail projects and even dent your reputation. Below are some workarounds I have seen work; it would be good to hear your thoughts.

Contracting – Let's go over the simple option first. There are dozens of recruiting firms out there who can provide resources to staff your project. While these firms charge a premium, you can leverage them to start an engagement at short notice. Contractors can act as the tip of the spear, a launching platform for bigger projects. What's more, once you get the right person hired, you can swap them in for the contractor. Of course, all of this works only if your customer is willing to onboard contractors.

Travel Ready Offshore Candidates – Most IT companies today have delivery centers across the globe. Resources working in these offshore locations can obtain legitimate work visas, and you can plan to staff them on client engagements. This is a good option for global players, as they don't have to maintain a big onsite bench or hire contractors, which is usually expensive.

Supplementing Skill Sets with Onsite / Offshore Mix – At times it might be difficult to find a single resource with all the skills necessary for a client engagement. In such cases, you can try to split the profile, deriving an onsite / offshore mix and creating the right symbiosis. If you have the right mix of people, this can be a real savior. Even better, you can convince the customer to transfer the work entirely offshore, or plan an onsite transition when a resource becomes travel ready.

Loan from Other Teams – I have been a loaned resource myself: one of the SVPs in my previous organization pulled me out to support a key post-merger project. The idea here is that other teams within your organization might have the skill set you are looking for, or at least something close. The key is to know who those teams are and how you can leverage them. In a competing scenario, you will have to play it right so you don't end up losing the opportunity to those teams. But even then it's better, as the overall organization wins instead of losing to an external competitor.

In addition to the above, you can also revamp your referral program. There may be company norms here, but one of my earlier VPs made it so lucrative that the referral system was flooded. Interestingly, he didn't go overboard; all he did was try to bridge the gap between the internal and external referral bonus. You can also build referrals into the annual goal system, but I would recommend not pushing it down your employees' throats without enough motivation.

Finally, a word on margins (profitability). When you start a new engagement using one of the above approaches, your margins will be impacted: you get a resource from the market at a premium, and at the same time the customer doesn't yet know your capability, so they will bargain for less. You have few options here but to absorb that cost, ideally indicating that you are discounting because it's a new initiative, or planning to slowly transition to a blended onsite / offshore rate, where the margin for the overall engagement can be improved (for instance, if you make 20% on an onsite resource and 40% on an offshore resource, your margin for the engagement averages out to 30%, which could be a lot better).
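That averaging only holds when the two resources bill comparable revenue; in general the blended margin is a revenue-weighted average. A small illustrative Python helper (all figures made up):

```python
# Blended margin for an engagement, weighted by each resource's revenue share.
def blended_margin(resources):
    # resources: list of (revenue, margin_fraction) tuples, one per resource
    total_revenue = sum(rev for rev, _ in resources)
    profit = sum(rev * margin for rev, margin in resources)
    return profit / total_revenue

# One onsite resource at 20% margin and one offshore resource at 40%,
# each billing the same revenue: the engagement averages out to 30%.
mix = [(10000, 0.20), (10000, 0.40)]
margin = blended_margin(mix)   # ~0.30
```

If the offshore resource billed more of the revenue, the blended margin would tilt above 30%, which is why transitioning work offshore improves the engagement's economics over time.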

All of the above, though, is in addition to prepping your recruiters and their resourcing channels, making them feel they are integral to your team, taking risks, and having the guts to maintain a small bench, even on a wafer-thin budget.

Hope this provides you some food for thought on dealing with your resourcing constraints. Let me know if you have additional approaches or improvised versions of above. Comments / Suggestions are welcome 🙂

Client Profitability vs. Practice / Company Profitability

This post is a dummies' take on a few business terms I am dabbling with these days. The thoughts below relate primarily to software services, but I think they would help in any service industry.

Having run a startup earlier, I have always cared about margins, which are necessary for the healthy growth of a business. Before getting into a customer engagement, getting your margins (simply put, profits) right is very important, for both fixed-bid and T&M (time and materials) projects. Apart from resource costs, you also need to take into account other costs like T&E (travel and expenses) and call them out separately.

Keeping the above in mind, the profit you derive from a given customer project is called customer or client profitability (CP), usually measured as a percentage. So is good CP all a company should care about? Of course not. Even with high CP, it's still possible that the overall company or practice is making a loss. Let's see how.

The common reason for the discrepancy is overlooking fixed costs. For instance, you incur salary costs whether or not your resources are allocated (billable) to a project (e.g. the project you signed them up for ended in 5 months), and you still have to pay rent, infrastructure bills, etc. These expenses fall under the larger category called SG&A (selling, general and administrative expenses), which includes advertising, sales, taxes, training, corporate functions, etc. In short, practice profitability (PP) is not the sum of the various CPs; rather, it's the sum of CPs minus SG&A.
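In illustrative Python terms (all figures made up), this is how healthy projects can still add up to a loss-making practice:

```python
# Practice profitability = sum of client profits minus fixed SG&A costs.
def practice_profitability(client_profits, sg_and_a):
    # client_profits: absolute profit per client project; sg_and_a: fixed costs
    return sum(client_profits) - sg_and_a

# Three profitable projects totaling 250,000, yet the practice loses money
# once fixed costs (bench salaries, rent, sales, training) are subtracted.
cp = [120000, 80000, 50000]
result = practice_profitability(cp, 300000)   # -50000: a loss
```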

It should be clear by now that the only way to grow your business is to increase CP without proportionally increasing SG&A; i.e. do more with less. Most budget-planning exercises in corporate companies revolve around this agenda. One way to achieve this is to move away from RFR (Resource following Revenue) to non-linear revenue models, shifting the focus from services to products.

Hope this was useful in putting these terms into the right perspective.

Overview of Office 365

Office 365 is a suite of Microsoft products delivered as Software as a Service (SaaS) from the cloud. For consumers it represents a simplified pay-as-you-go model, helping them use Office products across multiple devices, while for enterprises the value proposition is workplace transformation by driving Enterprise Mobility.

Consumers can now pay a monthly subscription fee and have Word, Excel and other Office tools installed across 5 PCs and Macs. Users also get 5 more mobile Office installs for the Android and iOS platforms, and there is a feature called Office on Demand which allows users to temporarily stream Office 2013 applications on a Windows 7 / 8 PC. In addition, one gets 20 GB of SkyDrive integrated with Office Web Apps (a subset of the desktop versions) and 60 Skype world minutes to make calls in over 60 countries.


Enterprises, on the other hand, are being disrupted by the varied needs of geographically distributed teams, decentralized work locations, BYOD and data security, social engagement platforms, etc. Office 365 for enterprise adds hosted services like Exchange, Lync, SharePoint, Yammer, SkyDrive Pro, etc. to cater to these needs. These services can be accessed using Single Sign-On with an on-premises AD / ADFS. What's more, with the SaaS model you take the entire IT complexity and management out of the equation.

Office 365 also has something for developers. The developer subscription, which is bundled free with an MSDN subscription or otherwise costs 99 USD, allows developers to build applications for Office 365, including SharePoint Online. These applications typically enhance the Office tools – for instance, an enterprise can develop a set of applications for its employees and make them available under the "My Organization" section of the portal. Developers can build these applications using familiar development tools. For small enterprises that want an easy way to augment the out-of-the-box Office functionality, the Office team offers "Napa" – Office 365 development tools right from your browser. In addition, enterprise developers can use Visual Studio. ISVs planning to develop commercial applications can publish them to the Office Store.

Using a Single Windows Azure Active Directory tenant for All EA Azure Subscriptions

As you know by now, Windows Azure Active Directory is at the root of every Azure subscription.


But in an EA setup you typically have multiple subscriptions, and you definitely don't want to create a different WAAD tenant for every subscription. So here's what you can do (there might be other ways of achieving this). First create a Shared account and, under it, a Shared Subscription. Also create the WAAD tenant you want to use and ensure your shared subscription is under that WAAD tenant. In that WAAD tenant, create all the account administrators.


Now go to your EA portal and add new accounts, specifying the account administrators you just created. That's it – when you next create subscriptions for those newly created accounts, they will by default be part of the same WAAD tenant under which you created your shared subscription.


It can't get any easier, can it 🙂?

Windows Azure Portals and Access Levels

When you sign up for Windows Azure you get a subscription, and you are made the Service Administrator of that subscription.


While this creates a simple access model, things do get a little complicated in an enterprise where users need various levels of access. This blog post will help you understand these access levels.

Enterprise Administrator
Enterprise Administrator has the ability to add or associate Accounts to the Enrollment and can view usage data across all Accounts. There is no limit to the number of Enterprise Administrators on an Enrollment.
Typical Audience: CIO, CTO, IT Director
URL to GO: https://ea.windowsazure.com

Account Owner
An Account Owner can add Subscriptions for their Account, update the Service Administrator and Co-Administrator for an individual Subscription, and view usage data for their Account. By default, all subscriptions are named 'Enterprise' on creation; you can edit the name post-creation in the account portal. Under an EA, only Account Owners can sign up for Preview features. The recommendation is to create accounts along functional, business or geographic divisions, though a hierarchy of accounts would help larger organizations.
Typical Audience: Business Heads, IT Divisional Heads
URL to GO: https://account.windowsazure.com

Service Administrator
The Service Administrator and up to nine Co-Administrators per Subscription have the ability to access and manage Subscriptions and development projects within the Azure Management Portal. The Service Administrator does not have access to the Enterprise Portal unless they also hold one of the other two roles. It's recommended to create separate subscriptions for Development and Production, with Production having strictly restricted access.
Typical Audience: Project Manager, IT Operations
URL to GO: https://manage.windowsazure.com

Co-Administrator
Subscription Co-Administrators can perform all tasks that the Service Administrator for the subscription can perform. A Co-Administrator cannot remove the Service Administrator from a subscription. The Service Administrator and Co-Administrators for a subscription can add or remove Co-Administrators from the subscription.
Typical Audience: Test Manager, Technical Architect, Build Manager
URL to GO: https://manage.windowsazure.com

That's it! With the above know-how you can create an EA setup like the one below.


Hope this helps 🙂