Niraj Bhatt – Architect's Blog

Ruminations on .NET, Architecture & Design

Tag Archives: Architecture

It’s Tech Ed Time!!!

Tech Ed 2010 is here. I will be presenting at the Architect Track on "Integration and Identity challenges for Enterprise Grade Cloud Applications". I also have a session lined up for the Community Track with an interesting title – "Imagine a world free of login screens …". I am really geared up for these sessions and look forward to providing maximum ROI to the audience for their time and money. If you are attending Tech Ed, do drop in to say hello.

I am a bit late on this post and missed uploading the presentation decks from my last VTD. I hope to bundle all of them together and upload them post Tech Ed.


SQL Server Reporting Services (SSRS) Architecture Overview

This is a level-100 post for people trying to figure out how SQL Server Reporting Services (SSRS) works and what makes it work. Honestly, I haven't found SSRS architecture explained that clearly. I had given up in the past, as the search engines weren't leading to any easy-to-understand sources. A few days back I had to give a management presentation on migrating to SSRS, so with my back to the wall I had little option but to dive in. During the journey I came across a few distilled facts that I am sharing in this blog post. Hope you find them useful 🙂 .

SSRS is an optional package which you can select to install while installing SQL Server. SSRS in turn is made up of a number of components. The simplest diagram I could find that describes these components and their deployment was from TechNet.

As you can see in the above diagram, when you install SSRS it creates the Report Server databases in your SQL Server instance. These databases are ReportServerDB and ReportServerTempDB, which are used to store report configurations and other things, including caching, session state, etc., that improve the overall performance. You have the option of installing other components like Report Manager and Report Server on the same machine where the SQL Server instance is running, or you can install them on a different server (the typical enterprise setup). An important thing to note here is that if you opt for the latter, you would end up paying for two or more SQL Server licenses.

As it turns out, there are three distinct components of SQL Server Reporting Services:

1) Report Server: It's an overloaded term, largely used to indicate the set of components that allow interaction with the Report Server database. SSRS provides Web Services (.asmx) which allow LOB applications to interact with the Report Server database directly (http://computername/ReportServer/reportservicexxxx.asmx, where xxxx is the version of SSRS). SSRS 2005 created virtual directories for Report Server and Report Manager (discussed next), but SSRS 2008 leverages the OS-level HTTP listener, making SSRS independent of IIS. This allows bundling of Report Server and Report Manager within a Windows service, ReportingServicesService.exe. The name of this Windows service is ReportServer. ReportingServicesService.exe functionality also includes report processing, scheduling (auto-generated reports), subscriptions (mailers), etc.
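Purely as an illustration of the Web Services route, here is a minimal C# sketch. It assumes a web reference has been generated from reportservice2005.asmx with the default proxy class name ReportingService2005; the call simply lists the catalog items stored in the Report Server database.

// Minimal sketch – assumes a web reference generated from
// http://computername/ReportServer/reportservice2005.asmx
// with the default proxy class name ReportingService2005.
ReportingService2005 rs = new ReportingService2005();
rs.Credentials = System.Net.CredentialCache.DefaultCredentials; // Windows authentication

// List the folders / reports stored in the Report Server database
CatalogItem[] items = rs.ListChildren("/", true);
foreach (CatalogItem item in items)
{
    Console.WriteLine(item.Path);
}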

2) Report Manager: An ASP.NET web-based application (http://computername/Reports) that in turn interacts with the Report Server Web Services. As the name indicates, Report Manager allows you to manage reports in terms of configuring security access, organizing them into folders (none of these folders map to physical directories; they are stored as details in the Report Server database), subscribing to them, etc. One can also create reports (see the next point as to how) and deploy them to the Report Server database using Report Manager. This is handy for some restricted-user / production scenarios, though most developers prefer to deploy reports from BI Development Studio. As discussed earlier, with SSRS 2008 this component is bundled within the ReportServer Windows service.

3) Report Designer: There would be a few people on your team whom you may want to designate as report designers. Report designers can design reports using VS.NET Business Intelligence projects (Report Server Project). They create data sources (normally a shared data source (.rds) that's used across a set of reports), create datasets (using queries / stored procedures on top of a data source), define the relevant report parameters (mapped to datasets for value retrieval via the Report Data window; these can be passed in from .NET applications too), set field formats (using the properties window with the pervasive VB expressions – e.g. formatting a textbox to display currency decimals) and create layouts (e.g. grouping). Once they are done designing their reports (.rdl files – described later) they can test (preview) them and publish them to the Report Server (this is done by providing the Report Server URL in the project properties; SSRS then creates a specific folder for your project). Once published, these reports are available for end-user consumption. Advanced scenarios like interacting with Excel may require a third-party product like OfficeWriter.

There are a few other important aspects of SSRS which one should be familiar with.

Report Builder is another tool, targeted at business users who want to generate custom reports on the fly. Report Builder is a ClickOnce application, intuitive and easy to use, but it doesn't support all the options available with VS.NET. It's also possible to install Report Builder as a standalone application.

Report Model is the base for report creation with Report Builder. It's a simplified view of a relational database, targeted at business users for ad hoc report creation. Report models are created using BI Development Studio (Report Model Project – .smdl files). A report model is built on top of a Data Source View (.dsv) that defines a logical model based on one or more data sources. The generated models mainly consist of entities (relational tables), fields (attributes of a relational table) and roles (entity relationships – 1-1, 1-*, *-1). Models also contain other attributes, like aggregate values, that help ease reporting for end users. Once created, a report model has to be deployed in a similar way to reports. You can also use a report model as a data source for generating reports via Report Designer. While it's easy to deploy a report model from BI Development Studio, deploying one manually (e.g. in production) requires you to merge the .smdl and .dsv files.

RDL – this is another term you will run into while talking about SSRS. RDL stands for Report Definition Language. It is an XML file which stores the query information, data source information, etc. required to generate a report. There is another type of report definition – RDLC (Report Definition Language Client-side) – which doesn't store any of the above configuration. RDLC is a client-side component (used in VS.NET application projects) to which you can pass data (e.g. via a DataSet) coming from any data source. RDLC can be useful for scenarios like implementing custom pagination (SSRS 2005 pagination, by default, is client-side pagination).
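To make the RDLC (local processing) flow concrete, below is a minimal C# codebehind sketch. The report file name OpenTickets.rdlc, the dataset name "TicketDataSet" and the DataTable ticketTable are illustrative assumptions; the dataset name must match the one defined inside the RDLC.

using Microsoft.Reporting.WebForms;

// ticketTable is a System.Data.DataTable filled by your own data access code
ReportViewer1.ProcessingMode = ProcessingMode.Local;   // local (RDLC) mode – no Report Server involved
ReportViewer1.LocalReport.ReportPath = Server.MapPath("OpenTickets.rdlc");
ReportViewer1.LocalReport.DataSources.Clear();
ReportViewer1.LocalReport.DataSources.Add(new ReportDataSource("TicketDataSet", ticketTable));
ReportViewer1.LocalReport.Refresh();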

SSRS security is primarily Windows based. When a user accesses the Report Manager application or the ASMX Web Services, he has to authenticate with a valid domain username / password. On successful login SSRS determines the role of the user (custom, or built-in ones like Browser, Content Manager, etc.) and displays only those reports / folders to which the user has access. There are a few variations on this security implementation I have come across that don't rely solely on Windows authentication. Some projects restrict roles like Content Manager to pushing reports (.rdl files) to production with the help of rs.exe; all users then have the implicit role of Browser, and application-layer security determines which reports the user should have access to. In case you want to go ahead and roll out custom authentication that flows security all the way down, SSRS allows you that too. If you are generating reports by connecting to remote data sources, for accessing images, etc., you might have to configure an Unattended Execution Account.

Deploying Reports to Production – This is mainly done in three ways. In restricted production environments rs.exe can be used. In others we can deploy the reports directly from Visual Studio, or use Report Manager (discussed earlier). This normally requires changing your data sources and Report Server URL in the project properties. There is an Overwrite property for data sources which is normally set to false; this property helps ensure that you don't accidentally overwrite production data sources during your deployment. We can also deploy individual reports in case we have a specific modification.

Integrating SSRS into your applications – I will focus mainly on ASP.NET here. We need the ReportViewer control found in the toolbox. Drag and drop it onto your ASPX page and in the background it will add a reference to the Microsoft.ReportViewer.WebForms DLL. You may need to bundle this DLL with your application package, as in production the web server and Report Server would mostly be on different machines. Below is the typical markup found in the .ASPX page (I have hardcoded the report server URL for simplicity):

<rsweb:ReportViewer ID="ReportViewer1" …>
    <ServerReport ReportPath="/TargetReportFolder/OpenTicketReport" ReportServerUrl="http://ReportServerURL/ReportServer" />
</rsweb:ReportViewer>

One can also pass any necessary parameters in the codebehind file, as below:

this.ReportViewer1.ServerReport.SetParameters(Param);
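The Param variable above is just an array of ReportParameter objects from the Microsoft.Reporting.WebForms namespace (shipped in the Microsoft.ReportViewer.WebForms DLL mentioned earlier). A hedged sketch follows; the parameter name and value are hypothetical and must match a parameter defined in the report itself.

// Hypothetical parameter name / value – use the parameter names defined in your .rdl
Microsoft.Reporting.WebForms.ReportParameter[] Param = new Microsoft.Reporting.WebForms.ReportParameter[]
{
    new Microsoft.Reporting.WebForms.ReportParameter("TicketStatus", "Open")
};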

In case your web server and Report Server are located on different machines, you need to ensure that the worker process running the application on the web server has access to the reports (you can configure this using Report Manager's security settings).

Finally here’s a good link to tune SSRS reports.

Inputs/Thoughts/Suggestions/Corrections?

Implementing Audit Trail for your application

I recently came across Davy's post where he talks about implementing auditing. Davy lists 3 requirements which I guess are quite common for an audit trail:

1) We should be able to easily retrieve what user X changed in the database on day Y or during a certain time frame.
2) We need to know what record X looked like at point Y in time, and be able to show a historic list of changes for a specific record.
3) We need to configure the duration for which auditing data must be kept in the database on a per-table (or per-schema) level.

In this post I would like to talk about the approach we took for one of our recent projects. We prepared a generic table to hold the audit data. This table mainly consists of AuditID, EntityID, EntityTypeID, UserID, AuditTime, AuditLevel and AuditData. AuditData is a TEXT column (we support SQL 2000) holding an XML structure for the changes made to an entity by a given user, in the form of old and new values, at the specified level. The level here is either just an entry (capturing the type of modification – I/D/U) or a detail of old / new values. This caters to the first requirement, as it's quite easy to search for the activities of a given user.
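To make the shape concrete, here is a rough sketch of the audit entry as a .NET class; the property types and the sample XML layout for AuditData are my own illustrative assumptions rather than the exact schema.

public class AuditEntry
{
    public int AuditID { get; set; }
    public int EntityID { get; set; }
    public int EntityTypeID { get; set; }
    public int UserID { get; set; }
    public DateTime AuditTime { get; set; }
    public int AuditLevel { get; set; }    // entry-only (I/D/U) or detailed old / new values
    public string AuditData { get; set; }  // XML stored in the TEXT column, e.g.
                                           // <audit action="U"><field name="Status" old="Open" new="Closed"/></audit>
}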

The second requirement is a little tougher, as with this model we don't allow direct queries against record attributes. But it can be achieved in two variants. The first, simple solution is to search by entity type and a specific entity for a date range, and then browse through the records to see the changes made to them. The second option is to take this AuditData out and restructure its layout with the help of OLAP cubes to enable a meaningful, quick search.

The third requirement is very much there for us, in order to control the size of this massive table, but for simplicity we purge the entire database at a given time interval and do not make the retention adjustable per entity. Though I feel providing that flexibility shouldn't be difficult, considering the design we have laid out.

A final thing I wanted to highlight is generating the XML data itself. It's always better to do such processing in .NET code, and if you have been using DataSets you get old-value extraction out of the box. If you are not using a DataSet, many O/R mappers also allow access to old and new values. In case you are rolling your own repositories with a domain model that doesn't capture the old values, you would need to design a custom approach, like storing the original object in session and comparing it with the postback. We are yet to see a significant performance degradation due to this design, but as a contingency plan I was thinking of making this XML conversion and insertion into the audit database an asynchronous activity with the help of a queue-like data structure.
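As an illustration of the DataSet route mentioned above, here is a minimal sketch that walks a modified DataRow and emits the changed columns as the AuditData XML. The element and attribute names are illustrative, and real code should also XML-escape the values before writing them.

using System.Data;
using System.Text;

// Builds the AuditData payload for an updated row using the DataSet's
// built-in row versions (DataRowVersion.Original vs. DataRowVersion.Current).
static string BuildAuditXml(DataRow row)
{
    StringBuilder sb = new StringBuilder("<audit action=\"U\">");
    foreach (DataColumn col in row.Table.Columns)
    {
        object oldValue = row[col, DataRowVersion.Original];
        object newValue = row[col, DataRowVersion.Current];
        if (!object.Equals(oldValue, newValue))
        {
            sb.AppendFormat("<field name=\"{0}\" old=\"{1}\" new=\"{2}\"/>",
                col.ColumnName, oldValue, newValue);
        }
    }
    sb.Append("</audit>");
    return sb.ToString();
}

The resulting string then goes into the AuditData column along with the other audit fields described above.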

I look forward to reading your thoughts on the above.

Whoever saves one life, saves the world entire

The title of this post happens to be from one of my all-time favourite movies – Schindler's List, but the post is about my newly created cloud application on the Azure platform. My cloud application – EmergencyBloodBank – is finally up. Here's the direct link to the application. This application was created for the Cloud App Contest.

There were two major driving factors for this application – one was to create an application covering all the offerings of the Azure platform, to help community members bootstrap their Azure efforts; the second was coming up with a scenario that effectively leverages cloud services without any elements of over-engineering (which can make a solution look artificial). My initial thought was to create a shared calendar service or live video streaming, but there were already a few compelling solutions out there. So I decided to focus on the healthcare segment and could immediately see the impact in the blood bank field. An initial search on the internet landed me here, and that was it – I was all set to create a communication platform, leveraging cloud infrastructure and services, for sharing blood-related requirements.

The first step was to create a site with 99.999% uptime, a necessary requirement when talking about a blood bank sort of service. The site is built on ASP.NET 3.5 with C# as the programming language, and this is where one can post blood-related requests. Next was using Live Services for authentication. How many times have you turned away from an excellent site due to its lengthy login form? I have, and I never wanted the end users of this site to do that. Using Live Services was a natural fit, with its large user base of approximately 500 million users. The next challenge was how to make these requests reach potential donors. A pull model is not effective in this case (imagine the load it would put on the site or a web service), and the .NET Service Bus provides the push model I was looking for. So I went ahead with it and created a WPF client which gets the relayed messages from an Azure worker role via the .NET Service Bus. A few things I opted out of due to time constraints were providing an SMS service, porting the WPF client to Windows Mobile, and integration with Web 2.0 applications like Facebook, Twitter, etc. The final task was to store the user details so they don't have to key them in every time they visit the site, and also to audit all blood requests they place. SQL Azure with its SaaS offering was a natural fit, because the data here is nothing more than a KB per request, so one can easily forgo buying the product and use the available SaaS model (add to that almost zero installation and maintenance effort). This helps keep the site cost within $100 per month while providing a unique, highly available solution. Below is the architecture diagram for the application.

EBBArchitecture (application architecture diagram)

You can access the actual solution document here.

Let me know your thoughts on whether I could have done things differently. I plan to cover the entire architecture, design and development process in one of my upcoming talks.

Till then, Happy Clouding 🙂 .

(P.S. My application has been selected as the winner of the Azure Cloud App Contest. Thanks to everybody who voted and provided their valuable feedback.)