Niraj Bhatt – Architect's Blog

Ruminations on .NET, Architecture & Design

Monthly Archives: December 2008

Bringing Windows Server 2008 to Windows Vista’s Look & Feel

When I first saw the VS2010 CTP VPC image, I was quite surprised to see the look & feel MS was able to give to Windows Server 2008. It looked exactly like Vista. I searched for a way to enable that magic on my copy of Windows Server 2008, and hurray, I found it here :) .

Resolving XmlDictionaryReaderQuotas Error for WCF Compression using GZipEncoder with Custom Binding

To compress WCF data transferred over the wire, the Microsoft WCF samples contain a GZipEncoder. This encoder wraps the TextMessageEncoder, applying GZip compression on top of it (N.B. you can even wrap a binary encoder to enable compression on images, for instance).

But when you try to transfer a large object using the GZipEncoder, you will run into the error below, asking you to increase the maximum string content length:

Unhandled Exception: System.ServiceModel.CommunicationException: Error in deserializing body of reply message for operation 'GetData'. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 192, position 30. ---> System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.

This can look quite confusing if you are not familiar with how the WCF channel pipeline works. Many people encounter this error even without using the GZipEncoder outlined above. So let's look at the solutions both with and without the GZipEncoder (below I set the limit to the maximum allowed, though that's not recommended):

1) Standard Binding (No GZipEncoder)
<bindings>
      <basicHttpBinding>
        <binding>
          <readerQuotas maxStringContentLength="2147483647"/>
        </binding>
      </basicHttpBinding>
</bindings>
// new BasicHttpBinding().ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
N.B. A standard binding is a preconfigured collection of channels, which is why it exposes readerQuotas as a direct property.
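
For completeness, here is a minimal client-side sketch of raising the quota programmatically on a standard binding (the contract and endpoint address below are hypothetical placeholders, not from the sample):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService // hypothetical contract for illustration
{
    [OperationContract]
    string GetData(int value);
}

class Client
{
    static void Main()
    {
        BasicHttpBinding binding = new BasicHttpBinding();
        // Raise only the quota the error message complains about.
        binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;

        var factory = new ChannelFactory<IMyService>(
            binding, new EndpointAddress("http://localhost:8080/MyService"));
        IMyService proxy = factory.CreateChannel();
        Console.WriteLine(proxy.GetData(42).Length);
    }
}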

2) Custom Binding (No GZipEncoder)
<bindings>
      <customBinding>
        <binding>
          <textMessageEncoding>
            <readerQuotas maxStringContentLength="2147483647"/>
          </textMessageEncoding>
          <httpTransport />
        </binding>
      </customBinding>
</bindings>

/*
CustomBinding binding = new CustomBinding();
TextMessageEncodingBindingElement element = new TextMessageEncodingBindingElement();
element.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
binding.Elements.Add(element);
binding.Elements.Add(new HttpTransportBindingElement()); // the transport element must come last
*/
N.B. For a CustomBinding you need to select the channels manually, and the readerQuotas are specified on the encoding channel.

3) Using GZipEncoder – in this case you need to add a couple of lines to the GZipMessageEncodingBindingElement class (GZipMessageEncodingBindingElement.cs file). The method you would change is below:

public override IChannelFactory<TChannel> BuildChannelFactory<TChannel>(BindingContext context)
{
    if (context == null)
        throw new ArgumentNullException("context");
    context.BindingParameters.Add(this);

    // Raise the reader quotas on the inner encoder's XML reader.
    var property = GetProperty<XmlDictionaryReaderQuotas>(context);
    property.MaxStringContentLength = 2147483647; // Int32.MaxValue
    property.MaxArrayLength = 2147483647;
    property.MaxBytesPerRead = 2147483647;

    return context.BuildInnerChannelFactory<TChannel>();
}

N.B. It's not possible to alter these parameters through the configuration file while using the GZipEncoder.
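
For reference, the sample's encoder is plugged into a custom binding through a binding element extension in config, roughly like this (the type and assembly names follow the Microsoft sample; verify them against your copy, and note the binding name here is just illustrative):

<extensions>
  <bindingElementExtensions>
    <add name="gzipMessageEncoding"
         type="Microsoft.ServiceModel.Samples.GZipMessageEncodingElement, GZipEncoder, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  </bindingElementExtensions>
</extensions>
<bindings>
  <customBinding>
    <binding name="compressedBinding">
      <gzipMessageEncoding />
      <httpTransport />
    </binding>
  </customBinding>
</bindings>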

Hope this helps :) .

Resolving Cyclic Dependency among VS.NET Projects

I normally prefer keeping all projects inside a single solution. This enables easy code navigation and, above all, simplifies debugging. But sometimes this strategy hits a roadblock. For instance, recently I required WCF & WF projects to reference each other. VS.NET doesn't permit this, as it would create a cyclic dependency. We can resolve this issue by moving WCF / WF into a single project (using folders for logical separation). But unfortunately WF templates are not visible inside the Web Application Template project which I was using to host the WCF services. Hence, as a way out, I had to create 2 different solutions & split the WCF / WF projects. Once you create different solutions, the projects they contain can have cyclic dependencies between them (via file references to the built assemblies). This might be the only way out in certain scenarios :( . Let me know if you have resolved this with other alternatives (one possibility is sketched below).
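
One alternative that can break the cycle altogether is extracting the shared surface into a third contracts project that both the WCF and WF projects reference. A minimal sketch with hypothetical names (not the actual projects from this post):

// Contracts project: referenced by both the WCF and WF projects.
public interface IOrderWorkflow
{
    void StartOrder(int orderId);
}

// WF project: implements the contract.
public class OrderWorkflowHost : IOrderWorkflow
{
    public void StartOrder(int orderId) { /* kick off the workflow here */ }
}

// WCF project: depends only on the contract, never on the WF project itself.
public class OrderService
{
    private readonly IOrderWorkflow workflow;
    public OrderService(IOrderWorkflow workflow) { this.workflow = workflow; }
    public void SubmitOrder(int orderId) { workflow.StartOrder(orderId); }
}

With this shape the dependencies flow one way (WCF -> Contracts <- WF), so everything can live in a single solution again.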

Capacity Planning vs. Hardware Sizing

I have recently been working on business proposals which require me to architect solutions all the way from layers to deployment. This is where I ran into these 2 terms: Capacity Planning & Hardware Sizing. I came across an accurate distinction here, which I would like to quote straight away:

"decide whether you need to do capacity planning or hardware sizing; you can't do both, at least not at the same time. In capacity planning the software and hardware are constant while the workload varies (i.e., given a particular system, how much work can it do?). In hardware sizing the software and workload are constant while the hardware varies (i.e., given a particular amount of work, what's the least-costly system that can handle the workload within the specified performance constraints?)."
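
To make the distinction concrete, here is a toy calculation (all numbers invented for illustration):

using System;

class CapacityVsSizing
{
    static void Main()
    {
        int reqPerSecPerServer = 200; // measured throughput of one fixed server

        // Capacity planning: the system is fixed, ask how much work it can do.
        int servers = 4;
        Console.WriteLine("Capacity: " + (servers * reqPerSecPerServer) + " req/s"); // 800

        // Hardware sizing: the workload is fixed, ask for the smallest system that copes.
        int peakLoad = 700;
        int needed = (int)Math.Ceiling((double)peakLoad / reqPerSecPerServer);
        Console.WriteLine("Servers needed: " + needed); // 4
    }
}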

I am going to edit this space as my thoughts mature.

Snapshot vs. LogShipping vs. Mirroring vs. Replication vs. Failover Clustering

All these SQL SERVER terms were quite confusing to me. Luckily, I got to attend Vinod's session last Saturday at BDOTNET's UG meet. So I am going to jot down my understanding of them, & will look forward to reading your comments on it.

1) Snapshot is a static, read-only picture of a database at a given point in time. A snapshot is implemented by copying a page (8KB in SQL SERVER) at a time. For example, assume you have a table in your DB & you want a snapshot of it. You specify the physical coordinates for storing the snapshot, & whenever the original table changes, the affected pages are first copied to the snapshot & only then are the changes applied to the DB (copy-on-write). (N.B. There is also something called Snapshot Isolation Level, which is different from a Database Snapshot.)

Usage Scenario: You have a separate DB for report generation, and want to ensure reasonably fresh data is available for it. You can periodically take snapshots of your transactional database.
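
As an illustration, a database snapshot can be created with plain T-SQL issued through ADO.NET; a minimal sketch, with hypothetical database and file names (note that database snapshots require the Enterprise edition):

using System.Data.SqlClient;

class CreateSnapshot
{
    static void Main()
    {
        // NAME must be the logical file name of the source database's data file.
        const string sql = @"CREATE DATABASE SalesDb_Snapshot
            ON (NAME = SalesDb_Data, FILENAME = 'C:\Snapshots\SalesDb.ss')
            AS SNAPSHOT OF SalesDb;";

        using (var conn = new SqlConnection("Data Source=.;Integrated Security=True"))
        {
            conn.Open();
            new SqlCommand(sql, conn).ExecuteNonQuery();
        }
    }
}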

2) Log Shipping is an old technique, available since SQL SERVER 2000. Here the transaction log (.ldf) is backed up and shipped periodically to the standby server. If the active server goes down, the standby server can be brought up by restoring all the shipped logs.

Usage Scenario: You can cope with a longer downtime, and you have limited investment in shared storage, switches, etc.
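
Under the hood, each log-shipping cycle boils down to a log backup on the primary and a restore on the standby that leaves it ready for the next log. A rough sketch of one cycle (server, share, and database names hypothetical; the standby must initially have been restored WITH NORECOVERY):

using System.Data.SqlClient;

class LogShippingCycle
{
    static void Main()
    {
        // On the primary: back up the transaction log to a shared location.
        Exec("Data Source=PrimaryServer;Integrated Security=True",
             @"BACKUP LOG SalesDb TO DISK = '\\share\logs\SalesDb.trn';");

        // On the standby: restore it WITH NORECOVERY so later logs can still be applied.
        Exec("Data Source=StandbyServer;Integrated Security=True",
             @"RESTORE LOG SalesDb FROM DISK = '\\share\logs\SalesDb.trn' WITH NORECOVERY;");

        // At failover time, RESTORE DATABASE SalesDb WITH RECOVERY; brings the standby online.
    }

    static void Exec(string connStr, string sql)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            new SqlCommand(sql, conn).ExecuteNonQuery();
        }
    }
}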

3) Mirroring, introduced with the 2005 edition, conceptually builds on log shipping. The main difference is that the downtime for the standby server is much lower with mirroring. The standby server automatically becomes active in this case (with the help of a broker server, called a Witness in SQL SERVER parlance), without having to restore logs (the log records are continuously applied in this scenario – no wonder it's called a Mirror :) ). Additional advantages of Mirroring include support at the .NET Framework level (read: no switching/routing code – requires ADO.NET 2.0 & higher) plus some new features like automatic page repair introduced with SQL SERVER 2008.

Usage Scenario: You want very little downtime and also a cost-effective solution in terms of shared storage, switches, etc. Also, you are targeting a single database which easily fits on your disks.
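
The ADO.NET support mentioned above is just a connection-string keyword; a minimal sketch (server and database names hypothetical):

using System;
using System.Data.SqlClient;

class MirrorAwareClient
{
    static void Main()
    {
        // "Failover Partner" (ADO.NET 2.0+): the client transparently retries
        // against the mirror when the principal server is unavailable.
        string connStr = "Data Source=PrincipalServer;Failover Partner=MirrorServer;"
                       + "Initial Catalog=SalesDb;Integrated Security=True;";

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open(); // lands on whichever partner currently holds the principal role
            Console.WriteLine(conn.DataSource);
        }
    }
}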

4) Replication is used mainly when data centers are distributed geographically. It is used to replicate data from local servers to the main server in the central data center. The important thing to note here is that there are no standby servers; the publisher & subscriber are both active.

Usage Scenario: A typical scenario involves periodically syncing local / regional lookup servers with the main server in the data center for better performance, or syncing with a remote site for disaster recovery.

5) Failover Clustering is a high-availability option only (unlike the others above, which can be used for disaster recovery as well), built on the clustering technology provided by the hardware + OS. Here the data / databases don't belong to either server; in fact they reside on shared external storage like a SAN. The advantages of SAN storage are large, efficient, hot-pluggable disks. You will quite frequently see DR options like Mirroring used alongside failover clustering. Here's a good article on adding geo-redundancy to a failover cluster setup.

You might want to look at the licensing options for SQL Server, the various editions available, and how they map to the above features. You can find this information in detail here.

Hope that helps to some extent :) .
