WCF REST over HTTPS

Accessing REST over HTTPS is quite a common production scenario. It needs a small binding tweak to enable transport security (the security element in the configuration below). While that may annoy a REST purist, that's how it goes with WCF. If you are wary of modifying your configuration files between development and production, check out the new feature in VS 2010 that can transform web.config files during deployment.

<system.serviceModel>
  <bindings>
    <webHttpBinding>
      <binding name="WebHttpBindingConfig">
        <security mode="Transport"/>
      </binding>
    </webHttpBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior name="httpEnabled">
        <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
    <endpointBehaviors>
      <behavior name="EndpBehavior">
        <webHttp helpEnabled="true"/>
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="Namespace.ContractImpl" behaviorConfiguration="httpEnabled">
      <endpoint address="" binding="webHttpBinding" contract="Namespace.IContract" behaviorConfiguration="EndpBehavior" bindingConfiguration="WebHttpBindingConfig" />
    </service>
  </services>
</system.serviceModel>
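
Coming back to the web.config transform feature mentioned above, here is a minimal sketch of a Web.Release.config that sets the security mode only for the deployed build, so your development web.config can keep a different mode (the xdt namespace and locator syntax belong to the transform feature; the binding name matches the configuration above):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.serviceModel>
    <bindings>
      <webHttpBinding>
        <!-- Locator finds the binding by name; Transform overwrites just the mode attribute -->
        <binding name="WebHttpBindingConfig" xdt:Locator="Match(name)">
          <security mode="Transport" xdt:Transform="SetAttributes(mode)" />
        </binding>
      </webHttpBinding>
    </bindings>
  </system.serviceModel>
</configuration>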

What are Encodings?

A few days back I got an email query on one of my blog posts related to WCF encodings. The query asked for help to understand things better, especially around the encoding part. While I am not an SME on encodings, I thought of sharing a few thoughts.

Encoding is a word I am sure most of us have come across. If you are a developer this sounds even more true, with terms like Base64 encoding / HTML encoding popping up more often than not. In the simplest terms, encoding converts / maps your input data (character set) to another character set. The output character set varies depending on the encoding you choose. While there could be many motivations for using encodings, the most frequent is to prevent unintentional loss of data. Let's see a couple of examples.

a) I had given an example in one of my earlier blog posts on ASP.NET security which used HTML encoding to prevent string data from being interpreted by the browser as script. The second line of code below keeps the string data intact, while the first is interpreted as script:
Response.Write("<script>alert('Niraj')</script>");
Response.Write(Server.HtmlEncode("<script>alert('Niraj')</script>")); // Or HttpUtility.HtmlEncode

b) Consider another scenario where we want to insert the below string into SQL Server:
string Key = "S" + Convert.ToChar(0x0) + "!@#$";
Now the problem with the above is the NULL character (Convert.ToChar(0x0)), which databases generally treat as the end of the string. So in order to insert the string into the database you may as well encode it. The lines below show how to encode / decode strings using the Base64 .NET APIs:
// To encode; UTF8 is backward compatible with ASCII
string base64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(Key));
// To decode
string original = Encoding.UTF8.GetString(Convert.FromBase64String(base64));
As most of you are aware, ASP.NET ViewState too is encoded using Base64; to experiment, just copy the view state of your page and try to decode it using the above APIs.
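
For instance, here is a quick console sketch of that experiment (note it only reverses the Base64 layer; the decoded bytes are a serialized object graph, so expect readable fragments mixed with binary rather than clean text):

Console.Write("Paste your page's __VIEWSTATE value: ");
string viewState = Console.ReadLine();
byte[] raw = Convert.FromBase64String(viewState);
// The readable portions (control IDs, literal strings) will show up in the output
Console.WriteLine(Encoding.UTF8.GetString(raw));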

Coming to my blog post on WCF encodings, I had mentioned that Base64 encoding (used by text encoding) inflates the size of string data. The Wikipedia article explains this quite clearly (see the Examples section); generally Base64 inflates the size of data by about 33%. That's where you might prefer binary encoding in non-interoperable scenarios, and MTOM while transferring large chunks of binary data.

Hope this helps 🙂 .

Explicit Interface Implementation and WCF

You are inheriting from 2 contracts to create a contract. Both contracts you are inheriting from have a method with the same name and signature. You don't have much control over these interfaces as they are tightly coupled with your subsystem. You create an implementation class with the help of explicit interface implementations and dispatch calls to the subsystem. Now you have the challenge of invoking the right implementation from the client side. Will WCF work even when you can't have any config files? Fortunately YES, WCF supports such scenarios. Below is a code sample; hope it helps you when you have a sharp knife against your throat 🙂 :

//Contracts
[ServiceContract]
public interface IIPV4Location
{
    [OperationContract]
    string GetLocationId();
}

[ServiceContract]
public interface IIPV6Location
{
    [OperationContract(Name = "IIPV6Location")] //A must for WCF to distinguish
    string GetLocationId();
}

[ServiceContract]
public interface ILocation : IIPV4Location, IIPV6Location
{ }

//Server Side
class LocationImpl : ILocation
{
    string IIPV4Location.GetLocationId()
    {
        return "Ipv4";
    }

    string IIPV6Location.GetLocationId()
    {
        return "Ipv6";
    }
}

class Program
{
    static void Main(string[] args)
    {
        ServiceHost host = new ServiceHost(typeof(LocationImpl), new Uri("http://localhost:9999/"));
        var behavior = new ServiceMetadataBehavior();
        behavior.HttpGetEnabled = true;
        host.Description.Behaviors.Add(behavior);

        host.AddServiceEndpoint(typeof(ILocation), new BasicHttpBinding(), "Location");

        host.Open();
        Console.WriteLine("Running...");
        Console.ReadLine();
        host.Close();
    }
}

//Client Side
class Program
{
    static void Main(string[] args)
    {
        ILocation location = new ChannelFactory<ILocation>(new BasicHttpBinding(),
            "http://localhost:9999/Location")
            .CreateChannel();

        var ipv4Location = ((IIPV4Location)location).GetLocationId();
        Console.WriteLine(ipv4Location);

        var ipv6Location = ((IIPV6Location)location).GetLocationId();
        Console.WriteLine(ipv6Location);
    }
}

WCF Serializers – XmlSerializer vs. DataContractSerializer vs. NetDataContractSerializer

I talked about encodings in a previous blog post; in this one I will talk about serializers. Have a look at the below diagram, which depicts the WCF architecture in its simplest form.

[Diagram: WCF architecture in its simplest form - a serializer converts the .NET object to a WCF Message, an encoder converts the Message to a byte stream]

As you can see, serializers convert a .NET object to a WCF Message (XML Infoset) whereas encoders convert that WCF Message into a byte stream. Serializers are governed by service contracts whereas encoders are specified through the endpoint's binding. There are 3 serializers supported by WCF – XmlSerializer, DataContractSerializer & NetDataContractSerializer. DataContractSerializer is the default and should be used unless backward compatibility with ASMX / Remoting is required. Let's explore each of the serializers in turn.

XmlSerializer, which I guess most of us coming from the ASMX world are familiar with, is an opt-out serializer. By default this serializer takes only the public fields and properties of a given type & sends them over the wire. Any sensitive data must be explicitly opted out using the XmlIgnore attribute. The advantage of XmlSerializer is the amount of flexibility it gives in controlling the layout of the XML Infoset (schema driven), which is sometimes required for compatibility with existing clients. Choosing XmlSerializer over the default DataContractSerializer is quite easy: just apply the XmlSerializerFormat attribute to your ServiceContract or OperationContract.

[ServiceContract]
//[XmlSerializerFormat] - could be applied here for all contracts
interface IBank
{
    [XmlSerializerFormat]
    [OperationContract]
    Customer GetCustomerById(int id);
}
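
To illustrate the opt-out behaviour, here is a minimal sketch (the Customer type and its members are illustrative):

public class Customer
{
    public int Id { get; set; }          // public - serialized by XmlSerializer
    public string Name { get; set; }     // public - serialized by XmlSerializer

    [XmlIgnore]
    public string CreditCardNumber { get; set; } // sensitive - explicitly opted out

    private string notes;                // non-public - never picked up by XmlSerializer
}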

DataContractSerializer – when moving to the WCF world, Microsoft seems to have decided to focus more on versioning contracts than on creating them. DataContractSerializer, the default serializer, doesn't give us XmlSerializer's flexibility over the layout of the XML Infoset (though you can still serialize a type implementing IXmlSerializable), but it provides good versioning support with the help of the Order and IsRequired settings & the IExtensibleDataObject interface. Also, there is a myth that DataContractSerializer only supports types decorated with the DataContract/DataMember & Serializable attributes; in reality there is a programming model supporting a range of types outlined here, including Hashtables, Dictionaries, IXmlSerializable, ISerializable and POCOs. There is also an attribute, DataContractFormat, which allows a mix with XmlSerializer, as shown below. Note that if you don't apply any attributes at all, DataContractFormat is applied by default.

[ServiceContract]
[XmlSerializerFormat] // Serialize everything using XmlSerializer
interface IBank
{
    [DataContractFormat] // Override with DataContractSerializer
    [OperationContract]
    Customer GetCustomerById(int id);
}
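
For completeness, here is the same illustrative Customer written for DataContractSerializer's versioning support, using Order, IsRequired and IExtensibleDataObject:

[DataContract]
public class Customer : IExtensibleDataObject
{
    [DataMember(Order = 1, IsRequired = true)]
    public int Id { get; set; }

    [DataMember(Order = 2)] // added in a later version; old clients can omit it
    public string Name { get; set; }

    // Round-trips elements this version doesn't know about, instead of dropping them
    public ExtensionDataObject ExtensionData { get; set; }
}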

Finally, NetDataContractSerializer. To many this is a distant concept, so let me try explaining it with an example:

[ServiceContract]
public interface IAdd /*AddService is the implementation class*/
{
    [OperationContract]
    int Add(int i, int j);
    ISub Sub { [OperationContract] get; } /* return new SubService() */
}

[ServiceContract]
public interface ISub /*SubService is the implementation class*/
{
    [OperationContract]
    int Sub(int i, int j);
}

The problem with the above is that it won't work with either of those serializers. Why? WCF only shares contracts, not types, and here you are trying to send the type 'SubService' back. To make this work with DataContractSerializer we need to use the KnownType or ServiceKnownType attribute:

[ServiceContract]
[ServiceKnownType(typeof(SubService))]
public interface IAdd { …

Or you can use NetDataContractSerializer. The WCF team discourages its use and hence there are no built-in attributes to apply it, but luckily it's quite easy to create one as shown here. With NetDataContractSerializer there is no need to declare the sub types; the actual type information travels over the wire & the same type is loaded automatically.

[ServiceContract]
public interface IAdd /*AddService is the implementation class*/
{
    [OperationContract]
    int Add(int i, int j);
    ISub Sub { [NetDataContractFormat][OperationContract] get; }
}

Note that both of the above approaches require your implementation assembly to be present on the client side (there is no mapping for .NET Remoting's MarshalByRefObject in WCF).

A few more items you should care about in WCF serialization, DataContractSerializer in particular, all of which you can control through DataContractSerializer's constructor:

1) maxItemsInObjectGraph, which controls the maximum number of items your object graph can have. You can also control it through behaviors in the configuration file or through the ServiceBehavior attribute (there are no attributes for endpoint behaviors, as they are not mapped to static programming constructs, so on the client side you will have to wire maxItems explicitly in code or through configuration, as shown after the configuration below) – the default is 65536.

<behaviors>
  <serviceBehaviors>
    <behavior name="largeObjectGraph">
      <dataContractSerializer maxItemsInObjectGraph="100000"/>
    </behavior>
  </serviceBehaviors>
  <endpointBehaviors>
    <behavior name="largeObjectGraph">
      <dataContractSerializer maxItemsInObjectGraph="100000"/>
    </behavior>
  </endpointBehaviors>
</behaviors>
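
On the client side, a minimal sketch of wiring maxItemsInObjectGraph in code (the contract and address are the illustrative ones used earlier):

var factory = new ChannelFactory<IBank>(new BasicHttpBinding(), "http://localhost/Bank");
foreach (OperationDescription operation in factory.Endpoint.Contract.Operations)
{
    var behavior = operation.Behaviors.Find<DataContractSerializerOperationBehavior>();
    if (behavior == null)
    {
        behavior = new DataContractSerializerOperationBehavior(operation);
        operation.Behaviors.Add(behavior);
    }
    behavior.MaxItemsInObjectGraph = 100000; // match the service-side quota
}
IBank proxy = factory.CreateChannel();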

2) preserveObjectReferences, which helps you deal with circular references, like a Person having Children or bidirectional references between an Order and its line items.
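
Here is a minimal sketch of turning it on when constructing the serializer directly (the Person type is illustrative):

[DataContract]
class Person
{
    [DataMember] public Person Parent;
    [DataMember] public List<Person> Children = new List<Person>();
}

// Build a small cycle: parent <-> child
var parent = new Person();
var child = new Person { Parent = parent };
parent.Children.Add(child);

// Overload: (type, knownTypes, maxItemsInObjectGraph,
//            ignoreExtensionDataObject, preserveObjectReferences, dataContractSurrogate)
var serializer = new DataContractSerializer(typeof(Person), null, 100000, false,
    true,  // preserveObjectReferences - emits id/ref pairs instead of recursing into the cycle
    null);

using (var stream = new MemoryStream())
{
    serializer.WriteObject(stream, parent);
}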

3) IDataContractSurrogate, which I will let you figure out yourself (this post has already crossed my normal word limit 🙂 – MSDN link here).

I look forward to reading about your experiences with WCF serializers.

MTOM vs. Streaming vs. Compression – Large attachments over WCF

The above question pops up when one is about to transfer a large amount of data (images, for instance) using WCF. Let me try to answer it starting with the basics.

Bandwidth & Buffering – there are 2 considerations in large transfers. First, you want to transfer as little as possible in terms of size (bytes) to avoid bandwidth cost, which normally matters a lot when you are paying for it over a WAN. Second, whether you want to transfer the entire message at once (read the entire image into memory on the client & send it to the server) or stream it byte by byte. Streaming is sometimes necessary, as buffering can adversely affect the performance of your server with multiple clients (e.g. 20 clients concurrently transferring a 100 MB image would take up to 2 GB of your server's RAM). So, coming to the title of this post – MTOM is related to bandwidth while streaming is related to buffering. Let's dig in a bit more.

MTOM (Message Transmission Optimization Mechanism) – WCF supports 3 encodings (in the context of WCF, encoding means converting a WCF message (serialized XML Infoset) into bytes) – Text, MTOM & Binary (JSON & POX are also possible with webHttpBinding). All HTTP bindings (Basic, WS, Dual, etc.) support Text / MTOM encoding, Text being the default. Text / MTOM are preferred in WS-* interoperability scenarios. To switch to MTOM encoding all you need to do is select it as shown below:

<wsHttpBinding>
  <binding messageEncoding="Mtom" />
</wsHttpBinding>

Why MTOM? The problem with text encoding is that it uses the Base64 format for binary data, which can inflate the message size by roughly 33%. This can be a heavy penalty while carrying large binary attachments. Enter MTOM!!! MTOM avoids Base64 encoding for binary attachments, keeping the overall size of the message in control. Moreover, MTOM is based on open specifications & hence is largely interoperable. Coming to WCF's binary encoding (TCP/Pipe/MSMQ), though it's the best in terms of performance it's not interoperable. Some people are also averse to TCP etc. because of firewall constraints & the need for sticky sessions (load balancing with transport sessions). I would strongly recommend doing a performance test on all of them in your environment and then taking a decision.

Streaming – streaming (BasicHttp, Tcp, Pipe) can be a good solution when you don't want to increase the load on your servers, though unlike buffering it doesn't allow you to leverage WCF's message-based security & reliability (how do you ensure that the entire stream is transferred and not broken in between?). In case the latter two are your requirements and you still want to limit memory usage on the server, there is a chunking channel sample on MSDN. When you want to use streaming, your OperationContract must use a single instance of the Stream class (details here) in its parameter list or as the return type.
E.g. Stream PlaySong();
Unfortunately, the above still uses buffered mode; the PlaySong API is as good as returning a byte array in buffered mode. To enable streamed mode, you need to select it at the binding level, as shown below:

<basicHttpBinding>
  <binding name="streamedHttp" transferMode="Streamed" />
</basicHttpBinding>
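
To make that concrete, here is a minimal sketch of a streamed operation (the service and file path are illustrative):

[ServiceContract]
public interface ISongService
{
    // With transferMode="Streamed" the Stream must be the only item
    // in the message body (sole parameter or the return value)
    [OperationContract]
    Stream PlaySong();
}

public class SongService : ISongService
{
    public Stream PlaySong()
    {
        // WCF pulls from this stream and writes to the wire in chunks,
        // so the whole file is never buffered in server memory
        return File.OpenRead(@"C:\Songs\sample.wav");
    }
}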

Compression – WCF's extensible channel architecture allows us to easily plug in a compression channel. So, how about not using MTOM or binary, and just applying compression on what we are about to transfer? First, compression doesn't come for free; it costs a lot in terms of CPU. You need to weigh the CPU cost of compression / decompression against the latency cost (i.e. is bandwidth a bottleneck?). For binary encoding I think it doesn't make sense (I would encourage you to do your own test, but it didn't show me much difference); for MTOM encoding I would prefer sending an already compressed attachment (e.g. a compressed image instead of a raw .bmp); and for text encoding, yes, it may make sense. Say you want to send 10000 customers over a WAN (though you shouldn't be doing that) and you need to use text encoding for interoperability reasons – I recommend using compression by all means in such scenarios.

Below are the important knobs one might have to configure depending on their message transfer requirements.

<customBinding>
  <binding name="LargeMessageOverHttp">
    <!--Encoders-->
    <textMessageEncoding>
      <readerQuotas maxStringContentLength="" maxArrayLength=""
        maxBytesPerRead="" maxDepth="" maxNameTableCharCount="" />
    </textMessageEncoding>
    <!--Transport-->
    <httpTransport maxBufferPoolSize="" maxBufferSize="" maxReceivedMessageSize="" />
  </binding>
</customBinding>

maxArrayLength – The maximum allowed array length. The default is 16384.

maxBytesPerRead – The maximum allowed bytes returned per read. The default is 4096.

maxDepth – The maximum nested node depth. The default is 32.

maxNameTableCharCount – The maximum characters allowed in the reader's name table. The default is 16384.

maxStringContentLength – The maximum string length returned by the reader. The default is 8192.

maxBufferPoolSize – The maximum size of the buffer pool. The default is 524,288 bytes.

maxBufferSize – The maximum size, in bytes, of the buffer. The default is 65,536 bytes.

maxReceivedMessageSize – The maximum message size that can be received. The default is 65,536 bytes.

Hope the above brings some clarity 🙂 .

Creating HttpContent from a .NET Object – WCF REST Starter Kit

While preparing my demo for Tech Ed, I found that it isn't quite easy to convert .NET objects to HttpContent (provided with the WCF REST Starter Kit) & vice versa. Additionally, it requires some boilerplate code to be written every time. I thought of abstracting it out in an HttpContentHelper class with 2 straightforward methods – CreateContentFromObject & CreateObjectFromContent. The code is given below:

public static class HttpContentHelper
{
    public static HttpContent CreateContentFromObject<T>(T obj)
    {
        // Update (see comments below): the Starter Kit already provides this
        return HttpContentExtensions.CreateXmlSerializable(obj);

        // Manual alternative using XmlSerializer:
        // XmlSerializer serializer = new XmlSerializer(obj.GetType());
        // using (MemoryStream ms = new MemoryStream())
        // {
        //     serializer.Serialize(ms, obj);
        //     return HttpContent.Create(ms.ToArray(), "application/xml");
        // }
    }

    public static T CreateObjectFromContent<T>(HttpContent content)
    {
        return content.ReadAsXmlSerializable<T>();
    }
}

Hope you find it helpful 🙂 .

Speaking at MCT Summit 2009

I will be taking a session on "Building Secure Web Services using WCF" at this MCT Summit. You can find the details about the summit here. A brief introduction to my session is also there in the speakers list – to get there just search for my name 🙂 . In a couple of days I will post the link to download the demos I am going to show. Hope to see you there.

Resolving XmlDictionaryReaderQuotas Error for WCF Compression using GZipEncoder with Custom Binding

To compress WCF data transferred over the wire, the Microsoft samples contain a GZipEncoder. This encoder wraps the text message encoder, applying GZip compression on top of it (N.B. you can even wrap a binary encoder to enable compression on images, for instance).

But as you try to transfer a large object using the GZipEncoder, you will run into the below error asking you to increase the maximum string content length:

Unhandled Exception: System.ServiceModel.CommunicationException: Error in deserializing body of reply message for operation 'GetData'. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 192, position 30. ---> System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.

This can look quite confusing if you are not familiar with how the WCF channel pipeline works, and it is encountered by many even when not using the GZipEncoder outlined above. So let's look at the solutions both with & without the GZipEncoder (below I set the limit to the maximum allowed, though that's not recommended):

1) Standard Binding (No GZipEncoder)
<bindings>
  <basicHttpBinding>
    <binding>
      <readerQuotas maxStringContentLength="2147483647"/>
    </binding>
  </basicHttpBinding>
</bindings>
//new BasicHttpBinding().ReaderQuotas.MaxStringContentLength = Int32.MaxValue
N.B. A standard binding is a pre-configured collection of binding elements (the channel stack), which is why it can expose readerQuotas directly as an abstraction.

2) Custom Binding (No GZipEncoder)
<bindings>
  <customBinding>
    <binding>
      <textMessageEncoding>
        <readerQuotas maxStringContentLength="2147483647"/>
      </textMessageEncoding>
      <httpTransport />
    </binding>
  </customBinding>
</bindings>

/*
CustomBinding binding = new CustomBinding();
TextMessageEncodingBindingElement element = new TextMessageEncodingBindingElement();
element.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
binding.Elements.Add(element);
binding.Elements.Add(new HttpTransportBindingElement()); // transport element goes last
*/
N.B. For a CustomBinding you need to stack the binding elements manually, and the readerQuotas are specified on the encoding element.

3) Using the GZipEncoder – in this case you need to add a couple of lines to the GZipMessageEncodingBindingElement class (GZipMessageEncodingBindingElement.cs file). The method you would change is shown below:

public override IChannelFactory<TChannel> BuildChannelFactory<TChannel>(BindingContext context)
{
    if (context == null)
        throw new ArgumentNullException("context");
    context.BindingParameters.Add(this);

    // Added lines: bump the reader quotas used by the wrapped encoder
    var property = GetProperty<XmlDictionaryReaderQuotas>(context);
    property.MaxStringContentLength = 2147483647; // Int32.MaxValue
    property.MaxArrayLength = 2147483647;
    property.MaxBytesPerRead = 2147483647;

    return context.BuildInnerChannelFactory<TChannel>();
}

N.B. It's not possible to alter these parameters through the configuration file while using the GZipEncoder.

Hope this helps 🙂 .

WCF Certificate Security with XBAP / IIS Issues

wsHttpBinding uses message security by default, but the default clientCredentialType is Windows. Considering that your clients are going to access your application over the internet, it makes sense to use Certificate / Username security. In my case I was using an XBAP running in full trust, and it was more of a fixed-clients business scenario, so I thought of making use of the same certificates to provide secure transfer of data. The steps for doing this are provided below:

1) Change clientCredentialType to Certificate (this requires you to customize wsHttpBinding) & specify the serviceCertificate in the serviceCredentials section of the web.config file. (N.B. The service can pick the certificate only from the Local Machine store, and this can be the same certificate you are using to provide full trust to the XBAP.)
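
A minimal sketch of that service-side configuration (the binding, behavior and certificate names are illustrative):

<wsHttpBinding>
  <binding name="certificateSecured">
    <security mode="Message">
      <message clientCredentialType="Certificate" />
    </security>
  </binding>
</wsHttpBinding>
...
<serviceBehaviors>
  <behavior name="certBehavior">
    <serviceCredentials>
      <serviceCertificate storeLocation="LocalMachine" storeName="My"
                          x509FindType="FindBySubjectName" findValue="localhost" />
    </serviceCredentials>
  </behavior>
</serviceBehaviors>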

2) Next, using Add Service Reference, generate the proxy for the client. After generation you need to specify the location of the client certificate (this certificate would be in the Current User certificate store on the client's machine & is a different one from what we selected in step 1 – it is used for authenticating the client to the service). This is done by specifying a new endpoint behavior on the client side.
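
A sketch of that client-side endpoint behavior (the store and certificate names are illustrative):

<endpointBehaviors>
  <behavior name="clientCertBehavior">
    <clientCredentials>
      <clientCertificate storeLocation="CurrentUser" storeName="My"
                         x509FindType="FindBySubjectName" findValue="MyClientCert" />
    </clientCredentials>
  </behavior>
</endpointBehaviors>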

3) As a final step, in the client's app.config file you need to change the identity value:

<endpoint ... >
  <identity>
    <dns value="YourCertNameHere" />
  </identity>
</endpoint>

Plus, if you are using self-issued certificates (managed through certmgr.exe), you will need to enable peer trust in the service's web.config and the client's app.config (search for the authentication element's certificateValidationMode and set it to PeerTrust or PeerOrChainTrust):

<authentication certificateValidationMode="PeerOrChainTrust" />
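
For reference, a sketch of where this element sits on each side (trimmed to the relevant part):

<!-- Service side (web.config), under serviceCredentials -->
<clientCertificate>
  <authentication certificateValidationMode="PeerOrChainTrust" />
</clientCertificate>

<!-- Client side (app.config), under clientCredentials -->
<serviceCertificate>
  <authentication certificateValidationMode="PeerOrChainTrust" />
</serviceCertificate>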

(N.B. If you are hosting your service on IIS & running under the ASPNET / NETWORK SERVICE account, you will have to grant that account rights on the certificate's private key so that IIS can access it when required. This requires you to download FindPrivateKey (I found it here) and execute the below commands:
1) findprivatekey My LocalMachine -n CN=localhost -a
2) Output – C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys\7b90a71bfc56f2582e916a51aed
3) cacls "C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys\7b90a71bfc56f2582e916a51aed" /E /G ASPNET:R
(Change ASPNET in step 3 to NETWORKSERVICE for Windows Vista / IIS 7.))