
In a lecture this afternoon presented by Microsoft Technical Fellow John Shewchuk on .NET Services, he said that the eventing protocol supported by the forthcoming, cloud-based Internet Service Bus (ISB) would conform to a recognized standard.  When I inquired as to which specification was being implemented, he said that it was WS-Eventing.  I was surprised, thinking that it would be the converged protocol, WS-EventNotification (PDF), which was supposedly underway.  He explained to me that that effort had been abandoned.  This was confirmed in a report I found from the EPTS Event Processing Symposium held in September, where Chris Ferris of IBM said that the joint effort to merge the two competing standards had ended.

My questions after hearing this are these:

  1. Does this mean that the two standards will continue to compete?
  2. Will IBM support WS-Eventing in its products instead of or in addition to WS-Notification?
  3. Why was the convergence effort abandoned?

Eventing in a service bus, especially one of the Internet persuasion, is a major component.  Integrating on-premises services with those running in the cloud via the ISB will require that they understand the eventing protocol of the bus.  This isn't a problem for WCF services: just switch to the appropriate *Relay*Binding, and you're good to go (see the sketch below).  But what about existing non-WCF clients and services that want to get on the bus?  They will have to understand WS-Eventing to send and receive notifications.  This means that all applications using WS-Notification will have to be updated, or else be excluded from sending and receiving notifications via this new ISB.  This is a big deal, since many Java-based and IBM-based services are using WS-Notification, not WS-Eventing.
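
For the WCF case, the change really is just configuration.  Below is a minimal sketch of swapping a standard binding for a relay binding; the binding names (netTcpRelayBinding, netEventRelayBinding), the sb: address scheme, and the host name are assumptions based on the ISB SDK's conventions, not verified against it:

    <!-- Before: a plain WCF endpoint on the local network. -->
    <endpoint address="net.tcp://localhost/calculator"
        binding="netTcpBinding"
        contract="ICalculator" />

    <!-- After: the same contract exposed through the ISB relay; a
         netEventRelayBinding would be the analogous choice for
         one-way, multicast-style eventing contracts. -->
    <endpoint address="sb://servicebus.example.net/solution/calculator"
        binding="netTcpRelayBinding"
        contract="ICalculator" />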

Windows Communication Foundation (WCF) services can be secured using the ASP.NET providers, an identity management system included with the second version of the .NET framework.  It supplies a way of storing user- and group-related information in SQL Server or Active Directory (AD).  As its name implies, its original use was with ASP.NET Web applications; however, the third version of the .NET platform has made its security capabilities available to WCF services as well.

Just as with Web applications, the ASP.NET providers allow WCF services to be locked down to specific users and/or groups of users (i.e., roles).  The toolkit allows operations (methods) of a service to be secured by applying attributes to them that indicate which users and/or groups are allowed to invoke them.

The use of the ASP.NET providers furnishes WCF services with a number of attractive alternatives to the built-in security system of the OS.  For instance, it allows any user to log in and use the service, not just those with a local account or one in the AD domain in which the service is hosted.  It also provides many other attractive benefits, which will be pointed out in the following discussion.

The ASP.NET providers abstract the backend storage in which users, groups, passwords, and other such information is kept.  A provider, in the context of this toolkit, is a component with specialized capabilities for storing user data in one particular database.  The toolkit includes pre-built providers for SQL Server and AD, and the one an application uses can be changed via configuration.  The use of the former will be detailed here.

To begin using the ASP.NET SQL providers, the database in which they store their information must be created.  The initialization is handled by the tool aspnet_regsql.exe, which is installed with version 2 of the .NET framework.  This wizard can be run by invoking the following from the Visual Studio Command Prompt:

    C:\ >"%FrameworkDir%\%FrameworkVersion%\aspnet_regsql.exe" 

See http://msdn2.microsoft.com/en-us/library/x28wfk74.aspx for more information about the switches and options that this tool supports.  After completing the wizard, a new database called aspnetdb will be created on the specified SQL Server machine (which will be localhost unless aspnet_regsql is given another host name on the command line when started).  It will contain all the tables, stored procedures, and other infrastructure that the SQL providers need.  A nice schema diagram of this database can be found at http://msdn2.microsoft.com/en-us/library/aa478948.asp2prvdr0102l(en-us,msdn.10).gif.

After the underlying system is initialized, how does one populate it and begin managing its entities?  The methods to do so include the following: 

  1. The stored procedures created by aspnet_regsql.exe can be called directly;
  2. Juval Löwy has provided a full-featured utility he calls the Credentials Manager that is available with the source code of his book Programming WCF Services;
  3. a Web-based interface can be opened from any Web application project in Visual Studio that allows for the management of users and groups; and
  4. the SqlMembershipProvider and SqlRoleProvider classes expose APIs for creating, updating, viewing, and deleting entities within the aspnetdb database.

In my experience, options one, two, and three were not straightforward (option four wasn’t explored).  Specifically, after an hour of failed attempts, option one was abandoned without any success.  While Löwy’s explanation of the Credentials Manager in his book was impressive, configuring it was not possible in short order, so its use was also abandoned.  In the end, a new Web project was created in Visual Studio, and the Web Site Administration Tool was opened by selecting ASP.NET Configuration from the Website menu.  Once the Web-based configuration UI was open, new roles and users were created, and users were assigned to different roles.  Afterward, the Web application was discarded.  See http://msdn2.microsoft.com/en-us/library/yy40ytx0.aspx for more information about the Web Site Administration Tool.
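
Though it wasn't tried, here is a sketch of what option four might look like, using the static Membership and Roles facades from System.Web.Security; the e-mail address is made up, and the providers described below must already be enabled in the calling application's config file:

    using System;
    using System.Web.Security;

    public static class UserSetup
    {
        public static void Main()
        {
            // Create a user through the configured membership provider.
            MembershipCreateStatus status;
            Membership.CreateUser("user4", "user4_user4",
                "user4@example.com", null, null, true, out status);
            if (status != MembershipCreateStatus.Success)
                throw new InvalidOperationException(status.ToString());

            // Create the role and put the new user in it.
            if (!Roles.RoleExists("Managers"))
                Roles.CreateRole("Managers");
            Roles.AddUserToRole("user4", "Managers");
        }
    }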

Using the ASP.NET providers requires special modifications to be made to the code and configuration of both the client and the service.  Specifically, using this toolkit requires the following alterations:

  1. Operations must be secured by applying the PrincipalPermission attribute.  This attribute is also used when securing a method using integrated security provided by the OS; however, when using the ASP.NET providers, the domain name to which users/groups belong should not be provided.
  2. The messages passed between the client and service must be encrypted using a public/private key pair that must be configured at both ends.  When using native Windows security, the messages between the client and service are secured as part of the communication protocol (NTLM and/or Kerberos).  This is not the case when using the ASP.NET providers, because messages may originate from a non-Windows client, making that approach infeasible.

The first modification is applied to the class that implements the service contract.  For example, say that a service contract called ICalculator exposes an Add operation.  To prevent anyone but managers from calling this method, the service class should be coded thus:

    public class CalculatorService : ICalculator
    {
        [PrincipalPermission(SecurityAction.Demand, Role = "Managers")]
        public double Add(double x, double y)
        {
            return x + y;
        }
    }

The addition of the PrincipalPermission attribute to the Add operation is the only coding change necessary in the service to restrict all calls of it to managers.  Two important things to note about this addition:

  1. If NT groups are being used, the domain name or local machine name should be passed to the Role value of the PrincipalPermission attribute:

    [PrincipalPermission(..., Role = @"ServerMachine1\Managers")]

  2. The group names are hard-coded.  This is a real problem and must be avoided somehow in production environments.  I haven’t found a way to factor this information out into a configuration file, but one must be found, or else this system is unusable for many applications.  (One partial workaround is sketched below.)
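
One partial workaround, assuming an imperative demand behaves the same as the declarative attribute, is to construct the PrincipalPermission in code so that the role name can be read from configuration; the appSettings key name, addRole, is made up:

    using System.Configuration;
    using System.Security.Permissions;

    public double Add(double x, double y)
    {
        // Pull the role name from config instead of hard-coding it,
        // then demand the permission imperatively.
        string role = ConfigurationManager.AppSettings["addRole"];
        new PrincipalPermission(null, role).Demand();
        return x + y;
    }

This trades the declarative model for configurability; whether that trade is acceptable depends on the application.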

Once the code of the service has been updated, it must also be configured. 

The configurations that are needed are the enabling of the role manager and membership provider, the definition of the aspnetdb connection string (unless SQL Express is being used), the configuration of the binding, and the service behavior.  The role manager and membership provider are enabled by adding the following to the system.web section of the service’s config file (web.config if it is being hosted in IIS; app.config if it is being self-hosted):

    <roleManager enabled="true" defaultProvider="Foo">
        <providers>
            <add name="Foo"
                 type="System.Web.Security.SqlRoleProvider"
                 connectionStringName="qqq"
                 applicationName="/" />
        </providers>
    </roleManager>
    <membership defaultProvider="Bar" userIsOnlineTimeWindow="15">
        <providers>
            <clear/>
            <add name="Bar"
                 type="System.Web.Security.SqlMembershipProvider"
                 connectionStringName="qqq"
                 applicationName="/"
                 enablePasswordRetrieval="false"
                 enablePasswordReset="false"
                 requiresQuestionAndAnswer="false"
                 requiresUniqueEmail="true"
                 passwordFormat="Hashed" />
        </providers>
    </membership>

The addition of all this code was unfortunate in my view, considering that the default configuration in machine.config is almost identical.  Despite several attempts, however, no way could be found to avoid it.  After enabling these two components, the connection string had to be defined.

Because the experiments were performed on the standard edition of SQL Server and not the express version, the default connection string defined in machine.config for the aspnetdb database was insufficient and a new one had to be defined.  This was done by adding the following to the connectionStrings element of the service’s configuration file:

    <add name="qqq"
        connectionString="Data Source=localhost;Integrated
            Security=SSPI;Initial Catalog=aspnetdb;"/>
 

The next configuration step was to set the client credential type of the message security to username.  This was done by adding a binding element and assigning its name to the endpoint of the service as follows:

    <endpoint address=""
        binding="wsHttpBinding"
        bindingConfiguration="Foobar"
        contract="Microsoft.ServiceModel.Samples.ICalculator" />
    <wsHttpBinding>
        <binding name="Foobar">
            <security mode="Message">
                <message clientCredentialType="UserName" />
            </security>
        </binding>
    </wsHttpBinding>

The last configuration that the service required was the definition of a custom behavior that sets the username/password authentication mode and the service certificate information, as seen in the following listing:

    <behavior name="CalculatorServiceBehavior">    
       
<serviceCredentials>
            <userNameAuthentication
                userNamePasswordValidationMode="MembershipProvider" />
            <serviceCertificate storeLocation="LocalMachine"
                storeName="My"
                x509FindType="FindBySubjectName"
                findValue="localhost" />
        </serviceCredentials>          
       
<serviceAuthorization
            principalPermissionMode="UseAspNetRoles" />
     </behavior>

Once the service configuration was complete, the last thing needed before it could begin receiving requests from clients was the creation of the certificate.  This was done using a script included with a Microsoft WCF sample (from which the code of the experiment was derived).  It boiled down to these two commands:

    makecert.exe -sr LocalMachine -ss MY -a sha1 -n CN=localhost -sky exchange -pe
    certmgr.exe -add -r LocalMachine -s My -c -n localhost -r CurrentUser -s TrustedPeople

Configuring and coding the client was much simpler than the service.  The configurations needed in the client were setting the credential type to username (as in the service) and setting the certificate validation mode to PeerTrust.  The configuration file used by the client in the experiments can be found in my stash (and a sketch of it appears below).  Before calling the Add method of the service, the client had to provide its credentials.  To pass these to the server, the ClientCredentials property of the auto-generated proxy was used as follows:

    // Create a proxy object using the auto-generated
    // CalculatorClient class.
    CalculatorClient client = new CalculatorClient();
    client.ClientCredentials.UserName.Password = "user4_user4";
    client.ClientCredentials.UserName.UserName = "user4";
    // Call the Add service operation.
    double result = client.Add(100.0, 15.99);

When the given username wasn’t in the Managers role, the invocation of Add resulted in a MessageSecurityException.  When it was, the operation succeeded.
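
For reference, here is a minimal sketch of the client-side configuration just described; the endpoint address is an assumption, and the binding configuration mirrors the service’s:

    <client>
        <endpoint address="http://localhost/calculator"
            binding="wsHttpBinding"
            bindingConfiguration="Foobar"
            behaviorConfiguration="ClientCertificateBehavior"
            contract="Microsoft.ServiceModel.Samples.ICalculator" />
    </client>
    <behaviors>
        <endpointBehaviors>
            <behavior name="ClientCertificateBehavior">
                <clientCredentials>
                    <serviceCertificate>
                        <!-- Trust the service's makecert-issued
                             certificate via the TrustedPeople store. -->
                        <authentication
                            certificateValidationMode="PeerTrust" />
                    </serviceCertificate>
                </clientCredentials>
            </behavior>
        </endpointBehaviors>
    </behaviors>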

The ASP.NET providers offer an important alternative to NT and AD groups.  Their use is effective and poses few obstacles.  As the experiments were performed, the most time-consuming hang-ups encountered were the configuration of the membership and role providers (due to a futile attempt to use the settings defined in machine.config), the need for a certificate to secure client/service communications, and the creation of new users and groups.  After overcoming these hurdles, the system was easy to use and work with.  Problems that remain unsolved include the lack of a simpler user/group management system and of a way to avoid hard-coding user and group information in method attributes.  If these complications can be solved, the system offers developers a fantastic way to secure their WCF services.

The complete source code used in this experiment can be found at http://travisspencer.com/stash/dotnet_providers/.

I just started a project with a new team of developers who are using contract-first development when defining the data contracts of their services.  So, I am objectively reexamining this methodology and weighing it against its alternative, code-first development.

In doing so, I’ve seen that contract-first development has many compelling benefits.  It offers support for versioning, versatility, predictability, and interoperability, to name a few.  Aaron Skonnard insightfully retells how these two approaches were used in COM development in his article Contract-First Service Development.  VB programmers, Skonnard explains, used code-first development to define the interfaces of their COM components, which led to versioning issues and incompatibility between their components and those of their C++ contemporaries.

The new group I’m working with is building their services using WCF.  When doing contract-first design with this new technology, it is the job of the DataContractSerializer (à la svcutil) to convert the contract from XML Schema into C# code.  This component places a number of restrictions on which parts of XML Schema are allowed in the data contract.  If these restrictions aren’t adhered to, svcutil will generate C# code that is (de)serialized by the XmlSerializer, which levies a purported 10% performance penalty.*  When creating services in WCF, these restrictions and performance penalties limit the versatility that contract-first development can provide.
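
To make these restrictions concrete, here is a small example; the type and element names are invented.  A schema that stays within the serializer’s subset uses plain sequences of elements, while constructs such as XML attributes or xs:choice fall outside it and trigger the XmlSerializer fallback:

    <!-- Within the data contract subset: a sequence of elements. -->
    <xs:complexType name="Person">
        <xs:sequence>
            <xs:element name="Name" type="xs:string" nillable="true" />
            <xs:element name="Age" type="xs:int" />
        </xs:sequence>
    </xs:complexType>

    <!-- Outside the subset: adding an XML attribute to the type
         would force svcutil to fall back to the XmlSerializer.
    <xs:attribute name="id" type="xs:int" />
    -->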

When using code-first development, some translation application has to convert the code into a language-agnostic contract.  Reliance upon such a program can make code-first developers susceptible to unpredictable results.  This dependency hurt VB developers at times, Skonnard further explains, because the auto-generated contract was suboptimal vis-à-vis interoperability.  When the output of a translation system isn’t carefully crafted with compatibility in mind, integration issues ensue.  Learning from this mistake, Microsoft developed the DataContractSerializer to be used as the default WCF serialization layer (rather than reusing the XmlSerializer) for the express purpose of generating interoperable contracts from CLR code.  As a result, its output is predictable and compatible with other Web service platforms.
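
To illustrate the code-first path, here is a minimal sketch of a CLR type from which the DataContractSerializer can produce a schema; the type and member names are invented:

    using System.Runtime.Serialization;

    // A code-first data contract; svcutil (or the service's metadata
    // endpoint) projects this type into XML Schema.
    [DataContract]
    public class Person
    {
        [DataMember]
        public string Name;

        [DataMember]
        public int Age;
    }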

Two other issues that should be factored in when choosing between contract- or code-first development are the level of expertise required to design contracts in XML Schema and the integration of the translation layer into the IDE used by developers.  While learning XML Schema isn’t hard, it’s not something that many junior developers know, limiting productivity if they are required to learn it before defining their service contracts.  Also with regard to productivity, Visual Studio 2005 and 2008 (Beta 2) do not provide an integrated way to translate XML Schemas into CLR code that is (de)serialized with the new serializer.  Developers have to create a pre-build event that runs svcutil over their XSD file or manually invoke it from the command line (an example of such a command follows).
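
For example, a pre-build event along these lines generates data contract types from a schema; the file names are assumptions:

    svcutil.exe /dconly /language:C# /out:DataContracts.cs Person.xsd

The /dconly switch restricts svcutil to data contract generation, so, as I understand it, a schema that strays outside the DataContractSerializer’s subset produces an error rather than a silent fallback.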

Due to the interoperable nature of the DataContractSerializer, the limited skill set of junior developers, and the lack of integrated tool support in Visual Studio, I think that data contracts should be designed using code-first methods when working with WCF.

* Microsoft Windows Communication Foundation: Hands-on by McMurtry, Mercuri, and Watling, p. 75.

There is a lot of talk these days about SOA, Web services, and workflows.  What is fueling this conversation?  As an engineer, I don’t have the vantage point to answer this question; however, in their book Web Services Platform Architecture, Weerawarana et al. point out that the primary motivation is capitalism.

As businesses strive to survive in increasingly aggressive markets, they have seen that profits go hand in hand with the processes used to produce goods and services.  This awareness has shown companies that they need to A) understand, document, and automate their business processes, B) monitor and analyze them, and C) optimize their workflows to be as efficient as possible.

The need that businesses have to understand their processes is followed closely by a necessity to automate them.  This demand is what is fueling the push at the IT level for Workflow Management Systems.  In order to help in-house development teams and ISVs fulfill this need, toolkits such as Windows Workflow Foundation (WF) have arisen, as have standards such as BPEL4WS, which are designed to facilitate interoperability between systems built on such frameworks.

Comprehension and automation are only the beginning.  Once companies have understood and computerized their workflows, they need to analyze them.  By timing, trending, and monitoring procedures, companies have the information, reports, and facts necessary to hypothesize and theorize about better methods that they can use to gain competitive advantages and to be more profitable.  This leads to the optimization of their processes.

In order to do so, companies outsource their peripheral activities to partners.  By using contractors, previously weak and poorly performing tasks are completed more quickly and efficiently.  This delegation means that the optimized processes are completed faster, resulting in higher profits.  Peak efficiency through outsourcing means that automated processes must flow across inter-company boundaries.

In this increasingly federated business environment, companies can no longer depend on isolated, homogeneous information systems; instead, they must move to heterogeneous ones that make no assumptions about the implementation technology used by their partners.  To achieve this, everyone must agree upon standards that ensure secure, reliable communication with the necessary QoS.  SOA and Web services facilitate this, which is why they’re being touted so heavily.

A few weeks back, a customer pointed out a WCF-based implementation of WS-Discovery on netfx3.com.  They said that this implementation would be included in the next version of the .NET framework and that they were going to use the sample in the meantime to publish and discover services within their system.

As I thought about it more, I became confused.  I thought that service discovery was handled by UDDI.  Perhaps UDDI had been superseded by WS-Discovery, I thought; however, in his new book SOA Using Java Web Services, Dr. Mark Hansen says that UDDI is very important.  (He didn’t discuss WS-Discovery, though.)  So, UDDI isn’t outdated and replaced by WS-Discovery as I originally thought.  Then how are the two related?

After a bit of research, I’ve learned that the two aren’t competitors; they’re complements.  In general, both provide a way to find and consume available services on a network; however, the approaches they take to supplying discovery are fundamentally different.  As with other aspects of computer systems, Web services use one of two methods to discover available network resources: looking in a well-known location or broadcasting a request to everyone that’s listening.  UDDI takes the former tack while WS-Discovery takes the latter.

UDDI provides a central registry to store information about available services.  It supplies a catalog where consumers can find services that meet their needs.  This phonebook-like directory of information allows consumers to find services by name, address, contract, category, or other data.  UDDI can be thought of as the DNS of Web services.
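
For instance, a consumer searching the registry by name sends an inquiry message to the registry’s inquiry API; below is a sketch of a UDDI version 2 find_service request, with a made-up service name:

    <find_service generic="2.0" xmlns="urn:uddi-org:api_v2">
        <name>CalculatorService</name>
    </find_service>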

On the other hand, WS-Discovery provides a protocol to discover services that are coming and going from a network.  As a service joins the network, it informs its peers of its arrival by multicasting a Hello message; likewise, when a service drops off the network, it multicasts a Bye message.  WS-Discovery doesn’t rely on a single node to host information about all available services as UDDI does.  Rather, each node forwards information about available services in an ad hoc fashion.  This reduces the amount of network infrastructure needed to discover services and facilitates bootstrapping.
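
Based on the published specification, a Hello announcement looks roughly like the following on the wire; the endpoint and transport addresses are made up, and optional elements are omitted:

    <soap:Envelope
        xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
        xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
        xmlns:wsd="http://schemas.xmlsoap.org/ws/2005/04/discovery">
        <soap:Header>
            <wsa:Action>
                http://schemas.xmlsoap.org/ws/2005/04/discovery/Hello
            </wsa:Action>
            <wsa:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</wsa:To>
        </soap:Header>
        <soap:Body>
            <wsd:Hello>
                <wsa:EndpointReference>
                    <wsa:Address>
                        urn:uuid:2b2c63c7-31e7-4eb7-9a3b-7d3a32b6e1a5
                    </wsa:Address>
                </wsa:EndpointReference>
                <wsd:XAddrs>http://192.168.0.10/calculator</wsd:XAddrs>
                <wsd:MetadataVersion>1</wsd:MetadataVersion>
            </wsd:Hello>
        </soap:Body>
    </soap:Envelope>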

This last point is an important one.  With UDDI, the only services that can be discovered are those that have registered with the directory service.  Non-registered services may exist on the network, but, if they haven’t registered, clients can’t find them.  Unless a service knows where the directory is, it can’t register itself.  This foreknowledge is usually gained by configuration, making the system less agile.  Because UDDI isn’t dynamic, the registry can contain stale, outdated information about services that are no longer available.  Conversely, WS-Discovery provides a decentralized system that ensures that whichever service is found is actually available.

Another important distinction is that UDDI is a third-version standard governed by OASIS, while WS-Discovery hasn’t been ratified by any standards body.  Instead, it is simply an as-is publication provided by a group of industry leaders (including Microsoft, Intel, and BEA).  In my mind, this makes WS-Discovery more risky; however, this hazard is slightly mitigated by its purported use in Windows Vista.  While its adoption in Microsoft’s new operating system shows that the protocol is capable, its risk could be exacerbated by reports that its use may require the future payment of royalties.  The quote cited in that article couldn’t be found in the current version of the specification, though, so it seems that the concern is moot.

For more information about the relationship between UDDI and WS-Discovery, see the following: