September 2007 Archives

If you are having problems with the stored procedure wrapper methods generated by the Object Relational Designer (O/R Designer), be sure to check out a recent blog entry by Manny Siddiqui.  The problem may stem from insufficient permissions on the database server.

I just started a project with a new team of developers who are using contract-first development when defining the data contracts of their services.  So, I am objectively reexamining this methodology and weighing it against its alternative – code-first development.

In doing so, I’ve seen that contract-first development has many compelling benefits.  It offers support for versioning, versatility, predictability, and interoperability, to name a few.  Aaron Skonnard insightfully retells how these two approaches were used in COM development in his article Contract-First Service Development.  VB programmers, Skonnard explains, used code-first development to define the interfaces of their COM components, which led to versioning issues and incompatibility between their components and those of their C++ contemporaries.

The new group I’m working with is building their services using WCF.  When doing contract-first design with this new technology, it is the job of the DataContractSerializer’s schema importer (invoked via svcutil) to convert the contract from XML Schema into C# code.  This component places a number of restrictions on which parts of XML Schema are allowed in the data contract.  If these restrictions aren’t adhered to, svcutil will generate C# code that is (de)serialized by the XmlSerializer instead, which levies a purported 10% performance penalty.*  When creating services in WCF, these restrictions and performance penalties limit the versatility that contract-first development can provide.
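
To make these restrictions concrete, here is a small, hypothetical schema (the type and namespace are mine, not from any real project) that stays within the subset the DataContractSerializer understands: essentially a named complex type that is nothing more than a sequence of elements.  Constructs such as XML attributes or xs:choice fall outside that subset and trigger the fallback to the XmlSerializer.

    <!-- Hypothetical contract-first schema kept within the
         DataContractSerializer-friendly subset: elements only. -->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:tns="http://example.org/orders"
               targetNamespace="http://example.org/orders"
               elementFormDefault="qualified">
      <xs:complexType name="Order">
        <xs:sequence>
          <xs:element name="Customer" type="xs:string" nillable="true" minOccurs="0" />
          <xs:element name="OrderId" type="xs:int" />
          <xs:element name="Total" type="xs:decimal" />
        </xs:sequence>
      </xs:complexType>
      <xs:element name="Order" type="tns:Order" nillable="true" />
    </xs:schema>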

When using code-first development, some translation application has to convert the code into a language-agnostic contract.  Reliance upon such a program can make code-first developers susceptible to unpredictable results.  This dependency hurt VB developers at times, Skonnard further explains, because the auto-generated contract was suboptimal vis-à-vis interoperability.  When the output of a translation system isn’t carefully crafted with compatibility in mind, integration issues ensue.  Learning from this mistake, Microsoft developed the DataContractSerializer to be used as the default WCF serialization layer (rather than reusing the XmlSerializer) for the sole purpose of generating interoperable contracts from CLR code.  As a result, its output is predictable and compatible with other Web service platforms.
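
For comparison, here is a minimal sketch of the code-first side (the type name is hypothetical, not something from the team’s project); a class like this is exactly what the DataContractSerializer is designed to turn into the same plain, element-only style of schema shown above:

    // Hypothetical code-first data contract; the DataContractSerializer
    // maps it to a predictable, attribute-free XML Schema.
    using System.Runtime.Serialization;

    [DataContract(Namespace = "http://example.org/orders")]
    public class Order
    {
        [DataMember]
        public int OrderId;

        [DataMember]
        public string Customer;

        [DataMember]
        public decimal Total;
    }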

Two other issues that should be factored in when choosing between contract- or code-first development are the level of expertise required to design contracts in XML Schema and the integration of the translation layer into the IDE used by developers.  While learning XML Schema isn’t hard, it's not something that many junior developers know, limiting productivity if they are required to learn it before defining their service contracts.  Also with regard to productivity, Visual Studio 2005 and Visual Studio 2008 (Beta 2) do not provide an integrated way to translate XML Schemas into CLR code that is (de)serialized with the new serializer.  Developers have to create a pre-build event that runs svcutil over their XSD file or manually invoke it from the command line.
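
For example, a pre-build event along the following lines (the file names are placeholders) runs svcutil in its data-contract-only mode over a schema and emits C# types that the DataContractSerializer can handle:

    rem Hypothetical pre-build step: generate data contract types from an XSD.
    svcutil.exe /dconly /language:cs /out:DataContracts.cs ServiceSchema.xsd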

Due to the interoperable nature of the DataContractSerializer, the limited skill set of junior developers, and the lack of integrated tool support in Visual Studio, I think that data contracts should be designed using code-first methods when working with WCF.

* Microsoft Windows Communication Foundation: Hands-on by McMurtry, Mercuri, and Watling, p. 75.

Loose Coupling


Juval Löwy describes the four tenets of service-oriented architectures in his wonderful book Programming WCF Services:

  1. Service boundaries are explicit
  2. Services are autonomous
  3. Services share operational contracts and data schema, not type- and technology-specific metadata
  4. Services are compatible based on policy

By building upon these pillars, engineers ensure that the architectures they define are service-oriented.  These guidelines help ensure that the components within a system are loosely coupled – a key requirement of a SOA.

The importance of reduced coupling isn’t new or SOA-specific.  I remember being taught in college to strive to lower coupling irrespective of the architectural pattern being applied; however, I don’t recall being given any practical advice on how to achieve this or any metrics to measure it. 

In fact, I’ve heard it said that coupling can’t be quantified; I disagree, however.  If you count the number of distinct service operations that a client invokes, you can determine how tightly it’s bound to that service; a client that calls three of a service’s twenty operations is far easier to migrate than one that calls all twenty.  Swapping the service out would require a replacement that supports at least that same set of operations.  If you don’t own all of the clients, however, you can’t make this determination and must assume that all of the operations are in use.

For this reason, it is advisable to:

  1. limit the number of operations exposed by a service (Löwy suggests no more than 20);
  2. not define two methods if one will suffice (i.e., avoid convenience functions); and
  3. if the arguments of an operation are numerous and likely to change, bundle them together in a structure and pass that to the operation (see the sketch after this list).
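
Here is a rough illustration of the third point (all of the names are hypothetical); because the volatile arguments travel inside a single data contract, a new member can be added later without changing the operation’s signature:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Bundle the operation's volatile arguments into one structure...
    [DataContract]
    public class CreateOrderRequest
    {
        [DataMember]
        public string Customer;

        [DataMember]
        public string Product;

        [DataMember]
        public int Quantity;

        // ...so that a new, optional member can be added here later
        // without touching the service contract below.
    }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        void CreateOrder(CreateOrderRequest request);
    }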

For other pointers, see John Evdemon’s blog as well as David Orchard’s.  After reading those, check out the other suggestions that Löwy has in Appendix C of his book Programming WCF Services.  I also suggest browsing through Milind Shingade's article where he defines different types of coupling.

As Rozanski and Woods state in their book Software Systems Architecture, every problem has many candidate architectures.  So, what makes one service-oriented and not another?  Weerawarana et al. outline the qualities of a SOA in their book Web Services Platform Architecture.  They say that a SOA is one that meets the following requirements:

  • It is composed of loosely coupled services.
  • The interface of these services is defined in a programming-language- and platform-agnostic manner.
  • It furnishes service providers with a means to publish their functionality.
  • It provides consumers with a way to find published services that meet their needs and allows them to bind to these services dynamically.
  • The above requirements are governed by standards that are not implementation specific (i.e., language- and framework-neutral).

Using this definition, are the following SOAs: DCOM, CORBA, Web services, WCF, JWS?  Though many will say that they are, doing so is like saying that metal is a car.  You use raw materials, like metal, to make cars, but no one would conflate them.  Likewise, these technologies can be used to create Service-Oriented Architectures, but they aren’t architectures at all.  Brenda Michelson, Program Director of the SOA Consortium, stated in the SOA Consortium’s blog that "SOA is not about the underlying technology, but about 'enabling organizations to create, and adapt to, change.'"  Similarly, Richard Soley has respectfully rejected Jan Pokin’s definition that "SOA is a technology;" he says that "SOA is an enterprise strategy based on an architectural principle."

Businesses and architects would be wise to guard against the trap of equating SOAs with the technologies used to build them.  If ensnared by this wile, they’ll build systems that are in the same predicament as the ones they're phasing out.  Companies will cite their use of WCF, JWS, etc. (which were sold to them as the pathway out of entanglement) and fail to understand why their systems are still tightly knit together, costly to update, and unable to adapt in time to profit from ever-changing market trends.

There is a lot of talk these days about SOA, Web services, and workflows.  What is fueling this conversation?  As an engineer, I don’t have the vantage point to answer this question; however, in their book Web Services Platform Architecture, Weerawarana et al. point out that the primary motivation is capitalism.

As businesses strive to survive in increasingly aggressive markets, they have seen that profits go hand in hand with the processes used to produce goods and services.  This awareness has shown companies that they need to A) understand, document, and automate their business processes, B) monitor and analyze them, and C) optimize their workflows to be as efficient as possible.

The need that businesses have to understand their processes is followed closely by a necessity to automate them.  This demand is what is fueling the push at the IT level for Workflow Management Systems.  In order to help in-house development teams and ISVs fulfill this need, toolkits such as Windows Workflow Foundation (WWF) have arisen, as have standards such as BPEL4WS, which are designed to facilitate interoperability between systems built on such frameworks.

Comprehension and automation are only the beginning.  Once companies have understood and computerized their workflows, they need to analyze them.  By timing, trending, and monitoring procedures, companies have the information, reports, and facts necessary to hypothesize and theorize about better methods that they can use to gain competitive advantages and to be more profitable.  This leads to optimization of their processes.

In order to do so, companies outsource their peripheral activities to partners.  By using contractors, tasks that were previously weak and poorly performed are completed more quickly and efficiently.  As a result, the optimized processes are completed faster, resulting in higher profits.  Peak efficiency through outsourcing, however, means that automated processes must flow across inter-company boundaries.

In this increasingly federated business environment, companies can no longer depend on isolated, homogeneous information systems; instead they must move to heterogeneous ones that make no assumptions about the implementation technology used by their partners.  To achieve this, everyone must agree upon standards that ensure secure, reliable communication and achieve the necessary QoS.  SOA and Web services facilitate this, which is why they’re being touted so heavily.

In my opinion, the names that software engineers use to describe their work are very important.  Whether they refer to a variable in source code, an operation of a service, a component in a system, or some other "thing", names are paramount.  As IT professionals, a lot of our job is centered around communication.  If we aren't using the same words, or ones that aptly describe the ideas they’re intended to convey, we aren't doing our jobs to the best of our abilities.

As Rozanski and Woods point out in their excellent book Software Systems Architecture, names are "sticky."  For this reason, they should be chosen carefully.  When considering names by which to refer to things, it is best to see if the same idea already has a name; if it does, it should be used.  While this advice may seem obvious, it isn't always followed, making miscommunication inevitable.  The primary reason that I've seen for misnomers is ignorance.  If you don't know a widget is a widget, you'll call it a gizmo. 

In an effort to avoid this in the domain of workflow management, I'd like to share a collection of terms that I compiled while reading Workflow Management by van der Aalst and van Hee and the Workflow Management Coalition's Workflow Management Coalition Terminology & Glossary (PDF).

The compiled glossary (PDF) can be found in my stash.

A few weeks back, a customer pointed out a WCF-based implementation of WS-Discovery on netfx3.com.  They said that this implementation would be included in the next version of the .NET framework and that they were going to use the sample in the meantime to publish and discover services within their system.

As I thought about it more, I became confused.  I thought that service discovery was handled by UDDI.  Perhaps UDDI had been superseded by WS-Discovery, I thought; however, in his new book SOA Using Java Web Services, Dr. Mark Hansen says that UDDI is very important.  (He didn't discuss WS-Discovery though.)  So, UDDI isn't outdated and replaced by WS-Discovery as I originally thought.  Then, how are the two related?

After a bit of research, I've learned that the two aren't competitors, but rather complements.  In general, both provide a way to find and consume available services on a network; however, the approaches they take to supplying discovery are fundamentally different.  As with other aspects of computer systems, Web services use one of two methods to discover available network resources: looking in a well-known location or broadcasting a request to everyone that's listening.  UDDI takes the former tack while WS-Discovery takes the latter.

UDDI provides a central registry to store information about available services.  It supplies a catalog where consumers can find services that meet their needs.  This phonebook-like directory of information allows consumers to find services by name, address, contract, category, or other data.  UDDI can be thought of as the DNS of Web services.
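
To give a feel for this lookup, here is a rough, illustrative sketch of a UDDI inquiry (the service name is made up, and the exact envelope and attributes differ between UDDI versions, so don't treat this as a canonical message):

    <!-- Ask the registry for services whose name matches "OrderService". -->
    <find_service xmlns="urn:uddi-org:api_v3">
      <name>OrderService</name>
    </find_service>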

On the other hand, WS-Discovery provides a protocol to discover services that are coming and going from a network.  As a service joins the network, it informs its peers of its arrival by broadcasting a Hello message; likewise, when services drop off the network they multicast a Bye message.  WS-Discovery doesn’t rely on a single node to host information about all available services as UDDI does.  Rather, each node forwards information about available services in an ad hoc fashion.  This reduces the amount of network infrastructure needed to discover services and facilitates bootstrapping. 
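
Roughly speaking (I'm paraphrasing the April 2005 specification from memory, and the endpoint details below are invented), the announcement is a SOAP message multicast over UDP whose body carries a Hello element describing the new endpoint:

    <!-- Illustrative WS-Discovery Hello announcement (body only). -->
    <d:Hello xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
             xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
             xmlns:tns="http://example.org/orders">
      <a:EndpointReference>
        <a:Address>urn:uuid:11111111-2222-3333-4444-555555555555</a:Address>
      </a:EndpointReference>
      <d:Types>tns:OrderService</d:Types>
      <d:XAddrs>http://192.168.0.5:8080/OrderService</d:XAddrs>
      <d:MetadataVersion>1</d:MetadataVersion>
    </d:Hello>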

This last point is an important one.  With UDDI, the only services that can be discovered are those that have registered with the directory service.  Non-registered services may exist on the network, but, if they haven’t registered, clients can’t consume them.  Unless a service knows where the directory is, it can’t register itself.  This foreknowledge is usually gained by configuration, making the system less agile.  Because UDDI isn’t dynamic, the registry can contain stale, outdated information about services that are no longer available.  Conversely, WS-Discovery provides a decentralized system that ensures that whichever service is found is available.

Another important distinction is that UDDI is a version 3 standard governed by OASIS while WS-Discovery hasn’t been ratified by any standards body.  Instead, it is simply an as-is publication provided by a group of industry leaders (including Microsoft, Intel, and BEA).  In my mind, this makes WS-Discovery more risky; however, this hazard is slightly mitigated by its purported use in Windows Vista.  While its adoption in Microsoft’s new operating system shows that the protocol is capable, its risk is exacerbated by reports that its use may require the future payment of royalties.  The quote cited in the article couldn’t be found in the current version of the specification, though, so it seems that the concern is moot.

For more information about the relationship between UDDI and WS-Discovery, see the following: