Monthly Archives: March 2007

[Update: The electronic version (in Dutch) is available online now.]

My article on developing interactive TV websites using ASP.NET 2.0 has been published in the Dutch .NET Magazine #16. I mentioned in January that I was writing this article, and I made the deadline.

If you are subscribed to this magazine, you received issue #16 last Friday. The electronic version (in Dutch!) is not online yet, but it should appear in the near future. All previous issues are available here.

The typesetting process has caused a few minor issues that I would like to rectify here:

  • The byline of code example 1 reads "WMC.browserbestand om Windows Media Center mee te herkennen". This should be "WMC.browser bestand etc." I.e., it's a file with the .browser extension.
  • In the "ASP.NET 2.0 browser sniffing" section, the line "Daarin kun je bestanden plaatsen met de browserextensie." should read "Daarin kun je bestanden plaatsen met de .browser extensie".
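For reference, a minimal .browser file to detect Windows Media Center might look roughly like the sketch below. This is my own illustration, not the code from the article; the user-agent match expression and the capability name are assumptions.

```xml
<!-- App_Browsers/WMC.browser: a sketch of an ASP.NET 2.0 browser definition file.
     The userAgent pattern and the capability name are assumptions. -->
<browsers>
  <browser id="WMC" parentID="IE">
    <identification>
      <!-- Windows Media Center identifies itself in the User-Agent string -->
      <userAgent match="MediaCenter" />
    </identification>
    <capabilities>
      <capability name="isMediaCenter" value="true" />
    </capabilities>
  </browser>
</browsers>
```

ASP.NET picks up files with the .browser extension from the App_Browsers folder automatically; no registration is needed.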

If any other issues are discovered, I'll update this post.

*sigh*. It seems you can't strong name a C++/CLI assembly, i.e., sign a managed C++ assembly, with a .pfx file the way you can in C# or VB in Visual Studio 2005. You have to fall back to using an unprotected .snk file ;(

An MSDN forum post about this issue was never answered. It seems that the Visual C++ team is unaware that its sister products can even use a .pfx file to sign a managed assembly. And the writers of this MSDN article were apparently unaware that Visual C++ is also part of Visual Studio 2005, since they assume signing can be done with either a .snk or a .pfx file.
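For reference, the fallback workflow looks roughly like this (the key file name is a placeholder; sn.exe ships with the .NET Framework SDK):

```shell
# Generate an unprotected strong-name key pair with the Strong Name tool:
sn -k MyKey.snk

# Then point the C++/CLI project at it, either via the linker option
#   /KEYFILE:MyKey.snk
# or via an attribute in AssemblyInfo.cpp:
#   [assembly:AssemblyKeyFileAttribute("MyKey.snk")];
```

The obvious downside is that the .snk file contains the private key in the clear, which is exactly what the password-protected .pfx approach avoids.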

The situation

I am currently working on a new version of my Flickr Metadata Synchr tool. The goals for my open-source project on CodePlex are described on the Flickr Metadata Synchr wiki page, and you can always find the latest status there.

At the moment the latest public release is v0.5.5.0. Its feature set is roughly:

  • Allow you to select a Flickr photoset and a local directory with images.
  • Load metadata from both local and Flickr images into internal metadata structures.
  • Compare these metadata structures and synchronize them.
  • Update metadata on Flickr after the synchronization.

One of the features planned for v0.6.0.0 is updating the XMP and IPTC metadata in locally stored images. I was planning on doing this through the Windows Imaging Component (WIC) which is part of the .NET Framework 3.0. WIC is also available as a separate download for Windows XP and Windows Server 2003.

Windows Presentation Foundation provides a nice managed API for reading and writing metadata through WIC. It provides the SetQuery and GetQuery methods on the BitmapMetadata class. I was already using the GetQuery method, which works fine. However, I hit a snag when I wanted to use the SetQuery method to update metadata.
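As a minimal sketch of the reading side (the file name and the query path are placeholders; the query syntax should be checked against the WIC metadata query language documentation):

```csharp
using System;
using System.IO;
using System.Windows.Media.Imaging; // PresentationCore (WPF / .NET 3.0)

class ReadMetadataSketch
{
    static void Main()
    {
        // "photo.jpg" is a placeholder for one of the locally stored images.
        using (Stream stream = File.Open("photo.jpg", FileMode.Open, FileAccess.Read))
        {
            BitmapDecoder decoder = new JpegBitmapDecoder(
                stream,
                BitmapCreateOptions.PreservePixelFormat,
                BitmapCacheOption.OnDemand);

            BitmapMetadata metadata = (BitmapMetadata)decoder.Frames[0].Metadata;

            // Query the XMP dc:title field; the query path is an assumption.
            object title = metadata.GetQuery("/xmp/dc:title");
            Console.WriteLine(title);
        }
    }
}
```

GetQuery returns null when the queried field is not present, which makes it easy to probe several metadata locations in turn.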

Plan A

There is a way to do this through the InPlaceBitmapMetadataWriter class. It only touches the metadata structures in the image file and doesn't have to read or write the entire stream of pixel data. This gives you excellent performance, and you don't run the risk of reencoding the pixel stream or losing metadata. The sad thing is that it almost never works. The image file often doesn't have enough room in its metadata structures for metadata fields to be added or updated. When that happens, the InPlaceBitmapMetadataWriter fails to save. That is probably why the save method is called TrySave. By the way, the code sample on that MSDN page is dead wrong: it calls TrySave before updating the metadata, which always succeeds, probably because there is nothing to save yet. You have to call it after you update the metadata, and then it returns false ;( Which means your metadata was not updated successfully.
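To illustrate the call order that actually exercises the writer (file name and query path are placeholders, not code from the MSDN sample):

```csharp
using System;
using System.IO;
using System.Windows.Media.Imaging; // PresentationCore (WPF / .NET 3.0)

class InPlaceSketch
{
    static void Main()
    {
        // The stream must be writable for an in-place update.
        using (Stream stream = File.Open("photo.jpg", FileMode.Open, FileAccess.ReadWrite))
        {
            BitmapDecoder decoder = new JpegBitmapDecoder(
                stream,
                BitmapCreateOptions.PreservePixelFormat,
                BitmapCacheOption.OnDemand);

            InPlaceBitmapMetadataWriter writer =
                decoder.Frames[0].CreateInPlaceBitmapMetadataWriter();

            // First update the metadata...
            writer.SetQuery("/xmp/dc:title", "My title");

            // ...then call TrySave. Only now does the result mean anything:
            // it returns false when the file lacks room for the new value.
            if (!writer.TrySave())
                Console.WriteLine("Not enough room in the metadata structures.");
        }
    }
}
```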

Plan B

So I tried Plan B: creating a new image file by writing out a copy of the original image, but now with updated metadata. This means you have to grab the original BitmapFrame from the JpegBitmapDecoder, clone it, update its metadata, and write it out again using the JpegBitmapEncoder.
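Roughly, that sequence looks like this sketch (file names and the query path are placeholders):

```csharp
using System.IO;
using System.Windows.Media.Imaging; // PresentationCore (WPF / .NET 3.0)

class CopyWithMetadataSketch
{
    static void Main()
    {
        using (Stream source = File.OpenRead("photo.jpg"))
        {
            BitmapDecoder decoder = new JpegBitmapDecoder(
                source,
                BitmapCreateOptions.PreservePixelFormat,
                BitmapCacheOption.OnLoad);

            BitmapFrame original = decoder.Frames[0];

            // Clone the metadata so it can be modified.
            BitmapMetadata metadata = (BitmapMetadata)original.Metadata.Clone();
            metadata.SetQuery("/xmp/dc:title", "My title"); // query path is an assumption

            // Build a new frame around the original pixels plus the updated metadata.
            JpegBitmapEncoder encoder = new JpegBitmapEncoder();
            encoder.Frames.Add(BitmapFrame.Create(
                original, original.Thumbnail, metadata, original.ColorContexts));

            using (Stream target = File.Create("photo-updated.jpg"))
            {
                encoder.Save(target);
            }
        }
    }
}
```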

This is where I hit a major problem. The Save() method on the JpegBitmapEncoder almost always fails with an InvalidOperationException and the error message "Cannot write to the stream". When the encoder is able to write out the image, the JPEG turns out to be reencoded at a different quality than the original, noticeable as a significant change in file size. This happens even though I specified the BitmapCreateOptions.PreservePixelFormat option when opening the image with the decoder. Googling (or Windows Live Searching, if you will) for a solution didn't yield anything useful.

Plan C

I had to come up with a Plan C. The Windows Vista Shell is obviously able to update metadata in images without affecting the JPEG quality and without creating a copy of the image file. This led me to an MSDN article titled "Photo Metadata Policy". This is the introduction:

Metadata (file properties) for photo files can be stored using multiple metadata schemas, in different data formats and in different locations within a file. In Windows Vista™, the Microsoft® Windows® Shell provides a built-in property handler for photo file formats, such as JPEG, TIFF, and PNG, to simplify metadata retrieval.

When a piece of metadata is present in different underlying schemas, the built-in property handler determines which value to return. For instance, the Author property may be stored in the following locations in a TIFF file:

  • The Creator tag in the XMP Dublin Core schema.

  • The Artist tag in the EXIF schema.

  • The Artist tag in the EXIF schema embedded in an XMP block.

On read, the property handler determines the value that takes precedence over the others that exist in the file and returns it. On write, the property handler makes sure it leaves each schema in a resolved and consistent state with the others. This may mean either updating or removing the tag in question in a given schema.
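For a TIFF file, the three Author locations listed above might correspond to WIC metadata queries roughly like the following. These paths are my assumption of what the article's examples showed and should be verified against the WIC metadata query language documentation:

```csharp
using System.IO;
using System.Windows.Media.Imaging; // PresentationCore (WPF / .NET 3.0)

class AuthorQuerySketch
{
    static void Main()
    {
        using (Stream stream = File.OpenRead("photo.tif")) // placeholder file name
        {
            BitmapDecoder decoder = new TiffBitmapDecoder(
                stream,
                BitmapCreateOptions.PreservePixelFormat,
                BitmapCacheOption.OnDemand);

            BitmapMetadata metadata = (BitmapMetadata)decoder.Frames[0].Metadata;

            // All three query paths below are assumptions:
            object creatorXmp  = metadata.GetQuery("/ifd/xmp/dc:creator");   // XMP Dublin Core Creator
            object artistExif  = metadata.GetQuery("/ifd/{ushort=315}");     // Artist tag (315) in the IFD/EXIF schema
            object artistInXmp = metadata.GetQuery("/ifd/xmp/tiff:Artist");  // Artist mirrored inside the XMP block
        }
    }
}
```

The point of the Shell property handler is exactly that you don't have to write this reconciliation logic yourself.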

This would also help to solve another piece of the metadata puzzle: what to do with the several different options of putting metadata in image files (XMP versus IPTC, multiple possible XMP places, etc.). After updating an image, I want the metadata in the different blocks to be consistent. WIC doesn't help with this. You have to sort it out yourself. The Windows Vista Shell does help with metadata reconciliation.

So all seems to be well: just use the Shell API to update the metadata. I would love to be able to do this from C#. Yet that doesn't seem to be possible, or it is extraordinarily difficult. The "file property" handling is implemented in propsys.dll through a COM-based API. But you can't add a reference to this COM library in a C# project, because it doesn't have a type library ;( The only option I can find is to use C++ with the propsys.h and propsys.idl files that are distributed in the Windows SDK. This is horrible. I guess I have to dust off my C++ skills to be able to call a brand-new Windows Vista API. WTF?!

The "Longhorn" promise for managed code

Do you remember the promises Microsoft made back in 2003 for the new Windows client OS codenamed "Longhorn"? I sure do, since I attended the PDC03 conference where this was all announced. Microsoft promised us a brave new world where all Windows APIs could be accessed easily from managed code. Three and a bit years later, we have a new Windows client OS called Vista that doesn't live up to this promise. Microsoft has implemented new APIs that seem to be inaccessible from managed code other than through C++.

Now I can understand why part of the promise was lost during the infamous "Longhorn reset" at Microsoft: Microsoft's ambition to completely wrap all existing Win32 APIs in WinFX was too big. But why Microsoft would create new APIs without managed code in mind is beyond me...

I found some C++ code on the blog of Ben Karas that is indeed able to update metadata. Still, I hate having to add this C++ code to my project. It would require people to have Visual C++ and the Windows SDK (especially the Windows Vista header and library (*.h, *.idl, *.lib) files) installed to be able to build my code in Visual Studio.

Plan D might be to manually create C# wrappers for the COM interfaces of propsys.dll. This article describes how to do this for COM interfaces in general.
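As a rough illustration of what such a hand-written wrapper involves (the IID is the one declared for IPropertyStore in propsys.h, but verify it against your SDK; the PROPVARIANT handling here is a heavily simplified stub):

```csharp
using System;
using System.Runtime.InteropServices;

// Hand-written COM interop sketch for propsys.dll. The GUID and the method
// order must match propsys.h exactly, or calls will go to the wrong vtable slot.
[ComImport]
[Guid("886D8EEB-8CF2-4446-8D02-CDBA1DBDCF99")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface IPropertyStore
{
    void GetCount(out uint propertyCount);
    void GetAt(uint index, out PropertyKey key);
    void GetValue(ref PropertyKey key, out PropVariant value);
    void SetValue(ref PropertyKey key, ref PropVariant value);
    void Commit();
}

// PROPERTYKEY: a GUID plus a property id.
[StructLayout(LayoutKind.Sequential)]
struct PropertyKey
{
    public Guid FormatId;
    public uint PropertyId;
}

// A real wrapper needs a full PROPVARIANT definition with proper
// marshaling of the union; this is only a placeholder stub.
[StructLayout(LayoutKind.Sequential)]
struct PropVariant
{
    public ushort VarType;
    ushort reserved1, reserved2, reserved3;
    public IntPtr Value;
    public IntPtr Value2;
}
```

You would still need a P/Invoke entry point (such as the Vista Shell function SHGetPropertyStoreFromParsingName in shell32.dll) to actually obtain an IPropertyStore for a file.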


As announced on Tom Hollander's blog, the Microsoft Enterprise Library 3.0 will include a new application block: the Policy Injection Application Block.

Enterprise Development Reference Architecture (EDRA)

Edward and I noticed a striking similarity with an earlier effort by Microsoft Patterns & Practices. For other people who have been following the P&P guidance for some years, this similarity didn't go unnoticed either. For example, in the post: Can anyone say Shadowfax?

"Shadowfax" was the codename for the Enterprise Development Reference Architecture (EDRA) released by Microsoft in 2004.

One of the important goals for EDRA was the separation of business logic from cross-cutting concerns in enterprise applications. This was implemented by providing a pipeline of pre- and post-handlers that could be inserted declaratively using XML configuration. Messages would pass through this pipeline before reaching the business logic, and the response would go back through the pipeline as well. EDRA handlers could inspect and even alter the messages flowing through the pipeline.
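Conceptually, such a handler pipeline boils down to something like this sketch (all names are mine, not EDRA's):

```csharp
using System;
using System.Collections.Generic;

// A toy pre/post-handler pipeline in the spirit of EDRA; names are invented.
public interface IHandler
{
    void PreProcess(IDictionary<string, object> message);
    void PostProcess(IDictionary<string, object> message);
}

// Example cross-cutting concern: auditing each request and response.
public class AuditHandler : IHandler
{
    public void PreProcess(IDictionary<string, object> message)
    {
        Console.WriteLine("audit: request " + message["Action"]);
    }

    public void PostProcess(IDictionary<string, object> message)
    {
        Console.WriteLine("audit: response " + message["Result"]);
    }
}

public class Pipeline
{
    private readonly List<IHandler> handlers = new List<IHandler>();
    private readonly Func<IDictionary<string, object>, object> businessLogic;

    public Pipeline(Func<IDictionary<string, object>, object> businessLogic)
    {
        this.businessLogic = businessLogic;
    }

    public void Add(IHandler handler)
    {
        handlers.Add(handler);
    }

    public object Execute(IDictionary<string, object> message)
    {
        // Pre-handlers see (and may alter) the request before the business logic...
        foreach (IHandler handler in handlers)
            handler.PreProcess(message);

        message["Result"] = businessLogic(message);

        // ...and the post-handlers see the response on the way back, in reverse order.
        for (int i = handlers.Count - 1; i >= 0; i--)
            handlers[i].PostProcess(message);

        return message["Result"];
    }
}
```

In EDRA the equivalent of the Add call was driven by XML configuration rather than code, which is what made the insertion declarative.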

One of the other important goals for EDRA was the ability to physically separate the service interface from the service implementation, i.e., to distribute these layers across different tiers and across security boundaries.

Our business unit at LogicaCMG followed and evaluated this effort in 2004 and even used it in some projects. We especially liked the fact that this was a ready-made framework for "policy injection". We made some extensions for cross-cutting concerns not originally included in EDRA. Previously, we had worked on a homemade framework based on .NET remoting extensibility to configure cross-cutting concerns. Although architecturally sound, it still had a long way to go to fulfill our vision.

EDRA also included an early prototype of the Guidance Automation Toolkit to help framework users with building an EDRA based service. This was known as the "Microsoft IPE Wizard Framework". Check out this blog post from Daniel Cazzulino to see the EDRA wizard framework in action.

Eventually we reluctantly decided to drop EDRA. Some of the reasons for this were:

  • The EDRA wizards were hard to extend.
  • You had to mess with a big XML file to configure handlers.
  • No proper .NET 2.0 support.
  • No WS-* support.
  • No support for calling out into other services from the service implementation.
  • Little adoption in the worldwide .NET community, and few publicly available handlers.
  • No clear path towards Windows Communication Foundation. The internal messaging structure was not based on SOAP.

The last reason was probably the decisive one for Microsoft to stop the P&P effort on EDRA. The 1.1 release was announced to be the final one.

The Microsoft Enterprise Library has become far more successful than EDRA in terms of worldwide adoption. There are several EntLib extensions freely available. I've contributed my RollingFileTraceListener to the greater EntLib community. According to the feedback I got, this extension has been really useful for several people and companies. Microsoft has recognized the lack of this functionality in the Enterprise Library, and EntLib 3.0 will include a similar rolling file trace listener out of the box.

EDRA is still being used by some companies. One of the largest implementations is the Commonwealth Bank of Australia CommSee Solution that is based on EDRA. Microsoft is still using this as a reference case. For instance, there was a presentation on CommSee at LEAP2007 in Redmond.

I disliked the idea that EDRA pushed you in the direction of distributing the service interface and service implementation across different tiers. Both layers had to be realized in managed code using EDRA, so the service interface could also just call the service implementation in process. Remember the first law of distributed computing: "Don't distribute!" (unless you have to).

Of course, it was great that you could distribute service interface and service implementation. But not all applications need this.

So in my opinion it is better to have different frameworks that cleanly support these orthogonal concepts:

  • Separating business logic from cross-cutting concerns.
  • Separating service interface from service implementation across physical tiers.


By the way, a sample application that Microsoft commissioned to provide guidance on building distributed systems never saw the public light of day: the Proseware application, designed and built by newtelligence's Clemens Vasters. I never got a clear answer from Richard Turner, the responsible program manager at Microsoft, as to why it would not be released. But I think it was because of internal Microsoft politics: Proseware was too close to the release of WCF and might be perceived as conflicting guidance (too close turned out to be two years!).

Proseware included a great idea to improve the reliability and scalability of web services: using one-way messaging for both requests and responses. That way you can use queuing inside your service to handle heavy loads and to automatically retry failed attempts at processing messages (for example, when a database is temporarily unavailable) without bothering the service clients with retries. To achieve this, the web service interface is a very thin facade around an MSMQ transactional message queue. The web service only returns a fault when the message cannot be placed onto the queue (highly unlikely). If the service implementation fails at processing a message, the message re-enters the queue so it can be processed again. A response from the service implementation is sent as a one-way message to the recipient specified in the WS-Addressing ReplyTo header of the original request. Note that this recipient does not have to be the original caller! It could well be a different service, in which case the response message is really a new request. Check out this blog entry for more details on Proseware.
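A sketch of such a thin facade over a transactional queue (the queue path and message shape are invented; this is not Proseware's actual code):

```csharp
using System.Messaging; // System.Messaging.dll (MSMQ)

// The web service layer calls Receive for each incoming request and does
// nothing else; actual processing happens asynchronously behind the queue.
class QueuedFacadeSketch
{
    public static void Receive(string requestXml)
    {
        // @".\private$\requests" is a placeholder for a transactional queue.
        using (MessageQueue queue = new MessageQueue(@".\private$\requests"))
        {
            Message message = new Message(requestXml);

            // Single-message transaction: the only way the facade can fault
            // is if the message cannot be placed onto the queue.
            queue.Send(message, MessageQueueTransactionType.Single);
        }
    }
}
```

A worker reading from the queue inside a transaction gets the retry behavior for free: if processing throws, the transaction aborts and the message re-enters the queue.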

Eventually, we will all leave the world of developing and calling synchronous services in distributed systems, but that may take a while 😉 Anyway, sorry for this digression; enter the Policy Injection Application Block.

Policy Injection Application Block

The new Policy Injection Application Block (PIAB) wisely focuses on just the separation of business logic from cross-cutting concerns. Windows Communication Foundation is the way to go for distributing your system, i.e., for building connected systems.

The PIAB shares the idea of a pipeline of pre- and post-handlers processing messages. It uses the MarshalByRefObject and TransparentProxy infrastructure that was originally designed for .NET remoting to transparently insert policies when they are enabled. The client just thinks it is calling the business logic object directly. Take a look at the pictures in Tom's blog entry to get a better idea of how this works, and read Edward Jezierski's blog post.
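The underlying trick can be illustrated with a bare-bones RealProxy. This is my own minimal sketch of the remoting interception mechanism, not PIAB code:

```csharp
using System;
using System.Runtime.Remoting.Messaging;
using System.Runtime.Remoting.Proxies;

// The target must derive from MarshalByRefObject to be proxyable this way.
class BusinessObject : MarshalByRefObject
{
    public string DoWork() { return "done"; }
}

// A minimal interception proxy in the spirit of PIAB's policy injection.
class PolicyProxy : RealProxy
{
    private readonly MarshalByRefObject target;

    public PolicyProxy(MarshalByRefObject target) : base(target.GetType())
    {
        this.target = target;
    }

    public override IMessage Invoke(IMessage msg)
    {
        IMethodCallMessage call = (IMethodCallMessage)msg;

        Console.WriteLine("before " + call.MethodName);   // pre-handler / policy
        object result = call.MethodBase.Invoke(target, call.Args);
        Console.WriteLine("after " + call.MethodName);    // post-handler / policy

        return new ReturnMessage(result, null, 0, call.LogicalCallContext, call);
    }
}
```

Usage would be something like `var obj = (BusinessObject)new PolicyProxy(new BusinessObject()).GetTransparentProxy();` after which every call on obj passes through Invoke. This is also where the performance concern comes from: every intercepted call pays the cost of message creation and reflection.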

You might also be interested in the comments posted to Tom's announcement. Several people are concerned about the performance impact that inserting policies will have on your code. This is caused by the technical implementation Microsoft has chosen for inserting policies (or should they be called aspects? ;). As I haven't looked at the new block in detail, I don't have a firm opinion on this matter yet.

The transparent aspect of the policy injection does conflict with the wisdom put into WCF. WCF follows one of the important tenets of service orientation: "Make boundaries explicit". Don't fool the client into thinking it is just performing a local method call, because the performance and reliability characteristics are entirely different. WCF achieves this explicitness by using DataContracts and ServiceContracts. It does not expose everything by default; you have to opt in. It also makes you more aware that you should not use chatty interfaces across service boundaries.

One of the comments to Tom's blog post states that the overhead of just having a PIAB policy injection could mean that the mere act of calling a method is 50 times slower than a direct method call. If this is the case, you should be well aware and design your objects accordingly: don't implement chatty interfaces!

The future will tell whether the PIAB will be more successful than EDRA at enabling the separation of business logic from cross-cutting concerns in .NET enterprise applications.