Microsoft used the keynote at the SOA & Business Process Conference in Redmond to present its vision of the future of Service Oriented Architecture on the Microsoft platform. That vision, and the wave of technology that will come with it, is codenamed "Oslo".

MVP Charles Young has a solid write-up in a blog post called "Microsoft 'Oslo' - the vNext SOA platform". No need to repeat all that here.

Long time Microsoft watcher Mary Jo Foley is very critical in her post called "Microsoft talks SOA futures but not dates". Mary Jo ends with "Microsoft has been struggling to prove to the market that it has a real SOA strategy. While the Redmondians are talking the right talk, the company is still a ways away from walking the SOA walk. Will customers wait or run off with other SOA vendors before Microsoft rolls out more than just a piecemeal SOA strategy?"

"Oslo" is obviously a Grand Vision. It will take a couple of years before this next wave of Microsoft technologies ships. I thought that after the Longhorn reset/WinFX debacle and the "Whidbey" delays, Microsoft would not attempt to align so many technologies again. But it is! "Oslo" comprises at least:

  • BizTalk Server "6"
  • Visual Studio "10"
  • .NET Framework "4"
  • System Center "5"
  • BizTalk Services "1"

Some of the stuff presented reminded me of the grand WinFX vision, especially WinFS, that Microsoft presented at PDC03. We all know that WinFS never RTM-ed, despite the enormous effort (many, many man-years) that Microsoft put into it. Especially the term "Universal Editor" for the "Oslo" integrated modeling tool gave me the creeps. It sounds too much like: one tool to rule them all. One tool that spans the entire application development lifecycle, from inception to deployment.

Here are some screenshots from the new "Universal Editor" modeling tool that was demoed during the keynote:

Microsoft Oslo Universal Editor

Microsoft Oslo Server List

Microsoft Oslo Application Verifier

Scott Guthrie made a major announcement on his blog yesterday: Microsoft will be releasing the source code for most .NET Framework libraries with the release of Visual Studio 2008. There will even be integrated support in Visual Studio 2008 for debugging into framework classes, with on-demand dynamic downloading of source files and debug symbols.

This is great news for .NET developers and, in my opinion, a major step forward for Microsoft. In and of itself it is enough reason to warrant an upgrade to Visual Studio 2008. In fact, I can think of no reason to keep using Visual Studio 2005 after the release of VS2008.

The source will be released under the Microsoft Reference License, which basically means you can view and debug the source code but not change or reuse it.

If you want a more liberal license you can look into Rotor, aka the Shared Source CLI. Rotor was Microsoft's first effort at open sourcing a .NET CLI implementation, but Microsoft does not guarantee that Rotor has exactly the same codebase as the real .NET Framework.

Check out the full details and screenshots of VS2008 integration on Scott's blog.

Today I encountered a problem with accessing the metadata for a WCF service that was deployed on a Windows Server 2003 machine.

The WSDL part of the metadata exchange endpoint worked just fine (url?wsdl, url?wsdl=wsdl0, etc.). These WSDL files refer to XSD files for the message types. Requesting those files (url?xsd=xsd0, url?xsd=xsd1, etc.) resulted in an empty response from the webserver. The IIS logs showed an HTTP 200 OK response with 0 bytes transferred. A very weird problem. Checking the config files did not lead anywhere.

Eventually I found a hint in a reply by James Zhang in this MSDN Forum post. The identity that is used for the application pool that hosts the WCF service must have the correct NTFS permissions on the %WINDIR%\temp folder. The identity that I used is a domain account. After setting the right NTFS permissions, the problem disappeared.

The funny thing was that this particular answer wasn't the answer to the original question in that forum post.

James Zhang does not indicate which permissions are needed, so I had to experiment a little.

First I added the account to the local Users group. This gives it special access permissions: traverse folder/execute file, create files/write data, and create folders/append data on this folder and subfolders. This is not enough. Then I realized that the domain account is already implicitly a member of this group, because the Users group contains the NT AUTHORITY\Authenticated Users group. Next, I duplicated for the domain account the extra rights that the NETWORK SERVICE account has: list folder/read data and delete permissions on this folder, subfolders and files. This was enough, but it doesn't seem very secure: now the service account can access temporary files created by other accounts.

So I experimented a bit more. I tuned the NTFS permissions for the service account on %WINDIR%\temp back to list folder/read data on this folder only. This is just enough. It allows the account to see which files are in the temp folder, but it doesn't allow it to read the data in files that are owned by other accounts.

It is very unfortunate that WCF didn't give any clue about why it couldn't generate metadata in this case. It is also unfortunate that it needs slightly more permissions than a standard user on the folder for temporary files.

Note that if you run your WCF service in an IIS application pool under the default NETWORK SERVICE account you won't run into this problem, because it has more than enough permissions.

PS: Best practices indicate that you shouldn't deploy your services with metadata enabled, and we will turn this off eventually. But of course it should work if you do want to enable it.
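For completeness, here is a minimal C# sketch of how metadata publishing is typically switched on for a self-hosted WCF service (the service and contract names below are invented for illustration); an IIS-hosted service like the one above would normally configure the same serviceMetadata behavior in web.config:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

public class MetadataHostSketch
{
    public static void Main()
    {
        Uri baseAddress = new Uri("http://localhost:8080/EchoService");
        ServiceHost host = new ServiceHost(typeof(EchoService), baseAddress);

        // The application endpoint for the service itself.
        host.AddServiceEndpoint(typeof(IEchoService), new BasicHttpBinding(), "");

        // Publish WSDL/XSD over HTTP GET (url?wsdl, url?xsd=xsd0, ...).
        ServiceMetadataBehavior metadata = new ServiceMetadataBehavior();
        metadata.HttpGetEnabled = true;
        host.Description.Behaviors.Add(metadata);

        // Optional WS-MetadataExchange endpoint.
        host.AddServiceEndpoint(typeof(IMetadataExchange),
            MetadataExchangeBindings.CreateMexHttpBinding(), "mex");

        host.Open();
        Console.WriteLine("Service is running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}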

WPF comes with great support for animation using XAML, without needing to write code in, for example, C#. With Silverlight (fka "WPF/E") you can also do animations from XAML.

If you want to perform custom animations in code that you can't express in XAML, you need timers. In the full-blown WPF you have several options, e.g., System.Threading.Timer, System.Timers.Timer and System.Windows.Forms.Timer.

You normally provide a callback that gets called from a background thread when the timer elapses. Properties on WPF objects can only be set from the UI thread, so you have to queue a call on the UI thread to perform the actual animation. You can do that by calling the Invoke or BeginInvoke method on the System.Windows.Threading.Dispatcher class. You can access the correct Dispatcher instance through the Dispatcher property on the UI element (*).
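To make that concrete, here is a minimal sketch of the pattern (it assumes a window whose XAML contains a Canvas with an Ellipse named "ball"; the names are invented for this example):

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Threading;

public partial class AnimationWindow : Window
{
    private System.Threading.Timer timer;
    private double left;

    public AnimationWindow()
    {
        InitializeComponent();
        // Fires roughly every 30 ms on a thread pool thread.
        this.timer = new System.Threading.Timer(OnTimerElapsed, null, 0, 30);
    }

    // Runs on a background thread: we may not touch WPF properties here.
    private void OnTimerElapsed(object state)
    {
        left += 2;
        // Queue the actual property change on the UI thread.
        this.ball.Dispatcher.BeginInvoke(DispatcherPriority.Normal,
            new Action<double>(MoveBall), left);
    }

    // Runs on the UI thread via the Dispatcher.
    private void MoveBall(double newLeft)
    {
        Canvas.SetLeft(this.ball, newLeft);
    }
}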

Another option in WPF is to use the static Rendering event on the CompositionTarget class. In that case you get called whenever WPF is ready to render a frame. The frame rate depends on CPU speed, GPU performance, graphics complexity and other factors, so it fluctuates; this means that the interval after which you get called also fluctuates. For some scenarios this is great, though.
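And a sketch of the same animation driven by the per-frame Rendering event instead of a timer (again assuming the hypothetical "ball" element):

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public partial class FrameBasedWindow : Window
{
    public FrameBasedWindow()
    {
        InitializeComponent();
        // Rendering is a static event; the handler runs on the UI thread once per frame.
        CompositionTarget.Rendering += OnRendering;
    }

    private void OnRendering(object sender, EventArgs e)
    {
        // Already on the UI thread, so WPF properties can be set directly.
        double left = Canvas.GetLeft(this.ball);
        if (double.IsNaN(left))
        {
            left = 0;
        }
        Canvas.SetLeft(this.ball, left + 2);
    }
}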

In the current Silverlight 1.1 alpha your options are more limited. The CoreCLR libraries do have a System.Threading.Timer, but there is no Dispatcher class to delegate work to the UI thread, so that timer is useless for doing custom animation. In the source of the Monotone sample by Lutz Roeder I found that there is an HtmlTimer class in Silverlight 1.1. This class is undocumented and marked obsolete. Visual Studio shows a warning after compilation:

'System.Windows.Browser.HtmlTimer' is obsolete: 'This is not a high resolution timer and is not suitable for short-interval animations. A new timer type will be available in a future release.'

Lutz shows how to use an HtmlTimer in his sample. HtmlTimer has a Tick event, and any event handler that you wire up to that event gets called from the UI thread. So that solves the problem for the time being.

When I tried to Google for more info on HtmlTimer, all I found was this blog post by Mike Taulty which mentions this class in passing.

(*) In fact any .NET class that derives from DispatcherObject has a Dispatcher property.

Microsoft has some great news today: the new Expression Web (already RTM) and Expression Blend (RTM later in Q2 2007) tools will be included in the MSDN Premium Subscription. Soma announced this on his blog. This is the result of strong feedback from the developer community.

Although I am not a graphical designer, I have come to appreciate both tools.

Expression Blend goes beyond what you can accomplish with "Cider". "Cider" is the codename for the WPF design surface that will be included in Visual Studio "Orcas". I used Blend to design the UI for my Flickr Metadata Synchr. And we have used it at LogicaCMG for a WPF showcase application.

Expression Web has a much richer HTML/CSS editor than the one in Visual Studio 2005. VS "Orcas" will come with a new HTML/CSS editor based on the same codebase. This will lessen the need for a separate tool for web developers, but "Orcas" will not RTM before the end of this year. I am still working on getting a more graphically inclined colleague to give up on Dreamweaver and switch to Expression Web 😉

[Update: The electronic version (in Dutch) is available online now.]


My article on developing interactive TV websites using ASP.NET 2.0 has been published in the Dutch .NET Magazine #16. I mentioned in January that I was writing this article, and I made the deadline.


If you are subscribed to this magazine, you received issue #16 last Friday. The electronic version (in Dutch!) is not online yet, but it should appear in the near future. All previous issues are available here.


The typesetting process caused a few minor issues that I would like to rectify here:



  • The byline of code example 1 reads "WMC.browserbestand om Windows Media Center mee te herkennen". It should read "WMC.browser bestand etc.", i.e., a file with the .browser extension that is used to recognize Windows Media Center.
  • In the "ASP.NET 2.0 browser sniffing" section, the line "Daarin kun je bestanden plaatsen met de browserextensie." should read "Daarin kun je bestanden plaatsen met de .browser extensie" (in that folder you can place files with the .browser extension).

If any other issues are discovered, I'll update this post.

As announced on Tom Hollander's blog, the Microsoft Enterprise Library 3.0 will include a new application block: the Policy Injection Application Block.


Enterprise Development Reference Architecture (EDRA)


Edward and I noticed a striking similarity with an earlier effort by Microsoft Patterns & Practices. The similarity didn't go unnoticed by other people who have been following the P&P guidance for some years either. See for example the post: Can anyone say Shadowfax?


"Shadowfax" was the codename for the Enterprise Development Reference Architecture (EDRA) released by Microsoft in 2004.


One of the important goals of EDRA was the separation of business logic from cross-cutting concerns in enterprise applications. This was implemented by providing a pipeline of pre- and post-handlers that could be inserted declaratively through configuration in XML format. Messages would pass through this pipeline before reaching the business logic, and the response would go back through the pipeline as well. EDRA handlers could inspect and even alter the messages flowing through the pipeline.
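To give a feel for the shape of such a pipeline, here is a purely illustrative C# sketch; these are not EDRA's actual types or API:

using System.Collections.Generic;

// Illustrative message and handler abstractions, not EDRA's real ones.
public class Message
{
    public string Body;
}

public interface IHandler
{
    // Pre-handler step: may inspect or alter the request before it reaches the business logic.
    void ProcessRequest(Message request);

    // Post-handler step: may inspect or alter the response on its way back.
    void ProcessResponse(Message response);
}

public delegate Message BusinessAction(Message request);

public class HandlerPipeline
{
    private readonly List<IHandler> handlers = new List<IHandler>();

    public void Add(IHandler handler)
    {
        this.handlers.Add(handler);
    }

    public Message Execute(Message request, BusinessAction businessLogic)
    {
        // The request flows through all pre-handlers...
        foreach (IHandler handler in this.handlers)
        {
            handler.ProcessRequest(request);
        }

        // ...reaches the business logic...
        Message response = businessLogic(request);

        // ...and the response flows back through the post-handlers in reverse order.
        for (int i = this.handlers.Count - 1; i >= 0; i--)
        {
            this.handlers[i].ProcessResponse(response);
        }

        return response;
    }
}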


One of the other important goals for EDRA was the ability to physically separate the service interface from the service implementation, i.e., to distribute these layers across different tiers and across security boundaries.


Our business unit at LogicaCMG followed and evaluated this effort in 2004 and even used it in some projects. We especially liked the fact that it was a ready-made framework for "policy injection", and we made some extensions for cross-cutting concerns not originally included in EDRA. Previously, we had worked on a home-made framework based on .NET remoting extensibility to configure cross-cutting concerns. Although architecturally sound, it was far from complete and still had a long way to go to fulfill our vision.


EDRA also included an early prototype of the Guidance Automation Toolkit to help framework users with building an EDRA based service. This was known as the "Microsoft IPE Wizard Framework". Check out this blog post from Daniel Cazzulino to see the EDRA wizard framework in action.


Eventually we reluctantly decided to drop EDRA. Some of the reasons for this were:



  • The EDRA wizards were hard to extend.
  • You had to mess with a big XML file to configure handlers.
  • No proper .NET 2.0 support.
  • No WS-* support.
  • No support for calling out into other services from the service implementation.
  • No big adoption in the worldwide .NET community. Not a lot of publicly available handlers.
  • No clear path towards Windows Communication Foundation. The internal messaging structure was not based on SOAP.

The last reason was probably the biggest one for Microsoft to stop the P&P effort on EDRA. The 1.1 release was announced to be the final release.


The Microsoft Enterprise Library has become far more successful than EDRA in terms of worldwide adoption. There are several EntLib extensions freely available. I've contributed my RollingFileTraceListener to the greater EntLib community. According to the feedback I got, this extension has been really useful for several people and companies. Microsoft has recognized the lack of this functionality in the Enterprise Library, and EntLib 3.0 will include a similar rolling file trace listener out-of-the-box.


EDRA is still being used by some companies. One of the largest implementations is the Commonwealth Bank of Australia CommSee Solution that is based on EDRA. Microsoft is still using this as a reference case. For instance, there was a presentation on CommSee at LEAP2007 in Redmond.


I disliked the fact that EDRA pushed you in the direction of distributing the service interface and service implementation across different tiers. Both layers had to be realized in managed code using EDRA, so the service interface could also just call the service implementation in process. Remember the first law of distributed computing: "Don't distribute!" (unless you have to).


Of course, it was great that you could distribute service interface and service implementation. But not all applications need this.


So in my opinion it is better to have different frameworks that cleanly support these orthogonal concepts:



  • Separating business logic from cross-cutting concerns.
  • Separating the service interface from the service implementation across physical tiers.

Proseware


By the way, a sample application that was commissioned by Microsoft to provide guidance on how to build distributed systems never saw the public light of day: the Proseware application, designed and built by newtelligence's Clemens Vasters. I never got a clear answer from Richard Turner, the responsible program manager at Microsoft, as to why it would not be released. But I think it was because of internal Microsoft politics: Proseware was too close to the release of WCF and might be perceived as conflicting guidance (too close turned out to be two years!).


Proseware included a great idea to improve the reliability and scalability of web services: using one-way messaging for both requests and responses. That way you can use queuing inside your service to handle heavy loads and to automatically retry failed attempts at processing messages (for example when a database is temporarily unavailable) without bothering the service clients with having to retry. To achieve this, the web service interface would be a very thin facade around an MSMQ transactional message queue. The web service would only return a fault when the message could not be placed onto the queue (highly unlikely). The message would re-enter the queue if the service implementation failed at processing it, so it could be processed again. A response from the service implementation would be sent as a one-way message to the recipient specified in the WS-Addressing ReplyTo header of the original request. Note that this recipient does not have to be the original caller! It could well be a different service, and the response message is really a new request. Check out this blog entry for more details on Proseware.
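A minimal sketch of that facade idea, using classic ASMX and System.Messaging (the service name, message format and queue path are invented; this is not Proseware's actual code):

using System.Messaging;
using System.Web.Services;

public class OrderIntakeService : WebService
{
    // The web method only enqueues the request; the real processing happens
    // asynchronously behind the transactional queue, with automatic retries.
    [WebMethod]
    public void Submit(string orderXml)
    {
        using (MessageQueue queue = new MessageQueue(@".\private$\orders"))
        {
            // Send as a single MSMQ internal transaction; the only fault the caller
            // can get back is a failure to enqueue, which is highly unlikely.
            queue.Send(orderXml, MessageQueueTransactionType.Single);
        }
    }
}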


Eventually we will all leave the world of developing and calling synchronous services in distributed systems, but that may take a while 😉 Anyway, sorry for the digression; enter the Policy Injection Application Block.


Policy Injection Application Block


The new Policy Injection Application Block (PIAB) wisely focuses on just the separation of business logic from cross-cutting concerns. Windows Communication Foundation is the way to go for distributing your system, i.e., for building connected systems.


The PIAB shares the idea of a pipeline of pre- and post-handlers processing messages. It uses the MarshalByRefObject and TransparentProxy infrastructure that was originally designed for .NET remoting to transparently insert policies when they are enabled; the client just thinks it is calling the business logic object directly. Take a look at the pictures in Tom's blog entry to get a better idea of how this works, and read Edward Jezierski's blog post.
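Based on what has been announced, client code would presumably get its business objects from a factory instead of new-ing them up directly. A rough sketch of what usage might look like (the type names are invented and the API details may well differ in the final release):

using System;
using Microsoft.Practices.EnterpriseLibrary.PolicyInjection;

// Deriving from MarshalByRefObject lets the block hand out a TransparentProxy
// that runs the configured handlers around every method call.
public class OrderProcessor : MarshalByRefObject
{
    public void Process(int orderId)
    {
        // Pure business logic; logging, validation, caching, etc. come from policies.
    }
}

public class Client
{
    public static void Run()
    {
        // Ask the block for a proxied instance instead of calling "new OrderProcessor()".
        OrderProcessor processor = PolicyInjection.Create<OrderProcessor>();
        processor.Process(42);
    }
}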


You might also be interested in the comments posted to Tom's announcement. Several people are concerned about the performance impact that inserting policies will have on your code. This is caused by the technical implementation that Microsoft has chosen for inserting policies (or should they be called aspects? ;)). As I haven't looked at the new block in detail, I don't have a firm opinion on this matter yet.


The transparent aspect of the policy injection does conflict with the wisdom put into WCF. WCF follows one of the important tenets of service orientation: "Make boundaries explicit". Don't fool the client into thinking it is just performing a local method call, because the performance and reliability characteristics are entirely different. WCF achieves this explicitness by using DataContracts and ServiceContracts: it does not expose everything by default, you have to opt in. It also makes you more aware that you should not use chatty interfaces across service boundaries.


One of the comments to Tom's blog post states that the overhead of PIAB policy injection alone could make the mere act of calling a method 50 times slower than a direct method call. If this is the case, you should be well aware of it and design your objects accordingly: don't implement chatty interfaces!
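As a hypothetical illustration of the difference this makes for your object design:

// Chatty: every property-style call is a separate intercepted (and thus slower) call.
public interface ICustomerChatty
{
    void SetName(string name);
    void SetAddress(string address);
    void SetPhone(string phone);
}

// Chunky: one intercepted call carries all the data at once.
public class CustomerData
{
    public string Name;
    public string Address;
    public string Phone;
}

public interface ICustomerChunky
{
    void Update(CustomerData data);
}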


The future will tell whether the PIAB will be more successful than EDRA at enabling the separation of business logic from cross-cutting concerns in .NET enterprise applications.

The December 2006 CTP of the Web Service Software Factory (WSSF) from Microsoft P&P ships with the 2.0 version of the Microsoft Enterprise Library. The Data Access factory inside the Service Factory generates code which uses the Data Access Application Block from the Enterprise Library.

The interface of the Data Access block hasn't changed significantly in the January 2007 CTP of version 3.0 of the Enterprise Library, so I wondered if I could use this version of EntLib together with the Service Factory.

It turns out that this is really easy. The Service Factory uses a registry key to locate the EntLib binaries. This key is HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\patterns and practices\Service Factory\EntlibBinaryPath.

It has a default value of C:\Program Files\Microsoft Service Factory\Enterprise Library Binaries. After you have installed EntLib 3.0, change the value to the location of the new binaries. For the January 2007 CTP the default location is C:\Program Files\Microsoft Enterprise Library 3.0 - January 2007 CTP\Bin.
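If you prefer to script this change rather than edit it in regedit, something along these lines should work, assuming (as described above) that the path lives in the default value of the EntlibBinaryPath key:

using Microsoft.Win32;

public class SetEntlibBinaryPath
{
    public static void Main()
    {
        // Writes the (Default) value of the EntlibBinaryPath key; requires administrative rights.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\patterns and practices\Service Factory\EntlibBinaryPath",
            string.Empty,
            @"C:\Program Files\Microsoft Enterprise Library 3.0 - January 2007 CTP\Bin");
    }
}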

If you create repository classes in your DataAccess project using the "Create data repository classes from business entities" recipe, the project will reference and use the 3.0 binaries.

On a WPF app we needed the application to remember its window position and state (maximized, minimized) across restarts. A short journey on the Internet led me to an interesting article by TimK on CodeProject on how to do this.


As you may know, Windows Presentation Foundation makes heavy use of XAML. As it turns out, XAML also allows you to mix in behavior with your classes without writing code, which is a productive way of reusing behavior. Behind the scenes the mix-in class uses WPF magic such as DependencyProperties to wire things up.


The article contains a WindowSettings class that stores and restores the window position and state. I fixed one bug in TimK's WindowSettings class: without the fix, an app that was maximized on a secondary display would restore maximized on the wrong display.


You can find the C# code for our WindowSettings class attached to this post. For your reading convenience I have also copied the code below.


To use it in your WPF window defined in XAML, do something like:

<Window
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    x:Class="LogicaCMG.Client.MainWindow"
    Title="Test Window" Height="700" Width="880"
    xmlns:settings="clr-namespace:LogicaCMG.Settings"
    settings:WindowSettings.Save="True"
    >

Code of the WindowSettings class:

using System;
using System.Diagnostics;
using System.Globalization;
using System.ComponentModel;
using System.Configuration;
using System.Windows;
using System.Windows.Markup;

namespace LogicaCMG.Settings
{
    /// <summary>
    /// Persists a Window's Size, Location and WindowState to UserScopeSettings
    /// </summary>
    public class WindowSettings
    {
        #region WindowApplicationSettings Helper Class
        public class WindowApplicationSettings : ApplicationSettingsBase
        {
            private WindowSettings windowSettings;

            public WindowApplicationSettings(WindowSettings windowSettings)
                : base(windowSettings.window.PersistId.ToString())
            {
                this.windowSettings = windowSettings;
            }

            [UserScopedSetting]
            public Rect Location
            {
                get
                {
                    if (this["Location"] != null)
                    {
                        return ((Rect)this["Location"]);
                    }
                    return Rect.Empty;
                }
                set
                {
                    this["Location"] = value;
                }
            }

            [UserScopedSetting]
            public WindowState WindowState
            {
                get
                {
                    if (this["WindowState"] != null)
                    {
                        return (WindowState)this["WindowState"];
                    }
                    return WindowState.Normal;
                }
                set
                {
                    this["WindowState"] = value;
                }
            }
        }
        #endregion

        #region Constructor
        private Window window = null;

        public WindowSettings(Window window)
        {
            this.window = window;
        }
        #endregion

        #region Attached "Save" Property Implementation
        /// <summary>
        /// Register the "Save" attached property and the "OnSaveInvalidated" callback
        /// </summary>
        public static readonly DependencyProperty SaveProperty
            = DependencyProperty.RegisterAttached("Save", typeof(bool), typeof(WindowSettings),
                new FrameworkPropertyMetadata(new PropertyChangedCallback(OnSaveInvalidated)));

        public static void SetSave(DependencyObject dependencyObject, bool enabled)
        {
            dependencyObject.SetValue(SaveProperty, enabled);
        }

        /// <summary>
        /// Called when Save is changed on an object.
        /// </summary>
        private static void OnSaveInvalidated(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e)
        {
            Window window = dependencyObject as Window;
            if (window != null)
            {
                if ((bool)e.NewValue)
                {
                    WindowSettings settings = new WindowSettings(window);
                    settings.Attach();
                }
            }
        }
        #endregion

        #region Protected Methods
        /// <summary>
        /// Load the Window Size Location and State from the settings object
        /// </summary>
        protected virtual void LoadWindowState()
        {
            this.Settings.Reload();
            if (this.Settings.Location != Rect.Empty)
            {
                this.window.Left = this.Settings.Location.Left;
                this.window.Top = this.Settings.Location.Top;
                this.window.Width = this.Settings.Location.Width;
                this.window.Height = this.Settings.Location.Height;
            }

            if (this.Settings.WindowState != WindowState.Maximized)
            {
                this.window.WindowState = this.Settings.WindowState;
            }
        }

        /// <summary>
        /// Save the Window Size, Location and State to the settings object
        /// </summary>
        protected virtual void SaveWindowState()
        {
            this.Settings.WindowState = this.window.WindowState;
            this.Settings.Location = this.window.RestoreBounds;
            this.Settings.Save();
        }
        #endregion

        #region Private Methods
        private void Attach()
        {
            if (this.window != null)
            {
                this.window.Closing += new CancelEventHandler(window_Closing);
                this.window.Initialized += new EventHandler(window_Initialized);
                this.window.Loaded += new RoutedEventHandler(window_Loaded);
            }
        }

        private void window_Loaded(object sender, RoutedEventArgs e)
        {
            if (this.Settings.WindowState == WindowState.Maximized)
            {
                this.window.WindowState = this.Settings.WindowState;
            }
        }

        private void window_Initialized(object sender, EventArgs e)
        {
            LoadWindowState();
        }

        private void window_Closing(object sender, CancelEventArgs e)
        {
            SaveWindowState();
        }
        #endregion

        #region Settings Property Implementation
        private WindowApplicationSettings windowApplicationSettings = null;

        protected virtual WindowApplicationSettings CreateWindowApplicationSettingsInstance()
        {
            return new WindowApplicationSettings(this);
        }

        [Browsable(false)]
        public WindowApplicationSettings Settings
        {
            get
            {
                if (windowApplicationSettings == null)
                {
                    this.windowApplicationSettings = CreateWindowApplicationSettingsInstance();
                }
                return this.windowApplicationSettings;
            }
        }
        #endregion
    }
}

Microsoft has hastily released the February 2007 CTP of the Windows Presentation Foundation/Everywhere (WPF/E) runtime. The previous CTP expired prematurely.

WPF/E is the "light" version of WPF that will run in all major browsers across several platforms (including non-Windows platforms). For instance, it also works in Mozilla Firefox. Think of it as the Microsoft competitor to Flash.

WPF/E is also good at displaying video, with a lightweight video engine that does not depend on Windows Media Player. Again, comparable to Flash Video, but using the WMV format.

I noticed that the speed of the WPF/E runtime has increased significantly compared to the December 2006 CTP.

The WPF/E samples in the Channel 9 Playground have been updated. Go check them out if you are interested in WPF/E. If you don't have WPF/E installed, your browser should prompt you on how to install the runtime.