.NET

Windows Azure

The Windows Azure Platform has a rich diagnostics infrastructure to enable logging and performance monitoring in the cloud.

When you run a cloud app locally in the DevFabric you can view diagnostics trace output in the console window that is part of the DevFabric UI. For example, say I have the line

System.Diagnostics.Trace.TraceInformation("Old Trace called at {0}.", DateTime.Now);

That line gives a result like this in the DevFabric UI:

Screenshot of DevFabric UI

If you can’t see the DevFabric UI, you can enable it here after starting your cloud app from Visual Studio:

Screenshot of enabling the DevFabric UI

Using the DevFabric UI is the most basic way of viewing Trace output. Looking at each individual console window doesn’t scale well if you have many instances, and furthermore, the console window of an instance is not available in the cloud. For this, there is a special TraceListener-derived class: Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener. It sends the trace output to a Windows Azure diagnostics monitor process that can store the messages in Windows Azure Storage. Here is a look at a trace message using the Azure Storage Explorer:

Screenshot of Azure Storage Explorer

This trace listener is enabled through web.config:

  <system.diagnostics>
    <sources>
      <source name="Diag">
        <listeners>
          <add name="AzureDiagnostics" />
        </listeners>
      </source>
    </sources>
    <sharedListeners>
      <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
          name="AzureDiagnostics" />
    </sharedListeners>
    <trace>
      <listeners>
        <add name="AzureDiagnostics" />
      </listeners>
    </trace>
  </system.diagnostics>
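
The listener only hands messages over to the diagnostics monitor; for them to actually end up in Windows Azure Storage, the monitor has to be started with a scheduled transfer. Here is a minimal sketch of that, assuming the SDK 1.x diagnostics API and the conventional DiagnosticsConnectionString setting, placed in the web role’s OnStart:

public override bool OnStart()
{
    // Ship buffered trace messages to Windows Azure Storage every minute.
    DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
    config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
    DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

    return base.OnStart();
}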

You can also see a message “TraceSource called at …”. This message was output using a TraceSource instance:

private static readonly TraceSource ts = new System.Diagnostics.TraceSource("Diag", SourceLevels.All);

protected void TraceMeButton_Click(object sender, EventArgs e)
{
    ts.TraceEvent(TraceEventType.Information, 2, "TraceSource called at {0}.", DateTime.Now);
    System.Diagnostics.Trace.TraceInformation("Old Trace called at {0}.", DateTime.Now);
}

However, note that a similar message “TraceSource called at …” didn’t show up in the DevFabric UI. You might wonder what is going on. And you might also wonder why I want to use a TraceSource instead of Trace in the first place. Because this MSDN article states:

One of the new features in the .NET Framework version 2.0 is an enhanced tracing system. The basic premise is unchanged: tracing messages are sent through switches to listeners, which report the data to an associated output medium. A primary difference for version 2.0 is that traces can be initiated through instances of the TraceSource class. TraceSource is intended to function as an enhanced tracing system and can be used in place of the static methods of the older Trace and Debug tracing classes. The familiar Trace and Debug classes still exist, but the recommended practice is to use the TraceSource class for tracing.

Also check out this blog post.

The reason you don’t see the message for the TraceSource in the DevFabric UI is that the DevFabric magically adds a special TraceListener for the “old fashioned” Trace class, but not for your TraceSource instance. I put together a cloud app solution (Visual Studio 2010) that shows this through a simple web role. This web role has the configuration you see above in its web.config file. If you run this simple web role in the DevFabric you’ll see:

Screenshot of sample web role app

Note that Trace has a Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime.DevelopmentFabricTraceListener instance registered, while the TraceSource does not. To remedy this, I’ve created a small class that adds a DevFabricTraceListener to a TraceSource if one is registered for Trace:

using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public static class TraceSourceFixer
{
    private const string DevFabricTraceListenerFullName = "Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime.DevelopmentFabricTraceListener";

    public static void AddDevFabricTraceListener(TraceSource traceSource)
    {
        // Nothing to do if the TraceSource already has a DevFabric listener.
        if (GetDevFabricTraceListeners(traceSource.Listeners).Any())
            return;

        // Reuse the listener that the DevFabric registered for the static Trace class.
        var devFabricTraceListener = GetDevFabricTraceListeners(Trace.Listeners).FirstOrDefault();
        if (devFabricTraceListener != null)
        {
            traceSource.Listeners.Add(devFabricTraceListener);
        }
    }

    private static IEnumerable<TraceListener> GetDevFabricTraceListeners(TraceListenerCollection listeners)
    {
        // TraceListenerCollection predates generics, so cast before filtering.
        return listeners.Cast<TraceListener>().Where(IsDevFabricTraceListener);
    }

    private static bool IsDevFabricTraceListener(TraceListener listener)
    {
        // Compare by full type name: the listener's type lives in a DevFabric
        // assembly that this project does not reference.
        return listener.GetType().FullName == DevFabricTraceListenerFullName;
    }
}
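
Wiring it up is a one-liner. Here is a sketch of the button’s click handler (the handler name is assumed), passing in the TraceSource instance from the earlier snippet:

protected void RegisterDevFabricListenerButton_Click(object sender, EventArgs e)
{
    // ts is the TraceSource("Diag") instance declared earlier.
    TraceSourceFixer.AddDevFabricTraceListener(ts);
}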

This helper gets called when you press the Register DevFabric Listener button, as sketched above. If you click the Trace Me button after that, you’ll see two trace messages show up in the DevFabric UI:

Screenshot of DevFabric UI detail

You can download my solution DiagnosticsService.zip to try it yourself.


The Microsoft Enterprise Library has always been one of the most popular things to come out of the patterns & practices team. Yesterday p&p reached a major milestone by releasing version 5.0 of EntLib.

The improvements are too numerous to sum up here, but let me mention one: this release has full .NET 3.5 SP1 and .NET 4 compatibility and works great from both Visual Studio 2008 SP1 and Visual Studio 2010 RTM.

Full details can be found in Grigori Melnik’s blog post on this release. Or you can go straight to the download page or the documentation.

Long before I joined the company, Brad Abrams was one of the first people who put a human face on Microsoft for me.

I felt sad when I read his blog post announcing that Brad is leaving Microsoft. But at the same time I feel happy for all that he has accomplished for the company. Check his blog post for everything he has been involved in. Not on the list, but no less important in my mind, is the book Framework Design Guidelines that he co-authored.

Brad, I wish you all the best in your next endeavors.

.NET Framework 4 in Windows Azure

As you will probably know, Visual Studio 2010 and .NET Framework 4 will RTM on April 12, 2010 and will be available for download on MSDN Subscriptions Downloads the same day.

The Windows Azure team is committed to making .NET Framework 4 available in Windows Azure within 90 days of the RTM date.

A lesser known fact is that the latest available Windows Azure build already has a .NET 4 version installed, namely the RC bits. Although this version cannot yet be used to run applications (because .NET 4 is not yet exposed in the Windows Azure dev tools), you can use this build to test whether the presence of .NET 4 has an impact on existing .NET 3.5 apps running on Windows Azure.

Read the official announcement here.

Yesterday I worked on a new version of my FlickrMetadataSynchr tool and published the 1.3.0.0 version on CodePlex. I wasn’t really planning on creating a new version, but I was annoyed by the old version in a new usage scenario. When you have an itch you have to scratch it! And it is always good to catch up on some programming if recent assignments at work don’t include any coding. So what caused this itch?

FlickrMetadataSynchr-v1.3.0.0

About two weeks ago I got back from my holiday in China with about 1,500 pictures on my 16 GB memory card. I always make a first selection immediately after taking a picture, so initially there were lots more. After making the selection and stitching panoramic photos, I managed to get this down to about 1,200 pictures. Still a lot of pictures. But storage is cheap and tagging makes search easy, so why throw away any more? One of the perks of a Pro account on Flickr is unlimited storage, so I uploaded 1,173 pictures (5.54 GB). This took over 12 hours because Flickr has limited uploading bandwidth.

Adding metadata doesn’t stop at tagging pictures. You can add a title, description and geolocation to a picture. Sometimes this is easier to do on your local pictures, and sometimes I prefer to do it on Flickr. The FlickrMetadataSynchr tool that I wrote is a solution to keeping this metadata in sync. You should always try to stay in control of your data, so I keep backups of my e-mail stored in the “cloud” and I store all metadata in the original picture files on my hard drive. Of course I backup those files too. Even offsite by storing an external hard drive outside my house.

Back to the problem. Syncing the metadata for 1,173 pictures took an annoyingly long time. The Flickr API has some batch operations, but for my tool I have to fetch and update metadata one picture at a time. So each fetch and each update costs one HTTP call. Each operation is not unreasonably slow by itself, but add network latency to the mix and it all adds up to poor performance if you do it sequentially.

Imperative programming languages like C# promote a sequential way of doing things. It is really hard to exploit multiple processor cores by splitting up work so that it can run in parallel. You run into things like guarding shared memory against concurrent access, coordinating results and exceptions, making operations cancellable, etc. Even with a single processor core, my app would benefit from parallelism because the processor spends most of its time waiting on the result of an HTTP call. That time can be put to use issuing additional calls or processing the results of other calls. Microsoft has realized that this is hard work for a programmer, and great new additions are coming in .NET Framework 4.0 and Visual Studio 2010: things like the Task Parallel Library and better debugging support for parallel applications.

However, these improvements are still in the beta stage and not yet usable for production software like my tool. I am not the only user of my application and “xcopy deployability” remains a very important goal for me. For example, the tool does not use .NET 3.5 features and only depends on .NET 3.0, because Windows Vista comes with .NET 3.0 out of the box while .NET 3.5 requires an additional hefty install. I might make the transition to .NET 3.5 SP1 soon, because it is now pushed out to all users of .NET 2.0 and higher through Windows Update.

So I added parallelism the old-fashioned way: by manually spinning up threads, locking shared data structures appropriately, propagating exception information through callbacks, making asynchronous processes cancellable, waiting on all worker threads to finish using WaitHandles, etc. I don’t use the standard .NET thread pool for queuing work because it is tuned for CPU-bound operations, and I want fine-grained control over the number of HTTP connections that I open to Flickr. A reasonable number is a maximum of 10 concurrent connections. This gives me almost ten times the original speed for the Flickr fetch and update steps in the sync process. Going any higher puts me at risk of being seen as launching a denial-of-service attack against the Flickr web services.
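
In outline, the approach looks like this. A simplified sketch (names like photosToFetch and FetchMetadata are assumed, not the actual tool code): a fixed set of ten hand-rolled worker threads drains a shared queue, and WaitHandles signal when all of them are done.

const int MaxConcurrentConnections = 10;
Queue<Photo> workQueue = new Queue<Photo>(photosToFetch);
object queueLock = new object();
ManualResetEvent[] doneEvents = new ManualResetEvent[MaxConcurrentConnections];

for (int i = 0; i < MaxConcurrentConnections; i++)
{
    ManualResetEvent done = new ManualResetEvent(false);
    doneEvents[i] = done;
    Thread worker = new Thread(delegate()
    {
        while (true)
        {
            Photo photo;
            lock (queueLock) // the queue is shared by all workers
            {
                if (workQueue.Count == 0)
                    break;
                photo = workQueue.Dequeue();
            }
            FetchMetadata(photo); // one HTTP call to Flickr per picture
        }
        done.Set(); // this worker is finished
    });
    worker.IsBackground = true;
    worker.Start();
}

WaitHandle.WaitAll(doneEvents); // block until every worker has drained the queue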

If you want to take a look at my source code, you can find it at CodePlex. The app was already nicely factored, so I didn’t have to rearchitect it to add parallelism. The sync process was already done on a background thread (albeit sequentially) in a helper class, because you should never block the UI thread in WinForms or WPF applications. The app already contained quite a bit of thread synchronization stuff. The new machinery is contained in the abstract generic class AsyncFlickrWorker<TIn, TOut>. Its signature is:

/// <summary>
/// Abstract class that implements the machinery to asynchronously process metadata on Flickr. This can either be fetching metadata
/// or updating metadata.
/// </summary>
/// <typeparam name="TIn">The type of metadata that is processed.</typeparam>
/// <typeparam name="TOut">The type of metadata that is the result of the processing.</typeparam>
internal abstract class AsyncFlickrWorker<TIn, TOut>

It has the following public method:

/// <summary>
/// Starts the async process. This method should not be called when the asynchronous process is already in progress.
/// </summary>
/// <param name="metadataList">The list with <typeparamref name="TIn"/> instances of metadata that should
/// be processed on Flickr.</param>
/// <param name="resultCallback">A callback that receives the result. Is not allowed to be null.</param>
/// <typeparam name="TIn">The type of metadata that is processed.</typeparam>
/// <typeparam name="TOut">The type of metadata that is the result of the processing.</typeparam>
/// <returns>Returns a <see cref="WaitHandle"/> that can be used for synchronization purposes. It will be signaled when
/// the async process is done.</returns>
public WaitHandle BeginWork(IList<TIn> metadataList, EventHandler<AsyncFlickrWorkerEventArgs<TOut>> resultCallback)

It uses the generic class AsyncFlickrWorkerEventArgs<TOut> to report the results:

/// <summary>
/// Class with event arguments for reporting the results of asynchronously processing metadata on Flickr.
/// </summary>
/// <typeparam name="TOut">The "out" metadata type that is the result of the asynchronous processing.</typeparam>
public class AsyncFlickrWorkerEventArgs<TOut> : EventArgs

The subclass AsyncPhotoInfoFetcher is one of its implementations.

/// <summary>
/// Class that asynchronously fetches photo information from Flickr.
/// </summary>
internal sealed class AsyncPhotoInfoFetcher : AsyncFlickrWorker<Photo, PhotoInfo>

These async workers are used by the FlickrHelper class (BTW: this class has grown a bit too big, so it is a likely candidate for future refactoring). Its method that calls async workers is generic and has this signature:

/// <summary>
/// Processes a list of photos with multiple async workers and returns the result.
/// </summary>
/// <param name="metadataInList">The list with metadata of photos that should be processed.</param>
/// <param name="progressCallback">A callback to receive progress information.</param>
/// <param name="workerFactoryMethod">A factory method that can be used to create a worker instance.</param>
/// <typeparam name="TIn">The "in" metadata type for the worker.</typeparam>
/// <typeparam name="TOut">The "out" metadata type for the worker.</typeparam>
/// <returns>A list with the metadata result of processing <paramref name="metadataInList"/>.</returns>
private IList<TOut> ProcessMetadataWithMultipleWorkers<TIn, TOut>(
    IList<TIn> metadataInList,
    EventHandler<PictureProgressEventArgs> progressCallback,
    CreateAsyncFlickrWorker<TIn, TOut> workerFactoryMethod)

This method contains an anonymous delegate that acts as the result callback for the async workers. Generics and anonymous delegates make multithreaded life bearable in C# 2.0. An anonymous delegate can use the local variables and fields of its containing method and class, which makes it easy to access and update those from the callback to store the result of a worker thread. Of course, make sure you lock access to shared data appropriately, because multiple threads might call back simultaneously to report their results.
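
Schematically, the pattern looks like this (just a sketch; the Results property on the event args and the exact types are assumed):

List<PhotoInfo> results = new List<PhotoInfo>();
object resultsLock = new object();

EventHandler<AsyncFlickrWorkerEventArgs<PhotoInfo>> resultCallback =
    delegate(object sender, AsyncFlickrWorkerEventArgs<PhotoInfo> e)
    {
        // The anonymous delegate captures the local results list and lock object.
        lock (resultsLock) // several workers may call back simultaneously
        {
            results.AddRange(e.Results);
        }
    };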

And somewhere in 2010 when .NET 4.0 is released, I could potentially remove all this manual threading stuff and just exploit Parallel.For 😉
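
For comparison, here is a sketch of what that .NET 4.0 version of the fetch loop could look like (same assumed names as above), with the Task Parallel Library scheduling the work but the number of concurrent Flickr calls still capped:

ParallelOptions options = new ParallelOptions { MaxDegreeOfParallelism = 10 };
Parallel.ForEach(photosToFetch, options, photo =>
{
    FetchMetadata(photo); // still one HTTP call per picture, now scheduled by the TPL
});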

I ran into an issue with a Windows Azure project created from scratch in Visual Studio 2010 Beta 1 with the May CTP of the Windows Azure Tools.

Azure_Logo

When trying to create tables in the local development storage, I got the error “Invalid image format”. This issue occurred both from VS (using the Create Test Storage Tables option) and when running DevTableGen.exe manually from a command prompt. It didn’t occur when doing the same in VS2008.

DevTableGen.exe is a tool from the Windows Azure SDK. This tool loads an assembly, reflects over it, and then tries to create tables in the local development storage.

After some head scratching I figured out the cause: in Dev10 the default target platform for executables was changed from AnyCPU to x86. In Beta 1 a bug snuck in, so that x86 is also the default for class libraries (DLLs). This will be changed back to AnyCPU in Beta 2. This blog post from Rick Byers has a lot of detail on why the default was changed.

DevTableGen.exe is an AnyCPU executable, so it runs as a 64-bit process on a 64-bit version of Windows. As such, it cannot load assemblies marked as 32-bit only. The solution was to change my assembly to AnyCPU.
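
If you want to check which side of such a mismatch you are on, here is a quick diagnostic sketch (the assembly path is hypothetical):

using System;
using System.Reflection;

class PlatformCheck
{
    static void Main()
    {
        // 8-byte pointers mean the current process is running as 64-bit.
        Console.WriteLine("64-bit process: {0}", IntPtr.Size == 8);

        // MSIL = AnyCPU, X86 = 32-bit only, Amd64 = 64-bit only.
        AssemblyName name = AssemblyName.GetAssemblyName(@"C:\path\to\YourCloudTables.dll");
        Console.WriteLine("Assembly platform: {0}", name.ProcessorArchitecture);
    }
}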

This issue would have been prevented if DevTableGen.exe had been marked as 32-bit only. That way it would always run in a 32-bit process and could load both x86 and AnyCPU assemblies. According to the blog post linked above, it is now considered best practice to explicitly mark an executable as x86 or x64; for most applications x86 is the best option.

I’ve suggested this change to the product team.

Update: I've been informed that DevTableGen.exe will not be changed, because this issue is temporary and will go away when VS2010 Beta 2 is released. Also, the web role and worker role processes in Windows Azure are 64-bit, so trying to load x86 assemblies there will fail anyway. If you use Beta 1, make sure you explicitly change the platform of any assemblies you create for the cloud to AnyCPU.


Ever since the internal unveiling of Windows Azure as project “Red Dog” at our internal TechReady conference in July 2008, I’ve been very interested in this Software+Services platform.

Technical strategist Steve Marx from the Windows Azure team recently released a cool sample app called The CIA Pickup. He put a demonstration video and a nice architecture drawing of this app up on his blog.

TheCiaPickup_Logo

Using the app you can pretend to be a CIA agent and hand out a phone number and your agent id to someone. When this person calls this number, they are greeted by an automated message that says they are connected to the CIA automated phone system and are requested to enter your agent id. After they have entered your id, you will receive their caller id via e-mail.

Given that 90% of the IT population seems to be male, of which probably 95% is straight, I can see why the app is slightly biased toward helping men pick up phone numbers from women. But if you don’t like this, you can always pick up the source and change the text. Which I did. Not to change the text, but to make some improvements so that I could run the app in my own Windows Azure playground in the cloud.

For example, the SMTP port of my e-mail service is not the standard port 25. I made this port configurable, and in the process I found out that the app has to be deployed with full trust to be able to use a non-standard port. I added logging to troubleshoot issues like this and made some security improvements.

I contributed these improvements back to Steve and he has gracefully credited me in his second blog post.

The CIA Pickup app is a great example of the power of combining different off-the-shelf services (an SMTP provider, the telephony service Twilio, Azure Table and Queue storage, Windows Live ID authentication) with custom code in C# and ASP.NET MVC, running in the cloud. You can literally have this up and running within a couple of hours, including the creation of all necessary accounts.

So go try it out! You don’t need to deploy the app yourself to do this. You can use Steve’s deployment for this. Although it uses the US phone number +1 (866) 961-1673, it works when dialing from the Netherlands. If you want to get in touch, use my agent id 86674 😉

A couple of months ago I received a license for NDepend to evaluate its usefulness. I was already convinced that NDepend is a very useful tool. But up to now, I hadn’t put NDepend to good use in a way that I could blog about it.

Today I decided to bite the bullet and put my own pet project FlickrMetadataSynchr up for analysis. Its source code is available on CodePlex.

NDepend analyses managed code for several quality aspects, like cyclomatic complexity, coupling and unused code. In a way it resembles FxCop, but it does a lot more in terms of reporting. NDepend is also a lot more flexible in letting you query your code base. For this it uses its own SQL variant called Code Query Language (CQL). For example, you could enter this query into the tool:

SELECT METHODS WHERE NbLinesOfCode > 30 AND IsPublic

and NDepend will show you all public methods whose number of lines of code exceeds 30.

Just by using the standard settings, NDepend gives you truckloads of information that points to areas with potential code smells. The report has inline comments that explain why each query selects what it does and that point out possible false positives for which it is okay to ignore the warning.

You can find my NDepend results here if you want to see what such a report looks like.

Working through those results from top to bottom, I started refactoring my code to improve its quality. For example, splitting up methods to:

  • Reduce cyclomatic complexity
  • Reduce the number of IL instructions in a method
  • Reduce the number of local variables in a method
  • Increase the comment to code ratio

This should increase maintainability of the code.

Go check out this tool if you are interested in improving the quality of your .NET code or if you are tasked with reviewing somebody else’s code.

Sometimes it’s attention to detail that excites me the most. I just noticed the beautiful rendering of PNGs with transparency in the Pictures Library view in Explorer in Windows 7 (RC build):

PNG Transparency in Pictures Library View

You can get this view by selecting Arrange by Month in the upper right-hand corner of the Pictures library view. The RC build has been pretty stable for me and I use it for “production” purposes on my work laptop. I have both Visual Studio 2008 SP1 and Visual Studio 2010 Beta 1 installed on it and this works fine.

These images you see above do not reside on my Win7 laptop, but on my Windows Home Server (WHS) box. I’ve included the \\server\photos share in my Win7 Pictures library. This pulls some 20,000 pictures into my library without any ill effect. I run Windows Search 4.0 on my WHS, so searching the library is still very snappy. This is possible because the search box in Explorer uses Remote Index Discovery, meaning my laptop doesn’t have to index those pictures by itself.

You can code against the new Library feature in Win7 using C++ as explained on the Windows 7 Blog for Developers. If you are slightly less masochistic and want to use C# or VB, I suggest you use the Windows API Code Pack for Microsoft .NET Framework. It’s essentially a bunch of wrapper classes around new unmanaged code APIs in Windows that are not yet covered by the .NET Framework itself.
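
As a small illustration, here is a sketch using the Code Pack (assuming its ShellLibrary API and a reference to the Microsoft.WindowsAPICodePack.Shell assembly) that opens the Pictures library read-only and lists the folders it aggregates:

using System;
using Microsoft.WindowsAPICodePack.Shell;

class LibraryDump
{
    static void Main()
    {
        // Load the current user's Pictures library read-only and print each
        // folder it includes (local folders as well as remote shares like mine on WHS).
        using (ShellLibrary library = ShellLibrary.Load(KnownFolders.PicturesLibrary, true))
        {
            foreach (ShellFileSystemFolder folder in library)
            {
                Console.WriteLine(folder.Path);
            }
        }
    }
}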

PS: If you are still on Windows Vista, SP2 has just been released on MSDN Subscriber Downloads.

PS2: I lied a bit. Those images you see on the left are actually beautiful 2048x2048 pixel TIFF files with transparency. You can download those so-called Blue Marble pictures.

If you watched the first keynote of PDC08, either at PDC or through the live stream, you have seen the first public unveiling of Windows Azure. In short, it’s our OS for the cloud.

Diagram of the Azure Services Platform

A CTP of the Windows Azure SDK should be available shortly. In the meantime, you can get busy with the “Oslo” SDK that has already been released. It is downloadable from the new “Oslo” Developer Center on MSDN.

Of course as a Microsoft employee I am biased about this new, but oh so familiar, platform. So don’t take my opinion, but that of Robert W. Anderson. He writes about Windows Azure:

It is the openness of this platform, the ability of developers to mix and match the different components, and to do it between the cloud and in-premises solutions that makes this such a winner.

This last point is an important one.  Microsoft is in a unique position to help enterprise IT bridge to the cloud.  While I don’t think Amazon and Google will cede that market to Microsoft, their current offerings aren’t a natural fit.

Taking this all together — not forgetting Microsoft’s leading developer productivity story — it looks like a home run to me.