
Long before I joined the company, Brad Abrams was one of the first people who put a human face on Microsoft for me.

I felt sad when I read his blog post announcing that Brad is leaving Microsoft. But at the same time I feel happy for all that he has accomplished at the company. Check his blog post for everything he has been involved in. Not on the list, but no less important in my mind, is the book Framework Design Guidelines that he co-authored.

Brad, I wish you all the best in your next endeavors.


My first blog post with the title UPC sucks is the #1 hit on Bing and the #5 hit on Google. Unfortunately it is time to add another such post.

UPC is a cable company that has a monopoly on cable TV in large parts of the Netherlands. I have two UPC mediaboxes for watching digital TV and using the UPC On Demand service. When they work, the picture quality is great. When they don't work, they cause you major headaches and an endless stream of calls to the UPC helpdesk. You have to dial a paid phone number to speak to UPC, and the waiting time is between 5 and 10 minutes in my experience.

Since February 2009, I have had two mediaboxes: one is a Philips HD-DVR and the other a Thomson SD without DVR. On at least three occasions I have had "access denied" problems on all digital TV channels. Resolving these problems took multiple calls to the helpdesk in each case. The usual chain of recommended troubleshooting is:

  1. Recheck the cabling, unless you insist that you can see the “access denied” error message on the screen, which means there is a working connection between your mediabox and the TV.
  2. Take the power cable out for 30 seconds and then reinsert it. The reboot takes at least a couple of minutes, which you pay for of course, because you don't want to hang up the phone and end up at the back of the queue when you need to call again because the problem is not resolved.
  3. Reset the box to factory settings. You lose all customizations (like favorite channels). This takes about five minutes.
  4. Press some button on the box while reinserting the power cable (variant of step 2) to force a software update. This takes over five minutes before the box is usable again.

I am sure these steps resolve 90% of the problems for most customers, because the mediabox is an unstable piece of crap. I try these steps before calling the helpdesk to save some money on the call. But for the remaining 10% of the cases it doesn’t work. And actually if the helpdesk had more intelligent scripts or more intelligent people, they could tell in advance what types of problems are not resolved by these steps. The “access denied” case is such a class of problem. Some helpdesk employees realize this, if you insist you know the problem is on their end. Others just want to send a mechanic to check your cabling for which you have to stay at home for at least half a day.

What sometimes happens is that the mediabox, for unknown reasons, loses its authorization to view all channels. The resolution is that the UPC server should push the authorization to the box again. Some employees claim they can't do this, and others claim it takes up to 24 hours to take effect, but that is rubbish. If you happen to get a decently skilled employee on the phone, they should be able to solve the problem instantly while you wait.

In May 2009 the On Demand service stopped working on both of my mediaboxes. After selecting a video and pressing OK, error code CU103 or VD103 appeared with the message that I should call the helpdesk. In most cases this symptom can be resolved by rebooting the box (step 2 above). After being online for a couple of days, the box has the tendency to lose its IP address and is not smart enough to reacquire it. You can check this by pressing the red button on channel 999. This performs a connectivity check. In my case the result was “congratulations! you have a connection with the UPC network” and the next screen showed all green statuses, IP addresses and all. After calling the helpdesk they insisted that it must be a coax cabling problem on my end because steps 2 to 4 didn’t fix the problem. I didn’t believe this, because I have good cabling from Hirschmann. Nevertheless, as I was out of options, I agreed to stay at home for a mechanic. The guy (from Centric, hired by UPC) was no On Demand expert and repeated steps 2 to 4. He measured the signal strength, which was fine, as I expected. He did have one extra trick up his sleeve:

  • After the reset to factory settings enter the wrong region code so you connect with a server in another region. Try to start On Demand. Switch back to original region. Retry.

This didn’t work. The mediaboxes otherwise seemed to work just fine, and the chance of two in one home breaking down on the same day is pretty slim, so they were not replaced. The mechanic left without being able to fix the problem. He concluded that the problem had to be at the UPC server end. I didn’t hear back from UPC for a couple of days and called them again. A new case would be opened, and I heard something about “frequency fine-tuning on the server end”. A couple of days later On Demand magically started working, but I never heard whether and what was fixed by UPC.

Fast forward to August 7th, 2009. On Demand stopped working again with the same symptoms as in May. So I called the helpdesk. They always apologize for the inconvenience and claim they will solve your problem. Their script and training includes friendliness. But it turns out that for complex On Demand problems they can only send e-mail messages to a special UPC unit that I dub the “black hole”. There is only one-way communication from the helpdesk to this unit possible. They promise that the unit will call you back, but they never do. Apparently the workflow is that the customer has to call the help desk again if the issue is not resolved after a couple of days and his/her patience runs out after not hearing anything from UPC. The help desk goes through exactly the same cycle again and just opens a new case for the “black hole” unit. Some employees have the nerve to claim that the unit will look into the issue the same evening. Others say they cannot state any reasonable time for resolution or feedback: “could be longer than a week”. There is only one case where the helpdesk called me back (not the black hole unit). The only thing the employee could tell me was that the e-mail had been sent and he would call again after the weekend to give another status update. That call never came.

Randomly in this endless sequence of calls to the helpdesk an employee will claim again that they need to send a mechanic. After telling them the whole history again, I could oftentimes convince them that the problem is very likely not on my end and I don’t want to stay at home for a day for a mechanic who can’t fix the problem. My mediaboxes have a proper return signal. UPC can measure this remotely. UPC Interactive works, so the TCP/IP connection is obviously not the problem.

Tonight I got into this whole discussion with the helpdesk again. I was ready to give in, so I asked “When can you send a mechanic?” The first available option was August 26th, so more than a week from now. And that for a problem that started on August 7th. And no compensation whatsoever if the mechanic isn’t able to fix the problem, because it isn’t at my end of the cable. UPC has a cash-back policy if they can’t fix a problem within 24 hours. But On Demand is not included in this policy, because they claim it is a “free” service. Of course it isn’t free. You can’t get it unless you pay money for a digital television pack and mediabox. On Demand is a large part of their marketing and an important motivator for me to pay extra for digital television.

Tomorrow I am going to call UPC again to file a formal complaint. This blog post serves as a public statement that I can refer to, to add some extra weight to my complaint.

If you managed to read this far, thanks for bearing with me.

If you consider becoming a UPC customer, be prepared to buy extra aspirin.

Yesterday I worked on a new version of my FlickrMetadataSynchr tool and published the 1.3.0.0 version on CodePlex. I wasn’t really planning on creating a new version, but I was annoyed by the old version in a new usage scenario. When you have an itch you have to scratch it! And it is always good to catch up on some programming if recent assignments at work don’t include any coding. So what caused this itch?

FlickrMetadataSynchr v1.3.0.0

About two weeks ago I got back from my holiday in China with about 1,500 pictures on my 16 GB memory card. I always make a first selection immediately after taking a picture, so initially there were lots more. After selection and stitching panoramic photos, I managed to get this down to about 1,200 pictures. Still a lot of pictures. But storage is cheap and tagging makes search easy, so why throw away any more? One of the perks of a Pro account on Flickr is that I have unlimited storage, so I uploaded 1,173 pictures (5.54 GB). This took over 12 hours because Flickr has limited uploading bandwidth.

Adding metadata doesn’t stop at tagging pictures. You can add a title, description and geolocation to a picture. Sometimes this is easier to do on your local pictures, and sometimes I prefer to do it on Flickr. The FlickrMetadataSynchr tool that I wrote is a solution to keeping this metadata in sync. You should always try to stay in control of your data, so I keep backups of my e-mail stored in the “cloud” and I store all metadata in the original picture files on my hard drive. Of course I backup those files too. Even offsite by storing an external hard drive outside my house.

Back to the problem. Syncing the metadata for 1,173 pictures took an annoyingly long time. The Flickr API has some batch operations, but for my tool I have to fetch metadata and update metadata for pictures one by one. So each fetch and each update uses one HTTP call. Each operation is not unreasonably slow, but when you add latency to the mix, it adds up to slow performance if you do it sequentially.

Imperative programming languages like C# promote a sequential way of doing things. It is really hard to exploit multiple processor cores by splitting up work so that it can run in parallel. You run into things like data concurrency for shared memory, coordinating results and exceptions, making operations cancellable, etc. Even with a single processor core, my app would benefit from exploiting parallelism, because the processor spends most of its time waiting on the result of the HTTP call. This time can be utilized by creating additional calls or processing the results of other calls. Microsoft has realized that this is hard work for a programmer, and great new additions are coming in .NET Framework 4.0 and Visual Studio 2010, such as the Task Parallel Library and better support for debugging parallel applications.

However, these improvements are still in the beta stage and not usable yet for production software like my tool. I am not the only user of my application and “xcopy deployability” remains a very important goal to me. For example, the tool does not use .NET 3.5 features and only depends on .NET 3.0. This is because Windows Vista comes with .NET 3.0 out of the box, while .NET 3.5 requires an additional hefty install. I might make the transition to .NET 3.5 SP1 soon, because it is now pushed out to all users of .NET 2.0 and higher through Windows Update.

So I added parallelism the old-fashioned way: by manually spinning up threads, locking shared data structures appropriately, propagating exception information through callbacks, making asynchronous processes cancellable, waiting on all worker threads to finish using WaitHandles, etc. I don’t use the standard .NET threadpool for queuing work because it is tuned for CPU-bound operations. I want to have fine-grained control over the number of HTTP connections that I open to Flickr. A reasonable number is a maximum of 10 concurrent connections. This gives me almost ten times the original speed for the Flickr fetch and update steps in the sync process. Going any higher puts me at risk of being seen as launching a denial-of-service attack against the Flickr web services.
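To give an idea of this old-fashioned approach, here is a simplified, self-contained sketch (not the actual tool code; the method names and the string "metadata" are stand-ins) of throttling hand-rolled worker threads to a fixed number of concurrent calls with a Semaphore, and waiting for completion with WaitHandles:

```csharp
// Sketch only: throttle manual worker threads to 10 concurrent "HTTP calls".
using System;
using System.Collections.Generic;
using System.Threading;

internal static class ThrottledFetchSketch
{
    private const int MaxConcurrentCalls = 10;

    public static IList<string> FetchAll(IList<string> photoIds)
    {
        Semaphore throttle = new Semaphore(MaxConcurrentCalls, MaxConcurrentCalls);
        object resultLock = new object();
        List<string> results = new List<string>();
        List<ManualResetEvent> doneHandles = new List<ManualResetEvent>();

        foreach (string photoId in photoIds)
        {
            string id = photoId; // stable copy for the anonymous delegate
            ManualResetEvent done = new ManualResetEvent(false);
            doneHandles.Add(done);

            Thread worker = new Thread(delegate()
            {
                throttle.WaitOne(); // at most 10 calls in flight
                try
                {
                    // Stand-in for the real per-picture HTTP call to Flickr.
                    string metadata = "metadata-for-" + id;
                    lock (resultLock)
                    {
                        results.Add(metadata);
                    }
                }
                finally
                {
                    throttle.Release();
                    done.Set();
                }
            });
            worker.IsBackground = true;
            worker.Start();
        }

        // Wait on each handle individually; WaitHandle.WaitAll is limited
        // to 64 handles, which would not fit 1,173 pictures.
        foreach (ManualResetEvent handle in doneHandles)
        {
            handle.WaitOne();
        }
        return results;
    }
}
```

A real implementation would also propagate exceptions from the workers and support cancellation, which this sketch leaves out for brevity.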

If you want to take a look at my source code, you can find it at CodePlex. The app was already nicely factored, so I didn’t have to rearchitect it to add parallelism. The sync process was already done on a background thread (albeit sequentially) in a helper class, because you should never block the UI thread in WinForms or WPF applications. The app already contained quite a bit of thread synchronization stuff. The new machinery is contained in the abstract generic class AsyncFlickrWorker<TIn, TOut>. Its signature is

/// <summary>
/// Abstract class that implements the machinery to asynchronously process metadata on Flickr. This can either be fetching metadata
/// or updating metadata.
/// </summary>
/// <typeparam name="TIn">The type of metadata that is processed.</typeparam>
/// <typeparam name="TOut">The type of metadata that is the result of the processing.</typeparam>
internal abstract class AsyncFlickrWorker<TIn, TOut>

It has the following public method

/// <summary>
/// Starts the async process. This method should not be called when the asynchronous process is already in progress.
/// </summary>
/// <param name="metadataList">The list with <typeparamref name="TIn"/> instances of metadata that should
/// be processed on Flickr.</param>
/// <param name="resultCallback">A callback that receives the result. Is not allowed to be null.</param>
/// <typeparam name="TIn">The type of metadata that is processed.</typeparam>
/// <typeparam name="TOut">The type of metadata that is the result of the processing.</typeparam>
/// <returns>Returns a <see cref="WaitHandle"/> that can be used for synchronization purposes. It will be signaled when
/// the async process is done.</returns>
public WaitHandle BeginWork(IList<TIn> metadataList, EventHandler<AsyncFlickrWorkerEventArgs<TOut>> resultCallback)

It uses the generic class AsyncFlickrWorkerEventArgs<TOut> to report the results:

/// <summary>
/// Class with event arguments for reporting the results of asynchronously processing metadata on Flickr.
/// </summary>
/// <typeparam name="TOut">The "out" metadata type that is the result of the asynchronous processing.</typeparam>
public class AsyncFlickrWorkerEventArgs<TOut> : EventArgs

The subclass AsyncPhotoInfoFetcher is one of its implementations.

/// <summary>
/// Class that asynchronously fetches photo information from Flickr.
/// </summary>
internal sealed class AsyncPhotoInfoFetcher: AsyncFlickrWorker<Photo, PhotoInfo>

These async workers are used by the FlickrHelper class (BTW: this class has grown a bit too big, so it is a likely candidate for future refactoring). Its method that calls async workers is generic and has this signature:

/// <summary>
/// Processes a list of photos with multiple async workers and returns the result.
/// </summary>
/// <param name="metadataInList">The list with metadata of photos that should be processed.</param>
/// <param name="progressCallback">A callback to receive progress information.</param>
/// <param name="workerFactoryMethod">A factory method that can be used to create a worker instance.</param>
/// <typeparam name="TIn">The "in" metadata type for the worker.</typeparam>
/// <typeparam name="TOut">The "out" metadata type for the worker.</typeparam>
/// <returns>A list with the metadata result of processing <paramref name="metadataInList"/>.</returns>
private IList<TOut> ProcessMetadataWithMultipleWorkers<TIn, TOut>(
    IList<TIn> metadataInList,
    EventHandler<PictureProgressEventArgs> progressCallback,
    CreateAsyncFlickrWorker<TIn, TOut> workerFactoryMethod)

This method contains an anonymous delegate that acts as the result callback for the async workers. Generics and anonymous delegates make multithreaded life bearable in C# 2.0. Anonymous delegates allow you to use local variables and fields of the containing method and class in the callback method, and thus easily access and change those to store the result of a worker thread. Of course, make sure you lock access to shared data appropriately, because multiple threads might call back simultaneously to report their results.
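The pattern of capturing locals in an anonymous delegate looks roughly like this (a minimal, hypothetical sketch, not the tool's actual callback):

```csharp
// Sketch: workers report back through an anonymous delegate that captures
// and mutates locals of the containing method, guarded by a lock.
using System;
using System.Collections.Generic;
using System.Threading;

internal static class CallbackSketch
{
    public static List<int> RunWorkers(int workerCount)
    {
        object resultLock = new object();
        List<int> results = new List<int>();   // local captured by the delegate
        int remaining = workerCount;           // also captured and mutated
        ManualResetEvent allDone = new ManualResetEvent(false);

        for (int i = 0; i < workerCount; i++)
        {
            int workItem = i; // stable copy per iteration
            Thread worker = new Thread(delegate()
            {
                int result = workItem * workItem; // stand-in for real work
                lock (resultLock) // workers may call back simultaneously
                {
                    results.Add(result);
                    remaining--;
                    if (remaining == 0)
                    {
                        allDone.Set();
                    }
                }
            });
            worker.IsBackground = true;
            worker.Start();
        }

        allDone.WaitOne();
        return results;
    }
}
```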

And somewhere in 2010 when .NET 4.0 is released, I could potentially remove all this manual threading stuff and just exploit Parallel.For 😉
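Based on what has been announced for the Task Parallel Library, the whole fetch loop could then shrink to something like this sketch (assuming the Parallel.ForEach and ParallelOptions APIs ship as in the beta):

```csharp
// Sketch: the .NET 4.0 equivalent of the manual threading above,
// with concurrency capped at 10 just like the hand-rolled version.
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

internal static class Net40Sketch
{
    public static ICollection<string> FetchAll(IEnumerable<string> photoIds)
    {
        ParallelOptions options = new ParallelOptions();
        options.MaxDegreeOfParallelism = 10; // max concurrent Flickr calls

        ConcurrentBag<string> results = new ConcurrentBag<string>();
        Parallel.ForEach(photoIds, options, delegate(string id)
        {
            results.Add("metadata-for-" + id); // stand-in for the HTTP call
        });
        return results;
    }
}
```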


Ever since the internal unveiling of Windows Azure as project “Red Dog” at our internal TechReady conference in July 2008, I’ve been very interested in this Software+Services platform.

Technical strategist Steve Marx from the Windows Azure team recently released a cool sample app called The CIA Pickup. He put up a demonstration video and a nice architecture drawing of this app up on his blog.

TheCiaPickup_Logo

Using the app you can pretend to be a CIA agent and hand out a phone number and your agent id to someone. When this person calls this number, they are greeted by an automated message that says they are connected to the CIA automated phone system and are requested to enter your agent id. After they have entered your id, you will receive their caller id via e-mail.

Seeing that 90% of the IT population seems to be male, of which probably 95% is straight, I can see why the app is slightly biased toward helping men pick up phone numbers of women. But if you don’t like this, you can always pick up the source and change the text. Which I did. Not to change the text, but to make some improvements so that I could run the app in my own Windows Azure playground in the cloud.

For example, the SMTP port of my e-mail service is not the standard port 25. I made this port configurable, and in the process I found out that the app has to be deployed with full trust in order to use a non-standard port. I added logging to troubleshoot issues like this and made some security improvements.

I contributed these improvements back to Steve and he has gracefully credited me in his second blog post.

The CIA Pickup app is a great example of the power of combining different off-the-shelf services like SMTP providers, telephony service Twilio, Azure Table and Queue Storage, Windows Live ID Authentication with custom code, C# and ASP.NET MVC, running in the cloud. You can literally have this up-and-running within a couple of hours, including the creation of all necessary accounts.

So go try it out! You don’t need to deploy the app yourself to do this. You can use Steve’s deployment for this. Although it uses the US phone number +1 (866) 961-1673, it works when dialing from the Netherlands. If you want to get in touch, use my agent id 86674 😉

Sometimes it’s attention to detail that excites me the most. I just noticed the beautiful rendering of PNGs with transparency in the Pictures Library view in Explorer in Windows 7 (RC build):

PNG Transparency in Pictures Library View

You can get this view by selecting Arrange by Month in the upper right-hand corner of the Pictures library view. The RC build has been pretty stable for me and I use it for “production” purposes on my work laptop. I have both Visual Studio 2008 SP1 and Visual Studio 2010 Beta 1 installed on it and this works fine.

These images you see above do not reside on my Win7 laptop, but on my Windows Home Server (WHS) box. I’ve included the \\server\photos share in my Win7 Pictures Library. This pulls some 20,000 pictures into my library without any ill effect. I run Windows Search 4.0 on my WHS, so searching the library is still very snappy. This is possible because the search box in Explorer uses Remote Index Discovery and my laptop doesn’t have to index those pictures by itself.

You can code against the new Library feature in Win7 using C++ as explained on the Windows 7 Blog for Developers. If you are slightly less masochistic and want to use C# or VB, I suggest you use the Windows API Code Pack for Microsoft .NET Framework. It’s essentially a bunch of wrapper classes around new unmanaged code APIs in Windows that are not yet covered by the .NET Framework itself.

PS: If you are still on Windows Vista, SP2 has just been released on MSDN Subscriber Downloads.

PS2: I lied a bit. Those images you see on the left are actually beautiful 2048x2048 pixel TIFF files with transparency. You can download those so-called Blue Marble pictures.

Since last week I have had a new digital SLR camera: a 15-megapixel Canon EOS 50D. With it I took my first photo set with Photosynth in mind. I am very happy with the result: 98 photos that are 100% synthy (meaning that they all connect into one seamless 3D scene).

Here is a screenshot of this scene of the main hall of the train station in Leeuwarden:

Photosynth (Build 10683) - Train Station Leeuwarden

Of course, this synth is best experienced live because then it is fully navigable. You can move around using the mouse, cursor keys or keyboard shortcuts. Also try to zoom in. Fifteen megapixel pictures give you quite some leeway for zooming into details.

Have you created a nice synth? Leave the link as a comment so that I can check it out.


For the last week, I have been in love with Photosynth. It was a tech preview in view-only mode for quite a while, but now we have finally released it to the masses.

I've created three synths from existing pictures, i.e., from pictures I took without having Photosynth in mind. These are my first experiments:

  Photosynth (Build 10683) - Sunset Amsterdam

But these ones are much better:

Photosynth is a prime example of the execution on our Software + Services vision. It combines the best of the web with local computing power (on Windows).

Go create!

PS: I have a 40 Megapixel picture of Mount Rainier stitched from the same pictures as above. For the stitching I used Windows Live Photo Gallery. You can view it using Silverlight DeepZoom technology.


NoCar-NoMcDonalds

Yesterday evening I found that McDonald's is discriminating against the car-impaired. I am currently in Seattle for the Microsoft TechReady conference. After a photo walk with two colleagues we were getting thirsty and hungry. I managed to stay out of McDonald's for the almost four weeks that I have been in the States, but the McDonald's two blocks away from the Space Needle was a bit hard to resist.

We arrived there at 23:02 and the "restaurant" had just closed. However, the drive thru (yes, that seems to be the correct spelling here) is open 24 hours per day. But what do you do when you don't have a car? You walk through the drive thru. We weren't carrying enough metal to trip the detector in front of the ordering pole. So we couldn't state our order. We proceeded to the pay window to order there. The guy there looked very amazed and said he could not and would not serve us. WTF! "Why not?" we asked. "Because you don't have a car", he replied. Disappointed, we left. Next time, I will try riding a bike through the drive thru.

In the meantime I will boycott McDonald's for a while.

PS: This is just for my own good as I have the feeling that I gained a couple of kilos here in the US.

Whether I self-identify as a geek depends on the situation. Today, I thought it would be fine, so I signed up for the Geek Dinner organized by Scott Hanselman in Bellevue, WA.

Quite a lot of people showed up. I went there with my colleague Erwin van der Valk. He was a Development Consultant, like I currently am, at Microsoft Services in the Netherlands. Erwin now works on the Patterns & Practices team at Microsoft in Redmond.

I took this picture of the entire group:

Microsoft Geek Dinner 

After dinner, a large portion of the group went to the Rock Bottom Restaurant and Brewery in Bellevue to continue the conversation. I had some really interesting discussions, over beer, with guys from several different product teams and a fellow MCS consultant based in Denver, CO. And not even all about Microsoft technology 😉

Too bad I won't be able to attend this event again in the near future, unless I just happen to be in the neighborhood.


Best wishes to everyone for 2008!

On the first day of the new year, I released version 0.9.0.0 of my FlickrMetadataSynchr tool. You can always find the latest version on CodePlex.

I had been working on this new version for a while, but didn't get around to finishing it until now. Improvements in this new version:

  • Added much better activity logging. The activity log can now be shown in an additional window and is persisted to disk.
  • Added the option to match pictures on title and filename. This is useful when images have been timeshifted and cannot be matched on date taken.
  • Bug fixes. Improved stability when corrupt image files are encountered. Fixed GPS roundtripping bug.
  • Should run better on 64-bit versions of Windows XP and Vista.
  • Solution is now built in Visual Studio 2008 without the need for any additional WPF extensions.

Here is a screenshot of version 0.9.0.0:

Flickr Metadata Synchr v0.9.0.0