Monthly Archives: August 2005


I was just listening to the latest .NET Rocks! show, featuring Brad Abrams and Joel Pobar. I have been a fan of Brad and his blog since just before PDC 2003. Go listen to the show if you are interested in the CLR.

Brad was talking about a dev on the CLR team putting in a YouMoronException for a rare pathological error case. This reminded me of a YouBastardException that I defined in one of my first .NET projects a couple of years ago. This type was thrown when somebody deliberately tried to hack the application by passing invalid parameters. The application (an intranet application) is still up and running and I can still trigger this exception type by messing with the query string in the URL. However, the name of the exception type only shows up in the log file, not in the error page shown to the user.

Recently somebody asked me if I had included a similar exception in my current project. The answer was: no, I didn't. You'd better not do this in an Internet-facing application. Showing stack traces to the user should be turned off, but a configuration error is easy to make. I have settled for the more neutrally named InvalidParameterException this time 😉
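The idea looks something like this (a minimal sketch with an illustrative validation check; the real checks of course depend on the page):

```csharp
using System;

// Sketch of the idea: a dedicated exception type for deliberately invalid
// input. The type name only shows up in the log file; the user gets a
// generic error page. The validation below is purely illustrative.
public class InvalidParameterException : ApplicationException
{
    public InvalidParameterException(string message) : base(message) { }
}

public class QueryStringGuard
{
    // Example check for a numeric id passed in the query string.
    public static int GetCustomerId(string rawValue)
    {
        try
        {
            int customerId = int.Parse(rawValue);
            if (customerId <= 0)
            {
                throw new InvalidParameterException(
                    "customerId out of range: " + rawValue);
            }
            return customerId;
        }
        catch (FormatException)
        {
            throw new InvalidParameterException(
                "customerId is not a number: " + rawValue);
        }
    }
}
```

A global error handler logs the full exception (including the type name) and redirects to a neutral error page.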


Please read my earlier post on generating XHTML output from ASP.NET 1.1 before reading this post.

Summary:

  • We use custom controls to generate XHTML 1.0 Strict markup.
  • We parse the ASP.NET output as XML.
  • We tweak the form element and an input element to make the entire page XHTML 1.0 Strict.
  • We map parts of the XHTML tree of the ASP.NET output into an XHTML 1.0 Strict static page produced by a content management system (CMS).

The last point is our answer to the question of how to ensure and maintain a common look-and-feel between the static XHTML pages published by the CMS and the dynamic ASP.NET pages. In this post I will give more background and detail on this design decision, because it's not the only possible approach.

The CMS we use is of Dutch origin. Unlike Microsoft's CMS, it publishes pages to the web server that have no runtime dependency on the CMS, so they are static from the CMS perspective. The choice of CMS was fixed before ASP.NET entered the picture. One option to make some pages dynamic is to publish .asp, .aspx or .jsp pages that contain code to be executed on the web server, but they will still be static from the CMS perspective. Anything that the CMS does, like creating a common layout and helping with links among web pages and to resources like images, is done at publication time, not when the page is served to a browser by the web server.

The old-style approach would have been to use frames: the static part of the page with the common look-and-feel would be published by the CMS and the dynamic part would be framed inside it. Using frames has well-known drawbacks, so this approach was quickly ruled out.

We thought about letting the CMS publish the .aspx pages for the .NET modules. This way the XHTML markup for the common look-and-feel and navigation menus on the .aspx pages can be built by the CMS from the same building blocks as the static XHTML pages. However, a CMS is good at publishing content, not at managing and publishing source code. In the ASP.NET case you have dependencies on assemblies containing the compiled code-behinds. Do you want these to be published by the CMS as well? Do you want to run the risk of a content editor messing up the markup of your ASP.NET page and breaking the close relationship with the code-behind assemblies, which they cannot edit?

So we decided to take a different approach. The ASP.NET pages and the assemblies live separately from the CMS content on the web server and are not managed or published by the CMS. The ASP.NET page knows which content template it should use, fetches it and maps its own output into container elements (like divs) in the XHTML content template. The resulting XHTML document is what gets sent to the browser.

Well, it is not really the ASP.NET page itself that performs the mapping, but an IHttpModule which I will call the content mapper. So how does the content mapper know which content template to use? One of the requirements was that a single ASP.NET page might be mapped into several different content templates, depending on the context in which the .NET module in question was being used.

The .NET modules are linked to from static XHTML pages published by the CMS. A static page that links to a .NET module represents a navigation state, which I will call the navigation context for the .NET module. You can see the navigation state in the highlighting of menu items and in the folder structure in the URL; it shows you where you are in the navigation structure of the site. One of the requirements was that the .NET modules would keep the visual navigation state intact. They should support different navigation contexts, i.e., a .NET module should adapt like a chameleon to the page that linked to it. In the old days the easiest way to solve this would have been to use frames; as I mentioned before, these are out of the question.

The first idea for keeping track of the navigation state was to read the HTTP referer header to see which page linked to the .NET module. This value was to be passed along in a hidden field if the .NET module itself had multiple pages. But I didn't like the strong dependency on an HTTP header that is sometimes blocked by proxy servers or personal firewalls; I wanted it to be possible to explicitly tell the .NET module which navigation context to use. I also disliked the hidden field idea. It puts an extra burden on developers of .NET modules to pass it along: you always have to use HTML forms, and those forms have to use POST instead of GET or you'll end up with all form fields in the URL instead of just the navigation context parameter.

I wanted the whole content mapping thing to be as much of a black box as possible for the developers. I felt the best place to keep track of where you are in the site is the URL you see in the address bar of your browser. And I don't mean the query part, but the path part of the URL. So I decided to solve this issue together with the requirement that URLs to pages in the site should not include extensions like .aspx or .html.

We settled on a URL format that looks like this: http://www.finance.nl/advice/savings/context/information/savingsaccounts/. And this URL actually points to the same .NET module, but in a different context: http://www.finance.nl/advice/savings/context/thisweekstips/. The .NET module is deployed in a single virtual directory: /advice/savings/. The part after that in the URL does not correspond to a real virtual directory; it just identifies the navigation context. The virtual directories /thisweekstips/ and /information/savingsaccounts/ do exist: these are the locations from which the content mapper fetches the content template for the navigation context.

The trick to accomplish this is to configure a wildcard mapping in Internet Information Services (IIS) for the virtual directory /advice/savings/. This means mapping the .* extension to the ISAPI extension aspnet_isapi.dll, so that all requests for URLs within that folder are handled by ASP.NET.

ASP.NET uses the machine.config and web.config files to determine which IHttpHandler to invoke; an ASP.NET Page is an example of an IHttpHandler. We configure a wildcard mapping for * in the web.config file. But instead of specifying a concrete IHttpHandler, we specify a class that implements IHttpHandlerFactory. I will call the class I wrote CustomPageHandlerFactory. It parses the URL and determines the navigation context and the .aspx page to invoke. For the two URLs above this would be a configurable default page in the savings directory, but for the URL http://www.finance.nl/advice/savings/page2/context/information/savingsaccounts/ it would be page2.aspx. After parsing the URL, the CustomPageHandlerFactory puts the navigation context in the current HttpContext so the content mapper can use it later on in the page processing. Finally, the CustomPageHandlerFactory gets the IHttpHandler for the .aspx page by calling PageParser.GetCompiledPageInstance. PageParser is part of the ASP.NET infrastructure code; it's not intended to be called by user code, but it's a public class and works well. The IHttpHandler is returned to the ASP.NET runtime, which invokes it to process the request.
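A minimal sketch of what such a handler factory can look like (simplified; the URL parsing, names and paths are illustrative, and the real implementation reads its defaults from configuration):

```csharp
using System.Web;
using System.Web.UI;

// Hypothetical sketch of the handler factory idea; not the actual project code.
public class CustomPageHandlerFactory : IHttpHandlerFactory
{
    public IHttpHandler GetHandler(HttpContext context, string requestType,
                                   string url, string pathTranslated)
    {
        const string modulePath = "/advice/savings/"; // the module's real virtual directory
        const string marker = "context/";

        // e.g. url = "/advice/savings/page2/context/information/savingsaccounts/"
        string rest = url.Substring(modulePath.Length);
        int markerIndex = rest.IndexOf(marker);

        string page = "default";                      // configurable in the real code
        string navigationContext = string.Empty;
        if (markerIndex >= 0)
        {
            navigationContext = rest.Substring(markerIndex + marker.Length);
            string pagePart = rest.Substring(0, markerIndex).Trim('/');
            if (pagePart.Length > 0)
            {
                page = pagePart;                      // e.g. "page2"
            }
        }

        // Make the navigation context available to the content mapper
        // later in the pipeline.
        context.Items["NavigationContext"] = navigationContext;

        // Let ASP.NET compile and instantiate the real .aspx page.
        string aspxPath = modulePath + page + ".aspx";
        return PageParser.GetCompiledPageInstance(
            aspxPath, context.Server.MapPath(aspxPath), context);
    }

    public void ReleaseHandler(IHttpHandler handler)
    {
    }
}
```

The factory is registered in web.config under httpHandlers for path="*", so it sees every request that the IIS wildcard mapping hands to ASP.NET.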

On a closing note: I have left out some details and cannot share my code, as it is owned by the client I work for. While googling for some pages to link to, I found a blog post detailing a similar approach: Matias has a simple implementation of a PageHandlerFactory that you can download.


What better thing to do on a late Saturday night/early Sunday morning than to read up on service design?

Before you answer: this is a rhetorical question and I know I should get a life 😉 In fact I am going to Sail 2005 in Amsterdam later today with a friend and it looks like the weather is going to be good.

But back to the subject of this post: John Evdemon from Microsoft published two articles on service design on MSDN last week:

  • Principles of Service Design: Service Patterns and Anti-Patterns
    Summary: This paper, the first of a multi-paper series, provides some fundamental principles for designing and implementing Web services. The paper starts with a brief review of SOA concepts and leads up to a detailed discussion of several patterns and anti-patterns that developers can leverage when building Web services today. Guidance in this paper is applicable to any programming language or platform for which Web services can be developed and deployed.
  • Principles of Service Design: Service Versioning
    Summary: This paper is the second in a multi-paper series focusing on service design principles. This paper surveys some fundamental guidelines for versioning schemas and Web services. Six specific principles for versioning Web services are identified.

The first article is highly recommended for anyone new to service orientation, because it starts with a good introduction and explanation of the four tenets of service orientation. The patterns and anti-patterns give concrete advice on do's and don'ts when designing service interfaces. The guidance in the second article is also pretty concrete.

I am looking forward to future articles in the series. I like John's writing style and approach to giving guidance.


Since last week we have an accessibility expert blogging on BloggingAbout.NET: Nathan J Pledger. His first post inspired me to blog about my current project.

As I mentioned in a comment on Nathan's first post, I currently work in a team on the new web site for one of the major financial institutions in the Netherlands. This web site will go live sometime later this year. Most web pages on the site come from a content management system. All web pages are designed to conform to the XHTML 1.0 Strict standard, i.e., they should be valid when validated against the XHTML 1.0 Strict DTD. This in itself helps in creating an accessible site by separating layout from content and working well across different browsers.

Extraordinary care is taken to ensure that markup is semantically correct. Tables are only used for tabular data and not for layout. Lists are marked up using the XHTML list elements: ul, ol and li. And so on.

One of the design goals is that pages will work well with JavaScript turned off and with CSS styling turned off. JavaScript should only enhance the browsing experience when available; it should not break the page when it is not. This means that drop-down menus are marked up as regular XHTML lists of links, which are accessible to people using screen readers or with JavaScript turned off. When JavaScript and CSS are available, those lists are converted through the DOM into drop-down menus with fancy shading etc. We have had CSS and accessibility experts working on creating the stylesheets, JavaScripts and templates for accessible pages.

ASP.NET is used for dynamic pages that depend on user input, like pages that give advice on what type of savings account is best suited for someone. I will call these types of web applications within the site .NET modules.

For the content management system this was not so difficult to implement, but for ASP.NET 1.1 it was. As you may know, the default output of ASP.NET is not XHTML compliant. Furthermore, it has a lot of controls that output JavaScript that does not work well outside of Internet Explorer. There is poor support for using CSS. The HTML editor is a disaster. I could go on, but to cut the list short I will jump to the conclusion: we had to roll our own custom controls and page rendering.

We use the standard ASP.NET Page model, its lifecycle and its postback model. We don't use most of the standard controls; we have built our own custom controls. These custom controls render XHTML 1.0 Strict output and work with or without JavaScript in a cross-browser compatible way. Without JavaScript the standard ASP.NET auto-postback functionality will not work: when you select another radio button, the page cannot be posted back automatically. With standard ASP.NET you are out of luck if you want to modify the page in response to the user selecting another option. When JavaScript is not available, we render submit buttons next to such controls to allow the user to update the page. This simulates the auto-postback behaviour. So how do we know whether the user has turned JavaScript off when we render the page on the server? We don't. The trick is to always render these extra submit buttons, and then remove them from the page using client-side DOM manipulation when JavaScript is available.
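As a sketch of the idea (simplified, with an illustrative class name and CSS class; not our actual control code), such a fallback button can be rendered like this:

```csharp
using System;
using System.IO;
using System.Web.UI;

// Hypothetical sketch: the fallback submit button rendered next to an
// auto-postback control. The names and the CSS class are illustrative.
public class FallbackSubmitButton : Control
{
    protected override void Render(HtmlTextWriter writer)
    {
        // Always render an explicit submit button; a small client-side
        // script removes elements with this class when JavaScript is
        // available, so script-capable browsers never see the button.
        writer.AddAttribute(HtmlTextWriterAttribute.Type, "submit");
        writer.AddAttribute(HtmlTextWriterAttribute.Name, UniqueID);
        writer.AddAttribute(HtmlTextWriterAttribute.Value, "Update");
        writer.AddAttribute(HtmlTextWriterAttribute.Class, "remove-if-script");
        writer.RenderBeginTag(HtmlTextWriterTag.Input);
        writer.RenderEndTag();
    }
}
```

The server never needs to know whether scripting is available; the class name is the only contract between the control and the client-side script.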

Having custom controls that render XHTML 1.0 Strict is not enough when you want to use the standard postback model. There are two well-known problems with the standard ASP.NET output: the form element rendered for postbacks by ASP.NET contains a name attribute that is illegal according to the XHTML 1.0 Strict DTD, and the <input name="__VIEWSTATE" .. /> element should be wrapped in a container element like a div.

To fix these issues we post-process the ASP.NET output using a response filter. This filter is installed by an IHttpModule in the ASP.NET pipeline for each request for an .aspx page. The response filter parses the ASP.NET output as XML. If the ASP.NET output is well-formed XML, that is already a good step on the way to XHTML compliance. A developer will know instantly that there is a problem if the output cannot be parsed as XML, because the filter will throw an InvalidAspNetOutputException.
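A stripped-down sketch of such a filter (the class and member names are illustrative; the real filter also fixes the XHTML issues and performs the template mapping described below):

```csharp
using System;
using System.IO;
using System.Text;
using System.Xml;

// Thrown when the ASP.NET output is not well-formed XML.
public class InvalidAspNetOutputException : ApplicationException
{
    public InvalidAspNetOutputException(string message, Exception inner)
        : base(message, inner) { }
}

// Hypothetical sketch of the response filter idea.
public class XhtmlResponseFilter : Stream
{
    private readonly Stream _inner;                      // the original response stream
    private readonly MemoryStream _buffer = new MemoryStream();

    public XhtmlResponseFilter(Stream inner) { _inner = inner; }

    // Buffer the ASP.NET output instead of sending it on directly.
    public override void Write(byte[] buffer, int offset, int count)
    {
        _buffer.Write(buffer, offset, count);
    }

    public override void Close()
    {
        XmlDocument document = new XmlDocument();
        try
        {
            _buffer.Position = 0;
            document.Load(_buffer);                      // must be well-formed XML
        }
        catch (XmlException e)
        {
            throw new InvalidAspNetOutputException(
                "The ASP.NET output could not be parsed as XML.", e);
        }

        // ... fix the form element, wrap __VIEWSTATE, map into the template ...

        document.Save(_inner);                           // send the result to the browser
        _inner.Close();
    }

    // Minimal Stream plumbing for a write-only filter.
    public override bool CanRead  { get { return false; } }
    public override bool CanSeek  { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length   { get { return _buffer.Length; } }
    public override long Position
    {
        get { return _buffer.Position; }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { }
    public override int Read(byte[] buffer, int offset, int count)
    { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin)
    { throw new NotSupportedException(); }
    public override void SetLength(long value)
    { throw new NotSupportedException(); }
}
```

An IHttpModule can install it per request with context.Response.Filter = new XhtmlResponseFilter(context.Response.Filter);.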

Most approaches to fixing ASP.NET output mentioned on the web use regular expressions or other forms of string parsing. We use XML parsing because we have to do additional manipulations on the output on top of fixing the XHTML issues: we have to integrate the ASP.NET output with static pages published by the content management system. Parsing the ASP.NET output as XML imposes a significant overhead with noticeably longer processing times on the server, but in our case the resulting performance is acceptable.

After fixing the XHTML issues, parts of the ASP.NET output are mapped into container elements in an XHTML 1.0 Strict content template. This content template comes from the content management system. It contains the skeleton for the layout of the page and the references to external stylesheets and JavaScripts. The .NET module inserts its own title in the title tag of the template and adds additional meta elements to the document. These mappings can be configured in a flexible way using an XML configuration file that is read using the Enterprise Library at run-time. Think of our mapping functionality as "XSLT light". After the mappings have been performed, the resulting XHTML document is rendered to the standard ASP.NET output stream. If all goes well, the browser receives an XHTML 1.0 Strict compliant and accessible page rendered by ASP.NET 1.1!
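One mapping step can be sketched like this (illustrative names; in the real implementation the XPath expressions come from the XML configuration file):

```csharp
using System;
using System.Xml;

public class ContentMapper
{
    // Hypothetical sketch of one mapping step: import a part of the
    // ASP.NET output into a container element of the content template.
    // The "x" prefix is bound to the XHTML namespace for the XPath queries.
    public static void MapIntoTemplate(XmlDocument template, XmlDocument aspNetOutput,
                                       string sourceXPath, string containerXPath)
    {
        XmlNamespaceManager templateNs = new XmlNamespaceManager(template.NameTable);
        templateNs.AddNamespace("x", "http://www.w3.org/1999/xhtml");
        XmlNamespaceManager outputNs = new XmlNamespaceManager(aspNetOutput.NameTable);
        outputNs.AddNamespace("x", "http://www.w3.org/1999/xhtml");

        XmlNode container = template.SelectSingleNode(containerXPath, templateNs);
        XmlNode source = aspNetOutput.SelectSingleNode(sourceXPath, outputNs);

        // Nodes must be imported before they can live in another document.
        container.AppendChild(template.ImportNode(source, true));
    }
}
```

In the real pipeline this runs once per configured mapping, after which the template document (not the raw ASP.NET output) is written to the response.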

In another post I will go into some additional design goals that we had, like not showing the .aspx extension in URLs, and I will tell more about how we integrated with the content management system.

BTW: Don't look at the HTML code for postings on this Community Server. It contains horrendous old style markup with font tags all over the place 😉


Monday I blogged about a problem I encountered with the RollingFileSink. Unfortunately, deleting the log file once and waiting more than 15 seconds before creating a new log file solves the problem for only one day (or whatever age limit you have set). To fix the problem completely I had to change the RollingFileSink. Fortunately that is possible, because Hisham Baz released the RollingFileSink in source code form.

I fixed the LogRoller class. You can find the code that I changed and added below, or you can download LogRoller.cs. The line with CreateNewLogFile(); in the PerformRenameRollover method is new, and the CreateNewLogFile method itself is new.

/// <summary>
/// Archive the current log file by renaming it with today's timestamp.
/// Generate a new filename for the current log file using a <see cref="FilenameBuilder"/>.
/// </summary>
public void PerformRenameRollover()
{
    Purge();
    if (File.Exists(_info.FullName))
    {
        string newName = _builder.CreateNewFilename();
        File.Move(_info.FullName, newName);
        CreateNewLogFile();
    }
}

/// <summary>
/// Creates a new current log file and explicitly sets its creation <see cref="DateTime"/> to <see cref="DateTime.Now"/>.
/// </summary>
/// <remarks>Explicitly creating the new file and setting its creation <see cref="DateTime"/> is
/// necessary due to file system tunneling: a new file gets the creation <see cref="DateTime"/>
/// of an older file that existed with the same name but was deleted or renamed within 15 seconds of
/// the creation operation.</remarks>
private void CreateNewLogFile()
{
    FileStream newLogFileStream = File.Create(_info.FullName, 1);
    newLogFileStream.Close();
    File.SetCreationTime(_info.FullName, DateTime.Now);
}


Today I experienced some strange, unexpected log file roll-overs when logging messages using the EntLib extension RollingFileSink. That's an extension I blogged about a month ago.

I had set an age threshold of 1 day, but instead of rolling over once a day, the RollingFileSink started a new log file for each log entry. Rechecking the configuration multiple times, I found nothing wrong with it. Clearing the entire log directory did not help either: the RollingFileSink created a fresh log file, but that file still wasn't reused. Strangely enough, another log file with identical settings (apart from the filename) was reused and did not roll over.

So I fired up the debugger and was surprised to see that the RollingFileSink considered the fresh file to be older than 1 day. Digging a little deeper revealed that the file was 40 days old! Then I remembered reading about file system tunneling on Raymond Chen's blog. When a file is deleted, the file system remembers its creation datetime. When the file reappears within a short interval, it gets that old value as its creation date instead of the current datetime. Yikes, but Raymond explains why file system tunneling exists. More details can be found in this KB article. The default tunneling cache time is 15 seconds.

The moral of this story: wait at least 15 seconds after deleting a file before creating a new file with the same name if you want the new file to have the correct creation date.
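The effect is easy to reproduce (a small sketch; the file name is arbitrary, and the True result depends on tunneling being enabled, so expect False on other platforms or file systems):

```csharp
using System;
using System.IO;
using System.Threading;

// Small repro sketch of file system tunneling (Windows only).
public class TunnelingDemo
{
    public static void Main()
    {
        string path = "tunnel.log";
        File.Create(path).Close();
        DateTime first = File.GetCreationTime(path);

        Thread.Sleep(1000);          // well within the 15-second tunneling window
        File.Delete(path);
        File.Create(path).Close();   // recreate under the same name

        // On NTFS/FAT with tunneling enabled, the new file inherits the
        // old creation time from the tunneling cache.
        Console.WriteLine(File.GetCreationTime(path) == first);

        File.Delete(path);
    }
}
```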


ASP.NET 1.1 ships with browser capabilities detection settings that are by now pretty outdated. We ran into the problem that my colleague Chi Wai Man blogged about: ASP.NET uses System.Web.UI.Html32TextWriter instead of System.Web.UI.HtmlTextWriter for rendering HTML to Firefox.

ASP.NET uses the browserCaps section in the machine.config file to determine what the capabilities of a browser are. I found an updated version of these settings by Rob Eberhardt, but those were not complete, so I merged them with what was already present in the default machine.config. You can find part of the result below or download it here. I will publish the complete browserCaps section next week.

Note that I didn't update any of the mobile device settings, so those will still be very outdated.

[Update 2005-08-07: Added the link to the source of the browserCaps section. Fixed a typo and some layout issues. Removed download link because it didn't work (files with .config extension cannot be downloaded).]

<browserCaps>
<filter>
<!-- Internet Explorer //-->
<case match="^Mozilla[^(]*\(compatible; MSIE (?'version'(?'major'\d+)(?'minor'\.\d+)(?'letters'\w*))(?'extra'.*)">
browser=IE
version=${version}
majorversion=${major}
minorversion=${minor}
<case match="^[5-9]\." with="${version}">
frames=true
tables=true
cookies=true
backgroundsounds=true
vbscript=true
javascript=true
javaapplets=true
activexcontrols=true
tagwriter=System.Web.UI.HtmlTextWriter
ecmascriptversion=1.2
msdomversion=${major}${minor}
w3cdomversion=1.0
css1=true
css2=true
xml=true
isMobileDevice="true"
<filter with="${letters}" match="^b">
beta=true
</filter>
<filter with="${extra}" match="Crawler">
crawler=true
</filter>
</case>
<case match="^4\." with="${version}">
frames=true
tables=true
cookies=true
backgroundsounds=true
vbscript=true
javascript=true
javaapplets=true
activexcontrols=true
tagwriter=System.Web.UI.HtmlTextWriter
ecmascriptversion=1.2
msdomversion=4.0
cdf=true
css1=true
<filter with="${letters}" match="^[ab]">
beta=true
</filter>
<filter with="${extra}" match="Crawler">
crawler=true
</filter>
<filter match="; AOL" with="${extra}">
aol=true
</filter>
<filter match="; Update a;" with="${extra}">
authenticodeupdate=true
</filter>
</case>
<case match="^3\." with="${version}">
frames=true
tables=true
cookies=true
backgroundsounds=true
vbscript=true
javascript=true
javaapplets=true
activexcontrols=true
ecmascriptversion=1.0
css1=true
<filter match="true" with="%{win16}">
javaapplets=false
activexcontrols=false
<filter match="^a" with="${letters}">
beta=true
vbscript=false
javascript=false
</filter>
</filter>
<filter match="Mac68K|MacPPC" with="%{platform}">
vbscript=false
activexcontrols=false
</filter>
<filter match="^B" with="${letters}">
beta=true
</filter>
<filter match="; AK;" with="${extra}">
ak=true
</filter>
<filter match="; SK;" with="${extra}">
sk=true
</filter>
<filter match="; Update a;" with="${extra}">
authenticodeupdate=true
</filter>
<filter match="; AOL" with="${extra}">
aol=true
</filter>
</case>
<case match="^2\." with="${version}">
tables=true
cookies=true
backgroundsounds=true
<filter match="^b" with="${letters}">
beta=true
</filter>
<filter match="; AOL" with="${extra}">
aol=true
</filter>
</case>
<case match="^1\.5" with="${version}">
tables=true
cookies=true
</case>
</case>
<!-- Opera //-->
<case match="Opera[ /](?'version'(?'major'\d+)(?'minor'\.(?'minorint'\d+))(?'letters'\w*))">
browser=Opera
version=${version}
majorversion=${major}
minorversion=${minor}
frames=true
tables=true
cookies=true
javascript=true
ecmascriptversion=1.1
<filter match="[4-9]" with="${major}">
ecmascriptversion=1.3
css1=true
css2=true
xml=true
<filter match="[5-9]" with="${major}">
w3cdomversion=1.0
</filter>
</filter>
<filter match="[7-9]" with="${major}">
tagwriter=System.Web.UI.HtmlTextWriter
</filter>
<filter>
<case match="7" with="${major}">
<filter>
<case match="[5-9]" with="${minorint}">
ecmascriptversion=1.5
</case>
<case>
ecmascriptversion=1.4
</case>
</filter>
</case>
<case match="[8-9]" with="${major}">
ecmascriptversion=1.5
</case>
</filter>
<filter match="^b" with="${letters}">
beta=true
</filter>
</case>
<!-- Pocket Internet Explorer //-->
<case match="^Microsoft Pocket Internet Explorer/0\.6">
browser=PIE
version=1.0
majorversion=1
minorversion=0
tables=true
backgroundsounds=true
platform=WinCE
isMobileDevice="true"
</case>
<case match="^Mozilla[^(]*\(compatible; MSPIE (?'version'(?'major'\d+)(?'minor'\.\d+)(?'letters'\w*))(?'extra'.*)">
browser=PIE
version=${version}
majorversion=${major}
minorversion=${minor}
tables=true
backgroundsounds=true
cookies=true
isMobileDevice="true"
<case match="2\." with="${version}">
frames=true
</case>
</case>
<!-- GECKO Based Browsers (Netscape 6+, Mozilla/Firefox, ...) //-->
<case match="^Mozilla/5\.0 \([^)]*\) (Gecko/[-\d]+)(?'VendorProductToken' (?'type'[^/\d]*)([\d]*)/(?'version'(?'major'\d+)(?'minor'\.\d+)(?'letters'\w*)))?">
browser=Gecko
<filter>
<case match="(Gecko/[-\d]+)(?'VendorProductToken' (?'type'[^/\d]*)([\d]*)/(?'version'(?'major'\d+)(?'minor'\.\d+)(?'letters'\w*)))">
type=${type}
</case>
<case>
<!-- plain Mozilla if no VendorProductToken found -->
type=Mozilla
</case>
</filter>
frames=true
tables=true
cookies=true
javascript=true
javaapplets=true
ecmascriptversion=1.5
w3cdomversion=1.0
css1=true
css2=true
xml=true
tagwriter=System.Web.UI.HtmlTextWriter
<case match="rv:(?'version'(?'major'\d+)(?'minor'\.\d+)(?'letters'\w*))">
version=${version}
majorversion=0${major}
minorversion=0${minor}
<case match="^b" with="${letters}">
beta=true
</case>
</case>
</case>
<!-- Older Mozilla browsers //-->
<case match="^Mozilla/2\.01 \(Compatible\) Oracle\(tm\) PowerBrowser\(tm\)/1\.0a">
browser=PowerBrowser
version=1.5
majorversion=1
minorversion=.5
frames=true
tables=true
cookies=true
vbscript=true
javascript=true
javaapplets=true
platform=Win95
</case>
<case match="^Mozilla/(?'version'(?'major'\d+)(?'minor'\.\d+)(?'letters'\w*)).*">
browser=Netscape
version=${version}
majorversion=${major}
minorversion=${minor}
<filter match="^b" with="${letters}">
beta=true
</filter>
<filter match="Gold" with="${letters}">
gold=true
</filter>
<case match="^[4-9]\." with="${version}">
frames=true
tables=true
cookies=true
javascript=true
javaapplets=true
ecmascriptversion=1.2
css1=true
<filter match="^[5-9]*" with="${minor}">
ecmascriptversion=1.3
</filter>
</case>
<case match="^[2-3]\." with="${version}">
frames=true
tables=true
cookies=true
javascript=true
javaapplets=true
ecmascriptversion=1.1
</case>
</case>
<!-- AppleWebKit Based Browsers (Safari...) //-->
<case match="AppleWebKit/(?'version'(?'major'\d?)(?'minor'\d{2})(?'letters'\w*)?)">
browser=AppleWebKit
version=${version}
majorversion=0${major}
minorversion=0.${minor}
frames=true
tables=true
cookies=true
javascript=true
javaapplets=true
ecmascriptversion=1.5
w3cdomversion=1.0
css1=true
css2=true
xml=true
tagwriter=System.Web.UI.HtmlTextWriter
<case match="AppleWebKit/(?'version'(?'major'\d)(?'minor'\d+)(?'letters'\w*))(.* )?(?'type'[^/\d]*)/.*( |$)">
type=${type}
</case>
</case>
<!-- Konqueror //-->
<case match=".+[K|k]onqueror/(?'version'(?'major'\d+)(?'minor'(\.[\d])*)(?'letters'[^;]*));\s+(?'platform'[^;)]*)(;|\))">
browser=Konqueror
version=${version}
majorversion=0${major}
minorversion=0${minor}
platform=${platform}
type=Konqueror
frames=true
tables=true
cookies=true
javascript=true
javaapplets=true
ecmascriptversion=1.5
w3cdomversion=1.0
css1=true
css2=true
xml=true
tagwriter=System.Web.UI.HtmlTextWriter
</case>
<!-- Web TV //-->
<case match="WebTV/(?'version'(?'major'\d+)(?'minor'\.\d+)(?'letters'\w*))">
browser=WebTV
version=${version}
majorversion=${major}
minorversion=${minor}
tables=true
cookies=true
backgroundsounds=true
isMobileDevice="true"
<filter match="2" with="${minor}">
javascript=true
ecmascriptversion=1.0
css1=true
</filter>
<filter match="^b" with="${letters}">
beta=true
</filter>
</case>
</filter>
<filter>
<case match="Unknown" with="%{browser}">
type=Unknown
</case>
<case>
type=%{browser}%{majorversion}
</case>
</filter>
</browserCaps>


I blogged before about some of the promising new features for working with relational and XML data in C# 3.0. Those features will be revealed at PDC 2005 by Paul Vick (the equivalent of Anders Hejlsberg in the VB world). As with generics, VB developers will not be left out in the cold, because Visual Basic 9.0 will get the same features.

Maybe VB will get more of the dynamic typing that I blogged about than C#, because VB has always been good at supporting dynamic typing (since long before the first .NET version of VB). Paul Vick alluded to this in July. This is going to be very interesting at the PDC. I hope I won't have to switch to VB to get those features 😉


Bruce Eckel (of Thinking in C++ and Thinking in Java fame) has a very interesting post titled "What is Consulting?".

My job title is "consultant" and fortunately from time to time I get to do work that truly matches Bruce's definition of consulting (with which I agree). That type of work is very satisfying. But I wouldn't want to miss being directly involved in building software, rather than only giving advice from the sidelines. I.e., getting your feet dirty (in Dutch: "met je poten in de modder staan" ;).

An important point that Bruce stresses is the need to invest time and effort in maintaining your knowledge and gaining new knowledge. This is indeed very important for being effective as a consultant. I get to do R&D one to two days a week. I consider myself fortunate, because I must admit that the majority of my colleagues work full-time on projects for clients. Apart from being able to try out new technologies, read blogs, and think about software architecture and development during my day job, I also invest quite some spare time in this.

What do you think? Should everybody in a consulting firm be given time to do these types of things? Can a consulting firm in return demand that employees invest spare time in gaining relevant knowledge? And by that I mean going beyond just investing time in getting an MCAD or MCSD certification.

Or should a consulting firm make a clear distinction between consultants (with a high hourly rate, but not 100% billable) and people who work full-time on projects (with a lower hourly rate, but 100% billable)?

Disclaimer: the opinions expressed in this blog do not necessarily reflect those of my employer.