
0. Always remember the source of the rule.


All through my life, especially my career, I have run into rules that make no sense.

Once I was given a 'beat down' by the IT department of a former employer for not following a specific rule. When I asked how I was to have known about the rule, they said it wasn't documented or published. I then asked if there was a list of other undocumented/secret rules I should know about. They didn't laugh.

Too many times a rule or guideline is drawn up and rolled out to the masses. Then time goes on, and if things are going well, the business changes. [Side note: If your company doesn't change, then get a new job, because that company won't be around for long.] After the change, the rule no longer makes any sense, but people still blindly follow it.

Also, after this business change, the rule might need to be updated or removed entirely. If you don't remember the reason for the rule, then you won't know when it needs to be updated.

The following joke explains this better than I can. I have seen it come around about every two years, and it is always a good read.

The Origin Of Company Policy
Start with a cage containing five monkeys. Inside the cage, hang a banana on a string and place a set of stairs under it. Before long, a monkey will go to the stairs and start to climb towards the banana. As soon as he touches the stairs, spray all of the other monkeys with cold water. After a while, another monkey makes an attempt with the same result -- all the other monkeys are sprayed with cold water.

Pretty soon, when another monkey tries to climb the stairs, the other monkeys
will try to prevent it. Now, put away the cold water. Remove one monkey from the cage and
replace it with a new one. The new monkey sees the banana and wants to climb the stairs. To his surprise and horror, all of the other monkeys
attack him. After another attempt and attack, he knows that if he tries to climb the stairs, he will be assaulted.

Next, remove another of the original five monkeys and replace it with a new one. The newcomer goes to the stairs and is attacked. The previous
newcomer takes part in the punishment with enthusiasm! Likewise, replace a third original monkey with a new one, then a fourth, then the fifth.

Every time the newest monkey takes to the stairs, he is attacked. Most of the monkeys that are beating him have no idea why they were not
permitted to climb the stairs or why they are participating in the beating of the newest monkey.
After replacing all the original monkeys, none of the remaining monkeys have ever been sprayed with cold water. Nevertheless, no monkey ever again approaches the stairs to try for the banana. Why not? Because as far as they know that's the way it's always been done around here.

And that, my friends, is how company policy begins!

Rule 0: If you are going to make a rule, document the reason behind it, so that people will understand the rule, and know when it should be refactored or garbage collected.

Rules of Thumb for Consultants


I love being a consultant. I love managing a consulting team. I can't think of something more fun to do. Over the years, I have learned a lot! I try to codify these lessons, so that our team can move farther and faster from all of our experiences.

I will start posting some of my 'rules of thumb' here. These are not scientific. NONE of them are 100% strict or perfect, which is why they aren't just called 'Rules'. There is always a lot of gray area in consulting, and communication and truth will guide you in the gray areas. But until you get the experience, and comfort with clients, these should help you through some situations.

Truth be told, most of these have come from me, or people I know, putting my foot in my mouth.

I am sure these have all been covered on other web sites and blogs. But, since blogs are about conversation, I thought I would put some down here, and see what people have to say.

What is a consultant? Well, that is a very ambiguous term, as much as the title architect is. There are two ways of looking at it.

There is the paperwork (legal) way, and the philosophical way. The legal way lays out like this:

0. You work for the same people that pay you. They tell you what to do on a daily/tactical basis. You do it. This makes you a 'native'.

1. You work for different people that pay you. But the people that you work for still tell you what to do on a tactical level each day. This makes you a contractor/staffer.

2. You work for different people that pay you. They give you strategic direction, and let you figure out the best way to do it, based on your training, processes, experience, and luck. This makes you a consultant. You don't necessarily work on site or off site. Your professional relationship with the client is much like a lawyer.

I am sure there are better definitions out there, but those are the basics.

The philosophical view grays this a little. You can be a native employee, yet still think and act like a consultant. I think this is key. No matter who you are working for, and how they are paying you, you should always think and act like a professional consultant. You will end up bringing a lot more to the table for your employer. Thinking of your internal customer as a client, and truly treating them like one, can be a very powerful thing. It will lead to more success, and to more value and growth for your company. That's a good thing.

The worst is when a consultant ends up thinking and acting like an internal/native. This leads to stagnation, and negative value for the client. They could have gotten the same grunt work for a lot less money from a real native. And all you did was ruin your own reputation, as well as your company's.

I think 'how' you think, how you approach problems, and how you provide value is more important in the "Am I a consultant?" equation than merely how you are paid and who you work for.

CodeMash Registration


The past 48 hours have seen a huge spike in registrations!

The Early Bird discount of $50 (from $149 to $99) just expired about half an hour ago. I guess there are a lot of procrastinators out there.

We have received word that the hotel still has some rooms left at the $88/night rate. So while the CodeMash registration fee is now $149, the hotel room is still a great value. We can't guarantee this hotel rate for long, because when they run out of rooms in our block, then you will have to get a normal room at the normal rate of around $150 or so (I don't know the real rate off the top of my head, but it is up there!).


So, act now!


Qualities of a good dev team member


Part of my job is to build an awesome application development team. A team involves a lot of different skill sets of course. This includes the developer, but also the architect, PM, BA, QA, DBA, etc.

There are certain qualities that we look for when we are trying to find a new person to add to the team. These qualities are in addition to the certain skills and attitudes a specific position might require. I think all of these are things that you learn when you are young, and they may be complemented by a certain amount of inborn talent.

The following three qualities are the baseline to get started. These are qualities that can be hard to see in a person in interviews. We usually go through 3-5 rounds of interviews, which culminate in an 'audition' in front of the team. More on this later.

So, what are these wonderful, Zen like qualities?

0. Learn Quickly, and Unlearn Quicker

I used to say 'Everything you know today will be worthless in two years.' I was corrected by a team member recently who pointed out that isn't really true. Problem solving skills, learning skills, etc. may improve, but they don't go out of use. I agree. So, the technical knowledge you have today will be worthless in two years. But how you get that technical knowledge will last a lifetime.

As time marches on, it is important to know when to move on from something that you do "KNOW", and move on to the next thing. For example, it used to be that Client/Server was the end-all, be-all. The last architecture you would ever need. Then came n-tier. Then came SOA. So, don't just learn the next platform, but be ready to leave behind outdated concepts and ideas.

How do we figure this out in an interview? Ask them about some of the latest and greatest. See what they think of Atlas, or WCF, or anything. Have they played with it, learned it? What was the last thing they learned? How do they like to learn? What was the last book they read? Our interviews tend to be conversations, not SAT style interrogations. It is about the discussion, and about your gut feeling as an interviewer. [OK, there is ONE interview that is a quiz, and that is the technical interview. We want to know what your chops are. Where you lie on a range of skill levels, across several parameters. People have cried. It's a long story.]

1. Problem Solving

Our jobs are about problem solving. We solve problems. Our clients have problems, and we provide solutions; solutions that solve those problems. Usually within a bizarre set of technical, business, and financial constraints.

Problem solving goes directly to how to solve the business problem, what the best business process is, how to get that code to work, why the trigger is not firing, and how to map that XML document.

Problem solving skills are some of the hardest skills to learn.

How do we find this in a person? Ask them about difficult problems they have faced, and how they solved them, or tried to solve them. Get them to talk about their problems, and solutions. What process did they use? Did they work with the team? Did they do their own research first? If they talk about how they haven't had problems to solve, then show them the door. Don't waste your time.

2. Passion, preferably for what you do

Passionate people are infectious. They put more effort into what they are doing, they build team energy, and they become a force multiplier on their team.

We look for people who show passion for technology. There are plenty of good candidates that are good developers, but will never be on the team, because they aren't passionate. It is fine to be a nine-to-fiver. Those are fine people. Really, they are. I just choose to work with people that stand up and applaud when the BizTalk team announces you can now zoom in the orchestration designer, or people who take a long van trip to the VS2005 launch, or people who take time outside of work to learn about things you don't have to learn for your job.

The best way to find this is to be part of the community. If I can get to know you at user groups, then it is a lot easier to get through the interview process. If I see you at code camps, CodeMash, conferences, Usenet, etc., then I know you are engaged in what you do. Perhaps you blog. Perhaps you just voraciously read blogs. I want to see engagement. HINT: I Google you before I ever call you for an interview.


These aren't the only things we look for, but I feel they are the most important three.

Auditions? Yes, auditions, but you don't have to sing. We generally go through several interviews before the audition.

0. Recruiter screen - Are you an axe murderer? Do you have the required skill sets and attitudes? Team culture match?

1. Technical screen - You say you have nine years of .NET experience. Sure. Prove it. You don't have to score a 95%. We try to see where you are from one to ten across several aspects of your role.

2. Personal interview - The candidate will meet with me, or someone on my leadership team. We spend a lot of time listening to the candidate, trying to figure out the above three aspects, as well as their general attitude, and direction. What are their career goals?

I heartily believe that a person has a career vector, and a company has a vector. The vector represents where they are going, how they are getting there, and what their other goals are. If these two vectors aren't close, or don't converge, then neither the employee nor the company will be happy. It's ok. They are not a bad person; they just have to know it might not be a great match. We also spend a lot of time talking and showing how our business and team works. We want you to feel like you KNOW what you are getting into. That there is a strong ownership culture. That you WANT what we have. That you know what your day is going to be like. What your role will entail. We aren't talking about a ridiculously vague position description (which all end with 'and other duties as assigned'). We are talking about what a day in the life of a team member is like. I don't want the candidate to find that what they thought they were getting and what the job really is are two different things.

3. Then the audition. A time is set. The candidate picks a topic of their choosing. Must be technology related. Doesn't have to be related to the project, or even their role or primary skill set. Preferably something they have learned recently, and something we don't know inside and out already.

The audition serves several purposes. You can ask people about the pillars of OOP all day long, and what generics are, but until you see them flying around Visual Studio, you really won't know if they know what they are doing.

Also, you get some free training/exposure for your team on something new. You get a sense for how that person can communicate in a group, and transfer their ideas and complex technical concepts.

At the beginning of the audition, I will introduce the candidate, and hand out their resume. I then leave the room and get a water. I let the team interact without my presence being an influence. There is 30 minutes of slides/code (we're agile, so we prefer code), and 30 minutes of Q&A on any topic. The Q&A is in both directions, so I expect a good candidate to come prepared with questions they want to ask the team.

Vista has stolen my 'free time'


There was a time when I had a lot more free time. I had time to go check the mail. The real mail I mean. To go get a drink. Perhaps to quickly read a blog post or two. Skim a tv show. Hug the kids. Lots of small activities I could fit into my day.

Vista has ruined that.

Damn you Microsoft!

Now, the laptop boots quicker, hibernates/sleeps quicker (with one button or icon Mr. too many icons guy), runs quicker, loads quicker, searches quicker. I even upgraded my display driver from a standard Vista one to an ATI driver (I have a long hatred for ATI drivers, they always seemed to have problems, not like the nVidia drivers) without rebooting. What happened?

It used to be:

1. download new driver

2. uninstall old driver.

3. reboot.

4. install new driver.

5. reboot.

6. waste 20 minutes reconfiguring the new driver to the settings I like.

New Way:

1. Click 'update'. It was an optional update for the new ATI driver.

2. Wait a few minutes while it is downloaded, a checkpoint is created, and it is installed.

3. Screen flickered a bit.

4. Done. Same resolution, same settings. No reboots.

I didn't have any time to go do anything else. I will never get that time back.

Granted, once I have to start using the Dell drivers, instead of the ATI manufacturer drivers, I am sure it won't be as smooth. Dell seems to have a need to require me to use the Dell version of the drivers, instead of the ATI version. I don't know if there is a technical reason, but the ATI ones seem to work fine. The reason I don't like having to use the Dell drivers is that they never update them. Ever.

Another interesting thing: a software install (a driver is software, after all) should create an entry in the Performance and Reliability Monitor. This tool tracks app crashes, installs, etc., and creates a time graph to show you your 'health' rating over time. Cool tool. I am surprised not to see the driver update listed there.

Information Architecture in Vista


I have been running Vista for a while now. Not once have I had a real crash. A few applications have crashed here and there, and that is going to happen on any OS/platform. Applications have problems at times. It's no big deal.

Each release of Vista during the RC/beta process was better and better. It was faster, and more cohesive. I really love the search in the control panel. Ever since Windows 3.1, I could never find the right icon for what I needed to do. The control panel was always the worst designed aspect of the system. Just a giant switchboard interface, with no real guidance, or rhyme or reason. It lacked any sense of Information Architecture. It was a firehose of options.

Windows XP tried to fix that, with the groupings and whatnot, but I always clicked it back to 'classic view' when I set up my profile, because I had become comfortable with the dysfunctional menu, and didn't like the menu aimed at 'everyday' people.

Vista has even more items in the control panel. They improved the 'everyday' person (Joe Lunchpail) interface by improving the groupings, so that isn't too bad. They helped the experienced, but not expert, user by promoting commonly used features to the grouping level. But the best thing is the search bar on the control panel window. I just type in what I am looking for, and it comes up.

This is a perfect example of a tiered information architecture. Architects think too much about the backend, and they leave users out. That's fine; it will get better as architects start to focus on the whole application, and not just the gears behind the curtain. Those architects that do pay attention to UX and IA will produce better applications, which will help their users kick ass, which is what architecture and development is all about.

So what is this tiered IA?

Tiered IA means that 70% of the screen/window is for novice/new users, 20% is dedicated to experienced/return users, and 10% to 'expert' users. In this case, a novice can plumb through the well labeled categories to find what they want. To protect against going down blind alleys, or ambiguous groupings, an item may be found in several areas.

For regular or return users, the common functions have been promoted to the category list.

For those experts that know what they are looking for (and 'expert' here means someone who is very familiar with a system, not someone with an MCSD), they can just go directly to it.

A side effect in this case is that the expert search bar is very useful to the other categories of users as well.

This tiered approach is used on a lot of B2C web sites that really need to cater to a wide audience. The bulk of the main landing page is about what the site does (perhaps explains what eBay does, or who the bank is). Then a portion is for regular users; they can get some information (most active auctions, today's rates, etc.). That last group, the experts, is usually served by very dense and abbreviated information. A small login cluster is the most common example of this.

I don't plan on turning on classic mode in my Control Panel. I plan on searching every time.

Win a Zune!


So, want to win a Zune?

Register for CodeMash here : www.codemash.org

Put the blog badge on your blog, and blog about CodeMash. Then send an email with your blog link to contest@codemash.org.

Don't have a blog? Just start one. There are plenty of places to start one.

What? You're still reading? Go do it. Now, while you are thinking of it.

Speaking at CodeMash!


I just received confirmation from the content coordinator, Jason, over at CodeMash.org, that I have been selected to present my "Networking for Nerds" topic.

Many of you are rolling your eyes.

Why Brian, you ask, how could you be surprised and excited? You are after all one of the many co-founders.

That is a very good question. The CodeMash tribe has set the bar so high for the quality of the content and speakers that we didn't want to assume the planners would be good enough. After all, what makes a good planner doesn't necessarily make a good speaker.

So, go register while you still can, and come check out the best tech event there is.

The Mad Russian is On the Air!


Alexei is a long time friend of mine. We also happen to be co-workers. He is an engineer's engineer. Around our office, when referring to the skill set of a candidate for an opening, we use the term 'Alexei level' as a measuring stick.

He is also one of the smartest guys I know. He takes every challenge head on, and beats it soundly, or nags me to death until he does.

So I threw down the gauntlet. Alexei, I said, share some of that there know how.

Alexei, welcome to the blogosphere (gosh I hate that term.)

NHibernateRepository is published


Told you I had a lot of free time today.

Dave Donaldson just published his NHibernateRepository assembly. Go read about it.

I normally don't like 'link to something else' posts, but this is really cool, and I know we will get some great use out of it, since we use NHibernate quite often on our projects.

Since I am linking to other posts, check out Jeff's post on 'Please advise...'. He and I share the same derision for this 'line'.


CodeMash Registration



CodeMash is coming soon! We finally have registration up. Go forth and attend.


Yet Another Conference Posting


No, this isn't about CodeMash, but there is big news for that as well right now.

I am at a local conference today. It is usually an OK conference, though it's not for developers, but for project managers. Usually, when I am at a conference, I write or talk about what sessions I am going to, and what I am learning, etc.

Not today. Today I paid $15 so I could get on the wifi, and use some free time to either play NWN2, or blog, or something.

I will listen with an ear or two, but there are only two sessions I am really excited about.

One of the biggest barriers I have seen to the adoption of agile practices is PMs who think that they will lose a job/control. In my experience, there is ten times as much planning on an agile project. It's that outside view (especially by PMs) that agile/XP is about cowboy programming. That it's just a revolt of developers against structure and process. That is so wrong on so many levels.

At this PM conference, there are several sessions on agile project management. Once our PMs went to these sessions a year or two ago, we were much better in our execution.


Pro BizTalk 2006 Not Book Review


A book from Apress just came out called Pro BizTalk 2006, by George Dunphy and Ahmed Metwally. I don't know them, but I know Marty, who wrote the foreword.

This is not a book review. I don't do those. I don't have the patience to think deeply enough about a book, to reflect on it, and comment on it with the depth a potential reader would expect. Jim does a great job doing that, so I will leave it to him.

But, I do categorize books on a simple continuum, and it goes something like this:

0- what? you killed a tree for this?

1- well thanks for the effort, I guess I wasn't the right audience

2- learned a lot, will keep for reference reasons, and loan it generously

3- my eyes are so wide open now they hurt in full sunlight

4- I will make whoever I can read this, even if it is against their will. I will buy them a copy and ship it to them without telling them.

This book is squarely in category four, if you are a BizTalk developer/architect. I am continually amazed at the quality of books from Apress. Wrox had good books, very timely, but they tended to be more 'beta' in content than solid.

So, if you are into BizTalk, this is a great book, but only breeze through the first chapter or two. If you are trying to get into BizTalk, it is also a great book. If you are looking for romance, I would probably direct you to the open source aisle.

How do you get it? I prefer Barnes & Noble. I don't do the affiliate program or anything, just providing the link as a service.

BizTalk Performance Testing Tips


In a lot of BizTalk Server environments, performance is critical. It is not uncommon to hear from a client that they need to be able to process a specific level of transactions in a certain time window. Unfortunately, it is usually followed by the question: "So, how much hardware do I need?"

There isn't any way to answer that question because there are too many unknowns. How big are the messages? How complex are the pipelines and maps? What about the orchestrations, if any? What other systems or adapters will be involved?

There are several strategies for finding out how much hardware you need. The first is a 'grow as you can' model. You deploy your system on a good foundation. A good SQL Server and a good single or pair of BizTalk servers. Once in production, slowly increase the traffic or consumers of the business process. As limits are reached, add more servers to the BizTalk group. This is a very organic model, and allows you to add only what you need.

This model won't work in some enterprises where budgeting and accounting are more important than the properness of the solution. In these cases, they want a number up front (even before you could fairly SWAG it) and you have to stick with it. To that end, a lot of IT groups overestimate the cost of the project, almost in a negligent manner, and create this giant plan. This will either lead to the company spending more than it should (it's always a bad thing to have to go back for a second dip in the money well in these types of organizations), or to the project getting canceled for costing too much money.

There is another way, and it is sort of a blend. You can prototype some of the processes on some trial hardware, and then extrapolate from there to determine the cost of the project. You will still get estimated figures, but they will be based on results, and not on beer and dreams.

Microsoft has finally made public a document called Managing a Successful Performance Lab, which helps you learn how to manage a performance lab test.

I don't want to cover what is clearly laid out in the paper, but I do want to add some of my own thoughts and some high level guidance.

First, make sure that you select a business process that is representative of the work the system will be handling. Build that process out as you would for production. But don't go so far that you end up actually writing the system. It is OK to cut corners. This is a prototype. Just make sure that you involve the adapters and third party systems you will use in production. Which adapters you use can really affect the system's performance.

Make sure you not only find a good process to test, but also set realistic expectations about the traffic it will need to support. For example, a system might sit idle through most of the day, and then have to process large batch files at night as submissions are sent in from partners. Or, the system might receive small requests throughout the day (web service calls, for example), and the occasional floodgate batch (5-10 a month). So, sit down and think about the traffic shaping for the system.

Then, set up your test environment. You should have at least two servers, one for SQL Server, and one for BTS. If you plan on having dedicated hosts (send, receive, exec), then extra boxes would help you model what you think your final production physical environment might be like.

Run the BPA! Download and run the BizTalk Best Practice Analyzer. Fix the first thing on the list, and then run it again. Repeat as necessary. This is a fabulous tool, and helps a great deal. Any issue found by it has a link to specific instructions to fix the issue. It will find practices that you can't or won't want to do (false positives). But it will catch a lot of configuration and environmental issues for you, including the MS-DTC trap, which is probably the most common issue asked about on support groups.

Develop a test plan! Boy, I sound like a PM saying that. Plan out what tests you will run, and what they will entail. Develop a way to track results. The key to running good tests is to only ever CHANGE ONE THING AT A TIME. If you change more than one thing, you won't be able to verify what impact each change truly had. Again, only change ONE THING AT A TIME. It will be tempting to cut corners, but if you are going to do that, you might as well not do the performance tests at all, forge the numbers, spend the budget at Best Buy, and call it a day.

The test plan should also include what tasks should be done at the beginning and end of each session, run, and iteration. The steps should be followed ruthlessly. Again, human laziness is your enemy here. Your best bet is to script or automate as much of this as possible. You should also have a printed checklist and a pencil. A team of people will be better for this than one geek in a corner. They can keep each other honest.

The test plan should include sample messages, and the performance counters that will be tracked for each run. You can always add more perf counters based on what you are looking for. The Perf Lab whitepaper can get you going in the right direction, but here are some you should track:

1. Spool depth

2. Throttling levels in the system

3. CPU %

4. Memory %

5. Disk Idle time on the SQL Server %

We usually track about 100 counters in our tests, as a baseline. A separate machine should be used to track the counters. After each test, the perf counters log should be saved for reference later. We usually assign a number to each test run, and name the log file with that number. This number is then used in Excel to track the results.

The best way to put a load on your system is to use a tool from Microsoft called LoadGen. It is very configurable and extensible. We usually configure it to drop files in the pickup folder at a certain rate for a specific period of time.

We usually break up the test plan into runs. Each run represents a specific traffic shape. For example, we might start with a batch of 100% good messages (no errors) with 10 transactions per batch. Then each iteration of that run would have progressively more load placed on the system. Each run should have the same progression. The progressions are usually 1, 10, 20, 50, 100, 250, 500, 1000, etc. The next run would have a different traffic shape. We will usually do several runs that only differ in how many transactions per file. Start with 10, then 100, then 500, etc. The traffic shape patterns should become more complex in successive phases of testing. We usually start with simple batches, and then evolve the configuration of LoadGen to generate more realistic scenarios with blends of traffic. For example, 20% traffic is steady and in small batches (real time requests), with 50% in regular, but spaced out medium sized messages, with 10% of traffic with significant errors, and then the rest of the traffic as a floodgate scenario. This mix should match your traffic shapes you worked out in your test plan.
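The run/iteration structure described above can be sketched in code. This is just an illustrative sketch; the progression values and transactions-per-batch shapes are the example numbers from this post, not a prescribed plan:

```python
# Sketch of a perf-lab test matrix: each run fixes a traffic shape
# (transactions per batch), and each iteration within a run ramps the
# load using the same progression, so runs stay comparable.

LOAD_PROGRESSION = [1, 10, 20, 50, 100, 250, 500, 1000]  # batches per iteration
TXNS_PER_BATCH = [10, 100, 500]                          # one run per shape

def build_test_matrix():
    """Return (run_id, iteration_id, txns_per_batch, batches) tuples."""
    matrix = []
    for run_id, txns in enumerate(TXNS_PER_BATCH, start=1):
        for it_id, batches in enumerate(LOAD_PROGRESSION, start=1):
            matrix.append((run_id, it_id, txns, batches))
    return matrix

if __name__ == "__main__":
    for run_id, it_id, txns, batches in build_test_matrix():
        print(f"run {run_id} iter {it_id}: {batches} batches x {txns} txns")
```

Generating the matrix up front like this also gives you the run numbers to stamp on each perf counter log file, which makes the Excel tracking mentioned earlier much easier.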

Before each test, the various BizTalk databases should be cleaned out. There are scripts that can do this for you. You don't want later runs to be affected by slower inserts because the tracking database has grown very large. You should also reset any other systems that you are hitting. For example, if you are dropping failed batches to a SharePoint site for manual repair, that doclib in SharePoint should be cleaned out after each test. Your goal is to start each test with the same environment so the test results are reliable. With that in mind, you should grow your SQL databases before testing so that the early test runs don't pay the runtime growth tax on SQL performance.

Before each test a simple message should be run through the system to 'prime the pump.' We have found this helps to normalize the test results, making the test results of small batches more reliable.

After all of the test runs are completed, you will need to determine a scale factor for the system. This scale factor will be used to determine what the final production environment might have been able to sustain. For example, a factor to account for the real process being twice as complex to execute, and a second factor to account for dual SQL servers, and four quad servers in the BTS group.
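The scale-factor arithmetic is simple multiplication, but it's worth making it explicit. Here is a minimal sketch; the factor values are made-up assumptions for illustration, not measured numbers:

```python
# Extrapolating production capacity from lab results using scale factors.

def extrapolate_throughput(lab_msgs_per_sec, complexity_factor, hardware_factor):
    """Estimate production throughput from a lab measurement.

    complexity_factor < 1.0 : the real process is heavier than the prototype
    hardware_factor   > 1.0 : production has more capacity than the lab
    """
    return lab_msgs_per_sec * complexity_factor * hardware_factor

# Hypothetical example: the lab sustained 200 msg/s; the real process is
# about twice as complex (factor 0.5); production has roughly 3x the lab's
# SQL/BTS capacity (factor 3.0).
estimate = extrapolate_throughput(200, 0.5, 3.0)
print(f"estimated production throughput: {estimate:.0f} msg/s")
```

The point is that the factors, however rough, are written down and based on measured lab results rather than pure guesswork.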

Before the test you should become very comfortable with the topic of 'Maximum Sustainable Throughput' for your system. There are several blogs out there on this topic. It is also covered in the Performance Lab whitepaper mentioned above.

In short, MST is how many transactions your system can handle without creating a backlog that it can't recover from. This is different from how many transactions can be completed per second, because each part of the system operates at a different speed. Many times, after a perf lab is completed, a second round is run specifically to find the MST for that system. These tests are usually set up to overdrive different parts of the system to narrow down and define the MST.
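As a toy illustration of the MST idea (the backlog logic only, not a BizTalk measurement):

```python
def is_sustainable(arrival_rate, service_rate, minutes=60, max_backlog=1000):
    """Toy model: feed work in at arrival_rate and drain it at service_rate;
    the rate is sustainable if the backlog stays bounded over the window."""
    backlog = 0
    for _ in range(minutes):
        backlog = max(0, backlog + arrival_rate - service_rate)
        if backlog > max_backlog:
            return False
    return True

def find_mst(service_rate, candidate_rates):
    """Highest candidate rate the system sustains without an unrecoverable
    backlog -- the overdrive-and-narrow-down idea from the lab."""
    return max(r for r in candidate_rates if is_sustainable(r, service_rate))

print(find_mst(120, [50, 100, 150, 200]))  # 100
```

A real MST search overdrives one stage at a time (receive, orchestration, send) because, as noted above, each part of the system runs at a different speed.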

A quick list of things to change between runs:

1- Which server in the group runs which host instances? Breaking receive, send, and processing into separate hosts, even on the same box, is proven to improve performance, because each host instance then gets its own pool of threads and memory.

2- Maybe rework maps, or the intake process on the receive side. Often, when performance is critical, a custom pipeline component will need to be developed.

3- Rework the orchestration to minimize the persistence points.

4- Tune the system in the host configuration screens, or in the registry, to better suit the majority of your traffic. BizTalk comes out of the box tuned very well for the typical business message, but if you end up processing a high volume of tiny messages, or very large ones, you can get more performance by adjusting some of the tuning parameters.

That was a longer post than I expected, and I think I could keep on going. Maybe I will expand further in future postings, maybe with sample deliverables.

Announcing the CodeMash Conference 2007!


It has been almost a year since a bunch of us met at a Japanese restaurant last winter to discuss a new type of event. We had all delivered one-day conferences before, with success. But we wanted more, and we thought attendees did as well.

We wanted something that spoke to technology a little further up the thought chain. Instead of a whole day on how to do .NET, we wanted sessions on topics that affect all developers. Better architecture. Better development practices. Better guidance.

We also wanted to learn more about our own platforms by learning about other platforms.

To that end, we have formed an Ohio non-profit organization whose goal is to put on technical education conferences. The first conference will be CodeMash Conference 2007.

It will be held at the Kalahari, in Sandusky, Ohio on January 18th and 19th, 2007.

We have arranged some major sponsors, and even more importantly, some GREAT speakers.

Great Speakers? YES! Bruce Eckel, Neal Ford, and Scott Guthrie! Check the site out for more information. We have other speakers in process, but we can't announce them quite yet.

This is not another .NET or Microsoft conference. Those are great. We are trying to reach out and lift developers above their platform. These platforms are just the tools we are using today.

The site has just been launched in a basic form, with an upgrade coming soon to allow for attendee registration.

Please check it out and register for email updates, or subscribe to the RSS feed.


Microsoft SOA and Business Process Conference 2006



This post was going to be a daily post about the SOA conference, as it was happening. Unfortunately, I had been on the road for weeks during that period, and then my stay at the conference was cut short for personal reasons. Here are some of my notes, and I will probably blog about some of the topics in more depth as time permits.


The annual SOA conference at MS has just kicked off. We are expecting some major announcements pertaining to the future of the BizTalk platform. I have some ideas about what they will say, but I will hold out for the keynotes to see for sure.

A lot of well-known speakers will be here this week, and I look forward to talking about SOA and what advances there are in helping drive business value.

David Chappell is giving the first keynote right now. He defines some fundamental aspects of SOA (which is universally difficult to pin a good definition on) as:

- Standardize on a service-oriented communication protocol (SOAP is what finally made this twenty-year 'overnight success' catch on; WCF and, outside of Windows, SCA are enhancing this)

- Create the necessary SO infrastructure

- Use BPM technologies effectively

Ultimately, this is all about providing business value. While SOA delivers it in many ways, the business agility enhancement is one of the best: the more easily IT can respond to changing business needs, the better the business will do.

The first keynote was excellent, and I just added him to my 'must-see speaker regardless of topic' list. Keep in mind there are two David Chappells who speak on SOA. How funny is that?


There was a session with Oliver (top manager for BizTalk Server) about what the next version of BizTalk will look like. He was speaking about the version that comes after BizTalk Server 2006 R2.

- Mission-critical enterprise

now: high scale, reliable, managed and controlled

vNext: models that raise the level of abstraction, a distributed execution environment, and filling key gaps in the platform; treat n machines as one machine, with the ability to scale up a normal business app

- People-ready process (people-centric business process)

now: people do the work; human interaction, UI, tasks and roles, Office/DB integration

vNext: rich tool support, pre-built common activities, a portal for state and KPIs

- Rich connected apps

now: internet, massive user scale, access from anywhere, composed services, multiple trust domains

vNext: pre-built services, network transparency, convenient secure identity, tool support across the lifecycle

BizTalk Workgroup Edition

There might be a new edition to fill a current need in enterprise environments. Some organizations run a central core for operations, and need a BTS box out in each distribution center. The licensing is too expensive in this model. MS is thinking of releasing a version that would be less expensive, but would only work with other BTS servers in its workgroup; in the situation above, that means only those core enterprise BTS servers. Interesting idea.


There was some great talk on the new adapter framework, but that is for another post.

Is Ruby Eating Your Lunch?

We have an internal discussion going on at my job about how Ruby is eating into the Java ecology. It seems that the Java community has become VERY interested in Ruby. The .NET crowd is too, but it seems the Java tribe picked it up quicker, and began running with it faster.

I made a half-joking remark in that discussion that it seemed Ruby was eating Java's lunch: easier to use, developers like the tools and language better, and they feel more productive. Our Java team lead fired back that Ruby is eating .NET's lunch as well. And he may be right. I hope he is.

But this post isn't really about whether Ruby will dominate in 12 months (my prediction: it won't, not in the classic sense).

This post is about the effect Ruby (and its ilk, like Python) has on our ecologies, both Java and .NET. By ecology, I mean the community strength and presence, thought leadership, toolsets, open source projects, and modern development practices.

I think that the Ruby tribe will help Java/.NET. When you learn about other platforms and languages, you learn more about your own platform. You don't need to abandon your toolset for the next shiny thing just because it's new. But you should pause, learn from it, and apply some of that thinking to your own ecology. Steal their ideas, copy their tools, port their code. A different ecology has different constraints and assumptions; leverage what their different assumptions have allowed them to create, just as they will borrow/steal from ours, since our assumptions differ from theirs. Evidence? NHibernate, OS X, SLED, and many others.

This platform competition can only make everyone better. First, it will drive the platform owners (Microsoft, Sun, IBM, etc.) to put more value into their platforms, usually through better tools that raise the level of abstraction (WPF, WCF, WF...) to make developers more productive (stop being a plumber!). Second, it will drive the ISV/open source crowd to refine, improve, and create; to push the boundaries. And third, and biggest, it helps developers and architects think differently about their platform. Instead of just Ctrl+C/Ctrl+V-ing the system architecture from the last project, they can inject improvements and new concepts to make it better. Dependency injection and aspect-oriented programming are great examples of this, as is the new drive for software factories. This is the digital reflection of the European explorers finding crazy stuff in the world and bringing it back to impact their culture. Another analogy would be cross-pollination in the plant world.

This idea is a big factor in my team's prime directive of 'Better, Faster, Cheaper.' In order to do that you have to constantly improve and learn. A great way to do that is to see not only what your own field does, but also how other, maybe not so related, fields do things.

To follow this idea, some of the local user group leaders (not just .NET, but across the communities) are trying to put together a cross platform technology conference. Hopefully it will come together. There is a ton of planning to do, and it has been like herding cats.

Speaking at CincyPG


I have the honor of speaking at the Cincinnati Programmers Guild on August 29th, 2006. It's a Tuesday, at 6:30pm.

It will be my first formalized presentation of 'Networking for Developers'. The topic does not cover how to meet people in your industry, but rather what you need to know as a developer about networks.

Troy wrote a great abstract:

The talk is aimed at giving developers some useful insight into how our computers get hooked together, talk to each other and ultimately make our applications work. If you've ever spent too much time trying to track down a bug in your program that turned out to be a firewall issue or something else networking-related, then this discussion is for you. And if you don't know much about computer networks, it's a great time to learn before you're expected to know all about them for your next project.

If you are in the area, see if you can drop by and say hi!

Kellerman Software is on the air!

Labels: ,

A friend of mine, Greg Finzer, who is also a great developer, has launched a company to sell components for developers. Greg has a great way of distilling complex needs down to simple to use components. Check them out at www.kellermansoftware.com.

Why am I posting this? I like to support my friends. I also like to support people who have an idea and take risks to at least try to make it a reality. There is nothing worse than forever wondering whether your idea would have worked or not.

Intro to BizTalk Server and Windows Workflow Foundation

I just finished my mini-tour of Ohio with the talk about BizTalk Server and WF. The sessions went well, although I ran out of time in Toledo, and wasn't able to complete a more complex demo. The goal of the session is to introduce workflow and the two major platforms to do it with, so I think I at least accomplished that goal.

I want to thank NWNUG and the Dayton .NET Developers Group for having me. As usual, it was a lot of fun.

I think the funniest part was in Dayton. They were reading off names for the swag, and the announcer couldn't pronounce the last name of the winner; the first name had been abbreviated to just T. The announcer started spelling it, and a woman (I actually should say THE woman, as she was the only one) stood up, said that it was her, and pronounced a name that most likely used every letter in the alphabet. Someone in the audience (James from work) said, "Why didn't you just say 'the girl'?" Everyone laughed. James is funny, but that was the funniest thing he said all day. Then I replied, "I am surprised no one asked what a girl was."

People have asked for the slides, so I posted them on my site.


WF on tour!

I will be speaking about BizTalk Server and WF at NWNUG July 25th (Tuesday). They have a really interesting UG format. The first portion is meant to be easier content (level 100 type stuff). They then break for pizza, and the second session is related to the first, but at a higher level. A great way to help people ease into the water!

Jim has graciously invited me to speak at the Dayton .NET Developers Group again. That will be this Wednesday (July 26th). The topic will be very similar to the NWNUG session.

With BizTalk Server close to my heart, and knowing where MS is taking the product, workflow is near and dear to me. Mark my words: the landscape for WF/BizTalk/middleware will be drastically different in two years. There are major mind shifts happening, and I think it is very exciting. BizTalk Server skills have always been a niche need, much like HIPAA, X12, etc. Soon most developers will have at least basic skills in workflow and related technologies, just like SQL Server skills are prevalent today (with plenty of room for SQL architects, Bruce).


The finer points of the WSS adapter

I have recently finished a project for a client that used the IBS model: InfoPath -> BizTalk -> SharePoint. In this application, the InfoPath client used several web services; some were hosted in IIS, and some in BizTalk. When the user submitted the form, it was sent to a BizTalk orchestration exposed as a web service for processing before being stored in a WSS doclib. [In InfoPath you have to send only the data, not the entire form, to the web service.]

The orchestration was nicknamed the router. It did some server-side validation (you should never rely on client-side validation; that is just there for the user's benefit). It also dynamically configured the WSS send port and did some other housecleaning tasks. Each form was for a different business unit (more than fifty options). The router determined which site (based on audit year) and doclib (based on business unit) the form should be sent to.

While doing this project we learned a bit about the WSS adapter that we hadn't run into in prior projects.

Office integration. The WSS adapter has an option for 'office integration'. When an InfoPath form is submitted, the adapter uses this setting to integrate with an InfoPath solution that has been published to a SharePoint form library. This is handy. At the top of each 'form' for InfoPath (the data fragment) there are some processing instructions (PIs) that tell Windows that this is an InfoPath form, and where to go get the form template (the visual display of the data, basically): in this scenario, which SharePoint server and doclib holds the template. It is nice that the adapter adds these to the data for you when it is sent to the WSS site, because these PIs can't be in the data when it is submitted via SOAP to a web service; that would hoark things all up. The downside is that the adapter isn't terribly intelligent about how and what PIs it adds. It covers the basics, but it will not handle the file attachment PIs you need if files have been attached to the InfoPath form. To fix this, you have to disable office integration for the port and set the PIs yourself in your orchestration, along with whatever other dynamic port properties you need.

// Assign the outbound message so we can set context properties on it
FormOut = FormIn;

// Set the file attachment PI ourselves, since office integration is off
FormOut(XMLNORM.ProcessingInstructionOption) = 1;
FormOut(XMLNORM.ProcessingInstruction) = ""; // the InfoPath PI string goes here

// Build the file name from the form data, replacing dots with underscores
strFileName = FormIn.ControlInfo.UnitName + "_" + FormIn.ControlInfo.ControlName;
strFileName = System.Text.RegularExpressions.Regex.Replace(strFileName, @"\.", "_");
FormOut(WSS.Filename) = strFileName + ".xml";

// If a form document is already in the library, overwrite it
FormOut(WSS.ConfigOverwrite) = "yes";

// Keep the adapter from adding its own (incomplete) set of PIs
FormOut(WSS.ConfigOfficeIntegration) = "no";

Document versioning. We also found out that the WSS adapter does not partake in the document versioning feature of WSS doc libraries. We had hoped to leverage this in our solution so that a poorly submitted document could be rolled back; these documents are worked on by a global team over the course of a year, and things can happen. We don't know WHY versioning doesn't work for the adapter (maybe it bypasses the process somehow?), but it doesn't. It's a shame, really. Adrian confirmed this for us. He has a great blog, some great webcasts, and works on the adapter for MS. He is also very active in the TechNet newsgroups. Our workaround was for the routing orchestration to always submit a copy of the message to an archive doclib (one for each audit year). We let the adapter auto-rename the document if there was already one in there with that name. Our form submissions are named for us by the orchestration, following a standard pattern. Only the business admin user has access to this archive doclib, and they will be able to go in and get any of the older versions they need. It's old-fashioned, and will likely bloat their WSS content database, but it will work. The business user promised to go in and 'garden' the folder every once in a while to keep it fit and trim.

Go Speed Racer! Go! Another bug-a-boo we ran into was a race condition between the submitted form running through BTS and how long it took InfoPath to release its lock on the file in WSS. The orchestration was beating InfoPath every time. We had to put a timeout/retry on the WSS send port. The first attempt would still fail, but it would succeed on the second try because we set the retry interval to 10 seconds instead of the default five minutes. Why so short? Why not? The users would submit their form, then alt-tab back to the WSS browser and hit refresh to see their updates.

Here was the error we were receiving:
Error details: Microsoft.SharePoint.SPException
The file xxxx.xml is checked out or locked for editing by user

Form, name thyself! In an InfoPath form, there is a property in the File menu that determines the form's identity. This is a secret identity that is hidden (sort of) from the user; we wouldn't want to expose the user to anything hi-tech or anything. This form name and namespace mashup is used by InfoPath to determine whether that form is in its local cache. If it is, the version numbers are compared, and if they match, the local version is used. If not, it asks you (the user) whether you want to upgrade the form, then downloads the new form and updates the cache. This is all fine and dandy. Until we use it. We had the same form published to several different doclibs, one for each audit year. Since each audit year had its own WSS site, we used a consistent structure for each site. The main doclib for these forms was always named the same; only the root URL of the path changed based on the audit year (2005, 2006, etc.). These forms weren't totally identical; there were some very minor differences, at least at this point. For example, the 'year' on the form defaulted to the year of the audit site the form was published to. The form on the 2005 site had that field default to 2005, whereas it was 2006 on the 2006 audit site. Confused yet? InfoPath also auto-version-numbers itself (if you want). So when we published to 2005, it was version 1.1. When we made the minor tweak and published to 2006, it would change to 1.2 (a trivial sample in this case; the version numbers are four-dimensional). What would happen is, as a user bounced from year to year, they were constantly being nagged to upgrade their form because it was out of date.

Well, you can't have that. So I thought I would give a custom form name (different from the file name) to each version. This would multiply the number of distinct files of the template, but would separate them in the cache and avoid this multiple-personality issue. But every time I published the form to the WSS doclib, the publish would overwrite the form name back to what it was.

In reality, the publish was changing the form name to match the name of the doclib itself; it was a coincidence that this is what the form name was originally. This took me some time to figure out, and it stumped us for a while. The doclib name really needed to stay the same from year to year; we didn't want to have to rework all of the routing orchestrations, etc. We solved it by changing the display name of the doclib: the web address (URL) stayed standardized, but when we published the InfoPath solution to the doclib, the form name would be set to something unique, at least in our solution. This was a hidden action inside InfoPath/WSS that we didn't know about, and it was hard to figure out. It was very confusing and frustrating, as I am sure this post seems.


Final TechEd thoughts

Well, it's Father's Day, and I have finally recovered from TechEd. I think I slept most of Saturday just trying to shake off the exhaustion.

Our session on designing BizTalk solutions on Thursday morning went much better than I had hoped. I expected ten people to show up, because it was at 8:00am, which is a hard time to get up on the Thursday of TechEd. About 100 people showed up, and our score was 8.5 on a scale of 1-9! We were in the top ten of all 1,006 sessions until later that day, when we got bumped to #11. Still a great showing. Personally, I think I have done better before, but the crowd seemed to have liked it.

I ended up going to several IIS7 sessions, a session on DSL's and frameworks, and went to some other great sessions/chalk talks.

The new architecture for IIS7 is really awesome, and I can see how it will really help in the future. The first major step they took was to break the core functionality that supplies authentication, caching, directory browsing, static file serving (and plenty of other things) into separate components, each in its own module. You can now configure the server to load only the components into the IIS7 pipeline that you need. This helps with performance, and lets you completely customize the threat surface of your server. Don't want any file browsing to happen? Just don't load the module. This is awesome. Then they took the ASP.NET component and broke it up as well. The current module contains all ASP.NET functionality in one box: IIS6 receives a request, goes through any IIS authentication and authorization, caching, etc., and then passes it to the ASP.NET module. ASP.NET then does its own authorization, goes through its lifecycle, and hands back to IIS for the second half of the pipeline, eventually sending back the response.

In IIS7 you can run ASP.NET in the classic mode, loading the consolidated component, or you can load each component separately, and natively, in the IIS7 pipeline. This will increase your performance and improve your security.

MS has been quick to point out that this also improves the flexibility of your server. How? If you load the forms authentication component in the IIS7 pipeline, and then load the PHP or ColdFusion ISAPI extensions, you could suddenly use ASP.NET forms auth on your PHP applications. Or leverage any other piece as well!

The other great thing about IIS7 (that I can think of right now) is the distributed config files. The metabase is gone forever! Yeah! That thing was a pain in the rump. Your server settings are now stored like all other .NET configuration: there is a server-level and a site-level .config file, and you can put the config of your site into the web.config of the site itself. That will make it so much easier to deploy an application to a remote system. No more calling up the admin to make a simple change, like the default document. Of course, the sysadmin has to grant these permissions; they can make the server-level config file un-overridable.
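As an illustration of the idea (a hypothetical site-level file, not copied from an IIS7 build), a setting like the default document can now travel with the site:

```xml
<configuration>
  <system.webServer>
    <!-- The default document, once a metabase setting, now lives
         in the site's own web.config and deploys with the app -->
    <defaultDocument>
      <files>
        <clear />
        <add value="home.aspx" />
      </files>
    </defaultDocument>
  </system.webServer>
</configuration>
```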

This config can be managed in three ways: directly through the config file, through WMI, and through the new management UI. If you write a custom IIS7 pipeline component, you can implement your own admin UI and have it integrate into the management toolset. Very awesome!

There are other nice parts, like FREB and tracing, etc. I might go into those later in other posts.

We also went to the Fenway Party with Train. It was a great time. I didn't expect to like the music, but it was really nice.

Going to TechEd is like going to Disneyland for the first time. We call it "Death by Disney". You are surrounded by your "type" (geeks or families), and have only a week to partake in as much stuff as you can. The laws of physics prohibit that, but you do your best. In the process you end up rushing and exhausting yourself. I have learned (on both occasions) to go slower and take your time; you will get to experience more on the next trip. By doing this, you may not DO everything, but you will definitely get a lot more value out of it.

I already can't wait for next year.


TechEd Tuesday Treatise

Wow! What a day. I needed some time off from going to sessions, so I hung out in the architecture TLC area. I got to watch an ARCast being recorded. The topic was the new service library GAT tools from patterns & practices. It was really interesting to see them talk about these new features. You should check out the kit. They have a CTP now, and will release at the end of July.

The best talk so far at all of TechEd was the SOA architecture talk by Beat Schwegler. He was a great speaker, very calm but very engaging. His slide deck was definitely outside what the content owners have been forcing us speakers to follow. I think that is cool. I have been toying with swapping to a separate deck at the last minute. We shall see. I do want good scores, but I also want to be asked back next year.

One of his points was that services should be designed around the what of the business, not the how. Developers tend to immediately think of how, when an architect should always focus on the what and the why: what the business needs, not the features wanted. Great concept, and something that I have always thought was important for an architect.

I did get to have breakfast with someone from the IIS7 team. I need to go back to their kiosk to get her name. You could see the passion for IIS7 just seeping out of her pores; she really believed in the power of the product. I had said that I wasn't going to any of the sessions because I saw them all at PDC. She said that everything from the PDC was 'old news', and that I should check them out. I decided to take her up on the offer, and sat in on the talk about tracing at the HTTP module level in IIS with the new FREB toolset. Neat stuff. You can have it trace at a very restricted level: instead of killing performance by tracing every site and every action, they let you really nail it down tightly, to perhaps only 404.2 errors on a specific page or site. Each failure report is in its own XML file, so they are easy to break up and use.

The big news of the day, and I am glad that I can finally blog about this, is the announcement of Visual Studio Team Edition for Database Professionals. This is like Team Developer, but for DB developers. They separate the data from the schema, and track the schema just like source code. This isn't just scripts and so on; this is the true schema. You can refactor, write tests, and construct automated builds. The great part is that you can have it populate sample data into the schema, which helps you test all permutations of a possible field based on its requirements and restrictions. I grabbed several DVDs of the CTP. It should be RTM by the end of the year, I believe.

I think this is great innovation from Microsoft. Our DBAs have been stuck with crappy tools, with no improvements for ten years. The tools have become nicer, but have never really offered a true enhancement to the feature set.

And now for the daily slog (swag log).

1. Awesome Vista backpack
2. O'Reilly t-shirt for buying books
3. More copies of Vista and an Application Compatibility Toolkit
4. Several pens
5. More Source Force 'action figures'
6. A nice metal keychain flashlight
7. Awesome patterns & practices DVD
8. DVD for VSTO
9. 4 DVDs of VSTS for Database Professionals



The 'blogosphere' (I always hated that term) is aglow once more with a daily flood of TechEd updates. So, yet another TechEd conference update.

I had a depressing start to the day. My first session was terrible, and the speaker was bad; the last ten minutes were what I thought the whole session was going to be about. I need to learn more about the new Office Scorecard Manager so that I can use it on a few upcoming projects (both internal and external). The speaker spent most of the time demoing and talking about BizTalk BAM. BAM is awesome, don't get me wrong, but the topic was supposed to be scorecards. I learned my lesson, and should have bailed as soon as it was proving to be sucky.

My second session was to be on building financial systems with BizTalk and human components. Brian L and I thought maybe they were going Borg on us, but then decided that a human heart was likely NOT at the center of any financial system. James pointed out this was possible, if that heart was black, small, and lifeless. Anyway....

This second session turned out to be a waste of time as well. It was mostly a commercial for K2 (which is a fine product), but I felt a bit of bait-and-switch going on. Come to talk about BTS and big finance systems, and it turned out to be car insurance and K2. No thanks. Being the quick learner that I am (do more of what works, and less of what doesn't), I bailed big time and moved on to the VS Team Architect session. Loved it. See James' post on this as well. It was great. I love the toolset, and just need to cut over from doing the same stuff in Visio.

I happened to be wearing a blue Microsoft BizTalk shirt today, and it was amazing what I was getting away with. In the back of each room there is a table against the wall. These are mysterious tables, almost like Stonehenge; no one REALLY knows why they are there. I would just walk in, move the table, rip some chairs off their twisted, DNA-chain-like assembly, and sit in the back with plenty of room, laptop set up, etc. Even the door minders (who ask "Can I scan you?" when they really mean "Can I scan your badge?"; the badges are RFID, by the way) would approach me like I had some authority in the situation. One asked if the lighting was fine, etc. Sure, lights are nice. In principle, I guess.

After a terrible meal for lunch (usually the food is quite good; this time the rice was crunchy, like chewing on resistors (don't ask why I know that)), I headed up to yet another scorecard session. I thought I would give the topic another try. This session was GREAT. It was presented by people from the MS team, so they knew what they were talking about. They have taken the accelerator kit and turned it into an easily usable platform with a great extension system. They boiled the SharePoint web parts down from 9 to 2! That's great simplification.

After that session I was asked to pitch in on Brian L's BizTalk architecture chalk talk in the TLC (Technical Learning Center: the area of the conference floor with lounges, MS product people, and hands-on labs. This area is renamed more often than Indigo, no, WCF, no, WinFX, no, .NET Framework 3.0/NETfx.) We were in one of the little theatres with about 12 chairs, and most were filled. Since we didn't have any content planned, we played 'stump the chump,' and we had some great questions. The area was outfitted with a singleton whiteboard: you can use it once, and then it must be thrown out. Not very useful, if you ask me. Reminds me of buying dog food on the Internet, for some reason. We tried an eraser, our hands, and even that deathly alcohol spray stuff. The only thing that worked was turning it over to use the second side. Built-in archiving, I guess.

The GREATEST thing about the talk was that Lee Graber (BizTalk Message Box God) kind of slinked into the back of the space. I didn't want to 'out' him as a product team member, but he eventually piped up and took part in the conversation. It was a great time. He winced when I told him I have an Outlook rule that flags ANY email written by him on the MS internal distribution lists (which I have access to because of the CSD VTS program). We had a great time talking about some extreme performance situations, and about a specific customer need around BAM and instrumenting when something DOESN'T happen within an SLA. It showed I really need to dig deeper into BAM. Which isn't hard to say; BTS is such a broad platform that it is hard to know a lot about a lot of it.

Got my picture taken with the Webcast guy. I did that just for Arnulfo, I think he will enjoy the pic.

The last session of the day was on IronPython (a .NET-native, dynamically typed language that is a full and true implementation of Python) and Ruby.NET, which is a bridge between Ruby and .NET. The bridge was cool. You could call Ruby from .NET, implement .NET interfaces in Ruby, or call .NET classes as well. Great session, even though I am not really all that interested in this fad.

There are plenty of other things that happened today (like the reception, and the 19-mile walk to a tube station only to ride it for 3 stops to get to a restaurant. Note to self: never let Ian navigate.)

But I know everyone is desperate for the swag log of the day.
Here is the pic:

1. Codezone t-shirt (just like the one from the PDC, but smells nicer.)
2. Two Windows Mobile hats. I thought they were handing out mobile devices; turned out they were mobile hats. I don't know of any immobile hats. Don't know how useful those would be anyway.
3. A small compact screwdriver toolkit thing I stole from the TechNet community booth after being told I had to answer a quiz to get it. I told them I knew all the answers anyway. Greg F gets this, as he is the resident tool gadget guy.
4. Really cool giant foam Lego blocks that are branded with the now-defunct WinFX naming.
5. Hat for visual studio and central region developers. This one I am keeping.
6. Free SharePoint magazine
7. Three little MSDN hero foam action figure thingies. Got these for the kids and for Arnulfo (who collects them.) As a side note, I got a Channel 9 guy at the PDC, and he lasted about a week until my son ripped his head off. Oh well.

That's all for now.


Tech Ed 2006: Day 0

So, day zero is nearly complete. Today is Sunday. I arrived yesterday after completely missing my plane. I thought my flight was on Sunday, and when I went to print out my itinerary Saturday afternoon I made the connection that 6/10 was indeed not Sunday; someone had rudely moved it to Saturday. This would have been OK had I noticed this inconvenient change before my flight. But I noticed it at 1:30, and my flight left at 11:00. I called up the travel people, and they re-booked me on a flight later that day. Got to the Hilton just fine, thanks to them.

While on the shuttle from the hotel I hooked up with Brian Loesgen, a fellow MVP and VTS'er.

I spent the day registering for the event, and doing my initial check-in at the speaker lounge. Registering was quick and painless, and instead of a normal laptop bag this year, we get man-purses. They are actually quite nice bags, though your laptop goes in vertically.

The speaker lounge was less 'lounge-y' than I expected and more 'airport cattle corral': uncomfortable chairs and basic snacks (the same ones available outside the room). There is a rumor that the speaker area has its own private network, so maybe I will check that out, since the hotel has less bandwidth than a remote farmer in Australia. Right after that I sat down and pawed through the marketing crap in the bag. Most went immediately into the trash can as worthless, meaningless crap. I had to work with marketing back in my .com days to put together inserts like this, and we always put more thought into them than these people have. One had the title "Bring this card to our booth for a demo of our product." OK, so you aren't going to give me a demo if I don't have this piece of dead tree? OK, enough ranting about the marketing chaff.

I spent most of the day at the MVP summit. Got to meet with some great people, some old friends, and some brand new. I got to know my MVP handler, Kim, who is a blast to talk to, and met some of the other MVP architects.

After that we decided to walk to a restaurant called the "No Name" restaurant, a little seafood place down on the docks. After some 'agile' navigation we arrived to a nice feast. It reminds me of the places I was used to growing up in Maine (Ayuh!). Boston is really bringing back some of those memories.

After dinner we hoofed it back up to the center to see the key note. It wasn't bad. There was some corny 24 (the show) rip off stuff, and they had one of the actors from the show helping out.

They finally announced Visual Studio Team Data. This is going to be a great tool for data architects and data developers. It has a bunch of tools around unit testing and refactoring which are going to kill in my shop. They showed Windows Compute Cluster, which they showed at the PDC, and a few other things.

Now, for the important part. I am not normally a swag-whore. I have known people in the past who will go out of their way to get as much free stuff as possible, regardless of its value. Brian L has a great notion to use the swag as geocaching presents. I thought that was a great idea.

So the swag count for day 0. I intend to log what I get. I do not intend to go out of my way to get stuff, so it will be interesting to see what I walk out with.

Here is a picture of my swag, and the contents.
1. Man Purse with random stuff inside. Nice to get DVDs of the stuff I spent days downloading just recently (Vista, etc.)
2. The hotel room key is office branded. (not really swag, but the marketing machine is set to 11). Same goes for the Windows HPC do not disturb sign.
3. DVD of random goodies I was given. No idea what is on it.
4. MP3 player from the MVP team.
5. Vista-branded magnetic stick-and-ball thing, from the MVP team. I guess they raided the Vista swag closet before coming here.
6. Bottle of wine for being a speaker. Nice presentation.
7. Two little compass keychain things from the keynote.
8. A shirt (thrown at me from someone) that says "Is your network up today? You're welcome.".

Can't wait for some of the sessions tomorrow. Time to go find something more to do.


HOAP slide deck

I have been asked by some friends at the user group to post the slide deck for the agile talk. I put it up at my new site on Office Live, www.brianhprince.com, lower right-hand side. The editing tools for Office Live aren't even close to what SharePoint can do, and I am starting to see some of the limitations. Anyway, enjoy the deck. I also included all of the handouts, etc. Let me know if you have questions.

HOAP in Columbus

There will finally be HOAP in Columbus. I will be doing Hands on Agile Practices this Thursday (5/25/2006) at CONDG.org. It is at the Microsoft building in the Polaris area. There have been plenty of great talks on Agile/XP and what it means; they explain the why really well, the beliefs, etc. After those, though, I have always wondered about the how. This talk will focus on the things that WE do. HOAPfully this will help you adapt your practices. Then, as usual, it's hoggy's afterward.


BizTalk 101 in Findlay, Ohio

I have the pleasure of presenting my BizTalk 101 talk to the Findlay, OH area .NET users group on 5/24/2006. I really like driving up there and working with them. They always have great questions. If you are in the area, you should drop by and say hi.

I will cover what this thing does, and how you can use it in your enterprise, your application, and how it can make your life better. :)


Where do business rules go?

‘Where do I put my business rules and logic? Where does validation fit in?’

I don’t know how often I hear this from clients. It is asked in one form or another during most coaching or development engagements. Sometimes it is because of BizTalk, and other times it is because of Atlas (or some other front-end framework).

Our first rule of validation:
‘All validation is done on the server side.’

The business layer is responsible for validating its input. We never trust input coming from outside the business layer. This is a fundamental guideline for application security. Any validation on the web client is purely to support a great user experience. It should definitely be there, but it is just for the user’s needs, not for the system’s needs.
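To make the rule concrete, here is a minimal sketch in Python. The names (`validate_order`, `place_order`) are made up for illustration and not from any real framework; the point is that the business layer re-runs validation no matter what the client did.

```python
def validate_order(order: dict) -> list:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if not order.get("customer_id"):
        errors.append("customer_id is required")
    if order.get("quantity", 0) <= 0:
        errors.append("quantity must be positive")
    return errors

def place_order(order: dict) -> None:
    # The business layer never trusts its input, even if the web
    # client already ran identical checks for the user's benefit.
    errors = validate_order(order)
    if errors:
        raise ValueError("; ".join(errors))
    # ... hand the validated order to the persistence layer ...
```

The client-side copy of these checks exists only to give the user fast feedback; the server-side copy is the one the system actually relies on.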

So, where do you put your business rules? As with many things with BizTalk, it depends. The first aspect to consider is how you are using BizTalk in your system. If you are using it as middleware to integrate several different systems (sort of an ESB approach), then your business rules belong in the systems themselves. You can still use the BizTalk Business Rules Engine, but it should be called from your applications, not embedded in the messaging layer of your bus.

However, if you are using BizTalk as a platform to run your application on, then there is a continuum for you to consider. The most important factor in deciding where on the line you want to be is how often these business rules will need to be modified. The more often they change, the cheaper and easier you want it to be to update them. Remember that a good business changes, and its systems must support this.

On the far left side is the ‘custom code’ approach: you bake the rules into your source code. This gives you the best speed of execution, but the worst story for maintenance and management. You will not easily be able to change a rule if you need to, and you will bear the cost of moving code changes through your lifecycle (testing, QA, promotion to production, updating support documentation, etc.). This is also the most brittle solution, and will reduce your system's ability to be agile.

The other side of the continuum is to put the rules into the BizTalk Business Rules Engine (BRE) and then access those rules either through orchestrations, web services, or through the BRE API. The API is the fastest, but the web service approach can help you leverage those rules in other places. One of the best advantages of this approach is the centralization of your rules. Keeping them in one place can greatly reduce the cost of maintaining a system. Of course, if you aren't using BizTalk you can still do this; just find another rules engine to use. On our Java projects we use JRules. I wouldn't build something like this when there are so many good alternatives out there already.
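The centralization benefit isn't tied to any particular engine. As a hedged, language-agnostic sketch (nothing here is BRE or JRules API; the registry and rule names are invented), keeping every rule in one registry means a rule change touches one place:

```python
# Hypothetical sketch of centralized rules: one registry instead of
# rules scattered through code and orchestrations. A real rules
# engine (BRE, JRules) adds versioning, tooling, and editing by
# non-developers on top of this basic idea.
RULES = {}

def rule(name):
    """Decorator that registers a rule function under a name."""
    def register(fn):
        RULES[name] = fn
        return fn
    return register

@rule("order-discount")
def order_discount(order):
    # The kind of rule that changes often: 10% off orders over 100.
    return 0.10 if order["total"] > 100 else 0.0

def evaluate(name, fact):
    """Look up and run a named rule against a fact."""
    return RULES[name](fact)
```

When the discount policy changes, only the one registered function changes; callers keep asking for "order-discount" by name, which is the same decoupling a rules engine gives you via policy names.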

In the middle of this continuum is the option to put rules in the orchestrations themselves. This is risky in my view, but an option. I only use it for trivial rules that are truly a workflow routing question. Even if the rule is simple, a tax rate for example, I know that in the future it can grow to be more complex. This approach is less expensive than maintaining code, but more expensive to maintain than the BRE based approach.

I think your decision can be made on a case by case basis as you are building your system, but sticking to one approach will make your system more consistent.

Hopefully this can act as some guidance for you as you design your system. Unfortunately, there are rarely hard and fast rules in architecture. Most decisions come down to a choice where you need to weigh the benefits and costs of each option. In the end, you should always strive to reduce system brittleness, reduce maintenance costs, and dampen the ripples that changes cause in the system.


Thanks to areyn for helping with this post.


I have setup a site for myself at www.brianhprince.com. I am keeping my blog here. I setup the site with MS Office Live. I wanted to get the domain name for free, and have an easy place to host files or stuff that I reference in my blog.

I also wanted to try out the new service. It isn't bad for a beta. There are some significant features missing, though. I can't get rid of the lame border around linked images, nor can I use custom HTML/CSS to place my own content. Even the content web part in SharePoint can do that.


TechEd 2006 Schedule

I am getting very excited about going to TechEd 2006. Not only do I get to go, but I get to present. This will be great.

I will be attending the second half of the MVP Engagement Event Sunday afternoon, and the influencer's party on Wednesday.

Keith and I were just told that our session has been booked for that Thursday (6/15) at 8:00am! So, it looks like I won't be partying too much at the Influencer's Ball.

Our session is :

CON329 - BizTalk Server Solution Lifecycle: Planning and Design (Part 1)
Level: 300
Part one of a three-part series of sessions covering the design, implementation and management of a BizTalk Server solution. The first session covers the common scenarios where BizTalk is an appropriate solution and what questions you should be asking to develop appropriate timelines, resource plans and deliverables. This session provides experienced IT project managers, who may not have previously worked with BizTalk Server, the background needed to successfully launch a BizTalk Server project.

Timeslot: 6/15/2006 8:00 - 9:15

Ours is the first of three related sessions. Please let me know of any requests for our topic. I hope to see you there!


Simplicity bites

I am a big fan of simplicity, which is an unusual trait for us architects. But sometimes, it can waste some of your time.

In BizTalk 2004, while using VS2003, if you wanted to push up your solution to the server to test it, you had to (chant with me now) RE-build, RE-GAC, RE-start [the host]. This can be a huge hassle, and slows down your momentum during development. To the point that you almost avoid it as long as you can. This always caused problems for me, since I would be tempted to break the 'change one thing at a time rule.' This just led to more frustration on my part.

With the new BizTalk 2006 developer tools that plug into VS2005, they made this so much simpler. You now right-click the project and choose 'Deploy.' And that's it. It's all handled for you under the covers.

VS will rebuild (if needed), upload, re-GAC, etc., to get your new code running. Very cool. It has hugely increased my productivity, and has made it easier for new BizTalk developers to get familiar with BTS. The old way was a huge barrier to someone learning this on their own.

The new process has worked just fine for me for a while now. But I generally stick to content-based routing, and do as much as I can with pipelines/maps instead of orchestrations, at least with the stuff I happen to have been working on in BTS06. I have been working heavily with the SQL adapter, doing tests on calling stored procs, updategrams, and debatching responses, so I had no need for orchestrations.

Then one evening I was trying to do a quick and dirty orchestration. I got the basic version up and running with no problems. I tend to develop my orchs in baby steps, with multiple short circuits to see what is happening. When I went to test the next baby-step version, the new shapes in the orch weren't firing. The orch would just stop on the last shape the old version ran. HAT would step through just the old shapes and ignore the new ones, even with breakpoints! At the time I didn't recognize the true behavior that was going on, and it led to at least a half hour of cursing. Eventually I went back to basics and restarted the host that the orch was running in. That fixed everything. My excuse, of course, was that I was tired from playing Oblivion. :)

[Side note: This also reinforces my rule number 8 of development: "The size of the root cause of your problem is inversely proportional to the length of time to find it." Meaning, a major problem can be found quickly, while a small problem, like a missing semicolon, can take hours.]

Of course, knowing about the orchs and assemblies, etc., this makes perfect sense, and I am ashamed this even happened to me. But I became dependent on the new simplicity in VS2005, and forgot a little of what was happening under the covers. The old orch assembly was still loaded, and even though a new one was published to the GAC/server, it didn't matter.

So, sometimes, simplicity can bite you. Sometimes it makes you forget the underlying 'stuff.' Sometimes it causes you to take for granted what is happening. For shame. :(

Anyway, I learned my lesson. I did find a new property on BizTalk projects in VS05 that allows you to have the host auto-restart. More simplicity! I didn't turn it on, because there is a delay in deployment while Studio waits for the host to restart. But it's nice that it's there. You can also set which BTS application your project is deployed to, which can be important.


HOAP in Cincinnati

Last Tuesday, Michael and I went to CINNUG to present our Hands on Agile Practices talk. This talk has been well received before, especially in Dayton. Something was in the air this time, though. It might have been the Pepsi Michael pushed on me before the session started (I don't normally drink soda).

The group was in a great mood, and we had a ball. The time flew by, and everyone was laughing and hollering. CINNUG meets at a training center, and in the room next door there was a trial session of their PM classes so that potential students could get a taste. I can only imagine the room was filled with eager PMs ready to learn what the classes were like, and all they could hear was us carrying on in our room. I am sure many of them signed up for developer classes that night. It was obvious the developers have a lot more fun.

It was also good to retain my track record of filling the room at CINNUG as well! :)

All kidding aside, it was great to see the lights go on for some people. We had some excellent discussion after the session as well.

I have been through some great sessions that touch on the theory or pillars of agile/XP, but I always walked away wondering how I would implement those ideas in my shop. This talk is aimed at people who want to see how agile is actually used in a real shop. I make it clear that our practices are NOT the gold standard, but our take on the beliefs, and they may not work for other people.

We have tweaked our process so that it is easily adapted to the needs of our different clients and projects. We will remove deliverables or activities that won't provide value to the client, and, conversely, add deliverables or activities that will.

In the end, we have one core belief.

Do more of what works, and do less of what doesn't.

Remember that everyday, and you will always get better. There are other beliefs we have, but that is the first fundamental one.


HIPAA errors

I have been struggling to get some 278 (service approval) HIPAA messages in through my BizTalk box (using the accelerator). I was stumped. I received some sample messages from colleagues, and I still couldn't get the messages processed.

Then Keith (on the BizTalk team at MS) gave me an idea. I started up Notepad2 to look at the messages and turned on the encoding/hidden characters view. Voila! The line endings were CRLF. Changed them all to LF, and everything started going through! Yeah! In hindsight, it always seems to be the smallest thing that is the hardest to solve. I remember that when I was first using C, I would spend hours fixing an issue, and it would turn out to be a missing semicolon.
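If you hit the same thing, the fix is easy to script rather than hand-editing in Notepad2. Here is a small Python sketch (the sample bytes are made up; a real message would come from the file you drop into the receive location):

```python
def crlf_to_lf(data: bytes) -> bytes:
    """Replace CRLF line endings with bare LF.

    Work in bytes so the runtime's own newline translation
    doesn't get in the way.
    """
    return data.replace(b"\r\n", b"\n")

# Stand-in for the contents of a 278 file with Windows line endings.
sample = b"ISA*00*...\r\nGS*HI*...\r\nST*278*0002\r\n"
fixed = crlf_to_lf(sample)
assert b"\r" not in fixed
```

For real files, read and write in binary mode (`"rb"`/`"wb"`) so nothing translates the endings back behind your back.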

For future reference, here is a small sample (reflowed to one segment per line; in the original file it is all one run of text):
ISA*00* *00* *ZZ*7654321 *ZZ*1234567 *050101*1200*U*00401*000000054*0*T*:
GS*HI*7654321*1234567*20050101*0717*141797357*X*004010X094A1
ST*278*0002
BHT*0078*11*20030721071727*20030721*071727
HL*1**20*1
NM1*X3*2*NYSDOH*****PI*141797357
PER*IC*eMedNY PROVIDER SERVICES*TE*8003439000
HL*2*1*21*1
NM1*1P*1******46*11111111
REF*ZH*00358690
HL*3*2*22*1
TRN*1*GS192050236782100@03202071727*1141797357
NM1*IL*1*MEMBER*IMA****MI*XX22222X
REF*HJ*02
HL*4*3*19*1
NM1*1T*1******46*11111111
REF*ZH*01234567
HL*5*4*SS*0
UM*HS*I*2
HCR*A1*93387654321
DTP*472*D8*20030721
HI*BO:A4554:::300
SE*21*0002
GE*1*141797357
IEA*1*000000054

The error messages are so much more useful in this new version. Very verbose messages, but useful. Here is a sample:
Event Type: Error
Event Source: HIPAA EDI Subsystem
Event Category: BizTalk Server 2006
Event ID: 24
Date: 4/25/2006
Time: 10:06:03 PM
User: N/A
Computer: POC
Description:Error encountered: ERROR (62), interchangenr 10052 :The length of the element is not correct. Contact the sender.
source format: [5 00401 ,X12-4010]
source document: [278 004010DEFAULT X X094A1,Health Care Services Review -- Resp]
source segment: [data#0,def#7,tag=IEA ,name=Interchange Control Trailer]
source element: [def#2,elm#2,comp#0,tag=9012,name=Interchange control number], value: [00000354], fixed length: [9] (msgnr:1 segnr:0)(line:27 pos:0 filepos:934)
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

If you wade through all of the detail, this message says the length of an element is incorrect. Which element? Read further: it identifies the element (including which segment it is in), and even tells you the type of element. In this case, I munged the ICN (interchange control number) so that the ICN at the beginning of the message didn't match the one at the end. If other fields didn't match, it would show the two different values. This makes it much easier to track down content errors in your message. One error like this might create several entries in the event log, so always start with the first error message.
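A quick sanity check for this particular mismatch is easy to script before a message ever hits the engine. This is a hedged sketch (it assumes `*` element separators and one segment per line; real interchanges may use other separators and terminators): the control number in ISA element 13 should match the one in IEA element 2.

```python
def control_numbers(segments):
    """Return (ISA13, IEA02): the interchange control number from
    the header and from the trailer. Assumes '*' separators."""
    isa = next(s for s in segments if s.startswith("ISA"))
    iea = next(s for s in segments if s.startswith("IEA"))
    return isa.split("*")[13].strip(), iea.split("*")[2].strip()

# Made-up interchange with the same kind of munged trailer as above.
msg = [
    "ISA*00* *00* *ZZ*7654321 *ZZ*1234567 *050101*1200"
    "*U*00401*000000054*0*T*:",
    "IEA*1*00000354",
]
header_icn, trailer_icn = control_numbers(msg)
if header_icn != trailer_icn:
    print("interchange control number mismatch:", header_icn, trailer_icn)
```

Catching this in a pre-flight check saves you a trip through the event log.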


BizTalk 2006 HIPAA Accelerator

I have occasionally had to work on HIPAA-related projects. I never had to get very deep into the spec (there always seemed to be others around to do that heavy lifting). Recently I have had time to install the accelerator for BizTalk and work with it.

First off, the accelerators are a group of software add-ons for BTS that add new functionality for specific markets: SWIFT, HIPAA, HL7, etc.
The HIPAA package adds all of the related schemas from WPC (THE company that builds and maintains the schemas), and an EDI engine that can process the EDI-formatted messages. This engine runs as a service on Windows Server, and can be configured to watch a FILE drop or an FTP drop for incoming/outgoing EDI-formatted HIPAA messages. It will grab the message, validate it, find the appropriate party in BTS, and then convert it to XML and drop it in the message box. The engine will also handle the functional acknowledgements often needed in a HIPAA exchange. These messages (997, etc.) are responses back to the sender letting them know that you received a valid message. They carry a purely functional/technical meaning; the response does not imply any business outcome (approval, etc.).

If you don't need to process the EDI format, or need to use a different adapter (perhaps POP3 or HTTP), then you have to build a custom pipeline. The accelerator provides a disassembler/assembler for you to use in your pipeline.

The install experience is identical to BTS 2006, which is a nice touch: the same screens, options, and workflow. One thing I did notice is that I get an error during the install (they at least warn you about it, but you have to not get 'Next-crazy' and miss it). I get it when I install BizTalk and then the accelerator immediately after. Now I just reboot between installs, but a stop/start of BizTalk might also fix it.

The documentation is good all around. There are two help files that focus on the HIPAA specs themselves. These are very technical, just a series of formats and layouts; they are good as a reference, but not meant for the uninitiated. They won't teach you what you need to know about the HIPAA business processes. The actual accelerator documentation is just OK. The BizTalk Server 2006 documentation team really set the bar high (for all of MS's products), and the accelerator docs fall a little flat in comparison. The BizTalk docs are so good, I wonder if third-party books will even be published. The accelerator docs seem to be the old docs with a little polish, and some of it is outright wrong: when you are going through the tutorials there are incorrect screen references, for example. But the docs are functional.

The real catch is that the accelerator doesn't come with sample messages. I really think it should. There should be several variations of each format, with a range of complexity (some messages of the same type can be simple or complex). This would really help in getting POCs or samples put together, and provide proven fodder for testing your system.

The accelerators did release at the same time as BizTalk Server did, and that is a first. They are usually 30-90 days later than the general availability date. Having them available at GA really makes it easier for customers to be aggressive in adopting the new version of BizTalk. I also hear that the accelerator team actually received RTM approval a few scant hours before the Server team did.

Anyway, check the accelerators out. They are worth their weight in gold! Not having to build the complex schemas and test them by hand is a huge timesaver; it would take months to get them where they would need to be. These schemas can look simple on the surface, but deep down they are very difficult to test and verify against the standards. By using the 'official' schemas, you save a lot of time and can trust that you won't have interoperability issues down the road.


Proud owner of a shiny new MVP Award!

I was nominated for a Microsoft MVP Award (Architect) many weeks ago. I always wanted to be an MVP, but never really thought I would get one. It always seemed like something other people got: authors, freelancers, etc. And with limited awards to go around, I felt for a long time that my freelancer friends (remember, I work for a not-so-soulless corp; I ain't no freelancer) should get them, since MVP awards have a major effect on their personal marketing and visibility.

But I was nominated, and I felt honored just for that. Then the weeks started to slowly slide by. I was on pins and needles the whole time. I would obsessively check my email at all hours (even more than normal). There were days when I would check my office snail mail four or more times (even though we only get one delivery per day!). The only thing that helped pass the time was Oblivion (see Dave's post).

Today I got in late (I was helping a client deploy a BizTalk application), and finally checked my email. There, sitting in its 'not yet read' red text, was an email telling me of the award. I of course yelled out, "wa-hoo!" Several people from down the hall came into my office to find out what the excitement was about (assuming, perhaps, a lottery win?).

So now I am an MVP/Architect. I just feel so honored. I have worked hard in the community over the past few years. I do it out of love for technology and love for the local community. I just love going anywhere and talking about any technology. I wouldn't have been able to do it without the support of my family, my colleagues, and my company. They give me the time and resources (money, travel, etc.) I need to be active.

Again, I say thank you. Dave.Drew.James.Jim.Nate.Keith.and many others.


TechEd 2006 - Be There!

It's spring cleaning time! What's different on my blog design?

Not only have I been working in the yard, but I also ditched the Google AdSense banner today. I only added it because it was a simple checkbox when setting up Blogger. I wanted to go through the process to see how it worked. I got sick of it lurking up there, so out it goes!

Second, I changed my stat provider to FeedBurner, on Jim Holmes's advice. I also added an RSS icon. I never noticed there wasn't one; I just assumed there was one in the Blogger template, but I guess not.

Third, and best, I have registered for Tech-Ed 2006! This is going to be awesome. I attended the PDC in LA this past fall, and met a lot of great people. This time I not only get to go to the conference, I have also been asked to be a speaker! I was totally floored and honored when I was asked. I will be co-presenting with Keith Bauer, the district's Program Manager for BizTalk. Our topic will be "BizTalk Server Solution Lifecycle: Planning and Design (Part 1)". This will be a level 300 talk. The abstract is:

"Part one of a three-part series of sessions covering the design, implementation and management of a BizTalk Server solution. The first session covers the common scenarios where BizTalk is an appropriate solution and what questions you should be asking to develop appropriate timelines, resource plans and deliverables. This session provides experienced IT project managers, who may not have previously worked with BizTalk Server, the background needed to successful launch a BizTalk Server project. "

So, who else is going? If you are going, sometime before the event, you will be able to go to the TechEd site, and post requests for the session content. Let us know what details you want.