
P&P Releases PRISM 2.0


The Patterns & Practices group at Microsoft has released version 2.0 of the PRISM framework. PRISM is a framework you can use to easily build loosely coupled, composite applications. This new version supports both WPF and Silverlight! That is totally awesome.

I believe that we should focus on writing only the code that only we can write, and leverage frameworks and components to provide the infrastructure and cross-cutting concerns for our systems. This is a key way to provide more value to your business, and to reduce costs and time. There are plenty of other ways, like TDD, agile, pairing, etc. But this post is about PRISM, and there are far smarter people than me to learn those other practices from.

PRISM helps you create modules in your application. For example, a screen might have a list of upcoming appointments, and a section highlighting some tasks the user has to accomplish before the selected appointment. Each of these pieces would be a component. But they aren’t directly tied together. The first instinct might be to refresh the view of the task list when the selected appointment is changed in the list. You might go off and write some glue code to do this. There are many applications that are created like this, and in some situations, this is ok.

But some applications need a greater degree of loose coupling. PRISM provides a sort of message bus within the application that allows the modules to communicate without being directly bound to each other. This is really useful when these components are accessing different systems, and presenting their information in a composed manner.
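
Under the hood, that message bus is just the publish/subscribe pattern. Here is a minimal sketch of the idea in Python (the names are mine, not Prism's actual EventAggregator API): the appointment list and the task list talk through the bus, never to each other.

```python
# Minimal event-aggregator sketch: modules publish and subscribe to named
# events without ever holding a reference to each other.
class EventAggregator:
    def __init__(self):
        self._subscribers = {}  # event name -> list of callbacks

    def subscribe(self, event_name, callback):
        self._subscribers.setdefault(event_name, []).append(callback)

    def publish(self, event_name, payload):
        for callback in self._subscribers.get(event_name, []):
            callback(payload)

# The task-list module reacts to appointment changes without knowing
# which module raised them.
bus = EventAggregator()
received = []
bus.subscribe("AppointmentSelected", lambda appt: received.append(appt))

# The appointment-list module publishes; it knows nothing about the task list.
bus.publish("AppointmentSelected", {"id": 42, "title": "Dentist"})
```

Swapping in a different task-list module requires no change to the publisher, which is the whole point of the loose coupling.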

Another great reason to build a system in such a loosely coupled manner is when the modules are being developed in a disconnected way. Perhaps each component is coming from a different team, and they need a separate development process. Maybe the architect expects a great degree of enhancement by the business over several years, and they want an easy way to plug in new modules as the system grows. An ISV could easily use this in such a manner, since they often need to deploy customer-specific customizations.

This componentization also benefits the testability of your codebase. When a system is well architected, and has the proper level of component isolation, it becomes easier to test each component in a standalone manner.

The great thing about PRISM is that it will give you the most reuse of your code across both platforms.

If you are building line of business applications, especially in a services environment, you should take 20 minutes and check out PRISM. The kit comes with a full reference application called Stock Trader. It is built using the framework, and following the latest guidance on how to build composed applications.

Of course guidance is exactly that. As with all things architecture, it depends. You need to spend an hour reading about this, understand how it might fit into what you might be doing, and be prepared to bring it out of your toolbox when an opportunity exists. The videos on Channel 9 make it really easy to understand how it can help you.

Channel 9 videos

http://channel9.msdn.com/shows/Continuum/Prismv2/



Because it is from P&P, and hosted on CodePlex, you get access to the source code. So if it is close to what you need, but you want to enhance it, or tweak it to suit your needs, by all means go ahead. To get the bits, and to read more about the framework, check out these sites:



Azure ‘How do I’ videos


Looking for something to do this weekend? How about a series of small, low-calorie, bite-sized videos on how to work with Azure? Check these out. These are great training videos.

Everyone is interested in Azure, and cloud computing, and everyone is looking for resources to get up to speed quickly. This is the next best thing to downloading the SDK and hacking out some code.

Overall Page

Visit here for the full summary of videos


Get Started Developing on Windows Azure?

If you’re a developer and you’re new to Windows Azure, start here! You’ll see what you need to download and install, and how to create a simple “Hello World” Windows Azure application.


Deploy a Windows Azure Application

You’ll see what it takes to move your application into the cloud – you’ll see how to request and register a token, how to upload your Windows Azure application and how to move it between staging and production in the cloud.

Store Blobs in Windows Azure Storage?

Learn how to leverage Windows Azure storage to store data as blobs. You’ll learn about blob storage, containers and the API that makes it easy to manage everything from managed code.

Leverage Queues in Windows Azure?

Learn how to use queues to facilitate communication between Web and Worker roles in Windows Azure.

Debugging Tips for Windows Azure Applications

The Windows Azure SDK includes a development fabric that provides a "cloud on your desktop." In this screencast, learn how to debug your Windows Azure applications in this environment.

Get Started with .NET Services?

.NET Services are a set of highly scalable building blocks for programming in the cloud. In this brief screencast, you'll learn about the registration process, the SDK and the built-in samples - everything you need to know in order to get started.


Harness the Microsoft .NET Service Bus?

The .NET Service Bus makes it easy to access your Web services no matter where they are. In this brief screencast, you'll see how to take a basic Windows Communication Foundation (WCF) service and expose it to the Internet with the .NET Service Bus.

Get Started with the Live Framework?

If you are looking to get started developing with the Live Framework, this is the place to start! In this screencast you'll learn how to get a Live Services token and what you need to download in order to start writing Live Framework applications.


Use the Microsoft Live Framework Resource Browser?

The Live Framework Resource Model is a simple, straightforward information model based on entities, collections and relationships. In this brief screencast you'll learn how to navigate the relationships between entities by using the Live Framework Resource Browser, which is a tool that ships with the Live Framework SDK.

Dave Giard publishes an interview with Brian H. Prince


Dave was hanging around the awesome CodeMash event this January, when he asked to interview me for his blog. So we snuck out to the entryway of the Kalahari, and chatted about my experience of being a new employee of Microsoft.

He had good timing in publishing the video, since this past Wednesday was my one year anniversary as a blue badge. Go check it out.


New User Group: Columbus Exchange & Windows User Group


Some of my colleagues have worked with IT Pros in the community to start a new user group focused on Exchange and Windows. If you need to manage these systems, and want to learn how to really leverage them, this is the group for you.

CEWUG Registration Link

What: http://cewug.spaces.live.com/blog/

· Please join us for the first meeting of the Columbus Exchange & Windows User Group (CEWUG).  The goal of CEWUG is to help businesses, public sector and home users to optimize their knowledge of Windows, Exchange, Office Collaboration Server, System Center, Dynamics and Virtualization.

· The goal of the CEWUG is to build relationships with peers, share expertise and involvement with the central Ohio IT community.

· We will meet the 4th Wednesday of the month at the MS Columbus Office: 8800 Lyra Dr, Suite 400, Columbus, OH 43240


§ 5:30 to 6:05: Welcome time; meet the MS steering committee, pizza and beverages

§ 6:05 to 7:05: Windows 7 Overview

§ 5 minute break

§ 7:10 to 8:15: Finish Windows 7,  Start Windows Server 2008 R2


· 8800 Lyra Dr, Suite 400 Columbus, OH 43240


· Wednesday: February 25, 2009 5:30 to 8:15 PM

I will be with a bunch of KNOTheads


During the upcoming ArcReady tour, I will be sitting in on a KNOThead meeting. The meeting is at EdFinancial, and runs from 5:30-8:00pm on 3/10/2009. We will talk about cloud computing, and how this shift is affecting our organizations, especially during this time of grapefruit.

So come out for ArcReady, and hang around for the KNOThead meeting.



EdFinancial Services, 150 N Seven Oaks Dr, Knoxville, TN

Job security is a myth, and how IT Pros are Thriving


No matter how you slice it, the economy is definitely in a state of concern. Some call it a recession, some a depression, others a crash. Since all of those words are overloaded, I will choose to use the word Grapefruit to represent the current situation. How concerned you are with grapefruit depends on who you are, and your personal situation. It is these times of grapefruit when there is the most opportunity. Crazy, I know. But now is the time to start that crazy idea, to outmaneuver an opponent, or get desperate enough to take risks you normally wouldn’t take (personally, or as a company) during up times.

All boats float in the high tide. Only great captains can navigate the low tide.

There is no such thing as job security. Surprise. There just isn’t; it is a complete myth. ‘Job security’ might have made sense two generations ago, but it just doesn’t work now. You make your own security, during good times and during grapefruits: by keeping your skills sharp, delivering real value to your company, planning for your own future, and networking professionally. The people that aren’t worried now are those that have been training at home, going to user groups, and networking both inside and outside the organization. They knew they needed to be ready when the grapefruit came.

A large part of our economy is based on technology, and if you are reading this blog, then you are likely in technology as well. Microsoft is in a position, and I think obligated, to help IT Pros weather this storm.

We have launched a new program, called Thrive, that will help IT Pros with their careers. We have looked at how we can empower the IT Pro to really move the needle when it comes to helping your company not only survive through these times, but thrive.

The program is centered on three aspects:

  1. Career Care
  2. Technical Competency
  3. Business and IT Alignment

Career Care

We have worked out a program to help you guide your career. The guidance comes from career and IT experts. We want to help you learn and prove your skills with certifications and access to technical training and material. We are working with CareerBuilder.com to provide help in managing your career.

Technical Competency

We are working to bring a series of programs to bear that will help you learn the skills you need to stay fresh, and to use the technology you ALREADY HAVE. What? Yes, we want to help you get the most out of what you have. No need to buy something to fix this problem. Chances are you already have it, and have been too busy or distracted with luxurious trips to Hawaii to use it.

Our platforms have had features for years that help you do more with less, whether that is virtualization technology, or management tools that let a few IT Pros manage hundreds or thousands of desktops. There is plenty of other time- and money-saving goodness in Windows. It has taken this tough environment to get people to look into using these features, when they really need the payback, and to show how much value they are really delivering. Plus, chances are you used these features as a selling point to get management to let you buy it, so you should probably use them.

Business and IT Alignment

We want to help you align with the business, and help you communicate your long-term value to the business. There are tools and training in place that can teach you how to align your IT goals with the business goals, and then market that throughout your organization. As techs, we really know technology, but sometimes we need help explaining that value to the rest of the world. “It’s just better” usually doesn’t cut it. That is like telling a child to drink grapefruit juice because it is good for them. You have to be able to articulate the value, and then follow through.

Once you are aligned, it is easy to show your value, and easy to be seen as a lever to help move the business, not just a cost center that must be paid like the water bill. If you are a cost center, then the first thing companies look for is how to CUT COSTS. If you are a strategic advantage, the company looks at how to use you as a lever to survive and thrive in this grapefruit.

How are you positioning yourself, your team, and your organization for success during this economic grapefruit?  Visit the Thrive site and learn how to enhance your skills, advance your career and elevate IT as the business leader.  Go ahead - find out how YOU can Thrive!

Azure Tables are for Squares in the Cloud


The third aspect of Azure storage is the table structure. BLOBs answer how to manage unstructured data in your cloud application, and tables answer how to manage your structured data.

Tables, like the rest of Azure, are designed to be super scalable. You can easily store billions of rows, and terabytes of data, and still get great performance. Because the tables are part of Azure, the table data is always available, and triple replicated for reliability.

Accessing tables is easy using ADO.NET Data Services, which should be familiar to most .NET developers. ADO.NET DS uses normal .NET classes and LINQ to access your data. If you don’t want to access it with ADO.NET, you can easily use REST, so any platform can make queries and calls into the data.

Each account can have many tables of data. Each table contains entities, which are like rows in a traditional database. Each entity is identified with a composite key. The first half is the partition key, which identifies which partition the entity lives in. Partitions are how the table data structures scale so well. Which entities go into which partitions is left up to the application and the developer to decide, simply by assigning the key appropriately. You could have a partition in your customers table for each state, based on which state the customer lives in. Azure will move partitions around to balance performance and traffic to a specific data server. If two partitions are very active, one will be moved to a different server.

The second half of the primary key is the RowKey which is used to identify the entity within the partition, and is very similar to your trusty old unique row id.

Both PartitionKey and RowKey are strings, and must be less than 64KB in size.

Queries that reference a partition key in the constraints will be much faster, because the engine can constrain the table scan to the same partition.

Each entity can have up to 255 properties, including the partition key, and the row key. Azure does not enforce a schema. Schema enforcement is left to the application. This gives you the flexibility to vary the shape of the entity based on the scenario, instead of building more and more tables. This is very handy when you are deploying customizations for customers in a multi-tenant scenario.
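
To make that key scheme concrete, here is a toy in-memory model in Python (purely illustrative; this is not the Azure storage API). Entities are addressed by the composite (PartitionKey, RowKey), and nothing forces two entities in the same table to share a shape:

```python
# Illustrative model of an Azure-style table: a dict keyed by the
# composite (PartitionKey, RowKey), holding schema-free entities.
customers = {}

def insert(partition_key, row_key, **properties):
    customers[(partition_key, row_key)] = properties

def query_partition(partition_key):
    # Constraining on the partition key means only one partition is scanned.
    return [e for (pk, rk), e in customers.items() if pk == partition_key]

# Partition customers by state; entity shapes can vary per customer.
insert("OH", "cust-001", name="Alice", plan="gold")
insert("OH", "cust-002", name="Bob")                       # no 'plan' property
insert("TN", "cust-003", name="Carol", plan="silver", notes="VIP")
```

The varying property sets on each entity are exactly the flexibility described above, and the partition-scoped query is why filtering on the partition key is fast.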

It is easy to create and destroy tables. It is common to check for the table existence, and create it if it isn’t found on application startup. This makes it easy to deploy your application. No more struggling with complex setup SQL scripts. This code can also be moved into a separate tool for use in deploying your application. Each account has a table called Tables, that tracks which tables have been created.
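
The create-on-startup idiom might be sketched like this in Python (the helper name and the in-memory set are stand-ins, not the real storage client):

```python
# Sketch of the ensure-tables-exist idiom an app runs once at startup.
existing_tables = set()          # stands in for the account's 'Tables' table

def create_table_if_not_exists(name):
    if name in existing_tables:
        return False             # already there, nothing to do
    existing_tables.add(name)    # stands in for the actual create call
    return True

# Run at application startup -- no setup SQL scripts needed.
for table in ("Customers", "Orders"):
    create_table_if_not_exists(table)
```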

To access a table with REST, you can easily use the URI:

POST http://&lt;ACCOUNT NAME&gt;.table.core.windows.net/&lt;TABLE NAME&gt;

Using a POST verb will save an object to the table. The object would be stored in the Atom envelope in the REST call. Querying, updating, and deleting records is just as simple. A sample query would look like:

GET http://&lt;ACCOUNT&gt;.table.core.windows.net/&lt;TABLE&gt;?$filter=cost eq 45
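
Because it is all plain HTTP, any language can compose that query. A Python sketch that just builds the request URI (the account and table names are made up), URL-encoding the filter expression:

```python
from urllib.parse import quote

def table_query_uri(account, table, filter_expression):
    # The ADO.NET Data Services filter expression rides in the $filter
    # query option; spaces and operators must be URL-encoded.
    return (f"http://{account}.table.core.windows.net/"
            f"{table}?$filter={quote(filter_expression)}")

uri = table_query_uri("myaccount", "Products", "cost eq 45")
```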

TechNet Tour Unleashed details here!


[TechNetUnleashed has two siblings, ArcReady, and MSDN Unleashed. For some reason we haven’t unleashed ArcReady yet.]

Windows Server 2008
In this session we will look at Windows Server 2008 and the improvements that have been made to Microsoft’s premier server operating system.  Microsoft Windows Server 2008 is the most advanced Windows Server operating system yet, designed to power the next generation of networks, applications, and Web services. With Windows Server 2008 you can develop, deliver, and manage rich user experiences and applications, provide a highly secure network infrastructure, and increase technological efficiency and value within your organization.
Windows Server 2008 introduces several new capabilities including 64bit virtualization, a robust web and development platform, improvements in networking, security, high availability and disaster recovery.  In addition, there is a new “Core” installation option that reduces the operating system overhead by removing the graphical user interface thus freeing resources and lowering the potential security attack surface.  Come see demonstrations on many of the features in a technical deep dive you won’t want to miss!
After we discuss Windows Server 2008, we’ll briefly discuss the improvements coming in Windows Server 2008 R2 which is in development and now available for beta testing.


Attendees are eligible to win a book or a copy of Windows Vista.


Event Schedule – TechNet Events Unleashed starts at 3:00 PM and ends by 5:00 PM

St. Louis, MO – March 5, 2009 3:00 PM

Downers Grove, IL – March 9, 2009 3:00 PM

Austin, TX – March 10, 2009 3:00 PM

Indianapolis, IN – March 12, 2009 3:00 PM

Irving, TX – March 19, 2009 3:00 PM

Chicago, IL – March 23, 2009 3:00 PM

Houston, TX – March 24, 2009 3:00 PM

Detroit, MI – March 31, 2009 3:00 PM

MSDN Unleashed tour details announced.


[MSDN Unleashed has two siblings, ArcReady, and TechNet Unleashed. For some reason we haven’t unleashed ArcReady yet.]

Enhance your coding capabilities with new tools, tips, and inside secrets from MSDN Events. You’ll see how developing for a Windows Mobile phone leverages your current coding skills and can make it simple to build, deploy and debug cool new devices. Additionally, we’ll be showing you how to take full advantage of the Visual Studio debugger. We’ll offer some great tips and tricks to help you debug faster and more efficiently, while applying fresh techniques to ramp up your problem solving abilities.


Session 1: Tips & Tricks for the Visual Studio 2008 Debugger

The Visual Studio debugger is a highly underutilized tool for many developers. In this session, you’ll learn how to use it like a pro, while picking up new techniques to fast-forward your problem solving and debugging abilities. We’ll show you how to use advanced breakpoints, advanced watch window / Expression evaluator tricks, modifiers, assertions on the fly, remote debugging, and more. Whether you’re writing C#, VB, WPF, ASP.NET, Windows Forms, or services, we’ll provide tips and tricks that will have you debugging faster and much more efficiently. The debugger is your primary tool for finding bugs, so join us and learn how to make the most of it.

Session 2: Developing for Windows Mobile Devices

Mobile development is growing fast, and Windows Mobile is at the forefront with over 18 million phones shipped last year and many more cutting-edge devices on the way. Visual Studio developers have tremendous opportunities in this space. Why? Developing for a Windows Mobile phone leverages your existing coding experience and takes it to new heights. In this session, we’ll look at some of the coolest new devices, you’ll learn how to set up Visual Studio with the latest SDK and device emulators, and you’ll see how to build, deploy and debug Windows Mobile applications. We’ll also explore how Internet Explorer Mobile 6 provides new AJAX capabilities that offer the richness of the desktop with pan and zoom features tuned for mobile devices.


Attendees are eligible to win one of two books, a copy of Windows Vista or Visual Studio 2008 Professional.

Event Schedule

Cleveland, OH – February 24, 2009 1:00 PM – 5:00 PM

St. Louis, MO – March 5, 2009 1:00 PM – 3:00 PM

Downers Grove, IL – March 9, 2009 1:00 PM – 3:00 PM

Austin, TX – March 10, 2009 1:00 PM – 3:00 PM

Overland Park, KS – March 10, 2009 1:00 PM – 4:00 PM

Indianapolis, IN – March 12, 2009 1:00 PM – 3:00 PM

Irving, TX – March 19, 2009 1:00 PM – 3:00 PM

Columbus, OH – March 20, 2009 1:00 PM – 4:00 PM

Chicago, IL – March 23, 2009 1:00 PM – 3:00 PM

Houston, TX – March 24, 2009 1:00 PM – 3:00 PM

Mason, OH – March 27, 2009 1:00 PM – 4:00 PM

Detroit, MI – March 31, 2009 1:00 PM – 3:00 PM

Waukesha, WI – March 31, 2009 1:00 PM – 4:00 PM


ArcReady Tour is ready to launch!


[ArcReady has two siblings, MSDN Unleashed, and TechNet Unleashed. For some reason we haven’t unleashed ArcReady yet.]

For our next ArcReady, we will explore a topic on everyone’s mind: cloud computing. Several industry companies have announced cloud computing services. In October 2008 at the Professional Developers Conference, Microsoft announced the next phase of our Software + Services vision: the Azure Services Platform. The Azure Services Platform provides a wide range of internet services that can be consumed from both on-premises environments and the internet.

Session 1: Cloud Services

In our first session we will explore the current state of cloud services. We will then look at how applications should be architected for the cloud and explore a reference application deployed on Windows Azure. We will also look at the services that can be built for on-premises applications, using .NET Services, and address some of the concerns that enterprises have about cloud services, such as regulatory and compliance issues.

Session 2: Mesh and Live Services

In our second session we will take a slightly different look at cloud-based services by exploring Live Mesh and Live Services. Live Mesh is a data synchronization client that has a rich API to build applications on. Live Services are a collection of APIs that can be used to create rich applications for your customers. Live Services are based on internet standard protocols and data formats.

Event Schedule – ArcReady starts at 9:00 AM local time and ends by noon.

St. Louis, MO – March 5, 2009 9:00 AM

Downers Grove, IL – March 9, 2009 9:00 AM

Austin, TX – March 10, 2009 9:00 AM

Knoxville, TN – March 10, 2009 9:00 AM

Overland Park, KS – March 10, 2009 9:00 AM

Indianapolis, IN – March 12, 2009 9:00 AM

Nashville, TN – March 13, 2009 9:00 AM

Irving, TX – March 19, 2009 9:00 AM

Columbus, OH – March 20, 2009 9:00 AM

Chicago, IL – March 23, 2009 9:00 AM

Houston, TX – March 24, 2009 9:00 AM

Bloomington, MN – March 25, 2009 9:00 AM

Cleveland, OH – March 26, 2009 9:00 AM

Mason, OH – March 27, 2009 9:00 AM

Detroit, MI – March 31, 2009 9:00 AM

Waukesha, WI – March 31, 2009 9:00 AM

Take and find notes with MS Recite on your phone


I have always wanted to be able to just speak into my phone to store simple reminders.

There is the voice note application built in, but it just doesn’t work smoothly for me. And you end up with all of these .wav files all over the place.

There is 'jott', where you call a number to save your reminder. They turn it into text, and send it to your inbox. I tried it a few years ago, but I never really fell in love with it.

While these two mechanisms work ok, they both make retrieval difficult once you have built up a fair amount of recordings.

Microsoft Research has released a CTP of some technology they have been developing. Recite is a Windows Mobile application. You simply click “Remember” to record all of your reminders. Click “Search” to ask your notes a question. Recite figures out what you are asking, and finds the related recorded note. Watch the video below, and get all of the latest information at http://recite.microsoft.com. Just thinking about how they have to search a series of voice recordings with a voice recording is mind-bending.

<a href="http://video.msn.com/?mkt=en-US&amp;playlist=videoByUuids:uuids:1cf330f0-7863-401a-ba91-c3013ed1e03c&amp;showPlaylist=true&amp;from=msnvideo" target="_new" title="Microsoft Recite">Video: Microsoft Recite</a>

Disk Defragmentation in Windows 7


I have always been interested in the internals of operating systems, and the science behind the engineering. One of those aspects is reading and writing to storage, and how that can be done in an efficient and performant manner.

The Engineering Windows 7 blog has posted about how they work with hard disks, and how disk defragmentation has changed over the versions of Windows. It is a very interesting read. As usual, the comments are filled with trolls and flame wars, so I wouldn’t dig into them. Oh well. If you haven’t been reading this blog, and you have any interest in OSs or in Windows 7, I highly recommend it. It is an open and honest discussion of the building of Windows 7 by the people actually building it. The posts are detailed, and explain some of the decisions that the Windows team has to make, and how they make them. Balancing all of the different interests and different use models and different users is quite challenging.

Here is my short summary, but you really should just go read the real thing.

Because the hard disk is so much slower than the CPU, how the OS interacts with the disk is very important. Some of the key principles for this are very similar to the guidance on how to use services. From their post:

    1. Perform less I/O, i.e. try and minimize the number of times a disk read or write request is issued.
    2. When I/O is issued, transfer data in relatively large chunks, i.e. read or write in bulk.

This is very much like with services. You try to make non-chatty services that are chunky in nature, and call them when needed. Your application’s performance will suffer if you make too many small calls to a service. All of the serialization, deserialization (I love that word, and I just added it to my spell checker), dispatching, and transport costs you latency. This is a bigger problem with SOAP than with REST because of the overhead of a SOAP message, but the concern is still there.
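
The arithmetic behind both principles is the same: a fixed per-request cost amortizes better over big transfers. A back-of-the-envelope sketch in Python, with invented numbers just to show the shape of it:

```python
# Toy cost model: every I/O request pays a fixed latency (seek/dispatch),
# plus time proportional to the bytes moved. The numbers are made up.
SEEK_MS = 10.0              # assumed fixed cost per request
TRANSFER_MS_PER_MB = 0.1    # assumed streaming cost per megabyte

def total_ms(total_mb, chunk_mb):
    requests = total_mb / chunk_mb
    return requests * SEEK_MS + total_mb * TRANSFER_MS_PER_MB

chatty = total_ms(64, 0.064)   # a thousand small reads
chunky = total_ms(64, 64)      # one bulk read
```

Same bytes either way; the chatty version pays the fixed cost a thousand times over, which is exactly why both disks and services reward bulk transfers.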

Back to disks, and how slow they are. The team has figured out, long ago, that you want to read big chunks at a time, so even if the user has requested one small part of a file, they should read a lot more of it, so that it is ready and in cache. For example, when streaming a music file to the player, the player asks for the first 64K (or however big the buffer is). The OS will request more than that, assuming the user will want the rest.
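
A toy version of that read-ahead idea in Python (buffer sizes invented): small requests are served from a larger prefetched buffer, and the slow device is only touched when the buffer runs dry.

```python
class ReadAheadFile:
    # Wraps a byte string standing in for the disk; prefetches in big chunks.
    def __init__(self, data, readahead=256 * 1024):
        self.data = data
        self.readahead = readahead
        self.buffer = b""
        self.pos = 0
        self.disk_reads = 0   # count of trips to the slow device

    def read(self, n):
        if len(self.buffer) < n:
            # Fetch far more than was asked for, on the bet that the
            # caller (e.g. a music player) will want the rest soon.
            chunk = self.data[self.pos:self.pos + self.readahead]
            self.pos += len(chunk)
            self.buffer += chunk
            self.disk_reads += 1
        out, self.buffer = self.buffer[:n], self.buffer[n:]
        return out

song = ReadAheadFile(b"\x00" * (1024 * 1024))             # a 1 MB "music file"
player_reads = [song.read(64 * 1024) for _ in range(16)]  # 16 x 64 KB requests
```

Sixteen small requests from the player, but only four trips to the "disk," because each fetch pulls a 256 KB chunk into the cache.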

This has become an even bigger issue as files have grown in size over the years. Ten years ago, people didn’t have terabyte drives on their desktops, and file sizes were in the KB’s, maybe MB’s.

In order for the disk to more easily read a file, it helps if the file is allocated in a sequence on the disk itself. If the file is fragmented into chunks all over the disk, the disk will take longer seeking out those pieces and returning them. The practice of making sure the files are assembled together in sequence, and perhaps putting them on the most efficient locations on the disk is known as disk defragmentation. As a concept, it is relatively simple. Move all of the open space to the end of the disk (think of it as a virtual sequential tape). Then rearrange the pieces of the files so that they are all together. It is common for related files (perhaps for OS startup) to be put back to back to make reading them even faster.

The Windows team has found out that a single large file doesn’t have to be in one long sequence. As long as the fragmented chunks are large (bigger than 64MB), they aren’t rearranged, because moving them really wouldn’t help. It is the constant zigzagging for small pieces that costs big during I/O.
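
You could picture that policy as a simple filter over a file's fragment sizes; a sketch in Python (the 64MB figure is from their post, everything else is invented):

```python
# Fragments at or above this size stay put; moving them wouldn't help.
MOVE_THRESHOLD = 64 * 1024 * 1024

def fragments_worth_moving(fragment_sizes_bytes):
    # Only small fragments cause the costly zigzagging, so only they
    # are candidates for consolidation.
    return [s for s in fragment_sizes_bytes if s < MOVE_THRESHOLD]

MB = 1024 * 1024
file_fragments = [200 * MB, 3 * MB, 80 * MB, 1 * MB]
to_move = fragments_worth_moving(file_fragments)
```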

I remember defragging my hard drives all the time. Especially before I installed a new application (and by application I mean game). By consolidating all of the open space, when I installed said game (I get pangs for Civilization just thinking of this), then all of the files would be contiguous, and grouped together for performance.

In Windows 7, the algorithm has been tuned. There were files in Vista that could not be moved by defrag. These were usually NTFS metadata files. If you can’t move these, you can’t shrink your disk volume, which is a big issue if you are using VMs or want to rearrange your disk partitions. Windows 7 is now able to move them.

Also, many new laptops, especially netbooks, come with SSD drives. Defragmenting them may not matter, and even if it would help, it would cut into the lifecycle of the drive. Windows 7 will not automatically schedule a defrag on an SSD drive.

An interesting note is that auto defrag is not enabled on Windows Server 2008 R2. This is because how file fragmentation affects the system depends on the unique workload on that system, and an experienced system administrator should configure the defrag process to meet those needs. You wouldn’t want a big defrag going on just as the nightly backup starts, for example.

A big change is in the UI. The team has made it possible to schedule the defrag process to your liking, and you can schedule multiple disks in parallel (in Vista they had to be in sequence).

Stop reading my blog, and go read their blog already!

Build a Silverlight game, win $5,000


This is how you can score an easy $5,000. Think up, design, and build an awesome game in Silverlight. Just submit it to the Server Quest II contest site. Then get all of your friends and family to vote for it. The game with the most votes wins some awesome cash.

The deadline is 78 days from now, so go get cracking.

The website has a cool trailer, done in the 16-bit graphics style of yore. Really funny stuff. But your game can be about anything; it doesn’t have to sync up with the Server Quest plot.

<br/><a href="http://video.msn.com/video.aspx?vid=182f88a2-d194-4938-a6c2-86d2a41f490f" target="_new" title="Server Quest Trailer">Video: Server Quest Trailer</a>

How about a package delivery and retrieval simulator, that is multiplayer, and allows you to collect equipment, cash, and friends? Call it FedCraft.

Or a real-time strategy game centered around giant configurable robots doing battle, with a background story about different clans competing for a diminishing pool of critical resources. Call it WreckWarrior.

Perhaps a puzzle game that tracks progress in a community leader board way mixed with a bureaucracy simulator, where you get to cut red tape with a giant pair of laser scissors. RedTris?

They did this contest last year, and some really great games came out of it.

Of course you should check out the site for all of the real details and contest rules.

Using BLOBs in Azure to store homemade BSG episodes


The other day I discussed using Queues in Azure, and why you would want to. Today, we will talk about using BLOBs.

First a bit about BLOBs. BLOB stands for Binary Large OBject, and is a way to store binary data. Up until now, whenever I used BLOBs it was as a column type in SQL Server and Oracle Server. This was always challenging for me because the APIs to store and read BLOBs were a pain, and hard to use. The other problem was related to maintenance on the database server. Your database will get very big, very quickly by storing images and videos in it. There may be some very good reasons for you to do this, but for me, in the past, the tax was too high. The approach I generally took was to store the image in a folder somewhere, and store the path and filename in the database. This did create a maintenance issue, in that we had to back up the file folder, and make sure it was transactionally in sync with the database backup. That is a whole different blog post.

BUT, if you are looking at using BLOBs in a database, check out the new  FILESTREAM feature in SQL Server 2008. I wish I had that about five years ago.

This common pain is what makes most people wince when they hear BLOB; I know I do. But BLOBs are important. They are a great way to store all of that unstructured data we have. Our world is becoming rich-media centric in what we do, and in what we store.

BLOBs don’t have to be pictures or videos. They can be any binary stream. Perhaps a large catalog detail file, or a backup history. It can really be anything. BLOBs are opaque, though, so you usually can’t scan them during a query or use them in an index. Because of that, make sure you store some metadata with them.
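To illustrate the point about metadata, here is a minimal sketch (all names hypothetical, not the real Azure API, which uses metadata headers on the blob) of keeping a queryable metadata dictionary next to opaque blob bytes, so you can find blobs without scanning their contents:

```python
# Toy blob store: the bytes stay opaque, the metadata is queryable.

class BlobStore:
    def __init__(self):
        self._blobs = {}  # (container, name) -> (bytes, metadata dict)

    def put(self, container, name, data, metadata=None):
        self._blobs[(container, name)] = (data, metadata or {})

    def find(self, **criteria):
        """Return blob names whose metadata matches all criteria."""
        return [name for (container, name), (_, meta) in self._blobs.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

store = BlobStore()
store.put("videos", "episode1.wmv", b"\x00" * 10,
          metadata={"content_type": "video/x-ms-wmv", "user": "bhp"})
store.put("videos", "episode2.wmv", b"\x00" * 10,
          metadata={"content_type": "video/x-ms-wmv", "user": "alice"})

print(store.find(user="bhp"))  # → ['episode1.wmv']
```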

BLOBs are one of the three pillars of the Azure storage fabric. There are queues, tables, and BLOBs. Any data saved in the storage fabric is stored as three different replicas. This is done for reliability and scalability reasons. This storage is also shared across your account, so one node can store files into BLOB storage and another node can read them. This is very similar to using file storage in an on-premises application.

Within your account, you can organize your Blobs with containers. These are just a simple mechanism to segment your Blob storage, and make it easier to work with them. At this point, it isn’t possible to nest containers, like you would file folders on your file system.

Once I have created a container and a Blob, accessing it is as easy as browsing to a URL of this form: http://youraccount.blob.core.windows.net/yourcontainer/yourblobname.
Blobs and containers are locked down to only be accessible by the account owner. If you want a container of Blobs to be publicly accessible on your site, you can use access control lists on the container. In this way, you can grant anonymous users read access.

In your application code, you will want to reference the Microsoft.Samples.ServiceHosting library. This DLL holds some nice classes that make working with Azure Storage easier. You can find it in the Azure Samples folder that comes with the SDK.

To store a Blob in a known container, you would use the above URL, but with a PUT verb instead of a GET verb. When you ‘put’ a Blob, the size is limited to 64MB. If your file is bigger than that, you can use the Put Block method, which lets you upload the Blob in 4MB blocks until you are done. The maximum size of any Blob is currently 50GB. That is pretty big. Want to know why it’s 50GB, and not 51GB or something like that? Because they needed a number, and no one will ever need more than 50GB for a single Blob. :) If it were me, I would have made it just large enough to hold a complete Blu-ray movie. You know, for when you want to store your homemade Blu-ray movies in the cloud.
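The upload decision above can be sketched like so. This is not the real SDK (the actual Put Block API wants base64 block IDs, for example); it just shows the shape of the logic, with the storage calls passed in as hypothetical callbacks:

```python
# One PUT for small blobs; 4MB Put Block chunks plus a commit for big ones.

SINGLE_PUT_LIMIT = 64 * 1024 * 1024   # 64MB single-PUT ceiling
BLOCK_SIZE = 4 * 1024 * 1024          # 4MB per block

def split_into_blocks(data, block_size=BLOCK_SIZE):
    """Yield (block_id, chunk) pairs for a Put Block style upload."""
    for i in range(0, len(data), block_size):
        yield f"block-{i // block_size:06d}", data[i:i + block_size]

def upload(data, put_blob, put_block, commit_blocks):
    if len(data) <= SINGLE_PUT_LIMIT:
        put_blob(data)                      # one PUT is enough
    else:
        ids = []
        for block_id, chunk in split_into_blocks(data):
            put_block(block_id, chunk)      # PUT each 4MB block
            ids.append(block_id)
        commit_blocks(ids)                  # commit the full block list
```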

One of the scenarios you might use Blobs for is storing images and videos (or other user-generated content) on your site. In that case, storing them and displaying them back to your users is pretty simple.

Another common scenario, which ties into the post on queues, is the transitional scenario. In this case, a user might upload a video for processing. Your application would store the video into Blob storage, and then push a work ticket into the queue. The work ticket would hold only the top-level metadata (user name, transaction id in your db, the name of the Blob). The worker node would pull this off of the queue, pick up the Blob, and process it. It might then put the results into a different Blob container, and then finally delete the original Blob out of storage. Guess what the delete command is. You guessed it, you just change the HTTP verb to DELETE. Same URL as above. Of course, the user has to have permissions to delete, so don’t worry that some script kiddie is going to start deleting all of your homemade Battlestar Galactica movies off of your site.
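The transitional scenario end-to-end looks roughly like this. All names here are hypothetical, and in-memory dicts and a deque stand in for Blob storage and the Azure queue; the point is only the choreography of store, enqueue, process, delete:

```python
from collections import deque

blobs = {}        # stand-in for Blob storage
results = {}      # stand-in for the results container
queue = deque()   # stand-in for an Azure queue

def submit(user, txn_id, blob_name, video_bytes):
    """Front end: store the big payload, enqueue a small work ticket."""
    blobs[blob_name] = video_bytes
    queue.append({"user": user, "txn_id": txn_id, "blob": blob_name})

def worker_step(process):
    """Worker node: pull a ticket, fetch the blob, process, clean up."""
    ticket = queue.popleft()                    # pull the work ticket
    video = blobs[ticket["blob"]]               # pick up the Blob
    results[ticket["blob"]] = process(video)    # results to other container
    del blobs[ticket["blob"]]                   # DELETE the original Blob

submit("bhp", 42, "episode1.wmv", b"raw video")
worker_step(lambda v: b"transcoded:" + v)
```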

Toledo will be cloudy on February 17th

Labels: ,

The Northwest .NET Users Group has been kind enough to invite me to speak at their February meeting. It will be on 2/17 from 6:00p – 8:00p, located at 333 North Summit Street, Toledo, OH.

I will be speaking on cloud computing, what it can mean to you, and what the Azure Services Platform is. Given enough time, we will even look at some code.

I hope to see you there!

Queues in Azure


Many modern systems are now being designed with SOA principles in mind. This usually means they are designed as a composite application of several services working together. As part of this structure, you usually need a way for the different services to communicate.

A common way is to use an Enterprise Service Bus, or even just naked, direct SOAP calls. This works when the systems are synchronous in nature. But if the service you are leveraging is very asynchronous, meaning it is more like a back end processor, or bulk processor, then you are likely going to end up working with queues. The advantage to queues is that they help enforce some loose coupling in your architecture. Just make sure that you pick a queueing technology that supports the protocols the consumers will need (i.e. SOAP, REST, COM+, etc.)

If you are working with Azure, then you can easily leverage the queue infrastructure already built into the storage fabric of Azure. Before you dive in, there are a few things you should know about how queues work, and some of the design limitations they have.

Queues are FIFO. That means the first message in is the first message out. Much like a line for tickets at the movie theatre for Star Trek. The first nerd in line gets the first ticket, and so on.

Because it is possible for the processing agent that took the top message to fail, it is possible for a message to be forgotten about in an architecture like this. To handle this, most queue servers have the ability to mark a message as read, but not actually delete it until the processor says it was successful. In this manner, if the processing fails, your code can find stale read messages and reprocess them after a timeout period. The ‘read’ state also keeps other processing nodes from picking up the same message and processing it a second time. It is very common in this scenario to have several processing nodes reading messages off of the same queue. The queue becomes an abstraction for talking with the group of nodes, and is an easy way to balance the load across them.

Queues are a one-way asynchronous messaging system. I can use a queue to send you a message, but there has to be some other mechanism for any return message. Sometimes this is just a second queue, but more likely there is some other out-of-band signaling going on. Perhaps the sudden appearance of data in your database, a flag being set, or a flat file that gets picked up the next morning. Another common return path is for the back end processor to call a lightweight service (REST or SOAP) that merely reports that the specific message has been processed. For example, the contract might include an order id, and a final status (completed, shipped, error, pineapple, etc.).

You don’t just dump a giant message on the queue; this will surely lead to bad performance, regardless of which queue server you are using (Azure, MSMQ, MQ Series, etc.) If you are just processing an order from a web site, then it might be ok. But if you are processing an image, or something of real size, you are better off following a pattern called a ‘control message’ or ‘work request message’. In this pattern, you drop off the actual large part of the message in some common store. This could be a common file system, a common database, or the BLOB storage in Azure. Then you put a message in the queue that tells the backend processor what needs to be done, and which item in the common store to use.

In the ever-common image thumbnail generator scenario, you might put the uploaded image into BLOB storage, and then put a message in the queue that states the name of the item in BLOB storage, the expected thumbnail dimensions, and an account code to bill the work to. The backend processor would then pick up the message, go fetch the image, do the work, bill the proper code, and then dump the thumbnail back into the common storage. The consuming website then just keeps checking for the particular thumbnail filename to see when it is done, or you could leverage one of the callback mechanisms mentioned above.
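That thumbnail work request might be built something like this. The element names here are hypothetical (Azure doesn’t dictate your message schema), but the idea is the same: only a small XML ticket goes on the queue, and a quick size check keeps it under the platform’s message limit:

```python
# Build a small XML control message for the thumbnail work request.
import xml.etree.ElementTree as ET

def make_control_message(blob_name, width, height, account_code):
    root = ET.Element("WorkRequest")
    ET.SubElement(root, "Blob").text = blob_name
    ET.SubElement(root, "ThumbWidth").text = str(width)
    ET.SubElement(root, "ThumbHeight").text = str(height)
    ET.SubElement(root, "AccountCode").text = account_code
    message = ET.tostring(root, encoding="unicode")
    # Azure queue messages are capped at 8KB, so fail fast if we blow it.
    assert len(message.encode("utf-8")) <= 8 * 1024, "over the 8KB limit"
    return message

print(make_control_message("upload-123.jpg", 120, 90, "ACCT-7"))
```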

It is common to have one queue per message pattern, meaning all messages going into the queue should either always be bound for the same destination (all messages pertaining to customer records), or be of the same verb (process this image, produce report). The downside to this is that it is very easy to end up with a proliferation of queues. This leads to a management nightmare, as well as a lot of traffic.

In Azure, you can create as many named queues as you want. When you put a message onto a queue, it can be no larger than 8KB, and must be XML. This is to keep the platform fast, and super scalable. A queue can theoretically hold as many messages as you want to put in it, but I haven’t done any performance or scalability testing on the Azure queue to see if this holds up.

The API is RESTful, and you can place or read items from the queue from anywhere that can make that REST call; it doesn’t have to be code running in an Azure role. This means that you can host your backend processor in the cloud, to get the dynamic scalability to respond to spike events, but wire up your preexisting applications to feed that queue.

What is the address of your queue? It depends on what you name it, and your account name for Azure. Perhaps you named your queue ImageProcessing, and your account name is BHP. In that case, the address for the queue would be: http://BHP.queue.core.windows.net/ImageProcessing. As you make REST calls into this address, make sure you remember you are addressing the queue at large. Meaning a delete command would delete the queue. To add a message to the queue, you need to extend the URI a little, to something like this: http://BHP.queue.core.windows.net/ImageProcessing/messages. Of course there would be parameters that hold the actual message content you wanted to add.
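A quick sketch of how those two addresses are put together, using the BHP/ImageProcessing example from the text:

```python
# Build the two flavors of queue address described above.

def queue_url(account, queue_name):
    """Address of the queue itself — a DELETE here deletes the queue."""
    return f"http://{account}.queue.core.windows.net/{queue_name}"

def messages_url(account, queue_name):
    """Address for adding or reading messages on that queue."""
    return queue_url(account, queue_name) + "/messages"

print(queue_url("BHP", "ImageProcessing"))
# → http://BHP.queue.core.windows.net/ImageProcessing
```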

When you get a message from the queue (an HTTP GET against the URL above) you have two optional parameters you can define. The first lets you fetch more than one message at a time. This is important for scale reasons, when the overhead of fetching a message is high. In this case, grabbing a batch of messages is more efficient. The second parameter allows you to set the invisibility timeout, up to two hours. If you don’t delete the message before this timeout, then it will reset back to visible, allowing someone else to pick up the message.

When you GET a message, you are given a pop receipt id. This id is needed in order to DELETE the message when you are completed with its processing. You will also need to supply the message id itself (which is a GUID). This is to make sure you delete the proper message off of the queue, and that you are the most recent recipient of the message. Remember, in a timeout scenario, the message could be revived, and given to another processor. If the timeout, which can be set on the GET, expires, then the pop receipt will expire as well. This keeps you from running into conflicts when things go haywire.
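These GET/DELETE semantics can be shown with a toy simulation (again, not the real API — just the behavior described above): a GET hides the message and hands back a pop receipt, and if the visibility timeout expires first, the message reappears and the old receipt is no longer honored:

```python
import time
import uuid

class Queue:
    def __init__(self):
        self._messages = []   # list of message dicts

    def put(self, body):
        self._messages.append(
            {"id": str(uuid.uuid4()), "body": body,
             "invisible_until": 0.0, "receipt": None})

    def get(self, visibility_timeout=30.0, now=None):
        now = time.monotonic() if now is None else now
        for m in self._messages:
            if m["invisible_until"] <= now:              # visible message
                m["invisible_until"] = now + visibility_timeout
                m["receipt"] = str(uuid.uuid4())         # fresh pop receipt
                return m["id"], m["receipt"], m["body"]
        return None

    def delete(self, msg_id, receipt, now=None):
        now = time.monotonic() if now is None else now
        for m in self._messages:
            if (m["id"] == msg_id and m["receipt"] == receipt
                    and m["invisible_until"] > now):     # receipt still valid
                self._messages.remove(m)
                return True
        return False   # receipt expired — another node may own the message
```

Passing `now` explicitly just makes the timeout behavior easy to demonstrate without actually waiting.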

If people want, I can code up a sample, and walk through it.