Observations from a Full-Scale Migration to Windows Azure, Part 1 (Highlights)

Over the past several years, we have been designing and developing our systems in preparation for getting them up “into the cloud”. Whether this meant Microsoft, Amazon, or whomever was unimportant, as the architecture needed to allow for high-availability and load-balanced deployments of our systems – the cloud-specific issues could be figured out later. About a year and a half ago, we deployed some minor systems to Azure and consumed some of its services (most importantly queueing and blob storage). Over the past month and a half, we’ve been making changes specific to Azure. And last weekend, a co-worker of mine (who I can’t express enough gratitude towards) and I spent a grueling 72 hours, beginning Friday morning, migrating all of our databases and systems to Azure. We learned a lot through our various successes and failures during this migration, and in the time leading up to it.

For our system, we have a single set of internal WCF services hitting the database and half a dozen internal applications hitting those internal services. One of those internal applications is a set of externally-accessible WCF services, and on our customers’ behalf, we have some custom applications consuming those “public” services. Technologies/systems that we employ include the following:

  • SQL Server (a 50GB database, 23GB of which lives in a single table)
  • SQL Server Reporting Services
  • SQL Server Analysis Services
  • SQL Server Integration Services
  • WCF
  • .NET 4.0/MVC 2.0/Visual Studio 2010
  • Claims-Based Authentication (via ADFS)
  • Active Directory
  • Probably some more that I’m forgetting. If they’re important, I’ll add them back here.

By the end of the weekend, we had successfully migrated all of the critical systems we had planned to, and only a couple of non-critical apps still needed migration. We (temporarily) pulled the plug on one of our non-critical applications, partly due to migration difficulties and partly due to a pre-existing bug that needs to be fixed ASAP, so we decided to tackle both at once the following week after getting some sleep. I can’t say the migration went without a hitch. While we had some unexpected major victories in some high-risk areas, we also had some unexpected major problems in some low-risk areas.

I’ll go over some specific experiences in follow-up posts, but here are the major key points we took away from this experience that might help others. Some of these we knew about in advance and were prepared to deal with; others caught us by surprise and caused problems for our migration.

  1. If you have a large database (anything more than 5GB), do a LOT of testing before you start the migration! Backups, dropping/recreating indexes on large tables, etc.! For instance, we have one table whose index we can’t drop and recreate, and the default ways of creating backups take 8-10 hours for our database!
  2. When migrating your database to Azure, don’t use your local environment as your “base system”. Upload a .bak backup file to Azure blob storage using a tool like Cerebrata’s Cloud Storage Studio (which uploads in small chunks, so you can easily recover from errors and get better throughput), create a medium-sized Azure Virtual Machine from a SQL Server Evaluation image, and base all of your data migration work there. You’ll save so much time doing it this way unless you get everything working perfectly on your very first try (unlikely). For just a couple of bucks (it literally cost us ~$2 for the entire weekend’s worth of VMs we used), it’s totally worth it!
  3. AUTOMATION!! Automation, automation, AUTOMATION! You do the same thing over and over and over so many times, really, have a solid build server with automated build scripts for doing this! Do NOT use Visual Studio or any manual process! The ROI on investing in a build server will pay off before your 5th deployment, most likely, regardless of how complex or simple your system is!
  4. No heaps! You must have a primary key/clustered index on every single table. No exceptions! Period! Exclamation mark!
  5. Getting Data Sync up and running is a major pain in the ass! Azure’s Data Sync has stricter limitations than SQL Server’s (for instance, computed columns don’t play nicely at all in Azure, but SQL Server has no problem with them). There are just enough nuances, and they take so long to find, that you can spend quite a bit of time just figuring this out. And then figuring out how to automate around these nuances is yet another topic of discussion, since the tools are so poor right now.
  6. Use the SQL Database Migration Wizard to migrate your data from your “base system” to an Azure database. But be gentle with it, it likes to crash and that’s painful when it happens 3 hours into the process! Also, realize that it turns nullable booleans with a NULL value into FALSE and doesn’t play nicely with some special characters, so be prepared to deal with these nuances!
  7. Red Gate SQL Compare and SQL Data Compare are GREAT tools to help you make sure your database is properly migrated! SQL Data Compare fixes up the problems from the SQL Database Migration Wizard very nicely and SQL Compare gives you reassurance that indexes, foreign keys, etc. are all migrated nicely.
  8. As I said before, test test test with your database! For us, 8-10 hour database backups were unacceptable. Our current solution for this problem is to use Red Gate’s Cloud Services Azure Backup service. With the non-transactionally-consistent backup option, we can get it to run in ~2 hours. Since we can have nightly maintenance windows, this works for us.
  9. Plan on migrating to MVC 3.0 if you want to run in Server 2012 instances.
  10. If you’re changing opened endpoints in the Azure configuration (i.e. opening/closing holes in firewalls), you have to delete the entire deployment (not service) and deploy again. Deploying over an existing deployment won’t work but also won’t give you any errors. Several hours were wasted here!
  11. MiniProfiler is pretty awesome! But the awesomeness stops and becomes very confusing if you have more than 1 instance of anything! Perhaps there’s a fix for this but we haven’t yet found one.
  12. If you have more than just one production environment, it’s very handy to have different subscriptions to help you keep things organized! Use one subscription for Dev/QA/etc, one for Production, one for Demo, one for that really big customer who wants their own dedicated servers, etc. Your business folks will also appreciate this as it breaks billing up into those same groups. Money people like that.
  13. Extra Small instances are dirt cheap and can be quite handy! But don’t force things in there that won’t fit. We found that, with our SOA, Extra Small instances were sufficient for everything except two of our roles. Aside from those two roles, we actually get much better performance, at a lower price, from 7 (or fewer) Extra Small instances than from 2 Small instances (1 Small costs the same as 6 Extra Small).
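The chunked-upload approach in point 2 is worth understanding even if a tool does it for you: a network hiccup then costs you one chunk instead of the whole multi-GB backup. Here’s a minimal, hypothetical sketch (in TypeScript for illustration; `ChunkUploader` is a stand-in for whatever storage client call you actually use):

```typescript
// Split a payload into fixed-size chunks and upload each one with
// per-chunk retries, so a transient failure only costs one chunk.
type ChunkUploader = (chunk: Uint8Array, index: number) => Promise<void>;

async function uploadInChunks(
  data: Uint8Array,
  chunkSize: number,
  upload: ChunkUploader,
  maxRetries = 3,
): Promise<number> {
  let chunkCount = 0;
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    const chunk = data.subarray(offset, offset + chunkSize);
    let attempt = 0;
    // Retry just this chunk on failure; rethrow once retries are exhausted.
    for (;;) {
      try {
        await upload(chunk, chunkCount);
        break;
      } catch (err) {
        if (++attempt > maxRetries) throw err;
      }
    }
    chunkCount++;
  }
  return chunkCount;
}
```

This is the same idea Cloud Storage Studio applies when uploading a .bak file: only the failed chunk is resent, not the whole backup.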

In the next post, we’ll go over the things that we did leading up to this migration to prepare for everything. From system architecture to avoiding SessionState like the plague and retry logic in our DAL, we’ll cover the things that we did to help (or we thought would help) make this an easier migration. And I will also highlight the things we didn’t do that I wish we had done!
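To give a taste of the retry logic mentioned above: it boils down to a small wrapper around each data-access call. This is a hedged sketch, not our actual DAL code (ours is C#; TypeScript here for illustration, and `isTransient` is a hypothetical predicate – for SQL Azure you would check the documented transient error numbers):

```typescript
// Wrap a data-access call with retry-on-transient-error logic and
// exponential backoff (base, 2x base, 4x base, ...).
async function withRetry<T>(
  operation: () => Promise<T>,
  isTransient: (err: unknown) => boolean,
  maxAttempts = 4,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      // Give up on permanent errors, or once attempts are exhausted.
      if (attempt >= maxAttempts || !isTransient(err)) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

The key design point is that the *caller* never sees a transient fault unless it persists past the retry budget – which is exactly what a cloud-hosted database demands of a DAL.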

Mouse without Borders – My experiences on Windows 8 CP

I just thought I’d share my observations as I’ve been using Mouse without Borders (what is that?) on Win8 CP since the day it came out. I’ve used this at work on a daily basis, so I’ve logged MANY hours of experience with this setup.

So what is my setup? I have three systems side-by-side. My “host” (where my keyboard and mouse are plugged in) is a Windows Server 2008 R2 machine (this is my primary workstation that runs Hyper-V with several VMs that I RDP into). To my left is an old Vista-based workstation that is there to support a legacy application of ours (for only one more month – yay!). Then to the right is my PDC laptop (what’s a PDC laptop?), which has a 2-point resistive touchscreen. I’m currently running v2.1.1.1112 of Mouse without Borders on a wired network.

For the most part, this works well. Mouse, keyboard, and copy/paste of text (I don’t know about files – I never use that feature) all work well, when they work. However, there are some non-minor issues. Just to share with you my experiences:

  1. Whenever my mouse enters the Win8 system, I can never simply move my mouse out of it. Instead, I must use the hotkey to switch to a specific other system. This is with “Easy Mouse” enabled in MWB. I have tried disabling “Easy Mouse”, and even then, holding CTRL as I move across screens still doesn’t work. This forces me back to the CTRL+ALT+F1 hotkey combo that I have set up to go to specific workstations – the hotkey works perfectly in both directions. (Not sure who to blame on this one – a tweak to MWB will probably fix this.)

    UPDATE: I’ve discovered that if you check the “Move mouse relatively” checkbox on the host, this problem goes away. This may or may not be a problem for you. Personally, I don’t like the mouse behavior with this checked. But still, this sounds like it’s something that MWB can easily fix.

  2. Approximately half of the time, MWB doesn’t work on the login screen but does once I login. I usually have my laptop’s screen rotated so that its keyboard and touchpad are inaccessible, which forces me to use the touchscreen to login. I think this is an inadequacy in MWB that needs to be fixed.

    UPDATE: I’ve discovered that if I lock my Win8 box by hitting [START] and then clicking on my avatar in the top-right corner and then clicking Lock, then I can usually (85% of the time) unlock my Win8 machine with my kb/mouse.

  3. (continued from 2) Of the times that I have to use the touchscreen to login, only half of the time does the new onscreen keyboard work. The other half of the time, I have to use the accessibility onscreen keyboard (the one that Windows Vista and Windows 7 had). Why? No clue. But I notice that when it’s in this mode, when I use my touchscreen that it moves the mouse cursor around where I’m tapping instead of giving me the little bubble animation to show my touches. I think this is a Win8 bug, and a pretty bad one. If I didn’t have that accessibility button on the login screen, if this were a tablet without a physical keyboard, I would be kinda screwed!
  4. It seems that often (maybe twice a day), when I lock my machines, MWB totally disconnects from the server on the Win 8 machine. I rarely see this from the Vista machine (maybe once a week). When this happens, CTRL+ALT+R rarely helps. Closing and restarting the host’s instance of MWB sometimes fixes this. More often than not, I end up rebooting my Win8 box to fix it. I have no clue whose “fault” this is.

One more issue to mention: Many people seem to have installation difficulties with Mouse without Borders because it depends on .NET 2.0. That said, I personally didn’t experience these troubles. I’m not sure if it was MWB or something else, but the bootstrap installer automatically installed .NET 2.0 for me and it was a pleasant experience for the most part. It was kinda slow, but it “just worked” and I had to do nothing special. I don’t know what I did differently, but I did want to share this. If you run into that problem and need help, check out this blog post by Bruce Cowper.

All in all, while there are a lot of fairly major issues, MWB is still worth using. The problems tend to last only a minute and, fortunately, never hit when I’m in the middle of something but always as I’m about to start something. Other than my first point (which I’ve learned to live with), if these problems randomly crept up while I was actively using MWB, I probably would have gone back to Synergy by now. But since the major annoyances tend to come at login time, a slight delay isn’t so bad. That said, it’s still fairly annoying and I’d like to see these scenarios work much better!

Installing Windows 2008 R2 from USB Stick via EFI on Dell PowerEdge servers

I previously posted an article on Installing Windows 7 or Windows Server 2008 R2 from USB Stick. I’ve been using this process for a while and it’s worked beautifully! However, we recently got in a PowerEdge R510 (12x drive bays) whose RAID10 array is larger than 2TB, so I ran into the 2TB MBR limit.

If you Google around, you’ll find TONS of people running into problems with this. Ultimately, the solution is different for every EFI implementation. With Dell’s uEFI v2.1 (uEFI before v2.0 does not work with Windows, FYI), the solution was rather simple, but a team of Dell support engineers and I spent the better part of half a day figuring it out.

  1. First of all, the USB stick MUST be formatted as a FAT32 drive. NTFS will not work!
  2. Next, follow my instructions from Installing Windows 7 or Windows Server 2008 R2 from USB Stick. This will get you a USB stick that will work for non-GPT installs, but we need to modify it for EFI installs using Dell’s uEFI implementation.
  3. Now, here’s the part that wasted so much of our time! On your USB stick, go into the H:\efi\microsoft\ folder and copy the boot folder into the H:\efi\ folder.
  4. Next, go into an existing install of Windows 2008 R2 and copy the bootmgfw.efi file out of c:\Windows\Boot\EFI\ into the USB stick’s H:\efi\boot\ folder but rename it to BOOTx64.EFI (not case sensitive).
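If you prepare sticks often, steps 3 and 4 can be scripted. Here’s a hypothetical Node/TypeScript sketch (the `prepareEfiBoot` name and parameterized roots are mine; the folder layout is exactly the one described above):

```typescript
import { cpSync, copyFileSync } from "node:fs";
import { join } from "node:path";

// stickRoot is the USB stick's root (e.g. "H:/"); windowsRoot is the root
// of an existing Windows install (e.g. "C:/").
function prepareEfiBoot(stickRoot: string, windowsRoot: string): void {
  // Step 3: copy efi\microsoft\boot up into efi\boot on the stick.
  cpSync(
    join(stickRoot, "efi", "microsoft", "boot"),
    join(stickRoot, "efi", "boot"),
    { recursive: true },
  );
  // Step 4: drop in bootmgfw.efi from a Windows install, renamed BOOTx64.EFI.
  copyFileSync(
    join(windowsRoot, "Windows", "Boot", "EFI", "bootmgfw.efi"),
    join(stickRoot, "efi", "boot", "BOOTx64.EFI"),
  );
}
```

For example, `prepareEfiBoot("H:/", "C:/")` reproduces the two manual copy steps.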

At this point, the USB drive should be bootable to Dell’s uEFI v2.1 (and likely other versions) in a way that makes Windows happy!

These steps should also work for Windows Vista x64 SP1 (and newer), Windows 7 x64, and Windows Server 2008 x64 SP1 (and newer). If you have a 32-bit version of Windows, then give up (or really, upgrade to a 64-bit OS, are you crazy?!).

Portable Devices and 3D

We’ve recently seen some news about 3D in the mobile industry (the next Tegra 2 CPU (known as Tegra 2 3D) and 3D phones at MWC2011). It has gotten a great marketing response but a pretty mediocre-to-lackluster response from consumers. On this one, I have to agree with the marketers in one way: I think this is great! On the other hand, I also have to agree with the consumers: nobody really cares now, nor will they much when it’s first released. So what’s the deal here?

Before I explain my thoughts, let me throw one more variable into the equation. You have heard of Kinect for the Xbox 360 from Microsoft, right? If not, go read here. As a very quick technical summary, the main components of Kinect that we care about here include a/some video camera(s) and depth perception. Do you see where I’m going with this?

Let me give you one more hint if you still don’t get what I’m talking about here. Have you ever seen Minority Report? If not, you MUST watch this video. At this point, I should no longer need to explain anything. Between having a 3D display that doesn’t require glasses and Kinect technology, we now have all the physical technology required for this type of UI! But actually, ours is cooler, because the Minority Report UI isn’t really in 3D and ours is! So we can actually interact with the depth of UI components by “touching” them, something the Minority Report UI couldn’t do. They essentially just had over-glorified 2D pinch-to-zoom.

So what we see in 2011 will be some pretty crappy implementations of 3D in mobile devices. There will be 3D movies that play on them and some terrible apps/games that use the 3D display. However, this is a necessary evil to get us where we are going. Once 3D displays become standard and Kinect-type controllers become more common, we can begin creating UIs from the ground up that build upon these features in ways that make for awesome devices. So as the marketers say, this is great/amazing/spectacular. But, as the consumers say, nobody will initially care much. However, it really will produce some really neat results in the end! So if you’re somebody with some money and creativity, I STRONGLY encourage you to create some 3D UIs and get some patents in this area. Today’s technology allows all of this to happen: 2011 will continue shipping products with these technologies in them, and by 2012 people will begin doing this for real instead of just as a gimmick to drive sales. So what you begin today will come into strong demand about 1.5 years from now! At that point, Apple, Microsoft, Google, or somebody will want to buy your company out.

What do you think about having 3D displays on tablets and smartphones? Share below if this is something you’re looking forward to or not (because I guarantee, we will get there!).

Integrated Portable Devices – The future of Smartphones and Tablets

Quote me now, this is where I think the industry will go in a number of years (I guess ~2 years from now but may be wrong on that). We’re already starting to see shifts in this direction with a great response to it.

The Atrix sounds cool where the phone docks into a laptop shell, right?

What about the Asus Eee Pad Transformer where you dock the tablet into a keyboard to create a laptop?

But wait, there’s more! We can, and will, do better!

We will see a market where you choose your phone and everything builds upon that device. We will have a variety of tablets that we can dock our phones into. We will have a variety of keyboards that we can dock our tablets into. And perhaps we will keep the Atrix idea of directly docking a phone into a keyboard/screen combo. Ultimately, all of these devices will be powered by our phones and we will get to choose the platform independent of the accessories independent of the carrier. For a very large percentage of us (not all, mind you), this device with these accessories would provide us with 100% of our personal portable computing needs, and would even apply to a significant number of us in the workplace as well. For the rest of this article, I will refer to these devices as a Phone, a Tablet Shell, and a Keyboard Dock.

A few points I want to make sure to hit on:

  1. Industry-standard compatibility
  2. Cheaper upgrades, fewer wasted natural resources with more flexible accessories
  3. Single data package for all of these different “devices”

1. Industry-standard compatibility
This will require some standards. Let’s call this the IPD (Integrated Portable Device) Standard; clearly we would need all parties to agree on and support it. A small governing body would be created (perhaps with representatives from the major interested companies on the panel) to govern the standard.

Initially, I would suggest IPDv1 support some pretty basic I/O: display (out from the phone and into the tablet), sound out/in through all interconnects (speakers and microphones on the Tablet Shell and/or Keyboard Dock), and user interaction in through all interconnects (touchscreen on the Tablet Shell, keyboard/touchpad on the Keyboard Dock, perhaps other buttons too). Via this standard display-out, we could also go to a TV, similar to how we use HDMI-out on phones now (but I suggest we consider a different standard, such as DisplayPort, which still supports HDCP). I would also propose some Data I/O standards so that our Tablet Shell and/or our Keyboard Dock can have additional functionality (such as a memory card reader and definitely backup capabilities, controlled by the phone OS/software, of course). Lastly, there should be standard power connections so a Tablet Shell and a Keyboard Dock can provide additional battery capacity, and charging, to the other devices. All of these standards would be independent of carrier and OS, so that accessories purchased with a 3G CDMA Android phone on Verizon would still work when upgrading to a 4G WiMAX WP7 phone on Sprint. We’ll see if Apple would allow their devices to meet these standards – I would hope so but doubt it.

Additionally, we would need physical standards so that different phones can dock into the same Tablet Shell. This would make it such that a phone with a 3.2″ screen and a phone with a 4.4″ screen can both dock into the same Tablet Shell. This might require “phone cartridges”, but hopefully something better can be devised to keep a phone securely docked in a tablet yet conveniently removable. Port placement would also be standardized.

Lastly, these standards will need to grow over time (for example, when we need to support 3D displays, the display interconnect standard might change), so there needs to be clear versioning of these standards to prevent confusion over what is and is not supported. So if IPDv1 is what I proposed, then perhaps IPDv2 would add 3D support, Ethernet support, and support for up to 5″ phone screens (or rather, phones with a larger maximum set of dimensions). Ideally, these standards would not change frequently, and there should be a very forgiving level of backwards compatibility (for example, an IPDv2 phone that supports 3D should still work, albeit in 2D, with an IPDv1 Tablet Shell). These standards should be tightly controlled/patrolled to prevent devices/accessories from being released that aren’t fully compatible with what they claim to be.

2. Cheaper upgrades, fewer wasted natural resources with more flexible accessories
Since at upgrade time you can continue using your previous accessories with newer phones, upgrades become cheaper for the user and fewer natural resources are wasted. This is a win for all of us. For those of us who care about the environment, a win. For those of us who want to save money as consumers, a win. For those of us who want both a 10″ and a 7″ tablet for different occasions, a win.

But what about manufacturers who make a lot of money on accessories? Let’s look at that, since this will fail without their support and adoption. We have 3 primary groups who make money off of accessories: Phone Manufacturers (Motorola, etc.), Third-party Manufacturers (Seidio, etc.), and Carriers/Resellers (who tend to rebadge and/or resell products from the first two groups). For the first two groups, I claim this is a win because it will have little negative impact on the current market of gel cases, perfect-fitting car docks, screen protectors, etc. where they currently make money (those accessories would not be IPD-compliant and would still have their own market). It would affect them in two ways: (1) it would create some new product areas for these manufacturers, specifically Keyboard Docks, and (2) it should increase sales of tablet/laptop accessories since it lowers the barrier of entry for new users to obtain tablets (which would still need gel cases and screen protectors). For Tablet Shells, this would open up the accessories market a bit and let screen manufacturers create and sell what they specialize in, profit from, and do best – screens with interfaces to display stuff on – without hurting manufacturers of the other legacy-type accessories. For the third group, the carriers/resellers, well, this just gives them more product to sell and profit on.

3. Single data package for all of these different “devices”
For the user, this could save some money by not requiring 3 different data packages to keep a phone/tablet/laptop online while traveling. Since everything happens through the phone, only the phone would need a data package. Even if it doesn’t save money, it would at least be easier and less confusing for the user. So for the user, this is a win. What about the carrier? Well, this is a tough one, but I still think it’s a win, since carriers are expected to move more and more towards tiered data plans. AT&T has already done this, and Verizon has given every indication that it will as well. Tiered data plans fit perfectly with Integrated Portable Devices. And this still doesn’t prevent carriers from offering a Premier Unlimited Data package. In fact, I think it would be wise for them to do so – just consider what that might mean for them (i.e. the exact reason why we’re all expecting them to move to tiered plans).

Please comment below and let us all know what you think!

I thought I would update this and list a few interesting articles that talk about progress on IPDs:

Now don’t get me wrong. These aren’t the most polished products, or even something that can necessarily be called “good”, but this is an interesting evolution to watch as it all evolves right before our eyes!

Droid X Misinformation – Let’s Clear Some Things Up

There has been a LOT of misinformation floating around about the Droid X. I can’t count how many times I have heard rumors that have been made up instead of coming from the reliable sources (i.e. the people who have had the device, the people who leak the information, etc.). So I thought I would clarify here everything as I know it. Now keep in mind, the phone has not yet even been announced, so there is most certainly the chance that anything can change. For all we know, maybe Verizon has scrapped Android and put WebOS on it! (Not really, but keep in mind that things can change, even from the devices that people have personally seen.) So while nothing is locked in stone yet, I’m trying to filter out the unreliable rumors from the reliable rumors.

Misinformation Summary

|  | Unreliable Rumor | Reliable Rumor |
| --- | --- | --- |
| Screen Resolution | 720p | 854×480 |
| Screen Size | 4.4″ | 4.3″ |
| Android Version | FroYo (Android 2.2) | Eclair (Android 2.1) |
| Release Date (date in YOUR hands) | June 23 | July 19 |
| CPU | Snapdragon | 720MHz or 1GHz OMAP3 CPU |
| Front-Facing Camera | Yes | No 🙁 |

Screen Resolution

First off, take a look at the official Verizon and Motorola pages for the device. They don’t say much, but I’m sure in a few days they’ll be a bit more interesting. The main comment I have here is that Verizon temporarily and erroneously listed “720p screen” on the official page before updating it and changing it to “Captures 720p”. So this is one source of misinformation that many people are blogging about. Engadget, among many others, confirms this. To reiterate, the Droid X does NOT have a 720p display! At this point, the reliable rumor is that the display’s resolution will be 854×480, the same as the original Droid, based on Engadget, the people who actually had hands-on time with the phone.

Droid X or Droid 2?

Now that we have looked at the mostly useless official pages, let’s next discuss the phone’s identity itself. The rumormill is going on and on about the Droid X, and only some of them mention that a second phone is being talked about at the same time: the Droid 2. The Droid X is supposed to be a 4.3″ screen-only phone, while the Droid 2 is a 3.7″ phone with a physical keyboard. I have seen more than one report confuse the two without even realizing that they are different phones. So keep this in mind in your reading. (As far as I know, Verizon/Motorola has not yet officially acknowledged the existence of the Droid 2, only the Droid X.) Lots of reports are attributing the rumors/specs of one phone to the other.

Eclair? Frozen Yogurt? Hot Fudge Sundae?

Next, let’s talk about the operating system version. Various sources are reporting that it will run Eclair (Android 2.1), while others claim it will run FroYo (Android 2.2). However, I have not seen any of them cite a reliable source for the claim that it will run FroYo. Rather, the only reason to believe it’s a possibility is a widget displayed on the Droid X that is supposed to ship with FroYo and has not been included in Eclair yet. This is where the confusion between the Droid X and Droid 2 comes into play. The Droid X is supposed to be in consumers’ hands on/about July 19, while the Droid 2 is speculated for August. As such, the Droid 2 has a bit more time to get FroYo ready, whereas the Droid X doesn’t have as much time and is supposed to launch with Eclair. This is based both on Engadget’s hands-on time with the phone and on what a couple of reports say has come from “reliable sources”. I have yet to see a “reliable source” for the FroYo claim.

Release Date

Most people agree that the Droid X will be announced on June 23 based on the invitations that many different news agencies received. Note the execs from Motorola, Verizon, Google, and Adobe who will be there. There is plenty of speculation about Adobe’s role in this but nobody really knows anything about this right now. While the phone is announced on June 23, some reports are saying (or at least implying) that it will be available on June 23 and this is an unreliable rumor. The reliable rumor is that it will be in consumers’ hands on July 19. As for the Droid 2, the rumor is August but I think this is mostly speculation at this point with no reliable source for it.


CPU

So this is one area where I am unsure of what exactly to expect. There have been multiple lines of rumors about this, and I’m not sure which one to believe. One line claims that the phone will have an OMAP3630 CPU. Another claims that it will have a 1GHz processor. And some claim that it is a 1GHz OMAP3630 CPU. However, there are problems with some of these rumors. As you can tell from TI’s offerings, the 3630 is a 720MHz part while the 3640 is a 1GHz part. Some people are even saying the 1GHz CPU will be a Qualcomm Snapdragon, but I suspect this is simply an assumption because that’s the only 1GHz CPU they’re familiar with. All in all, I’m not sure what to believe here. If it truly is a 1GHz OMAP3 CPU (whether the 3630 at a faster clock speed or the 3640), I think it would be a welcome improvement over the pseudo-standard Snapdragon.

Another comment to make about the CPU is that the common rumor used to be a 720MHz OMAP3630, until some advertising information leaked (that, mind you, did NOT mention a thing about the Droid X’s processor speed, only the Droid 2’s). I’m not really sure what the source is for the claim that the Droid X’s CPU will be a 1GHz processor, but it seems to be what everybody is reporting now. I’m still unsure about this. However, keep in mind that a 720MHz OMAP3630 is still comparable to a 1GHz Snapdragon for general use, and better for 3D gaming.

Front-facing camera

So I’ve seen more than one report that this will have a front-facing camera. I have only seen this reported from sites that seem to have no real source and are probably just making this information up. As far as everything I’ve found that seems reliable, there will be NO front-facing camera on the Droid X. And yes, this is a shame.

Screen Size

I have seen reports of both a 4.3″ screen and a 4.4″ screen. The 4.3″ screen is what was reported by Engadget (who physically had the device) and is what Verizon shows on the official page. I don’t know what the source of the 4.4″ screen rumor is, but a LOT of reports suggest it. They seem to have died down lately, but even today I have seen a couple of reports still claiming a 4.4″ screen. At this point, I think it’s pretty much definitely a 4.3″ screen. But are all 4.3″ screens the same? I’m not so sure. This screen does appear to be longer in the photos displayed next to an HTC Evo 4G (photos reported by a LOT of places but appearing to originate here). Considering that the resolution is slightly higher, the display could be a bit narrower than the Evo’s, which would explain how it could appear longer yet really be the same size, since screen size is measured diagonally from corner to corner.

There you have it! I think that takes care of most of the misinformation that I’ve been seeing. Like many others, I can’t wait to get my hands on this device!

Setting up an ASP.NET Service Account with Least-Privilege Permissions

I find myself rediscovering how to do this a lot, so I thought I’d post this here.

  1. Create your account (let’s say it’s MyDomain.com\sa_MyMVCHostingUser in your company’s Active Directory – it can also be a local Windows account)
  2. Open an elevated command prompt (Start -> “command” -> [CTRL]+[SHIFT]+[ENTER])
  3. Navigate to your .NET Framework directory (such as C:\Windows\Microsoft.NET\Framework\v4.0.30319)
  4. Execute: aspnet_regiis -ga MyDomain.com\sa_MyMVCHostingUser

After performing the above steps, your account will have the basic permissions to host a basic ASP.NET application. If you are accessing resources other than the IIS Metabase or content files in your IIS Application, then those permission configurations are beyond the scope of this post (and you would NOT set them up in a similar way, so don’t try).
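Condensed, the steps above are just two commands in an elevated prompt (the framework path and account name are the examples from the list; adjust them for your environment):

    cd /d C:\Windows\Microsoft.NET\Framework\v4.0.30319
    aspnet_regiis -ga MyDomain.com\sa_MyMVCHostingUser

The -ga switch grants the account the minimal rights needed to run ASP.NET (IIS metabase access and the like), which is why we don’t have to touch ACLs by hand.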

Use Lambda Expressions for Strongly-Typed Goodness, and Serialize it too

With my venture into MVC2 recently, I’ve fallen in love with the strongly-typed helpers and am attempting to provide mechanisms similar to those in many different areas of my code so I can cease the use of hardcoded/constant strings all over the place and get better compile-time error checking.

This post is to show you how you can use Lambda Expressions to perform these sorts of things in an N-Tier system in a way where you can even serialize these expressions. I’m only handling a very basic scenario here but the tools I use are VERY flexible and can handle, out-of-the-box, much more complex scenarios.

The setting where I’m using this is within MVC, where a grid is displaying paged data that I’d like to filter and sort. For this post, let’s look at the dynamic sorting options. Not only do I want to sort the data, I also want to support multi-column sorting. However the UI implements this, I don’t really care, but I want to expose a strongly-typed way for it to do so if it chooses (i.e. as the default, so future coders are less likely to create bugs that sneak into production).

In order to do this, I’ve created a fairly simple class to store these sorting options. I share it on both the UI side and the server side of WCF, as this is acceptable in my system. However, I do so in a way that COULD be consumed by any other technology, which would lose the strongly-typed functionality but not all functionality. Here is my initial version of this class:

    public class SortOption<TModel, TProperty> where TModel : class
    {
        public SortOption(Expression<Func<TModel, TProperty>> property, bool isAscending = true, int priority = 0)
        {
            Property = property;
            IsAscending = isAscending;
            Priority = priority;
        }

        public SortOption()
            : this(null)
        {
        }

        public Expression<Func<TModel, TProperty>> Property { get; set; }
        public bool IsAscending { get; set; }
        public int Priority { get; set; }
    }
Pretty simple, right? This actually is a VERY simple class, with the exception of that pesky Property member with the nasty Expression<Func<TModel, TProperty>> signature. In case you didn’t know, that signature is what lets us write code such as foo.Property = (x => x.FirstName); to say that we want the “FirstName” property off of whatever object we instantiated foo with as TModel (whether it’s a Person class or a Pet class or whatever). Because of that nasty Expression signature, there’s no way this is going to serialize nicely, or at all. Google it and everybody will tell you that you simply cannot serialize Lambda Expressions, don’t even try!
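To make that concrete, here’s a quick sketch of how the strongly-typed version gets used (Person here is a hypothetical model class for illustration, not part of the real system):

    // Hypothetical model for illustration only.
    public class Person
    {
        public string FirstName { get; set; }
    }

    // Strongly-typed: if FirstName is ever renamed, this line fails to
    // compile instead of letting a stale string sneak into production.
    var sortByFirstName = new SortOption<Person, string>(x => x.FirstName, isAscending: true, priority: 0);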

Luckily for us, they are only partially correct. Whether you really can serialize a Lambda Expression depends on what you want to do with it. All we’re using it for is to have strongly-typed code at development time. At run-time, it does nothing for us and is actually a slight performance hindrance. But this is the age of sacrificing performance for developer productivity, so this is okay (at least for some). So with the help of some Dynamic LINQ Libraries (haha!!), we can actually serialize the expressions we’re using in this scenario! ScottGu has a nice post introducing these and I encourage you to read it. What we are using, as ScottGu mentions, is the code in the Dynamic.cs file in the “\LinqSamples\DynamicQuery” project.

Take a looksee through the Dynamic.cs file and you’ll see a static DynamicExpression class with some ParseLambda functions in there – these are the things that we care about mostly for this. As you can see, these functions are quite powerful and flexible.
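For example, the overload we lean on can rebuild an expression from nothing but a property name as a string. A sketch, assuming Dynamic.cs is included in your project and reusing the hypothetical Person model:

    // Turns the string "FirstName" back into the expression x => x.FirstName.
    Expression<Func<Person, string>> property =
        DynamicExpression.ParseLambda<Person, string>("FirstName");

That string-in, expression-out round trip is exactly the hole our serialization story needs.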

Once we make some slight modifications to our SortOption<TModel, TProperty> object to implement ISerializable, we can leverage one of those ParseLambda functions to perform the hard part of our custom serialization of this class. Below is a newer version of this class, all with nice comments, to do exactly what we’re wanting to do:

    /// <summary>
    /// This defines a framework to pass, across serialized tiers, sorting logic to be performed.
    /// </summary>
    /// <typeparam name="TModel">This is the object type that you are filtering.</typeparam>
    /// <typeparam name="TProperty">This is the property on the object that you are filtering.</typeparam>
    public class SortOption<TModel, TProperty> : ISerializable where TModel : class
    {
        /// <summary>
        /// Convenience constructor.
        /// </summary>
        /// <param name="property">The property to sort.</param>
        /// <param name="isAscending">Indicates if the sorting should be ascending or descending.</param>
        /// <param name="priority">Indicates the sorting priority where 0 is a higher priority than 10.</param>
        public SortOption(Expression<Func<TModel, TProperty>> property, bool isAscending = true, int priority = 0)
        {
            Property = property;
            IsAscending = isAscending;
            Priority = priority;
        }

        /// <summary>
        /// Default Constructor.
        /// </summary>
        public SortOption()
            : this(null)
        {
        }

        /// <summary>
        /// This is the field on the object to filter.
        /// </summary>
        public Expression<Func<TModel, TProperty>> Property { get; set; }

        /// <summary>
        /// This indicates if the sorting should be ascending or descending.
        /// </summary>
        public bool IsAscending { get; set; }

        /// <summary>
        /// This indicates the sorting priority where 0 is a higher priority than 10.
        /// </summary>
        public int Priority { get; set; }

        #region Implementation of ISerializable

        /// <summary>
        /// This is the constructor called when deserializing a SortOption.
        /// </summary>
        protected SortOption(SerializationInfo info, StreamingContext context)
        {
            IsAscending = info.GetBoolean("IsAscending");
            Priority = info.GetInt32("Priority");

            // We just persisted this by the PropertyName. So let's rebuild the Lambda Expression from that.
            Property = DynamicExpression.ParseLambda<TModel, TProperty>(info.GetString("Property"), default(TModel), default(TProperty));
        }

        /// <summary>
        /// Populates a <see cref="T:System.Runtime.Serialization.SerializationInfo"/> with the data needed to serialize the target object.
        /// </summary>
        /// <param name="info">The <see cref="T:System.Runtime.Serialization.SerializationInfo"/> to populate with data.</param>
        /// <param name="context">The destination (see <see cref="T:System.Runtime.Serialization.StreamingContext"/>) for this serialization.</param>
        public void GetObjectData(SerializationInfo info, StreamingContext context)
        {
            // Just stick the property name in there. We'll rebuild the expression based on that on the other end.
            info.AddValue("Property", Property.MemberWithoutInstance());
            info.AddValue("IsAscending", IsAscending);
            info.AddValue("Priority", Priority);
        }

        #endregion
    }

With this simple custom serialization implementation, we can now have strongly-typed, serializable Lambda Expressions and hopefully prevent sneaky bugs from getting in. Granted, when we serialize/deserialize this, we’re converting the expression to a plain string that is no longer compile-time checked, but this is the ugly plumbing work that you tend not to change much. And when you do break it, it tends to be pretty obvious rather than sneaky, so that’s okay. From another perspective, good luck serializing anything if you can’t convert it into a string of some sort!
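As a final usage sketch, here is how the server side might apply one of these options after it comes across the wire (Person and GetPeople are hypothetical placeholders, not part of the real system):

    // option would normally arrive deserialized from the WCF boundary.
    var option = new SortOption<Person, string>(x => x.FirstName);

    // Feed the rebuilt expression straight into LINQ's ordering operators.
    IQueryable<Person> people = GetPeople();
    IQueryable<Person> sorted = option.IsAscending
        ? people.OrderBy(option.Property)
        : people.OrderByDescending(option.Property);

Multi-column sorting then falls out naturally: order the collection of SortOptions by Priority and chain ThenBy/ThenByDescending for each subsequent one.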

Daisy-chain your PCs’ audio to 1 set of speakers

Way back in WinXP times and before, I would always connect all of my PCs at one desk to a single set of speakers. The only special hardware this required was some extra 3.5mm audio cables and a set of ordinary speakers. The software to do this was built into Windows XP and just required some intelligent checkboxes to be selected. However, the native ability to do this was removed in both Windows Vista and Windows 7. Luckily, with a registry hack, it is possible to still get this behavior on both operating systems!


SOA-based Architecture in progress

I’ve been tasked with coming up with a standard architecture to apply to our future projects. Yeah, I know, there’s no blanket answer for everything, but given the requirements I expect, I think a single architecture can handle 95% of what we’ll be doing and we can deviate/improve as necessary for the other 5%.

Ultimately I don’t yet know what this is going to look like but this is the direction I’m leaning in:


Jaxidian Update is proudly powered by WordPress and themed by Mukkamu