WPF changed the rules for UI best practice, but it didn't update the book.
With all the new and improved DataBinding and Command features, a new set of patterns has emerged, including a recommended pattern called DataModel-View-ViewModel (DM-V-VM).
At a conceptual level this pattern looks fantastic - but there are issues... namely, it doesn't work for non-trivial scenarios.
Rather than a boring high-level rant, I'll use boring examples to explain why:
Cannot support events
The recommended approach is to use DataTemplates and Commands - but DataTemplates cannot handle events, and the command system is not extensive enough (e.g. it doesn't support mouse clicks).
Supporting events requires using code behind, which means using at least a UserControl base class.
No Keyboard support for Standard Commands
Dan Crevier and John Gossman don't mention the standard Commands, instead opting to create their own - not because it's better practice, but because it's easier to demonstrate.
Their suggested approach uses CommandParameter bindings on the buttons, which only works with OnClick (i.e. when the shortcut key is pressed, "null" is passed through).
Since the CommandParameter on an InputBindings KeyBinding does not support DataBinding, supporting the keyboard (and still using commands) means we can't use CommandParameters.
UI support breaks abstraction
Say that we want to delete all the selected items from a list; this means storing a list of the selected items in the ViewModel (since we're not using CommandParameters), which we then bind from our View.
But this is purely UI logic rather than business logic!
Complicated scenarios might need multiple such properties - and since the relationship between View and ViewModel is no longer clearly defined, it will introduce bugs.
The abstraction is now broken, so let's just move our ViewModel into the code behind.
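A rough sketch of mine (with made-up names) of what that ends up looking like - the ViewModel grows a SelectedItems collection whose only job is to mirror the View's selection state:

using System.Collections.ObjectModel;
using System.Linq;

public class ItemViewModel
{
    public string Name { get; set; }
}

public class ItemListViewModel
{
    public ObservableCollection<ItemViewModel> Items { get; private set; }

    // Pure UI state - exists only so the View has something to push its selection into.
    public ObservableCollection<ItemViewModel> SelectedItems { get; private set; }

    public ItemListViewModel()
    {
        Items = new ObservableCollection<ItemViewModel>();
        SelectedItems = new ObservableCollection<ItemViewModel>();
    }

    public void DeleteSelected()
    {
        foreach (var item in SelectedItems.ToList())
            Items.Remove(item);
    }
}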
Unit testing the UI logic is no longer easy
No kidding, that's why we initially moved the code into the ViewModel!
All of the real logic should be implemented in the data-bound objects or by a class separate from the UI. These classes are unit testable, but the UI integration is not.
Aren't we back to where we started?
Not quite: WPF's new style allows us to develop a cleaner system than in the days of WinForms and MFC - and while there's no published best practice for it yet, that doesn't mean one doesn't exist.
UI programming is hard, especially since it looks so easy.
Tuesday, December 4, 2007
Monday, December 3, 2007
Avoiding MSDTC with Linq to Sql
Now MSDTC is not evil; it's just overkill when it isn't needed. It slows performance, increases complexity and requires client-side configuration... and it can activate when you least expect it.
Take the following code:
void CreateUsers()
{
    using (var scope = new TransactionScope(TransactionScopeOption.Required))
    {
        CreateUser("Fred");
        CreateUser("Joe");
        scope.Complete();
    }
}

void CreateUser(string name)
{
    using (var context = new UserDataContext())
    {
        context.Users.InsertOnSubmit(new User { name = name });
        context.SubmitChanges();
    }
}
This code uses the recommended (and very helpful) TransactionScope class - and unfortunately invokes MSDTC.
Since we're using separate instances of UserDataContext, this creates separate database connections - and causes TransactionScope to "promote" the transaction to require MSDTC.
Avoiding this is as simple as sharing the database connection - after all, it is the same database - but passing database connections to sub methods gets ugly really quickly.
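For comparison, here's a sketch of that manual approach (connectionString is assumed to be defined elsewhere): one connection is opened inside the scope and threaded through every call, so only a single connection ever enlists in the transaction.

void CreateUsers()
{
    using (var connection = new SqlConnection(connectionString))
    using (var scope = new TransactionScope(TransactionScopeOption.Required))
    {
        connection.Open(); // opened inside the scope, so it enlists in the ambient transaction
        CreateUser(connection, "Fred");
        CreateUser(connection, "Joe");
        scope.Complete();
    }
}

void CreateUser(SqlConnection connection, string name)
{
    // Every DataContext now shares one connection - but every method signature has to carry it.
    using (var context = new UserDataContext(connection))
    {
        context.Users.InsertOnSubmit(new User { name = name });
        context.SubmitChanges();
    }
}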
My Solution
To solve this problem in a cleaner manner, I wrote a Transaction Resource Manager - which works behind the scenes and makes the above code work without change.
It simply shares database connections across DataContext instances (only within a transaction). It's thread-safe and easy to integrate - simply change the DataContext base class in the dbml designer.
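The downloadable code does this properly with a transaction resource manager; purely as an illustration of the underlying idea (the class and method names here are mine, not from the real implementation), a shared-connection base class could look something like this:

using System.Collections.Generic;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Data.SqlClient;
using System.Transactions;

public class SharedConnectionDataContext : DataContext
{
    // One open connection per ambient transaction, keyed by the transaction's local identifier.
    static readonly Dictionary<string, SqlConnection> shared = new Dictionary<string, SqlConnection>();
    static readonly object sync = new object();

    public SharedConnectionDataContext(string connection, MappingSource mappingSource)
        : base(GetConnection(connection), mappingSource)
    {
    }

    static SqlConnection GetConnection(string connectionString)
    {
        Transaction transaction = Transaction.Current;
        if (transaction == null)
            return new SqlConnection(connectionString); // no ambient transaction - behave as normal

        lock (sync)
        {
            string key = transaction.TransactionInformation.LocalIdentifier;
            SqlConnection connection;
            if (shared.TryGetValue(key, out connection) == false)
            {
                connection = new SqlConnection(connectionString);
                connection.Open(); // opening inside the transaction enlists it once
                shared.Add(key, connection);

                // Clean up when the transaction finishes.
                transaction.TransactionCompleted += delegate
                {
                    lock (sync) { shared.Remove(key); }
                    connection.Dispose();
                };
            }
            return connection;
        }
    }
}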

Grab the code and check it out. There's an example program and a whole slew of unit tests (around 275 - testing the various permutations of nested TransactionScopes).
Kudos to Nick Guerrea's Weak Dictionary - which I use to weakly track the transactions and share the connections.
Same Solution for other Scenarios
There are other scenarios where this problem arises:
Say you've got a database with many tables (or a plug-in system that shares a database) - a modular approach uses multiple DataContext classes.
Our problem occurs when you need to modify tables from different DataContexts within a single transaction.
Thankfully, the root cause is still the same and our solution solves these cases as well.
This has been a great learning experience for me, and I must say that Transaction Resource Managers are very interesting (they might even make a good alternate ScopeGuard implementation?).
EDIT: See updated article for extra goodness.
Wednesday, November 28, 2007
Quick Tip - Read Only Automatic Properties
This is a simple little trick, which I only discovered a few weeks ago.
In the past providing public "read only" access to a member would look something like:
private string connection;

public string Connection
{
    get { return connection; }
}
With Automatic Properties in C# 3.0 the code becomes:
public string Connection { get; private set; }
The neat trick is the access modifier (private) next to the set. This allows you to modify the member internally, while publicly providing "read only" access.
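A quick sketch of what this buys you (the class name is just for illustration):

public class ConnectionHolder
{
    public string Connection { get; private set; }

    public void Reconnect(string newConnection)
    {
        Connection = newConnection;   // fine - we're inside the class
    }
}

// Elsewhere:
// holder.Connection = "something";  // compile error - the set accessor is private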
Monday, November 26, 2007
VS2008 RTM Update
I have updated all the downloadable code to compile with the RTM release of VS2008.
It's not a big change, so I'll use this blog entry to update my Xceed <-> Linq code. I've fixed some minor bugs, improved performance a lot and added better linq syntax support.
So if you're using the code, it'd be a good idea to update.
Monday, November 19, 2007
IEnumerable Joy - Batch / Split
I was inspired by my good friend Nick's latest blog entry Return of the Batch Enumerable.
Here he demonstrates a neat idea of using extension methods and yield to split an IEnumerable into arbitrary batches.
My version is slightly less efficient, but treats the batches as separate lazy collections - so you can use it on multiple threads etc.
I've also added a Split extension, which I'll describe - but first, the code!
static class InBatchesExtension
{
    // Splits the source into consecutive batches of batchSize items (the last batch may be smaller).
    public static IEnumerable<IEnumerable<T>> InBatches<T>(this IEnumerable<T> source, int batchSize)
    {
        for (IEnumerable<T> s = source; s.Any(); s = s.Skip(batchSize))
            yield return s.Take(batchSize);
    }

    // Deals the source out round-robin across the requested number of lazy collections.
    public static IEnumerable<IEnumerable<T>> Split<T>(this IEnumerable<T> source, int items)
    {
        for (int i = 0; i < items; ++i)
            yield return source.Where((x, index) => (index % items) == i);
    }
}
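To see the two in action, here's a small usage sketch of mine (assuming a console program with the extension class above in scope):

var numbers = Enumerable.Range(1, 7);

foreach (var batch in numbers.InBatches(3))
    Console.WriteLine("Batch: " + string.Join(" ", batch.Select(n => n.ToString()).ToArray()));

foreach (var batch in numbers.Split(3))
    Console.WriteLine("Batch: " + string.Join(" ", batch.Select(n => n.ToString()).ToArray()));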
InBatches returns the same output as Nick's:
Batch: 1 2 3
Batch: 4 5 6
Batch: 7
And the Split batches the output across the collections:
Batch: 1 4 7
Batch: 2 5
Batch: 3 6
So this gives you two different batch-splitting methods for a generic enumerable.
I love this stuff.
Tuesday, November 13, 2007
Wield the Yield
Recently I've noticed an IO pattern emerging that combines yield and linq: a powerful and efficient mechanism for reading data.
Check the following code: we safely open a file (with the using statement) and yield the results with a loop.
IEnumerable<string> ReadFile(string filename)
{
    using (StreamReader reader = new StreamReader(File.OpenRead(filename)))
        while (reader.EndOfStream == false)
            yield return reader.ReadLine();
}
This simple (and beautiful) code allows us to treat a file like an array of strings - which becomes very powerful when coupled with linq.
// Contrived example - How many file lines refer to "fred"?
var result = ReadFile("file.txt").Where(x => x.Contains("fred")).Count();
I'm sure this same pattern could also be applied to other data retrieval methods, e.g. algorithm calculation (prime numbers, Fibonacci, etc.), network communication (message queues) and even as an abstraction over asynchronous data methods.
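For instance, a quick sketch of mine (not from the original post) applying the same lazy pattern to a calculation instead of IO:

IEnumerable<long> Fibonacci()
{
    long previous = 0, current = 1;
    while (true)
    {
        yield return current;
        long next = previous + current;
        previous = current;
        current = next;
    }
}

// Consumers only pay for what they ask for:
var firstFiveEven = Fibonacci().Where(x => x % 2 == 0).Take(5).ToList();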
Yield provides a lazy enumeration (doesn't execute unless needed) so the solution is generic and efficient - I like it.
Monday, November 5, 2007
n! - let me count the ways
The other day I had an interesting discrete maths assignment question involving permutations - so I decided to codify it.
"How many ways can six people (Josh, Toby, CJ, Sam, Leo & Donna) sit, if Josh must sit to the left of Leo and to the right of Sam."
Now the astute reader will immediately recognize the answer to be 2 * (4! + 3!3!) = 120; but to be sure I wanted to double check this via C#. (I'm keen to redo this in F# too)
My solution was a quick hack, and is by no means "the best way" - but using linq it sure looks pretty.
This simple example shows why I'm a such fan of linq in C# - the pre-linq code for this certainly wouldn't be as eloquent.
Oh, and it also shows why I'm a fan of maths... it's much more efficient to just to calculate the factorials.
Update: Formatting code in a blog sure is a pain!
"How many ways can six people (Josh, Toby, CJ, Sam, Leo & Donna) sit, if Josh must sit to the left of Leo and to the right of Sam."
Now the astute reader will immediately recognize the answer to be 2 * (4! + 3!3!) = 120, but to be sure I wanted to double-check this via C#. (I'm keen to redo this in F# too.)
My solution was a quick hack, and is by no means "the best way" - but using linq it sure looks pretty.
class Program
{
    static IEnumerable<List<string>> Permutate(IEnumerable<string> people,
        IEnumerable<string> current)
    {
        // Get remaining people and recurse
        foreach (var person in people.Where(x => current.Contains(x) == false))
            foreach (var result in Permutate(people, current.Concat(new string[] { person })))
                yield return result;

        // Once everyone is seated, yield the complete permutation
        if (current.Count() == people.Count())
            yield return current.ToList();
    }

    static void Main(string[] args)
    {
        string[] people = new string[] { "Josh", "Toby", "CJ", "Sam", "Leo", "Donna" };
        var result = Permutate(people, new string[] { }).ToList();

        // Josh sits to the left of Leo
        var Josh_Leo = result.Where(r =>
            r.FindIndex(x => x == "Josh") < r.FindIndex(x => x == "Leo"));

        // Josh sits to the right of Sam
        var Sam_Josh = Josh_Leo.Where(r =>
            r.FindIndex(x => x == "Sam") < r.FindIndex(x => x == "Josh"));

        var count = Sam_Josh.Count();
        Console.WriteLine(count);   // prints 120
    }
}
This simple example shows why I'm such a fan of linq in C# - the pre-linq code for this certainly wouldn't be as elegant.
Oh, and it also shows why I'm a fan of maths... it's much more efficient to just calculate the factorials.
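For completeness, the factorial check is tiny - a sketch of mine rather than anything from the assignment: with three of the six people forced into a single relative order, only 1/3! of the 6! permutations survive.

static long Factorial(int n)
{
    long result = 1;
    for (int i = 2; i <= n; ++i)
        result *= i;
    return result;
}

// Factorial(6) / Factorial(3) == 720 / 6 == 120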
Update: Formatting code in a blog sure is a pain!