2/11/12

Custom Model Binder in MVC

During postbacks, the MVC framework automatically binds the incoming form data to the specified model. Let us say you have a postback handler as shown below:
 

  [HttpPost]
  public ActionResult MyPostbackHandler(Person person)
  {

  }
  
Assuming that your MVC view was bound to the Person object, when you postback to this method, the framework will automatically create an instance of Person and populate it with the values in the incoming form fields.

So if the Person object has a property called 'Location', the framework will look for a form field with the same name (remember - "name", not "id"). If it finds one, it will take the value from that form field and map it to the property. The same applies to all the other properties and their associated form fields. This is the default MVC model binding behavior.
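As an illustration (the property names here are just examples), default binding simply matches form field names to model property names:

```csharp
// A simple model that the default model binder can populate.
public class Person
{
    public string Name { get; set; }
    public string Location { get; set; }
}

// The corresponding form fields - the "name" attributes must match
// the property names for default binding to work:
//
//   <input type="text" name="Name" />
//   <input type="text" name="Location" />
```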

 But what if our postback handler was as follows:

[HttpPost]
public ActionResult MyPostbackHandler(Dictionary<string,string> customObject)
{

}
In this case, default model binding will not work, since the method expects a dictionary object. We will have to explicitly extract the values from the incoming form fields and populate the dictionary object.

To do this we will need to create a custom model binder class as shown below.
Let us assume that we are going to only map fields whose names are prefixed with "Custom:"


public class CustomModelBinder : IModelBinder
{
    public object BindModel(ControllerContext cContext, ModelBindingContext bContext)
    {
        const string Prefix = "Custom:";
        string[] allKeys = cContext.HttpContext.Request.Form.AllKeys;

        //1. get all the form fields with the specified prefix
        List<string> customKeys = allKeys.Where(x => x.StartsWith(Prefix)).ToList();

        var keyValuePairs = new Dictionary<string, string>();
        foreach (var key in customKeys)
        {
            ValueProviderResult objResult = bContext.ValueProvider.GetValue(key);
            //get the key's value
            string value = objResult.AttemptedValue;

            //*Refer NOTE 1
            bContext.ModelState.SetModelValue(key, objResult);

            //*Refer NOTE 2
            if (!CustomValidator(value))
                bContext.ModelState.AddModelError(key, "ErrorMessage");

            //add key and value to the dictionary
            keyValuePairs.Add(key, value);
        }
        return keyValuePairs;
    }

    //placeholder for whatever validation the fields require
    private bool CustomValidator(string value)
    {
        return !string.IsNullOrEmpty(value);
    }
}

NOTE 1
Before you add an error message for a key in ModelState, you must add its ValueProviderResult (which contains the attempted value) to ModelState by calling SetModelValue. If you do not, the key has an associated error message but no attempted value to redisplay in the front end. So if you use HtmlHelpers and the framework tries to render the attempted value when validation fails (by implicitly calling ModelState["key"].Value.AttemptedValue), it will throw a NullReferenceException since the Value is missing.

NOTE 2
Since we are not using default model binding, DataAnnotations will not be applied automatically, which means that if the fields require validation we have to call the validation code explicitly and, if there are any errors, add them to ModelState.
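That said, if you would rather reuse attribute-based rules than hand-roll a validator, DataAnnotations can still be invoked manually via System.ComponentModel.DataAnnotations.Validator. A sketch - the choice of attributes here is just an example, and this would run inside the binder's foreach loop using its key and value variables:

```csharp
// Inside the binder loop: validate the raw value by hand.
var results = new List<ValidationResult>();
var context = new ValidationContext(value, null, null) { MemberName = key };

bool isValid = Validator.TryValidateValue(
    value,
    context,
    results,
    new ValidationAttribute[] { new RequiredAttribute(), new StringLengthAttribute(50) });

// On failure, push the messages into ModelState, just like the explicit
// validation in the binder.
if (!isValid)
{
    foreach (var result in results)
        bContext.ModelState.AddModelError(key, result.ErrorMessage);
}
```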

Now that we have a custom model binder called CustomModelBinder, we need to tell the framework to use it instead of the default one. We do that by adding the following attribute to the postback handler:


[HttpPost]
public ActionResult MyPostbackHandler([ModelBinder(typeof(CustomModelBinder))]Dictionary<string,string> customObject)
{

}
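Alternatively, if every Dictionary<string, string> action parameter in the application should go through this binder, it can be registered once at startup instead of decorating each action - a sketch, assuming a standard Global.asax Application_Start:

```csharp
protected void Application_Start()
{
    // Route all Dictionary<string, string> action parameters
    // through the custom binder, application-wide.
    ModelBinders.Binders.Add(
        typeof(Dictionary<string, string>),
        new CustomModelBinder());
}
```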

1/22/12

Good Application Development Practices

  • The core components of the application must be identified early and a significant amount of time and effort must be devoted to their design, development and testing. A perfect example of a core component is the Subscription functionality.
  • If possible, the smartest people must be assigned to these tasks.
  • Logging must be enforced in all important areas of the application. This ensures that we have enough diagnostic information to troubleshoot potential issues in production. Logging is also necessary to troubleshoot issues where the application might not be blowing up, but is not performing as intended.
  • The first page the user lands on after login must not contain any time-consuming calls that can lead to page timeouts. Otherwise, a timeout on that page will prevent the user from getting into the application and accessing any other areas of it.
  • Avoid single points of failure. e.g. Do not use a single large object to share data across the whole application, such as a cache object that stores a huge amount of data that might not be required by all areas of the application. Serialization and deserialization of a large object is a costly process that can slow down the entire application and may even bring it to a halt.
  • There should be a single point of access to all common objects used across the application. This ensures that the application behaves consistently. e.g. When trying to extract the one and only element from a common list, one part of the code must not call SingleOrDefault while another calls FirstOrDefault.
  • Be extremely cautious if your application uses a memory cache to store objects. When returning values from the cache, always return a deep copy of the object and not a shallow copy. This ensures that the calling code gets its own copy of the cached data, including all reference types. More importantly, this ensures that the calling code does not tamper with the original object in cache.
  • If the application utilizes separate Read and Write databases, ensure that the database objects (e.g. stored procedures) in either case return consistent results.
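To illustrate the SingleOrDefault/FirstOrDefault point above: the two calls diverge exactly when the list unexpectedly holds more than one element, which is why mixing them makes the application behave inconsistently. A small standalone sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var settings = new List<string> { "primary", "stale-duplicate" };

        // FirstOrDefault silently returns the first match.
        Console.WriteLine(settings.FirstOrDefault()); // prints "primary"

        // SingleOrDefault throws because there is more than one element,
        // surfacing the data problem instead of hiding it.
        try
        {
            settings.SingleOrDefault();
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("SingleOrDefault threw - the list had more than one element");
        }
    }
}
```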

11/6/11

Deep copy & Shallow copy - Why basics matter

Recently, we ran into an issue in our application, a weird bug that was not easily reproducible. It goes like this:

When I try to look at data for User X, the application sometimes pulls up the details of X and sometimes it gets me the data for some other random User Y. There was no relation between X and Y. This was happening more under load and it was pretty difficult to reproduce.

After digging through the freaking code for hours and hours, we finally figured out the problem. It happened for two reasons:

First Reason:
The programmer who wrote the code was careless or lacked a clear understanding of programming fundamentals.

Second Reason:
A shallow copy of the data was being returned instead of a deep copy.

Explanation:
We were using the memory cache to cache common user metadata from the database so as to cut down the load on the database servers. We then created a class (a Singleton) to encapsulate access to the cache. This class has an instance of a cache object that talks to the actual memory cache. The getters in this instance returned shallow copies of the data, i.e. actual references to the objects in memory. It was the responsibility of the Singleton class to create a deep copy of the data before returning it to the calling code.

This critical piece of code was missing, so the Singleton was essentially returning a reference to the data in cache. The code that called the Singleton was then modifying the shallow copy of the metadata, unaware that it was actually modifying the values in cache. So when User Y requested data, since the class was a Singleton (and therefore using the same cache instance), it returned a modified version of the metadata. When this metadata was combined with other data specific to User Y, it resulted in weird values and the user ended up seeing some other data.

Solution:
The fix was very simple. In the Singleton class, we had to create a deep copy of the object returned from the cache, before passing it to the calling code, so any changes made to that object did not impact the original values in cache.
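To make the lesson concrete, here is a minimal sketch with hypothetical names - the Singleton's getter clones the cached object instead of handing out the reference:

```csharp
using System.Collections.Generic;

// Hypothetical cached metadata type.
class UserMetadata
{
    public Dictionary<string, string> Settings { get; set; }

    // Deep copy: clone the reference-type members too,
    // so callers cannot mutate the cached instance.
    public UserMetadata DeepCopy()
    {
        return new UserMetadata
        {
            Settings = new Dictionary<string, string>(Settings)
        };
    }
}

// Hypothetical singleton wrapper around the cache.
sealed class MetadataCache
{
    public static readonly MetadataCache Instance = new MetadataCache();

    private readonly UserMetadata cached = new UserMetadata
    {
        Settings = new Dictionary<string, string> { { "Theme", "Default" } }
    };

    private MetadataCache() { }

    public UserMetadata GetMetadata()
    {
        // Return a deep copy, not the cached reference.
        return cached.DeepCopy();
    }
}
```

Any changes the calling code makes to the returned copy now stay local; the original object in cache is untouched.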

8/12/11

throw and throw ex

There are some things in life which come back to bite you in the face (haha...) if you do not pay enough attention to them. In my case, one such thing was NOT knowing the difference between throw and throw ex when dealing with exceptions in C# (other languages probably behave similarly). I didn't know that the stack trace was going to be different depending on whether I did a throw or a throw ex, and as a result I always ended up looking in the wrong places whenever there were exceptions deep inside the call stack. Let me give you an example of what I am talking about:

Look at the code below. There are three classes: Level1, Level2 and Level3. Level1 calls the method HelloLevel2() in Level2, which in turn calls the method HelloLevel3() in Level3. The method in Level3 throws an exception. Pay attention to the topmost lines of the stack traces that follow.

public partial class Level1 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Level2 objlevel2 = new Level2();
        objlevel2.HelloLevel2();
    }
}

public class Level2
{
    public void HelloLevel2()
    {
        try
        {
            int x = 10;
            Level3 objlevel3 = new Level3();
            objlevel3.HelloLevel3();
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }
}

public class Level3
{
    public void HelloLevel3()
    {
        try
        {
            throw new Exception("Oops....Something got messed up here");
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }
}

If you look at the stack trace, this is how it looks. Even though the error occurred in HelloLevel3(), there is no indication of that. This is happening because the method HelloLevel2() is doing a throw ex inside its catch block instead of doing a throw:



[Exception: Oops....Something got messed up here]
   WebApplication1.Level2.HelloLevel2() in C:\VisualStudioProjects\WebApplication1\WebApplication1\Level2.cs:20
   WebApplication1._Default.Page_Load(Object sender, EventArgs e) in C:\VisualStudioProjects\WebApplication1\WebApplication1\Default.aspx.cs:29
   System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14
   System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35
   System.Web.UI.Control.OnLoad(EventArgs e) +91
   System.Web.UI.Control.LoadRecursive() +74
   System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2207

Now I have modified the catch block in HelloLevel2() to use throw instead, as shown below:

public class Level2
{
    public void HelloLevel2()
    {
        try
        {
            int x = 10;
            Level3 objlevel3 = new Level3();
            objlevel3.HelloLevel3();
        }
        catch (Exception ex)
        {
            throw;
        }
    }
}

If we run the application, the stack trace now looks as shown below:

[Exception: Oops....Something got messed up here]
   WebApplication1.Level3.HelloLevel3() in C:\VisualStudioProjects\WebApplication1\WebApplication1\Level3.cs:18
   WebApplication1.Level2.HelloLevel2() in C:\VisualStudioProjects\WebApplication1\WebApplication1\Level2.cs:20
   WebApplication1._Default.Page_Load(Object sender, EventArgs e) in C:\VisualStudioProjects\WebApplication1\WebApplication1\Default.aspx.cs:29
   System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14
   System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35
   System.Web.UI.Control.OnLoad(EventArgs e) +91
   System.Web.UI.Control.LoadRecursive() +74
   System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2207

As you can see, it now tells you that the exception actually occurred in HelloLevel3(). This information is extremely useful and can save hours of troubleshooting in the wrong direction.

Hope this helps! Peace.

3/9/11

Address Verification Software

Last month, I was working on an Order processing application in which the customers had to provide their shipping addresses as a part of the final step. My boss mentioned that prior to shipping, these addresses had to be verified to ensure they were valid deliverable addresses.

My initial thoughts were "Verify addresses? Really? How do you do that?" I actually found the idea pretty amusing. Ignorance is bliss, I guess.

As I started researching, I realized that inaccurate addresses pose a big problem for businesses, especially those that rely on mailings for their sales. Companies that did not have an address verification solution in place were losing millions of dollars in postage, lost merchandise, lost invoices and the additional time employees spent double-checking existing procedures to ensure accuracy. As a result, this opened up a big market for address verification software, and companies of all shapes and sizes offer it. My next task was to find the one that best suited us.

I pored over a number of "How To" guides on choosing address verification software and also spent a considerable amount of time reading the websites of the various companies suggested in those guides. Finally, I narrowed down my choices to the following five companies:
  • MelissaData
  • Intelligent Search Technology
  • QAS
  • USPS
  • Sartoris Software  
I called each of these companies and spoke to their sales people. I noted down the various options provided by each and the cost associated with each option. Most of the companies offered two options:
  • Hosting the software in-house
    • You get an installation CD that contains the API and all the addresses. You also get a list of updated addresses every two months.
  • Accessing the software via web services
    • You pass the address to their web service, which performs the validation and returns the results. This option was not offered by USPS.
    • This option is a lot cheaper than the first one.
In the end, we decided to go with the web services option from Intelligent Search Technology. They had a good API that satisfied our requirements, and the cost was very affordable. Their web services were secure (HTTPS-based) and used 128-bit encryption for the incoming and outgoing data. I created a proxy and tested their API quite extensively, and so far the address verification has been consistent and accurate.

Overall, it has been an amusing and enlightening experience, and I hope this information proves useful to people out there looking for address verification software.

3/8/11

Programmer Keyboards



[Images not preserved: three keyboard photos captioned "Regular Programmer", "Real Programmer" and "Really Real Programmer".]

12/11/10

Authentication Issues with WebHttpBinding in WCF 3.5

If you plan to expose your WCF services via plain HTTP requests instead of SOAP messages, you will need to use WebHttpBinding to configure your endpoints. When you do this and host your services in IIS 7.0, you might get the following error:

IIS Specified authentication schemes 'IntegratedWindowsAuthentication,Anonymous'
but the binding only supports specification of exactly one authentication scheme. Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous. Change the IIS settings so that only a single authentication scheme is used.

This happens if your services are inside a web application that is configured in IIS to use both Anonymous and Integrated Windows Authentication. To get around this issue, assuming all your services are in one folder under your application root, select that folder in IIS and, in the Authentication options, disable one of the two schemes depending on your need. This should fix the problem.