Getting to grips with the jQuery Validation plugin

Introduction

Recently I was assigned the task of adding some validation to a new HTML form. The form itself was nothing out of the ordinary: a standard user registration page like you see everywhere on the web today. The only requirement was that the validation be performed client side. Again, this is what you see on most modern web applications in 2012.

I’m a big believer in reusing existing components, but the trick is to pick components that are likely to be maintained in the future. Since jQuery was already in use on the project, a jQuery plugin seemed appropriate. jQuery plugins are in abundance on the Internet today, but if I’m going to use something I want it to be maintained and compatible with future versions of jQuery. I knew of the jQuery Validation plugin since I had used it on a previous project. However, I was a little reluctant to jump in with both feet since I had found it a little awkward to work with the last time around. Needless to say, this time it did the job for me, but when trying to get familiar with its usage, good documentation was hard to find.

Now, that doesn’t mean that good documentation doesn’t exist; it’s just that a lot of the examples you find on the Internet are either much too simple (and use the simple validation syntax), or they are cluttered with so much other unnecessary JavaScript that the essence of the validation gets lost. A good starting point for documentation is the official page from jQuery, but beware: there is so much more to this useful plugin than their examples show you. So let me share my experiences with you here and let you be the judge.

The source code

First things first. All the examples I will be showing you can be downloaded from GitHub, so just head on over and download them. There are only five of them and they are very easy to read. You can follow along from the first example and gradually build up a working HTML form.

Create validation for a simple form

When you create an HTML web form you are sometimes a little lenient with the standard attributes used on the input tags – at least I am. It took me a while to figure out, but the validation plugin won’t work unless the form’s input tags have a name attribute defined. Take a look at the example below:

<!DOCTYPE html>
<html>
    <head>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js" type="text/javascript"></script>
        <script src="http://ajax.aspnetcdn.com/ajax/jquery.validate/1.9/jquery.validate.min.js" type="text/javascript"></script>
        <script src="js/example1.js" type="text/javascript"></script>
    </head>
    <body>
        <form id="validation" action="">
            <fieldset>
                <label id="name-label" for="name">Name:</label>
                <input id="name" name="name" type="text" />
            </fieldset>
            <input id="submit" name="submit" type="submit" value="Validate" />
        </form>
    </body>
</html>
$(document).ready(function () {
    $("form").validate({
        debug: true,
        rules: {
            name: {
                required: true
            }
        }
    });
});

Output of example 1

This is as simple as it gets, but you will see even simpler examples than this that make use of the jQuery Validation plugin, most of which use the “easy” configuration syntax that adds attributes to the form input fields directly. In my opinion, that approach does not scale and clutters the markup. I prefer a clear, predictable JavaScript syntax, and I don’t mind writing a few extra lines of code to get it. The configuration syntax I’m showing you here is what you should strive for: it will let your form validation scale. In my experience forms always need to scale, so do it right from the start.
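
For reference, the “easy” syntax I’m avoiding embeds the rules directly in the markup, roughly like this (a sketch – the plugin can read rules from classes and attributes):

<input id="name" name="name" type="text" class="required" minlength="7" />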

This form is easy: it only has one input field. Notice the rules keyword in the validation configuration. Using each input field’s name attribute, you define the validation rules for that field. When the user hits the submit button the form is validated before a POST is attempted. The debug setting is good to have turned on; it writes any errors or warnings to the browser’s JavaScript console. It was this helpful setting that made me aware that I was missing the name attribute on my input form fields.

Let’s move on.

Adding more fields

The second example (below) adds a few more fields. You will also notice that I have started to use validation methods here. For instance, the name must now be at least 7 characters long. The address can be of any length, but it is still required. The zip code must consist of at least 5 digits, and only digits. You can read more about the validation methods here.

Notice the code for the messages keyword and compare it with the rules keyword. You will no doubt see that each validation method has its own custom validation message. I thought that was a pretty nice touch, and it makes user feedback easy. You don’t have to add your own custom validation messages, but if you don’t then you will get the jQuery Validation plugin’s own default error messages.

Go ahead and run the example in your browser. Use the TAB key to move from field to field. Notice the validation at work when you move from field to field.

<form id="validation" action="">
    <fieldset>
        <div>
            <label id="name-label" for="name">Name:</label>
            <input id="name" name="name" type="text" />
        </div>
        <div>
            <label id="address-label" for="address">Address:</label>
            <input id="address" name="address" type="text" />
        </div>
        <div>
            <label id="zipcode-label" for="zipcode">Zip code:</label>
            <input id="zipcode" name="zipcode" type="text" />
        </div>                
    </fieldset>            
    <input id="submit" name="submit" type="submit" value="Validate" />
</form>
$(document).ready(function () {
    $("form").validate({
        debug: true,
        rules: {
            name: {
                required: true,
                minlength: 7
            },
            address: {
                required: true
            },
            zipcode: {
                required: true,
                digits: true,
                minlength: 5
            }
        },
        messages: {
            name: {
                required: "Required name",
                minlength: "Your name is too short. Must be at least {0} characters."
            },
            address: {
                required: "Required address"
            },
            zipcode: {
                required: "Required zipcode",
                digits: "Only digits accepted",
                minlength: "A minimum of {0} digits are required."
            }
        }
    });
});

Output of example 2

If your input does not validate, the error message is displayed at once; you don’t need to submit the form. This is a configuration setting that can easily be changed, but I am using the defaults.
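
For instance, if you would rather have the validation wait until the form is submitted, the relevant events can be switched off in the configuration – a quick sketch:

$("form").validate({
    onfocusout: false, // don't validate when a field loses focus
    onkeyup: false,    // don't validate on every keystroke
    rules: { /* ... as before ... */ }
});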

The validation error message

You may have noticed that the error message that pops up is not present in the markup itself; the jQuery Validation plugin inserts it into the markup at runtime. In most cases you will want to do something with the presentation of these validation error messages. There are a number of things you can do, but by default the plugin encloses the error message in an HTML label tag with a class named error. It also adds an error class to the input field that failed to validate, and a valid class to inputs that are considered valid.
The easy thing to do is simply to style these classes in the CSS style sheet. If you run the next example you will see a simple way to style the default error messages. The code is identical to the previous example apart from the addition and inclusion of the CSS file. I have also played with the presentation of the error message text and added a valid input style. There are numerous possibilities here, including using something other than the HTML label tag; check out the errorElement setting in the plugin configuration documentation.
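
As a sketch of that last point, switching the error element from a label to, say, an em tag is a one-line configuration change (the examples below stick with the default label):

$("form").validate({
    errorElement: "em", // wrap error messages in <em> instead of <label>
    rules: { /* ... as before ... */ }
});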

input.valid {
    border: 2px solid green;
}

input.error {
    border: 2px solid red;
}

label.error {
    color: red;
    font-weight: bold;
}

Output of example 3

Giving your users options, but requiring at least one

Check boxes are the preferred choice for entering multiple choice input. Validation can be somewhat tricky in this scenario: you can’t simply set a check box validation rule to required, since no single check box is ever required. However, sometimes you have to ensure that the user has picked at least one. When reading the documentation to see how this could be done I came across something called grouping; however, I couldn’t get it to work for my scenario. I did manage to support this feature by adding a custom validation method and using a hidden form field. The hidden field acts as a placeholder to call the custom validation method. It doesn’t really do anything itself, but the validation uses the jQuery method connected to it to check whether a check box is ticked.

Take a look at the next example. Notice the custom validation method at the top, requireOne. Its purpose is simply to check whether one of the check boxes is ticked and then return true or false. However, there is a slight flaw in the plan: hidden fields are not validated by the validation plugin by default. The call to setDefaults() makes every form element a candidate for validation, including hidden fields.

Notice the call to $(“#hiddenOptionValidator”).valid() at the end of the script. I had to add this code to force the plugin to re-validate the hidden field once a check box is ticked or unticked. Without it, the validation error message for the check boxes will not be cleared until the submit button is pressed, which would most likely confuse the user.

<form id="validation" action="">
    <fieldset>
        <div>
            <label id="name-label" for="name">Name:</label>
            <input id="name" name="name" type="text" />
        </div>
        <div>
            <label id="address-label" for="address">Address:</label>
            <input id="address" name="address" type="text" />
        </div>
        <div>
            <label id="zipcode-label" for="zipcode">Zip code:</label>
            <input id="zipcode" name="zipcode" type="text" />
        </div>             
        <div>
            <div>
                <input id="option1" name="option1" type="checkbox" />
                <label for="option1">Option 1</label>
            </div>
            <div>
                <input id="option2" name="option2" type="checkbox" />
                <label for="option2">Option 2</label>
            </div>
            <div>                    
                <input id="option3" name="option3" type="checkbox" />
                <label for="option3">Option 3</label>                            
            </div>
            <div>
                <input id="hiddenOptionValidator" name="hiddenOptionValidator" type="hidden" />
            </div>                    
        </div>
    </fieldset>            
    <input id="submit" name="submit" type="submit" value="Validate" />
</form>
$.validator.addMethod("requireOne",
                        function (value, element) {
                            return $('input[type="checkbox"]:checked').length > 0;
                        },
                        "Missing required status - Must choose one");

$.validator.setDefaults({ ignore: [] });

$(document).ready(function () {
    $("form").validate({
        debug: true,
        rules: {
            name: {
                required: true,
                minlength: 7
            },
            address: {
                required: true
            },
            zipcode: {
                required: true,
                digits: true,
                minlength: 5
            },
            hiddenOptionValidator: {
                requireOne: true
            }
        },
        messages: {
            name: {
                required: "Required name",
                minlength: "Your name is too short. Must be at least {0} characters."
            },
            address: {
                required: "Required address"
            },
            zipcode: {
                required: "Required zipcode",
                digits: "Only digits accepted",
                minlength: "A minimum of {0} digits are required."
            },
            hiddenOptionValidator: {
                requireOne: "Please tick one checkbox"
            }
        }
    });

    $("input[type='checkbox']").click(function() {
        $("#hiddenOptionValidator").valid();
    });
});

Output of example 4

Validating input based on remote data

Validating input fields is pretty simple, but it can quickly escalate into something more advanced if you want to do an online lookup against data in a database.

Let’s say you want to register a new user in your system and the unique key is the user’s e-mail address. Two users shouldn’t share the same e-mail address, so if the same user returns to your registration form and tries to re-register, you have to detect this and take action before the form is submitted. Using the validation email method will handle a user typing a valid e-mail address. Once you are past that hurdle, you use the remote keyword to hook a backend service into the validation. The documentation is severely lacking here, but luckily it isn’t as hard as it sounds. You do need a backend service to get this working. The thing to remember is that the syntax for the remote keyword configuration is identical to jQuery’s AJAX requests, so if you have used AJAX in jQuery before this should feel familiar. Setting up a backend service is beyond the scope of this article, but take a look at the code example and I’m sure you’ll see what you have to do.

<div>
    <label id="email-label" for="email">E-mail:</label>
    <input id="email" name="email" type="text" />
</div>  
            // ... name and address rules as in example 4 ...
            zipcode: {
                required: true,
                digits: true,
                minlength: 5
            },
            email: {
                required: true,
                email: true,
                remote: {
                    type: "GET",
                    url: "/your-service-url/",
                    cache: false,
                    data: {
                        mail: function () {
                            return $("#mail").val();
                        }
                    },
                    dataType: "json",
                    dataFilter: function (data) {
                        if (data) {
                            var json = $.parseJSON(data);

                            if (json) {
                                return JSON.stringify(json.available);
                            }
                        }

                        return false;
                    }
                }
            },
            hiddenOptionValidator: {
                requireOne: true
            }
        },
        // ... messages as in example 4, plus corresponding entries for the email field
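
For reference, the dataFilter above expects the service to return a small JSON object with an available property, along the lines of this sketch – any shape will work as long as the dataFilter maps it to “true” or “false”:

{ "available": true }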

Output of example 5

Closing

So there you have it! Some simple examples of jQuery Validation plugin usage based on code from the real world. If you are interested in seeing the form on which this posting is based then take a look at this link.
Your feedback, good or bad, is appreciated as always so don’t be afraid to leave a comment. Thanks for reading.

Using Fiddler as a simple http development server

Fiddler’s AutoResponder

Lately I’ve been playing around with Fiddler (version 2.3.4.4). Fiddler is a free web debugging proxy for HTTP and HTTPS traffic. It sits between your web browser and a web server: you direct your browser to it (using a plugin for Firefox) and it lets you analyze and tamper with the HTTP requests and responses passing between server and client. Very cool and very useful.

Screenshot of Fiddler user interface

Although Fiddler is packed with lots of useful functionality for analyzing and tampering with HTTP traffic I also found a new use for it when working with jQuery and JSON.

Just recently I was looking into the jQuery UI Autocomplete plugin and wanted to play around with the functionality that uses JSON returned from the server. To simplify, my idea was to test the JavaScript client code without writing any server-side code. Of course I could have simulated something similar by creating the JSON in the JavaScript code, but that’s not what I wanted here, since it was important to get familiar with the code making the JSON request over the wire. The client code that I ended up with can easily be deployed to a proper web server without modification.

Fiddler’s AutoResponder functionality

Fiddler includes something called the AutoResponder. As the name gives away, you use it to automatically send a response to a calling browser client. The idea is to get Fiddler to intervene and return something useful when the browser makes a request for a particular URI. In my case I was aiming to make Fiddler return static JSON to the browser for calls to http://server.anywhere.com/json. I created the contents of the HTTP response I wanted returned to the browser in a file and saved it to disk, then set up the AutoResponder to return this file when the browser request was made. I made no modifications to my hosts file.

Screenshot of the configured AutoResponder in Fiddler

Very simple really, as you can see from the screenshot above: I have configured two URIs, one for the index.html file and the other for the JSON “service”. When one of these URIs is hit by the browser, the corresponding file from the file system is returned. The AutoResponder also lets me set latency, so I simulated a 3000 ms delay for the service before responding.

The tricky part was actually crafting a valid HTTP response file for the JSON service. In my case it looked as shown below and was saved as UTF-8. For this purpose I found Notepad++ very useful: when selecting the actual HTTP content text in the file, it tells you exactly how many bytes are needed as the value for the Content-Length header. In my case it was 87 bytes.
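
As an illustration, such a response file could look like the sketch below. The body and byte count here are made up for the example – your Content-Length value must match your own content exactly, which is where the byte count from Notepad++ comes in:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 43

["java","javascript","jquery","json","jsp"]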

A screenshot of calculating the number of bytes in the html response with Notepad++

Now, when opening the browser and making a call to http://server.anywhere.com/json, the AutoResponder steps in and returns the JSON to the browser. The code I used for invoking the call is shown below. Of course this code ignores what is typed in the input field and returns the same JSON regardless, but for my purpose that’s okay.

<html>
    <head>
        <title>jQuery Autocompletion with JSON call</title>
        
        <link rel="stylesheet" 
              type="text/css" 
              href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.5/themes/ui-lightness/jquery-ui.css" />

        <script type="text/javascript" 
                src="http://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js"></script>
              
        <script type="text/javascript" 
                src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.5/jquery-ui.min.js"></script>
        
        <script type="text/javascript">
            $(document).ready(function() {
                $( "#tags" ).autocomplete({
                    source: function(request, response) {                                               
                        var url = "json";
                        var param = "";
     
                        $.getJSON(url, param, function(data) {
                            response(data);
                        });
                    }
                });
            });
            </script>        
    </head>

    <body>
        <div class="demo">
            <div class="ui-widget">
                <label for="tags">Tags: </label>
                <input id="tags" />
            </div>
        </div>
    </body>
</html>

Same origin policy

When requesting data using JavaScript there are some security limitations that the browser enforces, one being the same origin policy. This policy restricts JavaScript from accessing JSON from a different domain than the one hosting the script making the call. So if I want to request JSON from http://server.anywhere.com/json, then the JavaScript which makes the JSON call needs to originate from the same domain, http://server.anywhere.com/.

Again, the AutoResponder to the rescue. I mapped my HTML page (the file which includes my JavaScript) to a known URI on the same domain as the simulated JSON service. When the browser makes the initial request to http://server.anywhere.com/index.html, Fiddler’s AutoResponder intercepts the request and returns a file from my local drive to the browser. The browser thinks it’s getting files from the web, but in fact Fiddler is just serving files from my local hard drive. When the time comes for the script to request the JSON from the simulated service at http://server.anywhere.com/json, the AutoResponder steps in and returns my static JSON file. Notice the host name in the screenshot below and the jQuery Autocomplete plugin in action.

Screenshot of the resulting page in the browser

Maybe a little clumsy to set up, but once done you can tweak everything in the files, with no need to deploy any code or install any servers. I thought it was a nice touch that the AutoResponder can simulate latency, so you can test any timeout functionality on the client side without having to add thread sleeps, which is usually what it takes during service development.

Conclusion

I am really happy with Fiddler. I have used Wireshark in the past, but for working with HTTP traffic it is a little too heavyweight. Fiddler has a lot of interesting features for web development and analysis work.

Jumping over LDAP hurdles

LDAP is nothing new, but until recently I had never had the need to do LDAP lookups from an application against a directory server. Most of today’s platforms are LDAP compliant and handle this themselves, abstracting developers and administrators away from the LDAP server internals. Under normal circumstances you just have to read the platform/framework configuration documentation, create a system account for the application to use to bind to the LDAP server, and the platform takes care of the rest itself… well, more or less :-).

Context

In the past I have read a lot about LDAP, and for some reason I had got the idea into my head that working with LDAP was difficult. In practice it turned out to be pretty simple once I got past a few hurdles. The ASP.NET application I was maintaining needed to switch from LDAP services supplied by Lotus Domino to Microsoft Active Directory. The application was already using Active Directory for user authentication, but for all the other services it was using the LDAP directory supplied by Lotus Domino. There were historical reasons why this was the case, but it didn’t make sense anymore. To make matters worse, the data in the two directory services was not synchronized, so users were complaining that their user data was displayed incorrectly in the application UI. Of course this was true, since it was being read from a mix of directory servers.

From my brief experience of working with this technology, there are three basic hurdles you need to jump over to get something working.

Binding

The first thing you need to do is bind to the LDAP directory, which is LDAP jargon for authenticating to the LDAP server. It sounds simple enough… However, when creating a URL I usually write the protocol specifier in lowercase characters. Doing this gave me a very cryptic error from the Microsoft .NET runtime, since I was using a protocol specifier that looked similar to ldap://server/query… Luckily Google was my friend on this occasion and I soon found out why the .NET runtime environment was complaining. The .NET System.DirectoryServices.DirectoryEntry class does not accept the protocol specifier in lowercase when connecting to Active Directory, so ldap:// has to be converted to LDAP://, which I still find a bit strange. Is this a platform specific bug? I could not find any information in any LDAP documentation that specifies that this is necessary…
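
To make the hurdle concrete, here is a minimal C# sketch of binding and searching. The server name, base DN, account and attribute names are hypothetical placeholders, not values from the real application:

using System.DirectoryServices;

public class PersonDirectory
{
    // Note the uppercase LDAP:// – the lowercase ldap:// specifier is
    // what triggered the cryptic .NET runtime error described above.
    public SearchResult FindPerson(string shortname)
    {
        DirectoryEntry root = new DirectoryEntry(
            "LDAP://ad.example.com/OU=Users,DC=example,DC=com", // hypothetical server and base DN
            "serviceAccount",                                   // hypothetical system account
            "secret");

        DirectorySearcher searcher = new DirectorySearcher(root);
        searcher.Filter = "(&(objectclass=person)(shortname=" + shortname + "*))";
        return searcher.FindOne();
    }
}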

Directory structure

The next thing to do is get familiar with your specific directory structure. I used the free Softerra LDAP browser to help me here. It lets you query the directory and display the results, and also traverse the tree in the GUI, which is useful for getting a feel for the LDAP tree. Your tree will probably be site specific, and you will need to know where to look to create a meaningful and efficient LDAP query. For lookup efficiency you should avoid starting your query at the directory’s root node. I was working for a large enterprise with thousands of users and groups, and starting a query at the wrong place would kill performance. In my case an auto-complete function in the UI was calling a backend lookup service, so it had to be fast.

LDAP query syntax

Building the LDAP query is where you will probably spend most of your time. The query syntax itself may seem difficult to read at first, but you get used to it fast. If you’ve ever worked with a scientific calculator then you can think of it working much the same way, only backwards :-). So to find an object that has objectclass=person and a shortname attribute starting with “joe” you could write (&(objectclass=person)(shortname=joe*)). The “&” works as a logical AND, and objectclass is the LDAP object type to return (there are other types of standard LDAP objects). Also note the wildcard. To expand our search example to find people with a shortname starting with “joe” or “kent”, the query could be written as (&(objectclass=person)(|(shortname=joe*)(shortname=kent*))). Notice the “|” symbol for the logical OR, and the parentheses limiting the scope of the AND and OR. It may be hard to read at first; this is where an LDAP query tool as mentioned above comes in handy for testing your query.

Maintaining and refactoring C++

Last week was my last day working with C++ (for a while). It’s been quite fun to revisit both the programming language and the source code which kicked off my development career over 12 years ago, and I have enjoyed the experience a lot. There are also a few things worth noting, so I put together a short list of things I found interesting during this short maintenance assignment.

Introducing a source control system

The code was originally written in 1999 and the executable files have been running in production ever since. Today the programs are owned by a group in the enterprise operations team; their focus is keeping the systems up and running, and they have little interest in the development process. There was no source control system available when I originally developed the code, so before making any changes to the existing source I was determined to correct that. A few months ago I taught myself Git and have never looked back since. Git is an excellent tool, and this was an appropriate opportunity to introduce it as a suitable source control system for this code base. Being the sole maintenance developer of these programs, I was happy just to add Git to aid my own productivity and give me the ability to safely abort a change should the need arise (and it did), but it will also pay off in the long run.

Updating to new IDE

Once a source control system was in place, the next step was to pick out the correct file candidates from each project to check in to the repository. I didn’t want every project file under source control, and this was a good occasion to get more familiar with some of the lesser known project files used by the IDE, and with how to configure Git to filter file names and paths. The projects were all originally developed using Microsoft Visual C++ version 6, so the first step was to get them updated to a newer C++ IDE, which just happened to be Visual Studio 2008. Once the project files I needed were identified, they were checked in to the repository and tagged as the base version. Safe and ready to go!
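
For what it’s worth, the Git filtering ended up as a simple .gitignore along these lines – a sketch, since the exact set of generated files depends on your IDE version:

# IDE- and user-generated files that don't belong in the repository
*.ncb      # VC++ IntelliSense database
*.suo      # solution user options
*.user     # per-user project settings
*.aps      # resource editor state
Debug/     # build output
Release/   # build output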

Automatically updating the projects from Visual C++ 6.0 projects to Visual Studio 2008 solutions went ahead problem free – the IDE handled it all. My job was then to rid myself of the unnecessary project files used only by the old IDE. The newer Visual Studio C++ compiler has grown a lot “smarter”, so a few syntax bugs had to be ironed out before the old code would build. There were also warnings due to calls to C++ standard library functions that are now deemed unsafe; in most cases a safer alternative was suggested.

Visual Studio 2008 is not unfamiliar to me, and those following this blog will know that I have used it for C# development, but never for C++. I was surprised how it lagged behind its C# cousin in functionality. Among other things there is little or no support for MSBuild, and the IDE has no refactoring functionality. The latter was a real letdown, since refactoring C++ proved to be notoriously more difficult than in any other modern language I have encountered. However, a few things made the update worth it: a better compiler and some IDE features like folders for structuring the source files. Visual Studio 2008 also has line numbering, which I’m pretty sure was missing in the Visual C++ 6 source code editor.

Documentation and getting familiar with the source code

By chance, I came across Doxygen when googling for free C++ tools. Since Doxygen can also be used for C#, Java and Python (untried, but according to the documentation), I thought it would be worth taking a closer look at this tool, and that proved to be a wise decision. Doxygen is brilliant! I have not used it for the other languages it supports, but I plan to on my next project. Its syntax may remind you of JavaDoc, but with the correct dependencies installed it can create useful illustrations of the code and its dependencies. When generating the documentation you can also configure it to include the source code itself. My output format was HTML, and I actually found it easier to browse the generated Doxygen documentation in my web browser than the source code in the IDE! Also useful is the fact that Doxygen can tell you which functions a particular function calls, and which functions call it. This proved useful when looking for things to refactor while attempting to simplify the code.
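
To illustrate the JavaDoc resemblance, a Doxygen comment for a (hypothetical) function looks like this:

/**
 * \brief Empties the bucket into the given drain.
 * \param drain Target drain; must not be NULL.
 * \return The number of litres emptied.
 */
int EmptyBucket(Drain* drain);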

Beautiful code

I had never really had the need for a beautifier before, but this time I wanted to make the source easier to read, replace tabs with spaces, and a few other things. I found a beautifier named UniversalIndentGUI which works with more than one programming language, which I think is a plus. I fed all the source files to it and out popped “beautifully formatted” C++ source code. Voilà!

Unit testing and mocking framework

In Java development, unit testing is part of everyday life and has been for quite some time. But where JUnit is the de facto standard for unit testing in Java, no single tool has similar widespread adoption for C++ development. There are many tools available, and I had a hard time picking the one with the most potential and the most active user community. In the end my choice fell on Google Test, which proved to be a useful tool. Along with Google Mock, a mocking framework for C++, it provides functionality for unit testing and creating mock objects.

I spent a lot of the project time trying to refactor the code to use these tools. Unfortunately the code was riddled with references to a third-party library, the Lotus Domino C++ API, which I could not get working with GTest. Therefore a lot of the work went into narrowing the usage of this library down to only certain parts of the code. Although this was always in my plans, I never got quite that far and ran out of time, which was a shame. Refactoring can be time-consuming…

Project improvements

I added a simple readme file and change log to each project and moved any comments referring to changes from the source code into the change log. I hope this will prove useful to future developers, giving them a head start and saving them from having to start with the source itself. With a simple attribute, Doxygen let me include the contents of each of these files in the generated documentation, which I thought was a nice touch.

Lasting impressions

As I said earlier, I will miss working with C++. That said, I feel I can better appreciate the syntax improvements of languages such as C#, Java and Python. I think these languages better facilitate the creation of object-oriented code without the syntax getting in the way, so to speak. C++ does make you work harder, but supplies more power in return (if you need it!). It is useful to keep in mind that trying to write C++ code in a Java or C# style may well leave you with unwanted memory leaks. In C++ you use the new and delete operators to create and destroy object instances on the heap, whereas Java and C# provide garbage collection to handle the deletion of objects that are no longer referenced, as you probably know. Take this example: a Java method for fetching a bucket of water could look something like this:

public Bucket createBucketOfWater() {
    Bucket b = new BucketImpl();
    b.fill();
    return b;
}

Inside the method a new instance of a Bucket class is created and initialised. The memory used for this object will be reclaimed by garbage collection once the myBucket reference to the object is invalidated. The caller does not need to think about this – it happens automatically.

// someObjectInstance creates and initialises the Bucket class, the garbage collector handles the memory when the myBucket reference goes out of scope
Bucket myBucket = someObjectInstance.createBucketOfWater();
myBucket.DoSomething();

Doing something similar in C++ may not be a good idea. You may end up with something like:

// create a new Bucket of water, return a pointer to the memory on the heap
Bucket* CreateBucketOfWater() {
    Bucket* b = new BucketImpl();
    b->FillWithWater();
    return b;
}

This code works, but it burdens the caller with deleting the memory used for the Bucket when done. If, for some reason, the caller forgets, the memory is lost once the pointer variable is invalidated. We then have a memory leak.

// create a new Bucket of water, return a pointer to the memory on the heap
Bucket* b = CreateBucketOfWater();
b->DoSomething();

// must remember to delete memory on heap
delete b;

A useful rule of thumb is that objects should be created and deleted by the same part of the code, not spread around. In other words, a function or method should not create an object on the heap and then leave it up to the caller to tidy up when done. So how do we avoid this scenario? A more suitable C++ approach could be something like this:

// function body not relevant
void FillBucketWithWater(Bucket*);
// create a Bucket instance and pass an object pointer to the method, remember to delete the memory when done
Bucket* b = new BucketImpl();
FillBucketWithWater(b);
b->DoSomething();
delete b;

So to conclude, where in Java you would ask the method for a bucket of water, in C++ you would supply your own bucket and then use another method to fill it with water! When you are done with the bucket you are responsible for deleting it since you created it.

However, although this is a clear division of responsibilities, it does make me wonder how to properly create a factory method without burdening the caller with deleting the heap objects that the factory creates.
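
One answer available at the time – a sketch, not something from the code in question – is to return a smart pointer such as the pre-C++11 std::auto_ptr, so that ownership is handed over explicitly and the heap object is deleted automatically when the owning pointer goes out of scope:

#include <memory>

// The factory still creates the Bucket on the heap, but ownership travels
// with the returned auto_ptr, so the caller never calls delete.
std::auto_ptr<Bucket> CreateBucketOfWater() {
    std::auto_ptr<Bucket> b(new BucketImpl());
    b->FillWithWater();
    return b;
}

// Caller: the Bucket is deleted when 'bucket' goes out of scope.
std::auto_ptr<Bucket> bucket = CreateBucketOfWater();
bucket->DoSomething();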

Google Test (GTest) setup with Microsoft Visual Studio for C++ unit testing

Introduction

[Links now include solution files for both 2008 and 2010 versions of Visual Studio]

I’m going to be nice to you today and save you some time. What I am about to describe took me the better part of two (half) workdays… with a few hours sleep in between. Setting up Google Test with Microsoft Visual Studio can be a bit tricky, but if you really want unit testing for C++ in Visual Studio (and I hope you do) then this is for you. Most of the challenges can be overcome by configuring the compiler and linker correctly.

It’s worth mentioning that before settling on Google Test, or GTest as it’s also known, I did take a look at a few of the other unit test frameworks for C++, but things didn’t seem any easier anywhere else. GTest doesn’t seem like a bad choice: it’s open source, used to test the Google Chromium projects (Chrome) and, more importantly, seems to be actively maintained.

There is a fair bit of documentation available on the project site, but sometimes you just want to get a feel for something before committing yourself to it. This posting should help you do that, but if you want more, the project has good documentation. In my quest for documentation I noticed several guides, a FAQ, a Wiki and a mailing list. In other words, there are good sources of information available if you choose to dive in.

Disclaimer

I suppose a disclaimer is in order for those wondering:

  • I only work with C++ in passing. It’s not something I do much of these days and my working knowledge of Microsoft Visual Studio for C++ is limited.
  • I used Visual Studio 2008 Professional Edition for this work. I also updated the project using Microsoft Visual Studio 2010 Professional Edition (see links below). Maybe the Express versions will work too?
  • I am not affiliated with Google in any way. The reason I am looking in to this particular framework is because I am currently maintaining some older C++ programs that I wrote 10 years ago. I want to introduce unit testing for them before making changes and GTest seems a good choice.

So, in this posting I want to share with you how I configured Visual Studio 2008 to work with the GTest framework. After spending a fair bit of time getting this to work, I want to write it all down while it’s still fresh in my mind.

The GTest binaries for unit testing

First things first: you need to download the Google Test framework. I use version 1.5.0, which seems to be the current stable release. I unpacked the GTest project to a folder named C:\Source\GTest-1.5.0\ which I then refer to from other projects in need of the unit testing library. I call this directory %GTest% in the text that follows. Be aware that I think I may have read that Google recommends adding the GTest project to your own solution and building it together with your own code, but the above is how I do it for this sample project.

If you are coming from the Java world then this may be where you hit your first snag. It may be a bit different from what you have grown accustomed to with Eclipse, JUnit and all, but you will have to build the unit test binaries from the downloaded C++ source code. Yes, you will actually have to compile and build the GTest libraries yourself, but before you lose heart, let me add that the download comes with project files for many popular C++ IDEs, Visual Studio being one of them (an older version). In the msvc/ folder of the download you will find two Visual Studio solution files which VS 2008 will ask you to upgrade when you open them.

I had no trouble building the binaries; in fact, I can’t remember actually having to configure anything, so don’t be put off by this step. However, there is an issue here: there are two solution files and you must choose the correct one for your project. The solution file with the -md suffix uses the DLL versions of the Microsoft runtime libraries, while the solution with no suffix uses the static versions. The important thing to note is that the C++ Code Generation setting for the Debug and Release configurations in your own project must exactly match the setting used when building GTest. If you experience linker problems somewhere down the line in your project then this might be the cause; most of the build trouble I have experienced has been due to this setting being incorrect. The project’s README file does a better job of explaining all this, so be sure to have a look. For my code I am using the static versions of the runtime libraries, so that’s /MT for the Release configuration and /MTd for the Debug configuration, and I use the GTest solution without the -md suffix.

In any case, if you plan on using both Debug and Release configurations in your own project then you should remember to also build the GTest solution for both Debug and Release configurations. Among other things, the Release configuration will build two files, gtest.lib and gtest_main.lib, and similarly, the Debug configuration will also build two files, namely gtestd.lib and gtest_maind.lib (notice the extra -d- character in the file names).

Project setup

Now that you have successfully generated the libraries for unit testing, we need to incorporate them into a C++ project. The GTest documentation shows some simple examples of how to write unit tests using the framework, but it doesn’t say much about how to set up a good project structure for unit testing. I guess that’s to be expected, since project structure is very environment specific.

My preference is to keep the unit tests out of the resulting binary (EXE file), and I don’t want to have to restructure my existing project (too much) to add unit testing. I simply want to add unit tests to my project while keeping the existing project code unaware that it is being unit tested. So, my solution is based on what I’ve grown accustomed to in Java development with Eclipse, and C# development with Visual Studio. Maybe this is also the norm in other C++ projects? The idea is to split the solution into three separate projects:

  1. One project containing the base code which will function as a library for the others
  2. One project used for running main(), the application entry point, which makes calls to functionality in the library
  3. One project for running the unit tests, which also makes calls to the same library functionality. With GTest the main() entry point is optional if you link against gtest_main.lib – see the sketch after this list.
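
To give a feel for what goes in the test project, here is a minimal sketch of a test file. BaseCode.h and the Add function are hypothetical stand-ins for whatever your library exposes:

#include <gtest/gtest.h>
#include "BaseCode.h" // hypothetical header from the BaseCode library

// gtest_main.lib supplies main(), so the test file only declares tests.
TEST(BaseCodeTest, AddReturnsSum) {
    EXPECT_EQ(5, Add(2, 3));
    EXPECT_EQ(0, Add(-1, 1));
}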

The screenshot below shows what this may look like in Visual Studio:

Solution view in Visual Studio 2008

This setup requires the BaseCode project to be built as a library (LIB) file. The two other projects build as EXE files that both depend on the LIB file, so each must have its project dependencies set to depend on the BaseCode project. When attempting to build the solution using this project structure, these are the things to watch for:

  • The BaseCode project must be configured to build as a library. For both configurations, Release and Debug, you must set the project’s Configuration Type to Static Library (.lib). Its Code Generation must be set to Multi-threaded (/MT) for the Release configuration and Multi-threaded Debug (/MTd) for the Debug configuration (identical to the GTest project explained earlier).
  • The RunBaseCode project is used to create the EXE for the resulting application, so its Configuration Type is set to Application (.exe), which is the default. It depends on the BaseCode library, so its project dependency must be set to the BaseCode project. The Code Generation setting should also be set as explained above.
  • The TestBaseCode project is also used to create an EXE, but only for running the test cases – it’s not something you ship. It too depends on the BaseCode library, so its project dependency must be set to the BaseCode project. As before, its Code Generation setting should be set as explained above.
  • Since the TestBaseCode project needs to run the unit tests it must refer to the GTest libraries. Of the three projects, it is the only project which needs this. Therefore, for both Release and Debug configurations, set the Additional Include Directory setting to refer to the %GTest%\include directory.
  • The TestBaseCode Release configuration’s Additional Library Directories setting should refer to the %GTest%\msvc\gtest\Release directory, and its Additional Dependencies setting should list the libraries gtest.lib and gtest_main.lib. Similarly, for the Debug configuration the Additional Library Directories setting should refer to the %GTest%\msvc\gtest\Debug directory and the Additional Dependencies should list gtestd.lib and gtest_maind.lib (notice the extra -d- character in the file names). Of course, if you have set up your GTest libraries somewhere else then you have to refer to those directories instead.
  • The Command Line setting for TestBaseCode‘s Post-Build Event can be set to “$(TargetDir)$(TargetFileName)” for both Release and Debug configurations. This will run the unit tests automatically and display the results in the Build output window after building the project.

If you are successful, the build output should look something like this:

Screenshot of the build log

You will notice that the unit tests are run automatically and results displayed. The build creates two EXE files as expected, one for the application and one for the unit tests:

Screenshot of running the code and tests

If you get this far you might also want to check out the gtest-gbar project, which is a graphical UI for the unit tests. It’s a simple, one-file .NET application. By pointing it at the unit test EXE file you can get output like this:

Screenshot of gtest-gbar

Closing

To simplify things, I’m linking to the Visual Studio 2008 solution I used to create the example so you can have a look at my solution settings. If you are using Visual Studio 2010 then use this solution instead. Have a look, build it and see if it works for you! You will also need to download, build and refer to the GTest framework LIB files and include folder as described above. Tell me how you get on and which Visual Studio version you were using (2008, 2010, Express etc.). Your feedback would be greatly appreciated!

Now that I’ve got this set up, the next step for me is to incorporate GTest unit testing into my current C++ projects. There’s a lot to learn…

The Apple iPad will rock your world

On a recent trip to London I purchased an Apple iPad. I have been using an iPhone for years so I kind of knew what to expect, but the iPad is really something else. It is such a useful tool and has changed my life for the better in many ways in only a short space of time.

The MobileRSS app and a Google Reader account enable me to follow my RSS news feeds more easily and frequently. The Read It Later app lets me follow up on links that I have previously marked for later viewing when I come across them on my PC using the Firefox plugin. I have been using Read It Later on PC/iPhone for a while now, but until now I have never been able to find the time to read. However, the thing that has impressed me most about the iPad is its usefulness as an e-book reader.

I started off reading PDF books using the iBooks app – which was OK, but that was before I discovered the ePub format, which makes the whole digital reading experience a lot more enjoyable. iBooks offers more functionality with ePub, including backlight and font adjustment, animated paging, bookmarking (also available for PDFs), text highlighting, notes, a dictionary and more. Now that I have discovered that both O’Reilly and Manning offer ePub versions of a lot of their books, I really don’t see the need to buy paper books anymore. Yes, it really is that good. I am going fully digital from now on, hopefully saving a few trees in the process – and maybe also a bit of money and some shelf space :-)

O’Reilly also has a pretty good offer in place to buy digital formats at a reduced price if you already own the paper version of the book and have registered it online at their website. I have been using it to “upgrade” some of my most frequently used O’Reilly books to ePub and have them easily accessible on my iPad… or iPhone, should I become really desperate. They also have an ebook deal of the day which I follow.

(99% written and posted using the WordPress application on my iPad)

Useful Visual Studio keyboard shortcut

I found myself doing a lot of refactoring today, working through some terrible code with multiple ifs, elses and everything else bar the kitchen sink!! Maybe there are tools that can help with this kind of thing, but a simple keystroke came in very handy: pressing Ctrl+Å (Norwegian keyboard) jumps to the matching opening/closing brace of a code block (check this link for other keyboard layouts). Some of the methods I was working with are hundreds of lines long, so this saved me a fair bit of scrolling…