Mocking static methods

I’m working on a pretty old application with no unit tests whatsoever, and I’m trying to write some as I add features and fix bugs.

After finding out about MS Fakes, then discovering that it’s only included in VS2012+ Premium/Ultimate and is very hard to include in a continuous integration chain, I was pretty stumped until I found this, which works incredibly well.

Another option is to transform the static method into a static Func or Action. For instance.

Original code:
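Something along these lines (a minimal sketch; the MathHelper class name is illustrative, only the Add method is referenced in this post):

```csharp
public static class MathHelper
{
    public static int Add(int x, int y)
    {
        return x + y;
    }
}
```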

You want to “mock” the Add method, but you can’t. Change the above code to this:
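(Again a sketch, with the same illustrative names; Add is now a reassignable delegate.)

```csharp
using System;

public static class MathHelper
{
    // Same signature from the caller's point of view, but reassignable.
    public static Func<int, int, int> Add = (x, y) => x + y;
}
```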

Existing client code doesn’t have to change (apart from a recompile); the source stays the same.

Now, from the unit-test, to change the behavior of the method, just reassign an in-line function to it:
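(A sketch, reusing the illustrative class above.)

```csharp
// In the unit test: replace the behavior with whatever the test needs.
MathHelper.Add = (x, y) => 42; // hard-coded value for this test
```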

Put whatever logic you want in the method, or just return some hard-coded value, depending on what you’re trying to do.

This may not necessarily be something you can do each time, but in practice, I found this technique works just fine.

C, C++ and C++/CLI cheat sheet

No explanation of the basic concepts, just a memory help when you don’t use C/C++ frequently, or if you’re a beginner with some basics.

First question: why pointers? Answers: because function arguments are passed by value in C and C++. Because arrays and strings are crazy in C.

How to read pointers

Here is a handy way of remembering how pointer syntax works.

In variable declaration

Start at the name of the declaration, inside the innermost parentheses; read to the right until you hit “)”, then read to the left, and repeat, moving outwards:

  • Read the name of the declaration
  • *  is “a pointer to”
  • []  is “an array of”
  • ()  is “a function returning”
  • Finally, read the type

int *p[]; reads “p is an array of pointers to int”.
int *(*foo())();  reads “foo is a function returning a pointer to a function returning a pointer to int”.

In variable usage

Variable usage reads from left to right.

  • Determine “get” (read variable) or “set” (write variable)
  • *  is “the value pointed to by”
  • &  is “the address of”
  • [i]  is “the element at position i”
  • Read the variable name

*p = 5;  reads “set the value pointed to by p”.
x = &i;  reads “get the address of i”.
printf("%i", *foo[0]);  reads “get the value pointed to by the element at position 0 of foo”.

C pointers

Manipulating a variable through a pointer is called dereferencing; it is done through an indirection expression.
When an array is used where a pointer is expected, it is converted to a pointer to its first element; this is called decaying.

Given an integer, all of the following are shown in the sketch below:

  • Create a pointer
  • Get the memory location of the variable
  • Get the value of the variable
  • Set the value of the variable
  • Set a pointer to null, so it can be assigned to a variable later
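A minimal sketch:

```c
#include <stdio.h>

int main(void)
{
    int i = 5;

    int *p = &i;                /* create a pointer to i */

    printf("%p\n", (void *)&i); /* get the memory location of the variable */
    printf("%i\n", *p);         /* get the value of the variable */
    *p = 6;                     /* set the value of the variable */

    int *q = NULL;              /* a null pointer... */
    q = &i;                     /* ...assigned to a variable later */

    return 0;
}
```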

C arrays

Arrays mostly can’t be manipulated directly, but rather through pointers.
The variable array is, for most intents and purposes, a pointer to the first element of the array (although, unlike a real pointer, it can’t itself be reassigned or incremented).

Given an array of integers:
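For instance, a minimal declaration:

```c
int array[] = { 1, 2, 3 };
```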

  • This doesn’t really create an array. It creates 3 integers side-by-side, and stores the memory location of the first one.
  • array == &array == &array[0]
  • When you pass an array as an argument to a function, you really pass a pointer to the array’s first element, because the array decays to a pointer.
  • Incrementing a pointer to an array moves the pointer to the next element of the array (the pointer is incremented by the size of the pointed-to type).
  • The subscript operator (the [] in array[0]) has nothing to do with arrays themselves; you can also use it on a pointer, as shown below.
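A minimal sketch of these behaviors:

```c
#include <stdio.h>

int main(void)
{
    int array[] = { 1, 2, 3 };

    /* array, &array and &array[0] all yield the same address */
    printf("%p %p %p\n", (void *)array, (void *)&array, (void *)&array[0]);

    int *p = array;       /* the array decays to a pointer */
    p++;                  /* moves forward by sizeof(int): p now points to array[1] */

    printf("%i\n", p[1]); /* subscripting a pointer: this is array[2] */

    return 0;
}
```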

C structures

Given a structure:

  • Accessing members of the structure through the variable uses the dot operator (.)
  • Accessing the members through a pointer uses the arrow operator (->); both are shown below
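A minimal sketch, with an illustrative point structure:

```c
#include <stdio.h>

struct point {
    int x;
    int y;
};

int main(void)
{
    struct point pt = { 1, 2 };
    struct point *pp = &pt;

    pt.x = 3;    /* through the variable: dot operator */
    pp->y = 4;   /* through a pointer: arrow operator */
    (*pp).x = 5; /* equivalent to pp->x */

    printf("%i %i\n", pt.x, pt.y); /* prints 5 4 */
    return 0;
}
```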

C++ references

C++ is much less in love with pointers than C, and prefers using reference variables. They are mostly used in the same way as pointers, but their address can’t change (they always point to the same variable).
The main usage of a reference variable is for function parameters, to pass the variables by reference (by default, it’s by value) without resorting to pointers.

Given an integer:

  • Create a reference to a variable with int &r = i; (this is called binding a reference to an object)
  • You do not need to dereference a reference to access the variable’s value
  • To make sure the arguments are passed by reference in a method, declare the parameters as references
  • On the other hand, the same method using pointers must be passed the variables’ addresses in the call (all four points are shown below)
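A minimal sketch covering all four points:

```cpp
#include <iostream>

// Parameters declared as references: the caller's variable is modified.
void increment_by_ref(int &n) { n++; }

// The same method using a pointer: the caller must pass an address.
void increment_by_ptr(int *n) { (*n)++; }

int main()
{
    int i = 5;

    int &r = i;             // bind a reference to an object
    std::cout << r << "\n"; // no dereferencing needed: prints 5

    increment_by_ref(i);    // the call site looks like pass-by-value
    increment_by_ptr(&i);   // the call site must take the address

    std::cout << i << "\n"; // prints 7
    return 0;
}
```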

Managed instances

Instances of managed objects should be used through handles (^, also called the “hat”). A handle can be read as “is a handle to an instance of the managed class”. You then use the handle as if it were the actual instance.

Note that, even though you don’t have to use managed objects in C++/CLI, you have to declare instances of managed objects with a handle.
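A minimal sketch, using StringBuilder as an example:

```cpp
using namespace System;
using namespace System::Text;

int main()
{
    // sb "is a handle to an instance of the managed class" StringBuilder
    StringBuilder^ sb = gcnew StringBuilder();

    // the handle is then used as if it were the actual instance
    sb->Append("hello");
    Console::WriteLine(sb->ToString());

    return 0;
}
```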

Managed pointers

Because of garbage collection (which moves objects around in memory), you can’t just use a regular C/C++ pointer to a managed object. The managed pointers are called interior pointers. A good overview can be found here.

The syntax for interior pointers is very… declarative:
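For instance (a sketch, with an illustrative ref class):

```cpp
using namespace System;

ref struct Counter
{
    int value;
};

int main()
{
    Counter^ c = gcnew Counter();

    // an interior pointer keeps tracking the field
    // even if the GC moves the object around
    interior_ptr<int> p = &c->value;
    *p = 42;

    Console::WriteLine(c->value); // 42
    return 0;
}
```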

Native pointers are automatically converted to interior pointers (the inverse is not true):
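A quick illustration:

```cpp
int main()
{
    int i = 5;
    int *native = &i;

    interior_ptr<int> interior = native; // native to interior: fine
    // native = interior;                // interior to native: compile error

    return 0;
}
```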

As with pointers in C++, using interior pointers is not recommended; you should rather use managed references.

Managed references

A managed reference is called a tracking reference and uses % instead of &.
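A minimal sketch:

```cpp
using namespace System;

// % is to handles what & is to native types: a tracking reference.
void AppendWorld(String^ %s)
{
    s = String::Concat(s, " world"); // reassigns the caller's handle
}

int main()
{
    String^ str = "hello";
    AppendWorld(str);
    Console::WriteLine(str); // prints "hello world"
    return 0;
}
```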

Managed pointers and references have some weirdness, for which this SO post goes into further details.

Further readings

C++/CLI wrapping a C library: cheatsheet

Because once in a while I write a C++/CLI wrapper around a C library, but since I do it so infrequently I forget everything each time. Here is a list of pointers (haha) for common pitfalls and problems.

Compiling and debugging

You have to wrap the .h in an #ifdef __cplusplus / extern "C" block:
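Something like this (the header and function names are illustrative):

```c
/* mylib.h */
#ifdef __cplusplus
extern "C" {
#endif

int mylib_process(const char *path);

#ifdef __cplusplus
}
#endif
```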

You also need to remove the /clr option for C files.
Right click your C files > Properties > C/C++ > General > Common Language RunTime Support > No CLR Support.

In your managed class, to include an unmanaged C header, use managed(push|pop)
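For instance:

```cpp
// temporarily compile the C header as unmanaged code
#pragma managed(push, off)
#include "mylib.h"
#pragma managed(pop)
```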

Because Linux and Windows get along so well, you may also have to redefine a few C methods in your header file.

If you wish to create a C# unit test project (using xUnit for instance), with Visual Studio 2012 and later, you will have to do a few things:

  • In the C++/CLI project’s properties, in Debugging, choose the Debugger Type “Mixed”
  • In the Visual Studio options, Debugging, check “Managed C++ Compatibility Mode” (in VS2012) / “Use Managed Compatibility Mode” (in VS2013).

To build using MSBuild, you may have to change the toolset to VS2012: project properties > General > Platform Toolset = Visual Studio 2012 (v110). And set the Configuration Type to Dynamic Library (.dll) while you’re at it.

Managed and unmanaged objects

gcnew instantiates a managed object, which will be garbage collected. Always use it for managed objects.

To easily call the native structures through managed classes, use the internal constructor and ToNative methods.
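A sketch of the pattern (the names are illustrative):

```cpp
// The native struct from the wrapped C library.
struct native_point { int x; int y; };

public ref class Point
{
public:
    property int X;
    property int Y;

internal:
    // builds the managed object from the native struct
    Point(const native_point &p)
    {
        X = p.x;
        Y = p.y;
    }

    // converts the managed object back to the native struct
    native_point ToNative()
    {
        native_point p;
        p.x = X;
        p.y = Y;
        return p;
    }
};
```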

Powershell for data mining Access databases into SQL Server

I need to do some data extraction from several hundred Access databases extracted from zip files. The extracted data will allow us to do some statistics on our customers’ behavior.
In order to do that, I decided to use Powershell, because it has all the features I need, bundled in one neat language.

The scaffolding for the scripts uses PSake for the task launcher, Pester for Powershell unit tests, and a few Nuget packages (including PSake).

In a vendor folder, add NuGet.exe, as well as a packages.config file listing the necessary Nuget packages. I have added PSake, and NUnitOrange to transform the Pester results into a nice HTML report.
I’m not using the Pester Nuget package because it includes the Pester unit tests, and some of them fail, which breaks my build (in addition to adding a thousand tests I don’t care about). Instead, I directly include the Pester scripts, and I cleaned up the samples and tests.

Since the script needs to read Access databases using some sort of ADODB or OLEDB provider, you will have to install either the Access 2007 or Access 2010 provider. If you’re like me (with Office x86 already installed), you won’t be able to install the Access 2010 x64 provider, which will trigger “The Jet/ACE OLEDB provider is not registered on the local machine” errors. So you will have to run the x86 version of Powershell. Which means you will have to use the x86 version of the SQLPS module (make sure you download the x86 version). Don’t worry, the SQL 2012 version of SQLPS is compatible with older SQL Servers (at least 2008 R2).

The batch bootstrapper is inspired by the bootstrapper from Pester:
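A sketch of what it can look like (paths are illustrative); note that it launches the x86 Powershell from SysWOW64, as required by the provider issue above:

```bat
@echo off
REM runs the PSake build in the x86 Powershell
"%SystemRoot%\SysWOW64\WindowsPowerShell\v1.0\powershell.exe" -NonInteractive -NoProfile -ExecutionPolicy Bypass -Command "& { Import-Module .\vendor\psake\tools\psake.psm1; Invoke-psake .\tasks.ps1 %*; if (-not $psake.build_success) { exit 1 } }"
```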

The tasks in tasks.ps1 are pretty standard and minimalist, so that as much as possible is tested in modules via Pester.
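For instance, a minimal sketch (task and module names are illustrative):

```powershell
# tasks.ps1 -- thin tasks; the real logic lives in the modules
Task default -Depends Test

Task Test {
    Import-Module .\vendor\Pester\Pester.psm1 -Force
    Invoke-Pester
}

Task Extract -Depends Test {
    Import-Module .\modules\Extraction.psm1 -Force
    Invoke-Extraction -Source .\zips -Destination .\extracted
}
```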

Reading Access databases is as simple as using OleDb, old-school style:
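A sketch (the function name is illustrative, and the provider name depends on the installed Access Database Engine):

```powershell
function Read-AccessTable([string]$dbPath, [string]$query) {
    $connection = New-Object System.Data.OleDb.OleDbConnection `
        "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=$dbPath"
    $adapter = New-Object System.Data.OleDb.OleDbDataAdapter($query, $connection)
    $table = New-Object System.Data.DataTable
    $adapter.Fill($table) | Out-Null
    $connection.Close()
    ,$table   # the comma prevents Powershell from unrolling the DataTable
}
```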

And the corresponding Pester tests, using actual Access test databases:
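A sketch (the test database and its contents are illustrative):

```powershell
Describe "Read-AccessTable" {
    $testDb = ".\tests\data\sample.accdb"   # small database kept as test data

    It "reads the rows of a known table" {
        $table = Read-AccessTable $testDb "SELECT * FROM Customers"
        $table.Rows.Count | Should Be 3
    }
}
```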

Executing SQL scripts uses the much more powerful Invoke-SqlCmd cmdlet from the SQLPS module. The best part is that it returns Powershell objects, so I can do something like this:
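(A sketch; server, database and query are illustrative.)

```powershell
Import-Module SQLPS -DisableNameChecking

$rows = Invoke-Sqlcmd -ServerInstance "localhost" -Database "Stats" `
    -Query "SELECT CustomerId, Total FROM Orders"

# the results are objects, so they pipe like anything else
$rows | Where-Object { $_.Total -gt 100 } |
    ForEach-Object { "{0}: {1}" -f $_.CustomerId, $_.Total }
```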

WPF Behaviors: adding mouse-event adorners

This is part 3 of the WPF drag & drop exploration. Part 2 can be found here.

Let’s recap what we have right now. We have a sample application that allows us to freely drag items across an area. Now, since there might be multiple items stacked on top of each other, we want the user to be sure which item will be moved. For that, we want to add a visual indication on the item. This kind of thing is called an adorner in WPF.

Now, you will see many samples of code-driven adorner implementations. But we don’t want to break the MVVM pattern, so we want the adorner to be defined in the view.

So we’ll create an Adorner that allows us to use a DataTemplate. For that, I’ll just copy some existing code:
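(A sketch, adapted from the usual samples; the adorner’s content is set to the adorned element’s DataContext, so bindings in the template hit the item view model.)

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Media;

// An adorner that renders a DataTemplate over the adorned element.
public class TemplatedAdorner : Adorner
{
    private readonly ContentPresenter _presenter;

    public TemplatedAdorner(UIElement adornedElement, DataTemplate template)
        : base(adornedElement)
    {
        var element = adornedElement as FrameworkElement;
        _presenter = new ContentPresenter
        {
            ContentTemplate = template,
            Content = element != null ? element.DataContext : null
        };
        AddVisualChild(_presenter);
    }

    protected override int VisualChildrenCount
    {
        get { return 1; }
    }

    protected override Visual GetVisualChild(int index)
    {
        return _presenter;
    }

    protected override Size MeasureOverride(Size constraint)
    {
        _presenter.Measure(constraint);
        return _presenter.DesiredSize;
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        _presenter.Arrange(new Rect(new Point(0, 0), AdornedElement.RenderSize));
        return finalSize;
    }
}
```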

We can now use this adorner in the behavior.
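A sketch of the relevant fragment (AdornerTemplate is assumed to be a DataTemplate dependency property declared on the behavior):

```csharp
private TemplatedAdorner _adorner;

private void OnMouseEnter(object sender, MouseEventArgs e)
{
    var layer = AdornerLayer.GetAdornerLayer(AssociatedObject);
    if (layer == null) return;

    _adorner = new TemplatedAdorner(AssociatedObject, AdornerTemplate);
    layer.Add(_adorner);
}

private void OnMouseLeave(object sender, MouseEventArgs e)
{
    if (_adorner == null) return;
    AdornerLayer.GetAdornerLayer(AssociatedObject).Remove(_adorner);
    _adorner = null;
}
```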

Now, we just need to create the adorner’s DataTemplate and wire up the events. We have to add an AdornerDecorator to the control tree; otherwise, the adorner may behave erratically (it would be attached to the window), or not work at all.
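A sketch of the view side (the Width/Height bindings assume the item view model exposes those properties):

```xml
<!-- the AdornerDecorator scopes the adorner layer to the items area -->
<AdornerDecorator>
    <ItemsControl x:Name="Items" />
</AdornerDecorator>

<!-- the adorner's DataTemplate: a black border over the item -->
<DataTemplate x:Key="HoverAdornerTemplate">
    <Border BorderBrush="Black" BorderThickness="2"
            Width="{Binding Width}" Height="{Binding Height}" />
</DataTemplate>
```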

Note that from the adorner’s DataTemplate, we set the DataContext property, so that we can bind the adorner’s properties directly to the ones of the item view model.

Now, on mouse-over, a black border will appear, and will disappear when the mouse leaves. There are a few problems with this method: most notably, the adorner flickers when the mouse stays over it. But we’ll try to tackle them later on.

WPF Behaviors: switching to Windows.Interactivity.Behavior

This is part 2 of the WPF drag & drop exploration. Part 1 can be found here. Among other things, this CodeProject sample helped me much.

Yesterday I created a WPF behavior following WPF conventions for custom dependency properties. While trying to add an adorner to my item (more on that later), I noticed many samples were using System.Windows.Interactivity.Behavior<T>, which I thought was Blend-related, but apparently is not. Because it is much more concise to write and provides useful overridable methods, I decided to switch to it instead.
In addition, with the previous code, I had a problem when defining multiple dependency properties on my behavior, so I needed to find a way to solve this also.

The first thing is of course to inherit from Behavior<T>. You will find it in the System.Windows.Interactivity namespace, which is not included in .Net. You can grab it through the Blend SDK, in a Nuget package, or it’s included in MVVM frameworks like Caliburn.Micro or MVVM Light.

Then, we can remove the Getxxx and Setxxx methods, and replace them with an actual property (that does the same thing). We can also access the behavior instance from the dependency property variable, so we don’t need the instance singleton anymore. We will, however, need to attach the mouse events to the draggable item through ICommands. Since there is no standard ICommand implementation that we can use, we must create one ourselves. I have copied a “standard” implementation from CodeProject, and you will find many similar implementations if you do a quick Google search:
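(A typical RelayCommand-style sketch.)

```csharp
using System;
using System.Windows.Input;

public class DelegateCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public DelegateCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}
```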

This command will be the bridge between the view and the other layers of the application.

Now, on to inheriting from Behavior<T>. You will notice that I have changed a few names here and there, because they weren’t reflecting the reality of the application anymore: IDragDropHandler becomes IDraggable; the DropHandler property becomes DraggableItem. I have also replaced the mouse event names with descriptive names (OnStartDrag instead of “mouse button down”), especially since we will now be able to bind these handlers to any event.
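A sketch of the reworked behavior (the exact command wiring is illustrative; DelegateCommand is the implementation above):

```csharp
using System.Windows;
using System.Windows.Input;
using System.Windows.Interactivity;

public class DragBehavior : Behavior<UIElement>
{
    public static readonly DependencyProperty DraggableItemProperty =
        DependencyProperty.Register(
            "DraggableItem", typeof(IDraggable), typeof(DragBehavior));

    // an actual property replaces the old Getxxx/Setxxx static methods
    public IDraggable DraggableItem
    {
        get { return (IDraggable)GetValue(DraggableItemProperty); }
        set { SetValue(DraggableItemProperty, value); }
    }

    // the view binds whatever events it wants to these commands
    public ICommand StartDragCommand { get; private set; }
    public ICommand DragCommand { get; private set; }
    public ICommand DropCommand { get; private set; }

    public DragBehavior()
    {
        StartDragCommand = new DelegateCommand(OnStartDrag);
        DragCommand = new DelegateCommand(OnDrag);
        DropCommand = new DelegateCommand(OnDrop);
    }

    private void OnStartDrag(object parameter) { /* begin tracking */ }
    private void OnDrag(object parameter) { /* see below */ }
    private void OnDrop(object parameter) { DraggableItem.Dropped(); }
}
```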

You will notice that we don’t attach the commands to any particular item event anymore. This is because it’s handled by the view. It kind of makes sense, because a different device may have a different method of dragging. The rest of the view can be found in part 1.

The event handlers are also similar to the ones in part 1. The only difference is that they access the item through the Behavior<T> properties. For instance:
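(A sketch.)

```csharp
private void OnDrag(object parameter)
{
    // the item is read from the behavior's own property
    var position = Mouse.GetPosition(AssociatedObject);
    DraggableItem.Moved(position.X, position.Y);
}
```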

That’s it! Your behavior event handlers are now linked through the view, which means better testing, and better flexibility.

WPF Drag&drop items in a canvas: communicate between behavior and ViewModel

The source code of the sample application is available at GitHub. It has been strongly inspired by Gong WPF DragDrop and has started life as a copy of another “WPF behavior lab”.

The final goal of the application is to allow a user to freely drag an item in a canvas, and to have the items “snap” to one another, so that the user can align them easily. I’m taking it as an opportunity to learn more about WPF.

The first thing I had to learn was how to create and use custom behaviors. They allow binding elements from the View to the ViewModel using custom properties.
In “pure” WPF (Blend has different behavior conventions), a behavior is a regular class, with a few conventions to declare a custom property, called a Dependency Property. It’s pretty easy to bind a simple value (boolean, integer…); examples are plentiful.

Now, in the view, I want to say “when the user moves this item, this method should be run”, because I want not only the item to move, but also the container to compare the item’s new position to its neighbors. A custom property can be anything a variable can be, but not a method. So, to circumvent this, the behavior must be aware of the ViewModel, but obviously we don’t want to strongly link the behavior with the ViewModel.

  • The ViewModel will have to implement an interface
  • The View will be bound to this ViewModel through {Binding}
  • The Behavior will use this interface’s methods to send messages to the ViewModel

The IDragDropHandler interface is very simple. The Moved method will notify the ViewModel that the item has been moved, and the Dropped method that the mouse button has been released.
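A sketch matching that description:

```csharp
public interface IDragDropHandler
{
    void Moved(double x, double y); // the item has been moved
    void Dropped();                 // the mouse button has been released
}
```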

Now, the behavior will use these methods. To do that, we need to create mouse click and move handlers and assign them to the UIElement. Because the OnxxxChanged method is static, but the click/move handlers are not, we need a way to “memorize” the handlers. We’ll do that through a singleton instance:
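A sketch of the convention-based behavior (the drag-state handling is illustrative):

```csharp
using System.Windows;
using System.Windows.Input;

public class DragBehavior
{
    // singleton so the static callback can reach the instance handlers
    public static readonly DragBehavior Instance = new DragBehavior();

    public static readonly DependencyProperty DropHandlerProperty =
        DependencyProperty.RegisterAttached(
            "DropHandler", typeof(IDragDropHandler), typeof(DragBehavior),
            new PropertyMetadata(OnDropHandlerChanged));

    public static IDragDropHandler GetDropHandler(DependencyObject target)
    {
        return (IDragDropHandler)target.GetValue(DropHandlerProperty);
    }

    public static void SetDropHandler(DependencyObject target, IDragDropHandler value)
    {
        target.SetValue(DropHandlerProperty, value);
    }

    private IDragDropHandler _handler;
    private bool _dragging;

    private static void OnDropHandlerChanged(
        DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // "memorize" the handler, and wire the mouse events
        var element = (UIElement)d;
        Instance._handler = (IDragDropHandler)e.NewValue;
        element.MouseLeftButtonDown += Instance.OnMouseDown;
        element.MouseMove += Instance.OnMouseMove;
        element.MouseLeftButtonUp += Instance.OnMouseUp;
    }

    private void OnMouseDown(object sender, MouseButtonEventArgs e)
    {
        _dragging = true;
    }

    private void OnMouseMove(object sender, MouseEventArgs e)
    {
        if (!_dragging) return;
        var position = Mouse.GetPosition((IInputElement)sender);
        _handler.Moved(position.X, position.Y);
    }

    private void OnMouseUp(object sender, MouseButtonEventArgs e)
    {
        _dragging = false;
        _handler.Dropped();
    }
}
```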

Now, the behavior is notifying the ViewModel that it is being moved. Let’s handle the movement:
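A sketch, using Caliburn.Micro’s PropertyChangedBase for change notification:

```csharp
using Caliburn.Micro;

public class ItemViewModel : PropertyChangedBase, IDragDropHandler
{
    private double _x;
    public double X
    {
        get { return _x; }
        set { _x = value; NotifyOfPropertyChange(() => X); }
    }

    private double _y;
    public double Y
    {
        get { return _y; }
        set { _y = value; NotifyOfPropertyChange(() => Y); }
    }

    public void Moved(double x, double y)
    {
        // the view updates itself through the X/Y bindings
        X = x;
        Y = y;
    }

    public void Dropped()
    {
        // notify the container through Caliburn.Micro events
    }
}
```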

Now that we have a ViewModel that knows when it’s being moved, let’s actually allow the user to move it. First, let’s write a container for the items:
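(A minimal sketch; the initial items are illustrative.)

```csharp
using System.Collections.ObjectModel;

public class CanvasViewModel
{
    public ObservableCollection<ItemViewModel> Items { get; private set; }

    public CanvasViewModel()
    {
        Items = new ObservableCollection<ItemViewModel>
        {
            new ItemViewModel { X = 10, Y = 10 },
            new ItemViewModel { X = 80, Y = 40 }
        };
    }
}
```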

Then the view (I’m removing things like styling for brevity). Note that it follows Caliburn.Micro conventions on naming (among other things), so it automatically binds some things, like the ItemsControl items through its name.
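A sketch (the b: prefix is assumed to map to the behavior’s namespace, and the rectangle item template is illustrative):

```xml
<ItemsControl x:Name="Items">
    <ItemsControl.ItemsPanel>
        <ItemsPanelTemplate>
            <Canvas />
        </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
    <ItemsControl.ItemContainerStyle>
        <Style TargetType="ContentPresenter">
            <Setter Property="Canvas.Left" Value="{Binding X}" />
            <Setter Property="Canvas.Top" Value="{Binding Y}" />
        </Style>
    </ItemsControl.ItemContainerStyle>
    <ItemsControl.ItemTemplate>
        <DataTemplate>
            <Rectangle Width="50" Height="50" Fill="SteelBlue"
                       b:DragBehavior.DropHandler="{Binding}" />
        </DataTemplate>
    </ItemsControl.ItemTemplate>
</ItemsControl>
```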

To summarize:

  • The behavior is attached to the item’s viewmodel from the view
  • The behavior memorizes the item instance through the IDragDropHandler interface, and assigns mouse events to the item
  • The mouse events call the IDragDropHandler methods
  • The item’s viewmodel implements these methods and handles coordinate changes itself (which are visually updated through binding in the view)
  • The item viewmodel notifies the container viewmodel of its coordinate changes through Caliburn.Micro events

Unit testing your Powershell scripts using Pester

In order to stabilize our middle office, we need to test it. Not that it’s buggy, but hey, testing is good.

We have a huge 500-line script that processes PDF files through a bunch of programs. We need to test a few things:

  • Every step and each piece of the code must be working
  • The configuration files must be properly read
  • The proper programs must be run in the proper order
  • The processed PDFs must look like what we’re expecting

To do that, we have a lot of work to do.

There is a series of great articles on Pester on PowerShell Magazine.

Refactor your script

First, we need to extract the methods and code blocks we will test. Our script is not that hard to refactor into a bunch of methods, since it’s pretty well organized so far. A simple refactor is to move a bunch of independent code blocks into methods, grouped and externalized by feature.

Prefer using modules to do that. Create your methods in .psm1 files. It will allow you to just Import-Module mymodule.psm1 and use it the same way as if you were dot-sourcing your file (. .\myfile.ps1), but will let you do more awesome things later; for instance, you can get the list of available commands through Get-Command -Module mymodule, get the comment-based help of your module methods through Get-Help my-module-method, get tab-expansion on your methods, etc.

If you’re not sure how to refactor your script to extract units of work, I can’t really help you there, and you should go learn more about unit testing; there are a lot of places where you can do that.

Unit Test your extracted code

There is an awesome unit test and BDD tool called Pester. Download the latest nuget package into your scripts directory by running nuget install pester. PsGet also has a Pester package, but including the Pester files with your sources (or a way to get them like with Nuget) is very important if you intend to run your tests on a continuous integration platform, especially if you externalize them on a service like AppVeyor.

Create a _run_tests.ps1 file (or whatever your naming convention calls for), with the very simple following contents:
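Something like this (the path depends on where nuget unpacked Pester):

```powershell
# _run_tests.ps1
Import-Module .\pester\tools\Pester.psm1
Invoke-Pester
```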

Invoke-Pester will run all the “*.Tests.ps1” files it finds in the current directory. Unfortunately, the Pester Nuget package comes with the Pester tests, and it’s pretty annoying to see the thousands of unit tests in the middle of yours. You can either not use Invoke-Pester and roll your own “look for *.Tests.ps1 file except in the Pester folder” method, or (like I did) forget about nuget update pester and remove the test files from the Pester folder.

To create unit test files, you can either write them manually, or use the New-Fixture module command, which will create both a file to contain your methods, and a test file to test your methods. If you’re working with modules, you will not really be able to use the power of this command, but if you’re creating a new script, it will provide you with a BDD workflow.

Your test file for your module will look like this:
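(A sketch; the module and expected values are illustrative.)

```powershell
# MailConfig.Tests.ps1
Import-Module .\mail.psm1

Describe "Read-MailConfig" {
    It "reads the SMTP server from the config file" {
        $config = Read-MailConfig ".\tests\test-mail.config"
        $config.SmtpServer | Should Be "smtp.example.com"
    }
}
```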

As you can see, for now I’m using a custom config file with the expected values filled in. This is not the best way to unit test, so we’re going to use mocking.

Mock the system methods

Here is the true power of Pester and Powershell: the ability to mock system methods. Your method reads files from the disk? No problem. Just provide an alternative implementation, and you don’t have to set up a bunch of test data and config files.

My Read-MailConfig looks like this:
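(A sketch of the idea; the actual config format is illustrative.)

```powershell
# in mail.psm1
function Read-MailConfig([string]$path) {
    [xml]$xml = Get-Content $path
    New-Object PSObject -Property @{
        SmtpServer = $xml.mail.server
        From       = $xml.mail.from
    }
}
```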

So, there is a Get-Content call that I want to mock, so I can control its return value. I can now modify my test so that the values it uses sit right next to the assertions:
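(A sketch, continuing the illustrative module above.)

```powershell
Describe "Read-MailConfig" {
    It "reads the SMTP server" {
        # the values used by the test sit right next to the assertion
        Mock Get-Content -ModuleName mail {
            "<mail><server>smtp.example.com</server><from>me@example.com</from></mail>"
        }

        $config = Read-MailConfig "any-path"
        $config.SmtpServer | Should Be "smtp.example.com"
    }
}
```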

Note the usage of -ModuleName mail in the Mock call: modules have their own scope (which is not the case of plain dot-sourced script files), and so need a bit more work to inject mocks.

My mail module actually sends emails through the System.Net.Mail classes, but Pester can’t mock .Net objects (note that .Net mocking frameworks can’t mock most system classes either).

In order to bypass that, we’re going to extract the .Net object calls into separate methods that do only that; we’ll mock this extracted method, and not test the .Net call itself:
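(A sketch; the function names are illustrative.)

```powershell
# in mail.psm1: the .Net calls are isolated in their own method
function Send-MailDotNet($smtpServer, $mail) {
    $client = New-Object System.Net.Mail.SmtpClient $smtpServer
    $client.Send($mail)
}

function Send-Report($config, $body) {
    $mail = New-Object System.Net.Mail.MailMessage($config.From, $config.To, "Report", $body)
    Send-MailDotNet $config.SmtpServer $mail
}
```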

And the test:
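(A sketch.)

```powershell
Describe "Send-Report" {
    It "sends the mail through the SMTP client" {
        # mock the extracted method; no actual .Net call is made
        Mock Send-MailDotNet -ModuleName mail {}

        $config = @{ From = "a@example.com"; To = "b@example.com"; SmtpServer = "localhost" }
        Send-Report $config "hello"

        Assert-MockCalled Send-MailDotNet -ModuleName mail -Exactly 1
    }
}
```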

Use TestDrive to test file system processes

If you need to test for complex file access, mocking system methods will quickly become too hard. For instance, I need to test two methods: one that removes files older than X days, and one that removes empty folders. Using TestDrive is much more straightforward, simple and compact than mocking the Get-ChildItem cmdlet:
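For instance, for the “files older than X days” method (Remove-OldFiles is an illustrative name):

```powershell
Describe "Remove-OldFiles" {
    It "removes files older than 30 days" {
        # TestDrive: is a temporary, isolated drive managed by Pester
        New-Item "TestDrive:\old.txt" -ItemType File
        (Get-Item "TestDrive:\old.txt").LastWriteTime = (Get-Date).AddDays(-40)
        New-Item "TestDrive:\new.txt" -ItemType File

        Remove-OldFiles -Path "TestDrive:\" -Days 30

        Test-Path "TestDrive:\old.txt" | Should Be $false
        Test-Path "TestDrive:\new.txt" | Should Be $true
    }
}
```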

Unit testing custom querystring-based authorization on a WebApi controller

I needed to add a very simple authorization mechanism to my API: use a query string parameter “api_key”, so that it’s compatible with Swagger (using Swashbuckle, there is a field “api_key” in the Swagger UI) and is easily callable through Ruby On Rails.

Following and adapting a nice tutorial, I have done the following.

Implement the authorization filter

Create an interface for your API key “getter”:
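(A sketch; the interface name is illustrative.)

```csharp
public interface IApiKeyProvider
{
    string GetApiKey();
}
```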

Implement this interface; here it’s extremely simple:
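For instance, reading the key from AppSettings (the setting name is illustrative):

```csharp
using System.Configuration;

public class ConfigApiKeyProvider : IApiKeyProvider
{
    public string GetApiKey()
    {
        return ConfigurationManager.AppSettings["ApiKey"];
    }
}
```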

Inject this interface through your dependency injector of choice. You don’t have to modify your controllers, which is great.

Then create a filter attribute to use this implementation:
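A sketch of such a filter (how the provider property gets injected depends on your container):

```csharp
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class ApiKeyAuthorizeAttribute : AuthorizationFilterAttribute
{
    public IApiKeyProvider ApiKeyProvider { get; set; } // injected

    public override void OnAuthorization(HttpActionContext actionContext)
    {
        // look for the api_key query string parameter
        var apiKey = actionContext.Request.GetQueryNameValuePairs()
            .FirstOrDefault(p => p.Key == "api_key").Value;

        if (apiKey == null || apiKey != ApiKeyProvider.GetApiKey())
        {
            throw new HttpResponseException(
                actionContext.Request.CreateErrorResponse(
                    HttpStatusCode.Unauthorized, "Invalid API key"));
        }
    }
}
```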

Now you just have to add the [ApiKeyAuthorize] attribute to your controller(s), and you now need to add the proper api_key query string parameter to all your requests.

Test the filter

A few things to test: that your class uses this attribute, and that the attribute does what it says it does.

Test the attribute presence

Here I’m using XUnit and FluentAssertions.
It’s just a matter of listing the attributes on the class, and checking that an attribute matching the one created exists.
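(A sketch; ValuesController is an illustrative controller name.)

```csharp
[Fact]
public void Controller_is_decorated_with_the_ApiKeyAuthorize_attribute()
{
    var attributes = typeof(ValuesController)
        .GetCustomAttributes(typeof(ApiKeyAuthorizeAttribute), true);

    attributes.Should().NotBeEmpty();
}
```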

Test the attribute inner workings

What are we testing there? That the attribute throws an HttpResponseException when no parameter exists or when the value is wrong, and that it doesn’t throw an exception when the key matches.

Setup the tests

Using XUnit and Moq, the test setup looks like this:
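(A sketch; the expected key is illustrative.)

```csharp
using Moq;
using Xunit;

public class ApiKeyAuthorizeTests
{
    private readonly ApiKeyAuthorizeAttribute _attribute;

    public ApiKeyAuthorizeTests()
    {
        // mock the key provider so the expected key is known
        var provider = new Mock<IApiKeyProvider>();
        provider.Setup(p => p.GetApiKey()).Returns("secret");

        // register the mock instance with the container (see below)
        InjectionSetup.Register(provider.Object);

        _attribute = new ApiKeyAuthorizeAttribute();
    }
}
```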

The ContextUtil.CreateActionContext method can be picked from the ASP.Net source. The corresponding tests can be found here.

My InjectionSetup.Register is a unit-test-specific injection that allows using a specific instance instead of creating one, here using LightInject:
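(A sketch of the idea.)

```csharp
using LightInject;

public static class InjectionSetup
{
    public static void Register(IApiKeyProvider instance)
    {
        // serve the given instance instead of building one
        var container = new ServiceContainer();
        container.RegisterInstance(instance);
        // ...then plug the container into the WebApi configuration,
        // the same way the application does it
    }
}
```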

Test the attribute

Still using XUnit and FluentAssertions, three simple tests check that the responses are as expected:
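(A sketch; CreateContext is an illustrative helper built on ContextUtil.CreateActionContext that sets the request’s query string.)

```csharp
[Fact]
public void Throws_when_the_api_key_is_missing()
{
    var context = CreateContext(""); // no query string

    Action act = () => _attribute.OnAuthorization(context);

    act.ShouldThrow<HttpResponseException>();
}

[Fact]
public void Throws_when_the_api_key_is_wrong()
{
    var context = CreateContext("api_key=wrong");

    Action act = () => _attribute.OnAuthorization(context);

    act.ShouldThrow<HttpResponseException>();
}

[Fact]
public void Does_not_throw_when_the_api_key_matches()
{
    var context = CreateContext("api_key=secret");

    Action act = () => _attribute.OnAuthorization(context);

    act.ShouldNotThrow();
}
```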


Testing Entity Framework layer in database-first mode

In your app, you may have a “query” layer, where all your Linq queries are grouped. If you don’t, you should. Having your queries inside your controller leads to poor testing and strong coupling.

You will want to test that your queries return what they say they return. In order to do that, you need to mock your Entity Framework entities container.

There is an awesome and complete tutorial here, explaining everything.

If you’re stuck on Visual Studio 2010, you will need to do a few things: first, download and install the ADO.NET DbContext Generator code template, so that your entities use the DbSet type. Then, open your Entity Framework container, right click inside and select “Add a code generation element” (or something like that, my VS is not in English) and select the DbContext elements you just downloaded. You will then be able to customize the way your entities are generated (awesome tool).

In all Visual Studio versions, if you’re not using code-first like the tutorial, you will have to modify the template generation to mark your entities list as virtual (so that they can be mocked). Open your xxx.Context.tt file, find the line with DbSet<<#=Code.Escape(entitySet.ElementType)#>> and add virtual in front of it. Now you can check out the generated xxx.Context.cs file to be sure it’s not doing crazy things.

While you’re at it, since you’re modifying the code templates, follow these awesome instructions to xml-document your generated entities.

Now, following the MSDN article, in your query tests (the ones returning a set of data), you will have to manually create the data to return, and “bind” it to the mock instance. Since you will have to do that in each of your query tests, don’t forget to extract it to a method:
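A sketch of such a helper, following the MSDN approach (MyEntities and Order are illustrative names):

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using Moq;

public static class MockContextFactory
{
    public static Mock<MyEntities> WithOrders(IEnumerable<Order> orders)
    {
        var data = orders.AsQueryable();

        // "bind" the data to a mocked DbSet through its IQueryable surface
        var set = new Mock<DbSet<Order>>();
        set.As<IQueryable<Order>>().Setup(m => m.Provider).Returns(data.Provider);
        set.As<IQueryable<Order>>().Setup(m => m.Expression).Returns(data.Expression);
        set.As<IQueryable<Order>>().Setup(m => m.ElementType).Returns(data.ElementType);
        set.As<IQueryable<Order>>().Setup(m => m.GetEnumerator())
           .Returns(data.GetEnumerator());

        var context = new Mock<MyEntities>();
        context.Setup(c => c.Orders).Returns(set.Object); // requires the virtual DbSet
        return context;
    }
}
```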