Creating a web API using ASP.Net WebApi over a legacy database

I’m trying to create a coherent API to access a crazy legacy database, so I decided to use an Asp.Net WebApi project. Since I want to get a somewhat sane result, I’ll test everything.

First things first: create a new WebApi project. It’s a project type, located under Asp.Net MVC websites.
Don’t forget to update the project’s nuget packages ASAP, since doing it too late will make you tear your hair out with broken links and missing files.

I will not go over creating API controllers, models, etc., because there are a billion websites to help you with that, and it’s pretty straightforward. Instead, I will try to gather in a single place all the “best practices” for dealing with a web API, applied to the problems of a legacy database. I will also not dwell on the various concepts (data transfer objects, dependency injection, etc.); you can read about them at length with a quick Google search.

Version your API

If your project has any kind of risk associated with it (which means: once in production, will breaking it make you lose money?), you should version your API, so that your website can use the latest and most awesome version, while an executable running somewhere, lost and forgotten in the bowels of your middle office, doesn’t suddenly (and probably silently) crash once a method returns differently-named fields.

For that, there are several methods, each with its pros and cons. You can read a nice summary (and how to include all the main methods at once in your project) here.
I decided to use URL versioning, because it’s the simplest to use and understand, and the most “visible”. Working with various contractors that use various languages and technologies, I always have to choose the least painful option.

To implement URL-based API versioning, the simplest, most painless option is to use attribute-based routing. Make sure you have the AttributeRouting.WebApi Nuget package, and just add attributes to your controller’s methods:
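(A sketch, not the original code; the GET attribute comes from the AttributeRouting.Web.Http namespace, and the controller is purely illustrative:)

    using System.Web.Http;
    using AttributeRouting.Web.Http;

    public class OrdersController : ApiController
    {
        // Old clients keep calling v1 and get the old field names…
        [GET("api/v1/orders/{id}")]
        public object GetOrderV1(int id)
        {
            return new { order_id = id };
        }

        // …while new clients explicitly opt in to v2.
        [GET("api/v2/orders/{id}")]
        public object GetOrderV2(int id)
        {
            return new { orderId = id };
        }
    }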

I advise you against using namespace-based versioning (as seen here), because it makes your life a living nightmare. Everything you will be using (like Swagger) will assume your API is not versioned, so you have to keep default routes, and namespace-based versioning doesn’t allow that (unless you succeed in modifying the linked class, which I didn’t try).

Create a public data layer

Publish Data Transfer Objects (DTO)

One annoying thing when you try to “publish” entities from Entity Framework is that you get all the foreign-key relationship noise in your JSON results.
It also tightly couples your underlying model to your API, which may be OK, but is not awesome, especially considering that our model is crazy (still working with a legacy DB, remember?).

So, in order to solve these problems (and also because I am not fond of publishing the underlying data model directly), we’ll use a design pattern called Data Transfer Objects.

To implement this pattern, let’s create a Model project, create the Entity Framework entities inside, and publish them through custom and “clean” objects.

Always remember that we’re dealing with a legacy DB. Our crazy fields, with their random casing, cryptic names and insane types (for instance, everything in my main table is a varchar(100), even dates, booleans and numbers), would really love a nice grooming. You’re in luck, because tools exist to make your life easier.

We will use AutoMapper to bind the entities to the custom data objects. This awesome tool will do much of the grunt work for us.

Let’s pretend that in the actual DB, we have entities looking a bit like this:
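(Names invented for the example, but faithful to the spirit:)

    // Everything really is a varchar(100) underneath.
    public partial class T_ORDER
    {
        public string ORDER_ID { get; set; }      // "42"
        public string order_date { get; set; }    // "20141231"
        public string IsPayed { get; set; }       // "Y" / "N"
        public string Amount_TTC { get; set; }    // "1234,56"
    }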

Since having all fields as strings with random casing is clearly insane (or lazy, or both, who knows?), we want to map it to an object like this:
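(Same example, the clean shape we actually want to publish:)

    using System;

    public class OrderDto
    {
        public int Id { get; set; }
        public DateTime OrderDate { get; set; }
        public bool IsPaid { get; set; }
        public decimal Amount { get; set; }
    }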

In order to do that, we will use AutoMapper so that we don’t have to manually create object converters. It doesn’t spare us from writing code, though; it just means that all the mapping code can be stored in a single place.
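A sketch with the era-appropriate static API (Mapper.CreateMap); the member conversions are just examples matching the invented entity above:

    using System;
    using System.Globalization;
    using AutoMapper;

    public static class MappingConfig
    {
        public static void Configure()
        {
            // Custom string -> DateTime conversion; the converter is shown just below.
            Mapper.CreateMap<string, DateTime>().ConvertUsing<LegacyDateTimeConverter>();

            Mapper.CreateMap<T_ORDER, OrderDto>()
                  .ForMember(d => d.Id,        o => o.MapFrom(s => int.Parse(s.ORDER_ID)))
                  .ForMember(d => d.OrderDate, o => o.MapFrom(s => s.order_date))
                  .ForMember(d => d.IsPaid,    o => o.MapFrom(s => s.IsPayed == "Y"))
                  // Watch the culture: the legacy decimals use a comma separator.
                  .ForMember(d => d.Amount,    o => o.MapFrom(s =>
                      decimal.Parse(s.Amount_TTC, new CultureInfo("fr-FR"))));
        }
    }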

System types are best converted using the system conversions, but in my case the datetimes have some weird things going on, so I created my own converter.
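Something like this (the old ITypeConverter signature; the “yyyyMMdd” format is just this example’s guess at the legacy format):

    using System;
    using System.Globalization;
    using AutoMapper;

    public class LegacyDateTimeConverter : ITypeConverter<string, DateTime>
    {
        public DateTime Convert(ResolutionContext context)
        {
            var raw = (string)context.SourceValue;
            return DateTime.ParseExact(raw, "yyyyMMdd", CultureInfo.InvariantCulture);
        }
    }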

Note that you can also write a custom converter to convert between Order and OrderDto. However, that makes you write mapping calls “by hand” in various places, like constructors, and I like having all the conversion in one place, even though it can get quite long after a while. The mapping can also be much more complicated than the above example, as seen here for instance, and AutoMapper helps a lot in these cases.

Use dependency injection to publish the DTOs

In order to help with the next step (creating tests), we will use dependency injection in our controllers.

In your Model project, create an IEntitiesContainer interface with all the methods to fetch the data you need.
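For this running example, something like:

    using System.Collections.Generic;

    public interface IEntitiesContainer
    {
        IEnumerable<OrderDto> GetOrders();
        OrderDto GetOrder(int id);
        OrderDto AddOrder(OrderDto order);
    }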

Then, still in your Model project, implement this interface in a class that will wrap around the actual Entity Framework container.
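A sketch (MyEntities being the generated EF context; note the AsEnumerable() before mapping, since our custom converters can’t be translated to SQL):

    using System.Collections.Generic;
    using System.Linq;
    using AutoMapper;

    public class EntitiesContainer : IEntitiesContainer
    {
        public IEnumerable<OrderDto> GetOrders()
        {
            using (var db = new MyEntities())
            {
                return db.T_ORDER
                         .AsEnumerable()
                         .Select(o => Mapper.Map<OrderDto>(o))
                         .ToList();
            }
        }

        public OrderDto GetOrder(int id)
        {
            using (var db = new MyEntities())
            {
                var key = id.ToString();
                var entity = db.T_ORDER.FirstOrDefault(o => o.ORDER_ID == key);
                return entity == null ? null : Mapper.Map<OrderDto>(entity);
            }
        }

        public OrderDto AddOrder(OrderDto order)
        {
            using (var db = new MyEntities())
            {
                var entity = Mapper.Map<T_ORDER>(order); // assumes a reverse map
                db.T_ORDER.Add(entity);                  // DbContext-style API
                db.SaveChanges();
                return Mapper.Map<OrderDto>(entity);
            }
        }
    }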

Note that tying the controller to the MyEntities() implementation is not the best approach, but solving it is well documented.

Note that we’re not using AutoMapper’s Project().To(), because it doesn’t work great with Entity Framework when using type conversion.

Now use this new layer in your controller:
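(Again a sketch; the default constructor is the “tying” shortcut mentioned above:)

    using System.Net;
    using System.Web.Http;
    using AttributeRouting.Web.Http;

    public class OrdersController : ApiController
    {
        private readonly IEntitiesContainer _entities;

        public OrdersController() : this(new EntitiesContainer()) { }

        public OrdersController(IEntitiesContainer entities)
        {
            _entities = entities;
        }

        [GET("api/v1/orders/{id}")]
        public OrderDto Get(int id)
        {
            var order = _entities.GetOrder(id);
            if (order == null)
                throw new HttpResponseException(HttpStatusCode.NotFound);
            return order;
        }
    }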

Now we can use the controller with any implementation of IEntitiesContainer in our tests.

Test the controller

Now that we have a foundation for the project, we can finally do some yummy tests. I will use xUnit (because it’s a great tool) and FluentAssertions (because it makes tests easier to read and write).

I will not go over testing the model, since it’s pretty well documented. However, testing the WebApi controller requires a bit more work.

In order not to hit the actual DB, we’ll use the dependency injection we’ve set up. Create an implementation of IEntitiesContainer in your test project. I suggest using an awesome method that I found on a blog: create a mostly-empty implementation, for which you provide the actual behavior on a case-by-case basis.
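In this spirit (a sketch):

    using System;
    using System.Collections.Generic;

    public class FakeEntitiesContainer : IEntitiesContainer
    {
        // Each test assigns only the delegates it needs.
        public Func<IEnumerable<OrderDto>> GetOrdersFunc { get; set; }
        public Func<int, OrderDto> GetOrderFunc { get; set; }
        public Func<OrderDto, OrderDto> AddOrderFunc { get; set; }

        public IEnumerable<OrderDto> GetOrders() { return GetOrdersFunc(); }
        public OrderDto GetOrder(int id) { return GetOrderFunc(id); }
        public OrderDto AddOrder(OrderDto order) { return AddOrderFunc(order); }
    }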

Then your controller test can use this fake like this:
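(A sketch; TestsBoostrappers.SetupControllerForTests() is the helper described just below:)

    using FluentAssertions;
    using Xunit;

    public class OrdersControllerTests
    {
        [Fact]
        public void Get_returns_the_order_with_the_requested_id()
        {
            var fake = new FakeEntitiesContainer
            {
                GetOrderFunc = id => new OrderDto { Id = id, IsPaid = true }
            };
            var controller = new OrdersController(fake);
            TestsBoostrappers.SetupControllerForTests(controller);

            var result = controller.Get(42);

            result.Id.Should().Be(42);
            result.IsPaid.Should().BeTrue();
        }
    }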

The TestsBoostrappers.SetupControllerForTests() method configures the routes of the controller. If we don’t do that, the controller won’t know what to do.

One last, but very important, point about your tests. In order to follow REST principles, your API should return specific data after certain actions. For instance, after an item creation, it should return the URL of the new item. Your POST method may look like this:
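(A WebApi 1-flavored sketch, inside the OrdersController from earlier; “DefaultApi” is the usual default route name:)

    public HttpResponseMessage Post(OrderDto order)
    {
        var created = _entities.AddOrder(order);

        // 201 Created, plus the URL of the new resource in the Location header.
        var response = Request.CreateResponse(HttpStatusCode.Created, created);
        response.Headers.Location =
            new Uri(Url.Link("DefaultApi", new { id = created.Id }));
        return response;
    }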

This simple this.Url call creates a URL based on your routes and such, but if you’re using WebApi 1 like me (because you’re stuck on .Net 4.0 in Visual Studio 2010), none of the documented solutions will work, and they will make you want to flip your desk. In order to solve that, I simply used another dependency injection to wrap the UrlHelper class, and provide my own implementations in my tests.
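The wrapper itself is nothing fancy; my own workaround looked something like this (not a framework API):

    public interface IUrlHelperWrapper
    {
        string Link(string routeName, object routeValues);
    }

    public class UrlHelperWrapper : IUrlHelperWrapper
    {
        private readonly System.Web.Http.Routing.UrlHelper _url;

        public UrlHelperWrapper(System.Web.Http.Routing.UrlHelper url)
        {
            _url = url;
        }

        public string Link(string routeName, object routeValues)
        {
            return _url.Link(routeName, routeValues);
        }
    }

    // In tests, swap in a stub that returns a canned URL.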

Document your API

Documenting your API is pretty easy using Swagger. A Nuget package wrapping the implementation for .Net is available: Swashbuckle. Add it to your API project, and you can then access the documentation through /swagger/.

Switching a program from .Net 4.0 to 4.5

If, like me, you switched your .Net program from 4.0 to 4.5, you might have encountered a bunch of problems.

What worked for me is the following.

First, modify your projects to use .Net 4.5. Make sure everything builds (even if you get a bunch of warnings). Save and commit.

Next, run the following command in the VS package manager console:
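It was most likely this (the -IgnoreDependencies switch is what leaves the dependencies alone):

    Update-Package -Reinstall -IgnoreDependencies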

This should reinstall your Nuget packages without touching their dependencies.

If you still get MSB3277 errors (“Found conflicts between different versions of the same dependent assembly that could not be resolved”), follow these steps:

  • In Visual Studio, open Tools > Options > Projects and Solutions > Build and Run, and select “Detailed” for the two MSBuild output options.
  • Build your project.
  • In the “Output” window, search for MSB3277. Go up a little and look for “A conflict exists between…”. It will tell you which assembly is in conflict.

For me, it was the System assembly… weird.

The solution that is both the simplest and the most annoying: remove all references and Nuget packages, then add them back.

Using NUnit for a .Net 4 project in a Jenkins installation

I have migrated my MBUnit tests to NUnit, considering MBUnit has been “on hiatus” for about a year now. NUnit is not much more agile, but at least there is a semblance of life on their Github account.

The switch has been surprisingly easy: change the namespace, replace RowTest with Test, and replace Row with TestCase. Easy!
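For instance, a converted parameterized test ends up looking like this (illustrative):

    using NUnit.Framework;

    [TestFixture]
    public class MathTests
    {
        [TestCase(1, 2, 3)]
        [TestCase(2, 3, 5)]
        public void Add_returns_the_sum(int a, int b, int expected)
        {
            Assert.AreEqual(expected, a + b);
        }
    }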

Run unit tests in your continuous integration platform

In order to integrate the tests in our continuous integration platform while minimizing build time, I have split the setup-package build from the unit-test run. So I have one job, run at each commit, that builds the project and generates the setup packages; and another job, run at midnight, that only builds the project and runs the unit tests.

Apparently, using the automatic package restore from Visual Studio is now obsolete with the latest Nuget versions. So I followed the instructions and migrated to Nuget package restore. Remember to download nuget.exe from nuget.org, add it to the CI server’s PATH, and restart Jenkins.

Add the NUnit Nuget packages to the test projects, and remember to add the NUnit Runner Nuget package to the solution! It will be much easier to run the unit tests this way (no need to install NUnit on the CI server).

I’m using PSAke for my build process (awesome tool!), so now my script looks something like this:
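(A rough sketch of the shape, not my exact script; paths, names and the NUnit version are illustrative:)

    properties {
        $solution     = "MySolution.sln"
        $nunitConsole = ".\packages\NUnit.Runners.2.6.4\tools\nunit-console-x86.exe"
    }

    task Build {
        exec { msbuild $solution /p:Configuration=Release /verbosity:minimal }
    }

    task BuildTests {
        # The dedicated 32-bit "NUnit" solution configuration described below.
        exec { msbuild $solution /p:Configuration=NUnit /p:Platform=x86 }
    }

    task Test -depends BuildTests {
        exec { & $nunitConsole .\build\tests\MyTests.dll /framework:net-4.0 }
    }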

And now my Jenkins jobs consist of a simple batch line:
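(Illustrative as well; the real job obviously points at the real script:)

    powershell -NoProfile -ExecutionPolicy Bypass -Command "Import-Module .\tools\psake\psake.psm1; Invoke-psake .\default.ps1 Test"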

I have stumbled upon a System.BadImageFormatException while running tests in Jenkins using nunit-console, or trying to add the assemblies to the NUnit GUI runner. Here are a few things to remember:

Do not use AnyCPU

Your test assemblies, the assemblies linked by your tests, and really your whole project, should be built in 32-bit mode, so that they can be run by the 32-bit test runner. Personally, I created a “NUnit” solution configuration just for this. It allows me to select which assemblies are built in which case: no need to build tests in the general build, and no need to build stuff like bootstrapper exes in the test build.

Check your build paths

If you create a new configuration like I did, remember to change the build path! My main project builds in the \build\app folder, and my tests are located in \build\tests. I spent a few minutes wondering why the platform change did nothing, before finding out that my x86 test assemblies were in bin\x86\NUnit (the default value).

Check the framework version

Select the proper .Net framework with the /framework:net-4.0 switch. NUnit should be able to find out automagically, but sometimes it doesn’t.

WCF/SOAP configuration for a third-party webservice with empty security headers

I am writing a program that accesses a SOAP webservice written by a third-party provider.

I need to provide a username and password in the SOAP headers, which is kind of easy, in theory:

  • add the webservice reference to Visual Studio
  • instantiate a new “XXXXClient” generated class
  • set xxx.ClientCredentials.UserName.UserName and xxx.ClientCredentials.UserName.Password
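Something like this (“MyServiceClient” standing in for the generated “XXXXClient” class):

    var client = new MyServiceClient("MyServiceEndpoint");
    client.ClientCredentials.UserName.UserName = "user";
    client.ClientCredentials.UserName.Password = "secret";
    var result = client.DoSomething();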

Easy as pie. Except when it’s not. You see, the provider’s webservice is written in PHP, and it does not behave nicely with .Net. When you send the request, you can see in Fiddler that the query is sent and gets a response, but .Net does not like this response, and throws exceptions around.

First problem: it throws a MessageSecurityException because of empty security headers in the response (see here). I first thought that I had the same problem, so I spent the day following this blog post, but it turns out it’s much simpler. To fix this, you must set enableUnsecuredResponse to true in the binding’s security node. And in order to do that, your binding must be a customBinding. If you have a complicated binding, good luck switching.

Second problem: the mime types for the request and the response are different (“text/xml” vs “application/soap+xml”). To fix this one, you have to write your own custom text encoder (download the WCF examples at the top), and to do that… you have to create something like a thousand classes with a hundred lines each. Just for a simple mime type mismatch! Do not forget to also copy, from the downloadable examples, the BindingElement, BindingSection, ConfigurationStrings, and MessageVersionConverter!

So, for such a simple task, as is too often the case with Microsoft, I need to rewrite (or copy, here) a whole bunch of stuff, just to modify some pretty mundane and common behavior, which should be configurable in the first place. It shows how extensible WCF is, and how bloated the extending is.

Anyway, here is the shape of a working app.config. For the code of the custom text message encoding class, as I said, just copy the MS examples (and don’t forget to rename the types in the bindingElementExtensions).
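(An approximate reconstruction from the pieces described above; the extension type, contract and address are illustrative:)

    <system.serviceModel>
      <extensions>
        <bindingElementExtensions>
          <!-- The renamed element type copied from the MS sample. -->
          <add name="customTextMessageEncoding"
               type="MyApp.Soap.CustomTextMessageEncodingElement, MyApp" />
        </bindingElementExtensions>
      </extensions>
      <bindings>
        <customBinding>
          <binding name="ProviderBinding">
            <security authenticationMode="UserNameOverTransport"
                      enableUnsecuredResponse="true" />
            <customTextMessageEncoding messageVersion="Soap11"
                                       mediaType="application/soap+xml" />
            <httpsTransport />
          </binding>
        </customBinding>
      </bindings>
      <client>
        <endpoint address="https://provider.example.com/service.php"
                  binding="customBinding" bindingConfiguration="ProviderBinding"
                  contract="ServiceReference.ProviderPortType"
                  name="MyServiceEndpoint" />
      </client>
    </system.serviceModel>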

 

Debug integrated WPF WebBrowser scripts using Visual Studio

Copied and modified from another blog for future reference.

Note that in order to debug the Javascript inside a WebBrowser control, you don’t need the source code of the application (neat!).

  1. Open Internet Explorer, go to Settings, Advanced tab, and, in the “Browsing” section:
    • Uncheck “disable script debugging” (both IE and Other)
    • Uncheck “Display friendly HTTP messages”
    • Check “Display a notification about every script error”
  2. Open an instance of Visual Studio. Open the Tools menu, Attach to Process, and on the “Attach to” line, click “Select…”, and only check “Script”.
  3. Run the program including the WPF BrowserControl
  4. Attach the Visual Studio instance to its process.

Now, open the “Solution explorer” tab/toolbox. You will see every Javascript file loaded by the WPF WebBrowser control, and you can add breakpoints where you want.

Tested with IE 11 and Visual Studio 2010.

Creating a C++/CLI wrapper around a C library

So, following my previous post, I have decided that I might try C++/CLI after all, since the SWIG way goes nowhere except “down in flames”, and I really need to implement it.

Turns out that creating a C++/CLI wrapper around a fairly well-made C library is not that hard.

Considering the following .h file from the C library (these are the methods and structures to interface):
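(The real header isn’t reproduced here; a hypothetical shape, also reused for the SWIG post below:)

    /* packlib.h -- hypothetical */
    typedef struct {
        int    width;
        int    height;
        double x;    /* filled in by pack() */
        double y;
    } image_t;

    /* Packs count images onto a space; updates x/y in place, returns 0 on success. */
    int pack(image_t *images, int count, int space_width, int space_height);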

Add the .h and .c files to a new C++ CLR DLL project, then add a new cpp file, in which we will replicate all the C structures as classes.
We have to name our classes differently than the C structures, otherwise there will be conflicts, both with the compiler, and in our heads.
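A condensed sketch, with invented names matching the hypothetical header above:

    // Wrapper.cpp -- compiled with /clr
    #include <vector>
    #include "packlib.h"

    using namespace System;
    using namespace System::Collections::Generic;

    public ref class Image
    {
    public:
        int Width;
        int Height;
        double X;
        double Y;

        Image() {}

    internal:
        // Marshal native -> managed.
        Image(const image_t& native)
        {
            Width = native.width;
            Height = native.height;
            X = native.x;
            Y = native.y;
        }

        // Marshal managed -> native.
        image_t ToNative()
        {
            image_t native;
            native.width = Width;
            native.height = Height;
            native.x = X;
            native.y = Y;
            return native;
        }
    };

    public ref class Packer
    {
    public:
        // The "^%" compiles to "ref" in the C# signature, so callers get the
        // updated positions back. (Assumes a non-empty list.)
        static int Pack(List<Image^>^% images, int spaceWidth, int spaceHeight)
        {
            std::vector<image_t> natives;
            for each (Image^ image in images)
                natives.push_back(image->ToNative());

            int result = pack(&natives[0], (int)natives.size(),
                              spaceWidth, spaceHeight);

            // Copy the packed positions back into the managed objects.
            for (int i = 0; i < images->Count; i++)
                images[i] = gcnew Image(natives[i]);

            return result;
        }
    };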

The annoying thing when, like me, you know pretty much nothing about C/C++, is that you never know where to put pointers and stuff, and it’s mostly trial and error (it says it can’t build, let’s add a *… nope, a &? Yes, that seems to work).

A few things to note:

  • Provide internal ToNative() methods on the structure wrappers to ease marshalling from managed objects to native structures.
  • Also provide internal constructors taking a native structure as a parameter, to ease marshalling from native to managed.
  • The “^%” syntax (for the “images” parameter of the “Pack” method) builds to a “ref” in the .Net method signature. Otherwise the calling C# can’t get the values back.

As you can see, it’s not that difficult after all. I guess for an experienced C/C++ developer it would be even simpler.

Using SWIG to create a .Net wrapper around a C library

A consultant has written a C library to neatly pack images on a space. They have integrated it in their Ruby On Rails / Node.js online app through the power of extending Ruby. I’m quite jealous, because it’s not quite as easy to do in .Net. Especially when, like me, you know absolutely nothing about C/C++.

Two options are available to me:

  • Create a C++/CLI project, that will generate a .Net DLL
  • Use SWIG to generate a .Net wrapper around the C code.

Now, since I can’t change the C code much (it must be built on a Mac through Ruby), I thought that I couldn’t easily switch it to CLI (I was wrong; see the next post).

So I tried using SWIG. And it’s a pain to use. Let me walk you through creating a wrapper around a non-trivial C library.

Generating basic wrapper methods is pretty easy: it’s a matter of creating a .i file with a few lines of code:
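(A sketch, reusing the hypothetical packlib.h from the previous post:)

    %module packlib

    %{
    #include "packlib.h"
    %}

    %include "packlib.h"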

Run SWIG on this file to generate a few C# files and a C wrapper file:
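(Something like:)

    swig -csharp -namespace PackLib -outdir Generated packlib.i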

Then, you create an empty C/C++ library project with Visual Studio, include all the necessary files, and build it. You get a DLL, and you can include both the C# files and the DLL in your .Net project. Very nice, very simple.

A problem arises when you have a C method taking an array as argument: SWIG generates a C# signature taking a single instance as argument:
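(Reconstructed with the hypothetical header; the image_t pointer collapses to a lone instance:)

    // Generated C# (abridged):
    public static int pack(image_t images, int count, int space_width, int space_height) { /* … */ }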

You can tell SWIG to create an “intermediate” array type by adding this to your .i file:
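(If I remember correctly, this is the standard carrays.i helper, which matches the behavior described below:)

    %include "carrays.i"
    %array_class(image_t, image_t_array);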

This creates an “image_t_array” class, with methods for setting and getting items, old-school-style. There is also a method to cast it to an “image_t”, with pointer handling to go back and forth in the array.

Now, creating an array class and casting it to an instance to pass it as a parameter is not very intuitive, and I’m trying to create a usable wrapper. I want to pass an actual, managed array to my method. Is that so hard?

The SWIG documentation has a whole part about passing managed arrays to non-managed code. The problem is that (once again) the documentation is incomplete. It shows an example for a managed array of ints… but not for an array of structs. Apparently it doesn’t work the same way, because SWIG complains about a missing typemap when using this:
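(Transposing the documented int example to our struct, which is what triggers the warning:)

    %include "arrays_csharp.i"
    %apply image_t INPUT[] { image_t *images }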

Actually, a totally undocumented feature needs to be called just above this line:
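(With the names from this example:)

    CSHARP_ARRAYS(image_t, image_t)
    %apply image_t INPUT[] { image_t *images }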

Thanks to a nice fellow on StackOverflow for reading the SWIG sources… Googling for “CSHARP_ARRAYS” yields exactly zero results in the documentation, and only a few relevant results on forums and mailing lists. Why is it not documented, if it’s obviously necessary? If it’s not the right way, which is? Why not document the warning message that appears when you don’t use it?

So anyway, now my C# method takes a managed array as argument, which makes this library so much easier to use. It throws a memory access exception when running the “pack” method, though (System.AccessViolationException).

Thankfully, I have both the source files and the pdb file for the unmanaged dll. I just have to copy the pdb to the target directory (using my test program’s post-build events), and once I activate unmanaged code debugging (managed project properties > Debug > enable unmanaged code debugging), I can step through the library using the actual C code! Awesome.

At the point where the error actually occurs, I can see that the library tries to access an image in the array that is not defined. Weird. By adding a few breakpoints in the C library (I can’t imagine how I would have debugged that without the source code…), I quickly notice that my “*images” (in the pack) have very weird values, and that they do not correspond to the sent values at all.

Since passing values as arrays might do weird things with pointers, I decide to switch back to using the “image_t_array” class… and sure enough, now the values are correctly passed. I still have memory access problems, though; they seem to be on the C side… but then, it runs nicely when a C++ program calls it, so I still have some digging to do.

At this point, debugging the wrapper becomes very complicated with all the layers involved, so I decided to try the C++/CLI way which, as you may see in my next post, is not that hard after all.

Use git with your SVN repository

You are just a few simple commands away from using all the local power of git (local commits and rewinds, stash…) over an existing SVN repository.

I say local power because, obviously, SVN not being as powerful as git, you’re still limited in what you can do without breaking everything when you try to commit.

A simple and effective guide can be found here: http://svnrating.com/svn-to-git-migration/
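The gist of it (URL and repository name invented):

    git svn clone -s https://svn.example.com/myrepo
    cd myrepo
    # …enjoy full local git powers, then:
    git svn rebase     # fetch the new SVN revisions
    git svn dcommit    # push your local commits back to SVN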

Note that on a large SVN repository, the first checkout/clone can take hours, or even days. On our 2400-commit, several-GB repository, the first clone took a few days, and it made my (dual-Xeon + 16GB RAM…) computer hang for a few seconds or minutes every 15 minutes or so. Maybe you’ll have better luck.

But once it’s done… it’s really awesome.

Linq2SQL – Row Not Found Or Changed

Working on a .Net 3.5 client app, I am stuck using the lesser evil that is Linq2SQL (compared to the awfulness that is EntityFramework 1).

Updating data in a column results in a ChangeConflictException: “Row not found or changed”. The usual culprit, the DBML differing even slightly from the actual DB model, was not to blame, and concurrency was not either, since the processing took a few milliseconds at worst, on a local DB.

It actually was a trigger updating the “updated_at” column. In the DBML editor, setting the column to “Update Check: Never” and “Auto-Sync: Always” solved the problem.

Listen to your users

Why would I want to?

Why are we writing software? Because we need, or we like, to solve problems.

If you write software for people to use (as opposed to embedded software, for instance), these problems will only be solved if these people actually succeed in using the software. This is, both surprisingly and annoyingly, incredibly difficult.

Your users are not dumb. They are not as well-trained as you are in using computers. They often will be far superior to you in their field of choice. Don’t underestimate them.

And even if you do underestimate them, or even if your users were actually not-that-smart, so what? Aren’t you writing software for them anyway? You are not your user.

How do I do that?

I urge you to do user testing. It will blow your mind, and might make you rethink your whole software. It doesn’t have to be huge: you just need to put people in front of your software, and watch them use it.

You can begin by asking co-workers that are not as familiar with the product as you are: a secretary, an accountant, a salesperson… If you’re in a building with other companies, you might ask people there. Better yet, if you can, ask real users (or prospective users) to come to your office.

Get in a room with them, get them on a laptop with a mic (most have one, these days), install a screen-capture tool like BB Flashback Express, run it, then run your software or open your website.

Then, just ask the users to do common tasks. Insert this, edit that. Don’t help them. You might find plenty of help everywhere on how to conduct user testing.

How will it help me?

Listen to their remarks. Probe them for more information. Don’t necessarily do what they suggest, but think about what problems they encounter, what problems they want to solve with their suggestions, and how you would solve them optimally.

Some examples of recent user comments that have had a tremendous impact on our recent designs:

I think I understand how to resize this block, but why does it display a “hand” cursor instead of an arrow?

This allowed us to realize that our block resizing was even more awful than we thought, and we redesigned it completely. The blocks had arrows on the corners that allowed the users to resize them, and on mouse-over the cursor changed to the “link click” cursor. The common pattern is that the cursor must change to a “double arrow” for resizing. We were so used to this anti-pattern that we didn’t even notice it anymore. Now users finally understand how to move and resize blocks!

I think the previous version was simpler to use. I liked the toolbar.

This comment made us realize that our “clever” new design was in fact very confusing. The new design uses contextual panels and buttons in various places to do various actions; it’s kind of pretty, and has a lot of attention to detail. The old one used a single toolbar for all actions, and was awfully ugly, with widely inconsistent icons that changed places depending on what you clicked. But users always knew that the actions were there. Now they’re lost, because the action to do X is on the left, Y is at the bottom, and Z is displayed in a contextual panel. It’s only this comment that made me realize why I was uncomfortable watching users search for some actions: they are actually lost. We might have to rethink our whole design and switch back to a toolbar-based one.

That’s it?

Of course not. It’s very easy to make enhancements with just that, but it won’t be enough to build the best software out there. There is some help available everywhere, though.

I personally started with Steve Krug’s “Don’t Make Me Think” and “Rocket Surgery Made Easy”: they are very concise and to the point. I’d consider them mandatory reading.

There are many great usability blogs out there: try out (in no particular order) Smashing Magazine, UX Magazine, UXMatters, Boxes And Arrows, The UX Booth, UX Movement, Usability Counts, Wireframes Magazine… the list goes on.

I am sure this small start will help you a great deal.