MongoMapper callbacks/hooks

It took me a while to find these (in the end I had to search the source code), so I thought I would post them here.

:before_save,                        :after_save,
:before_create,                      :after_create,
:before_update,                      :after_update,
:before_validation,                  :after_validation,
:before_validation_on_create,        :after_validation_on_create,
:before_validation_on_update,        :after_validation_on_update,
:before_destroy,                     :after_destroy,
:validate_on_create,                 :validate_on_update,
:validate

I was searching for these amongst MongoMapper documentation and blog posts, when really they are an ActiveSupport concept, as the following comment suggests:

Almost all of this callback stuff is pulled directly from ActiveSupport . . .

Being new to Rails, I wasn’t very familiar with ActiveSupport and so I didn’t know to look there. Anyway, this is how I use them in my Post model:

class Post
  include MongoMapper::Document

  many :comments

  key :title, String, :required => true
  key :slug, String
  key :body, String, :required => true
  key :published, Boolean
  key :published_on, Date, :default => Date.today

  after_save :update_comment_titles

  private

    def update_comment_titles
      comments.each do |comment|
        comment.post_title = self.title
        comment.save
      end
    end

end

My comments live in a separate document collection and each Comment document contains the title of the Post that it belongs to. This allows me to list all comments (on the admin page for example) without having to query any of the Post documents.

The above after_save callback ensures that the post_title on each Comment is updated if I ever update the title of the post.
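For context, the Comment model includes MongoMapper::Document in the same way and carries its own post_title key. A minimal sketch of what it might look like (the belongs_to and body key are assumptions, not the exact code):

class Comment
  include MongoMapper::Document

  # the post this comment belongs to (MongoMapper adds a post_id key)
  belongs_to :post

  # denormalised copy of the parent post's title
  key :post_title, String
  key :body, String
end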


Inversion of Control, Two Ways

One of the reasons I like to play with other languages from time to time is to find new ways of solving problems. This is why, while learning Ruby on Rails, I am making a conscious effort to do things the Ruby way, as opposed to writing C# style code in Ruby.

While working on my new blog page, I encountered a problem identical to one I had on a previous project built with ASP.Net MVC. In both projects (but for different reasons) I made a design decision to use Amazon’s S3 web service for storing images. However, I wanted the option of switching to an alternative local (file system based) implementation at configuration/deploy time.

The ASP.Net MVC way

The C# solution to this problem was obvious to me at the time, because I was already using an Inversion of Control container. I created two implementations of my image repository and set up the IoC container to use whichever implementation I wanted, based on a configuration setting (note: in this extract I am also setting up local/remote message queue implementations):

if (bool.Parse(ConfigurationManager.AppSettings["RunLocal"]))
{
   _container.RegisterType<IBinaryRepository, Local.BinaryRepository>(
       new InjectionConstructor("F:/TadmapLocalData/LocalBinaryFolder")
   );

   _container.RegisterType<IMessageQueue, Local.MessageQueue>(
      "Image",
      new InjectionConstructor("F:/TadmapLocalData/LocalImageMessageFolder")
   );
   _container.RegisterType<IMessageQueue, Local.MessageQueue>(
      "Complete",
      new InjectionConstructor("F:/TadmapLocalData/LocalCompleteMessageFolder")
   );

   _container.RegisterType<FileUploaderAdapter, LocalUploadAdapter>();
}
else
{
   _container.RegisterType<IBinaryRepository, Amazon.BinaryRepository>(
      new InjectionConstructor(
         ConfigurationManager.AppSettings["S3AccessKey"],
         ConfigurationManager.AppSettings["S3SecretAccessKey"],
         ConfigurationManager.AppSettings["S3BucketName"]
      )
   );

   _container.RegisterType<IMessageQueue, Amazon.MessageQueue>(
      "Complete",
      new InjectionConstructor(ConfigurationManager.AppSettings["CompleteMessageQueue"])
   );
   _container.RegisterType<IMessageQueue, Amazon.MessageQueue>(
      "Image",
      new InjectionConstructor(ConfigurationManager.AppSettings["ImageMessageQueue"])
   );

   _container.RegisterType<FileUploaderAdapter, DirectAmazonUploader>(
      new InjectionProperty("BucketName", ConfigurationManager.AppSettings["S3BucketName"]),
      new InjectionProperty("AccessKey", ConfigurationManager.AppSettings["S3AccessKey"]),
      new InjectionProperty("SecretKey", ConfigurationManager.AppSettings["S3SecretAccessKey"]),
      new InjectionProperty("FileAccess", com.flajaxian.FileAccess.Private)
   );
}

The Ruby on Rails way?

While looking for an IoC container or equivalent Dependency Injection framework, I started to realise that although DI frameworks exist for Ruby, they are normally not used. One of the articles I read suggested reopening the class and mixing in a module, so I gave this a try.

Most of my models include a module to define the persistence implementation; for example, my Post class is stored as a MongoDB document:

class Post
  include MongoMapper::Document
  # ...
end

So my first step was to move all image persistence code into two different modules, one for S3 storage and one for local file storage.
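The local module, for example, looks something like this (a simplified sketch: the method names and the 'path' setting are illustrative, and S3ImageStore exposes the same interface):

module FileImageStore
  # write the image data to the local file system
  def save_data(data)
    File.open(file_path, 'wb') { |f| f.write(data) }
  end

  # read the image data back from the local file system
  def read_data
    File.read(file_path)
  end

  private

    def file_path
      # 'path' is a hypothetical key from config/image_store.yml
      File.join(IMAGE_STORE_CONFIG['path'], name)
    end
end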

Then, inside an initializer file, I check my configuration and, based on this, reopen the class and include the appropriate persistence module:

require 'image'

IMAGE_STORE_CONFIG = YAML.load_file("#{RAILS_ROOT}/config/image_store.yml")[RAILS_ENV]

if IMAGE_STORE_CONFIG['use_s3']

  class Image
    include S3ImageStore
  end

else

  class Image
    include FileImageStore
  end

end

This solution works for now, although I am not entirely happy with it. My problem is that my Image model now looks like the following:

class Image

   attr :name

end

This is very simple (which is good) and there are no details about its persistence (which is also good), but it doesn’t give the reader any clue that it can persist itself. In C#, for example, I would have a placeholder (an interface) for my implementation that would indicate to the reader that some stuff is missing, and what that missing stuff would look like if you want to go looking for it.

I am happy now that I can easily switch between cloud services and local implementations, but I can’t help thinking there is a better way of achieving this…


A new blog and learning Ruby on Rails

My blog has been a WordPress application for a while now, but two things have been bothering me.

  1. I spent ages playing with the application settings and server settings to try and get the URLs the way I wanted, and they are still not what I want. Nothing complex, I just wanted a little more control.
  2. There are thousands of plugins that do almost what I want, but none that do exactly what I want. Most are customizable, but sometimes you end up having to hack some PHP.

As a web developer, I am used to having full control over my URLs from within the code and would like to tweak the behaviors using the tools and languages I am familiar with.

My original idea was to write a new blog application using ASP.Net MVC, but I have since decided to use this as an opportunity to learn something new.

I’ve been spending a lot of time recently learning a new language and framework, namely Ruby on Rails. At first it was just to see what all the hype was about, but I eventually got hooked. Right now I am in the middle of developing my new blog/website using this fantastic framework.

I’ve already blogged about some of the new stuff I’m learning (Checking page titles with Cucumber) and I will probably have some more posts about it before it goes live.


Checking page titles with Cucumber

As a way of learning a new framework I started to write my own blog application using Ruby on Rails. I’ve also taken the opportunity to try a bit of BDD using Cucumber, which I’m finding is a lot of fun. One feature that I have written recently and particularly like is:

Feature: Main Title
  In order to improve SEO and help readers know what they are looking at
  The reader or search engine spider will need the name of the page
  and the author's name in the title of all pages

  Background:
    Given I have a post with title "Why I really like Ruby"

  Scenario Outline: Pages contain name and author in the title
    Given I am on the <Page>
    Then I should see "<Full Title>" within "title"
    Examples:
    | Page                          | Full Title                            |
    | home page                     | Home - Trevor Power                   |
    | about page                    | About - Trevor Power                  |
    | contact page                  | Contact - Trevor Power                |
    | blog page                     | Blog - Trevor Power                   |
    | "Why I really like Ruby" post | Why I really like Ruby - Trevor Power |

This test is very simple: it visits each page in the listed examples and checks that the title is as expected. For me it is important that this is tested automatically because, as the developer/tester of the site, I never really pay attention to the text that appears in the window title. But this text is important for search engines, as well as appearing in browser history and bookmarks.
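Behind the scenes, the Background step is backed by a step definition that does little more than create a post. A sketch of it (the file name and the body text are just placeholders):

# features/step_definitions/post_steps.rb (hypothetical file name)
Given /^I have a post with title "([^"]*)"$/ do |title|
  # :body is also required by the Post model, so give it some placeholder text
  Post.create!(:title => title, :body => "A post used by the title feature")
end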

Another thing I like about this test is the cross cutting nature of it. The titles of all pages from different parts of the website are listed together and any inconsistencies are made obvious.

I don’t have syntax highlighting on this blog for Gherkin (the language used in the above spec), but here is a screenshot of the results which might be more readable:

Screenshot of the Cucumber results for the above feature


Podcast List

I listen mostly to programming podcasts, namely:

Non-programming-related podcasts include:


Displaying the workspace name in Visual Studio 2008

Switching between Visual Studio instances with different workspaces can be very frustrating, as it is hard to tell which workspace's solution is open. I was planning on writing a plug-in to display the workspace name in the title bar when I found this question on Stack Overflow by someone with a similar problem: Working with different versions/branches of the same Visual Studio 2005 solution.

One of the comments there suggested the use of ‘hard links’ to the solution file. I have given it a go and it has been working quite well for me for the last few days. Here’s how to do it.

A normal Windows shortcut will not suffice; you need to create a symbolic or hard link to the solution file. In Windows 7 you can do this via the command-line tool mklink. If you have a workspace called ‘Merge’ and a solution called ‘MyApp.sln’, the following command will create a shortcut that you can use. (You will need to run this as an administrator.)

C:\MyApp>mklink MyApp.Merge.sln MyApp.sln

Multiple Visual Studio Instances

Now use the new link to open your solution file. Then, when you have multiple VS instances open and you click on the Visual Studio task bar icon, you will see the name of your new link (including the workspace name or branch name). It also appears in the main title bar.

Have you come across any better solutions? Or easier ways to create links?

Update: I wasn’t sure at first what the difference would be between a hard link and a symbolic link, but it turns out that using a hard link (the ‘/H’ switch on the above command) helps when you need to save changes to the solution file. That said, I suggest opening the solution file normally when you need to modify it, as saving the solution from the hard link will insert its name into the file.
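For reference, the hard link version of the earlier command looks like this:

C:\MyApp>mklink /H MyApp.Merge.sln MyApp.sln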


Wickes 1 – 0 Maplin

I was in a Maplin store yesterday returning a set of precision screwdrivers that were of such poor quality that I asked to see the manager to complain. I showed the manager where it said “Quality Tools” on the box, then I showed him how the tiny shafts just rotated in the handles and came off. I showed how the ones that didn’t fall apart weren’t even straight (so crooked that they couldn’t be used).

They were so bad that I was expecting a “wow, I’ll have to take them off the shelves straight away”. But what I got was:

Yeah, they’re pretty bad but we didn’t make them.

I was told that any complaints should be addressed to the manufacturer!

On a positive note, I stopped in Wickes (recommended by a friend) on my way home and picked up a set of precision screwdrivers for around the same price.

They were Wickes own brand so I wasn’t expecting much but the quality was far superior to what I got from Maplin. Even the plastic case they came in was sturdy and practical. So if you’re in the Limerick area and looking for good value tools I can recommend Wickes.

Sharing configuration settings in Visual Studio

Correctly managing your system's settings during both development and deployment can save you a lot of time. Whenever you have to change settings manually there is the opportunity to mess up, so I try to automate any repetitive configuration tasks.

It often happens that application settings or connection strings are needed by more than one component in your system. The solution I use is a variation of this method of sharing app.config files between applications, except that it allows each project to have its own app.config or web.config file which references the common settings.

Creating shared settings

The first step is to make a settings class that acts as a wrapper around the values in the configuration files. This is a very simple class inheriting from ApplicationSettingsBase, and it looks like this:

using System.Configuration;

namespace TP.Example.Configuration
{
    public class Settings : ApplicationSettingsBase
    {
        private static Settings _defaultInstance =
                (Settings)(ApplicationSettingsBase.Synchronized(new Settings()));

        public static Settings Default { get { return _defaultInstance; } }

        [ApplicationScopedSettingAttribute()]
        public string ServiceName
        {
            get { return ((string)(this["ServiceName"])); }
            set { this["ServiceName"] = value; }
        }

        [ApplicationScopedSettingAttribute()]
        public string EventLogName
        {
            get { return ((string)(this["EventLogName"])); }
            set { this["EventLogName"] = value; }
        }

        [ApplicationScopedSettingAttribute()]
        [SpecialSettingAttribute(SpecialSetting.ConnectionString)]
        public string ConnectionString
        {
            get { return ((string)(this["ConnectionString"])); }
            set { this["ConnectionString"] = value; }
        }
    }
}

The next step is to create the files that contain the settings. I usually have two: one for application settings and one for connection strings. The application settings file consists of a single node matching the full name of my settings class:

<TP.Example.Configuration.Settings>
  <setting name="ServiceName" serializeAs="String">
    <value>My Example Service</value>
  </setting>
  <setting name="EventLogName" serializeAs="String">
    <value>ExampleLogName</value>
  </setting>
</TP.Example.Configuration.Settings>

The connection strings file consists of one ‘connectionStrings’ node. Note the name of the connection string is qualified with the full name of my settings class.

<connectionStrings>
  <add name="TP.Example.Configuration.Settings.ConnectionString" connectionString="[...]" />
</connectionStrings>

These files should exist outside any of your projects but be under source control. I usually put them in a solution folder ‘Configuration’ that contains a simple project with the above settings class:
'Configuration' solution folder

Referencing shared settings

These common settings are now ready to be used by any project in the solution by following these steps:

  1. Add a reference to the settings project.
  2. Add a link to settings files and turn on ‘copy to output directory’
  3. Include settings files in the app.config/web.config

To link to the settings files, use Add -> Existing Item -> Add As Link; they will then appear with a shortcut icon. This means they only point to the original file, so there is no duplication.
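Behind the scenes, adding a file as a link just adds an entry like this to the project file (roughly; the item type, relative path and copy setting depend on how you add the file):

<ItemGroup>
  <!-- the Include path is relative to this project file -->
  <None Include="..\Configuration\TP.Example.Configuration.AppSettings.config">
    <Link>TP.Example.Configuration.AppSettings.config</Link>
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>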

Now that the files are in our project we just need to get our App.config file to include them. A simplified App.config would look like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup" >
      <section name="TP.Example.Configuration.Settings" type="System.Configuration.ClientSettingsSection" />
    </sectionGroup>
  </configSections>
  <connectionStrings configSource="TP.Example.Configuration.ConnectionStrings.config" />
  <applicationSettings>
    <TP.Example.Configuration.Settings configSource="TP.Example.Configuration.AppSettings.config" />
  </applicationSettings>
</configuration>

Now, when you modify one of these files, the change will be picked up by all the other projects in the solution.
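Any project that references the settings project can then read the shared values through the Default instance, something like this (a hypothetical consumer class):

using TP.Example.Configuration;

public class ExampleService
{
    public void Start()
    {
        // both values come from the shared .config files
        string serviceName = Settings.Default.ServiceName;
        string connectionString = Settings.Default.ConnectionString;
        // ... use them to register the service and open connections
    }
}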

Websites and Web.config

Using these settings in a website requires one extra step. As before, the settings files are linked into the project and ‘Copy to Output Directory’ is turned on. The problem is that the web.config file is usually at the root, while the output directory is ‘bin’ or ‘bin/Debug’ depending on your setup. Because the output folder can depend on the build configuration, I have found the best solution is to copy the config files from the target directory to the project directory. This is just one line in the build events:

COPY $(TargetDir)*.config $(ProjectDir)

Writing it out like this sounds like a lot of work, but it is easy to set up and easy to maintain. If you have any questions or have a better solution, please leave a comment.


Enumerations and global constants in SQL Server

Everybody knows that you shouldn’t put lots of business logic into stored procedures, right? Well, maybe not, but whether or not you agree with this statement you’ll always find some cases where it just makes things easier or more efficient to have your business rules close to the data. Often business rules mean dealing with record types, flags and other magic numbers. These numbers normally correspond to enumerations or constants in our favourite high-level language (for me, C#).

So what do we do with all these magic numbers? Of course we, being good programmers, put them into well-named variables that make the code easier to read and the values easier to maintain. But what happens when we have multiple stored procedures using the same values?

I didn’t know the answer to this, so I put a question on Stack Overflow: What are the different ways of handling ‘Enumerations’ in SQL server? I got some good answers, but most did some mapping between VARCHARs and INTs. Hard-coded strings are better than hard-coded numbers because they are easier to read, but they have other problems, such as hard-to-find typos.

One answer that wasn’t given, and that hasn’t been suggested by any other developers I’ve talked to, is the use of scalar functions as constants. I stumbled on this solution today while refactoring some existing functions and ended up with a function that did little more than call another function with a hard-coded int value. You can create a simple function that just returns a number:

CREATE FUNCTION COLOR_RED()
RETURNS INT
AS
BEGIN
	RETURN 2
END

This is quite a lot of code for just one constant but it is available to all stored procedures. Maybe they could be generated automatically?

As for performance, I haven’t been able to write a test that shows any big difference in execution time, but I’m sure there must be some hit. Regardless, it is bound to perform better than most of the other answers I got for the initial question.

One problem I have found is that you cannot pass them directly to other stored procedures; you need to introduce an intermediate variable. A bit annoying, but not a showstopper.
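For example (the table and procedure names here are made up), the function can be used directly in a query, but passing it to a stored procedure needs the intermediate variable:

-- fine: scalar functions can be used directly in queries
SELECT *
FROM dbo.Widgets -- hypothetical table
WHERE ColorId = dbo.COLOR_RED()

-- not allowed: EXEC parameter values must be constants or variables, not function calls
-- EXEC dbo.GetWidgetsByColor @ColorId = dbo.COLOR_RED()

-- so introduce an intermediate variable instead
DECLARE @Red INT
SET @Red = dbo.COLOR_RED()
EXEC dbo.GetWidgetsByColor @ColorId = @Red -- hypothetical procedure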

Maybe you’ve been doing this all the time? If there’s something I’m missing, please leave a comment.


Using the IIS SEO Toolkit

I was at a great talk by Scott Guthrie today in Dublin. One thing he talked about near the end was the new IIS SEO Toolkit, which I was able to try as soon as I came home.

It was very easy to install and can be pointed at any website (not just your own). To install:

  • If you haven’t already, get the Web Platform Installer.
  • Select and install the ‘Search Engine Optimization Toolkit’.

Then, to examine a site:

  • Start ‘Internet Information Services (IIS) Manager’.
  • Go to Search Engine Optimization -> Site Analysis.
  • Point it at any website.

Starting a Site Analysis of my blog
It then gives you a list of rule violations, 405 for my blog. The details of each violation include a description of the rule with a recommended action.
SEO Toolkit results for my website
As you can see, rule violations include broken links, missing description text and non-relevant link text (as in ‘click here’). Most of these rules are obvious, but they are hard to spot without looking at the generated HTML of the page.
