Hello, we are 29 Steps. We build software using Ruby.

In an ongoing project, I am using the excellent Money gem with money-rails integration within a Rails 3.2 application running on top of MongoDB.

If you are familiar with the gem, it stores the price as a Money object within MongoDB, which means typical queries using greater than or less than comparisons will not work as expected.

In order to do a search on price, you would need to create a Money object within the search criteria as follows (assuming my main model is a Product document):

Product.where(:price => Money.new(899, 'EUR')).first

The above example looks for a product priced at 8.99 euros.

I’m still in the process of working out how to search for a given price range using the money-rails gem.

I recently started to integrate AngularJS into a Rails application which has an external API, both to speed up the view rendering process and as an exercise in using another JS framework apart from Backbone.js.

The AngularJS app essentially takes the JSON output from the existing API and renders it in the view. The only issue is that some of the JSON strings are actually raw HTML. If you render these in the view they come out as plain strings rather than actual HTML. Using either 'raw' or '.html_safe' makes no difference, as the rendering is done by the AngularJS framework rather than Rails.

To be able to output dynamic HTML, we need to make use of the ngSanitize module by including it within the application. These are the steps I took to get the HTML output to show:

  1. include angular-sanitize within application.js
    
    //= require angular
    //= require angular-resource
    //= require angular-sanitize
  2. Include ngSanitize into your angular module:
    app = angular.module("MyApp", ["ngResource", "ngSanitize"])
  3. Within the main template use the ng-bind-html directive within the appropriate html tag to display the content:
    <div ng-bind-html='myhtmlstring'></div>

Further information and more examples of this usage can be found on the AngularJS site: http://docs.angularjs.org/api/ngSanitize.$sanitize

In a recent project using RefineryCMS, the page speed analytics kept indicating that the main application.js still contained trailing whitespace after compression, triggering 'JS file requires compression' warnings when tested with Google PageSpeed.

In an attempt to fix this I switched over to using the Closure Compiler during asset compilation. However, after deploying, the entire compiled JavaScript for Refinery failed with the following error in the console:

is_match() is undefined

This issue is mentioned here and has been fixed on the main refinerycms master branch on GitHub.
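For reference, switching the Rails 3.2 asset pipeline over to the Closure Compiler is normally just a gem plus one config setting; a minimal sketch of what I mean (not the exact Refinery setup):

    # Gemfile
    gem 'closure-compiler'

    # config/environments/production.rb
    config.assets.compress = true
    config.assets.js_compressor = :closure   # uses the closure-compiler gem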

In a recent client project, in order to speed up the loading of assets and improve the overall page speed score, I decided to store the assets outside of the app and use the dynamic asset hosts functionality within Rails.

This works really well out of the box apart from one caveat: the app uses Dragonfly to serve dynamic images which get resized based on certain conditions.

I originally used asset_sync to store the precompiled assets on Amazon S3, but of course this fails miserably, as S3 cannot find images with URLs beginning with '/system/' and so on. Those URLs are supposed to be handled by the Dragonfly engine, whereas asset_sync is looking for a folder or directory path inside S3!

Add to that the complexity of adding in a CloudFront CDN.

The solution I came up with in the end was not to use asset_sync but to precompile all the assets before deployment. Then I added a CNAME entry such as 'assets.mysite.com' pointing to the actual domain 'mysite.com', which is what you would do anyway if you are using asset hosts. Make sure you wait and test that the CNAME resolves to 'mysite.com' before proceeding further.

Then, within the CDN, set up an origin for 'assets.mysite.com' and wait for it to finish deploying and turn green. If you start testing before it turns green you will get some weird 503 unavailable errors, or your assets will 404 with errors such as 'Image returned with type of text/html'.

Within config/environments/production.rb add the following line:

config.action_controller.asset_host = "//test.cloudfront.net"

This will make sure that all the asset requests pass through the CDN rather than hitting the app itself. It also solves the problem of using a dynamic image engine such as Dragonfly while keeping your compiled assets intact on Heroku.

Another issue to note is that on the Heroku Cedar stack there is no reverse proxy cache, which means your HTML responses will not be compressed. The following links from Heroku mention this: Link 1 and Link 2. This can be fixed by using the heroku-deflater gem, which will automatically gzip your HTML and asset responses.
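Adding it is normally just a Gemfile entry (putting it in the production group is my own preference):

    # Gemfile
    group :production do
      gem 'heroku-deflater'
    end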

In initial tests, the page speed for various pages dropped from an initial 4-5ms to under 1ms once the CDN had cached all the assets properly.

If you still wish to use asset_sync to serve assets from S3 with Dragonfly, there are two possible options:

  1. Generate a static image for each version required, e.g. a thumbnail version and the actual full-size version.

  2. Specify the host within the Dragonfly initializer (see the sketch below). In my case Dragonfly is loaded by a gem which preconfigures it, so I did not even attempt to spend hours overriding the initializer.
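For the second option, the idea would be to point Dragonfly's generated URLs at the asset host; a minimal sketch using the Dragonfly 0.9-era url_host option (the host value is illustrative, and the option name may differ in other versions):

    # config/initializers/dragonfly.rb
    app = Dragonfly[:images]
    app.configure do |c|
      # serve generated image urls from the asset/CDN host instead of the app
      c.url_host = 'http://assets.mysite.com'
    end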

I caught the MongoDB bug recently as part of my own exercise to learn more than one database system in my development life, somewhat similar to the '7 Databases in 7 Weeks' regime but less severe.

I realised that it is possible to store file uploads within a MongoDB instance using its GridFS file system, so I decided to have a go. I first set up carrierwave and carrierwave-mongoid, but it seems that the latest versions of both gems do not play well with Mongoid 3+ due to API changes. After several attempts and Google searches I gave up and was in the process of rolling Mongoid back to 2.x when I suddenly thought of Dragonfly, the fantastic file uploader by Mark Evans of New Bamboo.

One of the coolest features of Dragonfly is the ability to take a single upload, such as an image, and chain various processing options onto it or scale it on the fly. This beats the carrierwave approach of declaring versions, in my opinion.

To get this to work I had to do the following:

  1. Declare dragonfly as a gem dependency in your Gemfile. Put rack-cache above it if you wish to cache the assets.
  2. Create a dragonfly initialiser file as documented here in the docs. Dragonfly::DataStorage::MongoDataStore requires the mongo gem, so you need to update your Gemfile to also include the mongo, bson and bson_ext gems; I put these before dragonfly and rack-cache. To configure the datastore I decided to use the more verbose approach and declare everything manually as documented here (a rough sketch follows this list).
  3. Within your model declare the relevant accessors. In my case I have a User model which needs its own avatar, so I added the following:
    field :avatar_image_uid
    image_accessor :avatar_image
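For reference, the verbose initializer ends up looking roughly like the sketch below. The connection details and app name are illustrative, and the method names are the Dragonfly 0.9-era ones, so check the docs for your version:

    # config/initializers/dragonfly.rb
    require 'dragonfly'

    app = Dragonfly[:images]
    app.configure_with(:imagemagick)
    app.configure_with(:rails)

    # store uploads in GridFS rather than on the local filesystem
    app.datastore = Dragonfly::DataStorage::MongoDataStore.new(
      :host     => 'localhost',
      :port     => 27017,
      :database => 'myapp_development'
    )

    # make image_accessor available to Mongoid documents
    app.define_macro_on_include(Mongoid::Document, :image_accessor)

    # serve the images through the Dragonfly middleware
    Rails.application.middleware.insert_after 'Rack::Cache', 'Dragonfly::Middleware', :images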

Restart your application server and you should have a working version of file uploads using GridFS with MongoDB. Dragonfly also seems to handle streaming the uploads back to the front end without any need to write your own Rack middleware to deal with it.

Hope this helps someone starting out with file uploads in Mongo. I am still learning about the system so any feedback is greatly welcomed.

I keep having to google this all the time so I thought I would remind myself here.

If you are using the aws-s3 rubygem, you can use the AWS::S3::S3Object class method exists? with the target bucket name to determine whether the object in question exists or not.

For example:

  # assumes a connection has already been opened with AWS::S3::Base.establish_connection!
  if AWS::S3::S3Object.exists? file_name, destination_target_name
    # do this if it exists
  else
    # do something else since it does not exist
  end

One of the most popular plugins to use for file uploading and processing is carrierwave. I have used it extensively in all my Rails projects that require some form of file upload and it has never let me down - it always works out of the box and as expected.

One of the issues that has always interested me, but which I never took the time to investigate, is: how do you define per-user settings for each uploader instance? Normally the carrierwave settings are defined in advance in a Rails initializer file, which predetermines the type of storage, where the file is uploaded to and so on. In a recent project using carrierwave I had to upload user files to Amazon S3 using fog, but each file had to go to the end user's own bucket.

What I did was to remove the initializer file from the Rails config directory, as it no longer applies. Within the uploader you define the settings as instance methods. That's the trick: define each setting as an instance method and grab the user's setting within that method.

For instance, to set the S3 bucket directory, we can do something like this:
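(The mounted user model here is assumed to expose a settings object with an s3_bucket attribute; both names are purely illustrative.)

    class UserFileUploader < CarrierWave::Uploader::Base
      storage :fog

      # each fog setting is defined as an instance method so it can read
      # from the user this uploader is mounted on (available as `model`)
      def fog_directory
        model.settings.s3_bucket
      end

      # the same pattern works for the other fog settings
      def fog_public
        false
      end
    end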

In the example above, we have set fog_directory based on the settings attribute of the user model on which this uploader is mounted. The same can be applied to all the other settings such as 'fog_host' and 'fog_public'.

You may be wondering: that's great, but where do I define each user's S3 settings? The answer is that you should not have to. When you are uploading into another user's S3 bucket, your account first needs to be granted access via an ACL entry on their bucket. The example above assumes this has already been done, so all you need are your own S3 credentials to upload to their bucket.

Hope this helps someone.

In a recent project, I had to redirect users from certain parts of the world to a holding page based on their IP addresses. This could normally be achieved in Rails using something like request.remote_ip.

However, the app is on Heroku and, to make matters worse, it uses SSL, which means any call to request.remote_ip only returns the IP of the ELB (Elastic Load Balancer), normally in the '10.x.x.x' range. That in turn returns an undefined location when passed to the underlying script which tries to determine the user's location based on a reverse lookup.

After trying various solutions I came to the conclusion that this could only be achieved on the client side with a JavaScript solution. ipinfodb.com provides such a service, including a free premade jQuery script. All you need to do is sign up to the service to get an API key, download the jQuery JS file and, within the JS file, list the countries you are allowing access for. More examples can be found on their API page. I used their JSON API, which comes with a ready-made jQuery example.

This is my setup below, as per their instructions, but with the callback function modified, as the example they provided did not show how to check the user's country:

var allowed_countries = ['UK','USA'];

// geolocate object is provided by the jquery script
var visitorGeolocation = new geolocate(false, false);

var callback = function(){
  var country = visitorGeolocation.getField('countryCode');
  // redirect visitors whose country is not in the allowed list
  if ($.inArray(country, allowed_countries) < 0){
    window.location.href = '/global';
  }
};

visitorGeolocation.checkcookie(callback);

The solution works by making a JSON API request, storing the user's location details in a cookie and then checking the country code from the cookie. If you are in the UK / EU please make sure you comply with the new EU cookie regulation before using the example.

Hope this helps someone with the same issue I had. Sadly I still have not managed to come up with a Ruby / Rails solution due to the way Heroku works with regards to SSL and Elastic Load Balancers.

Snipplets, the syntax highlighting app I built a month ago, makes heavy use of Pygmentizer to highlight user-submitted code segments. It does so by making a POST request to an external Pygmentizer app hosted on Google App Engine, which then makes a request back to the app on completion of the highlighting.

While this serves the purpose of the app, it has some drawbacks. Snipplets relies on a Resque queue to wait for the request to complete, which means the user can end up with an inconsistent view of the code snipplet; and what happens if the API service fails?

After watching Railscasts episode 207, which introduces pygments.rb, I ported all the syntax highlighting code over to it.

What's cool about pygments.rb is that it still uses Pygments, but the program itself is embedded within the gem and bridged to your app using RubyPython and libffi (through the ffi gem), which means any environment that runs Python can run pygments.rb - and that includes Heroku.

However, I ran into a peculiar bug whereby pygments.rb crashes the entire application server (both Mongrel and WEBrick, as shown in the Railscasts) with a killed signal message on the console when running the Pygments.highlight method.

If you are running into the same issue, try switching over to Thin, which works for me without any problems. As yet I cannot ascertain whether the error is system dependent or not, and ruby-debug did not throw up any unusual bugs. It could have something to do with asynchronous processing, but I will look into that in a separate post.

If you are new to pygments.rb and have just switched over, you might notice that the line numbers have disappeared completely if you used to have line numbers enabled. They are off by default in the gem and can be enabled by passing the line numbers option to the Pygments.highlight method.

Below is a short snipplet showing the line numbers workaround discussed above.
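Something along these lines should do it; the lexer and the 'linenos' value are illustrative (pygments.rb passes the options hash straight through to the Pygments HTML formatter):

    require 'pygments'

    source = File.read('example.rb')

    # 'linenos' turns the line numbers back on in the generated HTML
    html = Pygments.highlight(source,
                              :lexer   => 'ruby',
                              :options => { :linenos => 'table' })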

The second issue you might run into when deploying onto Heroku is the version of RubyPython you are using. If you get errors such as 'lexer cannot be found' then it is related to RubyPython, as pygments.rb cannot pick up on the Python interpreter. To fix it, simply create an initializer file (I called mine 'rubypython.rb' and put it inside config/initializers) with code along the following lines:
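    # config/initializers/rubypython.rb
    # start a Python interpreter session so pygments.rb can find its lexers
    RubyPython.start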

Essentially what the above does is start a Python interpreter session so that Pygments can run. I can't seem to find any other way to avoid this, otherwise the entire app will fail. The other gotcha is that it only seems to work with RubyPython 0.5.1, so I have the Gemfile locked down to that version. I tried it with both RubyPython 0.5.3 and the latest 0.6.1 to no avail - the Cedar stack on Heroku just refuses to communicate with RubyPython.
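The relevant Gemfile lines end up looking something like this (the RubyPython pin is the important part):

    # Gemfile
    gem 'pygments.rb'
    gem 'rubypython', '0.5.1'  # newer versions would not talk to Python on Heroku for me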

Hope this helps someone with a similar dilemma, as Pygments is a great syntax highlighting tool.

In one of my recent BDD endeavors I came across a test case where I had to assert that a float value generated from an object method is equivalent to a certain value.

If you were to do an outright comparison such as '1.5.should == 1.5' (where in reality the left-hand side is a computed float that merely displays as 1.5), you will get an RSpec error saying 'expected 1.5 but received 1.5 instead'. Check out this ruby forum post for the details: http://www.ruby-forum.com/topic/169330#742994

Only an approximation of the floating point values can make the specs pass, like so:

1.5.should be_close(1.5, 0.1)

The be_close matcher is an RSpec expectation which takes 2 arguments: the value to compare against and a precision (delta) value. Essentially the line above states that the computed value should be within 0.1 of 1.5.
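For what it's worth, newer versions of RSpec express the same approximation with the be_within matcher (result here is just an illustrative name for the computed float):

    # RSpec 2.1+ equivalent of be_close
    result.should be_within(0.1).of(1.5)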

From my own understanding of the issue above, it is not a Ruby issue but a general issue with floating point arithmetic itself.