0 Downtime Infrastructure

At AMA we strive for a high-quality user experience. Part of that experience is making sure our services are available for our members whenever they need them. If a service or site is down, we’ve failed the member. Imagine trying to book roadside assistance and finding the site down. That wouldn’t be good, and worse yet it could put our members’ safety in jeopardy.

Here’s what we do to make sure that our sites are running 24/7.

Servers

We use Amazon for our server hosting to make sure that we have servers available to spin up whenever we need them. In the winter we can spike from 50 to 3,000+ concurrent users on our AMA road reports site, so the ability to flex our server capacity is very important.

We use EC2 for the application boxes themselves, ElastiCache for our in-memory storage, and RDS for our data storage. All of the boxes/services are spread across multiple availability zones, so if we suffer an outage in one data center we automatically fail over to another. We also leverage geo-redundant S3 for backups to ensure scalable and easy retrieval of our information.

Application Deploys

We never want the member to see a downtime page, and Unicorn is our server of choice for zero-downtime application updates. It lets us deploy without taking the website down, even briefly, and it lets us deploy at all hours of the day, so our team doesn’t have to do night-time deploys.

Unicorn does a dance where it keeps the original version of the application in memory while you’re deploying the new version. It slowly phases out the original for the new version, and the experience is 100% seamless to the member.
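For reference, here’s roughly what the relevant piece of a unicorn.rb looks like to enable that dance (a minimal sketch; paths and worker counts are illustrative, not our exact config):

unicorn.rb - zero downtime restart sketch

worker_processes 4
preload_app true  # load the app in the master so forked workers come up fast
pid '/var/run/unicorn.pid'

before_fork do |server, worker|
  # On a USR2 restart the old master leaves behind a .oldbin pid file.
  # Once the new master starts forking workers, ask the old master to quit.
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exist?(old_pid) && server.pid != old_pid
    begin
      Process.kill(:QUIT, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # old master is already gone
    end
  end
end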

Server Patching

It’s critical that we stay on top of security patches, both in the Ruby/Rails world and on the server infrastructure side. Generally, server patching requires taking the server down for a short period of time; however, because we use RDS and ElastiCache, our database and in-memory storage are patched for us by Amazon. For the application servers, we pull one server out of the load balancer, apply the patches, reboot if needed, and then bring the server back into the load balancer.

This process is fully automated with Ansible, which lets us run it in parallel and patch all of our sites at the same time.

Notifications

We run a 24/7 on-call crew so we can address any issues that happen outside usual work hours (you’d be surprised at how many people renew their memberships in the middle of the night!). The tough part of monitoring is that you need it at a few levels.

The first is the server level: is the RAM/CPU/disk/network performing correctly? No? Send an email/SMS! Next is the application level; we need to make sure the application itself is working correctly. If the web page isn’t loading but the server is up, something is wrong and someone needs to be notified. Finally, there are the third-party integrations. We’re hooked into multiple web services and they might go down; we need to be alerted so we can follow up with the third party.

We’re able to cover off these scenarios with New Relic and Rollbar.

Armageddon

What happens if your Amazon zones get wiped off the face of the earth? We store offsite backups of our data for exactly this reason. Our server builds and deploys are fully automated, allowing us to bring up our infrastructure extremely fast. It would probably take longer for DNS to propagate than for us to rebuild in a new Amazon region.

We have had the odd outage here and there with Amazon (which is to be expected with cloud providers) and we’ve failed over without problems.

Summary

If you’re a member of AMA, you should expect the services to be available at all times. And we’ll make sure that happens.

Improving Your Ruby Code Base

Everyone has inherited a codebase that was in dire need of a re-write (at least a portion of it). If you haven’t, consider yourself one of the lucky few. I was at the local yegrb meetup a few nights ago and there were a bunch of ideas being thrown around. I brought up a few of the methods that we used to improve our codebase(s). It’s been a long trek at AMA, but we’re miles ahead of where we were a year ago.

Developing your Ruby skill set

I’m going to ask you a loaded question: do you write good Ruby code? I used to think I did, but I was mediocre at best. Before you can improve your codebase, you need to improve the quality of the Ruby that you write. If you blindly re-write code without improving, the result is probably going to be just as bad as the previous code. It’s a harsh reality, but once you accept that your old codebase is a reflection of your skill level, you can start to improve and prevent future failures.

But how do you improve? Books. Specifically this book. Most of the time when I read books, nothing sticks, or it only pays off in very specific scenarios. POODR (Practical Object-Oriented Design in Ruby) was the first Ruby book I read that made my code better the next day. I started leveraging objects much more and wrote less procedural code. It teaches you to truly embrace OOP.

Coding Rules/standards

All teams have varying skill levels, so how do you start to improve as a team? Luckily, Sandi Metz has given us a few good base rules to follow (with the caveat of “You can break these rules if you can talk your pair into agreeing with you”):

  1. Your class can be no longer than 100 lines of code.
  2. Your methods can be no longer than five lines of code.
  3. You can pass no more than four parameters and you can’t just make it one big hash.
  4. When a call comes into your Rails controller, you can only instantiate one object to do whatever it is that needs to be done.

In addition to those rules we’ve added one other: no instance variables. We use decent_exposure and its expose helper to DRY up our code, which also gives us an easy way to stub things out when testing.
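To make that concrete, here’s a rough sketch of what expose looks like in a controller (the model here is made up for illustration):

# Instead of assigning @car in every action:
class CarsController < ApplicationController
  expose(:cars) { Car.order(:name) }  # collection, lazily evaluated
  expose(:car)                        # found by params[:id] by convention

  def update
    if car.update_attributes(params[:car])
      redirect_to car
    else
      render :edit
    end
  end
end

# Views and tests call `car`/`cars` directly, so there are no instance
# variables to assign or stub.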

You’re going to struggle through those rules. I remember lots of head scratching when we decided to follow them, but it becomes second nature once you get used to it. You start to think differently about how to use models and structure your code, and you’ll notice that you start to use way more POROs (plain old Ruby objects) instead of stuffing your ActiveRecord models full of code.

These rules will start to guide you and your team towards a higher quality codebase. But a codebase without consistency will drive you nuts: you need to write easy-to-read, well-structured code as a team. How can you do that? The first step is to pick a good base of coding standards; we went with GitHub’s Ruby coding standards. Over time, your team will hit road blocks and frustration points in the codebase. This is good! Talk about it as a team and create some additions to your coding standards/rules. Here are our additions (with explanations):

  • Follow the Law of Demeter (only talk to your neighbors) wherever possible. If it makes sense to break the “law”, make sure you’re not changing the object you’re calling, i.e., don’t do this: user.profile.update_attribute(:foo, 'bar-baz')

This rule prevents craziness like this: object.batmans.breakfast.and.lunch.and.dinner. The more you chain, the harder it is to test and debug. We had massive chains that made our life hell; we rarely go past two chained calls now.
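A tiny example of what that looks like in practice (model names are illustrative): delegate to the neighbor rather than letting callers reach through it.

class User < ActiveRecord::Base
  has_one :profile
  # Expose only what callers need instead of handing them the whole profile.
  delegate :display_name, to: :profile, prefix: true, allow_nil: true
end

# user.profile.display_name  -- reaching through: harder to stub and refactor
# user.profile_display_name  -- one hop; Profile is free to change shape later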

  • No class methods, no def self.foo (unless you’re writing a finder-type method or returning a collection of your own class’s instances).

I brought this up at the @yegrb meetup and everyone looked perplexed. We’ve found class methods are rarely needed; the only outliers are configuration code or the scenarios above.

  • Always pass a hash to an initializer method: def initialize(args = {}) instead of def initialize(bar, baz)

We used to have to do massive refactorings when we didn’t pass a hash (because the method signature changed everywhere). This one is a hot topic, because it’s tough to know what to pass into the method. We make sure to fetch the keys and raise an error if a required key isn’t present. You can view how we do this here. As we move most of our apps over to Ruby 2.0+ we hope to start leveraging keyword arguments much more.
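The gist of it (a sketch, not our exact helper) is that Hash#fetch without a default raises a KeyError, which turns a forgotten argument into a loud failure:

class Booking
  def initialize(args = {})
    @provider  = args.fetch(:provider)               # required -- raises KeyError if missing
    @pickup_at = args.fetch(:pickup_at) { Time.now } # optional, with a default
  end
end

Booking.new(provider: :acme)  # fine
# Booking.new({})             # => KeyError: key not found: :provider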

  • When possible, try not to pass args to an instance method; this often leads to a procedural style. If you have args, pass them to the constructor instead and then operate on them in the method.

You want to put as much of the data as possible into the object and let the methods act on the attributes. The simplest reason is that every method can then act on those attributes; more advanced reasons include object composition.
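A quick before/after to make that concrete (class and attribute names are illustrative):

# Before: def total(rate, nights); rate * nights; end -- every caller juggles the args.
# After: the constructor holds the data and the methods simply act on it.
class Quote
  def initialize(args = {})
    @rate   = args.fetch(:rate)
    @nights = args.fetch(:nights)
  end

  def total
    @rate * @nights
  end
end

Quote.new(rate: 150, nights: 3).total  # => 450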

  • Instead of referencing classes directly, set defaults in the constructor; ideally, pull those defaults from an initializer.
  # config/application.rb
  config.after_initialize do
    config.default_provider = Car::Booking::Provider
  end

  # app/models/foo.rb
  def initialize(args = {})
    self.provider = args.fetch(:provider, ::Rails.application.config.default_provider)
  end

If you can set a default, do it. It’ll save you grief later on when using the class elsewhere.

  • If a method is not used outside a class, put it under private – this limits the public “API” of the class.

Every so often we’d forget to put a method under private and it would start to get consumed (even though it shouldn’t). This was more of a reminder for us than anything.

  • No conditionals in views.

Your mind just exploded. This one is really tough, but you’ll start to write really good presenters with this rule in place. We’re big fans of draper. We still have the odd conditional slip through (a pair was convinced!), but not very often.
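Here’s a sketch of how a conditional moves out of the template and into a Draper decorator (the model and its methods are made up for illustration):

# app/decorators/membership_decorator.rb
class MembershipDecorator < Draper::Decorator
  delegate_all

  def renewal_notice
    if object.expiring_soon?
      h.content_tag(:span, "Renew before #{h.l(object.expires_on)}", class: 'alert')
    else
      h.content_tag(:span, 'Active', class: 'ok')
    end
  end
end

# View: <%= membership.renewal_notice %> -- no if/else in the template.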

  • Don’t start lines with ‘unless’.
unless valid?; do_stuff; end # rejected PR
if !valid?; do_stuff; end    # accepted PR
do_stuff unless valid?       # accepted PR

Your mind might be fresh at the start of the day, but eventually it becomes hard for your brain to process unless statements. If you’ve ever come across something like unless method_z && method_a && !method || method_x, you’ll appreciate this rule. We do still allow unless at the end of a line, but only with a single condition.

  • Deploys can’t rely on .env vars

We fell into a nasty habit of relying on .env variables for deploys (hooks specifically). This caused us all sorts of grief, so we put the kibosh on it.

  • Only access ENV['stuff'] from the application or environment config files. Those values are then pulled from the config object throughout the code.
module AwesomeApp
  class Application < Rails::Application
    config.epic_api_url = ENV['EPIC_URL']
  end
end

# example of use
RestClient.get Rails.configuration.epic_api_url

This allows you to change the config var in one place instead of doing a global find/replace.

We started using delegate_presenter as a slimmed-down alternative to draper. We regretted it for two reasons: first, it turns out we needed those features (doh!), and second, the gem is really inactive and it took almost six months to get a PR merged. Draper is really well maintained at this point, so we should probably change this rule to ‘use Draper’.

  • Blank lines don’t matter (within a method) and are not counted toward method line length. A blank line is a signal that maybe the method should be broken apart.

Sometimes we can become sticklers about method length. We tossed this in to make sure that the rule is just a guideline. Above all, don’t write bad code just to conform to the rules.

  • Only one line allowed in a rake task

We used to have huge rake tasks that were almost impossible to test. Ironically, we first pushed those into class (self.) methods, which just moved the problem from A to B. Once we broke them down into properly instantiated classes we were golden. Our one-liners look like EpicImporter.new({data_url: 'http://example.com'}).import
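The resulting rake task ends up looking something like this (EpicImporter being the hypothetical importer from the example above):

# lib/tasks/epic.rake
namespace :epic do
  desc 'Import data from the epic endpoint'
  task import: :environment do
    EpicImporter.new({data_url: 'http://example.com'}).import
  end
end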

Tooling

We use a couple of tools to keep the quality of our codebase up. Specifically we use:

  • Brakeman for security checks (make sure to keep it up to date!).
  • Cane, which helps keep complexity low.
  • Flay to flag duplicate code.
  • SimpleCov to make sure our test coverage doesn’t go down.

When it comes to Brakeman/Cane/Flay, the rake tasks weren’t crystal clear, so we created a few wrapper rake tasks to make them a bit clearer. We also make sure they return zero/non-zero exit codes so our CI flags the build if we break one of the thresholds. SimpleCov is a great tool for preventing code from being written without tests, and it’s really useful for seeing whether you’ve covered off those edge cases.
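The wrappers themselves aren’t fancy; here’s a sketch of the idea (commands, flags and thresholds are illustrative and will differ for your setup; Flay in particular only prints a score, so its wrapper compares that score against a threshold):

# lib/tasks/quality.rake
namespace :quality do
  desc 'Fail the build if Brakeman or Cane are unhappy'
  task :checks do
    {
      'brakeman' => 'brakeman -q -z',    # -z: exit non-zero when warnings are found
      'cane'     => 'cane --abc-max 15'  # cane exits non-zero on violations
    }.each do |name, command|
      abort("#{name} failed -- see output above") unless system(command)
    end
  end

  desc 'Fail the build if the flay duplication score is too high'
  task :flay do
    score = `flay app lib`[/Total score.*=\s*(\d+)/i, 1].to_i
    abort("flay score #{score} is over our threshold") if score > 100
  end
end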

Build your own Rules/Standards/Tool set

It’s important to note that we built this rule set as a team, as we hit rough patches of code. It’s not going to work if you just drop a bunch of rules on your team or try to implement everything overnight. Don’t put new tools in place without chatting with your team (you are doing a weekly meeting, right?).

Work together as a team and make those codebase(s) better! Good luck!

Dotenv Gotchas

Every so often you get your ass kicked by something and you just need to write down all of the quirks. Dotenv was today’s culprit.

At AMA we start up our blessing of unicorns using foreman (which loads our .env files). When we do a zero-downtime (USR2) restart we manually call Dotenv.load to reload the environment vars from .env. Here’s the code in our unicorn.rb:

unicorn.rb - before_fork
before_fork do |server, worker|
  Dotenv.load
end

Over time we’ve learned a few valuable lessons when upgrading:

  • < 0.7 you can’t have comments in your .env’s
  • < 0.8 you can’t have blank variables (which was a bummer for .env.development)
  • 0.9 and above you’ll need to escape all of your dollar signs as variable expansion is now enabled (or use single quotes – not double)

Recently we noticed our unicorns weren’t properly reloading their environment, which forced us to do a hard restart (TERM) on them. Luckily we weren’t under a huge load and our caching absorbed most of the hits, but this can mean that some users get an error if the unicorns don’t come back up fast enough.

We dug in a bit deeper and noticed that calling Dotenv.load only loads in any new variables (it won’t override existing ones). This led to the inevitable “How did that even work in the first place?”.

At this point, we’re not sure it ever did (seriously... did it? :( ), but what we did find was that we should have been using Dotenv.overload. This overrides all of the variables that were in your .env file.
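So the before_fork block from earlier ends up looking like this instead (same hook, just the stronger call):

unicorn.rb - before_fork

before_fork do |server, worker|
  Dotenv.overload  # unlike Dotenv.load, values already present in ENV get replaced
end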

We also noticed it wasn’t being loaded into Rails, so mvandenbeuken created a pull request to let us make the call without directly requiring the library in unicorn.rb.

Hopefully this will save you a few ‘wtfs’ when your app acts up after a dotenv upgrade. I think we’ve burned enough universal ‘wtfs’ on our own.

Avoiding Forks With Gem Extensions

Gems are one of the best things about the Ruby ecosystem. There’s pretty much a gem for everything. But what happens when a gem needs a slight tweak to work with your app? As you create more and more Ruby / Ruby on Rails apps, you’re eventually going to run into this issue.

The easiest way to make a change to a gem that’s hosted on GitHub is to just hit the ole Fork button. Now you have an exact copy in your GitHub repos. You can make all the changes you want and reference the repo directly in your Gemfile (you are using Bundler, right?).

Reference a github fork
  gem 'doorkeeper', github: 'ryanjones/doorkeeper'

At this point in time you’re happy. Jump ahead 10 months and you’re probably going to be a grumpy developer. Why? Well, 10 months is a lonnng time in the open source world. Tons of features and bug fixes have made their way into the gem, and you’re stuck way back at the version you forked from.

So the easiest way to fix this is to pull in master, correct? Sometimes you might get lucky, but more often than not you’ll hit merge conflicts. I’ve been in scenarios where the method I’d changed was completely removed from the gem! How do you even go about fixing that?

Unfortunately, there’s not really an easy way to fix that (sadface). You’ll still have to re-write your patch, but you can create a gem extension, which lets you manage the fallout a little more easily. A gem extension is just a gem that overrides existing functionality within another gem. There are a bunch of great benefits to creating a gem extension:

  • Easy to track what the actual change was, because only the code related to the change is needed; you don’t need all of the original gem’s code to go along with it.
  • You can bring it in from rubygems across multiple projects (with proper versioning).
  • You can remove the gem and the app reverts back to the default functionality.
  • Tests are isolated to the overridden code (though I suggest you test it in your project also).
  • Provides a consistent way to override gem functionality across multiple projects.

We recently had to make some changes to doorkeeper (an OAuth2 provider). After a user authenticates, we needed to log them out. You can look at the gem extension here. You’ll notice a couple of things:

  • Naming the gem properly, “gemname-module_name_space_to_extend”: Doorkeeper::LogoutRedirect in this case, to keep the module structure in place (see the sketch below).
  • Proper gem versioning in case we need to upgrade doorkeeper (and keep our logout-after-redirect in place).
  • When we decide to remove the functionality, we can just drop the gem from our Gemfile and doorkeeper will revert back to its original functionality.
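
To make the shape of an extension concrete, here’s a purely illustrative sketch; the module being prepended and the method being overridden are hypothetical stand-ins, not Doorkeeper’s real internals:

# lib/doorkeeper/logout_redirect.rb (illustrative names only)
module Doorkeeper
  module LogoutRedirect
    def complete_authorization(*args)
      result = super           # keep the original gem behavior...
      sign_out_current_user    # ...then layer our change on top (hypothetical helper)
      result
    end
  end
end

# In the extension's setup, prepend the override onto the gem's class, e.g.:
# Doorkeeper::SomeController.prepend(Doorkeeper::LogoutRedirect)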

It’s important to note that this approach won’t save you from updating your gem extension if large parts of the gem’s internals change. However, it should be less painful to apply across projects, and it provides some versioning and structure to how you do method overriding.

Sidenote: If there’s a different name for what I’m doing above I’d like to hear from you. At AMA we call them gem extensions, but after some googling I’m still unsure if this is the right lingo.

Pull Requests/Code Reviews Don’t Have to Be Offensive

Code reviews are a pain. They take time away from development, disrupt workflow, and can even cause conflicts among teammates. However, I think we can all agree that they’re a necessary part of life, and that if we DIDN’T do them, we’d be in a worse situation.

When I started at AMA I noticed that there weren’t a huge number of comments on any of the pull requests going up to production. It didn’t matter whether the pull request had 1-2 lines of changes or 50. That seemed odd, because out on the internet this isn’t really a problem: developers seemingly love to bash/one-up other developers when it comes to tools, methodologies or the technology they’re using. We see this all the time when a new feature gets added to Rails or DHH states that “TDD is dead”.

A local team is different. If you call the guy across from you a numbskull, there’s nothing stopping him from reaching out and bopping you one. I was able to work with the team (through one-on-ones and team meetings) and we came up with the following list of problems:

  1. Everyone had their own coding style and didn’t want to press that on other developers
  2. No one wanted to hurt other team member’s feelings
  3. Some didn’t feel that they were on the same skill levels as others
  4. No one wanted to be “the one” that was holding up code from going to production

Once we had narrowed down what was causing the issue we could start to work through each of the scenarios.

Everyone had their own coding style and didn’t want to press it on other developers. This was pretty easy to address: GitHub’s style guide provided a great base to build from. It isn’t overly complex and it lays out the rules in a clear, concise fashion. We’ve added extra items to our own style guide over time to improve readability and lower complexity, and we’ve added guides for other languages such as CSS/JS/CoffeeScript. It’s very easy to point to a style guide when a conflict comes up.

No one wanted to hurt other team members’ feelings. If you don’t know what’s out there, you’ll never know what to use. “I wish there was an app that did X.” “Haven’t you heard about Waffle-a-tron? It does exactly that!” Sound familiar? Knowing is half the battle. Code reviews let you learn about new ways of constructing and architecting code. It might be as simple as introducing a map instead of a loop, introducing a builder pattern, or even a quick lesson in dependency injection. It’s all about improving the clarity and quality of the code; it’s never against the person who wrote it.

Some didn’t feel that they were on the same skill level as others. Sometimes it’s hard to “call out” team members of higher rank. You might not think your ideas carry a lot of merit, or that you have enough experience in the programming language. Here’s the rub: if you ask a question like “How does this work?” or “I was looking at X pattern, could we use it to improve this chunk of code?”, you’re going to learn 100x more than the person who blindly accepts the PR. You’re going to be 100x more valuable in the future and 100x more likely to excel. Be the best by calling out the best.

No one wanted to be “the one” holding up code from going to production. If you push sub-par code into production, you’re going to get a sub-par experience. Once I hammered that home with the developers and upper management, the issue was moot. The developers focused on creating quality code, and upper management enjoyed dealing with very few regressions, which led to happy members. Over time, the developers realized that I would have their back if code wasn’t up to par, and upper management respected that I wouldn’t release poorly written code into production (which can lead to a tarnished reputation).

Keep in mind that code reviews are a process and it can take a while to turn things around. It might take a few weeks before people start to get comfortable. Your code base will thank you for taking the time to improve your code reviews ;).

A great guide on reviewing code, and on receiving code reviews, can be found in Thoughtbot’s code review guide.

RyanonRails.com Is Dead, Long Live ryanjones.io!

I’ve decided to move away from ryanonrails.com and over to ryanjones.io. When I first picked ryanonrails.com it seemed like a clever play on Ruby on Rails (it was clever, right?), but over time it’s grown old.

On that note, I wanted to make sure that none of my old links died off, so I quickly threw together this sinatra app hosted on Heroku to make sure the URLs are redirected correctly.

This small sinatra app takes anything after the URL and forwards it over to the new domain you’ve defined. I went with a 301 (permanent) to preserve SEO, but you could use a 302 (temporary) if you’re only setting this up temporarily.

config.ru
require 'sinatra'

get '/*' do
  redirect "http://www.ryanjones.io/#{params[:splat].first}", 301
end

run Sinatra::Application
Gemfile
source 'https://rubygems.org'
gem 'sinatra'

Drop these two files in a directory, run bundle install, and then run rackup.

Hosting Octopress With Amazon S3 and Cloudfront

I recently set up this blog and I figured that for my first “real” post I could go through the steps it took to set it up. I gleaned quite a bit of information from quite a few other sites, but I still felt the existing write-ups left room for improvement on this topic.

Creating the site

This assumes you have a working version of Ruby installed. More information on the setup can be found here: Octopress setup. Run this in your console:

git clone git://github.com/imathis/octopress.git octopress
cd octopress
bundle exec rake install
bundle exec rake new_post["first post"]

# and start up the server
bundle exec rake preview

You should see the blog up and running if you punch in http://localhost:4000 in your browser:

Setting up the S3 bucket (as a static site)

Amazon can host static sites directly from S3 buckets. You’re going to want to add a bucket named ‘www.yourdomain.com’. Then click on the bucket’s properties, enable static website hosting, and set the index document to ‘index.html’.

Setting up s3cmd

I’m going to use s3cmd to upload the site to S3; this will let us push the site up through a rake task (and do some heavy lifting for us). I would normally suggest installing s3cmd through brew, but in this case we need the ability to invalidate our CloudFront cache, and the brew version doesn’t currently seem to support that flag. You can install s3cmd from source by running this:

git clone https://github.com/s3tools/s3cmd s3cmd
cd s3cmd
sudo python setup.py install
s3cmd --configure

You’ll have to enter your S3 API keys when prompted (your Access Key ID and Secret Access Key); you can find them here:

It will write a file to ~/.s3cfg containing your Amazon credentials. This lets you use s3cmd from the deploy rake task that we’ll build.

Deploy to Amazon S3 static site

There’s a great rake task located here that we’ll leverage. Paste this at the bottom of your Rakefile:

desc "Deploy website via s3cmd with CloudFront cache invalidation"
task :s3 do
  puts "## Deploying website via s3cmd"
  ok_failed system("s3cmd sync --acl-public --reduced-redundancy --cf-invalidate public/* s3://#{s3_bucket}/")
end

Within the Rakefile you’ll need to set up a few variables:

# find deploy_default = "rsync" and replace it with 
deploy_default = "s3"

# and then add this line underneath it
s3_bucket = "www.yourdomain.com"

This will set our default deploy method to the s3 task that we added to the bottom, and define the bucket that the rake task needs.

Let’s deploy!

bundle exec rake generate
bundle exec rake deploy

All of your files should be pushed up to your Amazon S3 bucket and you should be able to visit the endpoint that was defined on your bucket.

www.ryanonrails.com.s3-website-us-east-1.amazonaws.com would be my endpoint

Configuring Cloudfront

We want this site to be very fast; by pushing our static assets out to CloudFront we can guarantee a faster load time than the regular S3 bucket alone. I saw improvements anywhere from 0.5 to 1s (considering the site takes about 2s to load from the plain S3 bucket, that’s quite a large percentage). It can speed up your site by 25-50%.

Let’s head back over to Amazon CloudFront and create a distribution. Click Create Distribution and choose Download for the next step. Now here comes the tricky part: in the Domain Name box, make sure to enter your endpoint URL (e.g. www.ryanonrails.com.s3-website-us-east-1.amazonaws.com). The box will autocomplete to your bucket, but not to the actual custom origin that you have set up with your static site. In the Alternate Domain Names (CNAMEs) box, enter ‘www.yourdomain.com’. You can leave everything else as the default.

Here’s what it should look like once you’ve created your distribution:

And if you select it and click edit, the General and Origins tab:

We add the CNAME so that when a request comes in for www.ryanonrails.com, CloudFront can match the URLs up properly. This will make more sense when we set up our DNS.

It will take about 10-15 minutes to distribute across the globe. Once it’s finished you should be able to visit your site through your CloudFront domain name (not to be confused with ‘www.yourdomain.com’), e.g. http://d231akc98dz4lc.cloudfront.net/index.html (the index page of my blog) or http://d231akc98dz4lc.cloudfront.net/blog/archives/ (my archive).

Updating your DNS settings

We’ll need to change our DNS settings to make sure that www.yourdomain.com now points to the CloudFront domain name. You’ll need to set your A record to 174.129.25.170 and create a www CNAME that points to your CloudFront domain name. Here’s my setup on GoDaddy:

By pointing our A record at 174.129.25.170 we leverage a service called WWWizer, which takes our naked domain ‘yourdomain.com’ and redirects it to ‘www.yourdomain.com’. We have to do this because you can’t point an A record at a hostname the way you can a CNAME, but we need requests for ‘yourdomain.com’ to end up at the www version of the site. More information on this here.

At this point, if someone requests www.ryanonrails.com/index.html it is served from CloudFront.

Cloudfront cache invalidation

By default in our rake task we pass in --cf-invalidate to s3cmd. This will invalidate any of the files that are out of date on our blog. This is useful since we generally want to see our blog post up and live once we post. You can monitor the invalidation job in the Invalidations tab in CloudFront:

Example invalidation

Once it finishes you should be able to reload ‘www.yourdomain.com’ and see your latest blog post (or typo fix ;)).

Potential problems

I ran into a few problems while setting this all up. The first was that I named my bucket ‘ryanonrails.com’ without the www; this caused problems once I implemented the WWWizer service, because it ended up redirecting to a bucket that didn’t exist (‘www.ryanonrails.com’ at the time). Another problem was that I set a Default Root Object on my CloudFront distribution, which broke any sub-directories (such as ‘www.ryanonrails.com/blog/archives’). From what I understand, that setting is meant for an S3 bucket that isn’t set up as a static site.


Blog Upgrade!

I haven’t been blogging for the past year or so, and I hope to change that soon. I’ve been writing down topics for the past few months, so I plan to pull from that pool for the next while or so.

Technology-wise, I decided to drop WordPress. A couple of friends have had luck with Jekyll, Octopress and nanoc. I ended up choosing Octopress, since it comes with a bunch of beautiful layouts.

I also decided to ditch my old host (http://bluehost.com) and moved over to Amazon S3 & CloudFront. The site is blazing fast now. There are a few tweaks I still need to make (compress assets, HTML crunch, clean up the about page), but it should suffice for now.

Here’s the old blog layout:

And the new (which of course you’re looking at ;) ):

MVC3 Ninject/NUnit Unit Tests

Here’s some example code for MVC 3 & 4 unit tests, using Ninject, Moq, NUnit and the repository pattern.

using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;
using EST.Controllers;
using EST.Models;
using Moq;
using System.Web.Mvc;

namespace EST.Tests
{
    [TestFixture]
    public class ComplexitiesControllerTest
    {
        private Mock complexityRepoMock;
        private Mock complexityTypeRepoMock;
        private Mock analystTypeRepoMock;
        private ComplexitiesController complexityRepo;

        public AnalystType analystProg;
        public ComplexityType complexityVerySimple;

        [SetUp]
        public void Setup()
        {
            // initial arrange
            complexityRepoMock = new Mock();
            complexityTypeRepoMock = new Mock();
            analystTypeRepoMock = new Mock();
            complexityRepo = new ComplexitiesController(complexityTypeRepoMock.Object, analystTypeRepoMock.Object, complexityRepoMock.Object);

            // Create Analyst Types
            // Assign the field (declaring a new local here would shadow it and leave the field null).
            analystProg = new AnalystType { Name = "Programmer Analyst" };

            // Create Complexity Types
            complexityVerySimple = new ComplexityType { Name = "Very Simple" };
        }

        [Test]
        public void IndexTest()
        {
            // arrange
            var complexitys = new List<Complexity>
            {
                new Complexity
                {
                    AnalystType = analystProg,
                    ComplexityType = complexityVerySimple,
                    Area = "Design and Architecture",
                    Effort =
                },

                new Complexity
                {
                    AnalystType = analystProg,
                    ComplexityType = complexityVerySimple,
                    Area = "Stored Procedure",
                    Effort =
                },

                new Complexity
                {
                    AnalystType = analystProg,
                    ComplexityType = complexityVerySimple,
                    Area = "Data Access Services",
                    Effort = 0.5
                }
            };

            complexityRepoMock.Setup(a => a.AllIncluding(complexity => complexity.ComplexityType, complexity => complexity.AnalystType)).Returns(complexitys.AsQueryable());

            //act
            var result = complexityRepo.Index() as ViewResult;
            var model = result.ViewData.Model as IQueryable<Complexity>;

            //assert
            Assert.AreEqual(model.Count(), 3);
        }

        [Test]
        public void DetailsTest()
        {
            // arrange
            Complexity complexity = new Complexity()
            {
                ComplexityId = 1,
                AnalystType = analystProg,
                ComplexityType = complexityVerySimple,
                Area = "Design and Architecture",
                Effort =
            };
            complexityRepoMock.Setup(a => a.Find(1)).Returns(complexity);

            // act
            var result = complexityRepo.Details(1) as ViewResult;
            var model = result.ViewData.Model as Complexity;

            // assert
            Assert.AreEqual(model.ComplexityId, 1);
        }

        [Test]
        public void CreateTest()
        {
            // act
            var result = complexityRepo.Create() as ViewResult;

            // assert
            Assert.That(result, Is.Not.Null);
        }

        [Test]
        public void CreatePostTest()
        {
            // arrange
            Complexity complexity = new Complexity()
            {
                ComplexityId = 1,
                AnalystType = analystProg,
                ComplexityType = complexityVerySimple,
                Area = "Design and Architecture",
                Effort =
            };

            var submittedComplexity = new List<Complexity>();
            complexityRepoMock.Setup(z => z.InsertOrUpdate(complexity)).Callback((Complexity a) => submittedComplexity.Add(a));

            //act
            var result = complexityRepo.Create(complexity) as ViewResult;

            // assert
            Assert.AreEqual(submittedComplexity.Count, 1);
        }

        [Test]
        public void CreatePostFailTest()
        {
            // arrange
            Complexity complexity = new Complexity()
            {
                ComplexityId = 1,
                AnalystType = analystProg,
                ComplexityType = complexityVerySimple,
                Area = "Design and Architecture",
                Effort =
            };

            var submittedComplexity = new List<Complexity>();
            complexityRepoMock.Setup(z => z.InsertOrUpdate(complexity));

            //act
            var result = complexityRepo.Create(complexity) as ViewResult;

            // assert
            Assert.AreEqual(submittedComplexity.Count, 0);
        }

        [Test]
        public void EditTest()
        {
            // arrange
            Complexity complexity = new Complexity()
            {
                ComplexityId = 1,
                AnalystType = analystProg,
                ComplexityType = complexityVerySimple,
                Area = "Design and Architecture",
                Effort =
            };

            complexityRepoMock.Setup(z => z.Find(1)).Returns(complexity);

            //act
            var result = complexityRepo.Edit(1) as ViewResult;
            var model = result.ViewData.Model as Complexity;

            // assert
            Assert.AreEqual(model.ComplexityId, 1);
        }

        [Test]
        public void EditPostTest()
        {
            // arrange
            Complexity complexity = new Complexity()
            {
                ComplexityId = 1,
                AnalystType = analystProg,
                ComplexityType = complexityVerySimple,
                Area = "Design and Architecture",
                Effort =
            };

            complexity.Area = "Design";

            var submittedComplexity = new List<Complexity>();
            complexityRepoMock.Setup(z => z.InsertOrUpdate(complexity)).Callback((Complexity a) => submittedComplexity.Add(a));

            //act
            var result = complexityRepo.Edit(complexity) as ViewResult;

            // assert
            Assert.AreEqual(submittedComplexity.First().Area, "Design");
        }

        [Test]
        public void DeleteTest()
        {
            // act
            var result = complexityRepo.Delete(1) as ViewResult;

            // assert
            Assert.That(result, Is.Not.Null);
        }

        [Test]
        public void DeletePostTest()
        {
            // arrange
            Complexity complexity = new Complexity()
            {
                ComplexityId = 1,
                AnalystType = analystProg,
                ComplexityType = complexityVerySimple,
                Area = "Design and Architecture",
                Effort =
            };

            var submittedComplexity = new List<Complexity>();
            submittedComplexity.Add(complexity);

            complexityRepoMock.Setup(z => z.Delete(1)).Callback(() => submittedComplexity.RemoveAt(0));

            //act
            var result = complexityRepo.DeleteConfirmed(1) as ViewResult;

            // assert
            Assert.AreEqual(submittedComplexity.Count, 0);
        }
    }
}

CRM/SSRS - Error: One or More Data Source Credentials Required to Run the Report Have Not Been Specified.

I ran into this error again. I think I’ve got it all figured out. Here’s the excerpt from the log:

Unhandled Exception: System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]: An exception occurred when executing the plugin. Error: One or more data source credentials required to run the report have not been specified. ---> Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: One or more data source credentials required to run the report have not been specified.

My current setup is this: I have regular MSCRM_FetchDataSource reports using credentials supplied by the user running the report, and I’m running a plugin when a drop-down list value changes. This plugin runs an SSRS report and drops it into SharePoint.

The error actually stems from the fact that the username and password being passed to SSRS from the plugin aren’t getting populated. The SRS data connector that you install for CRM passes these values back and forth for you for the out-of-box reports. For whatever reason, this code will not populate those fields:

return ReportService.RenderReport(reportUrl, _networkCredential, reportPath, parameters, "PDF", devInfo, "en-us");

In this situation I’ve found the best course of action is to set your data source as follows:

MSCRM_FetchDataSource
  • Credentials stored securely in the report server
  • User: -username-
  • Password: -password-
  • “Use as Windows credentials when connecting to the data source”: CHECKED
  • “Impersonate the authenticated user after a connection has been made to the data source”: NOT CHECKED

We use one main account for our reports, so this works out well. It allows plugins to access SSRS and still lets the out-of-box reports run, all while using only one data source (we previously had the custom reports on one data source and the rest of CRM on MSCRM_FetchDataSource).