Must-Have Features For Your Client API Library

The hard part is done.  You built an API for both you and your customers to use.  You have started to see some great partnerships with customers and other companies who are using it.  Now you want to get the word out about how your API can help people run their businesses more efficiently and make more money.  Is marketing the answer?  It’s a piece of it, but it also comes down to how easy it is for developers to integrate with your API.

The answer: Your client API libraries.

Designing a client API library for several programming languages can be a great opportunity to encourage developers to use your API.  Since your API is part of your product, and your product makes you money, I see a well designed client API library as a money maker.  It can allow you to keep current customers happy and find new customers in business sectors that might surprise you.  Yet, this is a largely under-appreciated aspect of many businesses that provide APIs.

As a part of my startup Affiliate Linkr, I currently integrate with APIs from 5 different affiliate networks, and none of them have a client API library.  That’s right, none of them!  Talk about a missed opportunity!  I wonder how many cool apps could be developed if only they would make it easier for a developer to pull some code off of Github and start innovating.

With my experience developing Affiliate Linkr and several other projects, I’ve had a chance to use client API libraries and create some of my own, so I wanted to share my opinions so we can all do a better job creating them.

If you are excited about creating a client API library for your API, but would like some help, please contact me on LinkedIn and let’s work together!

Here is a list of 13 must-have features when designing your client API library.

  1. Object Oriented Interface
  2. Throw Named Exceptions
  3. Allow Automatic Retries
  4. Should Handle Multiple API Versions Within One Code Base
  5. Base API URL Can Be Changed
  6. Unit Tested and Functionally Tested
  7. Use an Existing Library for Doing HTTP Communication
  8. Ability to Get Raw API Responses and API URL Calls for Debug
  9. Example Code
  10. Live Demo of Example Code
  11. Library Should be Installable Using a Package Manager
  12. Make Releases
  13. Quality Documentation

For a detailed look at each of the must-have features listed above, please read on.

Object Oriented Interface

One of the main reasons you are creating a client API library in the first place is to give people an easier way to work with your API.  You want to make it easy for people to create apps using your API, so you gain new customers or make your existing ones even more satisfied.  An object oriented interface to your API allows your library to fit into the most designs, as almost all developers are familiar with OO, and their end application can most likely integrate object oriented code easily.

One important detail when designing your object oriented interface is the naming of your classes, class accessor functions, and other methods.  Your class names should correspond to the different service endpoints.  For example, if you have a /users/:uid endpoint, you should have a UsersService that returns the user data as a User object.  If one of the resources you can access for a user is projects, via an endpoint like /users/:uid/projects, the User object should have a getProjects() method that corresponds to this endpoint.  Internally, that method could use the ProjectsService class to fetch the data.
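To make this concrete, here is a minimal Javascript sketch of that structure.  The class names and the stubbed transport are illustrative, not from any real API:

```javascript
// Sketch of an object-oriented interface that mirrors API endpoints.
class ProjectsService {
  constructor(transport) { this.transport = transport; }
  // Corresponds to GET /users/:uid/projects
  findByUser(uid) { return this.transport.get(`/users/${uid}/projects`); }
}

class User {
  constructor(data, projectsService) {
    this.id = data.id;
    this.name = data.name;
    this._projects = projectsService;
  }
  // Accessor corresponding to the /users/:uid/projects endpoint
  getProjects() { return this._projects.findByUser(this.id); }
}

class UsersService {
  constructor(transport, projectsService) {
    this.transport = transport;
    this.projects = projectsService;
  }
  // Corresponds to GET /users/:uid
  getUser(uid) {
    return new User(this.transport.get(`/users/${uid}`), this.projects);
  }
}

// Usage, with a stubbed transport standing in for real HTTP calls:
const transport = {
  get(url) {
    if (url === '/users/1') return { id: 1, name: 'Ada' };
    if (url === '/users/1/projects') return [{ id: 10, title: 'API' }];
    throw new Error('unexpected URL: ' + url);
  }
};
const users = new UsersService(transport, new ProjectsService(transport));
console.log(users.getUser(1).getProjects().length); // 1
```

Injecting the transport this way also pays off later when it comes time to mock HTTP responses in tests.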

As with any class, module, or library, make sure you use a namespace so that you do not run into any conflicts with other libraries or existing code.

Great naming can help give your API users intuition about how to use your API without knowing all the little details.  It is also important to document how this naming is done.  Let’s say that you have an endpoint of /users/:uid/equipment-owned; how would that look?  I suggest having a clear naming convention guide, something that states, for example:

  • dashes and underscores are removed and considered as word separators
  • camel case will be used (capital letters used for the beginning of words)
  • get and set will be used for accessors

A naming convention like this would result in the OO method being User->getEquipmentOwned() for the /users/:uid/equipment-owned endpoint.  Having a clear naming policy makes your job easier, and also helps future developers implement changes to the client API library more easily when the API changes.  The specifics don’t have to match what I have said above; having a clear naming convention is where the value lies.
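A convention like this is simple enough to encode.  Here is a hypothetical Javascript helper implementing the three rules above:

```javascript
// Hypothetical helper implementing the naming convention described above:
// dashes and underscores separate words, camel case, 'get' prefix for accessors.
function accessorName(endpointSegment) {
  const camel = endpointSegment
    .split(/[-_]/)
    .map(word => word.charAt(0).toUpperCase() + word.slice(1).toLowerCase())
    .join('');
  return 'get' + camel;
}

console.log(accessorName('equipment-owned')); // getEquipmentOwned
console.log(accessorName('projects'));        // getProjects
```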

Throw Named Exceptions

Your client API library should throw named exceptions for as many error cases as you can.  The names of the exception classes should be self-descriptive, so inspecting an object in the debugger immediately gives you an idea of what is going on.

Your exceptions will break down into a few groups:

  • Authentication related
  • Authorization related
  • Request related
  • Response related

Make sure to cover them all.  They are especially important when a developer just starts to learn to use the library, where onboarding someone smoothly to your client API library can really improve how that developer feels about your company.  Yeah, I think of a client API library as a recruiting tool.

An example of this would be if the user sets an invalid API key, the API might respond with an HTTP status code or an error data structure.  Your client API library would surface this to the user as a thrown exception that is descriptive and identifies the error uniquely, like InvalidAPIKeyException.

Having named exceptions is also very important to allow multiple catch blocks to look for specific exceptions by name, so the user of the client API library can integrate error handling into their application.  Maybe some exceptions thrown are fatal to the application, while other exceptions just require a retry of a request.
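A thin exception hierarchy lets the application branch on the error type.  In this Javascript sketch, every class name except InvalidAPIKeyException is illustrative, and the fetchUser function is a made-up stand-in for a library call:

```javascript
// Sketch of a named exception hierarchy.
class ApiException extends Error {}
class InvalidAPIKeyException extends ApiException {}
class RateLimitException extends ApiException {}

// Hypothetical library call that validates the key before making a request.
function fetchUser(apiKey, uid) {
  if (apiKey !== 'valid-key') throw new InvalidAPIKeyException('invalid API key');
  return { id: uid };
}

// The application can handle each named exception differently:
try {
  fetchUser('wrong-key', 1);
} catch (e) {
  if (e instanceof InvalidAPIKeyException) {
    console.log('fatal: fix your credentials'); // this branch runs
  } else if (e instanceof RateLimitException) {
    console.log('transient: retry the request later');
  } else {
    throw e;
  }
}
```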

Having errors that can be recovered from by just retrying a request is something that happens to me with a couple of the APIs that I use within Affiliate Linkr.  The errors are random, with no apparent cause, and the same request will return fine on a retry, so knowing which particular error I got can be the difference between a fatal app failure and a smoothly running app.

Allow Automatic Retries

Not every API is perfect.  There could be situations where the API gives a bad response, or is too busy and cannot service a request.  Some of the reasons may be out of your control too, like a spotty internet connection that maybe broke down along the way.  Having a backup plan for these cases is smart.

I suggest building in a way for the user of your client API library to enable automatic retries.  The reason I like it as an option for the user to enable is that the user’s application may have a different way it wants to handle failures.  Maybe they want to treat a failure as a fatal error, and possibly put some analytics on it so they can feed back to the API support team.

I feel like adding automatic retries might be better handled in your user’s application in most cases, so whether you choose to have automatic retries as a feature might depend on what your API functionality is.  Some APIs might be more prone to errors, or high traffic, where an uncaught error might be common and the consequences could be pretty bad.

The essential factors in a retry system are to allow configuration for:

  • Retries Enabled
  • Number of retries
  • Time between retries
  • Named Exceptions that will allow retries
  • Named Exception thrown when all retries fail

Depending on how much control you want as the designer of the client API library, you may want the Exceptions that will allow retries to be controlled by you and not exposed to the user.  However, I think that the goal of being developer-friendly should result in us exposing that as a setting, but providing a solid set of defaults so that it likely never has to be changed.

Having a different Named Exception thrown when all retries fail is important so that the application can operate with both retries enabled and disabled, but use the same code.  They could handle the non-retried Exception one way, maybe with manual retries, but the retry failed Exception might cause a fatal app error.
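A retry wrapper covering these settings can be sketched in a few lines of Javascript.  All names here are hypothetical, and the time-between-retries setting is omitted to keep the example synchronous:

```javascript
// Sketch of a configurable retry wrapper with solid defaults.
class RetryableException extends Error {}
class RetriesExhaustedException extends Error {}

function withRetries(fn, { enabled = true, maxRetries = 3, retryOn = [RetryableException] } = {}) {
  const attempts = enabled ? 1 + maxRetries : 1;
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return fn();
    } catch (e) {
      // non-retryable errors (or retries disabled) propagate unchanged
      if (!enabled || !retryOn.some(cls => e instanceof cls)) throw e;
      lastError = e;
    }
  }
  // a distinct named exception signals that all retries failed
  throw new RetriesExhaustedException('all retries failed: ' + lastError.message);
}

// Usage: a flaky call that succeeds on the third attempt.
let calls = 0;
const result = withRetries(() => {
  calls += 1;
  if (calls < 3) throw new RetryableException('flaky response');
  return 'ok';
});
console.log(result, calls); // ok 3
```

Note how the application can catch RetryableException and RetriesExhaustedException separately, which is the point made above.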

Should Handle Multiple API Versions Within One Code Base

When a new API version is released and some of the changes are not backward-compatible, developers using your client API library should not have their applications break.  Forgive me for stating the obvious.

I also feel that as a developer using the client API library, my existing code that uses the library should not break if I update to the latest library release.  This means that if I want to keep using my current version of the API, updating to the latest library release should not automatically cause my code to start using the newest API release.  I should always have a way to make my existing code continue to work.

To enable this, the client API library should have a setting for which version of the API to use.  I don’t suggest making people pass a raw number, like setVersion($newVersion); there is too much chance for error.  It is better to have a specifically named version method, or at the very least a constant, such as setVersion3() or setVersion(API_VERSION_3).  If you allow setting it via setVersion(), just make sure it is a valid API version number and throw an exception if it isn’t.

Also, I suggest letting the client library default to the current version of the API, but only via something like a constant API_LATEST, or possibly a setVersionLatest() method.  Require this line to be in code!  This makes the developer make a specific decision to work with the latest API version the library supports.  Then, when they update to a newer release of your client API library, it will automatically point to the latest API version and use any new code associated with it, possibly causing breaking changes to their app, but that was their explicit choice.  The key is that it is obvious they are choosing to live on the edge, because there is an intentional choice they have to make.

To allow different API versions to be handled by the same code base, I like to design using Data Mappers to map between the API response and the client API library classes.  Data Mappers help with decoupling, because they are the only class that needs to know both about the API response data, and the client API library object.  As an example, I’d use a UserResponseMapper to map a UserResponse object into a User object.
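As a sketch, assuming a hypothetical raw response shape with user_id, first_name, and last_name fields, a Data Mapper in Javascript could look like:

```javascript
// Sketch of a Data Mapper: the only class that knows both the raw API response
// shape and the library's User object. The field names are assumptions.
class User {
  constructor(id, fullName) {
    this.id = id;
    this.fullName = fullName;
  }
}

class UserResponseMapper {
  // assumed raw response: { user_id, first_name, last_name }
  map(response) {
    return new User(response.user_id, `${response.first_name} ${response.last_name}`);
  }
}

const mapper = new UserResponseMapper();
const user = mapper.map({ user_id: 7, first_name: 'Grace', last_name: 'Hopper' });
console.log(user.fullName); // Grace Hopper
```

When version 2 of the API renames a response field, only the mapper for that version changes; the User object and the rest of the library stay untouched.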

Different class names for your service objects that correspond to different versions are the way to go, but a smart idea is to inherit from a base service class.  The service probably doesn’t change a ton over time, just small changes, so it’s likely that a base service class can hold most of the logic you need.

An example of this would be if you are currently on version 1 and you come out with version 2: the user class name for the latest version of the library would be User, and your old user class could be named UserVersion1.  Always keep the version number out of the class names of your current code; that allows you to potentially remove legacy code someday.  While the class names returned might be different than they were before, the interface is the same.  To handle this gracefully, it is useful to have a factory method that returns an object corresponding to your current API version, to avoid hardcoding class names.
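Such a factory might look like the following Javascript sketch; the class names, constants, and version numbers are hypothetical:

```javascript
// Sketch of a factory for versioned service classes.
const API_VERSION_1 = 1;
const API_LATEST = 2;

class UserService {                              // current version, no suffix
  version() { return API_LATEST; }
}
class UserServiceVersion1 extends UserService {  // legacy, shares base logic
  version() { return API_VERSION_1; }
}

function createUserService(apiVersion) {
  switch (apiVersion) {
    case API_VERSION_1: return new UserServiceVersion1();
    case API_LATEST:    return new UserService();
    default: throw new Error('unknown API version: ' + apiVersion);
  }
}

console.log(createUserService(API_VERSION_1).version()); // 1
console.log(createUserService(API_LATEST).version());    // 2
```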

Base API URL Can Be Changed

To allow for development work and testing, make the base API URL a setting of the client API library.

This is also a key for unit testing.  You may want your live HTTP testing to go to a test-only URL that has some fake data in it.

Note that this base API URL should NOT include a version number in the path.  Most likely the API version will appear in the full URL, but the API version can imply any number of significant changes in both the API and the client library’s operation, so we set it separately, as mentioned previously.  We will use the base API URL in combination with the API version plus the service endpoint to craft the full URL.

Normally, this base API URL would default to your production URL.  Since being able to override the URL is something you will want anyway to allow future development and testing of your client API library, it’s a no-brainer that it needs to be a feature.
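Assembling the full URL from the three separate settings might look like this Javascript sketch; the base URL, version scheme, and endpoint are made up for illustration:

```javascript
// Sketch: the full request URL is assembled from three separately
// configured pieces: base URL, API version, and service endpoint.
function buildUrl(baseUrl, apiVersion, endpoint) {
  return baseUrl.replace(/\/+$/, '') + '/v' + apiVersion + endpoint;
}

console.log(buildUrl('https://api.example.com/', 3, '/users/42'));
// https://api.example.com/v3/users/42

// Pointing the same code at a test server is just a settings change:
console.log(buildUrl('http://localhost:8080/', 3, '/users/42'));
// http://localhost:8080/v3/users/42
```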

Unit Tested and Functionally Tested

Like any piece of code, a client API library should have tests on several different levels, both unit and functional.  This can get a bit tricky since the client API library is meant to interact with the live API over HTTP.  Yet, it is important to have some tests for your client API library without needing the live API.  This is why I see a couple different levels to testing a client API library.

The first level is without any live API available.  The strategy here is to mock the HTTP responses that the API is supposed to return, which allows us to test all of our code independent of the API being available.  What this looks like is a bunch of files containing sample XML or JSON data exactly like what is returned from the real API call; then we process it as though it were real.
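A minimal Javascript sketch of this first level of testing follows.  In a real suite the canned JSON would live in fixture files; it is inlined here, and the function under test is hypothetical:

```javascript
// Sketch: a mocked transport returns canned JSON exactly like the live API
// would, so library code runs with no network access.
const fixtures = {
  '/users/1': JSON.stringify({ id: 1, name: 'Ada' })
};

const mockTransport = {
  get(url) {
    if (!(url in fixtures)) throw new Error('no fixture for ' + url);
    return JSON.parse(fixtures[url]);
  }
};

// Hypothetical library function under test, unaware the transport is fake:
function getUserName(transport, uid) {
  return transport.get(`/users/${uid}`).name;
}

console.log(getUserName(mockTransport, 1)); // Ada
```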

The second level is with a live API available.  This part can be hard, as you need some real user credentials.  You could consider having a dummy user with some fake data within your live API.  One alternative is what the Amazon Web Services PHP SDK does: there is a place within the client library where you can put your actual credentials in a file.  If your actual credentials are defined, the live API tests are enabled.  I really like that technique, because it avoids having to maintain a dummy user account.

I would advise verifying the raw data coming back from the API as its own suite of tests, then also testing the whole system together: raw API data all the way through your client API library, converted into usable objects.

For completeness, you should also test for proper exceptions thrown when you pass in invalid API keys or client key/secret during authentication.

I think it goes without saying that we all know what we should test, but it ends up being a TON of work.  It’s easy for me to say in a blog post that you should test all this stuff, but of course it depends on your project and factors such as longevity, how many users you will have, and the number of different environments the client API library could be used in.

Compare a Javascript client library to a PHP client library.  A Javascript library can be used from a node.js app, a web app using jQuery or other frameworks like AngularJS and Backbone.js, a Google Chrome app, a mobile app, etc.  A PHP library has a much narrower focus: it will be used from PHP code, possibly within frameworks like Symfony or Zend Framework.  The end system is a lot more well known and predictable for PHP compared to a Javascript library.

Use an Existing Library for Doing HTTP Communication

When writing a PHP client API library, I highly recommend using Guzzle.  It makes it very easy to mock HTTP responses, and it knows how to talk OAuth, which makes communicating with many APIs require a lot less code.  Always strive for less code.

Other languages might not have a frontrunner in the area of HTTP communication libraries.  If not, no big deal; you will just need to implement the HTTP mocking yourself through smart design, and handle your specific HTTP authentication method, which for some APIs is probably not too hard at all.

Ability to Get Raw API Responses and API URL Calls for Debug

When I develop a client API library, as with other software, unpredictable things can go wrong.  When things go wrong, I start debugging and look to verify what is happening at the most basic level, which is the HTTP request and response.  If the client API library does not give a developer visibility into this low level of HTTP requests and responses, it is very hard to tell what is happening.

The client API library should make it easy to get out raw API responses and API URL calls along with headers and body to help provide this low level debug capability.

Another reason to provide these raw HTTP request and response details is when dealing with support requests to the API provider.  Customer support will always ask for the HTTP request and HTTP response; they don’t want to deal with your code, and they shouldn’t have to.  If you give them the HTTP request headers, URL, and body, they can re-create the request and see if what they see matches what you are seeing in the raw HTTP response.
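One simple way to expose this is to capture the last raw request and response on the client object itself.  This Javascript sketch is illustrative; a real library would also record status codes and response headers:

```javascript
// Sketch: keep the last raw request and response available for debugging.
function makeDebuggableClient(transport) {
  const client = {
    lastRequest: null,
    lastResponse: null,
    get(url, headers = {}) {
      client.lastRequest = { method: 'GET', url, headers };
      client.lastResponse = transport.get(url);
      return client.lastResponse;
    }
  };
  return client;
}

// Usage with a stubbed transport:
const client = makeDebuggableClient({ get: url => ({ ok: true, from: url }) });
client.get('/users/1', { Accept: 'application/json' });
console.log(client.lastRequest.url); // /users/1
console.log(client.lastResponse.ok); // true
```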

This is not only for problems caused by your client API library.  No API is perfect, there is always the possibility for real errors in data to be caught by users of the client API library.  Our job as creators of that library is to enable the developers using it to identify these problems and easily have the information at their fingertips to debug it.

Example Code

ALWAYS include example code that uses your own client API library.  It should be included within your code base, with instructions on how to run it.  Try to include examples for as many end environments as you can.  For example, Javascript can run in a lot of different environments, so you might need example code for node.js, a web browser, and a Chrome app.

This example code can really inspire someone to create a cool app with the API.  Also, when things go wrong, the first thing I look for in a client API library is some code that I know that works, then see what I am doing differently.

I also recommend having a simple example, and some more advanced examples.  The more examples you can provide, the better off your users will be.  Look at what you are testing within your unit and functional tests for ideas on what kind of examples to create.

Live Demo of Example Code

We want to ship our example code with our client API code base, but it is also very nice to provide live working examples of our client API library in the form of an application of some sort.

For a PHP or Javascript library, the goal would be to have a web page that developers could visit to test out the API using your client API library.  It would have a way to enter credentials to authenticate, then have form inputs to define the request parameters, and return the data from the API.

This is not only for seeing how your client API library works; it is also a great tool for testing out API calls.  You could include this sample web app in the client API library repository, but a better idea is to put it in a separate repository that you link to within your client API library documentation.  The reason to keep it separate is that it should be easy for someone else to spin up and use.  The sample app will have a dependency on your own client API library, and if you are using a package manager, it’s a great way to show how that integration works too.

Library Should be Installable Using a Package Manager

For modern software development, it is both a great idea and expected to have your library installable through the package manager for your language: Bower or npm for Javascript, Composer for PHP, gem for Ruby, etc.

There are a lot of other aspects of your library that this implies, such as maintaining releases and versioning your library to be compatible with the package manager.  You will probably have some extra config files as part of your repository.

Make Releases

Using a package manager will dictate most of this process for you, but define your release procedure and naming convention; don’t just make everybody pull trunk.

If I look at a new project and see no releases, I automatically think less of the project, consider it to be in a beta state, and will start to look for alternatives.  So make releases so you give your users confidence in using your library.

This also helps when it comes to customer support.  Knowing clearly what release a customer is using will make it a lot easier to reproduce any errors.

Quality Documentation

We all know that documentation is important, but quality is preferred over quantity.  Great documentation is made when you step away and can put yourself into the shoes of your users.  What questions are they going to ask?  What is going to be hardest for them to do?  Do you have any uncommon features?

We should include the obvious instructions on how to install both with the package manager and manually.

We should have special instructions on how to test it, how to get at the example code, and where the demo pages are located.

I highly suggest using Github for your repository so that you get Issues and a Wiki with your project.  You can put the most basic documentation in your README, and more details about architecture, types of named exceptions, API version handling, and everything else in the Wiki.

Optional Features (not exactly must-haves but are a good idea):

  • Continuous integration capable through online sources like Travis CI directly integrated into your Github README page.  It can give a great deal of confidence, but as long as you have clear instructions for a developer to run tests, this is just icing on the cake.
  • Providing your source via CDN (for a Javascript library).  Nice as it just takes one line to bring into an app, but for most bigger projects integrating via package manager is used more often.


There are a lot of under-appreciated features that go into a useful client API library, but it has to be designed with as much care and attention to detail as the API itself if you want it to be a business driver for your company.

If you are excited about creating a client API library for your API, but would like some help designing a client API library using these principles, please contact me on LinkedIn and let’s work together!

Using AngularJS ui-router Query Parameters and Complex Directives Together Without Killing App Performance

This is a presentation that I gave at the Boise AngularJS meetup.  The Plunker that goes along with this can be followed along with here.

The Problem

You have an application where you want to allow the user to pan an image.  This is a large image which you cannot display on the screen at one time.  In this example, I'm using the AngularJS shield logo, but a more pertinent example might be a geo-mapping application or a data visualization application.  In an application like this, you may prefetch more data than is actually shown on the screen so it renders quicker as the user performs a few fairly predictable actions (like moving it around).  Think of an application like Google Maps.  If you are looking at your city, you most likely are going to drag it around a little bit, perhaps because you are planning to travel.  You might pre-fetch neighboring map images so that the drag feature can be fast, since you can almost predict it happening.

The other requirement is that the location within the image that the user pans to must be captured in the URL to allow permalinking to a particular view.  We want the user to be able to just copy and paste, or socially share, the URL in the address bar with friends.  This is also a lot like Google Maps, you can find your favorite barcade and email it to your buddies by clicking a share button or copy and pasting the URL.

With that requirement, your URLs will then look something like: /image?x=50&y=100

Where x and y represent the offsets into the image.  With a map, these would probably be latitude and longitude.

You initially implement this within ui-router as a single state with query args, like:

.config(function config($stateProvider) {
  $stateProvider.state('single', {
    url: '/single?x&y',
    controller: 'SingleCtrl',
    templateUrl: 'single.html'
  });
});

See the Plunker in the single.js file for this code.  Then in the Angular way, you decide to encapsulate the image loading and panning in a directive within single.html.  Like this:

<canvas my-image image="imageurl" x="navigation.x" y="navigation.y" width="400" height="400"></canvas>

You design your directive to be attached to a canvas element as an attribute, with directive attributes for the x and y position. Within the directive, you then load the image into memory which requires a fetch from the server, then write it to the canvas element within this directive.

When you start panning around, you notice that it is not very responsive.  There is a delay every time you switch the x and y coordinate.  You can see this within the Plunker when looking at the Single tab.  Use the Pan buttons to move the image around.  (For this exercise, I've added in a 500ms delay to simulate a delay in processing in a complex directive).

This is due to the fact that with ui-router, every time a query arg changes, it is a state change: the controller is executed again, and the template is re-rendered.

The Solution

Follow the pattern of splitting your states with query arguments into two states, one for the "base" state, and a sub-state that handles the query arguments.

Your URLs will then look something like /image/view?x=50&y=100 as opposed to /image?x=50&y=100

The "base" state URL is /image and the child state for the query parameters is /view?x&y.  Like:

.config(function config($stateProvider) {
  $stateProvider.state('double', {
    abstract: true,
    url: '/double',
    controller: 'DoubleCtrl',
    templateUrl: 'double.html'
  });
  $stateProvider.state('double.view', {
    url: '/view?x&y',
    controller: 'DoubleViewCtrl',
    template: '<div></div>'
  });
});

You can see this in the Plunker in the double.js file.  Note: you cannot have a child state URL of just ?x&y; it does not work.  You must have a path segment to match before the ?.

This allows the "base" state controller and template (which contains the directive) to stay as an active state and not be re-executed and re-rendered when just the URL query parameters change.

The fundamental difference between the two-state and the single-state example is that the query parameter handling code is pushed down into the child state, which changes the x and y inherited from the base state's scope, thus causing the directive to update.  Notice in the Plunker how the logic that was in SingleCtrl is now split between DoubleCtrl and DoubleViewCtrl.  This actually leads to a very natural split in the functionality of the code, and it feels like better design.  All the logic that maps the query parameters back to an inherited parent navigation object lives in DoubleViewCtrl, which is the state with the query parameters in the URL.  Makes sense, right?

This "base" state can also, most correctly, be declared with abstract: true, which means it cannot be transitioned to directly.

Now the directive does not get re-created every time a query parameter changes; only the child state changes, not the base state.  This means you can just move the image around and copy it to the canvas very efficiently, because it remains in memory.  Notice this by going to the Double tab within the Plunker, where you can see the execution count: the parent state "double" controller and the directive inside its template only get called once.  Only the child state "double.view" controller gets executed with each click on the buttons.

AngularJS Backend-less Development Using a $httpBackend Mock

I was only able to briefly mention that I used a $httpBackend mock to do backend-less development with AngularJS during my presentation on AngularJS Data Driven Directives, so I created an isolated example in a Plunker to fully implement it.  Here is a link to the Plunker if you want to skip right there; below is an explanation and the embedded Plunker.

The purpose of this example code is to show how to do backend-less development with code that uses both $http and $resource to fully cover the most common server communication options within AngularJS.

Why do backend-less development?

Isolating your AngularJS frontend development from the backend (REST API) allows potentially separate teams to develop independently.  Agreeing on the REST API contract then lets both teams develop in parallel, implementing and consuming the REST service the same way.

This can be accomplished by the simple inclusion of a file in your AngularJS code that creates a mock using $httpBackend.  $httpBackend handles the requests underlying $http, which is also what ngResource's $resource uses.  When you are ready to hit the real backend REST API, simply remove the file from being included, possibly as simple as a special task inside your grunt config.

There are two different flavors of the $httpBackend mock; we want to use the one meant for E2E testing, not the one for unit testing:

AngularJS $httpBackend docs

How do we do it?

We use URL patterns within app-mockbackend.js to intercept the GET and POST calls to the URLs, along with the data for a POST.  We can use regular expressions as patterns, which allows for a lot of flexibility.
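As a sketch, assuming the E2E flavor of angular-mocks' $httpBackend, a mock backend file might register patterns like this; the module name, URL patterns, and fake data are made up for illustration:

```javascript
// Sketch of a mock backend registration using the E2E $httpBackend.
angular.module('app').run(function ($httpBackend) {
  // let template requests through untouched
  $httpBackend.whenGET(/\.html$/).passThrough();

  // intercept GET /api/users/:id using a regular expression pattern
  $httpBackend.whenGET(/\/api\/users\/\d+$/).respond(function (method, url) {
    const id = parseInt(url.split('/').pop(), 10);
    return [200, { id: id, name: 'Fake User ' + id }];
  });
});
```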

The handling of the URLs and HTTP methods and returning "fake" data only works by having some data that persists from request to request.  I store the data in an AngularJS service, ServerDataModel, that emulates storing the data much like the server would.  The AngularJS service recipe is perfect for this because it injects a constructed object, and that object contains our data.  No matter where it is injected, the same instance is shared, so it contains the same data.  Some data is pre-loaded in that object, analogous to having a database on the server that already has some records.

Here is the embedded version of the code, although I do think it is easier to view in its full glory directly on Plunker.

AngularJS Data Driven Directives Presentation

Creating data visualization solutions in Javascript is challenging, but libraries like D3.js are starting to make it a whole lot easier. Combine D3.js with AngularJS and you can create some data driven directives that can be used easily within your web app.

I created a presentation for the Boise AngularJS Meetup to show how to design AngularJS directives that interact with D3.js.  Also, because there is no getting away from it, I include some ideas on how to manage your data models and transform them along the way to guarantee a logical separation between your domain data model and your chart data model.

The link to github for the code is: AngularJS Data Driven Directives

The code in that repository is up and running on my Github Pages site for this project with the demo located here: Basketball Scorebook demo page.

You can get at the slides here: AngularJS Data Driven Directives Presentation Slides

Getting Started with AngularJS Screencast

Here is a video screencast of my Getting Started with AngularJS presentation that I gave at Boise Code Camp 2014. To follow along with the slides, you can get them here: Getting Started with AngularJS Slides

AngularJS Presentation at Boise Code Camp 2014

Having a great day at Boise Code Camp 2014!  I'm giving a presentation this year on Getting Started with AngularJS. Here is a link to my presentation slides


AngularJS Presentation at Boise Web Technologies Group


Here is a PDF of the slides for the presentation on AngularJS that I am giving to the Boise Web Dev Technologies group.

Why You Should Use AngularJS in your Next Web Application

Adding a new javascript framework like AngularJS to your web app requires some careful evaluation.  Most projects already are using jQuery, maybe jQuery UI, and potentially other javascript libraries to handle other functionality not covered by jQuery or even jQuery plugins for UI elements like charts, multi-selects, infinite scrollers, sliders, etc.  Any javascript framework has a tough task to find a space to fill with developers.

Your decision might consider the additional lines of code added, worrying that they may slow down your JavaScript execution or page load times.  You know you will also have a huge initial learning phase where you discover how to use the framework, then try to learn best practices as you start implementing your app.  Maybe you are concerned that it is just a fad framework, too; there are other options for similar JavaScript frameworks like Backbone.js and Knockout.js.

For most, I bet the initial learning phase and temporary slowdown in development is the primary hurdle.  This can slow down a project that could be quickly done with existing technology.  The long term benefit has to be there in order to spend your time learning and figuring out best practices, and ultimately the new piece of technology must be solving a problem in your software architecture to justify the time.

I found when researching AngularJS and going through the process of implementing an app with it, that it was going to be a great tool in making web applications.  For a big list of criteria I recommend to evaluate when considering adding a new component to your web application, check out my article here titled 18 Questions to Ask Before Adding New Tech to your Web App.  Here are some of the criteria from that list and how my evaluation of AngularJS went.  It's my opinion that there is a significant hole in front-end web development that AngularJS fills.

1. Address some problems in your software architecture

When writing web applications, I have objects in the server-side code that often aren’t represented as objects in the client-side code.  For simple apps this might be OK, but when things get complicated, it can be a big help to mirror these objects on both sides.  It also leads to a terminology issue: a Person object on the server can’t truly be talked about as a Person on the client side, because it doesn’t look or feel the same way.  It doesn’t have the same methods, isn’t represented as code, and is sometimes stuffed into hidden inputs or data attributes.

Managing this complexity can be very hard.  AngularJS has the ngResource module, which you use to create services that hook up to REST APIs and return that object from JSON, and you can attach methods to that object so it is a fully functional object.  It feels more like what you are working with on the server side.  Without much work on your end, you have methods like save(), get(), and update() that map to REST API endpoints and are most likely similar to the methods you might have in your Data Mapper on the server side.
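For illustration, here is a sketch of what such a service might look like with ngResource (the module name, URL, and fields are hypothetical; this assumes angular.js and angular-resource.js are loaded):

```javascript
// Hypothetical Person service built on ngResource.
angular.module('myServices', ['ngResource'])
  .factory('Person', ['$resource', function($resource) {
    // get(), save(), query(), and remove() come for free;
    // add an update() action that maps to HTTP PUT.
    return $resource('/api/people/:id', { id: '@id' }, {
      update: { method: 'PUT' }
    });
  }]);

// In a controller: Person.get({ id: 7 }) issues GET /api/people/7 and
// returns an object you can decorate with methods and later $save().
```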

AngularJS encourages you to also deal with models on the client side just like you have them on the server side, big plus there.

I also don’t feel like the design using jQuery + Mustache is elegant when it comes to having an object whose properties are represented in different ways within the web UI.  An example: you have a table of Person objects from the REST API, and you have a button for each Person to denote that they have “Accepted” an invitation, so when they click, you want the checkbox to change and you want the style on the row to change.  In jQuery, you listen for the checkbox change event, then toggle the class on the button and the row.  In AngularJS, the model is the source of truth, so you build everything from that.

See what I mean by taking a look at this jQuery vs. AngularJS plunker I created and compare the type of code you write.
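To give a flavor of the AngularJS side (a minimal sketch; the people array and field names are made up), the checkbox and the row style both hang off the same model property, with no event handlers:

```html
<tr ng-repeat="person in people" ng-class="{accepted: person.accepted}">
  <td>{{person.name}}</td>
  <td><input type="checkbox" ng-model="person.accepted"/></td>
</tr>
```

Clicking the checkbox updates person.accepted, and ng-class restyles the row automatically because both read from the same model.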

2. Enable you to create software more quickly and with less effort

The ng-model and ng-class directives alone cover so many of the common operations we have all been doing in jQuery.  Two-way data binding and saving to the server now take a small number of lines in AngularJS, but in jQuery would require creating your own object and several different click and event handlers.  Switching from watching elements and events to watching a model is a big shift in the right direction.

3. Result in software that is more maintainable

AngularJS encourages using the model as the source of truth, which gets you thinking about object-oriented design on the client side as well.  This allows you to keep in mind the same object-oriented design principles that in general make software more maintainable compared to procedural code.

4. Improve the testability of your software

AngularJS has dependency injection at its core, which makes it easy to test.  Even the documentation on the AngularJS site has testing as a part of every tutorial step, which almost makes it hard NOT to test.
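The idea is easiest to see without the framework at all.  Here is a plain JavaScript sketch (the names are made up for illustration) of why receiving dependencies makes testing easy:

```javascript
// A controller-style function that receives its dependency
// instead of creating it...
function ScoreboardCtrl(scoreService) {
  this.totalPoints = scoreService.getPoints();
}

// ...can be tested by handing it a fake service: no HTTP, no DOM.
var fakeScoreService = { getPoints: function () { return 42; } };
var ctrl = new ScoreboardCtrl(fakeScoreService);
console.log(ctrl.totalPoints); // 42
```

AngularJS's injector does this wiring for you in production code, while angular-mocks lets tests swap in fakes the same way.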

5. Encourage good programming practices

Model as the source of truth, dependency injection, the ability to create directives that decorate elements (which lends itself to reusable and shareable components), and REST API connections to your server: lots of benefits come from just following basic AngularJS usage.

6. Allow you to collaborate more easily with other people

Using models as the source of truth is familiar to anybody who is writing object-oriented MVC server-side software, so this should make it easy to pick up for most web developers.

Also, being able to create directives in AngularJS, combined with dependency injection, makes it easy to create components that can be shared between developers, and this has really excited the developer community.  Lots of existing projects have developed AngularJS directive bridge libraries so you can use their code by tossing an AngularJS directive onto an existing element to decorate it with new functionality.  Examples include Select2, infinite scrollers, and Bootstrap, and Angular has its own UI companion suite.

7. Allow you to become proficient in a reasonable time

It took me around two full weeks to feel proficient, where I was starting to pick out and understand best practices and look at some of the finer points of the framework.  For context, I have a lot of experience developing with JavaScript, jQuery, and jQuery UI, and earlier this year I implemented a project using Backbone.js, one of the other JavaScript frameworks built to solve a similar type of problem as AngularJS.  I felt like knowing how Backbone.js works helped a lot with picking up AngularJS.

18 Questions to Ask Before Adding New Tech to your Web App

Adding a new piece of technology to your web application is not a consideration to take lightly.  There are a lot of factors to consider that can slow down or doom a project if the proper evaluation is not given up front.  In the end, you want the long-term benefit to be worth it without derailing any short-term milestones.

And when I say technology, I mean a framework, library, or automation tool.  Whatever the type, I use the word to encompass anything you might be considering adding to the technology stack that is new to your web application.  This most often involves something you’ve only read or heard about, or maybe only worked through a tutorial with.  In these cases, there is a large unknown component to the technology you want to add.

When evaluating adding any new piece of technology to your web development project, use the criteria below to help you make your decision.

Does the new technology ______:

1. Address some problems in your software architecture

If you have some software design problems that you consistently struggle with given the tools, libraries, and frameworks you currently use, and this new framework will solve them for you, then go for it.  If you are adding the new framework only to learn it, and it doesn’t add value to your product or software process, then pick something else to learn that does add value.

2. Allow you to become proficient in a reasonable time

Depending on the time frame that you have within your development cycle, you may not have a lot of time that can be considered R&D time to learn a new framework.  A new framework should not require a year to become proficient at; that is a lifetime in the development world.  This is also another reason why the skill level of you and your team should be considered.  Master some core technologies first before you start learning something new.  You can always get stuff done with the languages and frameworks you already know if you have to, then refactor later.

3. Enable you to create software more quickly and with less effort

Anything that improves your efficiency is something to invest time in.  Always strive to do more with less code.

4. Result in software that is more maintainable

If the new framework’s best practices result in software that is more maintainable, then in the long run that will pay off.  Your product will be easier to add new features to and easier for other developers to pick up.

5. Result in software that can better adapt to changes

If you have a problem with handling growing complexity without big rewrites and lots of bugs introduced, then integrating a piece of technology that adds to that complexity is not going to make things better; it’ll add to your problems.  If, on the other hand, using the framework allows for better overall design and encourages solid, maybe even SOLID, design principles, then your software project will be better off.

6. Improve the testability of your software

If testable code is important to you, and it should be, then you should always weigh the impact that adding a new framework has on your ability to test.  One clue is to look at how the framework tests itself; that can tell you how much importance it places on testability.

7. Have no significant degradation in performance

If performance is really important to your product, then a new framework should provide a lot of benefits without significantly impacting your software’s performance.  Most of the time, because a framework adds flexibility and a layer on top of your existing stack, it will decrease performance compared to well-designed code in your current tool set.  The opposite extreme is writing absolutely optimized custom code for every interaction, but that trades off speed of development and maintainability.  I always opt for speed of development and functionality first, then look at performance later.  If you know ahead of time that performance is valued highly in the product, then make sure you at least have a plan to optimize if and when issues come up.

8. Encourage good programming practices

A good framework that you add should be using design patterns that encourage you to write good quality code.  The effort to write good code should be no more than the effort to write bad code, so you have no reason not to write good code using best practices from the start.

9. Have good support by its creators

Sometimes the creators are commercial, sometimes the project is open source, and sometimes it is a combination, with open source contributors employed by a big commercial company.  Either way, I consider support to be the same as documentation: how well does the project keep its documentation up to date with changes in its development?  Checking the commit history for a variety of committers over a long period of time also helps grow confidence that a change in the core developers’ lives or careers, or in the company’s finances, won’t derail the project.

10. Have good support by its users

Where does the user community get support?  These are the same channels you will use to learn and to help figure out best practices.  Looking on Stack Overflow to see how many questions are being asked, and how well they are being answered, is a good gauge (look up questions using tags).  Also, seeing what conferences exist and which other companies are using the project helps.  Having a personal relationship with someone who has experience and is willing to answer questions is also a plus.

11. Have a good chance to be actively maintained during the useful lifetime of your design

This depends on your product life cycle: if you redesign your product every 3 years, then the technology only has to stay viable for that long.  If you distribute software for a hospital, the useful lifetime might be 10 years, so are you confident the new piece of technology will be maintained, updated, and still valid that far in the future?

12. Currently in a state where it is complete and functional

If the project is still in a state of flux, with lots of API changes or bugs being introduced, it’s not worth using in your product, because you will be constantly working around bugs or wasting time debugging.  Unless you like that sort of thing.  It is harder to deal with when you are working on a product, because the bugs you are chasing are in someone else’s code.

13. Not change too quickly

Will new releases completely break old ones?  If a project is iterating quickly, and your software relies on stable code with infrequent releases, it may not fit into your project well.

14. Allow you to collaborate more easily with other people

If the framework you are adding is not very popular or not well documented, and you work on a team with others, it may be frustrating for them to work alongside you.  It may also be hard to find contractors who know the framework.  Ideally you should have the ability to share components you made with the public, or to get them on GitHub or other sources.

15. Minimize complexity added to environment setup for developers and production servers

Does the framework add any complicated steps to the build or test process that cannot be automated?  Also, consider the impact on developers setting up a local environment, which is important to truly distributed collaborative work.  Sometimes this is unavoidable, depending on the role of the framework.

16. Cost should not be prohibitive

Sometimes the costs are direct, like you have to pay for a license.  Sometimes the costs are secondary, for example, what if a new framework requires a lot more memory usage on your web server, so you have to upgrade your server’s RAM costing you X/month.

17. Have only reasonable requirements to be brought into your product

If a framework has a lot of requirements, you need to evaluate their impact too.  This might not be a big deal if you never directly use the required projects and they are abstracted away when you use the framework.  But maybe some of them are exposed and become new things you have to learn.

18. Have a software license compatible with your product

Have a commercial product?  Make sure the license allows you to sell online, distribute, or sell a subscription to your product; which of these applies depends, of course, on your monetization strategy.

AngularJS Directive Best Practices

Using directives in AngularJS is one of the great features added to tie complicated JavaScript functionality and client-side templating to your HTML app in a way that allows for re-use and maintainable code.

You can think of a directive as an extension of HTML so that you can create your own elements and attributes.

Just like when you use a <select> element, you expect to see an element that you can click on, which will drop down a list of items, and you expect to be able to click one and have it show up as the “selected” one.  Where an HTML element implies certain functionality, a directive allows us to create our own display elements and/or functionality that can be used as new elements or to extend existing elements.

I’ll use the example of a basketball team where we have many basketball players making up a basketball team.  Wouldn’t it be nice if in your HTML you could write this to display a basketball player in HTML?

<basketball-player></basketball-player>
As you might expect to see in HTML, this would output a basketball player with a few fields like name, number, position, and some stats like points and assists.  To accomplish this, it would take a combination of several HTML elements, including <div>, <span>, and maybe an <input> if we wanted to change the number of points and assists in a scorebook type of app.

For the code examples shown on this page, I'm using AngularJS 1.2.7; they should all work with 1.2.*.

Here is a link to a plunker showing an example of how to create your own custom directive to do this:

The basketballPlayer directive is associated with a template file that is HTML with some AngularJS template markup within.

Take a look at how I implemented the capability to change the points using an input field.  I use a directive as an attribute, ng-model, which is one of the directives provided by AngularJS core. That’s right, many core AngularJS features are directives themselves; a directive can be an element name or an attribute.  This is why directives are one of the most important things to understand when learning AngularJS.

As with any technology, there are many ways to approach a solution, and creating directives is no exception. We have some choices to evaluate, a few things to consider include:

  • Using Attribute vs. Element
  • Using proper HTML5/XHTML5 markup
  • Scope considerations for reusable code

Using the element name

<basketball-player></basketball-player>
may seem like a really cool feature of AngularJS and a huge readability improvement to someone going through source code.  However, this has no chance of validating as proper HTML5.

Even though it is not proper HTML5, this is the preferred way to use a directive that adds new markup to your app.  Another wise suggestion is to use a prefix (think namespace) for directives that you use as an element name, which is mainly future-proofing in case an HTML element comes out that has the same name someday.  Like maybe you were thinking of:

<multiselect></multiselect>
But what if an upcoming version of HTML5 adds a special multiselect tag, and all of a sudden your app breaks?  A suggestion is to use:

<my-multiselect></my-multiselect>
That way you know for sure it’ll be OK in the future.

But something like:

<basketball-player></basketball-player>
I’m pretty sure the chance of that being used in a future release of HTML is none.

Having an element name work as a directive is not enabled by default; you need to explicitly allow for it in your JavaScript code.  I keep all my directives in a directives.js file:

var myDirectives = angular.module('myDirectives', []);

myDirectives.directive('basketballPlayer', function() {
  return {
    restrict: 'AE',
    templateUrl: 'bball-player-template.html'
  };
});

The restrict property has an A for Attribute, and an E for Element.  The default, if you don’t specify a restrict property, is A, which means it’s only enabled for Attributes.

Notice the name of the directive is “basketballPlayer” but I called the element “<basketball-player>”.  This is an automatic conversion (normalization) from camel case to lowercase with dashes splitting words; it’s an AngularJS thing that you can’t control.  There are several valid forms AngularJS accepts in markup, some of which help you out with HTML validation.
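As a rough sketch of the naming convention (the real normalization also strips x- and data- prefixes and accepts :, -, and _ as delimiters), the camel-case directive name maps to a dashed markup name like this:

```javascript
// Convert a camelCase directive name to its dash-delimited markup form.
function camelToDash(name) {
  return name.replace(/[A-Z]/g, function (letter) {
    return '-' + letter.toLowerCase();
  });
}

camelToDash('basketballPlayer'); // "basketball-player"
```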

If you are concerned about HTML validation, you can write it also as an attribute:

<div data-basketball-player>

If you are not familiar with the special attribute prefix “data-”, it is allowed within HTML: any attribute with a prefix of “data-” is treated as a custom attribute and ignored by the browser.  The attribute-based syntax is equivalent to <basketball-player>.

This format is valid HTML5, but if you want to step up to XHTML5 we’re not quite OK yet.  Attributes are required to have a value in XHTML5, so most correctly:

<div data-basketball-player="argument">

Even if you don’t have an argument, it’s appropriate to give it a value.  Recall from plain old HTML:

  <option value="1" selected>My selected option</option>
  <option value="2">Not selected option</option>

Notice the selected attribute doesn’t have an = sign behind it.  This is called attribute minimization.
It is allowed in HTML5, but is not proper XHTML5.  So again, this is up to you and how strict you want to be.  The proper XHTML5 form for this select element is:

  <option value="1" selected="selected">My selected option</option>
  <option value="2">Not selected option</option>

While I like being strict, I think it’s silly to write selected="selected", so I opt out of following XHTML5 when that choice is up to me.

Within AngularJS, when you create a directive, it requires a small amount of JavaScript and a corresponding template.  You determine whether the directive name you have chosen is allowed in your markup as an element, as an attribute, or both; the default is attribute only.  That is the way I prefer it: I like being able to run my HTML code through a validator and have it pass.  This is controlled by the restrict property of the directive, as mentioned earlier in this article.

The AngularJS docs suggest that you use an element when you are creating a component that is in control of the template.  Typical situation for this is in some domain-specific code.  They suggest you use an attribute when you are decorating an existing element.

In the real world, when looking at documentation online and help from Stack Overflow and other sites, you will almost always see developers using the most condensed form: element names and attribute minimization.  Some of this is because they are trying to communicate a solution in the simplest manner, but I suspect this is also what most are using within their apps.

In my own code, I like using HTML5 validation because it helps me check out all my HTML to make sure I didn’t miss a close or open tag, but I don’t typically go so far as requiring XHTML5 validation.

Scope considerations for reusable code

You have the option with a directive to isolate the scope, which means that to access any variables inside your directive’s template, you need to pass them in using an attribute.

Here is an example of the same code from above in a new plunker showing isolate scope:

Notice that the directive now includes an additional attribute:

<basketball-player player="basketballPlayer"></basketball-player>

And the directive code says to grab the player attribute:

myDirectives.directive('basketballPlayer', function() {
  return {
    restrict: 'AE',
    scope: {
      player: '='
    },
    templateUrl: 'bball-player-template.html'
  };
});

The = sign sets up two-way binding between the attribute’s expression and a scope property of the same name, so inside the template you use player as the variable name, as you can see in the bball-player-template.html file:

<div class="basketball-player">
  <div>Name: {{player.name}}</div>
  <div>Number: {{player.number}}</div>
  <div>Position: {{player.position}}</div>
  <div>Points: <input type="text" ng-model="player.points"/></div>
  <div>Assists: <input type="text" ng-model="player.assists"/></div>
</div>

If you need to have some two-way data binding inside of your directive, then there are some situations when you cannot isolate scope.  You can isolate scope and still two-way data-bind when you are only binding to objects, but not primitives.  It’s tricky, so keep it in mind as you design your app.

Isolating scope is a really good thing, much cleaner, and leads to decoupled code, passing in the objects you need from the outside and nothing more.  So if you do need to two-way data-bind inside your directive, consider tossing your variables into an object and passing that to your directive so you can still isolate scope.
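The reason objects work where primitives fail is plain JavaScript reference semantics.  Here is a framework-free sketch (the variable names are illustrative):

```javascript
// Two "scopes" sharing one object, like scope: { stats: '=' } in a directive.
var controllerScope = { stats: { points: 10 } };
var isolateScope = { stats: controllerScope.stats };

// Mutating a property of the shared object is visible to both sides...
isolateScope.stats.points = 25;
console.log(controllerScope.stats.points); // 25

// ...whereas a primitive copy would have been silently replaced:
var primitiveCopy = controllerScope.stats.points;
primitiveCopy = 99; // controllerScope.stats.points is still 25
```

This is why wrapping your primitives in an object keeps two-way binding intact across an isolate scope.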

For some good reading about isolate scope and how it all works along with no scope isolation and the transclude option, check out this stack overflow question:
Confused about Angularjs transcluded and isolate scopes & bindings

AngularJS Directive Best Practice Guidelines

  1. Use your directive as an element name instead of attribute when you are in control of the template
  2. Use your directive as an attribute instead of element name when you are adding functionality to an existing element
  3. If you do use a directive as an element, add a prefix to all elements to avoid naming conflicts with future HTML5 and possible integrations with other libraries.
  4. If HTML5 validation is a requirement, you’ll be forced to use all directives as attributes with a prefix of "data-".
  5. If XHTML5 validation is a requirement, the same rules as HTML5 validation apply, except you also need to add ="value" onto the end of each attribute.
  6. Use isolate scope where possible, but do not feel defeated if you can’t isolate the scope because of the need to two-way data-bind to an outside scope.

