Everyone says they have a REST (or RESTful or REST-like) API. Twitter does, Facebook does, as do Twilio and Gowalla and even Google. However, by the actual, original definition, none of them are truly RESTful. But that’s OK, because your API shouldn’t be either.

The Common Definition

The misconception lies in the fact that, as tends to happen, the popular definition of a technical term has come to mean something entirely different from the original. To most people, being RESTful means a few things:

  1. Well-defined URIs that “represent” some kind of resource, such as “/posts” on a blog representing the blog posts.
  2. HTTP methods being used as verbs to perform actions on that resource (e.g. GET for read operations and POST for write operations).
  3. The ability to access multiple format representations of the same data (e.g. both a JSON and an XML representation of a blog post).

There are some other parts of the common vocabulary of REST (for example, for some developers being RESTful would also imply a URI hierarchy such that /posts/{uniqueid} would be seen to be a member of the /posts collection), but these are what most people think of when they hear “RESTful web service.” So how is this different from the “actual” definition of REST?
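To make that common definition concrete, here is a minimal sketch of a blog API in that style. It uses Ruby and Sinatra purely for illustration; the routes and Post data are hypothetical rather than any real service’s API.

require 'sinatra'
require 'json'

# A stand-in, in-memory store for blog posts.
POSTS = { 1 => { id: 1, title: "Hello REST", body: "First post" } }

# (1) A URI that "represents" a resource: /posts for the blog's posts.
get '/posts' do
  content_type :json
  POSTS.values.to_json
end

# (3) Multiple format representations of the same data.
get '/posts/:id.json' do
  content_type :json
  POSTS[params[:id].to_i].to_json
end

get '/posts/:id.xml' do
  content_type :xml
  post = POSTS[params[:id].to_i]
  "<post><id>#{post[:id]}</id><title>#{post[:title]}</title></post>"
end

# (2) HTTP methods as verbs: POST performs the "create" action.
post '/posts' do
  id = POSTS.keys.max + 1
  POSTS[id] = { id: id, title: params[:title], body: params[:body] }
  redirect "/posts/#{id}.json"
end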

Diverging From Canon

By the common definition of REST, a service defines a set of resources and actions that can be accessed via URI endpoints. However, the “true” definition of REST demands that resources be self-describing, providing all of the control context in-band, within the representation itself. No out-of-band knowledge should be required beyond an understanding of a media type that the resource can provide. From there, it should be possible to follow the relations provided in the “hypertext” of the representation to transfer state or perform any necessary actions.
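For example, a self-describing representation, served with a hypothetical custom media type such as application/vnd.example.post+json, might carry its relations in-band like this (the format and URIs are made up for illustration):

{
  "title": "Hello REST",
  "body": "First post",
  "links": [
    { "rel": "self", "href": "http://example.com/e9f2a1" },
    { "rel": "author", "href": "http://example.com/7c41d0" },
    { "rel": "comments", "href": "http://example.com/e9f2a1/comments" }
  ]
}

A client that understands the media type needs no out-of-band list of URIs; it discovers what it can do by following the links in the representation itself.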

Another common divergence comes through the practice of using HTTP POST (or PUT) bodies with key-value pairs to create and update documents. In a canonically RESTful service, clients should post an actual representation of the document in an accepted media type, which the service provider then parses and translates to create or update the resource.
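To illustrate the difference (the endpoint and media type here are hypothetical), the common key-value practice looks like the first request below, while a canonically RESTful client would send something closer to the second:

POST /posts HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded

title=Hello+REST&body=First+post

POST /posts HTTP/1.1
Host: example.com
Content-Type: application/vnd.example.post+json

{ "title": "Hello REST", "body": "First post" }

In the second case the body is an actual representation of the document, in a media type the server has advertised that it accepts.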

Still more divergence comes in the common practice of denoting collections and elements. A truly RESTful web service has no concept of a “collection” of resources. There are only resources. As such, the proper way to implement a collection would be to define a separate resource that represents a collection of other resources.
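Continuing the hypothetical format from above, a “posts collection” would itself be just another resource, with its own media type, whose representation simply links out to its member resources:

{
  "links": [
    { "rel": "item", "href": "http://example.com/e9f2a1" },
    { "rel": "item", "href": "http://example.com/b3c8d4" },
    { "rel": "next", "href": "http://example.com/5a77f2" }
  ]
}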

Is anything truly RESTful?

Pretty much everyone who claims to have a REST API, in fact, does not. The closest I’ve found is the Sun Cloud API, which actually defines a number of custom media types for its resources and is discoverable from a single known endpoint. Everyone else, thanks for playing.

There is, however, one public and extremely widely used system that is entirely RESTful. It’s called the World Wide Web. Yes, as you browse the internet you’re engaging with a REST service by the true definition of the term. Does your browser (the client) know whether it’s displaying a banking website or a casual game? Nope, it just utilizes standard media types (HTML, CSS, JavaScript) to compose and represent the data. You don’t have to know the specific URL you’re looking for on a website so long as you know the “starting place” (usually the domain name) and can navigate from there.

So REST by its original definition is far from useless. In fact, it’s an ingenious and flexible way to allow for the consumption and traversal of network-available information. What it’s not, however, is a very good roadmap toward building APIs for web applications.

Real REST is too hard.

Truly RESTful services simply require too much work to be practical for most applications. Too much work from the provider in defining and supporting custom media types with complex modeled relationships transmitted in-band. Too much work for clients and library authors to perform complex aggregation and re-formulation of data to make it conform to the real REST style. Real REST is great for generic, broad-encompassing multi-provider architectures that need the flexibility and discoverability it provides. For most application developers it’s simply overkill and a real implementation headache.

There’s nothing wrong with the common definition of REST. It’s leaps and bounds better than some of the methods that came before it and pretty much everyone is already on board and familiar with how it works. It’s a pragmatic solution that really works pretty well for everyone. As they say, if it ain’t broke, don’t fix it.

What’s in a name?

The only problem is that now we have lots of things that we’re calling REST that aren’t. Roy T. Fielding, primary architect of HTTP 1.1 and author of the dissertation that originally defined REST, hasn’t always been happy about that. And maybe he has a point: these services certainly aren’t REST by his definition, and because of the wide propagation of the incorrect definition, most people now don’t really understand the true one. In fact, I don’t claim to have a great understanding of REST as Dr. Fielding defines it.

The problem is that the ship has sailed, and whether it’s true or not, REST now also means any simple, URL-accessible, resource-based service. Perception is reality, and perception has changed about the definition of REST and RESTful. While the true definition is interesting for academic purposes and certainly lies behind the technologies upon which we build every day, it simply doesn’t have a whole lot of use for web application developers. The fact that (nearly) zero services implement true REST for their APIs serves as a testament to that.

What can we learn from REST?

Just because we don’t use true REST doesn’t mean there aren’t a few things we can learn from it. There are a few aspects that I’d love to see come into favor in the common definition. The idea that clients should need to know only a few media types, rather than a specific protocol for each service, breaks down in practice because the overwhelming number of web services each have their own domain-specific resource definitions. However, wouldn’t it be great if there were an accepted application/x-person+json format that provided a standardized batch of user information (such as name, e-mail address, location, profile image URL) that you could request from Facebook, Twitter, Google, or any OpenID provider and expect conforming data? Just because there are lots of domain-specific resources doesn’t mean that it isn’t worthwhile to try to come up with some standards for common information.
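No such format exists today, but a conforming application/x-person+json response from any of those providers might look something like this (the field names are purely illustrative):

{
  "name": "Jane Developer",
  "email": "jane@example.com",
  "location": "Washington, DC",
  "profile_image_url": "http://example.com/jane.png"
}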

REST-like discoverability could also be a boon for some services. What if Twitter provided something like this along with a tweet’s JSON?

{   "actions": {     "Retweet" : { "method":"POST", url:"/1/statuses/retweet/12345.json" },     "Delete" : { "method":"DELETE", url:"/1/statuses/destroy/12345.json" },     "Report Spam" : { "method":"POST", url:"/1/statuses/retweet/12345.json", params:{"id":12345} }   } }

So while REST as originally intended may not be a great fit for web applications, there are still patterns and practices to be gleaned from a better understanding of how such a service could work. For web applications, the case may be that REST is dead, long live REST!


There have been tons of comparisons between Ruby and other languages (mostly Python), ranging from the technical to the epically titled. I’ve always felt that both languages are nice, but I feel much more at home with the expressive, readable Ruby syntax. And don’t get me wrong, I love Ruby for all kinds of language reasons. Re-openable classes, blocks, and the general meta-programming DNA of Ruby make it just wonderfully powerful and useful for me as a developer. However, when I think about it, the language is really only half the story when it comes to why I love being a Ruby developer.

Whether it’s inherent in the makeup of the language and those to whom it appeals or the “buck the mainstream” attitude of Rails’s opinionated-ness, Rubyists tend to take to new things like fish to water. An outsider may look at the insane progression of testing frameworks and other different-ways-to-do-the-same-thing and think it’s a kind of madness, sickness, or chaos. Well, I suppose it is. However, living inside the Ruby hurricane has helped me to grow as a developer more than I could possibly have imagined in the few years I’ve been doing it full time. Thanks to Ruby, I was on board with these technologies months or even years before I feel I otherwise would have been (just a few examples):

  • Test-Driven Development: Already a mainstay in the Ruby community when I joined it, TDD has seen some pretty rapid expansion into other languages and communities. I’m not saying Ruby was the first or only, but it was certainly an early and strong TDD community.
  • Distributed Version Control: GitHub is the future not only of project version control but of open-source collaboration. SourceForge by comparison seems like a lumbering dinosaur. Fork, fix, push.
  • NoSQL: I was playing with CouchDB about a year ago, not realizing that soon it would be a player in the biggest developer flamewar since Vim vs. Emacs.

I’m not trying to say that X language doesn’t promote innovation or adoption of new ideas or that Y has an inferior community. This isn’t a comparison post. I just know that when it comes to new technologies, most Rubyists I know are already jumping out of the plane while still trying to figure out how the parachute works. That sounds like an insult, but I think it’s amazing. We have a community of people so excited about technology that they literally can’t wait to build on it or for it, and share it with others. The huge proliferation of regional Ruby conferences speaks to this: we’re a group of people so intensely interested in learning about what’s new that we can’t limit ourselves to just a few mainstream conferences each year.

For me, being a part of the Ruby community feels like getting a sneak peek at where software development is going six months to two years from now. That’s not to say it’s all rainbows and daisies… the constantly changing landscape requires real, passionate dedication to keep up, or you’ll quickly fall behind, and not all technologies are meant to be deployed immediately to massive-scale production environments (restraint is a skill a good Rubyist must learn and exercise on a regular basis). But I love Ruby because I feel confident that I will be made aware of trends in software development long before I would otherwise be expected to understand them.

It’s fun to live in the future.


There's a company out there that aggressively bundles its products to ensure lock-in. They have an end-to-end chain of devices and software crafted to create an impenetrable, closed ecosystem. They aggressively squash competition, even refusing to let competing products exist on their platform. Yeah, it's *Apple*.

I'm getting tired of the geek world being Apple apologists. We don't hold them to the same standards we hold other companies, especially Microsoft. They make cool stuff and that seems to give them a pass to do whatever they want. I say it's time to stop being complacent.

I love Apple's stuff. It's pretty, and the operating system lets me do my work better. In fact, I don't really think I could do my job if I didn't run OS X. Windows is a terrible environment for developing Ruby, and Linux doesn't have the Adobe suite of products for the design work I do. That's exactly where the problem lies: I *must* use Apple software to perform my job, which means that I *must* buy Apple hardware to perform my job. Apple has a monopoly over my computer purchases and that doesn't sit well with me.

The reason that Apple hasn't gotten in trouble for their blatant product bundling and other anti-competitive tendencies is simple: they've never had the broad install base to warrant that kind of consideration. But with Apple's meteoric success in the consumer notebook market and an ever-increasing market share, how long can that really stay true? If Apple ever tips the scales at Microsoft-level popularity (or even a substantially smaller but still significant percentage of the market) they should be called to task just as Microsoft was.

OS X is an operating system. It can be run on other machines (http://www.insanelymac.com/) and would be, except that Apple says no. I'm sick of being told what hardware I have to buy in order to use their software, and I'm surprised that everyone else not only seems complacent with this fact but revels in the "awesomeness" of Apple. I use Apple because their software is the best (and only) tool for my particular job, not because I feel some bizarre affinity to a consumer products manufacturer.

Maybe some day anti-trust hearings will force Apple to open up and allow any hardware to run OS X. Maybe not. One thing's for sure though, they aren't going to do it unless their hand is forced. Maybe that makes them a successful business, but it doesn't earn them my respect.


I saw the article ‘Google’s First Real Threat: Twitter’ pop up in my RSS reader today from a couple of sources and didn’t click through to read it; I had already learned the power of Twitter Search on multiple occasions (see my previous discussion here). But then something funny happened: I got an e-mail from O’Reilly saying that all three of my RailsConf proposals were accepted. Then I heard from a colleague here at Intridea that his talk was accepted.

This seemed, frankly, too good to be true. So I hit up Twitter Search for RailsConf, and sure enough, everyone seemed to be elated about getting their proposal(s) accepted. This confirmed my suspicion, and I contacted O’Reilly immediately; they weren’t even aware of the problem yet, so I may have been the one who alerted them to the issue in the first place. Confirmation came a few minutes later via the RailsConf Twitter account. While I’m a bit disheartened that I’m not necessarily speaking at RailsConf, it was an object lesson in just how powerful Twitter search has become.

Is Twitter a replacement for Google? No. But Twitter provides an instantaneous connection to what is happening to people right now, and in some (many) circumstances it can give you answers that Google never would, even after they re-index the web. To me, this also serves as a lesson in how to truly compete with Google. Don’t try to “out-Google Google” like Cuil did. Instead, find a way to provide a search that Google can’t touch, one that can’t be created simply by crawling the web endlessly looking for new content.

Pay attention to where Twitter search goes in the coming months and years, because it’s no joke: real time search is a big deal.


Neil McAllister recently wrote a piece on InfoWorld entitled The Case Against Web Apps. In it, he outlines “Five reasons why Web-based development might not be the best choice for your enterprise.” Obviously, as an employee of a web application services and products company, I disagree strongly with that opinion.

Web applications are not a “trend” in enterprise software development. They represent a fundamental shift in how software is developed, implemented and used in today’s technological climate. But to be specific, let’s go point-by-point through the “case against web apps.”

“It’s client-server all over again”

It certainly is. The difference is, we’re not living in a mainframe, dumb-terminal world anymore. Server infrastructure is cheap and scalable, and as more enterprises push their IT infrastructure to the cloud (see another article from InfoWorld: IT needs to get over its cloud denial, or management will get over IT) the need for on-site datacenters will shrink and, for many companies, eventually disappear.

Web applications require no client deployment, no versioning, no installation and no machine-by-machine support. There’s no massive rollout procedure for a new version and no back-breaking process if there’s a small but important glitch in a major release.

“Web UIs are a mess”

When each project has specific and individual user experience needs, isn’t it good to reinvent the wheel a little bit? Having a blank canvas means having the chance to build exactly what is right for this application, not shoehorning an application into pre-defined constraints.

Bad web sites and bad desktop application interfaces are equally impenetrable to the average user. The success of the user experience lies not in the hands of the chosen deployment platform but in the hands of a developer with an eye for user experience. I don’t think it’s a stretch to posit that the majority of such developers work either on web applications or for Apple. When was the last time you saw a beautiful Visual Basic application interface?

“Browser technologies are too limiting.”

“User interface code written in such languages as C++, Objective C, or Python can often be both more efficient and more maintainable than code written for the Web paradigm.” This statement rings false to me; when was the last time you saw a graphic designer who could pop open his trusty Visual Studio 2008 and recompile a project to tweak the user interface? The fundamental advantage of HTML/CSS/JavaScript-based interface development is that it is accessible to a wholly different set of people, people who understand how users think and want to behave but don’t necessarily have the programming chops to implement the actual code.

The proliferation of Flash, Quicktime, and Silverlight can pretty much all be explained by one fact: HTML doesn’t support embedded video. Few web developers turn to Flash or other technologies for much outside of rich multimedia playing. You also can’t consider such a tool a liability when it is available for more than 99% of all web users.

This also brings up a fundamental flaw in “the case against web apps”: if this is an article talking about using web applications for enterprise business applications, how are any of the concerns about browser compatibility valid? Don’t most enterprises have control over what browsers get used by their employees? The refusal of Internet Explorer 6 to kick the bucket certainly seems to indicate that companies have a great deal of control over how employees access the internet.

“The big vendors call the shots.”

Is this untrue of any development platform short of Linux? Companies were at the whim of Microsoft when it released Vista, an in many cases incompatible and largely disparaged upgrade to its operating system. That’s much more of a moving target than the web standards, which, with the exception of Internet Explorer (another Microsoft product), make writing cross-browser, cross-operating-system applications relatively easy.

“Should every employee have a browser?”

You know what? Lots of people e-mail jokes to their families from their work accounts, so let’s not allow people to write e-mails anymore. I heard that sometimes people make personal calls from the office, so let’s get rid of the phones, too. Not only is this point inherently distrustful of the work ethic and general competency of most employees, it doesn’t even hold water: browsers can be used to access internal applications even if all outside internet access is restricted.

In the end, I may have spent too much time here refuting his arguments without making the real case for web applications. So, very briefly, here it is:

  1. Massively Agile: Web applications can be built, deployed, and put into general use in a matter of weeks, not months or years. New features can be rolled out on a continuous basis rather than waiting a year for a new “point release.”
  2. Massively Accessible: Web applications can be accessed from any device that can access the internet, regardless of operating system or system requirements. As mobile phones become more web capable this becomes even more apparent and necessary. Desktop applications require completely separate development efforts.
  3. The Local Data Problem: There’s no need for “shared” folders and collision control on documents in a web application. Everything is on the server, everything is up-to-date as soon as it is accessed.
  4. Web is the new Desktop: Technologies such as Adobe AIR and site-specific browsers have made it so that web applications are becoming more and more like desktop applications, bringing the ease of development and deployment with them.
  5. Collaboration is King: Web applications, due to their centralized nature, can naturally encourage less isolated, more collaborative work between employees.

Web applications aren’t the solution to every problem a business faces. If you need graphically intense 3D visualizations for your business, web applications probably aren’t the way to go for you. But for most businesses, most of the time, web applications will be more cost-effective, more useful, and more agile than the alternative.


The other day when Zoho People was announced I came to the realization that even though I had heard about Zoho in 20 different blog posts over the last year or so, I had never taken a moment to go check out what they were all about. With a full online office suite, it’s definitely something I’m interested in and could use. So why didn’t I take 5 minutes to explore further?

Their Logo. It had been attached to all of the posts about them, and whenever I see it I just instantly lose interest in the company and their products. This is not the logo of a company that wants to be taken seriously for a productivity suite; it’s the logo of some company that sells teddy bears online that you can customize…or something. It is such a stark disconnect from the target demographic that I really just can’t understand the thought process that went into it.

Now I may be shallow in writing off this company solely because I didn’t like the look of their logo (though I would argue that’s a perfectly reasonable thing to do), but the point is that it doesn’t matter at all what features, awesome back-end programming, and next-generation online collaboration Zoho offers. I never found out more about them because the image I was presented was not one that appealed to me.

The design of a company’s logo, its products, its website, and everything else is not a throwaway concern. In a split second, a person might look at your corporate website and decide, “This company doesn’t look professional enough.” There is a critical period in the very first moments a potential customer sees your product that may well inform the rest of your relationship with that customer. Without an appealing aesthetic front, you will never make it to the meat of your pitch, because the customer has already written it off mentally.

This, I feel, was the largest gap between my college Computer Science education and the real world. I’ve always been a designer as well as a developer, but when there was absolutely no emphasis placed on the user experience or the aesthetics of the software that we were building for classes, I got frustrated. Not everyone has an eye for design, and that’s not a problem. But if a product is to be taken seriously, someone along the line has to take it and make it look good, because behind-the-scenes magic will always be just that: behind the scenes.
