
I arrived at Moscone Center on Wednesday the 29th to a mob of people making their way towards registration. When I finally made it to a registration table I got my badge and was off to the races.

I wandered around for a bit with the crowd until the keynote started upstairs an hour later. I was really looking forward to this, so I headed up to the third floor and found a seat. The music was good and on the screens over the stage were some of the great Google Chrome Experiments.

Vic Gundotra (SVP, Google Engineering) took the stage once everyone was seated. The build-up did a great job of pumping up the crowd, and it felt like a really exciting moment when he appeared.

Vic kicked off his presentation by giving a rundown of how great things are going at Google and then turned his attention to Android. While I am not personally the biggest Android fan, I was actually pretty amazed at what I saw of Android at the event. He announced a new Android version, 4.1 Jelly Bean.

He also introduced a set of new features and UI performance improvements codenamed Project Butter.

Surprisingly, I found myself getting excited by all this cool stuff. While I still feel that there are massive challenges with development for Android, it seems Google is hearing our concerns and addressing them. Their demonstration was responsive, like iOS responsive. The triple buffering was really quite amazing: smooth, fast, and liquid. See the video.

Another feature presented was “Google Now”, a tool to assist users with information when they need it most. It was presented as simple visual cards, such as weather, time to home, or time to the office. It constantly learns your common routines and presents suggestions and recommendations whenever you need them. Fortunately it landed more on the clever side of the line than the creepy one, and I thought it was a really great idea.

Then came what seemed to be the moment many were waiting for: the Nexus 7 was unveiled. This is a new 7” Android tablet created by Asus and Google. From what I could see it looked pretty darn cool and not unlike many other 7” tablets out there. In fact my first thought was the Kindle Fire. After a few minutes it became clear Google was betting on this device.

With a price starting at $199, it is very attractive. The common question for ALL 7” tablet makers is “will it compete with the iPad?” In my opinion, probably not, but it certainly appeared to be the best 7” Android tablet available. With no carrier support and Wi-Fi only, though, I was a little disappointed. I was fortunate enough to get one for myself, which I will say more about later in this post.

Next up was the Nexus Q. If there ever was a mass WTF moment, this was it. Most of the people around me were wondering what this thing was. Google presented the “BALL”: an aluminum sphere with multiple connectors and an LED ring around the device. Google was quite proud that this was their first device designed and manufactured solely by Google from the ground up. They must have known that people would be asking questions, because it was the only part of the keynote that included a staged use case of users in a living room.

Basically, this is a streaming device designed to be connected to your home stereo system that will play ONLY music from your Google Play account on standard equipment. It's a way to get music off your phone or desktop and into a more social experience. With the ability to create playlists from your phone or direct account, or your friends' phones or accounts, you can compete for which songs will play when you are all near the device. Honestly, for $299, I am not sure who will buy this. However, it does address the needs of Google Play users who want a simple way to get their purchases onto standard media equipment.

A bit further into the presentation we all heard “Excuse me, cough, hello”... and Sergey Brin appeared on stage. This was a great moment. Who could not be impressed by such an influential figure? Most people were trying to figure out what was on his head. He was, in fact, wearing a Google Glass device. He was telling the crowd he really wanted to try something special with Google Glass when suddenly the screens showed people in an airship above Moscone Center, each of them wearing a Google Glass device.

He explained that this was the first time we could actually see, in realtime, what it looks like to jump out of a blimp... The idea here, of course, was to show that Google Glass is all about realtime and being unobtrusive during activities you enjoy. He talked to the jumpers briefly and then they were off. Sure enough, a live stream showed them diving towards the Center's rooftop. Once they landed, they handed something off to BMX bikers and a climber, who took it down the exterior walls of the Center to another BMX biker, and finally into the keynote itself.

I have to admit it was kind of exciting... Sergey gave the audience time to clap and cheer, then told everyone at the event they would have a chance to get their own Google Glass Explorer Edition for $1,500. I think many of us were hoping to receive one as a schwag item, but our collective disappointment faded quickly because Sergey handed the keynote back over to Vic and the 2012 schwag was revealed.

I did notice a sense of entitlement from most attendees at the event; speculation about the schwag was the most talked-about topic up until this point. So what did we get?

  • Samsung Galaxy Nexus, which I have to admit is a really nice phone. I have already switched over to it from my iPhone.
  • Nexus 7 Tablet. After using it for a bit, it really “kills” ALL other Android tablets.
  • Nexus Q, which is still in its box.

I spent the rest of the day strolling the second floor, which was the vendor and products floor.

Seeing all the Google products at work and in use by real industries made me feel really good about being there. Everything from Android-based Braille consoles to Android-based flying mini-drones to Google TV. (I was satisfied they weren't going to abandon such a GREAT product.)

The first day of the event ended with a big party with a few performers, including Train. People were excited and enjoyed themselves late into the night. The overall mood was really positive.

I spent days 2 and 3 of the event in and out of code labs and product presentations, and talking with a lot of developers. Some of the more interesting sessions I attended were:

Overall, I had a great time at Google I/O. I was fortunate to receive some great schwag, see some interesting presentations, and connect with a lot of great developers to talk about product ideas, strategy, and code.

Visit our Flickr page for some more great photos from the event!


Velocity Conf knocked my socks off. This was my first O'Reilly conference and I can really see what the hubbub is all about. Velocity was host to many top industry pioneers, like the dudes from Etsy who created StatsD, Mitchell Hashimoto who works on Vagrant, and reps from Opera, Mozilla, and Google, among other big names.

The conference was split into a Venn diagram of operations, development, and devops, so it was easy to experience talks that were on the fringes of most attendees' skillsets. Being mostly into development and UX, the web performance track was my home turf. However, I did learn some operations material that helps me level up beyond just being able to scale up my meager home NAS server. Many of the pure operations talks had to do with visualization of systems; it was nice to hear the discussion involve many HCI principles that we use on the Intridea UX team on a day-to-day basis.

There was so much material on the web performance side of things that I could go on for days about it, but I’ll just share a few of my favorite tips from Velocity for this debriefing. I often see front-end developers and engineers struggle with exactly how to measure and address web performance issues, and many of the Velocity presenters covered ways to effectively optimize page load; and yes, image compression was one of those things mentioned.

DOMinate the Document Object Model

OK, so let's think about DOM rendering: it happens serially, right? That means we have to make sure we don't structure our markup in a way that severely blocks the “thread” while loading. Modern browsers have implemented work-arounds like “speculative loading” to download resources while still parsing the rest of the DOM. This is all well and good, but speculative loading will still fail if we have any inline script tags that use document.write() to append markup to the document. That is a sure-fire way to block the DOM. Not all document.write() is entirely evil, but one should definitely be wary of it.
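
To make the difference concrete, here's a minimal sketch (the widget URL is made up for illustration). The first snippet stops the parser cold, and its document.write() defeats speculative loading; the second builds the script element and appends it, so parsing continues while the file downloads.

<!-- Blocking: the parser must pause here, and document.write defeats speculative loading. -->
<script>
  document.write('<script src="http://widgets.example.com/widget.js"><\/script>');
</script>

<!-- Non-blocking alternative: inject the script element so the parser keeps going. -->
<script>
  var s = document.createElement('script');
  s.src = 'http://widgets.example.com/widget.js';
  s.async = true;
  document.getElementsByTagName('head')[0].appendChild(s);
</script>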

Something cool that Chrome for Android is doing is spinning up multiple processes when loading a document, so true concurrent DOM rendering is probably coming in the near future. The sooner a user sees the browser paint elements on the screen, the faster they will perceive the page to be. You never want to give them the “white screen of death”.

Optimization for Mobile

With responsive design all the rage (and with good reason), there are special considerations to make to optimize for multiple devices. Jason Grigsby drilled down into this at Velocity in his talk “Performance Implications of Responsive Design”. We obviously want to limit the size of any asset on a mobile device if necessary, but the W3C spec still needs to catch up with an image tag that allows multiple sources for multiple breakpoints. Until then, we have this:

Picturefill, a JS lib that allows us to specify multiple images with data attributes. In my opinion, the current landscape of responsive design feels very much like back when CSS and semantic markup had become en vogue. Browsers and W3C spec will need to catch up, and until then we will have to put some hacks in place to heighten the UX.
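
For reference, the Picturefill pattern at the time looked roughly like the markup below (attribute names as I recall them from the picturefill README, with placeholder image names), with the library swapping in the right source for the current breakpoint:

<div data-picture data-alt="Our team at the summit">
  <div data-src="team-small.jpg"></div>
  <div data-src="team-medium.jpg" data-media="(min-width: 400px)"></div>
  <div data-src="team-large.jpg" data-media="(min-width: 800px)"></div>
  <!-- Fallback for browsers without JavaScript -->
  <noscript><img src="team-small.jpg" alt="Our team at the summit"></noscript>
</div>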

Tools

Now for the tools…

The W3C now has a couple of recommendations in the works for Timing APIs that measure a slew of attributes surrounding page speed. They are super easy to use, too; all you need to do to leverage them is call:

window.performance

…and BAMMO, you’ve got yourself an interface in which you can piece together just about any metric for page load, memory allocation, etc. that you want.
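
As a minimal sketch, here's how you might pull a few common metrics out of the Navigation Timing API once the load event fires (property names per the W3C draft; where you send the numbers afterwards is up to you):

window.addEventListener('load', function() {
  // Wait one tick so loadEventEnd has been populated.
  setTimeout(function() {
    var t = window.performance.timing;

    var metrics = {
      dns:      t.domainLookupEnd - t.domainLookupStart,
      connect:  t.connectEnd - t.connectStart,
      ttfb:     t.responseStart - t.navigationStart,            // time to first byte
      domReady: t.domContentLoadedEventEnd - t.navigationStart,
      pageLoad: t.loadEventEnd - t.navigationStart
    };

    console.log(metrics); // or post these to your analytics endpoint
  }, 0);
});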

If you just want to get a good rundown of these metrics, but don’t want to build it yourself, then use the PageSpeed Critical Path tool, a project headed by Bryan McQuade at Google. Bryan, Patrick Meenan, Dallas Marlow, and Steven Souders went over the tool in depth at Velocity, and you can see their presentation here.

A Stronger, Faster Web

Velocity's theme is centered around “Building a Faster and Stronger Web.” What amazes me is that after leaving the conference I already feel more confident in my ability to begin building a faster, stronger web.

Velocity was a conference that didn’t disappoint. It wasn’t a dull offering of overdone presentation topics and speakers – it actually offered interesting panels and presentations on a variety of really engaging topics, all centered around that single theme. I’m looking forward to heading back next year and learning what it will be like building the web in 2013!


Background

As a Rails developer I normally spend most of my time on backend development, implementing features and functionality. I am really confident in my Rails skills for backend work, but I had rarely felt much happiness doing frontend work before. Things changed while working on the new responsive intridea.com: the project gave me some interesting frontend work, and I fell in love with a number of UI techniques and JS tricks, especially Pjax, which I want to talk more about in this post.

What is Pjax?

Pjax is a fantastic tool. I love the formula that describes it well: pushState + Ajax = Pjax. When you need to reflect page state in the URL as permanent links, plain Ajax cannot do it gracefully. Pjax is a good choice when you want to update only specific parts of a web page while still generating permanent links. Personally, I also like to call Pjax “Perfect Ajax”.
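
Conceptually, a Pjax navigation boils down to the sketch below (not the real plugin, just the idea): fetch the new content with Ajax, swap it into a container, and push the new URL onto the history stack so the address bar and back button still work.

$(document).on('click', 'a', function(e) {
  e.preventDefault();
  var url = this.href;

  $.get(url, function(html) {
    $('[data-pjax-container]').html(html);              // update only the container
    window.history.pushState({}, document.title, url);  // keep a permanent, shareable URL
  });
});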

Pjax Magic in Rails 3.2

Let me show you how we use Pjax to speed up page loading on the current intridea.com website. First, we use the rack middleware rack_pjax, which simply filters the page so that pjax requests get the appropriate response. Add the gem to the Gemfile as the first step:

 gem 'rack_pjax' 

Second, we include this rack application in our Rails stack. This is easy too; just add this line to your config/application.rb file to configure the middleware:

 config.middleware.use Rack::Pjax 

Third, we install the Pjax jQuery plugin into the assets/javascripts folder. You can download the Pjax source from this link. As with any other JavaScript plugin, be sure to include the file in the application's JavaScript manifest file as below:

//= require jquery
//= require jquery_ujs
//= require chosen.jquery.min
//= require jquery.pjax
//= .... other js libs

OK, with the above three steps of installation and configuration we now have Pjax plugged into our application. It's time to enable Pjax for the current website. Basically, we want to add a data-pjax-container attribute to the container whose HTML content will be updated by the Pjax response. The Pjax data container can be put in any layout or view file in our Rails application. That sounds cool, right? In our case, we place a single Pjax data container in the layout, wrapping the main content yield (something like <div data-pjax-container><%= yield %></div>).

Wait, we're not finished yet. Now we enable the Pjax magic for the application. For example, we turned all links into Pjax triggers as below:

$(document).ready(function() {
  $('a').pjax('[data-pjax-container]', {timeout: 3000});
});

This means every link on the web page will trigger a Pjax request to the server, and the Pjax data container will be updated with the Pjax response. Here we set the timeout to 3000ms; you can set it higher if you use a custom error handler. Besides timeout, there are a bunch of other options for the pjax function. They are almost the same as jQuery's $.ajax() options, with a few Pjax-specific additions; take a detailed look at Pjax's docs.
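
As an illustration, here's a sketch of passing a few of those options for a specific set of links (option names as I recall them from the jquery-pjax README, so double-check against the docs; the selector is just an example):

$('a.blog-link').pjax('[data-pjax-container]', {
  timeout: 3000,     // fall back to a full page load after 3s
  push: true,        // add a history entry (the default)
  fragment: 'body',  // extract this fragment if the server returns a full page
  scrollTo: 0        // scroll back to the top once the container is swapped
});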

Caveats

We have some JavaScript that binds to DOM elements inside the Pjax data container. For instance, we want to validate our contact form via JavaScript, but a Pjax-based page reload will prevent the JavaScript validator from working. That's because we only initialize the validator when the document is ready, and a Pjax reload does not reload the whole document, which means we have to call the validator again after the Pjax request is done. Pjax fires two basic events while updating your data container: pjax:start and pjax:end. To solve the validation issue above, we need to call that function in the pjax:end callback as well.

$(document).ready(function() {
  var ApplicationJS = com.intridea.ApplicationJS;

  $('a').pjax('[data-pjax-container]', {timeout: 8000});
  ApplicationJS.validate_contact_form('#contact_form');

  $(document).on('pjax:end', function(){
    ApplicationJS.validate_contact_form('#contact_form');
  });
});

Similarly, if you want to show a loading indicator while a Pjax request is in progress, you might do something like this:

$(document).on('pjax:start', function(){
  // this will show an indicator on the <li> tag in navigation.
  ApplicationJS.navSpinner('nav li.active');
});

Finally, notice that Pjax only works with browsers that support the history.pushState API. Here is a table to show you all the related information about popular browsers. When Pjax is not supported in a browser, the $('a').pjax('[data-pjax-container]') calls will do nothing and the links work as normal links. So don't worry about it breaking anything.

Have fun playing with Pjax, and please share your feedback and your own use cases with us on Twitter!


I'm a huge fan of Heroku. I mean I'm a huge fan of Heroku. Their platform is much closer to exactly how I would want things to work than I ever thought I would get. However, in the past few weeks Heroku has had a number of serious outages... enough that I started thinking we needed a backup plan for when our various Heroku-hosted applications were down. That's when I realized a big problem, and it's not just a problem with Heroku but with any Platform-as-a-Service:

The moment you need to have failovers or fallbacks for a PaaS app is the moment that it loses 100% of its value.

Think about it: to have a backup for a Heroku app, you're going to need to have a mirror of your application (and likely its database as well) running on separate architecture. You will then need to (in the best case) set up some kind of proxy in front of Heroku that can detect failures and automatically swap over to your backup architecture, or (in the easiest case) have the backup architecture up and ready to go and be able to flip a switch and use it.

The backup architecture is obviously going to have to be somewhere else (preferably not on EC2) to maximize the chance that it will be up when Heroku goes down. Which leads to the glaring problem: if you have to mirror your app's architecture on another platform, all of the ease of deployment and worry-free infrastructure evaporates. That leaves you with two options:

  1. Put your faith in your PaaS provider and figure that they will (in general) be able to do a better job of keeping your site up than you could without hiring a team of devops engineers.
  2. Scrap PaaS entirely and go it on your own.

A "PaaS with fallback" simply doesn't work because it's easier to mirror your architecture across multiple platforms than you control than it is to mirror it from a managed PaaS to a platform you control.

Don't Panic

Note that I'm not telling anyone to abandon Heroku or the PaaS concept; quite the opposite. My personal decision is to take choice #1 and trust that while Heroku may have the occasional hiccup (or full-on nosedive) they are still providing high levels of uptime and a developer experience that is simply unmatched.

Heroku has done a great job of innovating the developer experience for deploying web applications, but what they need to do next is work on innovating platform architecture to be more robust and reliable than any other hosting provider. Heroku should be spread across multiple EC2 availability zones as a bare minimum and in the long run should even spill over into other cloud providers when necessary.

If they can nail reliability the way they've nailed ease-of-use even the most skeptical of developers would have to take a look. If they could say with confidence "Your app will be up even if all of EC2 is down" that's yet another powerful selling point for an already powerful system.

The Third Option

There is actually a third option: if your PaaS is available as open source then you will be able to run their architecture on someone else's systems, giving you a backup that is at least a middleground between the ease of PaaS and the reliability of Do-it-Yourself. The two current players in this arena are Cloud Foundry and OpenShift.

While Heroku currently has them beat for developer experience (in my opinion) and the addon ecosystem makes everything just oh-so-easy, it might be worth exploring these as a potential middleground. Of course, if Heroku would open source their architecture (or even a way to simply get an app configured for Heroku up and running on a third-party system with little to no hassle) that would be great as well.

In the end I remain a die-hard fan of PaaS. It's simply amazing that, merely by running a single command and pushing to a git repo, I can have a production environment for whatever I'm toying with available in seconds. After the past few weeks, however, I am spending a little more time worrying about whether those production environments will be up and running when I need them to be. And that's the problem with PaaS.


Who is old enough to ride the big kid rides at the carnival? Us! That's right, we turn 5 this month, and to celebrate we're getting a facelift; you're going to love our new site! But first things first: a birthday toast to honor our past and celebrate our future.

In the Beginning

In the beginning there was a single idea: build a different kind of web development company. Co-founders Dave Naffis, Yoshi Maisami, and Chris Selmer partnered with like-minded DC developers to execute on this vision, and together they created Intridea, a unique and agile software design company.

The co-founders had a few ideas they kicked off with:

  • Create an entirely virtual company, allowing us to hire the best developers no matter where in the world they live.
  • Be a place to work with the best and the brightest minds in our industry.
  • Be at the forefront of technology and leverage cutting-edge tech in our software solutions.
  • Build applications for customers while gaining insight into their business problems.
  • Use those insights to build products and help customers stay competitive in their industries.

What began as a couple of people with ambitions to create a better kind of software company quickly evolved into a team of twelve talented Ruby on Rails developers by the end of the first year in 2007. Intridea was built from the ground up with raw talent and focused determination, without the aid of any VC funding. Fast-forward five years and today the Intridea team is made up of nearly fifty talented engineers, designers, project managers, and partners, all working collaboratively on some of the most cutting-edge software projects in the world.

5 Years Of Awesome

Of course, it wasn't all peaches and cream. You don't grow from two to fifty, launch hundreds of web and mobile applications, and create award-winning products without a few hiccups along the way. We had our share of growing pains but we responded to each stumbling block with the same kind of innovation we use to help our clients solve problems:

  • Talent: Because Intridea came into the web development field in 2007 when Rails was just starting to gain traction in the U.S., we wanted to do what we could to support the Ruby language, the Rails framework, and their communities. Doing so ensured that we (along with other companies) would be able to thrive in the web development space, and that people would continue learning and using the language in the business world. To that end, we began sponsoring regional and national conferences, local user groups, hackathons, and encouraging our developers to continually work on open source software projects.

    The hundreds of hours we devoted to teaching classes and presenting at conferences provided us with a reputation for excellence in the Rails community. Therefore, when we experienced periods of rapid growth we were able to bring on the additional talent we needed, even amidst a climate of high demand and low supply in the Ruby on Rails ecosystem.

  • Communication: When communication across a distributed team became difficult we created Presently (now known as Socialspring Streams) to bridge the distance between ourselves and enable more effective collaboration with real-time communication. We iterated on the product as we grew, adding features for sharing video, direct messaging, group collaboration, and more as we needed them.

    Realizing Presently could be of use to other companies as an internal micro-blogging tool we worked to make the product viable for enterprise use. Last year we released Socialspring, a suite of enterprise applications for internal knowledge-base creation, questions and answers, collaboration and communication, and secure link shortening with analytics.

Good Ideas

In a very short time, one small company with a ton of talent has produced some amazing applications. We're always thinking of ways to solve problems with software and it's evident from the products we've built for clients and consumers. Our work on mobile applications like Tradui, a Creole-English translation app to help aid workers in the wake of the Haitian earthquake crisis, and OilReporter, a crowdsourcing tool to track and report sightings of oil and harmed wildlife after the Gulf Coast Oil Spill, gave us the opportunity to show the world how software can revolutionize disaster relief.

Most recently, Michael Bleigh created QUP.TV, a service that sends you email alerts when Netflix adds new titles to their lineup. GigaOM, SlashGear and other prominent blogs have covered the release of this new product.

We're helping clients like Amazon, Agilysys, Safeway, Oracle, Mashable and hundreds of others create software to revolutionize their industries. Check out our shiny new portfolio page to learn more about how we helped Amazon Mechanical Turk leverage the power of good design to engage their users, or how we helped Point of Sale industry giant Agilysys redefine how POS systems are designed.

The Next Five Years

The first five years have been an exciting start to a long journey. If corporations were really humans we'd only just be starting kindergarten, but we like to think of startup years more like dog years; it's no easy feat for a startup to survive its first five years, but we've done it with style.

What will the next five years bring? Hopefully many more opportunities to design and develop exceptional software and user experiences. We're partnering with companies like GoodData to help build custom dashboard and analytics tools; we're strengthening our mobile team so we can bring even more companies into the mobile future; we're perched at the very edge of the tech frontier, ensuring we not only know the latest technologies but also have the experience to know which tool is best for the job. We're confident we'll be able to forge ahead no matter what the future may hold, because we think of problems as exciting challenges, not as insurmountable walls. We're a group of programmers and designers, but more than that we are a group of people who love to solve problems, whether it's for our clients or for ourselves.

So we raise our laptops today to ourselves, our clients, our partners, and the tech communities we thrive in, and we cheer to a future of responsible growth, intelligent design, and transformative work.

Very early, I knew that the only object in life was to grow... – Margaret Fuller


Creating a more beautiful web, one application at a time.

Our website has always been more than just a sales tool for displaying our services. As a web development and design company our website is our brand; it embodies the essence of who we are: our values, our culture, and our discipline.

We don't take a redesign lightly; when we approach the task of a redesign we begin with long, thoughtful discussions about our company, our image, where we're going, and what we want to communicate about ourselves to the rest of the world. Our website has to exemplify our passion for elegant and functional design, quality code, collaborative work, and our obsession with emerging technologies.

For this redesign we sat down and had conversations with Intrideans where we asked questions like, "What is Intridea to you?" and "What do you love about this company?" and "What do you want to see more of on our website?". We reflected on their responses and went to the design team with pages of documentation including feedback on our culture, our path, and our history.

The new website was designed and developed with all of this in mind; the team worked to ensure everything we love about our company is reflected in the layout and design elements. We also added some great new features we're excited about:

  • You'll find a new Community Section that highlights all the events we're sponsoring, speaking, and training at, a collection of our open source projects, slides to all of our presentations, and links to our most frequently trafficked blog posts for quick and easy reference.
  • We revamped our Portfolio Section with more in-depth case studies and illustrative examples of our work and added recent clients like Amazon, SocialCode, and Oracle.
  • We used responsive design techniques to ensure our website looks good no matter what kind of device or screen size you're using to look at it.
  • New About and team member pages that do a better job of showing you the kind of geniuses we have on the Intridea team.

For a more in-depth look at how we designed the new intridea.com, I interviewed Chris, the lead designer on the project, and Andy Wang, the lead developer on the project.

Design

Renae: What did you draw inspiration from in the new intridea.com design?

Chris: Grid systems (960), traditional graphic design principles, modularity, timelessness, and simplicity. The new design had to maintain the Intridea image we've created over the last five years while redefining who we are and what we do to help us move forward.

Renae: What elements from the previous site did you want to preserve?

Chris: We held stakeholder interviews with our company founders and other Intrideans to get input on how they see Intridea as a company. The consensus was Intridea is an approachable, friendly, professional company that offers its employees an opportunity to do great things without feeling like they're in some sort of software grind-house. We sought to maintain that feeling and felt Intrideans, new clients, and visitors to the site should feel welcome and excited about us. We worked to make sure the new design helped evoke that kind of excitement.

Finding a way to incorporate the previous branding into a new aesthetic was a real challenge at first as I immediately wanted to scrap our previous designs and start fresh. Yet, after several initial comps I realized helping our brand to "grow up" didn't mean I needed to start from scratch.

So we scrubbed the site down and gave it a fresh coat of paint and detailed the hell out of it. I realized that our branding elements, the hills, people, etc, could be used to create delight in the design which is always fun. Take a look at our 5th anniversary image on the homepage for example - I created a person for everyone in the company. We preserved quite a bit of the site actually; the structure is pretty similar and elements of our original branding found their way in there without being the focal point.

Renae: How do you want people to feel when they see the site?

Chris: To feel they're experiencing something new and exciting. This is a totally different experience from what we've presented in the past. The content has been overhauled, there's a lot more focus on what we do and how we do it, our community image is strong and vibrant, and there's a focus on us, the people who work here, which didn't really exist in the previous design. What we're really trying to communicate in the design is that we're a company of ridiculously talented and creative designers and developers who love working hard and solving problems.

The redesign gave us an opportunity to really show others that we're different from our competitors - we're not gimmicky, we're not trying to hide anything, we're just giving everyone a very clear picture of who we are. That's why I wanted the design to have a lightness to it; I think it gives a surprising sense of ease.

The recent trend is to use elements of minimalist, Swedish-inspired design but we wanted to show you can use interesting and playful design elements and still be serious and professional at the same time.

Renae: What were some of the challenges you encountered in the design process?

Chris: We do an incredible amount of things; yes, we make software but we also write, participate in conferences, teach others, make products, contribute to open source projects, support user groups, and help clients solve all sorts of interesting problems. I needed to find a way to make all of that information consumable.

In order to communicate everything we wanted through an elegant user experience, we used a rigid grid system to control the abundance of information. We cover a lot of ground in small, modular bits. The hardest challenge, though, was designing unique views for all the different breakpoints:

  • Almost every section of the site has a unique view leading to 14 different layouts.
  • We opted for 4 different breakpoints in our responsive design - 1280, 1024, 768, and 480.
  • Each of the 14 layouts had to be adjusted to meet the needs of each breakpoint, resulting in a total of 56 templates.

I'm really psyched about the new website. Consistently, clients have said our “friendliness” is what made the difference when it came to choosing a new technology partner. I think we've struck a nice balance in maintaining our friendly image while at the same time showing how serious we are about our work.

Development

Renae: Talk about your decision to start with a new, fresh codebase.

Andy: The old codebase for the site still had elements from the original version in 2007. Although we had been adding features and updating the codebase over the years, it was time to scrap the old code and start anew. I built it with Rails 3.2.3, which made adding new features a lot easier throughout the development cycle. It also puts us in a better position to scale and improve the site in the future.

Renae: What were some of the challenges you encountered?

Andy: I only encountered two challenges on this project. The first was adjusting responsive views and tweaking JavaScript effects; those cost me some time. All the front-end improvements and enhancements were a challenge for a pure Rails engineer like me, but on the upside, this project turned me into a fan of front-end work!

The second, larger challenge was transferring all the old data from S3, including thousands of blog posts, products, projects and all related images and assets. I decided to move the old S3 repos to new folders and wrote a script to migrate all useful data from the old database for my local environment and then push my local database to our staging environment on Heroku. After that everyone was able to share the REAL data.

Renae: Did you learn anything new as you worked on this project?

Andy: Sure, I learned the skills to build responsive views for multiple browsers/devices. It’s really cool to build a website which is responsive to many devices at the same time.

I also learned how to use Pjax with Rails. A good lesson from Pjax is that if you have other JavaScript acting on the Pjax content, you need to make sure you re-run the relevant JavaScript in the Pjax callback.

Using rails_admin saved us a lot of time in building the admin sections and features. I think it's great to use rails_admin for a pure CMS. Sometimes rails_admin doesn't work well for complex admin logic or complicated admin actions, but it's good for classic CRUD actions.

I also added integration testing for the contact form to make sure the form is always working correctly with Javascript validations.

Renae: What aspect of the code/architecture are you most proud of?

Andy: I'm really proud of several things:

  • At the Rails controller level I mapped all pages of the same category to a single controller, which helps the UX and UI designers integrate their designs and markup quickly.
  • I made the website structure easy to read from the codebase.
  • Abstracting business logic into simple models and displaying only what makes sense to the content manager in the backend.
  • Customizing rails_admin for multiple photo management with associations.
  • Adding Pjax and other Javascript effects, such as the blog pagination with two modes and contact form validations.
  • Responsive views control and adjustments.

Renae: How do you feel this site represents Intridea?

Andy: From a developer's perspective the new site represents us really well. First, it's awesome! Second, the new site uses many technologies such as responsive views, HTML5, the latest Rails, Pjax, OmniAuth, and rails_admin, and Intrideans love using new technologies in their projects!

In Short

TLDR: we've got a shiny new website. It's made from the dust of unicorn bones and infused with the spirit of a thousand minotaurs. It's simple beauty and hardcore function all rolled into one; it's the new intridea.com. We hope you'll enjoy it as much as we do.


Last weekend I participated in the first Hack the Midwest, a 24-hour hackathon in Kansas City. I was very impressed by the event: nearly 100 developers from the Kansas City area participated with tons of API sponsors and great prizes. I decided to go it alone and throw my hat into the ring with an idea that I had been thinking of for a while: what if there were email alerts for Netflix Instant? 24 hours later, the result was Qup.tv.

I was fortunate enough to be awarded top honors at the competition and since then the response to Qup has been phenomenal! It's been covered in GigaOM, SlashGear, and Silicon Prairie News (and even tweeted about by Roku) and has already grown to more than 600 users in under a week!

Qup is a simple application that links your Netflix account to your email address. You receive periodic emails when Netflix adds new titles to their streaming catalog, and you can queue titles, watch them, or visit their Netflix page with one click. You don't even have to be signed into Netflix to queue up titles, so you can add them from your phone or from a public computer without the hassle of signing in. Qup also pulls in Rotten Tomatoes scores for movies and gives you the power to filter the titles you receive based on Netflix rating, Rotten Tomatoes rating, and more coming soon.

The best part about the success of Qup for me has been demonstrating that something real, polished, and useful can be developed in just one day by just one person. It's one of the reasons I'm so passionate about web development: one person really can make a dent in the world.

If you're a Netflix user, I hope you'll give Qup a spin and if you're a developer I hope you'll take a look around and find a local hackathon to participate in. It's a lot of fun, you will learn a lot, and you might just get something you want to keep building out of it!


Rails views are typically rendered after some controller action is executed. But the code that powers Rails controllers is flexible and extensible enough to create custom rendering objects that can reuse views and helpers, but live outside of web request processing. In this post, I'll cover what a Rails controller is and what it's composed of. I'll also go over how to extend it to create your own custom renderers, and show an example of how you can render views in your background jobs and push the results to your frontend.

What's a Controller?

A Rails controller is a subclass of ActionController::Base. The documentation says:

Action Controllers are the core of a web request in Rails. They are made up of one or more actions that are executed on request and then either render a template or redirect to another action. An action is defined as a public method on the controller, which will automatically be made accessible to the web-server through Rails Routes.

While Base suggests that this is a root class, it actually inherits from ActionController::Metal and AbstractController::Base. Also, some of the core features such as rendering and redirection are actually mixins. Visually, this class hierarchy looks something like:
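
AbstractController::Base
  └── ActionController::Metal
        └── ActionController::Base  (plus mixins such as AbstractController::Rendering, AbstractController::Layouts, and AbstractController::Helpers)

(This is a simplified sketch of the inheritance chain; the same mixins show up in the ActionMailer example below.)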

ActionController::Metal is a stripped-down version of what we know as controllers. It's a rackable object that understands HTTP. By default, though, it doesn't know anything about rendering, redirection, or route paths.

AbstractController::Base is one layer above Metal. This class dispatches calls to known actions and knows about a generic response body. An AbstractController::Base doesn't assume it's being used in an HTTP request context. In fact, if we peek at the source code for actionmailer, we'll see that it's a subclass of AbstractController::Base, but used in the context of generating emails rather than processing HTTP requests.

module ActionMailer
  class Base < AbstractController::Base
    include AbstractController::Logger
    include AbstractController::Rendering  # <- ActionController::Base also uses
    include AbstractController::Layouts    # <- these mixins, but for generating
    include AbstractController::Helpers    # <- HTTP response bodies, instead of email response bodies
    include AbstractController::Translation
    include AbstractController::AssetPaths
  end
end

Custom Controller for Background Job Rendering

For a recent project, I needed to execute flight searches in background jobs against an external API. Initially, I planned to push the search results as a JSON object and render everything client-side, but I wanted to reuse existing Rails views, helpers, and route path helpers without redefining them in the frontend. Also, because of differing client performance, rendering server-side improves page load times for users in this instance. Architecturally, the flow I wanted was: a background worker runs the flight search, a custom controller renders the results with our existing views, and the rendered HTML is pushed to the frontend.

The requirements for this custom controller were:

  • access to route helpers
  • renders templates and partials in app/views

Unlike a full blown ActionController, this custom controller doesn't need to understand HTTP. All it needs is the result of the flight search from background workers to be able to render an html response.

The full code for the custom controller is:

class SearchRenderer < AbstractController::Base
  include Rails.application.routes.url_helpers  # rails route helpers
  include Rails.application.helpers             # rails helpers under app/helpers

  # Add rendering mixins
  include AbstractController::Rendering
  include AbstractController::Logger

  # Setup templates and partials search path
  append_view_path "#{Rails.root}/app/views"

  # Instance variables are available in the views,
  # so we save the variables we want to access in the views
  def initialize(search_results)
    @search_results = search_results
  end

  # running this action will render 'app/views/search_renderer/foo.html.erb'
  # with @search_results, and route helpers available in the views.
  def execute
    render :action => 'foo'
  end
end

A runnable example of this source code is available at this github repository.

Breaking down the above code, the first thing we do is inherit from AbstractController::Base:

class SearchRenderer < AbstractController::Base
  def initialize(search_results)
    @search_results = search_results
  end
end

We also save the search results in an instance variable so that our templates can access them later.

  include Rails.application.routes.url_helpers  # rails route helpers
  include Rails.application.helpers             # rails helpers under app/helpers

These modules provide the Rails route helpers like resource_path and resource_url, as well as any helpers defined in app/helpers.

Next we add the mixins we need to be able to call the #render controller method. Calling #append_view_path sets up the view lookup path to be the same as our Rails controller views lookup path.

  include AbstractController::Rendering
  include AbstractController::Logger

  append_view_path "#{Rails.root}/app/views"

Then we define a controller action named execute that'll render out the response as a string. The #render method used here is very similar to the one used by ActionController.

  def execute
    render :action => 'foo'
  end

To use this renderer object, you need to initialize it with a search results object, and call #execute:

search_results = [{:foo => "bar"}, {:foo => "baz"}]
renderer = SearchRenderer.new(search_results)
renderer.execute

Summary

Rails ActionControllers are specific to HTTP, but their abstract parent class can be used to construct generic controller objects that coordinate actions outside of an HTTP context. Custom controller objects can be composed from the available mixins to add common functionality such as rendering. These custom controllers can also share code with existing Rails applications to DRY up templates and helpers.


I really like using MongoDB and Mongoid, but a while back I ran into some shortcomings with querying timestamps. The problem was that I wanted to query only part of a timestamp, such as the day, week or year. So for example, let's say we need to find all users that signed up on a Wednesday.

In SQL there are date functions that let you parse dates inside your query (although they seem to vary between engines). So in Postgres, you could do something like this:

select * from users where extract(dow from created_at) = 3; 

Note: Wednesday is the 3rd day of the week.

But MongoDB doesn’t have any native support for parsing a date/time inside the query. The best you can do is compare ranges, like this example using Mongoid:

User.where(:created_at.gte => "2012-05-30", :created_at.lt => "2012-05-31") 

Great, that finds us all users created last Wednesday. But what about all users created on any Wednesday, say in 2012? That would typically require building a query with different ranges for every Wednesday in 2012. Talk about tedious and repetitive. I think it’s safe to say that when faced with such a task most developers will end up just looping over each user, comparing the dates in Ruby.

User.scoped.select { |u| u.created_at.wday == 3 && u.created_at.year == 2012 } 

Eeek! This might work with small collections, but once you have a bunch of users it’s sub-optimal.

So I know I just said there were no native date functions in Mongo. But recently I was excited to find a solution that kind of works. It turns out that date/time types in Mongo get stored as UTC datetimes, which are basically just javascript dates stored in BSON. So it’s possible to drop down into javascript in your query using $where. With Mongoid it might look something like this:

User.where("return this.created_at.getDay() == 2 && this.created_at.getFullYear() == 2012") 

Note: JavaScript's getDay() also numbers days from Sunday as 0, so Wednesday is 3 here as well.

Now things seem to be looking up for us. But alas, the MongoDB documentation for $where warns of major performance issues. This makes sense because what’s really happening here is each user record is still getting accessed and each date is still getting parsed with javascript. Furthermore, we can’t index our search. So this solution is probably only marginally better than looping over each record in Ruby.

What I really wanted was a way to query by just day of week, or month, or hour, minute, second, etc. And I decided the best way to accomplish that would be to parse each timestamp before it gets saved, and then store all the additional timestamp metadata along with the record. That way I could query timestamp parts just like any other field, with no parsing. And as an added bonus, it should be even faster than using the native date functions with SQL!

So I started thinking of all the fields I would want to store, and I came up with the following list:

  • year
  • month
  • day
  • wday
  • hour
  • min
  • sec
  • zone
  • offset

But that’s a lot of fields cluttering up our model, especially if we’re storing two different timestamps like a created_at and updated_at. Well fortunately this is one area where MongoDB really shines. We can simply nest all this metadata under each timestamp field as BSON. And since we’re using Mongoid, we can also override the serialize and deserialize methods to make the interface behave just like a regular time field. So this is where the idea for the mongoid-metastamp gem came from. Here’s a simple usage example:

class MyEvent
  include Mongoid::Document
  field :timestamp, type: Mongoid::Metastamp::Time
end

event = MyEvent.new
event.timestamp = "2012-05-30 10:00"

Now, calling a timestamp field returns a regular time:

event.timestamp
=> Wed, 30 May 2012 10:00:00 UTC +00:00

But you can also access all the other timestamp metadata like this:

event['timestamp']
=> {"time"=>2012-05-30 10:00:00 UTC, "year"=>2012, "month"=>5, "day"=>30, "wday"=>3, "hour"=>10, "min"=>0, "sec"=>0, "zone"=>"UTC", "offset"=>0}

Now at last, we can performantly search for all Wednesday events in 2012:

hump_days = MyEvent.where("timestamp.wday" => 3, "timestamp.year" => 2012) 

If you were paying close attention you may have also noticed that zone is included in the metadata. That's because Mongoid Metastamp has some powerful features that allow you to store and query timestamps relative to the local time they were created in. But I’ll have to write more about that in a follow up post.


If you're running any kind of service that uses e-mail as a communication method (which is just about everyone) and you want your users to be able to take some kind of action from the email (as just about everyone does) then you should be using Signed Idempotent Action Links. Now I know what you're thinking, "Signed Idempotent Action Links? But EVERYONE knows what those are!". I know, but here's a refresher anyway (ok so I made up the term, but it's descriptive!).

They are links that perform an action (such as "Delete this comment" or "Add this to my favorites") with an included signature (that associates the URL to a specific user and verifies parameters) and are idempotent (meaning that accessing them multiple times will end in the same result). In a nutshell, they are URLs that you can click through from an email and they perform a desired action:

  • whether or not the user is signed in
  • without any additional button presses or clickthroughs

So now that we've gone over what we're dealing with, why would you want to use them? Well, because not everyone is logged into your service when they're checking their email. In fact, if they're checking it from a smartphone or a public computer they most likely aren't logged into your service unless you're Facebook. It is the friendliest way to allow your users to perform simple actions through email.

Calm Down, Security People

Of course the reason not to use SIAL is that if a link can perform an action without requiring a login then, well, anyone can perform that action if they have the link. Very true! However, this problem is not enough to completely bar the use of SIAL because:

  1. These links are being sent to people's email accounts. If your email account has been compromised, you're already in way more trouble than SIAL can give you.
  2. Developers can counter this issue by making any SIAL action reversible. Have a "Delete" link? Make sure you have an "Undelete" function in your app somewhere.
  3. Convenience trumps security for many applications. Sure, don't use SIAL to initiate wire transfers or for anything that costs money, but most applications have plenty of non-world-ending actions that can benefit from instant access.

How to Use SIAL

There are two important things to consider when using SIAL:

  1. You MUST be able to verify any actionable content in the URL.
  2. You SHOULD only allow the single action via the SIAL URL. Do not log the user in from a SIAL action.

So, how do we implement something like this? Well, it's really quite simple. Here's a method similar to how it was implemented for Qup.tv. First, we create the means to sign an action in a User model:

require 'digest/sha1'

class User
  # ...

  def sign_action(action, *params)
    Digest::SHA1.hexdigest(
      "--signed--#{id}-#{action}-#{params.join('-')}-#{secret_token}"
    )
  end

  def verify(signature, action, *params)
    signature == sign_action(action, *params)
  end
end

What we're doing here is creating a SHA1 hash of a string that is built using a known formula and includes all of the elements needed for the action:

  • id is the id of the user
  • action is the name of the action that we're taking. For Qup the action might be queue, watch, or view.
  • params are any additional parameters that alter the outcome of the action. Again, for Qup this could be the id of the title to queue, watch, or view.
  • secret_token is a unique token for the user that is not shared publicly anywhere. You can generate this using SecureRandom or find another way to implement a secret token. This should not be something like a user's password hash as it should not be determinable from any info a user would know.

So now that we have these methods for our user, how do we go about creating the actual URLs that we'll be using? Well, if we have a simple Sinatra application we can do it like so:

helpers do
  def authenticate_action!(signature, user_id, action, *params)
    @current_user = User.find(user_id)
    unless current_user.verify(signature, action, *params)
      halt 401, erb(:unauthorized)
    end
  end

  def action_path(user, action, *params)
    "/users/#{user.id}/#{action}/#{user.sign_action(action, *params)}/#{params.join('/')}"
  end
end

get "/users/:user_id/favorite/:signature/:item_id" do
  authenticate_action!(params[:signature], params[:user_id], 'favorite', params[:item_id])
  @item = Item.find(params[:item_id])
  current_user.favorites << @item unless current_user.favorites.include?(@item)
  erb :favorite_added
end

As you can see, all we're really doing here is:

  1. Creating a helper that will display a 401 unauthorized message if the signature provided in the URL does not match the proper signature for the provided user.
  2. Creating a helper that will help us to generate URLs for our actions.
  3. Showing an example of how one such action could be built.

Notice that in this example I am making no use of session variables or any kind of persistent state. In fact, you should make sure that you ignore all such variables. If another user is signed in at the moment, the link should still work for the user it was signed for.

One other thing to notice is that the item is only added to favorites if it isn't already there. This gives the action idempotence: whether you run it once or 100 times the result is the same, making sure that the item is in the user's favorites.

SIAL is not a technique that you will use in every instance, but the benefits for the user can be big in terms of convenience, and it's often the small conveniences that make a big difference when developing software that people love.

If you liked this post (or didn't) and you use Netflix Instant, go check out Qup and get email alerts (with Signed Idempotent Action Links) when new titles are added.
