
Regardless of industry, staff size, or budget, many of today’s organizations have one thing in common: they demand the best content management system (CMS) on which to build their websites. With requirement lists that can range from 10 to 100 features, an already short list of “best CMS options” shrinks even further once “user-friendly,” “rapidly deployable,” and “cost-effective” are added to the list.

There is one CMS, though, that not only meets the core criteria of ease of use, reasonable pricing, and flexibility, but also offers a long list of other valuable features: Drupal.

With Drupal, both developers and non-developer admins can deploy a long list of robust functionality right out of the box. This powerful, open source CMS allows for easy content creation and editing, as well as seamless integration with numerous third-party platforms (including social media and e-commerce). Drupal is highly scalable, cloud-friendly, and intuitive. Did we mention it’s cost-effective, too?

In our three-part “Why Drupal?” series, we’ll highlight some features (many of which you know you need, and others you may not have even considered) that make Drupal a clear front-runner in the CMS market.

For a personalized synopsis of how your organization’s site can be built on or migrated to Drupal with amazing results, grab a free ticket to Drupal GovCon 2015, where you can speak with one of our site migration experts, or contact us through our website.

______

Drupal in Numbers (as of June 2014):

  • Market Presence: 1.5M sites
  • Global Adoption: 228 countries
  • Capabilities: 22,000 modules
  • Community: 80,000 members on Drupal.org
  • Development: 20,000 developers

Open Source:

The benefits of open source are exhaustively detailed all over the Internet. Drupal itself has been open source since its initial release on January 15, 2001. With thousands of developers reviewing and contributing code for nearly 15 years, Drupal has become exceptionally mature. All of the features and functionality outlined in our “Why Drupal?” series can be implemented with open source code.

Startup Velocity:

Similar to WordPress, deploying a Drupal site takes mere minutes, and the amount of out-of-the-box functionality is substantial. While there is a bit of a learning curve with Drupal, an experienced admin (non-developer) can have a small site deployed in a matter of days.

Information Architecture:

The ability to create new content types and add unlimited fields of varying types is a core Drupal feature. Imagine you are building a site that hosts events, and an “Event” content type is needed as part of the information architecture. With out-of-the-box Drupal, you can create the content type with just a few clicks; absolutely no programming required. Further, you can add fields such as event title, event date, event location, and keynote speaker. Each field has a structured data type, which means they aren’t just open text fields. Through contrib modules, there are dozens of other field types, such as mailing address, email address, drop-down list, and more. Worth repeating: no programming is required to create new content types, nor to create new fields and add them to a content type.

Asset Management:

There are a number of asset management libraries for Drupal, ensuring that users have the flexibility to choose the one that best suits their needs. One newer and increasingly popular asset management module is Scald (https://www.drupal.org/project/scald). One of the most important differences between Scald and other asset management tools is that assets are not just files. In fact, files are just one type of asset. Other asset types include YouTube videos, Flickr galleries, tweets, maps, iframes, and even HTML snippets. Scald also provides a framework for creating new types of assets (called providers). For more information on Scald, please visit https://www.drupal.org/node/2101855 and https://www.drupal.org/node/1895554.

Curious about the other functionalities Drupal has to offer? Stay tuned for Part 2 of our “Why Drupal?” series!


At Intridea, we use boxen as part of our employee onboarding process and for equipment upgrades. The selling point: a shared, automated process for getting machines up and ready to do real work.

What Happened?

Recently, a co-worker (Jeff) received a new laptop, and I pointed him to our boxen-web instance. The problem: it didn't work for him. I was confused, because running boxen on my machine worked just fine. After digging in and troubleshooting, I learned that the initial install process for boxen is quite different from running the boxen command afterwards.

It had been a few months since anyone installed from scratch with our boxen setup, and the only error message I was getting was pretty vague.

sudo: /opt/boxen/rbenv/shims/gem: command not found
sudo: /opt/boxen/rbenv/shims/gem: command not found
...

I could tell both ruby and git were failing to set up properly, but it wasn't much to go on. I didn't yet know whether the failure was due to Jeff's configuration or the main boxen configuration. After having him try it with his personal manifest files removed, it was confirmed to be a problem with the main boxen configuration. The next step was to try a fresh install myself.

Fixing our Boxen Setup

First, I partitioned my hard drive and installed a fresh copy of OS X 10.9.2. I removed the files for my personal manifest from the boxen repository in order to keep the install as simple as possible. I verified that I was receiving the same error message listed above.

Next, I tried a fresh install from the mainline boxen. This also failed, with a different git-related error. I've mentioned before that the boxen documentation is lacking, and I was only finding issues from people having the same problem with no solution. I was out of options.

I decided to merge mainline boxen with our boxen setup. I found that doing this created conflicts in both the Puppetfile and Puppetfile.lock. I fixed the conflicts in the Puppetfile. The official boxen docs say that it isn't necessary to resolve conflicts in the Puppetfile.lock. So I tried:

rm Puppetfile.lock
bundle exec librarian-puppet install --clean

However, those steps only gave me this error:

Could not resolve the dependencies. 

So I backed up a step and manually resolved the conflicts in the Puppetfile.lock. This, while extremely tedious, worked.

After merging mainline boxen, I was left with the same error referring to git. I next looked at what version of git our Puppetfile was using and what was available in the puppet-git repository. I found that mainline boxen was using 2.3.0, while 2.3.1 had been released more recently. On a hunch, I upgraded to 2.3.1 and performed yet another fresh install of boxen. This time it worked.
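
For reference, the version pin is a one-line change. If your Puppetfile follows the stock boxen layout (which defines a github helper at the top of the file), the fix would look roughly like this:

# Puppetfile: pin puppet-git to the fixed release
# (assumes boxen's stock `github` helper is defined above).
github "git", "2.3.1"   # was: github "git", "2.3.0"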

Preventative Maintenance for your Boxen

After going through the above hellish process, here are some recommendations on how to ensure smooth operation in the future.

Keep a fresh install of OS X on an external drive

  • You should keep whatever point release you expect to send laptops out with (10.9.2 in our case).

Use the nuke tool

  • After an initial boxen run, you can restore the machine to pristine status by using the nuke tool that comes with boxen.
/opt/boxen/repo/script/nuke --force --all 

The above command will remove boxen entirely from your machine. It isn't necessary to reinstall OS X.

Create a dummy user for your boxen repo

  • You will need a GitHub user for your organization which has no personal boxen configuration. This makes it simple to determine whether a problem lies in personal configuration or in your company-wide boxen configuration (site.pp).

Test the install often

  • In the above scenario, I left out the part about the homebrew version that boxen uses also being broken. Testing more often would have allowed us to catch and fix each of these minor problems one by one instead of as part of a marathon debugging session.

The mainline boxen is not always correct

  • In the scenario above, I figured merging mainline ought to get us to a working state. However, mainline boxen had a broken puppet-git module (2.3.0) at that point, while 2.3.1 was the version that worked. So be wary of tracking mainline too closely.

Got any tips or tricks for using boxen? Send us a tweet or message us on Facebook!


In the year since Intridea started using Boxen (GitHub's Puppet-based automation solution for OS X), a lot has happened. Not only are folks all over the company embracing it, but we've also become extremely proficient with it. Boxen's documentation, while improving, has us wanting to share our configurations as a way to help new users. However, because it contains client information, we have been unable to share our Boxen repository.

Until now.

Today, we are open sourcing our Boxen repository.

To do this, we are maintaining two repositories: one public and one private. The only difference is in the project-related details. Each user's project manifest is empty in the public version, like so:

class projects::people::gary {
}

While the private version contains whatever projects they would normally include as part of daily use:

class projects::people::gary {
  include projects::intridea::omniauth
  include projects::client::secret
}

The user's project manifest is then included in a standardized way:

class people::gary {
  include projects::people::gary
}

This layer of indirection, along with the personal project inclusion pattern shown above, makes maintenance between the two repositories dead simple. It draws a clean line between public and private information: sensitive details live exclusively in each user's private project file, while the public repository carries only a blank placeholder, ensuring that no client information is leaked to the Web.

Why go to all the trouble of open sourcing configs?

Every piece of knowledge can help someone else, no matter how trivial it may seem. When we first got set up with Boxen, writing a personal manifest seemed like a daunting task. Boxen's documentation, although better now, is still lacking. In many cases, reading others' configurations can be a bigger help than sifting through Boxen's code and Puppet's (good, but exhaustive) documentation.

Just as we found Plyfe's boxen repo to be so helpful that we used it as a baseline for our own configs, our goal in open sourcing our Boxen configuration is to pay it forward for the next organization that takes up Boxen.

Got any tips or tricks for using Boxen? Send us a tweet or message us on Facebook!



A few months ago, we quietly launched a dashboard to help remote, distributed companies like us keep in touch. Working remotely is great, but it requires a level of strategy: with differing time zones, multiple projects, and varying schedules, it can be difficult to keep everyone in the loop. Houston is our solution.

Since its launch, we've used Houston to see who's on vacation and what cool projects everyone's working on, and to revamp our company handbook via a HipChat-powered Q&A knowledge base. Best of all? We open sourced it, so you can use it too. Check out this two-minute video and get the Houston dashboard up and running for your organization!


We can't wait to see how you use Houston. Fork it, add your own tools, and share it back. Houston is built entirely with Ruby on Rails and Bootstrap, and integrates with Google Apps, Harvest, and Confluence.

Here are just a few ideas coming up on our own Houston roadmap:

  • Who's online? A rollup of statuses across HipChat, Google, and GitHub
  • Analyze GitHub commits to visualize team skills
  • Where is everyone? A location and time zone indicator
  • Bookmarks: cool finds and helpful tips extracted from HipChat logs

What do you want to add to Houston? Let us know!


Trying to get up to speed with D3 can be a daunting task. A quick Google search will reveal hundreds of different starting points, almost all of which involve writing great swaths of code to build even the most basic of charts. Don't believe me? Here's the official example for a basic line chart. D3's biggest barrier to entry is also its biggest strength: the flexibility it provides to create nearly any sort of visualization you can imagine.

In order to help foster good practices (by creating reusable charts) and to give back to a community that is so willing to share its knowledge, Intridea has released a library that will help you get started much more easily: the D3 Fakebook. With just a few lines of code, you can render that same line chart:

// From the D3 Basic Line Chart example: http://bl.ocks.org/mbostock/3883245
d3.tsv('examples/data/aapl_stock.tsv', function(error, data) {
  var chart = new D3Fakebook.TimeScaleLineChart('#singleLine', {
    data : data,
    valueName : 'close'
  });
  chart.render();
});

Basic line chart

Easy as pie.

Getting Started

There are a few ways to start using this library today. The first is to install it with Bower ($ bower install d3-fakebook), which will pull down the dependencies (underscore.js and d3). You can also grab the files directly; both compiled JavaScript and the CoffeeScript source are available.

Once you have the files loaded in your browser, you can access the different chart types under the D3Fakebook namespace. Each chart type takes a set of options that allows you to configure it.

Wait - "Fakebook"? What’s up with the name?

The concept of a "fakebook" comes from jazz: back in the 1950s, popular jazz songs (also known as standards) were often compiled into collections of books. Each song's transcription would have the chord changes and the main melody written out, but not the whole song, since the musicians would make each piece their own by improvising on top of the chord changes.

Our goal with the D3 Fakebook is to do precisely that: give you a starting point and some guidance to make amazing, beautiful visualizations that are purely your own, without forcing you down any specific path.

Follow the Changes

This is a living library, something we're using in our own projects (check it out in action on http://humanprogress.org), and though it's a bit rudimentary right now, we'll be adding more charts in the near future. If you want to help out, or if you've found a bug, feel free to submit a pull request on GitHub, and we'll incorporate it as soon as possible!

A few more roadmap items: we intend to add AMD support, compiled JavaScript modules (in addition to making the existing CoffeeScript files into actual modules), and nicer transitions.

Keep the conversation going! We'd love to hear from you!


Time. The most common excuse (and legitimately so) for avoiding open source contribution. Finding time outside normal work hours is difficult, and the idea of grinding it out after the work day isn't always appealing.

There's a perception that open source is a huge undertaking, but what if I told you it didn't have to be? That there are ways to contribute to open source without absorbing your entire weekend? Don't believe me? Well, keep reading…

Contribute as a user

Especially in the Ruby community, it's quite common to use open source libraries for your day-to-day tasks (even if the actual work is proprietary). As a user, finding ways to enhance the tools you already use is an excellent avenue for open source contribution.

Here are some simple, quick, and easy ways to contribute:

  • File a bug report with steps to reproduce
  • Improve the documentation
  • Contribute example usage code snippets
  • Blog about your experience using it
  • Fix a bug, and submit your fix to the maintainers
At Intridea, we use this approach often. For example, during a project utilizing Google spreadsheets, we found a bug in the roo gem that prevented it from reading cell comments. So we researched the problem, fixed it, and submitted a pull request back to the maintainer. This required no special open source time: the fix was necessary for the project anyway, and sharing it with the community took only a few minutes.
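
While a fix like that works its way upstream, Bundler makes it easy to point your project at a patched fork in the meantime. A minimal sketch (the fork URL and branch name here are hypothetical):

# Gemfile: use a patched fork of roo until the upstream fix is merged.
# The fork URL and branch name are placeholders.
gem 'roo', git: 'https://github.com/example/roo.git', branch: 'fix-cell-comments'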

Open source in small ways

Another way to do open source without devoting your weekends and nights is to create small projects. In the Ruby community, this can mean creating a gem that solves a common problem.

Recently at Intridea, we created a small open source project: the confluence-soap gem. During a project that required us to interact with wiki pages in Confluence, we discovered that many of the Ruby libraries were incomplete, out of date, or lacking documentation.

Instead of waiting until our gem implemented every API method, though, we took the opportunity and released it as-is. You'll notice the Confluence SOAP API includes roughly 160 different methods, while our gem features only twelve. We implemented only the methods that were useful to us, and that's okay! You don't have to come up with a huge, elaborate project to make it worthwhile for open source.
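
The design principle is easy to see in a sketch. Here is a deliberately tiny, hypothetical wrapper in the same spirit; the Savon-based transport, method names, and response keys are assumptions for illustration, not confluence-soap's actual API:

require 'savon'  # a Ruby SOAP client; the real gem's transport may differ

# Wrap only the handful of SOAP calls you actually need.
class TinyConfluence
  def initialize(wsdl_url, user, password)
    @client = Savon.client(wsdl: wsdl_url)
    @token  = @client.call(:login, message: { in0: user, in1: password })
                     .body[:login_response][:login_return]
  end

  # One of the ~160 API methods; add others only when a project needs them.
  def get_pages(space_key)
    @client.call(:get_pages, message: { in0: @token, in1: space_key })
           .body[:get_pages_response][:get_pages_return]
  end
end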

What are you waiting for?

Open source doesn’t have to be time consuming. Leveraging everyday tasks is a simple way to participate in the open source community. As a developer, you are in the best position to enhance, create, and improve the tools you already use! So get strategic with your work, find those opportunities, and contribute!

Got any tips or tricks for open source? Let us know!

Want to see more? Check out Intridea's newest open source project, Houston: Mission Control for Distributed Teams.


I recently joined Intridea as CTO after a two-year foray into the realms of 3D printing, embedded hardware, and robotics. My time spent hacking on physical-world hardware problems was super challenging and invigorating, and has given me new perspectives on how to approach software projects. Much as I enjoyed physical hacking (and I still do), I found that my time away made me yearn for my Ruby and software roots; I was ready for a new challenge.

Jumping back into the software world at Intridea is the perfect fit for me - I could not have made a better decision or found a better company to join. In the short time I’ve been here, I’ve found my fellow Intrideans to be some of the most talented engineers, UI/UX designers, and project managers I've had the pleasure of working with.

Intridea’s full embrace of distributed development is one of this company’s most unique and compelling features. There are Intrideans all over the USA and all over the world. This allows me to keep living in my beloved Portland, OR while still interacting with a world-class, worldwide team developing gorgeous web and mobile applications. Though I haven’t had the opportunity to meet all my fellow co-workers in person yet, I have still felt a very warm welcome from the team virtually, through IMs, emails, and video chats.

I have several initiatives I will focus on in the coming months that I believe will not only help Intridea internally but also be valuable open source contributions to the community. I’ve always been a deployment geek of sorts, and after a few-year hiatus from rubyland I am back and see that deployment is still not a fully solved problem. ;) I’m hoping to help fix that.

Warning: I’m dropping down into deployment geek speak for the next few paragraphs!

One approach I’m excited about is a new develop / deploy-local / deploy-remote framework. The framework will use LXC Linux containers to capture all of the application’s code, data, and state, and then be able to run this amalgamation either locally via Vagrant or remotely on a hosted server or cloud.
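
To make the deploy-local half concrete, a Vagrantfile (Vagrant's Ruby DSL) can boot the same container image you would ship to a server. This is a minimal sketch assuming the vagrant-lxc provider plugin; the box name and resource limit are placeholders:

# Vagrantfile: run the app's container locally (assumes the
# vagrant-lxc plugin; box name and limits are hypothetical).
Vagrant.configure("2") do |config|
  config.vm.box = "example/precise64-lxc"

  config.vm.provider :lxc do |lxc|
    # Cap the container's memory via cgroups, mirroring the
    # resource-allocation idea described below.
    lxc.customize "cgroup.memory.limit_in_bytes", "1024M"
  end
end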

These types of container systems are all the rage lately, with the release of docker.io and the Warden project buried in VMware's Cloud Foundry PaaS. Google even recently released lmctfy ("Let Me Contain That For You").

All three of these projects add a wrapper around LXC Linux containers and the cgroups feature found in newer kernels, which allows for fine-grained resource and visibility management. They are effectively "containers" for your application to run inside of, abstracting it one additional layer from the underlying infrastructure.

This approach is not quite as heavy as a full-on virtual machine: the containers share the kernel and userland binaries, and do not pay the overhead cost of virtualized processors. They are more of an abstraction for managing deployment and resource allocation of multiple applications across multiple machines or machine pools.

The idea behind this approach is that by wrapping your application and its services inside of a container abstraction, you can isolate it from other applications and users sharing the same physical or virtual host. This lets us treat application deployments as black boxes with known resource allocation needs, which can then be packed in different ways onto a pool of physical or virtual compute resources.

Now we can apply many classic bin-packing algorithms and other optimizations at the infrastructure layer to increase efficiency and hardware utilization. Since applications see the same underlying environment whether run locally or remotely, they will run the same regardless of where they are. This allows for nifty under-the-hood tricks to optimize workloads across a server pool or cloud, letting us make the best use of compute and storage resources without wasting unused potential.
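
For intuition, here is a toy sketch of the simplest such algorithm, first-fit decreasing, packing hypothetical app memory footprints onto hosts (all names and numbers are made up):

# First-fit decreasing: pack app memory needs (GB) onto 8 GB hosts.
apps = { 'api' => 6, 'worker' => 3, 'web' => 4, 'cache' => 2 }
host_capacity = 8
hosts = []  # each host is an array of [app_name, size] pairs

apps.sort_by { |_, size| -size }.each do |name, size|
  host = hosts.find { |h| h.sum { |_, s| s } + size <= host_capacity }
  host ||= (hosts << []).last   # no host fits; provision a new one
  host << [name, size]
end

hosts.each_with_index do |h, i|
  puts "host#{i}: #{h.map(&:first).join(', ')}"  # e.g. host0: api, cache
end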

I've barely scratched the surface of this new deployment abstraction, and there’s much to be done, but I plan to follow up soon with much more detail and open source code releases.

Intridea’s embrace of employee open-source contributions is one of the big reasons I came here as CTO. The challenge of solving difficult problems was another. Intridea works with some of the most interesting companies across every industry and tackles their hardest problems.

I am honored to be CTO here at Intridea, and excited about all that’s to come.


I’m excited today to formally welcome Ezra Zygmuntowicz to Intridea as our Chief Technology Officer.

I first met Ezra in 2006 at a hastily arranged Ruby on Rails workshop at a Washington, DC charter school. Back in those early days of Ruby on Rails, there were very few training opportunities available, and I was thrilled to learn from one of the earliest adopters and most well-known Rails developers in the world. That was the day I fell in love with Ruby and Rails. Learning from Ezra was one of the formative moments in my becoming a Rails developer. Soon after that workshop, I joined other like-minded Ruby developers and started building Intridea.

Since that workshop, I’ve followed Ezra’s career with great interest: from his founding of Engine Yard (the most well-known Ruby on Rails web host), to his position at VMware heading the development of Cloud Foundry (the leading open source Platform-as-a-Service), to his starting Trinity Labs (a company that builds 3D printers and performs robotics R&D).

As thrilled as I was to learn from Ezra back in 2006, I’m even more thrilled now that he has joined Intridea. Ezra will grow and mentor our engineering team, continue his open-source work -- not to mention spreading the open-source love -- and help deliver scalable and well-engineered products to our clients.

Ezra’s open-source contributions are legendary among Rubyists and the wider development community. From his own creations like merb and nanite, to his numerous contributions to other projects, Ezra has left an indelible mark on Ruby development. His history of innovation is a perfect fit with the Intridea team that released OmniAuth, Grape, and Stately, among many other open-source projects.

I couldn’t be more excited to call Ezra an Intridean.


While building Surfiki, our real-time data intelligence engine, we realized that the logic contained in any number of natural language processing (NLP) and machine learning (ML) binaries is locked up on a server, with no access to the web. What if we could pass input parameters to a binary and express the output as JSON via a RESTful endpoint?

Say hello to REBIN.

REBIN was created to provide a simple method of exposing CLI applications, regardless of their language, to the web. Much of our current NLP and ML research is written in C and C++, because these languages offer the advantages of speed and scale.

But REBIN isn’t limited to compiled executables: as long as you can flag a script with +x, you’re good to go! This means it’s easy to expose your R, Python, and Ruby libraries as an instant web service. With REBIN, we’ve included a visual dashboard where you can define your endpoints and executables, and it’s trivial to create an endpoint. Think of REBIN as an instant web service for any binary or script.
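
REBIN itself is built on Node, but the pattern it implements is small enough to sketch in a few lines of Ruby. Here's a minimal, hypothetical equivalent using Sinatra (the wc binary and the text parameter are placeholders, not REBIN's actual code):

require 'sinatra'  # tiny web framework
require 'open3'    # run a subprocess and capture its stdout
require 'json'

# Expose a CLI binary as a JSON endpoint, REBIN-style.
# GET /wc?text=hello+world  =>  {"output":"2"}
get '/wc' do
  stdout, _status = Open3.capture2('wc', '-w', stdin_data: params['text'].to_s)
  content_type :json
  { output: stdout.strip }.to_json
end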

REBIN is built on Node.js and Redis by Anthony Nyström and Jeff Baier. Hit us up on Twitter if you have questions or ideas, and grab the source on GitHub right now!



This past weekend I participated in Random Hacks of Kindness (RHoK), hosted by the OpenGov Hub in DC. I attended my first RHoK event in June 2010 after reading about it on Slashdot, and I ended up working on a winning project. I've been a regular at the event ever since. I was extra excited this time around because RHoK was being held alongside the first-ever Sanitation Hackathon, an event that tries to find technological solutions to some of the very serious sanitation problems around the world.

The event kicked off with presentations from subject matter experts on situations that called out for aid. Problems ranged from the illegal dumping of septic waste, to connecting domestic violence victims who are working to rebuild their lives with socially conscious employers, in order to ease their re-entry into the workforce.

More people have access to cell phones than to clean and safe toilets

The problem that piqued my interest was submitted by the Peace Corps: tro-tro passengers had to wait in the vehicle until it was at capacity, because there was no line of communication between the driver and passengers to let them know when it was departing. Not only was the problem well defined and solvable in one weekend, but the solution could also be general enough to apply to any situation where someone has to send notifications to a group without the painful process of gathering emails or phone numbers from everyone and sending a group message.

I worked with Chelsea Towns, a RHoK veteran who works for the Peace Corps, and Thad Kerosky, a developer who used to be a Peace Corps volunteer in Tanzania and Liberia. The solution we provided was an SMS-based notification system that allows anyone to create an event (a trip, in the case of the tro-tro driver or conductor) which anyone else can subscribe to for notifications. The technology we used was Ruby on Rails deployed to Heroku, with the Twilio API handling the SMS side. We named the project Tro-Tron.
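
At its heart, such a system is just a broadcast loop over subscribers. A minimal sketch with the twilio-ruby gem (credentials, phone numbers, and the message are placeholders, not Tro-Tron's actual code):

require 'twilio-ruby'

# Broadcast a departure notification to every subscriber of a trip.
# Credentials come from the Twilio console; all numbers are fake.
client = Twilio::REST::Client.new(ENV['TWILIO_ACCOUNT_SID'],
                                  ENV['TWILIO_AUTH_TOKEN'])

subscribers = ['+15551230001', '+15551230002']

subscribers.each do |number|
  client.messages.create(
    from: '+15559870000',  # your Twilio number
    to:   number,
    body: 'Tro-tro to Accra is departing now!'
  )
end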

Usually at other hackathons things are a little more competitive, and folks are focused on getting their projects done by the deadline. At RHoK, the amount and quality of collaboration is awesome: you will always find people moving from team to team trying to help out as much as they can, and it is not unusual to see a lot of people contributing to more than one project. My favorite part of the weekend was meeting so many new people. I met three different Peace Corps volunteers who had served in The Gambia, one of them back when I was still a baby. I also got to meet Javier for the first time, another Intridean, who brought a lot of energy and helped test the application.

RHoK wrapped up on Sunday with some impressive presentations and demos. You can fork the project on GitHub and view photos we took at the event on Flickr.


History of RHoK: in 2009, some good folks from Microsoft, Google, Yahoo!, NASA, and the World Bank started the Random Hacks of Kindness (RHoK) hackathon, an event that aims to connect subject matter experts in disaster management and crisis response with volunteer software developers and designers in order to create solutions that have an impact in the field. Since then, RHoK has grown into a community of over 5,000 people in more than 30 countries around the world.
