
We are in the business of awesome user experience, software engineering, mobile products, and web design. Our award-winning solutions leverage new technological developments to propel our clients to new heights. In today’s competitive technology industry, we know how important software developers are to our clients’ success, and we want to provide the best of the best. To ensure we are doing so, our company joined Clutch to see how we stack up against the competition in our industry.

Clutch, a B2B ratings and reviews platform, evaluates companies across various industries in order to help businesses choose the best service provider. They analyzed Mobomo based on our services offered, client base, and case studies of projects we’ve executed for former clients. More importantly, they spoke directly with our former clients in phone interviews to obtain an accurate and verified understanding of their experience working with the Mobomo team. The interviews have been an incredibly valuable resource for our clients to provide their feedback on our service, and for our own team to reflect on how far we’ve come as an agency.

You can find these former client reviews on our Clutch profile. Here’s a glimpse of the praise so far:

 

“[Mobomo] has extremely talented developers with a knack for finding the most efficient solutions.” – Branch Chief, Government Agency

 

“Because of Mobomo, things run much smoother and more efficiently now.” – Website Project Manager, Nonprofit

 

“The team is responsive to all inquiries, questions, and concerns.” – Program Analyst, Center for Strategic and Budgetary Assessments

 

These rave reviews, combined with our excellence in other areas of Clutch’s research methodology, earned us positions as a global leader in two development categories. In the ultra-competitive development space, we have been recognized as one of the top 15 global mobile app development companies, specifically for iPhone development, and as one of the best WordPress developers in the world.

Our ability to deliver has not only propelled us to the top of the development space, but has also earned us a spot in Clutch’s inaugural listing, The Clutch 1000: their most selective list of the most highly ranked B2B service providers. B2B companies with the strongest brand reputation, clientele, and reviews were selected from a pool of over 50,000 global agencies, and we placed in the top 60!

Lastly, Clutch is not the only platform to praise our development prowess. The Manifest, a database of industry reports, how-to guides, and top service provider lists across various industries, named us a top-20 leading web development firm.

In conclusion, we want to say thank you to all of our clients and partners; your support allows us to do what we love. We’re proud of our high rankings so far, and we can’t wait to keep pushing the boundaries of next level development with you.

Categories
Author

PHP 5.6 will officially stop receiving security fixes on December 31, 2018. This version has not been actively developed for a number of years, but many people have been slow to move on. Beginning in the new year, no bug fixes will be released for this version of PHP, which opens the door to a dramatic increase in security risk if you are not starting the new year on a version of PHP 7. PHP 7 was released back in December 2015, and PHP 7.2 is the latest version you can update to. PHP skipped version 6, so don't even try searching for it.

Drupal 8.6 is the final Drupal version that will support PHP 5.6, and many other CMSs will be dropping support for PHP 5.6 in their latest versions as well. Just because PHP 5.6 is still supported by your CMS version does not mean you will be safe from security bugs; you will still need to upgrade your PHP version before December 31, 2018. In addition to the security risks, you have already been missing out on the many improvements that have been made to PHP.

What Should You Do About This?

You are probably thinking "Upgrade, I get it." It may actually be more complicated than that: you may need to refactor. 90-95% of your code should be fine, but the version of your CMS may affect the complexity of your conversion. Most major CMSs handle PHP 7 right out of the box in their most recent versions.
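As a first pass at sizing the refactor, you can search your codebase for calls that PHP 7 removed outright; the old mysql_* extension is the most common offender. A minimal sketch (the src/ path and the function list are examples, not an exhaustive check):

```shell
# Find calls to the mysql_* extension, which was removed in PHP 7.
# Adjust the path and extend the function list for your own project.
grep -rn --include='*.php' -E 'mysql_(connect|query|fetch_assoc|close)' src/ \
  || echo "No removed mysql_* calls found"
```

Any hits are code that will fatal-error under PHP 7 and must be rewritten (for example, to mysqli or PDO).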

By upgrading to a version of PHP 7, you will see a variety of performance improvements; the most dramatic is speed. Zend Technologies, the company behind PHP's Zend Engine, ran performance tests on a variety of PHP applications to compare PHP 7 against PHP 5.6. These tests measured requests per second across the two versions, which reflects the speed at which code is executed and how fast queries to the database and server are returned. They showed that PHP 7 runs twice as fast, and you will see additional improvements in memory consumption.

How Can Mobomo Help?

Mobomo's team is highly experienced, not only in assisting with your conversion, but also in reviewing your code to ensure your environment is PHP 7 ready.  Our team of experts will review your code and determine exactly how much of it needs to be converted. A number of factors can affect your timeline: the more customizations and small plugins your site contains, the more complex your code review and eventual conversion may be. Overall, the timeline varies with the complexity of the code, but a conversion should take a maximum of three weeks.

Important Things to Know:

  1. How many contributed modules does your site contain?
  2. How many custom modules does your site contain?
  3. What does your environment look like?

Let’s be honest, the documentation for Apache Nutch is scarce.  Doing anything more complicated than a single-configuration crawl requires hours of prowling Stack Overflow and a plethora of sick Google-fu moves.  Thankfully, I’ve already suffered for you!

A recent project involved configuring Nutch to crawl 50+ different sites, all in different states of web standard conformity, all with different configuration settings.  These had to be dynamically added and needed to account for changing configurations.  In the following few posts, I’ll share the steps we took to achieve this task.

What is Nutch?

Apache Nutch 2.x is an open-source, mature, scalable, production-ready web crawler based on Apache Hadoop (for data structures) and Apache Gora (for storage abstraction).  In these examples, we will be using MongoDB for storage and Elasticsearch for indexing; however, this guide should still be useful to those using different storage and indexing backends.

Basic Nutch Setup

The standard way of using Nutch is to set up a single configuration and then run the crawl steps from the command line.  There are two primary files to set up: nutch-site.xml and regex-urlfilter.txt.  There are several more files you can utilize (and we’ll discuss a few of them later), but for the most basic implementation, that’s all you need.

The nutch-site.xml file is where you set all your configuration options.  A mostly complete list of configuration options can be found in nutch-default.xml; just copy and paste the options you want to set and change them accordingly.  There are a few that we’ll need for our project:

  1. http.agent.name - This is the name of your crawler.  This is a required setting for every Nutch setup.  It’s good to have all of the settings for `http.agent` set, but this is the only required one.
  2. storage.data.store.class - We’ll be setting this one to org.apache.gora.mongodb.store.MongoStore for Mongo DB.
  3. Either elastic.host and elastic.port or elastic.cluster - these settings point Nutch at our Elasticsearch instance.

There are other settings we will consider later, but these are the basics.
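Putting those options together, a minimal nutch-site.xml might look like this (the agent name and Elasticsearch values are examples; adjust them for your environment):

```xml
<configuration>
  <property>
    <name>http.agent.name</name>
    <value>MyCrawler</value>
  </property>
  <property>
    <name>storage.data.store.class</name>
    <value>org.apache.gora.mongodb.store.MongoStore</value>
  </property>
  <property>
    <name>elastic.host</name>
    <value>localhost</value>
  </property>
  <property>
    <name>elastic.port</name>
    <value>9300</value>
  </property>
</configuration>
```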

The next important file is regex-urlfilter.txt.  This is where you configure the crawler to include and/or exclude specific urls from your crawl.  To include urls matching a regex pattern, prepend your regex with a +.  To exclude, prepend with a -.  We’re going to take a slightly more complicated approach to this, but more on that later.
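For reference, a minimal sketch of such a file (the domain is an example; Nutch applies the first matching pattern to each url):

```
# Skip urls containing characters commonly found in query strings
-[?*!@=]
# Include everything under an example domain
+^https?://www\.example\.com/
# Exclude everything else
-.
```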

The Crawl Cycle


Nutch’s crawl cycle is divided into 6 steps: Inject, Generate, Fetch, Parse, Updatedb, and Index.  Nutch takes the injected URLs, stores them in the CrawlDB, and uses those links to go out to the web and scrape each URL.  Then, it parses the scraped data into various fields and pushes any scraped hyperlinks back into the CrawlDB.  Lastly, Nutch takes those parsed fields, translates them, and injects them into the indexing backend of your choice.

How To Run A Nutch Crawl

Inject

For the inject step, we’ll need to create a seeds.txt file containing seed urls.  These urls act as a starting place for Nutch to begin crawling.  We then run:
$ nutch inject /path/to/file/seeds.txt
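For reference, seeds.txt is simply a list of urls, one per line (these are example urls):

```
https://www.example.com/
https://blog.example.org/
```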

Generate

In the generate step, Nutch extracts the urls from pages it has parsed.  On the first run, generate only queues the urls from the seed file for crawling.  After the first crawl, generate will use hyperlinks from the parsed pages.  It has a few relevant arguments:

  • -topN will allow you to determine the number of urls crawled with each execution.
  • -noFilter and -noNorm will disable the filtering and normalization plugins respectively.

In its most basic form, running generate is simple:
$ nutch generate -topN 10

Fetch

This is where the magic happens.  During the fetch step, Nutch crawls the urls selected in the generate step.  The most important argument you need is -threads: this sets the number of fetcher threads per task.  Increasing this will make crawling faster, but setting it too high can overwhelm a site and it might shut out your crawler, as well as take up too much memory from your machine.  Run it like this:
$ nutch fetch -threads 50

Parse

Parsing is where Nutch organizes the data scraped by the fetcher.  It has two useful arguments:

  • -all: will check and parse pages from all crawl jobs
  • -force: will force parser to re-parse all pages

The parser reads content, organizes it into fields, scores the content, and figures out links for the generator.  To run it, simply:
$ nutch parse -all

Updatedb

The Updatedb step takes the output from the fetcher and parser and updates the database accordingly; it also marks urls for future generate steps. Nutch 2.x supports several storage backends (MySQL, MongoDB, HBase) thanks to Apache Gora's storage abstraction.  No matter your storage backend, however, running it is the same:
$ nutch updatedb -all

Index

Indexing takes all of that hard work from Nutch and puts it into a searchable interface.  Nutch 2.x supports several indexing backends (e.g., Solr, Elasticsearch).  While we will be using Elasticsearch, the command is the same no matter which indexer you use:
$ nutch index -all

Congrats, you have done your first crawl!  However, we’re not going to be stopping here, oh no.  Our implementation has far more moving parts than a simple crawl interface can give, so in the next post, we will be utilizing Nutch 2.3’s RESTful API to add crawl jobs and change configurations dynamically!  Stay tuned!


Nothing in the design process is absolute. I am sure many designers can relate: it is frustrating when you create a design and then the final product (after development) looks different than what was intended. It is fair to say that not all designs translate through the development process, but as designers we should start designing with developers in mind. In a world that isn’t perfect and where you have little control, designers, it is time to be flexible. Designers take pride in layouts, making sure each element has a purpose and its own place. Crafting “pixel perfect” designs is an achievement that we strive for after years of hard work and practice. Because of the effort that’s put into designs, we have a tendency to get upset with developers when our layouts haven’t been translated perfectly.

We should not fault the developer

Recognize that this is a glitch within the design process. Our static, “pixel perfect” comps will only ever truly be that...static comps. Once we bring a design to life through code, we have very little control over how someone will view it. What we should strive for is a deeper, closer connection with developers, working in tandem throughout the process. As designers we must be flexible and think of each composition no longer in terms of exact measurements, but of relative proportions. This applies to things like height and width in relation to other page-level elements, i.e. margin and padding. The logic is quite simple: an element with a width of 400px over 1300px of visible area is perfectly reasonable, but on a small screen it will be cropped out.
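As a sketch, the 400px element above can be expressed in relative units instead (the class name and values here are illustrative):

```css
/* Fixed width: reasonable at 1300px of visible area, cropped on small screens */
.hero-image { width: 400px; }

/* Relative width: 400 / 1300 ≈ 30.77%, scales with the viewport */
.hero-image { width: 30.77%; }
```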

The Solution

The solution here is either to create a mobile-based layout that accounts for smaller device sizes, or to ensure that the item being cropped out on smaller screens isn’t pertinent to the use of the site. Whichever route you take, a certain level of foresight is necessary to ensure that proper design and development are accomplished. I know it can be frustrating to see developers work in non-absolute units of measure. However, abandoning pixels and switching to ems, rems, and percentages makes for a more flexible and fluid layout. If designers start thinking in these more liquid and dynamic measurements initially, there will be an easier transition from static comp to developed website, allowing everything to remain harmonious and relatively intact across device sizes.

Color

This adaptable mindset also applies to color. We can’t predict the calibration of each user’s screen, which varies by style, manufacturer, and specific light conditions. Websites and apps can’t be handled like a Pantone catalog on printed paper, so subtle color variations are common - colors may appear too dark, too shiny, or too contrasted.

How do we combat the unknown here?

By selecting a broad color palette that suits both the devices the product will be viewed on and the tone of the business. Learning about the technical characteristics and limitations of a browser is vital to avoiding unexpected surprises when the product is finished. After interpreting the HTML, CSS, and JavaScript, the browser renders a product according to its capabilities, sometimes forcing us to think beyond devices and start thinking in terms of browsers. For years IE was the bane of a designer’s existence, limiting the boundaries we could push because it was so far behind technically. Knowing which browsers your design must support will help you determine what and where you can push the limits. So, designers - let’s start learning and understanding the development process so that we can all be rockstars when designing for digital media.


Gulp is a tool to help web developers automate various tasks. Like Grunt before it, Gulp makes repetitive tedious tasks bearable through automation. Remember: if you have to do it more than once, you've already done it too many times.

Today we are going to set up a very basic Gulp workflow focused only on building and optimizing your CSS from SCSS. In later posts we will have Gulp do more, like compiling and optimizing your JS.

Prerequisites

Gulp, Grunt, and other automation tools rely on Node.js, so if you don't already have Node.js installed you can get it here. Node.js comes with npm (the node package manager), which gives you easy access to all sorts of packages, such as Gulp itself.

When you're done installing Node.js, it's time to install Gulp. Open your terminal of choice and enter the following command:

sudo npm install gulp -g

 

This command will install gulp globally on your machine. Note: installing packages with sudo is potentially dangerous; this post goes into more detail and precautions to take.

Setting up your project

Now we need to create a project directory. We are going to create a dirt simple project for this example. So create a folder named "myproject", and inside that folder create another directory called "dev".

 

In your terminal, navigate to your project directory. We'll set up the project by creating a package.json file, which defines information about our project and lists the dependencies it relies on. We don't have to do this manually; run the following command to set up the file:

npm init

This will create the package.json file, which will look something like this:

{
  "name": "project",
  "version": "1.0.0",
  "description": "this is a demo project",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "your name",
  "license": "ISC"
}

In this example, I only set the name, description, and author; I chose the defaults for everything else.  We are going to use more than just Gulp in our workflow, so add a comma to that last line ("license": "ISC") and add the following on a new line:

"devDependencies": {}

This is where we will list our dependencies as comma-separated key/value pairs (standard JSON).
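For the SCSS build described above, that object might eventually look like this (gulp-sass is an assumed plugin choice, and the version numbers are illustrative):

```json
"devDependencies": {
  "gulp": "^3.9.1",
  "gulp-sass": "^2.3.2"
}
```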

Dependencies

 


 

At the time of this writing (pre-WWDC 2015), there are a number of limitations on what Apple Watch code can do. The primary limitation is that watch apps cannot exist by themselves. It is necessary for the watch app to be a part of a corresponding phone app. Apple has said they will not accept watch apps where the phone app does not do anything itself. Also, watch-only apps (such as watch faces) are not allowed for this same reason—although it’s rumored that this may change after WWDC 2015.

Another Apple Watch limitation is that Core Graphics animations are not supported, but animated GIFs are. Complex layouts (such as overlapping elements) are not allowed. However, elements can be positioned as if they overlap—provided only one element is visible at a time. Using actions such as taps and timers, the visibility of these "overlapping" elements can be changed. This can be implemented to provide a more dynamic interface. Another major limitation (also whispered to change after WWDC 2015) is that watch apps cannot access any of the hardware on the watch including the motion sensor and heart sensor.

Most watch app processing (controller logic) is done on the phone instead of the watch, and some delays are inherent in the Bluetooth communication between the watch and the phone as the view (on the watch) talks back to the controller (on the phone). This view/controller split is not obvious in the code. The watch/phone split, however, is: even though the controller logic runs on the phone side, the watch cannot access anything from the phone except via a specific watch-to-phone request.

One notable feature is the watch app’s ability to explicitly call the phone app with a dictionary and obtain a dictionary response. This functionality allows the developer to then set up a number of client-server style requests, where the watch is the client, and the phone is the server. For example, the watch can request information from—or record information to—the phone. The phone (which has storage and may have Internet connectivity) can then fulfill the request and provide data in response to the watch. This can drive the phone app's UI to provide near-real-time synchronization of the watch app display, as well as the phone app display.

Custom notifications (both local notifications and push notifications) are supported on the watch. These custom notifications can have a somewhat customized layout as well as having the ability to define a set of custom actions. After performing one of these actions, the watch app is started. Apple mentions not to use notifications as a way to just launch the watch app from the phone app. Apple maintains that the notifications should provide useful information.

One developer test limitation relates to custom watch notifications.  Since watch notifications are only displayed if the phone is asleep, there is no direct way to test custom watch notifications in the simulator.  Xcode provides a mechanism to test push notifications in the simulator (using a JSON file), but there is no similar mechanism for local notifications.  Still, one can certainly test local notifications on a physical device.


In April 2015, NASA unveiled a brand new look and user experience for NASA.gov. This release revealed a site modernized to 1) work across all devices and screen sizes (responsive web design), 2) eliminate visual clutter, and 3) highlight the continuous flow of news updates, images, and videos.

With its latest site version, NASA—already an established leader in the digital space—has reached even higher heights by being one of the first federal sites to use a “headless” Drupal approach. Though this model was used when the site was initially migrated to Drupal in 2013, this most recent deployment rounded out the endeavor by using the Services module to provide a REST interface, and ember.js for the client-side, front-end framework.

Implementing a “headless” Drupal approach prepares NASA for the future of content management systems (CMS) by:

  1. Leveraging the strength and flexibility of Drupal’s back-end to easily architect content models and ingest content from other sources. As examples:

  • Our team created the concept of an “ubernode”, a content type which homogenizes fields across historically varied content types (e.g., features, images, press releases, etc.). Implementing an “ubernode” enables easy integration of content in web services feeds, allowing developers to seamlessly pull multiple content types into a single, “latest news” feed. This approach also provides a foundation for the agency to truly embrace the “Create Once, Publish Everywhere” philosophy of content development and syndication to multiple channels, including mobile applications, GovDelivery, iTunes, and other third party applications.

  • Additionally, the team harnessed Drupal’s power to integrate with other content stores and applications, successfully ingesting content from blogs.nasa.gov, svs.gsfc.nasa.gov, earthobservatory.nasa.gov, www.spc.noaa.gov, etc., and aggregating the sourced content for publication.

  2. Optimizing the front-end by building with a client-side, front-end framework, as opposed to a theme. For this task, our team chose ember.js, distinguished by both its maturity as a framework and its emphasis on convention over configuration. Ember embraces model-view-controller (MVC), and also excels at performance by batching updates to the document object model (DOM) and bindings.

In another stride toward maximizing “Headless” Drupal’s massive potential, we configured the site so that JSON feed records are published to an Amazon S3 bucket as an origin for a content delivery network (CDN), ultimately allowing for a high-security, high-performance, and highly available site.

Below is an example of how the technology stack which we implemented works:

Using ember.js, the NASA.gov home page requests a list of nodes of the latest content to display. Drupal provides this list as a JSON feed of nodes.
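As a hypothetical illustration (these field names are invented for this sketch, not NASA's actual schema), one node in that feed might look like:

```json
{
  "nid": "4618",
  "type": "ubernode",
  "title": "Orion Flight Test Launches Successfully",
  "promoDate": "2014-12-05"
}
```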

Ember then retrieves specific content for each node. Again, Drupal provides this content as a JSON response stored on Amazon S3.

Finally, Ember distributes these results into the individual items on the home page.

The result? A NASA.gov architected for the future. It is worth noting that upgrading to Drupal 8 can be done without reconfiguring the ember front-end. Further, migrating to another front-end framework (such as Angular or Backbone) does not require modification of the Drupal CMS.



At 7:05am EST today, the world watched as NASA released its unmanned spacecraft, Orion, into the ether. With Captain Kirk (in doll form) at the helm, the massive capsule soared from Cape Canaveral with countless hopes attached. This new spaceship was built with one goal in mind: deep space exploration.

Orion’s 4.5 hour flight test was a critical step toward eventual near-Earth asteroid excursions, trips around the moon, and--most significantly--manned missions to Mars. That's right: with the success of Orion's launch would come “the beginning of the Mars era,” as NASA Administrator, Charles Bolden, remarked before blastoff.

And succeed it did! Completing two orbits and going farther than all rockets designed to carry astronauts have in the past four decades, Orion passed with flying colors, and landed in the Pacific Ocean at 11:29 this morning. Our biggest congratulations to NASA on an incredibly successful flight test! Mobomo is proud to be part of the team supporting NASA.gov.
