
Mobomo, a leading software company, was recently named to the Inc. 5000 list of fastest-growing companies for the fifth consecutive year. They are one of three software companies in the DMV area to make the list five years in a row; nationally, due to the rigorous requirements, only 7% of companies make the list five years running. Mobomo offers premier, mobile-first web and mobile application design and engineering to federal agencies and commercial enterprises.

Since 1982, Inc. has scoured the U.S. to find the fastest-growing private companies for its annual Inc. 5000 list. To be considered for this prestigious list, businesses must meet certain requirements, one being a certain percentage of year-over-year growth. Alumni include industry giants such as Microsoft (1984 and 1985), Under Armour (2003), and GoPro (2014). Mobomo is one of only 23 companies across all industries in the DMV area to make the list five years in a row, and was ranked #1878 on the 2017 list.

Over the past five years, Mobomo has expanded its presence in the federal sector, being named to three prime contracts (USGS, NOAA, GSA) and many subcontracts (NASA, VA, Department of State). They have helped federal agencies and government entities such as NASA, USGS, NOAA, and NMFS move to effective, cost-saving platforms that have saved the federal government millions of dollars.

Commercially, Mobomo has worked with clients such as the USO, Gallup, The World Bank, Bozzuto, and more to overcome technical challenges and revitalize large digital platforms. Serving clients inside and outside the Beltway, Mobomo works to change how technology is used, ultimately saving government and commercial enterprises time and money by automating repetitive tasks so resources can be better allocated.

“Being named to the Inc. 5000 for the fifth year in a row would not have been possible without the leadership and entrepreneurial guidance of Barg Upender (Mobomo Founder) and Ken Fang (Mobomo President). Over the course of those five years, both Barg and Ken have held different roles in growing the company and have been very successful in the federal and commercial sectors. Having an ownership team that is collaborative and engaged has given us an edge; we are lucky to have them,” says Brian Lacey, CEO of Mobomo.

 


iOS 11 update

Apple announced that it will be releasing iOS 11 this fall, but what do the enhancements in iOS 11 mean for your app that is currently in the marketplace? Here are a few reasons why you should consider upgrading your app to prepare for the iOS 11 launch.

All of the new features rolling out with iOS 11 could affect how your app is housed in the App Store. Get in touch and we will do a free analysis of your current app and let you know of any incompatibility issues. Whether you are interested in creating a new app, enhancing a current app, or just running a compatibility test, we have gathered some highlights as to why it would be a good idea to update your current app in preparation for iOS 11.

This is not the first time that Apple has made changes that affect apps in the App Store. In September of 2016, Apple released App Store guidelines and encouraged app owners to become compliant or have their apps removed from the App Store. Without a doubt, it is becoming more and more difficult to stand out in the App Store. How will iOS 11 change that?

First, 32-bit support is being turned off, and any app that is still 32-bit will no longer be accessible in the App Store. According to sources, any app that runs in 32-bit will no longer be supported by Apple, so developers will need to update their apps to 64-bit in order to remain in the App Store. Apple has already told developers that macOS High Sierra will be the final version of macOS to support 32-bit apps.

New iOS App Store: 

  • App Ratings: When requesting app ratings from users, developers must now use Apple’s official API instead of integrating their own custom prompts. The new Apple rating message lets users choose 1-5 stars and then dismisses the prompt, all without leaving the current app. The simple message, however, has a drawback: the rating prompt can only be shown three times per year.

 

 


AMP - Accelerated Mobile Pages

We frequently receive questions about AMP. Most people have heard of it, but what is it exactly? And why do we suddenly need it? We answer these questions and more below; let us know what you think about AMP!

What is AMP?

AMP is a term you may have heard recently, but what is it exactly? AMP stands for Accelerated Mobile Pages, a project from Google and Twitter designed to make very fast mobile pages. It consists of HTML, JavaScript, and cache libraries that, thanks to specific extensions and AMP-focused properties, accelerate load speed for mobile pages, even if they feature ‘heavy’ content like infographics, PDFs, audio, or video files.

Why do we need fast mobile pages? Who cares?

In a mobile-first world, quick mobile web pages are vital for audience retention. We need fast websites to effectively reach and engage the ever-growing mobile market. No one wants to wait 7+ seconds for a webpage to load while cluttered ads block the information they want. Users effectively “give up” on slower websites (they do not wait for slow websites to get fast; they click through to a better site). Creating fast, well-organized websites requires constant updates from synergistic development teams. AMP was created to answer the slow-website problem and make sites faster and more robust.

What does AMP do?

In simple terms, when users search for news or any general search term, Google will display a ‘carousel’ highlighting stories from AMP-enabled websites at the top of search results. AMP creates straightforward, easy-to-manage web pages and ads that are built in the open-source format, load almost instantly, and give users a smooth, more engaging experience on mobile and desktop. Moreover, AMP gets rid of certain elements that take a substantial toll on your website’s speed and performance. In addition to benefitting users, AMP also benefits the web developers who use the technology: it can reduce the load on a server and improve performance under the traffic generated by mobile users. Mobile ranking can also improve with AMP. “Mobile friendliness” and load times are key factors for organic mobile search results, so by utilizing AMP, your site can move higher up the search results ladder.

Facebook takes a similar approach with its “Instant Articles”. Instead of loading a webpage in a browser instance, the Facebook app loads an Instant Article, distinguished by a lightning bolt icon, from its cache. Google AMP makes content more widely accessible and helps level the playing field for websites publishing articles. Subsequently, AMP can help promote the relevance of a wider range of articles in your Google search results.

How do I try AMP?

When an article or webpage has an AMP version available, it will display a small lightning bolt under the search result. Clicking on the AMP link loads a stripped-down, faster version of the article. More often than not, the webpage will be delivered directly from Google's own caching servers.

How does AMP work?

AMP pages use a smaller set of HTML, so the look and feel can be a little different from what you’re used to. For example, forms are not part of the AMP HTML feature set, and most JavaScript is restricted, so pages stay lean. You never have to leave the app to see the article because the results are served from Google’s own servers. The webpage looks like your site, but behind the scenes the user is still on Google, which changes the game a bit, since previously your site had to have the bandwidth to support those users. To start using AMP, an alternate version of each page is required. If you are using a CMS like WordPress or Drupal, there are several plugins and modules that can help perform most of the repetitive tasks. If you are not using one of those, Google’s documentation is a good place to start.

And What About Ads?

Most JavaScript is forbidden in AMP markup, but there are ways for publishers to include ads and analytics on AMP-generated pages. Third-party scripts can also be used, as long as they are AMP-enabled. Ads on AMP pages are intended to be non-intrusive, without trading away revenue. AMP also supports paywalls and subscriptions. Twitter, Instagram, and Facebook have developed AMP-specific libraries for this purpose, and Google has a list of available components to help with subscriptions as well.

Biggest takeaways of AMP?

AMP was developed to favor readability and speed. AMP images are lazy loaded, meaning they won’t load until scrolled into view. Ads displace content rather than popping up and blocking your view. AMP’s style rules ensure that animations can be GPU-accelerated. Mobile-friendly sites already rank higher than regular sites in mobile search, and while mobile SEO does not give AMP pages special treatment at the moment, it is probable that they will take top ranks in future searches. All in all, speed is key when trying to reach your audience and keep them on your site. With AMP, users get the fastest experience possible.

We have covered some of the basics, but be sure to check out Wired, Moz, and the AMP Project for more information about AMP!

 


Our work as designers is filled with repetitive tasks that can become time consuming. We have talked about ways to Automate Photoshop to Improve your Workflow and about Design Etiquette; all of these things help make our lives a little easier. Now let’s talk through some ways you can improve your UX/UI workflow in Photoshop by using the standard keyboard shortcuts the program offers, editing them, and creating your own, all of which will save you time! Who doesn’t want to save time?

Create new layer behind selected layer

  • MAC: CMD+New Layer icon
  • WINDOWS: CTRL+New Layer Icon

New layer via copy

  • MAC: CMD+J
  • WINDOWS: CTRL+J

Create new layers from existing ones. You can copy text, images, even folders with these shortcuts. Need to re-use the card you already designed? Just copy it!

Bring layer forward

  • MAC: CMD+]
  • WINDOWS: CTRL+]

For those times when you need to rearrange the order of your layers - forget the mouse and use the keyboard instead.

Send layer back

  • MAC: CMD+[
  • WINDOWS: CTRL+[

Pro Tip: Create a shortcut for renaming layers (in my case I use CMD+Shift+R); it is a quick and easy way to copy, paste, or rename a layer name. Also, if you press Tab while the rename field is active, Photoshop will jump to the next layer with its name highlighted and ready to edit (press Shift+Tab to go in the other direction).

Deselect the entire image

  • MAC: CMD+D
  • WINDOWS: CTRL+D

Reselect

  • MAC: CMD+Shift+D
  • WINDOWS: CTRL+Shift+D

We recently found this shortcut, and it is really useful when you have clicked away from a selection and need to get it back.

Invert selection

  • MAC: CMD+Shift+I
  • WINDOWS: CTRL+Shift+I

Select all layers

  • MAC: CMD+Opt+A
  • WINDOWS: CTRL+Alt+A

Need to select all your layers to create a new group? Here’s your solution! Tip: Did you know you can collapse all your groups from the Layers panel menu?

Deselect from the selection area

  • MAC: Opt+drag
  • WINDOWS: Alt+drag

Increase/decrease size of selected text by 2pts

  • MAC: CMD+Shift+>/<
  • WINDOWS: CTRL+Shift+>/<

Align text left/center/right

  • MAC: CMD+Shift+L/C/R
  • WINDOWS: CTRL+Shift+L/C/R

Not convinced by the shortcuts Photoshop offers? You can customize them yourself: in the application menu, under ‘Edit’, you’ll see “Keyboard Shortcuts”, or just press Alt+Shift+CMD+K (Mac) or Alt+Shift+Ctrl+K (Windows).

In this window you’ll see all the different shortcuts Photoshop has to offer. There are many you won’t need as often when working as a UI/UX designer, so take advantage of this and create your own shortcuts for the tools you do use, for example copy/paste/clear layer styles. Save them as a set to keep a copy and, if you want, share it with your team!

You can set shortcuts for the application menus, the panels (how about giving that ‘Collapse All Groups’ feature a shortcut?) and, of course, tools. Tip: See that Menus tab? There you can define special settings for the menus, assign a color to different menu options, and even hide the ones you never use!

Did you know you can create a shortcut to align elements to the left/right/center/middle? There’s also another way to create your own shortcuts for repetitive tasks: through actions! We have talked a little bit about how to do so in our post Automate Photoshop to Improve your Workflow, so be sure to check it out!


WordPress versus Drupal

It is safe to say that, at the moment, WordPress does not have a large presence in the federal government. By and large, Drupal is the preferred CMS.

Recently, we spoke at WordCamp DC, where we outlined some of the reasons why that is and how we can help WordPress grow throughout the federal government.

First, let’s identify some of the problems keeping WordPress less popular. There are three main reasons that Drupal appears to be the more popular CMS over WordPress.

WordPress developers hear these arguments and concerns about WordPress all the time.

Drupal is more flexible and complex because: “Drupal contains taxonomies, content types, blocks, views, and user/role management”

We hear this a lot, but it’s misleading, because WordPress offers equivalents for each of these.

Drupal handles large volumes of content much better than WordPress

Oftentimes, this seems like a moot argument. Whenever we hear people arguing about X framework versus Y framework, it usually boils down to scalability. But are you ever going to reach the upper limits you are arguing about? And if so, why can’t WordPress handle “large volumes of content”? We developers have seen plenty of WordPress sites with thousands of pages and posts.

Drupal can support thousands of pages and thousands of users

So can WordPress! WordPress.com is a single instance of the WordPress Multisite codebase and serves millions of websites and users. Edublogs.org hosts millions of sites on one WordPress Multisite installation with over 3 million users.

Drupal is more secure than WordPress - WordPress is plagued with vulnerabilities

It’s true that over the years there have been a number of high-profile vulnerabilities, but these vulnerabilities are almost always the result of using a poorly built or out-of-date plugin. WordPress itself is very quick to fix any discovered vulnerabilities. Installing free, low-quality plugins, or just the first plugin you see, is not WordPress’s fault.

That’s a managerial decision, and that needs to change. The same goes for not staying on top of your plugin updates. If you choose not to update your plugins, or you choose to keep a plugin that hasn’t been updated in years, then you take the risk of running something with security vulnerabilities.

Personally, I think this is a perspective people have about websites in general: that once you build it the first time, you can just walk away and not think about it again. We wish that were the case, but like your car, a website needs maintenance to keep it running smoothly.

WordPress was originally built as a blogging platform

Yes, that’s true, but WordPress, just like everything else, has grown and changed. WordPress hasn’t been just a “blogging” platform for years. Our WordPress engineer Kyle Jennings uses WordPress as an application framework to build user-centric web apps.

WordPress goes the extra mile

A lot of the arguments above come down to WordPress’s approach to addressing the same issues that Drupal has addressed, but at the end of the day these discrepancies don’t actually exist. And because WordPress also offers a user-friendliness and intuitive design that, in our opinion, blow Drupal out of the water, we think the real shortcomings lie with Drupal.

Extra Incentives Supporting WordPress

In 2015, the U.S. Digital Service teamed up with 18F to create an official front-end framework for government websites called the U.S. Web Design Standards (USWDS). It is basically Twitter’s Bootstrap but built for the federal government, with a focus on accessibility compliance, making it easy and affordable for agencies to build or rebuild their websites.

Our WordPress developer, Kyle Jennings, built a WordPress theme named Benjamin to these standards. Benjamin makes extensive use of the WordPress Customizer to provide a ton of flexible and thoughtful settings as well as a live preview of your changes.

By using Benjamin and Franklin, agencies can quickly and easily spin up websites that are branded with federally ordained style guidelines and customize their sites to meet their needs at any given time.

In case you missed it, be sure to read our top reasons why the federal government is moving to Drupal, and let us know which content management system you prefer!


SAML-Drupal

SAML authentication used to be painful

In the old days before Drupal 8, SAML authentication in Drupal was a bit of a painful experience. The only real option was using the simplesamlphp_auth module, which involves running a full instance of SimpleSAMLphp alongside your Drupal installation. It is a working solution, but running a separate application just to authenticate against a SAML identity provider is somewhat wasteful. Drupal is already a very capable web application. Why not handle authentication from inside of a Drupal module and call it a day?

SAML authentication in Drupal 8 

The SAML Authentication module was the first SAML module for Drupal 8, and now that it's been backported to Drupal 7, there's no reason to install SimpleSAMLphp ever again!

Another reason that we chose to backport the samlauth module is that we have a number of Drupal 7 and Drupal 8 sites that we manage through Aegir. Since the Drupal 7 version is a 1:1 backport of the Drupal 8 version, all of the same configuration options are available, which makes it very straightforward to centrally manage all of the configuration.

While the backport of the 1.x branch is feature complete as it stands right now, there is definitely more work that can be done. The 8.x-2.x branch expands on the 8.x-1.x branch with new features and more flexible configuration options. These improvements should be backported to the 7.x-2.x branch eventually. At that point, since we'll have feature parity between the Drupal 7 and Drupal 8 versions, an upgrade path from Drupal 7 to Drupal 8 might be a good idea.

In the coming weeks, we will talk about the work we're doing to manage SAML configuration through the Aegir interface. In the meantime, testing, feedback, and patches are always welcome over in the samlauth issue queue.


Welcome to part 2 of our exploration of the Nutch API!

In our last post, we created infrastructure for injecting custom configurations into Nutch via nutchserver. In this post, we will be creating the script that controls crawling those configurations. If you haven’t done so yet, make sure you start the nutchserver:

$ nutch nutchserver
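
If you want to confirm that the server is up and that the configurations you injected in Part 1 are still registered, a quick sanity check against the config endpoint (the same http://localhost:8081/config endpoint we use later in this post) might look like this sketch:

import requests

# List the IDs of every configuration currently registered with nutchserver.
# Assumes nutchserver is running locally on its default port, 8081.
response = requests.get("http://localhost:8081/config")
print(response.json())  # e.g. ["default", "<your-config-uuid>", ...]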

Dynamic Crawling

We’re going to break this up into two files again, one for cron to run and the other holding a class that does the actual interaction with nutchserver. The class file will be Nutch.py and the executor file will be Crawler.py. We’ll start by setting up the structure of our class in Nutch.py:

import time
import requests
from random import randint

class Nutch(object):
    def __init__(self, configId, batchId=None):
        pass
    def runCrawlJob(self, jobType):
        pass

 

We’ll need the requests module again ($ pip install requests on the command line) to post and get from nutchserver. We’ll use time and randint to generate a batch ID later. The runCrawlJob method is what we will call to kick off each step of the crawl.

Next, we’ll get Crawler.py setup.

We’re going to use argparse again to give Crawler.py some options. The file should start like this:

# Import contrib
import requests
import argparse
import random

# Import custom
from Nutch import Nutch

parser = argparse.ArgumentParser(description="Runs nutch crawls.")
parser.add_argument("--configId", help="Define a config ID if you just want to run one specific crawl.")
parser.add_argument("--batchId", help="Define a batch ID if you want to keep track of a particular crawl. Only works in conjunction with --configId, since batches are configuration specific.")
args = parser.parse_args()

We’re offering two optional arguments for this script. We can set --configId to run one specific configuration, and setting --batchId allows us to track a specific crawl for testing or otherwise. Note: with our setup, you must set --configId if you set --batchId. A couple of example invocations are shown below.
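
For reference, typical invocations might look like the following (the config ID here is just a placeholder for one of the UUIDs injected in Part 1):

$ python Crawler.py
$ python Crawler.py --configId <config-uuid>
$ python Crawler.py --configId <config-uuid> --batchId <batch-id>

The first form crawls every injected configuration, the second crawls a single configuration, and the third additionally pins the crawl to a batch ID you choose.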

We’ll need two more things: a function to make calling the crawler easy and logic for calling the function.

We’ll tackle the logic first:

if args.configId:
    # Crawl a single configuration, optionally under a caller-supplied batch ID.
    if args.batchId:
        crawl_config = Nutch(args.configId, args.batchId)
    else:
        crawl_config = Nutch(args.configId)
    crawler(crawl_config)
else:
    # No config ID given: crawl every configuration injected in Part 1.
    configIds = requests.get("http://localhost:8081/config")
    cids = configIds.json()
    random.shuffle(cids)  # randomize crawl order for more diverse early results
    for configId in cids:
        if configId != "default":
            crawler(Nutch(configId))

If a configId is given, we capture it and initialize our Nutch class (from Nutch.py) with that id. If a batchId is also specified, we’ll initialize the class with both. In both cases, we run our crawler function (shown below).

If no configId is specified, we will crawl all of the injected configurations. First, we get all of the config IDs that we injected earlier (see Part 1!). Then, we randomize their order. This step is optional, but we found that we tend to get more diverse results when initially running crawls if Nutch is not running them in a static order. Last, for each config ID, we run our crawl function:

def crawler(nutch):
    inject = nutch.runCrawlJob("INJECT")
    generate = nutch.runCrawlJob("GENERATE")
    fetch = nutch.runCrawlJob("FETCH")
    parse = nutch.runCrawlJob("PARSE")
    updatedb = nutch.runCrawlJob("UPDATEDB")
    index = nutch.runCrawlJob("INDEX")

You might wonder why we’ve split up the crawl process here. This is because later, if we wish, we can use the response from the Nutch job to keep track of metadata about crawl jobs. We will also be splitting up the crawl process in Nutch.py.
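
As a rough illustration (not part of the final script), the crawler function could be extended to collect each job’s response so it can be logged or stored, for example:

def crawler(nutch):
    # Run each crawl step in order and keep the job ID (or error message)
    # returned by runCrawlJob; the print call is a stand-in for whatever
    # logging or storage you prefer.
    steps = ["INJECT", "GENERATE", "FETCH", "PARSE", "UPDATEDB", "INDEX"]
    results = {}
    for step in steps:
        results[step] = nutch.runCrawlJob(step)
    print(results)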

That takes care of Crawler.py. Let’s now fill out our class that actually controls Nutch, Nutch.py. We’ll start by filling out our __init__ constructor:

def __init__(self, configId, batchId=None):
    # Take in arguments
    self.configId = configId
    if batchId:
        self.batchId = batchId
    else:
        randomInt = randint(0, 9999)
        self.currentTime = time.time()
        self.batchId = str(self.currentTime) + "-" + str(randomInt)

    # Job metadata
    config = self._getCrawlConfiguration()
    self.crawlId = "Nutch-Crawl-" + self.configId
    self.seedFile = config["meta.config.seedFile"]

We first take in the arguments and create a batch ID if there is not one.

The batch ID is essential as it links the various steps of the process together. URLs generated under one batch ID must be fetched under the same ID, for example, or they will get lost. The syntax is simple: [Current Unixtime]-[Random 4-digit integer].

We next get some of the important parts of the current configuration that we are crawling and set them for future use.

We’ll query the nutchserver for the current config and extract the seed file name. We also generate a crawlId for the various jobs we’ll run.

Next, we’ll need a series of functions for interacting with nutchserver.

Specifically, we’ll need one to get the crawl configurations, one to create jobs, and one to check the status of a job. The basics of how to interact with the Job API can be found at https://wiki.apache.org/nutch/NutchRESTAPI, though be aware that this page’s documentation is not complete. Since we referenced it above, we’ll start with getting crawl configurations:

def _getCrawlConfiguration(self):
    r = requests.get('http://localhost:8081/config/' + self.configId)
    return r.json()

 

This is pretty simple: we make a request to the server at /config/[configID] and it returns all of the config options.

Next, we’ll get the job status:

def _getJobStatus(self, jobId):
    job = requests.get('http://localhost:8081/job/' + jobId)
    return job.json()

This one is also simple: we make a request to the server at /job/[jobId] and it returns all the info on the job. We’ll need this later to poll the server for the status of a job. We’ll pass it the job ID we get from our create request, shown below:

def _createJob(self, jobType, args):
    job = {'crawlId': self.crawlId, 'type': jobType, 'confId': self.configId, 'args': args}
    r = requests.post('http://localhost:8081/job/create', json=job)
    return r

Same deal as above: the main thing we are doing is making a request to /job/create, passing it some JSON as the body. The requests module has a nice built-in feature that allows you to pass a Python dictionary to a json= parameter, and it will convert it to a JSON string for you and use it as the body of the request.

The dict we are passing has a standard set of parameters for all jobs. We need the crawlId set above; the jobType, which is the crawl step we will pass into this function when we call it; the configId, which is the UUID we made earlier; and last, any job-specific arguments, which we’ll pass in when we call the function.

The last thing we need is the logic for setting up, keeping track of, and resolving job creation:

def runCrawlJob(self, jobType):
    args = {}
    if jobType == 'INJECT':
        args = {'seedDir': self.seedFile}
    elif jobType == "GENERATE":
        args = {"normalize": True,
                "filter": True,
                "crawlId": self.crawlId,
                "batch": self.batchId
                }
    elif jobType in ("FETCH", "PARSE", "UPDATEDB", "INDEX"):
        args = {"crawlId": self.crawlId,
                "batch": self.batchId
                }
    r = self._createJob(jobType, args)
    time.sleep(1)
    job = self._getJobStatus(r.text)
    if job["state"] == "FAILED":
        return job["msg"]
    else:
        while job["state"] == "RUNNING":
            time.sleep(5)
            job = self._getJobStatus(r.text)
            if job["state"] == "FAILED":
                return job["msg"]
    return r.text

First, we’ll create the arguments we’ll pass to job creation.

All of the job types except Inject require a crawlId and batchId. Inject is special in that the only argument it needs is the path to the seed file. Generate has two special options that allow you to enable or disable the normalize and regex URL filters. We’re setting them both on by default.

After we build args, we’ll fire off the create job.

Before we begin checking the status of the job, we’ll sleep the script to give the asynchronous call a second to come back. Then we make a while loop to continuously check the job state. When it finishes without failure, we end by returning the ID.

And we’re finished! There are a few more things of note that I want to mention here. An important aspect of the way Nutch was designed is that it is impossible to know how long a given crawl will take. On the one hand, this means that your scripts could be running for several hours at a time. On the other hand, a crawl could be done in a few minutes. I mention this because when you first start crawling, and also after you have crawled for a long time, you might see Nutch fetching very few links. In the first case, this is because, as I mentioned earlier, Nutch only crawls the links in the seed file at first, and if there are not many hyperlinks on those first pages, it might take two or three crawl cycles before you start seeing a lot of links being fetched. In the latter case, after Nutch finishes crawling all the pages that match your configuration, it will only recrawl those pages after a set interval. You can modify how this process works, but it means that after a while you will see crawls that only fetch a handful of links.

Another helpful note: the Nutch log at /path/to/nutch/runtime/local/logs/hadoop.log is great for following the progress of a crawl. You can set the logging verbosity for most parts of the Nutch process in /path/to/nutch/conf/log4j.properties (if you change this, you will have to rebuild Nutch by running ant runtime at the Nutch root).
