
With so many devices and screen sizes, making your site look great and perform well can be a difficult balancing act. Displaying low-res images on a high-res display can make a site look terrible. On the other hand, serving high-res images to a low-res device can needlessly create a sluggish experience. Responsive images can help.

NOTE: Responsive image techniques are best used for CONTENT images (<img src="">). For background images, you can and should use CSS media queries to control how the image is displayed across various devices.

The goal of responsive images is to provide multiple image sources and any other details the browser may need when deciding on an image.

WARNING: Parts of the syntax are unusual, and the results may seem unpredictable due to the long list of factors at play. These factors include, but are not limited to:

  • Viewport size

  • Image size

  • Image display size

  • Screen Pixel Density

  • Cache

  • Browser

  • *Bandwidth

  • *Browser settings (User preferences)

*Not currently implemented

Introducing the srcset attribute:


Srcset allows us to provide multiple image sources as well as the size of each image. It may seem odd to inform the browser of our image sizes, especially since the browser can easily determine this on its own. However, the reasoning behind this is to give the browser a leg up on the selection process, allowing it to intelligently decide, prior to downloading, which image to request.
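For example, a minimal srcset might look like this (the filenames and widths here are illustrative, not from the original demo):

<img src="photo-small.jpg"
     srcset="photo-small.jpg 480w,
             photo-medium.jpg 960w,
             photo-large.jpg 1920w"
     alt="A photo offered at three widths">

Given this markup, the browser can pick the 480w file on a small phone and the 1920w file on a high-density desktop display, without downloading the others first.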

Taking it one step further with the sizes attribute:


The sizes attribute allows us to inform the browser of the size at which the image will be displayed on the page. It uses syntax similar to CSS media queries, which suits responsive layouts where the image’s display size is determined by the layout.

Again, it may seem odd to inform the browser of something it can easily calculate on its own, or to duplicate information already expressed in the CSS, but providing this info before the browser even has the CSS files will improve performance.
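A sketch of srcset and sizes working together (the breakpoint and slot widths are illustrative):

<img src="photo-small.jpg"
     srcset="photo-small.jpg 480w,
             photo-medium.jpg 960w,
             photo-large.jpg 1920w"
     sizes="(min-width: 800px) 50vw, 100vw"
     alt="A photo that fills half the viewport on wide screens">

Here the browser knows, before layout, that the image will occupy 50% of the viewport on wide screens and the full viewport otherwise, so it can request the smallest source that will still look sharp.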

A bit more control with the picture element:


Lastly, the picture element provides more control, using media queries to explicitly narrow down the browser’s selection of image choices. This is useful for implementing art direction, or when image sizes change so dramatically that they must be uniquely cropped for every layout.
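A sketch of the picture element used for art direction (the crops and breakpoints are illustrative):

<picture>
  <source media="(min-width: 800px)" srcset="crop-wide.jpg">
  <source media="(min-width: 400px)" srcset="crop-square.jpg">
  <img src="crop-closeup.jpg" alt="Fallback for browsers without picture support">
</picture>

Unlike srcset alone, the media conditions here are rules rather than hints: the browser must use the first source whose condition matches.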

Responsive images can be complex. However, when done properly, they can provide the very best experience for any situation.

For more details, check out our codepen:

[codepen_embed height=300 theme_id=1 slug_hash='qdMQdb' user='Mobomo LLC' default_tab='result' animations='run']


With Drupal, both developers and non-developer admins can deploy a long list of robust functionalities right out of the box. This powerful, open-source CMS allows for easy content creation and editing, as well as seamless integration with numerous third-party platforms (including social media and e-commerce). Drupal is highly scalable, cloud-friendly, and highly intuitive. Did we mention it’s effectively priced, too?

In our “Why Drupal?” 3-part series, we’ll highlight some features (many of which you know you need, and others which you may not have even considered) that make Drupal a clear front-runner in the CMS market.

For a personalized synopsis of how your organization’s site can be built on or migrated to Drupal with amazing results, grab a free ticket to Drupal GovCon 2015, where you can speak with one of our site migration experts, or contact us through our website.

_______________________________

SEO + Social Networking:

Unlike other content software, Drupal does not get in the way of SEO or social networking. By using a properly built theme--as well as add-on modules--a highly optimized site can be created. There are even modules that will provide an SEO checklist and monitor the site’s SEO performance. The Metatag module ensures continued support for the latest meta tags used by various social networking sites when content is shared from Drupal.

E-Commerce:

Drupal Commerce is an excellent e-commerce platform that uses Drupal’s native information architecture features. One can easily add desired fields to products and orders without having to write any code. There are numerous add-on modules for reports, order workflows, shipping calculators, payment processors, and other commerce-based tools.

Search:

Drupal’s native search functionality is strong. There is also a Search API module that allows site managers to build custom search widgets with layered search capabilities. Additionally, there are modules that enable integration of third-party search engines, such as Google Search Appliance and Apache Solr.

Third-Party Integration:

Drupal not only allows for the integration of search engines, but a long list of other tools, too. The Feeds module allows Drupal to consume structured data (for example, .xml and .json) from various sources. The consumed content can be manipulated and presented just like content that is created natively in Drupal. Content can also be exposed through a RESTful API using the Services module. The format and structure of the exposed content is also highly configurable, and requires no programming.

Taxonomy + Tagging:

Taxonomy and tagging are core Drupal features. The ability to create categories (dubbed “vocabularies” by Drupal) and then create unlimited terms within each vocabulary is connected to the platform’s robust information architecture. To make taxonomy even easier, Drupal provides a drag-n-drop interface for organizing terms into a hierarchy, if needed. Content managers are able to use vocabularies for various functions, eliminating the need to replicate efforts. For example, a single vocabulary could be used for content tagging, for building complex drop-down lists and user groups, or even for building a menu structure.

Workflows:

There are a few contributed modules that provide workflow functionality in Drupal. They all provide common functionality along with unique features for various use cases. The most popular options are Maestro and Workbench.

Security:

Drupal has a dedicated security team that is very quick to react to vulnerabilities that are found in Drupal core as well as contributed modules. If a security issue is found within a contrib module, the security team will notify the module maintainer and give them a deadline to fix it. If the module does not get fixed by the deadline, the security team will issue an advisory recommending that the module be disabled, and will also classify the module as unsupported.

Cloud, Scalability, and Performance:

Drupal’s architecture makes it incredibly “cloud friendly”. It is easy to create a Drupal site that can be set up to auto-scale (i.e., add more servers during peak traffic times and shut them down when not needed). Some modules integrate with cloud storage such as S3. Further, Drupal is built for caching. By default, Drupal caches content in the database for quick delivery; support for other caching mechanisms (such as Memcache) can be added to make the caching lightning fast.

Multi-Site Deployments:

Drupal is architected to allow multiple sites to share a single codebase. This feature is built in and, unlike WordPress, it does not require any cumbersome add-ons. This can be a tremendous benefit for customers who want to have multiple sites that share similar functionality. There are few--if any--limitations to a multi-site configuration. Each site can have its own modules and themes that are completely separate from the customer’s other sites.

Want to know other amazing functionalities that Drupal has to offer? Stay tuned for the final installment of our 3-part “Why Drupal?” series!


Regardless of industry, staff size, and budget, many of today’s organizations have one thing in common: they’re demanding the best content management systems (CMS) to build their websites on. With requirement lists that can range from 10 to 100 features, an already short list of “best CMS options” shrinks even further once “user-friendly”, “rapidly-deployable”, and “cost-effective” are added to the list.

There is one CMS, though, that not only meets the core criteria of ease-of-use, reasonable pricing, and flexibility, but a long list of other valuable features, too: Drupal.

With Drupal, both developers and non-developer admins can deploy a long list of robust functionalities right out of the box. This powerful, open-source CMS allows for easy content creation and editing, as well as seamless integration with numerous third-party platforms (including social media and e-commerce). Drupal is highly scalable, cloud-friendly, and highly intuitive. Did we mention it’s effectively priced, too?

In our “Why Drupal?” 3-part series, we’ll highlight some features (many of which you know you need, and others which you may not have even considered) that make Drupal a clear front-runner in the CMS market.

For a personalized synopsis of how your organization’s site can be built on or migrated to Drupal with amazing results, grab a free ticket to Drupal GovCon 2015, where you can speak with one of our site migration experts, or contact us through our website.

______

Drupal in Numbers (as of June 2014):

  • Market Presence: 1.5M sites
  • Global Adoption: 228 countries
  • Capabilities: 22,000 modules
  • Community: 80,000 members on Drupal.org
  • Development: 20,000 developers

Open Source:

The benefits of open source are exhaustively detailed all over the Internet. Drupal itself has been open source since its initial release on January 15, 2001. With thousands of developers reviewing and contributing code for nearly 15 years, Drupal has become exceptionally mature. All of the features and functionality outlined in our “Why Drupal?” series can be implemented with open source code.

Startup Velocity:

Similar to WordPress, deploying a Drupal site takes mere minutes, and the amount of out-of-the-box functionality is substantial. While there is a bit of a learning curve with Drupal, an experienced admin (non-developer) can have a small site deployed in a matter of days.

Information Architecture:

The ability to create new content types and add unlimited fields of varying types is a core Drupal feature. Imagine you are building a site that hosts events, and an “Event” content type is needed as part of the information architecture. With out-of-the-box Drupal, you can create the content type with just a few clicks--absolutely no programming required. Further, you can add fields such as event title, event date, event location, and keynote speaker. Each field has a structured data type, which means they aren’t just open text fields. Through contrib modules, there are dozens of other field types, such as mailing address, email address, drop-down list, and more. Worth repeating: no programming is required to create new content types, nor to create new fields and add them to a new content type.

Asset Management:

There are a number of asset management libraries for Drupal, ensuring that users have the flexibility to choose the one that best suits their needs. One newer and increasingly popular asset management module in particular is SCALD (https://www.drupal.org/project/scald). One of the most important differences between SCALD and other asset management tools is that assets are not just files. In fact, files are just one type of asset. Other asset types include YouTube videos, Flickr galleries, tweets, maps, iFrames--even HTML snippets. SCALD also provides a framework for creating new types of assets (called providers). For more information on SCALD, please visit: https://www.drupal.org/node/2101855 and https://www.drupal.org/node/1895554

Curious about the other functionalities Drupal has to offer? Stay tuned for Part 2 of our “Why Drupal?” series!


For Federal Offices of Communication, the act—and art—of balancing websites that both cater to the public and promote the organization’s structure and mission is always top of mind. Accordingly, those partnering with Federal offices must prioritize meeting both needs when designing and building agency sites. On numerous projects, our team has successfully managed to increase usability and deliver user-centric designs while simultaneously building sites that allow our Federal clients to bolster their brand. A sample of results for some clients:

  • A swift 4% increase in first-time visitor overall satisfaction
  • 76% of all mobile users strongly agreeing that the new site made content easier to find
  • 88% of frequently visiting teens being satisfied with the new site

Below are some of the tools we’ve implemented to achieve success:

Navigation and Information Architecture

Treejack is a great usability testing tool that development teams can wield to test the information architecture and navigation of a site before design even begins. It is best used to test the findability of topics in a website using different navigational hierarchies. For one of our projects, both internal and external stakeholders were given 46 tasks to perform using a variety of navigation hierarchies, to find the optimal site organization for both constituent groups.


Usability Testing

For usability testing, our team leverages both Loop11 and Usertesting.com. Using a live, interactive environment, both of these tools allow development teams to gain a deep understanding of user behavior by observing users as they complete a series of tasks and questions on the site and/or mobile app in question. Interactions are captured and then analyzed in comprehensive reports. As an added bonus, Usertesting.com provides video footage of the interaction for review:


http://bit.ly/1rRvEAm

In summary, Federal websites and applications are often designed with too much emphasis on organizational hierarchy and goals, and too little focus on meeting end-users’ needs and expectations. User-Centric Design (UCD) tools can help government agencies buck this trend, however, allowing them to create websites and applications that engage users and maximize their interaction. Ultimately, this results in a sure win-win: Federal agencies’ constituents can experience an efficient, satisfying, and user-friendly design, and—with constituents’ increased engagement—organizations can ensure that their missions and information are communicated effectively. Act balanced.



At the time of this writing (pre-WWDC 2015), there are a number of limitations on what Apple Watch code can do. The primary limitation is that watch apps cannot exist by themselves. It is necessary for the watch app to be a part of a corresponding phone app. Apple has said they will not accept watch apps where the phone app does not do anything itself. Also, watch-only apps (such as watch faces) are not allowed for this same reason—although it’s rumored that this may change after WWDC 2015.

Another Apple Watch limitation is that Core Graphics animations are not supported, but animated GIFs are. Complex layouts (such as overlapping elements) are not allowed. However, elements can be positioned as if they overlap—provided only one element is visible at a time. Using actions such as taps and timers, the visibility of these "overlapping" elements can be changed. This can be implemented to provide a more dynamic interface. Another major limitation (also whispered to change after WWDC 2015) is that watch apps cannot access any of the hardware on the watch including the motion sensor and heart sensor.

Most watch app processing (controller logic) is done on the phone instead of the watch, and some delays are inherent in the Bluetooth communication that transpires as the view (on the watch) talks back to the controller (on the phone). This view/controller split is not obvious in the code, but the watch/phone split is: even though the controller logic runs on the phone side, the watch cannot access anything from the phone except via a specific watch-to-phone request.

One notable feature is the watch app’s ability to explicitly call the phone app with a dictionary and obtain a dictionary response. This functionality allows the developer to then set up a number of client-server style requests, where the watch is the client, and the phone is the server. For example, the watch can request information from—or record information to—the phone. The phone (which has storage and may have Internet connectivity) can then fulfill the request and provide data in response to the watch. This can drive the phone app's UI to provide near-real-time synchronization of the watch app display, as well as the phone app display.
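A minimal sketch of this request/response pattern using the WatchKit 1 APIs available at the time (the dictionary keys and values here are illustrative):

// In the WatchKit extension: ask the phone app for data.
WKInterfaceController.openParentApplication(["request": "latestScore"]) { reply, error in
    if let score = reply?["score"] as? Int {
        // Update the watch interface with the result.
    }
}

// In the phone app's delegate: fulfill the request and hand back a dictionary.
func application(application: UIApplication,
                 handleWatchKitExtensionRequest userInfo: [NSObject : AnyObject]?,
                 reply: (([NSObject : AnyObject]!) -> Void)!) {
    reply(["score": 42])  // e.g., read from storage or the network first
}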

Custom notifications (both local notifications and push notifications) are supported on the watch. These custom notifications can have a somewhat customized layout, as well as a set of custom actions. When the user performs one of these actions, the watch app is started. Apple cautions against using notifications merely as a way to launch the watch app from the phone app; Apple maintains that notifications should provide useful information.

One developer test limitation relates to custom watch notifications (for local notifications). Since watch notifications are only displayed if the phone is asleep, there is no direct way to test custom watch notifications. Xcode does provide a mechanism to test push notifications in the simulator (using a JSON file), but there is no similar mechanism to test local notifications. Still, one can certainly test local notifications with a physical device.


In April 2015, NASA unveiled a brand new look and user experience for NASA.gov. This release revealed a site modernized to 1) work across all devices and screen sizes (responsive web design), 2) eliminate visual clutter, and 3) highlight the continuous flow of news updates, images, and videos.

With its latest site version, NASA—already an established leader in the digital space—has reached even higher heights by being one of the first federal sites to use a “headless” Drupal approach. Though this model was used when the site was initially migrated to Drupal in 2013, this most recent deployment rounded out the endeavor by using the Services module to provide a REST interface, and ember.js for the client-side, front-end framework.

Implementing a “headless” Drupal approach prepares NASA for the future of content management systems (CMS) by:

  1. Leveraging the strength and flexibility of Drupal’s back-end to easily architect content models and ingest content from other sources. As examples:

  • Our team created the concept of an “ubernode”, a content type which homogenizes fields across historically varied content types (features, images, press releases, etc.). Implementing an “ubernode” enables easy integration of content in web services feeds, allowing developers to seamlessly pull multiple content types into a single “latest news” feed. This approach also provides a foundation for the agency to truly embrace the “Create Once, Publish Everywhere” philosophy of content development and syndication to multiple channels, including mobile applications, GovDelivery, iTunes, and other third-party applications.

  • Additionally, the team harnessed Drupal’s power to integrate with other content stores and applications, successfully ingesting content from blogs.nasa.gov, svs.gsfc.nasa.gov, earthobservatory.nasa.gov, www.spc.noaa.gov, etc., and aggregating the sourced content for publication.

  2. Optimizing the front-end by building with a client-side, front-end framework, as opposed to a theme. For this task, our team chose ember.js, distinguished by both its maturity as a framework and its emphasis on convention over configuration. Ember embraces model-view-controller (MVC), and also excels at performance by batching updates to the document object model (DOM) and bindings.

In another stride toward maximizing “Headless” Drupal’s massive potential, we configured the site so that JSON feed records are published to an Amazon S3 bucket as an origin for a content delivery network (CDN), ultimately allowing for a high-security, high-performance, and highly available site.

Below is an example of how the technology stack which we implemented works:

Using ember.js, the NASA.gov home page requests a list of nodes of the latest content to display. Drupal provides this list as a JSON feed of nodes:
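A sketch of what such a feed might look like (the field names are illustrative, not NASA’s actual schema):

{
  "nodes": [
    { "nid": 400001, "type": "ubernode", "title": "Latest Mission Update", "promoted": true },
    { "nid": 400002, "type": "ubernode", "title": "New Image Gallery", "promoted": true }
  ]
}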

Ember then retrieves specific content for each node. Again, Drupal provides this content as a JSON response stored on Amazon S3:
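Again, a purely illustrative sketch of a per-node response:

{
  "nid": 400001,
  "type": "ubernode",
  "title": "Latest Mission Update",
  "body": "<p>Article body rendered for the client...</p>",
  "imageUrl": "https://s3.example.com/feeds/400001-lead.jpg"
}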

Finally, Ember distributes these results into the individual items on the home page.

The result? A NASA.gov architected for the future. It is worth noting that upgrading to Drupal 8 can be done without reconfiguring the ember front-end. Further, migrating to another front-end framework (such as Angular or Backbone) does not require modification of the Drupal CMS.


We were particularly proud to see one of our favorite clients, Peter Dewar, Chief Technology Officer at the District of Columbia Retirement Board (DCRB), participate in a thought-provoking panel on Wearables and the Internet of Things. The session's description as a “visionary panel” proved to be true, as all of the participants outlined the groundbreaking mobile capabilities they foresaw as feasible within the next five years.

Dan Mintz introduces Peter Dewar and other panelists

Mr. Dewar described his vision for implementing Google Glass in the office, at conferences—even for pension fund participants, staff, and Board members. Taking the idea of “smart rooms” even further, he also described a futuristic conference room, which would be able to set up a meeting’s required media (think dial-ins, projectors, etc.) upon the meeting organizer’s entrance or (biometric) authentication.

We at Mobomo were on the edge of our seats thinking about the possibilities, and excited about building them—especially for our government clients. Congrats to Peter Dewar for a great panel session, and thanks to Tom Suder for hosting yet another fantastic summit. We’re looking forward to next year’s—and to the future of mobile (in government!).


Mobomo is home to a Stripe CTF 2.0 security challenge winner.

Stripe's second Capture the Flag ended nearly two weeks ago. If you haven't heard of it, the CTF is a security challenge in which contestants progress by analyzing (web-based, in this iteration) systems and exploiting vulnerabilities to gain access to a secret token which serves as the password to the next level.

The levels in 2.0 tested a practical understanding of at least:

  • SQL injection
  • Code/command injection
  • XSS/CSRF attacks
  • Cryptographic weaknesses
  • Side-channel attacks

I had the pleasure and privilege of completing the challenge. It was a really great exercise in real-world web application security.

The Challenge

The source of each challenge is available online now, so you can review each level in detail if you're curious.

Each contestant had their own isolated instance of each level, and corresponding users and directories in the Linux virtual machines hosting the challenge.

Several of the challenges involved exploiting the full range of the underlying OS, user accounts, background services, and internal network of the CTF servers.

Level 0

Level 0 was basically a low bar to see if you have any place at all in the contest.

A simple web form serves up "secrets" to authorized users. Despite the use of a parameterized query, which avoids the most common form of SQL injection (unescaped quotes), this code is still vulnerable:

var query = 'SELECT * FROM secrets WHERE key LIKE ? || ".%"';
db.all(query, namespace, function(err, secrets) {
  if (err) throw err;
  renderPage(res, {namespace: namespace, secrets: secrets});
});

Can you spot it? By passing a wildcard (%) to the server, it goes straight into the LIKE condition, returning all of the records in the table.

The Takeaway

Don't just count on parameterized SQL queries and proper quote escaping. You must also sanitize user data in a SQL LIKE condition.
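A sketch of one possible fix for the snippet above, using SQLite's ESCAPE clause to neutralize wildcards in the user-supplied namespace:

// Escape backslash, % and _ in the user value before it reaches LIKE.
var safeNamespace = namespace.replace(/[\\%_]/g, '\\$&');
var query = "SELECT * FROM secrets WHERE key LIKE ? || '.%' ESCAPE '\\'";
db.all(query, safeNamespace, function(err, secrets) {
  if (err) throw err;
  renderPage(res, {namespace: namespace, secrets: secrets});
});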

Basically, do not trust user data.

Level 1

The code for level 1 relies on a dangerous PHP function as a shortcut for extracting each HTTP parameter into a variable. This is similar to Rails' "mass assignment" problem, except that it allows an attacker to overwrite variables in the running program!

Simply sending a filename parameter in a request overwrites the previously assigned value of $filename when extract is called.

An attacker can then read any file on the system readable by the PHP process, most obviously the secret combination or password file.
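A condensed sketch of the vulnerable pattern (the default filename is illustrative):

<?php
$filename = 'level01-secrets.txt';  // illustrative default, clobbered below
extract($_GET);                     // ?filename=../password.txt overwrites $filename
echo file_get_contents($filename);
?>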

The Takeaway

Do not pass user data (i.e. PHP's $_GET, or Rails' params) to any routine that modifies the program's executing environment.

Basically, do not trust user data.

Level 2

Another level, another bad PHP script: level 2 demonstrates an indirect vulnerability that is very similar to level 1.

Instead of relying on any particular vulnerability in the execution of the script itself, the weakness lies in trusting uploaded content. For whatever reason, the code grants executable privileges to uploaded files. This opens the system up to command injection attacks.

The simple solution here is to upload a PHP file that prints the password:

<?php echo file_get_contents("../password.txt"); ?>

You can do lots of interesting things when you can execute any PHP script you want, like running system commands (ls, etc.), and this proves to be key in several later levels.

The Takeaway

You must be careful to grant user content in the filesystem only the minimum permissions absolutely necessary.

Basically, do not trust user data.

Level 3

Finally, a classic unescaped SQL injection! I was surprised that it took this long to make an appearance.

The program in level 3 makes the mistake of not escaping quotes in parameters to a SQL query. If the code were to do the password hash comparison in SQL, then we could simply pass:

' OR '1' = '1 

And we would get back a user record.

However, since the hash comparison is done after the fact, we need to return an actual valid hash and salt for the user we are trying to impersonate.

The trick here is to tack a row onto the result set. Since we're already in a WHERE clause, the obvious choice is to add a UNION to the query.

password = foo
username = ' UNION SELECT '1', 'c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2', 'bar' --

The hash here is simply obtained by firing up a Python REPL:

$ python
>>> import hashlib
>>> hashlib.sha256("foobar").hexdigest()
'c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2'

The user ID for the user with the level 4 password is randomly shuffled when the system is initialized, so it must be ascertained through trial and error.

The Takeaway

Always, always, always escape parameters to your database queries. Many data access libraries do this for you through parameterized queries.

All ad-hoc SQL queries in the applications we develop are parameterized through data access libraries.

Basically, do not trust user data.

Level 4

Now things get interesting.

This level actually has another (automated) user sitting out there, logging in and using the web app, who is the real target of our attack.

All we need to do is get karma_fountain to send us some karma, and his password will be sent along with it (by design... this is a strange web app).

The system allows you to register any number of user accounts, and while usernames must match /^\w+$/, there are no restrictions whatsoever on passwords.

Couple that with the fact that passwords are displayed, completely as-is, when sending karma, and we have a quick route to injecting arbitrary JavaScript into the page. This is a basic Cross-Site Scripting, or XSS attack.

All we have to do here is sign up with a password that contains a malicious JavaScript payload, and send a message to our friend karma_fountain. The next time he loads the page, he will see our "password" and his browser will do the work for us.

A suitably devious password for a user attacker might be:

<script>$("input[@name='to']").val("attacker"); $("input[@name='amount']").val("9001"); $("form").submit();</script> 

After logging in and sending a little friendly karma to karma_fountain, the only thing left to do is wait for our karma to come pouring in, along with the password.

The Takeaway

User data is often displayed in web apps. This is unavoidable. And many fields only make sense as completely arbitrary text. The only thing to do is to be sure to correctly escape it so that it cannot be interpreted as a script by a viewer's browser.

We make use of libraries that escape output by default, so that outputting raw text is an extra step that must be done deliberately and only with a valid reason.

In case you haven't noticed a pattern yet, this basically boils down to: do not trust user data.

Level 5

The system in level 5 introduces an authentication scheme that uses an "auth pingback" URL to call an arbitrary endpoint inside the CTF network to authenticate a user.

The authenticating server calls a user-supplied pingback URL to validate a user, looking for a string matching the pattern /[^\w]AUTHENTICATED[^\w]*$/.

The obvious vector for this attack was to use the level 2 server to host a malicious PHP script that authenticates.

However, the page only reveals the level 6 password when you are authenticated on a "level05" server. Fortunately, the authentication script echoes the pingback response regardless of its source.

This means we can use the level05 server itself as the pingback, with a nested authorization request to the level02 server tucked away inside.

The Takeaway

Stick with tried-and-true authentication schemes, and don't get clever. If you feel you need this kind of feature, go with OAuth 1.0.

That said, any networked system is only as secure as the systems it relies on, and in this case the authentication server trusted a compromised server inside its own network.

Level 6

The solution to level 6 was basically an escalated version of the attack in level 4. HTML tags are not properly escaped when user data is printed out as JSON in the page, and so by posting a malicious message we can jump out of the JSON and into our own <script> tag and get to work.

We can post messages for other users to see, so when they load the page they execute our malicious script. We want to construct a script that will cause any user who loads the page to post their own password. We can view the logged-in user's password on their profile page.

Our payload script is just going to leverage jQuery to request the profile page, parse out the user's password, and post it as a message using the message posting API.

The payload wrapper is essentially:

</script><script> ... </script> // 

The only tricky part is that quotes are not allowed in messages, so it's necessary to encode our script without using any literal strings. Obviously some strings are necessary to do anything interesting with the victim's session. The JavaScript function String.fromCharCode() can take a sequence of integers and return a string, though, so you can turn any code into this form easily and then pass it to eval().
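A sketch of how such a payload could be generated (the helper name is ours, not part of the level):

// Encode an arbitrary script as a quote-free eval(String.fromCharCode(...)) call.
function quoteFree(script) {
  var codes = [];
  for (var i = 0; i < script.length; i++) {
    codes.push(script.charCodeAt(i));
  }
  return 'eval(String.fromCharCode(' + codes.join(',') + '))';
}

// quoteFree('alert(1)') => "eval(String.fromCharCode(97,108,101,114,116,40,49,41))"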

The Takeaway

As always, sanitize your output to prevent JavaScript injections. Do I really have to say "don't trust user data" again?

Level 7

The underlying vulnerability in level 7 was new to me; after much Googling, it became apparent that I was looking for a hash length extension attack. I won't go too much into this, because the link explains it better than I can.

In a nutshell:

If you know a message and the result of:

H(secret + message) 

... then you can calculate:

H'(bad_message) 

... where H' is a hand-tweaked hash function, such that the result is equal to:

H(secret + message + padding + bad_message) 

This works because hash functions are state machines that operate in blocks, and the digest of a message is really just the final value of the registers in the state machine for the last block calculated.

This digest gives you a starting point for extending the message and calculating a valid hash without knowing the secret.

The Takeaway

DO NOT, under any circumstances, unless you are truly an expert, try to roll your own cryptography. You will likely get it wrong, perhaps in subtle and hard-to-understand ways.

We rely on open, tested, trusted cryptography here, and fight the temptation to throw home-made cryptography at a problem.

Level 8

This was by far the most challenging level, because of the rather oblique nature of the attack.

The target in level 8 is a password validation system, consisting of one master server and a chain of "chunk" nodes. No single node knows the entire password. The master server takes password validation requests, breaks the supplied password into chunks of 3 characters (trigrams) and sends the first chunk to the first chunk node. If the first chunk is valid, then it continues through the other nodes in the same fashion.

If any chunk fails, a user-supplied "web hook" URL is called with a simple success/failure message. Nothing in the content of this response is useful for anything more than a simple brute force attack.

While brute force is an option, it would require about 10^12 / 2 attempts, and there is likely not enough time in the world for that when the server can only handle a few per second.

However, by analyzing other attributes of the response, we can actually glean enough information to start narrowing things down.

Specifically, the callback URL receives a connection from an auto-generated port. And, since the master server only connects to a subsequent chunk server on success, the typical difference in port numbers is larger when more chunks are correct.

To begin solving this problem, the level 2 server (allowing arbitrary PHP files to be posted) had to be exploited to allow me to run a custom server to act as the web hook. By uploading a PHP script which copied my SSH public key to the correct location, I could then connect to the level 2 server and run any code I wanted.

I came up with several "solutions" that worked on a local copy of the system but failed in the real world, when the algorithm ran up against the "jitter" introduced by all of the other contestants hitting the live servers.

Finally, after clearing my head and stepping through the logic, slowly, again, a solution emerged. It was not entirely natural to come up with, because programming tasks rarely depend on "fuzzy" data such as this, where you can only guess at first and then later eliminate false positives.

The key was to start testing values (001, 002, 003, etc.) for a single chunk, "push" it (save it and start testing the next chunk) when the port difference jumped by 1, and "pop" it (throw the saved value away and start counting where you left off) if the port difference dipped down (indicating a false positive). This cut the runtime to a mere 14 minutes for my final attempt.
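A simplified sketch of that search, in Python (try_password here is a stub standing in for the real webhook round-trip, idealized to return a clean connection count with none of the real-world jitter the live run had to tolerate):

SECRET = "314159265358"  # stand-in for the real 12-digit password

def try_password(chunks):
    # Stub: return how many servers the master would contact for this guess,
    # which is the information the webhook port difference leaked.
    guess = "".join(chunks)
    n = 0
    while n * 3 < len(guess) and guess[n*3:(n+1)*3] == SECRET[n*3:(n+1)*3]:
        n += 1
    return n + 1

def crack(num_chunks=4):
    chunks = []        # confirmed trigrams so far
    candidate = 0      # value being tested for the next position
    while len(chunks) < num_chunks:
        diff = try_password(chunks + ["%03d" % candidate])
        if diff > len(chunks) + 1:    # one more chunk server reached: push
            chunks.append("%03d" % candidate)
            candidate = 0
        elif diff < len(chunks) + 1:  # a saved chunk was a false positive: pop
            candidate = int(chunks.pop()) + 1
        else:
            candidate += 1            # keep counting
    return "".join(chunks)

print(crack())  # => 314159265358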

The Takeaway

Nothing is secure. Everything is vulnerable. Hide yo kids, hide yo wife.

...

Just kidding.

This level was really eye-opening, in that it demonstrated how a seemingly inconsequential bit of noise (auto-generated port numbers) coupled with pattern detection and oblique logic can reveal secrets.

Be careful in what you may reveal to attackers, because so much more can be deduced than what is readily apparent.


Federal agency mobile implementation is an important aspect of the Digital Government Strategy, so last week the Mobile Gov team and Digital Gov University partnered for a “Mobile First” webinar. A “mobile first” approach is one where new websites and applications are designed for mobile devices first, instead of for the traditional desktop. Representatives from government and the private sector spoke about what it means to be “mobile first.” You can listen to the entire webinar, but here are some highlights:

Ken Fang from Mobomo Inc. talked about the importance of a mobile first approach, citing the increasing percentage of traffic routing from mobile devices. Fang proposed three steps to consider when choosing a device and platform to develop for.

  1. Consider your audience's needs, remembering who and what you are making the app for.
  2. Think about what kind of content will be sent out.
  3. Think platform strategy, deciding whether you develop for one device or choose a different route, such as an API or responsive design.



Google is one of the few companies that can play the field when it comes to positioning themselves with apps for both Web and mobile platforms, yet it still believes that the two will converge and that, essentially, the Web will win. Hence, the company is putting effort into not only its Android Marketplace but also its new Chrome Web Store.

While some people feel that Google is competing with itself by promoting both the Chrome and Android app stores, the company said at Google I/O this week that it is keeping an open mind about the future. Google co-founder Sergey Brin admits that right now the market wants native mobile apps, though with the progress of the HTML5 standard in terms of display graphics, and with Web apps capable of going offline, he feels that Web and native mobile apps will converge in the not-too-distant future.

Ultimately, at least for Google, Android will morph into Chrome OS. But before this can happen, it will take more powerful smartphones with higher-resolution screens and the fleshing out of the HTML5 standard.

Want to discuss a mobile Web or native mobile app for your business or projects? Feel free to contact us to discuss your app or mobile campaign needs.
