Back in May, when there was a rash of tornadoes across the southern U.S., we published an animated map of all the activity. Rather than just show the storms that evolved into tornadoes, we chose to also show wind and hail reports in the area during a 3-4 day period. The data came from the USGS and I was responsible for the Flash programming and data collection.
I recently worked with the California Health Care Foundation to create a series of data visualizations about the history of California health insurance. Based on their data, I created three separate charts: a breakdown of the insured/uninsured population, a timeline of changes to the state's health care system, and a chart showing the consolidation of the various health insurance providers over several decades.
They have the entire PDF available to download on their site.
The data was quite interesting once I dug into it. Health insurance is something that affects everyone in one way or another, but it can be a difficult subject to get a grip on. I found the consolidation chart especially interesting. On the left, you see the nineteen separate CA health insurance providers that existed in 1985. On the right, you see the six that exist today.
A large earthquake usually triggers a surprising number of aftershocks, often more than the affected population realizes. In the week or so following the Japan earthquake this spring, there were hundreds of aftershocks. Most were much smaller than the main 9.0 magnitude quake, but still significant.
At MSNBC.com we created a time-lapse of the aftershocks using USGS data to show the scale of the tectonic activity. The size of each circle indicates the magnitude of the quake and its color indicates the depth beneath the earth's surface – red means it's near the surface, purple/blue means it's farther down.
When the Chile earthquake occurred in February 2010, we created a similar map.
UPDATE: On June 1, 2012, Washington State began allowing private-sector businesses to sell liquor and closed all state-owned stores. The Liquor Control Board web site I used in this example was closed along with them. I’ll leave this tutorial on the site, but only as a reference. If I get time to post a new Scrapy tutorial, I’ll post an update here as well.
This tutorial will walk you through a web-scraping project from scratch using Scrapy, a Python scraping framework. By the end, you’ll be able to:
- Create a spider from scratch using both GET and POST requests
- Handle the responses via Items and Pipelines
- Export your data to multiple files (in this case, CSV)
- Use XPath selectors to find specific elements within an HTML document
What you’ll need:
I won't post any install instructions here; there is a good guide on the Scrapy site already. If you are having trouble getting libraries to install (especially on a Mac), hit me up in the comments or by email and I'll do my best to help. Let's get started.
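As a taste of the Items-and-Pipelines step, here is a minimal sketch of a CSV-export pipeline. It follows Scrapy's item-pipeline hook names (open_spider, process_item, close_spider) but uses only the standard library's csv module so it runs on its own; in a real Scrapy project a class like this would live in pipelines.py and be registered in settings.py. The item fields below are placeholders, not the real site's data.

```python
import csv

class CsvExportPipeline:
    """Writes each scraped item (a dict) to one CSV file per spider run."""

    def open_spider(self, spider):
        # One output file named after the spider, e.g. stores.csv.
        self.file = open(f"{spider.name}.csv", "w", newline="")
        self.writer = None

    def process_item(self, item, spider):
        # Lazily build the header from the first item's keys.
        if self.writer is None:
            self.writer = csv.DictWriter(self.file, fieldnames=list(item))
            self.writer.writeheader()
        self.writer.writerow(item)
        return item  # pass the item along to any later pipelines

    def close_spider(self, spider):
        self.file.close()
```

Scrapy calls these three hooks for you as the spider runs; each item a spider yields flows through process_item before anything else sees it.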
Maps with hundreds or thousands of points can be difficult for users to read and understand. With too many icons or markers, they are bound to overlap each other, users won't be able to roll over the marker they want, and big chunks of the map will be completely obscured. A better option is clustering: grouping nearby markers together within a single parent marker.
I’ve found some similar classes online before, but they weren’t quite what I wanted. This class doesn’t draw anything on the stage until all the clusters have been evaluated and it’s all done via an external util class, rather than being hardwired into a map application. The child objects are saved within the parent cluster, so they can be referenced for other uses like tooltip data or changing the marker’s properties based on content.
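The idea translates roughly like this. The original is an AS3 util class that clusters by proximity; this Python sketch uses a simpler grid-based grouping instead, but keeps the key feature described above: each cluster holds on to its child markers so they can be referenced later for tooltips or styling.

```python
from collections import defaultdict

def cluster_markers(markers, cell_size=50):
    """Group (x, y) screen coordinates into clusters.

    markers: list of (x, y) pixel positions.
    cell_size: side length of the grid cells used for grouping --
    a stand-in for the distance threshold a proximity cluster uses.
    """
    # Bucket markers by which grid cell they fall in.
    cells = defaultdict(list)
    for x, y in markers:
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y))

    # One cluster per occupied cell, positioned at the centroid of
    # its children. The children ride along inside the cluster.
    clusters = []
    for children in cells.values():
        cx = sum(p[0] for p in children) / len(children)
        cy = sum(p[1] for p in children) / len(children)
        clusters.append({"x": cx, "y": cy, "children": children})
    return clusters
```

Nothing gets drawn during this pass; like the AS3 class, you evaluate all the clusters first and only then place one parent marker per cluster on the stage.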
‘Geek comedian’ Tom Scott has an awesome set of journalism warning labels that you can download, print and use to notify readers of a local disreputable news source.
A List Apart has a great post today about flexible web design to fit your site across multiple platforms. The general idea isn’t new, but difficult to implement, as all web developers know.
You can always do a browser detect and serve a corresponding CSS file, but you are still designing for specific layouts: three columns for 1024-pixel screens, no side nav for iPhone screens, and so on. Instead, the post suggests using layouts that are truly fluid and platform agnostic, yet still able to adapt themselves automatically.
Rather than tailoring disconnected designs to each of an ever-increasing number of web devices, we can treat them as facets of the same experience. We can design for an optimal viewing experience, but embed standards-based technologies into our designs to make them not only more flexible, but more adaptive to the media that renders them.
I have a feeling some of these principles are going to become very useful for some upcoming projects where I work. As mobile devices continue their march, it becomes increasingly important for us to design for every platform simultaneously. If you aren't doing the same at your organization, it's time to take a serious look at your company's priorities.
A few days ago, Steve Jobs decided to clarify his position on Flash (as if we didn’t already know). According to Jobs, it’s too processor intensive, too proprietary, lacks touch support, etc. There wasn’t too much in the letter we haven’t heard from Apple before. So I was a little surprised at the strong reaction (both positive and negative).
I'm not going to pick through every point in Jobs' post; there's enough of that on the web already. But there is one point I'll make before slinking back to my AS3-ridden world of "open" closed standards. Many of Apple's issues with Flash are seriously affected by one thing: the quality of the programming behind the Flash application.
Certainly reliability, security and performance are affected by the quality of the code, but even battery life is potentially changed. And since Flash is largely geared toward graphic-centric professionals with some basic coding skills, there are some really gnarly apps out there. You’ve seen ‘em – the preloader percentage goes to 10 decimal places; the forward arrows work, but the back arrows don’t; for some reason there are megabytes and megabytes of raw data included in the SWF file. Sure, the author meant well, but they don’t have the time or the know-how to build efficient, clean apps that don’t crash machines.
A lot of sites started creating their own Recovery Act applications once the government started reporting spending data in October 2009. There are a lot of interesting applications (ProPublica, HiveGroup, Recovery.gov, CNN Money), but when you look closely at the way the government tracks Recovery spending, you start to notice (big) holes in the system.