This is a cross-post from the Timmons Group Innovate Blog. The original post can be found here.
One of the primary purposes of our Wildfire Risk Assessment Portal (WRAP) web app is to let users view and analyze various raster data layers on a map. With so many colors on these raster layers, sometimes upwards of 40, it’s important to provide a legend to give the user a quick and easy way to make sense of the data.
With a traditional legend, it’s difficult to discern between similar shapes and colors, and the user constantly has to look back and forth between the important data on the map and the legend off to the side. There is also often no context to the legend, and no particular order in which items are displayed. It’s just not a great use of valuable screen space, especially on small mobile screens.
For AZWRAP (our WRAP application designed for the Arizona Forestry Service), we tried to come up with a more modern style of legend that took advantage of the dynamic and interactive nature of the web. The result is a very minimal horizontal legend that uses the user’s mouse (or touch) input to display the relevant legend information immediately as the user needs it.
Examples of the Wildfire Risk Assessment Portal dynamic legend in action. There can be over 40 legend items, and it even works with the layer swipe widget!
As the user moves the mouse around the map, the legend automatically inspects the raster pixel data that is under the mouse and highlights the relevant legend item. This is based on the simple principle that when the user moves their mouse to a location, it’s a strong signal that they are interested in whatever is at that location. That signal is all we need to display the information the user wants. No other input is required - it’s like mind reading!
This way, much more of the screen can be used for map data and other valuable information, and the user only sees the legend information that is immediately relevant at that moment. Subtle differences in raster colors can be easily distinguished with the mouse, and even layers with over 40 items can be read pretty easily. Best of all, the user automatically learns the functionality of this legend right when they interact with the map for the first time, with no additional instructions or tutorials. Panning, zooming, or even just accidentally dragging the mouse over the map is enough to trigger the dynamic legend popup, and the immediate feedback ensures the user makes the connection between the mouse input and the legend. We maximize the responsiveness and efficiency of reading the map and eliminate the confusion of the traditional style of legend.
At a high level, our goal is to read the RGB color of the pixel directly under the mouse, match the color with the corresponding legend item, and display the legend popup.
We listen for mouse-move events on the ArcGIS map div to kick off all our pixel inspection logic. First, we need to figure out which ArcGIS layer we’re actually interested in. Generally it is just the current layer that is displayed on the map, but in the case of swiped layers, there is some additional logic required to figure out which layer the mouse is hovering over.
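The original code was CoffeeScript; as a plain-JavaScript sketch, the kickoff step looks something like this (the function and parameter names are illustrative, not the actual WRAP code):

```javascript
// Attach the mousemove handler that drives all of the pixel inspection logic.
// `mapDiv` is the ArcGIS map's container div; `inspect` is whatever function
// runs the layer/tile/pixel lookup described below.
function wireLegendInspection(mapDiv, inspect) {
  mapDiv.addEventListener('mousemove', (event) => {
    // offsetX/offsetY give the mouse position relative to the map div
    inspect(event.offsetX, event.offsetY);
  });
}
```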
Once we have determined the layer that is under the mouse, we have to figure out which individual image tile the mouse is hovering over. In ArcGIS, each layer on the map has its own coordinate space that is determined by a CSS translate transformation. This transformation is adjusted as the user pans around the map to achieve the scrolling effect. Each image inside the layer is further translated into a grid formation.
The mouse position we get from the mouse-move event is relative to the map div, so we need to project the mouse coordinate from div space into layer space and then further into image space. In linear 2D space, this is as easy as subtracting the layer and image translations from our div space mouse coordinate.
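In JavaScript, that projection is just two subtractions per axis — a sketch (assuming the layer and tile CSS translations have already been parsed into `{x, y}` offsets):

```javascript
// Project a mouse point from map-div space into image space by subtracting
// the layer's CSS translation and then the tile image's grid translation.
function divToImagePoint(mousePt, layerTranslate, imageTranslate) {
  return {
    x: mousePt.x - layerTranslate.x - imageTranslate.x,
    y: mousePt.y - layerTranslate.y - imageTranslate.y,
  };
}
```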
Now that we know the image that the mouse is hovering over, and the exact pixel within that image, we can read the RGB pixel data from that image. The only way we can do this in the browser is to use the HTML5 Canvas2D API, by first drawing the HTML image object on the canvas, then reading back the pixel data to get the individual RGB components.
We need to add a hidden canvas element to the HTML. It seems silly to have a 1x1 canvas, but we are only going to draw and read exactly 1 pixel at a time, so that’s all we need!
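A minimal version of that markup might look like this (the `id` is illustrative):

```html
<!-- 1x1 scratch canvas, hidden from view; we only ever draw/read one pixel -->
<canvas id="pixel-canvas" width="1" height="1" style="display: none;"></canvas>
```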
And now the CoffeeScript:
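The original CoffeeScript isn’t reproduced here, but a plain-JavaScript sketch of the same pixel read looks like this (`ctx` would be the hidden 1x1 canvas’s 2D context, obtained via `pixelCanvas.getContext('2d')`; it’s passed in as a parameter here just to keep the function self-contained):

```javascript
// Read the RGB components of pixel (x, y) in `img` using a 1x1 scratch canvas.
function readPixelRGB(ctx, img, x, y) {
  // Draw the image shifted so the target pixel lands at (0, 0) on the canvas
  ctx.drawImage(img, -x, -y);
  // Read back that single pixel's RGBA data
  const data = ctx.getImageData(0, 0, 1, 1).data;
  return { r: data[0], g: data[1], b: data[2] };
}
```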
Finally, with the raw RGB values, we can figure out which legend item to pop up to the user. In our WRAP code, we have JSON (although I suppose it’s CSON for coffeescript) objects that define the legend values for each raster layer. We use that data to map the inspected RGB value to a legend item.
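A sketch of that mapping in plain JavaScript (the legend data and labels here are made up for illustration — the real WRAP definitions are per-layer CSON objects):

```javascript
// Hypothetical legend definition: each item pairs a label with its raster RGB color.
const legendItems = [
  { label: 'Low risk',      color: { r: 56,  g: 168, b: 0 } },
  { label: 'Moderate risk', color: { r: 255, g: 255, b: 0 } },
  { label: 'High risk',     color: { r: 255, g: 0,   b: 0 } },
];

// Build a lookup table keyed on "r,g,b" so each inspected pixel is an O(1) match.
function buildColorIndex(items) {
  const index = {};
  for (const item of items) {
    const { r, g, b } = item.color;
    index[`${r},${g},${b}`] = item;
  }
  return index;
}

function matchLegendItem(index, r, g, b) {
  // null when the pixel is background/no-data and matches no legend color
  return index[`${r},${g},${b}`] || null;
}
```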
All kinds of effects can be triggered after matching the RGB value to a legend item. In our code we simply set a CSS class on the appropriate legend item div, and let the CSS animations do the rest. The possibilities are endless! Try to come up with a presentation of the legend that best fits your app’s layout and gives the user the most relevant information.
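The class-toggling step is tiny — something like this sketch, where the class name is illustrative and the CSS transitions do the visual work:

```javascript
// Toggle a CSS class on the matched legend item's div and clear it from the
// rest; CSS animations/transitions handle the actual highlight effect.
function highlightLegendItem(itemDivs, matchedIndex) {
  itemDivs.forEach((div, i) => {
    div.classList.toggle('legend-item-active', i === matchedIndex);
  });
}
```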
CORS and IE10
You will likely get an error about “cross-origin data” when trying to read from the canvas with getImageData: “Unable to get image data from canvas because the canvas has been tainted by cross-origin data.”
This is because Canvas2D does not allow reading data after a cross-origin image (one that has been obtained from an external source) has been drawn onto the canvas. This includes map tiles, which are likely served from a separate server (a MapProxy server in our case) than the web server that the app is running on. However, if the server is configured to allow CORS and you obtain the images with the proper headers, then the browser knows the images are safe and allows you to read their pixels using the canvas. There’s a nice website dedicated to instructions for enabling CORS on servers here.
On the front-end, when fetching images, you set a flag on the HTML object to tell the browser to use CORS headers:
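That flag is the image’s `crossOrigin` property, and it must be set before `src` so the request goes out with CORS headers — a sketch:

```javascript
// Create (or configure) an image that the browser fetches with CORS headers.
// `img` is injectable for testing; it defaults to a fresh DOM Image.
function makeCORSImage(src, img) {
  img = img || new Image();
  img.crossOrigin = 'anonymous'; // must be set before src
  img.src = src;
  return img;
}
```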
So we can’t trust our mapping API to obtain CORS images properly, but we can easily re-download the image ourselves:
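A sketch of that re-download (again, plain JavaScript standing in for the original CoffeeScript; the `createImage` factory parameter is only there so the function can be exercised outside a browser):

```javascript
// Re-request a tile image with CORS enabled, then hand the CORS-safe image
// to the callback once it has loaded.
function reloadWithCORS(src, onLoad, createImage) {
  const img = (createImage || (() => new Image()))();
  img.crossOrigin = 'anonymous'; // ask the browser to make a CORS request
  img.onload = () => onLoad(img); // the callback-within-a-callback part
  img.src = src;                  // setting src kicks off the request
}
```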
A callback-within-a-callback like this is a bit uglier than before, but it’s not so bad. The big trouble comes when you discover that IE10 does not support CORS for images drawn on the canvas. If you only need to support IE11 or higher, consider yourself lucky and ignore the rest of this post! Sadly, AZWRAP required IE10 support, but fortunately there is a (very ugly) workaround.
Instead of setting the src URL of our CORS image directly, we have to request the image as a blob and use the blob URL on the image. This is such an outdated method of making AJAX requests that jQuery’s ajax() doesn’t even have an option to request a blob, so we have to manually construct an XHR request the long way:
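A sketch of that manual request (names are illustrative; the `createXHR` factory is injectable only so the logic can be tested outside a browser — in the app it would default to the real `XMLHttpRequest`):

```javascript
// Fetch an image as a blob via raw XHR, then hand back a blob: object URL
// that an <img> can load same-origin, sidestepping canvas tainting on IE10.
function fetchImageAsBlobURL(url, onBlobURL, createXHR) {
  const xhr = (createXHR || (() => new XMLHttpRequest()))();
  xhr.open('GET', url, true);
  xhr.responseType = 'blob'; // jQuery's ajax() has no option for this
  xhr.onload = () => {
    if (xhr.status === 200) {
      // Wrap the blob in an object URL for the image's src
      onBlobURL(URL.createObjectURL(xhr.response));
    }
  };
  xhr.send();
}
```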
And with that, we can finally read pixel data of images even on browsers that don’t support CORS!
If you have any questions, comments or feedback on my writing/blogging skills, feel free to email me at [email protected] or tweet me @foolmoron!