
Drawing a Tube Map – how hard can it be?! March 4, 2013

Posted by IaninSheffield in Resources, Tools.

Whilst I was working through my 365 project, it struck me that, once concluded, for it to remain useful as a resource, potential explorers would need a way of interrogating the posts. From the outset I made sure each post outlining a Web2.0 tool was tagged appropriately, with both a SAMR level and one or more categories stating the affordances of the tool. The tags could then be used to filter posts corresponding to the type of tool a viewer might be seeking. However, none of that could provide an overview and summary of all 366 posts and tools; for that I’d been considering using an infographic of some sort.

Right from the outset I’d had in mind a graphic along the lines of the London Underground map, which has been morphed for use in other ways: Simon Patterson’s The Great Bear swapped the tube lines for fields or spheres of endeavour and the stations for people known in those spheres (scientists, actors, authors etc.), while Tubular Fells by Peter Burgess used similar design principles to the Tube Map, but changed it to suit the Lakeland Fells and walking routes. So in my version, Web Tube.0, the Web2.0 tools would be the stations, the categories of tools would be the lines and the SAMR categories would become the zones. How hard could it be?!

Well, it turned out … very! If I chose to use the London Underground system as a template (leaving aside the potential copyright pitfalls), with only 270 stations spread across 11 lines it would fall well short of my needs: I had 366 tools spread across 34 categories. So I tried applying similar design principles (as Peter Burgess did in Tubular Fells), but quickly came to recognise the scale of the task. The problem wasn’t drawing 366 stations on 34 lines; the real problem was where a tool fell into more than one category and the lines had to intersect … and some tools fell into four categories! In fact, about half the tools fell into more than one category, compared with fewer than a third of stations on the Underground map. Then, superimposed on top of all that, would be the zoning for the SAMR levels! Now all of that is doubtless doable, and I suspect a programmer could probably come up with a solution, but surprisingly an extensive search of the Web found few possibilities. Most ‘solutions’ suggest using graphics programs like Inkscape, Illustrator or FreeHand, but they seem to miss the problem: it’s not the graphical issues, it’s the computation that’s at the very heart of it. I came across a PhD thesis, “Automated drawing of metro maps”, which outlined the nature of the problem as follows:

Given a planar graph G of maximum degree 8 with its embedding and vertex locations (e.g. the physical location of the tracks and stations of a metro system) and a set L of paths or cycles in G (e.g. metro lines) such that each edge of G belongs to at least one element of L, draw G and L nicely. We first specify the niceness of a drawing by listing a number of hard and soft constraints. Then we show that it is NP-complete to decide whether a drawing of G satisfying all hard constraints exists. In spite of the hardness of the problem we present a mixed-integer linear program (MIP) which always finds a drawing that fulfils all hard constraints (if such a drawing exists) and optimizes a weighted sum of costs corresponding to the soft constraints. We also describe some heuristics that speed up the MIP and we show how to include vertex labels in the drawing. We have implemented the MIP, the heuristics and the vertex labelling.
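To get a feel for why the intersections, rather than the drawing, are the hard part, here is a minimal Python sketch (the tool-to-category mapping below is a hypothetical sample, not the real 366 data): each tool appearing on more than one line is an “interchange station”, and every pair of categories shared by a tool is a crossing the layout must somehow accommodate.

```python
from itertools import combinations
from collections import Counter

# Hypothetical sample: each tool mapped to the categories ("lines") it belongs to.
tools = {
    "AudioBoo": ["podcasting"],
    "Popplet": ["concept mapping", "presentation"],
    "Glogster": ["poster", "presentation", "multimedia"],
    "Wallwisher": ["collaboration", "notes"],
}

# Tools on more than one line are the "interchange stations" that force
# category lines to meet on the map.
interchanges = [name for name, cats in tools.items() if len(cats) > 1]

# Every pair of categories shared by a tool is a required line crossing.
crossings = Counter()
for cats in tools.values():
    for a, b in combinations(sorted(cats), 2):
        crossings[(a, b)] += 1

print(interchanges)             # ['Popplet', 'Glogster', 'Wallwisher']
print(sum(crossings.values()))  # 5
```

With 366 tools, roughly half of them multi-category and some in four categories, the number of forced crossings grows quickly; a tool in four categories alone forces six pairwise crossings, which is exactly the combinatorial load the thesis formalises.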

I wasn’t reassured when further investigation led me to an application called Context Free, which “… is a program that generates images from written instructions called a grammar”. Err, yes, well … I certainly found an example in the gallery that might help: 24 stations on 3 lines, with only 4 points of intersection (of only 2 stations each), required in excess of 750 lines of code (albeit some blank spacers and others single characters)!

Which is when I decided Web Tube.0 would go onto the back burner. I considered instead a dartboard or mandala-style diagram, which would adequately provide 34 sectors for the categories and outward-protruding arcs for the SAMR levels, but I couldn’t easily resolve the issue of overlap where a tool spans several categories. I toyed briefly with the possibility of drawing a concept map (plenty of applications there), but once more it was the issue of the intersections. Recognising that the points of intersection were proving the stumbling block to this form of representation caused me to shift perspective, and whilst considering, but rejecting, the Periodic Table (overlapping categories once more!), I thought there might be merit in a grid-style layout. And that’s when I settled on:

366 Web2.0

cc licensed ( BY NC SA ) flickr photo by ianguest: http://flickr.com/photos/ianinsheffield/8504100927/

It’s a simple alphabetic layout following the chronology of the posts, and is only that shape for ease of viewing in its entirety; it could easily be a single row sweeping out an extended linear area … not so good for Web viewing perhaps? At a glance, then, the tools offering a particular SAMR level are easily identified. Finding tools in a particular category, concept mapping tools for example, requires a little deeper interrogation and perhaps a higher zoom level. Whilst the colour coding helps here, some colours tend to merge with their neighbours and are less easy to tell apart unless side by side.

I designed and created it using Inkscape, an open-source, freely downloadable vector graphics editor … which means the output can be scaled without the pixelation you get with a bitmap editor like Photoshop or GIMP. It also provides another useful feature for those viewing the output SVG file in certain browsers, namely the capacity to add interactivity to the image. Moving forward, it’s my intention to make it possible to filter the complete toolset down to just the ones you’re looking for with a single click; clicking on ‘Survey’, for example, will hide all tools except the survey ones. I know it’s possible, but I suspect it will take a few more hours’ work … assuming the audience wants it?! I’d be grateful for your feedback.
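For what it’s worth, the filtering idea can be sketched in principle. The Python example below (all ids, category names and layout are hypothetical, not taken from the actual 366 graphic) generates a toy SVG in which each tool carries its categories as CSS classes, and an embedded script hides every tool whose class list doesn’t include the clicked legend category. This relies on the browser running scripts inside standalone SVG files, which is why only certain viewers support it.

```python
# Script embedded in the SVG: hide every .tool element that lacks the
# clicked category class. (classList on SVG elements needs a modern browser.)
SCRIPT = """
function filterBy(category) {
  var tools = document.querySelectorAll('.tool');
  for (var i = 0; i < tools.length; i++) {
    var match = tools[i].classList.contains(category);
    tools[i].style.display = match ? 'inline' : 'none';
  }
}
"""

def make_svg(tools):
    """Build a toy SVG: one rect per tool (classes = its categories),
    plus clickable legend labels that trigger the filter."""
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">',
             '<script type="text/ecmascript"><![CDATA[%s]]></script>' % SCRIPT]
    for i, (name, cats) in enumerate(tools.items()):
        classes = "tool " + " ".join(cats)
        parts.append('<rect class="%s" x="%d" y="10" width="30" height="30"/>'
                     % (classes, 10 + 40 * i))
    # Legend entries trigger the filter on click.
    categories = sorted({c for cs in tools.values() for c in cs})
    for j, cat in enumerate(categories):
        parts.append('<text x="10" y="%d" onclick="filterBy(\'%s\')">%s</text>'
                     % (60 + 20 * j, cat, cat))
    parts.append('</svg>')
    return "\n".join(parts)

svg = make_svg({"PollEverywhere": ["survey"], "Popplet": ["mindmap"]})
```

In practice the same effect could be had by hand-editing the Inkscape output in its XML editor: add the class attributes to the existing groups and paste the script element in once.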

366-366 posted. All has been said and done. January 1, 2013

Posted by IaninSheffield in Tools, Web 2.0.

As 2012 unfurled, I began a 365 Project, though one with a twist – 366Web2.0. Here then are my reflections on the project.


cc licensed ( BY ) flickr photo by Asja.: http://flickr.com/photos/asjaboros/6520949843/


In total, 366 AudioBoo podcasts (actually 368, because two applications merited extending to two podcasts) were recorded, representing almost 16 hours of audio. The reality was a little more demanding, however, since many applications were new to me and needed a degree of exploration before producing the podcast. Each podcast also required assembly of a blog post through which to deliver it and, though brief, wherever possible a supplementary resource was sourced and added: sometimes a video, sometimes an artefact from the tool. All told then, preparation, recording and writing the blog for each post took between 15 and 30 minutes, sometimes longer. In other words, producing 366 occupied over 120 hours, i.e. three work weeks.
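The back-of-envelope sums above check out if you assume roughly 20 minutes per post, a rough midpoint of the quoted 15–30 minute range:

```python
# Rough check of the time estimate for 366 daily posts.
posts = 366
minutes_per_post = 20              # assumed average of the 15-30 minute range
total_hours = posts * minutes_per_post / 60
work_weeks = total_hours / 40      # assuming a 40-hour work week

print(total_hours)  # 122.0 -> "over 120 hours"
print(work_weeks)   # 3.05  -> roughly three work weeks
```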

Bang for buck?

Was it all worth it? Did the benefits outweigh the costs? I guess there were two beneficiaries. The first: anyone who might have chanced on a post, found something of use, then took that away to develop further. Unfortunately I’ve no way of knowing the extent to which that happened, since the viewing figures data from Posterous are notoriously unreliable and I find it hard to believe that any of the Boos I made attracted over 100 listens (the top Boo apparently got 832!). Even then, with so few comments posted, it’s difficult to know whether anyone found anything of value in the podcasts or blog posts. I must say here, though, that I’m grateful to John Johnston & David Noble on Edutalk for their continued support and encouragement … and am honoured to be included as a member of the Edutalk community.

I can write with a little more confidence about the second beneficiary: me. Right from the start I wanted to learn a little more about podcasting and whether I had the ‘right stuff’ to produce them. Well, there’s no question that I learned something! I certainly find it difficult to speak with the ease and fluency that most of the other podcasters I listen to on Edutalk, EdtechCrew, Tightwad Tech, EdTech Talk and elsewhere seem to manage. But part of that’s the format, I guess; I’m not loquacious enough to talk into a mic on my own for long. (I couldn’t help but marvel at a recent podcast (Episode 397) from Wes Fryer, in which he spoke with clarity and focus for almost an hour solid … whilst driving home from a conference!) I guess I’m more of a listener and responder, perhaps better suited to dialogue than monologue.

I also learned a little more than I normally would about the new tools I came across. Usually I’d simply bookmark and tag them for future use, but if I was going to be talking about them in 366, I needed to explore them a little more fully. As a result I found several that have now become part of my ‘go to’ toolset that I return to and refer others to regularly; that rarely happens with tools I don’t take the time to explore more fully.

What might I have done differently?

Although I decided at the project outset what I ought to include in each podcast, I soon departed from that and tended to ‘wing it’, often being more descriptive than I was analytical or critical, more so than I might have liked. I sometimes wondered whether the supporting blog post was really necessary; could I have done the majority of it through AudioBoo by making greater use of the description and tag fields? AudioBoo has also recently introduced ‘Boards’, which can hold Boos with a common theme: a potentially useful addition for grouping my Boos, by tool type perhaps. I did feel, however, that including a video with each podcast (where possible) provided a different perspective and might have been more appropriate for those who prefer visual explanations rather than just audio.

Another area which gave me pause for thought was the attempt to categorise each tool using the SAMR model. Trying to pigeonhole a tool in this way is not without problems, as the blog posts on the ‘About’ page explain. My hope was that offering a tentative level might spur debate about the ways in which the tool might be used and how that could be interpreted, and challenge us to think beyond a surface level of simple usage to a deeper appreciation and understanding of how we might use it. Why am I using this tool, and am I (and my students) getting the most from it?

End of the line?

The year is done and the project over. Or maybe not. Given the degree of commitment required to produce a podcast/post per day, I’m not sure I could sustain that into 2013; I also have other avenues I want to explore. However, new tools continue to emerge, and in order to better understand their potential I do need to give them more than a cursory glance. Perhaps then this offers a way to extend 366, using it to review and record Web2.0 tools on a continuing basis, albeit with a less demanding schedule.

Maybe there is life in the old dog yet?