21st Century Learners – Myth or Reality? April 26, 2015. Posted by IaninSheffield in Musings, Teaching Idea.
Tags: elearning, Google maps, learning, lessons
Earlier this week I was working with a colleague and her Year 6 group (10 year olds), introducing Google Maps – how to create your own customised map and add your own content. The group is shortly to visit Eyam on a field trip and we were exploring an alternative way to synthesise their learning from the trip, which has both a History and a Geography focus. Rather than presenting their findings in a conventional way, a customised map enables them to root those findings in the geographical context from which they arose. Although familiar with Google Docs, Slides and Sheets, creating a Google map constituted progression in their digital skills. This lesson, then, was about laying the foundational skills to enable them to work in the new environment, so the aims included creating a blank map, sharing it with their partner so both could edit, locating a specific point and adding a placemarker, editing the placemarker, adding text and an image, and adding a line to represent a route from school to Eyam (then finding a shorter one). An extension task involved exporting the map to Google Earth and ‘flying’ along their route(s). If you’ve never used Google Maps for anything other than searching for a place, then all of the above is likely to be quite new and (other than the notion of sharing) involves a different set of features from those commonly found in other applications. So in addition to teacher-led demonstrations of the tasks they were to undertake, I also produced a set of instructions to follow; a recipe book if you will. What happened next was quite interesting.
When the class began the activity (working in pairs), few bothered to refer to the instructions I had provided; they dived straight in, trying different parts of the available interface until they made headway. Those adopting the ‘trial and error’ method made faster progress than those following the instructions, up to the point where they got completely stuck; then they floundered, trying to find the relevant point in the instructions (perhaps I need to rethink the way the instructions are compiled?). Once back on track, they raced ahead once more. They also made more mistakes, but seemed comfortable with that, happy to retry an attempt which had gone awry. Fascinating and delightful to see such resilience.
What intrigued and surprised me, though it probably shouldn’t have, was how different these ten-year-olds were from the teacher groups with whom I often work. If I’d undertaken a similar activity with colleagues, I’m fairly sure (albeit anecdotally) that the proportions of those who begin with the instructions and those who open with experimentation would be reversed. Which raises the question: do young people these days approach a new task with more abandon than their older counterparts? Is this evidence for 21st Century Learners being somehow different, i.e. that the digital era into which they were born is affecting their attitude? Or perhaps younger people are simply more experimental and happier to take risks, where time-poor teachers would rather adopt the low-risk strategy in order to ensure successful completion? If the two groups are not fundamentally different and all I’m seeing is age-related, developmental difference, I wonder where the transition from one approach to the other takes place and whether it’s an incremental change, stretched out over time. As ten-year-olds, they’ve little experience of high-stakes testing; perhaps that’s the point at which a trial-and-error approach becomes more of a liability and has to be dropped in favour of the safer, low-risk option? Sadly I don’t have the data to answer these questions, but that one lesson prompted an awful lot of pondering!
Footnote. Two days later I was working with another class when a couple of students came by and said they couldn’t find the Google maps they had created last lesson. I couldn’t immediately leave the class I was supporting to help, but suggested they look in the instructions. They had; without joy. Fifteen minutes later, when I could pop across to their class, they were all back on track, maps open and immersed in their activities. It transpired that my instructions had lapsed owing to the update to the new version of Google Maps. Although initially flummoxed, their ‘Try. Fail. Fail better.’ approach helped them to get up and running independently … and to be able to explain to me how my instructions needed amending! I wonder if … more mature learners would have shown such persistence and adaptability?
In this TED Talk, Tim Harford talks about using a trial and error approach, which others discuss in more detail here.
MFD – most frustrating device? September 7, 2013. Posted by IaninSheffield in Musings, Technology.
Tags: learning, learning theory
During the summer break we replaced a number of elderly and inefficient photocopiers and a couple of printers with MFDs (multifunction devices). Most were colour, some mono, but all print, copy and scan. In one more attempt to cut down wastage, we elected to include integrated ‘pull’ functionality; this means that nothing emerges from the device until the sender goes to the machine and ‘pulls’ down their job(s) from a print queue. Hopefully this will reduce the quantity of printing sent to printers/copiers which is never collected. The upshot, of course, is that in order to ‘release’ your job from the print queue, you have to authenticate yourself in some way. We could have opted for swipe cards (problems likely to occur due to loss) or PIN authentication (yet one more piece of data to remember), but settled on Active Directory integration, meaning people just enter the username and password they would normally use to log onto a computer or into our network remotely. This is done on a touchscreen interface on the device itself, the screen being about three times the size of a normal smartphone’s. Because there is a single queue for all devices, if someone goes to pick up their printing from a device which is busy, they can simply switch to another MFD and pick up their job there.
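The pull-release workflow is simple enough to sketch in code. What follows is purely an illustrative model of the idea, not our actual print server: the class name, the plain dictionary standing in for Active Directory and the job format are all my inventions.

```python
from collections import defaultdict

class PullPrintQueue:
    """A single held-job queue shared by every MFD: nothing emerges
    until the sender authenticates at a device and releases the job."""

    def __init__(self, directory):
        # 'directory' stands in for Active Directory: username -> password.
        self._directory = directory
        self._held = defaultdict(list)  # username -> jobs awaiting release

    def submit(self, username, document):
        """Print from a workstation: the job is held, not printed."""
        self._held[username].append(document)

    def release(self, username, password):
        """Entered at any device's touchscreen. Returns the jobs to print,
        or nothing if the credentials fail (the jobs stay held)."""
        if self._directory.get(username) != password:
            return []
        jobs, self._held[username] = self._held[username], []
        return jobs

queue = PullPrintQueue({"jsmith": "s3cret"})
queue.submit("jsmith", "worksheet.pdf")
queue.release("jsmith", "oops")    # wrong password: the job stays held
queue.release("jsmith", "s3cret")  # authenticated: the job emerges here
```

Because the queue is shared rather than per-device, a release at any MFD serves the same held jobs – which is exactly why someone can walk away from a busy device and collect their printing at a free one.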
A change in one’s working practices will inevitably require a degree of adjustment, particularly when there’s a new step (authentication) in something as routine as photocopying. Delays in the initial configuration and setup have meant the launch has been less smooth than we would have liked; however, observing the way that people approach this new challenge has been rather interesting. Essentially there are two groups of users: staff and students.
As one would expect, there is a range of confidence and capability across the groups, from those requiring a high level of support through to those who just jump straight in and give it a go. What has been fascinating though is the different way adults and younger people approach their first attempt. The confident in both groups seemed to just get stuck in, pressing buttons until they achieved what they wanted. It was at the other end of the spectrum where the differences between the two groups started to emerge. Requests I’ve had from staff ranged from “Is there a set of instructions I can have?” through “Can you show me what to do?” to “Can you help me get started?” Requests I’ve had from students – nil! That’s not to say they always succeeded in what they were doing, but not one asked for assistance up front. Even the least confident approached the MFD, looked at the touchscreen, then made an attempt, even if somewhat hesitantly.
It’s interesting to speculate on why the differences arise and whether they are truly generational. There have always been those who, having unboxed some new appliance, will plug it in, switch it on and get started, whilst others will need to read the instructions long before the appliance has left its packaging. Perhaps it’s related to one’s preferred approach to learning? Are those who need the instruction manual or guided support tending towards an instructivist, procedural approach, where they follow a sequence of provided steps, achieve a successful outcome, then repeat the same steps for continued success? And are the others learning constructively by exploring, observing, evaluating the outcomes, then adjusting their actions accordingly?
But what happens when their job fails to come out the printer?
What does each do next? What are the consequences of undertaking a few cycles of the feedback loop? Will the recipe-book people repeat the same steps in their manual, perhaps more carefully, assuming they’ve made a mistake … or that the MFD isn’t working? Will the explorers try out different options, different settings? Might those in each group, having experienced a failed strategy, revert to the tactics employed by those in the other group?
Who would have thought a new set of printers could have got me thinking about learning? Maybe there’s a research project right there … how do people adapt and learn when faced with new circumstances and what characteristics ought we to be nurturing to help students face those challenges? Perhaps we need to change the printers more regularly 😉
How do we weigh an academic’s time? May 25, 2013. Posted by IaninSheffield in CPD, Musings, Teaching Idea.
Tags: academic, flipped classroom, flipping, learning, lecture, lecturer
A while ago Nick Jackson (@largerama) asked of Twitter ‘What is the most worthwhile use of an academic’s time? Lecture or tutorial?’ The answer’s obvious, right? Well … maybe. Wherever an apparent truism appears, particularly one with which I’d tend to agree immediately, I always try to step back and ask myself what the alternative viewpoint might be and whether I could argue that case. In playing devil’s advocate, I’m seeking to challenge my own understanding. Have I missed something? Was my initial interpretation too shallow?
To the question of the lecture then. As I began to probe in the Twitter exchange Nick captured in his subsequent blog post, is it fair to take the lecture in isolation? It is, after all, one aspect of a more complex environment which includes the academic and the student, the location within which the lecture takes place, the curriculum, the organisation which brings all those together (and the budget within which it must operate), plus the less tangible but perhaps most significant element: the intent or purpose of the interchange. So let’s imagine a university or college which has one hundred mechanical engineering students and wishes them to experience a ‘Stress Analysis’ module of study over a 12-week period. It can spare a lecturer at the rate of 1 hour per week over that time. So 100 students, 12 weeks, 12 hours of lecturer time to cover content X. On the grounds of efficiency or economics, the lecture argument is an easy one to make. If lectures were swapped for tutorials, for example, the group size would need to be smaller and the lecturer’s time spread more thinly, resulting in each student benefiting from perhaps only an hour of lecturer support in the whole module.
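The arithmetic behind that claim is worth making explicit; a few lines suffice (the tutorial group size of eight is my assumption for illustration, not a figure from the original exchange):

```python
students = 100
lecturer_hours = 12  # 1 hour per week over a 12-week module

# Lectures: every student can sit in every session,
# so each student gets the full 12 hours of contact.
contact_per_student_lectures = lecturer_hours

# Tutorials: the same 12 hours are divided between small groups.
group_size = 8                       # assumed
groups = -(-students // group_size)  # ceiling division: 13 groups
contact_per_group = lecturer_hours / groups

print(contact_per_student_lectures, round(contact_per_group, 1))  # → 12 0.9
```

Thirteen groups sharing twelve hours leaves each group with less than an hour across the whole module – the ‘only an hour of lecturer support’ above.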
Nick makes the point in his post that time spent in lectures is to some extent wasted and, as the video above shows, during carelessly constructed, monotonously delivered, poorly crafted lectures student attention inevitably wanders. In addition, Nick highlights the weak pedagogical principles upon which lectures are built, which inevitably lead to a lower quality of learning. Perhaps then the lecture, in which the lecturer is the most active agent in the room, struggles to encourage the ‘deep’ learning1 that academics would surely want to engender in their students? Säljö2 categorised five graded conceptions of learning:
- Learning as a quantitative increase in knowledge. Learning is acquiring information or ‘knowing a lot’
- Learning as memorising. Learning is storing information that can be reproduced.
- Learning as acquiring facts, skills and methods that can be retained and used as necessary.
- Learning as making sense or abstracting meaning. Learning involves relating parts of the subject matter to each other and to the real world.
- Learning as interpreting and understanding reality in a different way. Learning involves comprehending the world by re-interpreting knowledge.
Would it be fair to say that (the majority of) lectures encourage 1 to 3, the ‘surface’ approaches, whereas, say, tutorials lean more towards the deeper approaches found in 4 and 5? With that in mind, does the lecture really offer enough value to the students? Gunderman3, though, contends that lectures aren’t solely about transmitting information; they should:
…show the mind and heart of the lecturer at work, and to engage the minds and hearts of learners.
But realistically, what proportion of lectures actually do that? Is it truly possible for a lecturer to bring their ‘A’ game to every lecture? Perhaps it would be more appropriate to allow lecturers to provide their own defence:
And now I’m even more convinced than ever. It’s easy to criticise without offering an alternative though, so let’s imagine more efficient or more effective possibilities than the traditional lecture, but which make no more demands on lecturer time. Perhaps the most obvious place to start is the flipped classroom model, in which a pre-recorded version of the lecture can be watched by the students, maybe even given by other lecturers. But where’s the gain in that, I hear you say? With a pre-recorded lecture the student has the option to watch it at a time and place to suit them (perhaps more important than ever in these times when a good proportion of students take on a job to help pay their fees), but more importantly at a pace to suit them. The twelve scheduled hours could then be used in any number of ways: tutorial, workshop, Q&A. Maybe students could be asked to ‘front’ these sessions with pecha kucha presentations, TeachMeet- or BarCamp-style sessions – anything where they’ve been required to interpret and transfer what they learned from the recorded lecture. And the academic? S/he decides the most appropriate format for each particular aspect of the curriculum; they act as facilitator, arbiter, coach and mentor. They do what they (should) do best and redirect student learning, helping them see misinterpretations and misunderstandings. Will that be easy? Of course not. For many academics it might require considerable personal and professional development, but with deeper learning and more accomplished students as the intended outcomes, surely it’s worth the investment of time?
The lecture only ever enjoyed moderate success as a mechanism for facilitating student learning, and even then only under particular circumstances; but given the constraints within which it operated, other options were limited. Digital technologies have broken the shackles binding academics … will they now make the most of that freedom?
1 Marton, F. & Säljö, R., 1976. On qualitative differences in learning: I – Outcome and process. British Journal of Educational Psychology.
2 Säljö, R., 1979. Learning in the Learner’s Perspective: I. Some Common-Sense Conceptions. Report No. 76.
3 Gunderman, R., 2013. Is the Lecture Dead? The Atlantic. Available at: http://www.theatlantic.com/health/archive/2013/01/is-the-lecture-dead/272578/ [Accessed May 23, 2013].
Tags: Coursera, CPD, CS101, learning, MOOC
The six weeks are up and I’ve successfully reached the end of CS101 on Coursera, my first MOOC. Although I’m not entirely sure how many students were enrolled on the course, there was clearly a good international spread, with the age spectrum well represented too. (If their submissions were to be believed, the youngest was 11 and the oldest 82.)
Before I reflect on how things worked out, it might be wise to return to my motivations for embarking on this course of study. I was hoping to explore:
- an example of the new learning environments known as MOOCs
- my attitude to learning through this medium
- introductory computer science.
I guess on all three fronts I succeeded, though that’s not to say the experience was entirely fulfilling. I certainly had a good look around the Coursera environment, which offered a clear, well-structured, robust platform with course materials laid out and accessible intuitively (at least for someone who is familiar with online learning environments). Providing the learning materials as short (10–20 min) videos in lecture-style format, supported by course notes, was perfectly acceptable, especially as the videos could be downloaded for offline viewing. I often find streaming an unsatisfying experience due to inevitable buffering, but here I could download the videos and watch them from the comfort of the sofa on the larger screen of my TV … and with a cuppa close to hand. The ‘test area’ coding environment for those parts of the course was a sensible move, but I didn’t find the need to use it much (more later). The assessment exercises were largely trivial, given the need to have them auto-marked; it wouldn’t have been difficult to include a few more multiple-choice questions with an increasing level of demand, perhaps. But then again, the assessments aren’t really there to differentiate one learner from another, nor to provide some element of summative grade; they’re just a mechanism by which learners can check their understanding … though perhaps they didn’t really do that too well either.
To address the potentially missing social, interactive aspects of studying online, a forum environment was provided and, whilst some participants were clearly enthusiastic contributors, I found the majority of threads either a little too trivial or far too long and wandering. Perhaps this was simply because the course content wasn’t demanding enough to leave me with issues I needed to resolve through discussion with others, or maybe in just a six-week course I didn’t feel the imperative to fully commit and begin to forge relationships.
I suspect that choosing computer science was my main mistake, though made with the best of intentions. With the current debate surrounding its reinvigoration in the UK school curriculum, it seemed like an appropriate topic to visit. Unfortunately an introductory course aimed at people with no prior experience of CS inevitably meant that there was insufficient challenge for me. Which is not to say I came away having learned nothing; quite the contrary in fact. The problem was more that I never needed to step away from the content to process it further or more deeply, either because it didn’t challenge me or because I felt no imperative to push myself to take things further. To be fair, I couldn’t really expect any more than that from a six-week introductory course. The consequence, however, was that I never became fully immersed in the course, whether due to my attitude to the subject matter, the course contents or the open nature of it all (i.e. no commitment, either personal or financial).
In the end I suspect my feelings were far less sceptical than Joshua Kim’s and at a similar (though not for the same reasons) level to those of Audrey Watters. Would I do another? Absolutely! In fact, in order to evaluate MOOCs more rigorously, I need to do another; one in which I move beyond my comfort zone and into an area that fully challenges me. Are MOOCs suitable for everyone? Of course not! But that’s not because the technological environment might not suit all (which is indeed true), but because you have to have a determined and committed approach to your learning, recognising that the locus of that learning must come from within. Would I recommend one to someone else? Probably. There’s certainly nothing inherently weak in the principle or the practice, but I would advise them that progress and success will depend largely on their predisposition. Let’s keep a sense of perspective – these are free (arguably!), well-structured, well-resourced courses which provide learning opportunities for anyone with an Internet connection. They have to be worth a shot, surely?
It’s at a Premium. January 4, 2012. Posted by IaninSheffield in Management, research.
Tags: attainment, learning, pupil premium, research, sutton trust, toolkit
The Pupil Premium is one way in which the Government is attempting to address educational inequality.
We know a good education is the key to improving young people’s life chances, to enable them to progress into adulthood with the skills and confidence for success. The Pupil Premium will provide schools with the resources with which to address inequalities in the system and raise the attainment of those pupils from low income families.
I was having a look through the Pupil Premium Toolkit, commissioned by the Sutton Trust, which seeks to guide “teachers and schools on how best to use the Pupil Premium to improve the attainment of disadvantaged pupils.” By reviewing educational research, the toolkit compares different strategies used to improve pupil attainment (Assessment for Learning, One-to-one tutoring etc); exploring impact, cost and the strength of the evidence. You’ll see from either the online version or the downloadable PDF, that the information on each intervention is presented first in a summary, then subsequently in more detailed form.
I got to wondering which approaches might be most cost-effective in terms of greatest gain per £ – not a difficult calculation to do and mere moments of effort to produce a spreadsheet. But then I thought how we might also factor in the strength of the evidence; after all would it be wise to spend money on an approach producing a high attainment gain per £ if the evidence which suggested that approach isn’t quite so robust?
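That weighting takes only moments to sketch. The figures below are placeholders purely to show the calculation – the real numbers live in the toolkit itself – and discounting by evidence strength in this linear way is just one possible scheme:

```python
# name: (attainment gain in months, cost in £ per pupil, evidence score 1-5)
# All figures are illustrative placeholders, not the toolkit's data.
interventions = {
    "Effective feedback":        (8, 170, 4),
    "Meta-cognitive strategies": (8, 170, 4),
    "Peer tutoring":             (6, 250, 4),
    "One-to-one tutoring":       (4, 700, 4),
    "ICT":                       (4, 300, 3),
}

def value(gain, cost, evidence, max_evidence=5):
    """Gain per £100 spent, discounted by how robust the evidence is."""
    return (gain / cost) * 100 * (evidence / max_evidence)

ranked = sorted(interventions.items(),
                key=lambda item: value(*item[1]), reverse=True)
for name, figures in ranked:
    print(f"{name}: {value(*figures):.2f}")
```

With these placeholder numbers, feedback and meta-cognitive strategies come out on top and one-to-one tutoring last – the expensive, modest-gain interventions are penalised just as the bubble chart suggests they should be.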
Which all led me to this bubble chart:
Gain in attainment on the horizontal axis, cost on the vertical, and the width of each bubble indicating the strength of the research which produced the findings. So the sweet spot is the bottom right-hand quadrant, where we get high gains for low (moderate?) cost. And what do we see? Providing effective feedback, meta-cognitive strategies, peer tutoring and homework (?!) all figure prominently. Sadly, ICT only produces moderate gains and at a high cost … but there’s a whole other post that could come from that!
Does that reflect your day-to-day experience though?