
21st Century Learners – Myth or Reality? April 26, 2015

Posted by IaninSheffield in Musings, Teaching Idea.

Earlier this week I was working with a colleague and her Year 6 group (10 year olds), introducing Google Maps – how to create your own customised map and add your own content. The group is shortly to visit Eyam on a field trip and we were exploring an alternative way to synthesise their learning from the trip, which has both a History and a Geography focus. Rather than presenting the findings in a conventional way, a customised map roots them in the geographical context from which they arose. Although familiar with Google Docs, Slides and Sheets, creating a Google map constituted progression in their digital skills. This lesson, then, was about laying the foundational skills needed to work in the new environment, so the aims included creating a blank map, sharing it with their partner so both could edit, locating a specific point and adding a placemarker, editing the placemarker, adding text and an image, and adding a line to represent a route from school to Eyam (then finding a shorter one). An extension task involved exporting the map to Google Earth and ‘flying’ along their route(s). If you’ve never used Google Maps for anything other than searching for a place, then all of the above is likely to be quite new and (other than the notion of sharing) involves a different set of features from those commonly found in other applications. So in addition to teacher-led demonstrations of the tasks they were to undertake, I also produced a set of instructions to follow; a recipe book, if you will. What happened next was quite interesting.
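Before coming to what happened, a brief aside on that Google Earth extension: a customised map exported to Google Earth is essentially a KML file, with placemarks and routes described in XML. The sketch below is purely illustrative rather than the lesson material; the route coordinates are invented and Eyam’s are only approximate, but it shows the sort of file the students’ maps become.

import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

def el(parent, tag, text=None):
    """Create a namespaced child element, optionally with text."""
    node = ET.SubElement(parent, f"{{{KML_NS}}}{tag}")
    if text is not None:
        node.text = text
    return node

kml = ET.Element(f"{{{KML_NS}}}kml")
doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")

# Placemark for Eyam (KML lists coordinates as longitude,latitude,altitude).
place = el(doc, "Placemark")
el(place, "name", "Eyam")
el(place, "description", "Field trip destination")
el(el(place, "Point"), "coordinates", "-1.671,53.284,0")

# An invented two-point line standing in for the route from school to Eyam.
route = el(doc, "Placemark")
el(route, "name", "Route from school")
el(el(route, "LineString"), "coordinates", "-1.470,53.380,0 -1.671,53.284,0")

ET.ElementTree(kml).write("eyam_trip.kml", xml_declaration=True, encoding="utf-8")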

When the class began the activity (working in pairs), few bothered to refer to the instructions I’d provided; most dived straight in, trying different parts of the available interface until they made headway. Those adopting the ‘trial and error’ method made faster progress than those following the instructions, up to the point where they got completely stuck; then they floundered, trying to find the relevant point in the instructions (perhaps I need to rethink the way the instructions are compiled?). Once back on track, they raced ahead once more. They also made more mistakes, but seemed comfortable with that, happy to retry an attempt which had gone awry. Fascinating and delightful to see such resilience.

What intrigued and surprised me, though it probably shouldn’t have, was how different these ten-year-olds were when compared with the teacher groups with whom I often work. If I’d undertaken a similar activity with colleagues, I’m fairly sure (albeit anecdotally) that the proportions of those who would begin with the instructions and those who would open with experimentation would be reversed. Which raises the question: do young people these days approach a new task with more abandon than their older counterparts? Is this evidence for 21st Century Learners being somehow different, i.e. that the digital era into which they were born is affecting their attitude? Or perhaps younger people are simply more experimental and happier to take risks, where time-poor teachers would rather adopt the low-risk strategy in order to ensure successful completion? If the two groups are not fundamentally different and all I’m seeing are age-related, developmental differences, I wonder where the transition from one approach to the other takes place and whether it’s an incremental change, stretched out over time. As ten-year-olds, they’ve little experience of high-stakes testing; perhaps that’s the point at which a trial-and-error approach becomes more of a liability and has to be dropped in favour of the safer, low-risk option? Sadly I don’t have the data to provide answers to these questions, but that one lesson prompted an awful lot of pondering!

Footnote. Two days later I was working with another class when a couple of students came by and said they couldn’t find the Google maps they had created last lesson. I couldn’t immediately leave the class I was supporting to help, but suggested they look in the instructions. They had; without joy. Fifteen minutes later, when I could pop across to their class, they were all back on track, maps open and immersed in their activities. It transpired that my instructions had lapsed owing to the update to the new version of Google Maps. Although initially flummoxed, their ‘Try. Fail. Fail better.’ approach helped them to get up and running independently … and to be able to explain to me how my instructions needed amending! I wonder if … more mature learners would have shown such persistence and adaptability?

In this TED Talk, Tim Harford talks about using a trial and error approach, which others discuss in more detail here.

Thinking about teacher attitudes to technology May 12, 2014

Posted by IaninSheffield in CPD.

If we weren’t able to help our students appreciate their current capabilities, how they might improve and how to set about that, we’d be failing in our duties as teachers. But how do we know our own level of capability, at least with regard to the use of learning technologies? By what yardstick can we measure our own progress? Without that, how can we even begin to see a path forward?

In the quest to find answers to these questions, I’ve come across a whole raft of contenders:

SAMR

creative commons licensed (BY-NC-SA) flickr photo by langwitches: http://flickr.com/photos/langwitches/5687009271

1. SAMR – Proposed by Ruben Puentedura, you can find a helpful set of resources which delve into the topic in more detail here. The model is incredibly useful for reflecting on the role of technologies in activities developed for use with our learners. Its simple four-level scale, divided into the two domains of enhancement and transformation, is accessible, understandable and enables teachers to quickly consider the impact that technology might have on the learning process. However, measurement against the SAMR model needs to be undertaken on an activity-by-activity basis; in one lesson with one group of students you might be undertaking an activity at the Modification level, whilst during the very next lesson with a different group (or even the same one) technology might simply be used at the Substitution level. That’s absolutely fine. Technology isn’t always used to take us to new places; sometimes it simply helps make a task that little bit more manageable. Some people see the levels as a ladder, believing we should aspire to climb the rungs to Transformational enlightenment, so that by recording all the activities we undertake using technology, progress could be measured as the overall level moves towards Redefinition. I don’t subscribe to that. If someone understands how to use technology at the higher levels and does so within their practice at appropriate times, whilst at others uses technology at the Substitution level, then that to me is acceptable. If they’re not in a position to do that, then perhaps remedial action does need to be taken.
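For anyone who does want to keep a simple record of where their technology-supported activities sit (without treating the scale as a ladder to be climbed), the sketch below shows about all it would take; the activities and the levels assigned to them are invented purely for illustration.

from collections import Counter

SAMR_LEVELS = ["Substitution", "Augmentation", "Modification", "Redefinition"]

# An invented log: (activity, level at which technology was used).
activity_log = [
    ("Word-processed write-up instead of a handwritten one", "Substitution"),
    ("Collaborative slides with peer comments", "Augmentation"),
    ("Shared, editable map of the Eyam field trip", "Modification"),
    ("Spelling list typed up for printing", "Substitution"),
]

# Tally how the term's activities spread across the four levels.
spread = Counter(level for _, level in activity_log)
for level in SAMR_LEVELS:
    print(f"{level:<13} {spread.get(level, 0)}")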

Summary Indicators from the Florida Centre for Instructional Technology TIM

2. To get a better overview of how technology is being used across a teacher’s practice, across the curriculum or across a school, the Technology Integration Matrix (TIM) offers itself up. Both that used by Florida and the one in Arizona have the same underpinnings and enable cross-referencing of five characteristics of meaningful technology integration at five different levels. Support for TIMs is extensive (lesson plans and video exemplars) and they offer useful lenses through which to view your own practice or that of others. The five characteristics quite rightly focus on the activities of the students and how they have been enabled or empowered to use technology … but I feel that, as a consequence, there are areas within our own practice which are to some extent neglected.

TPACK

Reproduced by permission of the publisher, © 2012 by tpack.org

3. One powerful lens through which to view the use of technology in learning is the TPACK framework (Technological Pedagogical Content Knowledge) proposed by Mishra & Koehler1. This requires teachers to consider three different components of their practice. Any particular teaching situation or activity involving the use of technology will involve expertise across the three domains and require an appreciation of the roles of the technology, the subject or content, and the pedagogy which enables the learning. Each teacher, with each activity, will encounter a unique context. In some circumstances where their content knowledge is well-grounded, they may wish to use a new technological tool and therefore need to reconsider their pedagogy, yet in another they may be teaching something for the first time and want to explore how to make the most of a tool they’re already adept with. The more often a teacher finds themselves at the heart of the diagram where all three domains intersect, or the degree to which they can see how to quickly navigate there, the more developed their practice is becoming. Powerful though TPACK may be, it is a framework more suited to deep reflection and to devising curricula and lessons which incorporate the use of technology appropriately.

There are plenty of other frameworks cited in Knezek & Arrowood’s2 “Instruments for assessing educator progress in technology integration,” which can be divided into the three areas of attitudes, skill/competency and level of proficiency. Dating back to the turn of the millennium, some aspects of these instruments are now slightly dated, but they could nevertheless be updated.

In the past we’ve asked colleagues to report on their skill levels with technology and subsequently put in place a programme to provide support. More recently we shifted the emphasis of our self-reporting process towards capability, rather than plain skills. Now, however, I’m wondering whether we need to dig a little deeper and explore some of the underlying attitudes which determine teachers’ beliefs towards eLearning and technology use.
Never one to shirk a challenge, I’ve drafted a framework which draws inspiration from SAMR, TIM and, to some extent, CBAM (the Concerns Based Adoption Model, mentioned in Knezek & Arrowood). The matrix suggests teacher attitudes at four possible levels, across ten aspects of technology integration, the idea being that colleagues would choose the statements that best reflect their attitude. This would generate a profile (a radar chart might be useful here), hopefully indicating areas in which they might be open to change. If nothing else, it should provide a starting point for discussion.
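To make the profiling idea concrete, here is a rough sketch of how a set of chosen statements might be turned into a radar chart. The ten aspect labels and the ratings are placeholders, not the criteria in the draft matrix below.

import math
import matplotlib.pyplot as plt

# Placeholder aspect names and self-chosen levels (1-4); the real criteria
# live in the draft matrix.
aspects = [f"Aspect {i + 1}" for i in range(10)]
ratings = [2, 3, 1, 4, 2, 2, 3, 1, 2, 3]

# Spread the aspects evenly around the circle and close the loop.
angles = [2 * math.pi * i / len(aspects) for i in range(len(aspects))]
angles += angles[:1]
values = ratings + ratings[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(aspects)
ax.set_ylim(0, 4)
plt.savefig("attitude_profile.png")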

attitudinal matrix

creative commons licensed (BY-NC-SA) flickr photo by ianguest: http://flickr.com/photos/ianinsheffield/14192259133

The big BUT though is whether these criteria and the statements at each level are valid. What do you think? What might you add, leave out or amend? Feel free to add your observations below, or do please add comments to the draft document.

As Christensen et al. (2001)3 observed

…not every educator is best served by training aimed at some arbitrary level, and that different levels of integration may require different techniques.

Before we decide on a professional development strategy, we clearly need to know the levels.


1Mishra, P., Koehler, M., 2006. Technological pedagogical content knowledge: A framework for teacher knowledge. The Teachers College Record 108, 1017–1054.

2Knezek, G.A., Arrowood, D.R., 2000. Instruments for assessing educator progress in technology integration. Institute for the Integration of Technology into Teaching and Learning, University of North Texas Denton. [online at http://www.iittl.unt.edu/pt3II/book1.htm, last accessed 12/05/2014]

3Christensen, R., Griffin, D., Knezek, G., 2001. Measures of Teacher Stages of Technology Integration and Their Correlates with Student Achievement.

Questing … in- or non- formal learning about ICT? October 2, 2011

Posted by IaninSheffield in CPD, Inspiration, Musings, Teaching Idea, Tools.

Things have moved on somewhat since my previous post. Whilst working on a structure which might deliver some of the elements described in that post, I became aware of the Digital Media and Learning “Badges for Lifelong Learning Competition.” This competition:

is designed to encourage individuals and organizations to create digital tools that support, identify, recognize, measure, and account for new skills, competencies, knowledge, and achievements for 21st century learners wherever and whenever learning takes place.

The first two stages involve people/organisations working in separate strands: one, ‘Content and Programs’, focused largely on the pedagogical aspects, and the second, ‘Design and Tech’, addressing the technical elements of delivering a badge-based credit and achievement system. In the third stage, entrants from the first two strands will be ‘married’ based on their submissions and will then work together towards a final deliverable project proposal.

Assembling an entry for the Stage 1 strand simply meant arranging my planning for our self-study ICT extension activities into a format suitable as a submission.

Some of my hopes are:

  • That some of our students will learn a little more about how ICT can help them in their learning through school and later in life.
  • That students begin to take more responsibility for choosing learning paths.
  • That we are able to develop a system which celebrates their achievements by revealing them to a wider audience.
  • That our system can be further developed by partners with greater experience and skill in the technical aspects of badge creation, management and awarding.
  • That what we produce might be of interest to other organisations, educational or not, who might benefit from those resources.
  • That our library of Quests swells because other individuals/organisations contribute their ideas and inspiration.
  • That some of the Quest participants feel sufficiently enriched to become contributors to the ongoing project.

You can find my submission in the DML Competition or here.


The deadline isn’t until 14th October, so if you have any comments or suggestions, I’d be delighted to hear them.

9 out of 10 cats . . . October 28, 2009

Posted by IaninSheffield in TELIC, Tools.

As I mentioned in an earlier post, we’re starting our second year of TELIC with an examination of learning spaces and what that means for the learner.  By way of introduction we’re analysing a paper – ‘Rethinking the Virtual’ by Nicholas C. Burbules, following an introduction from @GuyMerchant.  People approach this type of exercise in different ways, but we wondered whether some of the visualisation tools might offer a different perspective.  Each of the following accepts free text, then performs some black magic in which some element of visual importance is generated as a result of the frequency of occurrence of a word or phrase.

The popularity of Wordle continues to grow, so that seemed like a reasonable place to start:

Wordle: Rethinking the Virtual

No surprises, given the title of the paper, that ‘virtual’ features prominently, but we can also see other patterns beginning to emerge. ‘Space’ is clearly of major significance here, with ‘experience(s),’ ‘sense,’ ‘people,’ ‘time’ and ‘learning’ all clearly important too.  Given that we’re studying learning spaces, this paper clearly has something to offer, then, and perhaps the other terms imply that the human dimension cannot be ignored.
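It’s worth remembering how little magic is actually involved: beneath a Wordle sits a word-frequency count. A minimal sketch follows; the filename and the tiny stop-word list are placeholders, and real tools do rather more cleaning than this.

import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "that", "it"}

# A plain-text copy of the paper, saved locally (placeholder filename).
with open("rethinking_the_virtual.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

# The ten most frequent non-trivial words: the ones a Wordle draws largest.
freq = Counter(w for w in words if w not in STOP_WORDS)
for word, count in freq.most_common(10):
    print(f"{word:<15} {count}")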

Many Eyes is an online tool which enables visualisation of both numerical and textual data.  As well as a Wordle-style word cloud, Many Eyes provides three additional visualisation techniques:


The Tag Cloud is similar to Wordle in that frequent words from the text feature more prominently in the cloud.  So the words mentioned above are the same ones which stand out again; however, because the words are arranged in alphabetical order, plurals for example (experience/experiences) are more readily seen.  So the word ‘experience,’ occurring more frequently through its plural, may now be considered more significant.  This visualisation also offers additional features through its interactivity – hovering over a word produces a pop-up which provides some examples of phrases within which that word can be found, i.e. some measure of context.

We can also dig down for more detail by making use of the Search facility – from an examination of the main cloud, we can see more words starting with ‘i’ than might usually be anticipated, so we can focus on that area for further analysis.

We can then go one step further and make use of the ‘2 word’ function which produces a cloud based on occurrences of pairs of words:

And at once we see the emerging significance of ‘interest, involvement and imagination’.
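Again, the machinery is modest: the ‘2 word’ cloud rests on counting adjacent pairs of words rather than single ones. A sketch under the same assumptions as before (placeholder filename, deliberately crude tokenisation):

import re
from collections import Counter

with open("rethinking_the_virtual.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

# Count every pair of consecutive words (bigrams).
pairs = Counter(zip(words, words[1:]))
for (first, second), count in pairs.most_common(10):
    print(f"{first} {second}: {count}")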


The Phrase Net produces visualisations based on words linked by a conjunction; some presets are offered, but there is also the facility to provide your own custom phrase.

Having a space as the conjunction between two words produces quite a rich net which shows the words with which ‘virtual’ is closely linked – space(s), environment(s) and learning – and how they in turn are linked with other words.  Interesting that the significant words (the ‘i’s) which emerged from the Tag Cloud don’t carry the same weight here.

The Word Tree allows us to explore beyond simple words and phrases, whilst still drawing significance from frequent words.
Clicking on branches within the tree narrows down the focus and allows us to analyse the context within which important phrases can be found.  From the main tree, ‘virtual space and time’ clearly plays an important role, so we can investigate why this might be by exploring the sentences which both commence with and terminate in that phrase.
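At its heart the Word Tree is a concordance: gather the sentences in which a phrase occurs and look at what leads into and out of it. A rough sketch, under the same placeholder assumptions as the earlier snippets:

import re

PHRASE = "virtual space and time"

with open("rethinking_the_virtual.txt", encoding="utf-8") as f:
    text = f.read()

# Naive sentence split, then print every sentence containing the phrase.
for sentence in re.split(r"(?<=[.!?])\s+", text):
    if PHRASE in sentence.lower():
        print(sentence.strip())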

So what has all this told me about ‘Rethinking the Virtual’?  Well, it’s provided some targets I’d want to explore further: the relativistic link with space-time sounds intriguing and the ‘i’ words are clearly important.  The question though is, have I got more from this than I would from simply reading the paper?  Well no, certainly not, but this is a 9000+ word paper which takes some reading.  What these tools might be able to do, then, is allow significant aspects to emerge more quickly.  A more experienced user would doubtless be able to pull out greater detail and richer information than my tentative exploration has.  But textual analysis in this way is, for me, an infant discipline.  As such I guess it’s no worse than the often rudimentary way numerical data is presented – 9 out of 10 cats . . . !