
Deep Impact? October 18, 2010

Posted by IaninSheffield in Management, Musings, research, Teaching Idea.
Image: Comet McNaught (from chrs_snll on Flickr)

At a recent meeting of ICT leaders from our partner schools, my colleagues from our iPod Touch Project gave a short presentation outlining progress in the first couple of weeks. It seemed to stimulate a fair amount of interest, judging by the number of questions which followed. One colleague asked how we were going to measure the impact of the project and whether any of the apps might help in this respect; he wondered whether performance in some of the maths skill-and-drill type apps might be monitored for improvement. A reasonable question, and one to which I suspect we already know the answer, but it got me thinking more seriously about measuring impact and whether crude ‘test-like’ performance indicators are really what we ought to be using as impact metrics.

A good starting point might be to say what we mean by ‘impact.’ In the major ‘ImpaCT2’ study undertaken in 2002, impact is the result of an intervention intended to achieve an underlying policy goal, in this case clearly focused on the effects on pupil attainment as measured by national tests. In a similarly wide-ranging study undertaken in the Nordic countries, the focus was more on pupil learning, exploring pupil performance, the teaching and learning processes, and communication and co-operation. Arguably less quantifiable than ImpaCT2, the Nordic study also examined the perceptions of pupils, teachers and parents. Given the depth and rigour of these (and other) studies, it is perhaps disappointing that the degree of impact found is lower than we might hope for the level of investment in ICT.

At present the evidence on attainment is somewhat inconsistent. The literature is very positive about some aspects of ICT use, rarely negative, but mainly incomplete or inconsistent. (Condie & Munro, 2007)

There are two ways to look at this: either ICT isn’t solely about raising pupil attainment, or maybe we’re looking for impact in the wrong way.

The ICT Impact Report from European Schoolnet cautions us that

… measuring ICT impact on learning outcomes is only one area of potential impact. Much depends on how ICT is used, and so it is important to consider the factors that prepare the ground for improved learning and consequently lead to better learning outcomes. A second crucial area of ICT impact is therefore the underlying teaching conditions that promote ICT enhanced learning.

Within the four key objectives in the DfES Harnessing Technology Strategy from 2005 we find ‘sharing ideas,’ ‘providing motivating learning experiences,’ ‘building open and accessible systems’ and ‘providing online resources.’ Maybe these are rich areas to mine for examples of impact? In fact we perhaps need to heed Trucano (2005):

It may be that more useful analyses of the impact of ICT can only emerge when the methods used to measure achievement and outcomes are more closely related to the learning activities and processes promoted by the use of ICTs.

So maybe on our list of metrics ought to be:

  • Motivation and engagement (though this has been covered in other studies like Passey et al., 2004)
  • Presenting and representing information in different ways (perhaps involving multimedia)
  • Classroom talk and pupil interaction & collaboration
  • Personalisation of learning, targeted to the needs of each learner
  • Facilitation and enabling of creativity
  • Involvement, inclusion and engagement of parents in the learning process
  • Level of access to ICT (anywhere/anywhen)

Though we need to keep in mind the cautionary note that

ICTs are used most by teachers to fit with traditional pedagogies, but the greatest impact is with teachers who are already experienced edtech integrators.

It’s perhaps wise to return to the original premise on which our project came into being. It was intended to be a proof of concept: an exploratory study seeking to surface the issues with mobile devices in school (technical, pedagogical, behavioural, safety, security) so that we are better placed to plan our next steps.

How do you think we ought to measure impact?

Condie, R. et al., 2007. The impact of ICT in schools: a landscape review, Becta.

DCSF, 2005. Harnessing Technology: Transforming Learning and Children’s Services. Available at: http://publications.education.gov.uk/default.aspx?PageFunction=productdetails&PageMode=publications&ProductId=DFES-1296-2005 [Accessed October 17, 2010].

Harrison, C., 2002. ImpaCT2: The Impact of Information and Communication Technologies on Pupil Learning and Attainment. Available at: http://research.becta.org.uk/index.php?section=rh&rid=13606 [Accessed October 15, 2010].

Passey, D. et al., 2004. The Motivational Effect of ICT on Pupils, DfES Research Report RR523. Available at: http://www.education.gov.uk/research/data/uploadfiles/rr523new.pdf [Accessed October 18, 2010].

Pedersen, S. et al., 2006. E-Learning Nordic 2006: Impact of ICT on Education, Ramboll Management. Available at: http://www.oph.fi/download/47637_eLearning_Nordic_English.pdf [Accessed October 15, 2010].

Trucano, M., 2005. Knowledge Maps: ICTs in Education – What Do We Know about the Effective Uses of Information and Communication Technologies in Education in Developing Countries? Online Submission, p.77.


Comments»

1. Nick (@largerama) - October 18, 2010

Other factors re impact could be to see if the use of devices in a context such as you describe influences other subjects in any way, or increases/improves the use of technology in other subjects as a result (a knock-on effect). Also, there is the consideration of whether such a scheme creates a better atmosphere within the subject, almost a ‘feel good’ factor, which encourages interest and maybe even uptake at A Level, for example.

IaninSheffield - October 19, 2010

I think you’re onto something here Nick. The ‘feelgood’ factor and ‘knock-on effects’ could be of great significance, though I need to think carefully about how we could measure them. Where an ICT intervention is undertaken in a single subject domain, I think monitoring them might prove (relatively) easier than in our project, which already spans the curriculum. However, the feelgood factor is surely important in itself?

2. Nick - October 19, 2010

Measuring anything re education and its effects is difficult, and those that I mentioned are arguably even harder to measure, especially the feel good factor. The knock-on effects could be seen over time, but there are so many other influences that could skew the analysis. The value of ‘feelgood’ is astronomical imo. Having worked in schools where changing the attitude to learning in my subject has been paramount, getting a positive vibe going, for instance by showing students how they can achieve, has such a massive impact.

3. Dom Norrish - October 19, 2010

My view is that the edtech community needs to abandon the idea of anything so positivist as ‘proof’ of impact. A school is far too complex an environment for any one factor to be unquestionably identified as having an independent impact.

Where Becta et al have failed in the past (IMO) is by pursuing the “X% impact on GCSE Design Technology” route which damned ICT’s contribution with faint praise. Good grief – if great technology only helped kids to get better at passing exams, I’d say we should all give up now.

The true impact of technology is to be seen, as you rightly identify, in attitudes to learning, in children’s confidence and their belief that school has something to offer them; immeasurably more valuable than any statistically insignificant effect on the A*-C rate.

4. ianinsheffield - October 19, 2010

You’re absolutely right Dom although I suspect that the powers that be, whether national or local, have pass rates at the forefront of their minds. I guess it all depends on whether our priority is to churn out improved examination performances year on year, or to help prepare students for a rapidly changing world.

And actually it’s all of that of course … and more!

5. daibarnes - December 30, 2010

I marked this for reading a while ago. It is very interesting to think about methods of measurement. There are none that really strike home as being effective.

Made me think of school uniforms. I am sure there will be research about uniform and how it is a good thing in a school for various reasons. Some schools choose not to do it. Private schools, in the main, go over the top with it. It is about making a choice for that institution at that time. Something to focus everyone. What is the impact of insisting top buttons are done up?

What can we compare the measurement of ICT to? Differentiation? Literacy strategy? New homework planners? Streamlined sets? What else is measured in this way?

The truth is that education is many miles away from educational research, unlike the medical profession, where nothing can be done without statistical evidence. For example, there is no evidence to claim that different learning styles (visual etc.) have an impact on learning outcomes, but schools use this method to make learners self-aware and to make teachers vary activities.

No answers I’m afraid – just more questions.

6. ianinsheffield - December 30, 2010

Glad you got round to reading this Dai and really appreciate your thoughts.

I agree with each of your points and you’ve certainly made me think more about the ‘why’ of measuring impact, rather than just the ‘how.’

Starting with educational research versus medical research: you’re probably more aware than I how close medical practitioners are to medical research; I suspect far, far closer than the majority of people at the chalk face. I wonder why that is? I guess we don’t have the same imperative that they do in medicine – if a new initiative doesn’t deliver the outcomes we might have hoped for, nobody dies as a result. Would it be true that, because of this, much medical research is done ‘up front,’ prior to rolling out an intervention at large scale? Educational research often seems retrospective, looking for an effect (impact) after the event. All too often, unfortunately, those findings fail to inform future practice; anyone who has been in the business a while will have many tales of initiatives that cycle back and forth like the seasons (coursework versus terminal exams, for example).

I also wonder about the people who are searching for the elusive ‘impact.’ There are those like me who are striving to provide evidence that using ICT makes a difference … positively. And why are we doing that? Perhaps to try to prove a point to the naysayers, who are also on the lookout for evidence that ICT is failing to make a difference. Let’s not forget how much schools now have to apportion annually in order to provide the ICT resources their students need. Perhaps part of it is an attempt to justify that level of expenditure. I have heard it said that if we took away all the ICT resource that a school demands and spent the equivalent on teachers, wouldn’t that make more of a difference?

D’you know though, I think we’ve gone beyond all that. When you consider how ubiquitous and all-pervading ICT is in our everyday lives, who in their right mind would want an education system which didn’t reflect that … and ensure our students become comfortable, competent and safe with it?

I’ll still want to look for evidence of impact though, however that might be surfaced. Not because I need to justify an intervention to anyone, but because that’s actually just good practice. We should all be reflective about what we do and at times it’s wise to do that with a degree of rigour, both for our own benefit and for our pupils.

