We are drawing to a close with Phase II of the LIDP, so naturally our thoughts are turning to what we would like to do with the data we have accumulated from the first two phases at Huddersfield. What would Phase III look like?
The aims of our original in-house project were:
- To engage non- or low users of the library
- To increase attainment levels
- To work progressively with staff to embed library use
We had always intended to use the data we had analysed to make a direct impact on retention and attainment in the University. Our plan for 2012/13 was to:
- Refine the data collected and assess the impact of targeted help
- Use this information to create a toolkit offering best practice for a given user profile
- e.g. scenario-based guidance
To help us focus on the areas we need to look at going forward, we held an event for Computing and Library Services staff and the Pro-Vice-Chancellor for Teaching and Learning on 9 November.
The first half looked at what we had achieved so far – thanks to Ellen for an excellent presentation looking at some of our previous blogs – and some really cool new findings, which we will be blogging about very soon!
The second half of the session looked at the future and centred on a number of questions:
How can LIDP be utilised to help increase student retention at Huddersfield?
The data show a statistically significant relationship between library usage and retention – although this is clearly correlation rather than cause and effect, it does mean something and adds to our arsenal of ways of flagging up possible retention issues.
We need to get better data at subject level rather than the general view we have at the moment. We also need longitudinal data to see whether usage changes over time – a sudden drop could indicate a potential problem.
Finally, we need to get live data: in both phases of LIDP it took four months of analysis to produce results for a single point in time, whereas live data would add great value to a retention dashboard.
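To illustrate the kind of check a live retention dashboard might run, here is a minimal sketch – our own hypothetical illustration, not project code – that flags a student whose latest week of library activity drops sharply below their recent average (the function name, window size and threshold are all assumptions for the example):

```python
from statistics import mean

def flag_usage_drop(weekly_counts, window=4, threshold=0.5):
    """Flag a student whose latest week of library activity falls below
    `threshold` times their average over the preceding `window` weeks.

    `weekly_counts` is a list of per-week activity counts (e.g. e-resource
    logins), oldest first. Returns True when a sudden drop is detected.
    """
    if len(weekly_counts) < window + 1:
        return False  # not enough history to judge
    baseline = mean(weekly_counts[-(window + 1):-1])
    if baseline == 0:
        return False  # persistent non-use, not a sudden drop
    return weekly_counts[-1] < threshold * baseline

# e-resource logins per week for one hypothetical student:
# steady use, then an abrupt fall in the latest week
history = [9, 11, 10, 8, 2]
```

A live feed running a rule like this each week would surface exactly the "sudden drop" pattern described above, rather than waiting months for a retrospective analysis.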
How do we engage academic staff?
What are the mechanisms to deliver a step change? We can point to clearly evidenced work on what we have already done, but how do we get to the value-added bit at the end? We need to create a series of briefing papers for specific subject areas that present the evidence we have in areas that relate specifically to academic staff. We also need to build relationships and look to move library contact time away from the first term to the point of need. Of course we've known this for a while, but we still get booked up in the first term for inductions; with further engagement, and using the data we have, we can move sessions to suit the student.
Is low usage appropriate in some areas?
We have found that usage is low in areas such as Art and Design and Computing and Engineering. Is this OK? We need to come up with a way to measure this and target the areas of need to find out why. Is low use acceptable, or are the resources inappropriate? Do our results show that we have an issue with certain groups of overseas students and usage, or do they simply work in different ways to European students? Are they actually working in groups, which might account for lower usage? Anecdotal evidence suggests they may be.
What data should we offer in future?
We need to offer two sets of data: one to look at improving retention (the live dashboard approach) and one to look at adding value to the student experience. We need longitudinal data to look at usage over time, and also yearly stats so that we can start to benchmark. We also need to distinguish between years of study so that we can look for patterns.
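As a sketch of the benchmarking by year of study we have in mind – with made-up records, not real LIDP data – usage stats could be grouped per course-and-year cohort:

```python
from collections import defaultdict
from statistics import mean

# (course, year_of_study, item_loans) records -- illustrative data only
records = [
    ("Computing", 1, 3), ("Computing", 1, 5),
    ("Computing", 2, 12), ("History", 1, 20),
    ("History", 2, 25), ("History", 2, 31),
]

def benchmark(records):
    """Average loans per (course, year of study) cohort."""
    groups = defaultdict(list)
    for course, year, loans in records:
        groups[(course, year)].append(loans)
    return {key: mean(vals) for key, vals in groups.items()}
```

Yearly cohort averages like these would let us compare, say, first-year Computing usage against the same cohort in earlier years, and look for patterns across years of study.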
The use of e-resources has worked as a broad indicator, but we always said it was a fairly rough measure; we need to add some evidence-based practice to this, e.g. have interventions made a difference?
Which areas do we prioritise?
Do we look at the NSS scores? Overseas students? Specific subjects, such as Computing and Engineering? We need to develop a strategy moving forward, and we also need to get the live data. This is an area that needs to be developed, possibly using the Wollongong model (Brian Cox and Margie H. Jantti, "Capturing business intelligence required for targeted marketing, demonstrating value, and driving process improvement", Library and Information Science Research, vol. 34, no. 4 (2012): 308–316, doi:10.1016/j.lisr.2012.06.002) and open source software?
Additionally, we need to do more work integrating our MyReading project – academics need to give clearer guidance on reading (essential, recommended, additional, etc.) so we can monitor usage and non-engagement, and follow up some of our findings about the impact of wider reading.