iNetwork awards finalists
The iNetwork Awards shortlists are out, and we're finalists in the iStandUK category for our Quality Safeguarding Data project in partnership with Wigan Council, Social Finance, and more than a dozen LAs from all over the country. Congratulations are well deserved for all involved in this work, which is changing how we think about local authority data challenges.
The text below is as good a summary as any of what we've been doing on this project, and why we think it matters.
Briefly describe the initiative/ project/service; please include your aims and objectives
Work to improve data quality is resource-intensive and not always afforded the high priority it deserves. Low-quality data leads to low-quality analysis, and ultimately to decisions based on inaccurate or incomplete information. Our project used existing standardised data to create economies of scale, helping Local Authorities across the country improve data quality in Children’s Services. Children’s Services data teams everywhere report statutory data returns to the Department for Education every year. The returns follow a predefined standard, and the Department checks them for errors on submission. Every year, this creates a pinch point of data quality work for Local Authorities, who must check their data using the Department’s submission tools to ensure it meets the high standard required for annual returns.
Working with a broad partnership across the Data to Insight Children’s Services Data Network, Wigan Council commissioned a project to achieve a national economy of scale for a common problem. They produced a data quality tool based on the existing standardised statutory dataset for Looked After Children. Any Local Authority can now use this tool to check its data quality at any point in the year, using the same quality checks that the Department for Education will use at year-end. The tool passes three key accessibility tests for Councils: there’s no need to install new software, there’s no need to share any data off-site, and, crucially, there’s no need to generate any new datasets. The existing standards, and accompanying standard outputs which all Councils already produce, are all the tool needs to do its work.
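To give a flavour of what running "the same quality checks the Department for Education will use" looks like in practice, here is a minimal sketch in Python using pandas. The rules and column names shown are illustrative assumptions, not the tool's actual rule set (though `DECOM` and `DEC` are modelled on the episode start/end fields in the statutory return for Looked After Children):

```python
import pandas as pd

def check_missing_child_id(episodes: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical rule: flag rows where the child identifier is missing."""
    return episodes.loc[episodes["CHILD"].isna()]

def check_end_before_start(episodes: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical rule: an episode cannot end before it starts."""
    start = pd.to_datetime(episodes["DECOM"], dayfirst=True)
    end = pd.to_datetime(episodes["DEC"], dayfirst=True)
    return episodes.loc[end.notna() & (end < start)]

def run_checks(episodes: pd.DataFrame) -> dict:
    """Run every rule and collect the failing rows, keyed by rule name."""
    rules = {
        "missing_child_id": check_missing_child_id,
        "end_before_start": check_end_before_start,
    }
    return {name: rule(episodes) for name, rule in rules.items()}
```

Because each rule is just a function over a standard dataset, any LA whose data follows the national standard can run the same checks unchanged, which is exactly the economy of scale described above.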
By working with existing data standards, we can build tools that our colleagues in other councils can adopt effortlessly, meaning that we can justify the necessary expense to build a top-class solution. Ultimately, this helps us improve our services for the most vulnerable children in our care.
The outcome of such innovation is improved outcomes for children, young people, and their families. Local analysts are getting cleaner data for less time input, giving them more time to support leaders in high quality analysis and ultimately make better informed decisions.
What are the key achievements?
The standard data validation tool for Children Looked After was used 137 times by 42 local authorities in just its first month of deployment. It has since seen continued use by LA colleagues around the country at a rate of approximately 20 uses per week through its first statutory reporting year. This far exceeds the value proposition of the alternative, which would have been to build a bespoke tool for one LA to use a maximum of 12 times annually.
Colleagues also provided exceptional feedback on the value the tool added to their local data quality work (quotes are from 5 different LAs):
“You are sharing gold dust.”
“Excellent piece of work. Our LA will definitely use the tool.”
“The tool helped identify errors that the DfE’s 903 site would produce this year, particularly OC2 errors and SDQ scores – we found it very useful from that side of things. It also helped us identify errors the DfE had made in its own validation rules, meaning we could report these and get them corrected!”
“Our recent Ofsted inspection really impacted on the resource available to complete end-of-year returns, so there was a clear benefit to having done the data quality work during the year instead.”
“The tool does things even the DfE site doesn’t, like showing all errors at once rather than hiding ‘stage 2’ errors until later in the process. We find it fantastic.”
All this work led not only to a more impactful use of resources in delivering the tool, and to time saved by the LAs using it, leaving them with better quality data to work with. It also generated two other crucial benefits. First, it opened up a whole new route for collaborative development in the sector via the Data to Insight partnership. The project worked out a method for hosting Python data analysis code in a web browser, reading data which remains on the user’s local computer or server and never leaves it. This means we can now use advanced analysis in Python without installing software or sharing data off-site. We are now leveraging that method in two further projects: one to share co-developed analysis tools (including a Children in Need version of this data validation tool) and one to perform scenario modelling and forecasting against placement data for children looked after, helping local authorities better understand their future costs and placement availability, and better meet the needs of their most vulnerable children.
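The key design idea behind the browser-hosted method is that the analysis code only ever receives the file contents in memory, the way a browser file picker hands them over, so no upload or network call is needed. The sketch below illustrates that pattern (a simplified, hypothetical illustration, not the project's actual code; in the real tool the Python runtime itself runs inside the browser):

```python
import io
import json
import pandas as pd

def validate_local_file(file_bytes: bytes) -> str:
    """Validate a CSV supplied as in-memory bytes (as a browser file picker
    would provide) and return a JSON error summary.

    Note there is no network access and no file written anywhere: the
    user's data never leaves their machine.
    """
    df = pd.read_csv(io.BytesIO(file_bytes))
    errors = []
    # Hypothetical rule: every row needs a child identifier.
    for idx in df.index[df["CHILD"].isna()]:
        errors.append({"row": int(idx), "rule": "missing_child_id"})
    return json.dumps({"rows_checked": len(df), "errors": errors})
```

Because the function's only input is bytes and its only output is a summary string, the same code can be deployed once and run by any LA against its own local files, which is what removes the data-sharing barrier described above.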
Second, we delivered the project in an unusual way, using our technical expertise to work alongside novice programmers in LAs so that they would do most of the coding work, learning to write Python alongside skilled guides while working on data which was familiar to them. This work involved around 20 colleagues who otherwise had few options for exploring Python or other advanced analysis approaches in their local environments. It also produced, and continues to produce, flagship Python projects which colleagues can point to when trying to evidence the need for new software in their local environments. This is a really common problem in local authorities: how do you convince key stakeholders that it’s worth investing in a new, untested technology? Through this project, we created a deliverable which obviously could not have been done effectively in simpler technologies, and helped analysts develop enough domain knowledge to negotiate with IT and service stakeholders about their requirements for new software approaches.
We’ve been happy to continue this work into a further funded phase of the project, which will help more analysts develop these skills and extend them beyond data validation into data visualisation. Our latest iteration of the tool provides ways to write code in-browser, then deploy and share it across LAs, so that LA colleagues can point it at their local data without data-sharing concerns and produce exciting data visualisations. Throughout all of this, the golden thread is maintained: using public money effectively to improve services for vulnerable people, by minimising development expense while maximising both the skills and expertise we develop in colleagues and the insight they can get from their local data.
How Innovative is your initiative?
In terms of children’s safeguarding data validation alone, this is the first tool that lets local authorities do this work outside of the DfE’s standard once-a-year web portal. The impact of being able to do this year-round is noted above. More broadly, our approach demonstrates a new way for LAs to collaborate wherever standard datasets exist. Previously, LAs undertaking work like this had to start from scratch and build their own solutions, if they could afford to do so at all, despite these common data standards having existed for decades. With this project we pooled resources to build something better than any one LA could have produced alone.
And in technical terms, the data-secure, browser-based Python workflow we’ve implemented is, as far as we know, new to public sector digital and data work. Combined with our use of GitHub Codespaces for coding, it offers a way for local authorities to work with Python and other advanced programming languages, develop new analysis tools, deploy them for sharing with any LA in the country, and then use them without installing any new software and without sharing any data beyond their local network. The potential impact of this solution is huge, and we’re already taking it forward with our next round of projects.
Crucially, we are also helping to support a community which can take this work forward. This isn’t a one-off. Our code-along approach to generating skills and expertise in the sector means that the legacy of this work isn’t just better data tools, it’s better-equipped data professionals. We’ve developed a cost-effective workflow for introducing LAs to new analytical approaches without all the usual barriers, and we’re now bringing our second cohort of learners through that programme. Along the way, the work they’re doing is building data tools which LAs all over the country will then use to enhance their data work.
What are the key learning points?
Where we’re all working to produce standard data outputs for government data returns or other purposes, we can get more value out of that by working together on shared analysis of those datasets. This is one of the key principles underpinning the success of Data to Insight as a network of children’s services data professionals, and we think it’s increasingly applicable to domains beyond children’s social care – we’re keen to help share that learning. We know colleagues in Adults Services, Special Educational Needs and Disabilities, and elsewhere, are beginning to work with more standardised person-level datasets, of the type we’ve been using for some years in children’s social care. We think now is the ideal time to look at how we share our experience of this kind of work to help other areas benefit.
We already know we can scale this kind of work in children’s services – Data to Insight shows how well we work together to maintain Excel analysis tools for inspection preparation, benchmarking and regular performance analysis – but this project shows a new route to doing that work with more advanced analysis and visualisation in Python, without any additional sharing or licensing overheads. Our learning here is as much about what doesn’t work as what does – we worked through numerous approaches both to collaborative coding and to data tool deployment before we arrived at a method which works for the LAs who most need it (usually those least equipped to take on the burden of approving new software or data approaches locally). There is great scope for us to support further collaboration in this way, and save time nationally on the boring, time-consuming bits of data work by tackling them together, so that we can dedicate our local expertise to the key purpose of understanding local context and producing actionable insights.