#LegacySystems: Why ‘tech for good’ is not good enough, Part 1 — Tech is not inherently fair
We don’t just need ‘tech for good’. We need good tech.
With Great Power Comes Great Responsibility
People often refer to our data age as a fourth ‘industrial revolution’ (4IR); our lives are changing at a record pace, and life without digital and data is now unimaginable.
It is almost impossible to live, work, socialise, travel, even eat and drink without interacting with some form of ‘4IR’ technology, Big Data chief among them. The challenge we now face is mitigating the negative consequences of our digital ‘boom’. As with the Second Industrial Revolution (the one which ushered in unhealthy Victorian-era working conditions, falling urban living standards, and catastrophic pollution), the costs of ‘progress’ are often a heavy price to pay, and there are many risks in mismanaging rapid change. Crucially, these risks are borne unevenly across the world, with some people far more exposed to the ‘collateral damage’ of progress than others. Thankfully, we are in a position to learn from history, and do better this time.
What is Tech for Good?
‘Tech for Good’ is the concept of technologists doing good in the world by building tech products which improve things, and ‘AI for good’ falls under the same banner. Think apps for planting trees (Treeapp), apps preventing food waste (Olio, Too Good To Go), AI tools for calculating your carbon emissions (Microsoft Emissions Impact Dashboard): all of these are amazing initiatives and make a positive difference in the world. Topical wow factors aside, any tech product which aims to improve people’s lives (social care programmes, design tooling, online communities…) can be considered tech for good too, as that is fundamentally its mission.
However, there is more to our tech’s impact than its final aim and outcome. Driving a car helps you get around, but it also emits fumes. Or, to give a more extreme example, while the military advantages of glow-in-the-dark paint on clocks and dials supported the Allied war effort, the price of that innovation was paid by the now-infamous Radium Girls. Every innovation has its price.
Tech we can truly consider ‘good’ is not just about what our tech produces; the tech itself is a product, with an impact of its own. As the 4IR develops, we are in a position to build our tech better, and sustainably, in a way that also makes the world a better place.
There is more to our tech than its endgame
We are all now familiar with the phrase ‘tech for good’. Using technology to solve problems is an admirable goal. However, it is no longer just about why we use or build technology; we must also consider how it is used and built. Digital has become so baked into the fabric of our everyday lives that we need to address the damage it does simply by being used.
While innovation and technical progress advance our capabilities every day, we are no longer blind to the impact of how we use our resources. As such, we are in a position to approach our innovation sustainably, and this is imperative for the wellbeing of our planet, economy and global community.
So, unfortunately, ‘tech for good’ is not enough; so much damage can be done by irresponsibly using tech as a means to an end, regardless of the consequences of our approach. We need to re-examine our wanton use of data, energy and resources, and our ignorance of technology’s societal impact. Otherwise, we will forever be repairing the harm done by the very tools we built to help.
Data is the ‘new oil’ of 4IR…
Data is the key resource of the 4IR: its ‘new oil’. And, as with all resources, its consumption and acquisition come with a power dynamic attached, empowering some and disenfranchising others. A very topical discussion, and one key to the advancement of the 4IR, concerns the ethical implications of Big Data, including the use and development of Artificial Intelligence (AI). This is because Big Data has the potential to inform and influence our lives unfairly, and indeed already does so.
Big Data is everywhere; it is impossible to imagine our lives without digital platforms’ data and algorithms. If you have ever used Google or browsed Amazon and selected one of the top few results, that decision was heavily influenced by the way those companies use data. Even if it was not a ‘sponsored’ result, or the very first option presented to you, the results you were shown, and the order in which they appeared, were prioritised according to the company’s data on you. Our digital lives depend increasingly on Big Data, and our ‘real’ lives depend increasingly on our digital lives; so companies with the budgets to use the best tech become ever more entrenched in our lives, to the detriment of smaller businesses, less mainstream ideas, minority communities, and the general richness of our lived experience and all the world has to offer.
This extends to almost everything we do where our digital ‘footprint’ and data are relevant: whenever we interact with technology, we leave a trace of our behaviour, which informs the technology we will interact with tomorrow, or even in a few minutes’ time. Data is fully enmeshed in our lives, and its influence compounds with every interaction, as the sketch below illustrates.
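To make that compounding concrete, here is a minimal, entirely invented sketch of a ‘rich get richer’ ranking loop; it is not any real platform’s algorithm, and the item names and numbers are made up for illustration. Each interaction nudges an item’s score, so whatever the system favours today it favours even more tomorrow.

```python
import random

# Two hypothetical items start with identical ranking scores.
scores = {"mainstream option": 1.0, "niche option": 1.0}

def top_result(scores):
    """Show users the highest-scoring item first."""
    return max(scores, key=scores.get)

for _ in range(1_000):
    shown_first = top_result(scores)
    # Most users click whatever is shown first; a few dig deeper.
    clicked = shown_first if random.random() < 0.9 else min(scores, key=scores.get)
    scores[clicked] += 0.1  # today's click feeds tomorrow's ranking

print(scores)
# Roughly {'mainstream option': 91.0, 'niche option': 11.0} on a typical
# run: whichever item the system favoured early on ends up dominating.
```

The loop never judges which option is ‘better’; the head start alone decides the outcome, which is exactly how early data advantages become entrenched.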
… but Big Data is not fair.
While ‘data being everywhere’ is not inherently bad, our challenge is that data is not fair. It is almost impossible to find (or even build) an entirely ‘fair’ and unbiased data project.
The problem is not just that data has influence; the data, and the algorithms built on it, can be biased too. Data can be unrepresentative, or it can be used unfairly, or simply have unintended unfair consequences. These are the two kinds of AI bias we should be wary of: bias baked into the data itself, and bias in how that data is applied. Both can lead to inaccurate assumptions, meaning the influence can be damaging in the extreme. We need to eradicate the belief that STEM and data are objective; they are just as informed by the subjectivity of their creators and contexts as anything else we create, direct and influence.
Examples of data and modern tech being unfair include sexist smart speakers and biased AI image generation. But it gets much more serious: you may have heard of AI-enabled cars failing to recognise Black pedestrians as people who should not be run over, because the team building the software had not trained its image recognition on sufficiently diverse data. What was intended as a piece of human progress, enabling people to drive more reliably and more easily, actually enabled some dangerously discriminatory behaviour. Similarly, there are examples of software intended to ease the pressure on medical staff by triaging patients working with biased data, and thus influencing triage unfairly (in this case, the model was built on historical data about the support patients received, in which white patients had historically been favoured; the trend thus continued). As Madison Tevlin’s powerful ‘Assume That I Can’ campaign against Down syndrome assumptions demonstrated, the assumptions behind our decisions become reality, and this can serve people unfairly, with life-altering, or even life-endangering, consequences.
Over-reliance on data can result in poor, unfair decisions. Imagine entrusting a decision to a child, when all it knows is the information you have given it in its short life so far: critical human experience, qualitative insight and context are overlooked. The result could be decisions that ignore the complexity of real-world situations, especially where data is biased, insufficient or contaminated. Now imagine the scenario again, but this time a stranger is raising the child; its learnt knowledge, behaviour and priorities are now all informed by that stranger’s own beliefs and experiences. Again, you cannot trust that nothing pertinent has been overlooked or miscommunicated. Big Data faces the same challenges, as the sketch below shows.
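To see how a model ‘raised’ on skewed history simply replays that history, here is a minimal, hypothetical sketch. The groups, the data, and the naive ‘training’ function are all invented for illustration; they mimic the biased-triage example above, where one group historically received priority care more often at the same level of clinical need.

```python
# All data below is invented for illustration only. It encodes a past
# pattern: Group A received priority care more often than Group B,
# even at identical levels of clinical need.
triage_history = [
    # (group, clinical_need 0-10, received_priority_care)
    ("A", 7, True), ("A", 6, True), ("A", 5, True), ("A", 4, False),
    ("B", 7, False), ("B", 6, True), ("B", 5, False), ("B", 4, False),
]

def train_naive_model(history):
    """'Learn' each group's past rate of priority care. This is what a
    carelessly built model does with biased historical outcomes."""
    counts = {}
    for group, _need, prioritised in history:
        seen, approved = counts.get(group, (0, 0))
        counts[group] = (seen + 1, approved + int(prioritised))
    return {g: approved / seen for g, (seen, approved) in counts.items()}

model = train_naive_model(triage_history)

# Two new patients present with identical clinical need...
for group in ("A", "B"):
    print(f"Group {group}, need 7 -> priority score {model[group]:.2f}")
# Group A, need 7 -> priority score 0.75
# Group B, need 7 -> priority score 0.25
# The model never weighed clinical need at all; it replays history.
```

The fix is not simply more of the same data; it is interrogating where the data came from, what it omits, and whose outcomes it encodes.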
Regulation is needed faster than it is coming
You might expect the response to regulating the data economy to be as robust as today’s regulation of the banking industry, in order to make Big Data ‘fair’, considering the many risks and issues associated with the 4IR and the lessons we can already learn. Alas, we are on the back foot when it comes to regulation: there is no global agreement on data protection, never mind on Big Data development and usage, and the EU has only just adopted its AI Act, this month.
The vulnerabilities around transparency, reporting, ownership, localisation of data law, and data storage are thus all open to malpractice, manipulation and neglect. As we play catch-up, the fate of your data is essentially at the mercy of the companies who gather and hold it. There is no global consensus on best practice for Big Data or AI, and everywhere the key players are, as with all shiny new toys, focusing on what can be done rather than bottoming out how we should go about it. This leaves many risks inconsistently acknowledged and poorly managed, while every opportunity for growth, a cool new Proof of Concept, and general buzzy excitement is chased down and pursued. When it comes to responsible data practices, we are currently incredibly dependent on individual developers’ skill and moral duty, and that of those in their ‘bubble’.
The problem with playing catch-up with unpredictable new technologies and businesses is that we are looking backwards, at what is already being done, and technology is evolving faster than our guardrails can keep up.
So… Tech is not fair. Is it all down to Data?
These challenges are not unique to Big Data; bias in solution and product design has led to other historical problems which then required regulation, repair and mitigation (e.g. fossil fuel consumption, aerosol production, gender bias in seat-belt design, contraceptive development…). However, the scale of Big Data’s rise means that almost every piece of new tech is now, by virtue of the data involved in its creation and use, subject to the bias, mismanagement and consequences of the myriad ways AI is built today.
Tech is fundamentally a product of those who built it and of what it is made; because humans operate subjectively, tech is inherently as unfair as the world which produced it.
In order to ensure the 4IR’s technological advancements do more good than harm, and do a fair amount of good across the board, we need to innovate in governance, foster cross-sector collaboration, and invest in building worldwide data literacy. We need safeguards, and education, and that is going to be a big theme for this series. Otherwise, even ‘tech for good’ will create more problems to solve down the line.
Follow along for Part 2 — Tech Exacerbates Inequality
Reading List:
- Andrew Burgess — That Space Cadet Glow
- Caroline Criado Perez — Invisible Women
- Ulises A. Mejias & Nick Couldry — Data Grab: The New Colonialism of Big Tech and How to Fight Back
- Dr Annabel Sowemimo — Divided: Racism, Medicine, and Why We Need to Decolonise Healthcare
- IBM — AI Bias explainer