UnBias Research Project

April 4, 2020

Proboscis was a partner in the UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy project, led by the Horizon Digital Economy Institute at the University of Nottingham with the Human Centred Computing research group in the Department of Computer Science at the University of Oxford and the School of Informatics at the University of Edinburgh.

The project looked at the user experience of algorithm-driven internet services and the process of algorithm design. A large part of this work was user group studies to understand the concerns and perspectives of citizens. UnBias aimed to provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ co-produced with young people and other stakeholders, including educational materials and resources to support young people’s understanding of online environments and to raise awareness among online providers about the concerns and rights of young internet users. The project mattered to young people, and to society as a whole, in helping to ensure that trust and transparency are not missing from the internet.

Proboscis led the co-design of the UnBias Fairness Toolkit – which was launched at the Victoria & Albert Museum’s Digital Design Weekend in September 2018.

Team: Giles Lane & Alice Angus (Proboscis) with Alex Murdoch.

Partners:

Funded by EPSRC (EP/N02785X/1) under the Trust, Identity, Privacy and Security (TIPS) call.

Begun 2016 | Completed 2018

UnBias: Fairness in Pervasive Environments

December 18, 2018

Last week I ran a workshop at the TIPSbyDesign Symposium hosted by Design Informatics at the University of Edinburgh. It was the second symposium of the PACTMAN project, which aims to build a community of UK TIPS (trust, identity, privacy and security) researchers. Five workshops were run over the two days, as well as two keynotes: one by Georgina Bourke of If on their collaboration with LSE Data and Society, “Understanding Automated Decisions”, and one by Prof Paul Coulton on “More-than-human centred design”.

The organiser, Bettina Nissen, invited me to devise a workshop addressing the problem of designing for fairness in ‘pervasive environments’ – i.e. spaces where technology is present and capturing data, but where we might not have given explicit permission for our data to be captured. Bettina was also keen to see and experience the UnBias Fairness Toolkit, so I devised a workshop that used its tools to frame a problem space and explore its implications; to define key concerns and values; and to develop some principles that could guide future design.

We began by imagining some actual ‘pervasive environments’ and chose three (airports, shopping centres and taxis) to explore in more depth. The 20 participants divided into three groups, each choosing one type of environment to explore – identifying the various ‘actors’ (those installing or imposing technology within the environment and/or capturing data from it) and those being acted upon (i.e. those having data about themselves, their behaviours and, potentially, their interactions with the devices captured). To help with this we used the Data cards from the UnBias Awareness deck, and to consider the consequences and impacts (potential benefits and harms) we used both the Factors and Examples cards. We also used the Rights cards to assess how the rights and laws protecting individuals would come into play in such spaces.

The TrustScape worksheets were used to identify and communicate a key concern to be shared with the other groups.

After a break, we reconvened and each group passed their TrustScapes to another. We then used the MetaMap worksheets to respond to the TrustScapes, also using the Values cards to help guide the responses.

Finally, we discussed the outcomes of the exercises and used them to define 6 principles for designing ‘fair’ pervasive environments:

  • Allowing participants to opt out without missing out
  • Exposing the role and relationship to regulators for all actors and participants
  • Understanding the motivations of stakeholders who define and control such environments
  • Providing space for negotiating alternatives to standard Terms and Conditions
  • Providing transparency with regard to the bigger picture laws and rights governing public spaces and behaviours in them
  • Providing visibility of how power operates and what the imbalances are

The workshop was an intense process over almost three hours, and I would like to thank all the participants for their efforts and contributions in making it such a valuable experience.

UnBias

December 8, 2018

The UnBias Fairness Toolkit is a critical and civic thinking tool for exploring how decisions are made by algorithms and the impact that they have on our lives.

Download the FREE UnBias Fairness Toolkit (Zip 18Mb)
Read the Handbook Online
Read the Facilitator Booklet Online

A practical companion and extension set for exploring ethics and governance in AI and automated decision making systems is now available:

Download the FREE AI For Decision Makers Toolkit (Zip 11Mb)
AI4DM Worksheet only (PDF 400Kb)
Read the Handbook Online
Watch training & info animations on our YouTube Playlist
Download Flyer (A4 PDF 80Kb)

*** Buy a set from our online store ***

 * * *

Print-on-demand 

  • Buy the UnBias Awareness Cards (direct from manufacturer)

 * * *

Research Project 2016-18

Proboscis was a partner in the UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy project – led by the Horizon Institute at Nottingham University with the Universities of Oxford & Edinburgh and funded by EPSRC (EP/N02785X/1) under the Trust, Identity, Privacy and Security (TIPS) programme.

Proboscis led the co-design of the UnBias Fairness Toolkit – which was launched at the Victoria & Albert Museum’s Digital Design Weekend in September 2018.

UnBias Showcase video

October 29, 2018

A video with clips and brief interviews from the UnBias Showcase event on 1st October 2018:

See my clip at 5:05.

Illustrating for algorithmic bias

September 27, 2018

As part of the UnBias project I was asked to create illustrations for the Fairness Toolkit’s TrustScape and Awareness Cards. The toolkit is designed to raise awareness and create dialogue about algorithms, trust, bias and fairness. My involvement in the project started with a series of quick sketches for stickers to be used with the TrustScape. The sketches were made in response to the results of workshops with young people, who identified issues, themes and difficulties in the networked world, and described a wide range of biases in algorithmic decisions and how they impact on people’s lives.

 

For the UnBias Awareness Cards the brief was to create a design for each of the eight suits: Rights, Data, Factors, Values, Process, Exercise, Example and Glossary. The fronts of the cards contain examples, activities, scenarios and information about algorithmic bias and the ways prejudiced behaviours can emerge in systems. The focus of my illustrations was on how algorithmic decisions could affect people and communities: how do we know decisions are being made fairly and are not threatening rights? How do we know decisions are not being based on gender or race? How do we know whether we are in a social media bubble, what is real or fake, and what to trust?

At the same time I also wanted the illustrations to celebrate some of the pioneering developments in computing, often made by people who wanted to enable others, and to reference the history of communication and mass-communication technologies, computation devices and predicting machines.

It was important for each card to be unique but for the common themes to flow through all of them. Across the cards you will find patterns and references to computation devices and processes: QR codes, punch cards, network diagrams, server arrays, excerpts of code for sorting algorithms, circuit board diagrams, flowcharts, early devices like the Difference Engine and Tide Predicting Machine No. 2, the Mac Classic, and the handheld devices and social media apps we use today. Since algorithms work behind the scenes of the web to filter and sort data, several cards feature machines used for measuring, weighing, sorting, ranking, dividing and filtering.

The main text styles are inspired by typefaces that have a relationship to the history of computing. ‘Factors’ is based on the early Selectric typeface for IBM’s Selectric electric typewriter, which went on to become one of the first machines to provide word-processing capability. ‘Exercise’ and ‘Example’ were inspired by the typefaces of early forms of electronic communication: telegrams, teletext and ticker tape. The lettering of ‘Data’, ‘Values’, ‘Rights’, ‘Process’ and ‘Glossary’ was inspired by fonts I had seen on early computation devices, like Pascal’s Typewriter, Babbage’s Difference Engine, and Kelvin’s and Ferrel’s Tide Predicting Machines, and by typefaces used on mass-produced adverts and posters in the Industrial Revolution.

The edges of the main title scrolls are decorated with mathematical motifs such as > <, ( ), X, etc., and the outer borders are decorated with binary. One of the simplest ways of visualising an algorithm is with a flowchart, and the centre shape of each card is inspired by the frames used in flowcharts to represent different stages of a process: ‘stop/start’, ‘database’, ‘processing’, ‘decision’, ‘repetition’, ‘connector’.

UnBias Awareness Cards – Glossary Suit Illustration

Glossary is a bit different from the other suits: there is only one Glossary card, and it holds a definition of the meaning of ‘ALGORITHM’. The images on the back reference various storage and processing devices: reel-to-reel tape, a server array, a Mac Classic, an early word processor, a tablet, ticker tape, punch cards, FORTRAN cards, a blackboard and an abacus.

The card also celebrates some pioneers in mathematics. The algorithm on the computer screen and on the blackboard is Euclid’s greatest common divisor (GCD) algorithm; dating back to Ancient Greece, it is one of the oldest algorithms still in use.
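For readers who want to see what this looks like in practice, here is a minimal sketch of Euclid’s algorithm in Python – an illustration of my own for this post, not part of the toolkit or the card artwork:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
    when the remainder reaches zero, the other value is the greatest
    common divisor."""
    while b != 0:
        a, b = b, a % b
    return abs(a)

# Example: the GCD of 48 and 36 is 12
print(gcd(48, 36))
```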

The writing around the scroll border consists of excerpts from Ada Lovelace‘s pioneering algorithm to calculate Bernoulli numbers. Written in the early 1840s, it is considered by some to be the first computer programme. Lovelace was an English mathematician, thought to be the first computer programmer, and the work these excerpts come from is one of the most important documents in the history of computing.

Standing at the chalkboard is Dorothy Vaughan, a leading mathematician and early programmer who worked at NASA and its predecessor from the 1940s to the 1970s. Working in a time of racial segregation, she led the West Area Computing team. She was the first African American supervisor there and one of very few women at that level, but was not officially acknowledged, or paid, as such for several years. She was visionary in her realisation that machines would take over much of the work of the human computers, and taught herself FORTRAN and other languages, which she then taught to the other women, to be ready for the change. Her work fed into many areas of research at the Langley Laboratory, and she paved the way for a more diverse workforce and leadership at NASA today.

Grace Hopper was a groundbreaking programmer who, in the 1950s and 60s, pioneered machine-independent programming languages and invented one of the first compiler tools, translating English words into the machine code that computers understood. An American computer scientist, she realised that people would find it easier to use computers if they could programme in English words and have those translated into machine code. She created FLOW-MATIC, the first English-like programming language, and was instrumental in the development of COBOL, which is still widely used today. She did much to increase understanding of computer communications, and went on to encourage more women to enter the field and people to experiment and take chances in computing.

A raven sits on the blackboard, watching, because corvids (ravens, crows, rooks, etc.) are renowned for their problem-solving skills (the Crow Search Algorithm (CSA) is based on the intelligent behaviour of crows).

UnBias Awareness Cards – Data Suit Illustration

UnBias Toolkit Workshops at V&A Digital Design Weekend

September 12, 2018

I will be running four workshops with Alex Murdoch exploring the UnBias Fairness Toolkit at the V&A’s Digital Design Weekend on Saturday 22nd and Sunday 23rd September. Each workshop is intended for different audiences and contexts in which the toolkit could be used.

UnBias Fairness Toolkit Educators Workshop
Seminar Room 1, Sackler Centre for arts education
Saturday 22, 11.30-13.30
Algorithms, bias, trust and fairness: how do you engage young people in understanding and discussing these issues? How do you stimulate critical thinking skills to analyse decision-making in online and automated systems? Explore practical ideas for using the UnBias Fairness Toolkit with young people to frame conversations about how we want our future internet to be fair and free for all.

UnBias Fairness Toolkit Industry Stakeholders Workshop
Seminar Room 1, Sackler Centre for arts education
Saturday 22, 14.30-16.30
The UnBias project is initiating a “public civic dialogue” on trust, fairness and bias in algorithmic systems. This session is for people in the tech industry, activists, researchers, policymakers and regulators to explore how the Fairness Toolkit can inform them about young people’s and others’ perceptions of these issues, and how it can facilitate their responses as contributions to the dialogue.

DESIGN TAKEOVER ON EXHIBITION ROAD
Sunday 23, 10.00-17.00
Celebrate ten years of London Design Festival at the V&A with a special event on Exhibition Road. Bringing together events by the Brompton Design District, Imperial College, the Natural History Museum, the Science Museum and the V&A, this fun-filled day of design, workshops and talks will offer something for everyone, and a unique way into the many marvels of Albertopolis.

UnBias Fairness Toolkit Workshops
Young people (12-22 yrs) 12.00-13.30
Open Sessions 15.30-17.00
What is algorithmic bias and how does it affect you? How far do you trust the apps and services you use in your daily life with your data and privacy? How can we judge when an automated decision is fair or not? Take part in group activities exploring these questions using the UnBias Fairness Toolkit to stimulate and inspire your own investigations.

Download the V&A DDW Brochure

Colleagues from Oxford University and Horizon Digital Economy Institute will also be running UnBias activities as part of the event:

UnBias
The Raphael Cartoons, Room 48a
Drop-in from 12.00-16.00
How do you feel about fake news, filter bubbles, unfair or discriminatory search results and other types of online bias? How are decisions made online? What types of personal data do you share with online companies and services? Do you trust them? Explore these through a range of activities, from Being the Algorithm to Creating a Data Garden, and from Public Voting to making a TrustScape of how you feel about these issues. Suitable for families.

UnBias: Our Future Internet video

May 21, 2018

UnBias Fairness Toolkit Preview

March 13, 2018

Here is the presentation from a workshop held in London yesterday, at which I previewed the Fairness Toolkit whose development I have been leading for the UnBias project. It still requires further testing and refining, so feedback and comments are most welcome:

UnBias Fairness Toolkit Workshop from Giles Lane