UnBias AI4DM Sets available now

November 23, 2020

If you missed our recent crowdfunding campaign, we now have a limited number of first edition sets available to buy. Purchase copies direct here, or contact us [sales at proboscis dot org dot uk] if you wish to order multiple sets.

Proboscis is also offering a 1-2-1 Facilitation Training package (2 × 1.5-hour video call sessions plus a Personalised Facilitator’s Guide) and a Bespoke Workshop Planning & Facilitation service using the toolkit for your organisation.
Please contact us [sales at proboscis dot org dot uk] for details and prices.

AI For Decision Makers

September 15, 2020

Our practical ethics and governance toolkit for AI and automated systems is now available to download in a DIY print-at-home version, and we are running a crowdfunding campaign on Indiegogo for a production run to make the toolkit widely affordable.

Download the FREE AI For Decision Makers Toolkit (Zip 11Mb)
AI4DM Worksheet only (PDF 400Kb)
Read the Handbook Online

Order your set now from our online store

“Quite frankly this is the best bit of communication in this area I have ever seen. It is the perfect complement to the UnBias Fairness Toolkit. Together they can be adopted by any organisation in business, charity, education, healthcare etc etc.
Especially in the light of recent events I just wish that every member of the Government and the Civil Service had a set! 
I know how difficult it is to refine the language so that it really gets through. You have done a superb job.”

Lord Clement-Jones CBE
Chair of the House of Lords Select Committee on Artificial Intelligence (2017–2018)

AI4DM is a suite of critical thinking tools enabling cross-organisational stakeholders to implement transdisciplinary ethical and governance assessments of planned or existing AI and automated decision-making systems.
It naturally fosters participation, bringing people together to map AI systems, existing and proposed, against the organisation’s own mission, vision, values and ethics.
It uses a whole systems approach to analyse organisational structures and operations, illuminating to participants the breadth of issues beyond their individual responsibilities.

The tools are intuitive and practical, and can be used for:

  • revealing where and how a system aligns with the organisation’s mission, vision, values and ethics, and where it is (or could be) misaligned;
  • enabling different stakeholders to appreciate where and how their obligations and responsibilities intersect with those of others;
  • emphasising the collective nature of lawful and ethical responsibilities across the whole organisation;
  • providing a mechanism for deep analysis of complex challenges.

The toolkit was conceived, created and designed by Giles Lane with illustrations by Alice Angus. It was commissioned by Ansgar Koene at EY Global Services.

Download the Flyer (PDF 80Kb)

UnBias Facilitator Booklet

July 3, 2019

Our colleagues, Helen Creswick and Liz Dowthwaite, at Horizon Digital Economy Institute (University of Nottingham) have recently produced a new booklet for facilitators to accompany the UnBias Fairness Toolkit.

The booklet is the result of an Impact Study grant to run a series of workshops with people of different ages and to co-devise games and activities using the Awareness Cards. It also contains further advice and feedback for facilitators and others running workshops using the Toolkit, guiding them to what works best with different groups.

Download PDF versions to print out and make up

UnBias: Fairness in Pervasive Environments

December 18, 2018

Last week I ran a workshop at the TIPSbyDesign Symposium hosted by Design Informatics at the University of Edinburgh. It was the second symposium of the PACTMAN project, which aims to build a community of UK TIPS (trust, identity, privacy and security) researchers. Five workshops were run over the two days, as well as two keynotes: one by Georgina Bourke of If on their collaboration with LSE Data and Society, “Understanding Automated Decisions”, and one by Prof Paul Coulton on “More-than-human centred design”.

Organiser, Bettina Nissen, invited me to devise a workshop that addressed the problem of designing fairness in ‘pervasive environments’ – i.e. spaces where technology is present and capturing data, but where we might not be giving our explicit permission for our data to be captured. Bettina was also keen to see and experience the UnBias Fairness Toolkit, so I devised a workshop that used its tools to frame a problem space and explore its implications; to define key concerns and values; and to develop some principles that could guide future design.

We began by imagining some actual ‘pervasive environments’ and chose three (airports, shopping centres and taxis) to explore in more depth. The 20 participants divided into 3 groups, each choosing one type of environment to explore – identifying the various ‘actors’ (those installing/imposing technology within the environment and/or capturing data from it) and those being acted upon (i.e. having data about them, their behaviours and potentially their interactions with the devices captured). To help with this, we used the Data cards from the UnBias Awareness deck, and to consider the consequences and impacts (potential benefits and harms) we used both the Factors and Examples cards. We also used the Rights cards to assess how rights and laws protecting individuals would come into play in such spaces.

The TrustScape worksheets were used to identify and communicate a key concern to be shared with the other groups:

After a break, we reconvened and each group passed their TrustScapes to another. We then used the MetaMap worksheets to respond to the TrustScapes, also using the Values cards to help guide the responses:

Finally, we discussed the outcomes of the exercises and used them to define 6 principles for designing ‘fair’ pervasive environments:

  • Allowing participants to opt out without missing out
  • Exposing the role and relationship to regulators for all actors and participants
  • Understanding the motivations of stakeholders who define and control such environments
  • Providing space for negotiating alternatives to standard Terms and Conditions
  • Providing transparency with regard to the bigger picture laws and rights governing public spaces and behaviours in them
  • Providing visibility of how power operates and what the imbalances are

The workshop was an intense process over almost 3 hours and I would like to thank all the participants for their efforts and contributions in making it such a valuable experience.

UnBias

December 8, 2018

The UnBias Fairness Toolkit is a critical and civic thinking tool for exploring how decisions are made by algorithms and the impact that they have on our lives.

Download the FREE UnBias Fairness Toolkit (Zip 18Mb)
Read the Handbook Online
Read the Facilitator Booklet Online

A practical companion and extension set for exploring ethics and governance in AI and automated decision making systems is now available:

Download the FREE AI For Decision Makers Toolkit (Zip 11Mb)
AI4DM Worksheet only (PDF 400Kb)
Read the Handbook Online
Watch training & info animations on our YouTube Playlist
Download Flyer (A4 PDF 80Kb)

*** Buy a set from our online store ***

 * * *

Print-on-demand 

  • Buy the UnBias Awareness Cards (direct from manufacturer)

 * * *

Research Project 2016-18

Proboscis was a partner in the UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy project – led by the Horizon Institute at Nottingham University with the Universities of Oxford & Edinburgh and funded by EPSRC (EP/N02785X/1) under the Trust, Identity, Privacy and Security (TIPS) programme.

Proboscis led the co-design of the UnBias Fairness Toolkit – which was launched at the Victoria & Albert Museum’s Digital Design Weekend in September 2018.

UnBias Showcase video

October 29, 2018

A video with clips and brief interviews from the UnBias Showcase event on 1st October 2018:

See my clip at 5.05

Illustrating for algorithmic bias

September 27, 2018

As part of the UnBias project I was asked to create illustrations for the Fairness Toolkit’s TrustScape and Awareness Cards. The toolkit is designed to raise awareness and create dialogue about algorithms, trust, bias and fairness. My involvement in the project started with a series of quick sketches for stickers to be used with the TrustScape. The sketches were made in response to the results of workshops with young people, who identified issues, themes and difficulties in the networked world and described a wide range of biases in algorithmic decisions and how they impact on people’s lives.

 

For the UnBias Awareness Cards the brief was to create a design for each of the eight suits: Rights, Data, Factors, Values, Process, Example, Exercise and Glossary. The fronts of the cards contain examples, activities, scenarios and information about algorithmic bias and the ways prejudiced behaviours can emerge in systems. The focus of my illustrations was on how algorithmic decisions could affect people and communities: how do we know decisions are being made fairly and are not threatening rights? How do we know decisions are not based on gender or race? How do we know when we are in a social media bubble, what is real or fake, and what to trust?

At the same time I also wanted the illustrations to celebrate some of the pioneering developments in computing, often made by people who wanted to enable others, and to reference the history of computation devices, predicting machines and mass communication technologies.

It was important for each card to be unique but for the common themes to flow through all of them. Across the cards you will find patterns and references to computation devices and processes: QR codes, punch cards, network diagrams, server arrays, excerpts of code for sorting algorithms, circuit board diagrams, flowcharts, early devices like the Difference Engine and Tide Predicting Machine No. 2, the Mac Classic, and the handheld devices and social media apps we use today. Since algorithms work behind the scenes of the web to filter and sort data, several cards feature machines used for measuring, weighing, sorting, ranking, dividing and filtering.

The main text styles are inspired by typefaces that have a relationship to the history of computing. ‘Factors’ is based on the early Selectric font for IBM’s Selectric electric typewriter, which went on to become one of the first to provide word processing capability. ‘Exercise’ and ‘Example’ were inspired by the typefaces in early forms of electronic communication: telegrams, teletext and ticker tape. The lettering of ‘Data’, ‘Values’, ‘Rights’, ‘Process’ and ‘Glossary’ was inspired by fonts I had seen on early computation devices, like Pascal’s Typewriter, Babbage’s Difference Engine, and Kelvin’s and Ferrel’s Tide Predicting Machines, and by typefaces used on mass-produced adverts and posters in the industrial revolution.

The edges of the main title scrolls are decorated with mathematical motifs like > <, ( ), X, etc., and the outer borders are decorated with binary. One of the simplest ways of visualising an algorithm is using a flowchart, and the centre shape of each card is inspired by the frames used in flowcharts to represent different stages of a process: ‘stop/start’, ‘database’, ‘processing’, ‘decision’, ‘repetition’, ‘connector’.

UnBias Awareness Cards – Glossary Suit Illustration

Glossary is a bit different from the other cards: there is only one Glossary card, and it holds a definition of the meaning of ‘ALGORITHM’. The images on the back reference various storage and processing devices: reel-to-reel tape, a server array, a Mac Classic, an early word processor, a tablet, ticker tape, punch cards, Fortran cards, a blackboard and an abacus.

The card also celebrates some pioneers in mathematics. The algorithm on the computer screen and on the blackboard is Euclid’s Greatest Common Divisor (GCD) algorithm; dating back to Ancient Greece, it is one of the oldest algorithms still in use.
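As a minimal illustration of what that blackboard algorithm does, here is Euclid’s GCD in a few lines of modern Python (a sketch for the curious, not a transcription of the card’s artwork):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # -> 21 (1071 = 3*3*7*17, 462 = 2*3*7*11)
```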

The writing around the scroll border consists of excerpts from Ada Lovelace‘s pioneering algorithm to calculate Bernoulli numbers. Written in the early 1840s, it is considered by some to be the first computer programme. Ada was an English mathematician, thought to be the first computer programmer, and the work this is drawn from is one of the most important documents in the history of computing.
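For readers who want to reproduce the numbers her algorithm targets, a few lines of modern Python using the standard Bernoulli recurrence will do it (a sketch under today’s B1 = -1/2 convention, not a transcription of Lovelace’s Note G procedure):

```python
from fractions import Fraction
from math import comb

def bernoulli(n: int) -> Fraction:
    """Bernoulli numbers via the standard recurrence
    B_m = -1/(m+1) * sum_{k<m} C(m+1, k) * B_k, with B_0 = 1."""
    b = [Fraction(1)]
    for m in range(1, n + 1):
        b.append(-sum(comb(m + 1, k) * b[k] for k in range(m)) / (m + 1))
    return b[n]

print([str(bernoulli(i)) for i in range(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0']
```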

Standing at the chalkboard is Dorothy Vaughan, a leading mathematician and early programmer who worked at NASA and its predecessor in the 1940s, 50s and 60s. Working in a time of racial segregation, she led the West Area Computing team. She was the first African American supervisor at NASA and one of very few women at that level, but was not officially acknowledged, or paid, as such for several years. She was visionary in her realisation that electronic computers would take over much of the human calculators’ work, and taught herself FORTRAN and other languages, which she then taught to the other women so they would be ready for the change. Her work fed into many areas of research at the Langley Laboratory and she paved the way for a more diverse workforce and leadership at NASA today.

Grace Hopper was a groundbreaking programmer who, in the 1950s and 60s, pioneered machine-independent programming languages and invented one of the first compilers, which translated English words into the machine code that computers understood. Grace was an American computer scientist who realised that people would find it easier to use computers if they could programme in English words and have those translated into machine code. She created FLOW-MATIC, the first English-like programming language, and was instrumental in the development of COBOL, which is still widely used today. She did much to increase understanding of computer communications, and went on to encourage more women to enter the field and people to experiment and take chances in computing.

A raven sits on the blackboard, watching, because all corvids (ravens, crows, rooks etc.) are renowned for their problem-solving skills; the Crow Search Algorithm (CSA) is a metaheuristic based on the intelligent behaviour of crows.
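As an aside for the technically curious, here is a minimal sketch of the CSA’s core update rule for minimising a function. The awareness probability (ap) and flight length (fl) values are typical defaults assumed for illustration, not canonical settings:

```python
import numpy as np

def crow_search(f, lo, hi, dim, n_crows=20, iters=200, ap=0.1, fl=2.0, seed=0):
    """Minimise f over the box [lo, hi]^dim. Each crow tails a random crow's
    remembered food cache; with probability ap the tailed crow 'notices' and
    the follower is sent to a random position instead."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_crows, dim))        # current positions
    mem = x.copy()                                 # remembered best positions
    mem_f = np.apply_along_axis(f, 1, mem)         # fitness of memories
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)              # crow i follows crow j
            if rng.random() >= ap:                 # j unaware: move toward its cache
                new = x[i] + rng.random() * fl * (mem[j] - x[i])
            else:                                  # j aware: i flies somewhere random
                new = rng.uniform(lo, hi, dim)
            if np.all((new >= lo) & (new <= hi)):  # keep only feasible moves
                x[i] = new
                fx = f(new)
                if fx < mem_f[i]:                  # update memory on improvement
                    mem[i], mem_f[i] = new, fx
    best = int(np.argmin(mem_f))
    return mem[best], mem_f[best]

# Example: minimise the sphere function; the optimum is at the origin.
pos, val = crow_search(lambda v: float(np.sum(v**2)), lo=-5.0, hi=5.0, dim=3)
print(pos, val)
```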

UnBias Awareness Cards – Data Suit Illustration

UnBias Toolkit Workshops at V&A Digital Design Weekend

September 12, 2018

I will be running four workshops with Alex Murdoch exploring the UnBias Fairness Toolkit at the V&A’s Digital Design Weekend on Saturday 22nd and Sunday 23rd September. Each workshop is intended for different audiences and contexts in which the toolkit could be used.

UnBias Fairness Toolkit Educators Workshop
Seminar Room 1, Sackler Centre for arts education
Saturday 22, 11.30-13.30
Algorithms, bias, trust and fairness: how do you engage young people in understanding and discussing these issues? How do you stimulate critical thinking skills to analyse decision-making in online and automated systems? Explore practical ideas for using the UnBias Fairness Toolkit with young people to frame conversations about how we want our future internet to be fair and free for all.

UnBias Fairness Toolkit Industry Stakeholders Workshop
Seminar Room 1, Sackler Centre for arts education
Saturday 22, 14.30-16.30
The UnBias project is initiating a “public civic dialogue” on trust, fairness and bias in algorithmic systems. This session is for people in the tech industry, activists, researchers, policymakers and regulators to explore how the Fairness Toolkit can inform them about young people’s and others’ perceptions of these issues, and how it can facilitate their responses as contributions to the dialogue.

DESIGN TAKEOVER ON EXHIBITION ROAD
Sunday 23, 10.00-17.00
Celebrate ten years of London Design Festival at the V&A with a special event on Exhibition Road. Bringing together events by the Brompton Design District, Imperial College, the Natural History Museum, the Science Museum and the V&A, this fun-filled day of design, workshops and talks will offer something for everyone, and a unique way into the many marvels of Albertopolis.

UnBias Fairness Toolkit Workshops
Young people (12-22 yrs) 12.00-13.30
Open Sessions 15.30-17.00
What is algorithmic bias and how does it affect you? How far do you trust the apps and services you use in your daily life with your data and privacy? How can we judge when an automated decision is fair or not? Take part in group activities exploring these questions using the UnBias Fairness Toolkit to stimulate and inspire your own investigations.

Download the V&A DDW Brochure

Colleagues from Oxford University and Horizon Digital Economy Institute will also be running UnBias activities as part of the event:

UnBias
The Raphael Cartoons, Room 48a
Drop-in from 12.00-16.00
How do you feel about fake news, filter bubbles, unfair or discriminatory search results and other types of online bias? How are decisions made online? What types of personal data do you share with online companies and services? Do you trust them? Explore these through a range of activities, from Being the Algorithm to Creating a Data Garden, and from Public Voting to making a TrustScape of how you feel about these issues. Suitable for families.

UnBias Fairness Toolkit

September 7, 2018


The UnBias Fairness Toolkit is now available to download and use. It aims to promote awareness and to stimulate a public civic dialogue about algorithms, trust, bias and fairness – in particular, about how algorithms shape online experiences and influence our everyday lives – and to encourage reflection on how we want our future internet to be fair and free for all.

The tools encourage not only critical thinking but civic thinking – supporting a more collective approach to imagining the future, in contrast to the individual atomising effect such technologies often have. The toolkit has been developed by Giles Lane, with illustrations by Alice Angus and Exercises devised by Alex Murdoch, alongside contributions from the UnBias team members and the input of young people and stakeholders.

The toolkit contains the following elements:

  1. Handbook
  2. Awareness Cards
  3. TrustScape
  4. MetaMap
  5. Value Perception Worksheets

All components of the Toolkit are freely available to download and print under a Creative Commons licence (CC BY-NC-SA 4.0).

Download the complete UnBias Fairness Toolkit (zip archive 18Mb)

DOI


UnBias: Our Future Internet video

May 21, 2018

UnBias Fairness Toolkit Preview

March 13, 2018

Here is the presentation from a workshop held in London yesterday, at which I previewed the Fairness Toolkit whose development I have been leading for the UnBias project. It still requires further testing and refining, so feedback and comments are most welcome:

UnBias Fairness Toolkit Workshop from Giles Lane