UnBias AI4DM Teaching and Learning video
October 17, 2020 by Giles Lane
A brief animation with some ideas for using the UnBias AI For Decision Makers Toolkit in online and in-person classes. Watch our other animations here.
UnBias AI4DM Running a Workshop video
October 17, 2020 by Giles Lane
A brief animation with suggestions for how to run a workshop using the UnBias AI For Decision Makers Toolkit. Watch our other animations here.
AI For Decision Makers
September 15, 2020 by Giles Lane
Our practical ethics and governance toolkit for AI and automated systems is now available to download in a DIY print-at-home version, and we are running a crowdfunding campaign on Indiegogo for a production run to make the toolkit widely affordable.
Download the FREE AI For Decision Makers Toolkit (Zip 11Mb)
AI4DM Worksheet only (PDF 400Kb)
Read the Handbook Online
Order your set now from our online store
“Quite frankly this is the best bit of communication in this area I have ever seen. It is the perfect complement to the UnBias Fairness Toolkit. Together they can be adopted by any organisation in business, charity, education, healthcare etc etc.
Especially in the light of recent events I just wish that every member of the Government and the Civil Service had a set!
I know how difficult it is to refine the language so that it really gets through. You have done a superb job.”
Lord Clement-Jones CBE, Chair of the House of Lords Select Committee on Artificial Intelligence (2017–2018)
AI4DM is a suite of critical thinking tools enabling cross-organisational stakeholders to implement transdisciplinary ethical and governance assessments of planned or existing AI and automated decision-making systems.
It naturally fosters participation, bringing people together to map AI systems, existing and proposed, against the organisation’s own mission, vision, values and ethics.
It uses a whole systems approach to analyse organisational structures and operations, illuminating to participants the breadth of issues beyond their individual responsibilities.
The tools are intuitive, practical and can be used for:
- revealing where and how a system is in alignment with the organisation’s mission, vision, values and ethics, and where it is (or could be) misaligned;
- enabling different stakeholders to appreciate where and how their obligations and responsibilities intersect with those of others;
- emphasising the collective nature of lawful and ethical responsibilities across the whole organisation;
- providing a mechanism for deep analysis of complex challenges.
The toolkit was conceived, created and designed by Giles Lane with illustrations by Alice Angus. It was commissioned by Ansgar Koene at EY Global Services.
Download the Flyer (PDF 80Kb)
UnBias Research Project
April 4, 2020 by Giles Lane
Proboscis was a partner in the UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy project led by Horizon Digital Economy Institute at University of Nottingham, the Human Centred Computing research group in the Department of Computer Science at the University of Oxford, and the School of Informatics at the University of Edinburgh.
The project looked at the user experience of algorithm-driven internet services and the process of algorithm design. A large part of this work was user group studies to understand the concerns and perspectives of citizens. UnBias aimed to provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ co-produced with young people and other stakeholders, including educational materials and resources to support young people’s understanding of online environments, as well as to raise awareness among online providers of the concerns and rights of young internet users. The project is relevant for young people and for society as a whole, to ensure trust and transparency are not missing from the internet.
Proboscis led the co-design of the UnBias Fairness Toolkit – which was launched at the Victoria & Albert Museum’s Digital Design Weekend in September 2018.
Team: Giles Lane & Alice Angus (Proboscis) with Alex Murdoch.
Partners:
- HORIZON Digital Economy Research at the University of Nottingham
- Human Centred Computing Group at the University of Oxford
- The Centre for Intelligent Systems and their Applications (CISA) at the University of Edinburgh
Funded by EPSRC (EP/N02785X/1) under the Trust, Identity, Privacy and Security (TIPS) call.
Begun 2016 | Completed 2018
UnBias
December 8, 2018 by Giles Lane
The UnBias Fairness Toolkit is a critical and civic thinking tool for exploring how decisions are made by algorithms and the impact that they have on our lives.
Download the FREE UnBias Fairness Toolkit (Zip 18Mb)
Read the Handbook Online
Read the Facilitator Booklet Online
A practical companion and extension set for exploring ethics and governance in AI and automated decision making systems is now available:
Download the FREE AI For Decision Makers Toolkit (Zip 11Mb)
AI4DM Worksheet only (PDF 400Kb)
Read the Handbook Online
Watch training & info animations on our YouTube Playlist
Download Flyer (A4 PDF 80Kb)
*** Buy a set from our online store ***
* * *
Print-on-demand
- Buy the UnBias Awareness Cards (direct from manufacturer)
* * *
Research Project 2016-18
Proboscis was a partner in the UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy project – led by the Horizon Institute at Nottingham University with the Universities of Oxford & Edinburgh and funded by EPSRC (EP/N02785X/1) under the Trust, Identity, Privacy and Security (TIPS) programme.
Proboscis led the co-design of the UnBias Fairness Toolkit – which was launched at the Victoria & Albert Museum’s Digital Design Weekend in September 2018.
UnBias Showcase video
October 29, 2018 by Giles Lane
A video with clips and brief interviews from the UnBias Showcase event on 1st October 2018:
See my clip at 5:05
Illustrating for algorithmic bias
September 27, 2018 by aliceangus
As part of the UnBias project I was asked to create illustrations for the Fairness Toolkit’s Trustscape and Awareness Cards. The toolkit is designed to raise awareness and create dialogue about algorithms, trust, bias and fairness. My involvement in the project started with a series of quick sketches for stickers to be used with the Trustscape. The sketches were made in response to the results of workshops with young people, who identified issues, themes and difficulties in the networked world and described a wide range of biases in algorithmic decisions and how they impact on people’s lives.
For the UnBias Awareness Cards the brief was to create a design for each of the eight suits: Rights, Data, Factors, Values, Process, Exercise, Example and Glossary. The fronts of the cards contain examples, activities, scenarios and information about algorithmic bias and the ways prejudiced behaviours can emerge in systems. The focus of my illustrations was on how algorithmic decisions could affect people and communities: how do we know decisions are being made fairly and not threatening rights? How do we know decisions are not being based on gender or race? How do we know when we are in a social media bubble, what is real or fake, and what to trust?
At the same time I also wanted the illustrations to celebrate some of the pioneering developments in computing, often made by people who wanted to enable others, and to reference the history of communication technologies, computation devices, predicting machines and mass communication technologies.
It was important for each card to be unique but for the common themes to flow through all of them. Across the cards you will find patterns and references to computation devices and processes: QR codes, punch cards, network diagrams, server arrays, excerpts of code for sorting algorithms, circuit board diagrams, flowcharts, early devices like the Difference Engine and Tide Predicting Machine no 2, the Mac Classic and the handheld devices and social media apps we use today. Since algorithms work behind the scenes of the web to filter and sort data, several cards feature machines used for measuring, weighing, sorting, ranking, dividing and filtering.
The main text styles are inspired by typefaces that have a relationship to the history of computing. ‘Factors’ is based on the early Selectric font for IBM’s Selectric electric typewriter, which went on to become one of the first to provide word processing capability. ‘Exercise’ and ‘Example’ were inspired by the typefaces in early forms of electronic communication: telegrams, teletext and ticker tape. The lettering of ‘Data’, ‘Values’, ‘Rights’, ‘Process’ and ‘Glossary’ was inspired by fonts I had seen on early computation devices, like Pascal’s Typewriter, Babbage’s Difference Engine, and Kelvin’s and Ferrel’s Tide Predicting Machines, and by typefaces used on mass-produced adverts and posters in the Industrial Revolution.
The edges of the main title scrolls are decorated with mathematical motifs like > <, ( ), X, etc., and the outer borders are decorated with binary. One of the simplest ways of visualising an algorithm is a flowchart, and the centre shape of each card is inspired by the frames used in flowcharts to represent different stages of a process: ‘stop/start’, ‘database’, ‘processing’, ‘decision’, ‘repetition’, ‘connector’.
Glossary is a bit different to the other cards: there is only one Glossary card, and it holds a definition of the meaning of ‘ALGORITHM’. The images on the back reference various storage and processing devices: reel-to-reel tape, a server array, a Mac Classic, an early word processor, a tablet, ticker tape, punch cards, FORTRAN cards, a blackboard and an abacus.
The card also celebrates some pioneers in mathematics. The algorithm on the computer screen and on the blackboard is Euclid’s Greatest Common Divisor (GCD) algorithm, which dates back to Ancient Greece and is one of the oldest algorithms still in use.
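For readers curious about the algorithm on the card: Euclid’s method fits in a few lines. This is a minimal illustrative sketch in Python (my own, not a transcription of the artwork):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero; the last
    non-zero value is the greatest common divisor."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

The same remainder-swapping idea appears in Euclid’s Elements (c. 300 BC), which is why it is often cited as the oldest algorithm still in everyday use.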
The writing around the scroll border is made up of excerpts from Ada Lovelace‘s pioneering algorithm to calculate Bernoulli numbers, written in the early 1840s and considered by some to be the first computer program. Ada was an English mathematician, thought to be the first computer programmer, and the work this is taken from is one of the most important documents in the history of computing.
Standing at the chalkboard is Dorothy Vaughan, a leading mathematician and early programmer who worked at NASA and its predecessor from the 1930s to the 1960s. Working in a time of racial segregation, she led the West Area Computing team. She was the first African American supervisor at NASA and one of very few women at that level, but was not officially acknowledged, or paid, as such for several years. She was visionary in her realisation that computers would take over much of the human calculators’ work, and taught herself FORTRAN and other languages, which she then taught to the other women, to be ready for the change. Her work fed into many areas of research at the Langley Laboratory, and she paved the way for a more diverse workforce and leadership at NASA today.
Grace Hopper was a groundbreaking programmer who, in the 1950s and 60s, pioneered machine-independent programming languages and invented one of the first compilers, which translated English words into the machine code that computers understood. Grace was an American computer scientist who realised that people would find computers easier to use if they could program in English words and have those translated into machine code. She created FLOW-MATIC, the first English-like programming language, and was instrumental in the development of COBOL, which is still widely used today. She did much to increase understanding of computer communications, encouraged more women to enter the field, and urged people to experiment and take chances in computing.
A raven sits on the blackboard, watching, because corvids (ravens, crows, rooks, etc.) are renowned for their problem-solving skills; the Crow Search Algorithm (CSA) is based on the intelligent behaviour of crows.
UnBias Toolkit Workshops at V&A Digital Design Weekend
September 12, 2018 by Giles Lane
I will be running four workshops with Alex Murdoch exploring the UnBias Fairness Toolkit at the V&A’s Digital Design Weekend on Saturday 22nd and Sunday 23rd September. Each workshop is intended for different audiences and contexts in which the toolkit could be used.
UnBias Fairness Toolkit Educators Workshop
Seminar Room 1, Sackler Centre for arts education
Saturday 22, 11.30-13.30
Algorithms, bias, trust and fairness: how do you engage young people in understanding and discussing these issues? How do you stimulate critical thinking skills to analyse decision-making in online and automated systems? Explore practical ideas for using the UnBias Fairness Toolkit with young people to frame conversations about how we want our future internet to be fair and free for all.
UnBias Fairness Toolkit Industry Stakeholders Workshop
Seminar Room 1, Sackler Centre for arts education
Saturday 22, 14.30-16.30
The UnBias project is initiating a “public civic dialogue” on trust, fairness and bias in algorithmic systems. This session is for people in the tech industry, activists, researchers, policymakers and regulators to explore how the Fairness Toolkit can inform them about young people’s and others’ perceptions of these issues, and how it can facilitate their responses as contributions to the dialogue.
DESIGN TAKEOVER ON EXHIBITION ROAD
Sunday 23, 10.00-17.00
Celebrate ten years of London Design Festival at the V&A with a special event on Exhibition Road. Bringing together events by the Brompton Design District, Imperial College, the Natural History Museum, the Science Museum and the V&A, this fun-filled day of design, workshops and talks will offer something for everyone, and a unique way into the many marvels of Albertopolis.
UnBias Fairness Toolkit Workshops
Young people (12-22 yrs) 12.00-13.30
Open Sessions 15.30-17.00
What is algorithmic bias and how does it affect you? How far do you trust the apps and services you use in your daily life with your data and privacy? How can we judge when an automated decision is fair or not? Take part in group activities exploring these questions using the UnBias Fairness Toolkit to stimulate and inspire your own investigations.
Colleagues from Oxford University and Horizon Digital Economy Institute will also be running UnBias activities as part of the event:
UnBias
The Raphael Cartoons, Room 48a
Drop-in from 12.00-16.00
How do you feel about fake news, filter bubbles, unfair or discriminatory search results and other types of online bias? How are decisions made online? What types of personal data do you share with online companies and services? Do you trust them? Explore these through a range of activities, from Being the Algorithm to Creating a Data Garden, and from Public Voting to making a TrustScape of how you feel about these issues. Suitable for families.
UnBias: Our Future Internet video
May 21, 2018 by Giles Lane
UnBias Fairness Toolkit Preview
March 13, 2018 by Giles Lane
Here is the presentation from a workshop held in London yesterday, at which I previewed the Fairness Toolkit whose development I have been leading for the UnBias project. It still requires further testing and refining, so feedback and comments are most welcome: