Artificial Intelligence and Public Standards

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/863657/AI_and_Public_Standards_Web_Version.PDF

This report seeks to set out rules for AI operating in the public sector. It is the result of many meetings and debates, and it gathers numerous noteworthy quotations from relevant figures. It closes with a series of recommendations to ensure that AI is held to the same (Nolan) principles as public service delivered by humans.

“Artificial Intelligence is one of the most transformative forces of our time, and is bound to alter the fabric of society.” European Commission, Independent High-Level Expert Group on AI

The Data Ethics Framework principles
 1. Start with clear user need and public benefit
 2. Be aware of relevant legislation and codes of practice
 3. Use data that is proportionate to the user need
 4. Understand the limitations of the data
 5. Ensure robust practices and work within your skillset
 6. Make your work transparent and be accountable
 7. Embed data use responsibly.

“When decision systems are introduced into public contexts such as criminal justice, it is important they are subject to the scrutiny expected in a democratic society. Algorithmic systems have been criticised on this front, as when developed in secretive circumstances or outsourced to private entities, they can be construed as rulemaking not subject to appropriate procedural safeguards or societal oversight.” Law Society Report, Algorithms in the Criminal Justice System

“States should engage in inclusive, interdisciplinary, informed and public debates to define what areas of public services profoundly affecting access to or exercise of human rights may not be appropriately determined, decided or optimised through algorithmic systems.” The Council of Europe’s draft Guidelines for States on actions to be taken vis-à-vis the human rights impacts of algorithmic systems

“We are not aware of any body with systematic knowledge of where automated decision-making tools are being used in the public sector.” Centre for Data Ethics and Innovation

“There is a serious lack of transparency and concomitant lack of accountability about how the police and other law enforcement agencies are already using these technologies.” Professor Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics, University of Birmingham Law School and School of Computer Science

“Transparency – and therefore accountability – over the way in which public money is spent remains a very grey area in the UK…People are convinced that the growth of technology in the public sector has hugely important ramifications, but are baffled as to what exactly is going on and who is doing it.” Dr Crofton Black, Government Data Systems: The Bureau Investigates, The Bureau of Investigative Journalism

“Much of the public simply don’t yet know enough about how AI or automation works, or where innovations might be used, to make an informed decision on whether they support or oppose them. This creates a vacuum of information, into which negative narratives about Britain’s future are just as likely to take root as positive ones.” Mark Kleinman, Professor of Public Policy and Director of Analysis at the Policy Institute, King’s College London

“When you have a non-human decision-maker, can you always ascribe the outcome to a human? If you cannot then you have a gap where there is no legal liability. One could stretch existing laws around negligence and vicarious liability, but the more independently AI takes decisions, the harder it will be to tie decisions back to human beings.” Jacob Turner, Barrister and Author of Robot Rules: Regulating Artificial Intelligence

“Rather than focusing on the concept of humans-in-the-loop, we need to think carefully about the end-to-end process and ensure that we think about how AI and humans work together to deliver efficiencies and better results.” Sana Khareghani, Head, Office for AI

“If you are saying that there may be some decisions that need to be made so rapidly that the machine makes the decision (if it has been appropriately codified), there is still human accountability at the design stage and in the verification and validation of the AI system before it is put into use. This means you may not have an accountability gap as ultimately a human is still accountable at the design and testing stages.” Fiona Butcher, Fellow, Defence Science and Technology Laboratory, Ministry of Defence

“The fact that we cannot always explain how an AI system made a decision and whether that process was adequate challenges public servants’ ability to make decisions in an open and transparent manner.” Leverhulme Centre for the Future of Intelligence, University of Cambridge

“If you stick with a simpler model which is inherently interpretable, you are not going to sacrifice that much on accuracy but you are going to keep the benefits of understanding the variables you are using and understanding how the model works.” Dr Reuben Binns, Postdoctoral Research Fellow in AI, ICO
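
Binns’s trade-off is easy to see in miniature. The sketch below is purely illustrative (synthetic data, scikit-learn; not drawn from the report): it fits an inherently interpretable logistic regression alongside a more complex gradient-boosted model, compares their accuracy, and prints the per-feature weights that make the simpler model inspectable.

```python
# Illustrative sketch only: a simple, interpretable model vs a complex one
# on the same synthetic task, echoing Binns's point that the accuracy
# sacrifice is often small while interpretability is preserved.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
complex_ = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("interpretable model accuracy:", round(simple.score(X_te, y_te), 3))
print("complex model accuracy:     ", round(complex_.score(X_te, y_te), 3))

# The interpretable model exposes one signed weight per input variable,
# so we can state which features drive its decisions and in which direction.
for i, w in enumerate(simple.coef_[0]):
    print(f"feature {i}: weight {w:+.3f}")
```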

“I think something we need to be challenging ourselves on is whether the lack of transparency and the lack of explainability is a real necessity for the system or whether it is bad design…sometimes there is a challenge to be made of vendors and people who are building the system.” Simon McDougall, Executive Director, Technology Policy and Innovation, ICO 

“Claims about what is technically (im)possible should be treated with caution. Our engagement with industry to date suggests that, if a degree of explainability is made a priority from the outset by its commissioner, it can be built in.” Centre for Data Ethics and Innovation

“The incorporation of an AI tool into a decision-making process may come with the risk of creating ‘substantial’ or ‘genuine’ doubt as to why decisions were made and what conclusions were reached…consideration should be given to the circumstances in which reasons for an explanation of the output may be required.” Marion Oswald, Senior Fellow in Law and Director of the Centre for Information Rights, University of Winchester

“There is a very old adage in computer science that sums up many of the concerns around AI-enabled public services: ‘Garbage in, garbage out.’ In other words, if you put poor, partial, flawed data into a computer it will mindlessly follow its programming and output poor, partial, flawed computations. AI is a statistical-inference technology that learns by example. This means if we allow AI systems to learn from ‘garbage’ examples, then we will end up with a statistical-inference model that is really good at producing ‘garbage’ inferences.” British Computer Society
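
The ‘garbage in, garbage out’ mechanism can be demonstrated in a few lines. In this hypothetical sketch (synthetic data; all numbers are invented), the historical labels penalise one group regardless of the underlying signal, and a model fitted to those labels faithfully reproduces the penalty in its predictions.

```python
# Illustrative sketch of "garbage in, garbage out": a model trained on
# biased historical labels learns to reproduce the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)          # 0 = majority, 1 = minority
merit = rng.normal(size=n)             # the signal we *want* to learn

# "Garbage" labels: historical decisions penalised group 1 regardless of merit.
biased_label = ((merit - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0).astype(int)

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, biased_label)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"positive-prediction rate for group {g}: {rate:.2f}")
# Both groups have the same merit distribution, yet group 1 receives a
# much lower positive rate: the bias in the examples becomes bias in the
# inferences.
```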

“Decision-making, algorithmic or otherwise, can of course also be biased against characteristics which may not be protected in law, but which may be considered unfair, such as socio-economic background. In addition, the use of algorithms increases the chances of discrimination against characteristics that are not obvious or visible. For example, an algorithm might be effective at identifying people who lack financial literacy and use this to set interest rates or repayment terms.” Centre for Data Ethics and Innovation, Interim Report on Data Bias

“The statistics speak for themselves. We know that you are eight times more likely to be subject to stop and search in the UK if you are black. If you are building an algorithm on these statistics, that is a huge problem.” Sandra Wachter, Associate Professor and Senior Research Fellow, Oxford Internet Institute

“Some of our existing systems are designed in a way that makes it impossible to measure bias...One of the good things about machine learning technologies is that they have exposed some bias which has always been there.” Professor Helen Margetts, Professor of Society and the Internet at the University of Oxford and Director of the Public Policy Programme, The Alan Turing Institute

“Right now we are more likely to be replacing a human process with an AI process. All us humans are bringing a whole suitcase of preconceptions, prejudices and baggage along with us to that decision, some conscious and some unconscious. As we talk around bias in AI – and there is plenty of stuff to talk about – we have to keep in mind we are not moving from a beautiful neutral model.” Simon McDougall, Executive Director, Technology Policy and Innovation, ICO

“I think we have to start from the point of view that we are dealing with biased systems usually anyway. It is one of the hopes of artificial intelligence that it might be able to reduce bias in certain areas and, certainly, provide lots more ways of systematically thinking about measuring that bias.” Dr Jonathan Bright, Senior Research Fellow, Oxford Internet Institute

“There will be new jobs for humans to work out what machines are doing. And this is where it comes back to diversity – those humans in the loop must be diverse, so they can see the true range of possible impacts the machine is having.” Professor Dame Wendy Hall, Regius Professor of Computer Science, University of Southampton and co-author, UK government AI review

“What we might want to say is ‘it is unacceptable not to know the ways in which your system is biased, and you are then required to account for how you use and understand the results of that system in that context.’ You need to be able to provide a justification and that justification has to be subject to scrutiny and challenge.” Oliver Buckley, Executive Director, CDEI 
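
One way to meet the standard Buckley describes is to measure, group by group, how a system’s decisions fall. The sketch below is a minimal, hypothetical illustration (the function name and data are invented, and real audits use richer metrics): it reports per-group selection rates and false-positive rates so that the disparities can be stated, justified and challenged.

```python
# Illustrative sketch: quantify how outcomes differ by group so the
# numbers can be put to scrutiny, rather than remaining unknown.
import numpy as np

def group_outcome_report(decisions, outcomes, group):
    """Print per-group selection rates and false-positive rates
    for binary decisions against binary true outcomes."""
    for g in np.unique(group):
        mask = group == g
        selected = decisions[mask].mean()
        # False-positive rate: flagged despite a negative true outcome.
        neg = mask & (outcomes == 0)
        fpr = decisions[neg].mean() if neg.any() else float("nan")
        print(f"group {g}: selection rate {selected:.2f}, "
              f"false-positive rate {fpr:.2f}")

# Hypothetical decision log: 0/1 decisions, true outcomes and group labels.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
outcomes = rng.integers(0, 2, 1000)
decisions = ((outcomes + 0.3 * group + rng.normal(scale=0.5, size=1000)) > 0.5).astype(int)

group_outcome_report(decisions, outcomes, group)
```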

“A draft tool we have looked at (at West Midlands Police) had intelligence information built in as input factors, including things like the number of stop and search counts, and that raised red flags around what that could be a proxy for in that particular region.” Marion Oswald, Senior Fellow in Law and Director of the Centre for Information Rights, University of Winchester

“I’m not convinced that human cleansing of data adequately answers this problem. When we remove certain data points, how are we sure that we are making a dataset less biased? Whose rules are being used, why and who is saying that those rules are the right ones?” Sana Khareghani, Head, Office for AI

“A hallmark of good governance is the development of shared values, which become part of the organisation’s culture, underpinning policy and behaviour throughout the organisation, from the governing body to all staff.” The Independent Commission on Good Governance

“The guidelines and advice are the shared responsibility of the Office for AI in BEIS, and the Government Digital Service. The OAI is also responsible for promoting the development of AI technologies and industries, and so has a conflicting interest, and the GDS has wide responsibilities to support digitalization of central government. It seems unlikely that either organisation has the capacity or remit to ensure robust and consistent ethical supervision on broader questions of automated decision system adoption and use in public policy, including their use outside central government.” Dr Emma Carmel, Associate Professor, Social and Policy Sciences, University of Bath

“[It is] not adequate to employ technical legal arguments to ‘cobble together’ an ‘implicit’ lawful basis, given that the power, scale and intrusiveness of these technologies create serious threats to the rights and freedoms of individuals, and to the collective foundations of our democratic freedoms.” Professor Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics, University of Birmingham Law School and School of Computer Science

“[H]uman involvement has to be active and not just a token gesture. The question is whether a human reviews the decision before it is applied and has discretion to alter it or whether they are simply applying the decision taken by the automated system.” What does the GDPR say about automated decision-making and profiling? ICO

“You need to be able to give an individual an explanation of a fully automated decision to enable their rights, to obtain meaningful information, express their point of view and contest the decision.” ICO Guidance, Why Explain AI, Project ExplAIn

“Although predictive policing is simply reproducing and magnifying the same patterns of discrimination that policing has historically reflected, filtering this decision-making process through complex software that few people understand lends unwarranted legitimacy to biased policing strategies that disproportionately focus on BAME and lower income communities.” Policing by Machine, Liberty

“In 2017, Durham Constabulary started to implement a Harm Assessment Risk Tool (HART), which utilised a complex machine learning algorithm to classify individuals according to their risk of committing violent or non-violent crimes in the future. This classification is created by examining an individual’s age, gender and postcode. This information is then used by the custody officer, a human decision-maker, to determine whether further action should be taken; in particular, whether an individual should access the Constabulary’s Checkpoint programme, which is an “out of court” disposal programme. There is potential for numerous claims here. A direct age discrimination claim could be brought by individuals within certain age groups who were scored negatively. Similarly, direct sex discrimination claims could be brought by men, in so far as their gender leads to a lower score than comparable women. Finally, indirect race discrimination or direct race discrimination claims could be pursued on the basis that an individual’s postcode can be a proxy for certain racial groups. Only an indirect race discrimination claim would be susceptible to a justification defence in these circumstances.” AI Law Hub
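
The postcode-as-proxy concern can itself be tested empirically. The sketch below is a hypothetical illustration (synthetic data; the segregation pattern is invented): it checks how well postcode alone predicts a protected characteristic, which is the kind of red-flag analysis the indirect discrimination argument rests on.

```python
# Illustrative proxy check: how well does an apparently neutral input
# such as postcode predict a protected characteristic? A score well
# above chance suggests the input can act as a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 4000
postcode = rng.integers(0, 20, n)          # hypothetical postcode areas
p_minority = 0.1 + 0.8 * (postcode < 4)    # four areas are heavily segregated
protected = (rng.random(n) < p_minority).astype(int)

X = np.eye(20)[postcode]                   # one-hot encode the postcode
auc = cross_val_score(LogisticRegression(max_iter=1000), X, protected,
                      scoring="roc_auc", cv=5).mean()
print(f"postcode predicts the protected group with AUC ~ {auc:.2f}")
# An AUC near 0.5 would mean postcode carries no group information;
# here it is far higher, even though race never appears as an input.
```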

“Public bodies must consider the Public Sector Equality Duty when they make decisions about how they fulfil their public functions and deliver their services. When moving towards automated decision making the PSED provides an opportunity for equality considerations to be built into decision-making processes as they are developed.” Rebecca Hilsenrath, Chief Executive, Equality and Human Rights Commission 

“People often say ‘Let’s have a new regulator. Let’s have a new, shiny one.’ Actually, there is a lot of expertise already in the regulators because they are having to deal with this kind of thing in markets which they are there to regulate. We ought to build on that and use the expertise we have got.” Professor Helen Margetts, Professor of Society and the Internet, University of Oxford and Director of the Public Policy Programme, The Alan Turing Institute

“The Cabinet Office should reinforce the message that the Seven Principles of Public Life apply to any organisation delivering public services. The Cabinet Office should ensure that ethical standards reflecting the Seven Principles of Public Life are addressed in contractual arrangements, with providers required to undertake that they have the structures and arrangements in place to support this. Commissioners of services should include a Statement of Intent as part of the commissioning process or alongside contracts where they are extended, setting out the ethical behaviours expected by government of the service providers.” Recommendations from the Committee’s 2014 and 2018 reports into providers of public services

“Ethical standards are definitely not part of the procurement process at this point in time.” Ian O’Gara, Accenture

“Assertions of commercial confidentiality should not be accepted as an insurmountable barrier to appropriate rights of access to the [algorithmic] tool and its workings for the public sector body, particularly where the tool’s implementation will impact fundamental rights. Government procurement contracts relating to AI and machine learning should not only include source code escrow provisions, but rights for the public sector party…as standard.” Marion Oswald, Senior Fellow in Law and Director of the Centre for Information Rights, University of Winchester

“Public servants must be incentivised in some way to carry out impact assessments and act upon their results, without being constrained from adopting beneficial innovation.” Centre for Data Ethics and Innovation

“The AIA provides designers with a measure to evaluate AI solutions from an ethical and human perspective, so that they are built in a responsible and transparent way. For example, the AIA can ensure economic interests are balanced against environmental sustainability. The AIA also includes ways to measure potential impacts to the public, and outlines appropriate courses of action, like behavioral monitoring and algorithm assessments.” Canadian Government Video on AIA

Questions on data quality taken from Canada’s Algorithmic Impact Assessment:
 • Will you have documented processes in place to test datasets against biases and other unexpected outcomes? This could include experience in applying frameworks, methods, guidelines or other assessment tools [see the sketch below].
 • Will you be developing a process to document how data quality issues were resolved during the design process?
 • Will you be making this information publicly available?
 • Will you undertake a Gender Based Analysis Plus of the data?
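
As a minimal, entirely hypothetical illustration of the first question, a documented bias-testing process can start as a script that runs named checks over a dataset and writes a machine-readable report that could later be published (the thresholds and column names here are invented):

```python
# Illustrative sketch of a documented, repeatable dataset check: run
# named tests, record the results, keep the report for publication.
import json
import pandas as pd

def check_dataset(df: pd.DataFrame, group_col: str, min_share: float = 0.05):
    """Return a machine-readable report of simple representation and
    missing-data checks; a real process would cover far more."""
    report = {}
    shares = df[group_col].value_counts(normalize=True)
    report["group_shares"] = shares.round(3).to_dict()
    report["underrepresented_groups"] = [g for g, s in shares.items()
                                         if s < min_share]
    report["missing_values_per_column"] = {c: int(v)
                                           for c, v in df.isna().sum().items()}
    return report

# Hypothetical dataset with a demographic column named "group".
df = pd.DataFrame({"group": ["a"] * 900 + ["b"] * 80 + ["c"] * 20,
                   "feature": range(1000)})
print(json.dumps(check_dataset(df, "group"), indent=2))
```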

Questions on goal-setting and objective-mapping taken from the UK government guidance’s Stakeholder Impact Assessment:
 • How are you defining the outcome (the target variable) that the system is optimising for? Is this a fair, reasonable, and widely acceptable definition?
 • Does the target variable (or its measurable proxy) reflect a reasonable and justifiable translation of the project’s objective into the statistical frame?
 • Is this translation justifiable given the general purpose of the project and the potential impacts that the outcomes of its implementation will have on the communities involved?

“We note the recommendation by the Law Society that a national register of automated decision making tools in use in criminal justice be established. Subject to appropriate exceptions, thresholds and safeguards, this would appear to support the Nolan Principles and would facilitate impact assessment of public sector ADMTs. Such a register may be appropriate in other parts of the public sector.” Centre for Data Ethics and Innovation

“You can imagine a scenario where things go wrong because the public sector has implemented some AI technology because it is shiny, cool and exciting rather than helpful.” Eddie Copeland, Director, London Office of Technology and Innovation (LOTI)

“Humans must be ultimately responsible for decisions made by any system...Good governance will require for each use case, a specific understanding of the appropriate division of responsibilities.” Centre for Data Ethics and Innovation

“The person [needs to have] both the agency and the knowledge necessary to make changes to the system’s behaviour and to intervene when it seems like something is going to go wrong.” Dr Brent Mittelstadt, Research Fellow and British Academy Postdoctoral Fellow, Oxford Internet Institute

“Another concern is when you have systems that continue to learn through interaction with the user. There is the potential for a user to either maliciously poison the training data or to be mischievous in the way that they train the system, thereby influencing the way it develops in the future.” Fiona Butcher, Fellow, Defence Science and Technology Laboratory, Ministry of Defence
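
The poisoning risk Butcher describes is straightforward to reproduce in miniature. This hypothetical sketch (synthetic data, purely illustrative) trains a small online classifier honestly, then lets a hostile user keep ‘training’ it with flipped labels through what looks like normal use, and the accuracy on the original task collapses.

```python
# Illustrative sketch: a classifier that keeps learning from user
# feedback can be steered off course by deliberately mislabelled examples.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
model = SGDClassifier(random_state=0)

# Honest initial training: the true label is the sign of the first feature.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
model.partial_fit(X, y, classes=[0, 1])
print("accuracy before poisoning:", model.score(X, y))

# A mischievous user now feeds the live system flipped labels,
# drip-fed through normal-looking interactions.
X_bad = rng.normal(size=(2000, 2))
y_bad = (X_bad[:, 0] <= 0).astype(int)     # deliberately wrong
for i in range(0, len(X_bad), 100):
    model.partial_fit(X_bad[i:i + 100], y_bad[i:i + 100])

print("accuracy after poisoning: ", model.score(X, y))
# Guardrails such as feedback vetting, rate limiting and drift monitoring
# are needed before interaction-time learning is deployed.
```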

“It is unclear whether civil society organisations have the capacity to engage in meaningful oversight, particularly given the rapidity with which different systems are being deployed across the sector and across the world.” Law Society Report, Algorithms in the Criminal Justice System

“We use oversight bodies to assure ourselves that we have consent from the public because we know that the people who are most likely to be adversely affected by AI are less likely to come forward and present their views. We use oversight bodies, scrutiny panels and independent advisory groups to be representative of those communities.” Superintendent Chris Todd, West Midlands Police

Working with the right skills to assess AI: when identifying whether AI is the right solution, it’s important that you work with:
 • specialists who have a good knowledge of your data and the problem you’re trying to solve, such as data scientists
 • at least one domain knowledge expert who knows the environment where you will be deploying the AI model results.
Office for AI Guidance, Assessing if artificial intelligence is the right solution

“From the perspective of the judiciary or the courts, I think education is the starting point… we are going to have to do a lot of work to develop effective training, knowledge systems and skills systems, to enable judges as well as the Court Service staff to understand the implications of the operations of the systems.” John Sorabji, Principal Legal Adviser to the Lord Chief Justice and Master of the Rolls




by Aniceto Pérez y Madrid, Philosopher of Technology and Editor of Actualidad Deep Learning (@forodeeplearn)
