Transparency, explainability and fairness in approaches to AI regulation: Takeaways from the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

a Financial Regulation Innovation Lab, Strathclyde Business School, University of Strathclyde, Glasgow, Scotland

b Michael Smurfit Graduate Business School, University College Dublin, Dublin, Ireland

Introduction and Purpose

AI holds extraordinary promise, but it carries the potential for both good and harm. Used responsibly, it may help address urgent societal concerns. Used carelessly, it risks worsening societal harms, including fraud, discrimination, bias, and disinformation. Deploying AI for good, and realising its many benefits, requires mitigating its considerable risks, an effort that demands action from government, the private sector, academia, and civil society (Biden Jr., 2023).

Thus, on 30 October 2023, an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI) was issued from the White House’s Briefing Room under the authority of President Biden (Biden Jr., 2023). The order places the utmost priority on governing AI development and use through a coordinated, Federal Government-wide approach. The pace of advancement in AI capabilities compelled this action (Biden Jr., 2023).

The order’s impact is assured by the force of law, and federal/executive departments and agencies[1] were made accountable for several duties within it. The aim is a more innovative, secure, productive, and prosperous future built on equitable AI governance (Biden Jr., 2023). Consequently, agencies have undertaken initiatives to help shape AI policy and advance the safe and responsible development and use of AI.[2]

The US’s systemic importance in shaping the global economic landscape makes its approach to AI regulation worth exploring (Jain, 2024). Accordingly, the aspects of the Executive Order centred on transparency, fairness, and explainability are outlined here and form the basis of this piece. Particular emphasis is placed on Section 7 (Advancing Equity and Civil Rights) and Section 8 (Protecting Consumers, Patients, Passengers, and Students), given the relevance of their content to explainability, transparency, and fairness in the context of this article. Finally, a juxtaposition against EU and UK regulatory approaches draws out similarities and differences.

Executive Order Structure

The executive order is structured into the following sections:

  1. Purpose.
  2. Policy and Principles.
  3. Definitions.
  4. Ensuring the Safety and Security of AI Technology.
  5. Promoting Innovation and Competition.
  6. Supporting Workers.
  7. Advancing Equity and Civil Rights.
  8. Protecting Consumers, Patients, Passengers, and Students.
  9. Protecting Privacy.
  10. Advancing Federal Government Use of AI.
  11. Strengthening American Leadership Abroad.
  12. Implementation.
  13. General Provisions.

Policy and principles

Eight guiding priorities and accompanying principles are outlined for agencies to comply with the order’s mandate, as appropriate and consistent with applicable law, while, where feasible, considering the views of other agencies, industry, academia, civil society, labor unions, international allies and partners, and other relevant organizations (Biden Jr., 2023). In synopsis, they are:[3]

(a) Safe and secure AI, requiring robust, reliable, repeatable, and standardized AI system evaluations, as well as policies, institutions, and other mechanisms to test, understand, and mitigate risks before use. This includes addressing the most pressing security risks of AI systems, while navigating AI’s opacity and complexity (Biden Jr., 2023).

(b) Promote responsible innovation, competition, and collaboration for AI leadership, and unlock potential for society’s most difficult challenges, through related education, training, development, research, and capacity investments. Concurrently, tackle novel intellectual property (IP) questions and other problems to shield inventors and creators (Biden Jr., 2023).

(c) Responsible AI development and use requires a commitment to supporting workers. As new jobs and industries are created, workers need a seat at the table, including through collective bargaining, so that they benefit from these opportunities. Job training and education are to be adapted for a diverse workforce, providing access to the opportunities AI creates (Biden Jr., 2023).

(d)  AI policies consistent with the Administration’s dedication to advancing equity and civil rights.  AI use to disadvantage those already too often denied equal opportunity and justice should not be tolerated. From hiring to housing to healthcare, AI use can deepen discrimination and bias, rather than improving quality of life (Biden Jr., 2023).

(e)  Protect the interests of those increasingly using, interacting with, or purchasing AI and AI-enabled products in their daily lives. Using new technology does not excuse organizations from their legal obligations, and hard-won consumer protections are more important than ever in moments of technological change (Biden Jr., 2023).

(f)  Protect privacy and civil liberties as AI continues advancing. AI makes it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires. AI’s capabilities in these areas can increase the risk that personal data is exploited and exposed (Biden Jr., 2023).

(g)  Manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible AI use for better results. Steps are to be taken to attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines, and to ease AI professionals’ path into the Federal Government to help harness and govern AI (Biden Jr., 2023).

(h)  Lead the way to global societal, economic, and technological progress, as in previous eras of disruptive innovation and change. This is not measured solely by technological advancements the country makes.  Effective leadership also means pioneering systems and safeguards to deploy technology responsibly — and building and promoting safeguards with the rest of the world (Biden Jr., 2023).

Definitions

“Artificial intelligence” or “AI” is defined in the order as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.  Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action (Biden Jr., 2023).

Further, “AI model” in the order means a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs (Biden Jr., 2023).

Finally, the order’s “AI system” definition is any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI (Biden Jr., 2023).

Transparency, explainability and fairness

Some notable elements of transparency, explainability, and fairness appear, directly or indirectly, throughout the order, over and above the guiding principles and policies discussed earlier. However, given their pronounced pertinence to human, consumer, and fundamental rights (Jain, 2024), Section 7 and Section 8 treat these areas of particular interest in the greatest detail.

Section 7, Advancing Equity and Civil Rights, provides direction and guidance predominantly on bias and discrimination from an AI perspective. This is in the context of varied rights, including those related to the dispensation of criminal justice and to government benefits and programs. It also extends to the broader economy: specifically, AI decision-making concerning disabilities, hiring, housing, consumer financial markets, and tenant screening, among others (Biden Jr., 2023).[4]

Section 8, Protecting Consumers, Patients, Passengers, and Students, sets out, through the lens of AI, direction and principles relating to healthcare, public health, and human services, including facets of bias and discrimination in those contexts. It also details guidance on transportation, education, and communication insofar as AI is concerned (Biden Jr., 2023).[5]

Disparities and parities vis-à-vis the UK and EU

Unlike the UK, and like the EU, explicit definitions of AI are mapped out within the order, as highlighted earlier (Jain, 2024). The order is phrased largely in the context of the US, and its applicability is for the most part confined to the US, but, as with both the UK and the EU, instances exist where international applicability comes into play (Jain, 2024). Notably, however, the onus for implementing the order is largely placed upon existing regulatory bodies, as in the UK, albeit with the distinction that some existing US bodies (for example, TechCongress) mostly, if not entirely, have AI within their remits. In the latter respect, the US’s approach is more similar to that of the EU, and is perhaps most accurately described as a combination of the two (Jain, 2024).

In so far as fairness, explainability, and transparency are concerned, US lawmakers place a holistic emphasis on several unique considerations, an approach more akin to that of the EU. As for caveats and advantages, a comparison between the US and the UK can be drawn that broadly parallels the contrast between the EU and the UK. Specifically, the US’s stricter approach and bureaucratic structure will necessitate significantly more compliance time, cost, and effort. However, such regulatory guidelines have stronger ethical grounding, potentially safeguarding the best interests of relevant stakeholders and averting dark innovation, bad actors, reputational damage, and insidious misuse (Jain, 2024). Lastly, as seen for the EU and UK (Jain, 2024), fairness, explainability, and transparency once again come to the fore as key considerations in regulating AI within the order. They are ubiquitously present principles in the US approach, as evidenced above, underlining their importance and salience in lawmakers’ minds.

Future topics

Expounding upon and assessing the evolution of this regulatory space may be compelling subjects for future articles, as they could hold manifold implications for explainability, transparency and fairness. Further iterations or final versions of specific draft guidance (referenced in footnotes earlier in this piece) created in response to this order could be analysed in further detail (for instance, see here), and comparisons with other similar frameworks (for instance, see here) may be of interest.

References

Biden Jr., J. R. (2023, October 30). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Retrieved from The White House's Official Website - Briefing Room - Presidential Actions: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Jain, K. (2024, April 03). How transparency, explainability and fairness are being connected under UK and EU approaches to AI regulation. Retrieved from FinTech Scotland: https://www.fintechscotland.com/how-transparency-explainability-and-fairness-are-being-connected-under-uk-and-eu-approaches-to-ai-regulation/

About the author

Kushagra Jain is a Research Associate at the Financial Regulation Innovation Lab (FRIL), University of Strathclyde. His research interests include artificial intelligence, machine learning, financial/regulatory technology, textual analysis, international finance, and risk management, among others. He was awarded doctoral scholarships from the Financial Mathematics and Computation Cluster (FMCC), Science Foundation Ireland (SFI), Higher Education Authority (HEA) and Michael Smurfit Graduate Business School, University College Dublin (UCD). Previously, he worked within wealth management and as a statutory auditor. He completed his doctoral studies in Finance from UCD in 2024, and obtained his MSc in Finance from UCD, his Accounting Technician accreditation from the Institute of Chartered Accountants of India and his undergraduate degree from Bangalore University. He was formerly FMCC Database Management Group Data Manager, Research Assistant, PhD Representative and Teaching Assistant for undergraduate, graduate and MBA programmes.


[1] Collectively referred to as agencies in the order and from hereon in.

[2] For instance, see here or here for ongoing actions undertaken by the U.S. Department of Commerce and its National Institute of Standards and Technology (NIST) in the form of four draft publications, built atop a foundation of extant principles, standards, frameworks and guidelines: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1), Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (SP 800-218A), Reducing Risks Posed by Synthetic Content (NIST AI 100-4), and A Plan for Global Engagement on AI Standards (NIST AI 100-5).

[3] For a more detailed and complete perspective, the interested reader is referred to the full text of the order here.

[4] While a more detailed treatment of the subject material is beyond the scope of this article, interested readers are directed to the full text of the order for more detail.

[5] While a more detailed treatment of the subject material is beyond the scope of this article, interested readers are directed to the full text of the order for more detail.


Image created by OpenAI's DALL·E, based on an article summary provided by ChatGPT.