Amendment #485 to H4600

Automated Decision-Making

Representatives Garballey of Arlington and Cataldo of Concord move to amend the bill by adding the following section:

“SECTION XXXX. SECTION 1. (a) As used in this section, the following words shall, unless the context clearly requires otherwise, have the following meanings:

 

“Algorithm”, a specific procedure, set of rules, or order of operations designed to solve a problem or make a calculation, classification, or recommendation.

 

“Artificial intelligence”, computerized methods and tools, including but not limited to machine learning and natural language processing, that act in a way that resembles human cognitive abilities in solving problems or performing certain tasks.

 

“Automated decision system”, any computer program, method, statistical model, or process that aims to aid or replace human decision-making using algorithms or artificial intelligence. Such systems may analyze complex datasets about human populations and government services or other activities to generate scores, predictions, classifications, or recommendations used by agencies to make decisions that impact human welfare.

 

“Executive agency”, a state agency within the office of the governor.

 

“Identified group characteristic", age, race, creed, color, religion, national origin, gender, disability, sexual orientation, marital status, veteran status, receipt of public assistance, economic status, location of residence, or citizenship status.

 

“Source code”, the structure of a computer program that can be read and understood by people.

 

“Training data”, the data used to inform the development of an automated decision system and the decisions or recommendations it generates.

 

(b) Notwithstanding any special or general law to the contrary, there shall be a special legislative commission established pursuant to section 2A of chapter 4 of the General Laws to conduct a study on the use of automated decision systems by executive agencies.

 

The commission shall consist of 11 members: 2 of whom shall be the chairs of the joint committee on advanced information technology, the internet and cybersecurity, who shall serve as co-chairs; 1 of whom shall be appointed by the speaker of the house of representatives; 1 of whom shall be appointed by the president of the senate; 1 of whom shall be the secretary of the executive office of technology services and security, or a designee; 1 of whom shall be the attorney general or a designee; 1 of whom shall be the executive director of the American Civil Liberties Union of Massachusetts or a designee; 2 of whom shall be appointed by the governor and shall work at academic institutions in the Commonwealth in the field of: (i) artificial intelligence and machine learning; (ii) data science and information policy; (iii) the social implications of artificial intelligence and technology; or (iv) technology and the law; 1 of whom shall be a member of the Massachusetts High Technology Council; and 1 of whom shall be a member of the Massachusetts Technology Collaborative.

 

(c) The commission shall study the use of automated decision systems by executive agencies and make recommendations to the legislature regarding appropriate regulations, limits, standards, and safeguards. The commission shall:

 

(i) survey the current use of automated decision systems by executive agencies and the purposes for which such systems are used, including but not limited to:

 

(A) the principles, policies, and guidelines adopted by executive agencies to inform the procurement, evaluation, and use of automated decision systems, and the procedures by which such principles, policies, and guidelines are adopted;

 

(B) the training executive agencies provide to individuals using automated decision systems, and the procedures for enforcing the principles, policies, and guidelines regarding their use;

 

(C) the manner by which executive agencies validate and test the automated decision systems they use, and the manner by which they evaluate those systems on an ongoing basis, specifying the training data, input data, systems analysis, studies, vendor or community engagement, third parties, or other methods used in such validation, testing, and evaluation;

 

(D) the manner and extent to which executive agencies make the automated decision systems they use available to external review, and any existing policies, laws, procedures, or guidelines that may limit external access to data or technical information that is necessary for audits, evaluation, or validation of such systems; and

 

(E) procedures and policies in place to protect the due process rights of individuals directly affected by the use of automated decision systems;

 

(ii) consult with experts in the fields of machine learning, algorithmic bias, algorithmic auditing, and civil and human rights;

 

(iii) examine research related to the use of automated decision systems that directly or indirectly result in disparate outcomes for individuals or communities based on an identified group characteristic;

 

(iv) conduct a survey of technical, legal, or policy controls to improve the just and equitable use of automated decision systems and mitigate any disparate impacts deriving from their use, including best practices, policy tools, laws, and regulations developed through research and academia or proposed or implemented in other states and jurisdictions;

 

(v) examine matters related to data sources, data sharing agreements, data security provisions, compliance with data protection laws and regulations, and all other issues related to how data is protected, used, and shared by executive agencies using automated decision systems;

 

(vi) examine any other opportunities and risks associated with the use of automated decision systems;

 

(vii) evaluate evidence-based best practices for the use of automated decision systems;

 

(viii) make recommendations for regulatory or legislative action, if any;

 

(ix) make recommendations about if and how existing state laws, regulations, programs, policies, and practices related to the use of automated decision systems should be amended to promote racial and economic justice, equity, fairness, accountability, and transparency; and

 

(x) make recommendations for the development and implementation of policies and procedures that may be used by the state for the following purposes:

 

(A) to allow a person affected by a rule, policy, or action made by, or with the assistance of, an automated decision system, to request and receive an explanation of such rule, policy, or action and the basis therefor;

 

(B) to determine whether an automated decision system disproportionately or unfairly impacts a person or group based on an identified group characteristic;

 

(C) to determine prior to or during the procurement or acquisition process whether a proposed agency automated decision system is likely to disproportionately or unfairly impact a person or group based on an identified group characteristic; and

 

(D) to address instances in which a person or group is harmed by an agency automated decision system if any such system is found to disproportionately impact a person or group on the basis of an identified group characteristic.

 

(d) The commission shall file its report and recommendations, including any proposed legislation, with the governor, the clerks of the house of representatives and the senate, and the joint committee on advanced information technology, the internet and cybersecurity on or before December 31, 2024.”


Additional co-sponsor(s) added to Amendment #485 to H4600

Automated Decision-Making

Representatives:

Kay Khan

David Paul Linsky

Natalie M. Higgins

Lindsay N. Sabadosa

Andres X. Vargas

Mike Connolly

Steven Owens

Marjorie C. Decker

Edward R. Philips

Samantha Montaño

David Henry Argosky LeBoeuf

James C. Arena-DeRosa

Erika Uyterhoeven

Tricia Farley-Bouvier