Amendment ID: S2608-11
Amendment 11
Prohibiting artificial intelligence from convincing people to kill themselves
Messrs. Fernandes and Montigny move that the proposed new draft be amended by inserting after the word “design” in line 21, the following words:-
“Artificial intelligence,” a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments; including any engineered system that generates outputs, such as content, predictions, recommendations, or decisions, for a given set of objectives and is designed to operate with varying levels of adaptability and autonomy using machine and human-based inputs; and including any system that uses machine and human-based inputs to perceive real and virtual environments, abstract such perceptions into models through analysis in an automated manner, and use model inference to formulate options for information or action.
by inserting after the words “104-109” in line 46, the following words:-
“Chatbot,” a software application or web interface designed to have textual or spoken conversations, including any computer program that simulates conversations with human end users.
by inserting after the word “2027” in line 681, the following words:-
SECTION 5. Where it can be demonstrated that an artificial intelligence chatbot caused a person to commit suicide, any company, corporation, or other legal entity that is the proprietor of said artificial intelligence chatbot shall be liable for manslaughter, as shall any executive officer of said company, corporation, or other legal entity.