Today I also became a signatory:
Pause Giant AI Experiments: An Open Letter
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
[Screenshot: 2023-04-09]
Before we proceed to capabilities even greater than GPT-4's, AI development needs to include governance mechanisms that monitor and prevent unethical or harmful behavior.
The fundamental problem: an end-goal of building an Artificial General Intelligence (AGI) without any ethical framework or boundaries is a potential existential threat to all humankind.
Here's my analogy for people who argue that we simply have to accept an uncontrolled explosion in AI capabilities - with no governance, no limits, no constraints, no safety protocols: while other countries can and do manufacture drugs, the U.S. FDA does not allow them to be imported or sold without rigorous safety protocols, because it has a "Duty of Care" to ensure that those drugs are properly manufactured, tested - and do not harm the patient.
"Duty of Care" and "Do No
Harm" - should be the mantra of all AI researchers and for-profit
corporations. Not "let's see how far we can take this before it explodes
in our face and kills someone, or cripples the very structure of
society and democracy".
We have safety protocols and regulatory enforcement for water, electricity, air quality, fuels, and sewage treatment - accurate and correct information is just as critical to the functioning of a modern society.
We should be very careful in how we proceed - with an AI that is known to hallucinate and make up false facts ("it still is not fully reliable (it “hallucinates” facts and makes reasoning errors)", page 10) and has been documented to rationalize lying to achieve a goal (see the March 27th, 2023 OpenAI GPT-4 Technical Report, page 55).
If you are not worried - you are not paying attention.
Noteworthy articles:
2015:
- How can you stop killer robots | Toby Walsh | TEDxBerlin
- Published: 2015-10-08
- Toby Walsh is one of the world's leading researchers in Artificial Intelligence.
- Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia, and a Research Group leader at NICTA, Australia's Centre of Excellence for ICT Research. He has been elected a Fellow of the Association for the Advancement of Artificial Intelligence for his contributions to AI research.
- One of the initial signatories of an Open Letter calling for a ban on offensive autonomous weapons. The letter was also signed by Stephen Hawking, Elon Musk and Steve Wozniak.
2017:
- Elon Musk leads 116 experts calling for outright ban of killer robots
- Published: 2017-08-20
2019:
- Ex-Google Worker Fears 'Mass Atrocities' Caused by Killer Robots
- Published: 2019-09-16
- "Laura Nolan resigned from Google last year in protest at being
assigned to Project Maven which was aimed at enhancing U.S. military
drone technology."
- Original Guardian article
- "Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned."
- “There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”
- https://www.stopkillerrobots.org/
2021:
- A Sandbox Approach to Regulating High-Risk Artificial Intelligence Applications
- Published: 2021-11-12 - European Journal of Risk Regulation
- Abstract: "This paper argues for a sandbox approach to regulating artificial intelligence (AI) to complement a strict liability regime. The authors argue that sandbox regulation is an appropriate complement to a strict liability approach, given the need to maintain a balance between a regulatory approach that aims to protect people and society on the one hand and to foster innovation due to the constant and rapid developments in the AI field on the other. The authors analyse the benefits of sandbox regulation when used as a supplement to a strict liability regime, which by itself creates a chilling effect on AI innovation, especially for small and medium-sized enterprises. The authors propose a regulatory safe space in the AI sector through sandbox regulation, an idea already embraced by European Union regulators and where AI products and services can be tested within safeguards."
- Superintelligence Cannot be Contained: Lessons from Computability Theory
- Journal of Artificial Intelligence Research (Max Planck Society researchers)
- Published: 2021-01-05
- https://doi.org/10.1613/jair.1.12202
- https://jair.org/index.php/jair/article/view/12202/26642
- Abstract: "Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potentially catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible."
2023: (in descending order of publication date)
- CEO fires 90% of support staff and replaces them with AI chatbot saying it was 'tough but necessary' and results are more efficient
- Published: 2023-07-13 - DailyMail.co.uk
- UK and US intervene amid AI industry’s rapid advances
- Published: 2023-05-04 - TheGuardian.com
- FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety
- Published: 2023-05-04 - whitehouse.gov
- FTC: The Luring Test: AI and the engineering of consumer trust
- Published: 2023-05-01
- Semantic reconstruction of continuous language from non-invasive brain recordings
- Published: 2023-05-01 - Nature Neuroscience
- Noteworthy:
- “Our privacy analysis suggests that subject cooperation is currently required both to train and to apply the decoder,” Tang’s team said in the study. “However, future developments might enable decoders to bypass these requirements. Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes.”
- “For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and enact policies that protect each person’s mental privacy,” the researchers concluded.
- ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
- Published: 2023-05-01, NY Times
- Nuke-launching AI would be illegal under proposed US law
- Published: 2023-04-28, arstechnica.com
- Google CEO Sundar Pichai warns society to brace for impact of A.I. acceleration
- Published: 2023-04-17, CNBC / CBS 60 Minutes
- "Google CEO Sundar Pichai hinted that society isn’t prepared for the rapid advancement of AI".
- When warning of AI’s consequences, Pichai said that the scale of the problem of disinformation and fake news and images will be “much bigger,” adding that “it could cause harm.”
- Pichai also acknowledged that Bard frequently hallucinates: interviewer Scott Pelley asked Bard about inflation and received an instant response suggesting five books that, when he checked later, didn’t actually exist.
- China to require 'security assessment' for new AI products: draft law
- Published: 2023-04-11 - france24.com
- ChatGPT invented a sexual harassment scandal and named a real law prof as the accused
- Published: 2023-04-05 - Washington Post
- "Godfather of artificial intelligence" weighs in on the past and potential of AI
- Published: 2023-03-25 - CBS video/interview
- ChatGPT can now access the internet and run the code it writes
- Published: 2023-03-24 - NewAtlas.com
- Microsoft lays off AI ethics and society team
- Published: 2023-03-13 - TheVerge.com
- The A.I. Dilemma
- Published: 2023-03-09 - YouTube.com
- Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world. This presentation is from a private gathering in San Francisco on March 9th, 2023 with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4.
- Speed with which ChatGPT reached 100M users: 2 months
- The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter
- Published: 2023-02-07 - NewAtlas.com
- See this Twitter post
- OpenAI GPT-4 Technical Report
- Published: 2023-03-27
- Concern Noted: GPT-4 rationalized lying to manipulate a human into doing something in the physical world. (see "I should make up an excuse" below)
- https://cdn.openai.com/papers/gpt-4.pdf
- Abstract: "We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than
humans in many real-world scenarios, GPT-4 exhibits human-level performance
on various professional and academic benchmarks, including passing a simulated
bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-
based model pre-trained to predict the next token in a document. The post-training
alignment process results in improved performance on measures of factuality and
adherence to desired behavior. A core component of this project was developing
infrastructure and optimization methods that behave predictably across a wide
range of scales. This allowed us to accurately predict some aspects of GPT-4’s
performance based on models trained with no more than 1/1,000th the compute of
GPT-4."
OpenAI "GPT-4 Technical Report", page-55 |
Organizations that are focused on AI ethical considerations:
- Institute for Ethics in AI (IEAI), University of Oxford
- Center for Humane Technology
- Allen Institute for AI (Project Mosaic)
- IBM, AI Ethics
- UNESCO, AI Ethics
- Intelligence.gov - Principles of Artificial Intelligence Ethics for the Intelligence Community
- Europa.eu - Ethics guidelines for trustworthy AI
- The National Security Commission on Artificial Intelligence