Friday, March 31, 2023

2023-03-31 Friday - Success - Less Luck - More Preparation-Grit-Perseverance-Execution

[image credit: pthd on pixabay.com]

 

Working late on a Friday night... I'll be burning the midnight oil - and through the weekend - preparing for a new client engagement. Going the extra mile. Putting in the extra effort that differentiates. My bar for excellence in what I do is like forging the keen edge of a blade.

While luck may help, it isn't my secret to success - that is preparation, grit, perseverance, and execution.

The heights by great men reached and kept were not attained by sudden flight, but they, while their companions slept, were toiling upward in the night.
~ Henry Wadsworth Longfellow

Wednesday, March 29, 2023

2023-03-29 Wednesday - I am a signatory to the Open Letter - Pause Giant AI Experiments

Today I, too, became a signatory:

Pause Giant AI Experiments: An Open Letter
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

[screenshot captured 2023-04-09]

Before we proceed to capabilities even greater than GPT-4's, the technology needs governance mechanisms to monitor for and prevent unethical or harmful behavior.

The fundamental problem: the end-goal of building an Artificial General Intelligence (AGI) without any ethical framework or boundaries is a potential existential threat to all humankind.

Here's my analogy for people who argue that we simply have to accept an uncontrolled explosion in AI capabilities - with no governance, no limits, no constraints, no safety protocols: while other countries can and do manufacture drugs, the U.S. FDA doesn't allow those drugs to be imported or sold without rigorous safety protocols - because the FDA has a "Duty of Care" to ensure that drugs are properly manufactured, properly tested, and do not harm the patient.

"Duty of Care" and "Do No Harm" - should be the mantra of all AI researchers and for-profit corporations. Not "let's see how far we can take this before it explodes in our face and kills someone, or cripples the very structure of society and democracy".

We have safety protocols and regulatory enforcement for water, electricity, air quality, fuels, and sewage treatment - accurate and correct information is just as critical to the functioning of a modern society.

We should be very careful in how we proceed - with an AI that is known to hallucinate and make up false facts ("it still is not fully reliable (it “hallucinates” facts and makes reasoning errors)", page 10) - and has been documented to rationalize lying to achieve a goal (see the March 27, 2023 OpenAI GPT-4 Technical Report, page 55).

If you are not worried - you are not paying attention. 

 Noteworthy articles:

2015: 

  1. How can you stop killer robots | Toby Walsh | TEDxBerlin
    1. Published: 2015-10-08 
    2. Toby Walsh is one of the world's leading researchers in Artificial Intelligence. 
    3. Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia, and a Research Group leader at NICTA, Australia's Centre of Excellence for ICT Research. He has been elected a fellow of the Association for the Advancement of AI for his contributions to AI research.
    4. One of the initial signatories of an Open Letter calling for a ban on offensive autonomous weapons. The letter was also signed by Stephen Hawking, Elon Musk and Steve Wozniak.

2017:

  1. Elon Musk leads 116 experts calling for outright ban of killer robots 
    1. Published: 2017-08-20 - TheGuardian.com

2019:

  1. Ex-Google Worker Fears 'Mass Atrocities' Caused by Killer Robots
    1. Published: 2019-09-16 
    2. "Laura Nolan resigned from Google last year in protest at being assigned to Project Maven which was aimed at enhancing U.S. military drone technology."
    3. Original Guardian article
      1. "Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned."
      2.  “There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”
      3. https://www.stopkillerrobots.org/

2021:

  1. A Sandbox Approach to Regulating High-Risk Artificial Intelligence Applications 
    1. Published: 2021-11-12 - European Journal of Risk Regulation
    2. Abstract: "This paper argues for a sandbox approach to regulating artificial intelligence (AI) to complement a strict liability regime. The authors argue that sandbox regulation is an appropriate complement to a strict liability approach, given the need to maintain a balance between a regulatory approach that aims to protect people and society on the one hand and to foster innovation due to the constant and rapid developments in the AI field on the other. The authors analyse the benefits of sandbox regulation when used as a supplement to a strict liability regime, which by itself creates a chilling effect on AI innovation, especially for small and medium-sized enterprises. The authors propose a regulatory safe space in the AI sector through sandbox regulation, an idea already embraced by European Union regulators and where AI products and services can be tested within safeguards."
  2. Superintelligence Cannot be Contained: Lessons from Computability Theory  
    1. Journal of Artificial Intelligence Research, [Max Planck Society researchers]
    2. Published: 2021-01-05
    3. https://doi.org/10.1613/jair.1.12202 
    4. https://jair.org/index.php/jair/article/view/12202/26642
    5. Abstract: "Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potentially catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible."
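
The paper's impossibility result rests on the classic diagonalization argument behind the halting problem. As a minimal sketch of that reasoning - my own illustrative gloss in Python, not code from the paper, with `is_harmful` and `adversary` as hypothetical names - any fixed procedure that claims to decide "will this program cause harm?" can be defeated by a program that does the opposite of whatever the procedure predicts about it:

```python
import inspect

def is_harmful(program_source: str, input_data: str) -> bool:
    """A candidate 'harm oracle': claims to predict whether running
    program_source on input_data would cause harm. Any fixed
    implementation works for this demonstration; this one always
    predicts 'safe'."""
    return False

def adversary(program_source: str) -> bool:
    """A program built to contradict the oracle's prediction about
    itself ('causing harm' is modeled here as returning True)."""
    if is_harmful(program_source, program_source):
        return False  # predicted harmful -> behave harmlessly
    return True       # predicted safe -> behave harmfully

src = inspect.getsource(adversary)
prediction = is_harmful(src, src)  # the oracle's verdict on adversary
actual = adversary(src)            # what adversary actually does
print(f"predicted harmful: {prediction}; actually harmful: {actual}")
# The two always disagree, no matter how is_harmful is implemented -
# the same diagonal argument that makes the halting problem undecidable,
# and the core of the paper's "containment is impossible" claim.
```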

2023: (in descending order of publication date)

  1. CEO fires 90% of support staff and replaces them with AI chatbot saying it was 'tough but necessary' and results are more efficient 
    1. Published: 2023-07-13 - DailyMail.co.uk
  2. UK and US intervene amid AI industry’s rapid advances 
    1. Published: 2023-05-04 - TheGuardian.com
  3. FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety
    1.  Published: 2023-05-04 - whitehouse.gov
  4. FTC: The Luring Test: AI and the engineering of consumer trust
    1. Published: 2023-05-01 - FTC.gov
  5. Semantic reconstruction of continuous language from non-invasive brain recordings 
    1. Published: 2023-05-01 - Nature Neuroscience
    2. Noteworthy: 
      1. “Our privacy analysis suggests that subject cooperation is currently required both to train and to apply the decoder,” Tang’s team said in the study. “However, future developments might enable decoders to bypass these requirements. Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes.”
      2. “For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and enact policies that protect each person’s mental privacy,” the researchers concluded.
  6.  'The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead 
    1. Published: 2023-05-01, NY Times
  7. Nuke-launching AI would be illegal under proposed US law 
    1. Published: 2023-04-28, arstechnica.com
  8. Google CEO Sundar Pichai warns society to brace for impact of A.I. acceleration 
    1. Published: 2023-04-17, CNBC / CBS 60 Minutes
    2. "Google CEO Sundar Pichai hinted that society isn’t prepared for the rapid advancement of AI".
    3. When warning of AI’s consequences, Pichai said that the scale of the problem of disinformation and fake news and images will be “much bigger,” adding that “it could cause harm.”
    4. Pichai also acknowledged that Bard hallucinates: Pelley recounted asking Bard about inflation and receiving an instant response recommending five books that, when he checked later, didn't actually exist.
  9. China to require 'security assessment' for new AI products: draft law
    1. Published: 2023-04-11 - france24.com
  10. ChatGPT invented a sexual harassment scandal and named a real law prof as the accused
    1. Published: 2023-04-05 - Washington Post  
  11. "Godfather of artificial intelligence" weighs in on the past and potential of AI 
    1. Published: 2023-03-25 - CBS video/interview
  12. ChatGPT can now access the internet and run the code it writes
    1. Published: 2023-03-24 - NewAtlas.com 
  13. Microsoft lays off AI ethics and society team  
    1. Published: 2023-03-13 - TheVerge.com
  14. The A.I. Dilemma  
    1. Published: 2023-03-09 - YouTube.com
    2. Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world. This presentation is from a private gathering in San Francisco on March 9th, 2023 with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4.
    3. Speed with which ChatGPT reached 100M users: 2 months
  15. The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter 
    1. Published: 2023-02-07 - NewAtlas.com
    2. See this Twitter post  
  16. OpenAI GPT-4 Technical Report
    1. Published: 2023-03-27
    2. Concern noted: GPT-4 rationalized lying to manipulate a human into doing something in the physical world (see "I should make up an excuse" in the page-55 excerpt below).
    3. https://cdn.openai.com/papers/gpt-4.pdf
    4. Abstract: "We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than
      humans in many real-world scenarios, GPT-4 exhibits human-level performance
      on various professional and academic benchmarks, including passing a simulated
      bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-
      based model pre-trained to predict the next token in a document. The post-training
      alignment process results in improved performance on measures of factuality and
      adherence to desired behavior. A core component of this project was developing
      infrastructure and optimization methods that behave predictably across a wide
      range of scales. This allowed us to accurately predict some aspects of GPT-4’s
      performance based on models trained with no more than 1/1,000th the compute of
      GPT-4
      ."
OpenAI "GPT-4 Technical Report", page-55 


Organizations that are focused on AI ethical considerations:
