#400 – KILLER AI AND RISK BASED, DECISION MAKING – GREG HUTCHINS PE CERM

What happens if an autonomous AI robot has preemptive authority to use deadly force to ensure its own safety or the public’s safety?  We are not far from the day when autonomous robots will have risk based, problem solving and decision making capabilities, and even statutory authority.

Last week, the San Francisco Board of Supervisors gave the San Francisco Police Department authority to use killer robots.  The vote was 8 to 3 to approve robo-cops.  San Francisco is one of the most liberal cities in the United States.  Oakland and other cities are considering the same.

What’s going on?

It’s all about personal safety and risk.  San Francisco has a serious crime problem.  I’ve lived in the city for many years.  I went to Berkeley.  The city is full of homelessness and violence, and residents are frankly scared for their safety.

It’s all about business risk.  Corporate headquarters are moving out.

(Source: SFGATE, 2022.)

The central business districts around SoMa and Union Square are ghost towns.  Employees fear for their safety.

The problem?

It’s all about public safety decision making.  San Francisco and many large US cities are in deep trouble in terms of crime.  Murders in Portland, Oregon, have doubled in the last three years, and similar trends are playing out in many US cities.

People also no longer want to be police officers in large metro areas.  Why?  Lack of respect.  Lack of remuneration.  It becomes a personal risk-benefit decision for the potential officer.

The AI solution or problem?

Machine public safety.  Today, AI is largely used to make purchase suggestions, such as on Amazon, and to interpret medical images.  In these roles, AI is largely supervised and subject to human intervention.

But what happens when AI has the capability and authority to make independent decisions about your healthcare, financial portfolio, hiring, and public safety?  And without any human supervision, intervention, or oversight?  Or without any way to ask for a second AI or human opinion?  Or without the option to escalate the decision to a human or another machine?
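To make the distinction concrete, here is a minimal sketch, in Python, of what a human-in-the-loop escalation gate looks like.  The names (AIDecision, resolve, the 0.9 confidence floor) are my own illustration, not any vendor’s API.  The fully autonomous scenario described above is simply this gate removed.

    from dataclasses import dataclass

    @dataclass
    class AIDecision:
        """A hypothetical AI recommendation with a model confidence score."""
        action: str
        confidence: float  # 0.0 .. 1.0

    def resolve(decision: AIDecision, confidence_floor: float = 0.9) -> str:
        """Human-in-the-loop gate: a low-confidence decision is escalated
        to a human (or a second model) instead of being executed."""
        if decision.confidence < confidence_floor:
            return "escalate_to_human"
        return decision.action

    # Example: a 0.72-confidence decision is escalated, not executed.
    print(resolve(AIDecision(action="deny_loan", confidence=0.72)))

Remove the confidence check and the escalation path, and the machine’s recommendation becomes the final word.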

The Logical or Illogical Next Step?

Today, we have robo killer cops with adult supervision.  In San Francisco, a killer robot can be authorized only by a senior police officer, and only when public or officer safety is at risk.
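Stated as a rule, the current policy is a two-part condition.  A minimal sketch follows; the function name and parameters are mine, not the ordinance’s wording.

    def may_deploy_lethal_robot(authorized_by_senior_officer: bool,
                                imminent_risk_to_public_or_officers: bool) -> bool:
        """Sketch of the San Francisco-style rule: lethal robot use requires
        BOTH a senior officer's sign-off AND an imminent safety risk.
        Full autonomy would amount to dropping the first condition."""
        return authorized_by_senior_officer and imminent_risk_to_public_or_officers

    # No sign-off, no deployment, regardless of the risk assessment.
    assert may_deploy_lethal_robot(False, True) is False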

Now, let’s give the robot more smarts, such as autonomous AI decision making.  What is the next step?  Fully autonomous robo-cops.  It’s not far-fetched to imagine a robotic armed police officer forestalling a robbery or intervening in a live felony.

  • What do you think of this?
  • What are possible solutions?
  • What are possible rules of engagement between human and machine?

 
