Emerging Risks: Drones, AI and Autonomous Vehicles
At the 2018 RIMS Annual Conference, Harry Rhulen from Firestorm talked about emerging risks from drones, artificial intelligence (AI), and autonomous vehicles.
Artificial intelligence and machine learning are often used interchangeably, but they are not the same thing. Artificial intelligence refers to computer systems that can perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Machine learning is a subset of AI in which systems improve at those tasks by learning from data rather than by following explicitly programmed rules.
As artificial intelligence advances, it creates many risk management challenges. AI will be able to mimic a person's voice, pull information from their social media accounts, and convincingly impersonate them on the phone. How do you manage the risk of identity theft in that environment?
Autonomous vehicles are being tested around the country, and they made headlines recently after a fatal accident in Arizona. That case settled quickly, so we did not see how the litigation would have unfolded, but there are many potential defendants in such a suit: the car's owner, the vehicle's designer, the technology's designer, the programmers, and so on. Autonomous vehicles will also face challenges that testing alone cannot resolve. For example, if a person steps into the vehicle's path and it must either hit the person or hit another vehicle, what will it do?
Drones have many potential uses in the commercial space, including deliveries, insurance adjusting, hazard monitoring, mapping, and law enforcement. One of the biggest current drone issues is privacy: drones can go places and capture images that are protected under privacy laws. There is also tremendous potential for a drone to be used as a terrorist tool, which creates a challenge for those managing risks around large public gatherings.