This is the last of three parts examining privacy and robotics at home and in public in the U.S. In Part 1, we discussed the privacy implications of robots at home and in public. In Part 2, we applied federal and state privacy laws to robots. In this article, we look at privacy best practices for the robotics industry.
Privacy Best Practices
To navigate the complex landscape of privacy laws and ensure compliance, organizations involved in AI-powered robotics should adopt best practices. Doing so requires an understanding of the types of data being collected and the purposes for which they are used.
To do this, we need to consider the difference between an AI-powered robot’s tasks and its purpose. In this context, a task is the immediate job being performed by a robot, while its purpose is the broader use-case or mission it is fulfilling. For example, a task for a robot might be carrying a heavy shopping bag for an elderly person, whereas the purpose for the same robot might be independent living assistance for an Alzheimer’s patient. The robot’s tasks might require less use and retention of data, but its purpose may require more.
Data Minimization
Collection and use of personal information should be limited to what is necessary for the intended task and purpose. The types of data collected will vary with the sensors an AI-powered robot has, and the types and sensitivity of those sensors should be limited to what is needed to fulfill the robot's tasks and purposes. For instance, a household support robot might not need ultrasensitive hearing capable of picking up conversations at unusual distances, but an AI-powered security robot might.
If data from a particular sensor is not needed at a given time to accomplish an AI-powered robot's task or purpose, the sensor should be turned off or its sensitivity turned down. Understandably, whether that is possible will depend on the use-case and operating environment. A humanoid robot in the home that is charging or awaiting a task might not need to visually monitor the homeowner, but it probably would need to listen to its environment for an instruction in order to fulfill its purpose. On the other hand, an AI-powered security robot would be unable to fulfill its purpose if its sensors were turned off whenever no security threat was being observed.
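For engineering teams, the state-dependent sensor gating described above can be expressed as a simple least-privilege policy table. The sketch below is purely illustrative: the states, sensor names, and modes are hypothetical, not drawn from any real robot platform.

```python
from dataclasses import dataclass

@dataclass
class SensorPolicy:
    camera: str      # "on", "off", or "reduced"
    microphone: str  # "on", "off", or "reduced"

# Map each operating state to the minimum sensor configuration it needs.
# While charging or idle, the camera stays off; the microphone stays on
# (or reduced) only so the robot can hear an instruction.
POLICIES = {
    "charging":      SensorPolicy(camera="off", microphone="reduced"),
    "awaiting_task": SensorPolicy(camera="off", microphone="on"),
    "active_task":   SensorPolicy(camera="on",  microphone="on"),
}

def apply_policy(state: str) -> SensorPolicy:
    """Return the least-privileged sensor settings for the current state."""
    return POLICIES[state]

print(apply_policy("charging"))
```

The design choice here is that data minimization is enforced structurally, by state, rather than left to per-feature decisions at collection time.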
Use and Retention Limits
Use and retention of data collected by an AI-powered robot should be limited to performing its tasks and purposes. Data should be used only for the task or purpose for which it was collected. For example, if a robot collects data that could be used to identify a person but identification is not necessary to perform its task or purpose, the data should not be used to identify the person, and (if no change in task or purpose is foreseen) the company should commit in its privacy policy not to attempt or permit such identification.
Data should not be retained if it is not required. If an AI-powered robot in public listens to conversations that are not necessary for performing its task or purpose, the data should be discarded. On the other hand, an AI-powered robot being used to support an Alzheimer’s patient might need to retain data collected from all conversations in a home to be able to perform its purpose.
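The discard-versus-retain decision above can be sketched as a purpose-based retention filter. This is a minimal illustration under stated assumptions: the keyword-matching `purpose_relevant` check is a hypothetical placeholder for whatever relevance test a real system would use, and the sample transcripts are invented.

```python
def purpose_relevant(transcript: str, purpose_keywords: set[str]) -> bool:
    # Hypothetical relevance test: a real system would use something
    # more robust than keyword matching.
    return any(word in transcript.lower() for word in purpose_keywords)

def retain(transcripts: list[str], purpose_keywords: set[str]) -> list[str]:
    """Keep only records needed for the stated purpose; drop the rest."""
    return [t for t in transcripts if purpose_relevant(t, purpose_keywords)]

kept = retain(
    ["please remind me to take my medication", "chat about the weather"],
    {"medication", "remind"},
)
print(kept)  # only the purpose-relevant transcript is retained
```

For the Alzheimer's-support example in the text, the keyword set would simply be broad enough that all household conversations qualify as purpose-relevant.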
Transparency
AI-powered robots will have a variety of sensors and data collection capabilities. Examples of different types of sensors include cameras, microphones, thermal cameras and lidar/radar. Other technologies such as GPS, Wi-Fi, Bluetooth and near-field communication (NFC) may be present as well, each of which is capable of collecting data.
Producers of AI-powered robots should develop user-friendly methods for their robots to communicate to people in their vicinity what types of information their robots are capable of collecting. This could be accomplished in a variety of ways, including verbally or visually, such as through an app or by displaying information on the robot.
Consent
Consent to collection, use, retention and disclosure of personal information is a foundational privacy protection, particularly in connection with sensitive personal information. When possible, some form of consent should be obtained from individuals when their personal information is being collected. In many situations, obtaining consent would be infeasible, such as when an AI-powered delivery robot is navigating to its destination visually and with GPS. In other contexts, however, it may be possible. For example, if an AI-powered robot has a conversation with a person, the robot could offer to inform the person about its data-use practices and obtain the person's consent.
Data Security
As with all technologies that collect and use personal information, use of reasonable cybersecurity measures is a requirement under virtually every privacy law, as well as (according to the FTC) under the Federal Trade Commission Act. The correct combination of controls will, as always, depend on the context. But strong fundamentals like data encryption, physical security, user authentication, access controls, and implementation of regular security updates and patches to software are good starting points.
AI Risk Management
AI risk management frameworks provide a structured approach to identify, assess and mitigate risks associated with the development and deployment of AI systems. They include privacy considerations but also address many other AI-related risks. AI-powered robotics producers should incorporate an AI risk management framework into their development process. The National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, ISO/IEC 42001, and ISO/IEC 23894 are just a few examples of well-respected frameworks that can be adopted.
Conclusion
The rapid advancement of AI-powered robotics presents both opportunities and challenges. While these technologies can drive significant progress, they also raise critical privacy concerns. By understanding the current state of the law and the industry, anticipating future developments, and adhering to best practices for compliance, organizations can navigate these challenges effectively and ensure the responsible use of AI-powered robotics.
Andrew Baer is the Chair of, and Christopher Dodson is a partner in, Cozen O’Connor’s Technology, Privacy and Data Security Group.