The Robots are Coming: Navigating Privacy Challenges in AI-Powered Robotics in Public Settings and Homes – Part 2

This is the second of three parts examining privacy and robotics in homes and public settings in the U.S. Part 1 discussed the privacy implications of robots in those settings; this article applies federal and state privacy laws to them.

Federal Law

In the United States, privacy laws are a patchwork of federal and state regulations. At the federal level, apart from sector-specific laws, such as laws applicable to health or financial data, the Federal Trade Commission (FTC) acts as the principal privacy regulator.

Unsurprisingly, the FTC has not issued guidance on robots specifically. But many of its prior staff reports, such as Protecting Consumer Privacy in an Era of Rapid Change (“Protecting Privacy”), are relevant to certain use cases for robots. In Protecting Privacy, the FTC promotes privacy by design, the notion that companies should integrate privacy protections at every stage of product development. Those protections include reasonable data security, collection limits, and sound retention practices. The FTC also recommends providing individuals with notice and choice at a time and in a context relevant to their decisions about their data. Other staff reports, such as Mobile Privacy Disclosures and Internet of Things: Privacy and Security in a Connected World, emphasize similar themes. If the FTC believes a company has engaged in prohibited privacy practices, it can initiate administrative proceedings against the company, impose civil fines, and seek temporary and permanent injunctions from courts, among other remedies.

State Law

State-level comprehensive privacy laws are gaining importance in the U.S. These laws generally apply based on the number of a state's residents about whom a company receives personal information.

None of the comprehensive state privacy laws addresses robotics. But many regulate profiling when artificial intelligence is used, which could be relevant in some cases.

Many states require a company to perform a data protection impact assessment when a heightened privacy risk is present. What qualifies as a heightened risk varies by state, but examples include processing a child's personal information, processing sensitive personal information, selling personal information, and profiling. Depending on the use cases for a robot, a data protection impact assessment may be required.

All states with comprehensive privacy laws provide residents with certain rights in connection with their personal information, such as the right to know what personal information about them is being collected by a company, the right to delete personal information, the right to correct personal information, or the right to opt out of the sale of personal information.

For robots in the home, at first glance, the situation may resemble that of existing connected devices, such as robot vacuum cleaners, video doorbells, or Amazon Alexa and Google Home devices. But with the expanding use cases that robotics provides, AI-powered robots in the home are likely to come with more cameras, microphones, and other sensors collecting data. Some of that data may remain on the robot, but some will likely be sent back to the robot's producer. Producers subject to state comprehensive privacy laws will likely have to account for how to implement personal information rights requests, both for personal information on the robot (if the company can access it, whether as part of routine operations or for tech support) and on the company's remote systems.

The situation could become further complicated if the robot collects the personal information of someone other than its owner, whether another household resident or a guest, which is likely to happen. Those individuals are also empowered under state privacy laws to make personal information rights requests. So producers of AI-powered robots will need to anticipate how to handle data subject rights requests from individuals other than the robot owner whose personal information the robot collects.

Some states, such as Illinois, Texas, and Washington, have specific laws that regulate biometric data. If an AI-powered robot collects biometric information, such as fingerprints or voiceprints, in one of these states, its producer may be required to take additional measures, such as making special disclosures, obtaining informed consent, or complying with biometric data retention requirements. Furthermore, many state comprehensive privacy laws treat biometric data as sensitive information, which can result in additional obligations, such as tighter consent requirements or performing data protection impact assessments.

AI-powered robots in public could pose a similar challenge. U.S. state comprehensive privacy laws typically include exemptions for public information, but those exemptions can be relatively narrow (often limited to government records or information made publicly available by the data subject). They do not generally exclude information simply because it was collected in a public setting. Whether the data collected by an AI-powered robot in public qualifies as personal information is likely to depend on the sensors the robot uses and how the data is used. For example, video of a person that a robot uses in real time solely to navigate its physical environment, and that is neither retained nor used for any other purpose, is unlikely to trigger compliance obligations under state comprehensive privacy laws (either because the information would not constitute personal information or, even if it did, it would not be collected by the robot's provider within the meaning of those laws). But if the same video is used by an AI-powered robot designed to assist memory-impaired individuals, identifying the people the person interacts with so the robot can provide prompts, that probably would entail the collection of personal information. So if an AI-powered robot collects information in a public setting that qualifies as personal information and the robot manufacturer is subject to a state comprehensive privacy law, the manufacturer may need to determine how to handle data subject rights requests from those individuals.

Furthermore, precise geolocation data is often specifically covered by these laws not only as personal information but as sensitive personal information, which typically triggers requirements to obtain opt-in consent and perform a data protection impact assessment. If an AI-powered robot collects personal information along with precise geolocation data, a data protection impact assessment would likely be required under many state comprehensive privacy laws.

The current generation of state comprehensive privacy laws does not include a private right of action for violations, so enforcement falls to each state's attorney general (or, in California, the state's dedicated privacy regulator). Many states impose civil penalties of up to $7,500 per violation, but some penalties, such as in Connecticut and Florida, run as high as $50,000 per violation.

In Part 3, we will look at privacy best practices for the robotics industry.

Andrew Baer is the Chair of, and Christopher Dodson is a partner in, Cozen O’Connor’s Technology, Privacy and Data Security Group.
