Artificial intelligence: Protecting personal data

“A year spent in artificial intelligence is enough to make you believe in God,” said Alan Perlis, the computer scientist.

Although artificial intelligence is bringing huge benefits to mankind, it comes at a cost. As it expands access to information, vast amounts of data from different sources are being collected in the systems of various organizations. Some companies, for example, create new information by feeding consumer data into their AI-operated algorithms, unbeknownst to consumers. While there are many positive aspects to AI, the downside is a significant privacy deficit.

Indeed, in a recent report, the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative noted that, as AI evolves, it “magnifies the ability to use personal information in ways that can intrude on privacy interests by raising analysis of personal information to new levels of power and speed”. Digital technology can be used to identify, and then monitor, particular individuals across various devices, regardless of whether they are at home, at work, or elsewhere. Even when personal data is anonymized and integrated into a larger data framework, AI is reportedly able to de-anonymize it by drawing inferences from data gathered on other devices.

In 2013, the Organization for Economic Co-operation and Development, recognizing the problem, updated its information privacy principles, first published in 1980. Its principles are designed to minimize the amount of information an organization holds about individuals, and to ensure that information is handled in the way the individual expects. Although the principles have been recognized in many jurisdictions, they are not always observed.

The OECD’s principles indicate, firstly, that the collection of personal information should be confined to what is necessary, that it should be gathered by lawful means, and that, wherever appropriate, the consent of the individual concerned should be sought for its use. Secondly, the purpose for which personal information is being collected should be indicated to the individual at the time of collection. And, thirdly, the use of personal information should be limited to the purpose for which it was collected, unless, of course, consent was obtained or there was a legal basis for doing otherwise.

Although laudable, the OECD’s principles are being increasingly challenged by AI, and jurisdictions are having to tighten their procedures accordingly. In 2018, for example, the European Union adopted its General Data Protection Regulation, which raised the standards regulating an individual’s right to their own information and formulated a modernized data protection regime. In consequence, companies must ensure that their data collection and user policies are compliant with essential privacy standards, or face consequences. To give the standards teeth, the GDPR provides that, if an organization fails to comply with the privacy requirements, it will face a fine of up to 20 million euros ($22 million), or 4 percent of its global annual turnover, whichever is higher, and this will inevitably focus minds.

When the Personal Data (Privacy) Ordinance (Cap.486) was enacted in Hong Kong in 1995, regard was had to the OECD principles. Things, however, have since moved on, not least concerning AI, and the law does not fully reflect, for example, the considerations underpinning the EU’s GDPR. Indeed, there have, in various jurisdictions, including Hong Kong, been calls for an AI bill of rights, which would uphold personal privacy through constitutional safeguards. Enhanced data protection policies are now being prioritized around the world, including in Hong Kong and on the Chinese mainland, albeit in different ways.

On Aug 18, the privacy commissioner, Ada Chung Lai-ling, announced new AI guidelines, reflecting the widespread use of digital technology in the city. The guidelines seek to provide practical assistance for organizations needing to manage their own AI systems, and have been inspired by global best practice models. They will hopefully help to ensure that AI is used in a professional way by its operators, and one that respects user privacy.

In promulgating the practice guidelines, Chung has clearly been influenced not only by international and local developments, but also by the city’s wider responsibilities, notably in southern China. Having regard to the outline development plan for the Guangdong-Hong Kong-Macao Greater Bay Area, she noted that “the healthy development and use of AI can help Hong Kong exploit its advantages as a regional data hub, as well as empower Hong Kong to become an innovation and technology hub and a world-class smart city”. She has, therefore, an eye on the bigger picture, while also recognizing that AI usage should not be divorced from basic privacy protections.

Indeed, Chung’s guidelines contain a series of ethical principles, drawn from global paradigms, which should be applied whenever AI is deployed. Companies should be transparent and responsible in their AI usage, and personal data has to be maintained under effective data governance. They are also advised to provide appropriate human oversight in the operation of AI, without which standards would fall away. Cyberattacks, moreover, are a real threat, and, said Chung, it is necessary for organizations to ensure that “AI systems operate reliably, can handle errors and are protected against attacks”.

However, in terms of enforcement, Chung said there are no plans at present to legislate for new AI regulations, as there is no urgent need. Although there have been suggestions that legislation is necessary, other jurisdictions have also relied, at least to start with, upon voluntary arrangements. On July 30, 2020, for example, the United Kingdom’s Information Commissioner’s Office published its AI guidance, which contains a framework, albeit non-mandatory, for auditing AI systems for compliance with data protection obligations. In other words, there is no AI-specific legislation in the UK to protect individual rights, although pressure for it is building.

By contrast, on Aug 20, the National People’s Congress adopted the Personal Information Protection Law of the People’s Republic of China, which will take effect on Nov 1. As with the EU’s GDPR, it creates a legal framework to ensure data privacy, and regulates the collection, storage, usage and sharing of personal information. After concerns over user privacy violations, it requires companies to obtain a user’s consent to data collection, to inform users how their data is being used, and to allow them to view their data and request corrections or deletions. In the handling of personal information, there must be a clear and reasonable purpose, and it should be limited to the “minimum scope necessary to achieve the goals of handling” data. If companies violate the law, they will face fines of up to 50 million yuan ($7.7 million), or 5 percent of annual turnover.

Although the final text of China’s new law has yet to be released, companies will be required to establish independent oversight bodies, which is clearly sensible. These bodies will be composed of outside observers, whose task will be to monitor compliance with the data protection rules, and such oversight would also be of benefit in Hong Kong. The new requirement that companies should publish reports informing the public of the actions they have taken to protect data privacy has much to commend it, not least because it keeps them on their toes and promotes transparency.

Only time will tell if voluntary arrangements prove effective in Hong Kong, and close monitoring will be required. Although the AI guidelines are not mandatory, a self-certification scheme could assist, as this would enable companies to indicate their willingness to comply. In any event, the companies using personal data need to be actively encouraged to identify the risks inherent in their AI systems, and advised how to minimize them.

In their handling of AI systems, organizations will, hopefully, have full regard to Chung’s guidelines, and observe best practice. This, however, is by no means a given, and, for some companies, profit will always trump prudence. If, therefore, a “softly, softly” approach is not efficacious, a big stick will be unavoidable.       

The author is a senior counsel and professor of law, and was previously the director of public prosecutions of the Hong Kong SAR.

The views do not necessarily reflect those of China Daily.