Does Big Brother have a conscience? Ethical regulations in the
digital age

27 October 2018

Originally published in Business Brief, November 2018

Risk. One of the most frequently used words in the modern commercial world. We have risk registers, risk appetites, cyber risk, risk matrices, Risk (the board game – Google it) and soon, a National Risk Assessment. Our lives are dominated by decisions calculated to achieve the best outcome (or avoid the worst).

Regulators have for some years favoured a risk-based approach to regulation. In a world where economic, cultural and political agendas vary enormously, it makes sense to utilise such a system. Regulation which enables sufficient flexibility to accommodate diversity of approach, but incorporates commonality of purpose, is more likely to lead to widespread adoption and thus achieve its goals.

Flexibility does however bring uncertainty. How do you know whether your assessment of the risks of a transaction will be viewed in the same light some months (or years) later by a regulator? Hindsight and the application of modern standards to historic actions are both tricky rocks to circumnavigate during an investigation; the contemporaneous pressures are often long forgotten. Documenting decisions and evidencing thought processes are both essential steps to take to justify your position.

Now add an extra dimension to the process – ethical implications. From ESG, impact investing and Guernsey’s Green Fund to cardboard straws and exporting our waste, corporate practices are undergoing a radical (and necessary) change. Some areas of regulation have undergone or are facing a similarly radical overhaul, leading to calls for ethical factors to be formally introduced as part of the decision-making process. Technological progress and the proliferation of data are two drivers of this call for change.

It is becoming less acceptable to do something simply because the law does not expressly forbid it. The question “Can I do this?” should be immediately followed by “Should I do this?”. There may be no criminal or civil penalty attaching to a morally dubious practice, but once news circulates, customers tend to vote with their feet (or mice). The Cambridge Analytica scandal showed us that there need be no finding of breach of contract or criminality before a business is terminally damaged by public outrage.

Making ethical considerations central to development in the technology space, and a core factor before our data is processed, is fine on one level. But who decides what those ethical considerations should look like, and who polices their application? The UK Government recently issued a public statement calling for tech giants to play a greater part in removing “offensive” material from their sites. Do we really want Google, Instagram or Facebook employees imposing their versions of morality on us?

The problem with ethical guidelines is that our views vary enormously. Technology solutions face the same problem. An algorithm can detect and remove material from a website far more quickly and consistently than a human operator, but as several cases from the USA have already shown, algorithms are still subject to bias and flaws, just as we are. They are, after all, the mechanical personification of their programmers. How is the application of such technology to be monitored once it is installed, other than through the courts?

John Hancock, the US insurance business, recently confirmed that it would only offer its policies to customers who committed to being monitored by a wearable device (such as a Fitbit). Encouraging exercise is the stated purpose, but there are also somewhat sinister implications: what happens if the data reveals something that leads to an increase in your premium, based on a judgment of your eating habits? Will your health data later be used to deny you (or your children) cover?

The 40th International Conference of Data Protection and Privacy Commissioners takes place shortly and has a heavy focus on ethics and the future. It may be that the most appropriate route to incorporating ethical considerations into regulation is through the regulation of data. After all, data is the thread which runs through all of our daily lives; regulation of data sits as an “umbrella” across all industries.

Even though commonality in the area of ethics is difficult to achieve, we should all be prepared to engage and help set the boundaries. If there are none, our cultural and moral fabric is threatened. That is one thing that cannot be replaced by an algorithm.
