Digital Ethics 1: Balancing the risks and rewards
By Subra Suppiah & Florin Rotar March 14, 2016
- No longer adequate for organisations to fulfil compliance and security obligations
- They must move towards implementing a digital ethics framework
From Internet banking to purchasing shoes or tickets online, there is a trust equation between the modern consumer and the digital businesses they interact with.
The emergence of digital technologies and the Internet of Things (IoT) is enabling organisations to collect, store and analyse unprecedented amounts of customer data.
The increasing mobile penetration rate and IoT expansion are clear indicators that the way we do business is fast changing. Today, we can easily make purchases, pay bills and make bookings from just about anywhere.
That amounts to huge volumes of data collected from customers by large companies.
However, even with all the automation and insights that digital enables, there is still the expectation among consumers that their personal information will be safeguarded and used appropriately.
For example, you may be comfortable with your car insurance provider tracking your driving habits and rewarding you as a safe driver, but would you feel comfortable with it automatically calling the police if you have an accident? Or arranging for a tow truck? Or moving the next meeting in your calendar because you’re delayed?
Many of us would consider it a breach of ethics for our insurance company to use its access to data to take these actions without our prior consent, even if digital technologies make it possible to do so.
To maintain the trust of customers, it is no longer adequate for organisations to fulfil compliance and security obligations; they must move towards implementing a digital ethics framework.
Ideally, this framework should be developed to define not only how a company innovates and does business with its customers, but also how employee information is used and managed.
What is digital ethics?
In our hyper-connected world, an explosion of data is combining with pattern recognition, machine learning, smart algorithms, and other intelligent software to underpin a new level of cognitive computing.
More than ever, machines are capable of imitating human thinking and decision-making across a raft of workflows, which presents exciting opportunities for companies to drive highly personalised customer experiences, as well as unprecedented productivity, efficiency, and innovation.
However, along with the benefits of this increased automation comes a greater risk for ethics to be compromised and human trust to be broken.
According to Gartner, digital ethics is the system of values and principles a company may embrace when conducting digital interactions between businesses, people and things (Gartner, Digital Humanism Makes People Better, Not Technology Better, 2015).
Digital ethics sits at the nexus of what is legally required; what can be made possible by digital technology; and what is morally desirable.
As digital ethics is not mandated by law, it is largely up to each individual organisation to set its own innovation parameters and define how its customer and employee data will be used.
A more humanist approach is typically needed for digital ethics than for other governance frameworks. There is no one-size-fits-all approach, so the key for organisations is to assess the ethical implications of data usage from the perspective of their customers.
Each organisation must consider its own customers and what they would find to be an acceptable use of data and technology to deliver products and services. These expectations will differ from industry to industry, region to region, and country to country.
Expectations will also likely differ for online interactions versus human interactions. For example, would a customer be comfortable with store associates and bank clerks recommending products and services in the real world based on behaviours that cookies have tracked online?
Conversely, would a customer be comfortable receiving a bad health prognosis digitally, rather than from a human?
Failure to appropriately assess data ethics considerations will put the reputations of businesses at significant risk in the digital economy.
Similarly, in our increasingly hyper-connected and data-dense society, the ethical usage and management of employee data will be a priority for companies looking to attract and retain the best digital talent.
Embracing digital ethics
A digital ethics framework should facilitate consideration of what is morally desirable for the customer, not just what is possible with technology or legally permissible.
For most organisations, the first step in the process is building awareness of what digital ethics is, and educating stakeholders and employees that it is not the same as compliance, privacy, or security.
The next step is to establish the digital ethics parameters for the organisation, based on what is acceptable to its own customers.
Two main dimensions need to be considered: risk and cultural acceptance. Consider not only whether ideas for translating data insights into new and improved products and services are possible, but also whether they are the ‘right’ thing to do from the perspective of the customer.
Effectively balancing this risk and reward in digital decision-making is critical to maintaining a high level of engagement and trust with customers.
To achieve this, a critical new job role, the digital humanist, is beginning to emerge. This role will be instrumental in advocating for customer and employee expectations in digital innovation projects.
This advocacy will maintain the ethics of new technology innovations while also improving user experience and design.
Next Week: The digital humanist
Subra Suppiah is country manager of Avanade Malaysia and Florin Rotar is Avanade’s Digital global portfolio lead.