Data Ethics as Corporate Social Responsibility
Written by Sien Ubbink, intern at Filosofie in actie (Feb - Aug 2021)
Introduction
New technologies and digital advancements are being designed and developed at an ever-increasing pace, and it is no stretch to state that we are highly dependent on certain technologies. Take the coronavirus pandemic as a recent and extreme example: through the use of software such as Zoom, Microsoft Teams and other video-call channels, many companies and educational institutions were able to continue functioning whilst most of society was, in a figurative as well as a literal sense, on lockdown. An integral part of digital technologies – think of social media, online games, multimedia, mobile phones, etc. – is the mass production, collection and processing of all sorts of data.
Almost all organizations, whether private (for-profit companies, NGOs, etc.) or public (IGOs and (branches of) national governments), in some way make use of (personal) data. Therefore, we must be vigilant and critical of how (mis)use of such data can greatly impact individuals, specific social groups and society as a whole. Enter the concept of ‘data ethics’: ethical reflection on how data can be collected, stored and processed in a responsible manner. ‘Data ethics’ is therefore not a fixed set of rules or obligations; it is the process of continuously reflecting on how to work responsibly with all sorts of data.
In academic discourse, there is not yet much information on how those who work in organizations that engage in data ethics view this practice. There are many possible reasons for an organization to implement data ethics throughout its organization, such as aiming to increase trust among its clientele or consumer base, or avoiding public scandals such as data leaks, among many others. This research paper will focus specifically on the implementation of data ethics as a manifestation of corporate social responsibility. It will analyze the convergence of data ethics and corporate social responsibility through an academic literature review, various news sources and five in-depth interviews with respondents from different organizational backgrounds.
Firstly, the theoretical framework of this paper will elaborate on concepts and theories important for this research, comprising the bulk of this paper. The most important concepts are data ethics, corporate social responsibility, ESG-framework theory (Environment, Social and Governance), and corporate digital responsibility. This will be followed by the analysis, wherein the aforementioned concepts and relevant insights from the interviews will be discussed in order to formulate an answer to this paper’s open-ended research question: can the implementation of data ethics be seen or presented as a form of corporate social responsibility? In the conclusion, the main argument will be summarized, the limitations of this research will be laid bare, and suggestions for further research will be posed. This paper aims to give an overview of relevant concepts when looking at a number of (though not all) reasons to engage in data ethics, and to spark interest in the subject matter, which will hopefully motivate others to become (more) acquainted with the topics of data ethics, corporate social responsibility and corporate digital responsibility.
Theoretical analysis
Data Ethics
As mentioned in the introduction, broadly, data ethics is concerned with the ethical reflection that takes place in order to responsibly handle data, meaning conscious and continuous reflections and decisions are made regarding the data of the client or citizen. This includes how this data is generated, recorded, curated, processed, disseminated, shared and used (Floridi & Taddeo 2016: 3), meaning data ethical reflection is not something that ought to ‘happen’ just once whilst the data is first being collected. It should be viewed as a constant process, a code of behavior to abide by, and not as an empty box to be checked and then filed away.
When consciously implementing data ethics, an organization acknowledges that many sorts of data can be sensitive in some way or form, and that misuse or purposeful abuse of this data can harm individuals, societal groups and society in its entirety. Therefore, an important aspect of handling data responsibly is conscious reflection on the possible positive and negative impacts of data use on all stakeholders involved. It is also crucial that these ethical reflections are made when it comes to the employment of (smart) algorithms in the form of machine learning, robotics, artificial intelligence and more. Issues with which we are all familiar, such as (online) profiling, data leaks and hacks, personalized advertisements and tracking software, are all examples wherein the responsible handling of data should be a priority; something that could be well mediated by making the ethical reflections that make up data ethics. Misuse, leaking and abuse of our data, especially personally identifiable information (PII), which is easily linked back to individuals, can greatly impact our privacy, sense of autonomy and sense of self (Yeung 2017; Sax 2018: 143-167).
Take the popular video platform YouTube as an example. YouTube collects a great deal of (user) data and uses smart algorithms to determine which types of videos are recommended to each individual user on the homepage, and which videos will automatically play if the user has ‘auto-play’ on. The advantage of this use of smart algorithms is naturally a more personalized experience: it helps users of the platform find and engage with content that specifically interests them. This is beneficial for the users as well as for the platform itself. However, we must not forget the possible disadvantages and harmful effects these algorithms can have. YouTube collects and accumulates data on your personal preferences with every click on a video, with every skip, with how many minutes you stay engaged in longer videos, et cetera. Research has shown that this can create a platform that is entirely catered to the individual user (Camargo 2020; Goujard 2021).
Watching a few videos that question the effectiveness of the corona vaccine will quickly push videos denying COVID-19 altogether to the forefront of the homepage or recommended videos, since the algorithm has learned from other users like you that such videos might interest you. We must not forget that the main aim of platforms such as YouTube is to prolong user engagement at all costs in order to increase their own ad revenue. Drawing upon the corona vaccine example once more: within a few curious clicks on videos about the (non)existence of the coronavirus, your individual user experience of the platform is transformed completely, pushing more of the same and even more extreme content to the forefront. For many users, only seeing one side of the story, it becomes easy to believe that this is ‘the truth’, when in fact it is only ‘the truth’ that YouTube’s algorithm has constructed for you specifically. Since it is presented on the ‘homepage’ of YouTube, the platform still masquerades as unbiased when it is in fact tailored completely to each user. Skepticism of COVID-19 or the vaccine is merely an example (Camargo 2020; Susarla 2021). Other types of false information, such as fake news and the alarming rate at which support bases for conspiracy theories are growing, are equally pressing (Spencer 2021). A domino effect has even been observed: the spreading of false information surrounding the COVID-19 pandemic seems to have boosted paranoia and other conspiracy theories internationally (Cramer 2021; Preidt 2021). This is a perfect example of how, when we fail to think critically and reflect ethically on the way data is used, data use can heavily impact individuals as well as entire societies (ibid; Douglas et al 2017).
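To make this feedback loop concrete, it can be sketched as a toy simulation. This is purely illustrative and assumes nothing about YouTube's actual system: the topic names, starting weights and doubling factor are all hypothetical.

```python
import random

# Toy sketch of an engagement-driven recommender feedback loop.
# Illustration only -- not YouTube's actual algorithm. Videos belong
# to topics; each click multiplies that topic's weight, so a handful
# of clicks quickly narrows what the feed shows.

TOPICS = ["news", "music", "vaccine_skepticism", "cooking", "sports"]

def recommend(topic_weights, n=10):
    """Sample a feed of n video topics, weighted by past engagement."""
    topics = list(topic_weights)
    weights = [topic_weights[t] for t in topics]
    return random.choices(topics, weights=weights, k=n)

def simulate(clicked_topic, rounds=5):
    """User clicks the same topic every round; its weight compounds."""
    weights = {t: 1.0 for t in TOPICS}
    for _ in range(rounds):
        recommend(weights)            # feed shown to the user
        weights[clicked_topic] *= 2.0  # one click doubles the weight
    return weights

weights = simulate("vaccine_skepticism")
feed = recommend(weights, n=100)
share = feed.count("vaccine_skepticism") / len(feed)
print(f"weight after 5 clicks: {weights['vaccine_skepticism']}")  # 32.0
print(f"share of the feed from that topic: {share:.0%}")
```

After only five clicks the clicked topic's weight is 32 versus 1 for every other topic, so it dominates the feed; the point of the sketch is that a few curious clicks, compounded by engagement-maximizing weighting, are enough to transform the entire recommendation mix.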
Another example, perhaps closer to home, is that of modern cars. Modern cars have the ability to collect all kinds of data, not only on the car itself (how many kilometers it has driven, how well the brakes perform, whether it needs maintenance, etc.); these ‘connected cars’ also gather information on the driver. A car can know how much you weigh, whether you’re alone in the car or with others, where you’re going, when and how often, how tired you are based on how alertly you’re driving, how recklessly you drive based on your speed and the way you brake, and more. If the car has the ability to connect to your smartphone, consider your emails, contacts and text messages fair game to be collected as well (Naafs & Weijnen 2020). What is perhaps more disconcerting, even more so than the ‘mere’ collection of circa 25 gigabytes of data on the car and driver every hour (ibid), is that according to the manufacturers, this data belongs to them (ibid). That data on the workings of the car itself belongs to them is one thing; that all this personal data on the driver belongs to them as well is another discussion entirely. To be allowed to collect and use this data, manufacturers must obtain your consent when you buy or lease such a ‘connected car’; however, this far from settles the issue. When buying a connected car, this data collection is often not explicitly mentioned by the seller, and the information on it within the contract is firstly so abundant and secondly so incomprehensible that, even if you were to read it, very few would understand the juridical jargon. A professor at the Eindhoven University of Technology states that we much too easily press ‘agree’ or sign contracts when it comes to issues like this, though he also acknowledges it is unrealistic to expect people to read (and understand) all terms and agreements, admitting that this goes for himself as well (ibid).
All this personal data can be, and frequently is, used and sold for external purposes: consultancy firm McKinsey expects that, within 10 years, the car-data industry will be grossing between 450 and 750 billion dollars, making it 10% of the total income of the entire car industry (ibid). If someone drives fast or otherwise recklessly, or the car registers that the driver often eats fast food by going to drive-throughs every other day, this data can be sold to insurance companies, which could greatly influence how high your premium is. In the car’s database it would be all too easy to expose a spouse having an affair, with a simple glance at where the driver often goes after work. If you have an accident, the car’s camera footage and data on your alertness and tiredness (how often you are blinking or swerving) could reveal that perhaps you were actually not fit to drive or not paying attention yourself; something insurance companies would likely want to know when determining whether you’re eligible for reimbursement (ibid).
A last example that stresses the importance of ethically reflecting on data use, though certainly no less urgent than the two aforementioned examples, is that certain societal groups can be harmed by a lack of ethical reflection regarding data as well. This is especially harmful in the sense that it can be very discriminatory towards (possibly already marginalized) groups. Without the proper implementation of data-ethical reflection, group discrimination (ethnic discrimination, ageism, sexism, etc.) and, in extreme cases, even group-targeted forms of violence are a very real risk (Floridi & Taddeo 2016: 3).
The examples above firstly bring to light that the issue of whether or not our data (or data on us) is used responsibly and fairly, and what this could mean for us, affects us all. Therefore, ethical reflection on data use is not something only those in the IT world or data industry ought to be concerned about or aware of. Secondly, the examples above make clear that just because some things are technically ‘allowed’ to be done with our data (according to privacy law, for instance), this does not mean the discussion stops there. It is not merely a legal question; it is a broader question of what we as individuals and as a society deem desirable and ethical when it comes to how our data is used and treated.
The world is digitizing at an unprecedented pace, which is why ethical reflection on this digitization is more necessary now than ever. Due to the increased use of algorithms and the accompanying decrease in the need for human involvement or oversight with certain technologies, we can become (too) reliant on automatic processes and accidentally neglect or overlook ethical considerations that machines are also unable to pick up on (Floridi & Taddeo 2016: 2). It is a fine and intricate balance to simultaneously be wary of the harms of certain data use and avoid misuse at all costs, but also not to become so rigid as to unnecessarily bar the possibility to “harness the social value of data science”, as the deputy director and director of the Digital Ethics Lab of Oxford University phrase it (ibid).
There are legal frameworks in place that encourage ethical reflection on data use, though these legal frameworks far from exhaust the subject. Regulations such as the GDPR, though admirable in their aims, are often not enough to ensure ethical conduct with data. The GDPR is deliberately kept ‘vague’ on some aspects so that it can apply to many different circumstances and (information) technologies, and so that it remains adaptable as existing technologies progress at unprecedented paces and new technologies are invented rapidly (Lobschat et al. 2021: 875; Irwin 2018). Therefore, despite encouraging ethical reflection on responsible data use, it does not quite offer tangible or practical tools for doing so satisfactorily (Raab & Hijmans 2018).
Corporate (Social) Responsibility (CSR)
Corporate social responsibility (hereafter: CSR) is widely researched and academically discussed, but the topic is far from exhausted. The origins of this umbrella term can be traced back to the 1950s in the United States, but the concept began to spike in popularity around the 1980s, as larger and more transnational organizations started to settle into the international playing field (Utting 2007).
There are some definitional issues concerning the subject of CSR. The concept is firstly contested and complex, and often open to multiple forms of interpretation and application. Secondly, as mentioned above, it is an umbrella term which overlaps with other concepts such as corporate accountability, corporate citizenship and business ethics. These concepts all circulate in the academic discussion and can cause ambiguity and confusion. Moreover, it is a fluid and dynamic phenomenon whose definition has shifted over time; according to Professors Dirk Matten and Jeremy Moon, the term even differs in definition per country (Matten & Moon 2008: 405-407). This inherent complexity and versatility of the term became apparent during both the literature review and the interviews for this research, wherein different authors and respondents all had slightly divergent (though all valid) conceptualizations of the concept.
To operationalize the term CSR somewhat, a delimitation of the concept is due. When using the term CSR, people generally abide roughly by the definition Carroll assigned to it in 1991: CSR “encompasses the economic, legal, ethical, and discretionary expectations that society has of organizations at a given point in time” (Andreasen & Drumwright 2001; Carroll 1991). Schematically, it is laid out in the form of a pyramid which implies a hierarchy of priorities (see figure 1).
Thus, though economic responsibility (making a profit) is at the base of a (for-profit) organization, it is also the responsibility of organizations to go beyond their economic motive and beyond what they are and are not legally allowed to do. To partake in CSR, they must also ethically reflect on the social imperatives and consequences of their actions, especially since organizations large and small can command many resources and have indisputable influence on the environment and society. The tip of the pyramid is purely philanthropic, implying a responsibility to actively better society and/or the environment, though it could be argued that this section, too, is part of or related to ethics (Carroll 1991; Lobschat et al. 2021: 875-886; Matten & Moon 2008: 405-407).
Still, this conceptualisation is rather broad, though this breadth does make it more freely interpretable throughout the research. To illustrate the fluidity and contestation of the concept over time: when corporate social responsibility began to emerge as a concept in the 1950s and 60s, some argued, among whom Milton Friedman, that the only corporate social responsibility companies have is to maximize their profits (Friedman 1979; Lougee & Wallace 2008: 96). In Friedman’s defence, research has shown that socially responsible corporate behaviour boosts a company’s market position and profits over time, which would mean that maximizing a company’s profits goes hand in hand with behaving socially responsibly (Matten & Moon 2008: 405-406).
Taking the two outer ends of how CSR is often viewed, (1) behaving socially responsibly as a form of pure philanthropy and (2) behaving socially responsibly purely for economic profit, you will find the most common conception of CSR in between: namely, that it is both. ‘MVO Nederland’, a network of partners in the Netherlands that promotes CSR among businesses, abides by this dual conceptualization of CSR, which is close to Carroll’s. It acknowledges the duty organizations have towards society to act responsibly with regard to the environment, sustainability, inclusivity and other societal concerns (MVO Nederland), as well as acknowledging that organizations have increasing their profits as their main driver. This conceptualization of CSR combines the two to create a win-win approach: by behaving socially responsibly as an organization, you will, in the long term, also see economic benefits. A key phrase here is ‘long term’, since it almost always requires organizations to first incur extra costs and investments to improve their positive effect on society (by having to switch to sustainable power or materials, for example) before seeing these investments pay off later on (Mahoney & Thorne 2005). However, we mustn’t mistake this promise of long-term benefits as a reason to expect that any and all organisations are eager to partake in CSR efforts. The fact that becoming socially responsible is a long-term investment, rather than something which promises short-term pay-offs, often acts as a hurdle for organisations to engage in CSR. The success of CSR efforts also depends greatly on the scope of the organization, the country in which the organization is embedded and the type of organization, which can add to organizations’ scepticism or trepidation about engaging with CSR (Udayasankar 2008; Wanderley et al 2008).
On the other hand, refusal to partake in any form of CSR has also been shown to have a negative impact on an organization’s image and reputation in society and among its clients and/or customers (Stanaland 2011).
The convergence of socially responsible behaviour and making profits or excelling as an organization (since non-profit organizations also engage in CSR and benefit by doing so in ways other than strictly economic profits) is often summarized as the ‘three P’s’ approach: focusing on people (employees, customers, the community), the planet and profit (Basham 2016).
Environment, Social and Governance Framework (ESG)
The ‘Environmental, Social and Governance framework’ (hereafter: ESG framework) builds upon CSR by aiming to make an organization’s CSR efforts measurable for investors or (potential) clients and consumers who wish to invest time and money in organizations that are reliable and sustainable, so as to ensure a responsible investment on their part.
The ESG framework gives clearer insight into an organization’s philanthropic, social and internal governance practices by providing organizations with ‘quantifiable indicators’ to measure the organization’s ‘level of CSR’, as it were (Gupta 2021). By applying numerical figures to how organizations “treat their staff, manage supply chains, respond to climate change, increase diversity and inclusion, and build community links”, the framework boosts an organization’s incentive to partake in CSR by giving it something more tangible to show to the outside world (Herden et al 2021; Gupta 2021).
Now, although organizations that are proactive regarding CSR will therefore have promising ESG indicators, this is not all about profits. Apart from it often being (economically) wise to invest in an organization that has a sustainable future ahead, research shows that many investors, especially those of newer generations, also wish to be part of and support socially responsible organizations for moral reasons, such as wanting to back organizations which reflect positive societal changes or hold social values in high regard, like sustainability and fair wages, but also issues such as privacy and data ethics (Gupta 2021). Besides investors, there is also a shift in consumer behavior: consumers and clients are willing to lay down more money for products or services that have been produced ethically and sustainably (ibid). All in all, by engaging in CSR and, where possible, quantifying these efforts using the indicators provided by the ESG framework, a true win-win situation becomes possible for organizations, clients, society and the environment alike, though it may take some patience and an altered business strategy.
Corporate Digital Responsibility (CDR)
The novel concept of corporate digital responsibility (hereafter: CDR) is not easily defined, just like the concept of CSR from which it is derived. The concept is new, only having begun to circulate in academic discourse from around 2019 (Herden et al. 2021; Lobschat et al. 2021). Partly due to this newness, there is as yet no clear, generally agreed-upon definition of the concept, though there is consensus for the most part on what it entails. CDR, as conceptualized by Lobschat et al, has as its aim the encouragement of “shared values and norms [that guide] an organization’s operations with respect to the creation and operation of digital technologies and data” (Lobschat et al 2021: 876). They argue that if we believe “human behavior should be governed by moral norms and ethical considerations, then any creation, operation, impact assessment and refinement of digital technology and data should be assessed according to such rules”; hence the birth of this novel concept (ibid).
CDR naturally and immediately brings to mind the much more commonly known corporate social responsibility, as described above (Lougee & Wallace 2008: 96). The authors refer to this similarity, stating that both fundamentally operate from the conviction that corporations (and organizations in general) have a duty and commitment toward society, and that due to their high level of power and influence they should carry the responsibility of alleviating societal issues, or at the very least not worsening them (Lobschat et al. 2021: 875-886). However, they claim a separate concept is necessary for three reasons: (1) the unprecedented pace at which digital technologies are now being designed; (2) their malleability, meaning they are sometimes used in ways the designers have not foreseen or intended; and (3) their pervasiveness, seeing as they are increasingly inescapable in everyday and work life (ibid). Simplified, Lobschat et al claim that the basic conceptual parts of CDR as a framework include four stakeholders: organizations, individual actors, institutional/legal/governmental actors and artificial/technological actors. Besides this, they identify four lifecycle stages linked to digital technology and data, namely the creation, operation, impact assessment and refinement of technology (Lobschat et al 2021: 875-888; Herden et al 2021: 14).
Herden et al (2021), though in agreement with the root of Lobschat et al’s definition (“shared values and norms [that guide] an organization’s operations with respect to the creation and operation of digital technologies and data”), disagree with the choice to delink it from its mother term, CSR (Lobschat et al 2021; Herden et al 2021). Instead, they regard CDR as an extension of CSR, “comprising all levels of corporate responsibilities as defined in Carroll’s (1991) CSR pyramid and all domains of the Environmental, Social, Governance (ESG) framework” (Herden et al 2021: 14), which thus combines two of the aforementioned concepts. More directly, Herden et al therefore assign the following conceptualization to CDR: “Corporate Digital Responsibility is an extension of a firm’s responsibilities which takes into account the ethical opportunities and challenges of digitalization” (2021: 17). Herden et al proceed to build upon Carroll’s pyramid of CSR (figure 1), adding the new and more contemporarily relevant element of digital responsibility, resulting in the CDR pyramid (see figure 2).
Herden et al further successfully intertwined the concept of CDR and the ESG framework in the table below, also incorporating relevant CSR-related principles. As evident from both figure 2 and table 1, Herden et al seem to have managed to create plausible linkages between CSR, CDR and the ESG framework, which is promising, as these can serve as accessible, step-by-step tools for organizations to familiarize themselves with digital responsibilities and data ethics.
All in all, it is safe to say that CSR, the ESG framework, CDR and data ethics are all intertwined in some way or form. For the sake of clarity, this research assumes a position wherein the ESG framework and CDR are viewed as extensions of the broader term CSR, and not as completely standalone concepts. Simplified: firstly, the ESG framework, as part of CSR, aims to make CSR goals and behavior measurable. Secondly, CDR, as part of CSR, helps focus on the highly innovative, malleable and pervasive world of data and digitalization, which is growing and becoming more complex than ever before. And thirdly, when looking at the components of CSR and CDR, it is not only plausible but rather logical that reflecting on what is (or is not) a responsible and ethical way to handle data is (or should be) an important element of CSR.
Interview analysis
Based on the theoretical analysis of the concepts of data ethics, CSR, ESG and CDR, it can be concluded that it is entirely plausible and possible to view data ethics as a form of CSR. This raises the question: is this how others, especially those who work in (or closely related to) the field of data ethics, view it as well?
Five in-depth interviews were conducted for this research. The sample was found through the network of ‘Filosofie in actie’ (Philosophy in action), an organization that consults other organizations on data ethics, ethics and technology, and privacy, and that also provides workshops, lectures and training sessions on these subjects. The respondents are all from different organizations, some working in the public sector and others in the private sector. For the sake of the privacy of the respondents and/or that of their organizations, their identities will be kept anonymous.
Respondent A works in the public sector as Privacy Officer at a Dutch municipality. Respondent B works as an ethical policy advisor at a different Dutch governmental institution. Respondent C works as an ethical advisor at a Dutch insurance company. Respondent D works as Global Public Affairs Leader at an international holding company. Respondent E works at a consulting firm and is an adjunct professor at a university. Refer to table 2 for an overview of the respondents and their positions.
Firstly, all five respondents acknowledged a shift in attention to the topic of data ethics over the past few years, especially since the introduction of the GDPR. A common underlying reason that came up in the interviews for this increased awareness and relevance of data ethics was indeed the pace at which technologies are being designed, their pervasiveness and their malleability, just as Lobschat et al and Herden et al claimed when they expressed the need for the new term CDR.
Whether the respondents viewed data ethics in their organization as CSR differed per respondent. Respondent A, working in the public sector as Privacy Officer at a Dutch municipality, does not explicitly view data ethics as a CSR practice. Public actors behaving ethically and with regard to societal concerns (whether environmental, economic or otherwise) is generally not viewed as CSR. Though public actors often behave and function in a similar way to large corporations, IGOs or NGOs, a key difference is that it is widely regarded as an inherent duty of public actors to be societally responsible. The respondent gave a compelling argument: ‘when it comes to public actors, people don’t have a choice. They cannot simply opt for a different organization, since there is only one government’ (respondent A). This intensifies the responsibility the (public) organization carries to process sensitive data properly, also since ‘once the damage is done, the damage is done. An important part of the public sector is to have the citizens’ trust, which naturally becomes tainted if we are not careful with people’s personal information’ (respondent A). Respondent B, working as an ethical policy advisor at a different Dutch governmental institution, also recognizes this added responsibility, claiming that whilst private organizations have as their main aim to deliver profits, public organizations have as their aim to deliver societal success (which naturally differs based on the context), which is much more difficult to measure.
Respondent C, however, working as an ethical advisor at a Dutch insurance company, does view data ethics as something that ought to be a fundamental part of CSR. He believes data ethics ought to be a fundamental value of all organizations that use data, even though at present the CSR team of his organization does not (yet) specifically address data ethics in any way: ‘as of now, it is still more part of the compliance team than the CSR team’. He mentions that it is a great priority of his organization that they are, at the very least, always law-compliant so as to avoid litigation, but also that they handle data in such a way that it helps maintain a good reputation and trust among their clients. The insurance company for which respondent C works happens to be a frontrunner when it comes to CSR, which he says is also a deliberate way for it to positively differentiate itself from other organizations.
Respondent D, working as Global Public Affairs Leader for an international holding company, states that the way she and her team view data ethics is a translation of other issues they value, such as the environment, equality and various human rights: the same values underlying these societal causes translate into the digital domain. She states data ethics is both a core value in her organization and something that is specifically reflected on in a separate policy document that applies to the worldwide franchise. The organization in question has recently appointed a Data Ethics Manager and will soon introduce a Data and AI Ethics Team, prompted by a data-ethics policy document the respondent has worked on. As she states: “you can’t simply have a piece of paper [the policy] and say that you’re done. Writing the paper; that’s the easy part. The implementation of it; that’s the hard part”. This is partly the reason why the policy has not been publicly released as of yet; the respondent feels it is inauthentic to show something off when it will still take some time to wholly and authentically incorporate these ethical aims in a large organization.
Interestingly, when CSR was mentioned, the respondent stated that it is not a term her organization generally works with, even though, measured against common CSR standards, the organization seems to fare very well as a socially responsible company. She phrases it as follows: “We are, in our behavior and based on our values, a socially responsible corporation; we just don’t call it that. We have a strategy called the ‘people and planet positive strategy’, which is probably what other companies call CSR. For us it’s just not much about numbers we ‘need’ to hit; [the reason we do this,] it’s our own ambition”. The interviews thus reflect the conceptual debate discussed in the theoretical analysis: the definition of CSR differs from person to person and from organization to organization.
When asked about the concept of CDR, none of the five respondents were yet familiar with it. Respondents A and B, both from the public sector, had no objection to the term after a brief explanation, though both stated that – as with CSR – a public organization ought to behave responsibly on all fronts due to its very nature: serving the public interest of citizens and society as a whole. Respondent B did agree, however, that such terms and labels have added value, since they can help clarify an organization’s conscious efforts to citizens and can serve as certifiers of ethical conduct. He does warn that such labels can be misused when they are treated as a box to tick, broadly referring to ‘ethics washing’ (ethical window dressing).
The other respondents, though also unfamiliar with CDR, showed some enthusiasm for the concept after a short explanation. Respondent C acknowledges that ‘to a certain extent, you must be prepared to give up data nowadays if you wish to participate in society’, and that if a separate team (just as there are CSR teams in organizations, including his) is necessary to ensure this data is dealt with ethically, then he is supportive of that. Respondent D feels that CDR naturally follows from her organization’s extensive and encompassing core values and norms, but that for other organizations such a term may be handy or even necessary to distinguish themselves from being merely law-compliant and to show that they are doing more than the bare minimum.
The importance of a workplace culture of respect for data ethics is a recurring subject in the interviews. Various respondents noted that there is a vast difference in understanding of data ethics among people in their organization, as well as a vast difference in appreciation of the need for it. Respondent E, an expert on governance and ethics, traced this difference in appreciation and understanding mainly to two factors: the age of his colleagues/peers and their professional backgrounds. Respondent E, backed by statements from the other respondents, observed that data ethics generally plays a much larger role among the younger generation(s), and that it is sometimes difficult to explain the dire necessity of data ethics to senior colleagues. Moreover, in organizations where people from different backgrounds work together (think of people with a background in ethics alongside people with a background in computer science), those with a more technical background tend to have less affinity with the subject of data ethics than their more philosophically or socially inclined peers.
Conclusion
Based on both the theoretical literature analysis and the interview analysis, it can be concluded that data ethics can indeed be viewed as a form of corporate social responsibility. It can also be viewed as a form of corporate digital responsibility since, as the theoretical analysis found, CDR can be argued to be an extension of the broader CSR. The explanation and examples in the theoretical analysis showed that data, and how it is treated – above all, whether it is handled with ethical consideration and reflection or not – can have a huge impact on individuals, social groups and society in general. Thus, as the theory behind CSR suggests that corporations have an ethical and philanthropic responsibility toward society, in which they reflect on the social imperatives and consequences of their organization, this responsibility includes considering the effects of their data use as well.
Yet, as also came to light during the interviews, although the above suggests data ethics can most certainly be viewed as an important form of CSR, this need not necessarily be the case: organizations can engage in data ethics without doing so as an explicit form of CSR or CDR. Feeling a duty towards society due to the nature of one’s organisation (mainly public organisations, which aim to serve the public good), wanting to be law-compliant and avoid reputational harm, and engaging in data ethics as a logical outcome of an organisation’s norms and values are other ways in which (representatives of) organisations explain their engagement with data ethics besides CSR.
The contested nature of, and definitional issues surrounding, CSR perhaps make it difficult for organisations to fully get behind the term, whilst the newness of CDR and its current confinement to the academic world could explain its lack of popularity.
It is important to note the limitations of this research: mainly the small number of respondents and the fact that these respondents differ considerably in their function and the type of organisation in which they work. This means no generalizations can be drawn from the information gathered from the respondents; the sample is simply too small for that. What was interesting and enlightening about this divergent sample, however, is that, despite the respondents being asked the same questions, a wide variety of answers emerged, some of them surprisingly opposed. A more homogeneous sample might have given more homogeneous answers, and though the information gained from these interviews is far from comprehensive, it does offer insight into just how divergent the ways in which different organizations view data ethics can be. This calls for further research, for instance through more in-depth interviews or through a more quantitative method such as large-scale surveying. In any case, data ethics in general, and organizations’ reasons for engaging with it, are hot topics, as awareness of the subject rapidly increases due to technological advancements and societal changes – an exciting prospect for the research to come.
Acknowledgements
I would like to thank the respondents for taking the time to speak with me on the subjects discussed in this research. Their co-operation made for interesting insights. I would also like to thank my internship supervisor Piek Visser-Knijff, who helped guide me through this research and provided me with valuable feedback.
Bibliography
Autoriteit Persoonsgegevens (2020). “Werkwijze Belastingdienst in strijd met de wet en discriminerend”, https://autoriteitpersoonsgegevens.nl/nl/nieuws/werkwijze-belastingdienst-strijd-met-de-wet-en-discriminerend. Accessed on 28-5-2021.
Basham, K. (2016). “Corporate Social Responsibility: Three Ps”, https://medium.com/@KevinBasham/corporate-social-responsibility-three-ps-ec8753027ad7. Accessed on 2-6-2021.
Camargo, C. (2020). “YouTube’s algorithms might radicalise people – but the real problem is we’ve no idea how they work”, https://theconversation.com/youtubes-algorithms-might-radicalise-people-but-the-real-problem-is-weve-no-idea-how-they-work-129955. Accessed on 7-9-2021.
Carroll, A.B. (1991). “The pyramid of corporate social responsibility: toward the moral management of organizational stakeholders”, Business Horizons, 34(4), pp. 39-48.
Cramer, J. (2021). “Why people latch on to conspiracy theories, according to science”, https://www.nationalgeographic.com/science/article/why-people-latch-on-to-conspiracy-theories-according-to-science. Accessed on 10-9-2021.
Douglas, K.M., Sutton, R.M. and Cichocka, A. (2017). “The psychology of conspiracy theories”, Current Directions in Psychological Science, 26(6), pp. 538-542.
Floridi, L. and Taddeo, M. (2016). “What is data ethics?”, Philosophical Transactions of the Royal Society.
Goujard, C. (2021). “YouTube’s algorithm pushes hateful content and misinformation: Report”, https://www.politico.eu/article/mozilla-firefox-report-youtube-algorithm-pushes-hateful-content-misinformation/. Accessed on 7-9-2021.
Gupta, P. (2021). “The Evolution of ESG from CSR”, https://www.lexology.com/library/detail.aspx?g=80bbe258-a1df-4d4c-88f0-6b7a2d2cbd6a. Accessed on 5-5-2021.
Herden, C.J., Alliu, E., Cakici, A., Cormier, T., Deguelle, C., Gambhir, S., Griffiths, C., Gupta, S., Kamani, S.R., Kiratli, Y.S. and Kispataki, M. (2021). “Corporate Digital Responsibility”, Sustainability Management Forum | NachhaltigkeitsManagementForum, pp. 1-17. Springer Berlin Heidelberg.
Hofs, Y. (2020) “Belastingdienst schuldig aan structurele discriminatie van mensen die toeslagen ontvingen”, https://www.volkskrant.nl/nieuws-achtergrond/belastingdienst-schuldig-aan-structurele-discriminatie-van-mensen-die-toeslagen-ontvingen~baebefdb/?referrer=https%3A%2F%2Fwww.google.com%2F. Accessed on 16-6-2021.
Irwin, L. (2018). “The GDPR: Understanding the 6 data protection principles”, IT Governance, https://www.itgovernance.eu/blog/en/the-gdpr-understanding-the-6-data-protection-principles. Accessed on 7-6-2021.
Lougee, B. and Wallace, J. (2008). “The corporate social responsibility (CSR) trend”. Journal of Applied Corporate Finance, 20(1), pp.96-108.
Mahoney, L.S. and Thorne, L. (2005). “Corporate social responsibility and long-term compensation: Evidence from Canada”, Journal of Business Ethics, 57(3), pp. 241-253.
Preidt, R. (2021). “Pandemic Boosted Paranoia and Conspiracy Theories”, https://www.webmd.com/lung/news/20210729/pandemic-boosted-paranoia-and-conspiracy-theories-study-confirms. Accessed on 9-9-2021.
Sax, M. (2018). “Privacy from an Ethical Perspective”, The Handbook of Privacy Studies, p. 143.
Spencer, C. (2021). “Terrifying new study says our conspiracy theory epidemic could be a result of human evolution”, https://thehill.com/changing-america/respect/equality/566982-terrifying-new-study-says-our-conspiracy-theory-epidemic. Accessed on 9-9-2021.
Stanaland, A.J., Lwin, M.O. and Murphy, P.E. (2011). “Consumer perceptions of the antecedents and consequences of corporate social responsibility”, Journal of Business Ethics, 102(1), pp. 47-55.
Susarla, A. (2021). “Big tech has a vaccine misinformation problem – here’s what a social media expert recommends”, https://theconversation.com/big-tech-has-a-vaccine-misinformation-problem-heres-what-a-social-media-expert-recommends-164987. Accessed on 15-9-2021.
Travers, M. (2020). “Facebook Spreads Fake News Faster Than Any Other Social Website, According To New Research”, https://www.forbes.com/sites/traversmark/2020/03/21/facebook-spreads-fake-news-faster-than-any-other-social-website-according-to-new-research/?sh=b00f6956e1a9. Accessed on 29-3-2021.
Turker D. (2013). “Pyramid of CSR”, Encyclopedia of Corporate Social Responsibility. Springer, Berlin, Heidelberg.
Udayasankar, K. (2008). “Corporate social responsibility and firm size”, Journal of Business Ethics, 83(2), pp. 167-175.
Wanderley, L.S.O., Lucian, R., Farache, F. and de Sousa Filho, J.M. (2008). “CSR information disclosure on the web: a context-based approach analysing the influence of country of origin and industry sector”, Journal of Business Ethics, 82(2), pp. 369-378.
Yeung, K. (2017). “‘Hypernudge’: Big Data as a mode of regulation by design”. Information, Communication & Society, 20(1), pp.118-136.