Current Applications of Artificial Intelligence
In some industries, AI provides a holistic, end-to-end solution by automating business intelligence and analytics processes. In others, artificial intelligence is deployed simply to improve efficiency in specific areas. Below are just a few examples of how AI, specifically machine learning, is being used to improve efficiency:
- Fraud detection and Information Security
Banks and many other businesses use applications of artificial intelligence to detect fraudulent activity or anomalies indicating cyber-criminal activity.[2] This is a well-established application of AI that serves the financial industry very well, and it works in a straightforward way. In one approach, the AI software is given a very large sample of data that includes fraudulent and non-fraudulent purchases and is trained to determine whether a transaction is valid. In another, similar approach, the software is given a large sample of baseline data from which it learns thresholds for “normal” activity across a given enterprise architecture. Over time, with significant training and calibration, the software readily spots fraudulent transactions or anomalous activity. This application of AI is an important one because cyber-attacks are increasing in frequency and rely on sophisticated tools and social engineering, which means that human operators alone cannot keep up with real-time threat detection, mitigation, and, ideally, prevention. Feeding machine learning algorithms great quantities of data trains the AI solution to monitor behavior, detect anomalies, adapt, and respond to dynamically changing threats. However, there are inherent risks, particularly bias, in any data used, and especially in data linked to financial transactions. Machine learning might infer, for example, from your zip code alone whether you are white, and this assumption might be both false and prejudicial. Such inferred biases could affect a person’s credit score or produce a false positive in fraud detection.
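For concreteness, here is a minimal sketch of the baseline approach using scikit-learn’s IsolationForest. The transaction features (amount and hour of day), the contamination setting, and the synthetic data are all illustrative assumptions, not details of any real system:

```python
# Minimal sketch: learn "normal" activity from baseline data, then flag
# anomalies. Features and thresholds here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline: purchase amount (USD) and hour of day for 10,000 transactions.
baseline = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.8, size=10_000),  # typical purchase amounts
    rng.normal(loc=14, scale=4, size=10_000),         # mostly daytime activity
])

# Train on baseline data so the model learns what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Score new transactions: -1 flags an anomaly, 1 looks normal.
incoming = np.array([
    [45.0, 13.0],    # ordinary daytime purchase
    [9500.0, 3.0],   # very large purchase at 3 a.m.
])
print(model.predict(incoming))  # e.g. [ 1 -1]
```

In practice such a model is retrained and recalibrated continuously as legitimate behavior shifts, which is where the training-and-calibration effort described above comes in.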
- Online customer support and marketing in retail
Many websites now offer some form of “chat” functionality where you can talk to customer support that is really AI, and voice-recognition AI has likewise automated much of the support function. In marketing, AI is used for product and content recommendation, customer segmentation, and the design of personalized ads targeted at specific potential customers, and it is used to monitor social media for social listening and sentiment analysis.[3] However, these chatbots, particularly on social media, have sometimes failed spectacularly. For example, Microsoft’s AI chatbot, Tay, was corrupted through the Twitter data it was learning from, and within 24 hours of launch it was spewing Nazi propaganda on Twitter as it interacted with users. It was immediately shut down and Microsoft issued an apology. Uses of AI in marketing continue to emerge in areas such as augmented reality and matching human customer-support agents with customers based on affinity and commonalities. There are plenty of opportunities, but the example of Tay makes the risks equally evident.
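As a toy illustration of the sentiment-analysis piece, the sketch below trains a bag-of-words classifier on a few hand-labeled posts. The posts and labels are invented for illustration; real social-listening systems train on large annotated corpora:

```python
# Toy sentiment classifier for social listening. The tiny labeled set is
# purely illustrative -- production systems use large annotated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "love the new update, works great",
    "fantastic support, solved my issue fast",
    "terrible experience, the app keeps crashing",
    "worst customer service ever",
]
labels = ["positive", "positive", "negative", "negative"]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, labels)

print(clf.predict(["the checkout flow works great"]))  # likely 'positive'
print(clf.predict(["crashing again, just awful"]))     # likely 'negative'
```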
- Fulfillment and Supply-Chain Management
Inventory carrying cost, transportation, and labor are the three largest cost drivers in supply-chain management, and AI in this area focuses on decisions in the cycle that affect these three things.[4] AI enables leadership to address each of them through rapid, augmented decision-making. If decision-making is faster, and based on historical data as well as actual demand patterns, then companies can reduce overall inventory levels and transportation costs while ensuring that product is in the right place at the right time. Leveraging AI for last-mile delivery efficiency and using operational chatbots carries data risks similar to those discussed for marketing and fraud detection. Overall, by using AI-driven automation to optimize solutions, companies can move products around more efficiently and also optimize labor.
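To sketch the kind of decision involved, the snippet below computes a reorder point from a demand history using the standard safety-stock formula. The demand series, lead time, and service level are illustrative assumptions:

```python
# Sketch: set a reorder point from demand history.
# Demand series, lead time, and service level are illustrative assumptions.
import statistics

daily_demand = [120, 135, 110, 150, 128, 140, 115, 132, 145, 125]  # units/day
lead_time_days = 5
z_service = 1.65  # z-score for roughly a 95% service level

mean_demand = statistics.mean(daily_demand)
stdev_demand = statistics.stdev(daily_demand)

# Classic formula: expected demand over the lead time plus safety stock.
safety_stock = z_service * stdev_demand * lead_time_days ** 0.5
reorder_point = mean_demand * lead_time_days + safety_stock
print(f"reorder at {reorder_point:.0f} units (safety stock: {safety_stock:.0f})")
```

A forecasting model that tightens the demand estimate shrinks the safety stock, which is precisely how faster, better-informed decisions translate into lower inventory levels.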
All of these functions seem low on the risk spectrum and provide significant value to companies, usually justifying the cost of deployment. However, all technology deployment, including AI, carries both upside and downside risk. It is therefore essential to place AI in a broader context of risk evaluation, in order to address the ethical issues and to ensure that AI is deployed in ways that better enable businesses to achieve their bottom-line and top-line goals. Importantly, the deployment of AI fits under the broader umbrella of cyber risk as part of enterprise risk identification and mitigation, and in this context executives and even boards of directors are increasingly held responsible for cyber risk. What are the key issues these decision-makers need to be aware of in order to make the best possible decisions? How can business leaders identify and then mitigate these risks?
The Broader Context
AI can help companies navigate complex problems and augment decision-making. Furthermore, deploying AI can enable businesses to become more agile and innovative and can provide a competitive edge, but as indicated, this technology also engenders risks. So how do we identify and evaluate these risks?
The key risks for business leaders in this context are problems revolving around data and design.[5] [6] Extensive studies of risk and emerging technologies yield the following key takeaways for addressing AI risk:
- Data is critical, but the risks in deep learning are threefold:
- Data bias
- Lack of data transparency
- Data monopoly
- Moving forward, it is essential that there is diversity both in the designers addressing each of these data issues and in the executives and decision-makers deciding how these tools will be deployed.
The following section explores data risks in more detail.
Data Bias
Data bias can occur in many ways but most often arises methodologically, before data is even collected; it also occurs at other stages in the training of AI. The critical ethical consideration is that data bias, just like our own biases, results in significant social problems. A biased AI is, at best, not very useful and could even exacerbate the problems it is designed to solve or address. At worst, it amplifies our worst human tendencies, such as racism or the denial of opportunity to women. Both have been observed in AI deployments, from the Los Angeles Police Department’s problems with predictive policing[7] to healthcare AI used by Optum that ranked white patients higher than sicker black patients for service provision based on embedded bias.[8] AI deployments that filter résumés for hiring have largely excluded women because the pool of data the AI was trained on consisted mostly of men.[9]
In all of these cases, the AI was not intentionally designed to produce these results; it was developed based on data biases built into the methodology and the data itself. Here are three key methodological stages where bias can and does occur (a sketch after the list illustrates the framing and preparation stages):
- Problem framing: Bias can occur when data scientists and designers frame the problem they want to solve. Consider a credit card company that wants to predict “creditworthiness.” This may seem a definitive and discrete goal, but it is actually quite vague: methodologically, the firm must decide whether “creditworthiness” means maximizing profit margins or maximizing the number of loans repaid. That definition will influence how the problem is framed, the data collected, and the methodological approach, and it can carry embedded biases (such as reliance on zip code or purchasing habits). If these biases are not identified early, the resulting model can be discriminatory.
- Data collection: Data collection itself is biased when the data collected is either unrepresentative of reality or reflects existing prejudice. Data might be unrepresentative of reality if, say, a deep-learning algorithm is trained mostly on photos of light-skinned faces and therefore has trouble recognizing faces with darker skin tones (or vice versa). An example of the data sample itself being prejudiced is Amazon’s internal recruiting tool, which was trained on historical hiring decisions; because that history skewed toward men, the AI was trained on biased data.[10]
- Data preparation: Bias can be introduced when attributes are selected for the algorithm to consider, that is, when choosing which descriptive data to use. In the credit example, attributes could be a customer’s age, income, loans repaid, or zip code; in the Amazon example, gender, education level, or work experience. The challenge is that the attributes chosen fundamentally shape the model’s behavior.
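To make the framing and preparation stages concrete, here is a hedged sketch using a hypothetical applicant table. The column names, values, and the decision to drop zip code are all illustrative assumptions:

```python
# Sketch of how framing and preparation choices shape a credit model.
# The DataFrame, column names, and values are hypothetical.
import pandas as pd

applicants = pd.DataFrame({
    "age": [34, 52, 23],
    "income": [48_000, 91_000, 27_000],
    "loans_repaid": [3, 7, 1],
    "zip_code": ["10001", "94105", "60617"],
    "profit_usd": [420, -35, 180],
    "repaid_in_full": [True, False, True],
})

# Problem framing: two defensible definitions of "creditworthy" that
# would train very different models from the same raw data.
label_profit = applicants["profit_usd"] > 0    # frame 1: maximize margin
label_repaid = applicants["repaid_in_full"]    # frame 2: maximize repayment

# Data preparation: zip code can act as a proxy for race, so an
# audit-minded team might exclude it from the feature set.
features = applicants.drop(columns=["zip_code", "profit_usd", "repaid_in_full"])
print(features.columns.tolist())  # ['age', 'income', 'loans_repaid']
```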
Data bias is very difficult to address because it is often not identified until its downstream impact on choices is noticed.[11] In addition, data scientists are not trained as sociologists and tend themselves to come from a narrow demographic, so they are generally not taught to think methodologically about social context or framing; the tendency is to see things through the optics of their own context. This is why diversity in designers is important, and why diversity in STEM (Science, Technology, Engineering and Math) is important. Diverse designers will bring different questions to framing, identify bias in collection earlier, and bring different optics to the consideration of attributes.
To address data bias, companies need to intentionally identify their biases and own their design processes. Those who own the process own the outcome, and they are better positioned to compete with an explicit ethical posture. In this regard, the marketing strategy, supply-chain management, or security posture may produce more company value because the end result is more effective and both trust and brand are stronger. When it comes to marketing AI in particular, risks are embedded in the technologies designed for search and targeting: these can be effective because they deliver to customers what they want to see, but that can amount to selling confirmation bias and, in the long run, could limit new sales opportunities. This example demonstrates the key point that avoiding bias requires being conscious of data reasoning, even developing specialists in identifying patterns of bias, and diversifying both the people who design AI marketing tools and the decision-makers who deploy them.
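A common first-pass audit for the kind of bias described above is to compare a model’s decision rates across demographic groups (a demographic-parity check). The sketch below uses invented predictions and group labels:

```python
# Minimal bias audit: compare a model's flag rates across groups.
# The predictions and group labels here are invented for illustration.
from collections import defaultdict

records = [  # (group, model_flagged)
    ("A", True), ("A", False), ("A", False), ("A", False),
    ("B", True), ("B", True), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in records:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

rates = {g: flagged / total for g, (flagged, total) in counts.items()}
print(rates)  # {'A': 0.25, 'B': 0.5}

# A large gap is a signal to revisit the data and features
# before trusting the model's decisions.
gap = max(rates.values()) - min(rates.values())
print(f"demographic parity gap: {gap:.2f}")
```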
Lack of Data Transparency
For a variety of reasons, including intellectual property protection, many companies choose not to disclose their deep learning processes. This can compound the problem of bias and lead to longer cycles of developing effective AI solutions. In contrast, making the process transparent opens up opportunities for bias to be identified earlier and allows the evaluation of risk to be embedded in the development process. Openness usually carries less risk, particularly when it comes to trust and brand. AI systems come with additional risk, so companies need to choose their technology and processes deliberately, be as transparent as possible about identified limitations, and continuously evaluate risk.[12] This should be part of their cyber risk posture, because better outcomes are achieved when project development risks are identified early in deployment; companies are then better prepared, more agile, and able to increase top-line growth as well as protect their bottom line. In short, transparency not only moves companies toward compliance and reduces the associated costs, but also improves a company’s risk posture along with its top-line growth potential through greater agility.
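One lightweight way to practice this transparency is to publish structured documentation of a model’s data and known limitations, loosely in the spirit of a “model card.” The fields and values below are illustrative assumptions:

```python
# Sketch: record a model's provenance and known limitations alongside
# its artifacts, loosely in the spirit of a "model card".
# All field names and values are illustrative.
import json

model_card = {
    "model": "fraud-detector-v3",
    "training_data": "2018-2021 card transactions, US only",
    "known_limitations": [
        "under-represents transactions originating outside the US",
        "zip code excluded after a proxy-bias review",
    ],
    "last_bias_audit": "2022-01-15",
}
print(json.dumps(model_card, indent=2))
```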
Data Monopoly
Data monopoly, like a lack of data/process transparency, increases risk because it limits the optics on the process and the agility to address disruption or something completely new. Quite simply, a data monopoly limits a company’s ability to discover or explore “unknowns” outside the bounded universe of the monopoly. This “blindness” can lead to failure, disruption, or inefficiency, and it is why monopolies inevitably fail.
In fact, the problems of data monopolies, whether the data is held by big pharma or by Facebook and Google, have become such a concern that some have called for a Magna Carta or Bill of Rights to protect data ownership.[13] Others have made the case for more radical reform through antitrust legislation, given how closely these companies can resemble utilities.[14] Organizations are therefore wise to embrace the understanding that monopolies engender excessive risk. Addressing data monopolies requires building user ownership of data into any digital initiative from the ground up.
Design Bias
Without our realizing it, biases can manifest themselves in our design decisions, particularly when it comes to data collection and analysis. There are many types of design bias that impact AI training; here are a few:
- Response or activity bias occurs in content generated by humans, such as Amazon reviews, tweets, and Facebook posts. Such data is biased because it is unlikely to reflect the population as a whole, only small subsets of it.
- Selection bias due to feedback loops occurs when a model itself influences the generation of the data used to train it, so any embedded or unknown bias in the model is recycled into the design.
- Bias due to system drift refers to changes over time in the system generating the data, such as shifts in how attributes are defined or captured; it accumulates through small changes in data-collection design.
- Omitted-variable bias occurs when critical attributes that influence outcomes are missing from the data.
- Societal bias is, at bottom, stereotyping encoded in the data.
Carefully considering these design pitfalls and leveraging diverse, heterogeneous teams in the digital world will help address design risks.
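As one example of catching system drift, the sketch below compares a feature’s distribution across two periods with a two-sample Kolmogorov-Smirnov test. The data is synthetic and the alert threshold is an illustrative assumption:

```python
# Sketch: flag system drift by comparing a feature's distribution this
# quarter against last quarter. Synthetic data; threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
last_quarter = rng.normal(loc=50.0, scale=10.0, size=2_000)  # baseline period
this_quarter = rng.normal(loc=55.0, scale=12.0, size=2_000)  # shifted period

stat, p_value = ks_2samp(last_quarter, this_quarter)
if p_value < 0.01:
    print(f"drift detected (KS statistic {stat:.3f}); review the data pipeline")
else:
    print("no significant drift detected")
```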
Lessons for Business Leaders
Critical challenges business leaders face in the smart machine age revolve around the tension between managing risk and creating agility amidst increasing turbulence. Navigating that tension requires that executives have a deeper understanding of the risks involved so they can make better decisions while preserving agility.[15] [16] By identifying the risks, executives can either avoid them or put in place the people and processes to mitigate them.
On the risk spectrum, reputation and trust command an ever-greater premium, and businesses must therefore treat these issues as high priorities for corporate governance. The key to deploying ethical AI, regardless of the application, is mitigating the three data limitations: bias, lack of transparency, and monopoly. Decision-makers must realize that a biased AI is a defective AI; it will not only be limited in execution but could compromise reputation and trust.
One way to address this is to ensure there is diversity among designers and in decision-making. Diversity in designers ensures that different questions are asked at each stage of the AI process (framing, collection, and preparation), which limits bias. Diversity among executives and decision-makers helps address data transparency and monopoly issues by ensuring a broad range of optics on these types of decisions. There should be talent in the boardroom with both technical and business understanding of these issues, and the greater the diversity in decision-making, the better. This diversification of boardroom talent is essential because better questions will be asked, and the risk and audit committees will be able to provide the multifaceted fiduciary oversight they were designed to provide in the deployment of AI. This has bottom-line and top-line benefits, and it benefits civil society.
Importantly, the ethical development of AI makes good business sense, and companies should treat their moral agency as a good business decision. This ethical behavior starts at the executive and board level, as business leaders create a culture of data ethics from the top down.