Why will ethics, and a proper understanding of VUCA environments (environments characterized by volatility/risk, uncertainty, complexity and ambiguity), matter even more in the future than they do today? Because AI will require human control, and that control won't happen along the axis of programming skills; it will trace ethical considerations and the demands of VUCA environments.
Here's a neat intro: https://qz.com/1211313/artificial-intelligences-paper-clip-maximizer-metaphor-can-explain-humanitys-imminent-doom/. The examples are neat, but now consider one of them, touched on in passing in the article: translation and interpretation. Near-perfect (native-level) language capabilities for AI are not only 'visible on the horizon', they are approaching us at breakneck speed. The hardware-biotech link that could embed them into our hearing and speech systems is 'visible on the horizon' too. With that, routine translation-requiring exchanges, such as basic meetings and discussions that do not involve complex, ambiguous and highly costly terms, are likely to be automated or outsourced to AI. But the 'black swan' interactions will remain: exchanges in which the cost of getting the meaning exactly right is enormous, and which trace the VUCA-type environment of the exchange (ambiguity and complexity are the natural domains of semiotics). Here, human oversight over AI, and even human displacement of AI, will be required. And this oversight will not be based on the technical / terminological skills of translators or interpreters, but on their ability to manage ambiguity and complexity. That, and ethics...
Another example is even closer to our times: AI-managed trading in financial assets. In normal markets, when there is a clear, stable and historically anchored trend for asset prices, AI can't be beaten on the efficiency of trade placement and execution. By removing / controlling for our human behavioral biases, AI can effectively avoid big risk spillovers across traders and investors sharing the same information in the markets (although AI can also amplify some costly biases, such as herding). However, this advantage turns into a loss when markets are trading in a VUCA environment. When ambiguity about investor sentiment and/or market direction, or the complexity of the counterparties underlying a transaction, or uncertainty about price trends enters the decision-making equation, algorithmic trading platforms have three sets of problems they must confront simultaneously:
- How do we detect the need for, structure, price and execute a potential shift in investment strategy (for example, from optimizing yield to maximizing portfolio resilience)?
- How do we use AI to identify the points for switching from a consensus strategy to a contrarian one, especially if the algos themselves are subject to herding risks?
- How do we migrate across unstable information sets (as information fades in and out of relevance, or the stability of core statistics is undermined)?
For a professional trader/investor, these are 'natural' spaces for decision making. They are also VUCA-rich environments. And they are environments in which errors carry significant costs. They can also coincide with ethical considerations, especially for mandated investment undertakings, such as ESG funds. As in the case of translation/interpretation, nuance can be more important than the core algorithm, and this is especially true when ambiguity and complexity rule.
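To make the first two of those questions a bit more concrete, here is a minimal, purely illustrative sketch of what a switch between a yield-seeking and a resilience-seeking regime could look like in code. Everything in it is a hypothetical assumption rather than a description of any real trading platform: the thresholds, the crude proxies for volatility and ambiguity (trailing volatility and disagreement across model signals), and the function names are all invented for illustration. The point is only that the switching rule itself is a human design choice, which is exactly where the oversight discussed above comes in.

```python
# Illustrative sketch only. The thresholds, proxies and data below are
# hypothetical assumptions, not a real trading system.

import numpy as np


def rolling_vol(returns: np.ndarray, window: int = 20) -> float:
    """Annualized volatility over the trailing window (a crude volatility proxy)."""
    return float(np.std(returns[-window:], ddof=1) * np.sqrt(252))


def signal_dispersion(signals: np.ndarray) -> float:
    """Cross-sectional disagreement among model signals (a crude ambiguity proxy)."""
    return float(np.std(signals, ddof=1))


def choose_regime(returns: np.ndarray, signals: np.ndarray,
                  vol_threshold: float = 0.25,
                  dispersion_threshold: float = 0.5) -> str:
    """Switch from 'yield' to 'resilience' when either proxy breaches its threshold.
    In the resilience regime the sensible move is to de-risk and escalate to a
    human reviewer, rather than let the algorithm keep trading on autopilot."""
    if (rolling_vol(returns) > vol_threshold
            or signal_dispersion(signals) > dispersion_threshold):
        return "resilience"  # stressed or ambiguous markets: hedge, escalate to humans
    return "yield"           # calm, trend-anchored markets: automate freely


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    calm_returns = rng.normal(0.0005, 0.005, 250)            # low-volatility series
    stressed_returns = rng.normal(-0.001, 0.03, 250)         # high-volatility series
    agreeing_signals = np.array([0.8, 0.7, 0.9, 0.75])       # models broadly agree
    conflicting_signals = np.array([0.9, -0.8, 0.1, -0.6])   # models disagree sharply

    print(choose_regime(calm_returns, agreeing_signals))         # -> "yield"
    print(choose_regime(stressed_returns, conflicting_signals))  # -> "resilience"
```

The design choice worth noticing in even this toy version: the hard part is not computing the proxies, it is deciding what counts as 'too volatile' or 'too ambiguous' and what happens once that line is crossed. That judgment call, like the ethics that surround it, stays with the human.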