“Any sufficiently advanced technology is indistinguishable from magic” –Arthur C. Clarke
Our fear of technology is deeply rooted in society. Some countries, mostly in East Asia, are more open to embracing unknown technologies than others. Those from my own tribe (the German-speaking world) are usually skeptical compared to the Japanese, a majority of whom, despite[¹] an aging population, embrace tech automation, AI and robots. To the Japanese, even inert objects have a soul, which explains why a robot, too, can be «kawaii».
Sentient AI has been a prominent menace in SciFi since Asimov, Clarke and Kubrick. Popular stories feature a villain (or a company, as with SkyNet) greedy for power and dominance, who invents AI to outsmart competitors (and destroy anyone in the way). I can relate to this fear, not because my Apple products might grow limbs and strangle me, but because history is full of examples where power concentrations cause a rift in society, leading to war and misery. Despite the liberal promises of the early web to “level the playing field”, digital technology increases power concentration and inequality.
Our fear of anything new is healthy and perfectly rational. So what’s the specific problem with demonizing AI? I’m concerned that due to the hype and panic around a superhuman AI threat, we fail to see more urgent and realistic problems.
I wrote about algorithmic bias and the pitfalls of BigData models elsewhere[²]. To recap: we need (Big)Data sets to produce any meaningful AI, but by relying on them we inherit their gotchas. No matter how the monopolies pitch us on their ethical intentions, a conflict of interest arises when BigData turns from an instrument for producing economic merchandise into the chief merchandise itself.
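How a model inherits its data set's gotchas can be sketched in a few lines. This is a hypothetical toy example (the groups, labels and counts are invented for illustration, using nothing beyond the Python standard library): a naive model that learns per-group outcome frequencies from skewed historical data simply reproduces the skew.

```python
from collections import Counter

# Hypothetical historical hiring data in which past decisions correlate
# with an attribute ("group") that should be irrelevant to the outcome.
history = (
    [("A", "hired")] * 80 + [("A", "rejected")] * 20 +
    [("B", "hired")] * 20 + [("B", "rejected")] * 80
)

def train(data):
    """Learn the majority historical outcome per group."""
    counts = {}
    for group, outcome in data:
        counts.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'A': 'hired', 'B': 'rejected'} -- the bias survives training
```

Real models are of course far more elaborate, but the principle is the same: nothing in the fitting step distinguishes a legitimate pattern from an inherited prejudice.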
Uber, which doesn’t own any cars, replaced a whole industry of taxi drivers with low-paid part-timers, who will soon be replaced by self-driving cars. Amazon replaced hundreds of low-skilled workers with robots in its fully automated logistics centers. And if you’re a software developer thinking this can never touch you, think again. You’re probably already working in an agile “continuous-integration treadmill”, where code ownership was abolished a decade ago, and it’s easy to replace you and your code on a Friday afternoon.
Not all automation is bad. Why not remove steps in a process or workflow if doing so improves quality and reduces complexity? But there is a limit to what a company should get away with automating. If a customer gives a support engineer negative feedback because of a shortcoming of the product, then rating the engineer on that feedback is dehumanizing: the cause had nothing to do with the employee. Yet it happens all the time, and BI dashboards never take that into account.
Software Data is eating the world
The promise of tech is to give us an edge over our competition, and so we put up with the cost, the complexity and the dehumanizing downsides.
Customer support is only one example where a large part of the workforce (and even the customers) are slaves to BigData and poorly designed machine logic. Companies are increasingly becoming like a set of «proprietary algorithms». Their purpose is to maximize profits for shareholders, yet hardly any employees are shareholders; that is a problem. Maybe there is room to discuss a new type of corporate structure to make the future workplace less hostile for humans? (Better be quick before the bots have their own union! ;))
In addition to BigData’s algorithmic bias, we should discuss the inability of AI to be open and transparent. «Open Source» (free both as in free beer and as in freedom) hardly exists here. Even if you have the source code for the tools that generate the model from the raw input (the corpus), there is no reproducibility[³]. From this perspective AI isn’t technology in the same way your software is, but closer to a biological organism: test results might be reliable, but the logical flow that produced the decisions is non-deterministic.
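The reproducibility point can be illustrated with a minimal, hypothetical sketch. "Training" below is just a stand-in for any stochastic fitting procedure (random initialization, data shuffling, dropout): two unseeded runs generally disagree even though the code and the corpus are identical, and while pinning a seed restores determinism here, real pipelines rarely pin every source of randomness (GPU kernels, data ordering, library versions).

```python
import random

def train_model(corpus, seed=None):
    """Stand-in for a stochastic training loop: the resulting 'weights'
    depend on the order in which examples are visited, like SGD."""
    rng = random.Random(seed)
    data = list(corpus)
    rng.shuffle(data)          # unseeded: a different order every run
    weights = 0.0
    for i, x in enumerate(data):
        weights += x / (i + 1)  # order-dependent update
    return round(weights, 6)

corpus = [1.0, 2.0, 3.0, 4.0, 5.0]
# Two unseeded runs usually produce different "models":
print(train_model(corpus) == train_model(corpus))
# With a pinned seed the runs are bit-for-bit identical:
print(train_model(corpus, seed=42) == train_model(corpus, seed=42))  # True
```

Having the source code above tells you almost nothing about which model a given run will produce; only the full provenance of the run does.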
The Trouble with Bias
There are only a handful of companies with the resources and the volume of data to produce meaningful AI, and they are already monopolies in other areas. While BigData tools may be distributed, there are no decentralized BigData concepts, and it’s impossible to build AI on decentralized systems (at least as long as the industry remains on its current trajectory). The danger AI poses is more urgent and imminent than what critics predict, and for totally different reasons: not because we’re enslaved by a sudden sentient new life-form, but because AI furthers the power concentration already in the hands of those abusing that power today.
Google is utilizing its massive user base by having real people solve the puzzles its robots got stuck on (reCaptcha). In that sense, we all already work for the bots without payment. Joking aside, the process is too slow for most of us to notice. The shift has been ongoing for 30 years and will probably span another generation, in which we increasingly adapt our processes to make them more machine-like.
From “One-Dimensional Man” (1964) by H. Marcuse:
In essays from the early 1940s, Marcuse is already describing how tendencies toward technological rationality were producing a system of totalitarian social control and domination. In a 1941 article, “Some Social Implications of Modern Technology,” Marcuse sketches the historical decline of individualism from the time of the bourgeois revolutions to the rise of modern technological society. Individual rationality, he claims, was won in the struggle against regnant superstitions, irrationality, and domination, and posed the individual in a critical stance against society. Critical reason was thus a creative principle which was the source of both the individual’s liberation and society’s advancement. The development of modern industry and technological rationality, however, undermined the basis of individual rationality. As capitalism and technology developed, advanced industrial society demanded increasing accommodation to the economic and social apparatus and submission to increasing domination and administration. Hence, a “mechanics of conformity” spread throughout the society. The efficiency and power of administration overwhelmed the individual, who gradually lost the earlier traits of critical rationality (i.e., autonomy, dissent, the power of negation), thus producing a “one-dimensional society” and “one-dimensional man.”
The Complexity Problem
The math behind AI is so complex that, unfortunately, it’s hard for engineers without formal training to quickly get into the subject. In a sense we’re increasingly becoming the «henchmen» of the machine and of an elite group of highly skilled academics publishing theoretical papers on the subject (often removed from any practical implementation).
When very smart people like Stephen Hawking claim «the end is nigh», it’s unfortunate, because the hysteria masks far more pressing issues already impacting our lives today.
Here are two articles on the topic which may help to deflate some of the hype around AI:
I’m also very much looking forward to the upcoming book by Brett Frischmann and Evan Selinger on the subject, and hope it will give a more down-to-earth introduction for those inside and outside software engineering to educate themselves.
I’m an eager student of «Antifragile» (BlackSwan) risk management and would also love to hear Taleb’s position on AI. But I doubt a super-intelligence will destroy humanity soon. It’s far more plausible that human progress in other disciplines, like genetic, nano or climate engineering, could drive humanity off the cliff (if we continue to be asleep at the wheel). Maybe it’s time to sit down with the bots and negotiate? 🙂
Papers & Resources:
- Professional Judgment in an Era of Artificial Intelligence and Machine Learning: [link]
- Analyze and ameliorate unintended bias in text classification models [link]
- List of critical literature on algorithms and social / ethic concerns [link]
- AI Can Be Made Legally Accountable for Its Decisions [link]
- How algorithms and machine learning are affecting communities and societies [link]
- The field of AI research is about to get way bigger than code [link]
- One pixel attack for fooling deep neural networks [link]
- The relationship between statistical definitions of fairness in machine learning, and individual notions of fairness [link]
- Artificial intelligence can make our societies more equal [link]
- If automated decision is used in criminal justice, it must be open source [link]
- The Bad News About Online Discrimination in Algorithmic Systems [link]
- IEEE Dec 2016: Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems [link]
- talk by Zeynep Tufekci: We’re building a dystopia just to make people click on ads [video]
- Learning to Trust Machines That Learn [link]
[¹] Whether the Japanese embrace it despite (or «due to»), their aging population would be an interesting question.
[²] My earlier articles address «data bias», offer effective hype-reduction techniques, and examine claims of “BigData being AntiFragile”:
- My data is bigger than yours: NNTaleb on the fragility of BigData
- Using BigData to analyse your personality & character
[³] To be fair, the problem of reproducing results is not unique to BigData but a general problem across scientific studies.
Valbonne Consulting provides Research & Consulting for emerging technologies in Internet/Web of Things (WoT/IoT/M2M) and Emerging-Tech. We specialise in decentralisation, security and privacy. We work across a variety of traditional industry verticals (Telecommunications, Automotive, Energy, ...). We support Open Source and technologies built on open standards.
Passionate about Open Source, GNU/Linux and Security since 1996. I write about future technology and how to make R&D faster. Expatriate, Entrepreneur, Adventurer and Foodie, currently living near Nice, France.