Unexpected Item in the Bagging Area: AI Super-Intelligence Fear Mongering

“Any sufficiently advanced technology is indistinguishable from magic” –Arthur C. Clarke

Our fear of technology is deeply rooted in society. Some countries, mostly in East Asia, are more open to embracing unknown technologies than others. Those from my own tribe (the German-speaking world) are usually skeptical compared to the Japanese, where despite[¹] an aging population a majority embraces tech automation, AI and robots. To the Japanese even inert objects have a soul, which explains why a robot, too, can be «kawaii».

Youtube video: Henn-na Hotel T-Rex Receptionist

Sentient AI has been a prominent menace in SciFi since Asimov, Clarke and Kubrick. Popular stories have a villain (or a company, think SkyNet) greedy for power and dominance, who often invents AI to outsmart competitors (and destroy anyone in the way). I can relate to this fear, not because my Apple products might grow limbs and strangle me, but because history is full of examples where power concentrations cause a rift in society, leading to war and misery. Despite the early web's liberal promise to “level the playing field”, digital technology increases power concentrations and inequality.

Our fear of anything new is healthy and perfectly rational. So what’s the specific problem with demonizing AI? I’m concerned that, amid the hype and panic around a superhuman AI threat, we fail to see more urgent and realistic problems.

I wrote about algorithmic bias and the pitfalls of BigData models elsewhere[²]. To recap: we need (Big)Data sets to produce any meaningful AI, but in doing so we inherit their gotchas. No matter how the monopolies pitch us on their ethical intentions, a conflict of interest arises when BigData turns from an instrument for producing economic merchandise into the chief merchandise itself.

Uber, which doesn’t own any cars, replaced a whole industry of taxi drivers with low-paid part-timers, who will soon be replaced in turn by self-driving cars. Amazon replaced hundreds of low-skilled workers with robots in its fully automated logistics centers. And if you’re a software developer thinking this can never touch you, think again. You’re probably already working in an agile “continuous-integration treadmill”, where code ownership was abolished a decade ago, and it’s easy to replace you and your code on a Friday afternoon.

Not all automation is bad. Why not remove steps from a process or workflow if it improves quality and reduces complexity? But there is a limit to how much automation a company should get away with. If a customer gives a support engineer negative feedback, it is dehumanizing for the employee when the reason had nothing to do with them but was a shortcoming of the product. Yet it happens all the time, and BI dashboards never take that into account.

Software Data is eating the world

The promise of tech is to give us an edge over our competition, and so we put up with the cost, the complexity and the dehumanizing downsides.

Customer support is only one example where a large part of the workforce (and even customers) are slaves to BigData and poorly designed machine logic. Companies are increasingly becoming a set of «proprietary algorithms». Considering that their purpose is to maximize profits for shareholders, while hardly any employees are shareholders, this is a problem. Maybe there is room to discuss a new type of corporate structure to make the future workplace less hostile for humans? (Better be quick, before the bots have their own union! ;))

In addition to BigData’s algorithmic bias, we should discuss the inability of AI to be open and transparent. «Open Source» AI (free both as in free beer and as in freedom) hardly exists. And even if you have the source code for the tools that generate the model from the raw input (the corpus), there is no reproducibility[³]. From this perspective AI isn’t technology in the same way your software is, but closer to a biological organism: test results might be reliable, but the logical path that produced the decisions is non-deterministic.
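
To make the reproducibility point concrete, here is a toy sketch (a hypothetical model and data set, nothing from any real system): two training runs of the same tiny network, differing only in random initialization, can both fit the data yet end up with completely different internal weights, so having the code and the corpus still doesn’t give you one inspectable “source”.

```python
import numpy as np

def train_tiny_net(X, y, seed, epochs=500, lr=0.5):
    """Train a one-hidden-layer net on XOR by plain gradient descent.
    The learned weights depend on the random initialization seed."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (2, 8))
    W2 = rng.normal(0.0, 1.0, (8, 1))
    for _ in range(epochs):
        h = np.tanh(X @ W1)                    # hidden activations
        out = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid output
        grad_out = out - y                     # cross-entropy gradient
        grad_h = (grad_out @ W2.T) * (1.0 - h**2)
        W2 -= lr * h.T @ grad_out / len(X)
        W1 -= lr * X.T @ grad_h / len(X)
    return W1, W2, out

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

(W1_a, W2_a, out_a), (W1_b, W2_b, out_b) = (train_tiny_net(X, y, s) for s in (1, 2))
# Both runs usually fit the data, yet the internal weights that
# produced those answers are entirely different:
print(np.allclose(W1_a, W1_b))  # False
```

Real systems add GPU scheduling, data shuffling and distributed training on top of this, which makes bit-for-bit reproduction even harder.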

The Trouble with Bias

There are only a handful of companies that have the resources and the volume of data to produce meaningful AI, and they are already monopolies in other areas. While BigData tools may be distributed, there are no decentralized BigData concepts, and it’s impossible to build AI on decentralized systems (at least as long as the industry remains on its current trajectory). The danger AI poses is more urgent and imminent than what critics predict, and for totally different reasons: not because we will be enslaved by a sudden sentient new life-form, but because AI furthers the power concentration already in the hands of those abusing that power today.

Google is utilizing its massive user base to have real people solve the puzzles its robots got stuck on (reCaptcha). In that sense, we all already work for the bots without payment. Joking aside, the process is too slow for most of us to notice. The shift has been ongoing for 30 years and will probably span another generation, in which we increasingly adapt our processes to make them more machine-like.

On the 1964 “One-Dimensional Man” by H. Marcuse:

In essays from the early 1940s, Marcuse is already describing how tendencies toward technological rationality were producing a system of totalitarian social control and domination. In a 1941 article, “Some Social Implications of Modern Technology,” Marcuse sketches the historical decline of individualism from the time of the bourgeois revolutions to the rise of modern technological society. Individual rationality, he claims, was won in the struggle against regnant superstitions, irrationality, and domination, and posed the individual in a critical stance against society. Critical reason was thus a creative principle which was the source of both the individual’s liberation and society’s advancement. The development of modern industry and technological rationality, however, undermined the basis of individual rationality. As capitalism and technology developed, advanced industrial society demanded increasing accommodation to the economic and social apparatus and submission to increasing domination and administration. Hence, a “mechanics of conformity” spread throughout the society. The efficiency and power of administration overwhelmed the individual, who gradually lost the earlier traits of critical rationality (i.e., autonomy, dissent, the power of negation), thus producing a “one-dimensional society” and “one-dimensional man.”

The Complexity Problem

The math behind AI is so complex that, unfortunately, it’s hard for engineers without formal training to quickly get into the subject. In a sense we’re increasingly becoming the «henchmen» of the machine and of an elite group of highly skilled academics publishing theoretical papers on the subject (often removed from any practical implementation).

When very smart people like Stephen Hawking claim «the end is nigh», it’s unfortunate, because the hysteria masks far more pressing issues already impacting our lives today.

Here are two articles on the topic which may help deflate some of the hype around AI:

I’m also very much looking forward to the upcoming book by Brett Frischmann and Evan Selinger on the subject, and hope it will give a more down-to-earth introduction for those outside and inside software engineering to educate themselves.

I’m an eager student of «Antifragile» (Black Swan) risk management and would also love to hear Taleb’s position on AI. But I doubt a super-intelligence will destroy humanity anytime soon. It’s far more plausible that human progress in other disciplines, like genetic, nano- or climate engineering, could drive humanity off the cliff (if we continue to be asleep at the wheel). Maybe it’s time to sit down with the bots and negotiate? 🙂

Papers & Resources:

  • Professional Judgment in an Era of Artificial Intelligence and Machine Learning: [link]
  • Analyze and ameliorate unintended bias in text classification models [link]
  • List of critical literature on algorithms and social / ethic concerns [link]
  • AI Can Be Made Legally Accountable for Its Decisions [link]
  • How algorithms and machine learning are affecting communities and societies [link]
  • The field of AI research is about to get way bigger than code [link]
  • One pixel attack for fooling deep neural networks [link]
  • The relationship between statistical definitions of fairness in machine learning, and individual notions of fairness [link]
  • Artificial intelligence can make our societies more equal [link]
  • If automated decision is used in criminal justice, it must be open source [link]
  • The Bad News About Online Discrimination in Algorithmic Systems [link]
  • IEEE Dec 2016: Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems [link]
  • talk by Zeynep Tufekci: We’re building a dystopia just to make people click on ads [video]
  • Learning to Trust Machines That Learn [link]

Footnotes:

[¹] Whether the Japanese embrace it despite (or «due to») their aging population would be an interesting question.

[²] My earlier articles address «data-bias», offer effective hype-reduction techniques, and examine claims that “BigData is AntiFragile”:

[³] To be fair, the problem of reproducing results is not unique to BigData but a general problem across scientific studies.

Joachim Bauernberger

Passionate about Open Source, GNU/Linux and Security since 1996. I write about future technology and how to make R&D faster. Expatriate, Entrepreneur, Adventurer and Foodie, currently living near Nice, France.

What Kant can teach about critical thought and tech innovation

Technology bashing[1][2] has been in vogue lately. I’m probably not the only one disillusioned with the direction our industry is heading[1]. When I started out, the Web/Internet promised democracy, freedom and personal empowerment through knowledge sharing. Today every big tech firm wants to become a monopoly[1][2]. There is no room for others. It’s all or nothing, winner-takes-all.

Are we, the engineers, asking the right questions, or have we lost our ability to think critically? I’ll let you answer that for yourself. In the latest edition of the popular security magazine PoC||GTFO there were some great points on how Kant’s philosophy applies to our profession. Below is what Kant wrote in 1784, which still applies today to how we think, work and innovate.

What Is Enlightenment?

by Immanuel Kant (translated by Mary C. Smith)

Enlightenment is man’s emergence from his self-imposed nonage. Nonage is the inability to use one’s own understanding without another’s guidance. This nonage is self-imposed if its cause lies not in lack of understanding but in indecision and lack of courage to use one’s own mind without another’s guidance. Dare to know! (Sapere aude.) “Have the courage to use your own understanding,” is therefore the motto of the enlightenment.

Laziness and cowardice are the reasons why such a large part of mankind gladly remain minors all their lives, long after nature has freed them from external guidance. They are the reasons why it is so easy for others to set themselves up as guardians. It is so comfortable to be a minor. If I have a book that thinks for me, a pastor who acts as my conscience, a physician who prescribes my diet, and so on–then I have no need to exert myself. I have no need to think, if only I can pay; others will take care of that disagreeable business for me. Those guardians who have kindly taken supervision upon themselves see to it that the overwhelming majority of mankind–among them the entire fair sex–should consider the step to maturity, not only as hard, but as extremely dangerous. First, these guardians make their domestic cattle stupid and carefully prevent the docile creatures from taking a single step without the leading-strings to which they have fastened them. Then they show them the danger that would threaten them if they should try to walk by themselves. Now this danger is really not very great; after stumbling a few times they would, at last, learn to walk. However, examples of such failures intimidate and generally discourage all further attempts.

Thus it is very difficult for the individual to work himself out of the nonage which has become almost second nature to him. He has even grown to like it, and is at first really incapable of using his own understanding because he has never been permitted to try it. Dogmas and formulas, these mechanical tools designed for reasonable use–or rather abuse–of his natural gifts, are the fetters of an everlasting nonage. The man who casts them off would make an uncertain leap over the narrowest ditch, because he is not used to such free movement. That is why there are only a few men who walk firmly, and who have emerged from nonage by cultivating their own minds.

It is more nearly possible, however, for the public to enlighten itself; indeed, if it is only given freedom, enlightenment is almost inevitable. There will always be a few independent thinkers, even among the self-appointed guardians of the multitude. Once such men have thrown off the yoke of nonage, they will spread about them the spirit of a reasonable appreciation of man’s value and of his duty to think for himself. It is especially to be noted that the public which was earlier brought under the yoke by these men afterwards forces these very guardians to remain in submission, if it is so incited by some of its guardians who are themselves incapable of any enlightenment. That shows how pernicious it is to implant prejudices: they will eventually revenge themselves upon their authors or their authors’ descendants. Therefore, a public can achieve enlightenment only slowly. A revolution may bring about the end of a personal despotism or of avaricious tyrannical oppression, but never a true reform of modes of thought. New prejudices will serve, in place of the old, as guide lines for the unthinking multitude.

This enlightenment requires nothing but freedom–and the most innocent of all that may be called “freedom”: freedom to make public use of one’s reason in all matters. Now I hear the cry from all sides: “Do not argue!” The officer says: “Do not argue–drill!” The tax collector: “Do not argue–pay!” The pastor: “Do not argue–believe!” Only one ruler in the world says: “Argue as much as you please, but obey!” We find restrictions on freedom everywhere. But which restriction is harmful to enlightenment? Which restriction is innocent, and which advances enlightenment? I reply: the public use of one’s reason must be free at all times, and this alone can bring enlightenment to mankind.

On the other hand, the private use of reason may frequently be narrowly restricted without especially hindering the progress of enlightenment. By “public use of one’s reason” I mean that use which a man, as scholar, makes of it before the reading public. I call “private use” that use which a man makes of his reason in a civic post that has been entrusted to him. In some affairs affecting the interest of the community a certain [governmental] mechanism is necessary in which some members of the community remain passive. This creates an artificial unanimity which will serve the fulfillment of public objectives, or at least keep these objectives from being destroyed. Here arguing is not permitted: one must obey. Insofar as a part of this machine considers himself at the same time a member of a universal community–a world society of citizens–(let us say that he thinks of himself as a scholar rationally addressing his public through his writings) he may indeed argue, and the affairs with which he is associated in part as a passive member will not suffer. Thus it would be very unfortunate if an officer on duty and under orders from his superiors should want to criticize the appropriateness or utility of his orders. He must obey. But as a scholar he could not rightfully be prevented from taking notice of the mistakes in the military service and from submitting his views to his public for its judgment. The citizen cannot refuse to pay the taxes levied upon him; indeed, impertinent censure of such taxes could be punished as a scandal that might cause general disobedience. Nevertheless, this man does not violate the duties of a citizen if, as a scholar, he publicly expresses his objections to the impropriety or possible injustice of such levies. A pastor, too, is bound to preach to his congregation in accord with the doctrines of the church which he serves, for he was ordained on that condition. 
But as a scholar he has full freedom, indeed the obligation, to communicate to his public all his carefully examined and constructive thoughts concerning errors in that doctrine and his proposals concerning improvement of religious dogma and church institutions. This is nothing that could burden his conscience. For what he teaches in pursuance of his office as representative of the church, he represents as something which he is not free to teach as he sees it. He speaks as one who is employed to speak in the name and under the orders of another. He will say: “Our church teaches this or that; these are the proofs which it employs.” Thus he will benefit his congregation as much as possible by presenting doctrines to which he may not subscribe with full conviction. He can commit himself to teach them because it is not completely impossible that they may contain hidden truth. In any event, he has found nothing in the doctrines that contradicts the heart of religion. For if he believed that such contradictions existed he would not be able to administer his office with a clear conscience. He would have to resign it. Therefore the use which a scholar makes of his reason before the congregation that employs him is only a private use, for no matter how sizable, this is only a domestic audience. In view of this he, as preacher, is not free and ought not to be free, since he is carrying out the orders of others. On the other hand, as the scholar who speaks to his own public (the world) through his writings, the minister in the public use of his reason enjoys unlimited freedom to use his own reason and to speak for himself. That the spiritual guardians of the people should themselves be treated as minors is an absurdity which would result in perpetuating absurdities.

But should a society of ministers, say a Church Council, . . . have the right to commit itself by oath to a certain unalterable doctrine, in order to secure perpetual guardianship over all its members and through them over the people? I say that this is quite impossible. Such a contract, concluded to keep all further enlightenment from humanity, is simply null and void even if it should be confirmed by the sovereign power, by parliaments, and the most solemn treaties. An epoch cannot conclude a pact that will commit succeeding ages, prevent them from increasing their significant insights, purging themselves of errors, and generally progressing in enlightenment. That would be a crime against human nature whose proper destiny lies precisely in such progress. Therefore, succeeding ages are fully entitled to repudiate such decisions as unauthorized and outrageous. The touchstone of all those decisions that may be made into law for a people lies in this question: Could a people impose such a law upon itself? Now it might be possible to introduce a certain order for a definite short period of time in expectation of better order. But, while this provisional order continues, each citizen (above all, each pastor acting as a scholar) should be left free to publish his criticisms of the faults of existing institutions. This should continue until public understanding of these matters has gone so far that, by uniting the voices of many (although not necessarily all) scholars, reform proposals could be brought before the sovereign to protect those congregations which had decided according to their best lights upon an altered religious order, without, however, hindering those who want to remain true to the old institutions. But to agree to a perpetual religious constitution which is not publicly questioned by anyone would be, as it were, to annihilate a period of time in the progress of man’s improvement. This must be absolutely forbidden.

A man may postpone his own enlightenment, but only for a limited period of time. And to give up enlightenment altogether, either for oneself or one’s descendants, is to violate and to trample upon the sacred rights of man. What a people may not decide for itself may even less be decided for it by a monarch, for his reputation as a ruler consists precisely in the way in which he unites the will of the whole people within his own. If he only sees to it that all true or supposed [religious] improvement remains in step with the civic order, he can for the rest leave his subjects alone to do what they find necessary for the salvation of their souls. Salvation is none of his business; it is his business to prevent one man from forcibly keeping another from determining and promoting his salvation to the best of his ability. Indeed, it would be prejudicial to his majesty if he meddled in these matters and supervised the writings in which his subjects seek to bring their [religious] views into the open, even when he does this from his own highest insight, because then he exposes himself to the reproach: Caesar non est supra grammaticos. [note: Caesar is not above grammarians.] It is worse when he debases his sovereign power so far as to support the spiritual despotism of a few tyrants in his state over the rest of his subjects.

When we ask, Are we now living in an enlightened age? the answer is, No, but we live in an age of enlightenment. As matters now stand it is still far from true that men are already capable of using their own reason in religious matters confidently and correctly without external guidance. Still, we have some obvious indications that the field of working toward the goal [of religious truth] is now opened. What is more, the hindrances against general enlightenment or the emergence from self-imposed nonage are gradually diminishing. In this respect this is the age of the enlightenment and the century of Frederick [the Great].

A prince ought not to deem it beneath his dignity to state that he considers it his duty not to dictate anything to his subjects in religious matters, but to leave them complete freedom. If he repudiates the arrogant word “tolerant”, he is himself enlightened; he deserves to be praised by a grateful world and posterity as that man who was the first to liberate mankind from dependence, at least on the government, and let everybody use his own reason in matters of conscience. Under his reign, honorable pastors, acting as scholars and regardless of the duties of their office, can freely and openly publish their ideas to the world for inspection, although they deviate here and there from accepted doctrine. This is even more true of every person not restrained by any oath of office. This spirit of freedom is spreading beyond the boundaries [of Prussia] even where it has to struggle against the external hindrances established by a government that fails to grasp its true interest. [Frederick’s Prussia] is a shining example that freedom need not cause the least worry concerning public order or the unity of the community. When one does not deliberately attempt to keep men in barbarism, they will gradually work out of that condition by themselves.

I have emphasized the main point of the enlightenment–man’s emergence from his self-imposed nonage–primarily in religious matters, because our rulers have no interest in playing the guardian to their subjects in the arts and sciences. Above all, nonage in religion is not only the most harmful but the most dishonorable. But the disposition of a sovereign ruler who favors freedom in the arts and sciences goes even further: he knows that there is no danger in permitting his subjects to make public use of their reason and to publish their ideas concerning a better constitution, as well as candid criticism of existing basic laws. We already have a striking example [of such freedom], and no monarch can match the one whom we venerate.

But only the man who is himself enlightened, who is not afraid of shadows, and who commands at the same time a well disciplined and numerous army as guarantor of public peace–only he can say what [the sovereign of] a free state cannot dare to say: “Argue as much as you like, and about what you like, but obey!” Thus we observe here as elsewhere in human affairs, in which almost everything is paradoxical, a surprising and unexpected course of events: a large degree of civic freedom appears to be of advantage to the intellectual freedom of the people, yet at the same time it establishes insurmountable barriers. A lesser degree of civic freedom, however, creates room to let that free spirit expand to the limits of its capacity. Nature, then, has carefully cultivated the seed within the hard core–namely the urge for and the vocation of free thought. And this free thought gradually reacts back on the modes of thought of the people, and men become more and more capable of acting in freedom. At last free thought acts even on the fundamentals of government and the state finds it agreeable to treat man, who is now more than a machine, in accord with his dignity.


SmartCities’ Cyber Security Role and Ethical Challenges

The security and safety challenges of smart cities are under hot discussion, and because “smart city” is an umbrella term, every cyber-security vendor has an opinion on it. Most technical research on smart cities doesn’t address cyber-security and privacy concerns. The consensus is that it’s the vendor/integrator who should be held accountable when things go wrong.

But engineering professionals are too absorbed with technical implementation. Our thinking revolves so much around answering the question “Can we build it?” that we forget to ask ourselves:

What are the potential negative effects on security, privacy, democracy, freedom and liberty?

Comments touching on “ethical considerations” are treated as a distraction when brought up for debate in technical standardization groups. It isn’t easy to address such fundamental questions, especially when they can’t be solved by engineering. The biggest mistake we make is believing ethical questions will be answered by somebody better qualified for the job. Maybe somebody from philosophy, psychology or theology? And if not that, then the market, or (as a last resort) the courts?

A smart-city architecture allows “better” information sharing, strong identity management, and both blanket and targeted surveillance; it benefits law enforcement with better access to location tracking. In a nutshell: a more powerful presence in people’s (voters’) lives.

Once the money is spent, what court would rule that a smart city should be rolled back or that its surveillance capabilities should be restricted? Do we realise that at this point our code (with all its bugs) becomes law? Think about that for a moment.

***

It isn’t surprising that most technical research on smart cities only highlights benefits, considering a lot of it was government funded (at least in the EU, where programs like H2020 or FP7 contribute a large share of smart-city research). Very few documents are submitted on smart-city security, and none of these papers (including the ones dedicated to the subject) provide mitigation techniques that restore at least the same safety as a non-smart city. Adding network functionality to a previously isolated system will always make you less secure, no matter how many dollars you pump into making people believe it will be safer. Of course the security industry will tell us they can secure our WiFi lightbulbs. Vendors rarely ask security questions during the early-stage design of a product. Our attitude should really be to ask ourselves whether such connected gadgets weren’t an utterly dumb idea in the first place.

The security sector’s profitability depends on a certain fear factor being present in the population. You can’t justify security spending when nobody perceives a threat. Smart cities are a great way to maintain and measure this fear factor very accurately, as showcased below.

The idea that a sleepy city council could provide better security by making its city smart is a sham. Security always works by reducing the attack surface. Sure, we’ll manage to curb crime in some notorious “dark corners” thanks to smart lighting and better monitoring of public spaces (made possible by improved data analytics and image-recognition techniques for filtering CCTV footage). But the real costs to society and democracy are huge compared to a short-lived improvement in crime rates. Below I’ll try to explain some of my bigger worries about the current state of smart cities and why many societies aren’t ready (and probably never will be).

***

This morning I stumbled over a fantastic piece of work[1]: how to mash up smart-city data taken from IoT sensor devices (environmental sensors, CCTV footage, face recognition, location) with data from social-media posts (Twitter & Co). The core focus of their research is a sentiment-analysis platform to gauge citizen satisfaction in the name of improving local municipal services. Who wouldn’t want that? The software engineer in me actually wants to design such a system! The domain is cutting-edge and the possibilities are endless. We’re on the verge of several other breakthroughs in AI. Data science is one of the best-paid disciplines in CompSci. A smart-city architecture lets engineers combine all these exciting new advancements.
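
To see why the engineer in me is tempted, here is a deliberately toy sketch of the kind of sentiment scoring such a platform performs at scale. The word lists and posts below are made up; real systems use trained models rather than a hand-written lexicon, but the principle of turning citizen chatter into a number is the same.

```python
# Hypothetical lexicon; a real platform would use a trained classifier.
POSITIVE = {"great", "clean", "safe", "fast", "love"}
NEGATIVE = {"broken", "dirty", "unsafe", "slow", "hate"}

def sentiment(post: str) -> int:
    """Score a post: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "love the new tram, fast and clean",
    "street lights broken again, feels unsafe",
]
scores = [sentiment(p) for p in posts]
print(scores)  # [3, -2]
```

Aggregate those scores by neighbourhood and feed them to a dashboard, and a city official can “measure satisfaction” in real time, which is exactly the capability whose downsides this post is about.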

A Smart-City should be designed with additional accountability harnesses to limit abuse, such as decentralized technologies. Blockchain-based auditing of public functions (e.g. bidding processes, decision-making, handover of power, …) would also be a step in the right direction. Such decentralised systems would actually empower individuals by allowing us to better track the performance of those who rule us.
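
The core building block behind such blockchain-style auditing is simple enough to sketch: a hash chain in which each entry commits to its predecessor, so any later edit to the record is detectable. The entries and field names below are hypothetical, and a real deployment would add signatures and distributed replication.

```python
import hashlib
import json

def append_entry(chain, record):
    """Append a record, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = digest
    return True

log = []
append_entry(log, "tender #42 awarded to ACME")
append_entry(log, "budget line approved")
print(verify(log))  # True
log[0]["record"] = "tender #42 awarded to someone else"
print(verify(log))  # False
```

The point of the design is that whoever operates the log cannot quietly rewrite history: tampering is not prevented, but it is always evident to anyone who re-verifies the chain.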

Unfortunately I have not come across any papers addressing such solutions, nor will you see much funding for them from government grants. Instead, all current proposals empower the state (otherwise you couldn’t sell them to it). Design proposals leave it up to the state (the customer) to decide which advantages are passed on to citizens. Accountability as a feature doesn’t make money and even sounds like a threat to those who are to be held accountable. The features we already see look promising (smarter parking, telematics, automated billing, eco-friendly management, …), but there are no “features” to protect us from this new centralization of power, benefiting only those in the cockpit.

Data gathered from subscribers (who should be the ones owning it, since it’s their tax money) becomes available to expected third parties such as law enforcement, the IRS, banks and their risk-management proxies. Sooner or later the data will be in the hands of individual hackers, terrorist organisations or a foreign nation-state adversary. It shouldn’t be too hard for even a single attacker to breach a sleepy municipal IT facility. And looking at breach history, the cloud provides no relief.

Imagine a scenario where the attacker is a terrorist stealing the data prior to a physical attack on a city: either to amplify the effects of the actual terror attack (by taking over billboards or SMS warning systems to create more fear, DDoSing emergency hotlines, etc.), or to enable new forms of attack based on the freshly gained, previously unavailable information. Smart cities can no doubt be a great vehicle in peacetime for stable nations.

From a security perspective I'm pessimistic about their real cost to our liberties. Even stable societies can't fully isolate themselves at a time when national intelligence agencies around the globe engage in active attacks and then try to blame them on single, fictional, isolated individuals like Guccifer 2.0. The future of security has a new benchmark, and it's called the Advanced Persistent Threat (APT). Are smart-city projects in Poland and the Baltic countries prepared to have their systems taken over for display?

The hard question isn't how to build smart cities. It's not a technical problem. I'm not trying to belittle the engineering effort, but we know the steps and how to build them.

The questions that should really be asked during design are: what happens if a smart city flicks the switch on democracy, or has its switch flicked by an outside adversary messing with local politics? Are we naive enough to believe that the many "meme-democracies" around the globe (who won't shy away from switching off their Internet in order to preserve the status quo) will not use the data of their local smart city to squash dissent? Consider the coup d'état in Turkey, the "orange revolution" in Ukraine, aggression across the Arab world, and the dividing of the enemy along lines of faith once again.

Citizens and consumers are expected to trust that a smart city closely tied to local politics and business will keep their secrets reliably and securely from third parties, when at the same time we know these parties battle to control how, when and what type of data we consume. Surely they're having a laugh?

Critical topics for smart-city architects to discuss:

  1. Smart cities play a role in cyberwar by increasing data-driven decision-making capability. There are many overlaps where defence interests and political interests are concerned; they are all about "preserving peace". A smart city doesn't create peace. More accurately, it preserves the current state by empowering whoever controls the data. Many features can be implemented in the name of security. To understand how smart cities empower the defence sector, please read:
    • NATO Cyber Security Framework [pdf]
    • Cyber War in Perspective: Analysis from the Crisis in Ukraine (BlackHat 2016) [pdf]
    • Russia’s new generation warfare in Ukraine: Implications for Latvian defence policy [pdf]
    • Cross-Domain Coercion: The Current Russian Art of Military Strategy [link]
    • Denial-of-Service: The Estonian Cyberwar and Its Implications for U.S. National Security [link]
  2. Most who have finished rolling out smart-city security will tell you the system is 100% secure. But no one can even remotely defend against another nation state. Poisoning data sets is far easier: you don't need a lot of security holes to inject information or game the system. So even if you think you're safe, your smart city's core value, the data (the reason we bought the damn thing), is still open to compromise. Many of our future decisions will be made for us by machines to improve our efficiency. If we rely on data to automate our lives and want to trust that data to build models upon, it is essential to at least assess the soundness of our underlying assumptions: that the data we trust is also safe from tampering (see also my comments on why you want a smart city to have a blockchain). Here are the attacks:
    • Attacking Machine Learning classifiers with adversarial examples [pdf]
    • Deep Learning Adversarial Examples – Clarifying Misconceptions [link]
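
The poisoning point is easy to demonstrate. The following sketch uses hypothetical data and a toy nearest-centroid classifier rather than any real smart-city system: it shows how injecting a handful of fake training points silently flips a model's decisions without exploiting a single security hole.

```python
def centroid(points):
    return sum(points) / len(points)

def classify(x, class_a, class_b):
    """Toy nearest-centroid classifier over one-dimensional readings."""
    da = abs(x - centroid(class_a))
    db = abs(x - centroid(class_b))
    return "A" if da < db else "B"

# Clean training data: class A readings cluster around 1.0, class B around 5.0.
class_a = [0.9, 1.0, 1.1]
class_b = [4.9, 5.0, 5.1]
assert classify(2.5, class_a, class_b) == "A"

# The attacker needs no exploit, only the ability to inject a few fake
# "class A" readings. Extreme outliers drag A's centroid far away and
# flip decisions near the old boundary.
poisoned_a = class_a + [-10.0, -10.0, -10.0]  # centroid moves from 1.0 to -4.5
assert classify(2.5, poisoned_a, class_b) == "B"
```

The system still "works", produces no errors and looks 100% secure; only its answers have changed.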

I've been following the Santander smart-city project closely in the ETSI workgroups. There is a lot of awesome potential for better services and an improvement in the environment. Smart cities aren't a technical challenge but a political one. They can be rolled out fast in smaller nations with less bureaucratic complexity; especially centralised regimes with lean decision-making can adopt these solutions very quickly.

Smart cities are not just a way to increase convenience for commuters and provide better parking systems. They are also a way to engineer consent. See Edward L. Bernays' 1947 paper, which coined the term, and the later BBC (3-part) documentary showing our history in this subject since WWII.

But it's not the IoT aspirations of Luxembourg, Monaco, San Francisco or Santander that worry me. Smart cities are most successful when run by an already smart, efficient public sector. Smart cities implemented on top of complex, self-serving bureaucratic processes can become an electronic manifestation of stupidity written in code. And we all know how long code stays in the field once it's shipped.

/*
 * function disclaimer() 
 * When I wrote this, only God and I understood what I was doing.
 * Now, God only knows
 */

In this context "code becoming law" takes on a new and scary meaning. What happens once the human political decision-making process becomes dependent on a smart city's data generation? Smart cities become a vehicle of power through their data, allowing the state to better observe citizens' behaviour and, more importantly (in its eyes), protect itself against dissent. So especially those currently living under oppressive regimes have a lot to lose. Nor should we forget the risk of power suddenly tilting within a moderate country in favour of a right-wing party, as seen in recent EU or US local elections. Do we want our rulers (the better and the worse ones) to wield this kind of power over individuals' lives?

Many regimes across the globe are currently racing to showcase their continent's first smart city and, in the process, to "become the regional flagship, then resell the model throughout the rest of the region". Sounds like a business model fit for a prince? Well, it is.

How does it affect our responsibility as engineers to society and peace in an age where the biggest investors in Cyber(security) are nation states?


Sounds like doublethink to me!


In conclusion, one doesn't have to wear a tinfoil hat to understand that these solutions will swing both ways, and some people are going to get hurt. To all those who think smart cities will liberate humanity from repressive regimes: please think again. They're likely to become high-value targets in cyber warfare. Has anyone thought about dealing with that, or is this left to the experts from NATO CCD COE and national intelligence communities? If we can't protect these cities ourselves, to whom will we contract out their defence? Three-letter agencies and their external private security firms would be happy to help, in exchange for more intrusive ways to track our every move.

***

Thanks for bearing with me during this long post. We'd love to talk to you about your smart-city initiative and help you define a vendor-neutral strategy as well as monetisation strategies. All our proposals are built to empower individuals and are based on well-tested open-source components which can be audited against backdoors. We believe there are better (fairer) ways to monetise than centralised data harvesting, which, regardless of all good intentions, in the end always leads to a security disaster.

pretty good reading:

  • ENISA Architecture model of the transport sector in Smart Cities [link]
  • US Department of Homeland Security: Future of Smart Cities: Cyber-Physical Infrastructure Risk [link]
  • Cyber security challenges in Smart Cities: Safety, security and privacy [link]
  • Cesar Cerrudo 2015 BlackHat slides on Hacking Smart-Cities [link] (youtube [video])
  • Panic City: For proponents of “smart cities,” urban complexity can simply be coded away [link]
  • Smart Cities Are Going to Be a Security Nightmare [link]
  • “Smart Cities,” Surveillance, and New Streetlights in San Jose [link]
  • Smart cities? Tell it like it is, they’re surveillance cities. Lots of lovely data, less of lovely privacy [link]
  • PROJECT: Smart Cities: Sacramento and the New State of Surveillance [link]
  • Surveillance issues in smart cities [link]
  • Tech Delivers Smart Cities – & Surveillance States [link]
Joachim Bauernberger

Passionate about Open Source, GNU/Linux and Security since 1996. I write about future technology and how to make R&D faster. Expatriate, Entrepreneur, Adventurer and Foodie, currently living near Nice, France.

Using Big Data to Analyse your Personality and Character

The age of Big Data, Machine Learning and Predictive Analytics promises some truly revolutionary changes in the way a modern HR process selects, filters and narrows down the candidate pool. We are told that these systems can reduce OPEX/CAPEX, speed up the selection process and add quality to our hiring by spending time only on those applicants that matter.

I personally have a love/hate relationship with Big Data, but more about that later. Let's first take a look at what modern HRM systems promise.

For those unaware: these are the systems operating behind the scenes after you have submitted your CV for a specific job. Many are still pretty "dumb", i.e. nothing more than simple cloud-based databases which map your CV to a specific job within the system and support a simple workflow within HR.

Some newer platforms, which are either hitting the market now or have already been implemented by more tech-savvy employers themselves, are a different breed though. They might hook into systems like Watson or similar analytics platforms with state-of-the-art artificial intelligence (AI).

They don’t just map skills from your CV to the job-spec and produce a ranking, but attempt to analyse your personality and character traits to decide if you will be a strong, mediocre or unlikely fit to the organizational culture.

Here is an overview of what many of the modern systems promise [source] (please bear with me through the spammy marketing tone in these bullets):

  • Ability to screen candidates using smart technology: Whether candidates submit their applications using an online application form, through a job board or through your internal career site, the candidates are screened and automatically ranked on specific qualities and skills.
  • Automatic scanning for exceptional talent: Not having a relevant vacancy should not mean missing good candidates. You can set up automated queries to scan each enquiry for a specific profile, generating an automated notification when a candidate fitting your profile applies.
  • Integration of 3rd party assessment tools: You may choose to increase the predictability of your screening and selection process by integrating market-leading tests and assessment tools from 3rd party test vendors to help make more objective hiring decisions.
  • Automated profiling and Talent Pool creation: Once you have built your talent pool searching within this invaluable resource will become your first step in uncovering talent for new opportunities. Our solution ensures that information from interviews and assessments are automatically added to the candidate’s profile, providing you with an even wider criteria base for screening when searching in the talent pool.
  • Weighted Scoring: Virtual Psychology’s e-Recruitment solutions give you the ability to weigh questions with a score, based on responses, which allows the recruiter to assess applicants at a glance.
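
To make the "Weighted Scoring" bullet above concrete, here is a hypothetical sketch (all weights, questions and candidate names are invented for illustration) of how such a feature typically collapses answers into a single at-a-glance number:

```python
# Each screening question carries a recruiter-chosen weight; a candidate's
# answers are reduced to one score used for ranking "at a glance".
WEIGHTS = {
    "years_experience": 3.0,
    "has_degree": 1.0,
    "willing_to_relocate": 0.5,
}

def score(answers: dict) -> float:
    """Weighted sum over the configured questions; missing answers count as 0."""
    return sum(WEIGHTS[q] * answers.get(q, 0) for q in WEIGHTS)

candidates = {
    "alice": {"years_experience": 8, "has_degree": 1, "willing_to_relocate": 0},
    "bob":   {"years_experience": 3, "has_degree": 1, "willing_to_relocate": 1},
}
ranked = sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)
# alice scores 25.0, bob scores 10.5
```

Note how much power hides in the weight table itself: whoever sets `WEIGHTS` has already decided who "looks good at a glance".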

Let's take a look at the currently undisputed leader in AI technology: Watson. Developed by IBM, it is probably the most advanced AI engine and comes with an API allowing integration into pretty much any product.

You can take Watson for a spin by visiting http://watson-um-demo.mybluemix.net/ and checking what it says about your personality by pasting in your blog posts, LinkedIn summary, or job application (hint: cover letter). I did a quick test run, throwing in all my blog posts, and was astonished by how well Watson knows me. (Between you and me, I felt Watson understood me better than my ex-wife did, but that's another story.)

The "techie" in me loves this sort of stuff, because a simple interface hides the arcane complexity and produces results which are truly amazing. (I'd be interested to hear feedback on how well you thought Watson knew you.)

Before your eyes glaze over in awe and you decide to eliminate the risk of bias in decision-making by outsourcing the hiring process to a machine, please take a critical look and ask yourself: aren't you automating bias?
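
The point is easy to illustrate with a deliberately tiny sketch: a "model" that simply learns hire rates from historical human decisions. The data and threshold here are invented; the mechanism is what matters. If the history was biased, the model faithfully automates that bias rather than eliminating it.

```python
# Past human hiring decisions, with group X historically favoured.
history = [
    ("X", True), ("X", True), ("X", True), ("X", False),
    ("Y", True), ("Y", False), ("Y", False), ("Y", False),
]

def hire_rate(group: str) -> float:
    decisions = [hired for g, hired in history if g == group]
    return sum(decisions) / len(decisions)

def model_recommends(group: str, threshold: float = 0.5) -> bool:
    """The 'objective' machine recommendation: just the old bias, laundered."""
    return hire_rate(group) >= threshold

assert model_recommends("X")      # 75% historical hire rate
assert not model_recommends("Y")  # 25%: the bias now looks like data
```

A real system is vastly more complex, but the failure mode is the same: the training data encodes yesterday's prejudices, and the model presents them back as neutral statistics.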

Apologies for being stoic (aka "2000 and late") in my thinking. Let's take a look at how the world's most advanced AI system judges some well-known but "less liked" personalities from recent history:

1) Joseph Mengele (aka the Angel of Death) gets the following result when we feed the loving letter to his wife into Watson:

You are social, somewhat verbose and can be perceived as shortsighted.
You are assertive: you tend to speak up and take charge of situations, and you are comfortable leading groups. You are unconcerned with art: you are less concerned with artistic or creative activities than most people who participated in our surveys. And you are respectful of authority: you prefer following with tradition in order to maintain a sense of stability.
Your choices are driven by a desire for sophistication.
You consider helping others to guide a large part of what you do: you think it is important to take care of the people around you. You are relatively unconcerned with tradition: you care more about making your own path than following what others have done.


IBM Watson on Joseph Mengele, a physician in the concentration camp Auschwitz and the doctor known as the “Angel of Death.”

Well done, Herr Mengele: Watson thinks you are highly suitable for a job in most tech companies. Your ability to assist and guide others would be a great asset to our organization. However, we feel that your desire to take charge and speak your mind, as well as your tendency to make your own path instead of following what others have done, would make you a more suitable fit for a fast-paced tech start-up than an established firm like ours.

2) Watson's thoughts on Osama bin Laden, based on his letter to the American people:

You are confident and heartfelt.

You are laid-back: you appreciate a relaxed pace in life. You are confident: you are hard to embarrass and are self-confident most of the time. And you are calm under pressure: you handle unexpected events calmly and effectively.

Your choices are driven by a desire for modernity.

You are relatively unconcerned with tradition: you care more about making your own path than following what others have done. You consider helping others to guide a large part of what you do: you think it is important to take care of the people around you.


IBM Watson on Osama bin Laden

🙂 Maybe the conspiracy theorists were right after all and 9/11 was an inside job? Maybe Osama is in fact still alive, spending his days surrounded by forward-thinking hipster friends in Brooklyn? I guess we need more data to say for sure.

Bad jokes aside, Watson's predictions resonate with us especially when they are charming. Watson confirms what we want to believe: that we are special, have leadership qualities and really, really care about others. We tend to accept something nice about ourselves eagerly and without critical thinking. Don't use AI to judge the personality of others, when most of the time we can't even trust ourselves with such judgement.

Another problem with such a system is that it never forgets. Once you are labelled, it's hard to shed that label. People change, and should be allowed to make mistakes.

Last but not least, AI prediction in psychological analysis (even if it works 100% correctly) becomes ineffective when these tools are used to craft or sanitize the input to make it conform. A cover letter, CV or any writing that is massaged to satisfy the tool is one of the biggest problems with data. Used in such intrusive ways, it leads us down a path of self-censorship and towards a world where only machines will read what you have to say, because everyone else will find you utterly boring.
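
How easily such tools are gamed can be shown with a deliberately naive keyword scorer (the keywords and letters below are invented for illustration). Once applicants know the tool exists, its signal collapses:

```python
# A naive screening heuristic: count how many "desirable" buzzwords appear.
KEYWORDS = {"leadership", "synergy", "agile", "innovative"}

def keyword_score(cover_letter: str) -> int:
    words = set(cover_letter.lower().split())
    return len(words & KEYWORDS)

honest = "I spent ten years maintaining embedded C code for medical devices."
# The same letter, massaged to satisfy the tool:
massaged = "Innovative agile leadership synergy " + honest

assert keyword_score(honest) == 0
assert keyword_score(massaged) == 4
```

Real HR tools are more sophisticated than a word count, but the incentive is identical: whatever the tool measures, applicants will optimise for, and the measurement stops meaning anything.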



The Imposter Syndrome in Software Development

"The impostor syndrome is a psychological phenomenon in which people are unable to internalize their accomplishments. Despite external evidence of their competence, those with the syndrome remain convinced that they are frauds and do not deserve the success they have achieved. Proof of success is dismissed as luck, timing, or as a result of deceiving others into thinking they are more intelligent and competent than they believe themselves to be."

This is the definition Wikipedia provides for “Imposter Syndrome”.

Are you always questioning whether all the other developers you work with are more talented than you are? Do you fear that people will discover you are faking your skills and think you are a phony? And … are you a Software Engineer?!

The imposter syndrome is very common in professions where work is peer-reviewed, such as journalism, writing, … and yes: software development. It is a sign that you apply extremely high standards to yourself which are out of balance with how you view others. Pair programming can be particularly stressful, as can writing open-source software and other activities which push you into being genuine. But keep in mind that people with true ability tend to underestimate their relative competence.

I'm well aware of how we software engineers are judged. The industry still believes that the best programmers have no hobbies other than software development, and that when we go home after a 12-hour work day, we're expected to recharge our batteries while hacking away on some pet project or busying ourselves learning one of the latest programming languages which are all the rage now.

Unfortunately this behavior doesn't actually make us better programmers. What it will make you, though, is a burnt-out programmer. Even worse, if all your input in life comes from data/info/books/discussions related to programming, I'd go as far as to say: you will not only turn into an uninteresting individual, but eventually even lack the ability to see the bigger picture in your own tech projects!

Don't believe me? Consider this: it's 03:00 AM and you're hunting for one of the hardest bugs you've ever had to find in your code. You've already been stuck on this problem for the last 7 hours! You finally give up, and when you get back to your desk a few hours later (after some sleep or a walk with your dog): voilà! You've found the problem by doing nothing. Actually, your mind found the problem.

Chances are you know this pattern very well and always wondered why sometimes the hardest bugs seem to get fixed by letting go.

The same applies on a macro level. If you allow your mind some rest on a regular basis, even when work is your hobby, then you will automatically get better at your job and become more balanced than before.

If you feel like an imposter, remember: the best work in science is built on previous research. Have a look at the computing pioneers. Whether Charles Babbage, Dennis Ritchie, Einstein or Satoshi Nakamoto, … they all expanded on someone else's work.

And being original isn't nearly as important as being able to identify what is already available and can be re-used with a bit of "glue".

Sometimes we put too much on our plate, but after some time we can handle it, and we realise we aren't phonies. Then we move on to the next challenge. It's OK to fake it until you make it; we all do it to grow our skills.

Learning new programming languages can be nice, but what can be equally satisfying, in terms of both job security and pleasure, is deepening your understanding of a realm you thought you already knew well. As a recruiter I prefer talking to people who have fewer than 5 languages on their CV and are real experts in them, rather than a hipster engineer with ADHD listing 20 exotic languages, where I'm sure they'll lack deeper understanding of every single one.

Also remember that as a senior programmer it isn't just the number of projects you have worked on or the languages you know that matter, but also your ability to understand and translate requirements given to you by a person who doesn't care how they are implemented.

Sometimes being able to break the ice by talking about your last hiking trip or your passion for travel will get you further than cutting to the chase with low-level design details.

If you have turned your passion into work and love your work so much that it has become another way of expressing the passions in your life, remember: nothing ever lasts! So your best chance of hanging on to that passion for longer is to give yourself a break regularly and stay balanced.
