By Ken Fox
At the 2018 CALL/ACBD Conference in Halifax, Michael Ridley of the University of Guelph delivered a fascinating discussion on The Right to Explanation: Artificial Intelligence, Information Policy, and the Black Box of Deep Learning. His talk filled my head with questions and my notebook pages with confused scratching purporting to be notes. What follows is my best excuse for a coherent account based on said scratchings.
The term Artificial Intelligence (AI) has carried many popular meanings. What they have in common is this: AI is what we haven’t done yet – it’s always the next thing. But the technology itself, for the most part, belongs to the classical age of programming. The features in popular software that anticipate your consumer behaviour rest on relatively simple linear regressions – old-school statistical logic.
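To make that concrete, here is a minimal sketch of that old-school statistical logic – my own toy illustration, not anything Ridley presented, with invented features and numbers: an ordinary least-squares regression guessing a customer’s next-month spend from past behaviour.

```python
import numpy as np

# Hypothetical data -- the talk named no datasets or tools.
# Each row: [visits last month, items viewed, past purchases]
X = np.array([[12, 40, 3],
              [3, 10, 0],
              [25, 90, 7],
              [8, 22, 1]], dtype=float)
y = np.array([310.0, 45.0, 720.0, 150.0])  # next-month spend ($)

# Ordinary least squares: classical statistics, no deep learning.
X1 = np.hstack([np.ones((len(X), 1)), X])      # prepend intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)  # solve min ||X1·w - y||

new_customer = np.array([1.0, 10, 30, 2])      # intercept + features
print("predicted spend:", new_customer @ coef)
```

Nothing mysterious happens here: the model is a weighted sum, and the weights themselves are the explanation.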
As such, AI already permeates our online life. It is a pervasive ghost in the filters of our search engines.
There is a new movement toward autonomous AI, which has the attribute of agency. Given sufficient background information, the system can make decisions without a programmer’s assistance.
Consider AlphaGo Zero. Its designers at DeepMind endowed the system with the rules of Go, humanity’s most complex game. For forty days and forty nights the system played the game non-stop against itself, learning strategy through iteration. Upon completing its rigorous trial, AlphaGo Zero played the world Go champion and soundly defeated him. The human master was shaken by the experience, reflecting that the computer played the game like no other player, frequently making seemingly illogical, outrageous moves. Was the tactic to confound strategy? To unsettle the opponent? Does the system have emotional intelligence? These questions remain unanswered. The best human players are now mimicking the machine’s tactics.
The European Union’s General Data Protection Regulation (GDPR) is now in effect, and one of its key features is the right to an explanation. The regulation purports to legislate the idea that if an algorithm makes a decision about you, you have the right to know why – to see the process by which the decision was made. Although the law is effective only in Europe, its impact is global, thanks to the Brussels effect.
But the right to an explanation, though now law, runs up against the black box problem. AI is opaque. Data goes in. Unsupervised learning occurs. Processed data comes out.
Input, black box, output.
Deep learning has no explanatory power.
For example, no explanation is offered when AI perpetuates poverty and discrimination in insurance, advertising, education, and policing. This tale is told in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil and Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble.
Explainable AI (XAI) is a name for all attempts to see inside the black box.
Sometimes XAI manifests in proofs: cause and effect, classical logic, mechanisms, mathematics. Often it employs validations: going under the hood to pick apart the algorithm, removing bits of code and recording the system’s response. Other times it takes the form of authorizations: codes, standards, expertise, audits, legislation, regulation, due process.
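In the spirit of validation, here is a toy probe of my own devising (not from the talk): treat the model as a sealed box, nudge each input in turn, and record how the output moves.

```python
import numpy as np

def sensitivity_probe(predict, x, delta=0.1):
    """Perturb each feature of x and record the shift in the output.

    `predict` can be any black-box model taking a 1-D feature vector;
    the probe never looks inside the box.
    """
    baseline = predict(x)
    shifts = []
    for i in range(len(x)):
        nudged = x.copy()
        nudged[i] += delta
        shifts.append(predict(nudged) - baseline)
    return np.array(shifts)

# A toy black box: its weights are hidden from the probe.
hidden_w = np.array([0.5, -2.0, 0.0])

def black_box(x):
    return float(x @ hidden_w)

print(sensitivity_probe(black_box, np.array([1.0, 1.0, 1.0])))
# Large shifts flag the features the box actually responds to.
```

Real validation tools are far more sophisticated, but the principle is the same: interrogate the box from outside and infer what it attends to.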
XAI is often achieved only by limiting a system’s complexity. The X is purchased at the expense of the I.
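One way to picture that trade, in a sketch that assumes scikit-learn (my choice of tool, not the speaker’s): cap a decision tree’s depth, and you get rules a human can read end to end, at some cost in accuracy.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# Capping the depth trades intelligence for explainability:
# the whole model fits in a few printed lines.
shallow = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(shallow, feature_names=iris.feature_names))
print("training accuracy:", shallow.score(X, y))
```

An unbounded tree (or a deep network) would score higher, but its reasoning would no longer fit on a page.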
What happens to libraries and librarians when machines can read all the books? asks Chris Bourg in Feral Librarian. Are machines and algorithms to become a new class of client?
How do we make the law into machine-readable data?
Today’s AI issues can be related to libraries by analogy. Overfitting – when an analysis follows the particularities of its data so closely that it loses the power to generalize – resembles our propensity to rely on Google or Wikipedia, where results are obtained too quickly and cheaply. Dimensionality, the problem of an overabundance of variables in need of reduction, relates to information overload (though to me this seems to be the same problem – too much data – in a different context, rather than an analogy). Hand engineering, the process of manually tweaking the data to improve results, can be compared to metadata, the conceptualization of information.
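For readers who want to see overfitting rather than take it on faith, a quick sketch with invented data (mine, not Ridley’s): fit polynomials of two different degrees to a few noisy points and compare how well each generalizes.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 12)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)  # the true, noise-free pattern

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: test error {test_err:.3f}")
# The degree-9 fit chases every noisy training point and strays
# between them -- it memorizes particulars rather than the pattern.
```

The quick, cheap answer fits what it has already seen; the general answer holds up on what it has not.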
XAI by AI: perhaps the way into the black box is more AI, not less – machines that explain machines.
The new digital divide is between those using algorithms and those used by algorithms.