Everybody Is Talking About AI

If you have ploughed through the thousands of articles, and even books, written about AI recently, you might believe either that AI is the answer to almost every question or that it will bring about the end of civilisation as we know it.

But what is AI?

The Built In website describes it like this:

“Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. While AI is an interdisciplinary science with multiple approaches, advancements in machine learning and deep learning, in particular, are creating a paradigm shift in virtually every sector of the tech industry. 

“Artificial intelligence allows machines to model, or even improve upon, the capabilities of the human mind. And from the development of self-driving cars to the proliferation of generative AI tools like ChatGPT and Google’s Bard, AI is increasingly becoming part of everyday life — and an area companies across every industry are investing in.”

So is AI a new thing? 

Searching the Harvard website, I found this:

“The Logic Theorist was a program designed to mimic the problem solving skills of a human and was funded by Research and Development (RAND) Corporation. It’s considered by many to be the first artificial intelligence program and was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) hosted by John McCarthy and Marvin Minsky in 1956.”

So AI, as a concept, is at least 67 years old, and the underlying idea is probably much older still. Technological advances mean that today’s computers finally have the speed and capacity to turn some of the concepts discussed all that time ago into reality.

The sheer volume of information available about AI is generating more heat than light right now, but it is clear that AI will have a profound impact on how business evolves and, in turn, on how those changes affect and benefit everyday life. A number of senior figures in the technology and AI spaces have voiced concerns that AI needs to be controlled and regulated to ensure that it delivers on its potential and doesn’t become the divisive and destructive force that some fear.

This article in the Harvard Gazette gives a good, balanced view of where we stand today:

Great promise but potential for peril.

I’ve extracted this piece, but I recommend that you read the whole article.

“For decades, artificial intelligence, or AI, was the engine of high-level STEM research. Most consumers became aware of the technology’s power and potential through internet platforms like Google and Facebook, and retailer Amazon. Today, AI is essential across a vast array of industries, including health care, banking, retail, and manufacturing.

“But its game-changing promise to do things like improve efficiency, bring down costs, and accelerate research and development has been tempered of late with worries that these complex, opaque systems may do more societal harm than economic good. With virtually no U.S. government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice without having to answer for how they’re ensuring that programs aren’t encoded, consciously or unconsciously, with structural biases.”

The revealing thing for me in this article was the phrase “to make determinations . . . without having to answer for how they’re ensuring that programs aren’t encoded, consciously or unconsciously, with structural biases.”

How we create and utilise AI systems needs to be closely monitored to ensure that the way those systems evaluate and use data doesn’t skew the outputs with inherent bias.

This article by McKinsey & Co. explains this dilemma quite nicely:

Tackling bias in artificial intelligence (and in humans)

“AI can help reduce bias, but it can also bake in and scale bias”

“Biases in how humans make decisions are well documented. Some researchers have highlighted how judges’ decisions can be unconsciously influenced by their own personal characteristics, while employers have been shown to grant interviews at different rates to candidates with identical resumes but with names considered to reflect different racial groups. Humans are also prone to misapplying information. For example, employers may review prospective employees’ credit histories in ways that can hurt minority groups, even though a definitive link between credit history and on-the-job behavior has not been established. Human decisions are also difficult to probe or review: people may lie about the factors they considered, or may not understand the factors that influenced their thinking, leaving room for unconscious bias.

“In many cases, AI can reduce humans’ subjective interpretation of data, because machine learning algorithms learn to consider only the variables that improve their predictive accuracy, based on the training data used. In addition, some evidence shows that algorithms can improve decision making, causing it to become fairer in the process. For example, Jon Kleinberg and others have shown that algorithms could help reduce racial disparities in the criminal justice system. Another study found that automated financial underwriting systems particularly benefit historically underserved applicants. Unlike human decisions, decisions made by AI could in principle (and increasingly in practice) be opened up, examined, and interrogated. To quote Andrew McAfee of MIT, “If you want the bias out, get the algorithms in.”

“At the same time, extensive evidence suggests that AI models can embed human and societal biases and deploy them at scale. Julia Angwin and others at ProPublica have shown how COMPAS, used to predict recidivism in Broward County, Florida, incorrectly labeled African-American defendants as “high-risk” at nearly twice the rate it mislabeled white defendants. Recently, a technology company discontinued development of a hiring algorithm based on analyzing previous decisions after discovering that the algorithm penalized applicants from women’s colleges. Work by Joy Buolamwini and Timnit Gebru found error rates in facial analysis technologies differed by race and gender. In the “CEO image search,” only 11 percent of the top image results for “CEO” showed women, whereas women were 27 percent of US CEOs at the time.”
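The mechanism McKinsey describes, a model learning bias from historical decisions, can be illustrated with a deliberately simple sketch. This is a hypothetical toy example; the groups, data and “model” below are all invented for illustration and are not drawn from any of the studies quoted above. A predictor that simply learns past hire rates will reproduce the historical gap when scoring new, equally qualified candidates:

```python
# Toy illustration of bias baked into training data (hypothetical data).
# Historical decisions: (group, qualified, hired). Qualified candidates
# from group "B" were hired less often than those from group "A".
historical = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", True, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, False),
]

def learn_hire_rate(records, group):
    """Fraction of qualified candidates from `group` who were hired historically."""
    hires = [hired for g, qualified, hired in records if g == group and qualified]
    return sum(hires) / len(hires)

# A naive "model" that scores new candidates by the learned historical rate:
rates = {g: learn_hire_rate(historical, g) for g in ("A", "B")}
print(rates)  # {'A': 0.75, 'B': 0.25}: the historical gap is reproduced at scale
```

Equally qualified candidates from the two groups receive very different scores, purely because the training data encoded past bias. This is why the IoD recommendations below stress auditing data sources and testing for bias before and after deployment.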

So back to the key question:

“What role should Company Boards have in the development, roll-out and evolution of AI technologies within their organisations?”

According to the linked IoD report, quite a lot, and quite a major responsibility at that.

Again, I would encourage you to read the whole report, but I’ve extracted this portion, which sets out the 12 things that Boards should be doing on a regular basis.

  1. Monitor the evolving regulatory environment.
  2. Continually audit and measure what AI is in use and what it’s doing.
  3. Undertake impact assessments which consider the business and the wider stakeholder community.
  4. Establish board accountability.
  5. Set high level goals for the business aligned with its values.
  6. Empower a diverse, cross-functional ethics committee that has the power to veto.
  7. Document and secure data sources.
  8. Train people to get the best out of AI and to interpret the results.
  9. Comply with privacy requirements.
  10. Comply with secure-by-design requirements.
  11. Test and remove from use if bias or other impacts are discovered.
  12. Review regularly.

I’ve highlighted point 6 because it is easily overlooked. Having a review process is important, but having the ability to say “Stop”, and the teeth to make that happen, is key.

For many boards there will be a steep learning curve: discovering more about AI, how it is being used within their companies, and how it is expected to evolve over time. A couple of good reads recommended by a friend of mine are:

Power and Prediction: The Disruptive Economics of Artificial Intelligence

A video of the author talking about the book can be found here
(the actual talk starts 5 minutes in)

Plus

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity

Conclusion

In summary, AI will keep evolving at an ever-increasing rate. Boards face the challenge of harnessing that progression for the good of all of their stakeholders, while also playing a role, within their industries, in shaping the regulation that will help us get the most from AI for the benefit of all.

At Praxonomy, we believe in taking a thoughtful and strategic approach to incorporating AI into our board portal solution, Boardlogic. Our key priority is to incorporate AI functionality in ways that continue to ensure high-end data security whilst creating new value and added convenience for both administrators and end-users. Clients can expect to see new AI-driven functionalities that build on Boardlogic’s already robust capabilities. Some of the new release changes will be subtle. All of them will be designed to make work-life easier for board members and the people who support them.

If you would like to learn more about Boardlogic, click here.
Or schedule a demo with us here.