Co-creation and input from internal expertise are critical to a project’s success, especially when said “project” is the backbone for how you develop, harness, and unleash emerging technology.

In our first article about our AI Governance Framework, we discussed why it’s necessary to create a framework that guides the ever-evolving process of artificial intelligence development and application at ATB. But to define the framework’s guiding principles, we had to continually seek input and expertise from ATB’s own data scientists and developers. After all, they’re tasked with being a leading force of technological disruption at ATB, so it was important to balance the need to manage risk and define the parameters of the sandbox without dampening our team members’ ability to explore opportunities and push limits.

Creativity likes constraints (well, just the right amount of them). Imagine building a solution without knowing the restrictions beforehand. Too large a sandbox not only dilutes purpose and focus, but has even harsher consequences for projects that inevitably get stuck in endless feedback sessions and approval layers because too many people and departments view them through their own perspectives. Ultimately, a lack of governing principles could deter developers from creating applications at all, because everything would take even longer to develop, deploy, and implement. A governance framework empowers developers with the necessary information, so they know what to do (and what they can do), and it creates the right mechanisms to streamline the validation, QA, and approval processes. At the same time, the necessary controls and checks are put in place and risks are mitigated before anything goes into production.

ATB’s AI Governance Framework was spearheaded by Yukun Zhang, Analytical Model Specialist in Data Science at ATB, who also developed our Model Management Framework (which governs our data science models, ensuring the integrity and validity of our data sets). To Yukun’s credit, the process for creating the AI governance framework was cracked open to encourage input from data scientists and developers across the organization, because AI spans a wide range of technology disciplines and will very likely touch most aspects of our business as we move forward.

Hence, the guardrails for developing and testing artificial intelligence were defined collaboratively with the team members who develop the technology, as well as our leadership team members who want to be sure that the sandbox for creation has the proper barriers to protect the organization and Albertans.

At the same time, the Governance Framework had to strike a balance between ATB’s tolerance for risk and the mandate we have to build cutting-edge solutions. As a financial institution, there are regulations already in place (and more likely coming in response to AI and machine learning), so we’ve aimed to be proactive rather than reactive. Let’s make sure that our work is considerate, fair, and transparent for all the right reasons.

To do so, we started by defining what artificial intelligence means for our organization and identified four categories of AI technology that are a focus at ATB:

Machine Learning

Robotic Process Automation

Computer Vision

Text and Voice/Natural Language Processing

Through this exercise, team members looked at the full picture: the use cases we have already solved, the use cases we plan to tackle next, and the data the AI we create will work with. We wanted a framework that covers what we are doing now and what we envision ourselves doing in the near future (i.e., the next five years). This meant a long, hard look at data science and data management to create a Model Management Framework as the foundation upon which the AI Governance Framework could be built.

Our data science team, led by Enterprise Data Science Lab Director Kwame Asiedu, is responsible for three main aspects of data science: developing the models our AI technology is built upon (both on-premise and in the cloud); governing that technology; and building data science capability (working with our developers to take our AI solutions to commercialization). Asiedu and the Enterprise Data Science team work on the building blocks of our AI solutions, and their work is critical to ensuring that the technology we develop is not just geared towards commercial success but, more importantly, built responsibly. That starts with our data science models.

Our developers build solutions powered by data science models that are developed by our data scientists and governed by our Model Management Framework. The framework provides standards and guidelines for how models are documented, including the datasets, the ideas, the problems they are meant to solve, and the outputs they produce. Our model managers validate each model, checking for bias, ethics, balance, and quality to ensure the data scientists have done it right. From there they can approve or reject the model; if it is rejected, the data scientist is made aware of what went wrong and what may need to change. This validation process is also documented, and that documentation is attached to each model. When we do things like set a threshold for model decay, the Model Management Framework helps us ensure fairness and transparency and avoid bias. This framework had to be created and put in place even before our AI development frameworks, and it leaned heavily on our other frameworks, such as ethics, information security, data security, and data governance.
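To make the validate-then-approve flow concrete, here is a minimal sketch in Python of what a documented validation record with an approve/reject decision and a model-decay threshold could look like. All names, fields, and the threshold value are illustrative assumptions for this post; they are not taken from ATB’s actual Model Management Framework.

```python
from dataclasses import dataclass, field

# Illustrative only: the real framework's checks and thresholds are internal to ATB.
DECAY_THRESHOLD = 0.05  # hypothetical max allowed drop in validation accuracy


@dataclass
class ValidationRecord:
    """A documented validation result attached to a single model."""
    model_name: str
    checks: dict = field(default_factory=dict)  # e.g. {"bias": True, "ethics": True}
    baseline_accuracy: float = 0.0
    current_accuracy: float = 0.0
    notes: list = field(default_factory=list)

    def decayed(self) -> bool:
        """Flag the model if accuracy has dropped past the decay threshold."""
        return (self.baseline_accuracy - self.current_accuracy) > DECAY_THRESHOLD

    def approve(self) -> bool:
        """Approve only if every documented check passed and no decay is flagged.

        Rejections record what went wrong, so the data scientist knows
        what to change, mirroring the feedback loop described above.
        """
        if not all(self.checks.values()):
            failed = [name for name, ok in self.checks.items() if not ok]
            self.notes.append(f"Rejected: failed checks {failed}")
            return False
        if self.decayed():
            self.notes.append("Rejected: model decay beyond threshold")
            return False
        self.notes.append("Approved")
        return True


record = ValidationRecord(
    model_name="example_model_v2",
    checks={"bias": True, "ethics": True, "quality": True},
    baseline_accuracy=0.91,
    current_accuracy=0.89,
)
print(record.approve())  # True: all checks pass and the 0.02 drop is under 0.05
```

The key design point is that the decision and its reasons live in the same record that travels with the model, which is what makes the process auditable.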

From a developer standpoint, the AI Governance Framework needed to be comprehensive but non-restrictive. It should describe what the technology is and define the areas we are working in. We’re not building with AI in general; we are working on solutions that are more granular. We need frameworks around chatbots, for example, but not automated cars. We needed to know what we would be building for, what the challenges are, and what is and isn’t permissible, with enough context. It may seem like common sense, but having such frameworks helps developers understand, in the long run, the space they can work within.

AI is a revolution, and it is inevitable that it will transform many facets of life, including banking. What AI lets us do is powerful: we can improve the customer experience, and even improve lives. But if that same power is not used properly, it can cause harm. Our AI Governance Framework ensures that we are addressing our greatest concern: building solutions that are responsible and put our customers first. Albertans should feel safe when they are dealing with ATB and our applications. Interactions with us as a bank should be upheld as fair and ethical. When people give us their data, we want them to trust that their data will not be used against them or to harm them.

Banking is deeply personal, and that is top of mind for us when we are creating solutions with AI. We know that we are not building on random data sets. Our work has a real impact on real people, and we want to make sure that Albertans know that we are acting responsibly.