What, in your opinion, are the biggest opportunities and risks for the law in the UK over the coming decade in relation to the development and use of artificial intelligence?

Does the ethical development and use of artificial intelligence require regulation? If so, what should the purpose of that regulation be?

Artificial intelligence systems are likely, at some point, to malfunction, underperform or otherwise make erroneous decisions that cause individuals harm. Do new mechanisms for legal liability and redress need to be considered in these situations, or are existing legal mechanisms sufficient?

If new legislation were to be introduced to deal with the issues presented by artificial intelligence, should the UK Government go it alone and look to lead the way, or should it seek to collaborate with other governments to create international frameworks for legislation?

As artificial intelligence systems become increasingly autonomous in practice, will the legal system need to change in order to reflect and accommodate this autonomy, or are current mechanisms sufficiently adaptable?

When artificial intelligence systems are developed or trained using publicly owned data, or personal data, who should own them? Should alternative models that allow individuals or trusts to retain ownership of personal data be explored?

What impact is artificial intelligence currently having on the legal profession itself, and how do you anticipate this developing over the next decade?

If there were one recommendation you would like to see the committee make at the end of this inquiry, what would it be?