20 June 2023
ALPSP has submitted its response to the UK Government White Paper consultation: AI regulation: a pro-innovation approach.
The response in full:
1. Do you agree that requiring organisations to make it clear when they are using AI would adequately ensure transparency?
Clear labelling of how, where, and when AI is used, whether in a final output, in work processes, or in a learning model, should be the minimum transparency obligation imposed. Given the complexity of AI technology, labelling may need to vary by sector, technology, or use. Minimum labelling obligations should be defined, including clear declarations of how content was sourced and licensed.
2. What other transparency measures would be appropriate, if any?
Consent over the content used in AI training is fundamental to the development of a healthy, sustainable and equitable technology and industry. We therefore do not understand why this proposed regulatory framework does not seek to address ‘the balancing of the rights of content producers and AI developers’ [34]. If this balance is not at the heart of every AI regulation, we do not see how the government hopes to fulfil its intention of creating ‘regulation [that] can increase innovation by giving businesses the incentive to solve important problems while addressing the risk of harm to citizens’ [31], and we urge the government to reconsider. Any consultation on AI should include a wide range of stakeholders, and broad partnership should be sought across the spectrum of the creative industries.
No one understands the building blocks of AI better than the people who created those blocks. Rightsholders have a vested interest in ensuring their work is treated appropriately, and ALPSP members are already receiving calls from authors requesting that their work be blocked from reuse in AI systems on professional and ethical grounds. Rightsholders are the safeguard against the creation of technology based on abusive or false information, and their centrality to proper AI governance should be recognised more clearly in current government policy. Without rightsholder engagement, we do not understand who will safeguard against poisoned datasets, and we have serious concerns over the development of AI technology built on duplicitous, false, or misleading material.
3. Do you agree that current routes to contestability or redress for AI-related harms are adequate?
It would be useful to address whether the current UK intellectual property regime provides an adequate basis for redress of AI-related harms: how would numerous minor infringements, each of low independent value, be dealt with? Where could claimants turn when their personality, style, or other quasi-IP rights are infringed, rights which may currently struggle for protection under the UK IP system? What happens when both intellectual property and data protection rights are infringed? We support further investigation into the range of measures that may need to be created to ensure proper redress for individuals within the UK who suffer harm due to AI technology.
4. How could routes to contestability or redress for AI-related harms be improved, if at all?
In order to understand what harm AI may cause, and has caused, it is vital that rightsholders are fully informed of the extent to which AI models infringe their intellectual property rights, through both past and future conduct. Once these harms are acknowledged, regulators need to be empowered to provide redress to individuals, which may include managing a considerable volume of complaints across a range of sectors and legislative goals. A more cost-effective system for individuals to report harm outside of the court system may also be needed. We are concerned that not all sectors are regulated adequately, and believe that greater enforcement powers should be granted to ensure compliance (and avoid further legal and ethical harms). Additionally, we recognise that, given the pace of technology, any solutions implemented in the near future will need constant review to ensure they remain fit for purpose.
5. Do you agree that, when implemented effectively, the revised cross-sectoral principles will cover the risks posed by AI technologies?
No, not as stated in the proposal. The proposal envisions a ‘pro-innovation’ approach, and adequate copyright protection lies at the heart of the UK’s innovative advantage. As defined in the proposal, we do not believe that any of the five principles properly addresses the improper reproduction, distribution, or making available of creative works by AI. The value of intellectual property and human creativity should be protected in balance with technological development. While the five principles are a basis for regulation, without properly assessing and addressing IP concerns we do not believe they reflect the correct balance of concerns required to ensure the healthy development of this technology.
6. What, if anything, is missing from the revised principles?
A statutory duty to uphold current copyright and intellectual property protection, as well as a willingness to adapt the IP regime to allow for the protection of new original authored works. The UK’s commitment to strong intellectual property protection should not be undermined, as it is this commitment that has fostered the richness of the UK’s creative industries. Explicit reference to the key role copyright plays in ensuring we develop ethical, transparent, and humane algorithmic technology currently appears to be missing from the Government’s approach. The UK’s creative industries are estimated to be worth £115 billion (Creative Industries Council). This value is created by a wealth of different industries, with small, independent rightsholders forming the backbone of this thriving economy. In particular, we are concerned that the government is not addressing the concerns of independent, society, and smaller rightsholders’ associations; their partnership in creating a healthy, equitable, and sustainable AI ecosphere will be crucial.
7. Do you agree that introducing a statutory duty on regulators to have due regard to the principles would clarify and strengthen regulators’ mandates to implement our principles, while retaining a flexible approach to implementation?
A statutory duty on regulators to prevent and address AI-related harms would be beneficial. Regulators should have a clear mandate, along with appropriate enforcement mechanisms, to allow for the measures necessary to prevent IP infringement, whether in AI development and training, model creation, or output. Providing clear guidance on regulators’ scope of authority, their powers to enforce rules and regulations, how they work cross-sector, and how they would monitor compliance and prompt change in light of technological advances would be a difficult, but essential, task.
8. Is there an alternative statutory intervention that would be more effective?
New central functions to support the framework
9. Do you agree that the functions outlined in section 3.3.1 would benefit our AI regulation framework if delivered centrally?
Subject to successful implementation, we agree that a centrally regulated framework could be useful. How would the structure fit together? How would the education element work? Would the central function be expected to understand sector-specific details well enough to give reliable advice? What recourse would be available where incorrect advice was given that later affected an organisation’s AI implementation or harmed an individual? How would conflicts in advice be managed?
Businesses would be looking for reassurance and clear guidance, not vague recommendations, and there are concerns that the creative industries could become fragmented and struggle to act meaningfully as a safeguard against many potential AI abuses.
10. What, if anything, is missing from the central functions?
Depending upon implementation, it is unclear whether rightsholders would have adequate protection against future or prior AI infringements. It is likely that there will be a high number of minor infringements, each of which would individually be too burdensome for a single rightsholder to litigate. Clear authority to redress such public harms is needed, as is an escalation function for when advice from regulators conflicts with, or differs from, advice previously provided. Again, recognition that key stakeholders in AI’s functionality include rightsholders, not simply users and developers, is essential.
11. Do you know of any existing organisations who should deliver one or more of our proposed central functions?
We understand that AI operates in an international context, and international cooperation is necessary to fight social harms, e.g. dark patterns, unfair commercial practices, or malicious interference with a technological system. The UK has been a leader in creating the ‘gold standard’ for IP protection internationally, and we hope the Government does not abandon this position in the false hope that such abandonment would generate innovation.
12. Are there additional activities that would help businesses confidently innovate and use AI technologies?
Licensing, including open access or public licences, will play a vital role in the development of best practices. The government should work directly with rightsholders on creating a sustainable code of practice that encourages innovation by recognising the importance of verified, quality creative works being used to build that innovation. While we acknowledge best practices may be useful, they must also be adaptable to future advances in technology. A partnership between rightsholders, technology developers, and policy makers is key to ensuring a healthy, honest, and competitive industry is championed in the UK. The potential for widespread commercial appropriation of UK rightsholders’ works by AI systems threatens this £115 billion industry, and we urge the government to take all action to protect this key pillar of the UK economy.
12.1. If so, should these activities be delivered by the government, regulators or a different organisation?
Literary, artistic, musical and dramatic works are AI’s bedrock: there is no future development of AI without the continued ingestion of copyright-protected content. A partnership between government, regulators, and the private sector will be necessary to achieve the government’s goal.
13. Are there additional activities that would help individuals and consumers confidently use AI technologies?
Clear labelling that avoids consumer confusion must be a key initial priority. People should understand when they are using trusted and verified AI systems built on a strong and permitted foundation of licensed works, and when they are interacting with, or receiving, content created or co-created by AI; this labelling should be machine readable. It is of concern that several different competing systems of adjudicating trustworthiness could be developed, only further confusing the market. Developing clear guidelines on what should be labelled, when it should be labelled, and how to make those labels easy to understand needs cross-sector collaboration. This will in turn help develop sustainable datasets fit for business and private reuse, as rightsholders come to understand how AI copies and reproduces their content.
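As a purely illustrative sketch of what such a machine-readable label might contain (the schema, field names, and values below are our own assumptions, not an existing or proposed standard), a label could be serialised so that browsers, crawlers, and registries all read it the same way:

    # Hypothetical machine-readable AI-use label; every field name here is
    # an illustrative assumption, not part of any existing standard.
    import json

    label = {
        "ai_involvement": "co-created",      # e.g. "none", "assisted", "generated"
        "model": "example-model-v1",         # hypothetical model identifier
        "training_content_licensed": True,   # were the ingested works licensed?
        "source_declaration": "Training corpus licensed from participating publishers.",
        "human_review": True,                # was the output reviewed by a person?
    }

    # Emit the label as JSON so that any consumer can parse it consistently.
    print(json.dumps(label, indent=2))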
13.1. If so, should these activities be delivered by the government, regulators or a different organisation?
A kitemark scheme for organisations that are able to demonstrate their compliance with the five core principles could be offered. Consumer awareness campaigns would help educate individuals on the positive aspects of AI and how to spot duplicitous or malicious systems, while providing reassurance that AI is used according to those principles. For example, obtaining an insurance quote from a certified organisation that uses AI to calculate premiums would provide some reassurance that the price would be calculated fairly. These could be sector-specific, or universal, but it would be essential that they really do support the key principles in full, rather than just in spirit, if consumer trust is to be earned.
14. How can we avoid overlapping, duplicative or contradictory guidance on AI issued by different regulators?
Consistent monitoring and evaluation of the framework will help, as will an understanding that advice offered in the short term may need to be revised, or even contradicted, in the longer term as the effects of AI technologies become clear. Until we understand this scope, it will be difficult to avoid overlapping regulations. Regular dialogue between regulators for different industries and different government bodies, such as the ICO and the IPO, is necessary to avoid confusion.
15. Do you agree with our overall approach to monitoring and evaluation?
Success will probably depend on how well the horizon-scanning function is performed to address risks before they become widespread legal harms. It will require robust input from industry experts to ensure a sufficient depth of understanding is taken into consideration. Organisations of all sizes will need to contribute, not just the large organisations whose voices may often be the loudest. Representative bodies can continue to play a key role in championing their industry’s interests and in providing the necessary quantitative and qualitative data for monitoring and evaluation, and associations that represent minority voices, such as ALPSP, must be included in such policy measures to achieve genuine diversity and representation.
16. What is the best way to measure the impact of our framework?
Feedback from the spectrum of related businesses on how well new risks are assessed and prioritised is required, and the time needed to understand this impact must be acknowledged. Regulators will no doubt have concerns over their resource capabilities with this additional function. Success would be reflected in an ecosystem where rightsholders are fully empowered to control the use of their works, the UK retains its gold-standard copyright regime, and business and technology partners work together to create healthy, sustainable, safe, and equitable AI.
17. Do you agree that our approach strikes the right balance between supporting AI innovation; addressing known, prioritised risks; and future-proofing the AI regulation framework?
Given the speed at which innovation is moving, by the time this consultation is complete there will be a large gap to bridge simply to address today’s uses, never mind those advances and developments which we cannot even contemplate today. We are also concerned that copyright protection is not being prioritised sufficiently as the mechanism by which we may support AI innovation, balance known risks, and future-proof the UK as a home for AI investment.
18. Do you agree that regulators are best placed to apply the principles and the government is best placed to provide oversight and deliver central functions?
Provided they have sufficient knowledge, authority, and resources, then yes, regulators may be best placed to apply a correct set of guiding principles; however, we do not underestimate this task. The breadth of potential misuse of AI is staggering and will require co-operation across regulatory authorities. A central body could coordinate expertise and address overlapping problems.
Regulator capabilities
19. As a regulator, what support would you need in order to apply the principles in a proportionate and pro-innovation way?
N/A
20. Do you agree that a pooled team of AI experts would be the most effective way to address capability gaps and help regulators apply the principles?
Experts vary by academic discipline, professional background, sector, and industry. Experts in AI technology alone are not sufficient to fully address potential harms and help regulators enforce best practices. Experts from a wide range of industries, including representatives of small or individual rightsholders, must be included to fully build a knowledge bank on how AI operates across all spheres. Different fields, backgrounds, industries, and sizes of organisation must be included if true innovation is to be fostered.
21. Which non-regulatory tools for trustworthy AI would most help organisations to embed the AI regulation principles into existing business processes?
If not controlled, there is a risk of multiple AI standards developing across different sectors. This may create confusion for consumers and businesses, particularly where they cross multiple regulators. Again, the issue here is timing: organisations would not want to see standards developed in 12 or 24 months which may require considerable reworking in light of tools being developed in the AI space. Clear thinking on the areas where solutions are needed is vital, notably: standards and guidelines; best practices (including licensing); and training and education.
Final thoughts
22. Do you have any other thoughts on our overall approach? Please include any missed opportunities, flaws, and gaps in our framework.
If implemented as described, the current approach would help signify that the UK is open to AI when used with the appropriate safeguards, but we do not think the Government has gone far enough in acknowledging the instruments by which sustainable innovation will be fostered.
Additionally, the reality is that a huge amount of work will be passed onto regulators. This is why market-based solutions should also be investigated and encouraged, and why the government should partner with both large and small representatives of the creative industries in order to give the UK the best chance of creating a healthy environment for AI to flourish in both the short and long term. AI integrity is essential to the publishing industry, and as published works make up a large proportion of the material copied in AI training, the government should recognise the centrality of the investment made by publishers of all sizes in the creation of an AI economy.