Is the new frontier (model forum) an industry body fit for AI?

The use of AI within software solutions and services is hardly a recent innovation; the past decade has seen the use of AI rapidly mature and scale across a broad range of technologies. What the general public lacked, however, was a key use case that exposed how the technology could have a fundamental and profound impact on how significant parts of the global population work.

So, with the release of ChatGPT in November 2022, the world was exposed to a use case of AI that people could relate to, and the perceived sophistication of the technology surprised and shocked many. As is often the case when a technology innovation resonates strongly with the public and permeates the zeitgeist, it provoked a wide range of reactions, from people decrying it as the first step in the downfall of society to the other extreme of it being touted as a fundamental change to how humanity will exist. This sort of reaction has happened repeatedly across human history, and PAC assumes the same reactions occurred with the invention of fire, the wheel, and electricity. Aside from making light of the reaction to generative AI (GenAI), spearheaded by ChatGPT, the point of this reflection is that humans have repeatedly invented technologies that significantly impact how we behave, and such inflection points in society have played out in roughly the same way: an innovation spearheads change, a level of hype follows that exaggerates its potential, a degree of harmonisation occurs once consistent and scalable uses grow, and governments have to introduce laws and regulations to manage it.

To use a common term thrown around the IT industry, we are currently in the middle of a “hype cycle” where, in PAC’s experience at least, there is a degree of risk of organisations adopting AI without understanding all the parameters and complexities that make its use a feasible and sensible business option. Given the focus on GenAI since the release of ChatGPT last November, OpenAI CEO Sam Altman has featured in a range of related debates and discussions and, in the process, has, to a degree, become the “poster boy” both for GenAI and for how it should be regulated. For anybody observing how GenAI has created both opportunity and uncertainty across society, PAC considers the reported journey of Sam Altman the best example of why GenAI, and other forms of AI, require governments, technology companies, and companies in non-IT sectors to work together to create codes of practice, regulations, and laws sooner rather than later.

Over the last nine months, it has been reported that Sam Altman has argued that it is not the place of technology companies, like OpenAI, to regulate themselves and has appealed to lawmakers to step up and do so. PAC finds this position challenging to reconcile, as it can be read as saying that technology firms cannot reasonably be expected to manage the societal impact of what they do, akin to “we make the weapon, but we are not responsible for firing it”. It could be said that PAC is oversimplifying a situation that is far more nuanced than that. Still, in a developed society there is a reasonable expectation of a social contract whereby individuals and companies carry a degree of societal responsibility and accountability for their actions, irrespective of the laws and regulations that relate to them. Interestingly, it was also reported in June 2023 that OpenAI had lobbied the European Union to weaken its forthcoming AI regulation, despite Sam Altman and the company publicly calling for stronger AI guardrails through regulation.

Again, as mentioned earlier, this is nothing new when innovations occur in capitalist societies, as “captains of industry” exert their influence to define an operating environment that favours their need to generate revenue, grow profits, and run their businesses as they prefer. However, this cycle clearly shows that the technology sector’s innovation velocity now requires governments and regulatory bodies to invest effort proactively in foreseeing technology developments that may demand an accelerated approach to introducing new laws and regulations. They need to compress the time this process takes so that reactive legislative analysis and decision-making does not lag too far behind an explosion in innovation. Over the past decade, irrespective of the industry, a large part of PAC’s conversations with any organisation has been the need, in one form or another, to reduce operational friction and compress processes so it can react faster to the demands of its industry. Governments face the same challenge when regulating innovations like AI: they must reduce the complexity of their own processes in order to legislate and regulate on shorter timescales.

All of this leads to the point of this blog, which is the recent announcement of the formation of the Frontier Model Forum by the technology firms Anthropic, Google, Microsoft, and OpenAI. This has come after months of equivocation by various leaders across governments and the IT industry regarding the need to create and adopt standards that ensure a safe, ethical, secure, transparent, and consistent approach to AI. The cynical amongst us may see this as the technology industry moving to appease governments ahead of what it perceives as impending regulation and legislation that may not favour how it wants to operate. PAC welcomes this first step by key technology firms and hopes other software companies join the forum over time, because managing the transition to a world where AI becomes ubiquitous in everyday use requires the private and public sectors to work hand-in-hand.

On July 6th 2023, before the announcement of the Frontier Model Forum, OpenAI published an interesting paper called “Frontier AI regulation: Managing emerging risks to public safety”. This paper sets out the foundation and terminology of the Frontier Model Forum and is well worth reading. The authors very sensibly distil the focus of their piece into the term “frontier AI”, which they use to describe highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. PAC would suggest that, whilst this is an important area of focus, the bigger impact AI will have on society will be subtler: like other broad societal and industrial technologies, it will not so much endanger public safety as influence public and professional behaviour in incremental, “shades of grey” ways, all of which could ultimately affect humanity more profoundly than severe risks. That said, PAC is not suggesting that what the paper discusses and the forum focuses on fails to lay a foundation for addressing all forms of impact a “frontier AI” could have on society, in conjunction with government regulation and legislation.

As the paper discusses, frontier AI models pose a distinct regulatory challenge due in part to the unexpected emergence of dangerous capabilities. The paper posits that it is difficult to robustly prevent a deployed model from being misused and to stop its capabilities from proliferating broadly. PAC would contest the last point: if we consider ChatGPT, OpenAI controls how its product is accessed and consumed by public and private individuals and companies alike. PAC understands, though, that the authors’ point here is that once a product like ChatGPT proves the viability of a form of AI, the ability to replicate that form of “frontier AI” can proliferate with relative ease. To this end, from PAC’s perspective, it is important to separate the legislation and regulation of a company’s AI product from that of the underlying “frontier AI” model, and even legal mechanisms like copyright are not necessarily able to address this.

PAC considers the announcement of the Frontier Model Forum a solid first step in the right direction for the key technology companies operating in this space. The forum aims to promote research into AI safety by developing standards for evaluating models, encouraging the responsible deployment of models, discussing trust and safety risks in AI with politicians and academics, and identifying positive use cases for AI. Like many other technological innovations, this will require years of collaboration and refinement between companies and governments. It is also worth noting that the announcement focuses on the relationship between those firms and certain governments, as not all governments worldwide will approach legislation and regulation similarly, or potentially as ethically.