Unregulated AI: advancements at what cost?

Key points:

  • The global AI race currently prioritises growth over safety, and regulation lags behind
  • Unchecked AI controlling critical infrastructure may pose potentially catastrophic risks
  • There are calls for stronger global regulations – overshadowed by AI’s financial benefits

Businesses are increasingly turning to generative AI to boost productivity, support innovation and capture automated efficiencies. The more familiar businesses and their people become with these tools, the more ‘blind faith’ they place in the outputs. With no ‘guardrails’ on content generation and source material, many users engage with AI with little thought to how their prompts and content may be used (or misused). Because the AI landscape lacks proper regulation, using the wrong tools, combined with weak quality control over outputs, can pose a number of risks. The race between the US and China to be the world leader in AI, and the decision not to regulate AI, could exacerbate these issues. So why is this a real risk, how did we end up in this situation, and is there anything we can do about it?

A recent podcast unpacked these questions in depth with Professor Stuart Russell OBE, a world-renowned AI expert and computer science professor at UC Berkeley.

History (and Prof. Russell) reminds us that intelligence is key to evolution and is ultimately what dictates authority and associated control. As AI is rapidly evolving, is there a world where it could rise to the top of the proverbial food chain? What humans take years and years to learn, AI can now learn in seconds – and thanks to advancements in technology, execute with robotic precision that would make even the greatest surgeon envious.

According to Prof. Russell, recent studies show that 80% of the population are already actively concerned about what the future holds as AI rapidly expands. The growing intelligence and capacity of AI models highlight the need for regulation to balance the growing risks. However, recent calls for stronger regulation of AI and its uses have been overshadowed by the desire to grow faster and capture greater benefits. AI is a trillion-dollar industry, and global leaders have been swayed by its financial benefits to prioritise growth over safety on more than one occasion.

For example, the US and later France were both set to host global summits on AI safety and regulation but, faced with attractive financial incentives, both held large AI trade shows instead.

Until accepted regulations are in place, healthy caution is a key part of productive AI use for businesses and individuals – and with that comes a need to replace ‘blind faith’ in responses with critical assessment. So how can businesses and their people mitigate the potential risks?

AI risks and potential solutions

  • Limited visibility of use: Employees may not report the use of AI tools, leading to a lack of risk controls
  • Accidental exposure of confidential information: Employees often interact with generative AI tools without realising that their inputs might be stored, reused, or accessed by third parties.
    Since not all platforms being used are monitored by a company’s IT or security teams, there’s no visibility into how data is handled, stored, or protected. In the worst-case scenario, this can lead to data breaches, intellectual property theft, or regulatory violations
  • Susceptibility to content manipulation: AI systems built on large language models (LLMs) are trained on the data they are provided, making them inherently vulnerable to crafted inputs that can cause them to behave in unintended ways. This can lead to unexpected outcomes, such as exposing private data, executing harmful commands or creating security loopholes.

Managing the risks

AI offers a range of useful tools and provides many benefits to individuals and businesses alike – but it’s important to responsibly manage AI use by:

  • Defining your risk appetite: Identify a level of risk that reflects your legal, strategic and reputational priorities. By guiding AI-related decisions with a risk matrix, low-risk applications can be adopted early, while rigorous controls are applied to higher-risk scenarios. This balanced approach enables safe experimentation without compromising security or compliance.
  • Building a scalable AI governance framework: Adopt a gradual, scalable strategy that starts with pilot programs in controlled environments. For example, roll out AI tools in a single department or use-case, assess the outcomes, and then expand.
  • Involving employees in shaping the AI strategy: A collaborative approach not only uncovers risks but also ensures governance models are built around real user needs, increasing compliance and reducing the temptation to go ‘rogue’.
  • Continuously evolving governance to match the pace of AI: Schedule frequent policy reviews and cross-functional working sessions to identify new risks, adapt to regulatory changes, and integrate lessons learned from audits or pilot projects.
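To make the risk-matrix idea above concrete, the sketch below reduces it to a likelihood × impact score that maps each AI use case to an adoption decision. The scales, thresholds and example use cases are illustrative assumptions, not part of any standard or of this article’s guidance – each business would calibrate these to its own risk appetite.

```python
# Illustrative sketch of an AI use-case risk matrix.
# Assumptions: 1-5 likelihood and impact scales, and the score
# thresholds below, are hypothetical and would be set by each
# organisation's own risk-appetite statement.

def risk_level(likelihood: int, impact: int) -> str:
    """Classify an AI use case from likelihood and impact (1-5 each)."""
    score = likelihood * impact
    if score <= 4:
        return "low"      # adopt early with light-touch controls
    if score <= 12:
        return "medium"   # pilot in a controlled environment first
    return "high"         # rigorous controls and sign-off required

# Hypothetical use cases for illustration only
use_cases = [
    ("drafting internal meeting notes", 2, 2),
    ("summarising anonymised survey feedback", 3, 3),
    ("analysing confidential client financial data", 4, 5),
]

for name, likelihood, impact in use_cases:
    print(f"{name}: {risk_level(likelihood, impact)}")
```

A lookup like this is deliberately simple; its value is that the thresholds are written down and reviewable, which supports the scheduled policy reviews described above.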

Looking to the future and prospective regulations

The adoption of AI is not dissimilar to the rollout of the internet – eager early adopters first, followed by an explosion of broader acceptance and day-to-day use. What started as an unregulated, open network has evolved into a reputable tool, largely thanks to regulation and accepted standards of use. Regulation can help to elevate AI in the same way.

While global regulation is still under discussion, there is a growing movement around safe and ethical AI use with the recent inception of the International Association for Safe & Ethical AI (IASEAI). Our experts are staying informed of developments in this space – and as more organisations call for ethical AI use, better regulation is sure to follow.


This content is general commentary only and does not constitute advice. Before making any decision or taking any action in relation to the content, you should consult your professional advisor. To the maximum extent permitted by law, neither Pitcher Partners nor its affiliated entities, nor any of our employees will be liable for any loss, damage, liability or claim whatsoever suffered or incurred arising directly or indirectly out of the use or reliance on the material contained in this content. Pitcher Partners is an association of independent firms. Pitcher Partners is a member of the global network of Baker Tilly International Limited, the members of which are separate and independent legal entities. Liability limited by a scheme approved under professional standards legislation.
