
The Australian Government releases voluntary guardrails on AI safety standards

They’re the first step towards mandatory guidelines, but do they go far enough?

If you asked a typical business leader five years ago what ‘AI’ was, you’d probably get a mixed response. Today, however, AI is the technology theme dominating conversations from the boardroom to the cafeteria.

You’ll hear it applied to just about any software that does things smarter, faster and more automatically. Strictly speaking, though, AI is the umbrella term for all systems that can simulate human-like cognitive functions – the functions that enable computing devices to perform tasks such as learning, reasoning, problem-solving and decision-making.

In the Voluntary AI Standard digital publication [1], the Australian Government estimates that AI will contribute between $170bn and $600bn to the Australian economy, so its potential value is unquestioned. What is in question, however, are the risks associated with the misuse of AI – be that intentional or unintentional.

A nationally representative survey from the University of Queensland [2] revealed that “Australians are deeply concerned about the risks posed by Artificial Intelligence (AI). They want the government to take stronger action to ensure its safe development and use.”

The survey found that “80% of Australians believe preventing catastrophic risks from advanced AI systems should be a global priority on par with pandemics and nuclear war.”

To put the risks of AI into perspective, we need to remember that, broadly speaking, there are two different types of AI systems, which have different risk profiles.

Narrow AI systems are designed and trained to perform specific tasks, like a customer-support chatbot that helps someone select a product or service by answering predefined, frequently asked questions. Another good example is the facial recognition used on smartphones.

These AI systems generally pose little potential for misuse (unless perhaps you have a mischievous identical twin who likes to ‘borrow’ your iPhone).

General-purpose AI systems are a different story. They’re designed and trained to have broader capabilities and offer more flexibility in their use. As explained in the Australian Government’s Voluntary AI Standard briefing: “General AI systems are more prone to unexpected and unwanted behaviour. This is because of their increased flexibility of interactions, the reduced predictability of their capabilities and behaviour, and their reliance on large and diverse training data. For example, large language models can deliberately or inadvertently manipulate or misinform consumers.”

What this means is that General AI systems use complex algorithms and large pools of data to learn patterns and generate responses. Because their outputs are based on statistical correlations (rather than the explicit rules-based programming of Narrow AI systems), they may produce misleading information or behave in unexpected ways – and that can lead to harmful outcomes.
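
To make that distinction concrete, here’s a deliberately simplified sketch – our own illustration, not anything from the Standard. The first bot answers only from an explicit, hand-written rule table, which is conceptually how a Narrow AI FAQ assistant behaves; the second generates text by sampling from word-pair statistics learned from training data, which is the same basic reason large language models can produce fluent but unpredictable output. All function names and data here are hypothetical.

```python
# Toy contrast between a rules-based "narrow" system and a statistical one.
# Illustrative sketch only; real AI systems are far more sophisticated.
import random

# Narrow, rules-based: every input maps to an explicitly programmed output,
# so behaviour is fully predictable and auditable.
FAQ_RULES = {
    "what are your hours?": "We're open 9am-5pm AEST, Monday to Friday.",
    "how do i reset my password?": "Use the 'Forgot password' link on the login page.",
}

def narrow_bot(question: str) -> str:
    # Deterministic lookup: unknown questions get a safe fallback answer.
    return FAQ_RULES.get(question.lower().strip(),
                         "Sorry, I can only answer predefined questions.")

# Statistical: output is sampled from probabilities learned from training
# text, so responses can vary run to run and depend entirely on the data.
def train_bigrams(text: str) -> dict[str, list[str]]:
    words = text.split()
    model: dict[str, list[str]] = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def statistical_bot(model: dict[str, list[str]], start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))  # sampled, not rule-checked
    return " ".join(out)

if __name__ == "__main__":
    print(narrow_bot("What are your hours?"))
    model = train_bigrams("our support team can help you reset your password "
                          "our support team is open on weekdays")
    print(statistical_bot(model, "our"))  # output varies between runs
```

Run the rules-based bot twice and you get the same answer twice; run the statistical one twice and you may not – which is exactly why the Standard treats general-purpose systems as the higher-risk category.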

This includes infringements on personal civil liberties, harm to groups and communities (especially certain sub-groups of the population, such as people with disability, or people from multicultural backgrounds), and anything that harms our societal structures (such as disrupting free and democratic participation in an election, or someone’s access to education).

What the AI Standard delivers

The Australian Government’s new Voluntary AI Safety Standard provides guidelines for the ethical and responsible development and use of Artificial Intelligence (AI).

The guidelines focus on safety, responsibility, transparency, fairness, accountability and privacy, helping to ensure AI systems are reliable, equitable and secure. The standard aims to promote trust and mitigate the risks associated with AI technologies across a wide range of applications.

The 10 AI Standard Guardrails

The Voluntary AI Safety Standard provides 10 guardrails, which we’ve listed here. For a more detailed review, go to the Australian Government website [3].

  1. Regulatory compliance – Establish, implement and publish an accountability process, including governance, internal capability and a strategy for regulatory compliance.
  2. Risk management – Establish and implement a risk-management process to identify and mitigate risks.
  3. Data integrity – Protect AI systems and implement data-governance measures to manage data quality and provenance.
  4. Testing – Test AI models and systems to evaluate model performance, and monitor the system once deployed (see the sketch after this list).
  5. Ensure control – Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle.
  6. Create trust with users – Inform end users about AI-enabled decisions, interactions with AI and AI-generated content.
  7. Establish processes – Give users, organisations, people and society impacted by AI systems a way to challenge how AI is being used and to contest decisions, outcomes or interactions that involve AI.
  8. Be transparent – With other organisations across the AI supply chain about data, models and systems, to help them effectively address risks.
  9. Maintain records – Keep and maintain records that allow third parties to assess compliance with the guardrails.
  10. Engage stakeholders – Evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.
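
Guardrail 4 is the most directly technical of the ten. As a loose illustration of what ‘test, then monitor’ can mean in practice – our own sketch, not something prescribed by the Standard – an organisation might gate deployment on a minimum accuracy threshold over a held-out evaluation set, then re-run the same check on live samples to catch drift. The evaluate, release_gate and model_predict names below are hypothetical, as are the threshold and metric.

```python
# Hypothetical sketch of a pre-deployment accuracy gate, in the spirit of
# guardrail 4. Not an official or prescribed test; the threshold, the metric
# and the model_predict function are placeholder assumptions.
from typing import Callable

def evaluate(model_predict: Callable[[str], str],
             eval_set: list[tuple[str, str]]) -> float:
    # Fraction of held-out examples the model answers correctly.
    correct = sum(1 for x, expected in eval_set if model_predict(x) == expected)
    return correct / len(eval_set)

def release_gate(model_predict: Callable[[str], str],
                 eval_set: list[tuple[str, str]],
                 threshold: float = 0.95) -> None:
    # Block deployment if accuracy falls below the agreed threshold.
    accuracy = evaluate(model_predict, eval_set)
    if accuracy < threshold:
        raise RuntimeError(f"Model accuracy {accuracy:.2%} is below "
                           f"{threshold:.0%}; do not deploy.")
    # Guardrail 4 also asks for monitoring: the same check can be re-run
    # periodically on labelled samples of live traffic to detect drift.
    print(f"Model passed at {accuracy:.2%}; continue monitoring after deployment.")
```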

Does the AI Standard go far enough to protect the welfare of Australians?

It’s important to remember that the AI Standard is a set of guidelines, and it’s voluntary. It’s not a set of rules or ‘dos and don’ts’, and it’s not legally binding. It simply provides practical guidance to all Australian organisations on how to safely and responsibly use and innovate with AI.

In the guidelines, the government is: “Acting to ensure that the development and deployment of AI systems in Australia in legitimate but high-risk settings is safe and can be relied on, while ensuring the use of AI in low-risk settings can continue to flourish largely unimpeded.”

So the standard isn’t going to stop deepfake scams featuring media personalities like David Koch or Hugh Jackman. And it certainly won’t stop images of politicians being digitally altered during election campaigns to place them in compromising situations.

But what the standard does do is support a “risk-based approach to managing AI systems.” It does this by “supporting organisations – starting with AI deployers – to take proactive steps to identify risks and mitigate the potential for harm posed by the AI systems they deploy, use or rely on.”

The standard prioritises safety and the mitigation of harms and risks to people and their rights by asking organisations to commit to:

  • Understanding the specific factors and attributes of their use of AI systems
  • Meaningfully engaging with stakeholders
  • Performing appropriately detailed risk and impact assessments
  • Undertaking testing

Asking organisations to assess the potential for risk and harm to people is an important step in the right direction.

Shaun Leisegang, General Manager of Automation, Data and AI at Tecala said about the new guidelines: “As a leader in AI, I fully support the Australian government’s move towards national AI safety standards. It’s crucial that we develop AI responsibly, ensuring our innovations are safe, ethical, and aligned with societal values.

"Adopting these AI safety standards demonstrates our commitment to transparency and accountability in AI development. It’s essential that our technologies not only advance but also uphold the highest ethical standards."

What’s next for AI Standards in Australia?

Although the Australian Government’s new AI Standard provides a strong foundation for promoting safe and ethical AI practices, its effectiveness in stopping unethical behaviour will depend on more robust legal enforcement mechanisms.

Quoting from the University of Queensland again: “Australians expect the government to take decisive action on their behalf. An overwhelming majority (86%) want a new government body dedicated to AI regulation and governance, akin to the Therapeutic Goods Administration for medicines.

“Nine in ten Australians also believe the country should play a leading role in international efforts to regulate AI development.”

And probably the most notable insight of the report: “Two-thirds of Australians would support hitting pause on AI development for six months to allow regulators to catch up.”

Even though this is not commercially viable in the global race for competitive advantage, commercial lawyers Herbert Smith Freehills [4] explain that industry involvement is important: “Industry associations can play a crucial role in enforcing AI Safety and Accountability by promoting best practices, providing guidance on compliance with regulations, and facilitating training programs for businesses. They can also engage in advocacy efforts, collaborate with the government on policy development, and establish accountability frameworks within their sectors.”

This industry-wide collaborative approach is strongly supported by Tecala. Right now, our team are working with technology and business leaders in Australia to ensure a safe and ethical future for AI.

Shaun Leisegang perhaps sums this up best: “Balancing innovation with responsibility is key. These AI safety standards provide a framework that will help us navigate the complexities of AI deployment while promoting ethical practices. By adhering to these guidelines, we ensure our continuing innovation in AI keeps us competitive while staying compliant on an international scale.”

For more information on Automation, Data and AI at Tecala, contact Shaun Leisegang.


[1] Voluntary AI Standard digital publication, Australian Government – Department of Industry, Science and Resources, August 2024.
[2] University of Queensland, research, March 2024.
[3] Australian Government, September 2024.
[4] Herbert Smith Freehills, January 2024.
