How can startups create AI responsibly?

By Filippo Sanesi, Startup Program Manager, OVHcloud

The AI hype cycle has moved faster than any other in human history. From the launch of ChatGPT and the proliferation of AI applications over the past year, to investor caution over heavy GPU costs, to the ethics discussions culminating in agreements such as the EU AI Act and the Bletchley Declaration: the pace has been unprecedented.

But what is ‘responsible AI’? How can developers ensure that the concept, development and ongoing application of their AI product are handled responsibly? These questions can become something of a rabbit hole for startups and founders, who may even find themselves questioning what the very idea of ‘responsibility’ means!

As Dr. Salman Valibeik, CEO of AI startup Orpiva, says: “We are currently experiencing an unparalleled era, akin to the discovery of electricity. The idea of a life without AI is unimaginable, as it stands as the greatest innovation in human history. However, with such a powerful tool comes great accountability.

It is crucial that AI development is conducted ethically, in alignment with human values and does not cause harm to our species. Striking a delicate balance between utilising AI for the betterment of mankind and avoiding harm is a task that falls upon us. As we progress as a species, it is our responsibility to use AI transparently to augment ourselves and elevate humanity.”

Furthermore, this isn’t simply a philosophical question: the EU AI Act provides for penalties of up to €35 million for violations, other regions are sure to follow, and so startups should be considering responsibility from the get-go. Thankfully, a number of industry heavyweights have already made their views clear on what ‘responsible AI’ should look like.

HOW TO CREATE RESPONSIBLE AI IN PRACTICE
There is plenty of guidance to follow when it comes to responsible AI, and three of the key institutions in this area are the World Economic Forum (WEF), the UK’s Alan Turing Institute and the US’s National Institute of Standards and Technology (NIST), all of which broadly agree on the most important points. For clarity, our recommendations follow the Alan Turing Institute’s simple ‘FAST’ framework (Fair, Accountable, Sustainable and Transparent), but draw on content from all three organisations.

FAIR
Startups and scaleups should always ensure that their AI systems are fair. This means avoiding both human bias and unfair harm.
From a process perspective, this means:
- Making sure that training datasets are as free of bias and discrimination as possible (see the sketch after this list for one simple pre-training check).
- Checking that models are trained on ethical content that has been obtained properly, for example, by respecting copyright laws.
- Having processes in place to look for and manage implicit human bias, such as using diverse ‘red teams’ who hunt for problems with systems.
- Ensuring that AI systems always respect human autonomy, privacy and dignity, both from a data management and an application perspective: safeguarding data, for example, but also not creating apps that support weapons development.
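
As a concrete illustration, here is a minimal, hypothetical sketch of a pre-training fairness check in Python. It assumes a pandas DataFrame with made-up ‘gender’ and ‘label’ columns and applies the common ‘four-fifths’ rule of thumb; a real fairness audit would go much further.

```python
# A minimal sketch of a pre-training bias check: compares positive-outcome
# rates across groups in the training labels. Column names are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Positive-outcome rate per group in the training data."""
    return df.groupby(group_col)[label_col].mean()

def parity_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by the highest (1.0 = perfectly balanced)."""
    return rates.min() / rates.max()

# Toy training data standing in for a real dataset.
df = pd.DataFrame({
    "gender": ["f", "m", "f", "m", "f", "m", "f", "m"],
    "label":  [1,   1,   0,   1,   0,   1,   1,   1],
})

rates = selection_rates(df, "gender", "label")
print(rates)

# The "four-fifths" rule of thumb flags ratios below 0.8 for review.
if parity_ratio(rates) < 0.8:
    print("Warning: labels are skewed across groups; investigate before training.")
```

A low ratio doesn’t prove discrimination on its own, but it is a cheap early signal that the data deserves scrutiny before any model is trained on it.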

ACCOUNTABLE
As well as being fair and minimising harm, AI systems must be auditable so that they’re trustworthy.
The three organisations have the following advice for startups:
- Build audit capability into AI systems: ensure that code is explainable, and that human teams are able to answer questions about how systems work (a minimal logging sketch follows this list).
- Enable user feedback, and ensure that any non-human interactions are disclosed so that testers and users know when they’re talking to an AI!
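
To make the audit point concrete, here is a minimal sketch of an audit trail around model calls. The model.generate() interface and EchoModel are hypothetical stand-ins for whichever SDK you actually use; the point is that every interaction is recorded and disclosed.

```python
# A minimal sketch of an audit trail for AI interactions. The model
# interface here is hypothetical, not any specific vendor SDK.
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

DISCLOSURE = "You are interacting with an AI assistant, not a human."

def audited_generate(model, prompt: str, user_id: str) -> str:
    """Call the model and record who asked what, and what came back."""
    response = model.generate(prompt)  # hypothetical interface
    audit_log.info(json.dumps({
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": prompt,
        "response": response,
        "model_version": getattr(model, "version", "unknown"),
    }))
    # Disclose the non-human interaction to the end user.
    return f"{DISCLOSURE}\n\n{response}"

class EchoModel:
    """Toy model used only to make the sketch runnable."""
    version = "demo-0.1"
    def generate(self, prompt: str) -> str:
        return f"(echo) {prompt}"

print(audited_generate(EchoModel(), "What is your refund policy?", user_id="user-42"))
```

Structured logs like these are what make it possible to answer “why did the system say that?” months after the fact.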

SUSTAINABLE
Startups increasingly have to consider not just the environment, but also the broader, long-term implications and applications of their creation. This means considering the likes of:
- Making sure that an AI tool is accurate, reliable, robust and secure: in other words, does it reliably do what it should do? (A simple release-gate sketch follows this list.)
- Internal and external security of the system: is the data stored in an appropriate way, has cybersecurity been considered from the start, and does the AI system endanger people in any way?
- Should your organisation be part of any broader governance initiatives, and does your AI application have social good at its heart?
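
On the reliability point, one lightweight practice is a regression gate that runs before every release. The sketch below is deliberately toy-sized; the predict() function and evaluation set are placeholders for your own model and held-out data.

```python
# A minimal sketch of a reliability release gate: refuse to ship if
# held-out accuracy drops below an agreed bar. All data here is toy data.
EVAL_SET = [("2+2", "4"), ("3+3", "6"), ("5+5", "10"), ("7+1", "8")]

def predict(question: str) -> str:
    """Placeholder for the real model; answers toy arithmetic."""
    a, b = question.split("+")
    return str(int(a) + int(b))

def accuracy(examples) -> float:
    correct = sum(1 for q, expected in examples if predict(q) == expected)
    return correct / len(examples)

if __name__ == "__main__":
    score = accuracy(EVAL_SET)
    print(f"Held-out accuracy: {score:.0%}")
    # Block the release if quality regresses below the agreed threshold.
    assert score >= 0.95, "Model regressed; do not ship."
```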

TRANSPARENCY AND MORE
There are a few other challenges for growing companies that the global organisations have highlighted, from ensuring good transparency to broader social issues. These include the likes of:
- Being able to explain why AI models perform as they do, and what goes into the model to start with (see the sketch after this list).
- Being part of broader AI initiatives to help the public understand and use AI responsibly.
- Being part of international initiatives to encourage collaboration.
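
On explainability, off-the-shelf tooling can take you a long way. As a hypothetical starting point, the sketch below uses scikit-learn’s permutation importance on a toy dataset to show which inputs a model actually relies on; your features and model will of course differ.

```python
# A minimal sketch of model explainability: permutation importance asks
# how much test accuracy drops when each input feature is shuffled.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```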

As AI gains pace and systems become ever more sophisticated, it’s more important than ever that we’re mindful of creating responsible AI systems that embody the principles of fairness, accountability, sustainability and transparency. This can seem far from straightforward, but by keeping the simple FAST principles at heart, startup leaders can make sure that they’re on the right path to better, fairer and ultimately more responsible AI.

 

To read the reports in depth, you can download the NIST AI Risk Management Framework, the WEF’s Presidio Recommendations, and the Alan Turing Institute’s guide to understanding AI ethics and safety.

 

Read more about the OVHcloud Startup Program.
