Safeguarding data, privacy and reputation: best practices in the AI era

December 27, 2023


As our world becomes more data-driven, safeguarding data and privacy has become a growing focus for businesses and the regulatory agencies that set industry standards. Last year, Gartner forecast that by the end of 2024, 75% of the world’s population will have its personal data covered by modern privacy regulations. But gaps remain in regulatory frameworks, and there are areas where businesses and consumers alike should be particularly careful with privacy as technology innovation pushes forward.

Over the next year and into 2025, the interaction between privacy and artificial intelligence (AI) will be a significant focus for leaders as they apply technology and data across their organizations.

In this blog, I’ll highlight three best practices for global businesses to safeguard data, privacy and reputation as 2024 approaches. 

Set a privacy approach as the default

Across industries, we’re seeing that leaders are starting to treat privacy as a standard, embedded practice rather than a tick-box exercise. This shift has accelerated since popular generative AI tools like ChatGPT became easily accessible to the public.

At the onset of public use of these tools, many companies tried to implement widespread bans to protect privacy. In Italy, for example, ChatGPT was banned for a short period, though the ban has since been lifted.

As AI-powered tools proliferate, employees will have more regular access to them. After all, banning a technology doesn’t stop people from using it, nor does it sufficiently keep them safe.

As a result, companies need to first understand and accept that their employees and colleagues will find ways to access technology outside of the work environment. They should therefore put human-centric privacy principles in place by default for AI technology, or for any technology that puts user privacy at risk.

What helps leaders embracing privacy principles is that privacy laws tend to be technology-neutral, meaning the same foundational privacy principles can be applied regardless of how the technological landscape evolves. For instance, generative AI tools like ChatGPT didn’t exist when the GDPR took effect in the EU several years ago, but companies can apply the same GDPR principles by default to protect personal data when using generative AI, and users will have much more secure, robust and familiar guidelines to follow.

The responsible use and protection of personal or sensitive data starts with giving people the principles and resources to stay informed and keep their data safe.

Personalize technological experiences

Externally, customers are well aware that AI is a growing part of how business is done. AI is now firmly embedded in public life, and the general public, on the whole, understands the importance of keeping personal data safe.

As a result, customers today are looking for personalized experiences built into technology tools that give them more access to, and control over, their own data privacy. They want resources at their fingertips to check, alter or object to things like consent, marketing, cookies and privacy settings.

Leaders building a privacy-centric approach to data should be intentional about how the customer and user experience is built within their technology tools. Customers immediately see the benefit of being able to securely access and control their information, track progress and upload supporting information from anywhere, and it isn’t a one-way street: having customers manage their own requests can significantly decrease the demands on internal resources in areas such as customer service, privacy and legal. It can also limit the risks associated with sharing and transferring data through traditional channels, such as post and email. These types of consumer portals and tools are very likely where the future is headed, so companies that get ahead of the curve and start integrating thoughtful interfaces for consumer data privacy will build trust and reputation with consumers.

Offer training and educational resources

As companies grow, business data increasingly needs to exist in a borderless digital world. The free flow of data is the next big challenge for many organizations, yet modern privacy regulations can vary state by state or country by country, depending on where a company, or its customers’ data, is based.

To keep up with the increasing flow of data, businesses are rapidly expanding their in-house teams with local expertise to meet the challenges that may arise. Businesses need to take a human-centric approach to growing their privacy capabilities as the business expands. Safe stewardship of data privacy also means providing awareness resources and, in certain circumstances, training on the responsible use of data and AI.

Internally, that means offering education and support for colleagues at all levels on adhering to local privacy laws and internal guidelines, understanding global privacy restrictions and transfer limitations, maintaining reputational trust, and protecting information security.

Externally, that means communicating transparently with clients and customers about how their data will be used, where it is stored and what steps the business is taking to keep their data private and secure. Companies that communicate about privacy and data use in clear, transparent terms will see more success in their privacy programs and build consumer trust along the way.

Implementing privacy by design and educating colleagues at the outset is key not only to maintaining program success, but also to replicating and expanding those practices across multiple countries and jurisdictions.

A company’s reputation is built on upholding the promises of privacy, trust and security that its stakeholders, both colleagues and customers, rely on when they entrust it with their data. A vulnerability in any of these core areas will have a negative consequence for brand reputation.

At Sedgwick, we’re focused on overcoming the challenges posed by a borderless digital world and building privacy-centric functions into our technology so that, no matter who is using it or where they are based, our stakeholders can feel safe in the knowledge that their data is safeguarded.