AI can provide organisations with increased efficiency and productivity, enhanced decision-making and cost savings. But there are risks. Andy Compton, CEO of Cortida, explains what these are and how to retain control
Organisations around the globe are embracing AI technologies with breathtaking speed, and according to a UN Trade and Development (UNCTAD) report(i), the global AI market is set to soar to $4.8 trillion by 2033.
There are numerous benefits for FM, such as enhanced analytics that can predict failures before they occur, energy optimisation derived from improved analysis of occupancy patterns, and improved building performance through intelligent adaptation to connected devices and IoT building sensors. These concepts are not new, but the opportunity to identify, understand and respond in quicker, cheaper and ever more predictive and reliable ways is genuinely exciting.
However, early adopters of AI may not be aware of, or prepared for, the potential pitfalls that appear at the start of innovation. What exactly are the risks associated with using AI?
Cybersecurity: Sophisticated, AI-powered attacks are on the increase and can do untold damage to both revenue and reputation. AI cyber-attacks have evolved to include the poisoning of data from building sensors, autonomous malware, and the use of realistic audio, video and even document forgeries to enhance phishing attacks on users of operational systems.
Operational Errors: Systems that rely on AI-produced data to make informed or predictive decisions can be negatively impacted when the data delivered is compromised, wrong or biased due to flawed 'AI thinking'.
Privacy Risks: When we feed AI systems with personal or sensitive data, we run the risk that the data may inadvertently become part of the system's knowledge and therefore available for use in its outputs. Unintentional disclosure of personal or sensitive information is not only embarrassing but potentially harmful; it may even attract regulatory attention and penalties.
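One practical mitigation is to strip obvious identifiers before anything leaves your estate. The minimal Python sketch below shows the idea with simple pattern matching; the patterns, placeholder tokens and example prompt are illustrative assumptions, and a production system would need far more thorough redaction tooling.

```python
import re

# Illustrative only: pattern-based redaction of obvious identifiers
# before a prompt is sent to an external AI service. These two patterns
# are assumptions for the sketch, not a complete catalogue of personal data.
REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{3}\s?\d{7}\b"),
}

def redact(text: str) -> str:
    """Replace matches with placeholder tokens before any AI call."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Tenant John emailed j.smith@example.com from 0207 9460000 about desk 4B."
print(redact(prompt))
# -> Tenant John emailed [EMAIL REDACTED] from [UK_PHONE REDACTED] about desk 4B.
```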
Bias and Discrimination: Because AI systems learn from data, the patterns they absorb can generate results that are less favourable toward people or groups with certain characteristics. In an HR setting, for example, where AI is employed to analyse CVs and make recommendations, the algorithm could plausibly deliver discriminatory results. In an FM setting, this could mean the inappropriate allocation of space, based on bias rather than optimisation and fairness.
Regulatory Compliance: The UK has yet to introduce dedicated regulation on the use of AI; however, AI tools can still break existing laws such as the Data Protection Act (UK GDPR), which governs data collection and processing. Although data may historically have been processed under existing consent models, if the purpose of data processing has changed (with the use of AI), you may be required to refresh that consent to avoid regulatory issues and maintain trust.
TAKING CONTROL OF RISKS
For most organisations, a sensible place to start is to establish how AI is currently being used within the business. Let's look at the key areas in more detail:
Understanding Current Use: Key questions should be asked about the applications or services in use, their locations (cloud or local), connectivity types (permanent or on-demand), the quantities and formats of data (plain text, anonymised data, Word documents etc.) and, crucially, the type and sensitivity of the data in use. This exercise will give you a clearer picture of any areas of your business that may be exposed to risk.
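By way of illustration, the answers to those questions can be captured as simple structured records, which makes exposed areas easier to spot. The Python sketch below is one way to do this; the field names, categories and flagging rule are assumptions for the example, not a formal standard.

```python
from dataclasses import dataclass

# An illustrative record structure for an AI usage inventory.
# The fields mirror the key questions above.
@dataclass
class AIServiceRecord:
    name: str
    hosting: str           # "cloud" or "local"
    connectivity: str      # "permanent" or "on-demand"
    data_formats: list     # e.g. ["plain text", "Word documents"]
    data_sensitivity: str  # e.g. "public", "internal", "personal"

inventory = [
    AIServiceRecord("chat-assistant", "cloud", "on-demand",
                    ["plain text"], "internal"),
    AIServiceRecord("occupancy-analytics", "local", "permanent",
                    ["sensor telemetry"], "personal"),
]

# Flag the entries most likely to need urgent review.
for record in inventory:
    if record.hosting == "cloud" and record.data_sensitivity == "personal":
        print(f"Review: {record.name} sends personal data to the cloud")
```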
Risk Assess: This information will show whether your organisation is using AI ethically, transparently, fairly and legally, and whether all measures have been implemented using the principles of privacy by design. To protect your data going forward, create a list of AI services and platforms that have been explicitly evaluated, secured and approved.
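That approved list can then be enforced rather than merely published. The sketch below assumes a hypothetical gateway check through which all AI calls pass; the service names and review notes are placeholders for the example.

```python
# A minimal sketch of an allowlist check: only explicitly evaluated,
# secured and approved services may be called. Names and notes are
# hypothetical placeholders.
APPROVED_AI_SERVICES = {
    "chat-assistant",       # evaluated and approved for internal text
    "occupancy-analytics",  # approved, on-premises deployment only
}

def require_approval(service_name: str) -> None:
    """Raise before any data reaches a service that hasn't been approved."""
    if service_name not in APPROVED_AI_SERVICES:
        raise PermissionError(
            f"'{service_name}' is not on the approved AI services list"
        )

require_approval("chat-assistant")  # passes silently
require_approval("shadow-ai-tool")  # raises PermissionError
```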
Policy: An AI usage policy sets down your company's commitment to the way it engages with AI. It should include your objectives, the responsibilities of your users and the consequences of non-compliance. Of particular importance are rules on which systems and uses are allowed, what can be passed into AI systems and in what format, and how outputs can be used, along with any approval steps required to ensure accurate, reliable, ethical, legal and secure use. Critically, your whole organisation must agree to adhere to the policy and commit to being guardians of the data you hold.
Governance: To ensure an AI policy is kept in check and remains ethical, fair, legal and secure, you will also need an internal AI ethics and governance function. In a larger organisation this is typically managed by a committee; in smaller companies, by the leadership team. An oversight group that meets periodically to understand the evolving regulatory landscape is a must. This group exists to review policy violations and complaints, evaluate risk assessments, mandate privacy and security controls and consider new technologies and advancements.
Finally, while performing these tasks is a great starting point for managing the risks associated with the use of AI, for total peace of mind, it’s also a good idea to seek professional support.
Andy Compton will be hosting a session at Facilities & Estates Management Live on the hidden cyber risk, offering practical insights and strategies to better protect your organisation from threats that come from within.