Blame Hollywood. Or the news. Either way, there’s a general fear of artificial intelligence (AI) and machine learning; a concern that machines will one day replace humans. Recently, San Francisco passed a ban prohibiting local agencies from using facial recognition technology, citing potential abuse and privacy concerns. The reality is, it is possible to balance innovation with the ethical treatment of the data that AI analyzes. Here’s how:
Understand AI’s Potential
AI can help us better understand data feeds from various sources. In security, data can come from video, Internet of Things devices and metadata, which can then be used to ensure security operators receive only actionable alarms instead of having to manually sift through volumes of incoming data. Not only does this save a significant amount of time and resources, but it also improves an operator’s efficiency and response time. For example, say many people enter and exit through a set of doors in a building. In one instance, three people enter, but only two badges are scanned. By analyzing multiple data points together, this event can be tagged with a higher level of urgency for an operator, helping to identify potential threats.
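The badge-scan example above can be sketched in a few lines. This is a minimal illustration, not a real access-control API: the data class, field names and urgency thresholds are all assumptions made for the example.

```python
# Illustrative sketch: fuse two data points -- a video-based person count
# and the number of badge scans at a door -- to prioritize alarms for an
# operator. Names and thresholds are assumptions, not a real system's API.

from dataclasses import dataclass

@dataclass
class DoorEvent:
    door_id: str
    people_detected: int   # count from video analytics
    badges_scanned: int    # count from the access-control system

def alarm_priority(event: DoorEvent) -> str:
    """Tag an event with an urgency level for the operator."""
    unbadged = event.people_detected - event.badges_scanned
    if unbadged <= 0:
        return "routine"   # everyone badged in; nothing to review
    if unbadged == 1:
        return "elevated"  # possible tailgating; worth a look
    return "urgent"        # multiple unbadged entries; flag immediately

# Three people enter, two badges scanned -- the scenario described above.
event = DoorEvent(door_id="lobby-east", people_detected=3, badges_scanned=2)
print(alarm_priority(event))  # → elevated
```

The point is that neither feed alone is actionable; only the combination of the two counts tells the operator which events deserve attention.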
AI can also correlate seemingly unrelated events to surface insights or discover patterns with a broader scope. For example, certain traffic patterns at some locations in a city could be matched with anomaly conditions that occur irregularly at other locations. AI systems can discover these correlations and make predictions that, for example, better facilitate traffic control.
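One simple way a system can surface such a correlation is to compare time series from two locations statistically. The sketch below uses a hand-rolled Pearson correlation on made-up hourly counts; the data and the 0.8 threshold are illustrative assumptions only.

```python
# Illustrative sketch: test whether incident reports at one location track
# traffic volume at another by computing a Pearson correlation over hourly
# counts. The numbers below are fabricated for the example.

from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hourly vehicle counts at one intersection vs. anomaly reports elsewhere.
traffic = [120, 150, 300, 420, 380, 200]
incidents = [1, 2, 5, 8, 7, 3]

r = pearson(traffic, incidents)
if r > 0.8:  # threshold chosen arbitrarily for the sketch
    print(f"strong correlation (r={r:.2f}); worth surfacing to traffic control")
```

A production system would of course use many more series and control for confounders, but the core idea is the same: quantify co-movement between feeds that no single operator would think to compare.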
In a broader sense, AI can be leveraged to identify potential diseases in screening tests, such as magnetic resonance imaging (MRI) scans, or to predict the position and movement of objects.
Fair Game Data
One of the biggest concerns surrounding AI is the ethical use of data. Video data, specifically, is gathered on a regular basis by both public and private entities: a person’s presence in a store, airport or any number of locations is being recorded. Many of the uses of this data are strictly for investigative purposes in the event of an emergency or safety threat.
However, as more businesses look to use data to gather additional insights and intelligence, it is imperative that these companies follow ethical practices and put privacy safeguards in place. Both are key to gaining the public’s acceptance of AI. Too many breaches, or the misuse of information to profile or target specific groups, can breed distrust in the technology. If that trust is broken, we run the risk of being unable to realize the value that AI can bring to any number of use cases.
Software developers typically build software on top of models that have already been trained to harness the intelligence in various pieces of data. Powerful models can return a rich data set; however, the returned data has to be handled with care to ensure legal and privacy concerns are addressed correctly.
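One common safeguard at this layer is to pseudonymize identifying fields before a model’s output travels downstream. The sketch below is a hedged illustration: the field names, record shape and salt handling are assumptions, not a real library’s API, and a real deployment would manage the salt as a rotated secret.

```python
# Illustrative sketch: replace identifying fields in a model's output with
# one-way pseudonyms before passing the record downstream. Field names and
# the record shape are assumptions made for this example.

import hashlib

SENSITIVE = {"face_id", "name", "badge_number"}

def redact(record: dict, salt: str = "rotate-me") -> dict:
    """Return a copy of record with sensitive fields pseudonymized."""
    out = {}
    for key, value in record.items():
        if key in SENSITIVE:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]  # stable pseudonym, not reversible
        else:
            out[key] = value
    return out

detection = {"face_id": "F-1042", "location": "lobby", "confidence": 0.97}
safe = redact(detection)
# 'location' and 'confidence' pass through; 'face_id' becomes a pseudonym
```

Because the pseudonym is deterministic for a given salt, downstream analytics can still count repeat appearances without ever holding the raw identifier.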
Current Regulatory Climate
With the exception of San Francisco’s recent ban on facial recognition, regulation and the formation of ethics boards have largely happened on an ad hoc basis. Google and Microsoft, for example, have long discussed AI ethics and conduct internal ethical reviews, but they operate with little external oversight of how their AI is used.
Today, several companies are adopting internal codes of conduct that govern the use of data in AI development. As AI matures and becomes better understood, more best practices and ethical guidelines will be formulated to help reinforce trust in companies handling identifiable information. These practices and guidelines are often validated through process reviews, internal and external ethics panels, and personnel training. As a result, these organizations can harness the power of AI for business while still adhering to best practices and the ethical use of data.
More widespread state and federal regulation will come in time, but until then it is the responsibility of private entities to consider the ethical handling and treatment of data when implementing AI and machine learning technology.
AI Development and the Future
There is much to gain from using AI in day-to-day operations. For traditionally unsorted data, AI has the potential to surface insights such as anomalies and unusual patterns. But there is also real risk, particularly when entities develop and promote AI built on unethical practices or biased models. Unethical use of AI can reduce efficiency, increase costs, damage a company’s reputation and, worse, adversely affect people’s interests and welfare.
AI implementation that harnesses ethical norms and standards, in contrast, can change the face of innovation as we know it, bringing critical information to the forefront that otherwise might be missed. Today’s technology practitioners must strike a good balance between innovation and ethics, embrace adherence to privacy laws, and develop any necessary processes and reviews that can lead to the successful deployment of solutions that do not compromise the inherent social contract that businesses have with their users.