Expert Panel | Forbes.com
Recent advances in artificial intelligence make its potential seem boundless. One possible use is employing bots to filter sensitive material across a variety of programs and applications. However, allowing bots to access, evaluate and block sensitive material carries risks, so it's important to proceed with caution.
To better understand these risks, we asked a panel of Forbes Technology Council members to share some important considerations to remember when you’re developing an AI bot for filtering sensitive material. Their best responses are below.
1. Eliminate the noise.
Start with eliminating the noise. Ensure your assumptions are based on fundamental concepts that do not discriminate or create a foundation of bias. It's important that a bot remain as impartial as possible when judging subjective material. – Sam Amrani, Olvin
2. Validate your data set and retrain regularly.
The supervised data set that the AI bot trains on has to be clean and precise; otherwise, the bot will miss filtering out critical information. Before training even begins, the data set has to be validated for accuracy and correctness. There should also be a regular cadence for retraining with the latest data, as sensitive information appears in new and different ways. – Shub Jain, Auquan
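The two practices above can be sketched in code. This is a minimal, hypothetical illustration; the function names, labels and 30-day cadence are assumptions, not part of any particular framework.

```python
from datetime import date

# Hypothetical labels a filtering bot's supervised data set might use.
VALID_LABELS = {"sensitive", "not_sensitive"}

def validate_dataset(rows):
    """Keep only rows with non-empty text and a recognized label."""
    return [
        (text.strip(), label)
        for text, label in rows
        if text and text.strip() and label in VALID_LABELS
    ]

def retraining_due(last_trained, today, cadence_days=30):
    """True once the regular retraining interval has elapsed."""
    return (today - last_trained).days >= cadence_days
```

A validation pass like this runs before every training cycle, and the cadence check drives the schedule for refreshing the model with new examples of sensitive material.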
3. Involve interdisciplinary experts to eliminate bias.
Algorithmic biases have gradually crept into AI tools—a product of the developers' and programmers' own inherent biases. These can snowball into serious issues for businesses, especially with bots that filter sensitive material, where the stakes are higher and the fallout harsher. Involve interdisciplinary experts like data ethnographers, social scientists and ethicists to build bias-free bots. – Sayandeb Banerjee, TheMathCompany
4. Manage your costs carefully.
Employing bots to filter sensitive material can be quite expensive, especially if the parameters are governed strictly. If the database is large, the cost of deploying and managing the bots and checking the data sets that they filter can quickly spiral out of control, so it must be kept in check. – Irsa Faruqui, RetroCube – Software and Mobile Application Development Company
5. Program a clear set of ethical values.
A clear set of values needs to be programmed into any AI that is either human-facing or making decisions that affect humans. Bots used to filter sensitive material have to exhibit a clear set of human values and be transparent about what those values are, what their objectives are and what criteria they use. – Ran Zilca, Happify
6. Understand how filtering might impact the speed of delivery.
The definition of “sensitive” differs by industry. For regulated industries like banking and the public sector, the term defines the compliance level of the data, and filtering it against template standards is a very smart idea. However, filtering impedes the speed of delivery, so for other industries it should be used judiciously. – Sandeep Shilawat, ManTech
7. Control the bot’s access in the same way you do a human user’s.
Bots add a lot of opportunities, but they also create new risks and exposure. They access mission-critical systems, applications and data just like any other user within the organization, and it is important to control their access. Hackers can easily spoof a bot and, as a result, access all of the applications and data that the bot has access to. – Juliette Rizkallah, SailPoint
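The point above—treating a bot identity exactly like a human user's—can be sketched as a least-privilege scope check. This is a hypothetical illustration; the identity and scope names are assumptions for the example.

```python
# Hypothetical scope registry: each bot identity gets only the
# permissions its job requires, just like a human user account.
BOT_SCOPES = {
    "filter-bot": {"read:messages", "flag:messages"},
}

def authorize(identity, action):
    """Allow an action only if the identity holds a matching scope."""
    return action in BOT_SCOPES.get(identity, set())
```

Because the bot holds no scope it doesn't need, a spoofed or compromised bot identity yields an attacker far less than full application access.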
8. Design your bot with cultural sensitivity.
As we gain collective consciousness on the need to redesign our organizations to be culturally sensitive, we need to apply the same level of expectation to the bots we develop. If their purpose is to help filter sensitive materials, they need to understand the full context of said materials—author, content, medium, channel, purpose, etc.—to not unexpectedly discriminate. – Florian Quarré, Exponential AI
9. Scrutinize third-party access and data privacy.
Many AI solutions require data to be moved off of a company’s servers and onto a third-party solution provider’s. Your company may have data protection policies and procedures in place, but you may be at the mercy of a third party when they begin to handle your data. Cambridge Analytica gave us a worst-case scenario of what can happen when a third party has access to your customer’s data. – Sean Herman, Kinzoo
10. Ensure there’s enough data in the initial training feed.
Machine learning and AI require a great deal of information to train on, so it’s vital there is enough data in the initial training feed. If you think you have enough, add some. More is better. It’s also important to include control data (non-sensitive) and edge cases so the AI gets a clear picture. – Saryu Nayyar, Gurucul
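A pre-training sanity check captures the advice above: confirm the feed is large enough and that control (non-sensitive) examples are fairly represented alongside sensitive ones. The thresholds here are illustrative assumptions, not recommendations from the panel.

```python
from collections import Counter

def feed_report(feed, min_total=1000, min_class_share=0.2):
    """Return a list of problems with the training feed (empty = OK)."""
    counts = Counter(label for _, label in feed)
    total = sum(counts.values())
    issues = []
    if total < min_total:
        issues.append(f"only {total} examples; want at least {min_total}")
    for label, n in counts.items():
        if n / total < min_class_share:
            issues.append(f"class '{label}' is only {n / total:.0%} of the feed")
    return issues
```

Running a report like this before every training run surfaces both "not enough data" and "no control data" problems early, while they are still cheap to fix.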
11. Create a secure digital environment.
Whether it’s social media or internal business communications, the bot has to operate in a totally secure digital environment. If the AI bot is going to be filtering and handling sensitive data, it’s eventually going to become a target for hackers. Filtering bots will use encryption keys by necessity, so you’ll need a secure key management environment to ensure the sensitive material doesn’t leak. – John Shin, RSI Security
12. Involve your corporate security team.
The corporate security team ultimately holds responsibility for everything sensitive—data, information, devices, networks and so on. If the CSO's team is involved in continuously translating and signing off on the security requirements, the bot team can simply follow them without gaps. This is the only way to keep AI development and corporate security processes in sync. – Anbu Muppidathi, Cognizant
13. Dedicate the necessary human resources.
Bots have to be trained very well to function properly. Be sure your organization can dedicate the necessary human resources to own and manage the knowledge base required for the bot to be successful. It's the synergy of human experience and AI potential that makes the biggest impact. – Steven Khuong, Curacubby
14. Ensure ample historical data and human intervention.
AI bots work well when transactional, repeatable, simple decision criteria are established. A key consideration with purpose-built AI is ensuring there is a human in the loop on the machine learning to avoid autonomously created bias (as Facebook and others have found out the hard way). A large quantity of historical data is also helpful in more rapidly maturing the AI to autonomy. – John Walsh III, Red Summit Global
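The human-in-the-loop idea above is commonly implemented as a confidence gate: the model acts autonomously only on clear-cut cases and routes uncertain ones to a reviewer. This is a minimal sketch; the thresholds are illustrative assumptions.

```python
def route_decision(score, block_above=0.9, allow_below=0.3):
    """Map a model confidence score to block / allow / human review."""
    if score >= block_above:
        return "block"      # clearly sensitive: filter automatically
    if score <= allow_below:
        return "allow"      # clearly benign: pass through
    return "human_review"   # uncertain: keep a human in the loop
```

As the historical record of human decisions grows, the review band can be narrowed, gradually maturing the bot toward autonomy.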